Results

Implementation of VIP algorithms

We finished the implementation and optimization of the VIP caching and forwarding algorithms on top of the NDN Forwarding Daemon (NFD) and NDN-DPDK. The implementation of the VIP algorithms consists of three main parts: 1) a structured VIP table maintaining statistics as the core of the VIP framework, 2) a mechanism for exchanging control messages, and 3) packet-level forwarding and caching functions. After the implementation, we tested the VIP algorithms with both the af_packet driver and dedicated DPDK drivers on a local testbed at Northeastern University, with 4 machines connected in a chain topology (consumer-forwarder-forwarder-producer), and further optimized the implementation during these tests.
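To make the structure of part 1) concrete, the Go sketch below shows one plausible shape for the per-name state kept in a VIP table, with update hooks for incoming Interests and for the control messages of part 2). The type names, the string-keyed map, and the update methods are illustrative assumptions, not the actual NFD/NDN-DPDK data structures.

package main

import (
	"fmt"
	"sync"
	"time"
)

// Entry holds the per-name statistics kept in this sketch's VIP table.
type Entry struct {
	LocalCount     float64            // local VIP count for this data object
	NeighborCounts map[string]float64 // latest VIP counts advertised by each neighbor
	LastUpdate     time.Time          // when the entry was last refreshed
}

// Table maps a data-object name to its VIP statistics; it is read by the
// forwarding and caching decisions and written on incoming Interests and
// on incoming control messages.
type Table struct {
	mu      sync.RWMutex
	entries map[string]*Entry
}

func NewTable() *Table {
	return &Table{entries: make(map[string]*Entry)}
}

// entry returns the record for name, creating it on first use.
func (t *Table) entry(name string) *Entry {
	e, ok := t.entries[name]
	if !ok {
		e = &Entry{NeighborCounts: make(map[string]float64)}
		t.entries[name] = e
	}
	return e
}

// OnInterest increments the local VIP count when an Interest for name arrives.
func (t *Table) OnInterest(name string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	e := t.entry(name)
	e.LocalCount++
	e.LastUpdate = time.Now()
}

// OnControlMessage records the VIP count advertised by a neighbor, which the
// forwarding algorithm later compares against the local count.
func (t *Table) OnControlMessage(name, neighbor string, count float64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	e := t.entry(name)
	e.NeighborCounts[neighbor] = count
	e.LastUpdate = time.Now()
}

func main() {
	t := NewTable()
	t.OnInterest("/sandie/fileA")
	t.OnControlMessage("/sandie/fileA", "neighbor-1", 12)
	fmt.Println("entries:", len(t.entries))
}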

Set-up and maintenance of WAN testbed

For the SANDIE WAN testbed, VLANs are set up across multiple campus networks, multiple regional networks, Internet2, SCinet, ESnet, and CENIC. Specifically, we have 3 sites in total: NEU, Caltech, and CSU, plus an additional site (the SC19 booth in Denver) during SC19. The following VLAN paths are deployed in the testbed:

1. NEU (VLAN 3698) to Caltech (VLAN 3610), supporting a ~10 Gbps data rate
2. NEU (VLAN 3700) to CSU (VLAN 3549), supporting a ~10 Gbps data rate
3. Caltech (VLAN 3611) to CSU (VLAN 3551), supporting a ~10 Gbps data rate
4. NEU (VLAN 3699) to the SC19 site, supporting a ~10 Gbps data rate
5. Caltech (VLAN 3950) to the SC19 site, supporting a ~100 Gbps data rate
6. CSU (VLAN 3550) to the SC19 site, supporting a ~10 Gbps data rate

The core switching is configured mainly in Internet2 and SCinet.

Implementation and improvements of NDN-DPDK based consumer and producer applications

We have implemented DPDK-based consumer and producer applications able to communicate with the NDN-DPDK high-performance forwarder. Both applications can encode and decode Interest and Data packets using the NDN v0.3 packet format. Both the consumer and the producer use two threads: one for receiving and one for transmitting packets. The consumer expresses three different types of Interests corresponding to the following filesystem calls: “open”, “fstat” and “read”. The producer can access the local POSIX filesystem and replies with Data to each type of Interest packet. Depending on the response from the producer, the consumer either continues requesting data from the file or terminates the process.
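The Go sketch below illustrates this request flow on the consumer side. The name-construction helpers, the express callback, and the file names are hypothetical; only the three Interest types (open, fstat, read) and the two-thread send/receive structure they stand in for come from the description above.

package main

import "fmt"

// Hypothetical name scheme for the three Interest types; the exact name
// component layout used by the SANDIE applications may differ.
func openName(prefix, path string) string  { return prefix + path + "/open" }
func fstatName(prefix, path string) string { return prefix + path + "/fstat" }
func readName(prefix, path string, seg uint64) string {
	return fmt.Sprintf("%s%s/read/%d", prefix, path, seg)
}

// fetchFile sketches the consumer's decision flow: open the file, query it
// with fstat, then request read segments until the file is complete.
// express stands in for the application's real send/receive path (one RX
// thread and one TX thread talking to the NDN-DPDK forwarder); fileSize is
// passed in here, whereas the real consumer decodes it from the fstat reply.
func fetchFile(prefix, path string, segSize, fileSize uint64,
	express func(name string) ([]byte, error)) ([]byte, error) {

	if _, err := express(openName(prefix, path)); err != nil {
		return nil, fmt.Errorf("open failed, terminating: %w", err)
	}
	if _, err := express(fstatName(prefix, path)); err != nil {
		return nil, err
	}
	var content []byte
	for seg := uint64(0); seg*segSize < fileSize; seg++ {
		d, err := express(readName(prefix, path, seg))
		if err != nil {
			return nil, err
		}
		content = append(content, d...)
	}
	return content, nil
}

func main() {
	// Stub transport that prints each Interest name and returns an empty payload.
	stub := func(name string) ([]byte, error) { fmt.Println("Interest:", name); return nil, nil }
	if _, err := fetchFile("/sandie", "/store/file.root", 7*1024, 21*1024, stub); err != nil {
		fmt.Println("error:", err)
	}
}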

We have also developed NDNgo-based consumer and producer applications. The NDNgo library was developed by the NDN team at NIST and is one of only two available NDN libraries that support interfacing with the newly developed high-throughput NDN-DPDK forwarder, which enables NDN-based applications to achieve throughputs higher than 1 Gbps for data-intensive experiments. We devoted most of our effort to developing a new consumer application based on this library, as well as a new producer application which could replace the original XRootD NDN OSS plugin developed using the ndn-cxx library and the NFD forwarder. An Open Storage System (OSS) plugin is a dynamic library that the XRootD framework loads at run time, and it needs to offer an implementation of all related file system calls. In the first version of these applications, we implemented the following system calls: open, read, fstat and close, which are sufficient to read data at given offsets from a file, or entire files, over the NDN network.
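To illustrate how these file system calls can be served on the producer side against the local POSIX filesystem, the Go sketch below dispatches a decoded request to the corresponding OS call. The Command type, its fields, and the handle function are hypothetical stand-ins and do not reproduce the NDNgo library or the XRootD plugin APIs.

package main

import (
	"fmt"
	"os"
)

// Command mirrors the file-system calls implemented in the first version of
// the applications: open, read, fstat and close. The struct and the dispatch
// below are an illustrative sketch, not the plugin's actual code.
type Command struct {
	Op     string // "open", "read", "fstat" or "close"
	Path   string // file path extracted from the Interest name
	Offset int64  // byte offset, used by "read"
	Length int    // number of bytes requested, used by "read"
}

// handle serves one decoded command against the local POSIX filesystem and
// returns the payload that would go into the reply Data packet.
func handle(cmd Command) ([]byte, error) {
	switch cmd.Op {
	case "open", "close":
		// The real producer keeps per-file state; here we only check existence.
		_, err := os.Stat(cmd.Path)
		return nil, err
	case "fstat":
		info, err := os.Stat(cmd.Path)
		if err != nil {
			return nil, err
		}
		return []byte(fmt.Sprintf("size=%d", info.Size())), nil
	case "read":
		f, err := os.Open(cmd.Path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
		buf := make([]byte, cmd.Length)
		n, err := f.ReadAt(buf, cmd.Offset)
		if err != nil && n == 0 {
			return nil, err // a short read at end-of-file still returns data
		}
		return buf[:n], nil
	default:
		return nil, fmt.Errorf("unknown command %q", cmd.Op)
	}
}

func main() {
	payload, err := handle(Command{Op: "read", Path: "/etc/hostname", Offset: 0, Length: 16})
	fmt.Printf("payload=%q err=%v\n", payload, err)
}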

Demonstration at SC19

During SC19, the team demonstrated the throughput and caching performance of the Named Data Networking (NDN) architecture with the NDN-DPDK forwarder. The demonstration used high-performance consumer and producer applications over a transcontinental layer-2 testbed, as well as NDN-DPDK forwarders with the VIP algorithms implemented. The data files used for the demonstration were obtained from CMS datasets, with sizes of approximately 2.5 GB and 1 GB. The throughput and caching tests were performed in parallel on two paths (Caltech-SC19 booth and CSU-NEU) of the WAN testbed.

1. Throughput test on the path between Caltech and the SC19 booth: We configured a specific untagged VLAN path from Caltech to the Caltech Booth at SC19 for this demonstration. We used a chain topology with a single-threaded producer at Caltech, and an NDN-DPDK forwarder and a single-threaded consumer running on two different servers at the Caltech Booth. The live demonstration lasted 3 hours, during which the consumer requested the same dataset composed of over 20 different CMS files. Meanwhile, the consumer application pushed the real-time throughput and the numbers of Data, Interest and lost packets to a Grafana server used for displaying live status.

2. Caching test on the path between NEU and CSU: A caching demonstration was carried out on a linear path of the SANDIE WAN testbed, which includes 2 CSU machines and 1 NEU machine connected through VLANs with 10 Gbps network cards. In the test, the 2 CSU machines acted as a consumer and a forwarder, respectively, and the NEU machine acted as a producer distant from the consumer and forwarder. With the consumer making requests for 10 data blocks residing at the producer, we showed the contents cached at the forwarder and the cache hit performance.

During the demonstration at SC19, the team showed for the first time that the Named Data Networking (NDN) architecture can deliver Large Hadron Collider (LHC) high energy physics (HEP) data over a transcontinental layer-2 testbed at up to 6.7 Gbps on a single thread. The team also showed that its optimized VIP caching and forwarding algorithms can decrease download times by a factor of 10.

The first experiment at SC19 focused on throughput. The producer application ran one thread for processing incoming packets and one thread for processing outgoing packets. The consumer used a fixed window size, bursting 64 packets at a time. Each Data packet contained a payload of 7 KB. During the 3-hour live demonstration, more than 3.2 TB of data was transferred between the Caltech Booth in Denver and Caltech, in more than 48 million packets, at a mean throughput of 6.5 Gbps, reaching a maximum of 6.7 Gbps and a minimum of 5 Gbps.
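The Go sketch below illustrates the fixed-window, burst-based request loop described above. Only the burst size of 64 packets and the 7 KB payload are taken from the experiment; the function names and the simple send-then-wait structure are illustrative stand-ins for the application's real DPDK TX and RX threads.

package main

import "fmt"

// Illustrative fixed-window consumer loop: Interests are sent in bursts of 64
// and the next burst is issued once the Data packets for the previous burst
// have been received. sendInterests and receiveData stand in for the real
// DPDK TX and RX threads.
const (
	burstSize   = 64
	payloadSize = 7 * 1024 // bytes of application data per Data packet
)

// fetch requests totalBytes of content segment by segment and returns the
// number of Data packets received.
func fetch(totalBytes uint64,
	sendInterests func(firstSeg, count uint64),
	receiveData func(count uint64) uint64) uint64 {

	totalSegments := (totalBytes + payloadSize - 1) / payloadSize
	var received uint64
	for sent := uint64(0); sent < totalSegments; {
		count := uint64(burstSize)
		if totalSegments-sent < count {
			count = totalSegments - sent
		}
		sendInterests(sent, count)     // burst of up to 64 Interests on the TX thread
		received += receiveData(count) // wait for the corresponding Data on the RX thread
		sent += count
	}
	return received
}

func main() {
	// Dummy transport in which every Interest is answered immediately.
	send := func(firstSeg, count uint64) {}
	recv := func(count uint64) uint64 { return count }
	fmt.Println("segments fetched:", fetch(2_500_000_000, send, recv)) // a ~2.5 GB file, as in the demo
}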

The second experiment at SC19 was a caching test on the path between NEU and CSU. Due to network capacity constraints on this NEU-CSU path during SC19, we could only achieve around 200 Mbps of throughput without caching at the forwarder node. In the test, we allocated 3 GB of RAM cache space at the single-threaded forwarder for temporarily storing data and prepared ten 1 GB files at the producer for answering requests. The packet loss rate was kept at around 0.5%.
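As an illustration of the caching decision at the forwarder, the Go sketch below shows one simplified reading of the VIP caching rule: within a fixed cache budget, keep the data objects whose VIP counts are currently highest. The 3 GB budget and the 1 GB object size match this test; the object type, the greedy selection, and the example VIP counts are assumptions and not the forwarder's implementation.

package main

import (
	"fmt"
	"sort"
)

// object describes one cacheable data object in this simplified sketch.
type object struct {
	name string
	size uint64  // bytes
	vip  float64 // current VIP count for this object
}

// selectForCache returns the names to keep cached, chosen greedily by
// descending VIP count until the cache budget is exhausted.
func selectForCache(objects []object, budget uint64) []string {
	sort.Slice(objects, func(i, j int) bool { return objects[i].vip > objects[j].vip })
	var kept []string
	var used uint64
	for _, o := range objects {
		if used+o.size <= budget {
			kept = append(kept, o.name)
			used += o.size
		}
	}
	return kept
}

func main() {
	const gb = 1 << 30
	objs := []object{
		{"/sandie/fileA", 1 * gb, 42},
		{"/sandie/fileB", 1 * gb, 17},
		{"/sandie/fileC", 1 * gb, 30},
		{"/sandie/fileD", 1 * gb, 5},
	}
	fmt.Println(selectForCache(objs, 3*gb)) // keeps the three highest-VIP files
}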

Containerization of XRootD plugin, consumer and producer applications

We have upgraded the SANDIE testbed to work with Docker containers configured for the NDN-DPDK high-throughput forwarder: 1) five machines at Caltech, all equipped with Mellanox ConnectX-5 network cards, and 2) one machine at NEU configured with an Intel DPDK-compatible network card. We had previously prepared a Dockerfile that worked with CentOS 8.x only, and we found it hard to keep it updated every time NDN-DPDK or the kernel had a major update, because the kernel versions had to be the same on the host machine and in the container. It was decided not to use SR-IOV to create multiple virtual function (VF) devices, but to use the physical hardware directly, in order to have freedom in the choice of container and host OS. We have instead adapted the existing NDN-DPDK Dockerfile, which is updated and maintained by the NDN-DPDK team.

Since NDN-DPDK offered support for memif, we were able to run multiple applications in the same container without needing SR-IOV support anymore, and could thus run the container in privileged mode using the host NIC.

For the completion of this task, we modified the Dockerfile provided in the NDN-DPDK repository to add the rte_pmd_mlx5.so library and only then compile DPDK with support for Mellanox NICs. Once this was done, we tested the binding of the host NIC from inside the container, as well as packet transmission between two different hosts with payloads of up to 8 KB (MTU 9000).