Five servers are used. Two servers, Starlight dtn103 and NEU/MGHPCC sandie-2, act as consumer nodes. Two servers, Starlight dtn098 and Tennessee Tech pegasus, act as forwarder nodes with 20 GB of cache space each. One server at UCLA acts as the producer node.
The test has three scenarios: no-cache, ARC, and VIP. Each consumer node requests files at random from a set of 30 4 GB files, following a Zipf(1) popularity distribution, for 2 hours.
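A minimal sketch of the Zipf(1) request pattern used by each consumer is below. Only the catalog size (30 files) and the Zipf(1) exponent come from the test description; the name prefix and sample count are illustrative placeholders.

```python
import random

# Sample file indices from a Zipf(1) popularity distribution over the
# 30-file catalog used in the caching tests. P(rank k) is proportional
# to 1/k for s = 1.
NUM_FILES = 30
weights = [1.0 / (rank + 1) for rank in range(NUM_FILES)]

def next_request(rng=random):
    """Pick the next file to request according to Zipf(1) popularity."""
    idx = rng.choices(range(NUM_FILES), weights=weights, k=1)[0]
    return f"/ndn/test/file-{idx:02d}"  # hypothetical name prefix

if __name__ == "__main__":
    # Print a short sample request sequence for one consumer.
    print([next_request() for _ in range(10)])
```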
Multi-path Topology VIP Caching Test
We use four servers in this test. The Starlight server (dtn098) acts as a consumer node. The Tennessee Tech server (pegasus) acts as both a consumer and a forwarder node with 20 GB of cache capacity. The MGHPCC/NEU server acts as a forwarder node with 20 GB of cache capacity. The UCLA server acts as the producer node.
The test has three scenarios: round-robin with ARC, fast route with ARC, and VIP. Each consumer node requests files at random from the same set of 30 4 GB files, following a Zipf(1) popularity distribution, for 2 hours.
LOCAL THROUGHPUT TEST
Local Test at Caltech SANDIE-7
With all data cached in advance in DRAM
18 NDNc consumers with AIMD congestion control (see the AIMD sketch after this list)
Initial window of 8192
Interest lifetime of 500 ms
Result: ~80 Gbps
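The sketch below illustrates the AIMD window adjustment used by the consumers. Only the 8192 initial window comes from the test parameters; the increase step, decrease factor, and minimum window are illustrative defaults, not NDNc's actual values.

```python
# Minimal AIMD congestion-window sketch for an Interest pipeline.
class AimdWindow:
    def __init__(self, initial=8192, min_window=1,
                 increase_step=1.0, decrease_factor=0.5):
        self.cwnd = float(initial)
        self.min_window = min_window
        self.increase_step = increase_step
        self.decrease_factor = decrease_factor

    def on_data(self):
        # Additive increase: grow the window slightly per received Data.
        self.cwnd += self.increase_step / self.cwnd

    def on_congestion(self):
        # Multiplicative decrease on a timeout or congestion mark.
        self.cwnd = max(self.min_window, self.cwnd * self.decrease_factor)

    def can_send(self, outstanding):
        # Send a new Interest only if the pipeline is below the window.
        return outstanding < int(self.cwnd)
```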
WAN THROUGHPUT TEST
Caltech to Starlight
From Caltech sandie-7 (client) to Starlight dtn098 (producer), we use 6 forwarding threads in the NDN-DPDK forwarder and cache the requested files at Starlight dtn098 in advance. On the client side, 18 consumer applications run, each with a fixed pipeline size of 8192. Each consumer application requests one 1 GB file 500 times. Achieved ~40 Gbps.
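A sketch of the fixed-pipeline fetch loop is below: the consumer keeps up to 8192 Interests outstanding at all times. The express_interest/await_data callables are placeholders, not a real NDNc API; only the pipeline size comes from the test description.

```python
def fetch_file(num_segments, express_interest, await_data, pipeline=8192):
    """Fetch a segmented file while keeping a fixed number of
    Interests outstanding (no congestion control)."""
    outstanding = set()
    next_seg = 0
    while next_seg < num_segments or outstanding:
        # Fill the pipeline up to the fixed size.
        while next_seg < num_segments and len(outstanding) < pipeline:
            express_interest(next_seg)
            outstanding.add(next_seg)
            next_seg += 1
        # Block until one Data packet arrives, freeing a pipeline slot.
        done_segment = await_data()
        outstanding.discard(done_segment)
```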
Starlight & SC22 booth to Caltech
Test Topology: 2 consumers (SC22 booth, Starlight) and 1 producer (Caltech)
100 Gbps tagged VLAN connections to both consumers.
Each NDN-DPDK forwarder uses 6 forwarding threads
Each consumer server runs 18 consumer applications to request 18 1 GB CMS-named files.
Requested data is cached in advance in the DRAM of the Caltech server
Result: ~50 Gbps
XRootD Plugin
File transfer between Docker containers at the Caltech site: a consumer running the plugin and an NDN-DPDK fileserver
Same parameters as the earlier throughput test (fixed pipeline of size 8192, packet lifetime of 500 ms)
Result is roughly equivalent to the performance of a single NDNc file transfer client (4.8 Gb/s)
FPGA Acceleration
We run NDNcat and have the forwarder send the inputs to both the CPU and the FPGA for hashing.
We demonstrate the hashing stage on a single thread and compare the computation times of the two implementations.
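As one way to reproduce the CPU side of the comparison, the sketch below times single-thread SHA-256 hashing over packet-sized buffers. The 8 KiB packet size and packet count are assumptions for illustration; the FPGA path is not shown here.

```python
import hashlib
import os
import time

# Single-thread CPU baseline for the hashing comparison.
PACKET_SIZE = 8 * 1024   # assumed packet size
NUM_PACKETS = 10_000     # assumed batch size
packets = [os.urandom(PACKET_SIZE) for _ in range(NUM_PACKETS)]

start = time.perf_counter()
for pkt in packets:
    hashlib.sha256(pkt).digest()
elapsed = time.perf_counter() - start

print(f"CPU SHA-256: {NUM_PACKETS / elapsed:.0f} packets/s "
      f"({NUM_PACKETS * PACKET_SIZE / elapsed / 1e9:.2f} GB/s)")
```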
Genomics Data Lake, Kubernetes and NDN
Genomics
In this demo we show how to create a genomics data lake using K8s over NDN. We publish the data using a file server and retrieve it using NDNc.
Run the file server on a K8s cluster on GCP and possibly on the NDISE testbed (see the deployment sketch after this list)
Run NDNc to retrieve datasets
Demonstrate that data can be pulled out of the K8s clusters
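A minimal sketch of deploying the file server on a K8s cluster with the official Kubernetes Python client is below. The image name, labels, and namespace are hypothetical placeholders, not the actual demo configuration.

```python
from kubernetes import client, config

# Create a Deployment for the NDN file server on the currently
# selected K8s cluster.
config.load_kube_config()  # uses the context from ~/.kube/config

labels = {"app": "ndn-fileserver"}
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="ndn-fileserver"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="fileserver",
                    # Placeholder image, not the demo's actual image.
                    image="example.org/ndn-dpdk-fileserver:latest",
                ),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```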
Kubernetes compute placement based on names
Create named service endpoints on K8s
Create multiple K8s clusters running the same services
Show that compute requests can be sent to a K8s cluster based on the name
Create multiple service endpoints based on different names and show that the network can route requests based on the request names (see the routing sketch below)
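The sketch below illustrates name-based request routing with a FIB-like table that maps name prefixes to cluster endpoints and selects the longest matching prefix. The prefixes and endpoints are hypothetical examples, not the demo's actual names.

```python
# Map NDN name prefixes to K8s cluster endpoints (hypothetical values).
ROUTES = {
    "/genomics/align": "cluster-gcp.example.org",
    "/genomics": "cluster-ndise.example.org",
}

def route(name: str) -> str:
    """Return the endpoint whose registered prefix longest-matches `name`."""
    best = None
    for prefix, endpoint in ROUTES.items():
        if name == prefix or name.startswith(prefix + "/"):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, endpoint)
    if best is None:
        raise LookupError(f"no route for {name}")
    return best[1]

print(route("/genomics/align/sample42"))    # -> cluster-gcp.example.org
print(route("/genomics/publish/dataset1"))  # -> cluster-ndise.example.org
```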