Five servers are used. Two servers, Starlight dtn103 and NEU/MGHPCC sandie-2, act as consumer nodes. Two servers, Starlight dtn098 and Tennessee Tech pegasus, act as forwarder nodes with 20 GB of cache space each. One server at UCLA acts as the producer node.
The test has three scenarios: no cache, ARC, and VIP. Each consumer node requests files randomly from a catalog of 30 1 GB files, following a Zipf(1) popularity distribution, for 2 hours.
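The request pattern above can be sketched as follows. This is a minimal illustration of Zipf(1) sampling over the 30-file catalog, not the actual test harness; the name prefix `/ndn/test/file-NN` is a hypothetical placeholder for whatever names the real consumers use.

```python
import random

NUM_FILES = 30  # catalog size from the test description

# Zipf(1) popularity: P(rank k) is proportional to 1/k over the 30-file catalog.
weights = [1.0 / k for k in range(1, NUM_FILES + 1)]

def next_request(rng=random):
    """Pick the name of the next file to request, Zipf(1)-distributed by rank."""
    rank = rng.choices(range(1, NUM_FILES + 1), weights=weights, k=1)[0]
    # Hypothetical naming scheme; the real test defines its own NDN names.
    return f"/ndn/test/file-{rank:02d}"
```

With this skew, rank 1 accounts for roughly a quarter of all requests (1 / H(30) of the probability mass), which is what makes the 20 GB caches effective even though the full catalog is 30 GB.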
Four servers are used in this test. The Starlight server (dtn098) acts as a consumer node. The Tennessee Tech server acts as both a consumer and a forwarder node with 20 GB of cache capacity. The MGHPCC/NEU server acts as a forwarder node with 20 GB of cache capacity. The UCLA server acts as the producer node.
The test has three scenarios: round-robin with ARC, fast route with ARC, and VIP. Each consumer node requests files randomly from a catalog of 30 1 GB files, following a Zipf(1) popularity distribution, for 2 hours.
From Caltech sandie-7 (client) to Starlight dtn098 (producer), we use 6 forwarding threads in the NDN-DPDK forwarder and cache the requested files at Starlight dtn098 in advance. On the client side, 18 consumer applications run with a fixed pipeline size of 8192 each. Each consumer application requests one 1 GB file repeatedly, 500 times. This achieves ~40 Gbps.
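A quick back-of-the-envelope check of the test volume and duration implied by the numbers above (assuming decimal GB, i.e. 1 GB = 8 Gbit):

```python
# Totals for the throughput test: 18 consumers, each fetching a 1 GB file 500 times.
consumers = 18
repeats = 500
file_gb = 1
throughput_gbps = 40  # the observed ~40 Gbps

total_gb = consumers * repeats * file_gb   # 9000 GB moved in total
total_gbit = total_gb * 8                  # 72000 Gbit
duration_s = total_gbit / throughput_gbps  # 1800 s, i.e. about 30 minutes
```

So at a sustained ~40 Gbps the full run moves 9 TB and takes roughly half an hour.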
In this demo we show the creation of a genomics data lake using Kubernetes (K8s) over NDN. The data is published through a file server and can be retrieved using NDNc.
Kubernetes compute placement based on names