***UDPDK*** is a minimal [**UDP**](https://tools.ietf.org/html/rfc768) stack based on [**DPDK**](https://www.dpdk.org/) for fast point-to-point communication between servers.
It runs completely in userspace, so that you can move your packets quickly without going through the cumbersome kernel stack.
...

## Table of Contents
* [API](#api)
* [Examples](#examples)
* [How it works](#how-it-works)
* [Performance](#performance)
* [License](#license)
* [Contributing](#contributing)
## Requirements
In order to use UDPDK, your machines must be equipped with DPDK-enabled NICs; these are typically found in servers, not in laptops and desktop machines.
The list of hardware officially supported by DPDK is available [here](https://core.dpdk.org/supported/). Specifically, UDPDK has been developed and tested on *Intel X710-DA2* with *igb_uio* and *vfio* drivers; other devices should work as long as DPDK supports them.
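To check which NICs your machine has and which driver each is currently bound to, you can use DPDK's bundled `dpdk-devbind.py` tool — a sketch, assuming you are in the DPDK source tree:

```shell
# List Ethernet devices and their PCI addresses
lspci | grep -i ethernet

# Show which devices DPDK can claim and which driver currently owns them
./usertools/dpdk-devbind.py --status
```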
## Install Dependencies
...

From the menu, do the following:
3. Configure hugepages (e.g. 1024M for each NUMA node)
4. Bind the NIC to the vfio driver, specifying its PCI address
> :warning: **If you use the VFIO driver**, then you must enable the IOMMU in your system.
> To enable it, open `/etc/default/grub`, add the flag `intel_iommu=on` in `GRUB_CMDLINE_LINUX_DEFAULT`, then `sudo update-grub` and finally reboot.
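The same steps can also be done by hand without the menu — a sketch, assuming an Intel CPU, 2 MB hugepages on NUMA node 0, and a NIC at the hypothetical PCI address `0000:01:00.0` (find yours with `lspci`):

```shell
# 1. Enable the IOMMU: in /etc/default/grub, make sure the kernel
#    command line contains the flag, e.g.
#      GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on"
#    then regenerate the grub config and reboot:
sudo update-grub

# 2. Reserve hugepages on NUMA node 0 (512 pages of 2 MB = 1024M)
echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

# 3. Bind the NIC to vfio-pci (run from the DPDK source tree)
sudo modprobe vfio-pci
sudo ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:01:00.0
```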
**inih**
[inih](https://github.com/benhoyt/inih) is used for convenience to parse `.ini` configuration files.
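For illustration, inih's callback-style API looks like the sketch below: `ini_parse()` invokes a handler once per `name=value` pair. The `[port]` section and its keys here are made up for the example, not UDPDK's actual configuration schema.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "ini.h"   /* from the inih project */

/* Illustrative settings struct -- not UDPDK's real config layout */
struct settings {
    int port_id;
    char ip_addr[32];
};

/* inih calls this once per parsed name=value pair */
static int handler(void *user, const char *section,
                   const char *name, const char *value) {
    struct settings *s = (struct settings *)user;
    if (strcmp(section, "port") == 0 && strcmp(name, "id") == 0)
        s->port_id = atoi(value);
    else if (strcmp(section, "port") == 0 && strcmp(name, "ip") == 0)
        snprintf(s->ip_addr, sizeof s->ip_addr, "%s", value);
    return 1;  /* nonzero = keep parsing */
}

int main(void) {
    struct settings s = {0};
    if (ini_parse("config.ini", handler, &s) < 0) {
        fprintf(stderr, "cannot load config.ini\n");
        return 1;
    }
    printf("port %d at %s\n", s.port_id, s.ip_addr);
    return 0;
}
```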