Homa: Some questions about this implementation of Homa
Hi,
I read the SIGCOMM 2018 paper about Homa, and I also roughly checked the code in this repo.
I have several questions about this implementation.
For example, in Homa/src/Protocol.h, there is no priority field in the Grant or Data packet headers.
I also checked Homa/src/Sender.cc, and in the Sender::sendMessage function the packets are sent without any consideration of priority (the priority is just set to 0 on line 119).
Also, in Homa/src/Receiver.cc, there seems to be no handling of priority when sending GRANT packets.
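To make the question concrete, here is the kind of field I expected to find in Protocol.h based on the paper. This is just my own rough sketch; the struct name, fields, and layout are my guesses, not something taken from the repo:

```cpp
#include <cstdint>

// Purely illustrative sketch (my guess, not the repo's actual Protocol.h).
// The paper says the receiver chooses a priority for the granted data, so I
// expected the GRANT header to carry something like this.
struct GrantHeader {
    uint64_t messageId;  // which message this grant belongs to
    uint32_t offset;     // receiver authorizes transmission of bytes up to this offset
    uint8_t priority;    // priority level the sender should use for the granted data
} __attribute__((packed));
```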
So, is this implementation far from what is described in the SIGCOMM 2018 paper, or am I looking in the wrong places? Is this implementation the final version used in the paper?
Looking forward to your reply. Thanks!
About this issue
- State: closed
- Created 6 years ago
- Comments: 17 (9 by maintainers)
Hi @btlcmr0702 We first started by implementing Homa in the RAMCloud codebase, because RAMCloud has a flexible transport management system that allowed us to easily plug Homa in. This way RAMCloud could be used as an application on top of the Homa transport, and it simplified our initial development and performance debugging of Homa. Furthermore, with RAMCloud as the workload generator, we could easily evaluate Homa's performance against Infiniband and TCP, since RAMCloud can be configured to use any of these transports.
However, the implementation of Homa in RAMCloud is tied to RAMCloud's RPC system and communication plumbing, and it won't be usable for other applications. So we started two separate projects to implement Homa as standalone packages that developers can use in their own applications:
The user-space (i.e., kernel-bypass) implementation here, https://github.com/PlatformLab/Homa, depends on Intel DPDK and requires the network admins to enable priorities (i.e., QoS levels) in the network switches. This project is aimed at developers who want the lowest possible latency and the best performance that Homa can provide. These developers should be willing to bypass the kernel and, to some degree, sacrifice the isolation, multi-tenancy, and security that the Linux kernel provides; more importantly, they should be willing to build/change their applications to conform to the API that Homa provides. But, in return, they get a blazingly fast transport and RPC system.
The kernel implementation of Homa here, https://github.com/PlatformLab/HomaModule, doesn't depend on any third-party technology, other than requiring network priorities to be enabled in the fabric, and, as the name suggests, it runs in the kernel. Our goal with this implementation, more than anything, is ease of adoption for Homa in production datacenters; our hope is to have this implementation eventually be compliant with the Linux socket interface. Our prediction is that this implementation won't be as fast as the user-space implementation, but still orders of magnitude faster than TCP, Infiniband, and the other existing transports. We'd like developers to enjoy most of the benefits that Homa provides without changing much in their applications, so using this implementation would be as easy as loading a kernel module in Linux, while keeping all the benefits that the Linux kernel provides.
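To give a rough flavor of what "compliant with the Linux socket interface" would mean, here is a purely illustrative sketch. The protocol constant and the datagram-style semantics below are placeholders we're using for illustration, not the module's actual interface:

```cpp
#include <cstring>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical protocol number, used here for illustration only; the real
// module's constant and message semantics may differ.
constexpr int IPPROTO_HOMA_EXAMPLE = 140;

int main() {
    // The hope is that an application could open a Homa socket much like it
    // opens a UDP socket today, with few other changes to its code.
    int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_HOMA_EXAMPLE);
    if (fd < 0) return 1;  // module not loaded / protocol unsupported

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(4000);
    dest.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    const char msg[] = "hello over Homa";
    // Send one message; pacing, grants, and priorities would all be handled
    // inside the kernel, invisibly to the application.
    sendto(fd, msg, sizeof(msg), 0,
           reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
    close(fd);
    return 0;
}
```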
That said, these two projects are still works in progress, and we don't have a prediction for when they will be complete.
Hope this helps.