When GTS runs in a container runtime with configured CPU or memory limits, or under an init system that uses cgroups to impose such limits, the values the Go runtime derives for GOMAXPROCS and GOMEMLIMIT are still based on the host's resources, not the cgroup's. For the throttling middlewares in particular, which use GOMAXPROCS to size their queues, this can leave GTS running with values too large for the resources actually available to it. This introduces two dependencies that pick up resource constraints from the current cgroup and tune the Go runtime accordingly. The result should be appropriately sized queues and, in general, more predictable performance. These dependencies are a no-op on non-Linux systems, or when running in a cgroup that sets no CPU or memory limit. The automatic tuning of GOMEMLIMIT can be disabled by explicitly setting GOMEMLIMIT yourself or by setting AUTOMEMLIMIT=off; the automatic tuning of GOMAXPROCS can similarly be overridden by setting GOMAXPROCS yourself.
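The two dependencies are not named above; a pairing consistent with the behaviour described (including the AUTOMEMLIMIT=off switch) is go.uber.org/automaxprocs and github.com/KimMachineGun/automemlimit, both of which do their work via side-effect imports. A minimal sketch, assuming those two packages:

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"

	// Each package's init() inspects the current cgroup on Linux and tunes
	// the Go runtime; on non-Linux systems, or in a cgroup without limits,
	// it does nothing. Per the entry above, setting GOMAXPROCS or GOMEMLIMIT
	// yourself, or AUTOMEMLIMIT=off, takes precedence.
	_ "github.com/KimMachineGun/automemlimit" // sets GOMEMLIMIT from the cgroup memory limit
	_ "go.uber.org/automaxprocs"              // sets GOMAXPROCS from the cgroup CPU quota
)

func main() {
	// After the imports above run, the runtime reflects the cgroup rather
	// than the host: a container capped at 2 CPUs on a 32-core machine
	// reports GOMAXPROCS=2 here.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
	// Passing a negative value reads the current memory limit without changing it.
	fmt.Println("GOMEMLIMIT:", debug.SetMemoryLimit(-1))
}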
How to contribute
Development happens on GitHub, and contributions in the form of pull requests and issues reporting bugs or suggesting new features are welcome. Please take a look at the architecture to get a better understanding of the high-level goals.
New features must be accompanied by tests. Before starting work on any large feature, please join the #libbpf-go channel on Slack to discuss the design first.
When submitting pull requests, consider describing what problem you are solving and why the proposed approach solves it in your commit messages and/or pull request description, to help future library users and maintainers reason about the proposed changes.
Running the tests
Many of the tests require privileges to set resource limits and load eBPF code.
The easiest way to obtain these is to run the tests with sudo:
sudo go test ./...