When GTS is running in a container runtime that has configured CPU or memory limits, or under an init system that uses cgroups to impose CPU and memory limits, the values the Go runtime sees for GOMAXPROCS and GOMEMLIMIT are still based on the host resources, not the cgroup. At least for the throttling middlewares, which use GOMAXPROCS to configure their queue size, this can result in GTS running with values too big for the resources that will actually be available to it. This introduces two dependencies which can pick up resource constraints from the current cgroup and tune the Go runtime accordingly. This should result in the different queues being appropriately sized and, in general, more predictable performance. These dependencies are a no-op on non-Linux systems or when running in a cgroup that doesn't set a limit on CPU or memory. The automatic tuning of GOMEMLIMIT can be disabled by either explicitly setting GOMEMLIMIT yourself or by setting AUTOMEMLIMIT=off. The automatic tuning of GOMAXPROCS can similarly be counteracted by setting GOMAXPROCS yourself.
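As a rough illustration of the approach (the sketch below assumes the commonly used go.uber.org/automaxprocs and github.com/KimMachineGun/automemlimit packages; the exact dependencies and wiring in GTS may differ), blank-importing them early in main is enough for the Go runtime to be tuned from the cgroup limits:

    package main

    import (
        "log"

        // Sets GOMAXPROCS from the CPU quota of the current cgroup; a no-op
        // on non-Linux systems or when the cgroup sets no CPU limit.
        _ "go.uber.org/automaxprocs"

        // Sets GOMEMLIMIT from the memory limit of the current cgroup;
        // skipped when GOMEMLIMIT is already set or AUTOMEMLIMIT=off.
        _ "github.com/KimMachineGun/automemlimit"
    )

    func main() {
        log.Println("runtime tuned from cgroup limits, if any")
    }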
cgroups
Go package for creating, managing, inspecting, and destroying cgroups. The resources format for settings on the cgroup uses the OCI runtime-spec found at https://github.com/opencontainers/runtime-spec.
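The examples below refer to the cgroups package and to the OCI resource types as specs; a minimal sketch of the imports they assume (the paths shown are the v1-era module paths and may differ for newer releases):

    import (
        "github.com/containerd/cgroups"

        // OCI runtime-spec types; the package name is specs.
        specs "github.com/opencontainers/runtime-spec/specs-go"
    )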
Examples
Create a new cgroup
This creates a new cgroup using a static path for all subsystems under /test.
- /sys/fs/cgroup/cpu/test
- /sys/fs/cgroup/memory/test
- etc....
It uses a single hierarchy, specifies cpu shares as a resource constraint, and uses the v1 implementation of cgroups.
shares := uint64(100)
control, err := cgroups.New(cgroups.V1, cgroups.StaticPath("/test"), &specs.LinuxResources{
    CPU: &specs.LinuxCPU{
        Shares: &shares,
    },
})
defer control.Delete()
Create with systemd slice support
control, err := cgroups.New(cgroups.Systemd, cgroups.Slice("system.slice", "runc-test"), &specs.LinuxResources{
    CPU: &specs.LinuxCPU{
        Shares: &shares,
    },
})
Load an existing cgroup
control, err = cgroups.Load(cgroups.V1, cgroups.StaticPath("/test"))
Add a process to the cgroup
if err := control.Add(cgroups.Process{Pid: 1234}); err != nil {
}
Update the cgroup
To update the resources applied in the cgroup
shares = uint64(200)
if err := control.Update(&specs.LinuxResources{
    CPU: &specs.LinuxCPU{
        Shares: &shares,
    },
}); err != nil {
}
Freeze and Thaw the cgroup
if err := control.Freeze(); err != nil {
}
if err := control.Thaw(); err != nil {
}
List all processes in the cgroup or recursively
processes, err := control.Processes(cgroups.Devices, recursive)
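Here recursive is a bool controlling whether child cgroups are walked as well; a short usage sketch (the loop assumes only the Pid field already used in the Add example above):

    recursive := true // also walk child cgroups
    processes, err := control.Processes(cgroups.Devices, recursive)
    if err != nil {
        return err
    }
    for _, p := range processes {
        fmt.Println(p.Pid)
    }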
Get Stats on the cgroup
stats, err := control.Stat()
By adding cgroups.IgnoreNotExist all non-existent files will be ignored, e.g. swap memory stats without swap enabled.
stats, err := control.Stat(cgroups.IgnoreNotExist)
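The returned stats expose per-subsystem metrics; for example (the field names below come from the v1 stats types and are an assumption to verify against the package version you are using):

    // Current memory usage in bytes and cumulative CPU time in nanoseconds.
    fmt.Println(stats.Memory.Usage.Usage)
    fmt.Println(stats.CPU.Usage.Total)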
Move process across cgroups
This allows you to take processes from one cgroup and move them to another.
err := control.MoveTo(destination)
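The destination must itself be a loaded or newly created cgroup; a sketch (the /destination path is hypothetical):

    destination, err := cgroups.Load(cgroups.V1, cgroups.StaticPath("/destination"))
    if err != nil {
        return err
    }
    if err := control.MoveTo(destination); err != nil {
        return err
    }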
Create subcgroup
subCgroup, err := control.New("child", resources)
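Here resources is the same *specs.LinuxResources structure used when creating a cgroup; a sketch that limits the child's memory (the 300 MiB value is arbitrary):

    limit := int64(300 * 1024 * 1024)
    resources := &specs.LinuxResources{
        Memory: &specs.LinuxMemory{
            Limit: &limit,
        },
    }
    subCgroup, err := control.New("child", resources)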
Registering for memory events
This allows you to get notified by an eventfd for v1 memory cgroups events.
event := cgroups.MemoryThresholdEvent(50 * 1024 * 1024, false)
efd, err := control.RegisterMemoryEvent(event)
event := cgroups.MemoryPressureEvent(cgroups.MediumPressure, cgroups.DefaultMode)
efd, err := control.RegisterMemoryEvent(event)
efd, err := control.OOMEventFD()
// or by using RegisterMemoryEvent
event := cgroups.OOMEvent()
efd, err := control.RegisterMemoryEvent(event)
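The returned efd is an eventfd file descriptor; a minimal sketch of blocking on it with golang.org/x/sys/unix (error handling and goroutine structure are up to you):

    // Each successful 8-byte read signals that the registered event has fired.
    buf := make([]byte, 8)
    if _, err := unix.Read(int(efd), buf); err != nil {
        return err
    }
    // handle the OOM / threshold / pressure notification here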
Attention
Static paths should not include the /sys/fs/cgroup/ prefix; they should start with your own cgroup name.
Project details
Cgroups is a containerd sub-project, licensed under the Apache 2.0 license. As a containerd sub-project, you will find the project governance, maintainers, and contributing guidelines information in our containerd/project repository.