Add spire-agent workload API rate limiting by pod UID with OS UID fallback #6724

terahertz5k wants to merge 3 commits into spiffe:main
Conversation
…lback Signed-off-by: Kevin Lui <kevin.lui@thetradedesk.com>
Hi @terahertz5k, thanks for opening a PR for this! It's something missing from the agent that we'd definitely want improved. We were wondering if you could give some details about the particular scenario you ran into that this PR helped with.
```go
// perCallerRateLimiter maintains per-caller rate limiters using a two-generation GC pattern.
// The key is a string that may represent a pod UID or OS UID (with a prefix to avoid collisions).
type perCallerRateLimiter struct {
	limit int

	mtx sync.RWMutex

	// previous holds all the limiters that were current at the GC.
	previous map[string]*rate.Limiter

	// current holds all the limiters that have been created or moved
	// from the previous limiters since the last GC.
	current map[string]*rate.Limiter

	// lastGC is the time of the last GC.
	lastGC time.Time
}
```
There's a lot in common between this structure and the perIPLimiter in the server. Is it possible to factor out some of the common code into pkg/common/ratelimit so that the agent and the server can share the code?
Good call.
I can extract a shared PerKeyLimiter into pkg/common/ratelimit with a Limiter interface that covers both AllowN and WaitN. The two-generation GC map logic moves there, and both the agent's perCallerRateLimiter and server's perIPLimiter become thin wrappers that just handle key extraction.
Today, the agent stores *rate.Limiter directly while the server stores its own rawRateLimiter interface (so tests can inject fakes). One shared Limiter interface replaces rawRateLimiter and works for both cases: the agent calls AllowN on it, the server calls WaitN, and the server's test fakes just need a no-op AllowN stub added.
```go
// OS UID prefixed with "uid:".
func (m workloadRateLimitMiddleware) resolveRateLimitKey(caller peertracker.CallerInfo) string {
	if m.resolver != nil {
		if podUID := m.resolver.GetPodUID(caller.PID); podUID != "" {
```
This operation can be expensive and will add some RPC time even in cases where SPIRE is not running on Kubernetes but has rate limits configured. We would need to find a way to address this so that we don't do unnecessary work on nodes that don't need it.
It would be good to understand the scenario you ran into that this change helped with. We can see how to change this to handle that while also not affecting non-k8s users.
Agreed, the procfs reads on every RPC are unnecessary on non-k8s nodes.
I'm thinking I can add two things:
- Check KUBERNETES_SERVICE_HOST env var at resolver construction time. If unset, return a nil resolver so there's zero overhead on non-k8s nodes.
- On k8s nodes, we could wrap the resolver with a per-PID cache (short TTL ~10s) so we only hit procfs once per PID per interval, instead of on every RPC. Not sure if this is needed or not though.
I replied separately, but we were seeing agent memory increase due to badly behaving applications hammering the socket for certain operations. This would cause k8s to kill the agent due to OOM for the entire k8s node.
Hi @sorindumitru, we were seeing badly behaving applications hammer the socket and cause memory usage on the agent to increase high enough for k8s to kill the spire-agent pod due to OOM. We will be using this for x509 and JWT fetch. I'll have a look at your other two comments and see what I can do there.
How were these applications misbehaving?
We have an application that spins up multiple short-lived pods simultaneously, with processes forking within containers and creating a new connection to the agent socket on every retry rather than reusing existing ones. We worked with the team to fix their retry logic and raised agent memory limits, which stabilized things. We didn't know it was this application at first, but our graphs showed something making lots of jwtfetch requests at certain times of day, correlated with agent pod restarts.

That said, we want to add this rate limiting as a safety net. It's valid for applications to have multiple identities and make fetch requests for each, but a single caller shouldn't be able to overwhelm the agent to the point of OOM. We'd rather shed excess load gracefully than let one misbehaving workload take down the agent for everyone on the node.
…ution Extract the two-generation GC per-key rate limiter into pkg/common/ratelimit so both the agent and server share the same implementation. Add Kubernetes auto-detection to skip procfs reads on non-k8s nodes and cache pod UID lookups per PID to reduce overhead. Signed-off-by: Kevin Lui <kevin.lui@thetradedesk.com>
Thanks @terahertz5k for the answers above. One issue we have with the current approach is that it's very specific to k8s (and a bit to unix). We're wondering if applying the rate limit per workload SPIFFE ID might be a better approach. This has the benefit of being useful in non-k8s cases but also comes with some disadvantages:

Do you think you might be able to apply the rate limit after attestation and see if it still works OK for the cases you ran into?
Hello, I would like to work on this issue. |
@sorindumitru, thanks for the feedback. The issue we hit was specifically that FetchJWTSVID is unary and every call triggers a fresh attestation. Moving the rate limit to after attestation would most likely undermine the primary goal of this change, which is to protect the agent from memory exhaustion; the concurrent attestation work is what drives the OOM, I think.

The pod UID resolution is the only k8s-specific piece, and it's only active on Linux nodes running k8s. On all other platforms, it falls back to OS UID. It's really an enhancement for cases (more likely in k8s) where container images may use the same OS UID.

To validate this, I'll move the rate limit after attestation and run my load tests. I'll report back with the results.
Based on my tests at 128MB and a 10 RPS rate limit, pre-attestation code survived 5 minutes of 4 pods with 100 forked workers each hammering FetchJWTSVID in a tight loop, but post-attestation code was OOMKilled instantly. In further tests at 256MB with 4x300 workers, pre-attestation code peaked at 168 MB, while post-attestation code was again OOMKilled. This shows a large delta between pre- and post-attestation rate limiting as load increases. Granted, this load is pretty extreme for a single agent. I increased memory to 512MB just to see what the actual memory peak was at 4x300 workers with a 10 RPS rate limit.
Sorry for the long delay here @terahertz5k, I was out for a few weeks. Would it be possible to get some heap profiling snapshots of these? I'm curious where this extra memory is spent. I've done this locally using the unix workload attestor, and the main memory increase seems to come from gRPC message buffers/parsing:
You can configure the agent with something like the following. This will tell the agent to periodically dump heap profiling dumps to disk, in a
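Heap profiling in the agent can be enabled with configuration along these lines (a hedged sketch; verify the exact option names and values against the SPIRE agent configuration reference for your version):

```hcl
agent {
    # ...existing agent configuration...

    # Enable the profiler and periodically write heap profiles to disk.
    profiling_enabled = true
    profiling_freq    = 30        # seconds between profile dumps
    profiling_names   = ["heap"]  # which profiles to capture
}
```

The resulting pprof files can then be inspected with `go tool pprof` to see where allocations concentrate.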
Or you could try the following patch, which reduces the read buffer size from 32KB to 1KB:

I don't know if that's the right thing to do or if this is the right size to pick, but we do usually have smaller incoming payloads, so this may be a good improvement to have.
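A change like that would hinge on grpc-go's `ReadBufferSize` server option. An illustrative diff (the surrounding option list here is hypothetical, not SPIRE's actual server construction):

```diff
 server := grpc.NewServer(
 	grpc.Creds(creds),
+	// Workload API payloads are small; shrink the default 32KiB
+	// per-connection read buffer to 1KiB.
+	grpc.ReadBufferSize(1024),
 )
```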
After investigating further, I found that my refactor in 4f8c01a introduced a memory regression.

This time around, I switched from unlimited RPS to a 50 RPS/pod cap, since pre-attestation rejection is nearly instantaneous while post-attestation has to complete a full attestation first; at unlimited RPS the two modes see completely different request volumes and aren't really comparable.

The issue is the sync.Map cache that 4f8c01a added, keyed by PID, to try to avoid repeated procfs reads. That works fine if callers are long-lived processes, but spire-agent api fetch jwt spawns a fresh subprocess per call. Every request gets a new PID, the cache never hits, and the map grows without bound. The fix is just removing the cache. The rest of the refactor (extracting PerKeyLimiter into the shared package, the cleaner agent/server code) is fine and worth keeping.

Numbers (120s sustained, 50 RPS/pod across 4 pods, 10 RPS rate limit, 512 MiB): No RL: 74 MiB (baseline)

Pre-attestation RL with the cache removed and post-attestation RL land at nearly the same place. Happy to go either direction.
After the cache removal fix for pre-attestation RL, I ran comparison tests on your 1k read buffer patch. Numbers are averaged across the 120s window rather than peak: No RL: 58 MiB without, 55 MiB with (-5%)

There's a small but consistent reduction in sustained memory across all configs, most visible on post-RL at -10%.
Thanks @terahertz5k. Doing the rate limit post attestation with the key being the SPIFFE ID sounds like the best way forward. We get most of the benefits and we also end up not being k8s specific. Out of curiosity, I see that the memory numbers you mentioned now are smaller. Was this a run with fewer workloads?
The number of workloads was the same, but the old tests were at unlimited RPS. Pre-attestation rejection is nearly instantaneous compared to doing a full attestation, which means pre-RL ends up seeing far more total requests than the other configs, making it an unequal comparison. Capping at 50 RPS/pod keeps the request volume equal across all three configs. It also removes some of the variance from my CPU being totally maxed out and probably thermal throttling.

I'll work on the post-attestation code and commit that soon. Did you want me to include the read buffer patch? Or is that something you're working on elsewhere?
Signed-off-by: Kevin Lui <kevin.lui@thetradedesk.com>

…lback
Pull Request check list
Affected functionality
SPIRE agent Workload API — adds optional per-caller rate limiting for all 5 Workload API methods.
Description of change
Adds an optional `ratelimit` configuration block to the SPIRE agent that enforces per-caller token-bucket rate limits on Workload API methods (`FetchX509SVID`, `FetchX509Bundles`, `FetchJWTSVID`, `FetchJWTBundles`, `ValidateJWTSVID`). When a caller exhausts its burst, the agent returns `ResourceExhausted` immediately. All limits default to 0 (disabled), so existing deployments are unaffected.

Key design decisions:

- Pod UID as the primary rate-limit key prevents `uid=0` pods from sharing one bucket. Falls back to OS UID on non-Kubernetes or non-Linux systems.
- Fail-fast (`AllowN`) — unlike the server-side `perIPLimiter`, which uses blocking `WaitN`, the Workload API fails fast with `ResourceExhausted` to avoid silently queuing goroutines during reconnection storms.
- Same two-generation GC pattern as `perIPLimiter`: O(active-callers) memory, no background goroutine, automatic eviction after ~2 minutes of inactivity.
- A `workload_api.rate_limit_exceeded` counter with `method` and `key_type` labels. Per-rejection logging was intentionally omitted to avoid `logrus.Entry` allocation pressure under heavy load.
- Implemented as a `middleware.Middleware`, inserted last in the chain (after `verifySecurityHeader`). No changes to existing handler code.

Configuration example:
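One way the block could look (the field names below are hypothetical placeholders for illustration; the PR defines the real ones):

```hcl
agent {
    # ...existing agent configuration...

    ratelimit {
        # Hypothetical field names; 0 (the default) disables a limit.
        fetch_jwt_svid_rps  = 10   # sustained requests/second per caller
        fetch_x509_svid_rps = 10
        burst               = 20   # short-term burst allowance
    }
}
```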
Testing:
Which issue this PR fixes
fixes #2010