{"_id":"doc-en-kubernetes-88466bc637a805045217da94feabbdbd3df0ba52cd0d9904e95d3984c4cf4efa","title":"","text":"( noticed this) HardPodAffinitySymmetricWeight should be in the , not in KubeSchedulerConfiguration. If/when fixing this, we need to be careful to do it in a backward-compatible way.\nDo you mean ? It seems the scheduler policy config's args cannot be passed to FitPredicates; the related interface needs to be updated. For the : mark it as deprecated, and set the scheduler policy config's HardPodAffinitySymmetricWeight if the policy config is empty.\nAre you working on this? If not, I can take it.\n, nope; please go ahead.\n/ / / , When reviewing the PR, there's one question on : the default value of is 1, which means the weight (1) will always be added to the priorities if the term matched (); is that our expectation? And the user has to set it to 0 to disable it. IMO, the default value of should be 0.\nThe value 1 (instead of 0) was chosen so that if you say A must run with B, there is an automatic preference that B should run with A. This seems like reasonable behavior.\nThanks very much; that sounds reasonable to me. But it seems we cannot disable it: if it is set to 0, the defaulting funcs will set it back to the default (1); the same result as leaving it unset. Anyway, no user has complained about this yet :)."} {"_id":"doc-en-kubernetes-599cbd9914ac52a34ee08ac617d1f4bd774f52f2d5bdcb07fa1a0e76f7194545","title":"","text":"On a large cluster, the routecontroller basically makes O(n) requests all at once, and most of them error out with \"Rate Limit Exceeded\". We then wait for operations to complete on the ones that did stick, and damn the rest; they'll be caught on the next reconcile (which is a while away). In fact, in some cases it appears we're spamming hard enough to get rate limited by the upper-level API DoS protections and seeing something akin to , e.g.: We can do better than this!\nc.f. , which is the other leg of this problem.\nI took a look into the code and it seems that we are doing a full reconciliation every 10s. 
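The "set it to 0 and it gets re-defaulted to 1" behavior described in the first thread is the classic value-vs-pointer defaulting problem. Below is a minimal, hypothetical sketch (not the real KubeSchedulerConfiguration types or defaulting code) showing why a plain `int32` field cannot distinguish "explicitly 0" from "unset", while a `*int32` field can.

```go
package main

import "fmt"

// Hypothetical config structs for illustration only; the real
// KubeSchedulerConfiguration types differ.
type valueConfig struct {
	HardPodAffinitySymmetricWeight int32
}

type pointerConfig struct {
	HardPodAffinitySymmetricWeight *int32
}

const defaultWeight int32 = 1

// defaultValue cannot tell "explicitly 0" from "unset": both look like 0,
// so a user-supplied 0 is silently overwritten with the default.
func defaultValue(c *valueConfig) {
	if c.HardPodAffinitySymmetricWeight == 0 {
		c.HardPodAffinitySymmetricWeight = defaultWeight
	}
}

// defaultPointer only fills in the default when the field is truly unset (nil).
func defaultPointer(c *pointerConfig) {
	if c.HardPodAffinitySymmetricWeight == nil {
		w := defaultWeight
		c.HardPodAffinitySymmetricWeight = &w
	}
}

func main() {
	v := valueConfig{HardPodAffinitySymmetricWeight: 0} // user asked for 0
	defaultValue(&v)
	fmt.Println(v.HardPodAffinitySymmetricWeight) // prints 1: the 0 was lost

	zero := int32(0)
	p := pointerConfig{HardPodAffinitySymmetricWeight: &zero}
	defaultPointer(&p)
	fmt.Println(*p.HardPodAffinitySymmetricWeight) // prints 0: preserved
}
```

This is why disabling the weight by setting it to 0 behaves the same as not setting it at all when the defaulting operates on a value-typed field.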
If we need to create 2000 routes at startup, that seems too frequent. I think that we need to: first compute how many routes we have to create, and based on that try to spread them over some time to reduce the possibility of hitting 'rate limit exceeded'. I will try to put a small PR together as a starting point.\nI'm reopening this one, since this doesn't seem to be solved. So basically something strange is happening here. With PR we have a 10/s limit on API calls sent to GCE. And we are still rate-limited. However, according to the documentation we are allowed 20/s: So either something doesn't work, or this is weird.\nWhat is super interesting from the logs of a recent run: It starts creating routes: After 2 minutes from that, we are hitting \"Rate Limit Exceeded\" for the first time: And only after 9 minutes do we create the first route ever. The second one is created after 15 minutes, and from that point we create a number of them. So it seems that nothing useful is happening for the first 2 minutes, and then between 3m30 and 9m30.\nOK - so I think that what is happening is that all controllers are sharing the GCE interface (and thus they have common throttling). So if one controller is generating a lot of API calls, then other controllers may be throttled. has a hypothesis that it may be caused by the nodecontroller.\nOne thing my PR didn't address (because it was late): Now we have some obnoxious differences in logging when you go to make a request, because the versus could actually be absorbing a fair amount of the ratelimiter."} {"_id":"doc-en-kubernetes-bf24c8733a67a958496376416be2eb636c6e4bfb94999815173220461c93fd0c","title":"","text":"It could before, because of the operation rate limiter and operation polling anyway, but now it could be worse.\n- yes, I'm aware of that. 
And probably this is exactly the case here.\nI got fooled by that - I guess we should fix it somehow.\nOne approach to fixing it is an approach similar to the operation polling code, in the : warn if we were rate-limited for more than N seconds (and dump the request or something? It's not clear how to clarify what we were sending).\nI found something else - the Kubelet is also building a GCE interface: We have 2000 kubelets, so if all of them send multiple requests to GCE, we may hit limits.\nAny idea in what circumstances the Kubelet will contact the cloud provider?\nHmm - it seems it contacts GCE exactly once at the beginning. So it doesn't explain much.\nI think it's only here: I've rarely seen GCE ratelimit readonly calls in practice, and kubelets come up pretty well staggered.\nOK - so with the logs that I , it is pretty clear that we are very heavily throttled on GCE API calls. In particular, when I was running a 1000-node cluster, I saw lines where we were throttled for 1m30. I'm pretty sure it's significantly higher in a 2000-node cluster. I'm going to investigate a bit more where all those requests come from.\nActually - it seems that we have a pretty big problem here. Basically, the Kubelet seems to send a lot of requests to GCE. 
So to clarify: every time the Kubelet updates NodeStatus (so every 10s by default), it sends an API call to GCE to get its node addresses: that means that if we have 2000 nodes, kubelets generate 200 QPS to GCE (which is significantly more than we can afford). We should think about how to solve this issue, but I think the options are: increasing the QPS quota, or calling GCE only once per X status updates.\nThe second issue is that we are throttled at the controller level too, and this one I'm trying to understand right now."} {"_id":"doc-en-kubernetes-b1dad1b374e4c58d925d9fef62d9846d5e0e5eda04aecdd78a124036a36cf31c","title":"","text":"- that said, any rate-limiting on our side will not solve the kubelet-related problem.\nHmm - actually, looking into the implementation, it seems that NodeAddresses only contacts the metadata server, so I'm no longer that convinced...\nAlso, I think the important thing is that a single call to the GCE API can translate into multiple API calls to GCE. In particular - CreateRoute translates to: get instance; insert route; a bunch of getOperation calls (until it is finished).\nOK - so I think that what is happening here is that since CreateRoute translates to: get instance; insert route; a bunch of getOperations. That means that if we call, say, 2000 CreateRoute() at the same time, all of them will try to issue getInstance at the beginning. So we will quickly accumulate 2000 GET requests in the queue. Once they are processed, we generate the \"POST route\" for each one processed. So it's kind of expected what is happening here. So getting back to Zach's PR - I think that throttling POST, GET, and OPERATION calls separately is a good idea in general.\nBut I still don't really understand why we get \"RateLimitExceeded\". Is that because of the Kubelets?\nI think that one problem we have at the RouteController level is that if CreateRoute fails, we simply don't react to it. And we will wait at least 5 minutes before even retrying it. 
We should add some retries at the route controller level too. I will send out a short PR for it.\nBut I still don't understand why we get those \"RateLimitExceeded\" errors.\nOK - so as an update: in GCE, we have a separate rate limit on the number of in-flight CreateRoute calls per project. is supposed to address this issue.\nOK - so this one is actually fixed. I mean that it still takes long, but currently it's purely blocked on GCE.\nAs a comment - we have an internal bug for it already."} {"_id":"doc-en-kubernetes-631bc5da562ebc33bd49790f3078648639a2126878c5fc7773cd8143bd3bdf27","title":"","text":"TestRoundTripSocks5AndNewConnection Disabled by No response No response No response /sig\nThere are no sig labels on this issue. Please add an appropriate label by using one of the following commands: - Please see the for a listing of the SIGs, working groups, and committees available.