| content (string, 3–20.5k chars) | url (string, 54–193 chars) | branch (string, 4 classes) | source (string, 42 classes) | embeddings (list of float, length 384) | score (float64, -0.21 to 0.65) |
|---|---|---|---|---|---|
| Prometheus as a datasource] Let's explore the metrics in Amazon Managed Grafana now: Click the explore button, and search for ethtool: image::mon_explore_metrics.png[Node_ethtool metrics] Let's build a dashboard for the linklocal_allowance_exceeded metric by using the query `rate(node_net_ethtool{device="eth0",t... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/networking/monitoring.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.104476 |
| = Optimizing over time (Right Sizing) Right Sizing as per the AWS Well-Architected Framework, is "`... using the lowest cost resource that still meets the technical specifications of a specific workload`". When you specify the resource `requests` for the Containers in a Pod, the scheduler uses this information to decid... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/optimizing.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.120289 |
| applications and microservices running on Amazon Elastic Kubernetes Service. The metrics include utilization for resources such as CPU, memory, disk, and network - which can help with right-sizing Pods and save costs. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights-Prometheus-metrics.ht... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/optimizing.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.245132 |
| //!!NODE_ROOT [."topic"] [[cost-opt-observability,cost-opt-observability.title]] = Cost Optimization - Observability :info_doctype: section :imagesdir: images/ :info_title: Observability :info_abstract: Observability :info_titleabbrev: Observability :authors: ["Rachel Leekin", "Nirmal Mehta"] :date: 2023-09-29 == ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_observability.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.21459 |
| can customize the retention policy for each log group based on your workload requirements. In a development environment, a lengthy retention period may not be necessary. But in a production environment, you can set a longer retention policy to meet troubleshooting, compliance, and capacity planning requirements. For ex... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_observability.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.012354 |
| default. For your application logs, adjust the log levels to align with the criticality of the workload and environment. For example, the Java application below is outputting `INFO` logs, which is the typical default application configuration and, depending on the code, can result in a high volume of log data. ---- import... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_observability.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.068877 |
| health of resources, allowing you to take proactive measures when necessary. Generally, observability costs scale with telemetry data collection and retention. Below are a few strategies you can implement to reduce the cost of metric telemetry: collecting only metrics that matter, reducing the cardinality of your teleme... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_observability.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.08714 |
| can run the following PROMQL query to determine which scrape targets have the highest number of metrics (cardinality): [,promql] ---- topk_max(5, max_over_time(scrape_samples_scraped[1h])) ---- and the following PROMQL query can help you determine which scrape targets have the highest metrics churn (how many new m... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_observability.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.042196 |
| and 10% of the requests to a payment page this might leave you with 300 traces for a 30 minute period. With an ADOT Tail Sampling rule that filters specific errors, you could be left with 200 traces which decreases the number of traces stored. ---- processors: groupbytrace: wait_duration: 10s num_traces: 300 tail... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_observability.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | -0.009222 |
| [."topic"] [#cost-opt-networking] = Cost Optimization - Networking :info_doctype: section :authors: ["Lukonde Mwila"] :date: 2023-09-22 :info_titleabbrev: Network :imagesdir: images/ Architecting systems for high availability (HA) is a best practice in order to accomplish resilience and fault-tolerance. In practice, ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_networking.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.162252 |
| also set a _hint_ for the zone. _Hints_ describe which zone an endpoint should serve traffic for. `kube-proxy` will then route traffic from a zone to an endpoint based on the _hints_ that get applied. The diagram below shows how EndpointSlices with hints are organized in such a way that `kube-proxy` can know what... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_networking.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.110571 |
| the increased load in the affected zone. * This situation in turn can lead to resource inefficiency. When cluster autoscalers like Karpenter detect the pod scale-out across different AZs, they may provision additional nodes in the unaffected AZs, resulting in unnecessary resource allocation. To overcome this challenge... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_networking.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.175841 |
| A could go to any replica of Microservice B on any given node across the different AZs. However, with the Service's internal traffic policy set to `Local`, traffic will be restricted to endpoints on the node that the traffic originated from. This policy dictates the exclusive use of node-local endpoints. By implication... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_networking.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.120949 |
| cluster. Your architecture may comprise internal and/or external facing load balancers. Depending on your architecture and network traffic configurations, the communication between load balancers and Pods can contribute a significant amount to data transfer charges. You can use the https://kubernetes-sigs.github.io/aws... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_networking.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.212481 |
| data transfer costs from a container registry to the EKS worker nodes. == Data Transfer to Internet & AWS Services It's a common practice to integrate Kubernetes workloads with other AWS services or third-party tools and platforms via the Internet. The underlying network infrastructure used to route traffic to and from... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_networking.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.240463 |
| Internet Gateways to establish communication between workloads in different VPCs. image::between_vpcs.png[Between VPCs] === VPC Peering Connections To reduce costs for such use cases, you can make use of https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html[VPC Peering]. With a VPC Peering connection... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_networking.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.014764 |
| to Pod replicas in different AZs. Below is a code block example of a Destination Rule resource in Istio. As can be seen below, this resource specifies weighted configurations for incoming traffic from 3 different AZs in the `eu-west-1` region. These configurations declare that a majority of the incoming traffic (70% in... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_networking.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.401932 |
| A** (on *NODE 3) to* **APP C** fails because there are no available _node-local endpoints_ for **APP C**. As the diagram shows, APP C has no replicas on NODE 3. **** The screenshots below are captured from a live example of this approach. The first set of screenshots demonstrates a successful external ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_networking.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.225639 |
| [."topic"] [#cost-opt-framework] = Cost Optimization Framework :info_doctype: section :info_titleabbrev: Framework :imagesdir: images/ AWS Cloud Economics is a discipline that helps customers increase efficiency and reduce their costs through the adoption of modern compute technologies like Amazon EKS. The discipline... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cfm_framework.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.112495 |
| be considered as part of the workload. For considerations relating to EKS storage costs, please consult the xref:cost-opt-storage[Cost Optimization - Storage] section of this guide. == The Plan pillar: Planning and forecasting Once the recommendations in the See pillar are implemented, clusters are optimized on an on-g... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cfm_framework.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.156452 |
| //!!NODE_ROOT [[cost-opt,cost-opt.title]] = Cost Optimization - Introduction :doctype: book :sectnums: :toc: left :icons: font :experimental: :idprefix: :idseparator: - :sourcedir: . :info_doctype: chapter :info_title: Best Practices for Cost Optimization :info_abstract: Best Practices for Cost Optimization :info_... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_optimization_index.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.098607 |
| This guide is being released on GitHub so as to collect direct feedback and suggestions from the broader EKS/Kubernetes community. If you have a best practice that you feel we ought to include in the guide, please file an issue or submit a PR in the GitHub repository. Our intention is to update the guide periodically a... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_optimization_index.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.199362 |
| //!!NODE_ROOT [."topic"] [[cost-opt-storage,cost-opt-storage.title]] = Cost Optimization - Storage :info_doctype: section :imagesdir: images/ :info_title: Storage :info_abstract: Storage :info_titleabbrev: Storage :authors: ["Chance Lee"] :date: 2023-10-31 == Overview There are scenarios where you may want to run ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_storage.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.117899 |
| to preserve persistent data or information from one request to the next. Databases are a common example for such use cases. However, Pods, and the containers or processes inside them, are ephemeral in nature. To persist data beyond the lifetime of a Pod, you can use PVs to define access to storage at a specific locatio... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_storage.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.194864 |
| your storage performance and cost to the needs of your applications. ==== Monitor and optimize over time It's important to understand your application's baseline performance and monitor it for selected volumes to check if it's meeting your requirements/expectations or if it's over-provisioned (e.g. a scenario where pro... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_storage.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.075319 |
| workloads and applications include Wordpress and Drupal, developer tools like JIRA and Git, and shared notebook systems such as Jupyter, as well as home directories. One of the main benefits of Amazon EFS is that it can be mounted by multiple containers spread across multiple nodes and multiple availability zones. Another be... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_storage.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.108365 |
| Lustre provides different deployment options. The first option is called _scratch_ and it doesn't replicate data, while the second option is called _persistent_ which, as the name implies, persists data. The first option (_scratch_) can be used _to reduce the cost of temporary shorter-term data processing._ The... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_storage.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.102741 |
| way to create minimal images. Multiple layers of packages, tools, application dependencies, libraries can easily bloat the container image size. By using multi-stage builds, you can selectively copy artifacts from one stage to another, excluding everything that isn't necessary from the final image. You can check more i... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_storage.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.006843 |
| //!!NODE_ROOT [."topic"] [[cost-opt-compute,cost-opt-compute.title]] = Cost Optimization - Compute and Autoscaling :info_doctype: section :imagesdir: images/ :info_title: Compute and Autoscaling :info_abstract: Compute and Autoscaling :info_titleabbrev: Compute :authors: ["Justin Garrison", "Rajdeep Saha"] :date: ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_compute.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.119407 |
| This will require getting metrics from your applications and setting configurations such as https://kubernetes.io/docs/tasks/run-application/configure-pdb/[`PodDisruptionBudgets`] and https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.5/deploy/pod_readiness_gate/[Pod Readiness Gates] to make sure your ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_compute.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.081582 |
| number of nodes you need during normal business hours and "minimum" as the number of nodes you need during off-business hours. Please refer to the https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#auto-discovery-setup[Cluster Autoscaler FAQ] doc. === Cluster Autoscaler Prio... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_compute.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.086062 |
| and node templates to define broadly what type of EC2 instances can be created and settings about the instances as they are created. Bin packing is the practice of utilizing more of the instance's resources by packing more workloads onto fewer, optimally sized, instances. While this helps to reduce your compute costs b... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_compute.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.138295 |
| to start with Kubernetes resource labeling and utilize tools like https://aws.amazon.com/blogs/containers/aws-and-kubecost-collaborate-to-deliver-cost-monitoring-for-eks-customers/[Kubecost] to get infrastructure cost reporting based on Kubernetes labels on pods, namespaces etc. Worker nodes need to have tags to show b... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_compute.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.162703 |
| run types in a single cluster to optimize specific workload requirements and cost. === Spot The https://aws.amazon.com/ec2/spot/[spot] capacity type provisions EC2 instances from spare capacity in an Availability Zone. Spot offers the largest discounts -- up to 90% -- but those instances can be interrupted when they are ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_compute.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.092976 |
| enroll in an Enterprise Agreement with AWS. Enterprise Agreements give customers the option to tailor agreements that best suit their needs. Customers can enjoy discounts on the pricing based on AWS EDP (Enterprise Discount Program). For additional information on Enterprise Agreements please contact your AWS sales repr... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_compute.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.081289 |
| the workload is a node which is not burstable or shareable between workloads. Fargate will save you EC2 instance management time which itself has a cost, but CPU and memory costs may be more expensive than other EC2 capacity types. Fargate pods can take advantage of compute savings plan to reduce the on-demand cost. ==... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/cost_opt_compute.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.110035 |
| //!!NODE_ROOT [."topic"] [[cost-opt-awareness,cost-opt-awareness.title]] = Expenditure awareness :info_doctype: section :imagesdir: images/ :info_title: Expenditure awareness :info_abstract: Expenditure awareness :info_titleabbrev: Awareness Expenditure awareness is understanding who, where and what is causing exp... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/awareness.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.159143 |
| Use the Kubernetes dashboard *_Kubernetes dashboard_* Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters, which provides information about the Kubernetes cluster including the resource usage at a cluster, node and pod level. The deployment of the Kubernetes dashboard on an Amazon EKS cl... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/awareness.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.190638 |
| deployed REVISION: 1 TEST SUITE: None NOTES: -------------------------------------------------- Kubecost has been successfully installed. When pods are Ready, you can enable port-forwarding with the following command: kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090 Next, navigate to http... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/cost/awareness.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.053456 |
| [."topic"] [#scale-cluster-services] = Cluster Services :info_doctype: section :imagesdir: images/scalability/ Cluster services run inside an EKS cluster, but they are not user workloads. If you have a Linux server you often need to run services like NTP, syslog, and a container runtime to support your workloads. Clus... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/cluster-services.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.27297 |
| 256 cores or 16 nodes in the cluster--whichever happens first. If using the https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html[CoreDNS EKS add-on], consider enabling the https://docs.aws.amazon.com/eks/latest/userguide/coredns-autoscaling.html[autoscaling] option. The CoreDNS autoscaler dynamically ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/cluster-services.adoc | mainline | aws-eks-best-practices | [384-dim embedding omitted] | 0.106768 |
scaling. Because there are so many different options for logging and monitoring we cannot show examples for every provider. With https://docs.fluentbit.io/manual/pipeline/filters/kubernetes[fluentbit] we recommend enabling Use\_Kubelet to fetch metadata from the local kubelet instead of the Kubernetes API Server and se... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/cluster-services.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.157566 |
[."topic"] [#scale-control-plane] = Kubernetes Control Plane :info\_doctype: section :info\_titleabbrev: Control Plane :imagesdir: images/scalability/ TIP: https://aws-experience.com/emea/smb/events/series/get-hands-on-with-amazon-eks?trk=4a9b4147-2490-4c63-bc9f-f8a84b122c8c&sc\_channel=el[Explore] best practices through... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/control-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.07883 |
repeatedly (e.g. in a for loop) or running commands without a local cache. `kubectl` has a client-side cache that caches discovery information from the cluster to reduce the amount of API calls required. The cache is enabled by default and is refreshed every 10 minutes. If you run kubectl from a container or without a ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/control-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.17467 |
number of inflight requests your cluster can handle is twice (or higher if horizontally scaled out further) the inflight quota set per kube-apiserver. This amounts to several thousands of requests/second on the largest EKS clusters. Two kinds of Kubernetes objects, called PriorityLevelConfigurations and FlowSchemas, co... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/control-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.160329 |
to drop the request and return the client a 429. Note that queuing may prevent a request from receiving a 429, but it comes with the tradeoff of increased end-to-end latency on the request. Now consider the catch-all FlowSchema that maps to the catch-all PriorityLevelConfiguration with type Reject. If clients reach the... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/control-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.015825 |
you care about. \* Isolate non-essential or expensive requests that can starve capacity for other request types. This can be accomplished by either changing the default FlowSchemas and PriorityLevelConfigurations or by creating new objects of these types. Operators can increase the values for assuredConcurrencyShares f... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/control-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.08126 |
scaling best practices. Suggestions in this section are provided in order starting with the options that are known to scale the best. === Use Shared Informers When building controllers and automation that integrate with the Kubernetes API you will often need to get information from Kubernetes resources. If you poll for... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/control-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.092507 |
etcd. This call will get all pods in all namespaces without pagination or limiting the scope and require a quorum read from etcd. ---- /api/v1/pods ---- === Prevent DaemonSet thundering herds A DaemonSet ensures that all (or some) nodes run a copy of a pod. As nodes join the cluster, the daemonset-controller creates po... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/control-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.241607 |
also trigger a `RollingUpdate` and gradually replace all existing DaemonSet pods because the DaemonSet template changed. ==== Prevent thundering herds on node scale-outs Similarly to DaemonSet creation, creating new nodes at a fast rate can result in a large number of DaemonSet pods starting concurrently. You should cr... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/control-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.076656 |
---- #!/bin/bash daemonset\_pods=$(kubectl get --raw "/api/v1/namespaces/kube-system/pods?labelSelector=name%3Dfluentd-elasticsearch" | jq -r '.items | .[].metadata.name') for pod in ${daemonset\_pods[@]}; do echo "Deleting pod $pod" kubectl delete pod $pod -n kube-system sleep 5 done ---- \* Finally, you can update yo... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/control-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.096665 |
[."topic"] [#scale-data-plane] = Kubernetes Data Plane :info\_doctype: section :info\_titleabbrev: Data Plane :imagesdir: images/scalability/ // The Kubernetes Data Plane includes EC2 instances, load balancers, storage, and other APIs used by the Kubernetes Control Plane. For organization purposes we grouped xref:clust... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/data-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.091468 |
outage could impact 1/3 of the cluster. You should specify node requirements and pod spread in your workloads so the Kubernetes scheduler can place workloads properly. Workloads should define the resources they need and the availability required via taints, tolerations, and https://kubernetes.io/blog/2020/05/introducin... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/data-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.116216 |
they are available for patch releases. AMI minor versions (e.g. 1.23.5 to 1.24.3) will be available in the EKS console and API as https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html[upgrades for the node group]. Patch release versions (e.g. 1.23.5 to 1.23.6) will not be presented as upgrades... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/data-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.088572 |
you need and avoids needing complicated log rotation rules. To disable syslog you can add the following snippet to your cloud-init configuration: ---- runcmd: - [ systemctl, disable, --now, syslog.service ] ---- == Patch instances in place when OS update speed is a necessity [IMPORTANT] ==== Patching instances in place... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/data-plane.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | -0.008871 |
[."topic"] = Known Limits and Service Quotas :info\_doctype: section :imagesdir: images/scalability/ TIP: https://aws-experience.com/emea/smb/events/series/get-hands-on-with-amazon-eks?trk=4a9b4147-2490-4c63-bc9f-f8a84b122c8c&sc\_channel=el[Explore] best practices through Amazon EKS workshops. Amazon EKS can be used for ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/quotas.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.094553 |
the networking for your cluster | L-A4707A72 | 5 | VPC | Network interfaces per Region | Can limit the number of EKS Worker nodes, or Impact EKS control plane scaling/update activities. | L-DF5E4CA3 | 5,000 | VPC | Network Address Usage | Can limit the number of Clusters per account or the control or connectivity of th... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/quotas.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.125525 |
request throttling to ensure that they remain performant and available for all customers. Similar to Service Quotas, each AWS service maintains their own request throttling thresholds. Consider reviewing the respective AWS Service documentation if your workloads will need to quickly issue a large number of API calls or... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/quotas.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.059373 |
[."topic"] = Kubernetes Upstream SLOs :info\_doctype: section :info\_titleabbrev: Kubernetes SLOs :imagesdir: images/scalability/ Amazon EKS runs the same code as the upstream Kubernetes releases and ensures that EKS clusters operate within the SLOs defined by the Kubernetes community. The Kubernetes https://github.com... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/kubernetes_slos.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.255772 |
means a request can run for up to one minute (60 seconds) before being timed out and cancelled. The SLOs defined for Latency are broken out by the type of request that is being made, which can be mutating or read-only: ==== Mutating Mutating requests in Kubernetes make changes to a resource, such as creations, deletion... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/kubernetes_slos.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.187764 |
dashboards, below are some examples for the SLOs above. === API Server Request Latency |=== | Metric | Definition | apiserver\_request\_sli\_duration\_seconds | Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subr... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/kubernetes_slos.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.176959 |
| Definition | kubelet\_pod\_start\_sli\_duration\_seconds | Duration in seconds to start a pod, excluding time to pull images and run init containers, measured from pod creation timestamp to when all its containers are reported as started and observed via watch | kubelet\_pod\_start\_duration\_seconds | Duration in se... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/kubernetes_slos.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.203309 |
[."topic"] = Control Plane Monitoring :info\_doctype: section :authors: ["Shane Corbett"] :date: 2023-09-22 :imagesdir: images/scalability/ == API Server When looking at our API server it's important to remember that one of its functions is to throttle inbound requests to prevent overloading the control plane. What can... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/kcp_monitoring.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.088514 |
see the following https://docs.aws.amazon.com/eks/latest/best-practices/scale-control-plane.html#\_api\_priority\_and\_fairness[best practices guide] ==== Here we see the seven different priority groups that come by default on the cluster image::shared-concurrency.png[Shared concurrency] Next we want to see what percen... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/kcp_monitoring.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.023754 |
this data we can use an Ad-Hoc PromQL or a CloudWatch Insights query to pull LIST requests from the audit log during that time frame to see which application this might be. === Finding the Source with CloudWatch Metrics are best used to find the problem area we want to look at and narrow both the timeframe and the sear... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/kcp_monitoring.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.048343 |
plane to enable this function. It is also a best practice to limit the log retention as to not drive up cost over time unnecessarily. An example for turning on all logging functions using the EKSCTL tool below. ==== [,yaml] ---- cloudWatch: clusterLogging: enableTypes: ["\*"] logRetentionInDays: 10 ---- == Kube Control... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/kcp_monitoring.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.124068 |
troubleshooting phase. Often in production you will find that it takes adjustments to more than one part of Kubernetes to allow the system to work at its most performant. It's easy to inadvertently troubleshoot what is just a symptom (such as a node timeout) of a much larger bottleneck. == ETCD etcd uses a memory mapp... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/kcp_monitoring.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.067223 |
[."topic"] [#scalability] = EKS Scalability best practices :info\_titleabbrev: Scalability :imagesdir: images/scalability/ TIP: https://aws-experience.com/emea/smb/events/series/get-hands-on-with-amazon-eks?trk=4a9b4147-2490-4c63-bc9f-f8a84b122c8c&sc\_channel=el[Explore] best practices through Amazon EKS workshops. This ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/index.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.162883 |
you plan and scale beyond the information provided in this guide. Amazon EKS can support up to 100,000 nodes in a single cluster if you are selected for onboarding. include::control-plane.adoc[leveloffset=+1] include::data-plane.adoc[leveloffset=+1] include::cluster-services.adoc[leveloffset=+1] include::workloads.adoc... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/index.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.128671 |
[."topic"] [#scale-workloads] = Workloads :info\_doctype: section :imagesdir: images/scalability/ Workloads have an impact on how large your cluster can scale. Workloads that use the Kubernetes APIs heavily will limit the total amount of workloads you can have in a single cluster, but there are some defaults you can ch... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/workloads.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.147674 |
split the service across multiple ALBs or use Kubernetes Ingress. The default NLB targets is 3000, but is limited to 500 targets per AZ. If your cluster runs more than 500 pods for an NLB service you will need to use multiple AZs or request a quota limit increase. An alternative to using a load balancer coupled to a se... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/workloads.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.171888 |
Kubernetes resources, you can disable auto-mounting service account secrets by setting automountServiceAccountToken: false \* If your application's secrets are static and will not be modified in the future, mark the https://kubernetes.io/docs/concepts/configuration/secret/#secret-immutable[secret as immutable]. The kub... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/workloads.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.127257 |
resource version removed from etcd and depending on the size of the resources may use considerable space on the etcd host until defragmentation runs every 15 minutes. This defragmentation may cause pauses in etcd which could have other effects on the Kubernetes API and controllers. You should avoid frequent modificatio... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/workloads.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.047572 |
[."topic"] = Kubernetes Scaling Theory :info\_doctype: section :authors: ["Shane Corbett"] :date: 2023-09-22 :info\_titleabbrev: The theory behind scaling :imagesdir: images/scalability/ == Nodes vs. Churn Rate Often when we discuss the scalability of Kubernetes, we do so in terms of how many nodes there are in a singl... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/scaling_theory.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.147937 |
to do this by using Kubelet as an example. Kubelet talks both to the API server and the container runtime; \*how\* and \*what\* do we need to monitor to detect whether either component is experiencing an issue? === How many Pods per Node When we look at scaling numbers, such as how many pods can run on a node, we could... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/scaling_theory.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.092365 |
right node size for our workloads these are easy-to-overlook signals that might be putting unnecessary pressure on the system thus limiting both our scale and performance. === The Cost of Unnecessary Errors Kubernetes controllers excel at retrying when error conditions arise, however this comes at a cost. These retries... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/scaling_theory.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.104727 |
[."topic"] = Node and Workload Efficiency :info\_doctype: section :authors: ["Shane Corbett"] :date: 2023-09-22 :info\_titleabbrev: Node efficiency and scaling :imagesdir: images/scalability/ Being efficient with our workloads and nodes reduces complexity/cost while increasing performance and scale. There are many fact... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/node_efficiency.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.111936 |
CFS system is: busy containers (CGROUPS) are the only containers that count toward the share system. In this case, only the first container is busy so it is allowed to use all 4 cores on the node. image::cores-2.png[] Why does this matter? Let's say we ran our performance testing in a development cluster where an NGINX... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/node_efficiency.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.109986 |
where our node is completely full, yet our CPU utilization is zero. image::hpa-utilization.png[] === Setting Requests It would be tempting to set the request at the "`sweet spot`" value for that application, however this would cause inefficiencies as pictured in the diagram below. Here we have set the request value to 2 v... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/node_efficiency.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.098097 |
- how much buffer do you want in your application's vertical scale before scaling a new pod? \* What metrics truly reflect the saturation of your application - The saturation metric for a Kafka Producer would be quite different than a complex web application. \* How do all the other applications on the node affect each ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/node_efficiency.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.138221 |
and allows us a great deal of flexibility when using custom and external metrics (non K8s metrics). As an example, we can scale on the highest of three values (see below). We scale if the average utilization of all the pods is over 50%, if custom metrics the packets per second of the ingress exceed an average of 1,0... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/node_efficiency.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.233638 |
Real world data shows that this is the number one factor in scalability of Kubernetes clusters. image::smooth-scaling.png[] The key takeaway is CPU utilization is only one dimension of both application and node performance. Using CPU utilization as a sole health indicator for our nodes and applications creates problems... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/node_efficiency.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.110099 |
designed to act as a fail safe for memory leaks by terminating the pod completely. This is an all or nothing style proposition, however we have now been given new ways to address this problem. First, it is important to understand that setting the right amount of memory for containers is not as straightforward as it appe... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/scalability/node_efficiency.adoc | mainline | aws-eks-best-practices | [
…(384-dim embedding values truncated)… | 0.164766 |
[."topic"] [[data-plane,data-plane.title]] = EKS Data Plane :info\_doctype: section :info\_title: EKS Data Plane :info\_abstract: EKS Data Plane :info\_titleabbrev: Data Plane :imagesdir: images/reliability/ To operate highly available and resilient applications, you need a highly available and resilient data plane. An e... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/dataplane.adoc | mainline | aws-eks-best-practices | [-0.014578362926840782, 0.03419204428792, 0.045462556183338165, ...] | 0.220717 |
the topology spread constraint can't be fulfilled. It should only be set if it's preferable for pods to not run instead of violating the topology spread constraint. ==== On older versions of Kubernetes, you can use pod anti-affinity rules to schedule pods across multiple AZs. The manifest below informs Kubernetes schedu... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/dataplane.adoc | mainline | aws-eks-best-practices | [0.03651556372642517, 0.006192802917212248, 0.018943781033158302, ...] | 0.128024 |
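An AZ-spreading anti-affinity rule of the kind that snippet introduces might look like the following sketch, assuming a Deployment labeled `app: web-server` (name and image are placeholders, not from the source); `topologyKey: topology.kubernetes.io/zone` is what spreads replicas across AZs:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server            # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      affinity:
        podAntiAffinity:
          # Prefer not to place two replicas in the same AZ;
          # "preferred" still schedules if fewer AZs than replicas exist.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: web-server
              topologyKey: topology.kubernetes.io/zone
      containers:
      - name: web
        image: public.ecr.aws/nginx/nginx:latest   # placeholder image
```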
to sizing resource requests and limits for workloads: \* Do not specify resource limits on CPU. In the absence of limits, the request acts as a weight on https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run[how much relative CPU time containers get]. This ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/dataplane.adoc | mainline | aws-eks-best-practices | [0.020785128697752953, 0.055233173072338104, 0.04094851762056351, ...] | 0.14337 |
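That guidance might translate into a container spec like this sketch (image and sizes are illustrative assumptions): CPU gets a request but no limit, so the request acts only as a scheduling weight, while memory gets a limit equal to its request:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                                        # placeholder name
spec:
  containers:
  - name: app
    image: public.ecr.aws/nginx/nginx:latest       # placeholder image
    resources:
      requests:
        cpu: 500m        # scheduling weight; deliberately no cpu limit
        memory: 256Mi
      limits:
        memory: 256Mi    # memory limit equal to the request
```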
nodes as a DaemonSet. All the pods use the DNS caching agent running on the node for name resolution instead of using `kube-dns` Service. This feature is automatically included in EKS Auto Mode. === Configure auto-scaling CoreDNS Another method of improving Cluster DNS performance is by enabling the built-in https://do... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/dataplane.adoc | mainline | aws-eks-best-practices | [-0.02451976016163826, -0.011657295748591423, 0.03707897290587425, ...] | 0.202538 |
//!!NODE\_ROOT [[reliability,reliability.title]] = Amazon EKS Best Practices Guide for Reliability :doctype: book :sectnums: :toc: left :icons: font :experimental: :idprefix: :idseparator: - :sourcedir: . :info\_doctype: chapter :info\_title: Best Practices for Reliability :info\_abstract: Best Practices for Reliabilit... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/index.adoc | mainline | aws-eks-best-practices | [-0.05970451608300209, 0.006637895945459604, 0.0030330466106534004, ...] | 0.223854 |
on managed node groups. Because managed nodes run the Amazon EKS-optimized AMIs, Amazon EKS is responsible for building patched versions of these AMIs when bugs are fixed. However, you are responsible for deploying these patched AMI versions to your managed node groups. EKS also https://docs.aws.amazon.com/eks/latest/usergu... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/index.adoc | mainline | aws-eks-best-practices | [-0.023791564628481865, -0.005516210570931435, 0.04202010855078697, ...] | 0.169484 |
[."topic"] [[application,application.title]] = Running highly-available applications :info\_doctype: section :info\_title: Running highly-available applications :info\_abstract: Running highly-available applications :info\_titleabbrev: Applications :imagesdir: images/reliability/ Your customers expect your application ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/application.adoc | mainline | aws-eks-best-practices | [-0.0050989557057619095, 0.009169832803308964, 0.08107419312000275, ...] | 0.229841 |
pod topology spread constraints allow you to make your application available across different failure (or topology) domains like hosts or AZs. This approach works very well when you're trying to ensure fault tolerance as well as availability by having multiple replicas in each of the different topology domains. Pod ant... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/application.adoc | mainline | aws-eks-best-practices | [-0.002382720820605755, -0.045466646552085876, 0.039903998374938965, ...] | 0.157805 |
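A minimal sketch of such a topology spread constraint, to go inside a pod template spec and assuming pods labeled `app: my-app` (a placeholder label):

```yaml
topologySpreadConstraints:
- maxSkew: 1                             # allowed replica-count difference between zones
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway      # DoNotSchedule would leave pods Pending instead
  labelSelector:
    matchLabels:
      app: my-app                        # placeholder label
```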
demand and help you avoid impacting your customers during peak traffic. It is implemented as a control loop in Kubernetes that periodically queries metrics from APIs that provide resource metrics. HPA can retrieve metrics from the following APIs: 1. `metrics.k8s.io` also known as Resource Metrics API — Provides CPU and... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/application.adoc | mainline | aws-eks-best-practices | [-0.030039725825190544, 0.005462174769490957, -0.007293618284165859, ...] | 0.21586 |
the container image. You can use `kubectl` to update a Deployment like this: [source,bash] ---- kubectl set image deployment.apps/nginx-deployment nginx=nginx:1.16.1 --record ---- The `--record` argument records the changes to the Deployment and helps you if you need to perform a rollback. `kubectl roll... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/application.adoc | mainline | aws-eks-best-practices | [0.004137999843806028, 0.028658533468842506, 0.05483369529247284, ...] | 0.08695 |
all traffic to the old Deployment and stop sending traffic to the new Deployment. Although Kubernetes offers no native way to perform canary deployments, you can use tools such as https://github.com/weaveworks/flagger[Flagger] with https://docs.flagger.app/tutorials/istio-progressive-delivery[Istio]. == Health checks a... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/application.adoc | mainline | aws-eks-best-practices | [0.08345317840576172, -0.02908189408481121, 0.10241172462701797, ...] | 0.294215 |
the maximum configured time, the Pod still fails Startup Probes, it will be terminated, and a new Pod will be created. The Startup Probe is similar to the Liveness Probe – if they fail, the Pod is recreated. As Ricardo A. explains in his post https://medium.com/swlh/fantastic-probes-and-how-to-configure-them-fef7e030bd... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/application.adoc | mainline | aws-eks-best-practices | [0.03867638111114502, -0.07442272454500198, 0.007076670415699482, ...] | 0.131407 |
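A hedged sketch of a Startup Probe with such a maximum window, to go inside a container spec (endpoint, port, and thresholds are illustrative assumptions, not from the source):

```yaml
startupProbe:
  httpGet:
    path: /healthz        # placeholder endpoint
    port: 8080            # placeholder port
  # Maximum window = failureThreshold * periodSeconds = 300s for slow startup;
  # if the probe never succeeds within it, the kubelet kills and recreates the pod.
  failureThreshold: 30
  periodSeconds: 10
```

Once the startup probe succeeds, liveness and readiness probes take over for the rest of the container's lifetime.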
signal is sent. This grace period is 30 seconds by default; you can override the default by using the `--grace-period` flag in kubectl or declare `terminationGracePeriodSeconds` in your Podspec. `kubectl delete pod --grace-period=` It is common to have containers in which the main process doesn't have PID 1. Consider this Pyt... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/application.adoc | mainline | aws-eks-best-practices | [0.006758176255971193, 0.006165831349790096, 0.03033613972365856, ...] | 0.142614 |
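The PID 1 caveat matters because a main process that never reacts to SIGTERM forces the kubelet to wait out the whole grace period and then send SIGKILL. A minimal, self-contained Python sketch (illustrative only, not the snippet's elided example) of handling SIGTERM for graceful shutdown:

```python
import os
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    """Begin graceful shutdown: stop accepting work, flush, then exit."""
    global shutting_down
    shutting_down = True

# Register the handler. Running as PID 1 in a container, a process that
# ignores SIGTERM only dies when SIGKILL arrives after the grace period.
signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the kubelet sending SIGTERM on pod deletion.
os.kill(os.getpid(), signal.SIGTERM)
print("graceful shutdown started:", shutting_down)
```

In a real entrypoint the handler would typically set a flag that the main loop checks, so in-flight requests finish before the process exits.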
system deviates, Kubernetes will take action to restore the state. For example, if a worker node becomes unavailable, Kubernetes will reschedule the Pods onto another worker node. Similarly, if a `replica` crashes, the https://kubernetes.io/docs/concepts/architecture/controller/#design[Deployment Controller] will create... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/application.adoc | mainline | aws-eks-best-practices | [-0.013985874131321907, -0.01921851933002472, 0.07278425246477127, ...] | 0.197701 |
application metrics In addition to monitoring the state of the application and aggregating standard metrics, you can also use the https://prometheus.io/docs/instrumenting/clientlibs/[Prometheus client library] to expose application-specific custom metrics to improve the application's observability. === Use centralized ... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/application.adoc | mainline | aws-eks-best-practices | [0.008276043459773064, -0.003899203147739172, -0.004748648032546043, ...] | 0.191365 |
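Hand-rolling the idea makes the mechanism concrete: a custom metric is just a value your application renders in the Prometheus text exposition format on a `/metrics` endpoint. The toy counter below is a sketch purely for illustration; in practice you would use the official Prometheus client library that the snippet links to:

```python
class Counter:
    """Toy counter rendered in the Prometheus text exposition format."""

    def __init__(self, name, help_text):
        self.name = name
        self.help_text = help_text
        self.value = 0.0

    def inc(self, amount=1.0):
        # Counters only ever go up; Prometheus computes rates from scrapes.
        self.value += amount

    def expose(self):
        # The three lines a scrape of /metrics would return for this metric.
        return (f"# HELP {self.name} {self.help_text}\n"
                f"# TYPE {self.name} counter\n"
                f"{self.name} {self.value}\n")

# Hypothetical application metric (name is an assumption, not from the source).
orders = Counter("myapp_orders_processed_total", "Orders processed by the app.")
orders.inc()
orders.inc()
print(orders.expose())
```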
[."topic"] [[control-plane,control-plane.title]] = EKS Control Plane :info\_doctype: section :info\_title: EKS Control Plane :info\_abstract: EKS Control Plane :info\_titleabbrev: Control Plane :imagesdir: images/reliability/ :idprefix: reliability-cp TIP: https://aws-experience.com/emea/smb/events/series/get-hands-on-... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/controlplane.adoc | mainline | aws-eks-best-practices | [-0.03808348625898361, -0.00517427921295166, -0.02831522561609745, ...] | 0.172456 |
example, `apiserver\_request\_duration\_seconds` can indicate how long API requests are taking to run. Consider monitoring these control plane metrics: === API Server [width="100%",cols="<50%,<50%",options="header",] |=== |Metric |Description |`apiserver\_request\_total` |Counter of apiserver requests broken out for ea... | https://github.com/aws/aws-eks-best-practices/blob/mainline//latest/bpg/reliability/controlplane.adoc | mainline | aws-eks-best-practices | [0.03882039710879326, -0.010840414091944695, -0.09387782216072083, ...] | 0.11434 |
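Queries over the two API server metrics named in that snippet can be sketched in PromQL (the metric names come from the snippet; the 5-minute window and the quantile are illustrative assumptions):

```promql
# Request rate broken out by verb and response code:
sum(rate(apiserver_request_total[5m])) by (verb, code)

# p99 request latency per verb, from the duration histogram:
histogram_quantile(0.99,
  sum(rate(apiserver_request_duration_seconds_bucket[5m])) by (verb, le))
```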