diff --git "a/md_data.json" "b/md_data.json" deleted file mode 100644--- "a/md_data.json" +++ /dev/null @@ -1,26890 +0,0 @@ -[ - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "Cilium", - "file_name": "TMP-LOGGING.md" - }, - "content": [ - { - "heading": "Getting Started With Structured Logging", - "data": "With structured logging, we associate a *constant* log message with some\n variable key-value pairs. For instance, suppose we wanted to log that we\n were starting reconciliation on a pod. In the Go standard library logger,\n we might write:\n In controller-runtime, we'd instead write:\n or even write\n Notice how we've broken out the information that we want to convey into\n a constant message (`\"starting reconciliation\"`) and some key-value pairs\n that convey variable information (`\"pod\", req.NamespacedName`). We've\n there-by added \"structure\" to our logs, which makes them easier to save\n and search later, as well as correlate with metrics and events.\n All of controller-runtime's logging is done via\n [logr](https://github.com/go-logr/logr), a generic interface for\n structured logging. You can use whichever logging library you want to\n implement the actual mechanics of the logging. controller-runtime\n provides some helpers to make it easy to use\n [Zap](https://go.uber.org/zap) as the implementation.\n You can configure the logging implementation using\n `\"sigs.k8s.io/controller-runtime/pkg/log\".SetLogger`. That\n package also contains the convenience functions for setting up Zap.\n You can get a handle to the \"root\" logger using\n `\"sigs.k8s.io/controller-runtime/pkg/log\".Log`, and can then call\n `WithName` to create individual named loggers. 
You can call `WithName`\n repeatedly to chain names together:\n As seen above, you can also call `WithValues` to create a new sub-logger\n that always attaches some key-value pairs to a logger.\n Finally, you can use `V(1)` to mark a particular log line as a \"debug\" log:\n While it's possible to use higher log levels, it's recommended that you\n stick with `V(1)` or `V(0)` (which is equivalent to not specifying `V`),\n and then filter later based on key-value pairs or messages; different\n numbers tend to lose meaning easily over time, and you'll be left\n wondering why particular log lines are at `V(5)` instead of `V(7)`." - }, - { - "heading": "Logging errors", - "data": "Errors should *always* be logged with `log.Error`, which allows logr\n implementations to provide special handling of errors (for instance,\n providing stack traces in debug mode).\n It's acceptable to call `log.Error` with a nil error object. This\n conveys that an error occurred in some capacity, but that no actual\n `error` object was involved.\n Errors returned by the `Reconcile` implementation of the `Reconciler` interface are commonly logged as a `Reconciler error`.\n It's a developer choice to create an additional error log in the `Reconcile` implementation so that a more specific file name and line number are reported for the error." - }, - { - "heading": "Logging messages", - "data": "- Don't put variable content in your messages -- use key-value pairs for\n that. Never use `fmt.Sprintf` in your message.\n - Try to match the terminology in your messages with your key-value pairs\n -- for instance, if you have a key-value pair `api version`, use the\n term `APIVersion` instead of `GroupVersion` in your message." - }, - { - "heading": "Logging Kubernetes Objects", - "data": "Kubernetes objects should be logged directly, like `log.Info(\"this is\n a Kubernetes object\", \"pod\", somePod)`.
controller-runtime provides\n a special encoder for Zap that will transform Kubernetes objects into\n `name, namespace, apiVersion, kind` objects, when available and not in\n development mode. Other logr implementations should implement similar\n logic." - }, - { - "heading": "Logging Structured Values (Key-Value pairs)", - "data": "- Use lower-case, space-separated keys. For example `object` for objects,\n `api version` for `APIVersion`.\n - Be consistent across your application, and with controller-runtime when\n possible.\n - Try to be brief but descriptive.\n - Match terminology in keys with terminology in the message.\n - Be careful logging non-Kubernetes objects verbatim if they're very\n large." - }, - { - "heading": "Groups, Versions, and Kinds", - "data": "- Kinds should not be logged alone (they're meaningless alone). Use\n a `GroupKind` object to log them instead, or a `GroupVersionKind` when\n version is relevant.\n - If you need to log an API version string, use `api version` as the key\n (formatted as with a `GroupVersion`, or as received directly from API\n discovery)." - }, - { - "heading": "Objects and Types", - "data": "- If code works with a generic Kubernetes `runtime.Object`, use the\n `object` key. For specific objects, prefer the resource name as the key\n (e.g. `pod` for `v1.Pod` objects).\n - For non-Kubernetes objects, the `object` key may also be used, if you\n accept a generic interface.\n - When logging a raw type, log it using the `type` key, with a value of\n `fmt.Sprintf(\"%T\", typ)`.\n - If there's specific context around a type, the key may be more specific,\n but should end with `type` -- for instance, `OwnerType` should be logged\n as `owner type` in the context of `log.Error(err, \"Could not get ObjectKinds\n for OwnerType\", \"owner type\", fmt.Sprintf(\"%T\", ownerType))`. When possible, favor\n communicating kind instead." - }, - { - "heading": "Multiple things", - "data": "- When logging multiple things, simply pluralize the key."
- }, - { - "heading": "controller-runtime Specifics", - "data": "- Reconcile requests should be logged as `request`, although normal code should favor logging the key. - Reconcile keys should be logged as with the same key as if you were logging the object directly (e.g. `log.Info(\"reconciling pod\", \"pod\", req.NamespacedName)`). This ends up having a similar effect to logging the object directly." - }, - { - "additional_info": "Logging Guidelines ================== controller-runtime uses a kind of logging called *structured logging*. If you've used a library like Zap or logrus before, you'll be familiar with the concepts we use. If you've only used a logging library like the \"log\" package (in the Go standard library) or \"glog\" (in Kubernetes), you'll need to adjust how you think about logging a bit. With structured logging, we associate a *constant* log message with some variable key-value pairs. For instance, suppose we wanted to log that we were starting reconciliation on a pod. In the Go standard library logger, we might write: ```go log.Printf(\"starting reconciliation for pod %s/%s\", podNamespace, podName) ``` In controller-runtime, we'd instead write: ```go logger.Info(\"starting reconciliation\", \"pod\", req.NamespacedName) ``` or even write ```go func (r *Reconciler) Reconcile(req reconcile.Request) (reconcile.Response, error) { logger := logger.WithValues(\"pod\", req.NamespacedName) // do some stuff logger.Info(\"starting reconciliation\") } ``` Notice how we've broken out the information that we want to convey into a constant message (`\"starting reconciliation\"`) and some key-value pairs that convey variable information (`\"pod\", req.NamespacedName`). We've there-by added \"structure\" to our logs, which makes them easier to save and search later, as well as correlate with metrics and events. All of controller-runtime's logging is done via [logr](https://github.com/go-logr/logr), a generic interface for structured logging. 
You can use whichever logging library you want to implement the actual mechanics of the logging. controller-runtime provides some helpers to make it easy to use [Zap](https://go.uber.org/zap) as the implementation. You can configure the logging implementation using `\"sigs.k8s.io/controller-runtime/pkg/log\".SetLogger`. That package also contains the convenience functions for setting up Zap. You can get a handle to the \"root\" logger using `\"sigs.k8s.io/controller-runtime/pkg/log\".Log`, and can then call `WithName` to create individual named loggers. You can call `WithName` repeatedly to chain names together: ```go logger := log.Log.WithName(\"controller\").WithName(\"replicaset\") // in reconcile... logger = logger.WithValues(\"replicaset\", req.NamespacedName) // later on in reconcile... logger.Info(\"doing things with pods\", \"pod\", newPod) ``` As seen above, you can also call `WithValues` to create a new sub-logger that always attaches some key-value pairs to a logger. Finally, you can use `V(1)` to mark a particular log line as a \"debug\" log: ```go logger.V(1).Info(\"this is particularly verbose!\", \"state of the world\", allKubernetesObjectsEverywhere) ``` While it's possible to use higher log levels, it's recommended that you stick with `V(1)` or `V(0)` (which is equivalent to not specifying `V`), and then filter later based on key-value pairs or messages; different numbers tend to lose meaning easily over time, and you'll be left wondering why particular log lines are at `V(5)` instead of `V(7)`. Errors should *always* be logged with `log.Error`, which allows logr implementations to provide special handling of errors (for instance, providing stack traces in debug mode). It's acceptable to call `log.Error` with a nil error object. This conveys that an error occurred in some capacity, but that no actual `error` object was involved.
Errors returned by the `Reconcile` implementation of the `Reconciler` interface are commonly logged as a `Reconciler error`. It's a developer choice to create an additional error log in the `Reconcile` implementation so that a more specific file name and line number are reported for the error. - Don't put variable content in your messages -- use key-value pairs for that. Never use `fmt.Sprintf` in your message. - Try to match the terminology in your messages with your key-value pairs -- for instance, if you have a key-value pair `api version`, use the term `APIVersion` instead of `GroupVersion` in your message. Kubernetes objects should be logged directly, like `log.Info(\"this is a Kubernetes object\", \"pod\", somePod)`. controller-runtime provides a special encoder for Zap that will transform Kubernetes objects into `name, namespace, apiVersion, kind` objects, when available and not in development mode. Other logr implementations should implement similar logic. - Use lower-case, space-separated keys. For example `object` for objects, `api version` for `APIVersion`. - Be consistent across your application, and with controller-runtime when possible. - Try to be brief but descriptive. - Match terminology in keys with terminology in the message. - Be careful logging non-Kubernetes objects verbatim if they're very large. - Kinds should not be logged alone (they're meaningless alone). Use a `GroupKind` object to log them instead, or a `GroupVersionKind` when version is relevant. - If you need to log an API version string, use `api version` as the key (formatted as with a `GroupVersion`, or as received directly from API discovery). - If code works with a generic Kubernetes `runtime.Object`, use the `object` key. For specific objects, prefer the resource name as the key (e.g. `pod` for `v1.Pod` objects). - For non-Kubernetes objects, the `object` key may also be used, if you accept a generic interface.
- When logging a raw type, log it using the `type` key, with a value of `fmt.Sprintf(\"%T\", typ)`. - If there's specific context around a type, the key may be more specific, but should end with `type` -- for instance, `OwnerType` should be logged as `owner type` in the context of `log.Error(err, \"Could not get ObjectKinds for OwnerType\", \"owner type\", fmt.Sprintf(\"%T\", ownerType))`. When possible, favor communicating kind instead. - When logging multiple things, simply pluralize the key. - Reconcile requests should be logged as `request`, although normal code should favor logging the key. - Reconcile keys should be logged with the same key as if you were logging the object directly (e.g. `log.Info(\"reconciling pod\", \"pod\", req.NamespacedName)`). This ends up having a similar effect to logging the object directly." - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "Cilium", - "file_name": "TROUBLESHOOTING.md" - }, - "content": [ - { - "heading": "Troubleshooting", - "data": "" - }, - { - "heading": "Unmarshaling doesn't work", - "data": "The most common reason for this issue is improper use of struct tags (e.g. `yaml` or `json`). Viper uses [github.com/mitchellh/mapstructure](https://github.com/mitchellh/mapstructure) under the hood for unmarshaling values, which uses `mapstructure` tags by default. Please refer to the library's documentation for using other struct tags." - }, - { - "heading": "Cannot find package", - "data": "Viper installation seems to fail a lot lately with the following (or a similar) error:\n As the error message suggests, Go tries to look up dependencies in `GOPATH` mode (as it's commonly called) from the `GOPATH`.\n Viper opted to use [Go Modules](https://github.com/golang/go/wiki/Modules) to manage its dependencies.
While in many cases the two methods are interchangeable, once a dependency releases new (major) versions, `GOPATH` mode is no longer able to decide which version to use, so it'll either use one that's already present or pick a version (usually the `master` branch).\n The solution is easy: switch to using Go Modules.\n Please refer to the [wiki](https://github.com/golang/go/wiki/Modules) on how to do that.\n **tl;dr:** `export GO111MODULE=on`" - }, - { - "heading": "Unquoted 'y' and 'n' characters get replaced with _true_ and _false_ when reading a YAML file", - "data": "This is a YAML 1.1 feature according to [go-yaml/yaml#740](https://github.com/go-yaml/yaml/issues/740). Potential solutions are: 1. Quoting values resolved as boolean 1. Upgrading to YAML v3 (for the time being this is possible by passing the `viper_yaml3` tag to your build)" - }, - { - "additional_info": "The most common reason for this issue is improper use of struct tags (e.g. `yaml` or `json`). Viper uses [github.com/mitchellh/mapstructure](https://github.com/mitchellh/mapstructure) under the hood for unmarshaling values, which uses `mapstructure` tags by default. Please refer to the library's documentation for using other struct tags. Viper installation seems to fail a lot lately with the following (or a similar) error: ``` cannot find package \"github.com/hashicorp/hcl/tree/hcl1\" in any of: /usr/local/Cellar/go/1.15.7_1/libexec/src/github.com/hashicorp/hcl/tree/hcl1 (from $GOROOT) /Users/user/go/src/github.com/hashicorp/hcl/tree/hcl1 (from $GOPATH) ``` As the error message suggests, Go tries to look up dependencies in `GOPATH` mode (as it's commonly called) from the `GOPATH`. Viper opted to use [Go Modules](https://github.com/golang/go/wiki/Modules) to manage its dependencies.
While in many cases the two methods are interchangeable, once a dependency releases new (major) versions, `GOPATH` mode is no longer able to decide which version to use, so it'll either use one that's already present or pick a version (usually the `master` branch). The solution is easy: switch to using Go Modules. Please refer to the [wiki](https://github.com/golang/go/wiki/Modules) on how to do that. **tl;dr:** `export GO111MODULE=on` This is a YAML 1.1 feature according to [go-yaml/yaml#740](https://github.com/go-yaml/yaml/issues/740). Potential solutions are: 1. Quoting values resolved as boolean 1. Upgrading to YAML v3 (for the time being this is possible by passing the `viper_yaml3` tag to your build)" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "Cilium", - "file_name": "USERS.md" - }, - "content": [ - { - "additional_info": "Who is using Cilium? ==================== Sharing experiences and learning from other users is essential. We are frequently asked who is using a particular feature of Cilium so people can get in contact with other users to share experiences and best practices. People also often want to know if product/platform X has integrated Cilium. While the [Cilium Slack community](https://cilium.herokuapp.com/) allows users to get in touch, it can be challenging to find this information quickly. The following is a directory of adopters to help identify users of individual features. The users themselves directly maintain the list. Adding yourself as a user ------------------------- If you are using Cilium or it is integrated into your product, service, or platform, please consider adding yourself as a user with a quick description of your use case by opening a pull request to this file and adding a section describing your usage of Cilium. If you are open to others contacting you about your use of Cilium on Slack, add your Slack nickname as well.
N: Name of user (company) D: Description U: Usage of features L: Link with further information (optional) Q: Contacts available for questions (optional) Example entry: * N: Cilium Example User Inc. D: Cilium Example User Inc. is using Cilium for scientific purposes U: ENI networking, DNS policies, ClusterMesh Q: @slacknick1, @slacknick2 Requirements to be listed ------------------------- * You must represent the user listed. Do *NOT* add entries on behalf of other users. * There is no minimum deployment size, but we ask that only permanent production deployments be listed, i.e., no demo or trial deployments. Commercial use is not required. A well-done home lab setup can be just as interesting as a large-scale commercial deployment. Users (Alphabetically) ---------------------- * N: AccuKnox D: AccuKnox uses Cilium for network visibility and network policy enforcement. U: L3/L4/L7 policy enforcement using Identity, External/VM Workloads, Network Visibility using Hubble L: https://www.accuknox.com/spifee-identity-for-cilium-presentation-at-kubecon-2021, https://www.accuknox.com/cilium Q: @nyrahul * N: Acoss D: Acoss is using Cilium as their main CNI plugin (self-hosted k8s, on-premises) U: CiliumNetworkPolicy, Hubble, BPF NodePort, Direct routing L: @JrCs * N: Adobe, Inc.
D: Adobe's Project Ethos uses Cilium for multi-tenant, multi-cloud clusters U: L3/L4/L7 policies L: https://youtu.be/39FLsSc2P-Y * N: AirQo D: AirQo uses Cilium as the CNI plugin U: CNI, Networking, NetworkPolicy, Cluster Mesh, Hubble, Kubernetes services L: @airqo-platform * N: Alibaba Cloud D: Alibaba Cloud is using Cilium together with Terway CNI as the high-performance ENI dataplane U: Networking, NetworkPolicy, Services, IPVLAN L: https://www.alibabacloud.com/blog/how-does-alibaba-cloud-build-high-performance-cloud-native-pod-networks-in-production-environments_596590 * N: Amazon Web Services (AWS) D: AWS uses Cilium as the default CNI for EKS Anywhere U: Networking, NetworkPolicy, Services L: https://isovalent.com/blog/post/2021-09-aws-eks-anywhere-chooses-cilium * N: APPUiO by VSHN D: VSHN uses Cilium for multi-tenant networking on APPUiO Cloud and as an add-on to APPUiO Managed, both on Red Hat OpenShift and Cloud Kubernetes. U: CNI, Networking, NetworkPolicy, Hubble, IPAM, Kubernetes services L: https://products.docs.vshn.ch/products/appuio/managed/addon_cilium.html and https://www.appuio.cloud * N: ArangoDB Oasis D: ArangoDB Oasis is using Cilium to separate database deployments in our multi-tenant cloud environment U: Networking, CiliumNetworkPolicy (cluster & local), Hubble, IPAM L: https://cloud.arangodb.com Q: @ewoutp @Robert-Stam * N: Ascend.io D: Ascend.io is using Cilium as a consistent CNI for our Data Automation Platform on GKE, EKS, and AKS.
U: Transparent Encryption, Overlay Networking, Cluster Mesh, Egress Gateway, Network Policy, Hubble L: https://www.ascend.io/ Q: @Joe Stevens * N: Ayedo D: Ayedo builds and operates cloud-native container platforms based on Kubernetes U: Hubble for Visibility, Cilium as Mesh between Services L: https://www.ayedo.de/ * N: Back Market D: Back Market is using Cilium as CNI in all their clusters and environments (kOps + EKS in AWS) U: CNI, Network Policies, Transparent Encryption (WG), Hubble Q: @nitrikx L: https://www.backmarket.com/ * N: Berops D: Cilium is used as a CNI plug-in in our open-source multi-cloud and hybrid-cloud Kubernetes platform - Claudie U: CNI, Network Policies, Hubble Q: @Bernard Halas L: https://github.com/berops/claudie * N: ByteDance D: ByteDance is using Cilium as a CNI plug-in for self-hosted Kubernetes. U: CNI, Networking L: @Jiang Wang * N: Canonical D: Canonical's Kubernetes distribution microk8s uses Cilium as a CNI plugin U: Networking, NetworkPolicy, and Kubernetes services L: https://microk8s.io/ * N: Capital One D: Capital One uses Cilium as its standard CNI for all Kubernetes environments U: CNI, CiliumClusterWideNetworkpolicy, CiliumNetworkPolicy, Hubble, network visibility L: https://www.youtube.com/watch?v=hwOpCKBaJ-w * N: CENGN - Centre of Excellence in Next Generation Networks D: CENGN is using Cilium in multiple clusters including production and development clusters (self-hosted k8s, on-premises) U: L3/L4/L7 network policies, Monitoring via Prometheus metrics & Hubble L: https://www.youtube.com/watch?v=yXm7yZE2rk4 Q: @rmaika @mohahmed13 * N: Cistec D: Cistec is a clinical information system provider and uses Cilium as the CNI plugin. U: Networking and network policy L: https://www.cistec.com/ * N: Civo D: Civo is offering Cilium as a CNI option that Civo users can choose for their Civo Kubernetes clusters.
U: Networking and network policy L: https://www.civo.com/kubernetes * N: ClickHouse D: ClickHouse uses Cilium as CNI for AWS Kubernetes environments U: CiliumNetworkPolicy, Hubble, ClusterMesh L: https://clickhouse.com * N: Cognite D: Cognite is an industrial DataOps provider and uses Cilium as the CNI plugin Q: @Robert Collins * N: CONNY D: CONNY is a legaltech platform to improve access to justice for individuals U: Networking, NetworkPolicy, Services Q: @ant31 L: https://conny.de * N: Cosmonic D: Cilium is the CNI for Cosmonic's Nomad-based PaaS U: Networking, NetworkPolicy, Transparent Encryption L: https://cilium.io/blog/2023/01/18/cosmonic-user-story/ * N: Crane D: Crane uses Cilium as the default CNI U: Networking, NetworkPolicy, Services L: https://github.com/slzcc/crane Q: @slzcc * N: Cybozu D: Cybozu deploys Cilium to on-prem Kubernetes clusters and uses it with Coil by CNI chaining. U: CNI Chaining, L4 LoadBalancer, NetworkPolicy, Hubble L: https://cybozu-global.com/ * N: Daimler Truck AG D: The CSG Runtime Department of Daimler Truck is maintaining an AKS k8s cluster as a shared resource for DevOps crews and is using Cilium as the default CNI (BYOCNI). U: Networking, NetworkPolicy and Monitoring L: https://daimlertruck.com Q: @brandshaide * N: DaoCloud - spiderpool & merbridge D: spiderpool is using Cilium as their main CNI plugin for overlay networking, and merbridge is using the Cilium eBPF library to speed up your Service Mesh U: CNI, Service load-balancing, cluster mesh L: https://github.com/spidernet-io/spiderpool, https://github.com/merbridge/merbridge Q: @weizhoublue, @kebe7jun * N: Datadog D: Datadog is using Cilium in AWS (self-hosted k8s) U: ENI Networking, Service load-balancing, Encryption, Network Policies, Hubble Q: @lbernail, @roboll, @mvisonneau * N: Dcode.tech D: We specialize in AWS and Kubernetes, and actively implement Cilium at our clients.
U: CNI, CiliumNetworkPolicy, Hubble UI L: https://dcode.tech/ Q: @eliranw, @maordavidov * N: Deckhouse D: Deckhouse Kubernetes Platform is using Cilium as one of the supported CNIs. U: Networking, Security, Hubble UI for network visibility L: https://github.com/deckhouse/deckhouse * N: Deezer D: Deezer is using Cilium as CNI for all our on-prem clusters for its performance and security. We plan to leverage BGP features as well soon U: CNI, Hubble, kube-proxy replacement, eBPF L: https://github.com/deezer * N: DigitalOcean D: DigitalOcean is using Cilium as the CNI for DigitalOcean's managed Kubernetes service (DOKS) U: Networking and network policy L: https://github.com/digitalocean/DOKS * N: Edgeless Systems D: Edgeless Systems is using Cilium as the CNI for Edgeless Systems' Confidential Kubernetes Distribution (Constellation) U: Networking (CNI), Transparent Encryption (WG) L: https://docs.edgeless.systems/constellation/architecture/networking Q: @m1ghtym0 * N: Eficode D: As a cloud-native and devops consulting firm, we have implemented Cilium on customer engagements U: CNI, CiliumNetworkPolicy at L7, Hubble L: https://eficode.com/ Q: @Andy Allred * N: Elastic Path D: Elastic Path is using Cilium in their CloudOps for Kubernetes production clusters U: CNI L: https://documentation.elasticpath.com/cloudops-kubernetes/docs/index.html Q: @Neil Seward * N: Equinix D: Equinix Metal is using Cilium for production and non-production environments on bare metal U: CNI, CiliumClusterWideNetworkpolicy, CiliumNetworkPolicy, BGP advertisements, Hubble, network visibility L: https://metal.equinix.com/ Q: @matoszz * N: Equinix D: Equinix NL Managed Services is using Cilium with their Managed Kubernetes offering U: CNI, network policies, visibility L: https://www.equinix.nl/products/support-services/managed-services/netherlands Q: @jonkerj * N: Exoscale D: Exoscale is offering Cilium as a CNI option on its managed Kubernetes service named SKS (Scalable Kubernetes Service)
U: CNI, Networking L: https://www.exoscale.com/sks/ Q: @Antoine * N: finleap connect D: finleap connect is using Cilium in their production clusters (self-hosted, bare-metal, private cloud) U: CNI, NetworkPolicies Q: @chue * N: Form3 D: Form3 is using Cilium in their production clusters (self-hosted, bare-metal, private cloud) U: Service load-balancing, Encryption, CNI, NetworkPolicies Q: @kevholditch-f3, samo-f3, ewilde-form3 * N: FRSCA - Factory for Repeatable Secure Creation of Artifacts D: FRSCA is utilizing Tetragon integrated with Tekton to create runtime attestations attesting to artifact and builder attributes U: Runtime observability L: https://github.com/buildsec/frsca Q: @Parth Patel * N: F5 Inc D: F5 helps customers with Cilium VXLAN tunnel integration with BIG-IP U: Networking L: https://github.com/f5devcentral/f5-ci-docs/blob/master/docs/cilium/cilium-bigip-info.rst Q: @vincentmli * N: Gcore D: Gcore supports Cilium as a CNI provider for Gcore Managed Kubernetes Service U: CNI, Networking, NetworkPolicy, Kubernetes Services L: https://gcore.com/news/cilium-cni-support Q: @rzdebskiy * N: Giant Swarm D: Giant Swarm is using Cilium in their Cluster API-based managed Kubernetes service (AWS, Azure, GCP, OpenStack, VMware Cloud Director and VMware vSphere) as CNI U: Networking L: https://www.giantswarm.io/ * N: GitLab D: GitLab is using Cilium to implement network policies inside Auto DevOps deployed clusters for customers using k8s U: Network policies L: https://docs.gitlab.com/ee/user/clusters/applications.html#install-cilium-using-gitlab-ci Q: @ap4y @whaber * N: Google D: Google is using Cilium in Anthos and Google Kubernetes Engine (GKE) as Dataplane V2 U: Networking, network policy, and network visibility L: https://cloud.google.com/blog/products/containers-kubernetes/bringing-ebpf-and-cilium-to-google-kubernetes-engine * N: G DATA CyberDefense AG D: G DATA CyberDefense AG is using Cilium on our managed on-premises clusters.
U: Networking, network policy, security, and network visibility L: https://gdatasoftware.com Q: @farodin91 * N: IDNIC | Kadabra D: IDNIC is the National Internet Registry administering IP addresses for Indonesia, and uses Cilium to power the Kadabra project, running services across multiple data centers. U: Networking, Network Policies, kube-proxy Replacement, Service Load Balancing and Cluster Mesh L: https://ris.idnic.net/ Q: @ardikabs * N: IKEA IT AB D: IKEA IT AB is using Cilium for production and non-production environments (self-hosted, bare-metal, private cloud) U: Networking, CiliumClusterwideNetworkPolicy, CiliumNetworkPolicy, kube-proxy replacement, Hubble, Direct routing, egress gateway, hubble-otel, Multi NIC XDP, BGP advertisements, Bandwidth Manager, Service Load Balancing, Cluster Mesh L: https://www.ingka.com/ * N: Immerok D: Immerok uses Cilium for cross-cluster communication and network isolation; Immerok Cloud is a serverless platform for the full power of [Apache Flink](https://flink.apache.org) at any scale.
U: Networking, network policy, observability, cluster mesh, kube-proxy replacement, security, CNI L: https://immerok.io Q: @austince, @dmvk * N: Infomaniak D: Infomaniak is using Cilium in their production clusters (self-hosted, bare-metal and OpenStack) U: Networking, CiliumNetworkPolicy, BPF NodePort, Direct routing, kube-proxy replacement L: https://www.infomaniak.com/en Q: @reneluria * N: innoQ Schweiz GmbH D: As a consulting company we added Cilium to a couple of our customers' infrastructures U: Networking, CiliumNetworkPolicy at L7, kube-proxy replacement, encryption L: https://www.cloud-migration.ch/ Q: @fakod * N: Isovalent D: Cilium is the platform that powers Isovalent\u2019s enterprise networking, observability, and security solutions U: Networking, network policy, observability, cluster mesh, kube-proxy replacement, security, egress gateway, service load balancing, CNI L: https://isovalent.com/product/ Q: @BillMulligan * N: JUMO D: JUMO is using Cilium as their CNI plugin for all of their AWS-hosted EKS clusters U: Networking, network policy, network visibility, cluster mesh Q: @Matthieu ANTOINE, @Carlos Castro, @Joao Coutinho (Slack) * N: Keploy D: Keploy is using Cilium to capture network traffic to perform E2E testing. U: Networking, network policy, Monitoring, E2E Testing L: https://keploy.io/ * N: Kilo D: Cilium is a supported CNI for Kilo. When used together, Cilium + Kilo create a full mesh via WireGuard for Kubernetes in edge environments. U: CNI, Networking, Hubble, kube-proxy replacement, network policy L: https://kilo.squat.ai/ Q: @squat, @arpagon * N: kOps D: kOps is using Cilium as one of the supported CNIs U: Networking, Hubble, Encryption, kube-proxy replacement L: kops.sigs.k8s.io/ Q: @olemarkus * N: Kryptos Logic D: Kryptos is a cybersecurity company running Kubernetes on-prem, with Cilium as its CNI of choice.
U: Networking, Observability, kube-proxy replacement * N: kubeasz D: kubeasz, a certified Kubernetes installer, is using Cilium as one of the supported CNIs. U: Networking, network policy, Hubble for network visibility L: https://github.com/easzlab/kubeasz * N: Kube-OVN D: Kube-OVN uses Cilium to enhance service performance, security and monitoring. U: CNI-Chaining, Hubble, kube-proxy replacement L: https://github.com/kubeovn/kube-ovn/blob/master/docs/IntegrateCiliumIntoKubeOVN.md Q: @oilbeater * N: Kube-Hetzner D: Kube-Hetzner is an open-source Terraform project that uses Cilium as a possible CNI in its cluster deployment on Hetzner Cloud. U: Networking, Hubble, kube-proxy replacement L: https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner#cni Q: @MysticalTech * N: Kubermatic D: Kubermatic Kubernetes Platform is using Cilium as one of the supported CNIs. U: Networking, network policy, Hubble for network visibility L: https://github.com/kubermatic/kubermatic * N: KubeSphere - KubeKey D: KubeKey is an open-source lightweight tool for deploying Kubernetes clusters and addons efficiently. It uses Cilium as one of the supported CNIs. U: Networking, Security, Hubble UI for network visibility L: https://github.com/kubesphere/kubekey Q: @FeynmanZhou * N: K8e - Simple Kubernetes Distribution D: Kubernetes Easy (k8e) is a lightweight, extensible, enterprise Kubernetes distribution. It uses Cilium as the default CNI network. U: Networking, network policy, Hubble for network visibility L: https://github.com/xiaods/k8e Q: @xds2000 * N: Liquid Reply D: Liquid Reply is a professional service provider and utilizes Cilium on suitable projects and implementations.
U: Networking, network policy, Hubble for network visibility, Security L: http://liquidreply.com Q: @mkorbi * N: Magic Leap D: Magic Leap is using Hubble plugged into GKE Dataplane v2 clusters U: Hubble Q: @romachalm * N: Melenion Inc D: Melenion is using Cilium as the CNI for its on-premise production clusters U: Service Load Balancing, Hubble Q: @edude03 * N: Meltwater D: Meltwater is using Cilium in AWS on self-hosted multi-tenant k8s clusters as the CNI plugin U: ENI Networking, Encryption, Monitoring via Prometheus metrics & Hubble Q: @recollir, @dezmodue * N: Microsoft D: Microsoft is using Cilium in \"Azure CNI powered by Cilium\" AKS (Azure Kubernetes Services) clusters L: https://techcommunity.microsoft.com/t5/azure-networking-blog/azure-cni-powered-by-cilium-for-azure-kubernetes-service-aks/ba-p/3662341 Q: @tamilmani1989 @chandanAggarwal * N: Mobilab D: Mobilab uses Cilium as the CNI for its internal cloud U: CNI L: https://mobilabsolutions.com/2019/01/why-we-switched-to-cilium/ * N: MyFitnessPal D: MyFitnessPal trusts Cilium with high volume user traffic in AWS on self-hosted k8s clusters as the CNI plugin and in GKE with Dataplane V2 U: Networking (CNI, Maglev, kube-proxy replacement, local redirect policy), Observability (Network metrics with Hubble, DNS proxy, service maps, policy troubleshooting) and Security (Network Policy) L: https://www.myfitnesspal.com * N: Mux, Inc. D: Mux deploys Cilium on self-hosted k8s clusters (Cluster API) in GCP and AWS to run its video streaming/analytics platforms. U: Pod networking (CNI, IPAM, Host-reachable Services), Hubble, Cluster-mesh. TBD: Network Policy, Transparent Encryption (WG), Host Firewall.
L: https://mux.com Q: @dilyevsky * N: NetBird D: NetBird uses Cilium to compile BPF to Go for cross-platform DNS management and NAT traversal U: bpf2go to compile a C source file into eBPF bytecode and then to Go L: https://netbird.io/knowledge-hub/using-xdp-ebpf-to-share-default-dns-port-between-resolvers Q: @braginini * N: NETWAYS Web Services D: NETWAYS Web Services offers Cilium to their clients as a CNI option for their Managed Kubernetes clusters. U: Networking (CNI), Observability (Hubble) L: https://nws.netways.de/managed-kubernetes/ * N: New York Times (the) D: The New York Times is using Cilium on EKS to build multi-region multi-tenant shared clusters U: Networking (CNI, EKS IPAM, Maglev, kube-proxy replacement, Direct Routing), Observability (Network metrics with Hubble, policy troubleshooting) and Security (Network Policy) L: https://www.nytimes.com/, https://youtu.be/9FDpMNvPrCw Q: @abebars * N: Nexxiot D: Nexxiot is an IoT SaaS provider using Cilium as the main CNI plugin on AWS EKS clusters U: Networking (IPAM, CNI), Security (Network Policies), Visibility (Hubble) L: https://nexxiot.com * N: Nine Internet Solutions AG D: Nine uses Cilium on all Nine Kubernetes Engine clusters U: CNI, network policy, kube-proxy replacement, host firewall L: https://www.nine.ch/en/kubernetes * N: Northflank D: Northflank is a PaaS and uses Cilium as the main CNI plugin across GCP, Azure, AWS and bare-metal U: Networking, network policy, Hubble, packet monitoring and network visibility L: https://northflank.com Q: @NorthflankWill, @Champgoblem * N: Overstock Inc. D: Overstock is using Cilium as the main CNI plugin on bare-metal clusters (self-hosted k8s). U: Networking, network policy, Hubble, observability * N: Palantir Technologies Inc. D: Palantir is using Cilium as their main CNI plugin in all major cloud providers [AWS/Azure/GCP] (self-hosted k8s).
U: ENI networking, L3/L4 policies, FQDN based policy, FQDN filtering, IPSec Q: ungureanuvladvictor * N: Palark GmbH D: Palark uses Cilium for networking in its Kubernetes platform provided to numerous customers as a part of its DevOps as a Service offering. U: CNI, Networking, Network policy, Security, Hubble UI L: https://blog.palark.com/why-cilium-for-kubernetes-networking/ Q: @shurup * N: Parseable D: Parseable uses Tetragon for collecting and ingesting eBPF logs for Kubernetes clusters. U: Security, eBPF, Tetragon L: https://www.parseable.io/blog/ebpf-log-analytics Q: @nitisht * N: Pionative D: Pionative supplies all its clients across cloud providers with Kubernetes running Cilium to deliver the best performance out there. U: CNI, Networking, Security, eBPF L: https://www.pionative.com Q: @Pionerd * N: Plaid Inc D: Plaid is using Cilium as their CNI plugin in self-hosted Kubernetes on AWS. U: CNI, network policies L: [https://plaid.com](https://plaid.com/contact/) Q: @diversario @jandersen-plaid * N: PlanetScale D: PlanetScale is using Cilium as the CNI for its serverless database platform. U: Networking (CNI, IPAM, kube-proxy replacement, native routing), Network Security, Cluster Mesh, Load Balancing L: https://planetscale.com/ Q: @dctrwatson * N: plusserver Kubernetes Engine (PSKE) D: PSKE uses Cilium for multiple scenarios, for example for managed Kubernetes clusters provided with the Gardener Project across AWS and OpenStack. U: CNI, Overlay Network, Network Policies L: https://www.plusserver.com/en/product/managed-kubernetes/, https://github.com/gardener/gardener-extension-networking-cilium * N: Polar Signals D: Polar Signals uses Cilium as the CNI on its GKE Dataplane V2 based clusters. U: Networking L: https://polarsignals.com Q: @polarsignals @brancz * N: Polverio D: Polverio KubeLift is a single-node Kubernetes distribution optimized for Azure, using Cilium as the CNI.
U: CNI, IPAM L: https://polverio.com Q: @polverio @stuartpreston * N: Poseidon Laboratories D: Poseidon's Typhoon Kubernetes distro uses Cilium as the default CNI and it's used internally U: Networking, policies, service load balancing L: https://github.com/poseidon/typhoon/ Q: @dghubble @typhoon8s * N: PostFinance AG D: PostFinance is using Cilium as their CNI for all mission-critical, on-premises k8s clusters U: Networking, network policies, kube-proxy replacement L: https://github.com/postfinance * N: Proton AG D: Proton is using Cilium as their CNI for all their Kubernetes clusters U: Networking, network policies, host firewall, kube-proxy replacement, Hubble L: https://proton.me/ Q: @j4m3s @MrFreezeex * N: Radio France D: Radio France is using Cilium in their production clusters (self-hosted k8s with kops on AWS) U: Mainly Service load-balancing Q: @francoisj * N: Rancher Labs, now part of SUSE D: Rancher Labs' certified Kubernetes distribution RKE2 can be deployed with Cilium. U: Networking and network policy L: https://github.com/rancher/rke and https://github.com/rancher/rke2 * N: Rapyuta Robotics. D: Rapyuta is using Cilium as their main CNI plugin (self-hosted k8s). U: CiliumNetworkPolicy, Hubble, Service Load Balancing. Q: @Gowtham * N: Rafay Systems D: Rafay's Kubernetes Operations Platform uses Cilium for centralized network visibility and network policy enforcement U: NetworkPolicy, Visibility via Prometheus metrics & Hubble L: https://rafay.co/platform/network-policy-manager/ Q: @cloudnativeboy @mohanatreya * N: Robinhood Markets D: Robinhood uses Cilium for Kubernetes overlay networking in an environment where we run tests for backend services U: CNI, Overlay networking Q: @Madhu CS * N: Santa Claus & the Elves D: All our infrastructure to process children's letters and wishes, toy making, and delivery, distributed over multiple clusters around the world, is now powered by Cilium.
U: ClusterMesh, L4LB, XDP acceleration, Bandwidth manager, Encryption, Hubble L: https://qmonnet.github.io/whirl-offload/2024/01/02/santa-switches-to-cilium/ * N: SAP D: SAP uses Cilium for multiple internal scenarios. For example, for self-hosted Kubernetes scenarios on AWS with SAP Concur and for managed Kubernetes clusters provided with the Gardener Project across AWS, Azure, GCP, and OpenStack. U: CNI, Overlay Network, Network Policies L: https://www.concur.com, https://gardener.cloud/, https://github.com/gardener/gardener-extension-networking-cilium Q: @dragan (SAP Concur), @docktofuture & @ScheererJ (Gardener) * N: Sapian D: Sapian uses Cilium as the default CNI in its product DialBox Cloud; DialBox Cloud is an edge Kubernetes cluster using [kilo](https://github.com/squat/kilo) for WireGuard mesh connectivity inter-nodes. Therefore, Cilium is crucial for low latency in real-time communications environments. U: CNI, Network Policies, Hubble, kube-proxy replacement L: https://sapian.com.co, https://arpagon.co/blog/k8s-edge Q: @arpagon * N: Schenker AG D: The land transportation unit of Schenker uses Cilium as the default CNI in self-managed Kubernetes clusters running on AWS U: CNI, Monitoring, kube-proxy replacement L: https://www.dbschenker.com/global Q: @amirkkn * N: Sealos D: Sealos is using Cilium as a consistent CNI for its Sealos Cloud. U: Networking, Service, kube-proxy replacement, Network Policy, Hubble L: https://sealos.io Q: @fanux, @yangchuansheng * N: Seznam.cz D: Seznam.cz uses Cilium in multiple scenarios in on-prem DCs: at first as an L4LB which load-balances external traffic into k8s+OpenStack clusters, then as the CNI in multiple k8s and OpenStack clusters, all connected in a ClusterMesh to enforce NetworkPolicies across pods/VMs.
U: L4LB, L3/4 CNPs+CCNPs, KPR, Hubble, HostPolicy, Direct-routing, IPv4+IPv6, ClusterMesh Q: @oblazek * N: Simple D: Simple uses Cilium as the default CNI in Kubernetes clusters (AWS EKS) for both development and production environments. U: CNI, Network Policies, Hubble L: https://simple.life Q: @sergeyshevch * N: Scaleway D: Scaleway uses Cilium as the default CNI for Kubernetes Kapsule U: Networking, NetworkPolicy, Services Q: @jtherin @remyleone * N: Schuberg Philis D: Schuberg Philis uses Cilium as CNI for mission-critical Kubernetes clusters we run for our customers. U: CNI (instead of amazon-vpc-cni-k8s), DefaultDeny(Zero Trust), Hubble, CiliumNetworkPolicy, CiliumClusterwideNetworkPolicy, EKS L: https://schubergphilis.com/en Q: @stimmerman @shoekstra @mbaumann * N: SI Analytics D: SI Analytics uses Cilium as the CNI in self-managed Kubernetes clusters in on-prem DCs, and also as the CNI in its GKE Dataplane V2 based clusters. U: CNI, Network Policies, Hubble L: https://si-analytics.ai, https://ovision.ai Q: @jholee * N: SIGHUP D: SIGHUP integrated Cilium as a supported CNI for KFD (Kubernetes Fury Distribution), our enterprise-grade OSS reference architecture U: Available supported CNI L: https://sighup.io, https://github.com/sighupio/fury-kubernetes-networking Q: @jnardiello @nutellino * N: SmileDirectClub D: SmileDirectClub is using Cilium in manufacturing clusters (self-hosted on vSphere and AWS EC2) U: CNI Q: @joey, @onur.gokkocabas * N: Snapp D: Snapp is using Cilium in production for its on-premises OpenShift clusters U: CNI, Network Policies, Hubble Q: @m-yosefpor * N: Solo.io D: Cilium is part of Gloo Application Networking platform, in a \u201cbatteries included but swappable\u201d manner U: CNI, Network Policies Q: @linsun * N: S&P Global D: S&P Global uses Cilium as their multi-cloud CNI U: CNI L: https://www.youtube.com/watch?v=6CZ_SSTqb4g * N: Spectro Cloud D: Spectro Cloud uses & promotes Cilium for clusters its K8S management platform
(Palette) deploys U: CNI, Overlay network, kube-proxy replacement Q: @Kevin Reeuwijk * N: Spherity D: Spherity is using Cilium on AWS EKS U: CNI/ENI Networking, Network policies, Hubble Q: @solidnerd * N: Sportradar D: Sportradar is using Cilium as their main CNI plugin in AWS (using kops) U: L3/L4 policies, Hubble, BPF NodePort, CiliumClusterwideNetworkPolicy Q: @Eric Bailey, @Ole Markus * N: Sproutfi D: Sproutfi uses Cilium as the CNI on its GKE based clusters U: Service Load Balancing, Hubble, Datadog Integration for Prometheus metrics Q: @edude03 * N: SuperOrbital D: As a Kubernetes-focused consulting firm, we have implemented Cilium on customer engagements U: CNI, CiliumNetworkPolicy at L7, Hubble L: https://superorbital.io/ Q: @jmcshane * N: Syself D: Syself uses Cilium as the CNI for Syself Autopilot, a managed Kubernetes platform U: CNI, HostFirewall, Monitoring, CiliumClusterwideNetworkPolicy, Hubble L: https://syself.com Q: @sbaete * N: Talos D: Cilium is one of the supported CNIs in Talos U: Networking, NetworkPolicy, Hubble, BPF NodePort L: https://github.com/talos-systems/talos Q: @frezbo, @smira, @Ulexus * N: Tencent Cloud D: Tencent Cloud container team designed the TKE hybrid cloud container network solution with Cilium as the cluster network base U: Networking, CNI L: https://segmentfault.com/a/1190000040298428/en * N: teuto.net Netzdienste GmbH D: teuto.net is using Cilium for their managed k8s service, t8s U: CNI, CiliumNetworkPolicy, Hubble, Encryption, ... L: https://teuto.net/managed-kubernetes Q: @cwrau * N: Trendyol D: Trendyol.com has recently implemented Cilium as the default CNI for its production Kubernetes clusters starting from version 1.26.
U: Networking, kube-proxy replacement, eBPF, Network Visibility with Hubble and Grafana, Local Redirect Policy L: https://t.ly/FDCZK * N: T-Systems International D: TSI uses Cilium for its Open Sovereign Cloud product, including as a CNI for Gardener-based Kubernetes clusters and bare-metal infrastructure managed by OnMetal. U: CNI, overlay network, NetworkPolicies Q: @ManuStoessel * N: uSwitch D: uSwitch is using Cilium in AWS for all their production clusters (self-hosted k8s) U: ClusterMesh, CNI-Chaining (with amazon-vpc-cni-k8s) Q: @jirving * N: United Cloud D: United Cloud is using Cilium for all non-production and production clusters (on-premises) U: CNI, Hubble, CiliumNetworkPolicy, CiliumClusterwideNetworkPolicy, ClusterMesh, Encryption L: https://united.cloud Q: @boris * N: Utmost Software, Inc D: Utmost is using Cilium in all tiers of its Kubernetes ecosystem to implement zero trust U: CNI, DefaultDeny(Zero Trust), Hubble, CiliumNetworkPolicy, CiliumClusterwideNetworkPolicy L: https://blog.utmost.co/zero-trust-security-at-utmost Q: @andrewholt * N: Trip.com D: Trip.com is using Cilium in their production clusters (self-hosted k8s, on-premises and AWS) U: ENI Networking, Service load-balancing, Direct routing (via Bird) L: https://ctripcloud.github.io/cilium/network/2020/01/19/trip-first-step-towards-cloud-native-networking.html Q: @ArthurChiao * N: Tailor Brands D: Tailor Brands is using Cilium in their production, staging, and development clusters (AWS EKS) U: CNI (instead of amazon-vpc-cni-k8s), Hubble, Datadog Integration for Prometheus metrics Q: @liorrozen * N: Twilio D: Twilio Segment is using Cilium across their k8s-based compute platform U: CNI, EKS direct routing, kube-proxy replacement, Hubble, CiliumNetworkPolicies Q: @msaah * N: ungleich D: ungleich is using Cilium as part of IPv6-only Kubernetes deployments.
U: CNI, IPv6 only networking, BGP, eBPF Q: @Nico Schottelius, @nico:ungleich.ch (Matrix) * N: Veepee D: Veepee is using Cilium on their on-premise Kubernetes clusters, hosting the majority of their applications. U: CNI, BGP, eBPF, Hubble, DirectRouting (via kube-router) Q: @nerzhul * N: Wildlife Studios D: Wildlife Studios is using Cilium in AWS for all their game production clusters (self-hosted k8s) U: ClusterMesh, Global Service Load Balancing. Q: @Oki @luanguimaraesla @rsafonseca * N: Yahoo! D: Yahoo is using Cilium for L4 North-South Load Balancing for Kubernetes Services L: https://www.youtube.com/watch?v=-C86fBMcp5Q * N: ZeroHash D: Zero Hash is using Cilium as CNI for networking, security and monitoring features for Kubernetes clusters U: CNI/ENI Networking, Network policies, Hubble Q: @eugenestarchenko"
Particularly: - We **DO** guarantee Kubernetes REST API compatibility -- if a given version of controller-runtime stops working with what should be a supported version of Kubernetes, this is almost certainly a bug. - We **DO NOT** guarantee any particular compatibility matrix between kubernetes library dependencies (client-go, apimachinery, etc); Such compatibility is infeasible due to the way those libraries are versioned. [dep-versions]: https://sigs.k8s.io/kubebuilder-release-tools/VERSIONING.md#kubernetes-version-compatibility" - }, - { - "additional_info": "We follow the [common KubeBuilder versioning guidelines][guidelines], and use the corresponding tooling. For the purposes of the aforementioned guidelines, controller-runtime counts as a \"library project\", but otherwise follows the guidelines exactly. [guidelines]: https://sigs.k8s.io/kubebuilder-release-tools/VERSIONING.md For release branches, we generally tend to support backporting one (1) major release (`release-{X-1}` or `release-0.{Y-1}`), but may go back further if the need arises and is very pressing (e.g. security updates). Note the [guidelines on dependency versions][dep-versions]. Particularly: - We **DO** guarantee Kubernetes REST API compatibility -- if a given version of controller-runtime stops working with what should be a supported version of Kubernetes, this is almost certainly a bug. - We **DO NOT** guarantee any particular compatibility matrix between kubernetes library dependencies (client-go, apimachinery, etc); Such compatibility is infeasible due to the way those libraries are versioned. 
[dep-versions]: https://sigs.k8s.io/kubebuilder-release-tools/VERSIONING.md#kubernetes-version-compatibility" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "Cilium", - "file_name": "VERSION_HISTORY.md" - }, - "content": [ - { - "heading": "`jwt-go` Version History", - "data": "" - }, - { - "heading": "4.0.0", - "data": "* Introduces support for Go modules. The `v4` version will be backwards compatible with `v3.x.y`." - }, - { - "heading": "3.2.2", - "data": "* Starting from this release, we are adopting the policy to support the two most recent versions of Go currently available. By the time of this release, this is Go 1.15 and 1.16 ([#28](https://github.com/golang-jwt/jwt/pull/28)).\n * Fixed a potential issue that could occur when the verification of `exp`, `iat` or `nbf` was not required and contained invalid contents, i.e. non-numeric/date. Thanks to @thaJeztah for making us aware of that and @giorgos-f3 for originally reporting it to the formtech fork ([#40](https://github.com/golang-jwt/jwt/pull/40)).\n * Added support for EdDSA / ED25519 ([#36](https://github.com/golang-jwt/jwt/pull/36)).\n * Optimized allocations ([#33](https://github.com/golang-jwt/jwt/pull/33))." - }, - { - "heading": "3.2.1", - "data": "* **Import Path Change**: See MIGRATION_GUIDE.md for tips on updating your code\n * Changed the import path from `github.com/dgrijalva/jwt-go` to `github.com/golang-jwt/jwt`\n * Fixed a type confusion issue between `string` and `[]string` in `VerifyAudience` ([#12](https://github.com/golang-jwt/jwt/pull/12)). This fixes CVE-2020-26160" - }, - { - "heading": "3.2.0", - "data": "* Added method `ParseUnverified` to allow users to split up the tasks of parsing and validation\n * HMAC signing method returns `ErrInvalidKeyType` instead of `ErrInvalidKey` where appropriate\n * Added options to `request.ParseFromRequest`, which allows for an arbitrary list of modifiers to parsing behavior.
The initial set includes `WithClaims` and `WithParser`. Existing usage of this function will continue to work as before.\n * Deprecated `ParseFromRequestWithClaims` to simplify API in the future." - }, - { - "heading": "3.1.0", - "data": "* Improvements to `jwt` command line tool\n * Added `SkipClaimsValidation` option to `Parser`\n * Documentation updates" - }, - { - "heading": "3.0.0", - "data": "* **Compatibility Breaking Changes**: See MIGRATION_GUIDE.md for tips on updating your code\n * Dropped support for `[]byte` keys when using RSA signing methods. This convenience feature could contribute to security vulnerabilities involving mismatched key types with signing methods.\n * `ParseFromRequest` has been moved to `request` subpackage and usage has changed\n * The `Claims` property on `Token` is now type `Claims` instead of `map[string]interface{}`. The default value is type `MapClaims`, which is an alias to `map[string]interface{}`. This makes it possible to use a custom type when decoding claims.\n * Other Additions and Changes\n * Added `Claims` interface type to allow users to decode the claims into a custom type\n * Added `ParseWithClaims`, which takes a third argument of type `Claims`. Use this function instead of `Parse` if you have a custom type you'd like to decode into.\n * Dramatically improved the functionality and flexibility of `ParseFromRequest`, which is now in the `request` subpackage\n * Added `ParseFromRequestWithClaims` which is the `FromRequest` equivalent of `ParseWithClaims`\n * Added new interface type `Extractor`, which is used for extracting JWT strings from http requests.
Used with `ParseFromRequest` and `ParseFromRequestWithClaims`.\n * Added several new, more specific, validation errors to error type bitmask\n * Moved examples from README to executable example files\n * Signing method registry is now thread safe\n * Added new property to `ValidationError`, which contains the raw error returned by calls made by parse/verify (such as those returned by keyfunc or json parser)" - }, - { - "heading": "2.7.0", - "data": "This will likely be the last backwards compatible release before 3.0.0, excluding essential bug fixes.\n * Added new option `-show` to the `jwt` command that will just output the decoded token without verifying\n * Error text for expired tokens includes how long it's been expired\n * Fixed incorrect error returned from `ParseRSAPublicKeyFromPEM`\n * Documentation updates" - }, - { - "heading": "2.6.0", - "data": "* Exposed inner error within ValidationError\n * Fixed validation errors when using UseJSONNumber flag\n * Added several unit tests" - }, - { - "heading": "2.5.0", - "data": "* Added support for signing method none. You shouldn't use this. The API tries to make this clear.\n * Updated/fixed some documentation\n * Added more helpful error message when trying to parse tokens that begin with `BEARER `" - }, - { - "heading": "2.4.0", - "data": "* Added new type, Parser, to allow for configuration of various parsing parameters\n * You can now specify a list of valid signing methods. Anything outside this set will be rejected.\n * You can now opt to use the `json.Number` type instead of `float64` when parsing token JSON\n * Added support for [Travis CI](https://travis-ci.org/dgrijalva/jwt-go)\n * Fixed some bugs with ECDSA parsing" - }, - { - "heading": "2.3.0", - "data": "* Added support for ECDSA signing methods\n * Added support for RSA PSS signing methods (requires go v1.4)" - }, - { - "heading": "2.2.0", - "data": "* Gracefully handle a `nil` `Keyfunc` being passed to `Parse`. 
Result will now be the parsed token and an error, instead of a panic." - }, - { - "heading": "2.1.0", - "data": "Backwards compatible API change that was missed in 2.0.0.\n * The `SignedString` method on `Token` now takes `interface{}` instead of `[]byte`" - }, - { - "heading": "2.0.0", - "data": "There were two major reasons for breaking backwards compatibility with this update. The first was a refactor required to expand the width of the RSA and HMAC-SHA signing implementations. There will likely be no required code changes to support this change.\n The second update, while unfortunately requiring a small change in integration, is required to open up this library to other signing methods. Not all keys used for all signing methods have a single standard on-disk representation. Requiring `[]byte` as the type for all keys proved too limiting. Additionally, this implementation allows for pre-parsed tokens to be reused, which might matter in an application that parses a high volume of tokens with a small set of keys. Backwards compatibility has been maintained for passing `[]byte` to the RSA signing methods, but they will also accept `*rsa.PublicKey` and `*rsa.PrivateKey`.\n It is likely the only integration change required here will be to change `func(t *jwt.Token) ([]byte, error)` to `func(t *jwt.Token) (interface{}, error)` when calling `Parse`." - }, - { - "heading": "**Compatibility Breaking Changes", - "data": "* `SigningMethodHS256` is now `*SigningMethodHMAC` instead of `type struct`\n * `SigningMethodRS256` is now `*SigningMethodRSA` instead of `type struct`\n * `KeyFunc` now returns `interface{}` instead of `[]byte`\n * `SigningMethod.Sign` now takes `interface{}` instead of `[]byte` for the key\n * `SigningMethod.Verify` now takes `interface{}` instead of `[]byte` for the key\n * Renamed type `SigningMethodHS256` to `SigningMethodHMAC`.
Specific sizes are now just instances of this type.\n * Added public package global `SigningMethodHS256`\n * Added public package global `SigningMethodHS384`\n * Added public package global `SigningMethodHS512`\n * Renamed type `SigningMethodRS256` to `SigningMethodRSA`. Specific sizes are now just instances of this type.\n * Added public package global `SigningMethodRS256`\n * Added public package global `SigningMethodRS384`\n * Added public package global `SigningMethodRS512`\n * Moved sample private key for HMAC tests from an inline value to a file on disk. Value is unchanged.\n * Refactored the RSA implementation to be easier to read\n * Exposed helper methods `ParseRSAPrivateKeyFromPEM` and `ParseRSAPublicKeyFromPEM`" - }, - { - "heading": "1.0.2", - "data": "* Fixed bug in parsing public keys from certificates\n * Added more tests around the parsing of keys for RS256\n * Code refactoring in RS256 implementation. No functional changes" - }, - { - "heading": "1.0.1", - "data": "* Fixed panic if RS256 signing method was passed an invalid key" - }, - { - "heading": "1.0.0", - "data": "* First versioned release * API stabilized * Supports creating, signing, parsing, and validating JWT tokens * Supports RS256 and HS256 signing methods" - }, - { - "additional_info": "* Introduces support for Go modules. The `v4` version will be backwards compatible with `v3.x.y`. * Starting from this release, we are adopting the policy to support the two most recent versions of Go currently available. By the time of this release, this is Go 1.15 and 1.16 ([#28](https://github.com/golang-jwt/jwt/pull/28)). * Fixed a potential issue that could occur when the verification of `exp`, `iat` or `nbf` was not required and contained invalid contents, i.e. non-numeric/date. Thanks to @thaJeztah for making us aware of that and @giorgos-f3 for originally reporting it to the formtech fork ([#40](https://github.com/golang-jwt/jwt/pull/40)).
* Added support for EdDSA / ED25519 ([#36](https://github.com/golang-jwt/jwt/pull/36)). * Optimized allocations ([#33](https://github.com/golang-jwt/jwt/pull/33)). * **Import Path Change**: See MIGRATION_GUIDE.md for tips on updating your code * Changed the import path from `github.com/dgrijalva/jwt-go` to `github.com/golang-jwt/jwt` * Fixed a type confusion issue between `string` and `[]string` in `VerifyAudience` ([#12](https://github.com/golang-jwt/jwt/pull/12)). This fixes CVE-2020-26160 * Added method `ParseUnverified` to allow users to split up the tasks of parsing and validation * HMAC signing method returns `ErrInvalidKeyType` instead of `ErrInvalidKey` where appropriate * Added options to `request.ParseFromRequest`, which allows for an arbitrary list of modifiers to parsing behavior. The initial set includes `WithClaims` and `WithParser`. Existing usage of this function will continue to work as before. * Deprecated `ParseFromRequestWithClaims` to simplify API in the future. * Improvements to `jwt` command line tool * Added `SkipClaimsValidation` option to `Parser` * Documentation updates * **Compatibility Breaking Changes**: See MIGRATION_GUIDE.md for tips on updating your code * Dropped support for `[]byte` keys when using RSA signing methods. This convenience feature could contribute to security vulnerabilities involving mismatched key types with signing methods. * `ParseFromRequest` has been moved to `request` subpackage and usage has changed * The `Claims` property on `Token` is now type `Claims` instead of `map[string]interface{}`. The default value is type `MapClaims`, which is an alias to `map[string]interface{}`. This makes it possible to use a custom type when decoding claims. * Other Additions and Changes * Added `Claims` interface type to allow users to decode the claims into a custom type * Added `ParseWithClaims`, which takes a third argument of type `Claims`. Use this function instead of `Parse` if you have a custom type you'd like to decode into.
* Dramatically improved the functionality and flexibility of `ParseFromRequest`, which is now in the `request` subpackage * Added `ParseFromRequestWithClaims` which is the `FromRequest` equivalent of `ParseWithClaims` * Added new interface type `Extractor`, which is used for extracting JWT strings from http requests. Used with `ParseFromRequest` and `ParseFromRequestWithClaims`. * Added several new, more specific, validation errors to error type bitmask * Moved examples from README to executable example files * Signing method registry is now thread safe * Added new property to `ValidationError`, which contains the raw error returned by calls made by parse/verify (such as those returned by keyfunc or json parser) This will likely be the last backwards compatible release before 3.0.0, excluding essential bug fixes. * Added new option `-show` to the `jwt` command that will just output the decoded token without verifying * Error text for expired tokens includes how long it's been expired * Fixed incorrect error returned from `ParseRSAPublicKeyFromPEM` * Documentation updates * Exposed inner error within ValidationError * Fixed validation errors when using UseJSONNumber flag * Added several unit tests * Added support for signing method none. You shouldn't use this. The API tries to make this clear. * Updated/fixed some documentation * Added more helpful error message when trying to parse tokens that begin with `BEARER ` * Added new type, Parser, to allow for configuration of various parsing parameters * You can now specify a list of valid signing methods. Anything outside this set will be rejected. * You can now opt to use the `json.Number` type instead of `float64` when parsing token JSON * Added support for [Travis CI](https://travis-ci.org/dgrijalva/jwt-go) * Fixed some bugs with ECDSA parsing * Added support for ECDSA signing methods * Added support for RSA PSS signing methods (requires go v1.4) * Gracefully handle a `nil` `Keyfunc` being passed to `Parse`. 
Result will now be the parsed token and an error, instead of a panic. Backwards compatible API change that was missed in 2.0.0. * The `SignedString` method on `Token` now takes `interface{}` instead of `[]byte` There were two major reasons for breaking backwards compatibility with this update. The first was a refactor required to expand the width of the RSA and HMAC-SHA signing implementations. There will likely be no required code changes to support this change. The second update, while unfortunately requiring a small change in integration, is required to open up this library to other signing methods. Not all keys used for all signing methods have a single standard on-disk representation. Requiring `[]byte` as the type for all keys proved too limiting. Additionally, this implementation allows for pre-parsed tokens to be reused, which might matter in an application that parses a high volume of tokens with a small set of keys. Backwards compatibility has been maintained for passing `[]byte` to the RSA signing methods, but they will also accept `*rsa.PublicKey` and `*rsa.PrivateKey`. It is likely the only integration change required here will be to change `func(t *jwt.Token) ([]byte, error)` to `func(t *jwt.Token) (interface{}, error)` when calling `Parse`. * `SigningMethodHS256` is now `*SigningMethodHMAC` instead of `type struct` * `SigningMethodRS256` is now `*SigningMethodRSA` instead of `type struct` * `KeyFunc` now returns `interface{}` instead of `[]byte` * `SigningMethod.Sign` now takes `interface{}` instead of `[]byte` for the key * `SigningMethod.Verify` now takes `interface{}` instead of `[]byte` for the key * Renamed type `SigningMethodHS256` to `SigningMethodHMAC`.
Specific sizes are now just instances of this type. * Added public package global `SigningMethodRS256` * Added public package global `SigningMethodRS384` * Added public package global `SigningMethodRS512` * Moved sample private key for HMAC tests from an inline value to a file on disk. Value is unchanged. * Refactored the RSA implementation to be easier to read * Exposed helper methods `ParseRSAPrivateKeyFromPEM` and `ParseRSAPublicKeyFromPEM` * Fixed bug in parsing public keys from certificates * Added more tests around the parsing of keys for RS256 * Code refactoring in RS256 implementation. No functional changes * Fixed panic if RS256 signing method was passed an invalid key * First versioned release * API stabilized * Supports creating, signing, parsing, and validating JWT tokens * Supports RS256 and HS256 signing methods" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "CHANGELOG.md" - }, - "content": [ - { - "heading": "Changelog", - "data": "" - }, - { - "heading": "v1.4.7 / 2018-01-09", - "data": "* BSD/macOS: Fix possible deadlock on closing the watcher on kqueue (thanks @nhooyr and @glycerine)\n * Tests: Fix missing verb on format string (thanks @rchiossi)\n * Linux: Fix deadlock in Remove (thanks @aarondl)\n * Linux: Watch.Add improvements (avoid race, fix consistency, reduce garbage) (thanks @twpayne)\n * Docs: Moved FAQ into the README (thanks @vahe)\n * Linux: Properly handle inotify's IN_Q_OVERFLOW event (thanks @zeldovich)\n * Docs: replace references to OS X with macOS" - }, - { - "heading": "v1.4.2 / 2016-10-10", - "data": "* Linux: use InotifyInit1 with IN_CLOEXEC to stop leaking a file descriptor to a child process when using fork/exec [#178](https://github.com/fsnotify/fsnotify/pull/178) (thanks @pattyshack)" - }, - { - "heading": "v1.4.1 / 2016-10-04", - "data": "* Fix flaky inotify stress test on Linux 
[#177](https://github.com/fsnotify/fsnotify/pull/177) (thanks @pattyshack)" - }, - { - "heading": "v1.4.0 / 2016-10-01", - "data": "* add a String() method to Event.Op [#165](https://github.com/fsnotify/fsnotify/pull/165) (thanks @oozie)" - }, - { - "heading": "v1.3.1 / 2016-06-28", - "data": "* Windows: fix for double backslash when watching the root of a drive [#151](https://github.com/fsnotify/fsnotify/issues/151) (thanks @brunoqc)" - }, - { - "heading": "v1.3.0 / 2016-04-19", - "data": "* Support linux/arm64 by [patching](https://go-review.googlesource.com/#/c/21971/) x/sys/unix and switching to it from syscall (thanks @suihkulokki) [#135](https://github.com/fsnotify/fsnotify/pull/135)" - }, - { - "heading": "v1.2.10 / 2016-03-02", - "data": "* Fix golint errors in windows.go [#121](https://github.com/fsnotify/fsnotify/pull/121) (thanks @tiffanyfj)" - }, - { - "heading": "v1.2.9 / 2016-01-13", - "data": "kqueue: Fix logic for CREATE after REMOVE [#111](https://github.com/fsnotify/fsnotify/pull/111) (thanks @bep)" - }, - { - "heading": "v1.2.8 / 2015-12-17", - "data": "* kqueue: fix race condition in Close [#105](https://github.com/fsnotify/fsnotify/pull/105) (thanks @djui for reporting the issue and @ppknap for writing a failing test)\n * inotify: fix race in test\n * enable race detection for continuous integration (Linux, Mac, Windows)" - }, - { - "heading": "v1.2.5 / 2015-10-17", - "data": "* inotify: use epoll_create1 for arm64 support (requires Linux 2.6.27 or later) [#100](https://github.com/fsnotify/fsnotify/pull/100) (thanks @suihkulokki)\n * inotify: fix path leaks [#73](https://github.com/fsnotify/fsnotify/pull/73) (thanks @chamaken)\n * kqueue: watch for rename events on subdirectories [#83](https://github.com/fsnotify/fsnotify/pull/83) (thanks @guotie)\n * kqueue: avoid infinite loops from symlink cycles [#101](https://github.com/fsnotify/fsnotify/pull/101) (thanks @illicitonion)" - }, - { - "heading": "v1.2.1 / 2015-10-14", - "data": "* kqueue: 
don't watch named pipes [#98](https://github.com/fsnotify/fsnotify/pull/98) (thanks @evanphx)" - }, - { - "heading": "v1.2.0 / 2015-02-08", - "data": "* inotify: use epoll to wake up readEvents [#66](https://github.com/fsnotify/fsnotify/pull/66) (thanks @PieterD)\n * inotify: closing watcher should now always shut down goroutine [#63](https://github.com/fsnotify/fsnotify/pull/63) (thanks @PieterD)\n * kqueue: close kqueue after removing watches, fixes [#59](https://github.com/fsnotify/fsnotify/issues/59)" - }, - { - "heading": "v1.1.1 / 2015-02-05", - "data": "* inotify: Retry read on EINTR [#61](https://github.com/fsnotify/fsnotify/issues/61) (thanks @PieterD)" - }, - { - "heading": "v1.1.0 / 2014-12-12", - "data": "* kqueue: rework internals [#43](https://github.com/fsnotify/fsnotify/pull/43)\n * add low-level functions\n * only need to store flags on directories\n * less mutexes [#13](https://github.com/fsnotify/fsnotify/issues/13)\n * done can be an unbuffered channel\n * remove calls to os.NewSyscallError\n * More efficient string concatenation for Event.String() [#52](https://github.com/fsnotify/fsnotify/pull/52) (thanks @mdlayher)\n * kqueue: fix regression in rework causing subdirectories to be watched [#48](https://github.com/fsnotify/fsnotify/issues/48)\n * kqueue: cleanup internal watch before sending remove event [#51](https://github.com/fsnotify/fsnotify/issues/51)" - }, - { - "heading": "v1.0.4 / 2014-09-07", - "data": "* kqueue: add dragonfly to the build tags.\n * Rename source code files, rearrange code so exported APIs are at the top.\n * Add done channel to example code. [#37](https://github.com/fsnotify/fsnotify/pull/37) (thanks @chenyukang)" - }, - { - "heading": "v1.0.3 / 2014-08-19", - "data": "* [Fix] Windows MOVED_TO now translates to Create like on BSD and Linux. [#36](https://github.com/fsnotify/fsnotify/issues/36)" - }, - { - "heading": "v1.0.2 / 2014-08-17", - "data": "* [Fix] Missing create events on macOS. 
[#14](https://github.com/fsnotify/fsnotify/issues/14) (thanks @zhsso)\n * [Fix] Make ./path and path equivalent. (thanks @zhsso)" - }, - { - "heading": "v1.0.0 / 2014-08-15", - "data": "* [API] Remove AddWatch on Windows, use Add.\n * Improve documentation for exported identifiers. [#30](https://github.com/fsnotify/fsnotify/issues/30)\n * Minor updates based on feedback from golint." - }, - { - "heading": "dev / 2014-07-09", - "data": "* Moved to [github.com/fsnotify/fsnotify](https://github.com/fsnotify/fsnotify).\n * Use os.NewSyscallError instead of returning errno (thanks @hariharan-uno)" - }, - { - "heading": "dev / 2014-07-04", - "data": "* kqueue: fix incorrect mutex used in Close()\n * Update example to demonstrate usage of Op." - }, - { - "heading": "dev / 2014-06-28", - "data": "* [API] Don't set the Write Op for attribute notifications [#4](https://github.com/fsnotify/fsnotify/issues/4)\n * Fix for String() method on Event (thanks Alex Brainman)\n * Don't build on Plan 9 or Solaris (thanks @4ad)" - }, - { - "heading": "dev / 2014-06-21", - "data": "* Events channel of type Event rather than *Event.\n * [internal] use syscall constants directly for inotify and kqueue.\n * [internal] kqueue: rename events to kevents and fileEvent to event." - }, - { - "heading": "dev / 2014-06-19", - "data": "* Go 1.3+ required on Windows (uses syscall.ERROR_MORE_DATA internally).\n * [internal] remove cookie from Event struct (unused).\n * [internal] Event struct has the same definition across every OS.\n * [internal] remove internal watch and removeWatch methods." - }, - { - "heading": "dev / 2014-06-12", - "data": "* [API] Renamed Watch() to Add() and RemoveWatch() to Remove().\n * [API] Pluralized channel names: Events and Errors.\n * [API] Renamed FileEvent struct to Event.\n * [API] Op constants replace methods like IsCreate()." 
- }, - { - "heading": "dev / 2014-06-12", - "data": "* Fix data race on kevent buffer (thanks @tilaks) [#98](https://github.com/howeyc/fsnotify/pull/98)" - }, - { - "heading": "dev / 2014-05-23", - "data": "* [API] Remove current implementation of WatchFlags.\n * current implementation doesn't take advantage of OS for efficiency\n * provides little benefit over filtering events as they are received, but has extra bookkeeping and mutexes\n * no tests for the current implementation\n * not fully implemented on Windows [#93](https://github.com/howeyc/fsnotify/issues/93#issuecomment-39285195)" - }, - { - "heading": "v0.9.3 / 2014-12-31", - "data": "* kqueue: cleanup internal watch before sending remove event [#51](https://github.com/fsnotify/fsnotify/issues/51)" - }, - { - "heading": "v0.9.2 / 2014-08-17", - "data": "* [Backport] Fix missing create events on macOS. [#14](https://github.com/fsnotify/fsnotify/issues/14) (thanks @zhsso)" - }, - { - "heading": "v0.9.1 / 2014-06-12", - "data": "* Fix data race on kevent buffer (thanks @tilaks) [#98](https://github.com/howeyc/fsnotify/pull/98)" - }, - { - "heading": "v0.9.0 / 2014-01-17", - "data": "* IsAttrib() for events that only concern a file's metadata [#79][] (thanks @abustany)\n * [Fix] kqueue: fix deadlock [#77][] (thanks @cespare)\n * [NOTICE] Development has moved to `code.google.com/p/go.exp/fsnotify` in preparation for inclusion in the Go standard library." 
- }, - { - "heading": "v0.8.12 / 2013-11-13", - "data": "* [API] Remove FD_SET and friends from Linux adapter" - }, - { - "heading": "v0.8.11 / 2013-11-02", - "data": "* [Doc] Add Changelog [#72][] (thanks @nathany)\n * [Doc] Spotlight and double modify events on macOS [#62][] (reported by @paulhammond)" - }, - { - "heading": "v0.8.10 / 2013-10-19", - "data": "* [Fix] kqueue: remove file watches when parent directory is removed [#71][] (reported by @mdwhatcott)\n * [Fix] kqueue: race between Close and readEvents [#70][] (reported by @bernerdschaefer)\n * [Doc] specify OS-specific limits in README (thanks @debrando)" - }, - { - "heading": "v0.8.9 / 2013-09-08", - "data": "* [Doc] Contributing (thanks @nathany)\n * [Doc] update package path in example code [#63][] (thanks @paulhammond)\n * [Doc] GoCI badge in README (Linux only) [#60][]\n * [Doc] Cross-platform testing with Vagrant [#59][] (thanks @nathany)" - }, - { - "heading": "v0.8.8 / 2013-06-17", - "data": "* [Fix] Windows: handle `ERROR_MORE_DATA` on Windows [#49][] (thanks @jbowtie)" - }, - { - "heading": "v0.8.7 / 2013-06-03", - "data": "* [API] Make syscall flags internal\n * [Fix] inotify: ignore event changes\n * [Fix] race in symlink test [#45][] (reported by @srid)\n * [Fix] tests on Windows\n * lower case error messages" - }, - { - "heading": "v0.8.6 / 2013-05-23", - "data": "* kqueue: Use EVT_ONLY flag on Darwin\n * [Doc] Update README with full example" - }, - { - "heading": "v0.8.5 / 2013-05-09", - "data": "* [Fix] inotify: allow monitoring of \"broken\" symlinks (thanks @tsg)" - }, - { - "heading": "v0.8.4 / 2013-04-07", - "data": "* [Fix] kqueue: watch all file events [#40][] (thanks @ChrisBuchholz)" - }, - { - "heading": "v0.8.3 / 2013-03-13", - "data": "* [Fix] inotify/kqueue memory leak [#36][] (reported by @nbkolchin)\n * [Fix] kqueue: use fsnFlags for watching a directory [#33][] (reported by @nbkolchin)" - }, - { - "heading": "v0.8.2 / 2013-02-07", - "data": "* [Doc] add Authors\n * [Fix] 
fix data races for map access [#29][] (thanks @fsouza)" - }, - { - "heading": "v0.8.1 / 2013-01-09", - "data": "* [Fix] Windows path separators\n * [Doc] BSD License" - }, - { - "heading": "v0.8.0 / 2012-11-09", - "data": "* kqueue: directory watching improvements (thanks @vmirage)\n * inotify: add `IN_MOVED_TO` [#25][] (requested by @cpisto)\n * [Fix] kqueue: deleting watched directory [#24][] (reported by @jakerr)" - }, - { - "heading": "v0.7.4 / 2012-10-09", - "data": "* [Fix] inotify: fixes from https://codereview.appspot.com/5418045/ (ugorji)\n * [Fix] kqueue: preserve watch flags when watching for delete [#21][] (reported by @robfig)\n * [Fix] kqueue: watch the directory even if it isn't a new watch (thanks @robfig)\n * [Fix] kqueue: modify after recreation of file" - }, - { - "heading": "v0.7.3 / 2012-09-27", - "data": "* [Fix] kqueue: watch with an existing folder inside the watched folder (thanks @vmirage)\n * [Fix] kqueue: no longer get duplicate CREATE events" - }, - { - "heading": "v0.7.2 / 2012-09-01", - "data": "* kqueue: events for created directories" - }, - { - "heading": "v0.7.1 / 2012-07-14", - "data": "* [Fix] for renaming files" - }, - { - "heading": "v0.7.0 / 2012-07-02", - "data": "* [Feature] FSNotify flags\n * [Fix] inotify: Added file name back to event path" - }, - { - "heading": "v0.6.0 / 2012-06-06", - "data": "* kqueue: watch files after directory created (thanks @tmc)" - }, - { - "heading": "v0.5.1 / 2012-05-22", - "data": "* [Fix] inotify: remove all watches before Close()" - }, - { - "heading": "v0.5.0 / 2012-05-03", - "data": "* [API] kqueue: return errors during watch instead of sending over channel\n * kqueue: match symlink behavior on Linux\n * inotify: add `DELETE_SELF` (requested by @taralx)\n * [Fix] kqueue: handle EINTR (reported by @robfig)\n * [Doc] Godoc example [#1][] (thanks @davecheney)" - }, - { - "heading": "v0.4.0 / 2012-03-30", - "data": "* Go 1 released: build with go tool\n * [Feature] Windows support using 
winfsnotify\n * Windows does not have attribute change notifications\n * Roll attribute notifications into IsModify" - }, - { - "heading": "v0.3.0 / 2012-02-19", - "data": "* kqueue: add files when watch directory" - }, - { - "heading": "v0.2.0 / 2011-12-30", - "data": "* update to latest Go weekly code" - }, - { - "heading": "v0.1.0 / 2011-10-19", - "data": "* kqueue: add watch on file creation to match inotify * kqueue: create file event * inotify: ignore `IN_IGNORED` events * event String() * linux: common FileEvent functions * initial commit [#79]: https://github.com/howeyc/fsnotify/pull/79 [#77]: https://github.com/howeyc/fsnotify/pull/77 [#72]: https://github.com/howeyc/fsnotify/issues/72 [#71]: https://github.com/howeyc/fsnotify/issues/71 [#70]: https://github.com/howeyc/fsnotify/issues/70 [#63]: https://github.com/howeyc/fsnotify/issues/63 [#62]: https://github.com/howeyc/fsnotify/issues/62 [#60]: https://github.com/howeyc/fsnotify/issues/60 [#59]: https://github.com/howeyc/fsnotify/issues/59 [#49]: https://github.com/howeyc/fsnotify/issues/49 [#45]: https://github.com/howeyc/fsnotify/issues/45 [#40]: https://github.com/howeyc/fsnotify/issues/40 [#36]: https://github.com/howeyc/fsnotify/issues/36 [#33]: https://github.com/howeyc/fsnotify/issues/33 [#29]: https://github.com/howeyc/fsnotify/issues/29 [#25]: https://github.com/howeyc/fsnotify/issues/25 [#24]: https://github.com/howeyc/fsnotify/issues/24 [#21]: https://github.com/howeyc/fsnotify/issues/21" - }, - { - "additional_info": "* BSD/macOS: Fix possible deadlock on closing the watcher on kqueue (thanks @nhooyr and @glycerine) * Tests: Fix missing verb on format string (thanks @rchiossi) * Linux: Fix deadlock in Remove (thanks @aarondl) * Linux: Watch.Add improvements (avoid race, fix consistency, reduce garbage) (thanks @twpayne) * Docs: Moved FAQ into the README (thanks @vahe) * Linux: Properly handle inotify's IN_Q_OVERFLOW event (thanks @zeldovich) * Docs: replace references to OS X with macOS * 
Linux: use InotifyInit1 with IN_CLOEXEC to stop leaking a file descriptor to a child process when using fork/exec [#178](https://github.com/fsnotify/fsnotify/pull/178) (thanks @pattyshack) * Fix flaky inotify stress test on Linux [#177](https://github.com/fsnotify/fsnotify/pull/177) (thanks @pattyshack) * add a String() method to Event.Op [#165](https://github.com/fsnotify/fsnotify/pull/165) (thanks @oozie) * Windows: fix for double backslash when watching the root of a drive [#151](https://github.com/fsnotify/fsnotify/issues/151) (thanks @brunoqc) * Support linux/arm64 by [patching](https://go-review.googlesource.com/#/c/21971/) x/sys/unix and switching to it from syscall (thanks @suihkulokki) [#135](https://github.com/fsnotify/fsnotify/pull/135) * Fix golint errors in windows.go [#121](https://github.com/fsnotify/fsnotify/pull/121) (thanks @tiffanyfj) kqueue: Fix logic for CREATE after REMOVE [#111](https://github.com/fsnotify/fsnotify/pull/111) (thanks @bep) * kqueue: fix race condition in Close [#105](https://github.com/fsnotify/fsnotify/pull/105) (thanks @djui for reporting the issue and @ppknap for writing a failing test) * inotify: fix race in test * enable race detection for continuous integration (Linux, Mac, Windows) * inotify: use epoll_create1 for arm64 support (requires Linux 2.6.27 or later) [#100](https://github.com/fsnotify/fsnotify/pull/100) (thanks @suihkulokki) * inotify: fix path leaks [#73](https://github.com/fsnotify/fsnotify/pull/73) (thanks @chamaken) * kqueue: watch for rename events on subdirectories [#83](https://github.com/fsnotify/fsnotify/pull/83) (thanks @guotie) * kqueue: avoid infinite loops from symlink cycles [#101](https://github.com/fsnotify/fsnotify/pull/101) (thanks @illicitonion) * kqueue: don't watch named pipes [#98](https://github.com/fsnotify/fsnotify/pull/98) (thanks @evanphx) * inotify: use epoll to wake up readEvents [#66](https://github.com/fsnotify/fsnotify/pull/66) (thanks @PieterD) * inotify: closing watcher 
should now always shut down goroutine [#63](https://github.com/fsnotify/fsnotify/pull/63) (thanks @PieterD) * kqueue: close kqueue after removing watches, fixes [#59](https://github.com/fsnotify/fsnotify/issues/59) * inotify: Retry read on EINTR [#61](https://github.com/fsnotify/fsnotify/issues/61) (thanks @PieterD) * kqueue: rework internals [#43](https://github.com/fsnotify/fsnotify/pull/43) * add low-level functions * only need to store flags on directories * less mutexes [#13](https://github.com/fsnotify/fsnotify/issues/13) * done can be an unbuffered channel * remove calls to os.NewSyscallError * More efficient string concatenation for Event.String() [#52](https://github.com/fsnotify/fsnotify/pull/52) (thanks @mdlayher) * kqueue: fix regression in rework causing subdirectories to be watched [#48](https://github.com/fsnotify/fsnotify/issues/48) * kqueue: cleanup internal watch before sending remove event [#51](https://github.com/fsnotify/fsnotify/issues/51) * kqueue: add dragonfly to the build tags. * Rename source code files, rearrange code so exported APIs are at the top. * Add done channel to example code. [#37](https://github.com/fsnotify/fsnotify/pull/37) (thanks @chenyukang) * [Fix] Windows MOVED_TO now translates to Create like on BSD and Linux. [#36](https://github.com/fsnotify/fsnotify/issues/36) * [Fix] Missing create events on macOS. [#14](https://github.com/fsnotify/fsnotify/issues/14) (thanks @zhsso) * [Fix] Make ./path and path equivalent. (thanks @zhsso) * [API] Remove AddWatch on Windows, use Add. * Improve documentation for exported identifiers. [#30](https://github.com/fsnotify/fsnotify/issues/30) * Minor updates based on feedback from golint. * Moved to [github.com/fsnotify/fsnotify](https://github.com/fsnotify/fsnotify). * Use os.NewSyscallError instead of returning errno (thanks @hariharan-uno) * kqueue: fix incorrect mutex used in Close() * Update example to demonstrate usage of Op. 
* [API] Don't set the Write Op for attribute notifications [#4](https://github.com/fsnotify/fsnotify/issues/4) * Fix for String() method on Event (thanks Alex Brainman) * Don't build on Plan 9 or Solaris (thanks @4ad) * Events channel of type Event rather than *Event. * [internal] use syscall constants directly for inotify and kqueue. * [internal] kqueue: rename events to kevents and fileEvent to event. * Go 1.3+ required on Windows (uses syscall.ERROR_MORE_DATA internally). * [internal] remove cookie from Event struct (unused). * [internal] Event struct has the same definition across every OS. * [internal] remove internal watch and removeWatch methods. * [API] Renamed Watch() to Add() and RemoveWatch() to Remove(). * [API] Pluralized channel names: Events and Errors. * [API] Renamed FileEvent struct to Event. * [API] Op constants replace methods like IsCreate(). * Fix data race on kevent buffer (thanks @tilaks) [#98](https://github.com/howeyc/fsnotify/pull/98) * [API] Remove current implementation of WatchFlags. * current implementation doesn't take advantage of OS for efficiency * provides little benefit over filtering events as they are received, but has extra bookkeeping and mutexes * no tests for the current implementation * not fully implemented on Windows [#93](https://github.com/howeyc/fsnotify/issues/93#issuecomment-39285195) * kqueue: cleanup internal watch before sending remove event [#51](https://github.com/fsnotify/fsnotify/issues/51) * [Backport] Fix missing create events on macOS. [#14](https://github.com/fsnotify/fsnotify/issues/14) (thanks @zhsso) * Fix data race on kevent buffer (thanks @tilaks) [#98](https://github.com/howeyc/fsnotify/pull/98) * IsAttrib() for events that only concern a file's metadata [#79][] (thanks @abustany) * [Fix] kqueue: fix deadlock [#77][] (thanks @cespare) * [NOTICE] Development has moved to `code.google.com/p/go.exp/fsnotify` in preparation for inclusion in the Go standard library. 
* [API] Remove FD_SET and friends from Linux adapter * [Doc] Add Changelog [#72][] (thanks @nathany) * [Doc] Spotlight and double modify events on macOS [#62][] (reported by @paulhammond) * [Fix] kqueue: remove file watches when parent directory is removed [#71][] (reported by @mdwhatcott) * [Fix] kqueue: race between Close and readEvents [#70][] (reported by @bernerdschaefer) * [Doc] specify OS-specific limits in README (thanks @debrando) * [Doc] Contributing (thanks @nathany) * [Doc] update package path in example code [#63][] (thanks @paulhammond) * [Doc] GoCI badge in README (Linux only) [#60][] * [Doc] Cross-platform testing with Vagrant [#59][] (thanks @nathany) * [Fix] Windows: handle `ERROR_MORE_DATA` on Windows [#49][] (thanks @jbowtie) * [API] Make syscall flags internal * [Fix] inotify: ignore event changes * [Fix] race in symlink test [#45][] (reported by @srid) * [Fix] tests on Windows * lower case error messages * kqueue: Use EVT_ONLY flag on Darwin * [Doc] Update README with full example * [Fix] inotify: allow monitoring of \"broken\" symlinks (thanks @tsg) * [Fix] kqueue: watch all file events [#40][] (thanks @ChrisBuchholz) * [Fix] inotify/kqueue memory leak [#36][] (reported by @nbkolchin) * [Fix] kqueue: use fsnFlags for watching a directory [#33][] (reported by @nbkolchin) * [Doc] add Authors * [Fix] fix data races for map access [#29][] (thanks @fsouza) * [Fix] Windows path separators * [Doc] BSD License * kqueue: directory watching improvements (thanks @vmirage) * inotify: add `IN_MOVED_TO` [#25][] (requested by @cpisto) * [Fix] kqueue: deleting watched directory [#24][] (reported by @jakerr) * [Fix] inotify: fixes from https://codereview.appspot.com/5418045/ (ugorji) * [Fix] kqueue: preserve watch flags when watching for delete [#21][] (reported by @robfig) * [Fix] kqueue: watch the directory even if it isn't a new watch (thanks @robfig) * [Fix] kqueue: modify after recreation of file * [Fix] kqueue: watch with an existing folder inside the 
watched folder (thanks @vmirage) * [Fix] kqueue: no longer get duplicate CREATE events * kqueue: events for created directories * [Fix] for renaming files * [Feature] FSNotify flags * [Fix] inotify: Added file name back to event path * kqueue: watch files after directory created (thanks @tmc) * [Fix] inotify: remove all watches before Close() * [API] kqueue: return errors during watch instead of sending over channel * kqueue: match symlink behavior on Linux * inotify: add `DELETE_SELF` (requested by @taralx) * [Fix] kqueue: handle EINTR (reported by @robfig) * [Doc] Godoc example [#1][] (thanks @davecheney) * Go 1 released: build with go tool * [Feature] Windows support using winfsnotify * Windows does not have attribute change notifications * Roll attribute notifications into IsModify * kqueue: add files when watch directory * update to latest Go weekly code * kqueue: add watch on file creation to match inotify * kqueue: create file event * inotify: ignore `IN_IGNORED` events * event String() * linux: common FileEvent functions * initial commit [#79]: https://github.com/howeyc/fsnotify/pull/79 [#77]: https://github.com/howeyc/fsnotify/pull/77 [#72]: https://github.com/howeyc/fsnotify/issues/72 [#71]: https://github.com/howeyc/fsnotify/issues/71 [#70]: https://github.com/howeyc/fsnotify/issues/70 [#63]: https://github.com/howeyc/fsnotify/issues/63 [#62]: https://github.com/howeyc/fsnotify/issues/62 [#60]: https://github.com/howeyc/fsnotify/issues/60 [#59]: https://github.com/howeyc/fsnotify/issues/59 [#49]: https://github.com/howeyc/fsnotify/issues/49 [#45]: https://github.com/howeyc/fsnotify/issues/45 [#40]: https://github.com/howeyc/fsnotify/issues/40 [#36]: https://github.com/howeyc/fsnotify/issues/36 [#33]: https://github.com/howeyc/fsnotify/issues/33 [#29]: https://github.com/howeyc/fsnotify/issues/29 [#25]: https://github.com/howeyc/fsnotify/issues/25 [#24]: https://github.com/howeyc/fsnotify/issues/24 [#21]: https://github.com/howeyc/fsnotify/issues/21" - } 
- ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "CHANGES.md" - }, - "content": [ - { - "heading": "API v1 (gopkg.in/hpcloud/tail.v1)", - "data": "" - }, - { - "heading": "April, 2016", - "data": "* Migrated to godep, as depman is no longer supported\n * Introduced golang vendoring feature\n * Fixed issue [#57](https://github.com/hpcloud/tail/issues/57) related to reopening a deleted file" - }, - { - "heading": "July, 2015", - "data": "* Fix inotify watcher leak; remove `Cleanup` (#51)" - }, - { - "heading": "API v0 (gopkg.in/hpcloud/tail.v0)", - "data": "" - }, - { - "heading": "June, 2015", - "data": "* Don't return partial lines (PR #40)\n * Use stable version of fsnotify (#46)" - }, - { - "heading": "July, 2014", - "data": "* Fix tail for Windows (PR #36)" - }, - { - "heading": "May, 2014", - "data": "* Improved rate limiting using leaky bucket (PR #29)\n * Fix odd line splitting (PR #30)" - }, - { - "heading": "Apr, 2014", - "data": "* LimitRate now discards read buffer (PR #28)\n * allow reading of longer lines if MaxLineSize is unset (PR #24)\n * updated deps.json to latest fsnotify (441bbc86b1)" - }, - { - "heading": "Feb, 2014", - "data": "* added `Config.Logger` to suppress library logging" - }, - { - "heading": "Nov, 2013", - "data": "* add Cleanup to remove leaky inotify watches (PR #20)" - }, - { - "heading": "Aug, 2013", - "data": "* redesigned Location field (PR #12)\n * add tail.Tell (PR #14)" - }, - { - "heading": "July, 2013", - "data": "* Rate limiting (PR #10)" - }, - { - "heading": "May, 2013", - "data": "* Detect file deletions/renames in polling file watcher (PR #1)\n * Detect file truncation\n * Fix potential race condition when reopening the file (issue 5)\n * Fix potential blocking of `tail.Stop` (issue 4)\n * Fix uncleaned up ChangeEvents goroutines after calling tail.Stop\n * Support Follow=false" - }, - { - "heading": "Feb, 2013", - "data": "* Initial 
open source release" - }, - { - "additional_info": "* Migrated to godep, as depman is no longer supported * Introduced golang vendoring feature * Fixed issue [#57](https://github.com/hpcloud/tail/issues/57) related to reopening a deleted file * Fix inotify watcher leak; remove `Cleanup` (#51) * Don't return partial lines (PR #40) * Use stable version of fsnotify (#46) * Fix tail for Windows (PR #36) * Improved rate limiting using leaky bucket (PR #29) * Fix odd line splitting (PR #30) * LimitRate now discards read buffer (PR #28) * allow reading of longer lines if MaxLineSize is unset (PR #24) * updated deps.json to latest fsnotify (441bbc86b1) * added `Config.Logger` to suppress library logging * add Cleanup to remove leaky inotify watches (PR #20) * redesigned Location field (PR #12) * add tail.Tell (PR #14) * Rate limiting (PR #10) * Detect file deletions/renames in polling file watcher (PR #1) * Detect file truncation * Fix potential race condition when reopening the file (issue 5) * Fix potential blocking of `tail.Stop` (issue 4) * Fix uncleaned up ChangeEvents goroutines after calling tail.Stop * Support Follow=false * Initial open source release" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "CNIGenieFeatureSet.md" - }, - "content": [ - { - "heading": "Features covered in each CNI-Genie version:", - "data": "" - }, - { - "heading": "Existing features", - "data": "" - }, - { - "heading": "Feature 1: CNI-Genie \"Multiple CNI Plugins\"", - "data": "* Interface Connector to 3rd party CNI-Plugins. The user can [manually select one of the multiple CNI plugins](multiple-cni-plugins/README.md)" - }, - { - "heading": "Feature 2: CNI-Genie \"Multiple IP Addresses\"", - "data": "* Injects multiple IPs to a single container. 
The container is reachable using any of the [multiple IP Addresses](multiple-ips/README.md)" - }, - { - "heading": "Feature 3: CNI-Genie \"Network Attachment Definition\"", - "data": "* [Network Attachment Definition](network-attachment-definitions/README.md) feature incorporates Kubernetes Network Custom Resource Definition De-facto Standard in CNI-Genie" - }, - { - "heading": "Feature 4: CNI-Genie \"Smart CNI Plugin Selection\"", - "data": "* Intelligence in selecting the CNI plugin. CNI-Genie [watches the KPI of interest and selects](smart-cni-genie/README.md) the CNI plugin accordingly" - }, - { - "heading": "Feature 5: CNI-Genie \"Default Plugin Selection\"", - "data": "* Support to set default plugin of user choice to be used for all the pods being created" - }, - { - "heading": "Feature 6: CNI-Genie \"Network Isolation\"", - "data": "* Dedicated 'physical' network for a tenant\n * Isolated 'logical' networks for different tenants on a shared 'physical' network" - }, - { - "heading": "Future features", - "data": "" - }, - { - "heading": "Feature 7: CNI-Genie \"Network Policy Engine\"", - "data": "* [CNI-Genie network policy engine](network-policy/README.md) allows for network level ACLs" - }, - { - "heading": "Feature 8: CNI-Genie \"Real-time Network Switching\"", - "data": "* Price minimization: dynamically switching workload to a cheaper network as network prices change * Maximizing network utilization: dynamically switching workload to the less congested network at a threshold" - }, - { - "additional_info": "* Interface Connector to 3rd party CNI-Plugins. The user can [manually select one of the multiple CNI plugins](multiple-cni-plugins/README.md) * Injects multiple IPs to a single container. 
The container is reachable using any of the [multiple IP Addresses](multiple-ips/README.md) * [Network Attachment Definition](network-attachment-definitions/README.md) feature incorporates Kubernetes Network Custom Resource Definition De-facto Standard in CNI-Genie * Intelligence in selecting the CNI plugin. CNI-Genie [watches the KPI of interest and selects](smart-cni-genie/README.md) the CNI plugin accordingly * Support to set default plugin of user choice to be used for all the pods being created * Dedicated 'physical' network for a tenant * Isolated 'logical' networks for different tenants on a shared 'physical' network * [CNI-Genie network policy engine](network-policy/README.md) allows for network level ACLs * Price minimization: dynamically switching workload to a cheaper network as network prices change * Maximizing network utilization: dynamically switching workload to the less congested network at a threshold" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "code-of-conduct.md" - }, - "content": [ - { - "heading": "Kubernetes Community Code of Conduct", - "data": "Please refer to our [Kubernetes Community Code of Conduct](https://git.k8s.io/community/code-of-conduct.md)" - }, - { - "additional_info": "Please refer to our [](https://git.k8s.io/community/code-of-conduct.md)" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "CODE_OF_CONDUCT.md" - }, - "content": [ - { - "heading": "Contributor Covenant Code of Conduct", - "data": "" - }, - { - "heading": "Our Pledge", - "data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of 
experience, nationality, personal appearance, race, religion, or sexual identity and orientation." - }, - { - "heading": "Our Standards", - "data": "Examples of behavior that contributes to creating a positive environment include:\n * Using welcoming and inclusive language\n * Being respectful of differing viewpoints and experiences\n * Gracefully accepting constructive criticism\n * Focusing on what is best for the community\n * Showing empathy towards other community members\n Examples of unacceptable behavior by participants include:\n * The use of sexualized language or imagery and unwelcome sexual attention or advances\n * Trolling, insulting/derogatory comments, and personal or political attacks\n * Public or private harassment\n * Publishing others' private information, such as a physical or electronic address, without explicit permission\n * Other conduct which could reasonably be considered inappropriate in a professional setting" - }, - { - "heading": "Our Responsibilities", - "data": "Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.\n Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful." - }, - { - "heading": "Scope", - "data": "This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. 
Representation of a project may be further defined and clarified by project maintainers." - }, - { - "heading": "Enforcement", - "data": "Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at i@dario.im. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.\n Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership." - }, - { - "heading": "Attribution", - "data": "This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version] [homepage]: http://contributor-covenant.org [version]: http://contributor-covenant.org/version/1/4/" - }, - { - "additional_info": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. 
Examples of behavior that contributes to creating a positive environment include: * Using welcoming and inclusive language * Being respectful of differing viewpoints and experiences * Gracefully accepting constructive criticism * Focusing on what is best for the community * Showing empathy towards other community members Examples of unacceptable behavior by participants include: * The use of sexualized language or imagery and unwelcome sexual attention or advances * Trolling, insulting/derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or electronic address, without explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at i@dario.im. 
The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version] [homepage]: http://contributor-covenant.org [version]: http://contributor-covenant.org/version/1/4/" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "CONTRIBUTING.md" - }, - "content": [ - { - "heading": "Contributing Guidelines", - "data": "Welcome to Kubernetes. We are excited about the prospect of you joining our [community](https://github.com/kubernetes/community)! The Kubernetes community abides by the CNCF [code of conduct](code-of-conduct.md). 
Here is an excerpt:\n _As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities._" - }, - { - "heading": "Getting Started", - "data": "We have full documentation on how to get started contributing here:\n - [Contributor License Agreement](https://git.k8s.io/community/CLA.md) Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests\n - [Kubernetes Contributor Guide](http://git.k8s.io/community/contributors/guide) - Main contributor documentation, or you can just jump directly to the [contributing section](http://git.k8s.io/community/contributors/guide#contributing)\n - [Contributor Cheat Sheet](https://git.k8s.io/community/contributors/guide/contributor-cheatsheet.md) - Common resources for existing developers" - }, - { - "heading": "Mentorship", - "data": "- [Mentoring Initiatives](https://git.k8s.io/community/mentoring) - We have a diverse set of mentorship programs available that are always looking for volunteers!" - }, - { - "heading": "Contact Information", - "data": "- [Slack](https://kubernetes.slack.com/messages/sig-architecture)\n - [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-architecture)\n the first time you can add custom content here, for example:" - }, - { - "heading": "Contact Information", - "data": "- [Slack channel](https://kubernetes.slack.com/messages/kubernetes-users) - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel. - [Mailing list](URL) -->" - }, - { - "additional_info": "Welcome to Kubernetes. We are excited about the prospect of you joining our [community](https://github.com/kubernetes/community)! 
The Kubernetes community abides by the CNCF [code of conduct](code-of-conduct.md). Here is an excerpt: _As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities._ We have full documentation on how to get started contributing here: - [Contributor License Agreement](https://git.k8s.io/community/CLA.md) Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - [Kubernetes Contributor Guide](http://git.k8s.io/community/contributors/guide) - Main contributor documentation, or you can just jump directly to the [contributing section](http://git.k8s.io/community/contributors/guide#contributing) - [Contributor Cheat Sheet](https://git.k8s.io/community/contributors/guide/contributor-cheatsheet.md) - Common resources for existing developers - [Mentoring Initiatives](https://git.k8s.io/community/mentoring) - We have a diverse set of mentorship programs available that are always looking for volunteers! - [Slack](https://kubernetes.slack.com/messages/sig-architecture) - [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-architecture) the first time you can add custom content here, for example: - [Slack channel](https://kubernetes.slack.com/messages/kubernetes-users) - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel. 
- [Mailing list](URL) -->" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "developer-guide.md" - }, - "content": [ - { - "heading": "Developer's Guide", - "data": "" - }, - { - "heading": "Build process", - "data": "After making any modification to source files, below steps can be followed to build and use the new binary.\n Note that you should install genie first before making changes to the source. This ensures genie conf file is generated successfully.\n Please make sure to run the below commands with root privilege." - }, - { - "heading": "*Building and Using CNI-Genie plugin:", - "data": "Build genie binary by running:\n Place \"genie\" binary from dest/ into /opt/cni/bin/ directory." - }, - { - "heading": "Test process", - "data": "" - }, - { - "heading": "prerequisites", - "data": "A running kubernetes cluster is required to run the tests." - }, - { - "heading": "Running the tests", - "data": "To run ginkgo tests for CNI-Genie run the following command: If Kubernetes cluster is 1.7+ If Kubernetes cluster is 1.5.x" - }, - { - "additional_info": "After making any modification to source files, below steps can be followed to build and use the new binary. Note that you should install genie first before making changes to the source. This ensures genie conf file is generated successfully. Please make sure to run the below commands with root privilege. Build genie binary by running: ``` make plugin ``` Place \"genie\" binary from dest/ into /opt/cni/bin/ directory. ``` cp dist/genie /opt/cni/bin/genie ``` A running kubernetes cluster is required to run the tests. 
To run ginkgo tests for CNI-Genie run the following command: If Kubernetes cluster is 1.7+ ``` make test testKubeVersion=1.7 testKubeConfig=/etc/kubernetes/admin.conf ``` If Kubernetes cluster is 1.5.x ``` make test testKubeVersion=1.5" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "external-build-dependencies.md" - }, - "content": [ - { - "heading": "CNI-Genie External Build Dependencies", - "data": "| Software | License | Repo Link |---|---|---| |k8s.io/api | Apache License 2.0 |https://github.com/kubernetes/api |k8s.io/apimachinery | Apache License 2.0 |https://github.com/kubernetes/apimachinery |k8s.io/client-go | Apache License 2.0 |https://github.com/kubernetes/client-go |k8s.io/kube-openapi | Apache License 2.0 |https://github.com/kubernetes/kube-openapi |fsnotify |BSD-3-Clause | https://github.com/fsnotify/fsnotify |inf | BSD-3-Clause |https://github.com/go-inf/inf |tomb.v1 | BSD-3-Clause |https://github.com/go-tomb/tomb/tree/v1 |yaml.v2 | Apache License 2.0 |https://github.com/go-yaml/yaml/tree/v2 |crypto | BSD-3-Clause | https://github.com/golang/crypto |exp | BSD-3-Clause | https://github.com/golang/exp |net | BSD-3-Clause | https://github.com/golang/net |sys | BSD-3-Clause | https://github.com/golang/sys |text | BSD-3-Clause | https://github.com/golang/text |time | BSD-3-Clause | https://github.com/golang/time |cni | Apache License 2.0 | https://github.com/containernetworking/cni |go-iptables| Apache License 2.0 | https://github.com/coreos/go-iptables |go-spew |ISC | https://github.com/davecgh/go-spew |yaml |MIT | https://github.com/ghodss/yaml |protobuf |BSD-3-Clause | https://github.com/gogo/protobuf |golang/glog |Apache License 2.0 | https://github.com/golang/glog |golang/groupcache |Apache License 2.0 | https://github.com/golang/groupcache |golang/protobuf |Apache License 2.0 | https://github.com/golang/protobuf |cadvisor |Apache License 2.0 | 
https://github.com/google/cadvisor |gofuzz |Apache License 2.0 | https://github.com/google/gofuzz |btree |Apache License 2.0 | https://github.com/google/btree |gnostic |Apache License 2.0 | https://github.com/googleapis/gnostic |httpcache | MIT| https://github.com/gregjones/httpcache |golang-lru |MPL-2.0 | https://github.com/hashicorp/golang-lru |tail |MIT | https://github.com/hpcloud/tail |mergo |BSD-3-Clause | https://github.com/imdario/mergo |json-iterator/go |MIT | https://github.com/json-iterator/go |concurrent |Apache License 2.0 | https://github.com/modern-go/concurrent |reflect2 |Apache License 2.0 | https://github.com/modern-go/reflect2 |ginkgo |MIT | https://github.com/onsi/ginkgo |gomega |MIT | https://github.com/onsi/gomega |GoLLRB |BSD-3-Clause | https://github.com/petar/GoLLRB |diskv | MIT| https://github.com/peterbourgon/diskv |spf13/pflag |MIT | https://github.com/spf13/pflag" - }, - { - "additional_info": "| Software | License | Repo Link |---|---|---| |k8s.io/api | Apache License 2.0 |https://github.com/kubernetes/api |k8s.io/apimachinery | Apache License 2.0 |https://github.com/kubernetes/apimachinery |k8s.io/client-go | Apache License 2.0 |https://github.com/kubernetes/client-go |k8s.io/kube-openapi | Apache License 2.0 |https://github.com/kubernetes/kube-openapi |fsnotify |BSD-3-Clause | https://github.com/fsnotify/fsnotify |inf | BSD-3-Clause |https://github.com/go-inf/inf |tomb.v1 | BSD-3-Clause |https://github.com/go-tomb/tomb/tree/v1 |yaml.v2 | Apache License 2.0 |https://github.com/go-yaml/yaml/tree/v2 |crypto | BSD-3-Clause | https://github.com/golang/crypto |exp | BSD-3-Clause | https://github.com/golang/exp |net | BSD-3-Clause | https://github.com/golang/net |sys | BSD-3-Clause | https://github.com/golang/sys |text | BSD-3-Clause | https://github.com/golang/text |time | BSD-3-Clause | https://github.com/golang/time |cni | Apache License 2.0 | https://github.com/containernetworking/cni |go-iptables| Apache License 2.0 | 
https://github.com/coreos/go-iptables |go-spew |ISC | https://github.com/davecgh/go-spew |yaml |MIT | https://github.com/ghodss/yaml |protobuf |BSD-3-Clause | https://github.com/gogo/protobuf |golang/glog |Apache License 2.0 | https://github.com/golang/glog |golang/groupcache |Apache License 2.0 | https://github.com/golang/groupcache |golang/protobuf |Apache License 2.0 | https://github.com/golang/protobuf |cadvisor |Apache License 2.0 | https://github.com/google/cadvisor |gofuzz |Apache License 2.0 | https://github.com/google/gofuzz |btree |Apache License 2.0 | https://github.com/google/btree |gnostic |Apache License 2.0 | https://github.com/googleapis/gnostic |httpcache | MIT| https://github.com/gregjones/httpcache |golang-lru |MPL-2.0 | https://github.com/hashicorp/golang-lru |tail |MIT | https://github.com/hpcloud/tail |mergo |BSD-3-Clause | https://github.com/imdario/mergo |json-iterator/go |MIT | https://github.com/json-iterator/go |concurrent |Apache License 2.0 | https://github.com/modern-go/concurrent |reflect2 |Apache License 2.0 | https://github.com/modern-go/reflect2 |ginkgo |MIT | https://github.com/onsi/ginkgo |gomega |MIT | https://github.com/onsi/gomega |GoLLRB |BSD-3-Clause | https://github.com/petar/GoLLRB |diskv | MIT| https://github.com/peterbourgon/diskv |spf13/pflag |MIT | https://github.com/spf13/pflag" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "fuzzy_mode_convert_table.md" - }, - "content": [ - { - "additional_info": "| json type \ dest type | bool | int | uint | float |string| | --- | --- | --- | --- |--|--| | number | positive => true
negative => true
zero => false| 23.2 => 23
-32.1 => -32| 12.1 => 12
-12.1 => 0|as normal|same as origin| | string | empty string => false
string \"0\" => false
other strings => true | \"123.32\" => 123
\"-123.4\" => -123
\"123.23xxxw\" => 123
\"abcde12\" => 0
\"-32.1\" => -32| 13.2 => 13
-1.1 => 0 |12.1 => 12.1
-12.3 => -12.3
12.4xxa => 12.4
+1.1e2 =>110 |same as origin| | bool | true => true
false => false| true => 1
false => 0 | true => 1
false => 0 |true => 1
false => 0|true => \"true\"
false => \"false\"| | object | true | 0 | 0 |0|original json| | array | empty array => false
nonempty array => true| [] => 0
[1,2] => 1 | [] => 0
[1,2] => 1 |[] => 0
[1,2] => 1|original json|" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "GettingStarted.md" - }, - "content": [ - { - "heading": "Getting started", - "data": "" - }, - { - "heading": "Prerequisite", - "data": "* Linux box with\n * We tested on Ubuntu 14.04 & 16.04\n * Docker installed\n * Kubernetes cluster running with CNI enabled\n * One easy way to bring up a cluster is to use [kubeadm](https://kubernetes.io/docs/getting-started-guides/kubeadm/):\n * We tested on Kubernetes 1.5, 1.6, 1.7, 1.8\n \n Till 1.7 version:\n ```\n $ kubeadm init --use-kubernetes-version=v1.7.0 --pod-network-cidr=10.244.0.0/16\n ```\n Version 1.8 onwards:\n ```\n $ kubeadm init --pod-network-cidr=10.244.0.0/16\n ```\n Next steps:\n ```\n $ mkdir -p $HOME/.kube\n $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n $ sudo chown $(id -u):$(id -g) $HOME/.kube/config\n ```\n * To schedule pods on the master, e.g. for a single-machine Kubernetes cluster,\n \n Till 1.7 version, run:\n ```\n $ kubectl taint nodes --all dedicated-\n ```\n Version 1.8 onwards, run:\n ```\n $ kubectl taint nodes --all node-role.kubernetes.io/master-\n ```\n \n * One (or more) CNI plugin(s) installed, e.g., Calico, Weave, Flannel\n * Use this [link](https://docs.projectcalico.org/v3.2/getting-started/kubernetes) to install Calico\n * Use this [link](https://www.weave.works/docs/net/latest/kube-addon/) to install Weave\n * Use this [link](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) to install Flannel" - }, - { - "heading": "Installing genie", - "data": "We install genie as a Docker Container on every node\n Till Kubernetes 1.7 version:\n Kubernetes 1.8 version onwards:" - }, - { - "heading": "Building, Testing, Making changes to source code", - "data": "Refer to our [Developer's Guide](developer-guide.md) section." 
- }, - { - "heading": "Genie Logs", - "data": "For now Genie logs are stored in /var/log/syslog\n To see the logs:" - }, - { - "heading": "Troubleshooting", - "data": "* Note: on a single-node cluster, after your Kubernetes master is initialized successfully, make sure you are able to schedule pods on the master by running: * Note: most plugins use different installation files for Kubernetes 1.5, 1.6, 1.7 & 1.8. Make sure you use the right one!" - }, - { - "additional_info": "* Linux box with * We tested on Ubuntu 14.04 & 16.04 * Docker installed * Kubernetes cluster running with CNI enabled * One easy way to bring up a cluster is to use [kubeadm](https://kubernetes.io/docs/getting-started-guides/kubeadm/): * We tested on Kubernetes 1.5, 1.6, 1.7, 1.8 Till 1.7 version: ``` $ kubeadm init --use-kubernetes-version=v1.7.0 --pod-network-cidr=10.244.0.0/16 ``` Version 1.8 onwards: ``` $ kubeadm init --pod-network-cidr=10.244.0.0/16 ``` Next steps: ``` $ mkdir -p $HOME/.kube $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config $ sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` * To schedule pods on the master, e.g. 
for a single-machine Kubernetes cluster, Till 1.7 version, run: ``` $ kubectl taint nodes --all dedicated- ``` Version 1.8 onwards, run: ``` $ kubectl taint nodes --all node-role.kubernetes.io/master- ``` * One (or more) CNI plugin(s) installed, e.g., Calico, Weave, Flannel * Use this [link](https://docs.projectcalico.org/v3.2/getting-started/kubernetes) to install Calico * Use this [link](https://www.weave.works/docs/net/latest/kube-addon/) to install Weave * Use this [link](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) to install Flannel We install genie as a Docker Container on every node Till Kubernetes 1.7 version: ``` $ kubectl apply -f https://raw.githubusercontent.com/cni-genie/CNI-Genie/master/conf/1.5/genie.yaml ``` Kubernetes 1.8 version onwards: ``` $ kubectl apply -f https://raw.githubusercontent.com/cni-genie/CNI-Genie/master/releases/v3.0/genie.yaml ``` Refer to our [Developer's Guide](developer-guide.md) section. For now Genie logs are stored in /var/log/syslog To see the logs: ``` $ cat /dev/null > /var/log/syslog $ tail -f /var/log/syslog | grep 'CNI' ``` * Note: on a single-node cluster, after your Kubernetes master is initialized successfully, make sure you are able to schedule pods on the master by running: ``` $ kubectl taint nodes --all node-role.kubernetes.io/master- ``` * Note: most plugins use different installation files for Kubernetes 1.5, 1.6, 1.7 & 1.8. Make sure you use the right one!" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "HLD.md" - }, - "content": [ - { - "heading": "You can find here our [existing & future features covered in CNI-Genie](CNIGenieFeatureSet.md)", - "data": "" - }, - { - "heading": "CNI Genie High Level Design", - "data": "" - }, - { - "heading": "Overview", - "data": "From the viewpoint of the Kubernetes kubelet, CNI-Genie is treated the same as any other CNI plugin. 
As a result, no changes to Kubernetes are required. CNI-Genie proxies for all of the CNI plugins available on the host, each providing a unique container networking solution.\n We start Kubelet with **\"genie\"** as the CNI **\"type\"**. Note that for this to work we must have already placed the **genie** binary under /opt/cni/bin as detailed in [getting started](GettingStarted.md)\n * This is done by passing /etc/cni/net.d/genie.conf to kubelet" - }, - { - "heading": "Detailed workflow", - "data": "A detailed illustration of the workflow is given in the following figure: ![](CNIGenieDetailedWorkflow.png) * Step 1.\ta \u201cPod\u201d object is submitted to API Server by the user * Step 2.\tScheduler schedules the pod to one of the slave nodes * Step 3.\tKubelet of the slave node picks up the pod from API Server and creates corresponding container * Step 4.\tKubelet passes the following to CNI-Genie * a.\tCNI_COMMAND * b.\tCNI_CONTAINERID * c.\tCNI_NETNS * d.\tCNI_ARGS (K8S_POD_NAMESPACE, K8S_POD_NAME) * e.\tCNI_IFNAME (always eth0, please see kubernetes/pkg/kubelet/network/network.go) * Step 5.\tCNI-Genie queries API Server with K8S_POD_NAMESPACE, K8S_POD_NAME to get the \u201cpod\u201d object, from which it parses \u201ccni\u201d plugin type, e.g., canal, weave * Step 6.\tCNI-Genie queries the cni plugin of choice with parameters from Step 4 to get IP Address(es) for the pod * Step 7.\tCNI-Genie returns the IP Address(es) to Kubelet * Step 8.\tKubelet updates the \u201cPod\u201d object with the IP Address(es) passed from CNI-Genie" - }, - { - "additional_info": "From the viewpoint of the Kubernetes kubelet, CNI-Genie is treated the same as any other CNI plugin. As a result, no changes to Kubernetes are required. CNI-Genie proxies for all of the CNI plugins available on the host, each providing a unique container networking solution. We start Kubelet with **\"genie\"** as the CNI **\"type\"**. 
Note that for this to work we must have already placed **genie** binary under /opt/cni/bin as detailed in [getting started]( GettingStarted.md) * This is done by passing /etc/cni/net.d/genie.conf to kubelet ```json { \"name\": \"k8s-pod-network\", \"type\": \"genie\", \"etcd_endpoints\": \"http://10.96.232.136:6666\", \"log_level\": \"debug\", \"policy\": { \"type\": \"k8s\", \"k8s_api_root\": \"https://10.96.0.1:443\", \"k8s_auth_token\": \"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjYWxpY28tY25pLXBsdWdpbi10b2tlbi13Zzh3OSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjYWxpY28tY25pLXBsdWdpbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImJlZDY2NTE3LTFiZjItMTFlNy04YmU5LWZhMTYzZTRkZWM2NyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpjYWxpY28tY25pLXBsdWdpbiJ9.GEAcibv-urfWRGTSK0gchlCB6mtCxbwnfgxgJYdEKRLDjo7Sjyekg5lWPJoMopzzPu8_-Tddd-yPZDJc44NCGRep7_ovjjJdlQvjhc0g1XA7NS8W0OMNHUJAzueyn4iuEwDHR7oNS_nwMqsfzgCsiIRkc7NkQDtKaBj8GOYTz9126zk37TqXylh7hMKlwDFkv9vCBcPv-nYU22UM67Ux6emAtf1g1Yw9i8EfOkbuqURir66jtcnwh3HLPSYMAEyADxYtYAxG9Ca-HhdXXsvnQxhd4P0h2ctgg0_NLTO6WRX47C3GNheLmq0tNttFXya0mHhcElSPQFZftzGw8ZvxTQ\" }, \"kubernetes\": { \"kubeconfig\": \"/etc/cni/net.d/genie-kubeconfig\" } } ``` A detailed illustration of the workflow is given in the following figure: ![](CNIGenieDetailedWorkflow.png) * Step 1.\ta \u201cPod\u201d object is submitted to API Server by the user * Step 2.\tScheduler schedules the pod to one of the slave nodes * Step 3.\tKubelet of the slave node picks up the pod from API Server and creates corresponding container * Step 4.\tKubelet passes the following to CNI-Genie * a.\tCNI_COMMAND * b.\tCNI_CONTAINERID * c.\tCNI_NETNS * d.\tCNI_ARGS (K8S_POD_NAMESPACE, K8S_POD_NAME) * e.\tCNI_IFNAME (always eth0, please see 
kubernetes/pkg/kubelet/network/network.go) * Step 5.\tCNI-Genie queries API Server with K8S_POD_NAMESPACE, K8S_POD_NAME to get the \u201cpod\u201d object, from which it parses \u201ccni\u201d plugin type, e.g., canal, weave * Step 6.\tCNI-Genie queries the cni plugin of choice with parameters from Step 4 to get IP Address(es) for the pod * Step 7.\tCNI-Genie returns the IP Address(es) to Kubelet * Step 8.\tKubelet updates the \u201cPod\u201d object with the IP Address(es) passed from CNI-Genie" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "INTRODUCTION.md" - }, - "content": [ - { - "heading": "CNI Genie: generic CNI network plugin", - "data": "CNI Genie is an add-on to the [Kubernetes](https://github.com/kubernetes/kubernetes) open-source project and is designed to provide the following features: 1. Multiple CNI plugins are available to users at runtime. The user can offer any of the available CNI plugins to containers upon creating them - User-story: based on \u2018performance\u2019 requirements, \u2018application\u2019 requirements, \u201cworkload placement\u201d requirements, the user could be interested to use different CNI plugins for different application groups - Different CNI plugins are different in terms of need for port-mapping, NAT, tunneling, interrupting host ports/interfaces 2. Multiple IP addresses can be injected into a single container making the container reachable across multiple networks - User-story: in a serverless platform the \u201cRequest Dispatcher\u201d container that receives requests from customers of all different tenants needs to be able to pass the request to the right tenant. As a result, it should be reachable on the networks of all tenants - User-story: many Telecom vendors are adopting container technology. For a router/firewall application to run in a container, it needs to have multiple interfaces 3. 
Upon creating a pod, the user can manually select the logical network, or multiple logical networks, that the pod should be added to 4. If upon creating a pod no logical network is included in the yaml configuration, CNI Genie will automatically select one of the available CNI plugins - CNI Genie maintains a list of KPIs for all available CNI plugins. Examples of such KPIs are occupancy rate, number of subnets, response times 5. CNI Genie stores records of requests made to each CNI plugin for logging and auditing purposes and it can generate reports upon request 6. Network policy 7. Network access control Note: CNI Genie is NOT a routing solution! It gets IP addresses from various CNSs" - }, - { - "additional_info": "CNI Genie is an add-on to the [Kubernetes](https://github.com/kubernetes/kubernetes) open-source project and is designed to provide the following features: 1. Multiple CNI plugins are available to users at runtime. The user can offer any of the available CNI plugins to containers upon creating them - User-story: based on \u2018performance\u2019 requirements, \u2018application\u2019 requirements, \u201cworkload placement\u201d requirements, the user could be interested to use different CNI plugins for different application groups - Different CNI plugins are different in terms of need for port-mapping, NAT, tunneling, interrupting host ports/interfaces 2. Multiple IP addresses can be injected into a single container making the container reachable across multiple networks - User-story: in a serverless platform the \u201cRequest Dispatcher\u201d container that receives requests from customers of all different tenants needs to be able to pass the request to the right tenant. As a result, it should be reachable on the networks of all tenants - User-story: many Telecom vendors are adopting container technology. For a router/firewall application to run in a container, it needs to have multiple interfaces 3. 
Upon creating a pod, the user can manually select the logical network, or multiple logical networks, that the pod should be added to 4. If upon creating a pod no logical network is included in the yaml configuration, CNI Genie will automatically select one of the available CNI plugins - CNI Genie maintains a list of KPIs for all available CNI plugins. Examples of such KPIs are occupancy rate, number of subnets, response times 5. CNI Genie stores records of requests made to each CNI plugin for logging and auditing purposes and it can generate reports upon request 6. Network policy 7. Network access control Note: CNI Genie is NOT a routing solution! It gets IP addresses from various CNSs" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "README.md" - }, - "content": [ - { - "heading": "YAML marshaling and unmarshaling support for Go", - "data": "[![Build Status](https://travis-ci.org/ghodss/yaml.svg)](https://travis-ci.org/ghodss/yaml)" - }, - { - "heading": "Introduction", - "data": "A wrapper around [go-yaml](https://github.com/go-yaml/yaml) designed to enable a better way of handling YAML when marshaling to and from structs.\n In short, this library first converts YAML to JSON using go-yaml and then uses `json.Marshal` and `json.Unmarshal` to convert to or from the struct. This means that it effectively reuses the JSON struct tags as well as the custom JSON methods `MarshalJSON` and `UnmarshalJSON` unlike go-yaml. For a detailed overview of the rationale behind this method, [see this blog post](http://ghodss.com/2014/the-right-way-to-handle-yaml-in-golang/)." - }, - { - "heading": "Compatibility", - "data": "This package uses [go-yaml](https://github.com/go-yaml/yaml) and therefore supports [everything go-yaml supports](https://github.com/go-yaml/yaml#compatibility)." 
- }, - { - "heading": "Caveats", - "data": "**Caveat #1:** When using `yaml.Marshal` and `yaml.Unmarshal`, binary data should NOT be preceded with the `!!binary` YAML tag. If you do, go-yaml will convert the binary data from base64 to native binary data, which is not compatible with JSON. You can still use binary in your YAML files though - just store them without the `!!binary` tag and decode the base64 in your code (e.g. in the custom JSON methods `MarshalJSON` and `UnmarshalJSON`). This also has the benefit that your YAML and your JSON binary data will be decoded exactly the same way. As an example:\n **Caveat #2:** When using `YAMLToJSON` directly, maps with keys that are maps will result in an error since this is not supported by JSON. This error will occur in `Unmarshal` as well since you can't unmarshal map keys anyways since struct fields can't be keys." - }, - { - "heading": "Installation and usage", - "data": "To install, run: And import using: Usage is very similar to the JSON library: `yaml.YAMLToJSON` and `yaml.JSONToYAML` methods are also available:" - }, - { - "additional_info": "[![Build Status](https://travis-ci.org/ghodss/yaml.svg)](https://travis-ci.org/ghodss/yaml) A wrapper around [go-yaml](https://github.com/go-yaml/yaml) designed to enable a better way of handling YAML when marshaling to and from structs. In short, this library first converts YAML to JSON using go-yaml and then uses `json.Marshal` and `json.Unmarshal` to convert to or from the struct. This means that it effectively reuses the JSON struct tags as well as the custom JSON methods `MarshalJSON` and `UnmarshalJSON` unlike go-yaml. For a detailed overview of the rationale behind this method, [see this blog post](http://ghodss.com/2014/the-right-way-to-handle-yaml-in-golang/). This package uses [go-yaml](https://github.com/go-yaml/yaml) and therefore supports [everything go-yaml supports](https://github.com/go-yaml/yaml#compatibility). 
**Caveat #1:** When using `yaml.Marshal` and `yaml.Unmarshal`, binary data should NOT be preceded with the `!!binary` YAML tag. If you do, go-yaml will convert the binary data from base64 to native binary data, which is not compatible with JSON. You can still use binary in your YAML files though - just store them without the `!!binary` tag and decode the base64 in your code (e.g. in the custom JSON methods `MarshalJSON` and `UnmarshalJSON`). This also has the benefit that your YAML and your JSON binary data will be decoded exactly the same way. As an example: ``` BAD: exampleKey: !!binary gIGC GOOD: exampleKey: gIGC ... and decode the base64 data in your code. ``` **Caveat #2:** When using `YAMLToJSON` directly, maps with keys that are maps will result in an error since this is not supported by JSON. This error will occur in `Unmarshal` as well since you can't unmarshal map keys anyways since struct fields can't be keys. To install, run: ``` $ go get github.com/ghodss/yaml ``` And import using: ``` import \"github.com/ghodss/yaml\" ``` Usage is very similar to the JSON library: ```go package main import ( \"fmt\" \"github.com/ghodss/yaml\" ) type Person struct { Name string `json:\"name\"` // Affects YAML field names too. Age int `json:\"age\"` } func main() { // Marshal a Person struct to YAML. p := Person{\"John\", 30} y, err := yaml.Marshal(p) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(y)) /* Output: age: 30 name: John */ // Unmarshal the YAML back into a Person struct. 
var p2 Person err = yaml.Unmarshal(y, &p2) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(p2) /* Output: {John 30} */ } ``` `yaml.YAMLToJSON` and `yaml.JSONToYAML` methods are also available: ```go package main import ( \"fmt\" \"github.com/ghodss/yaml\" ) func main() { j := []byte(`{\"name\": \"John\", \"age\": 30}`) y, err := yaml.JSONToYAML(j) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(y)) /* Output: name: John age: 30 */ j2, err := yaml.YAMLToJSON(y) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(j2)) /* Output: {\"age\":30,\"name\":\"John\"} */ } ```" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "RELEASE.md" - }, - "content": [ - { - "heading": "Release Process", - "data": "The `yaml` Project is released on an as-needed basis. The process is as follows: 1. An issue is proposing a new release with a changelog since the last release 1. All [OWNERS](OWNERS) must LGTM this release 1. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION` 1. The release issue is closed 1. An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`" - }, - { - "additional_info": "The `yaml` Project is released on an as-needed basis. The process is as follows: 1. An issue is proposing a new release with a changelog since the last release 1. All [OWNERS](OWNERS) must LGTM this release 1. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION` 1. The release issue is closed 1. 
An announcement email is sent to `kubernetes-dev@googlegroups.com` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "RELEASING.md" - }, - "content": [ - { - "additional_info": "A Gomega release is a tagged sha and a GitHub release. To cut a release: 1. Ensure CHANGELOG.md is up to date. - Use `git log --pretty=format:'- %s [%h]' HEAD...vX.X.X` to list all the commits since the last release - Categorize the changes into - Breaking Changes (requires a major version) - New Features (minor version) - Fixes (fix version) - Maintenance (which in general should not be mentioned in `CHANGELOG.md` as they have no user impact) 2. Update GOMEGA_VERSION in `gomega_dsl.go` 3. Push a commit with the version number as the commit message (e.g. `v1.3.0`) 4. Create a new [GitHub release](https://help.github.com/articles/creating-releases/) with the version number as the tag (e.g. `v1.3.0`). List the key changes in the release notes." 
- } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "ROADMAP-old.md" - }, - "content": [ - { - "heading": "CNI-Genie Roadmap", - "data": "" - }, - { - "heading": "Openness", - "data": "- Enhancements as per Kubernetes Network Plumbing Working Group conclusions/decisions\n - CNI version upgrade based on new CNI version release\n - Support pod level network policy to co-exist with network level policy\n - Enhancement of network crd objects to provide more CNI customizations\n - Enhance network smart selection mechanism\n - New requirement/usecase support based on users demands" - }, - { - "heading": "User experience", - "data": "- Integrate genie with other ecosystem projects (e.g., kubespray)\n - Helm charts based on updated features\n - Verification and user guide update for usage of SR-IOV, DPDK" - }, - { - "heading": "Stability", - "data": "- E2E test suite additions/improvements - Enhance logging mechanisms" - }, - { - "additional_info": "- Enhancements as per Kubernetes Network Plumbing Working Group conclusions/decisions - CNI version upgrade based on new CNI version release - Support pod level network policy to co-exist with network level policy - Enhancement of network crd objects to provide more CNI customizations - Enhance network smart selection mechanism - New requirement/usecase support based on users demands - Integrate genie with other ecosystem projects (e.g., kubespray) - Helm charts based on updated features - Verification and user guide update for usage of SR-IOV, DPDK - E2E test suite additions/improvements - Enhance logging mechanisms" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "CNI-Genie", - "file_name": "ROADMAP.md" - }, - "content": [ - { - "heading": "CNI-Genie Roadmap", - "data": "" - }, - { - "heading": "Background & Motivation", - "data": "CNI-Genie was originally designed to enable 
multihoming for Kubernetes pods by\n enabling users to specify the desired number of interfaces and the respective CNI\n drivers for those interfaces.\n While this offers great flexibility, users really only care that they get reliable multihoming capability with a performant data-plane (traffic throughput) and control-plane (low network-ready latencies for pods) out of the box. This became clear from stories at KubeCon EU 2022. One particular talk discussed the complexities that were encountered using alternative solutions for multihoming. In the end, the other solution did not work because they designed their network for scale out rather than scale up. The ability to pick and choose CNI drivers becomes a much more appealing feature with a default that works well intuitively when networks start scaling." - }, - { "heading": "New Approach", "data": "For the past couple of years, we architected and built a pod networking solution\n based on eBPF/XDP called the [Mizar project](https://github.com/centaurus-cloud/mizar).\n Mizar was designed for fast data-plane performance by relying on eBPF/XDP to provide\n the overlay networking that completely by-passes the host network stack to ferry\n traffic between containers.\n It was also built with a control-plane design to provide low-latency network-readiness for pods in order to handle the cloud native networking needs where pods rapidly come and go. Mizar also provides native multi-tenancy network isolation and was designed for scale out networking. The goal at that time was to provide a CNI networking solution for our scale out pod orchestration solution called project [Arktos](https://github.com/centaurus-cloud/arktos). 
We recently successfully integrated Mizar and Arktos and also demonstrated its multi-tenant networking capabilities in Arktos scaleout architecture at the Linux Foundation Open Source Summit in Austin, TX in June 2022.\n \n We now realize that Mizar's eBPF/XDP technology can also address the critical cloud networking problems that we and others in the community face with multi-homed networking at scale." - }, - { "heading": "New Goals", "data": "We have identified the following goals to integrate select Mizar features into CNI-Genie:\n - Add out-of-box fast & scalable eBPF/XDP based pod networking capability.\n - Add ability for users to select the isolated networks to connect their pods into.\n - Allow users to operate multiple groups of pods in their own isolated networks.\n - Eliminate the (per-packet) overhead of network policies to achieve isolation.\n - Add ability to CNI-Genie for users to select native network isolation using\n VPC isolation concept.\n - Complete the control plane design to provide reliability and failover through\n distributed hash tables to store pod network groupings & connectivity information.\n - Natively offer Network Quality of Service (QoS) to allow users to assign relative\n network traffic priorities to their pods." - }, - { "heading": "2022 - 2023 Goals", "data": "For the next year, we plan to take a few small steps and accomplish the following: - Identify and on-ramp new additional maintainer(s) for the project. - Implement basic XDP multihomed pod networking features: - Implement pod-to-pod eBPF/XDP based multihomed networking with built-in isolation. - Implement service-to-pod eBPF/XDP based multihomed networking with built-in isolation. - Implement simple and very basic XDP based egress gateway. - Ensure ability to configure other CNI providers is retained. - Restart community engagement for the project. - Prototype and present new CNI-Genie roadmap features at conferences."
- }, - { "additional_info": "CNI-Genie was originally designed to enable multihoming for Kubernetes pods by enabling users to specify the desired number of interfaces and the respective CNI drivers for those interfaces. While this offers great flexibility, users really only care that they get reliable multihoming capability with a performant data-plane (traffic throughput) and control-plane (low network-ready latencies for pods) out of the box. This became clear from stories at KubeCon EU 2022. One particular talk discussed the complexities that were encountered using alternative solutions for multihoming. In the end, the other solution did not work because they designed their network for scale out rather than scale up. The ability to pick and choose CNI drivers becomes a much more appealing feature with a default that works well intuitively when networks start scaling. For the past couple of years, we architected and built a pod networking solution based on eBPF/XDP called the [Mizar project](https://github.com/centaurus-cloud/mizar). Mizar was designed for fast data-plane performance by relying on eBPF/XDP to provide the overlay networking that completely by-passes the host network stack to ferry traffic between containers. It was also built with a control-plane design to provide low-latency network-readiness for pods in order to handle the cloud native networking needs where pods rapidly come and go. Mizar also provides native multi-tenancy network isolation and was designed for scale out networking. The goal at that time was to provide a CNI networking solution for our scale out pod orchestration solution called project [Arktos](https://github.com/centaurus-cloud/arktos). We recently successfully integrated Mizar and Arktos and also demonstrated its multi-tenant networking capabilities in Arktos scaleout architecture at the Linux Foundation Open Source Summit in Austin, TX in June 2022. 
We now realize that Mizar's eBPF/XDP technology can also address the critical cloud networking problems that we and others in the community face with multi-homed networking at scale. We have identified the following goals to integrate select Mizar features into CNI-Genie: - Add out-of-box fast & scalable eBPF/XDP based pod networking capability. - Add ability for users to select the isolated networks to connect their pods into. - Allow users to operate multiple groups of pods in their own isolated networks. - Eliminate the (per-packet) overhead of network policies to achieve isolation. - Add ability to CNI-Genie for users to select native network isolation using VPC isolation concept. - Complete the control plane design to provide reliability and failover through distributed hash tables to store pod network groupings & connectivity information. - Natively offer Network Quality of Service (QoS) to allow users to assign relative network traffic priorities to their pods. For the next year, we plan to take a few small steps and accomplish the following: - Identify and on-ramp new additional maintainer(s) for the project. - Implement basic XDP multihomed pod networking features: - Implement pod-to-pod eBPF/XDP based multihomed networking with built-in isolation. - Implement service-to-pod eBPF/XDP based multihomed networking with built-in isolation. - Implement simple and very basic XDP based egress gateway. - Ensure ability to configure other CNI providers is retained. - Restart community engagement for the project. - Prototype and present new CNI-Genie roadmap features at conferences." - } ] }, - { "tag": { "category": "Runtime", "subcategory": "Cloud Native Network", "project_name": "Container Network Interface (CNI)", "file_name": "cnitool.md" }, "content": [ - { "heading": "Overview", "data": "The `cnitool` is a utility that can be used to test a CNI plugin without the need for a container runtime. 
The `cnitool` takes a `network name` and a `network namespace` and a command to `ADD` or `DEL`, i.e., attach or detach containers from a network. The `cnitool` relies on the following environment variables to operate properly: * `NETCONFPATH`: This environment variable needs to be set to a directory. It defaults to `/etc/cni/net.d`. The `cnitool` searches for CNI configuration files in this directory with the extension `*.conf` or `*.json`. It loads all the CNI configuration files in this directory and if it finds a CNI configuration with the `network name` given to the cnitool it returns the corresponding CNI configuration, else it returns `nil`. * `CNI_PATH`: For a given CNI configuration `cnitool` will search for the corresponding CNI plugin in this path. For the full documentation of `cnitool` see the [cnitool docs](../cnitool/README.md)" - }, - { "additional_info": "The `cnitool` is a utility that can be used to test a CNI plugin without the need for a container runtime. The `cnitool` takes a `network name` and a `network namespace` and a command to `ADD` or `DEL`, i.e., attach or detach containers from a network. The `cnitool` relies on the following environment variables to operate properly: * `NETCONFPATH`: This environment variable needs to be set to a directory. It defaults to `/etc/cni/net.d`. The `cnitool` searches for CNI configuration files in this directory with the extension `*.conf` or `*.json`. It loads all the CNI configuration files in this directory and if it finds a CNI configuration with the `network name` given to the cnitool it returns the corresponding CNI configuration, else it returns `nil`. * `CNI_PATH`: For a given CNI configuration `cnitool` will search for the corresponding CNI plugin in this path. 
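As a sketch of how these pieces fit together (the network name `mynet`, the namespace name `testns`, and the plugin directory `/opt/cni/bin` are illustrative, not fixed by `cnitool`):

```shell
# Create a network namespace to attach (requires root)
ip netns add testns

# ADD: attach the namespace to the network named "mynet",
# loading its config from NETCONFPATH and its plugin from CNI_PATH
NETCONFPATH=/etc/cni/net.d CNI_PATH=/opt/cni/bin \
    cnitool add mynet /var/run/netns/testns

# DEL: detach the namespace from the same network
NETCONFPATH=/etc/cni/net.d CNI_PATH=/opt/cni/bin \
    cnitool del mynet /var/run/netns/testns

# Clean up the namespace
ip netns del testns
```

On success, `cnitool add` prints the CNI result (the interfaces and IPs assigned) as JSON. 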
For the full documentation of `cnitool` see the [cnitool docs](../cnitool/README.md)" - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "Container Network Interface (CNI)", - "file_name": "CODE-OF-CONDUCT.md" - }, - "content": [ - { - "heading": "Community Code of Conduct", - "data": "CNI follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)." - }, - { - "additional_info": "CNI follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)." - } - ] - }, - { - "tag": { - "category": "Runtime", - "subcategory": "Cloud Native Network", - "project_name": "Container Network Interface (CNI)", - "file_name": "CONTRIBUTING.md" - }, - "content": [ - { - "heading": "How to Contribute", - "data": "CNI is [Apache 2.0 licensed](LICENSE) and accepts contributions via GitHub\n pull requests. This document outlines some of the conventions on development\n workflow, commit message formatting, contact points and other resources to make\n it easier to get your contribution accepted.\n We gratefully welcome improvements to documentation as well as to code." - }, - { - "heading": "Certificate of Origin", - "data": "By contributing to this project you agree to the Developer Certificate of\n Origin (DCO). This document was created by the Linux Kernel community and is a\n simple statement that you, as a contributor, have the legal right to make the\n contribution. See the [DCO](DCO) file for details." - }, - { - "heading": "Email and Chat", - "data": "The project uses the cni-dev email list, IRC chat, and Slack:\n - Email: [cni-dev](https://groups.google.com/forum/#!forum/cni-dev)\n - IRC: #[containernetworking](irc://irc.freenode.net:6667/#containernetworking) channel on [freenode.net](https://freenode.net/)\n - Slack: #cni on the [CNCF slack](https://slack.cncf.io/). 
NOTE: the previous CNI Slack (containernetworking.slack.com) has been sunsetted.\n Please avoid emailing maintainers found in the MAINTAINERS file directly. They\n are very busy and read the mailing lists." - }, - { - "heading": "Getting Started", - "data": "- Fork the repository on GitHub\n - Read the [README](README.md) for build and test instructions\n - Play with the project, submit bugs, submit pull requests!" - }, - { - "heading": "Contribution workflow", - "data": "This is a rough outline of how to prepare a contribution:\n - Create a topic branch from where you want to base your work (usually branched from main).\n - Make commits of logical units.\n - Make sure your commit messages are in the proper format (see below).\n - Push your changes to a topic branch in your fork of the repository.\n - If you changed code:\n - add automated tests to cover your changes, using the [Ginkgo](https://onsi.github.io/ginkgo/) & [Gomega](https://onsi.github.io/gomega/) style\n - if the package did not previously have any test coverage, add it to the list\n of `TESTABLE` packages in the `test.sh` script.\n - run the full test script and ensure it passes\n - Make sure any new code files have a license header (this is now enforced by automated tests)\n - Submit a pull request to the original repository." 
- }, - { - "heading": "How to run the test suite", - "data": "We generally require test coverage of any new features or bug fixes.\n Here's how you can run the test suite on any system (even Mac or Windows) using\n [Vagrant](https://www.vagrantup.com/) and a hypervisor of your choice:" - }, - { - "heading": "you're now in a shell in a virtual machine", - "data": "" - }, - { - "heading": "to run the full test suite", - "data": "" - }, - { - "heading": "to focus on a particular test suite", - "data": "" - }, - { - "heading": "Acceptance policy", - "data": "These things will make a PR more likely to be accepted:\n - a well-described requirement\n - tests for new code\n - tests for old code!\n - new code and tests follow the conventions in old code and tests\n - a good commit message (see below)\n In general, we will merge a PR once two maintainers have endorsed it.\n Trivial changes (e.g., corrections to spelling) may get waved through.\n For substantial changes, more people may become involved, and you might get asked to resubmit the PR or divide the changes into more than one PR." - }, - { - "heading": "Format of the Commit Message", - "data": "We follow a rough convention for commit messages that is designed to answer two\n questions: what changed and why. The subject line should feature the what and\n the body of the commit should describe the why.\n The format can be described more formally as follows:\n The first line is the subject and should be no longer than 70 characters, the\n second line is always blank, and other lines should be wrapped at 80 characters.\n This allows the message to be easier to read on GitHub as well as in various\n git tools." - }, - { - "heading": "3rd party plugins", - "data": "So you've built a CNI plugin. Where should it live? Short answer: We'd be happy to link to it from our [list of 3rd party plugins](README.md#3rd-party-plugins). But we'd rather you kept the code in your own repo. 
Long answer: An advantage of the CNI model is that independent plugins can be built, distributed and used without any code changes to this repository. While some widely used plugins (and a few less-popular legacy ones) live in this repo, we're reluctant to add more. If you have a good reason why the CNI maintainers should take custody of your plugin, please open an issue or PR." - }, - { - "additional_info": "CNI is [Apache 2.0 licensed](LICENSE) and accepts contributions via GitHub pull requests. This document outlines some of the conventions on development workflow, commit message formatting, contact points and other resources to make it easier to get your contribution accepted. We gratefully welcome improvements to documentation as well as to code. By contributing to this project you agree to the Developer Certificate of Origin (DCO). This document was created by the Linux Kernel community and is a simple statement that you, as a contributor, have the legal right to make the contribution. See the [DCO](DCO) file for details. The project uses the cni-dev email list, IRC chat, and Slack: - Email: [cni-dev](https://groups.google.com/forum/#!forum/cni-dev) - IRC: #[containernetworking](irc://irc.freenode.net:6667/#containernetworking) channel on [freenode.net](https://freenode.net/) - Slack: #cni on the [CNCF slack](https://slack.cncf.io/). NOTE: the previous CNI Slack (containernetworking.slack.com) has been sunsetted. Please avoid emailing maintainers found in the MAINTAINERS file directly. They are very busy and read the mailing lists. - Fork the repository on GitHub - Read the [README](README.md) for build and test instructions - Play with the project, submit bugs, submit pull requests! This is a rough outline of how to prepare a contribution: - Create a topic branch from where you want to base your work (usually branched from main). - Make commits of logical units. - Make sure your commit messages are in the proper format (see below). 
- Push your changes to a topic branch in your fork of the repository. - If you changed code: - add automated tests to cover your changes, using the [Ginkgo](https://onsi.github.io/ginkgo/) & [Gomega](https://onsi.github.io/gomega/) style - if the package did not previously have any test coverage, add it to the list of `TESTABLE` packages in the `test.sh` script. - run the full test script and ensure it passes - Make sure any new code files have a license header (this is now enforced by automated tests) - Submit a pull request to the original repository. We generally require test coverage of any new features or bug fixes. Here's how you can run the test suite on any system (even Mac or Windows) using [Vagrant](https://www.vagrantup.com/) and a hypervisor of your choice: ```bash vagrant up vagrant ssh sudo su cd /go/src/github.com/containernetworking/cni ./test.sh cd libcni go test ``` These things will make a PR more likely to be accepted: - a well-described requirement - tests for new code - tests for old code! - new code and tests follow the conventions in old code and tests - a good commit message (see below) In general, we will merge a PR once two maintainers have endorsed it. Trivial changes (e.g., corrections to spelling) may get waved through. For substantial changes, more people may become involved, and you might get asked to resubmit the PR or divide the changes into more than one PR. We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why. ```md scripts: add the test-cluster command this uses tmux to setup a test cluster that you can easily kill and start for debugging. Fixes #38 ``` The format can be described more formally as follows: ```md :