\\]). [ \"192.168.0.1\", \"10.10.0.1/24\", \"3ffe:ffff:0:01ff::2\", \"3ffe:ffff:0:01ff::1/64\" ] The plugin may require the IP address to include a prefix length. | none | CNI `static` plugin, CNI `host-local` plugin |\n | mac | Dynamically assign MAC. Runtime can pass this to plugins which need MAC as input. | `mac` | `MAC` (string entry). \"c2:11:22:33:44:55\" | none | CNI `tuning` plugin |\n | infiniband guid | Dynamically assign Infiniband GUID to network interface. Runtime can pass this to plugins which need Infiniband GUID as input. | `infinibandGUID` | `GUID` (string entry). \"c2:11:22:33:44:55:66:77\" | none | CNI [`ib-sriov-cni`](https://github.com/Mellanox/ib-sriov-cni) plugin |\n | device id | Provide device identifier which is associated with the network to allow the CNI plugin to perform device dependent network configurations. | `deviceID` | `deviceID` (string entry). \"0000:04:00.5\" | none | CNI `host-device` plugin |\n | aliases | Provide a list of names that will be mapped to the IP addresses assigned to this interface. Other containers on the same network may use one of these names to access the container. | `aliases` | List of `alias` (string entry). [\"my-container\", \"primary-db\"] | none | CNI `alias` plugin |\n | cgroup path | Provide the cgroup path for pod as requested by CNI plugins. | `cgroupPath` | `cgroupPath` (string entry). \"/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-pod28ce45bc_63f8_48a3_a99b_cfb9e63c856c.slice\" | none | CNI `host-local` plugin |"
- },
- {
- "heading": "\"args\" in network config",
- "data": "`args` in [network config](SPEC.md#network-configuration) were reserved as a field in the `0.2.0` release of the CNI spec.\n > args (dictionary): Optional additional arguments provided by the container runtime. For example a dictionary of labels could be passed to CNI plugins by adding them to a labels field under args.\n `args` provide a way of providing more structured data than the flat strings that CNI_ARGS can support.\n `args` should be used for _optional_ meta-data. Runtimes can place additional data in `args` and plugins that don't understand that data should just ignore it. Runtimes should not require that a plugin understands or consumes that data provided, and so a runtime should not expect to receive an error if the data could not be acted on.\n This method of passing information to a plugin is recommended when the information is optional and the plugin can choose to ignore it. It's often the case that such information is passed to all plugins by the runtime without regard for whether the plugin can understand it.\n The conventions documented here are all namespaced under `cni` so they don't conflict with any existing `args`.\n For example:\n | Area | Purpose | Spec and Example | Runtime implementations | Plugin Implementations |\n | ----- | ------ | ------------ | ----------------------- | ---------------------- |\n | labels | Pass `key=value` labels to plugins | \"labels\" : [ { \"key\" : \"app\", \"value\" : \"myapp\" }, { \"key\" : \"env\", \"value\" : \"prod\" } ] | none | none |\n | ips | Request specific IPs | Spec: \"ips\": [\"<ip>[/<prefix>]\", ...] Examples: \"ips\": [\"10.2.2.42/24\", \"2001:db8::5\"] The plugin may require the IP address to include a prefix length. | none | host-local, static |"
- },
- {
- "heading": "CNI_ARGS",
- "data": "CNI_ARGS formed part of the original CNI spec and have been present since the initial release.\n > `CNI_ARGS`: Extra arguments passed in by the user at invocation time. Alphanumeric key-value pairs separated by semicolons; for example, \"FOO=BAR;ABC=123\"\n The use of `CNI_ARGS` is deprecated and \"args\" should be used instead. If a runtime passes an equivalent key via `args` (eg the `ips` `args` Area and the `CNI_ARGS` `IP` Field) and the plugin understands `args`, the plugin must ignore the CNI_ARGS Field.\n | Field | Purpose | Spec and Example | Runtime implementations | Plugin Implementations |\n | ------ | ------ | ---------------- | ----------------------- | ---------------------- |\n | IP | Request a specific IP from IPAM plugins | Spec: IP=<ip>[/<prefix>] Example: IP=192.168.10.4/24 The plugin may require the IP addresses to include a prefix length. | *rkt* supports passing additional arguments to plugins and the [documentation](https://coreos.com/rkt/docs/latest/networking/overriding-defaults.html) suggests IP can be used. | host-local, static |"
- },
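The semicolon-separated `KEY=VALUE` format quoted above is simple to parse; the helper below is a hypothetical sketch (not part of any CNI library) of how a plugin might consume the deprecated `CNI_ARGS` variable:

```python
def parse_cni_args(cni_args):
    """Parse a CNI_ARGS string such as "FOO=BAR;ABC=123" into a dict.

    Illustrative only: real plugins should prefer the structured
    "args" field and ignore CNI_ARGS when an equivalent key is present.
    """
    result = {}
    if not cni_args:
        return result
    for pair in cni_args.split(";"):
        key, sep, value = pair.partition("=")
        if not sep or not key:
            raise ValueError(f"malformed CNI_ARGS entry: {pair!r}")
        result[key] = value
    return result
```

A runtime would set the variable in the plugin's environment, e.g. `CNI_ARGS="IP=192.168.10.4/24"`, and the plugin would parse it on startup.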
- {
- "heading": "Chained Plugins",
- "data": "If plugins are agnostic about the type of interface created, they SHOULD work in a chained mode and configure existing interfaces. Plugins MAY also create the desired interface when not run in a chain. For example, the `bridge` plugin adds the host-side interface to a bridge. So, it should accept any previous result that includes a host-side interface, including `tap` devices. If not called as a chained plugin, it creates a `veth` pair first. Plugins that meet this convention are usable by a larger set of runtimes and interfaces, including hypervisors and DPDK providers."
- },
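The chained-versus-standalone convention above can be sketched in pseudocode-style Python. The function and field names besides `sandbox` (a real CNI Result field identifying the container-side network namespace) are hypothetical:

```python
def bridge_add(prev_result=None):
    """Sketch of a bridge-like plugin following the chaining convention.

    If a previous plugin's result supplies a host-side interface
    (one with no "sandbox", e.g. a veth or tap device), configure it;
    otherwise create a veth pair first. Illustrative only.
    """
    actions = []
    interfaces = list((prev_result or {}).get("interfaces", []))
    host_if = next((i for i in interfaces if not i.get("sandbox")), None)
    if host_if is None:
        # Standalone mode: no usable previous result, so make our own.
        host_if = {"name": "veth0"}
        interfaces.append(host_if)
        actions.append("create veth pair")
    actions.append(f"add {host_if['name']} to bridge")
    return {"interfaces": interfaces, "actions": actions}
```

The same code path thus serves both a hypervisor runtime handing in a `tap` device and a standalone invocation that starts from nothing.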
- {
- "additional_info": "There are three ways of passing information to plugins using the Container Network Interface (CNI), none of which require the [spec](SPEC.md) to be updated. These are - plugin specific fields in the JSON config - `args` field in the JSON config - `CNI_ARGS` environment variable This document aims to provide guidance on which method should be used and to provide a convention for how common information should be passed. Establishing these conventions allows plugins to work across multiple runtimes. This helps both plugins and the runtimes. * Plugin authors should aim to support these conventions where it makes sense for their plugin. This means they are more likely to \"just work\" with a wider range of runtimes. * Plugins should accept arguments according to these conventions if they implement the same basic functionality as other plugins. If plugins have shared functionality that isn't covered by these conventions then a PR should be opened against this document. * Runtime authors should follow these conventions if they want to pass additional information to plugins. This will allow the extra information to be consumed by the widest range of plugins. * These conventions serve as an abstraction for the runtime. For example, port forwarding is highly implementation specific, but users should be able to select the plugin of their choice without changing the runtime. Additional conventions can be created by creating PRs which modify this document. [Plugin specific fields](SPEC.md#network-configuration) formed part of the original CNI spec and have been present since the initial release. > Plugins may define additional fields that they accept and may generate an error if called with unknown fields. The exception to this is the args field may be used to pass arbitrary data which may be ignored by plugins. A plugin can define any additional fields it needs to work properly. 
It should return an error if it can't act on fields that were expected or where the field values were malformed. This method of passing information to a plugin is recommended when the following conditions hold: * The configuration has specific meaning to the plugin (i.e. it's not just general meta-data) * The plugin is expected to act on the configuration or return an error if it can't Dynamic information (i.e. data that a runtime fills out) should be placed in a `runtimeConfig` section. Plugins can request that the runtime insert this dynamic configuration by explicitly listing their `capabilities` in the network configuration. For example, the configuration for a port mapping plugin might look like this to an operator (it should be included as part of a [network configuration list](SPEC.md#network-configuration-lists)). ```json { \"name\" : \"ExamplePlugin\", \"type\" : \"port-mapper\", \"capabilities\": {\"portMappings\": true} } ``` But the runtime would fill in the mappings so the plugin itself would receive something like this. ```json { \"name\" : \"ExamplePlugin\", \"type\" : \"port-mapper\", \"runtimeConfig\": { \"portMappings\": [ {\"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\"} ] } } ``` | Area | Purpose | Capability | Spec and Example | Runtime implementations | Plugin Implementations | | ----- | ------- | -----------| ---------------- | ----------------------- | --------------------- | | port mappings | Pass mapping from ports on the host to ports in the container network namespace. | `portMappings` | A list of portmapping entries. [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" }, { \"hostPort\": 8000, \"containerPort\": 8001, \"protocol\": \"udp\" } ] | kubernetes | CNI `portmap` plugin | | ip ranges | Dynamically configure the IP range(s) for address allocation. Runtimes that manage IP pools, but not individual IP addresses, can pass these to plugins. 
| `ipRanges` | The same as the `ranges` key for `host-local` - a list of lists of subnets. The outer list is the number of IPs to allocate, and the inner list is a pool of subnets for each allocation. [ [ { \"subnet\": \"10.1.2.0/24\", \"rangeStart\": \"10.1.2.3\", \"rangeEnd\": \"10.1.2.99\", \"gateway\": \"10.1.2.254\" } ] ] | none | CNI `host-local` plugin | | bandwidth limits | Dynamically configure interface bandwidth limits | `bandwidth` | Desired bandwidth limits. Rates are in bits per second, burst values are in bits. { \"ingressRate\": 2048, \"ingressBurst\": 1600, \"egressRate\": 4096, \"egressBurst\": 1600 } | none | CNI `bandwidth` plugin | | dns | Dynamically configure dns according to runtime | `dns` | Dictionary containing a list of `servers` (string entries), a list of `searches` (string entries), a list of `options` (string entries). { \"searches\" : [ \"internal.yoyodyne.net\", \"corp.tyrell.net\" ], \"servers\": [ \"8.8.8.8\", \"10.0.0.10\" ] } | kubernetes | CNI `win-bridge` plugin, CNI `win-overlay` plugin | | ips | Dynamically allocate IPs for container interface. A runtime that is able to allocate addresses can pass these to plugins. | `ips` | A list of `IP` (\"<ip>[/<prefix>]\"). [ \"192.168.0.1\", \"10.10.0.1/24\", \"3ffe:ffff:0:01ff::2\", \"3ffe:ffff:0:01ff::1/64\" ] The plugin may require the IP address to include a prefix length. | none | CNI `static` plugin, CNI `host-local` plugin | | mac | Dynamically assign MAC. Runtime can pass this to plugins which need MAC as input. | `mac` | `MAC` (string entry). \"c2:11:22:33:44:55\" | none | CNI `tuning` plugin | | infiniband guid | Dynamically assign Infiniband GUID to network interface. Runtime can pass this to plugins which need Infiniband GUID as input. | `infinibandGUID` | `GUID` (string entry). 
\"c2:11:22:33:44:55:66:77\" | none | CNI [`ib-sriov-cni`](https://github.com/Mellanox/ib-sriov-cni) plugin | | device id | Provide device identifier which is associated with the network to allow the CNI plugin to perform device dependent network configurations. | `deviceID` | `deviceID` (string entry). \"0000:04:00.5\" | none | CNI `host-device` plugin | | aliases | Provide a list of names that will be mapped to the IP addresses assigned to this interface. Other containers on the same network may use one of these names to access the container.| `aliases` | List of `alias` (string entry). [\"my-container\", \"primary-db\"] | none | CNI `alias` plugin | | cgroup path | Provide the cgroup path for pod as requested by CNI plugins. | `cgroupPath` | `cgroupPath` (string entry). \"/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-pod28ce45bc_63f8_48a3_a99b_cfb9e63c856c.slice\" | none | CNI `host-local` plugin | `args` in [network config](SPEC.md#network-configuration) were reserved as a field in the `0.2.0` release of the CNI spec. > args (dictionary): Optional additional arguments provided by the container runtime. For example a dictionary of labels could be passed to CNI plugins by adding them to a labels field under args. `args` provide a way of providing more structured data than the flat strings that CNI_ARGS can support. `args` should be used for _optional_ meta-data. Runtimes can place additional data in `args` and plugins that don't understand that data should just ignore it. Runtimes should not require that a plugin understands or consumes that data provided, and so a runtime should not expect to receive an error if the data could not be acted on. This method of passing information to a plugin is recommended when the information is optional and the plugin can choose to ignore it. 
It's often the case that such information is passed to all plugins by the runtime without regard for whether the plugin can understand it. The conventions documented here are all namespaced under `cni` so they don't conflict with any existing `args`. For example: ```jsonc { \"cniVersion\":\"0.2.0\", \"name\":\"net\", \"args\":{ \"cni\":{ \"labels\": [{\"key\": \"app\", \"value\": \"myapp\"}] } }, // \"ipam\":{ // } } ``` | Area | Purpose | Spec and Example | Runtime implementations | Plugin Implementations | | ----- | ------ | ------------ | ----------------------- | ---------------------- | | labels | Pass `key=value` labels to plugins | \"labels\" : [ { \"key\" : \"app\", \"value\" : \"myapp\" }, { \"key\" : \"env\", \"value\" : \"prod\" } ] | none | none | | ips | Request specific IPs | Spec: \"ips\": [\"<ip>[/<prefix>]\", ...] Examples: \"ips\": [\"10.2.2.42/24\", \"2001:db8::5\"] The plugin may require the IP address to include a prefix length. | none | host-local, static | CNI_ARGS formed part of the original CNI spec and have been present since the initial release. > `CNI_ARGS`: Extra arguments passed in by the user at invocation time. Alphanumeric key-value pairs separated by semicolons; for example, \"FOO=BAR;ABC=123\" The use of `CNI_ARGS` is deprecated and \"args\" should be used instead. If a runtime passes an equivalent key via `args` (eg the `ips` `args` Area and the `CNI_ARGS` `IP` Field) and the plugin understands `args`, the plugin must ignore the CNI_ARGS Field. | Field | Purpose | Spec and Example | Runtime implementations | Plugin Implementations | | ------ | ------ | ---------------- | ----------------------- | ---------------------- | | IP | Request a specific IP from IPAM plugins | Spec: IP=<ip>[/<prefix>] Example: IP=192.168.10.4/24 The plugin may require the IP addresses to include a prefix length. 
| *rkt* supports passing additional arguments to plugins and the [documentation](https://coreos.com/rkt/docs/latest/networking/overriding-defaults.html) suggests IP can be used. | host-local, static | If plugins are agnostic about the type of interface created, they SHOULD work in a chained mode and configure existing interfaces. Plugins MAY also create the desired interface when not run in a chain. For example, the `bridge` plugin adds the host-side interface to a bridge. So, it should accept any previous result that includes a host-side interface, including `tap` devices. If not called as a chained plugin, it creates a `veth` pair first. Plugins that meet this convention are usable by a larger set of runtimes and interfaces, including hypervisors and DPDK providers."
- }
- ]
- },
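The capability/`runtimeConfig` handshake described in the conventions above (the operator declares `capabilities`, the runtime fills in only the matching dynamic data) can be sketched as follows; the function name is illustrative, not a libcni API:

```python
def inject_runtime_config(plugin_conf, runtime_data):
    """Copy capability data into a plugin's "runtimeConfig" section.

    Only capabilities the plugin declares (and the runtime has data
    for) are injected, mirroring the port-mapper example above.
    Hypothetical helper, not part of any CNI library.
    """
    declared = plugin_conf.get("capabilities") or {}
    injected = {
        cap: runtime_data[cap]
        for cap, enabled in declared.items()
        if enabled and cap in runtime_data
    }
    conf = dict(plugin_conf)  # leave the operator's config untouched
    if injected:
        conf["runtimeConfig"] = injected
    return conf
```

Given the `ExamplePlugin` configuration from the text and runtime data containing both `portMappings` and `ips`, only `portMappings` would reach the plugin, because that is the sole capability it declared.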
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Container Network Interface (CNI)",
- "file_name": "GOVERNANCE.md"
- },
- "content": [
- {
- "heading": "CNI Governance",
- "data": "This document defines project governance for the project."
- },
- {
- "heading": "Voting",
- "data": "The CNI project employs \"organization voting\" to ensure no single organization can dominate the project.\n Individuals not associated with or employed by a company or organization are allowed one organization vote.\n Each company or organization (regardless of the number of maintainers associated with or employed by that company/organization) receives one organization vote.\n In other words, if two maintainers are employed by Company X, two by Company Y, two by Company Z, and one maintainer is an un-affiliated individual, a total of four \"organization votes\" are possible; one for X, one for Y, one for Z, and one for the un-affiliated individual.\n Any maintainer from an organization may cast the vote for that organization.\n For formal votes, a specific statement of what is being voted on should be added to the relevant github issue or PR, and a link to that issue or PR added to the maintainers meeting agenda document.\n Maintainers should indicate their yes/no vote on that issue or PR, and after a suitable period of time, the votes will be tallied and the outcome noted."
- },
- {
- "heading": "Changes in Maintainership",
- "data": "New maintainers are proposed by an existing maintainer and are elected by a 2/3 majority organization vote.\n Maintainers can be removed by a 2/3 majority organization vote."
- },
- {
- "heading": "Approving PRs",
- "data": "Non-specification-related PRs may be merged after receiving at least two organization votes.\n Changes to the CNI Specification also follow the normal PR approval process (eg, 2 organization votes), but any maintainer can request that the approval require a 2/3 majority organization vote."
- },
- {
- "heading": "Github Project Administration",
- "data": "Maintainers will be added to the containernetworking GitHub organization and added to the GitHub cni-maintainers team, and made a GitHub maintainer of that team.\n After 6 months a maintainer will be made an \"owner\" of the GitHub organization."
- },
- {
- "heading": "Changes in Governance",
- "data": "All changes in Governance require a 2/3 majority organization vote."
- },
- {
- "heading": "Other Changes",
- "data": "Unless specified above, all other changes to the project require a 2/3 majority organization vote. Additionally, any maintainer may request that any change require a 2/3 majority organization vote."
- },
- {
- "additional_info": "This document defines project governance for the project. The CNI project employs \"organization voting\" to ensure no single organization can dominate the project. Individuals not associated with or employed by a company or organization are allowed one organization vote. Each company or organization (regardless of the number of maintainers associated with or employed by that company/organization) receives one organization vote. In other words, if two maintainers are employed by Company X, two by Company Y, two by Company Z, and one maintainer is an un-affiliated individual, a total of four \"organization votes\" are possible; one for X, one for Y, one for Z, and one for the un-affiliated individual. Any maintainer from an organization may cast the vote for that organization. For formal votes, a specific statement of what is being voted on should be added to the relevant github issue or PR, and a link to that issue or PR added to the maintainers meeting agenda document. Maintainers should indicate their yes/no vote on that issue or PR, and after a suitable period of time, the votes will be tallied and the outcome noted. New maintainers are proposed by an existing maintainer and are elected by a 2/3 majority organization vote. Maintainers can be removed by a 2/3 majority organization vote. Non-specification-related PRs may be merged after receiving at least two organization votes. Changes to the CNI Specification also follow the normal PR approval process (eg, 2 organization votes), but any maintainer can request that the approval require a 2/3 majority organization vote. Maintainers will be added to the containernetworking GitHub organization and added to the GitHub cni-maintainers team, and made a GitHub maintainer of that team. After 6 months a maintainer will be made an \"owner\" of the GitHub organization. All changes in Governance require a 2/3 majority organization vote. 
Unless specified above, all other changes to the project require a 2/3 majority organization vote. Additionally, any maintainer may request that any change require a 2/3 majority organization vote."
- }
- ]
- },
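The worked example in the Voting section (two maintainers each at Companies X, Y, and Z plus one unaffiliated individual yielding four organization votes) can be expressed directly; the helper is illustrative only:

```python
def organization_votes(maintainers):
    """Count organization votes per the CNI governance rules above.

    `maintainers` maps a maintainer's name to their affiliation;
    each company/organization gets one vote regardless of headcount,
    and each unaffiliated individual (affiliation None) gets one vote.
    """
    orgs = set()
    individuals = 0
    for _name, org in maintainers.items():
        if org is None:
            individuals += 1
        else:
            orgs.add(org)
    return len(orgs) + individuals
```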
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Container Network Interface (CNI)",
- "file_name": "README.md"
- },
- "content": [
- {
- "heading": "cnitool",
- "data": "`cnitool` is a simple program that executes a CNI configuration. It will\n add or remove an interface in an already-created network namespace."
- },
- {
- "heading": "Environment Variables",
- "data": "* `NETCONFPATH`: This environment variable needs to be set to a\n directory. It defaults to `/etc/cni/net.d`. The `cnitool` searches\n for CNI configuration files in this directory according to the following priorities:\n 1. Search files with the extension `*.conflist`, representing a list of plugin configurations.\n 2. If there are no `*.conflist` files in the directory, search files with the extension `*.conf` or `*.json`,\n representing a single plugin configuration.\n \n It loads all the CNI configuration files in\n this directory and if it finds a CNI configuration with the `network\n name` given to the cnitool it returns the corresponding CNI\n configuration, else it returns `nil`.\n * `CNI_PATH`: For a given CNI configuration `cnitool` will search for\n the corresponding CNI plugin in this path."
- },
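The `NETCONFPATH` lookup order documented above (prefer `*.conflist` files; only fall back to `*.conf`/`*.json` when no conflist exists) might be sketched like this; the function name is hypothetical, not part of cnitool:

```python
def select_config_files(filenames):
    """Apply cnitool's documented NETCONFPATH search priorities.

    1. If any *.conflist files exist, use those (plugin lists).
    2. Otherwise fall back to *.conf / *.json single-plugin configs.
    Returns the chosen filenames sorted, as directory order is
    conventionally lexicographic for CNI config dirs.
    """
    conflists = sorted(f for f in filenames if f.endswith(".conflist"))
    if conflists:
        return conflists
    return sorted(f for f in filenames if f.endswith((".conf", ".json")))
```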
- {
- "heading": "Example invocation",
- "data": "First, install cnitool:\n Then, check out and build the plugins. All commands should be run from this directory."
- },
- {
- "heading": "or",
- "data": "Create a network configuration Create a network namespace. This will be called `testing`: Add the container to the network: Check whether the container's networking is as expected (ONLY for spec v0.4.0+): Test that it works: And clean up:"
- },
- {
- "additional_info": "`cnitool` is a simple program that executes a CNI configuration. It will add or remove an interface in an already-created network namespace. * `NETCONFPATH`: This environment variable needs to be set to a directory. It defaults to `/etc/cni/net.d`. The `cnitool` searches for CNI configuration files in this directory according to the following priorities: 1. Search files with the extension `*.conflist`, representing a list of plugin configurations. 2. If there are no `*.conflist` files in the directory, search files with the extension `*.conf` or `*.json`, representing a single plugin configuration. It loads all the CNI configuration files in this directory and if it finds a CNI configuration with the `network name` given to the cnitool it returns the corresponding CNI configuration, else it returns `nil`. * `CNI_PATH`: For a given CNI configuration `cnitool` will search for the corresponding CNI plugin in this path. First, install cnitool: ```bash go get github.com/containernetworking/cni go install github.com/containernetworking/cni/cnitool ``` Then, check out and build the plugins. All commands should be run from this directory. ```bash git clone https://github.com/containernetworking/plugins.git cd plugins ./build_linux.sh ./build_windows.sh ``` Create a network configuration ```bash echo '{\"cniVersion\":\"0.4.0\",\"name\":\"myptp\",\"type\":\"ptp\",\"ipMasq\":true,\"ipam\":{\"type\":\"host-local\",\"subnet\":\"172.16.29.0/24\",\"routes\":[{\"dst\":\"0.0.0.0/0\"}]}}' | sudo tee /etc/cni/net.d/10-myptp.conf ``` Create a network namespace. 
This will be called `testing`: ```bash sudo ip netns add testing ``` Add the container to the network: ```bash sudo CNI_PATH=./bin cnitool add myptp /var/run/netns/testing ``` Check whether the container's networking is as expected (ONLY for spec v0.4.0+): ```bash sudo CNI_PATH=./bin cnitool check myptp /var/run/netns/testing ``` Test that it works: ```bash sudo ip -n testing addr sudo ip netns exec testing ping -c 1 4.2.2.2 ``` And clean up: ```bash sudo CNI_PATH=./bin cnitool del myptp /var/run/netns/testing sudo ip netns del testing ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Container Network Interface (CNI)",
- "file_name": "RELEASING.md"
- },
- "content": [
- {
- "heading": "Release process",
- "data": ""
- },
- {
- "heading": "Preparing for a release",
- "data": "Releases are performed by maintainers and should usually be discussed and planned at a maintainer meeting.\n - Choose the version number. It should be prefixed with `v`, e.g. `v1.2.3`\n - Take a quick scan through the PRs and issues to make sure there isn't anything crucial that _must_ be in the next release.\n - Create a draft of the release note\n - Discuss the level of testing that's needed and create a test plan if sensible\n - Check what version of `go` is used in the build container, updating it if there's a new stable release."
- },
- {
- "heading": "Publishing the release",
- "data": "1. Make sure you are on the master branch and don't have any local uncommitted changes. 1. Create a signed tag for the release `git tag -s $VERSION` (Ensure that GPG keys are created and added to GitHub) 1. Push the tag to git `git push origin $VERSION` 1. Create a release on GitHub, using the tag which was just pushed. 1. Add the release note to the release. 1. Announce the release on at least the CNI mailing list, IRC, and Slack."
- },
- {
- "additional_info": "Releases are performed by maintainers and should usually be discussed and planned at a maintainer meeting. - Choose the version number. It should be prefixed with `v`, e.g. `v1.2.3` - Take a quick scan through the PRs and issues to make sure there isn't anything crucial that _must_ be in the next release. - Create a draft of the release note - Discuss the level of testing that's needed and create a test plan if sensible - Check what version of `go` is used in the build container, updating it if there's a new stable release. 1. Make sure you are on the master branch and don't have any local uncommitted changes. 1. Create a signed tag for the release `git tag -s $VERSION` (Ensure that GPG keys are created and added to GitHub) 1. Push the tag to git `git push origin $VERSION` 1. Create a release on GitHub, using the tag which was just pushed. 1. Add the release note to the release. 1. Announce the release on at least the CNI mailing list, IRC, and Slack."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Container Network Interface (CNI)",
- "file_name": "ROADMAP.md"
- },
- "content": [
- {
- "heading": "CNI Roadmap",
- "data": "This document defines a high level roadmap for CNI development.\n The list below is not complete, and we advise to get the current project state from the [milestones defined in GitHub](https://github.com/containernetworking/cni/milestones)."
- },
- {
- "heading": "CNI Milestones",
- "data": ""
- },
- {
- "heading": "[v1.0.0](https://github.com/containernetworking/cni/milestones/v1.0.0)",
- "data": "- Targeted for April 2020\n - More precise specification language\n - Stable SPEC\n - Complete test coverage"
- },
- {
- "heading": "Beyond v1.0.0",
- "data": "- Conformance test suite for CNI plugins (both reference and 3rd party) - Signed release binaries"
- },
- {
- "additional_info": "This document defines a high level roadmap for CNI development. The list below is not complete, and we advise to get the current project state from the [milestones defined in GitHub](https://github.com/containernetworking/cni/milestones). - Targeted for April 2020 - More precise specification language - Stable SPEC - Complete test coverage - Conformance test suite for CNI plugins (both reference and 3rd party) - Signed release binaries"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Container Network Interface (CNI)",
- "file_name": "spec-upgrades.md"
- },
- "content": [
- {
- "heading": "How to Upgrade to CNI Specification v1.0",
- "data": "CNI v1.0 has the following changes:\n - non-List configurations are removed\n - the `version` field in the `interfaces` array was redundant and is removed"
- },
- {
- "heading": "libcni Changes in CNI v1.0",
- "data": ""
- },
- {
- "heading": "`/pkg/types/current` no longer exists",
- "data": "This means that runtimes need to explicitly select a version they support.\n This reduces code breakage when revendoring cni into other projects and\n returns the decision on which CNI Spec versions a plugin supports to the\n plugin's authors.\n For example, your Go imports might look like"
- },
- {
- "heading": "Changes in CNI v0.4",
- "data": "CNI v0.4 has the following important changes:\n - A new verb, \"CHECK\", was added. Runtimes can now ask plugins to verify the status of a container's attachment\n - A new configuration flag, `disableCheck`, which indicates to the runtime that configuration should not be CHECK'ed\n No changes were made to the result type."
- },
- {
- "heading": "How to upgrade to CNI Specification v0.3.0 and later",
- "data": "The 0.3.0 specification contained a small error. The Result structure's `ip` field should have been renamed to `ips` to be consistent with the IPAM result structure definition; this rename was missed when updating the Result to accommodate multiple IP addresses and interfaces. All first-party CNI plugins (bridge, host-local, etc) were updated to use `ips` (and thus be inconsistent with the 0.3.0 specification) and most other plugins have not been updated to the 0.3.0 specification yet, so few (if any) users should be impacted by this change.\n The 0.3.1 specification corrects the `Result` structure to use the `ips` field name as originally intended. This is the only change between 0.3.0 and 0.3.1.\n Version 0.3.0 of the [CNI Specification](https://github.com/containernetworking/cni/blob/spec-v0.3.0/SPEC.md) provides rich information\n about container network configuration, including details of network interfaces\n and support for multiple IP addresses.\n To support this new data, the specification changed in a couple significant\n ways that will impact CNI users, plugin authors, and runtime authors.\n This document provides guidance for how to upgrade:\n - [For CNI Users](#for-cni-users)\n - [For Plugin Authors](#for-plugin-authors)\n - [For Runtime Authors](#for-runtime-authors)\n **Note**: the CNI Spec is versioned independently from the GitHub releases\n for this repo. For example, Release v0.4.0 supports Spec version v0.2.0,\n and Release v0.5.0 supports Spec v0.3.0.\n ----"
- },
- {
- "heading": "For CNI Users",
- "data": "If you maintain CNI configuration files for a container runtime that uses CNI,\n ensure that the configuration files specify a `cniVersion` field and that the\n version there is supported by your container runtime and CNI plugins.\n Configuration files without a version field should be given version 0.2.0.\n The CNI spec includes example configuration files for\n [single plugins](SPEC.md#example-configurations)\n and for [lists of chained plugins](SPEC.md#example-configurations).\n Consult the documentation for your runtime and plugins to determine what\n CNI spec versions they support. Test any plugin upgrades before deploying to\n production. You may find [cnitool](https://github.com/containernetworking/cni/tree/main/cnitool)\n useful. Specifically, your configuration version should be the lowest common\n version supported by your plugins."
- },
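Choosing "the lowest common version supported by your plugins" amounts to intersecting the spec versions each plugin advertises; any version in that intersection is a safe `cniVersion` for the configuration. A hypothetical sketch (not part of cnitool or libcni):

```python
def common_spec_versions(plugins):
    """Return the spec versions every plugin supports, oldest first.

    `plugins` maps plugin name -> list of supported spec versions,
    as each plugin would report via the VERSION command.
    """
    def numeric(v):
        # Sort "0.4.0" before "1.0.0" by comparing numeric components.
        return tuple(int(part) for part in v.split("."))

    supported = [set(vs) for vs in plugins.values()]
    common = set.intersection(*supported) if supported else set()
    return sorted(common, key=numeric)
```

The first element is the lowest common version; an empty result means no single configuration version can drive all the plugins.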
- {
- "heading": "For Plugin Authors",
- "data": "This section provides guidance for upgrading plugins to CNI Spec Version 0.3.0."
- },
- {
- "heading": "General guidance for all plugins (language agnostic)",
- "data": "To provide the smoothest upgrade path, **existing plugins should support\n multiple versions of the CNI spec**. In particular, plugins with existing\n installed bases should add support for CNI spec version 1.0.0 while maintaining\n compatibility with older versions.\n To do this, two changes are required. First, a plugin should advertise which\n CNI spec versions it supports. It does this by responding to the `VERSION`\n command with the following JSON data:\n Second, for the `ADD` command, a plugin must respect the `cniVersion` field\n provided in the [network configuration JSON](SPEC.md#network-configuration).\n That field is a request for the plugin to return results of a particular format:\n - If the `cniVersion` field is not present, then spec v0.2.0 should be assumed\n and v0.2.0 format result JSON returned.\n - If the plugin doesn't support the version, the plugin must error.\n - Otherwise, the plugin must return a [CNI Result](SPEC.md#result)\n in the format requested.\n Result formats for older CNI spec versions are available in the\n [git history for SPEC.md](https://github.com/containernetworking/cni/commits/main/SPEC.md).\n For example, suppose a plugin, via its `VERSION` response, advertises CNI specification\n support for v0.2.0 and v0.3.0. When it receives `cniVersion` key of `0.2.0`,\n the plugin must return result JSON conforming to CNI spec version 0.2.0."
- },
- {
- "heading": "Specific guidance for plugins written in Go",
- "data": "Plugins written in Go may leverage the Go language packages in this repository\n to ease the process of upgrading and supporting multiple versions. CNI\n [Library and Plugins Release v0.5.0](https://github.com/containernetworking/cni/releases/tag/v0.5.0)\n includes important changes to the Golang APIs. Plugins using these APIs will\n require some changes now, but should more-easily handle spec changes and\n new features going forward.\n For plugin authors, the biggest change is that `types.Result` is now an\n interface implemented by concrete struct types in the `types/100`,\n `types/040`, and `types/020` subpackages.\n Internally, plugins should use the latest spec version (eg `types/100`) structs,\n and convert to or from specific versions when required. A typical plugin will\n only need to do a single conversion when it is about to complete and\n needs to print the result JSON in the requested `cniVersion` format to stdout.\n The library function `types.PrintResult()` simplifies this by converting and\n printing in a single call.\n Additionally, the plugin should advertise which CNI Spec versions it supports\n via the 3rd argument to `skel.PluginMain()`.\n Here is some example code\n Alternately, to use the result from a delegated IPAM plugin, the `result`\n value might be formed like this:\n Other examples of spec v0.3.0-compatible plugins are the\n [main plugins in this repo](https://github.com/containernetworking/plugins/)"
- },
- {
- "heading": "For Runtime Authors",
- "data": "This section provides guidance for upgrading container runtimes to support\n CNI Spec Version 0.3.0 and later."
- },
- {
- "heading": "General guidance for all runtimes (language agnostic)",
- "data": ""
- },
- {
- "heading": "Support multiple CNI spec versions",
- "data": "To provide the smoothest upgrade path and support the broadest range of CNI\n plugins, **container runtimes should support multiple versions of the CNI spec**.\n In particular, runtimes with existing installed bases should add support for CNI\n spec version 0.3.0 and later while maintaining compatibility with older versions.\n To support multiple versions of the CNI spec, runtimes should be able to\n call both new and legacy plugins, and handle the results from either.\n When calling a plugin, the runtime must request that the plugin respond in a\n particular format by specifying the `cniVersion` field in the\n [Network Configuration](SPEC.md#network-configuration)\n JSON block. The plugin will then respond with\n a [Result](SPEC.md#result)\n in the format defined by that CNI spec version, and the runtime must parse\n and handle this result."
- },
- {
- "heading": "Handle errors due to version incompatibility",
- "data": "Plugins may respond with error indicating that they don't support the requested\n CNI version (see [Well-known Error Codes](SPEC.md#well-known-error-codes)),\n e.g.\n In that case, the runtime may retry with a lower CNI spec version, or take\n some other action."
- },
- {
- "heading": "(optional) Discover plugin version support",
- "data": "Runtimes may discover which CNI spec versions are supported by a plugin, by\n calling the plugin with the `VERSION` command. The `VERSION` command was\n added in CNI spec v0.2.0, so older plugins may not respect it. In the absence\n of a successful response to `VERSION`, assume that the plugin only supports\n CNI spec v0.1.0."
- },
- {
- "heading": "Handle missing data in v0.3.0 and later results",
- "data": "The Result for the `ADD` command in CNI spec version 0.3.0 and later includes\n a new field `interfaces`. An IP address in the `ip` field may describe which\n interface it is assigned to, by placing a numeric index in the `interface`\n subfield.\n However, some plugins which are v0.3.0 and later compatible may nonetheless\n omit the `interfaces` field and/or set the `interface` index value to `-1`.\n Runtimes should gracefully handle this situation, unless they have good reason\n to rely on the existence of the interface data. In that case, provide the user\n an error message that helps diagnose the issue."
- },
- {
- "heading": "Specific guidance for container runtimes written in Go",
- "data": "Container runtimes written in Go may leverage the Go language packages in this repository to ease the process of upgrading and supporting multiple versions. CNI [Library and Plugins Release v0.5.0](https://github.com/containernetworking/cni/releases) includes important changes to the Golang APIs. Runtimes using these APIs will require some changes now, but should more-easily handle spec changes and new features going forward. For runtimes, the biggest changes to the Go libraries are in the `types` package. It has been refactored to make working with versioned results simpler. The top-level `types.Result` is now an opaque interface instead of a struct, and APIs exposed by other packages, such as the high-level `libcni` package, have been updated to use this interface. Concrete types are now per-version subpackages. The `types/current` subpackage contains the latest (spec v0.3.0) types. When up-converting older result types to spec v0.3.0 and later, fields new in spec v0.3.0 and later (like `interfaces`) may be empty. Conversely, when down-converting v0.3.0 and later results to an older version, any data in those fields will be lost. | From | 0.1 | 0.2 | 0.3 | 0.4 | 1.0 | |--------|-----|-----|-----|-----|-----| | To 0.1 | \u2714 | \u2714 | x | x | x | | To 0.2 | \u2714 | \u2714 | x | x | x | | To 0.3 | \u2734 | \u2734 | \u2714 | \u2714 | \u2714 | | To 0.4 | \u2734 | \u2734 | \u2714 | \u2714 | \u2714 | | To 1.0 | \u2734 | \u2734 | \u2714 | \u2714 | \u2714 | Key: > \u2714 : lossless conversion > \u2734 : higher-version output may have empty fields > x : lower-version output is missing some data A container runtime should use `current.NewResultFromResult()` to convert the opaque `types.Result` to a concrete `current.Result` struct. It may then work with the fields exposed by that struct:"
- },
- {
- "additional_info": "CNI v1.0 has the following changes: - non-List configurations are removed - the `version` field in the `interfaces` array was redundant and is removed This means that runtimes need to explicitly select a version they support. This reduces code breakage when revendoring cni into other projects and returns the decision on which CNI Spec versions a plugin supports to the plugin's authors. For example, your Go imports might look like ```go import ( cniv1 \"github.com/containernetworking/cni/pkg/types/100\" ) ``` CNI v0.4 has the following important changes: - A new verb, \"CHECK\", was added. Runtimes can now ask plugins to verify the status of a container's attachment - A new configuration flag, `disableCheck`, which indicates to the runtime that configuration should not be CHECK'ed No changes were made to the result type. The 0.3.0 specification contained a small error. The Result structure's `ip` field should have been renamed to `ips` to be consistent with the IPAM result structure definition; this rename was missed when updating the Result to accommodate multiple IP addresses and interfaces. All first-party CNI plugins (bridge, host-local, etc) were updated to use `ips` (and thus be inconsistent with the 0.3.0 specification) and most other plugins have not been updated to the 0.3.0 specification yet, so few (if any) users should be impacted by this change. The 0.3.1 specification corrects the `Result` structure to use the `ips` field name as originally intended. This is the only change between 0.3.0 and 0.3.1. Version 0.3.0 of the [CNI Specification](https://github.com/containernetworking/cni/blob/spec-v0.3.0/SPEC.md) provides rich information about container network configuration, including details of network interfaces and support for multiple IP addresses. To support this new data, the specification changed in a couple significant ways that will impact CNI users, plugin authors, and runtime authors. 
This document provides guidance for how to upgrade: - [For CNI Users](#for-cni-users) - [For Plugin Authors](#for-plugin-authors) - [For Runtime Authors](#for-runtime-authors) **Note**: the CNI Spec is versioned independently from the GitHub releases for this repo. For example, Release v0.4.0 supports Spec version v0.2.0, and Release v0.5.0 supports Spec v0.3.0. ---- If you maintain CNI configuration files for a container runtime that uses CNI, ensure that the configuration files specify a `cniVersion` field and that the version there is supported by your container runtime and CNI plugins. Configuration files without a version field should be given version 0.2.0. The CNI spec includes example configuration files for [single plugins](SPEC.md#example-configurations) and for [lists of chained plugins](SPEC.md#example-configurations). Consult the documentation for your runtime and plugins to determine what CNI spec versions they support. Test any plugin upgrades before deploying to production. You may find [cnitool](https://github.com/containernetworking/cni/tree/main/cnitool) useful. Specifically, your configuration version should be the lowest common version supported by your plugins. This section provides guidance for upgrading plugins to CNI Spec Version 0.3.0. To provide the smoothest upgrade path, **existing plugins should support multiple versions of the CNI spec**. In particular, plugins with existing installed bases should add support for CNI spec version 1.0.0 while maintaining compatibility with older versions. To do this, two changes are required. First, a plugin should advertise which CNI spec versions it supports. 
It does this by responding to the `VERSION` command with the following JSON data: ```json { \"cniVersion\": \"1.0.0\", \"supportedVersions\": [ \"0.1.0\", \"0.2.0\", \"0.3.0\", \"0.3.1\", \"0.4.0\", \"1.0.0\" ] } ``` Second, for the `ADD` command, a plugin must respect the `cniVersion` field provided in the [network configuration JSON](SPEC.md#network-configuration). That field is a request for the plugin to return results of a particular format: - If the `cniVersion` field is not present, then spec v0.2.0 should be assumed and v0.2.0 format result JSON returned. - If the plugin doesn't support the version, the plugin must error. - Otherwise, the plugin must return a [CNI Result](SPEC.md#result) in the format requested. Result formats for older CNI spec versions are available in the [git history for SPEC.md](https://github.com/containernetworking/cni/commits/main/SPEC.md). For example, suppose a plugin, via its `VERSION` response, advertises CNI specification support for v0.2.0 and v0.3.0. When it receives `cniVersion` key of `0.2.0`, the plugin must return result JSON conforming to CNI spec version 0.2.0. Plugins written in Go may leverage the Go language packages in this repository to ease the process of upgrading and supporting multiple versions. CNI [Library and Plugins Release v0.5.0](https://github.com/containernetworking/cni/releases/tag/v0.5.0) includes important changes to the Golang APIs. Plugins using these APIs will require some changes now, but should more-easily handle spec changes and new features going forward. For plugin authors, the biggest change is that `types.Result` is now an interface implemented by concrete struct types in the `types/100`, `types/040`, and `types/020` subpackages. Internally, plugins should use the latest spec version (eg `types/100`) structs, and convert to or from specific versions when required. 
A typical plugin will only need to do a single conversion when it is about to complete and needs to print the result JSON in the requested `cniVersion` format to stdout. The library function `types.PrintResult()` simplifies this by converting and printing in a single call. Additionally, the plugin should advertise which CNI Spec versions it supports via the 3rd argument to `skel.PluginMain()`. Here is some example code: ```go import ( \"encoding/json\" \"github.com/containernetworking/cni/pkg/skel\" \"github.com/containernetworking/cni/pkg/types\" current \"github.com/containernetworking/cni/pkg/types/100\" \"github.com/containernetworking/cni/pkg/version\" ) func cmdAdd(args *skel.CmdArgs) error { // determine spec version to use var netConf struct { types.NetConf // other plugin-specific configuration goes here } err := json.Unmarshal(args.StdinData, &netConf) cniVersion := netConf.CNIVersion // plugin does its work... // set up interfaces // assign addresses, etc // construct the result result := &current.Result{ Interfaces: []*current.Interface{ ... }, IPs: []*current.IPConfig{ ... }, ... } // print result to stdout, in the format defined by the requested cniVersion return types.PrintResult(result, cniVersion) } func main() { skel.PluginMain(cmdAdd, cmdDel, version.All) } ``` Alternately, to use the result from a delegated IPAM plugin, the `result` value might be formed like this: ```go ipamResult, err := ipam.ExecAdd(netConf.IPAM.Type, args.StdinData) result, err := current.NewResultFromResult(ipamResult) ``` Other examples of spec v0.3.0-compatible plugins are the [main plugins in this repo](https://github.com/containernetworking/plugins/). This section provides guidance for upgrading container runtimes to support CNI Spec Version 0.3.0 and later. To provide the smoothest upgrade path and support the broadest range of CNI plugins, **container runtimes should support multiple versions of the CNI spec**. 
In particular, runtimes with existing installed bases should add support for CNI spec version 0.3.0 and later while maintaining compatibility with older versions. To support multiple versions of the CNI spec, runtimes should be able to call both new and legacy plugins, and handle the results from either. When calling a plugin, the runtime must request that the plugin respond in a particular format by specifying the `cniVersion` field in the [Network Configuration](SPEC.md#network-configuration) JSON block. The plugin will then respond with a [Result](SPEC.md#result) in the format defined by that CNI spec version, and the runtime must parse and handle this result. Plugins may respond with error indicating that they don't support the requested CNI version (see [Well-known Error Codes](SPEC.md#well-known-error-codes)), e.g. ```json { \"cniVersion\": \"0.2.0\", \"code\": 1, \"msg\": \"CNI version not supported\" } ``` In that case, the runtime may retry with a lower CNI spec version, or take some other action. Runtimes may discover which CNI spec versions are supported by a plugin, by calling the plugin with the `VERSION` command. The `VERSION` command was added in CNI spec v0.2.0, so older plugins may not respect it. In the absence of a successful response to `VERSION`, assume that the plugin only supports CNI spec v0.1.0. The Result for the `ADD` command in CNI spec version 0.3.0 and later includes a new field `interfaces`. An IP address in the `ip` field may describe which interface it is assigned to, by placing a numeric index in the `interface` subfield. However, some plugins which are v0.3.0 and later compatible may nonetheless omit the `interfaces` field and/or set the `interface` index value to `-1`. Runtimes should gracefully handle this situation, unless they have good reason to rely on the existence of the interface data. In that case, provide the user an error message that helps diagnose the issue. 
Container runtimes written in Go may leverage the Go language packages in this repository to ease the process of upgrading and supporting multiple versions. CNI [Library and Plugins Release v0.5.0](https://github.com/containernetworking/cni/releases) includes important changes to the Golang APIs. Runtimes using these APIs will require some changes now, but should more-easily handle spec changes and new features going forward. For runtimes, the biggest changes to the Go libraries are in the `types` package. It has been refactored to make working with versioned results simpler. The top-level `types.Result` is now an opaque interface instead of a struct, and APIs exposed by other packages, such as the high-level `libcni` package, have been updated to use this interface. Concrete types are now per-version subpackages. The `types/current` subpackage contains the latest (spec v0.3.0) types. When up-converting older result types to spec v0.3.0 and later, fields new in spec v0.3.0 and later (like `interfaces`) may be empty. Conversely, when down-converting v0.3.0 and later results to an older version, any data in those fields will be lost. | From | 0.1 | 0.2 | 0.3 | 0.4 | 1.0 | |--------|-----|-----|-----|-----|-----| | To 0.1 | \u2714 | \u2714 | x | x | x | | To 0.2 | \u2714 | \u2714 | x | x | x | | To 0.3 | \u2734 | \u2734 | \u2714 | \u2714 | \u2714 | | To 0.4 | \u2734 | \u2734 | \u2714 | \u2714 | \u2714 | | To 1.0 | \u2734 | \u2734 | \u2714 | \u2714 | \u2714 | Key: > \u2714 : lossless conversion > \u2734 : higher-version output may have empty fields > x : lower-version output is missing some data A container runtime should use `current.NewResultFromResult()` to convert the opaque `types.Result` to a concrete `current.Result` struct. 
It may then work with the fields exposed by that struct: ```go // runtime invokes the plugin to get the opaque types.Result // this may conform to any CNI spec version resultInterface, err := libcni.AddNetwork(ctx, netConf, runtimeConf) // upconvert result to the current 0.3.0 spec result, err := current.NewResultFromResult(resultInterface) // use the result fields .... for _, ip := range result.IPs { ... } ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Container Network Interface (CNI)",
- "file_name": "SPEC.md"
- },
- "content": [
- {
- "heading": "Container Network Interface (CNI) Specification",
- "data": "- [Container Network Interface (CNI) Specification](#container-network-interface-cni-specification)\n - [Version](#version)\n - [Released versions](#released-versions)\n - [Overview](#overview)\n - [Summary](#summary)\n - [Section 1: Network configuration format](#section-1-network-configuration-format)\n - [Configuration format](#configuration-format)\n - [Plugin configuration objects:](#plugin-configuration-objects)\n - [Example configuration](#example-configuration)\n - [Version considerations](#version-considerations)\n - [Section 2: Execution Protocol](#section-2-execution-protocol)\n - [Overview](#overview-1)\n - [Parameters](#parameters)\n - [Errors](#errors)\n - [CNI operations](#cni-operations)\n - [`ADD`: Add container to network, or apply modifications](#add-add-container-to-network-or-apply-modifications)\n - [`DEL`: Remove container from network, or un-apply modifications](#del-remove-container-from-network-or-un-apply-modifications)\n - [`CHECK`: Check container's networking is as expected](#check-check-containers-networking-is-as-expected)\n - [`STATUS`: Check plugin status](#status-check-plugin-status)\n - [`VERSION`: probe plugin version support](#version-probe-plugin-version-support)\n - [`GC`: Clean up any stale resources](#gc-clean-up-any-stale-resources)\n - [Section 3: Execution of Network Configurations](#section-3-execution-of-network-configurations)\n - [Lifecycle \\& Ordering](#lifecycle--ordering)\n - [Attachment Parameters](#attachment-parameters)\n - [Adding an attachment](#adding-an-attachment)\n - [Deleting an attachment](#deleting-an-attachment)\n - [Checking an attachment](#checking-an-attachment)\n - [Garbage-collecting a network](#garbage-collecting-a-network)\n - [Deriving request configuration from plugin configuration](#deriving-request-configuration-from-plugin-configuration)\n - [Deriving `runtimeConfig`](#deriving-runtimeconfig)\n - [Section 4: Plugin Delegation](#section-4-plugin-delegation)\n - [Delegated Plugin 
protocol](#delegated-plugin-protocol)\n - [Delegated plugin execution procedure](#delegated-plugin-execution-procedure)\n - [Section 5: Result Types](#section-5-result-types)\n - [ADD Success](#add-success)\n - [Delegated plugins (IPAM)](#delegated-plugins-ipam)\n - [VERSION Success](#version-success)\n - [Error](#error)\n - [Version](#version-1)\n - [Appendix: Examples](#appendix-examples)\n - [Add example](#add-example)\n - [Check example](#check-example)\n - [Delete example](#delete-example)"
- },
- {
- "heading": "Version",
- "data": "This is CNI **spec** version **1.1.0**.\n Note that this is **independent from the version of the CNI library and plugins** in this repository (e.g. the versions of [releases](https://github.com/containernetworking/cni/releases))."
- },
- {
- "heading": "Released versions",
- "data": "Released versions of the spec are available as Git tags.\n | tag | spec permalink | major changes |\n | ------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------- | --------------------------------- |\n | [`spec-v1.0.0`](https://github.com/containernetworking/cni/releases/tag/spec-v1.0.0) | [spec at v1.0.0](https://github.com/containernetworking/cni/blob/spec-v1.0.0/SPEC.md) | Removed non-list configurations; removed `version` field of `interfaces` array |\n | [`spec-v0.4.0`](https://github.com/containernetworking/cni/releases/tag/spec-v0.4.0) | [spec at v0.4.0](https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md) | Introduce the CHECK command and passing prevResult on DEL |\n | [`spec-v0.3.1`](https://github.com/containernetworking/cni/releases/tag/spec-v0.3.1) | [spec at v0.3.1](https://github.com/containernetworking/cni/blob/spec-v0.3.1/SPEC.md) | none (typo fix only) |\n | [`spec-v0.3.0`](https://github.com/containernetworking/cni/releases/tag/spec-v0.3.0) | [spec at v0.3.0](https://github.com/containernetworking/cni/blob/spec-v0.3.0/SPEC.md) | rich result type, plugin chaining |\n | [`spec-v0.2.0`](https://github.com/containernetworking/cni/releases/tag/spec-v0.2.0) | [spec at v0.2.0](https://github.com/containernetworking/cni/blob/spec-v0.2.0/SPEC.md) | VERSION command |\n | [`spec-v0.1.0`](https://github.com/containernetworking/cni/releases/tag/spec-v0.1.0) | [spec at v0.1.0](https://github.com/containernetworking/cni/blob/spec-v0.1.0/SPEC.md) | initial version |"
- },
- {
- "heading": "Do not rely on these tags being stable. In the future, we may change our mind about which particular commit is the right marker for a given historical spec version.",
- "data": ""
- },
- {
- "heading": "Overview",
- "data": "This document proposes a generic plugin-based networking solution for application containers on Linux, the _Container Networking Interface_, or _CNI_.\n For the purposes of this proposal, we define three terms very specifically:\n - _container_ is a network isolation domain, though the actual isolation technology is not defined by the specification. This could be a [network namespace][namespaces] or a virtual machine, for example.\n - _network_ refers to a group of endpoints that are uniquely addressable that can communicate amongst each other. This could be either an individual container (as specified above), a machine, or some other network device (e.g. a router). Containers can be conceptually _added to_ or _removed from_ one or more networks.\n - _runtime_ is the program responsible for executing CNI plugins.\n - _plugin_ is a program that applies a specified network configuration.\n This document aims to specify the interface between \"runtimes\" and \"plugins\". The key words \"must\", \"must not\", \"required\", \"shall\", \"shall not\", \"should\", \"should not\", \"recommended\", \"may\" and \"optional\" are used as specified in [RFC 2119][rfc-2119].\n [namespaces]: http://man7.org/linux/man-pages/man7/namespaces.7.html\n [rfc-2119]: https://www.ietf.org/rfc/rfc2119.txt"
- },
- {
- "heading": "Summary",
- "data": "The CNI specification defines:\n 1. A format for administrators to define network configuration.\n 2. A protocol for container runtimes to make requests to network plugins.\n 3. A procedure for executing plugins based on a supplied configuration.\n 4. A procedure for plugins to delegate functionality to other plugins.\n 5. Data types for plugins to return their results to the runtime."
- },
- {
- "heading": "Section 1: Network configuration format",
- "data": "CNI defines a network configuration format for administrators. It contains\n directives for both the container runtime as well as the plugins to consume. At\n plugin execution time, this configuration format is interpreted by the runtime and\n transformed in to a form to be passed to the plugins.\n In general, the network configuration is intended to be static. It can conceptually\n be thought of as being \"on disk\", though the CNI specification does not actually\n require this."
- },
- {
- "heading": "Configuration format",
- "data": "A network configuration consists of a JSON object with the following keys:\n - `cniVersion` (string): [Semantic Version 2.0](https://semver.org) of CNI specification to which this configuration list and all the individual configurations conform. Currently \"1.1.0\"\n - `cniVersions` (string list): List of all CNI versions which this configuration supports. See [version selection](#version-selection) below.\n - `name` (string): Network name. This should be unique across all network configurations on a host (or other administrative domain). Must start with an alphanumeric character, optionally followed by any combination of one or more alphanumeric characters, underscore, dot (.) or hyphen (-).\n - `disableCheck` (boolean): Either `true` or `false`. If `disableCheck` is `true`, runtimes must not call `CHECK` for this network configuration list. This allows an administrator to prevent `CHECK`ing where a combination of plugins is known to return spurious errors.\n - `plugins` (list): A list of CNI plugins and their configuration, which is a list of plugin configuration objects."
- },
- {
- "heading": "Plugin configuration objects:",
- "data": "Plugin configuration objects may contain additional fields than the ones defined here.\n The runtime MUST pass through these fields, unchanged, to the plugin, as defined in section 3."
- },
- {
- "heading": "Required keys:",
- "data": "- `type` (string): Matches the name of the CNI plugin binary on disk. Must not contain characters disallowed in file paths for the system (e.g. / or \\\\)."
- },
- {
- "heading": "Optional keys, used by the protocol:",
- "data": "- `capabilities` (dictionary): Defined in [section 3](#Deriving-runtimeConfig)"
- },
- {
- "heading": "Reserved keys, used by the protocol:",
- "data": "These keys are generated by the runtime at execution time, and thus should not be used in configuration.\n - `runtimeConfig`\n - `args`\n - Any keys starting with `cni.dev/`"
- },
- {
- "heading": "Optional keys, well-known:",
- "data": "These keys are not used by the protocol, but have a standard meaning to plugins.\n Plugins that consume any of these configuration keys should respect their intended semantics.\n - `ipMasq` (boolean): If supported by the plugin, sets up an IP masquerade on the host for this network. This is necessary if the host will act as a gateway to subnets that are not able to route to the IP assigned to the container.\n - `ipam` (dictionary): Dictionary with IPAM (IP Address Management) specific values:\n - `type` (string): Refers to the filename of the IPAM plugin executable. Must not contain characters disallowed in file paths for the system (e.g. / or \\\\).\n - `dns` (dictionary, optional): Dictionary with DNS specific values:\n - `nameservers` (list of strings, optional): list of a priority-ordered list of DNS nameservers that this network is aware of. Each entry in the list is a string containing either an IPv4 or an IPv6 address.\n - `domain` (string, optional): the local domain used for short hostname lookups.\n - `search` (list of strings, optional): list of priority ordered search domains for short hostname lookups. Will be preferred over `domain` by most resolvers.\n - `options` (list of strings, optional): list of options that can be passed to the resolver"
- },
- {
- "heading": "Other keys:",
- "data": "Plugins may define additional fields that they accept and may generate an error if called with unknown fields. Runtimes must preserve unknown fields in plugin configuration objects when transforming for execution."
- },
- {
- "heading": "Example configuration",
- "data": ""
- },
- {
- "heading": "Version considerations",
- "data": "CNI runtimes, plugins, and network configurations may support multiple CNI specification versions independently. Plugins indicate their set of supported versions through the VERSION command, while network configurations indicate their set of supported versions through the `cniVersion` and `cniVersions` fields.\n CNI runtimes MUST select the highest supported version from the set of network configuration versions given by the `cniVersion` and `cniVersions` fields. Runtimes MAY consider the set of supported plugin versions as reported by the VERSION command when determining available versions.\n The CNI protocol follows Semantic Versioning principles, so the configuration format MUST remain backwards and forwards compatible within major versions."
- },
- {
- "heading": "Section 2: Execution Protocol",
- "data": ""
- },
- {
- "heading": "Overview",
- "data": "The CNI protocol is based on execution of binaries invoked by the container runtime. CNI defines the protocol between the plugin binary and the runtime.\n A CNI plugin is responsible for configuring a container's network interface in some manner. Plugins fall in to two broad categories:\n * \"Interface\" plugins, which create a network interface inside the container and ensure it has connectivity.\n * \"Chained\" plugins, which adjust the configuration of an already-created interface (but may need to create more interfaces to do so).\n The runtime passes parameters to the plugin via environment variables and configuration. It supplies configuration via stdin. The plugin returns\n a [result](#Section-5-Result-Types) on stdout on success, or an error on stderr if the operation fails. Configuration and results are encoded in JSON.\n Parameters define invocation-specific settings, whereas configuration is, with some exceptions, the same for any given network.\n The runtime must execute the plugin in the runtime's networking domain. (For most cases, this means the root network namespace / `dom0`)."
- },
- {
- "heading": "Parameters",
- "data": "Protocol parameters are passed to the plugins via OS environment variables.\n - `CNI_COMMAND`: indicates the desired operation; `ADD`, `DEL`, `CHECK`, `GC`, or `VERSION`.\n - `CNI_CONTAINERID`: Container ID. A unique plaintext identifier for a container, allocated by the runtime. Must not be empty. Must start with an alphanumeric character, optionally followed by any combination of one or more alphanumeric characters, underscore (), dot (.) or hyphen (-).\n - `CNI_NETNS`: A reference to the container's \"isolation domain\". If using network namespaces, then a path to the network namespace (e.g. `/run/netns/[nsname]`)\n - `CNI_IFNAME`: Name of the interface to create inside the container; if the plugin is unable to use this interface name it must return an error.\n - `CNI_ARGS`: Extra arguments passed in by the user at invocation time. Alphanumeric key-value pairs separated by semicolons; for example, \"FOO=BAR;ABC=123\"\n - `CNI_PATH`: List of paths to search for CNI plugin executables. Paths are separated by an OS-specific list separator; for example ':' on Linux and ';' on Windows"
- },
- {
- "heading": "Errors",
- "data": "A plugin must exit with a return code of 0 on success, and non-zero on failure. If the plugin encounters an error, it should output an [\"error\" result structure](#Error) (see below)."
- },
- {
- "heading": "CNI operations",
- "data": "CNI defines 5 operations: `ADD`, `DEL`, `CHECK`, `GC`, and `VERSION`. These are passed to the plugin via the `CNI_COMMAND` environment variable."
- },
- {
- "heading": "`ADD`: Add container to network, or apply modifications",
- "data": "A CNI plugin, upon receiving an `ADD` command, should either\n - create the interface defined by `CNI_IFNAME` inside the container at `CNI_NETNS`, or\n - adjust the configuration of the interface defined by `CNI_IFNAME` inside the container at `CNI_NETNS`.\n If the CNI plugin is successful, it must output a [result structure](#Success) (see below) on standard out. If the plugin was supplied a `prevResult` as part of its input configuration, it MUST handle `prevResult` by either passing it through, or modifying it appropriately.\n If an interface of the requested name already exists in the container, the CNI plugin MUST return with an error.\n A runtime should not call `ADD` twice (without an intervening DEL) for the same `(CNI_CONTAINERID, CNI_IFNAME)` tuple. This implies that a given container ID may be added to a specific network more than once only if each addition is done with a different interface name."
- },
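The `prevResult` handling requirement can be illustrated with a toy `ADD` handler for a chained plugin that changes nothing visible in the result: it must echo the provided `prevResult` as its own result. A hedged sketch (the `add` function is invented; the `dbnet`/`tuning` values follow the spec's examples):

```python
import json

def add(conf: dict) -> dict:
    """ADD handler for a hypothetical chained plugin: pass prevResult through."""
    prev = conf.get("prevResult")
    if prev is None:
        raise RuntimeError("this chained plugin requires a prevResult")
    # ... adjust the configuration of the CNI_IFNAME interface here ...
    result = dict(prev)                      # unmodified pass-through
    result["cniVersion"] = conf["cniVersion"]
    return result

conf = json.loads("""{
  "cniVersion": "1.1.0",
  "name": "dbnet",
  "type": "tuning",
  "prevResult": {"cniVersion": "1.1.0", "interfaces": [{"name": "eth0"}]}
}""")
print(json.dumps(add(conf)))
```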
- {
- "heading": "Input:",
- "data": "The runtime will provide a JSON-serialized plugin configuration object (defined below) on standard in.\n Required environment parameters:\n - `CNI_COMMAND`\n - `CNI_CONTAINERID`\n - `CNI_NETNS`\n - `CNI_IFNAME`\n Optional environment parameters:\n - `CNI_ARGS`\n - `CNI_PATH`"
- },
- {
- "heading": "`DEL`: Remove container from network, or un-apply modifications",
- "data": "A CNI plugin, upon receiving a `DEL` command, should either\n - delete the interface defined by `CNI_IFNAME` inside the container at `CNI_NETNS`, or\n - undo any modifications applied in the plugin's `ADD` functionality\n Plugins should generally complete a `DEL` action without error even if some resources are missing. For example, an IPAM plugin should generally release an IP allocation and return success even if the container network namespace no longer exists, unless that network namespace is critical for IPAM management. While DHCP may usually send a 'release' message on the container network interface, since DHCP leases have a lifetime this release action would not be considered critical and no error should be returned if this action fails. For another example, the `bridge` plugin should delegate the DEL action to the IPAM plugin and clean up its own resources even if the container network namespace and/or container network interface no longer exist.\n Plugins MUST accept multiple `DEL` calls for the same (`CNI_CONTAINERID`, `CNI_IFNAME`) pair, and return success if the interface in question, or any modifications added, are missing."
- },
- {
- "heading": "Input:",
- "data": "The runtime will provide a JSON-serialized plugin configuration object (defined below) on standard in.\n Required environment parameters:\n - `CNI_COMMAND`\n - `CNI_CONTAINERID`\n - `CNI_IFNAME`\n Optional environment parameters:\n - `CNI_NETNS`\n - `CNI_ARGS`\n - `CNI_PATH`"
- },
- {
- "heading": "`CHECK`: Check container's networking is as expected",
- "data": "`CHECK` is a way for a runtime to probe the status of an existing container.\n Plugin considerations:\n - The plugin must consult the `prevResult` to determine the expected interfaces and addresses.\n - The plugin must allow for a later chained plugin to have modified networking resources, e.g. routes, on `ADD`.\n - The plugin should return an error if a resource included in the CNI Result type (interface, address or route) was created by the plugin, and is listed in `prevResult`, but is missing or in an invalid state.\n - The plugin should return an error if other resources not tracked in the Result type such as the following are missing or are in an invalid state:\n - Firewall rules\n - Traffic shaping controls\n - IP reservations\n - External dependencies such as a daemon required for connectivity\n - etc.\n - The plugin should return an error if it is aware of a condition where the container is generally unreachable.\n - The plugin must handle `CHECK` being called immediately after an `ADD`, and therefore should allow a reasonable convergence delay for any asynchronous resources.\n - The plugin should call `CHECK` on any delegated (e.g. IPAM) plugins and pass any errors on to its caller.\n Runtime considerations:\n - A runtime must not call `CHECK` for a container that has not been `ADD`ed, or has been `DEL`eted after its last `ADD`.\n - A runtime must not call `CHECK` if `disableCheck` is set to `true` in the [configuration](#configuration-format).\n - A runtime must include a `prevResult` field in the network configuration containing the `Result` of the immediately preceding `ADD` for the container. 
The runtime may wish to use libcni's support for caching `Result`s.\n - A runtime may choose to stop executing `CHECK` for a chain when a plugin returns an error.\n - A runtime may execute `CHECK` from immediately after a successful `ADD`, up until the container is `DEL`eted from the network.\n - A runtime may assume that a failed `CHECK` means the container is permanently in a misconfigured state."
- },
- {
- "heading": "Input:",
- "data": "The runtime will provide a JSON-serialized plugin configuration object (defined below) on standard in.\n Required environment parameters:\n - `CNI_COMMAND`\n - `CNI_CONTAINERID`\n - `CNI_NETNS`\n - `CNI_IFNAME`\n Optional environment parameters:\n - `CNI_ARGS`\n - `CNI_PATH`\n All parameters, with the exception of `CNI_PATH`, must be the same as the corresponding `ADD` for this container."
- },
- {
- "heading": "`STATUS`: Check plugin status",
- "data": "`STATUS` is a way for a runtime to determine the readiness of a network plugin.\n A plugin must exit with a zero (success) return code if the plugin is ready to service ADD requests. If the plugin knows that it is not able to service ADD requests, it must exit with a non-zero return code and output an error on standard out (see below).\n For example, if a plugin relies on an external service or daemon, it should return an error to `STATUS` if that service is unavailable. Likewise, if a plugin has a limited number of resources (e.g. IP addresses, hardware queues), it should return an error if those resources are exhausted and no new `ADD` requests can be serviced.\n The following error codes are defined in the context of `STATUS`:\n - 50: The plugin is not available (i.e. cannot service `ADD` requests)\n - 51: The plugin is not available, and existing containers in the network may have limited connectivity.\n Plugin considerations:\n - Status is purely informational. A plugin MUST NOT rely on `STATUS` being called.\n - Plugins should always expect other CNI operations (like `ADD`, `DEL`, etc.) even if `STATUS` returns an error. `STATUS` does not prevent other runtime requests.\n - If a plugin relies on a delegated plugin (e.g. IPAM) to service `ADD` requests, it must also execute a `STATUS` request to that plugin when it receives a `STATUS` request for itself. If the delegated plugin returns an error result, the executing plugin should return an error result."
- },
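A hedged sketch of mapping readiness checks onto the two `STATUS` error codes above. The checks themselves (`daemon_up`, `ips_free`) are placeholders, not anything defined by the spec:

```python
ERR_NOT_AVAILABLE = 50   # plugin cannot service ADD requests
ERR_DEGRADED = 51        # and existing containers may have limited connectivity

def status(daemon_up: bool, ips_free: int):
    """Return an error object if not ready, or None for ready (exit 0)."""
    if not daemon_up:
        return {"cniVersion": "1.1.0", "code": ERR_DEGRADED,
                "msg": "control daemon unreachable", "details": ""}
    if ips_free == 0:
        return {"cniVersion": "1.1.0", "code": ERR_NOT_AVAILABLE,
                "msg": "IP pool exhausted", "details": ""}
    return None  # ready: exit zero with no output
```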
- {
- "heading": "Input:",
- "data": "The runtime will provide a JSON-serialized plugin configuration object (defined below) on standard in.\n Optional environment parameters:\n - `CNI_PATH`"
- },
- {
- "heading": "`VERSION`: probe plugin version support",
- "data": "The plugin should output, via standard out, a JSON-serialized version result object (see below)."
- },
- {
- "heading": "Input:",
- "data": "A JSON-serialized object, with the following key:\n - `cniVersion`: The version of the protocol in use.\n Required environment parameters:\n - `CNI_COMMAND`"
- },
- {
- "heading": "`GC`: Clean up any stale resources",
- "data": "The GC command provides a way for runtimes to specify the expected set of attachments to a network.\n The network plugin may then remove any resources related to attachments that do not exist in this set.\n Resources may, for example, include:\n - IPAM reservations\n - Firewall rules\n A plugin SHOULD remove as many stale resources as possible. For example, a plugin should remove any IPAM reservations associated with attachments not in the provided list. The plugin MAY assume that the isolation domain (e.g. network namespace) has been deleted, and thus any resources (e.g. network interfaces) therein have been removed.\n Plugins should generally complete a `GC` action without error. If an error is encountered, a plugin should continue, removing as many resources as possible, and report all errors back to the runtime.\n Plugins MUST, additionally, forward any GC calls to delegated plugins they are configured to use (see section 4).\n The runtime MUST NOT use GC as a substitute for DEL. Plugins may be unable to clean up some resources from GC that they would have been able to clean up from DEL."
- },
- {
- "heading": "Input:",
- "data": "The runtime must provide a JSON-serialized plugin configuration object (defined below) on standard in. It contains an additional key:\n - `cni.dev/attachments` (array of objects): The list of **still valid** attachments to this network:\n - `containerID` (string): the value of CNI_CONTAINERID as provided during the CNI ADD operation\n - `ifname` (string): the value of CNI_IFNAME as provided during the CNI ADD operation\n Required environment parameters:\n - `CNI_COMMAND`\n - `CNI_PATH`"
- },
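A plugin can diff the `cni.dev/attachments` list against its own state to find what to release. A sketch under the assumption of a hypothetical IPAM plugin whose on-disk reservations are keyed by `(containerID, ifname)`:

```python
import json

gc_input = json.loads("""{
  "cniVersion": "1.1.0",
  "name": "dbnet",
  "plugins": [],
  "cni.dev/attachments": [
    {"containerID": "aaa111", "ifname": "eth0"}
  ]
}""")

# Reservations this (invented) IPAM plugin still holds locally:
local_state = {("aaa111", "eth0"): "10.1.0.5/24",
               ("bbb222", "eth0"): "10.1.0.9/24"}

# Anything not in the still-valid set is stale and should be released.
valid = {(a["containerID"], a["ifname"]) for a in gc_input["cni.dev/attachments"]}
stale = [key for key in local_state if key not in valid]
```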
- {
- "heading": "Output:",
- "data": "No output on success, [\"error\" result structure](#Error) on error."
- },
- {
- "heading": "Section 3: Execution of Network Configurations",
- "data": "This section describes how a container runtime interprets a network configuration (as defined in section 1) and executes plugins accordingly. A runtime may wish to _add_, _delete_, or _check_ a network configuration in a container. This results in a series of plugin `ADD`, `DEL`, or `CHECK` executions, correspondingly. This section also defines how a network configuration is transformed and provided to the plugin.\n The operation of a network configuration on a container is called an _attachment_. An attachment may be uniquely identified by the `(CNI_CONTAINERID, CNI_IFNAME)` tuple."
- },
- {
- "heading": "Lifecycle & Ordering",
- "data": "- The container runtime must create a new network namespace for the container before invoking any plugins.\n - The container runtime must not invoke parallel operations for the same container, but is allowed to invoke parallel operations for different containers. This includes across multiple attachments.\n - **Exception**: The runtime must exclusively execute either _gc_ or _add_ and _delete_. The runtime must ensure that no _add_ or _delete_ operations are in progress before executing _gc_, and must wait for _gc_ to complete before issuing new _add_ or _delete_ commands.\n - Plugins must handle being executed concurrently across different containers. If necessary, they must implement locking on shared resources (e.g. IPAM databases).\n - The container runtime must ensure that _add_ is eventually followed by a corresponding _delete_. The only exception is in the event of catastrophic failure, such as node loss. A _delete_ must still be executed even if the _add_ fails.\n - _delete_ may be followed by additional _deletes_.\n - The network configuration should not change between _add_ and _delete_.\n - The network configuration should not change between _attachments_.\n - The container runtime is responsible for cleanup of the container's network namespace."
- },
- {
- "heading": "Attachment Parameters",
- "data": "While a network configuration should not change between _attachments_, there are certain parameters supplied by the container runtime that are per-attachment. They are:\n - **Container ID:** A unique plaintext identifier for a container, allocated by the runtime. Must not be empty. Must start with an alphanumeric character, optionally followed by any combination of one or more alphanumeric characters, underscore (_), dot (.) or hyphen (-). During execution, always set as the `CNI_CONTAINERID` parameter.\n - **Namespace**: A reference to the container's \"isolation domain\". If using network namespaces, then a path to the network namespace (e.g. `/run/netns/[nsname]`). During execution, always set as the `CNI_NETNS` parameter.\n - **Container interface name**: Name of the interface to create inside the container. During execution, always set as the `CNI_IFNAME` parameter.\n - **Generic Arguments**: Extra arguments, in the form of key-value string pairs, that are relevant to a specific attachment. During execution, always set as the `CNI_ARGS` parameter.\n - **Capability Arguments**: These are also key-value pairs. The key is a string, whereas the value is any JSON-serializable type. The keys and values are defined by [convention](CONVENTIONS.md).\n Furthermore, the runtime must be provided a list of paths to search for CNI plugins. This must also be provided to plugins during execution via the `CNI_PATH` environment variable."
- },
- {
- "heading": "Adding an attachment",
- "data": "For every configuration defined in the `plugins` key of the network configuration,\n 1. Look up the executable specified in the `type` field. If this does not exist, then this is an error.\n 2. Derive request configuration from the plugin configuration, with the following parameters:\n - If this is the first plugin in the list, no previous result is provided,\n - For all additional plugins, the previous result is the result of the previous plugin.\n 3. Execute the plugin binary, with `CNI_COMMAND=ADD`. Provide parameters defined above as environment variables. Supply the derived configuration via standard in.\n 4. If the plugin returns an error, halt execution and return the error to the caller.\n The runtime must store the result returned by the final plugin persistently, as it is required for _check_ and _delete_ operations."
- },
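The steps above can be sketched as a loop over the chain. This is an illustrative sketch: plugin execution is stubbed with an in-process function rather than a binary, and the configuration derivation is simplified:

```python
def run_chain(netconf, execute):
    """ADD over a plugin chain; returns the final result (stored persistently)."""
    prev_result = None
    for plugin_conf in netconf["plugins"]:
        request = dict(plugin_conf)                 # derive request configuration
        request["cniVersion"] = netconf["cniVersion"]
        request["name"] = netconf["name"]
        if prev_result is not None:                 # first plugin gets no prevResult
            request["prevResult"] = prev_result
        prev_result = execute(plugin_conf["type"], request)  # raises on plugin error
    return prev_result

netconf = {"cniVersion": "1.1.0", "name": "dbnet",
           "plugins": [{"type": "bridge"}, {"type": "tuning"}]}

def fake_execute(plugin_type, request):
    """Stand-in for executing a plugin binary; records which plugins ran."""
    result = dict(request.get("prevResult", {"interfaces": []}))
    result.setdefault("visited", []).append(plugin_type)
    return result

final = run_chain(netconf, fake_execute)
```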
- {
- "heading": "Deleting an attachment",
- "data": "Deleting a network attachment is much the same as adding, with a few key differences:\n - The list of plugins is executed in **reverse order**\n - The previous result provided is always the final result of the _add_ operation.\n For every plugin defined in the `plugins` key of the network configuration, *in reverse order*,\n 1. Look up the executable specified in the `type` field. If this does not exist, then this is an error.\n 2. Derive request configuration from the plugin configuration, with the previous result from the initial _add_ operation.\n 3. Execute the plugin binary, with `CNI_COMMAND=DEL`. Provide parameters defined above as environment variables. Supply the derived configuration via standard in.\n 4. If the plugin returns an error, halt execution and return the error to the caller.\n If all plugins return success, return success to the caller."
- },
- {
- "heading": "Checking an attachment",
- "data": "The runtime may also ask every plugin to confirm that a given attachment is still functional. The runtime must use the same attachment parameters as it did for the _add_ operation.\n Checking is similar to add with two exceptions:\n - the previous result provided is always the final result of the _add_ operation.\n - If the network configuration defines `disableCheck`, then always return success to the caller.\n For every plugin defined in the `plugins` key of the network configuration,\n 1. Look up the executable specified in the `type` field. If this does not exist, then this is an error.\n 2. Derive request configuration from the plugin configuration, with the previous result from the initial _add_ operation.\n 3. Execute the plugin binary, with `CNI_COMMAND=CHECK`. Provide parameters defined above as environment variables. Supply the derived configuration via standard in.\n 4. If the plugin returns an error, halt execution and return the error to the caller.\n If all plugins return success, return success to the caller."
- },
- {
- "heading": "Garbage-collecting a network",
- "data": "The runtime may also ask every plugin in a network configuration to clean up any stale resources via the _GC_ command.\n When garbage-collecting a configuration, there are no [Attachment Parameters](#attachment-parameters).\n For every plugin defined in the `plugins` key of the network configuration,\n 1. Look up the executable specified in the `type` field. If this does not exist, then this is an error.\n 2. Derive request configuration from the plugin configuration.\n 3. Execute the plugin binary, with `CNI_COMMAND=GC`. Supply the derived configuration via standard in.\n 4. If the plugin returns an error, **continue** with execution, returning all errors to the caller.\n If all plugins return success, return success to the caller."
- },
- {
- "heading": "Deriving request configuration from plugin configuration",
- "data": "The network configuration format (which is a list of plugin configurations to execute) must be transformed to a format understood by the plugin (which is a single plugin configuration). This section describes that transformation.\n The request configuration for a single plugin invocation is also JSON. It consists of the plugin configuration, primarily unchanged except for the specified additions and removals.\n The following fields are always to be inserted into the request configuration by the runtime:\n - `cniVersion`: the protocol version selected by the runtime - the string \"1.1.0\"\n - `name`: taken from the `name` field of the network configuration\n For attachment-specific operations (ADD, DEL, CHECK), additional field requirements apply:\n - `runtimeConfig`: the runtime must insert an object consisting of the union of capabilities provided by the plugin and requested by the runtime (more details below).\n - `prevResult`: the runtime must insert an object consisting of the result type returned by the \"previous\" plugin. The meaning of \"previous\" is defined by the specific operation (_add_, _delete_, or _check_). This field must not be set for the first _add_ in a chain.\n - `capabilities`: must not be set\n For GC operations:\n - `cni.dev/attachments`: as specified in section 2.\n All other fields not prefixed with `cni.dev/` should be passed through unaltered."
- },
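The insertion and removal rules for an attachment-scoped operation can be condensed into a few lines. A sketch (the function name and sample values are illustrative):

```python
def derive_request(netconf, plugin_conf, runtime_config, prev_result, first_in_chain):
    """Transform one plugin configuration into a request configuration."""
    # Pass fields through, but `capabilities` must not be set on the request.
    request = {k: v for k, v in plugin_conf.items() if k != "capabilities"}
    request["cniVersion"] = netconf["cniVersion"]   # always inserted
    request["name"] = netconf["name"]               # always inserted
    if runtime_config:
        request["runtimeConfig"] = runtime_config
    if not first_in_chain:                          # no prevResult for the first ADD
        request["prevResult"] = prev_result
    return request

netconf = {"cniVersion": "1.1.0", "name": "dbnet"}
plugin_conf = {"type": "portmap", "capabilities": {"portMappings": True}}
request = derive_request(netconf, plugin_conf,
                         runtime_config={"portMappings": []},
                         prev_result={"interfaces": []},
                         first_in_chain=False)
```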
- {
- "heading": "Deriving `runtimeConfig`",
- "data": "Whereas CNI_ARGS are provided to all plugins, with no indication if they are going to be consumed, _Capability arguments_ need to be declared explicitly in configuration. The runtime, thus, can determine if a given network configuration supports a specific _capability_. Capabilities are not defined by the specification - rather, they are documented [conventions](CONVENTIONS.md).\n As defined in section 1, the plugin configuration includes an optional key, `capabilities`. This example shows a plugin that supports the `portMappings` capability:\n The `runtimeConfig` parameter is derived from the `capabilities` in the network configuration and the _capability arguments_ generated by the runtime. Specifically, any capability supported by the plugin configuration and provided by the runtime should be inserted in the `runtimeConfig`.\n Thus, the above example could result in the following being passed to the plugin as part of the execution configuration:"
- },
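The intersection rule (only capabilities both declared by the plugin and supplied by the runtime make it into `runtimeConfig`) fits in a few lines. A sketch with illustrative capability values:

```python
# A plugin declaring support for the portMappings capability (bandwidth disabled):
plugin_conf = {"type": "portmap",
               "capabilities": {"portMappings": True, "bandwidth": False}}

# Capability arguments generated by the runtime for this attachment:
runtime_caps = {"portMappings": [{"hostPort": 8080, "containerPort": 80,
                                  "protocol": "tcp"}],
                "ips": ["10.0.0.2/24"]}

# Keep only capabilities that are declared true AND provided by the runtime;
# "ips" is dropped (not declared) and "bandwidth" is dropped (declared false).
runtime_config = {name: runtime_caps[name]
                  for name, enabled in plugin_conf.get("capabilities", {}).items()
                  if enabled and name in runtime_caps}
```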
- {
- "heading": "Section 4: Plugin Delegation",
- "data": "There are some operations that, for whatever reason, cannot reasonably be implemented as a discrete chained plugin. Rather, a CNI plugin may wish to delegate some functionality to another plugin. One common example of this is IP address management.\n As part of its operation, a CNI plugin is expected to assign (and maintain) an IP address to the interface and install any necessary routes relevant for that interface. This gives the CNI plugin great flexibility but also places a large burden on it. Many CNI plugins would need to have the same code to support several IP management schemes that users may desire (e.g. dhcp, host-local). A CNI plugin may choose to delegate IP management to another plugin.\n To lessen the burden and make IP management strategy be orthogonal to the type of CNI plugin, we define a third type of plugin -- IP Address Management Plugin (IPAM plugin), as well as a protocol for plugins to delegate functionality to other plugins.\n It is however the responsibility of the CNI plugin, rather than the runtime, to invoke the IPAM plugin at the proper moment in its execution. The IPAM plugin must determine the interface IP/subnet, Gateway and Routes and return this information to the \"main\" plugin to apply. The IPAM plugin may obtain the information via a protocol (e.g. dhcp), data stored on a local filesystem, the \"ipam\" section of the Network Configuration file, etc."
- },
- {
- "heading": "Delegated Plugin protocol",
- "data": "Like CNI plugins, delegated plugins are invoked by running an executable. The executable is searched for in a predefined list of paths, indicated to the CNI plugin via `CNI_PATH`. The delegated plugin must receive all the same environment variables that were passed in to the CNI plugin. Just like the CNI plugin, delegated plugins receive the network configuration via stdin and output results via stdout.\n Delegated plugins are provided the *complete network configuration* passed to the \"upper\" plugin. In other words, in the IPAM case, not just the `ipam` section of the configuration.\n Success is indicated by a zero return code and a _Success_ result type output to stdout."
- },
- {
- "heading": "Delegated plugin execution procedure",
- "data": "When a plugin executes a delegated plugin, it should:\n - Look up the plugin binary by searching the directories provided in the `CNI_PATH` environment variable.\n - Execute that plugin with the same environment and configuration that it received.\n - Ensure that the delegated plugin's stderr is output to the calling plugin's stderr.\n If a plugin is executed with `CNI_COMMAND=CHECK`, `DEL`, or `GC`, it must also execute any delegated plugins. If any delegated plugin returns an error, the upper plugin should return that error.\n If, on `ADD`, a delegated plugin fails, the \"upper\" plugin should execute again with `DEL` before returning failure."
- },
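The lookup step can be sketched as a search over the `CNI_PATH` directories for an executable named after the delegated plugin. An illustrative, non-normative sketch:

```python
import os

def find_plugin(name, cni_path):
    """Return the first executable named `name` found on the CNI_PATH list."""
    for directory in cni_path.split(os.pathsep):
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    raise FileNotFoundError(f"plugin {name!r} not found in CNI_PATH")
```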
- {
- "heading": "Section 5: Result Types",
- "data": "For certain operations, plugins must output result information. The output should be serialized as JSON on standard out."
- },
- {
- "heading": "ADD Success",
- "data": "Plugins must output a JSON object with the following keys upon a successful `ADD` operation:\n - `cniVersion`: The same version supplied on input - the string \"1.1.0\"\n - `interfaces`: An array of all interfaces created by the attachment, including any host-level interfaces:\n - `name` (string): The name of the interface.\n - `mac` (string): The hardware address of the interface (if applicable).\n - `mtu`: (uint) The MTU of the interface (if applicable).\n - `sandbox` (string): The isolation domain reference (e.g. path to network namespace) for the interface, or empty if on the host. For interfaces created inside the container, this should be the value passed via `CNI_NETNS`.\n - `socketPath` (string, optional): An absolute path to a socket file corresponding to this interface, if applicable.\n - `pciID` (string, optional): The platform-specific identifier of the PCI device corresponding to this interface, if applicable.\n - `ips`: IPs assigned by this attachment. Plugins may include IPs assigned external to the container.\n - `address` (string): an IP address in CIDR notation (eg \"192.168.1.3/24\").\n - `gateway` (string): the default gateway for this subnet, if one exists.\n - `interface` (uint): the index into the `interfaces` list for a [CNI Plugin Result](#result) indicating which interface this IP configuration should be applied to.\n - `routes`: Routes created by this attachment:\n - `dst`: The destination of the route, in CIDR notation\n - `gw`: The next hop address. 
If unset, a value in `gateway` in the `ips` array may be used.\n - `mtu` (uint): The MTU (Maximum transmission unit) along the path to the destination.\n - `advmss` (uint): The MSS (Maximal Segment Size) to advertise to these destinations when establishing TCP connections.\n - `priority` (uint): The priority of route, lower is higher.\n - `table` (uint): The table to add the route to.\n - `scope` (uint): The scope of the destinations covered by the route prefix (global (0), link (253), host (254)).\n - `dns`: a dictionary consisting of DNS configuration information\n - `nameservers` (list of strings): list of a priority-ordered list of DNS nameservers that this network is aware of. Each entry in the list is a string containing either an IPv4 or an IPv6 address.\n - `domain` (string): the local domain used for short hostname lookups.\n - `search` (list of strings): list of priority ordered search domains for short hostname lookups. Will be preferred over `domain` by most resolvers.\n - `options` (list of strings): list of options that can be passed to the resolver.\n Plugins provided a `prevResult` key as part of their request configuration must output it as their result, with any possible modifications made by that plugin included. If a plugin makes no changes that would be reflected in the _Success result_ type, then it must output a result equivalent to the provided `prevResult`."
- },
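A sketch that checks two invariants implied by the field descriptions above: `address` must parse as an address in CIDR notation, and each `interface` index must point into the `interfaces` array. The result values are illustrative, in the spirit of the spec's examples:

```python
import ipaddress
import json

result = json.loads("""{
  "cniVersion": "1.1.0",
  "interfaces": [{"name": "cni0"},
                 {"name": "eth0", "sandbox": "/run/netns/example"}],
  "ips": [{"address": "192.168.1.3/24", "gateway": "192.168.1.1", "interface": 1}],
  "routes": [{"dst": "0.0.0.0/0"}]
}""")

for ip in result["ips"]:
    ipaddress.ip_interface(ip["address"])        # raises if not valid CIDR notation
    assert 0 <= ip["interface"] < len(result["interfaces"])

# The interface this IP configuration applies to:
target = result["interfaces"][result["ips"][0]["interface"]]["name"]
```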
- {
- "heading": "Delegated plugins (IPAM)",
- "data": "Delegated plugins may omit irrelevant sections.\n Delegated IPAM plugins must return an abbreviated _Success_ object. Specifically, it is missing the `interfaces` array, as well as the `interface` entry in `ips`."
- },
- {
- "heading": "VERSION Success",
- "data": "Plugins must output a JSON object with the following keys upon a `VERSION` operation:\n - `cniVersion`: The value of `cniVersion` specified on input\n - `supportedVersions`: A list of supported specification versions\n Example:"
- },
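The example referenced above was not preserved in this copy. A plausible `VERSION` result has the following shape (the version list here is illustrative, not authoritative):

```python
import json

# Shape of a VERSION Success result: echoed cniVersion plus supported versions.
version_result = {
    "cniVersion": "1.1.0",
    "supportedVersions": ["0.1.0", "0.2.0", "0.3.0", "0.3.1",
                          "0.4.0", "1.0.0", "1.1.0"],
}
print(json.dumps(version_result))
```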
- {
- "heading": "Error",
- "data": "Plugins should output a JSON object with the following keys if they encounter an error:\n - `cniVersion`: The protocol version in use - \"1.1.0\"\n - `code`: A numeric error code, see below for reserved codes.\n - `msg`: A short message characterizing the error.\n - `details`: A longer message describing the error.\n Example:\n Error codes 0-99 are reserved for well-known errors. Values of 100+ can be freely used for plugin specific errors.\n Error Code|Error Description\n ---|---\n `1`|Incompatible CNI version\n `2`|Unsupported field in network configuration. The error message must contain the key and value of the unsupported field.\n `3`|Container unknown or does not exist. This error implies the runtime does not need to perform any container network cleanup (for example, calling the `DEL` action on the container).\n `4`|Invalid necessary environment variables, like CNI_COMMAND, CNI_CONTAINERID, etc. The error message must contain the names of invalid variables.\n `5`|I/O failure. For example, failed to read network config bytes from stdin.\n `6`|Failed to decode content. For example, failed to unmarshal network config from bytes or failed to decode version info from string.\n `7`|Invalid network config. If some validations on network configs do not pass, this error will be raised.\n `11`|Try again later. If the plugin detects some transient condition that should clear up, it can use this code to notify the runtime it should re-try the operation later.\n In addition, stderr can be used for unstructured output such as logs."
- },
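The example referenced above was not preserved in this copy. An error object using the keys listed above might look like this (values illustrative; code 7 is the reserved "Invalid network config" code):

```python
import json

error = {
    "cniVersion": "1.1.0",
    "code": 7,                      # reserved: invalid network config
    "msg": "Invalid Configuration",
    "details": "Network 192.168.0.0/31 too small to allocate from.",
}
print(json.dumps(error))
```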
- {
- "heading": "Version",
- "data": "Plugins must output a JSON object with the following keys upon a `VERSION` operation:\n - `cniVersion`: The value of `cniVersion` specified on input\n - `supportedVersions`: A list of supported specification versions\n Example:"
- },
- {
- "heading": "Appendix: Examples",
- "data": "We assume the network configuration [shown above](#Example-configuration) in section 1. For this attachment, the runtime produces `portmap` and `mac` capability args, along with the generic argument \"argA=foo\".\n The examples use `CNI_IFNAME=eth0`."
- },
- {
- "heading": "Add example",
- "data": "The container runtime would perform the following steps for the `add` operation.\n 1) Call the `bridge` plugin with the following JSON, `CNI_COMMAND=ADD`:\n The bridge plugin, as it delegates IPAM to the `host-local` plugin, would execute the `host-local` binary with the exact same input, `CNI_COMMAND=ADD`.\n The `host-local` plugin returns the following result:\n The bridge plugin returns the following result, configuring the interface according to the delegated IPAM configuration:\n 2) Next, call the `tuning` plugin, with `CNI_COMMAND=ADD`. Note that `prevResult` is supplied, along with the `mac` capability argument. The request configuration passed is:\n The plugin returns the following result. Note that the **mac** has changed.\n 3) Finally, call the `portmap` plugin, with `CNI_COMMAND=ADD`. Note that `prevResult` matches that returned by `tuning`:\n The `portmap` plugin outputs the exact same result as that returned by `bridge`, as the plugin has not modified anything that would change the result (i.e. it only created iptables rules)."
- },
- {
- "heading": "Check example",
- "data": "Given the previous _Add_, the container runtime would perform the following steps for the _Check_ action:\n 1) First call the `bridge` plugin with the following request configuration, including the `prevResult` field containing the final JSON response from the _Add_ operation, **including the changed mac**. `CNI_COMMAND=CHECK`\n The `bridge` plugin, as it delegates IPAM, calls `host-local`, `CNI_COMMAND=CHECK`. It returns no error.\n Assuming the `bridge` plugin is satisfied, it produces no output on standard out and exits with a 0 return code.\n 2) Next call the `tuning` plugin with the following request configuration:\n Likewise, the `tuning` plugin exits indicating success.\n 3) Finally, call `portmap` with the following request configuration:"
- },
- {
- "heading": "Delete example",
- "data": "Given the same network configuration JSON list, the container runtime would perform the following steps for the _Delete_ action. Note that plugins are executed in reverse order from the _Add_ and _Check_ actions.\n 1) First, call `portmap` with the following request configuration, `CNI_COMMAND=DEL`:\n 2) Next, call the `tuning` plugin with the following request configuration, `CNI_COMMAND=DEL`:\n 3) Finally, call `bridge`:\n The bridge plugin executes the `host-local` delegated plugin with `CNI_COMMAND=DEL` before returning."
- },
- {
- "additional_info": "- [Container Network Interface (CNI) Specification](#container-network-interface-cni-specification) - [Version](#version) - [Released versions](#released-versions) - [Overview](#overview) - [Summary](#summary) - [Section 1: Network configuration format](#section-1-network-configuration-format) - [Configuration format](#configuration-format) - [Plugin configuration objects:](#plugin-configuration-objects) - [Example configuration](#example-configuration) - [Version considerations](#version-considerations) - [Section 2: Execution Protocol](#section-2-execution-protocol) - [Overview](#overview-1) - [Parameters](#parameters) - [Errors](#errors) - [CNI operations](#cni-operations) - [`ADD`: Add container to network, or apply modifications](#add-add-container-to-network-or-apply-modifications) - [`DEL`: Remove container from network, or un-apply modifications](#del-remove-container-from-network-or-un-apply-modifications) - [`CHECK`: Check container's networking is as expected](#check-check-containers-networking-is-as-expected) - [`STATUS`: Check plugin status](#status-check-plugin-status) - [`VERSION`: probe plugin version support](#version-probe-plugin-version-support) - [`GC`: Clean up any stale resources](#gc-clean-up-any-stale-resources) - [Section 3: Execution of Network Configurations](#section-3-execution-of-network-configurations) - [Lifecycle \\& Ordering](#lifecycle--ordering) - [Attachment Parameters](#attachment-parameters) - [Adding an attachment](#adding-an-attachment) - [Deleting an attachment](#deleting-an-attachment) - [Checking an attachment](#checking-an-attachment) - [Garbage-collecting a network](#garbage-collecting-a-network) - [Deriving request configuration from plugin configuration](#deriving-request-configuration-from-plugin-configuration) - [Deriving `runtimeConfig`](#deriving-runtimeconfig) - [Section 4: Plugin Delegation](#section-4-plugin-delegation) - [Delegated Plugin protocol](#delegated-plugin-protocol) - [Delegated 
plugin execution procedure](#delegated-plugin-execution-procedure) - [Section 5: Result Types](#section-5-result-types) - [ADD Success](#add-success) - [Delegated plugins (IPAM)](#delegated-plugins-ipam) - [VERSION Success](#version-success) - [Error](#error) - [Version](#version-1) - [Appendix: Examples](#appendix-examples) - [Add example](#add-example) - [Check example](#check-example) - [Delete example](#delete-example) This is CNI **spec** version **1.1.0**. Note that this is **independent from the version of the CNI library and plugins** in this repository (e.g. the versions of [releases](https://github.com/containernetworking/cni/releases)). Released versions of the spec are available as Git tags. | tag | spec permalink | major changes | | ------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------- | --------------------------------- | | [`spec-v1.0.0`](https://github.com/containernetworking/cni/releases/tag/spec-v1.0.0) | [spec at v1.0.0](https://github.com/containernetworking/cni/blob/spec-v1.0.0/SPEC.md) | Removed non-list configurations; removed `version` field of `interfaces` array | | [`spec-v0.4.0`](https://github.com/containernetworking/cni/releases/tag/spec-v0.4.0) | [spec at v0.4.0](https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md) | Introduce the CHECK command and passing prevResult on DEL | | [`spec-v0.3.1`](https://github.com/containernetworking/cni/releases/tag/spec-v0.3.1) | [spec at v0.3.1](https://github.com/containernetworking/cni/blob/spec-v0.3.1/SPEC.md) | none (typo fix only) | | [`spec-v0.3.0`](https://github.com/containernetworking/cni/releases/tag/spec-v0.3.0) | [spec at v0.3.0](https://github.com/containernetworking/cni/blob/spec-v0.3.0/SPEC.md) | rich result type, plugin chaining | | [`spec-v0.2.0`](https://github.com/containernetworking/cni/releases/tag/spec-v0.2.0) | [spec at 
v0.2.0](https://github.com/containernetworking/cni/blob/spec-v0.2.0/SPEC.md) | VERSION command | | [`spec-v0.1.0`](https://github.com/containernetworking/cni/releases/tag/spec-v0.1.0) | [spec at v0.1.0](https://github.com/containernetworking/cni/blob/spec-v0.1.0/SPEC.md) | initial version | This document proposes a generic plugin-based networking solution for application containers on Linux, the _Container Networking Interface_, or _CNI_. For the purposes of this proposal, we define four terms very specifically: - _container_ is a network isolation domain, though the actual isolation technology is not defined by the specification. This could be a [network namespace][namespaces] or a virtual machine, for example. - _network_ refers to a group of endpoints that are uniquely addressable and that can communicate amongst each other. This could be either an individual container (as specified above), a machine, or some other network device (e.g. a router). Containers can be conceptually _added to_ or _removed from_ one or more networks. - _runtime_ is the program responsible for executing CNI plugins. - _plugin_ is a program that applies a specified network configuration. This document aims to specify the interface between \"runtimes\" and \"plugins\". The key words \"must\", \"must not\", \"required\", \"shall\", \"shall not\", \"should\", \"should not\", \"recommended\", \"may\" and \"optional\" are used as specified in [RFC 2119][rfc-2119]. [namespaces]: http://man7.org/linux/man-pages/man7/namespaces.7.html [rfc-2119]: https://www.ietf.org/rfc/rfc2119.txt The CNI specification defines: 1. A format for administrators to define network configuration. 2. A protocol for container runtimes to make requests to network plugins. 3. A procedure for executing plugins based on a supplied configuration. 4. A procedure for plugins to delegate functionality to other plugins. 5. Data types for plugins to return their results to the runtime. 
CNI defines a network configuration format for administrators. It contains directives for both the container runtime as well as the plugins to consume. At plugin execution time, this configuration format is interpreted by the runtime and transformed into a form to be passed to the plugins. In general, the network configuration is intended to be static. It can conceptually be thought of as being \"on disk\", though the CNI specification does not actually require this. A network configuration consists of a JSON object with the following keys: - `cniVersion` (string): [Semantic Version 2.0](https://semver.org) of CNI specification to which this configuration list and all the individual configurations conform. Currently \"1.1.0\" - `cniVersions` (string list): List of all CNI versions which this configuration supports. See [version selection](#version-selection) below. - `name` (string): Network name. This should be unique across all network configurations on a host (or other administrative domain). Must start with an alphanumeric character, optionally followed by any combination of one or more alphanumeric characters, underscore, dot (.) or hyphen (-). - `disableCheck` (boolean): Either `true` or `false`. If `disableCheck` is `true`, runtimes must not call `CHECK` for this network configuration list. This allows an administrator to prevent `CHECK`ing where a combination of plugins is known to return spurious errors. - `plugins` (list): A list of CNI plugins and their configuration, which is a list of plugin configuration objects. Plugin configuration objects may contain fields beyond the ones defined here. The runtime MUST pass through these fields, unchanged, to the plugin, as defined in section 3. - `type` (string): Matches the name of the CNI plugin binary on disk. Must not contain characters disallowed in file paths for the system (e.g. / or \\\\). 
- `capabilities` (dictionary): Defined in [section 3](#Deriving-runtimeConfig). These keys are generated by the runtime at execution time, and thus should not be used in configuration. - `runtimeConfig` - `args` - Any keys starting with `cni.dev/` These keys are not used by the protocol, but have a standard meaning to plugins. Plugins that consume any of these configuration keys should respect their intended semantics. - `ipMasq` (boolean): If supported by the plugin, sets up an IP masquerade on the host for this network. This is necessary if the host will act as a gateway to subnets that are not able to route to the IP assigned to the container. - `ipam` (dictionary): Dictionary with IPAM (IP Address Management) specific values: - `type` (string): Refers to the filename of the IPAM plugin executable. Must not contain characters disallowed in file paths for the system (e.g. / or \\\\). - `dns` (dictionary, optional): Dictionary with DNS specific values: - `nameservers` (list of strings, optional): a priority-ordered list of DNS nameservers that this network is aware of. Each entry in the list is a string containing either an IPv4 or an IPv6 address. - `domain` (string, optional): the local domain used for short hostname lookups. - `search` (list of strings, optional): a priority-ordered list of search domains for short hostname lookups. Will be preferred over `domain` by most resolvers. - `options` (list of strings, optional): a list of options that can be passed to the resolver. Plugins may define additional fields that they accept and may generate an error if called with unknown fields. Runtimes must preserve unknown fields in plugin configuration objects when transforming for execution. 
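The `name` and `type` rules above are mechanical enough to check in code. Here is a minimal, illustrative sketch; the helper name and error handling are ours, not part of the spec:

```python
import re

# "name" must start with an alphanumeric character, optionally followed by
# any combination of alphanumerics, underscore, dot, or hyphen.
NAME_RE = re.compile(r"^[a-zA-Z0-9][a-zA-Z0-9_.\-]*$")

def validate_network_config(conf):
    name = conf.get("name", "")
    if not NAME_RE.match(name):
        raise ValueError("invalid network name: %r" % name)
    for plugin in conf.get("plugins", []):
        ptype = plugin.get("type", "")
        # "type" must be usable as a filename: reject path separators.
        if not ptype or "/" in ptype or "\\" in ptype:
            raise ValueError("invalid plugin type: %r" % ptype)

validate_network_config({"name": "dbnet", "plugins": [{"type": "bridge"}]})
```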
```jsonc { \"cniVersion\": \"1.1.0\", \"cniVersions\": [\"0.3.1\", \"0.4.0\", \"1.0.0\", \"1.1.0\"], \"name\": \"dbnet\", \"plugins\": [ { \"type\": \"bridge\", // plugin specific parameters \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", // ipam specific \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\", \"routes\": [ {\"dst\": \"0.0.0.0/0\"} ] }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } }, { \"type\": \"tuning\", \"capabilities\": { \"mac\": true }, \"sysctl\": { \"net.core.somaxconn\": \"500\" } }, { \"type\": \"portmap\", \"capabilities\": {\"portMappings\": true} } ] } ``` CNI runtimes, plugins, and network configurations may support multiple CNI specification versions independently. Plugins indicate their set of supported versions through the VERSION command, while network configurations indicate their set of supported versions through the `cniVersion` and `cniVersions` fields. CNI runtimes MUST select the highest supported version from the set of network configuration versions given by the `cniVersion` and `cniVersions` fields. Runtimes MAY consider the set of supported plugin versions as reported by the VERSION command when determining available versions. The CNI protocol follows Semantic Versioning principles, so the configuration format MUST remain backwards and forwards compatible within major versions. The CNI protocol is based on execution of binaries invoked by the container runtime. CNI defines the protocol between the plugin binary and the runtime. A CNI plugin is responsible for configuring a container's network interface in some manner. Plugins fall into two broad categories: * \"Interface\" plugins, which create a network interface inside the container and ensure it has connectivity. * \"Chained\" plugins, which adjust the configuration of an already-created interface (but may need to create more interfaces to do so). 
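The version-selection rule described above (select the highest version offered by the configuration, optionally narrowed to what the plugin reports via VERSION) can be sketched as follows; `select_version` is an illustrative helper, not a spec-defined API:

```python
# Illustrative sketch: union the "cniVersion" and "cniVersions" fields,
# optionally intersect with the plugin's reported versions, pick the highest.

def select_version(conf, plugin_supported=None):
    candidates = set(conf.get("cniVersions", []))
    if "cniVersion" in conf:
        candidates.add(conf["cniVersion"])
    if plugin_supported is not None:
        candidates &= set(plugin_supported)
    if not candidates:
        raise ValueError("no mutually supported CNI version")
    # Compare as (major, minor, patch) tuples, not as strings.
    return max(candidates, key=lambda v: tuple(int(x) for x in v.split(".")))

conf = {"cniVersion": "1.1.0", "cniVersions": ["0.3.1", "0.4.0", "1.0.0", "1.1.0"]}
select_version(conf)                      # -> "1.1.0"
select_version(conf, ["0.4.0", "1.0.0"])  # -> "1.0.0"
```

Note the numeric comparison: string ordering would rank "0.4.0" above "0.10.0", so versions are compared component-wise.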
The runtime passes parameters to the plugin via environment variables and configuration. It supplies configuration via stdin. The plugin returns a [result](#Section-5-Result-Types) on stdout on success, or an error on stderr if the operation fails. Configuration and results are encoded in JSON. Parameters define invocation-specific settings, whereas configuration is, with some exceptions, the same for any given network. The runtime must execute the plugin in the runtime's networking domain. (For most cases, this means the root network namespace / `dom0`). Protocol parameters are passed to the plugins via OS environment variables. - `CNI_COMMAND`: indicates the desired operation; `ADD`, `DEL`, `CHECK`, `STATUS`, `GC`, or `VERSION`. - `CNI_CONTAINERID`: Container ID. A unique plaintext identifier for a container, allocated by the runtime. Must not be empty. Must start with an alphanumeric character, optionally followed by any combination of one or more alphanumeric characters, underscore (_), dot (.) or hyphen (-). - `CNI_NETNS`: A reference to the container's \"isolation domain\". If using network namespaces, then a path to the network namespace (e.g. `/run/netns/[nsname]`) - `CNI_IFNAME`: Name of the interface to create inside the container; if the plugin is unable to use this interface name it must return an error. - `CNI_ARGS`: Extra arguments passed in by the user at invocation time. Alphanumeric key-value pairs separated by semicolons; for example, \"FOO=BAR;ABC=123\" - `CNI_PATH`: List of paths to search for CNI plugin executables. Paths are separated by an OS-specific list separator; for example ':' on Linux and ';' on Windows A plugin must exit with a return code of 0 on success, and non-zero on failure. If the plugin encounters an error, it should output an [\"error\" result structure](#Error) (see below). CNI defines 6 operations: `ADD`, `DEL`, `CHECK`, `STATUS`, `GC`, and `VERSION`. These are passed to the plugin via the `CNI_COMMAND` environment variable. 
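As an illustration of the parameter passing described above, the following sketch builds the environment a runtime might hand to a plugin and parses `CNI_ARGS` back into key-value pairs. The helper names are ours; only the variable names and separators come from the text above:

```python
import os

def build_env(command, container_id, netns, ifname, args, path):
    # Assemble the protocol environment variables for one plugin invocation.
    return {
        "CNI_COMMAND": command,
        "CNI_CONTAINERID": container_id,
        "CNI_NETNS": netns,
        "CNI_IFNAME": ifname,
        "CNI_ARGS": ";".join("%s=%s" % (k, v) for k, v in args.items()),
        "CNI_PATH": os.pathsep.join(path),  # ':' on Linux, ';' on Windows
    }

def parse_cni_args(raw):
    # Plugin side: "FOO=BAR;ABC=123" -> {"FOO": "BAR", "ABC": "123"}
    return dict(kv.split("=", 1) for kv in raw.split(";")) if raw else {}

env = build_env("ADD", "ctr-1", "/run/netns/ctr-1", "eth0",
                {"FOO": "BAR"}, ["/opt/cni/bin"])
parse_cni_args(env["CNI_ARGS"])  # -> {"FOO": "BAR"}
```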
A CNI plugin, upon receiving an `ADD` command, should either - create the interface defined by `CNI_IFNAME` inside the container at `CNI_NETNS`, or - adjust the configuration of the interface defined by `CNI_IFNAME` inside the container at `CNI_NETNS`. If the CNI plugin is successful, it must output a [result structure](#Success) (see below) on standard out. If the plugin was supplied a `prevResult` as part of its input configuration, it MUST handle `prevResult` by either passing it through, or modifying it appropriately. If an interface of the requested name already exists in the container, the CNI plugin MUST return with an error. A runtime should not call `ADD` twice (without an intervening DEL) for the same `(CNI_CONTAINERID, CNI_IFNAME)` tuple. This implies that a given container ID may be added to a specific network more than once only if each addition is done with a different interface name. The runtime will provide a JSON-serialized plugin configuration object (defined below) on standard in. Required environment parameters: - `CNI_COMMAND` - `CNI_CONTAINERID` - `CNI_NETNS` - `CNI_IFNAME` Optional environment parameters: - `CNI_ARGS` - `CNI_PATH` A CNI plugin, upon receiving a `DEL` command, should either - delete the interface defined by `CNI_IFNAME` inside the container at `CNI_NETNS`, or - undo any modifications applied in the plugin's `ADD` functionality Plugins should generally complete a `DEL` action without error even if some resources are missing. For example, an IPAM plugin should generally release an IP allocation and return success even if the container network namespace no longer exists, unless that network namespace is critical for IPAM management. While DHCP may usually send a 'release' message on the container network interface, since DHCP leases have a lifetime this release action would not be considered critical and no error should be returned if this action fails. 
For another example, the `bridge` plugin should delegate the DEL action to the IPAM plugin and clean up its own resources even if the container network namespace and/or container network interface no longer exist. Plugins MUST accept multiple `DEL` calls for the same (`CNI_CONTAINERID`, `CNI_IFNAME`) pair, and return success if the interface in question, or any modifications added, are missing. The runtime will provide a JSON-serialized plugin configuration object (defined below) on standard in. Required environment parameters: - `CNI_COMMAND` - `CNI_CONTAINERID` - `CNI_IFNAME` Optional environment parameters: - `CNI_NETNS` - `CNI_ARGS` - `CNI_PATH` `CHECK` is a way for a runtime to probe the status of an existing container. Plugin considerations: - The plugin must consult the `prevResult` to determine the expected interfaces and addresses. - The plugin must allow for a later chained plugin to have modified networking resources, e.g. routes, on `ADD`. - The plugin should return an error if a resource included in the CNI Result type (interface, address or route) was created by the plugin, and is listed in `prevResult`, but is missing or in an invalid state. - The plugin should return an error if other resources not tracked in the Result type such as the following are missing or are in an invalid state: - Firewall rules - Traffic shaping controls - IP reservations - External dependencies such as a daemon required for connectivity - etc. - The plugin should return an error if it is aware of a condition where the container is generally unreachable. - The plugin must handle `CHECK` being called immediately after an `ADD`, and therefore should allow a reasonable convergence delay for any asynchronous resources. - The plugin should call `CHECK` on any delegated (e.g. IPAM) plugins and pass any errors on to its caller. Runtime considerations: - A runtime must not call `CHECK` for a container that has not been `ADD`ed, or has been `DEL`eted after its last `ADD`. 
- A runtime must not call `CHECK` if `disableCheck` is set to `true` in the [configuration](#configuration-format). - A runtime must include a `prevResult` field in the network configuration containing the `Result` of the immediately preceding `ADD` for the container. The runtime may wish to use libcni's support for caching `Result`s. - A runtime may choose to stop executing `CHECK` for a chain when a plugin returns an error. - A runtime may execute `CHECK` from immediately after a successful `ADD`, up until the container is `DEL`eted from the network. - A runtime may assume that a failed `CHECK` means the container is permanently in a misconfigured state. The runtime will provide a json-serialized plugin configuration object (defined below) on standard in. Required environment parameters: - `CNI_COMMAND` - `CNI_CONTAINERID` - `CNI_NETNS` - `CNI_IFNAME` Optional environment parameters: - `CNI_ARGS` - `CNI_PATH` All parameters, with the exception of `CNI_PATH`, must be the same as the corresponding `ADD` for this container. `STATUS` is a way for a runtime to determine the readiness of a network plugin. A plugin must exit with a zero (success) return code if the plugin is ready to service ADD requests. If the plugin knows that it is not able to service ADD requests, it must exit with a non-zero return code and output an error on standard out (see below). For example, if a plugin relies on an external service or daemon, it should return an error to `STATUS` if that service is unavailable. Likewise, if a plugin has a limited number of resources (e.g. IP addresses, hardware queues), it should return an error if those resources are exhausted and no new `ADD` requests can be serviced. The following error codes are defined in the context of `STATUS`: - 50: The plugin is not available (i.e. cannot service `ADD` requests) - 51: The plugin is not available, and existing containers in the network may have limited connectivity. 
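For illustration only, a plugin's `STATUS` handling might look like the sketch below. The daemon socket path is hypothetical; the error-code values (50, 51) and the exit-code convention are the ones described above:

```python
import json
import os

def handle_status(daemon_socket="/run/mydaemon.sock"):
    # Hypothetical readiness check: this plugin depends on an external daemon.
    if not os.path.exists(daemon_socket):
        # Error result goes to standard out, per the error result structure.
        print(json.dumps({
            "cniVersion": "1.1.0",
            "code": 50,  # plugin not available: cannot service ADD requests
            "msg": "plugin not available",
            "details": "required daemon socket %s is missing" % daemon_socket,
        }))
        return 1  # non-zero exit code
    return 0      # zero exit code: ready to service ADD requests

# Under CNI_COMMAND=STATUS a real plugin would call sys.exit(handle_status()).
```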
Plugin considerations: - Status is purely informational. A plugin MUST NOT rely on `STATUS` being called. - Plugins should always expect other CNI operations (like `ADD`, `DEL`, etc.) even if `STATUS` returns an error. `STATUS` does not prevent other runtime requests. - If a plugin relies on a delegated plugin (e.g. IPAM) to service `ADD` requests, it must also execute a `STATUS` request to that plugin when it receives a `STATUS` request for itself. If the delegated plugin returns an error result, the executing plugin should return an error result. The runtime will provide a json-serialized plugin configuration object (defined below) on standard in. Optional environment parameters: - `CNI_PATH` The plugin should output via standard-out a json-serialized version result object (see below). Input: a json-serialized object, with the following key: - `cniVersion`: The version of the protocol in use. Required environment parameters: - `CNI_COMMAND` The GC command provides a way for runtimes to specify the expected set of attachments to a network. The network plugin may then remove any resources related to attachments that do not exist in this set. Resources may, for example, include: - IPAM reservations - Firewall rules A plugin SHOULD remove as many stale resources as possible. For example, a plugin should remove any IPAM reservations associated with attachments not in the provided list. The plugin MAY assume that the isolation domain (e.g. network namespace) has been deleted, and thus any resources (e.g. network interfaces) therein have been removed. Plugins should generally complete a `GC` action without error. If an error is encountered, a plugin should continue, removing as many resources as possible, and report the errors back to the runtime. Plugins MUST, additionally, forward any GC calls to delegated plugins they are configured to use (see section 4). The runtime MUST NOT use GC as a substitute for DEL. 
Plugins may be unable to clean up some resources from GC that they would have been able to clean up from DEL. The runtime must provide a JSON-serialized plugin configuration object (defined below) on standard in. It contains an additional key: - `cni.dev/attachments` (array of objects): The list of **still valid** attachments to this network: - `containerID` (string): the value of CNI_CONTAINERID as provided during the CNI ADD operation - `ifname` (string): the value of CNI_IFNAME as provided during the CNI ADD operation Required environment parameters: - `CNI_COMMAND` - `CNI_PATH` No output on success, [\"error\" result structure](#Error) on error. This section describes how a container runtime interprets a network configuration (as defined in section 1) and executes plugins accordingly. A runtime may wish to _add_, _delete_, or _check_ a network configuration in a container. This results in a series of plugin `ADD`, `DEL`, or `CHECK` executions, correspondingly. This section also defines how a network configuration is transformed and provided to the plugin. The operation of a network configuration on a container is called an _attachment_. An attachment may be uniquely identified by the `(CNI_CONTAINERID, CNI_IFNAME)` tuple. - The container runtime must create a new network namespace for the container before invoking any plugins. - The container runtime must not invoke parallel operations for the same container, but is allowed to invoke parallel operations for different containers. This applies even across multiple attachments of the same container. - **Exception**: The runtime must exclusively execute either _gc_ or _add_ and _delete_. The runtime must ensure that no _add_ or _delete_ operations are in progress before executing _gc_, and must wait for _gc_ to complete before issuing new _add_ or _delete_ commands. - Plugins must handle being executed concurrently across different containers. If necessary, they must implement locking on shared resources (e.g. IPAM databases). 
- The container runtime must ensure that _add_ is eventually followed by a corresponding _delete_. The only exception is in the event of catastrophic failure, such as node loss. A _delete_ must still be executed even if the _add_ fails. - _delete_ may be followed by additional _deletes_. - The network configuration should not change between _add_ and _delete_. - The network configuration should not change between _attachments_. - The container runtime is responsible for cleanup of the container's network namespace. While a network configuration should not change between _attachments_, there are certain parameters supplied by the container runtime that are per-attachment. They are: - **Container ID:** A unique plaintext identifier for a container, allocated by the runtime. Must not be empty. Must start with an alphanumeric character, optionally followed by any combination of one or more alphanumeric characters, underscore (_), dot (.) or hyphen (-). During execution, always set as the `CNI_CONTAINERID` parameter. - **Namespace**: A reference to the container's \"isolation domain\". If using network namespaces, then a path to the network namespace (e.g. `/run/netns/[nsname]`). During execution, always set as the `CNI_NETNS` parameter. - **Container interface name**: Name of the interface to create inside the container. During execution, always set as the `CNI_IFNAME` parameter. - **Generic Arguments**: Extra arguments, in the form of key-value string pairs, that are relevant to a specific attachment. During execution, always set as the `CNI_ARGS` parameter. - **Capability Arguments**: These are also key-value pairs. The key is a string, whereas the value is any JSON-serializable type. The keys and values are defined by [convention](CONVENTIONS.md). Furthermore, the runtime must be provided a list of paths to search for CNI plugins. This must also be provided to plugins during execution via the `CNI_PATH` environment variable. 
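The ordering rules above (serialize operations per container, allow parallelism across containers, and make _gc_ mutually exclusive with _add_/_delete_) can be sketched with in-process primitives. A real runtime would need cross-process coordination; this is purely illustrative:

```python
import threading
from collections import defaultdict

class OpGate:
    """Serialize attachment ops per container; make GC exclusive with all of them."""

    def __init__(self):
        self._cond = threading.Condition()
        self._inflight = 0                          # attachment ops in progress
        self._locks = defaultdict(threading.Lock)   # one lock per container ID

    def begin_attachment_op(self, container_id):
        with self._cond:
            self._inflight += 1          # blocks while run_gc holds the condition
        self._locks[container_id].acquire()  # same-container ops run one at a time

    def end_attachment_op(self, container_id):
        self._locks[container_id].release()
        with self._cond:
            self._inflight -= 1
            self._cond.notify_all()

    def run_gc(self, gc_fn):
        with self._cond:
            # Wait for in-progress add/delete ops; new ones queue on the
            # condition until gc_fn returns.
            self._cond.wait_for(lambda: self._inflight == 0)
            return gc_fn()
```

Operations on different container IDs take different locks and so may proceed in parallel, matching the rule above.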
For every configuration defined in the `plugins` key of the network configuration, 1. Look up the executable specified in the `type` field. If this does not exist, then this is an error. 2. Derive request configuration from the plugin configuration, with the following parameters: - If this is the first plugin in the list, no previous result is provided. - For all additional plugins, the previous result is the result of the previous plugin. 3. Execute the plugin binary, with `CNI_COMMAND=ADD`. Provide parameters defined above as environment variables. Supply the derived configuration via standard in. 4. If the plugin returns an error, halt execution and return the error to the caller. The runtime must store the result returned by the final plugin persistently, as it is required for _check_ and _delete_ operations. Deleting a network attachment is much the same as adding, with a few key differences: - The list of plugins is executed in **reverse order** - The previous result provided is always the final result of the _add_ operation. For every plugin defined in the `plugins` key of the network configuration, *in reverse order*, 1. Look up the executable specified in the `type` field. If this does not exist, then this is an error. 2. Derive request configuration from the plugin configuration, with the previous result from the initial _add_ operation. 3. Execute the plugin binary, with `CNI_COMMAND=DEL`. Provide parameters defined above as environment variables. Supply the derived configuration via standard in. 4. If the plugin returns an error, halt execution and return the error to the caller. If all plugins return success, return success to the caller. The runtime may also ask every plugin to confirm that a given attachment is still functional. The runtime must use the same attachment parameters as it did for the _add_ operation. Checking is similar to add with two exceptions: - the previous result provided is always the final result of the _add_ operation. 
- If the network configuration sets `disableCheck` to `true`, then always return success to the caller. For every plugin defined in the `plugins` key of the network configuration, 1. Look up the executable specified in the `type` field. If this does not exist, then this is an error. 2. Derive request configuration from the plugin configuration, with the previous result from the initial _add_ operation. 3. Execute the plugin binary, with `CNI_COMMAND=CHECK`. Provide parameters defined above as environment variables. Supply the derived configuration via standard in. 4. If the plugin returns an error, halt execution and return the error to the caller. If all plugins return success, return success to the caller. The runtime may also ask every plugin in a network configuration to clean up any stale resources via the _GC_ command. When garbage-collecting a configuration, there are no [Attachment Parameters](#attachment-parameters). For every plugin defined in the `plugins` key of the network configuration, 1. Look up the executable specified in the `type` field. If this does not exist, then this is an error. 2. Derive request configuration from the plugin configuration. 3. Execute the plugin binary, with `CNI_COMMAND=GC`. Supply the derived configuration via standard in. 4. If the plugin returns an error, **continue** with execution, returning all errors to the caller. If all plugins return success, return success to the caller. The network configuration format (which is a list of plugin configurations to execute) must be transformed to a format understood by the plugin (which is a single plugin configuration). This section describes that transformation. The request configuration for a single plugin invocation is also JSON. It consists of the plugin configuration, primarily unchanged except for the specified additions and removals. 
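The _add_, _delete_, and _check_ loops described above share one shape. A condensed sketch, with plugin lookup and invocation stubbed out behind a hypothetical `exec_plugin` callable (which stands in for running the binary named by `type` with the derived configuration on stdin):

```python
def add_network(conf, exec_plugin):
    prev_result = None
    for plugin in conf["plugins"]:  # forward order on add
        request = dict(plugin, cniVersion=conf["cniVersion"], name=conf["name"])
        if prev_result is not None:
            request["prevResult"] = prev_result  # never set for the first add
        request.pop("capabilities", None)        # must not be passed through
        prev_result = exec_plugin("ADD", request)  # an error would halt the chain
    return prev_result  # the runtime must persist this for CHECK and DEL

def del_network(conf, add_result, exec_plugin):
    for plugin in reversed(conf["plugins"]):     # reverse order on delete
        request = dict(plugin, cniVersion=conf["cniVersion"],
                       name=conf["name"], prevResult=add_result)
        request.pop("capabilities", None)
        exec_plugin("DEL", request)
```

Note the asymmetry: on _add_ each plugin sees the previous plugin's result, while on _delete_ every plugin sees the final result of the original _add_.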
The following fields are always to be inserted into the request configuration by the runtime: - `cniVersion`: the protocol version selected by the runtime - the string \"1.1.0\" - `name`: taken from the `name` field of the network configuration For attachment-specific operations (ADD, DEL, CHECK), additional field requirements apply: - `runtimeConfig`: the runtime must insert an object consisting of the union of capabilities provided by the plugin and requested by the runtime (more details below). - `prevResult`: the runtime must insert an object consisting of the result type returned by the \"previous\" plugin. The meaning of \"previous\" is defined by the specific operation (_add_, _delete_, or _check_). This field must not be set for the first _add_ in a chain. - `capabilities`: must not be set For GC operations: - `cni.dev/attachments`: as specified in section 2. All other fields not prefixed with `cni.dev/` should be passed through unaltered. Whereas CNI_ARGS are provided to all plugins, with no indication if they are going to be consumed, _Capability arguments_ need to be declared explicitly in configuration. The runtime, thus, can determine if a given network configuration supports a specific _capability_. Capabilities are not defined by the specification - rather, they are documented [conventions](CONVENTIONS.md). As defined in section 1, the plugin configuration includes an optional key, `capabilities`. This example shows a plugin that supports the `portMappings` capability: ```json { \"type\": \"myPlugin\", \"capabilities\": { \"portMappings\": true } } ``` The `runtimeConfig` parameter is derived from the `capabilities` in the network configuration and the _capability arguments_ generated by the runtime. Specifically, any capability supported by the plugin configuration and provided by the runtime should be inserted in the `runtimeConfig`. 
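The capability filtering just described can be sketched as follows; `derive_runtime_config` is an illustrative helper, not a spec-defined function:

```python
def derive_runtime_config(plugin_conf, capability_args):
    # Keep only the capability arguments the plugin declares support for.
    supported = plugin_conf.get("capabilities") or {}
    runtime_config = {
        key: value
        for key, value in capability_args.items()
        if supported.get(key)  # declared by the plugin and set to true
    }
    # "capabilities" itself must not appear in the request configuration.
    request = {k: v for k, v in plugin_conf.items() if k != "capabilities"}
    if runtime_config:
        request["runtimeConfig"] = runtime_config
    return request

plugin = {"type": "myPlugin", "capabilities": {"portMappings": True}}
args = {
    "portMappings": [{"hostPort": 8080, "containerPort": 80, "protocol": "tcp"}],
    "ipRanges": [],  # not declared by this plugin, so it is dropped
}
derive_runtime_config(plugin, args)
```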
Thus, the above example could result in the following being passed to the plugin as part of the execution configuration: ```json { \"type\": \"myPlugin\", \"runtimeConfig\": { \"portMappings\": [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] } ... } ``` There are some operations that, for whatever reason, cannot reasonably be implemented as a discrete chained plugin. Rather, a CNI plugin may wish to delegate some functionality to another plugin. One common example of this is IP address management. As part of its operation, a CNI plugin is expected to assign (and maintain) an IP address to the interface and install any necessary routes relevant for that interface. This gives the CNI plugin great flexibility but also places a large burden on it. Many CNI plugins would need to have the same code to support several IP management schemes that users may desire (e.g. dhcp, host-local). A CNI plugin may choose to delegate IP management to another plugin. To lessen the burden and make the IP management strategy orthogonal to the type of CNI plugin, we define a third type of plugin -- the IP Address Management (IPAM) plugin -- as well as a protocol for plugins to delegate functionality to other plugins. It is however the responsibility of the CNI plugin, rather than the runtime, to invoke the IPAM plugin at the proper moment in its execution. The IPAM plugin must determine the interface IP/subnet, gateway and routes and return this information to the \"main\" plugin to apply. The IPAM plugin may obtain the information via a protocol (e.g. dhcp), data stored on a local filesystem, the \"ipam\" section of the Network Configuration file, etc. Like CNI plugins, delegated plugins are invoked by running an executable. The executable is searched for in a predefined list of paths, indicated to the CNI plugin via `CNI_PATH`. The delegated plugin must receive all the same environment variables that were passed to the CNI plugin. 
Just like the CNI plugin, delegated plugins receive the network configuration via stdin and output results via stdout. Delegated plugins are provided the *complete network configuration* passed to the \"upper\" plugin. In other words, in the IPAM case, not just the `ipam` section of the configuration. Success is indicated by a zero return code and a _Success_ result type output to stdout. When a plugin executes a delegated plugin, it should: - Look up the plugin binary by searching the directories provided in the `CNI_PATH` environment variable. - Execute that plugin with the same environment and configuration that it received. - Ensure that the delegated plugin's stderr is output to the calling plugin's stderr. If a plugin is executed with `CNI_COMMAND=CHECK`, `DEL`, or `GC`, it must also execute any delegated plugins. If any delegated plugin returns an error, that error should be returned by the upper plugin. If, on `ADD`, a delegated plugin fails, the \"upper\" plugin should execute the delegated plugin again with `DEL` before returning failure. For certain operations, plugins must output result information. The output should be serialized as JSON on standard out. Plugins must output a JSON object with the following keys upon a successful `ADD` operation: - `cniVersion`: The same version supplied on input - the string \"1.1.0\" - `interfaces`: An array of all interfaces created by the attachment, including any host-level interfaces: - `name` (string): The name of the interface. - `mac` (string): The hardware address of the interface (if applicable). - `mtu`: (uint) The MTU of the interface (if applicable). - `sandbox` (string): The isolation domain reference (e.g. path to network namespace) for the interface, or empty if on the host. For interfaces created inside the container, this should be the value passed via `CNI_NETNS`. - `socketPath` (string, optional): An absolute path to a socket file corresponding to this interface, if applicable. 
- `pciID` (string, optional): The platform-specific identifier of the PCI device corresponding to this interface, if applicable. - `ips`: IPs assigned by this attachment. Plugins may include IPs assigned external to the container. - `address` (string): an IP address in CIDR notation (e.g. \"192.168.1.3/24\"). - `gateway` (string): the default gateway for this subnet, if one exists. - `interface` (uint): the index into the `interfaces` list for a [CNI Plugin Result](#result) indicating which interface this IP configuration should be applied to. - `routes`: Routes created by this attachment: - `dst`: The destination of the route, in CIDR notation. - `gw`: The next hop address. If unset, a value in `gateway` in the `ips` array may be used. - `mtu` (uint): The MTU (Maximum Transmission Unit) along the path to the destination. - `advmss` (uint): The MSS (Maximal Segment Size) to advertise to these destinations when establishing TCP connections. - `priority` (uint): The priority of the route; lower is higher. - `table` (uint): The table to add the route to. - `scope` (uint): The scope of the destinations covered by the route prefix (global (0), link (253), host (254)). - `dns`: a dictionary consisting of DNS configuration information - `nameservers` (list of strings): a priority-ordered list of DNS nameservers that this network is aware of. Each entry in the list is a string containing either an IPv4 or an IPv6 address. - `domain` (string): the local domain used for short hostname lookups. - `search` (list of strings): a priority-ordered list of search domains for short hostname lookups. Will be preferred over `domain` by most resolvers. - `options` (list of strings): list of options that can be passed to the resolver. Plugins provided a `prevResult` key as part of their request configuration must output it as their result, with any possible modifications made by that plugin included. 
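As a rough, non-normative illustration of the result structure described above, a plugin might assemble an `ADD` Success object like this Python sketch; the field names follow the spec, while the concrete interface names, MACs, and addresses are made-up examples:

```python
import json

def build_add_result(cni_version, ifname, netns, mac, address, gateway):
    """Assemble a minimal CNI ADD Success result (illustrative values only)."""
    return {
        "cniVersion": cni_version,
        "interfaces": [
            # Interface created inside the container; "sandbox" is the
            # network namespace path the plugin received via CNI_NETNS.
            {"name": ifname, "mac": mac, "sandbox": netns},
        ],
        "ips": [
            {
                "address": address,   # CIDR notation, e.g. "10.1.0.5/16"
                "gateway": gateway,
                "interface": 0,       # index into the "interfaces" array above
            }
        ],
        "routes": [{"dst": "0.0.0.0/0"}],
        "dns": {"nameservers": [gateway]},
    }

result = build_add_result("1.1.0", "eth0", "/var/run/netns/blue",
                          "99:88:77:66:55:44", "10.1.0.5/16", "10.1.0.1")
# A real plugin would write this JSON to stdout and exit 0.
print(json.dumps(result))
```

Note how `ips[].interface` indexes into `interfaces`, which is exactly the linkage the worked example later in this section relies on.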
If a plugin makes no changes that would be reflected in the _Success result_ type, then it must output a result equivalent to the provided `prevResult`. Delegated plugins may omit irrelevant sections. Delegated IPAM plugins must return an abbreviated _Success_ object. Specifically, it is missing the `interfaces` array, as well as the `interface` entry in `ips`. Plugins must output a JSON object with the following keys upon a `VERSION` operation: - `cniVersion`: The value of `cniVersion` specified on input - `supportedVersions`: A list of supported specification versions Example: ```json { \"cniVersion\": \"1.0.0\", \"supportedVersions\": [ \"0.1.0\", \"0.2.0\", \"0.3.0\", \"0.3.1\", \"0.4.0\", \"1.0.0\" ] } ``` Plugins should output a JSON object with the following keys if they encounter an error: - `cniVersion`: The protocol version in use - \"1.1.0\" - `code`: A numeric error code, see below for reserved codes. - `msg`: A short message characterizing the error. - `details`: A longer message describing the error. Example: ```json { \"cniVersion\": \"1.1.0\", \"code\": 7, \"msg\": \"Invalid Configuration\", \"details\": \"Network 192.168.0.0/31 too small to allocate from.\" } ``` Error codes 0-99 are reserved for well-known errors. Values of 100+ can be freely used for plugin specific errors. Error Code|Error Description ---|--- `1`|Incompatible CNI version `2`|Unsupported field in network configuration. The error message must contain the key and value of the unsupported field. `3`|Container unknown or does not exist. This error implies the runtime does not need to perform any container network cleanup (for example, calling the `DEL` action on the container). `4`|Invalid necessary environment variables, like CNI_COMMAND, CNI_CONTAINERID, etc. The error message must contain the names of invalid variables. `5`|I/O failure. For example, failed to read network config bytes from stdin. `6`|Failed to decode content. 
For example, failed to unmarshal network config from bytes or failed to decode version info from string. `7`|Invalid network config. If some validations on network configs do not pass, this error will be raised. `11`|Try again later. If the plugin detects some transient condition that should clear up, it can use this code to notify the runtime it should retry the operation later. In addition, stderr can be used for unstructured output such as logs. Plugins must output a JSON object with the following keys upon a `VERSION` operation: - `cniVersion`: The value of `cniVersion` specified on input - `supportedVersions`: A list of supported specification versions Example: ```json { \"cniVersion\": \"1.1.0\", \"supportedVersions\": [ \"0.1.0\", \"0.2.0\", \"0.3.0\", \"0.3.1\", \"0.4.0\", \"1.0.0\", \"1.1.0\" ] } ``` We assume the network configuration [shown above](#Example-configuration) in section 1. For this attachment, the runtime produces `portmap` and `mac` capability args, along with the generic argument \"argA=foo\". The example uses `CNI_IFNAME=eth0`. The container runtime would perform the following steps for the `add` operation. 1) Call the `bridge` plugin with the following JSON, `CNI_COMMAND=ADD`: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\" }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` The bridge plugin, as it delegates IPAM to the `host-local` plugin, would execute the `host-local` binary with the exact same input, `CNI_COMMAND=ADD`. 
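The lookup-and-exec step that `bridge` performs here can be sketched roughly as follows. `find_plugin` and `delegate` are hypothetical helper names, not part of any real library; a real plugin would additionally run `DEL` on the delegate if a later step of its own `ADD` fails, as described above:

```python
import os
import subprocess

def find_plugin(cni_path, name):
    """Search the colon-separated CNI_PATH directories for a plugin binary."""
    for directory in cni_path.split(":"):
        if not directory:
            continue
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

def delegate(plugin_type, net_config_bytes):
    """Invoke a delegated plugin with the same environment and the *complete*
    network configuration on stdin; stderr is inherited, so it passes through
    to the calling plugin's stderr."""
    binary = find_plugin(os.environ.get("CNI_PATH", ""), plugin_type)
    if binary is None:
        raise FileNotFoundError("no plugin %r found in CNI_PATH" % plugin_type)
    proc = subprocess.run([binary], input=net_config_bytes,
                          stdout=subprocess.PIPE, env=os.environ.copy())
    if proc.returncode != 0:
        raise RuntimeError("delegated plugin %r failed" % plugin_type)
    return proc.stdout  # the delegated plugin's Success result JSON
```

The key point the sketch captures is that the delegate gets the same environment and the whole configuration, not a trimmed-down `ipam` fragment.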
The `host-local` plugin returns the following result: ```json { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\" } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` The bridge plugin returns the following result, configuring the interface according to the delegated IPAM configuration: ```json { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"99:88:77:66:55:44\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` 2) Next, call the `tuning` plugin, with `CNI_COMMAND=ADD`. Note that `prevResult` is supplied, along with the `mac` capability argument. The request configuration passed is: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"tuning\", \"sysctl\": { \"net.core.somaxconn\": \"500\" }, \"runtimeConfig\": { \"mac\": \"00:11:22:33:44:66\" }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"99:88:77:66:55:44\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The plugin returns the following result. Note that the **mac** has changed. 
```json { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } ``` 3) Finally, call the `portmap` plugin, with `CNI_COMMAND=ADD`. Note that `prevResult` matches that returned by `tuning`: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"portmap\", \"runtimeConfig\": { \"portMappings\" : [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The `portmap` plugin outputs the exact same result as that returned by `bridge`, as the plugin has not modified anything that would change the result (i.e. it only created iptables rules). Given the previous _Add_, the container runtime would perform the following steps for the _Check_ action: 1) First call the `bridge` plugin with the following request configuration, including the `prevResult` field containing the final JSON response from the _Add_ operation, **including the changed mac**. 
`CNI_COMMAND=CHECK` ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\" }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The `bridge` plugin, as it delegates IPAM, calls `host-local`, `CNI_COMMAND=CHECK`. It returns no error. Assuming the `bridge` plugin is satisfied, it produces no output on standard out and exits with a 0 return code. 2) Next call the `tuning` plugin with the following request configuration: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"tuning\", \"sysctl\": { \"net.core.somaxconn\": \"500\" }, \"runtimeConfig\": { \"mac\": \"00:11:22:33:44:66\" }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` Likewise, the `tuning` plugin exits indicating success. 
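Conceptually, a plugin's `CHECK` logic compares `prevResult` against the state it observes on the host. The following loose sketch illustrates that idea only; `observed_macs` is a hypothetical stand-in for whatever the plugin actually inspects (a real plugin would query the kernel), and the error code 100 is an arbitrary plugin-specific value:

```python
def check_attachment(prev_result, observed_macs):
    """Return None if the attachment matches prevResult, else a CNI error dict.

    observed_macs: hypothetical mapping of interface name -> MAC as found
    on the host/container.
    """
    for iface in prev_result.get("interfaces", []):
        actual = observed_macs.get(iface["name"])
        if actual != iface.get("mac"):
            # A failed CHECK is reported as a CNI error object on stdout
            # plus a non-zero exit code; codes >= 100 are plugin-specific.
            return {
                "cniVersion": prev_result.get("cniVersion", "1.1.0"),
                "code": 100,
                "msg": "interface mismatch",
                "details": "%s: expected %s, found %s"
                           % (iface["name"], iface.get("mac"), actual),
            }
    return None  # success: no output, exit code 0

prev = {"interfaces": [{"name": "eth0", "mac": "00:11:22:33:44:66"}]}
err = check_attachment(prev, {"eth0": "aa:aa:aa:aa:aa:aa"})
```

This mirrors why the runtime must pass the *final* result (with the changed MAC) as `prevResult`: checking against the original `bridge` output would report a spurious mismatch.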
3) Finally, call `portmap` with the following request configuration: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"portmap\", \"runtimeConfig\": { \"portMappings\" : [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` Given the same network configuration JSON list, the container runtime would perform the following steps for the _Delete_ action. Note that plugins are executed in reverse order from the _Add_ and _Check_ actions. 1) First, call `portmap` with the following request configuration, `CNI_COMMAND=DEL`: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"portmap\", \"runtimeConfig\": { \"portMappings\" : [ { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" } ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` 2) Next, call the `tuning` plugin with the following request configuration, `CNI_COMMAND=DEL`: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"tuning\", \"sysctl\": { \"net.core.somaxconn\": \"500\" }, \"runtimeConfig\": { \"mac\": \"00:11:22:33:44:66\" }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": 
\"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` 3) Finally, call `bridge`: ```json { \"cniVersion\": \"1.1.0\", \"name\": \"dbnet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"keyA\": [\"some more\", \"plugin specific\", \"configuration\"], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.0.0/16\", \"gateway\": \"10.1.0.1\" }, \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] }, \"prevResult\": { \"ips\": [ { \"address\": \"10.1.0.5/16\", \"gateway\": \"10.1.0.1\", \"interface\": 2 } ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"interfaces\": [ { \"name\": \"cni0\", \"mac\": \"00:11:22:33:44:55\" }, { \"name\": \"veth3243\", \"mac\": \"55:44:33:22:11:11\" }, { \"name\": \"eth0\", \"mac\": \"00:11:22:33:44:66\", \"sandbox\": \"/var/run/netns/blue\" } ], \"dns\": { \"nameservers\": [ \"10.1.0.1\" ] } } } ``` The bridge plugin executes the `host-local` delegated plugin with `CNI_COMMAND=DEL` before returning."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "DANM",
- "file_name": "CONTRIBUTING.md"
- },
- "content": [
- {
- "heading": "Contributing to DANM",
- "data": ""
- },
- {
- "heading": "First of all, a big thank you!",
- "data": "For years we have been thinking that no one would be interested in our internal Kubernetes networking enhancements for TelCo applications.\n We greatly appreciate you showing interest in contributing to our project, and thus proving that we were wrong for so long!"
- },
- {
- "heading": "Do we accept contributions?",
- "data": "Absolutely!\n We are open to all kinds of community feedback: it need not even be code, we are always happy to talk about requirements, or compare notes with our fellow networking enthusiasts.\n Of course, pull requests containing small code or documentation corrections, or even the implementation of new features are also very much welcomed!\n This work is released under the 3-Clause BSD License."
- },
- {
- "heading": "When you are planning a contribution",
- "data": "Regardless whether your contribution is related to an already existing Issue, or not: it is generally a good idea to first discuss it with the core team.\n You can expect a response/engagement from the core team within a couple of days, so do not hesitate to contact us first through the [communication channels described below](#communication-channels-besides-github)\n Aligning expectations and discussing design ideas before implementing a feature can go a long way in increasing the chances of its acceptance!\n Feel free to comment on existing issues, open new ones, or simply send your ideas directly to us!"
- },
- {
- "heading": "Code of conduct",
- "data": "We promise that we will be always transparent when communicating with the community, and will respectfully hear each and every idea out!\n In exchange we expect the same behavior from our Contributors.\n However, please also note that DANM is already being used in many, vastly differing TelCo production cases.\n As a result, the maintainers of the project retain the right of refusing certain contributions, if these would not be compatible with an existing production use-case.\n Don't feel discouraged or offended if this would happen to you!\n What we can promise is that we will always respectfully and truthfully explore and entertain your reasoning before making a decision, and will always openly and transparently share our own with you in case of a conflict!"
- },
- {
- "heading": "Getting started",
- "data": "So, you are adamant you want to contribute to our project, and maybe even already discussed your idea with us. Awesome! What now?\n Keep in mind that the project is:\n - written in Golang and using `go module` feature for dependency management, so you will need a properly set-up `Golang 1.12+` development environment\n - build scripts depend on `docker`, to be able to run scripts locally you will have to install it on your machine\n Once you have the prerequisites, fork our project, code your changes, test your contribution, then start a normal GitHub review process.\n Pull requests will be only merged once at least one of the project maintainers approved them.\n The minimum expectation towards all pull requests is that:\n 1. Code is written in accordance with generic Clean Code guidelines\n 2. The Makefile in the project's root directory successfully executes, all binaries compile\n 3. All the not many -contribution opportunity 2.0- existing Unit Tests pass\n Not mandatory, *but highly appreciated:*\n 1. Existing coding style (2 spaces indentation, camelCase/CamelCase naming scheme) maintained\n 2. New Unit Tests are written to cover newly added (or even legacy) code\n When writing Unit Tests we prefer testing the packages through their public interfaces!\n We appreciate thorough and detailed commit messages.\n We are not allergic to the number of commits it took to create a contribution, you are not required to squash and amend your changes all the time.\n However, we require you to break-up big contributions into smaller, functionally coherent pieces. This approach greatly reduces both integration and review efforts!"
- },
- {
- "heading": "Future plans",
- "data": "The following topics are on our mind right now, so if you are looking for topic to start with these are as good as any!\n Being a new project, we have not yet integrated the repository to an automated CI system (like Travis).\n Increasing UT coverage of existing code is alway appreciated.\n Extending the \"reach\" of the DANM ecosystem is our primary goal!\n This includes both native, first-class integration of additional CNI plugin interfaces, and integrating more one-network Kubernetes features (e.g. NetworkPolicy) with our DanmNet API."
- },
- {
- "heading": "Community",
- "data": ""
- },
- {
- "heading": "Maintainers / core team",
- "data": "R\u00f3bert Springer (@rospring)\n Levente K\u00e1l\u00e9 (@Levovar)"
- },
- {
- "heading": "Distinguished contributors",
- "data": "Lengyel Kriszti\u00e1n (@klengyel)\n Ferenc T\u00f3th (@TothFerenc)"
- },
- {
- "heading": "Honorable mentions",
- "data": "@peterszilagyi, @libesz, @visnyei, @CsatariGergely, @clivez, @Fillamug, @janosi\n Please keep in mind we live in the CET/CEST timezone!"
- },
- {
- "heading": "Communication channels besides GitHub",
- "data": "You can contact the core team mainly via email at robert.springer@nokia.com and levente.kale@nokia.com or you can join to our [slack channel](https://danmws.slack.com) using [this](https://join.slack.com/t/danmws/shared_invite/enQtNzEzMTQ4NDM2NTMxLTA3MDM4NGM0YTRjYzlhNGRiMDVlZWRlMjdlNTkwNTBjNWUyNjM0ZDQ3Y2E4YjE3NjVhNTE1MmEyYzkyMDRlNWU) invite link. But we also do hang around various Kubernetes slack channels, you might get lucky if you look around networking, node, or resource management :)"
- },
- {
- "additional_info": "For years we have been thinking that no one would be interested in our internal Kubernetes networking enhancements for TelCo applications. We greatly appreciate you showing interest in contributing to our project, and thus proving that we were wrong for so long! Absolutely! We are open to all kinds of community feedback: it need not even be code, we are always happy to talk about requirements, or compare notes with our fellow networking enthusiasts. Of course, pull requests containing small code or documentation corrections, or even the implementation of new features are also very much welcomed! This work is released under the 3-Clause BSD License. Regardless whether your contribution is related to an already existing Issue, or not: it is generally a good idea to first discuss it with the core team. You can expect a response/engagement from the core team within a couple of days, so do not hesitate to contact us first through the [communication channels described below](#communication-channels-besides-github) Aligning expectations and discussing design ideas before implementing a feature can go a long way in increasing the chances of its acceptance! Feel free to comment on existing issues, open new ones, or simply send your ideas directly to us! We promise that we will be always transparent when communicating with the community, and will respectfully hear each and every idea out! In exchange we expect the same behavior from our Contributors. However, please also note that DANM is already being used in many, vastly differing TelCo production cases. As a result, the maintainers of the project retain the right of refusing certain contributions, if these would not be compatible with an existing production use-case. Don't feel discouraged or offended if this would happen to you! 
What we can promise is that we will always respectfully and truthfully explore and entertain your reasoning before making a decision, and will always openly and transparently share our own with you in case of a conflict! So, you are adamant you want to contribute to our project, and maybe even already discussed your idea with us. Awesome! What now? Keep in mind that the project is: - written in Golang and using `go module` feature for dependency management, so you will need a properly set-up `Golang 1.12+` development environment - build scripts depend on `docker`, to be able to run scripts locally you will have to install it on your machine Once you have the prerequisites, fork our project, code your changes, test your contribution, then start a normal GitHub review process. Pull requests will be only merged once at least one of the project maintainers approved them. The minimum expectation towards all pull requests is that: 1. Code is written in accordance with generic Clean Code guidelines 2. The Makefile in the project's root directory successfully executes, all binaries compile 3. All the not many -contribution opportunity 2.0- existing Unit Tests pass Not mandatory, *but highly appreciated:* 1. Existing coding style (2 spaces indentation, camelCase/CamelCase naming scheme) maintained 2. New Unit Tests are written to cover newly added (or even legacy) code When writing Unit Tests we prefer testing the packages through their public interfaces! We appreciate thorough and detailed commit messages. We are not allergic to the number of commits it took to create a contribution, you are not required to squash and amend your changes all the time. However, we require you to break-up big contributions into smaller, functionally coherent pieces. This approach greatly reduces both integration and review efforts! The following topics are on our mind right now, so if you are looking for topic to start with these are as good as any! 
Being a new project, we have not yet integrated the repository to an automated CI system (like Travis). Increasing UT coverage of existing code is alway appreciated. Extending the \"reach\" of the DANM ecosystem is our primary goal! This includes both native, first-class integration of additional CNI plugin interfaces, and integrating more one-network Kubernetes features (e.g. NetworkPolicy) with our DanmNet API. R\u00f3bert Springer (@rospring) Levente K\u00e1l\u00e9 (@Levovar) Lengyel Kriszti\u00e1n (@klengyel) Ferenc T\u00f3th (@TothFerenc) @peterszilagyi, @libesz, @visnyei, @CsatariGergely, @clivez, @Fillamug, @janosi Please keep in mind we live in the CET/CEST timezone! You can contact the core team mainly via email at robert.springer@nokia.com and levente.kale@nokia.com or you can join to our [slack channel](https://danmws.slack.com) using [this](https://join.slack.com/t/danmws/shared_invite/enQtNzEzMTQ4NDM2NTMxLTA3MDM4NGM0YTRjYzlhNGRiMDVlZWRlMjdlNTkwNTBjNWUyNjM0ZDQ3Y2E4YjE3NjVhNTE1MmEyYzkyMDRlNWU) invite link. But we also do hang around various Kubernetes slack channels, you might get lucky if you look around networking, node, or resource management :)"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "DANM",
- "file_name": "deployment-guide.md"
- },
- "content": [
- {
- "heading": "DANM Deployment Guide",
- "data": ""
- },
- {
- "heading": "Table of Contents",
- "data": "* [Getting started](#getting-started)\n * [Prerequisites](#prerequisites)\n * [Building the binaries](#building-the-binaries)\n * [Deployment](#deployment)"
- },
- {
- "heading": "Getting started",
- "data": ""
- },
- {
- "heading": "Prerequisites",
- "data": "To begin, you need to create your own Kubernetes cluster, and install DANM manually. We suggest\n to use any of the automated Kubernetes installing solutions (kubeadm, minikube etc.) for a painless\n experience.\n We currently test DANM with Kubernetes 1.17.X. Compatibility with earlier than 1.9.X versions of\n Kubernetes is not officially supported.\n **Running with pre-1.15.X versions have known issues when used together with the production-grade\n network management APIs. These originate from Kubernetes core code limitations.**\n Best bet is to always stay up-to-date!\n The project does not currently have a Docker container release, so we will walk you through the\n entire process of building all artifacts from scratch. To be able to do that, your development\n environment shall already have Docker daemon installed and ready to build containers."
- },
- {
- "heading": "Building the binaries",
- "data": "It is actually as easy as cloning the repository from GitHub, and executing the `build_danm.sh`\n script from the root of the project!\n The result will four container images:\n - `danm-cni-plugins`: This image contains the core CNI plugins (`danm`, `fakeipam`). Later on,\n it will be deployed as a DaemonSet that puts these binaries in place in each Kubernetes node.\n - `netwatcher`: This image will be used by the `netwatcher` DaemonSet\n - `webhook`: This image will be used by the `webhook` deployment\n - `svcwatcher`: This image will be used by the `svcwatcher` DaemonSet if you choose to install it."
- },
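Concretely, the build boils down to the following commands (taken from the repository docs; the clone URL is spelled out in full here):

```
git clone https://github.com/nokia/danm
cd danm
./build_danm.sh
```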
- {
- "heading": "Deployment",
- "data": "As a quicker but currently experimental option, please also take a look at\n [Deploying using an installer job](deployment-installer-job.md). This option integrates all\n of the steps mentioned below, into a single one-stop-shop installer. However, please treat\n this option as experimental for now -- and only apply it on a Kubernetes cluster where you\n feel comfortable with tolerating the impact if something goes wrong. Also, please let\n us know any issues you encounter!\n Otherwise, the manual method of deploying the whole DANM suite into a Kubernetes cluster is\n the following:"
- },
- {
- "heading": "1. Extend the Kubernetes API",
- "data": "There are two options to choose from:\n 1. **Lightweight**: Extend the Kubernetes API with the `DanmNet` and `DanmEp` CRD objects for a\n simplified network management experience by executing the following command from the project's\n root directory:\n ```\n kubectl create -f integration/crds/lightweight\n ```\n 1. **Production**: Extend the Kubernetes API with the `TenantNetwork`, `ClusterNetwork`,\n `TenantConfig`, and `DanmEp` CRD objects for a multi-tenant capable, production-grade network\n management experience by executing the following command from the project's root directory:\n ```\n kubectl create -f integration/crds/production\n ```"
- },
- {
- "heading": "2. Create a service account for the DANM CNI",
- "data": "In order to do its job, DANM needs a service account to access the cluster, and for that account to\n have the necessary RBAC roles provisioned.\n We also need to extract the token for this service account, as it will be required in the next step:"
- },
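The concrete manifests for this step live in the repository's `integration` directory; as a purely hypothetical sketch of the idea (the account name `danm` and the `kube-system` namespace are assumptions, and the token extraction shown is the pre-1.24 style where Kubernetes auto-creates a token Secret for the account):

```
# Hypothetical sketch -- defer to the repository's integration manifests.
kubectl create serviceaccount danm -n kube-system

# Extract the service account's token for use in the kubeconfig (step 3):
kubectl get secret -n kube-system \
  "$(kubectl get serviceaccount danm -n kube-system -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 -d
```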
- {
- "heading": "3. Create a valid CNI configuration file",
- "data": "Put a valid CNI config file into the CNI configuration directory of all your kubelet nodes (by\n default it is `/etc/cni/net.d/`), based on the following ecxample configuration:\n [Example CNI config file](https://github.com/nokia/danm/tree/master/integration/cni_config/00-danm.conf)\n As kubelet considers the first .conf file in the configured directory as the valid CNI config of the\n cluster, it is generally a good idea to prefix the .conf file of any CNI metaplugin with \"00\".\n Make sure to configure the optional DANM configuration parameters to match your environment!\n The parameter `kubeconfig` is mandatory, and shall point to a valid kubeconfig file.\n In order to create a valid kubeconfig file, the cluster server and CA certificate need to be known:\n *(note: Above commands may not work if you have more than one cluster in your kubeconfig file. In\n that case, adjust the commands above to pick the correct cluster, or obtain the values manually)*\n With both the service account token from step 2, and the cluster information from just above,\n a kubeconfig file can be created. If you ran the commands as show above, this is now simply\n a matter of replacing the variables either manually or with a tool like `envsubst`.\n [Example kubeconf file](https://github.com/nokia/danm/tree/master/integration/cni_config/example_kubeconfig.yaml)\n Also provision the necessary RBAC rules so DANM can do its job:"
- },
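For orientation, a minimal `00-danm.conf` could look like the sketch below; treat the exact keys and the kubeconfig path as assumptions and defer to the linked example file for the authoritative version:

```json
{
  "cniVersion": "0.3.1",
  "name": "meta_cni",
  "type": "danm",
  "kubeconfig": "/etc/cni/net.d/danm-kubeconfig"
}
```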
- {
- "heading": "4. Onboard container images",
- "data": "Onboard the netwatcher, svcwatcher, and webhook containers into the image registry of your cluster"
- },
- {
- "heading": "5. Create CNI plugin DaemonSet",
- "data": "Create the cni-plugin DaemonSet by executing the following command from the project's root\n directory:\n This DaemonSet will copy the `danm` and `fakeipam` binaries into the `/opt/cni/bin` directory of\n each node."
- },
- {
- "heading": "6. (OPTIONAL): Install other CNI plugins",
- "data": "Install other CNI plugins (flannel, sriov etc.) you would like to use in your cluster.\n Specific installation steps depend on the CNI plugin; some require copying into `/opt/cni/bin` on\n all nodes in your cluster, whereas others are installed using a DaemonSet (or a combination of both)."
- },
- {
- "heading": "7. Create the NetWatcher DaemonSet",
- "data": "Create the netwatcher DaemonSet by executing the following command from the project's root\n directory:\n Notes:\n - you should take a look at the example manifest, and possibly tailor it to your own environment\n first\n - we assume RBAC is configured for the Kubernetes API, so the manifests include the required\n Role and ServiceAccount for this case."
- },
- {
- "heading": "8. Create a bootstrap network",
- "data": ""
- },
- {
- "heading": "Create at least one DANM network to bootstrap your infrastructure Pods!",
- "data": "Otherwise you can easily fall into a catch 22 situation - you won't be able to bring-up Pods because\n you don't have network, but you cannot create networks because you cannot bring-up a Pod to validate\n them.\n Your bootstrap networking solution can be really anything you fancy!\n We use Flannel or Calico for the purpose in our environments, and connect Pods to it with such\n simple network descriptors like what you can find in `integration/bootstrap_networks`."
- },
- {
- "heading": "9. Create the Webhook Deployment",
- "data": "Create the webhook Deployment and provide it with certificates by executing the following commands\n from the project's root directory:\n Below scripts require the `jq` tool and `openssl`; please make sure you have them installed.\n **Disclaimer**: Webhook already leverages DANM CNI to create its network interface. Don't forget to\n change the name of the network referenced in the example manifest file to your bootstrap network!\n We also assume RBAC is configured in your cluster.\n ***You are now ready to use the services of DANM, and can start bringing-up Pods within your\n cluster!***"
- },
- {
- "heading": "10. (OPTIONAL) Create the Svcwatcher deployment",
- "data": "Create the svcwatcher Deployment by executing the following command from the project's root directory: This component is an optional part of the suite. You only need to install it if you would like to use Kubernetes Services for all the network interfaces of your Pod - but who wouldn't want that?? **Disclaimer**: Svcwatcher, and webhook already leverage DANM CNI to create their network interface. Don't forget to configure an appropriate default network in your cluster before you instantiate them! We use Flannel, or Calico for this purpose in our infrastructures. We also assume RBAC is configured in your cluster."
- },
- {
- "additional_info": "* [Getting started](#getting-started) * [Prerequisites](#prerequisites) * [Building the binaries](#building-the-binaries) * [Deployment](#deployment) To begin, you need to create your own Kubernetes cluster, and install DANM manually. We suggest to use any of the automated Kubernetes installing solutions (kubeadm, minikube etc.) for a painless experience. We currently test DANM with Kubernetes 1.17.X. Compatibility with earlier than 1.9.X versions of Kubernetes is not officially supported. **Running with pre-1.15.X versions have known issues when used together with the production-grade network management APIs. These originate from Kubernetes core code limitations.** Best bet is to always stay up-to-date! The project does not currently have a Docker container release, so we will walk you through the entire process of building all artifacts from scratch. To be able to do that, your development environment shall already have Docker daemon installed and ready to build containers. It is actually as easy as cloning the repository from GitHub, and executing the `build_danm.sh` script from the root of the project! ``` git clone github.com/nokia/danm cd danm ./build_danm.sh ``` The result will four container images: - `danm-cni-plugins`: This image contains the core CNI plugins (`danm`, `fakeipam`). Later on, it will be deployed as a DaemonSet that puts these binaries in place in each Kubernetes node. - `netwatcher`: This image will be used by the `netwatcher` DaemonSet - `webhook`: This image will be used by the `webhook` deployment - `svcwatcher`: This image will be used by the `svcwatcher` DaemonSet if you choose to install it. As a quicker but currently experimental option, please also take a look at [Deploying using an installer job](deployment-installer-job.md). This option integrates all of the steps mentioned below, into a single one-stop-shop installer. 
However, please treat this option as experimental for now, and only apply it on a Kubernetes cluster where you feel comfortable tolerating the impact if something goes wrong. Also, please let us know about any issues you encounter! Otherwise, the manual method of deploying the whole DANM suite into a Kubernetes cluster is the following: There are two options to choose from: 1. **Lightweight**: Extend the Kubernetes API with the `DanmNet` and `DanmEp` CRD objects for a simplified network management experience by executing the following command from the project's root directory: ``` kubectl create -f integration/crds/lightweight ``` 2. **Production**: Extend the Kubernetes API with the `TenantNetwork`, `ClusterNetwork`, `TenantConfig`, and `DanmEp` CRD objects for a multi-tenant capable, production-grade network management experience by executing the following command from the project's root directory: ``` kubectl create -f integration/crds/production ``` In order to do its job, DANM needs a service account to access the cluster, and for that account to have the necessary RBAC roles provisioned. We also need to extract the token for this service account, as it will be required in the next step: ``` kubectl create --namespace kube-system serviceaccount danm SECRET_NAME=$(kubectl get --namespace kube-system -o jsonpath='{.secrets[0].name}' serviceaccounts danm) SERVICEACCOUNT_TOKEN=$(kubectl get --namespace kube-system secrets ${SECRET_NAME} -o jsonpath='{.data.token}' | base64 -d) ``` Put a valid CNI config file into the CNI configuration directory of all your kubelet nodes (by default it is `/etc/cni/net.d/`), based on the following example configuration: [Example CNI config file](https://github.com/nokia/danm/tree/master/integration/cni_config/00-danm.conf) As kubelet considers the first .conf file in the configured directory as the valid CNI config of the cluster, it is generally a good idea to prefix the .conf file of any CNI metaplugin with \"00\". 
Make sure to configure the optional DANM configuration parameters to match your environment! The parameter `kubeconfig` is mandatory, and shall point to a valid kubeconfig file. In order to create a valid kubeconfig file, the cluster server and CA certificate need to be known: ``` CLUSTER_NAME=$(kubectl config view -o jsonpath='{.clusters[0].name}') CLUSTER_SERVER=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') CLUSTER_CA_CERTIFICATE=$(kubectl config view --flatten -o jsonpath='{.clusters[0].cluster.certificate-authority-data}') ``` *(note: the above commands may not work if you have more than one cluster in your kubeconfig file. In that case, adjust the commands to pick the correct cluster, or obtain the values manually)* With both the service account token from step 2, and the cluster information from just above, a kubeconfig file can be created. If you ran the commands as shown above, this is now simply a matter of replacing the variables either manually or with a tool like `envsubst`. [Example kubeconfig file](https://github.com/nokia/danm/tree/master/integration/cni_config/example_kubeconfig.yaml) Also provision the necessary RBAC rules so DANM can do its job: ``` kubectl create -f integration/cni_config/danm_rbac.yaml ``` Onboard the netwatcher, svcwatcher, and webhook containers into the image registry of your cluster. Create the cni-plugin DaemonSet by executing the following command from the project's root directory: ``` kubectl create -f integration/manifests/cni_plugins ``` This DaemonSet will copy the `danm` and `fakeipam` binaries into the `/opt/cni/bin` directory of each node. Install other CNI plugins (flannel, sriov etc.) you would like to use in your cluster. Specific installation steps depend on the CNI plugin; some require copying binaries into `/opt/cni/bin` on all nodes in your cluster, whereas others are installed using a DaemonSet (or a combination of both). 
Create the netwatcher DaemonSet by executing the following command from the project's root directory: ``` kubectl create -f integration/manifests/netwatcher/ ``` Notes: - you should take a look at the example manifest, and possibly tailor it to your own environment first - we assume RBAC is configured for the Kubernetes API, so the manifests include the required Role and ServiceAccount for this case. Otherwise you can easily fall into a catch-22 situation - you won't be able to bring up Pods because you don't have a network, but you cannot create networks because you cannot bring up a Pod to validate them. Your bootstrap networking solution can be anything you fancy! We use Flannel or Calico for this purpose in our environments, and connect Pods to them with simple network descriptors such as the ones found in `integration/bootstrap_networks`. Create the webhook Deployment and provide it with certificates by executing the following commands from the project's root directory: The scripts below require the `jq` tool and `openssl`; please make sure you have them installed. ``` ./integration/manifests/webhook/webhook-create-signed-cert.sh cat ./integration/manifests/webhook/webhook.yaml | \\ ./integration/manifests/webhook/webhook-patch-ca-bundle.sh > \\ ./integration/manifests/webhook/webhook-ca-bundle.yaml kubectl create -f integration/manifests/webhook/webhook-ca-bundle.yaml ``` **Disclaimer**: Webhook already leverages DANM CNI to create its network interface. Don't forget to change the name of the network referenced in the example manifest file to your bootstrap network! We also assume RBAC is configured in your cluster. ***You are now ready to use the services of DANM, and can start bringing up Pods within your cluster!*** Create the svcwatcher Deployment by executing the following command from the project's root directory: ``` kubectl create -f integration/manifests/svcwatcher/ ``` This component is an optional part of the suite. 
You only need to install it if you would like to use Kubernetes Services for all the network interfaces of your Pod - but who wouldn't want that? **Disclaimer**: Svcwatcher and webhook already leverage DANM CNI to create their network interfaces. Don't forget to configure an appropriate default network in your cluster before you instantiate them! We use Flannel or Calico for this purpose in our infrastructures. We also assume RBAC is configured in your cluster."
- }
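The variable-substitution step from the kubeconfig instructions above can be sketched as follows. The template below is a made-up stand-in for `integration/cni_config/example_kubeconfig.yaml`, the values are dummies, and plain `sed` is used as a portable alternative to `envsubst`:

```shell
# Illustrative values -- in a real cluster these come from the kubectl
# queries shown in the deployment steps (service account token, cluster
# server, CA certificate).
CLUSTER_NAME="demo"
CLUSTER_SERVER="https://10.0.0.1:6443"
SERVICEACCOUNT_TOKEN="dummy-token"

# A made-up stand-in for integration/cni_config/example_kubeconfig.yaml.
cat > kubeconfig.tmpl <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: ${CLUSTER_NAME}
  cluster:
    server: ${CLUSTER_SERVER}
users:
- name: danm
  user:
    token: ${SERVICEACCOUNT_TOKEN}
EOF

# envsubst (from gettext) would also work; sed keeps the sketch portable.
sed -e "s|\${CLUSTER_NAME}|${CLUSTER_NAME}|" \
    -e "s|\${CLUSTER_SERVER}|${CLUSTER_SERVER}|" \
    -e "s|\${SERVICEACCOUNT_TOKEN}|${SERVICEACCOUNT_TOKEN}|" \
    kubeconfig.tmpl > kubeconfig.yaml
```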
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "DANM",
- "file_name": "deployment-installer-job.md"
- },
- "content": [
- {
- "heading": "Deployment via installer job",
- "data": ""
- },
- {
- "heading": "TL;DR;",
- "data": ""
- },
- {
- "heading": "Installer job",
- "data": "This (currently experimental) method installs DANM using a Kubernetes Job.\n Just like the manual deployment, it assumes that a previous CNI (also referred to\n as \"bootstrap CNI\") is already installed. In the setup deployed by this installer,\n the bootstrap CNI will both be used by DANM components themselves (ie. netwatcher\n and svcwatcher will utilize that bootstrap CNI for their own network connectivity),\n as well as being configured as a DanmNet or ClusterNetwork with the name \"default\",\n that will be used by any Kubernetes applications without a `danm.io` annotation.\n Please be aware that the existing (bootstrap) CNI configuration must be a single\n CNI, *not* a list of CNIs. This means that in your CNI configuration directory,\n `/etc/cni/net.d`, there should be an existing file with a `.conf` extension, often\n named something like `10-flannel.conf` or `10-calico.conf`.\n If there is a file with a `.conflist` extension (such as `10-calico.conflist`), then\n that is a chained list of multiple CNIs. DANM does not currently support using\n such a `.conflist` chain as a bootstrap network. Depending on your setup, you may be\n able to to extract only the first CNI from the list using a command such as the\n following:\n Either way, please be sure that you have a functional `/etc/cni/net.d/*.conf` CNI\n configuration before proceeding, and know the name of that `.conf` file."
- },
- {
- "heading": "Configuration file (configMap)",
- "data": "This file will need modification to match your setup.\n Please review/edit `integration/install/danm-installer-config.yaml`."
- },
- {
- "heading": "Installer Job resource",
- "data": "This file will need modification only if the installation container needs to be\n pulled from an external registry. If this is the case, then please review/edit\n `integration/install/danm-installer.yaml`.\n If you have built DANM locally and do not need to pull images, this file does not\n need updating."
- },
- {
- "heading": "Deploying the installer",
- "data": ""
- },
- {
- "heading": "Watching installer progress",
- "data": "After applying the installer CRD, in `kubectl get pods -n kube-system` you should\n first see a `danm-installer-*` pod starting, and shortly after, the\n `danm-cni` and `netwatcher` daemonsets, `svcwatcher`, and `danm-webhook-deployment`\n pods.\n The `danm-installer-*` pod should end up in \"Completed\" status - if not, please check\n the pod logs for any errors."
- },
- {
- "heading": "Cleaning up (optional)",
- "data": "After the installer pod ran to completion, you can remove the installer itself:"
- },
- {
- "additional_info": "``` ${EDITOR} integration/install/danm-installer-config.yaml kubectl apply -f integration/install ``` This (currently experimental) method installs DANM using a Kubernetes Job. Just like the manual deployment, it assumes that a previous CNI (also referred to as \"bootstrap CNI\") is already installed. In the setup deployed by this installer, the bootstrap CNI will both be used by DANM components themselves (ie. netwatcher and svcwatcher will utilize that bootstrap CNI for their own network connectivity), as well as being configured as a DanmNet or ClusterNetwork with the name \"default\", that will be used by any Kubernetes applications without a `danm.io` annotation. Please be aware that the existing (bootstrap) CNI configuration must be a single CNI, *not* a list of CNIs. This means that in your CNI configuration directory, `/etc/cni/net.d`, there should be an existing file with a `.conf` extension, often named something like `10-flannel.conf` or `10-calico.conf`. If there is a file with a `.conflist` extension (such as `10-calico.conflist`), then that is a chained list of multiple CNIs. DANM does not currently support using such a `.conflist` chain as a bootstrap network. Depending on your setup, you may be able to to extract only the first CNI from the list using a command such as the following: ``` jq -M '{ name: .name, cniVersion: .cniVersioni} + .plugins[0]' \\ /etc/cni/net.d/${EXISTING_CONFLIST_FILE}.conflist \\ > /etc/cni/net.d/${FIRST_PLUGIN_FROM_LIST_CONFIG_FILE}.conf ``` Either way, please be sure that you have a functional `/etc/cni/net.d/*.conf` CNI configuration before proceeding, and know the name of that `.conf` file. This file will need modification to match your setup. Please review/edit `integration/install/danm-installer-config.yaml`. This file will need modification only if the installation container needs to be pulled from an external registry. 
If this is the case, then please review/edit `integration/install/danm-installer.yaml`. If you have built DANM locally and do not need to pull images, this file does not need updating. ``` kubectl apply -f integration/install ``` After applying the installer manifests, in `kubectl get pods -n kube-system` you should first see a `danm-installer-*` pod starting, and shortly after, the `danm-cni` and `netwatcher` daemonsets, `svcwatcher`, and `danm-webhook-deployment` pods. The `danm-installer-*` pod should end up in \"Completed\" status - if not, please check the pod logs for any errors. After the installer pod has run to completion, you can remove the installer itself: ``` kubectl delete -f integration/install ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "DANM",
- "file_name": "ISSUE_TEMPLATE.md"
- },
- "content": [
- {
- "additional_info": " **Is this a BUG REPORT or FEATURE REQUEST?**: > Uncomment only one, leave it on its own line: > > bug > feature **What happened**: **What you expected to happen**: **How to reproduce it**: **Anything else we need to know?**: **Environment**: - DANM version (use `danm -version`): - Kubernetes version (use `kubectl version`): - DANM configuration (K8s manifests, kubeconfig files, CNI config file): - OS (e.g. from /etc/os-release): - Kernel (e.g. `uname -a`): - Others:"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "DANM",
- "file_name": "PULL_REQUEST_TEMPLATE.md"
- },
- "content": [
- {
- "heading": "What type of PR is this?",
- "data": "> Uncomment only one, leave it on its own line: > > bug > cleanup > design > documentation > failing-test > feature **What does this PR give to us**: **Which issue(s) this PR fixes** *(in `fixes #(, fixes #, ...)` format, will close the issue(s) when PR gets merged)*: Fixes # **Special notes for your reviewer**: **Does this PR introduce a user-facing change?**: "
- },
- {
- "additional_info": " > Uncomment only one, leave it on its own line: > > bug > cleanup > design > documentation > failing-test > feature **What does this PR give to us**: **Which issue(s) this PR fixes** *(in `fixes #(, fixes #, ...)` format, will close the issue(s) when PR gets merged)*: Fixes # **Special notes for your reviewer**: **Does this PR introduce a user-facing change?**: ```release-note ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "DANM",
- "file_name": "README.md"
- },
- "content": [
- {
- "heading": "DANM",
- "data": "[](https://travis-ci.com/Nokia/danm)\n [](https://coveralls.io/github/nokia/danm?branch=master)\n "
- },
- {
- "heading": "Join our community!",
- "data": "Want to hang-out with us? Join our Slack under https://danmws.slack.com/!\n Feel yourself officially invited by clicking on [this](https://join.slack.com/t/danmws/shared_invite/enQtNzEzMTQ4NDM2NTMxLTA3MDM4NGM0YTRjYzlhNGRiMDVlZWRlMjdlNTkwNTBjNWUyNjM0ZDQ3Y2E4YjE3NjVhNTE1MmEyYzkyMDRlNWU) link!"
- },
- {
- "heading": "Want to get more bang for the buck? Check out DANM Utils too!",
- "data": "DANM Utils is the home to independet Operators built on top of the DANM network management platform, providing value added services to your cluster!\n Interested in adding outage resiliency to your IPAM, or universal network policy support? Look no further and hop over to https://github.com/nokia/danm-utils today!"
- },
- {
- "heading": "Table of Contents",
- "data": "* [Table of Contents](#table-of-contents)\n * [Introduction](#introduction)\n * [Install an Akraino REC and get DANM for free](#install-an-akraino-rec-and-get-danm-for-free)\n * [Our philosophy and motivation behind DANM](#our-philosophy-and-motivation-behind-danm)\n * [Scope of the project](#scope-of-the-project)\n * [Deployment](#deployment)\n * [User guide](#user-guide)\n * [Contributing](#contributing)\n * [Authors](#authors)\n * [License](#license)"
- },
- {
- "heading": "Introduction",
- "data": "__DANM__ is Nokia's solution to bring TelCo grade network management into a Kubernetes cluster! DANM has more than 4 years of history inside the company, is currently deployed into production, and it is finally available for everyone, here on GitHub.\n The name stands for \"Damn, Another Network Manager!\", because yes, we know: the last thing the K8s world needed is another TelCo company \"revolutionizing\" networking in Kubernetes.\n But still we hope that potential users checking out our project will involuntarily proclaim \"DANM, that's some good networking stuff!\" :)\n Please consider for a moment that there is a whole other world out there, with special requirements, and DANM is the result of those needs!\n We are certainly not saying DANM is __THE__ network solution, but we think it is a damn good one!\n Want to learn more about this brave new world? Don't hesitate to contact us, we are always quite happy to share the special requirements we need to satisfy each and every day.\n **In any case, DANM is more than just a plugin, it is an End-To-End solution to a whole problem domain**.\n It is:\n * a CNI plugin capable of provisioning IPVLAN interfaces with advanced features\n * an in-built IPAM module with the capability of managing multiple, ***cluster-wide***, discontinuous L3 networks with managing up to 8M allocations per network! plus providing dynamic, static, or no IP allocation scheme on-demand for both IPv4, and IPv6"
- },
- {
- "heading": "a CNI metaplugin capable of attaching multiple network interfaces to a container, either through its own CNI, or through delegating the job to any of the popular CNI solution e.g. SR-IOV, Calico, Flannel etc. ***in parallel",
- "data": "* a Kubernetes controller capable of centrally managing both VxLAN and VLAN interfaces of all Kubernetes hosts\n * another Kubernetes controller extending Kubernetes' Service-based service discovery concept to work over all network interfaces of a Pod\n * a standard Kubernetes Validating and Mutating Webhook responsible for making you adhere to the schemas, and also automating network resource management for tenant users in a production-grade environment"
- },
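In practice, attaching several interfaces boils down to a Pod annotation listing the requested networks. The sketch below is illustrative only: the exact annotation key and its JSON schema depend on the DANM release (consult the user guide), and the network names are made up:

```shell
# Hypothetical multi-interface Pod. The annotation key and its JSON
# schema vary by DANM version -- check the user guide for the real one.
cat > multinet-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: multinet-demo
  annotations:
    danm.io/interfaces: |
      [
        {"network": "management"},
        {"network": "data", "ip": "dynamic"}
      ]
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF
# kubectl create -f multinet-pod.yaml
```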
- {
- "heading": "Install an Akraino REC and get DANM for free!",
- "data": "Just kidding as DANM is always free, but if you want to install a production grade, open-source Kubernetes-based bare metal CaaS infrastructure by default equipped with DANM **and** with a single click of a button nonetheless; just head over to Linux Foundation Akraino Radio Edge Cloud (REC) wiki for the [Akraino REC Architecture](https://wiki.akraino.org/display/AK/REC+Architecture+Document) and the [Akraino REC Installation Guide](https://wiki.akraino.org/display/AK/REC+Installation+Guide)\n Not just for TelCo!\n The above functionalities are implemented by the following components:\n - **danm** is the CNI plugin which can be directly integrated with kubelet. Internally it consists of the CNI metaplugin, the CNI plugin responsible for managing IPVLAN interfaces, and the in-built IPAM plugin.\n Danm binary is integrated to kubelet as any other [CNI plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).\n - **fakeipam** is a little program used in natively integrating 3rd party CNI plugins into the DANM ecosystem. 
It is basically used to echo the result of DANM's in-built IPAM to the CNIs DANM delegates operations to.\n The fakeipam binary should be placed into kubelet's configured CNI plugin directory, next to danm.\n Fakeipam is a temporary solution; the long-term aim is to separate DANM's IPAM component into a full-fledged, standalone IPAM solution.\n - **netwatcher** is a Kubernetes Controller watching the Kubernetes API for changes in the DANM-related CRD network management APIs.\n This component is responsible for validating the semantics of network objects, and also for maintaining the VxLAN and VLAN host interfaces of all Kubernetes nodes.\n The netwatcher binary is deployed in Kubernetes as a DaemonSet, running on all nodes.\n - **svcwatcher** is another Kubernetes Controller monitoring the Pod, Service, Endpoints, and DanmEp API paths.\n This Controller is responsible for extending Kubernetes native Service Discovery to work even for the non-primary networks of the Pod.\n The svcwatcher binary is deployed in Kubernetes as a DaemonSet, running only on the Kubernetes master nodes in a clustered setup.\n - **webhook** is a standard Kubernetes Validating and Mutating Webhook. It has multiple, crucial responsibilities:\n - it validates all DANM-introduced CRD APIs both syntactically and semantically, during both creation and modification\n - it automatically mutates parameters only relevant to the internal implementation of DANM into the API objects\n - it automatically assigns physical network resources to the logical networks of tenant users in a production-grade infrastructure"
- },
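The kubelet integration of the danm metaplugin amounts to one small .conf file. A minimal sketch is shown below; treat the field values as assumptions and follow the example CNI config file linked from the Deployment Guide for the authoritative contents (only the `kubeconfig` parameter is documented as mandatory):

```shell
# Sketch of a minimal CNI config handing control to the danm metaplugin.
# Field values are assumptions -- see the example 00-danm.conf in the
# deployment guide. The "00" prefix makes kubelet pick this file first.
cat > 00-danm.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "meta_cni",
  "type": "danm",
  "kubeconfig": "/etc/cni/net.d/danm-kubeconfig"
}
EOF
# Copy this file into /etc/cni/net.d/ on every kubelet node.
```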
- {
- "heading": "Our philosophy and motivation behind DANM",
- "data": "It is undeniable that TelCo products- even in containerized format- ***must*** own physically separated network interfaces, but we have always felt other projects put too much emphasis on this lone fact, and entirely ignored -or were afraid to tackle- the larger issue with Kubernetes.\n That is: capability to **provision** multiple network interfaces to Pods is a very limited enhancement if the cloud native feature of Kubernetes **cannot be used with those extra interfaces**.\n This is the very big misconception our solution aims to rectify - we strongly believe that all network interfaces shall be natively supported by K8s, and there are no such things as \"primary\", or \"secondary\" network interfaces.\n Why couldn't NetworkPolicies, Services, LoadBalancers, all of these existing and proven Kubernetes constructs work with all network interfaces?\n Why couldn't network administrators freely decide which physical networks are reachable by a Pod?\n In our opinion the answer is quite simple: because networks are not first-class citizens in Kubernetes.\n This is the historical reason why DANM's CRD based, abstract network management APIs were born, and why is the whole ecosystem built around the concept of promoting networks to first-class Kubernetes API objects.\n This approach opens-up a plethora of possibilities, even with today's Kubernetes core code!\n The following chapters will guide you through the description of these features, and will show you how you can leverage them in your Kubernetes cluster."
- },
- {
- "heading": "Scope of the project",
- "data": "You will see at the end of this README that we really went above and beyond what \"networks\" are in vanilla Kubernetes.\n But, DANM core project never did, and will break one core concept: DANM is first and foremost a run-time agnostic standard CNI system for Kubernetes, 100% adhering to the Kubernetes life-cycle management principles.\n It is important to state this, because the features DANM provides open up a couple of very enticing, but also very dangerous avenues:\n - what if we would monitor the run-time and provide added high-availability feature based on events happening on that level?\n - what if we could change the networks of existing Pods?\n We strongly feel that all such scenarios incompatible with the life-cycle of a standard CNI plugin firmly fall outside the responsibility of the core DANM project.\n That being said, tell us about your Kubernetes breaking ideas! We are open to accept such plugins into the wider umbrella of the existing eco-system: outside of the core project, but still loosely linked to suite as optional, external components.\n Just because something doesn't fit into core DANM, it does not mean it can't fit into your cloud!\n Please visit [DANM utils](https://github.com/nokia/danm-utils) repository for more info."
- },
- {
- "heading": "Deployment",
- "data": "See [Deployment Guide](deployment-guide.md)."
- },
- {
- "heading": "User guide",
- "data": "See [User Guide](user-guide.md)."
- },
- {
- "heading": "Contributing",
- "data": "Please read [CONTRIBUTING.md](https://github.com/nokia/danm/blob/master/CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us."
- },
- {
- "heading": "Authors",
- "data": "* **Robert Springer** (@rospring) - Initial work (V1 Python), IPAM, Netwatcher, Svcwatcher [Nokia](https://github.com/nokia)\n * **Levente Kale** (@Levovar) - Initial work (V2 Golang), Documentation, Integration, SCM, UTs, Metaplugin, V4 work [Nokia](https://github.com/nokia)\n Special thanks to the original author who started the whole project in 2015 by putting a proprietary network management plugin between Kubelet and Docker; and also for coining the DANM acronym:\n **Peter Braun** (@peter-braun)"
- },
- {
- "heading": "License",
- "data": "This project is licensed under the 3-Clause BSD License - see the [LICENSE](LICENSE)"
- },
- {
- "additional_info": "[](https://travis-ci.com/Nokia/danm) [](https://coveralls.io/github/nokia/danm?branch=master) Want to hang-out with us? Join our Slack under https://danmws.slack.com/! Feel yourself officially invited by clicking on [this](https://join.slack.com/t/danmws/shared_invite/enQtNzEzMTQ4NDM2NTMxLTA3MDM4NGM0YTRjYzlhNGRiMDVlZWRlMjdlNTkwNTBjNWUyNjM0ZDQ3Y2E4YjE3NjVhNTE1MmEyYzkyMDRlNWU) link! DANM Utils is the home to independet Operators built on top of the DANM network management platform, providing value added services to your cluster! Interested in adding outage resiliency to your IPAM, or universal network policy support? Look no further and hop over to https://github.com/nokia/danm-utils today! * [Table of Contents](#table-of-contents) * [Introduction](#introduction) * [Install an Akraino REC and get DANM for free](#install-an-akraino-rec-and-get-danm-for-free) * [Our philosophy and motivation behind DANM](#our-philosophy-and-motivation-behind-danm) * [Scope of the project](#scope-of-the-project) * [Deployment](#deployment) * [User guide](#user-guide) * [Contributing](#contributing) * [Authors](#authors) * [](#license) __DANM__ is Nokia's solution to bring TelCo grade network management into a Kubernetes cluster! DANM has more than 4 years of history inside the company, is currently deployed into production, and it is finally available for everyone, here on GitHub. The name stands for \"Damn, Another Network Manager!\", because yes, we know: the last thing the K8s world needed is another TelCo company \"revolutionizing\" networking in Kubernetes. But still we hope that potential users checking out our project will involuntarily proclaim \"DANM, that's some good networking stuff!\" :) Please consider for a moment that there is a whole other world out there, with special requirements, and DANM is the result of those needs! We are certainly not saying DANM is __THE__ network solution, but we think it is a damn good one! 
Want to learn more about this brave new world? Don't hesitate to contact us, we are always quite happy to share the special requirements we need to satisfy each and every day. **In any case, DANM is more than just a plugin, it is an End-To-End solution to a whole problem domain**. It is: * a CNI plugin capable of provisioning IPVLAN interfaces with advanced features * an in-built IPAM module capable of managing multiple, ***cluster-wide***, discontinuous L3 networks with up to 8M allocations per network, providing dynamic, static, or no IP allocation schemes on demand for both IPv4 and IPv6 * a Kubernetes controller capable of centrally managing both VxLAN and VLAN interfaces of all Kubernetes hosts * another Kubernetes controller extending Kubernetes' Service-based service discovery concept to work over all network interfaces of a Pod * a standard Kubernetes Validating and Mutating Webhook responsible for making you adhere to the schemas, and also automating network resource management for tenant users in a production-grade environment Just kidding, as DANM is always free! But if you want to install a production-grade, open-source, Kubernetes-based bare metal CaaS infrastructure equipped with DANM by default, **and** with a single click of a button no less, just head over to the Linux Foundation Akraino Radio Edge Cloud (REC) wiki for the [Akraino REC Architecture](https://wiki.akraino.org/display/AK/REC+Architecture+Document) and the [Akraino REC Installation Guide](https://wiki.akraino.org/display/AK/REC+Installation+Guide) Not just for TelCo! The above functionalities are implemented by the following components: - **danm** is the CNI plugin which can be directly integrated with kubelet. Internally it consists of the CNI metaplugin, the CNI plugin responsible for managing IPVLAN interfaces, and the in-built IPAM plugin. 
The danm binary is integrated into kubelet like any other [CNI plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). - **fakeipam** is a small program used to natively integrate 3rd party CNI plugins into the DANM ecosystem. It is basically used to echo the result of DANM's in-built IPAM to the CNIs DANM delegates operations to. The fakeipam binary should be placed into kubelet's configured CNI plugin directory, next to danm. Fakeipam is a temporary solution; the long-term aim is to separate DANM's IPAM component into a full-fledged, standalone IPAM solution. - **netwatcher** is a Kubernetes Controller watching the Kubernetes API for changes in the DANM-related CRD network management APIs. This component is responsible for validating the semantics of network objects, and also for maintaining the VxLAN and VLAN host interfaces of all Kubernetes nodes. The netwatcher binary is deployed in Kubernetes as a DaemonSet, running on all nodes. - **svcwatcher** is another Kubernetes Controller monitoring the Pod, Service, Endpoints, and DanmEp API paths. This Controller is responsible for extending Kubernetes native Service Discovery to work even for the non-primary networks of the Pod. The svcwatcher binary is deployed in Kubernetes as a DaemonSet, running only on the Kubernetes master nodes in a clustered setup. - **webhook** is a standard Kubernetes Validating and Mutating Webhook. 
It has multiple, crucial responsibilities:
- it validates all DANM-introduced CRD APIs both syntactically and semantically, during both creation and modification
- it automatically mutates parameters only relevant to the internal implementation of DANM into the API objects
- it automatically assigns physical network resources to the logical networks of tenant users in a production-grade infrastructure
It is undeniable that TelCo products - even in containerized format - ***must*** own physically separated network interfaces, but we have always felt other projects put too much emphasis on this lone fact, and entirely ignored - or were afraid to tackle - the larger issue with Kubernetes. That is: the capability to **provision** multiple network interfaces to Pods is a very limited enhancement if the cloud native features of Kubernetes **cannot be used with those extra interfaces**. This is the big misconception our solution aims to rectify - we strongly believe that all network interfaces shall be natively supported by K8s, and that there are no such things as \"primary\" or \"secondary\" network interfaces. Why couldn't NetworkPolicies, Services, LoadBalancers - all of these existing and proven Kubernetes constructs - work with all network interfaces? Why couldn't network administrators freely decide which physical networks are reachable by a Pod? In our opinion the answer is quite simple: because networks are not first-class citizens in Kubernetes. This is the historical reason why DANM's CRD-based, abstract network management APIs were born, and why the whole ecosystem is built around the concept of promoting networks to first-class Kubernetes API objects. This approach opens up a plethora of possibilities, even with today's Kubernetes core code! The following chapters will guide you through the description of these features, and will show you how you can leverage them in your Kubernetes cluster.
You will see by the end of this README that we really went above and beyond what \"networks\" are in vanilla Kubernetes. But the DANM core project never did, and never will, break one core concept: DANM is first and foremost a runtime-agnostic, standard CNI system for Kubernetes, 100% adhering to the Kubernetes life-cycle management principles. It is important to state this, because the features DANM provides open up a couple of very enticing, but also very dangerous avenues:
- what if we monitored the runtime and provided added high-availability features based on events happening on that level?
- what if we could change the networks of existing Pods?
We strongly feel that all such scenarios incompatible with the life-cycle of a standard CNI plugin firmly fall outside the responsibility of the core DANM project. That being said, tell us about your Kubernetes-breaking ideas! We are open to accepting such plugins into the wider umbrella of the existing eco-system: outside of the core project, but still loosely linked to the suite as optional, external components. Just because something doesn't fit into core DANM, it doesn't mean it can't fit into your cloud! Please visit the [DANM utils](https://github.com/nokia/danm-utils) repository for more info. See [Deployment Guide](deployment-guide.md). See [User Guide](user-guide.md). Please read [CONTRIBUTING.md](https://github.com/nokia/danm/blob/master/CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us.
* **Robert Springer** (@rospring) - Initial work (V1 Python), IPAM, Netwatcher, Svcwatcher [Nokia](https://github.com/nokia)
* **Levente Kale** (@Levovar) - Initial work (V2 Golang), Documentation, Integration, SCM, UTs, Metaplugin, V4 work [Nokia](https://github.com/nokia)
Special thanks to the original author, who started the whole project in 2015 by putting a proprietary network management plugin between Kubelet and Docker, and who also coined the DANM acronym: **Peter Braun** (@peter-braun)
This project is licensed under the 3-Clause BSD License - see the [LICENSE](LICENSE)"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "DANM",
- "file_name": "user-guide.md"
- },
- "content": [
- {
- "heading": "DANM User guide",
- "data": ""
- },
- {
- "heading": "Table of Contents",
- "data": "* [Usage of DANM's CNI](#usage-of-danms-cni)\n * [Configuring DANM](#configuring-danm)\n * [Network management](#network-management)\n * [Overview](#overview)\n * [Lightweight network management experience](#lightweight-network-management-experience)\n * [Production-grade network management experience](#production-grade-network-management-experience)\n * [Network management in the practical sense](#network-management-in-the-practical-sense)\n * [Generally supported DANM API features](#generally-supported-danm-api-features)\n * [Naming container interfaces](#naming-container-interfaces)\n * [Provisioning static IP routes](#provisioning-static-ip-routes)\n * [Provisioning policy-based IP routes](#provisioning-policy-based-ip-routes)\n * [Delegating to other CNI plugins](#delegating-to-other-cni-plugins)\n * [Creating the configuration for delegated CNI operations](#creating-the-configuration-for-delegated-cni-operations)\n * [Connecting Pods to specific networks](#connecting-pods-to-specific-networks)\n * [Defining default networks](#defining-default-networks)\n * [Internal workings of the metaplugin](#internal-workings-of-the-metaplugin)\n * [DANM IPAM](#danm-ipam)\n * [Using IPAM with static backends](#using-ipam-with-static-backends)\n * [IPv6 and dual-stack support](#ipv6-and-dual-stack-support)\n * [DANM IPVLAN CNI](#danm-ipvlan-cni)\n * [Device Plugin Support](#device-plugin-support)\n * [Using Intel SR-IOV CNI](#using-intel-sr-iov-cni)\n * [DPDK support](#dpdk-support)\n * [Usage of DANM's Webhook component](#usage-of-danms-webhook-component)\n * [Responsibilities](#responsibilities)\n * [Connecting TenantNetworks to TenantConfigs](#connecting-tenantnetworks-to-tenantconfigs)\n * [TenantConfig API](#tenantconfig-api)\n * [Selecting a physical interface profile](#selecting-a-physical-interface-profile)\n * [Overwrite NetworkID for static delegates](#overwrite-networkid-for-static-delegates)\n * [List of validation rules](#list-of-validation-rules)\n 
* [DanmNet](#danmnet)\n * [TenantNetwork](#tenantnetwork)\n * [ClusterNetwork](#clusternetwork)\n * [TenantConfig](#tenantconfig)\n * [Usage of DANM's Netwatcher component](#usage-of-danms-netwatcher-component)\n * [Feature description](#feature-description)\n * [Usage with DANM APIs](#usage-with-danm-apis)\n * [Usage with NetworkAttachmentDefinition API](#usage-with-networkattachmentdefinition-api)\n * [Usage of DANM's Svcwatcher component](#usage-of-danms-svcwatcher-component)\n * [Feature description](#feature-description)\n * [Svcwatcher compatible Service descriptors](#svcwatcher-compatible-service-descriptors)\n * [Demo: Multi-domain service discovery in Kubernetes](#demo-multi-domain-service-discovery-in-kubernetes)"
- },
- {
- "heading": "User guide",
- "data": "This section describes what features the DANM networking suite adds to a vanilla Kubernetes environment, and how users can utilize them."
- },
- {
- "heading": "Usage of DANM's CNI",
- "data": ""
- },
- {
- "heading": "Configuring DANM",
- "data": "As DANM becomes more and more complex, we offer some level of control over how network provisioning is done internally.\n Unless stated otherwise, DANM behaviour can be configured purely through its CNI configuration file.\n The following configuration options are currently supported:\n - cniDir: users can define where DANM should search for the CNI config files of static delegates. Default value is /etc/cni/net.d\n - namingScheme: if set to legacy, container network interface names are set exactly to the value of the respective network's Spec.Options.container_prefix parameter. Otherwise refer to [Naming container interfaces](#naming-container-interfaces) for details"
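As a quick illustration, a DANM CNI configuration honouring the above options might look like the sketch below. Only `cniDir` and `namingScheme` come from this guide; the remaining fields (for example the kubeconfig path) are illustrative assumptions, so consult the Deployment Guide for the authoritative format:

```json
{
  "cniVersion": "0.3.1",
  "name": "meta_cni",
  "type": "danm",
  "kubeconfig": "/etc/cni/net.d/danm-kubeconfig",
  "cniDir": "/etc/cni/net.d",
  "namingScheme": "legacy"
}
```

With `namingScheme` set to `legacy`, interface names are taken verbatim from `container_prefix`; any other value activates the naming logic described in [Naming container interfaces](#naming-container-interfaces).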
- },
- {
- "heading": "Network management",
- "data": ""
- },
- {
- "heading": "Overview",
- "data": "The DANM CNI is a full-fledged CNI metaplugin, capable of provisioning multiple network interfaces to a Pod, on demand!\n DANM can utilize any of the existing and already integrated CNI plugins to do so.\n DANM supports two kinds of network management experiences as of DANM 4.0 - **lightweight** (the only supported mode before v4.0), and **production-grade**.\n Your experience depends on which CRD-based management APIs you choose to add to your cluster during installation.\n If you want, you can even add all available APIs at the same time to see which method better fits your needs!"
- },
- {
- "heading": "Lightweight network management experience",
- "data": "We advise new users, or users operating a single tenant Kubernetes cluster, to start out with a streamlined, lightweight network management experience.\n In this \"mode\" DANM only recognizes one network management API, called **DanmNet**.\n Both administrators and tenant users manage their networks through the same API. Everyone has the same level of access, and can configure all the parameters supported by DANM at their leisure.\n At the same time it is impossible to create networks which can be used across tenants (disclaimer: we use the word \"tenant\" as a synonym for \"Kubernetes namespace\" throughout the document)."
- },
- {
- "heading": "Production-grade network management experience",
- "data": "In a real, production-grade cluster the lightweight management paradigm does not suffice, because usually there are different users, with different roles, interacting with each other.\n There are possibly multiple users using their own segment of the cloud -or should we say tenant?- at the same time, while administrator(s) oversee that everything is configured, and works as it should.\n The idea behind production-grade network management is that:\n - tenant users shall be restricted to using only the network resources allocated to them by the administrators, but should be able to freely decide what to do with these resources within the confines of their tenant\n - administrators, and only administrators, shall have control over the network resources of the whole cloud\n To satisfy the needs of this complex ecosystem, DANM provides different APIs for the different purposes: **TenantNetworks**, and **ClusterNetworks**!\n **TenantNetworks** is a namespaced API, and can be freely created by tenant users. It is basically the same API as DanmNet, with one big difference: parameters in any way related to host settings cannot be freely configured through this API. These parameters are automatically filled by DANM instead!\n Wonder how? Refer to chapter [Connecting TenantNetworks to TenantConfigs](#connecting-tenantnetworks-to-tenantconfigs) for more information.\n **ClusterNetworks** on the other hand is a cluster-wide API, and as such, can be -or should be- only provisioned by administrator-level users.
Administrators can freely set all available configuration options, even the physical parameters.\n The other nice thing about ClusterNetworks is that all Pods, in any namespace, can connect to them - unless the network administrator forbade it via the newly introduced **AllowedTenants** configuration list.\n Interested users can find reference manifests showcasing the features of the new APIs under [DANM V4 example manifests](https://github.com/nokia/danm/tree/master/example/4_0_examples).\n ##### Network management in the practical sense\n Regardless of which paradigm thrives in your cluster, network objects are managed the exact same way - you just might not be allowed to execute a specific provisioning operation in case you are trying to overstep your boundaries! Don't worry, as DANM will always explicitly and instantly tell you when this happens.\n Unless explicitly stated in the description of a specific feature, all API features are generally supported, and supported the same way, regardless of which network management API type you use them through.\n Network management always starts with the creation of Kubernetes API objects, logically representing the characteristics of a network Pods can connect to.\n Users first need to create the manifest files of these objects according to the schema described in the [DanmNet schema](https://github.com/nokia/danm/tree/master/schema/DanmNet.yaml), [TenantNetwork schema](https://github.com/nokia/danm/tree/master/schema/TenantNetwork.yaml), or [ClusterNetwork schema](https://github.com/nokia/danm/tree/master/schema/ClusterNetwork.yaml) template files.\n A network object can be created just like any other Kubernetes object, for example by issuing:\n ```\n kubectl create -f test-net1.yaml\n ```\n __WARNING: DANM stores pretty important information in these API objects.
Under no circumstances shall a network be manually deleted if there are any running Pods still referencing it!__\n __Such action will undoubtedly lead to ruin and DANMation!__\n From DANM 4.0 upward the Webhook component makes sure this cannot happen, but it is better to be aware of this detail."
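For reference, a minimal `test-net1.yaml` could look like the following sketch. The attribute names follow the DanmNet schema linked above, but the concrete values (device name, subnet, pool boundaries) are illustrative:

```yaml
apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: test-net1
  namespace: default
spec:
  NetworkID: test-net1
  NetworkType: ipvlan        # served by DANM's in-built IPVLAN CNI
  Options:
    host_device: ens3        # host NIC the IPVLAN sub-interfaces attach to
    cidr: 10.0.0.0/24        # cluster-wide subnet managed by DANM's IPAM
    allocation_pool:
      start: 10.0.0.10
      end: 10.0.0.100
```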
- },
- {
- "heading": "Generally supported DANM API features",
- "data": ""
- },
- {
- "heading": "Naming container interfaces",
- "data": "Generally speaking, you need to care about how the network interfaces of your Pods are named inside their respective network namespaces.\n The hard reality to keep in mind is that you shall always have an interface literally called \"eth0\" created within all your Kubernetes Pods, because Kubelet will always search for the existence of such an interface at the end of Pod instantiation.\n If such an interface does not exist after CNI is invoked, the state of the Pod will be considered \"faulty\", and it will be re-created in a loop.\n To be able to comply with this Kubernetes limitation, DANM always names the first container interface \"eth0\", regardless of your original intention.\n Sorry, but they made us do it :)\n **Note**: some CNI plugins try to be smart about this limitation on their own, and decided not to adhere to the CNI standard! An example of this behaviour can be found in Flannel.\n It is the user's responsibility to put the network connection of such boneheaded backends in first place in the Pod's annotation!\n Besides making sure the first interface is always named correctly, DANM also supports both explicit, and implicit interface naming schemes for all NetworkTypes to help you flexibly name the other -and CNI standard- interfaces!\n An interface connected to a network containing the container_prefix attribute is always named accordingly. You can use this API to explicitly set descriptive, unique names for NICs connecting to this network.\n In case container_prefix is not set in an interface's network descriptor, DANM automatically uses \"eth\" as the prefix when naming the interface.\n Regardless of which prefix is used, the interface name is also suffixed with an integer number corresponding to the sequence number of the network connection (e.g.
the first interface defined in the annotation is called \"eth0\", the second \"eth1\", etc.).\n DANM even supports mixing the naming schemes within the same Pod, and it supports the whole naming scheme for all network backends.\n This enables network administrators to even connect Pods to the same network more than once!"
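To illustrate the naming rules above with a hypothetical set of networks, assume a Pod requests three connections and only the third network defines `container_prefix`:

```yaml
# Sketch of the relevant network Options (values are illustrative):
# - connection 1: no container_prefix -> named "eth0" (the first NIC is always eth0)
# - connection 2: no container_prefix -> named "eth1" (default "eth" prefix + sequence number)
# - connection 3: container_prefix set -> named "ext2" (explicit prefix + sequence number)
spec:
  Options:
    container_prefix: ext
```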
- },
- {
- "heading": "Provisioning static IP routes",
- "data": "We recognize that not all networking involves an overlay technology, so provisioning IP routes directly into the Pod's network namespace needs to be generally supported.\n Network administrators can define routing rules for both IPv4, and IPv6 destination subnets under the \"routes\", and \"routes6\" attributes respectively.\n These attributes take a map of string-string pairs, where the key is the destination subnet and the value is the gateway address.\n The configured routes will be added to the default routing table of all Pods connecting to this network."
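As an illustrative sketch, a network provisioning one IPv4 and one IPv6 route into the default routing table of its Pods could carry the following Options (all subnets and gateways are made up):

```yaml
spec:
  Options:
    cidr: 10.0.0.0/24
    routes:
      10.20.0.0/24: 10.0.0.1            # destination subnet: gateway
    net6: 2001:db8::/64
    routes6:
      "2001:db8:1::/64": "2001:db8::1"
```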
- },
- {
- "heading": "Provisioning policy-based IP routes",
- "data": "Configuring generic routes on the network level is a nice feature, but in more complex network configurations (e.g. Pod connects to multiple networks) it is desirable to support Pod-level route provisioning.\n The routing table to hold the Pods' policy-based IP routes can be configured via the \"rt_tables\" API attribute.\n Whenever a Pod asks for policy-based routes via the \"proutes\", and/or \"proutes6\" network connection attributes, the related routes will be added to the configured table.\n DANM also provisions the necessary rule pointing to the configured routing table."
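Putting the two attributes together in a sketch: the network designates the routing table via `rt_tables`, while the Pod requests its own routes via `proutes` in its network connection annotation. The table number and subnets are made up, and the exact annotation payload is an assumption - see the network_attach schema for the authoritative format:

```yaml
# Network side: policy-based routes of connecting Pods go into table 200
spec:
  Options:
    rt_tables: 200
---
# Pod side: annotation fragment requesting a policy-based IPv4 route
metadata:
  annotations:
    danm.k8s.io/interfaces: |
      [{"network":"external", "proutes":{"10.30.0.0/24":"10.0.0.1"}}]
```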
- },
- {
- "heading": "Delegating to other CNI plugins",
- "data": "Pay special attention to the network attribute called \"NetworkType\". This parameter controls which CNI plugin is invoked by the DANM metaplugin during the execution of a CNI operation to set up or delete exactly one network interface of a Pod.\n In case this parameter is set to \"ipvlan\", or is missing, DANM's in-built IPVLAN CNI plugin creates the network (see next chapter for details).\n In case this attribute is provided and set to a value other than \"ipvlan\", then network management is delegated to the CNI plugin with the same name.\n The binary is searched for in the configured CNI binary directory.\n Example: when a Pod is created and requests a connection to a network with \"NetworkType\" set to \"flannel\", then DANM will delegate the creation of this network interface to the /flannel binary."
- },
- {
- "heading": "Creating the configuration for delegated CNI operations",
- "data": "We strongly believe that network management in general should be driven by generic APIs -almost- completely adhering to the same schema. Therefore, DANM is capable of \"translating\" the generic options coming from network objects into the specific \"language\" the delegate CNI plugin understands.\n This way users can dynamically configure various networking solutions via the same, abstract API without caring about how a specific option is called exactly in the terminology of the delegate solution.\n A generic framework supporting this method is built into DANM's code, but still this level of integration requires case-by-case implementation.\n As a result, DANM currently supports two integration levels:\n - **Dynamic integration level:** CNI-specific network attributes (e.g. name of parent host devices etc.) can be controlled on a per network level, exclusively taken directly from the CRD object\n - **Static integration level:** CNI-specific network attributes are by default configured via static CNI configuration files (Note: this is the default CNI configuration method).\n Note: most of the DANM API supported attributes (e.g. IP route configuration, IP address management etc.) 
are generally supported for all CNIs, regardless of their supported integration level.\n Always refer to the schema descriptors for more details on which parameters are universally supported!\n Our aim is to integrate all the popular CNIs into the DANM eco-system over time, but currently the following CNIs have achieved dynamic integration level:\n - DANM's own, in-built IPVLAN CNI plugin\n - Set the \"NetworkType\" parameter to value \"ipvlan\" to use this backend\n - Intel's [SR-IOV CNI plugin](https://github.com/intel/sriov-cni)\n - Set the \"NetworkType\" parameter to value \"sriov\" to use this backend\n - Generic MACVLAN CNI from the CNI plugins example repository [MACVLAN CNI plugin](https://github.com/containernetworking/plugins/blob/master/plugins/main/macvlan/macvlan.go)\n - Set the \"NetworkType\" parameter to value \"macvlan\" to use this backend\n No separate configuration file is required when DANM connects Pods to such networks; everything happens automatically, purely based on the network manifest!\n When network management is delegated to CNI plugins with static integration level, DANM first reads their configuration from the configured CNI config directory.\n The directory can be configured by setting the \"CNI_CONF_DIR\" environment variable in DANM CNI's context (be it in the host namespace, or inside a Kubelet container).
Default value is \"/etc/cni/net.d\".\n In case there are multiple configuration files present for the same backend, users can control which one is used in a specific network provisioning operation via the NetworkID parameter.\n So, all in all: a Pod connecting to a network with \"NetworkType\" set to \"bridge\" and \"NetworkID\" set to \"example_network\" gets an interface provisioned by the /bridge binary based on the /example_network.conf file!\n In addition to simply delegating the interface creation operation, the universally supported features of the DANM management APIs -such as static and dynamic IP route provisioning, flexible interface naming, or centralized IPAM- are also configured either before or after the delegation takes place."
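The bridge example from the paragraph above can be captured in a manifest sketch like this (only NetworkID and NetworkType matter for delegate selection; the object name is arbitrary):

```yaml
apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: example-bridge
spec:
  NetworkID: example_network   # selects the example_network.conf file from the CNI config directory
  NetworkType: bridge          # delegates interface creation to the bridge CNI binary
```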
- },
- {
- "heading": "Connecting Pods to specific networks",
- "data": "Pods can request connections to networks by defining one or more network connections in the annotation of their (template) spec field, according to the schema described in the **schema/network_attach.yaml** file.\n For each connection defined in such a manner DANM provisions exactly one interface into the Pod's network namespace, in the way described in previous chapters (configuration taken from the referenced API object).\n In case you have added more than one network management API to your cluster, it is possible to connect the same Pod to networks of different APIs. But please note that physical network interfaces are 1:1 mapped to logical networks.\n In addition to simply invoking other CNI libraries to set up network connections, Pods can even influence the way their interfaces are created, to a certain extent.\n For example, Pods can ask DANM to provision L3 IP addresses to their network interfaces dynamically, statically, or not at all!\n Or, as described earlier, creation of policy-based L3 IP routes into their network namespace is also universally supported by the solution."
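A sketch of a Pod requesting two interfaces via the danm.k8s.io/interfaces annotation is shown below. The exact per-connection keys are governed by the network_attach schema; the "ip" values here ("dynamic" for pool allocation, "none" for an L2-only interface) mirror the allocation schemes described above, and the network names and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-nic-pod
  annotations:
    danm.k8s.io/interfaces: |
      [
        {"network":"management", "ip":"dynamic"},
        {"network":"external",   "ip":"none"}
      ]
spec:
  containers:
  - name: app
    image: busybox   # illustrative
```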
- },
- {
- "heading": "Defining default networks",
- "data": "If the Pod annotation is empty (no explicit connections are defined), DANM tries to fall back to a configured default network.\n In the lightweight network management paradigm default networks can only be configured on a per-namespace level, by creating one DanmNet object with its ObjectMeta.Name field set to \"default\" in the Pod's namespace.\n In a production grade cluster, default networks can be configured both on the namespace, and on the cluster level. If both are configured for a Pod - that is, both a TenantNetwork named \"default\" in the Pod's namespace and a ClusterNetwork named \"default\" exist in the cluster - the namespace-level default takes precedence.\n There are no restrictions as to what DANM supported attributes can be configured for a default network. However, in this case users cannot specify any further fine-grained properties for the Pod (i.e. static IP address, policy-based IP routes).\n This feature is beneficial for cluster operators who would like to use unmodified upstream manifest files (i.e. community maintained Helm charts or Pods created by K8s operators), or would like to use DANM in the \"vanilla K8s\" way."
- },
- {
- "heading": "Internal workings of the metaplugin",
- "data": "Regardless which CNI plugins are involved in managing the networks of a Pod, and how they are configured; DANM invokes all of them at the same time, in parallel threads.\n DANM waits for the CNI result of all executors before converting, and merging them together into one summarized result object. The aggregated result is then sent back to kubelet.\n If any executor reported an error, or hasn't finished its job even after 10 seconds; the result of the whole operation will be an error.\n DANM reports all errors towards kubelet in case multiple CNI plugins failed to do their job."
- },
- {
- "heading": "DANM IPAM",
- "data": "DANM includes a fully generic and very flexible IPAM module in-built into the solution. The usage of this module is seamlessly integrated together with all the natively supported CNI plugins (DANM's IPVLAN, Intel's SR-IOV, and the CNI project's reference MACVLAN plugins); as well as with any other CNI backend fully adhering to the v0.3.1 CNI standard!\n The main feature of DANM's IPAM is that it's fully integrated into DANM's network management APIs through the attributes called \"cidr\", \"allocation_pool\", \"net6\", and \"allocation_pool_v6\". Therefore users of the module can easily configure all aspects of network management by manipulating solely dynamic Kubernetes API objects!\n This native integration also enables a very tempting possibility. **As IP allocations belonging to a network are dynamically tracked *within the same API object***, it becomes possible to define:\n * discontinuous subnets 1:1 mapped to a logical network\n * **cluster-wide usable subnets** (instead of node restricted sub CIDRs)\n Network administrators can simply provision their desired CIDRs, and the allocation pools into the network object. Whenever a Pod is instantiated or deleted **on any host within the cluster**, DANM updates the respective allocation record belonging to the network through the Kubernetes API before provisioning the chosen IP to the Pod's interface.\n The flexible IPAM module also allows Pods to define the IP allocation scheme best suited for them. Pods can ask dynamically allocated IPs from the defined allocation pool, or can ask for one, specific, static address.\n The application can even ask DANM to forego the allocation of any IPs to their interface in case a L2 network interface is required.\n DANM IPAM is capable of handling 8 million -that's right!- IP allocations per network object, IPv4, and IPv6 mixed.\n If this is still not enough to impress you, we honestly don't know what else you might need from your IPAM! So please come, and tell us :)"
- },
- {
- "heading": "Using IPAM with static backends",
- "data": "While using the DANM IPAM with dynamic backends is mandatory, netadmins can freely choose whether they want their static CNI backends to also be integrated into DANM's IPAM, or whether they would prefer these interfaces to be statically configured by another IPAM module.\n By default the \"ipam\" section of a static delegate is always configured from the CNI configuration file identified by the network's NetworkID parameter.\n However, users can overwrite this inflexible -and most of the time host-local- option by defining \"cidr\", and/or \"net6\" in their network manifest just as they would with a dynamic backend.\n When a Pod connects to a network with a static NetworkType but containing allocation subnets, and explicitly asks for an \"ip\", and/or \"ip6\" address from DANM in its annotation, DANM overwrites the \"ipam\" section coming from the static config with its own, dynamically allocated address.\n If a Pod does not ask DANM to allocate an IP, or the network does not define the necessary parameters, the delegation automatically falls back to the \"ipam\" defined in the static config file.\n **Note**: DANM can only integrate static backends into its flexible IPAM if the CNI itself is fully compliant with the standard, i.e. uses the plugin defined in the \"ipam\" section of its configuration. It is the administrator's responsibility to configure the DANM management APIs according to the capabilities of every CNI!"
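A sketch of such an override: the network delegates to a static backend, but because it also defines cidr, DANM's IPAM takes over the "ipam" section for Pods that explicitly ask for an "ip" in their annotation (names and subnet are illustrative):

```yaml
spec:
  NetworkID: example_network   # static CNI config file to delegate to
  NetworkType: bridge          # a static integration level backend
  Options:
    cidr: 10.50.0.0/24         # presence of cidr enables the DANM IPAM override
```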
- },
- {
- "heading": "IPv6 and dual-stack support",
- "data": "DANM's IPAM module supports both pure IPv6, and dual-stack (one IPv4, and one IPv6 address provisioned to the same interface) addresses with full feature parity!\n To configure an IPv6 CIDR for a network, network administrators shall configure the \"net6\" attribute.\n Similarly to IPv4 address management, operators can define a desired allocation pool for their V6 subnet via the \"allocation_pool_v6\" structure.\n Additionally, IP routes for IPv6 subnets can be configured via \"routes6\".\n If both \"cidr\" and \"net6\" are configured for the same network, Pods connecting to that network can ask for either an IPv4 or an IPv6 address - or even both at the same time!\n This feature is generally supported the same way even for static CNI backends! However, DANM cannot guarantee that every specific CNI plugin is compatible and comfortable with both IPv6, and dual IPs allocated by an IPAM.\n Therefore, it is the administrator's responsibility to configure the DANM management APIs according to the capabilities of every CNI!"
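A dual-stack network sketch combining the attributes above. The subnets are illustrative, and allocation_pool_v6 is assumed here to mirror the start/end structure of its IPv4 counterpart - check the schema descriptors for the authoritative layout:

```yaml
spec:
  Options:
    cidr: 10.0.0.0/24
    allocation_pool:
      start: 10.0.0.10
      end: 10.0.0.50
    net6: 2001:db8::/64
    allocation_pool_v6:
      start: 2001:db8::10
      end: 2001:db8::ffff
```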
- },
- {
- "heading": "DANM IPVLAN CNI",
- "data": "DANM's IPVLAN CNI uses the Linux kernel's IPVLAN module to provision high-speed, low-latency network interfaces for applications which need better performance than a bridge (or any other overlay technology) can provide.\n *Keep in mind that the IPVLAN module is a fairly recent addition to the Linux kernel, so the feature cannot be used on systems whose kernel is older than 4.4!\n 4.14+ would be even better (lotta bug fixes)*\n The CNI provisions IPVLAN interfaces in L2 mode, and supports the following extra features:\n * attaching IPVLAN sub-interfaces to any host interface\n * attaching IPVLAN sub-interfaces to dynamically created VLAN or VxLAN host interfaces\n * renaming the created interfaces according to the \"container_prefix\" attribute defined in the network object\n * allocating IP addresses by using DANM's flexible, in-built IPAM module\n * provisioning generic IP routes into a configured routing table inside the Pod's network namespace\n * Pod-level controlled provisioning of policy-based IP routes into Pod's network namespace"
- },
- {
- "heading": "Device Plugin support",
- "data": "DANM provides general support for CNIs interworking with Kubernetes' Device Plugin mechanism.\n A practical example of such a network provisioner is the SR-IOV CNI.\n When a properly configured Network Device Plugin runs, the allocatable resource list of the node is updated with the resources discovered by the plugin."
- },
- {
- "heading": "Using Intel SR-IOV CNI",
- "data": "The SR-IOV Network Device Plugin allows creating a list of *netdevice* type resource definitions with *sriovMode*, where each resource definition can have one or more assigned *rootDevices* (Physical Functions). The plugin looks for Virtual Functions (VFs) for each configured Physical Function (PF), and adds all discovered VFs to the allocatable resource list of the given Kubernetes Node. The Device Plugin resource name will be the device pool name on the Node. These device pools can be referenced in the resource request section of a Pod definition in the usual way.\n In the following example, the \"nokia.k8s.io/sriov_ens1f0\" device pool name consists of the \"nokia.k8s.io\" prefix and the \"sriov_ens1f0\" resourceName.\n All network management APIs contain an optional **device_pool** field where a specific device pool can be assigned to the given network."
- },
- {
- "heading": "Note: device_pool and host_device parameters are mutually exclusive!",
- "data": "Before DANM invokes a CNI which expects a given resource to be attached to the Pod, it gathers all the Kubelet-assigned device IDs belonging to the device pool defined in the Pod's network, and passes one ID from the list to the CNI."
- },
- {
- "heading": "Note: Pods connecting to networks depending on a device_pool must declare their respective resource requests through their Pod.Spec.Resources API!",
- "data": "The following example network definition shows how to configure the device_pool parameter for the sriov network type.\n The following Pod definition shows how to combine K8s Device resource requests with multiple network connections using the assigned resources:"
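A sketch of such a network and Pod pair, built around the nokia.k8s.io/sriov_ens1f0 pool named earlier (subnet, object names, and image are illustrative):

```yaml
apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: sriov-net-a
spec:
  NetworkID: sriov-net-a
  NetworkType: sriov
  Options:
    device_pool: nokia.k8s.io/sriov_ens1f0   # VFs are taken from this Device Plugin pool
    cidr: 10.60.0.0/24
---
apiVersion: v1
kind: Pod
metadata:
  name: sriov-pod
  annotations:
    danm.k8s.io/interfaces: |
      [{"network":"sriov-net-a"}]
spec:
  containers:
  - name: app
    image: myapp:latest   # illustrative
    resources:
      requests:
        nokia.k8s.io/sriov_ens1f0: '1'   # must match the device_pool of the network
      limits:
        nokia.k8s.io/sriov_ens1f0: '1'
```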
- },
- {
- "heading": "DPDK support",
- "data": "DANM's SR-IOV integration supports -and is tested with- both Intel, and Mellanox manufactured physical functions.\n Moreover Pods can use the allocated Virtual Functions for either kernel, or user space networking.\n The only restriction to keep in mind is when a DPDK using application requests VFs from an Intel NIC for the purpose of user space networking (i.e. DPDK),\n those VFs shall be already bound to the vfio-pci kernel driver before the Pod is instantiated.\n To guarantee such VFs are always available on the Node the Pod is scheduled to, we strongly suggest advertising vfio-pci bound VFs as a separate Device Pool.\n When an already vfio bound function is mounted to an application, DANM also creates a dummy kernel interface in its stead in the Pod's network namespace.\n The dummy interface can be easily identified by the application, because it's named exactly as the VF would be, following the standard DANM interface naming conventions.\n The dummy interface is used to convey all the information the user space application requires to start its own networking stack in a standardized manner.\n The list includes:\n - the IPAM details belonging to the user space device, such as IP addresses, IP routes etc.\n - VLAN tag of the VF, if any\n - PCI address of the specific device -as a link alias- so applications know which IPs/VLANs belong to which user space device\n - the original MAC address of the VF\n User space applications can interrogate this information via the usual kernel APIs, and then configure the allocated resources into their own network stack without the need of requesting any extra kernel privileges!"
- },
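One common way to pre-bind a VF to vfio-pci on the host is the kernel's sysfs driver_override mechanism, shown here as a minimal sketch (the PCI address is a placeholder; your deployment may use a tool such as dpdk-devbind instead):

```shell
# Minimal sketch -- 0000:04:00.5 is a placeholder PCI address.
modprobe vfio-pci
# detach the VF from its current driver, if any
echo "0000:04:00.5" > /sys/bus/pci/devices/0000:04:00.5/driver/unbind
# force vfio-pci to claim this device on the next probe
echo "vfio-pci" > /sys/bus/pci/devices/0000:04:00.5/driver_override
echo "0000:04:00.5" > /sys/bus/pci/drivers_probe
```

Once bound this way, the VF can be advertised to Kubelet as part of a dedicated vfio Device Pool, as suggested above.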
- {
- "heading": "Usage of DANM's Webhook component",
- "data": ""
- },
- {
- "heading": "Responsibilities",
- "data": "The Webhook component introduced in DANM V4 is responsible for three things:\n - it initializes essential, but not human-configurable API attributes (i.e. allocation tracking bitmasks) at the time of object creation\n - it matches and connects TenantNetworks to administrator-configured physical profiles allowed for tenant users\n - it validates the syntactic and semantic integrity of all API objects before any CREATE or PUT REST operation is allowed to be persisted in the K8s API server's data store"
- },
- {
- "heading": "Connecting TenantNetworks to TenantConfigs",
- "data": ""
- },
- {
- "heading": "TenantConfig API",
- "data": "TenantNetworks cannot freely define the following attributes:\n - host_devices\n - device_pool\n - vlan\n - vxlan\n - NetworkID\n The reason is that all these attributes are related to physical resources, which might not be allowed to be used by the specific tenants: VLANs might not be configured in the switches, specific NICs are reserved for infrastructure use, static CNI configuration files might not exist on the container host's disk etc.\n Instead, these parameters are either entirely or partially managed by DANM at TenantNetwork provisioning time.\n DANM does this by introducing a third new API with v4.0 called **TenantConfig**. TenantConfig is a mandatory API when DANM is used in the production-grade mode.\n TenantConfig is a cluster-wide API, containing two major parameters: physical interface profiles usable by TenantNetworks, and NetworkType:NetworkID mappings.\n Refer to [TenantConfig schema](https://github.com/nokia/danm/tree/master/schema/TenantConfig.yaml) for more information on TenantConfigs."
- },
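For illustration, a TenantConfig carrying both kinds of parameters might look like the following sketch (the API group, field names, and values are assumptions drawn from this section; the authoritative syntax is in the linked TenantConfig schema):

```yaml
# Illustrative sketch only -- verify field names against schema/TenantConfig.yaml.
apiVersion: danm.k8s.io/v1
kind: TenantConfig
metadata:
  name: tenantconfig
hostDevices:
  # physical interface profiles TenantNetworks are allowed to use
  - name: ens4
    vniType: vlan
    vniRange: "700-999"
  # a K8s Device Pool, selectable via a TenantNetwork's device_pool attribute
  - name: nokia.k8s.io/sriov_ens1f0
networkIds:
  # NetworkType -> NetworkID mappings for static delegates
  flannel: flannel_conf
```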
- {
- "heading": "Selecting a physical interface profile",
- "data": "There are multiple ways DANM can select the appropriate interface profile for a tenant user's network.\n Note: physical interface profiles are only relevant for dynamic backends.\n For backends dependent on the host_device option (such as IPVLAN and MACVLAN):\n - if the TenantNetwork contains the host_device attribute, DANM selects the entry from the TenantConfig with the matching name\n - if host_device is not provided by the user, DANM randomly selects an interface profile from the TenantConfig\n For backends dependent on the device_pool option (such as SR-IOV), the user needs to explicitly state which device_pool it wants to use.\n The reasoning behind not supporting random profile selection for K8s Devices based backends is that the Pod using such Devices needs to explicitly request resources from a specific pool in its own Pod manifest anyway. Randomly matching its network with a possibly different pool could result in run-time failures.\n If there are no suitable physical interface profiles configured by the cluster's network administrator, or the TenantNetwork tries to select a physical device which is not allowed, the webhook denies the creation of the TenantNetwork.\n If a suitable profile can be selected, DANM:\n - mutates the physical interface profile's name into either the TenantNetwork's host_device or device_pool attribute (DANM automatically figures out which one based on the name of the profile, and the NetworkType parameter)\n - if the interface profile is a virtual profile, DANM automatically reserves the next previously unused VNI from the configured VNI range\n - then mutates the reserved VNI into the TenantNetwork's respective attribute (vlan or vxlan)\n To avoid leaking VNIs in the cluster, DANM also takes care of freeing the reserved VNI of a TenantNetwork when it is deleted."
- },
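The mutation can be pictured with the following sketch of a tenant user's network (API group and names are illustrative assumptions; see schema/TenantNetwork.yaml for the real syntax):

```yaml
# Illustrative sketch only -- names and API group are placeholders.
apiVersion: danm.k8s.io/v1
kind: TenantNetwork
metadata:
  name: tenant-net
  namespace: tenant-a
spec:
  NetworkID: tenant-net
  NetworkType: ipvlan
  Options:
    # optional: omit host_device and DANM randomly picks an allowed profile
    host_device: ens4
    cidr: 10.10.0.0/24
# After admission, the webhook may have mutated e.g. "vlan: 742" into
# Options, reserved from the matched profile's configured VNI range.
```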
- {
- "heading": "Overwrite NetworkID for static delegates",
- "data": "Delegation to backends with static integration level (e.g. Calico, Flannel etc.) is configured via static CNI config files read from the container host's disk.\n These files are selected based on the NetworkType parameter of the TenantNetwork.\n Network administrators can configure NetworkType: NetworkID mappings into the TenantConfig. When a TenantNetwork is created with a NetworkType having a configured mapping, DANM automatically overwrites its NetworkID with the provided value.\n This guarantees that the tenant user's network will use the right CNI configuration file during Pod creation!"
- },
- {
- "heading": "List of validation rules",
- "data": ""
- },
- {
- "heading": "DanmNet",
- "data": "Every CREATE and ~~PUT~~ (see [https://github.com/nokia/danm/issues/144](https://github.com/nokia/danm/issues/144)) DanmNet operation is subject to the following validation rules:\n 1. spec.Options.Cidr must be supplied in a valid IPv4 CIDR notation\n 2. all gateway addresses belonging to an entry of spec.Options.Routes shall be in the defined IPv4 CIDR\n 3. spec.Options.Net6 must be supplied in a valid IPv6 CIDR notation\n 4. all gateway addresses belonging to an entry of spec.Options.Routes6 shall be in the defined IPv6 CIDR\n 5. spec.Options.Alloc shall not be manually defined\n 6. spec.Options.Alloc6 shall not be manually defined\n 7. spec.Options.Allocation_pool cannot be defined without defining spec.Options.Cidr\n 8. spec.Options.Allocation_pool.Start shall be in the provided IPv4 CIDR\n 9. spec.Options.Allocation_pool.End shall be in the provided IPv4 CIDR\n 10. spec.Options.Allocation_pool.End shall be larger than spec.Options.Allocation_pool.Start\n 11. spec.Options.Allocation_pool_V6 cannot be defined without defining spec.Options.Net6\n 12. spec.Options.Allocation_pool_V6.Start shall be in the provided IPv6 CIDR\n 13. spec.Options.Allocation_pool_V6.End shall be in the provided IPv6 CIDR\n 14. spec.Options.Allocation_pool_V6.End shall be larger than spec.Options.Allocation_pool_V6.Start\n 15. spec.Options.Allocation_pool_V6.Cidr must be supplied in a valid IPv6 CIDR notation, and must be in the provided IPv6 CIDR\n 16. The combined number of allocatable IP addresses of the manually provided IPv4 and IPv6 allocation CIDRs cannot be higher than 8 million\n 17. spec.Options.Vlan and spec.Options.Vxlan cannot be provided together\n 18. spec.NetworkID cannot be longer than 10 characters for dynamic backends\n 19. spec.AllowedTenants is not a valid parameter for this API type\n 20. spec.Options.Device_pool must be, and spec.Options.Host_device mustn't be provided for K8s Devices based networks (such as SR-IOV)\n 21. None of the spec.Options.Device, spec.Options.Vlan, or spec.Options.Vxlan attributes can be changed if there are any Pods currently connected to the network\n Every DELETE DanmNet operation is subject to the following validation rules:\n 22. the network cannot be deleted if there are any Pods currently connected to the network\n Not complying with any of these rules results in the denial of the provisioning operation."
- },
- {
- "heading": "TenantNetwork",
- "data": "Every CREATE and ~~PUT~~ (see [https://github.com/nokia/danm/issues/144](https://github.com/nokia/danm/issues/144)) TenantNetwork operation is subject to the DanmNet validation rules no. 1-16, 18, 19.\n In addition, TenantNetwork provisioning has the following extra rules:\n 1. spec.Options.Vlan cannot be provided\n 2. spec.Options.Vxlan cannot be provided\n 3. spec.Options.Vlan cannot be modified\n 4. spec.Options.Vxlan cannot be modified\n 5. spec.Options.Host_device cannot be modified\n 6. spec.Options.Device_pool cannot be modified\n Every DELETE TenantNetwork operation is subject to the DanmNet validation rule no. 22.\n Not complying with any of these rules results in the denial of the provisioning operation."
- },
- {
- "heading": "ClusterNetwork",
- "data": "Every CREATE and ~~PUT~~ (see [https://github.com/nokia/danm/issues/144](https://github.com/nokia/danm/issues/144)) ClusterNetwork operation is subject to the DanmNet validation rules no. 1-18, 20-21.\n Every DELETE ClusterNetwork operation is subject to the DanmNet validation rule no. 22.\n Not complying with any of these rules results in the denial of the provisioning operation."
- },
- {
- "heading": "TenantConfig",
- "data": "Every CREATE and PUT TenantConfig operation is subject to the following validation rules:\n 1. At least one of HostDevices or NetworkIDs must not be empty\n 2. VniType and VniRange must be defined together for every HostDevices entry\n 3. Neither the key nor the value can be empty in any NetworkType: NetworkID mapping entry\n 4. A NetworkID cannot be longer than 10 characters in a NetworkType: NetworkID mapping belonging to a dynamic NetworkType"
- },
- {
- "heading": "Usage of DANM's Netwatcher component",
- "data": ""
- },
- {
- "heading": "Feature description",
- "data": "Netwatcher is a standalone Network Operator responsible for dynamically managing (i.e. creating and deleting) VxLAN and VLAN interfaces on all the hosts based on dynamic network management K8s APIs.\n Netwatcher is a mandatory component of the DANM networking suite, but can be a great standalone addition to Multus, or any other NetworkAttachmentDefinition-driven K8s cluster!\n When netwatcher is deployed, it runs as a DaemonSet, brought up on all hosts where a meta CNI plugin is configured."
- },
- {
- "heading": "Usage with DANM APIs",
- "data": "Whenever a DANM network is created, modified, or deleted -any network, belonging to any of the supported API types- within the Kubernetes cluster, netwatcher will be triggered.\n If the network in question contains either the \"vxlan\" or the \"vlan\" attribute, netwatcher immediately creates or deletes the VLAN or VxLAN host interface with the matching VID.\n If the Spec.Options.host_device, .vlan, or .vxlan attributes are modified, netwatcher first deletes the old, and then creates the new host interface.\n This feature is most beneficial when used together with a dynamic network provisioning backend supporting connecting Pod interfaces to virtual host devices (IPVLAN, MACVLAN, SR-IOV for VLANs). Whenever a Pod is connected to such a network containing a virtual network identifier, the CNI component automatically connects the created interface to the VxLAN or VLAN host interface created by the netwatcher; instead of directly connecting it to the configured host device."
- },
- {
- "heading": "Usage with NetworkAttachmentDefinition API",
- "data": "But wait, that's not all - Netwatcher is an API-agnostic standalone Operator! This means all of its supported features can be used even in clusters where DANM is not the configured meta CNI solution!\n If your cluster uses a CNI solution driven by the NetworkAttachmentDefinition API -such as Multus, or Genie-, you can deploy netwatcher as-is to automate various network management operations of TelCo workloads.\n Whenever you deploy a NAD, Netwatcher will inspect the CNI config portion stored under Spec.Config. If there is a VLAN or VxLAN identifier added to a CNI configuration, it will trigger Netwatcher to create the necessary host interfaces, the exact same way as if these attributes were added to a DANM API object.\n For example, if you want your IPVLAN-type NAD to be connected to a specific VLAN, just add the tag to your object the following way:\n When it comes to dealing with NADs, Netwatcher understands that these extra tags are not recognized by the existing CNI eco-system. So to achieve E2E automation, Netwatcher will also modify the CNI configuration of the NAD to point to the right host interface!\n Let's use the above example to show how this works!\n First, upon seeing this network Netwatcher creates the appropriate host interface with the tag:"
- },
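The NAD example referenced above could look like the following sketch (the exact placement of the "vlan" key and the subnet/interface values are assumptions; consult the netwatcher documentation for the authoritative format):

```yaml
# Illustrative sketch only -- master, vlan, and subnet values are placeholders.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ipvlan",
      "master": "ens4",
      "vlan": 500,
      "ipam": { "type": "host-local", "subnet": "10.1.2.0/24" }
    }'
```

The "vlan" key is the extra tag Netwatcher looks for; plain ipvlan CNI would ignore it, which is exactly why Netwatcher rewrites the config to reference the VLAN host interface it created.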
- {
- "heading": "ip l | grep vlantest",
- "data": "Then it also initiates an Update operation on the NAD, exchanging the old host interface reference for the correct one:"
- },
- {
- "heading": "kubectl get network-attachment-definitions.k8s.cni.cncf.io ipvlan-conf -o yaml",
- "data": "This approach ensures users can seamlessly integrate Netwatcher into their existing clusters and enjoy its extra capabilities without any extra hassle - just the way we like it!"
- },
- {
- "heading": "Usage of DANM's Svcwatcher component",
- "data": ""
- },
- {
- "heading": "Feature description",
- "data": "The Svcwatcher component showcases the whole reason why DANM exists, and why it is designed the way it is. It is the first higher-level feature accomplishing our true goal described in the introduction section, that is, extending basic Kubernetes constructs to seamlessly work with multiple network interfaces.\n The first such construct is the Kubernetes Service!\n Let's see how it works.\n Svcwatcher basically works the same way as the default Service controller inside Kubernetes. It continuously monitors both the Service and the Pod APIs, and provisions Endpoints whenever the cluster administrator creates, updates, or deletes relevant API objects (e.g. creates a new Service, updates a Pod label etc.).\n DANM svcwatcher does the same, and more! The default Service controller assumes the Pod has one interface, so whenever a logical Service Endpoint is created it will always be created with the IP of the Pod's first (the infamous \"eth0\" in Kubernetes), and supposedly only, network interface.\n DANM svcwatcher, on the other hand, makes this behaviour configurable! DANM enhances the same Service API so an object will always explicitly select one logical network, rather than implicitly choosing the one with the hard-coded name of \"eth0\".\n Then, svcwatcher provisions a Service Endpoint with the address of the selected Pod's chosen network interface.\n This enhancement basically upgrades the in-built Kubernetes Service Discovery concept to work over multiple network interfaces, making Service Discovery only return truly relevant Endpoints in every scenario!\n The services of the svcwatcher component work with all supported network management APIs!"
- },
- {
- "heading": "Svcwatcher compatible Service descriptors",
- "data": "Based on the feature description, experienced Kubernetes users are probably already thinking \"but wait, there is no \"network selector\" field in the Kubernetes Service core API\".\n That is indeed true right now, but consider the core concept behind the creation of DANM: \"what use-cases would become possible if Networks were part of the core Kubernetes API\"?\n So, we went ahead and simulated exactly this scenario, while making sure our solution also works with a vanilla Kubernetes today; just as we did with all our other API enhancements.\n This is possible by leveraging the so-called \"headless and selectorless Services\" concept in Kubernetes. Headless plus selectorless Services do not contain a Pod selector field, which tells the Kubernetes native Service controller that Endpoint administration is handled by a 3rd party service.\n DANM svcwatcher is triggered when such a service is created, if it contains the DANM \"core API\" attributes in its annotation.\n These extra attributes are the following:\n \"danm.io/selector\": this selector serves the exact same purpose as the default Pod selector field (which is missing from a selectorless Service by definition). Endpoints are created for Pods which match all labels provided in this list\n \"danm.io/network\": this is the \"special sauce\" of DANM. 
When svcwatcher creates an Endpoint, its IP will be taken from the selected Pod's physical interface connected to the DanmNet with the matching name\n \"danm.io/tenantNetwork\": serves the exact same purpose as the network selector, but it selects interfaces connected to TenantNetworks, rather than DanmNets\n \"danm.io/clusterNetwork\": serves the exact same purpose as the network selector, but it selects interfaces connected to ClusterNetworks, rather than DanmNets\n This means that DANM controlled Services behave exactly as in Kubernetes: a selected Pod's availability is advertised through one of its network interfaces.\n The big difference is that operators can now decide through which interface(s) they want the Pod to be discoverable! (Of course nothing forbids the creation of multiple Services selecting different interfaces of the same Pod, in case a Pod should be discoverable by different kinds of communication partners).\n The schema of the enhanced, DANM-compatible Service object is described in detail in the **schema/DanmService.yaml** file."
- },
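A DANM-compatible Service might look like the following sketch (the selector value format and port are illustrative assumptions; the authoritative schema is schema/DanmService.yaml):

```yaml
# Illustrative sketch only -- label and network names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: vnf-internal-processor
  annotations:
    # DANM "core API" attributes carried as annotations
    danm.io/selector: '{"app": "internal-processor"}'
    danm.io/network: internal
spec:
  clusterIP: None   # headless ...
  # ... and selectorless: no spec.selector, so the native Service
  # controller leaves Endpoint administration to svcwatcher
  ports:
  - port: 8080
```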
- {
- "heading": "Demo: Multi-domain service discovery in Kubernetes",
- "data": "Why is this feature useful, the reader might ask?\n The answer depends on the use-case your application serves. If you share one, cloud-wide network between all application and infrastructure components, and everyone communicates with everyone through this -most probably overlay- network, then you are probably not excited by DANM's svcwatcher.\n However, if you believe in physically separated interfaces (or certain government organizations made you believe in it), non-default networks, multi-domain gateway components; then this is the feature you probably already built into your application's Helm chart in the form of an extra Consul, or Etcd component.\n This duplication of platform responsibility ends today! :)\n Allow us to demonstrate the usage of this feature via a common, TelCo-inspired example located in the project's example/svcwatcher_demo directory.\n The example contains three Pods running in the same cluster:\n - A LoadBalancer Pod, whose job is to accept connections over exotic but widely used non-L7 protocols (e.g. DIAMETER, LDAP, SIP, SIGTRAN etc.), and distribute the workload to backend services\n - An ExternalClient Pod, supplying the LoadBalancer with traffic through an external network\n - An InternalProcessor Pod, receiving requests to be served from the LoadBalancer Pod\n Our cluster contains three physical networks: external, internal, management.\n LoadBalancer connects to all three, because it needs to be able to establish connections to entities both supplying, and serving traffic. LoadBalancer also wishes to be scaled via Prometheus, hence it connects to the cluster's management network to expose its own \"packet_served_per_second\" custom metric.\n ExternalClient only connects to the LoadBalancer Pod, because it simply wants to send traffic to the application (VNF), and deal with the result of transactions. 
It doesn't care about, or know anything about, the internal architecture of the application (VNF).\n Because ExternalClient is not part of the same application (namespace) as LoadBalancer and InternalProcessor, it can't have access to their internal network.\n It doesn't require scaling, being a lightweight, non-critical component, therefore it also does not connect to the cluster's management network.\n InternalProcessor only connects to the LoadBalancer Pod: being a small, dynamically changing component, we don't want to expose it to external clients.\n InternalProcessor wants to have access to the many network-based features of Kubernetes, so it also connects to the management network, similarly to LoadBalancer."
- },
- {
- "heading": "So, how can ExternalClient(S) discover LoadBalancer(S), how can LoadBalancer(S) discover InternalProcessor(S), and how can we avoid making LoadBalancer(S) and InternalProcessor(S) discoverable through their management interface?",
- "data": "With DANM, the answer is as simple as instantiating the demonstration Kubernetes manifest files in the following order: Namespaces -> DanmNets -> Deployments -> Services\n - \"vnf-internal-processor\" will make the InternalProcessors discoverable through their application-internal network interface. LoadBalancers can use this Service to discover working backends serving transactions.\n - \"vnf-internal-lb\" will make the LoadBalancers discoverable through their application-internal network interface. InternalProcessors can use this Service to discover application egress points/gateway components.\n - Lastly, \"vnf-external-svc\" makes the same LoadBalancer instances discoverable, but this time through their external network interfaces. External clients connecting to the same network can use this Service to find the ingress/gateway interfaces of the whole application (VNF)!\n As a closing note: remember to delete the now unnecessary Service Discovery tool's Deployment manifest from your Helm chart :)"
- },
- {
- "additional_info": "* [Usage of DANM's CNI](#usage-of-danms-cni) * [Configuring DANM](#configuring-danm) * [Network management](#network-management) * [Overview](#overview) * [Lightweight network management experience](#lightweight-network-management-experience) * [Production-grade network management experience](#production-grade-network-management-experience) * [Network management in the practical sense](#network-management-in-the-practical-sense) * [Generally supported DANM API features](#generally-supported-danm-api-features) * [Naming container interfaces](#naming-container-interfaces) * [Provisioning static IP routes](#provisioning-static-ip-routes) * [Provisioning policy-based IP routes](#provisioning-policy-based-ip-routes) * [Delegating to other CNI plugins](#delegating-to-other-cni-plugins) * [Creating the configuration for delegated CNI operations](#creating-the-configuration-for-delegated-cni-operations) * [Connecting Pods to specific networks](#connecting-pods-to-specific-networks) * [Defining default networks](#defining-default-networks) * [Internal workings of the metaplugin](#internal-workings-of-the-metaplugin) * [DANM IPAM](#danm-ipam) * [Using IPAM with static backends](#using-ipam-with-static-backends) * [IPv6 and dual-stack support](#ipv6-and-dual-stack-support) * [DANM IPVLAN CNI](#danm-ipvlan-cni) * [Device Plugin Support](#device-plugin-support) * [Using Intel SR-IOV CNI](#using-intel-sr-iov-cni) * [DPDK support](#dpdk-support) * [Usage of DANM's Webhook component](#usage-of-danms-webhook-component) * [Responsibilities](#responsibilities) * [Connecting TenantNetworks to TenantConfigs](#connecting-tenantnetworks-to-tenantconfigs) * [TenantConfig API](#tenantconfig-api) * [Selecting a physical interface profile](#selecting-a-physical-interface-profile) * [Overwrite NetworkID for static delegates](#overwrite-networkid-for-static-delegates) * [List of validation rules](#list-of-validation-rules) * [DanmNet](#danmnet) * 
[TenantNetwork](#tenantnetwork) * [ClusterNetwork](#clusternetwork) * [TenantConfig](#tenantconfig) * [Usage of DANM's Netwatcher component](#usage-of-danms-netwatcher-component) * [Feature description](#feature-description) * [Usage with DANM APIs](#usage-with-danm-apis) * [Usage with NetworkAttachmentDefinition API](#usage-with-networkattachmentdefinition-api) * [Usage of DANM's Svcwatcher component](#usage-of-danms-svcwatcher-component) * [Feature description](#feature-description) * [Svcwatcher compatible Service descriptors](#svcwatcher-compatible-service-descriptors) * [Demo: Multi-domain service discovery in Kubernetes](#demo-multi-domain-service-discovery-in-kubernetes) This section describes what features the DANM networking suite adds to a vanilla Kubernetes environment, and how users can utilize them. As DANM becomes more and more complex, we offer some level of control over the internal behaviour of how network provisioning is done. Unless stated otherwise, DANM behaviour can be configured purely through its CNI configuration file. The following configuration options are currently supported: - cniDir: Users can define where DANM should search for the CNI config files for static delegates. Default value is /etc/cni/net.d - namingScheme: if it is set to legacy, container network interface names are set exactly to the value of the respective network's Spec.Options.container_prefix parameter. Otherwise refer to [Naming container interfaces](#naming-container-interfaces) for details. The DANM CNI is a full-fledged CNI metaplugin, capable of provisioning multiple network interfaces to a Pod, on-demand! DANM can utilize any of the existing and already integrated CNI plugins to do so. DANM supports two kinds of network management experiences as of DANM 4.0 - **lightweight** (the only supported mode before v4.0), and **production-grade**. Your experience depends on which CRD-based management APIs you chose to add to your cluster during installation. 
If you want you can even add all available APIs at the same time to see which method better fits your needs! We advise new users, or users operating a single-tenant Kubernetes cluster, to start out with a streamlined, lightweight network management experience. In this \"mode\" DANM only recognizes one network management API, called **DanmNet**. Both administrators and tenant users manage their networks through the same API. Everyone has the same level of access, and can configure all the parameters supported by DANM at their leisure. At the same time it is impossible to create networks which can be used across tenants (disclaimer: we use the word \"tenant\" as a synonym for \"Kubernetes namespace\" throughout the document). In a real, production-grade cluster the lightweight management paradigm does not suffice, because usually there are different users, with different roles, interacting with each other. There are possibly multiple users using their own segment of the cloud -or should we say tenant?- at the same time; while there can be administrator(s) overseeing that everything is configured, and works as it should be. The idea behind production-grade network management is that: - tenant users shall be restricted to using only the network resources allocated to them by the administrators, but should be able to freely decide what to do with these resources within the confines of their tenant - administrators, and only administrators, shall have control over the network resources of the whole cloud To satisfy the needs of this complex ecosystem, DANM provides different APIs for the different purposes: **TenantNetworks** and **ClusterNetworks**! **TenantNetworks** is a namespaced API, and can be freely created by tenant users. It basically is the same API as DanmNet, with one big difference: parameters in any way related to host settings cannot be freely configured through this API. These parameters are automatically filled by DANM instead! Wonder how? 
Refer to chapter [Connecting TenantNetworks to TenantConfigs](#connecting-tenantnetworks-to-tenantconfigs) for more information. **ClusterNetworks** on the other hand is a cluster-wide API, and as such, can be -or should be- only provisioned by administrator-level users. Administrators can freely set all available configuration options, even the physical parameters. The other nice thing about ClusterNetworks is that all Pods, in any namespace, can connect to them - unless the network administrator forbids it via the newly introduced **AllowedTenants** configuration list. Interested users can find reference manifests showcasing the features of the new APIs under [DANM V4 example manifests](https://github.com/nokia/danm/tree/master/example/4_0_examples). ##### Network management in the practical sense Regardless of which paradigm thrives in your cluster, network objects are managed the exact same way - you just might not be allowed to execute a specific provisioning operation in case you are trying to overstep your boundaries! Don't worry, as DANM will always explicitly and instantly tell you when this happens. Unless explicitly stated in the description of a specific feature, all API features are generally supported, and supported the same way, regardless of which network management API type you use them through. Network management always starts with the creation of Kubernetes API objects, logically representing the characteristics of a network Pods can connect to. Users first need to create the manifest files of these objects according to the schema described in the [DanmNet schema](https://github.com/nokia/danm/tree/master/schema/DanmNet.yaml), [TenantNetwork schema](https://github.com/nokia/danm/tree/master/schema/TenantNetwork.yaml), or [ClusterNetwork schema](https://github.com/nokia/danm/tree/master/schema/ClusterNetwork.yaml) template files. 
A network object can be created just like any other Kubernetes object, for example by issuing:\n ```\n kubectl create -f test-net1.yaml\n ```\n Users can also interact with the existing network management objects just as they would with other core API objects:\n ```\n / # kubectl describe danmnet test-net1\n Name:         test-net1\n Namespace:    default\n Labels:\n Annotations:\n API Version:  kubernetes.nokia.com/v1\n Kind:         DanmNet\n Metadata:\n   Cluster Name:\n   Creation Timestamp:  2018-05-24T16:53:27Z\n   Generation:          0\n   Resource Version:    3146\n   Self Link:           /apis/kubernetes.nokia.com/v1/namespaces/default/danmnets/test-net1\n   UID:                 fb1fdfb5-5f72-11e8-a8d0-fa163e98af00\n Spec:\n   Network ID:    test-net1\n   Network Type:  ipvlan\n   Options:\n     Allocation _ Pool:\n       Start:  192.168.1.10\n       End:    192.168.1.100\n     Container _ Prefix:  eth0\n     Host _ Device:       ens4\n     Rt _ Tables:         201\n   Validation:  True\n Events:\n ```\n __WARNING: DANM stores pretty important information in these API objects. Under no circumstances shall a network be manually deleted, if there are any running Pods still referencing it!__ __Such action will undoubtedly lead to ruin and DANMation!__ From DANM 4.0 upward the Webhook component makes sure this cannot happen, but it is better to be aware of this detail. Generally speaking, you need to care about how the network interfaces of your Pods are named inside their respective network namespaces. The hard reality to keep in mind is that you shall always have an interface literally called \"eth0\" created within all your Kubernetes Pods, because Kubelet will always search for the existence of such an interface at the end of Pod instantiation. If such an interface does not exist after CNI is invoked, the state of the Pod will be considered \"faulty\", and it will be re-created in a loop. To be able to comply with this Kubernetes limitation, DANM always names the first container interface \"eth0\", regardless of your original intention. 
Sorry, but they made us do it :) **Note**: some CNI plugins try to be smart about this limitation on their own, and decided not to adhere to the CNI standard! An example of this behaviour can be found in Flannel. It is the user's responsibility to put the network connection of such boneheaded backends in the first place in the Pod's annotation! Besides making sure the first interface is always named correctly, DANM also supports both explicit and implicit interface naming schemes for all NetworkTypes to help you flexibly name the other -and CNI standard- interfaces! An interface connected to a network containing the container_prefix attribute is always named accordingly. You can use this API to explicitly set descriptive, unique names for NICs connecting to this network. In case container_prefix is not set in an interface's network descriptor, DANM automatically uses \"eth\" as the prefix when naming the interface. Regardless of which prefix is used, the interface name is also suffixed with an integer number corresponding to the sequence number of the network connection (e.g. the first interface defined in the annotation is called \"eth0\", the second interface \"eth1\" etc.) DANM even supports the mixing of the naming schemes within the same Pod, and it supports the whole naming scheme for all network backends. This enables network administrators to even connect Pods to the same network more than once! We recognize that not all networking involves an overlay technology, so provisioning IP routes directly into the Pod's network namespace needs to be generally supported. Network administrators can define routing rules for both IPv4 and IPv6 destination subnets under the \"routes\" and \"routes6\" attributes respectively. These attributes take a map of string-string key (destination subnet) - value (gateway address) pairs. The configured routes will be added to the default routing table of all Pods connecting to this network. 
Configuring generic routes on the network level is a nice feature, but in more complex network configurations (e.g. Pod connects to multiple networks) it is desirable to support Pod-level route provisioning. The routing table to hold the Pods' policy-based IP routes can be configured via the \"rt_tables\" API attribute. Whenever a Pod asks for policy-based routes via the \"proutes\", and/or \"proutes6\" network connection attributes, the related routes will be added to the configured table. DANM also provisions the necessary rule pointing to the configured routing table. Pay special attention to the network attribute called \"NetworkType\". This parameter controls which CNI plugin is invoked by the DANM metaplugin during the execution of a CNI operation to setup, or delete exactly one network interface of a Pod. In case this parameter is set to \"ipvlan\", or is missing; then DANM's in-built IPVLAN CNI plugin creates the network (see next chapter for details). In case this attribute is provided and set to another value than \"ipvlan\", then network management is delegated to the CNI plugin with the same name. The binary will be searched in the configured CNI binary directory. Example: when a Pod is created and requests a connection to a network with \"NetworkType\" set to \"flannel\", then DANM will delegate the creation of this network interface to the /flannel binary. We strongly believe that network management in general should be driven by generic APIs -almost- completely adhering to the same schema. Therefore, DANM is capable of \"translating\" the generic options coming from network objects into the specific \"language\" the delegate CNI plugin understands. This way users can dynamically configure various networking solutions via the same, abstract API without caring about how a specific option is called exactly in the terminology of the delegate solution. 
A generic framework supporting this method is built into DANM's code, but this level of integration still requires case-by-case implementation. As a result, DANM currently supports two integration levels:
- **Dynamic integration level:** CNI-specific network attributes (e.g. name of parent host devices etc.) can be controlled on a per-network level, taken exclusively and directly from the CRD object
- **Static integration level:** CNI-specific network attributes are by default configured via static CNI configuration files (Note: this is the default CNI configuration method)

Note: most of the DANM API supported attributes (e.g. IP route configuration, IP address management etc.) are generally supported for all CNIs, regardless of their integration level. Always refer to the schema descriptors for more details on which parameters are universally supported!

Our aim is to integrate all the popular CNIs into the DANM eco-system over time, but currently the following CNIs have achieved dynamic integration level:
- DANM's own, in-built IPVLAN CNI plugin
  - Set the "NetworkType" parameter to value "ipvlan" to use this backend
- Intel's [SR-IOV CNI plugin](https://github.com/intel/sriov-cni)
  - Set the "NetworkType" parameter to value "sriov" to use this backend
- Generic MACVLAN CNI from the CNI plugins example repository [MACVLAN CNI plugin](https://github.com/containernetworking/plugins/blob/master/plugins/main/macvlan/macvlan.go)
  - Set the "NetworkType" parameter to value "macvlan" to use this backend

No separate configuration file is required when DANM connects Pods to such networks; everything happens automatically, purely based on the network manifest! When network management is delegated to CNI plugins with static integration level, DANM first reads their configuration from the configured CNI config directory.
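For a statically integrated backend, the file DANM reads could be an ordinary CNI config such as the sketch below (the filename, bridge name, and subnet are illustrative; the "bridge" and "host-local" fields follow the upstream CNI plugin conventions, not anything DANM-specific):

```
{
  "cniVersion": "0.3.1",
  "name": "example_network",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}
```

DANM passes such a configuration through to the delegate plugin, optionally amending it as described in the IPAM chapters below.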
The directory can be configured by setting the "CNI_CONF_DIR" environment variable in DANM CNI's context (be it in the host namespace, or inside a Kubelet container). The default value is "/etc/cni/net.d". In case multiple configuration files are present for the same backend, users can control which one is used in a specific network provisioning operation via the NetworkID parameter. So, all in all: a Pod connecting to a network with "NetworkType" set to "bridge" and "NetworkID" set to "example_network" gets an interface provisioned by the /bridge binary based on the /example_network.conf file! In addition to simply delegating the interface creation operation, the universally supported features of the DANM management APIs -such as static and dynamic IP route provisioning, flexible interface naming, or centralized IPAM- are also configured either before, or after, the delegation takes place.

Pods can request network connections by defining one or more network connections in the annotation of their (template) spec field, according to the schema described in the **schema/network_attach.yaml** file. For each connection defined in such a manner DANM provisions exactly one interface into the Pod's network namespace, in the way described in previous chapters (configuration taken from the referenced API object). In case you have added more than one network management API to your cluster, it is possible to connect the same Pod to networks of different APIs. But please note that physical network interfaces are 1:1 mapped to logical networks.

In addition to simply invoking other CNI libraries to set up network connections, Pods can even influence the way their interfaces are created to a certain extent. For example, Pods can ask DANM to provision L3 IP addresses to their network interfaces dynamically, statically, or not at all!
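The three allocation schemes just mentioned can be sketched on the Pod side like this (the network name and the static address are illustrative, and we assume static IPs are given in CIDR notation; consult schema/network_attach.yaml for the authoritative format):

```
metadata:
  annotations:
    danm.io/interfaces: |
      [
        {"network":"example-net", "ip":"dynamic"},
        {"network":"example-net", "ip":"10.0.0.42/24"},
        {"network":"example-net", "ip":"none"}
      ]
```

"dynamic" asks DANM's IPAM for an address from the network's allocation pool, an explicit address requests that exact static IP, and "none" results in an L2-only interface.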
Or, as described earlier, creation of policy-based L3 IP routes in their network namespace is also universally supported by the solution. If the Pod annotation is empty (no explicit connections are defined), DANM tries to fall back to a configured default network. In the lightweight network management paradigm default networks can only be configured on a per-namespace level, by creating one DanmNet object with the ObjectMeta.Name field set to "default" in the Pod's namespace. In a production grade cluster, default networks can be configured both on the namespace and on the cluster level. If both are configured for a Pod -both a TenantNetwork named "default" in the Pod's namespace, and a ClusterNetwork named "default" exist in the cluster- the namespace-level default takes precedence. There are no restrictions as to which DANM supported attributes can be configured for a default network. However, in this case users cannot specify any further fine-grained properties for the Pod (i.e. static IP address, policy-based IP routes). This feature is beneficial for cluster operators who would like to use unmodified upstream manifest files (i.e. community maintained Helm charts or Pods created by K8s Operators), or would like to use DANM in the "vanilla K8s" way.

Regardless of which CNI plugins are involved in managing the networks of a Pod, and how they are configured, DANM invokes all of them at the same time, in parallel threads. DANM waits for the CNI result of all executors before converting and merging them into one summarized result object. The aggregated result is then sent back to kubelet. If any executor reported an error, or hasn't finished its job even after 10 seconds, the result of the whole operation is an error. DANM reports all errors towards kubelet in case multiple CNI plugins failed to do their job.

DANM includes a fully generic and very flexible IPAM module built into the solution.
The usage of this module is seamlessly integrated with all the natively supported CNI plugins (DANM's IPVLAN, Intel's SR-IOV, and the CNI project's reference MACVLAN plugin), as well as with any other CNI backend fully adhering to the v0.3.1 CNI standard! The main feature of DANM's IPAM is that it is fully integrated into DANM's network management APIs through the attributes called "cidr", "allocation_pool", "net6", and "allocation_pool_v6". Therefore users of the module can easily configure all aspects of network management by manipulating solely dynamic Kubernetes API objects!

This native integration also enables a very tempting possibility. **As IP allocations belonging to a network are dynamically tracked *within the same API object***, it becomes possible to define:
* discontinuous subnets 1:1 mapped to a logical network
* **cluster-wide usable subnets** (instead of node restricted sub-CIDRs)

Network administrators simply provision their desired CIDRs and allocation pools into the network object. Whenever a Pod is instantiated or deleted **on any host within the cluster**, DANM updates the respective allocation record belonging to the network through the Kubernetes API before provisioning the chosen IP to the Pod's interface. The flexible IPAM module also allows Pods to define the IP allocation scheme best suited for them. Pods can ask for dynamically allocated IPs from the defined allocation pool, or can ask for one specific, static address. The application can even ask DANM to forego the allocation of any IPs to its interface in case an L2 network interface is required. DANM IPAM is capable of handling 8 million -that's right!- IP allocations per network object, IPv4 and IPv6 mixed. If this is still not enough to impress you, we honestly don't know what else you might need from your IPAM!
So please come and tell us :) While using the DANM IPAM with dynamic backends is mandatory, netadmins can freely choose whether they want their static CNI backends to also be integrated with DANM's IPAM, or whether they would prefer these interfaces to be statically configured by another IPAM module. By default the "ipam" section of a static delegate is always configured from the CNI configuration file identified by the network's NetworkID parameter. However, users can overwrite this inflexible -and most of the time host-local- option by defining "cidr" and/or "net6" in their network manifest just as they would with a dynamic backend. When a Pod connects to a network with static NetworkType but containing allocation subnets, and explicitly asks for an "ip" and/or "ip6" address from DANM in its annotation, DANM overwrites the "ipam" section coming from the static config with its own, dynamically allocated address. If a Pod does not ask DANM to allocate an IP, or the network does not define the necessary parameters, the delegation automatically falls back to the "ipam" defined in the static config file. **Note**: DANM can only integrate static backends into its flexible IPAM if the CNI itself is fully compliant with the standard, i.e. uses the plugin defined in the "ipam" section of its configuration. It is the administrator's responsibility to configure the DANM management APIs according to the capabilities of every CNI!

DANM's IPAM module supports both pure IPv6, and dual-stack (one IPv4 and one IPv6 address provisioned to the same interface) addresses with full feature parity! To configure an IPv6 CIDR for a network, network administrators shall configure the "net6" attribute. Similarly to IPv4 address management, operators can define a desired allocation pool for their V6 subnet via the "allocation_pool_v6" structure. Additionally, IP routes for IPv6 subnets can be configured via "routes6".
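Putting the dual-stack attributes together, a network manifest might look like the following sketch (all names, subnets, and pool bounds are illustrative; we assume lowercase field names and a V6 pool given as a sub-CIDR of "net6", small enough to respect the 8 million allocation limit):

```
apiVersion: danm.io/v1
kind: DanmNet
metadata:
  name: dual-stack-net
spec:
  NetworkID: dual-stack-net
  NetworkType: ipvlan
  Options:
    host_device: ens4
    cidr: 10.0.0.0/24
    allocation_pool:
      start: 10.0.0.10
      end: 10.0.0.100
    net6: 2001:db8::/64
    allocation_pool_v6:
      cidr: 2001:db8::/106    # allocation sub-CIDR inside net6
```

Pods connecting to such a network can then request an IPv4 address, an IPv6 address, or both, as described below.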
If both "cidr" and "net6" are configured for the same network, Pods connecting to that network can ask for either one IPv4 or one IPv6 address - or even both at the same time! This feature is generally supported the same way even for static CNI backends! However, DANM cannot guarantee that every specific CNI plugin is compatible and comfortable with both IPv6 and dual IPs allocated by an IPAM. Therefore, it is the administrator's responsibility to configure the DANM management APIs according to the capabilities of every CNI!

DANM's IPVLAN CNI uses the Linux kernel's IPVLAN module to provision high-speed, low-latency network interfaces for applications which need better performance than a bridge (or any other overlay technology) can provide. *Keep in mind that the IPVLAN module is a fairly recent addition to the Linux kernel, so the feature cannot be used on systems whose kernel is older than 4.4! 4.14+ would be even better (lotta bug fixes)*

The CNI provisions IPVLAN interfaces in L2 mode, and supports the following extra features:
* attaching IPVLAN sub-interfaces to any host interface
* attaching IPVLAN sub-interfaces to dynamically created VLAN or VxLAN host interfaces
* renaming the created interfaces according to the "container_prefix" attribute defined in the network object
* allocating IP addresses by using DANM's flexible, in-built IPAM module
* provisioning generic IP routes into a configured routing table inside the Pod's network namespace
* Pod-level controlled provisioning of policy-based IP routes into the Pod's network namespace

DANM provides general support for CNIs interworking with Kubernetes' Device Plugin mechanism. A practical example of such a network provisioner is the SR-IOV CNI. When a properly configured Network Device Plugin runs, the allocatable resource list of the node is updated with the resources discovered by the plugin.
The SR-IOV Network Device Plugin allows creating a list of *netdevice* type resource definitions with *sriovMode*, where each resource definition can have one or more assigned *rootDevices* (Physical Functions). The plugin looks for the Virtual Functions (VFs) of each configured Physical Function (PF) and adds all discovered VFs to the allocatable resource list of the given Kubernetes Node. The Device Plugin resource name becomes the device pool name on the Node. These device pools can be referenced in the Pod definition's resource request section in the usual way. In the following example, the "nokia.k8s.io/sriov_ens1f0" device pool name consists of the "nokia.k8s.io" prefix and the "sriov_ens1f0" resourceName.
```
kubectl get nodes 172.30.101.104 -o json | jq '.status.allocatable'
{
  "cpu": "48",
  "ephemeral-storage": "48308001098",
  "hugepages-1Gi": "16Gi",
  "memory": "246963760Ki",
  "nokia.k8s.io/default": "0",
  "nokia.k8s.io/sriov_ens1f0": "8",
  "nokia.k8s.io/sriov_ens1f1": "8",
  "pods": "110"
}
```
All network management APIs contain an optional **device_pool** field where a specific device pool can be assigned to the given network. Before DANM invokes a CNI which expects a given resource to be attached to the Pod, it gathers all the Kubelet-assigned device IDs belonging to the device pool defined in the Pod's network, and passes one ID from the list to the CNI. The following example network definition shows how to configure the device_pool parameter for the sriov network type.
```
apiVersion: danm.io/v1
kind: DanmNet
metadata:
  name: sriov-a
  namespace: example-sriov
spec:
  NetworkID: sriov-a
  NetworkType: sriov
  Options:
    device_pool: "nokia.k8s.io/sriov_ens1f0"
```
The following Pod definition shows how to combine K8s Device resource requests and multiple network connections using the assigned resources:
```
apiVersion: v1
kind: Pod
metadata:
  name: sriov-pod
  namespace: example-sriov
  labels:
    env: test
  annotations:
    danm.io/interfaces: |
      [
        {"network":"management", "ip":"dynamic"},
        {"network":"sriov-a", "ip":"none"},
        {"network":"sriov-b", "ip":"none"}
      ]
spec:
  containers:
  - name: sriov-pod
    image: busybox:latest
    args:
    - sleep
    - "1000"
    resources:
      requests:
        nokia.k8s.io/sriov_ens1f0: '1'
        nokia.k8s.io/sriov_ens1f1: '1'
      limits:
        nokia.k8s.io/sriov_ens1f0: '1'
        nokia.k8s.io/sriov_ens1f1: '1'
  nodeSelector:
    sriov: enabled
```
DANM's SR-IOV integration supports -and is tested with- both Intel and Mellanox manufactured physical functions. Moreover, Pods can use the allocated Virtual Functions for either kernel, or user space networking. The only restriction to keep in mind is that when an application requests VFs from an Intel NIC for the purpose of user space networking (i.e. DPDK), those VFs shall already be bound to the vfio-pci kernel driver before the Pod is instantiated. To guarantee such VFs are always available on the Node the Pod is scheduled to, we strongly suggest advertising vfio-pci bound VFs as a separate Device Pool. When an already vfio bound function is mounted to an application, DANM also creates a dummy kernel interface in its stead in the Pod's network namespace. The dummy interface can be easily identified by the application, because it is named exactly as the VF would be, following the standard DANM interface naming conventions. The dummy interface is used to convey all the information the user space application requires to start its own networking stack in a standardized manner.
The list includes:
- the IPAM details belonging to the user space device, such as IP addresses, IP routes etc.
- the VLAN tag of the VF, if any
- the PCI address of the specific device -as a link alias- so applications know which IPs/VLANs belong to which user space device
- the original MAC address of the VF

User space applications can interrogate this information via the usual kernel APIs, and then configure the allocated resources into their own network stack without needing to request any extra kernel privileges!

The Webhook component introduced in DANM V4 is responsible for three things:
- it initializes essential, but not human configurable API attributes (i.e. allocation tracking bitmasks) at the time of object creation
- it matches, and connects, TenantNetworks to administrator configured physical profiles allowed for tenant users
- it validates the syntactic and semantic integrity of all API objects before any CREATE, or PUT REST operation is allowed to be persisted in the K8s API server's data store

TenantNetworks cannot freely define the following attributes:
- host_devices
- device_pool
- vlan
- vxlan
- NetworkID

The reason is that all these attributes are related to physical resources, which the specific tenants might not be allowed to use: VLANs might not be configured in the switches, specific NICs are reserved for infrastructure use, static CNI configuration files might not exist on the container host's disk etc. Instead, these parameters are either entirely, or partially, managed by DANM at TenantNetwork provisioning time. DANM does this by introducing a third new API with v4.0 called **TenantConfig**. TenantConfig is a mandatory API when DANM is used in the production grade mode. TenantConfig is a cluster-wide API containing two major parameters: physical interface profiles usable by TenantNetworks, and NetworkType:NetworkID mappings.
Refer to the [TenantConfig schema](https://github.com/nokia/danm/tree/master/schema/TenantConfig.yaml) for more information on TenantConfigs. There are multiple ways in which DANM can select the appropriate interface profile for a tenant user's network. Note: physical interface profiles are only relevant for dynamic backends.

For backends dependent on the host_device option (such as IPVLAN, and MACVLAN):
- if the TenantNetwork contains the host_device attribute, DANM selects the entry from the TenantConfig with the matching name
- if host_device is not provided by the user, DANM randomly selects an interface profile from the TenantConfig

For backends dependent on the device_pool option (such as SR-IOV), the user needs to explicitly state which device_pool it wants to use. The reasoning behind not supporting random profile selection for K8s Devices based backends is that a Pod using such Devices needs to explicitly request resources from a specific pool in its own Pod manifest anyway. Randomly matching its network with a possibly different pool could result in run-time failures.

If there are no suitable physical interface profiles configured by the cluster's network administrator, or the TenantNetwork tried to select a physical device which is not allowed, the webhook denies the creation of the TenantNetwork. If a suitable profile could be selected, DANM:
- mutates the physical interface profile's name into either the TenantNetwork's host_device, or device_pool attribute (DANM automatically figures out which one based on the name of the profile, and the NetworkType parameter)
- if the interface profile is a virtual profile, DANM automatically reserves the next previously unused VNI from the configured VNI range
- then mutates the reserved VNI into the TenantNetwork's respective attribute (vlan, or vxlan)

To avoid leaking VNIs in the cluster, DANM also takes care of freeing the reserved VNI of a TenantNetwork when it is deleted.
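A TenantConfig combining the two parameter groups described above might be sketched as follows; the field names reflect our reading of the linked schema, and the device name, VNI range, and mapping value are purely illustrative:

```
apiVersion: danm.io/v1
kind: TenantConfig
metadata:
  name: tenant-config
hostDevices:
  # physical interface profile usable by TenantNetworks,
  # with a VNI range DANM can hand out VxLAN IDs from
- name: ens4
  vniType: vxlan
  vniRange: "700-999"
networkIds:
  # NetworkType -> NetworkID mapping for a static backend
  flannel: flannel_conf
```

Consult the TenantConfig schema for the authoritative field list before relying on this sketch.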
Delegation to backends with static integration level (e.g. Calico, Flannel etc.) is configured via static CNI config files read from the container host's disk. These files are selected based on the NetworkType parameter of the TenantNetwork. Network administrators can configure NetworkType: NetworkID mappings in the TenantConfig. When a TenantNetwork is created with a NetworkType having a configured mapping, DANM automatically overwrites its NetworkID with the provided value. Thus it becomes guaranteed that the tenant user's network will use the right CNI configuration file during Pod creation!

Every CREATE, and ~~PUT~~ (see [https://github.com/nokia/danm/issues/144](https://github.com/nokia/danm/issues/144)) DanmNet operation is subject to the following validation rules:
1. spec.Options.Cidr must be supplied in a valid IPv4 CIDR notation
2. all gateway addresses belonging to an entry of spec.Options.Routes shall be in the defined IPv4 CIDR
3. spec.Options.Net6 must be supplied in a valid IPv6 CIDR notation
4. all gateway addresses belonging to an entry of spec.Options.Routes6 shall be in the defined IPv6 CIDR
5. spec.Options.Alloc shall not be manually defined
6. spec.Options.Alloc6 shall not be manually defined
7. spec.Options.Allocation_pool cannot be defined without defining spec.Options.Cidr
8. spec.Options.Allocation_pool.Start shall be in the provided IPv4 CIDR
9. spec.Options.Allocation_pool.End shall be in the provided IPv4 CIDR
10. spec.Options.Allocation_pool.Start shall be smaller than spec.Options.Allocation_pool.End
11. spec.Options.Allocation_pool_V6 cannot be defined without defining spec.Options.Net6
12. spec.Options.Allocation_pool_V6.Start shall be in the provided IPv6 CIDR
13. spec.Options.Allocation_pool_V6.End shall be in the provided IPv6 CIDR
14. spec.Options.Allocation_pool_V6.Start shall be smaller than spec.Options.Allocation_pool_V6.End
15. spec.Options.Allocation_pool_V6.Cidr must be supplied in a valid IPv6 CIDR notation, and must be in the provided IPv6 CIDR
16. the combined number of allocatable IP addresses of the manually provided IPv4 and IPv6 allocation CIDRs cannot be higher than 8 million
17. spec.Options.Vlan and spec.Options.Vxlan cannot be provided together
18. spec.NetworkID cannot be longer than 10 characters for dynamic backends
19. spec.AllowedTenants is not a valid parameter for this API type
20. spec.Options.Device_pool must be, and spec.Options.Host_device mustn't be, provided for K8s Devices based networks (such as SR-IOV)
21. none of the spec.Options.Device, spec.Options.Vlan, or spec.Options.Vxlan attributes can be changed if there are any Pods currently connected to the network

Every DELETE DanmNet operation is subject to the following validation rule:

22. the network cannot be deleted if there are any Pods currently connected to the network

Not complying with any of these rules results in denial of the provisioning operation.

Every CREATE, and ~~PUT~~ (see [https://github.com/nokia/danm/issues/144](https://github.com/nokia/danm/issues/144)) TenantNetwork operation is subject to DanmNet validation rules no. 1-16, 18, and 19. In addition, TenantNetwork provisioning has the following extra rules:
1. spec.Options.Vlan cannot be provided
2. spec.Options.Vxlan cannot be provided
3. spec.Options.Vlan cannot be modified
4. spec.Options.Vxlan cannot be modified
5. spec.Options.Host_device cannot be modified
6. spec.Options.Device_pool cannot be modified

Every DELETE TenantNetwork operation is subject to DanmNet validation rule no. 22. Not complying with any of these rules results in denial of the provisioning operation.

Every CREATE, and ~~PUT~~ (see [https://github.com/nokia/danm/issues/144](https://github.com/nokia/danm/issues/144)) ClusterNetwork operation is subject to DanmNet validation rules no. 1-18, and 20-21.
Every DELETE ClusterNetwork operation is subject to DanmNet validation rule no. 22. Not complying with any of these rules results in denial of the provisioning operation.

Every CREATE, and PUT TenantConfig operation is subject to the following validation rules:
1. either HostDevices, or NetworkIDs must not be empty
2. VniType and VniRange must be defined together for every HostDevices entry
3. both the key, and the value must not be empty in every NetworkType: NetworkID mapping entry
4. a NetworkID cannot be longer than 10 characters in a NetworkType: NetworkID mapping belonging to a dynamic NetworkType

Netwatcher is a standalone Network Operator responsible for dynamically managing (i.e. creating and deleting) VxLAN and VLAN interfaces on all hosts, based on the dynamic network management K8s APIs. Netwatcher is a mandatory component of the DANM networking suite, but can also be a great standalone addition to Multus, or any other NetworkAttachmentDefinition driven K8s cluster! When netwatcher is deployed it runs as a DaemonSet, brought up on all hosts where a meta CNI plugin is configured. Whenever a DANM network is created, modified, or deleted -any network, belonging to any of the supported API types- within the Kubernetes cluster, netwatcher is triggered. If the network in question contains either the "vxlan", or the "vlan" attribute, netwatcher immediately creates, or deletes, the VLAN or VxLAN host interface with the matching VID. If the Spec.Options.host_device, .vlan, or .vxlan attributes are modified, netwatcher first deletes the old, and then creates the new host interface. This feature is most beneficial when used together with a dynamic network provisioning backend supporting connecting Pod interfaces to virtual host devices (IPVLAN, MACVLAN; SR-IOV for VLANs).
Whenever a Pod is connected to such a network containing a virtual network identifier, the CNI component automatically connects the created interface to the VxLAN or VLAN host interface created by netwatcher, instead of directly connecting it to the configured host device.

But wait, that's not all - netwatcher is an API agnostic standalone Operator! This means all of its supported features can be used even in clusters where DANM is not the configured meta CNI solution! If your cluster uses a CNI solution driven by the NetworkAttachmentDefinition API -such as Multus, or Genie- you can deploy netwatcher as-is to automate various network management operations of TelCo workloads. Whenever you deploy a NAD, netwatcher inspects the CNI config portion stored under Spec.Config. If a VLAN, or VxLAN identifier is added to a CNI configuration, it triggers netwatcher to create the necessary host interfaces, exactly the same way as if these attributes were added to a DANM API object. For example, if you want your IPVLAN type NAD to be connected to a specific VLAN, just add the tag to your object the following way:
```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-conf
spec:
  config: '{
      "name": "vlantest",
      "cniVersion": "0.3.1",
      "type": "ipvlan",
      "master": "tenant-bond",
      "vlan": 500,
      "ipam": {
        "type": "static",
        "routes": [
          { "dst": "0.0.0.0/0", "gw": "10.1.1.1" }
        ]
      }
    }'
```
When it comes to dealing with NADs, netwatcher understands that these extra tags are not recognized by the existing CNI eco-system. So to achieve E2E automation, netwatcher also modifies the CNI configuration of the NAD to point to the right host interface! Let's use the above example to show how this works!
First, upon seeing this network, netwatcher creates the appropriate host interface with the tag:
```
568: vlantest.500@tenant-bond: mtu 9000 qdisc noqueue state UP mode DEFAULT group default
```
Then it also initiates an Update operation on the NAD, exchanging the old host interface reference for the correct one:
```
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  ...
  - apiVersion: k8s.cni.cncf.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:config: {}
    manager: netwatcher
    operation: Update
    time: "2021-03-01T17:21:11Z"
  name: ipvlan-conf
  namespace: default
spec:
  config: '{"cniVersion":"0.3.1","ipam":{"routes":[{"dst":"0.0.0.0/0","gw":"10.1.1.1"}],"type":"static"},"master":"vlantest.500","name":"vlantest","type":"ipvlan","vlan":500}'
```
This approach ensures users can seamlessly integrate netwatcher into their existing clusters and enjoy its extra capabilities without any extra hassle - just the way we like it!

The svcwatcher component showcases the whole reason why DANM exists, and why it is designed the way it is. It is the first higher-level feature accomplishing our true goal described in the introduction section: extending basic Kubernetes constructs to seamlessly work with multiple network interfaces. The first such construct is the Kubernetes Service! Let's see how it works. Svcwatcher basically works the same way as the default Service controller inside Kubernetes. It continuously monitors both the Service and the Pod APIs, and provisions Endpoints whenever the cluster administrator creates, updates, or deletes relevant API objects (e.g. creates a new Service, updates a Pod label etc.). DANM svcwatcher does the same, and more! The default Service controller assumes the Pod has one interface, so whenever a logical Service Endpoint is created, it is always done with the IP of the Pod's first (the infamous "eth0" in Kubernetes), and supposedly only, network interface.
DANM svcwatcher on the other hand makes this behaviour configurable! DANM enhances the same Service API so that an object always explicitly selects one logical network, rather than implicitly choosing the one with the hard-coded name of "eth0". Then, svcwatcher provisions a Service Endpoint with the address of the selected Pod's chosen network interface. This enhancement basically upgrades the in-built Kubernetes Service Discovery concept to work over multiple network interfaces, making Service Discovery return only truly relevant Endpoints in every scenario! The services of the svcwatcher component work with all supported network management APIs!

Based on the feature description, experienced Kubernetes users are probably already thinking: "but wait, there is no network selector field in the Kubernetes Service core API". That is indeed true right now, but consider the core concept behind the creation of DANM: "what use-cases would become possible if Networks were part of the core Kubernetes API?" So we went ahead and simulated exactly this scenario, while making sure our solution also works with a vanilla Kubernetes today, just as we did with all our other API enhancements. This is possible by leveraging the so-called "headless and selectorless Services" concept of Kubernetes. Headless plus selectorless Services do not contain a Pod selector field, which tells the Kubernetes native Service controller that Endpoint administration is handled by a 3rd party service. DANM svcwatcher is triggered when such a Service is created, if it contains the DANM "core API" attributes in its annotation. These extra attributes are the following:
- "danm.io/selector": this selector serves the exact same purpose as the default Pod selector field (which is missing from a selectorless Service by definition). Endpoints are created for Pods which match all labels provided in this list
- "danm.io/network": this is the "special sauce" of DANM. When svcwatcher creates an Endpoint, its IP is taken from the selected Pod's physical interface connected to the DanmNet with the matching name
- "danm.io/tenantNetwork": serves the exact same purpose as the network selector, but it selects interfaces connected to TenantNetworks, rather than DanmNets
- "danm.io/clusterNetwork": serves the exact same purpose as the network selector, but it selects interfaces connected to ClusterNetworks, rather than DanmNets

This means that DANM controlled Services behave exactly as in Kubernetes: a selected Pod's availability is advertised through one of its network interfaces. The big difference is that operators can now decide through which interface(s) they want the Pod to be discoverable! (Of course nothing forbids the creation of multiple Services selecting different interfaces of the same Pod, in case a Pod should be discoverable by different kinds of communication partners.) The schema of the enhanced, DANM-compatible Service object is described in detail in the **schema/DanmService.yaml** file.

Why is this feature useful, the reader might ask? The answer depends on the use-case your application serves. If you share one cloud-wide network between all application and infrastructure components, and everyone communicates with everyone through this -most probably overlay- network, then you are probably not excited by DANM's svcwatcher. However, if you believe in physically separated interfaces (or certain government organizations made you believe in them), non-default networks, or multi-domain gateway components, then this is the feature you probably already built into your application's Helm chart in the form of an extra Consul, or Etcd component. This duplication of platform responsibility ends today! :)

Allow us to demonstrate the usage of this feature via an every-day, TelCo inspired example located in the project's example/svcwatcher_demo directory.
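A headless, selectorless Service carrying the annotations described above might be sketched as follows (the Service name, label selector, network name, and port are illustrative; see schema/DanmService.yaml for the authoritative format):

```
apiVersion: v1
kind: Service
metadata:
  name: vnf-internal-lb
  namespace: vnf
  annotations:
    danm.io/selector: '{"app": "loadbalancer"}'
    danm.io/network: internal
spec:
  clusterIP: None   # headless; no spec.selector, so Endpoints are managed by svcwatcher
  ports:
  - port: 3868
    protocol: TCP
```

Because the Service is both headless and selectorless, the native controller leaves it alone, and svcwatcher populates its Endpoints from the selected Pods' "internal" network interfaces.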
The example contains three Pods running in the same cluster: - A LoadBalancer Pod, whose job is to accept connections over any exotic but widely used non-L7 protocols (e.g. DIAMETER, LDAP, SIP, SIGTRAN etc.), and distribute the workload to backend services - An ExternalClient Pod, supplying the LoadBalancer with traffic through an external network - An InternalProcessor Pod, receiving requests to be served from the LoadBalancer Pod Our cluster contains three physical networks: external, internal, management. LoadBalancer connects to all three, because it needs to be able to establish connections to entities both supplying, and serving traffic. LoadBalancer also wishes to be scaled via Prometheus, hence it connects to the cluster's management network to expose its own \"packet_served_per_second\" custom metric. ExternalClient only connects to the LoadBalancer Pod, because it simply wants to send traffic to the application (VNF), and deal with the result of transactions. It doesn't care, or know anything about the internal architecture of the application (VNF). Because ExternalClient is not part of the same application (namespace) as LoadBalancer and InternalProcessor, it can't have access to their internal network. It doesn't require scaling, being a lightweight, non-critical component, therefore it also does not connect to the cluster's management network. InternalProcessor only connects to the LoadBalancer Pod, but being a small, dynamically changing component, we don't want to expose it to external clients. InternalProcessor wants to have access to the many network-based features of Kubernetes, so it also connects to the management network, similarly to LoadBalancer. With DANM, the answer is as simple as instantiating the demonstration Kubernetes manifest files in the following order: Namespaces -> DanmNets -> Deployments -> Services \"vnf-internal-processor\" will make the InternalProcessors discoverable through their application-internal network interface. 
LoadBalancers can use this Service to discover working backends serving transactions. \"vnf-internal-lb\" will make the LoadBalancers discoverable through their application-internal network interface. InternalProcessors can use this Service to discover application egress points/gateway components. Lastly, \"vnf-external-svc\" makes the same LoadBalancer instances discoverable but this time through their external network interfaces. External clients connecting to the same network can use this Service to find the ingress/gateway interfaces of the whole application (VNF)! As a closing note: remember to delete the now unnecessary Service Discovery tool's Deployment manifest from your Helm chart :)"
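A minimal sketch of such a headless and selectorless Service follows; the label map, namespace, port, and the exact annotation value formats are illustrative assumptions — the authoritative definition is the **schema/DanmService.yaml** file in the repository:

```yaml
# Hypothetical DANM-compatible Service.
# clusterIP: None makes it headless; the absent spec.selector makes it
# selectorless, so the native Endpoint controller leaves Endpoint
# administration to DANM svcwatcher.
apiVersion: v1
kind: Service
metadata:
  name: vnf-internal-processor
  namespace: vnf                                      # hypothetical namespace
  annotations:
    danm.io/selector: '{"app":"internal-processor"}'  # plays the role of spec.selector (format assumed)
    danm.io/network: internal                         # Endpoint IP taken from the interface on this DanmNet
spec:
  clusterIP: None
  ports:
  - protocol: TCP
    port: 8080
```

Creating an analogous Service with `danm.io/network: external` instead would advertise the same Pods through their external interfaces, which is exactly what the \"vnf-external-svc\" example does.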
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "bug_report.md"
- },
- "content": [
- {
- "heading": "Describe the bug",
- "data": "A clear and concise description of what the bug is."
- },
- {
- "heading": "To Reproduce",
- "data": "Steps to reproduce the behavior:\n 1. Go to '...'\n 2. Click on '....'\n 3. Scroll down to '....'\n 4. See error"
- },
- {
- "heading": "Expected behavior",
- "data": "A clear and concise description of what you expected to happen."
- },
- {
- "heading": "Screenshots",
- "data": "If applicable, add screenshots to help explain your problem."
- },
- {
- "heading": "Desktop (please complete the following information):",
- "data": "- OS: [e.g. iOS]\n - Browser [e.g. chrome, safari]\n - Version [e.g. 22]"
- },
- {
- "heading": "Smartphone (please complete the following information):",
- "data": "- Device: [e.g. iPhone6]\n - OS: [e.g. iOS8.1]\n - Browser [e.g. stock browser, safari]\n - Version [e.g. 22]"
- },
- {
- "heading": "Additional context",
- "data": "Add any other context about the problem here."
- },
- {
- "additional_info": "--- name: Bug report about: Create a report to help us improve title: '' labels: '' assignees: '' --- A clear and concise description of what the bug is. Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See error A clear and concise description of what you expected to happen. If applicable, add screenshots to help explain your problem. - OS: [e.g. iOS] - Browser [e.g. chrome, safari] - Version [e.g. 22] - Device: [e.g. iPhone6] - OS: [e.g. iOS8.1] - Browser [e.g. stock browser, safari] - Version [e.g. 22] Add any other context about the problem here."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "CHANGELOG-0.2.md"
- },
- "content": [
- {
- "heading": "FabEdge V0.2",
- "data": ""
- },
- {
- "heading": "New Features",
- "data": "1. One-click deployment of K8S+KubeEdge\n FabEdge is a network solution for edge containers, and using it requires a K8S+KubeEdge cluster. However, deploying K8S+KubeEdge is fairly complex, which raised the barrier to adopting FabEdge. This release introduces one-click deployment of K8S+KubeEdge so users can get started quickly.\n 1. Automatic certificate management\n Strongswan is open-source IPSec VPN management software which FabEdge uses under the hood to manage tunnels. For security, it authenticates edge nodes with certificates, but issuing a certificate to every edge node is a tedious and error-prone process. The Operator now manages certificates automatically, issuing one whenever a node comes online, which greatly reduces the operational workload.\n \n 1. Installation via Helm\n FabEdge consists of multiple components with fairly complex configuration. FabEdge is now managed with Helm (the package manager for Kubernetes), which simplifies installation and deployment."
- },
- {
- "heading": "Other Updates",
- "data": "1. Support IPSec NAT-T A public address (public_addresses) can now be configured for the cloud-side connector, covering scenarios where public clouds use floating IPs or private clouds use firewall address mapping. 1. Improved the connector's iptables rules The connector now configures iptables rules automatically to allow IPSec traffic (ESP, UDP 500/4500). 1. Added an enable-proxy switch For scenarios that run the native kube-proxy on edge nodes, FabEdge's own proxy implementation can be turned off."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "CHANGELOG-0.3.md"
- },
- "content": [
- {
- "heading": "FabEdge V0.3",
- "data": ""
- },
- {
- "heading": "New Features",
- "data": "1. Support Flannel as the cloud cluster's network plugin\n [Flannel](https://github.com/flannel-io/flannel) is simple, easy to use, and widely deployed; this release adds support for it. The plugins FabEdge supports so far are Calico and Flannel.\n 1. Support SuperEdge\n [SuperEdge](https://github.com/superedge/superedge/blob/main/README_CN.md) is a Kubernetes-native edge container solution which extends Kubernetes' powerful container management capabilities to edge computing scenarios and addresses the common technical challenges found there. This release adds support for SuperEdge.\n \n 1. Support OpenYurt\n [OpenYurt](https://openyurt.io/) is a [sandbox project](https://www.cncf.io/sandbox-projects/) hosted by the Cloud Native Computing Foundation (CNCF). It is built on native Kubernetes and aims to extend Kubernetes to seamlessly support edge computing scenarios. This release adds support for OpenYurt."
- },
- {
- "heading": "Other Updates",
- "data": "1. Automatic detection of the cloud-side Pod CIDR The Operator now detects the cloud cluster's Pod CIDR automatically, so users no longer need to enter it manually. 1. Support user-defined edge node labels Users can customize the set of labels used to identify the edge nodes managed by FabEdge."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "CHANGELOG-0.4.0.md"
- },
- "content": [
- {
- "heading": "FabEdge V0.4",
- "data": "[toc]"
- },
- {
- "heading": "New Features",
- "data": "1. Support multi-cluster communication\n Pods/Services can now be accessed directly across clusters.\n 1. Support the ARM architecture\n ARM32/64 is supported, as are heterogeneous environments, i.e. a single environment containing multiple hardware architectures (X86/ARM32/ARM64)."
- },
- {
- "heading": "Other Updates",
- "data": "1. Support manually specifying public addresses for edge nodes Users can annotate an edge node with a public address, which is used to establish edge-to-edge tunnels."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "CHANGELOG-0.5.0.md"
- },
- "content": [
- {
- "heading": "FabEdge V0.5.0",
- "data": "[toc]"
- },
- {
- "heading": "New Features",
- "data": "1. Support multi-cluster service discovery\n Topology-aware cross-cluster service access that is transparent to applications."
- },
- {
- "heading": "Other Updates",
- "data": "1. Fixed some bugs 2. Improved the configuration comparison logic"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "CHANGELOG-0.6.0.md"
- },
- "content": [
- {
- "heading": "FabEdge V0.6.0",
- "data": "[toc]"
- },
- {
- "heading": "New Features",
- "data": "1. Support automatic networking between edge nodes on the same LAN, without any Community configuration\n 2. Support dual-stack networking (Flannel only)\n 3. More flexible agent argument configuration"
- },
- {
- "heading": "Other Updates",
- "data": "1. Fixed some bugs 2. Improved the cleanup and re-establishment of inactive strongswan connections"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "CHANGELOG-0.7.0.md"
- },
- "content": [
- {
- "heading": "FabEdge V0.7.0",
- "data": "[toc]"
- },
- {
- "heading": "New features",
- "data": "1. Change the naming strategy of fabedge-agent pods;\n 2. Add commonName validation for fabedge-agent certificates;\n 3. Implement node-specific configuration of fabedge-agent arguments;\n 4. Let fabedge-agent configure the sysctl parameters it needs;\n 5. Let fabedge-operator manage calico ippools for CIDRs;"
- },
- {
- "heading": "Bug fixes",
- "data": "1. Fix wrong service port mapping of fab-proxy;"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "CHANGELOG-0.8.0.md"
- },
- "content": [
- {
- "heading": "FabEdge V0.8.0",
- "data": "[toc]"
- },
- {
- "heading": "New features",
- "data": "1. Integrate coredns and kube-proxy into fabedge-agent and remove the fab-proxy component; 2. Allow users to set the strongswan port on the connector; 3. Implement a hole-punching feature which helps edge nodes behind NAT communicate with each other;"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "CHANGELOG-0.8.1.md"
- },
- "content": [
- {
- "heading": "FabEdge V0.8.1",
- "data": "1. Fix the infinitely generated route table 220 mentioned in [issue #386](https://github.com/FabEdge/fabedge/issues/386); 2. Use iptables-wrapper in the images of fabedge-agent, fabedge-connector and fabedge-cloud-agent; 3. Improve the startup process of fabedge-cloud-agent."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "CHANGELOG-1.0.0.md"
- },
- "content": [
- {
- "heading": "FabEdge V1.0.0",
- "data": "Added: 1. Connector HA is implemented; 2. More calico modes are supported; 3. Flannel host-gw mode is supported; Fixed: 1. Fix the bug that nodePort services don't work on the cloud side; 2. Fix the bug that cloud-agent lost its connections to the connector after a connector reboot; 3. Fix the bug that fabedge-agent can't initialize tunnels if the strongswan container reboots;"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "CODE_OF_CONDUCT.md"
- },
- "content": [
- {
- "heading": "FabEdge Community Code of Conduct",
- "data": "We follow the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [fabedge@beyondcent.com](mailto:fabedge@beyondcent.com)."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "CONTRIBUTING.md"
- },
- "content": [
- {
- "heading": "Contributing Guide",
- "data": "- Before you get started\n - Code of Conduct\n - Getting started\n - Contributor Workflow\n - Creating Pull Requests\n - Code Review\n - Testing"
- },
- {
- "heading": "Before you get started",
- "data": ""
- },
- {
- "heading": "Code of Conduct",
- "data": "Please make sure to read and observe our [Code of Conduct](https://github.com/FabEdge/fabedge/blob/main/CODE_OF_CONDUCT.md)."
- },
- {
- "heading": "Getting started",
- "data": "- Fork the repository on GitHub\n - Read the [docs](https://github.com/FabEdge/fabedge/tree/main/docs) for deployment."
- },
- {
- "heading": "Your First Contribution",
- "data": "We will help you to contribute in different areas like filing issues, developing features, fixing bugs and getting your work reviewed and merged."
- },
- {
- "heading": "Contributor Workflow",
- "data": "Please do not ever hesitate to ask a question or send a pull request.\n This is a rough outline of what a contributor's workflow looks like:\n - Create a topic branch to base the contribution on. This is usually master.\n - Make commits of logical units.\n - Make sure commit messages are in the proper format (see below).\n - Push changes in a topic branch to a personal fork of the repository.\n - Submit a pull request.\n - The PR must receive an approval from the maintainers."
- },
- {
- "heading": "Creating Pull Requests",
- "data": "FabEdge generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process."
- },
- {
- "heading": "Code Review",
- "data": "To make it easier for your PR to receive reviews, consider that reviewers will need you to:\n - follow [good coding guidelines](https://github.com/golang/go/wiki/CodeReviewComments).\n - write [good commit messages](https://chris.beams.io/posts/git-commit/).\n - break large changes into a logical series of smaller patches which individually make easily understandable changes, and in aggregate solve a broader issue."
- },
- {
- "heading": "Format of the commit message",
- "data": "We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why.\n The format can be described more formally as follows:\n The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools.\n Note: if your pull request isn't getting enough attention, you can reach out on Slack to get help finding reviewers."
- },
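The commit-message example embedded later in this file illustrates the convention (subject line, blank line, body explaining the why, and an issue reference):

```
agent: add test codes for manager

this add some unit test codes to improve code coverage for agent

Fixes #666
```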
- {
- "heading": "Testing",
- "data": "There are multiple types of tests. The location of the test code varies with type, as do the specifics of the environment needed to successfully run the test: - Unit: These confirm that a particular function behaves as intended. Unit test source code can be found adjacent to the corresponding source code within a given package. These are easily run locally by any developer. - Integration: These tests cover interactions of package components or interactions between components and Kubernetes control plane components like API server. - End-to-end (\"e2e\"): These are broad tests of overall system behavior and coherence. Continuous integration will run these tests on PRs."
- },
- {
- "additional_info": "- Before you get started - Code of Conduct - Getting started - Contributor Workflow - Creating Pull Requests - Code Review - Please make sure to read and observe our [Code of Conduct](https://github.com/FabEdge/fabedge/blob/main/CODE_OF_CONDUCT.md). - Fork the repository on GitHub - Read the [docs](https://github.com/FabEdge/fabedge/tree/main/docs) for deployment. We will help you to contribute in different areas like filing issues, developing features, fixing bugs and getting your work reviewed and merged. Please do not ever hesitate to ask a question or send a pull request. This is a rough outline of what a contributor's workflow looks like: - Create a topic branch from where to base the contribution. This is usually master. - Make commits of logical units. - Make sure commit messages are in the proper format (see below). - Push changes in a topic branch to a personal fork of the repository. - Submit a pull request - The PR must receive an approval from maintainers. FabEdge generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process. To make it easier for your PR to receive reviews, consider the reviewers will need you to: - follow [good coding guidelines](https://github.com/golang/go/wiki/CodeReviewComments). - write [good commit messages](https://chris.beams.io/posts/git-commit/). - break large changes into a logical series of smaller patches which individually make easily understandable changes, and in aggregate solve a broader issue. We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why. 
``` agent: add test codes for manager this add some unit test codes to improve code coverage for agent Fixes #666 ``` The format can be described more formally as follows: ``` : ``` The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools. Note: if your pull request isn't getting enough attention, you can use the reach out on Slack to get help finding reviewers. There are multiple types of tests. The location of the test code varies with type, as do the specifics of the environment needed to successfully run the test: - Unit: These confirm that a particular function behaves as intended. Unit test source code can be found adjacent to the corresponding source code within a given package. These are easily run locally by any developer. - Integration: These tests cover interactions of package components or interactions between components and Kubernetes control plane components like API server. - End-to-end (\"e2e\"): These are broad tests of overall system behavior and coherence. Continuous integration will run these tests on PRs."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "deploy-ha.md"
- },
- "content": [
- {
- "heading": "Deploy HA FabEdge",
- "data": "FabEdge has implemented HA in v1.0.0 and this article will show you how to deploy HA FabEdge.\n *PS: For how to configure edge frameworks and DNS, please check out [Get Started](./get-started.md); we won't repeat it here.*"
- },
- {
- "heading": "Environment",
- "data": "- Kubernetes v1.22.5\n - Flannel v0.19.2\n - KubeEdge 1.12.2\n - Helm3\n \n Nodes:\n harry, node1 and node2 are cloud nodes and connect to a gateway whose public address is 10.40.10.180; edge1 and edge4 are located in their own networks."
- },
- {
- "heading": "Deploy",
- "data": "1. Add the helm chart repo:\n 2. Get network information from the cluster:\n 3. Execute quickstart.sh\n Parameter description:\n - connector-public-addresses: the public address of the connectors; it must be reachable from the edge nodes.\n - connector-public-port and connector-as-mediator: neither is required for an HA deployment; they appear here because this particular environment needs them.\n - enable-keepalived: whether to use the built-in keepalived; it is enabled in this example.\n - keepalived-vip: the virtual IP for the connector; it should be an internal IP.\n - keepalived-interface: the interface the virtual IP is assigned to; make sure the interface has the same name on all connector nodes.\n - keepalived-router-id: maps to keepalived's virtual_router_id, which identifies different VRRP instances; not required.\n 4. Check whether FabEdge is deployed successfully:\n There are two connector pods running; both try to acquire the connector lease. The one holding the lease functions as the connector, while the other functions as a cloud-agent until it acquires the lease."
- },
- {
- "heading": "Manually Deploy",
- "data": "HA FabEdge can also be deployed manually; please read [Manually Deploy](./manually-install.md) first. We won't repeat those steps here, just provide an example of values.yaml:"
- },
- {
- "additional_info": "FabEdge has implemented HA since v1.0.0, and this article shows how to deploy HA FabEdge. *PS: For how to configure edge frameworks and DNS, please check out [Get Started](./get-started.md); we won't repeat it here.* - Kubernetes v1.22.5 - Flannel v0.19.2 - KubeEdge 1.12.2 - Helm3 Nodes: ```shell NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME edge1 Ready agent,edge 6d21h v1.22.6-kubeedge-v1.12.2 10.22.53.116 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://20.10.21 edge4 Ready agent,edge 6d22h v1.22.6-kubeedge-v1.12.2 10.40.30.110 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 harry Ready control-plane,master 20d v1.22.5 192.168.1.5 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 node1 Ready 20d v1.22.6 192.168.1.6 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 node2 Ready 135m v1.22.5 192.168.1.7 Ubuntu 20.04.6 LTS 5.4.0-166-generic docker://24.0.5 ``` harry, node1 and node2 are cloud nodes behind a gateway whose public address is 10.40.10.180; edge1 and edge4 are each located in their own networks. 1. Add the helm chart repo: ```shell helm repo add fabedge https://fabedge.github.io/helm-chart ``` 2. Get network information from the cluster: ``` curl -s https://fabedge.github.io/helm-chart/scripts/get_cluster_info.sh | bash - This may take some time. Please wait. clusterDNS : clusterDomain : kubernetes cluster-cidr : 10.233.64.0/18 service-cluster-ip-range : 10.233.0.0/18 ``` 3. Execute quickstart.sh ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name harry \\ --cluster-region harry \\ --cluster-zone harry \\ --cluster-role host \\ --connectors node1,node2 \\ --edges edge1,edge4 \\ --connector-public-addresses 10.40.10.180 \\ --connector-public-port 45000 \\ --connector-as-mediator true \\ --enable-keepalived true \\ --keepalived-vip 192.168.1.200 \\ --keepalived-interface enp0s3 \\ --keepalived-router-id 51 \\ --chart fabedge/fabedge ``` Parameter description: - connector-public-addresses: the public address of the connectors; it should be accessible from edge nodes. - connector-public-port and connector-as-mediator: neither is required for an HA deployment; they are used here only because this environment needs them. - enable-keepalived: whether to use the built-in keepalived; it is enabled in this example. - keepalived-vip: the virtual IP for the connector; it must be an internal IP. - keepalived-interface: the interface used to carry the virtual IP; make sure the interfaces used on all connector nodes share the same name. - keepalived-router-id: same as keepalived's virtual_router_id, used to distinguish VRRP instances; optional. 4. Check whether FabEdge is deployed successfully: ```shell root@harry:~/fabedge# kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME edge1 Ready agent,edge 7d v1.22.6-kubeedge-v1.12.2 10.22.53.116 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://20.10.21 edge4 Ready agent,edge 7d1h v1.22.6-kubeedge-v1.12.2 10.40.30.110 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 harry Ready control-plane,master 21d v1.22.5 192.168.1.5 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 node1 Ready connector 21d v1.22.6 192.168.1.6 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 node2 Ready connector 4h55m v1.22.5 192.168.1.7 Ubuntu 20.04.6 LTS 5.4.0-166-generic docker://24.0.5 root@harry:~/fabedge# kubectl get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fabedge-agent-55fnj 2/2 Running 0 7m14s 10.22.53.116 edge1 fabedge-agent-vvwdz 2/2 Running 0 7m14s 10.40.30.110 edge4 fabedge-cloud-agent-rcwqk 1/1 Running 0 7m16s 192.168.1.5 harry fabedge-connector-7b659c4cd-475l2 3/3 Running 0 7m14s 192.168.1.7 node2 fabedge-connector-7b659c4cd-6tj2c 3/3 Running 0 7m14s 192.168.1.6 node1 fabedge-operator-5f4c5b5ffd-6cghs 1/1 Running 0 7m16s 10.233.66.21 node2 root@harry:~/fabedge# kubectl get lease NAME HOLDER AGE connector node2 7m43s ``` There are two connector pods running; both try to acquire the connector lease, and the one holding the lease functions as the connector, while the other functions as a cloud-agent until it acquires the lease. From the output above, node2's connector pod holds the connector lease. We can also deploy HA FabEdge manually; please read [Manual Deployment](./manually-install.md) first. Since the steps are the same, we only provide an example values.yaml here: ```yaml cluster: name: harry role: host region: harry zone: harry cniType: \"flannel\" clusterCIDR: - 10.233.64.0/18 connectorPublicAddresses: - 10.40.10.180 connectorPublicPort: 45000 connectorAsMediator: true serviceClusterIPRange: - 10.233.0.0/18 connector: replicas: 2 keepalived: create: true interface: enp0s3 routerID: 51 vip: 192.168.1.200 agent: args: ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "deploy-ha_zh.md"
- },
- "content": [
- {
- "heading": "Deploy HA FabEdge",
- "data": "FabEdge has implemented HA since v1.0.0, and this article shows how to deploy HA FabEdge."
- },
- {
- "heading": "Note: For edge framework and DNS configuration details, please refer to [Get Started](./get-started_zh.md); this article won't repeat them.",
- "data": ""
- },
- {
- "heading": "Environment",
- "data": "- Kubernetes v1.22.5\n - Flannel v0.19.2\n - KubeEdge 1.12.2\n - Helm3"
- },
- {
- "heading": "Node Information",
- "data": "harry, node1 and node2 are cloud nodes behind a gateway whose public address is 10.40.10.180; edge1 and edge4 are each located in their own networks."
- },
- {
- "heading": "Deployment",
- "data": "1. Add the chart repo\n 2. Get the cluster network information:\n 3. Run the installation script\n Parameter description:\n * connector-public-addresses: the public address of the connector, reachable from the edge nodes\n * connector-public-port and connector-as-mediator: neither is required for an HA deployment; they are configured only because this environment needs them;\n * enable-keepalived: whether to use the keepalived bundled with FabEdge; it is enabled here\n * keepalived-vip: the virtual IP used by the connector; it must be an internal address\n * keepalived-interface: the interface used to carry the connector's internal address; make sure the NICs used for this address share the same name on all connector nodes\n * keepalived-router-id: same as keepalived's virtual_router_id, used to distinguish VRRP instances; optional.\n 4. Verify the deployment:\n As shown, there are now two connector pods running; both try to acquire the connector lease, and the one that succeeds runs in the connector role, while the other runs in the cloud-agent role until it acquires the lease. From the output above we can see that node2's connector pod holds the connector lease."
- },
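The lease behavior described in step 4 — two connector pods, with the lease holder acting as connector and the other standing by as cloud-agent — can be sketched as a toy model (illustrative Python only, not FabEdge code; the `Lease` class and names are assumptions, a real deployment uses a Kubernetes coordination.k8s.io Lease):

```python
# Toy model of the connector-lease failover described above.
# The pod holding the lease runs as "connector"; the other runs
# as "cloud-agent" until it manages to acquire the lease.
class Lease:
    def __init__(self):
        self.holder = None

    def try_acquire(self, pod: str) -> bool:
        # First come, first served; re-acquiring your own lease succeeds.
        if self.holder is None:
            self.holder = pod
        return self.holder == pod

def role(lease: Lease, pod: str) -> str:
    return "connector" if lease.try_acquire(pod) else "cloud-agent"

lease = Lease()
print(role(lease, "node2-pod"))  # connector (acquired first)
print(role(lease, "node1-pod"))  # cloud-agent (lease already held)

# If the holder goes away (e.g. its node fails), the standby takes over.
lease.holder = None
print(role(lease, "node1-pod"))  # connector
```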
- {
- "heading": "Manual Deployment",
- "data": "HA FabEdge can also be deployed manually. Before deploying, please read [Manual Deployment](./manually-install_zh.md); since the steps are the same, only a values.yaml example is provided here:"
- },
- {
- "additional_info": "FabEdge has implemented HA since v1.0.0, and this article shows how to deploy HA FabEdge. - Kubernetes v1.22.5 - Flannel v0.19.2 - KubeEdge 1.12.2 - Helm3 ```shell NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME edge1 Ready agent,edge 6d21h v1.22.6-kubeedge-v1.12.2 10.22.53.116 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://20.10.21 edge4 Ready agent,edge 6d22h v1.22.6-kubeedge-v1.12.2 10.40.30.110 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 harry Ready control-plane,master 20d v1.22.5 192.168.1.5 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 node1 Ready 20d v1.22.6 192.168.1.6 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 node2 Ready 135m v1.22.5 192.168.1.7 Ubuntu 20.04.6 LTS 5.4.0-166-generic docker://24.0.5 ``` harry, node1 and node2 are cloud nodes behind a gateway whose public address is 10.40.10.180; edge1 and edge4 are each located in their own networks. 1. Add the chart repo ```shell helm repo add fabedge https://fabedge.github.io/helm-chart ``` 2. Get the cluster network information: ``` curl -s https://fabedge.github.io/helm-chart/scripts/get_cluster_info.sh | bash - This may take some time. Please wait. clusterDNS : clusterDomain : kubernetes cluster-cidr : 10.233.64.0/18 service-cluster-ip-range : 10.233.0.0/18 ``` 3. Run the installation script ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name harry \\ --cluster-region harry \\ --cluster-zone harry \\ --cluster-role host \\ --connectors node1,node2 \\ --edges edge1,edge4 \\ --connector-public-addresses 10.40.10.180 \\ --connector-public-port 45000 \\ --connector-as-mediator true \\ --enable-keepalived true \\ --keepalived-vip 192.168.1.200 \\ --keepalived-interface enp0s3 \\ --keepalived-router-id 51 \\ --chart fabedge/fabedge ``` Parameter description: * connector-public-addresses: the public address of the connector, reachable from the edge nodes * connector-public-port and connector-as-mediator: neither is required for an HA deployment; they are configured only because this environment needs them; * enable-keepalived: whether to use the keepalived bundled with FabEdge; it is enabled here * keepalived-vip: the virtual IP used by the connector; it must be an internal address * keepalived-interface: the interface used to carry the connector's internal address; make sure the NICs used for this address share the same name on all connector nodes * keepalived-router-id: same as keepalived's virtual_router_id, used to distinguish VRRP instances; optional. 4. Verify the deployment: ```shell root@harry:~/fabedge# kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME edge1 Ready agent,edge 7d v1.22.6-kubeedge-v1.12.2 10.22.53.116 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://20.10.21 edge4 Ready agent,edge 7d1h v1.22.6-kubeedge-v1.12.2 10.40.30.110 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 harry Ready control-plane,master 21d v1.22.5 192.168.1.5 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 node1 Ready connector 21d v1.22.6 192.168.1.6 Ubuntu 20.04.6 LTS 5.4.0-167-generic docker://24.0.5 node2 Ready connector 4h55m v1.22.5 192.168.1.7 Ubuntu 20.04.6 LTS 5.4.0-166-generic docker://24.0.5 root@harry:~/fabedge# kubectl get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fabedge-agent-55fnj 2/2 Running 0 7m14s 10.22.53.116 edge1 fabedge-agent-vvwdz 2/2 Running 0 7m14s 10.40.30.110 edge4 fabedge-cloud-agent-rcwqk 1/1 Running 0 7m16s 192.168.1.5 harry fabedge-connector-7b659c4cd-475l2 3/3 Running 0 7m14s 192.168.1.7 node2 fabedge-connector-7b659c4cd-6tj2c 3/3 Running 0 7m14s 192.168.1.6 node1 fabedge-operator-5f4c5b5ffd-6cghs 1/1 Running 0 7m16s 10.233.66.21 node2 root@harry:~/fabedge# kubectl get lease NAME HOLDER AGE connector node2 7m43s ``` There are now two connector pods running; both try to acquire the connector lease, and the one that succeeds runs in the connector role, while the other runs in the cloud-agent role until it acquires the lease. From the output above we can see that node2's connector pod holds the connector lease. HA FabEdge can also be deployed manually; before deploying, please read [Manual Deployment](./manually-install_zh.md). Since the steps are the same, only a values.yaml example is provided here: ```yaml cluster: name: harry role: host region: harry zone: harry cniType: \"flannel\" clusterCIDR: - 10.233.64.0/18 connectorPublicAddresses: - 10.40.10.180 connectorPublicPort: 45000 connectorAsMediator: true serviceClusterIPRange: - 10.233.0.0/18 connector: replicas: 2 keepalived: create: true interface: enp0s3 routerID: 51 vip: 192.168.1.200 agent: args: ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "fabedge-design.md"
- },
- "content": [
- {
- "heading": "FabEdge Design Overview",
- "data": "[toc]"
- },
- {
- "heading": "Terminology",
- "data": "* Cloud node: a Kubernetes node running in the cloud; cloud nodes are usually located in the same data center and share the same network, and most Kubernetes management components also run on cloud nodes\n * Cloud pod: a pod running on a cloud node\n * Edge node: a node outside the cloud data center; edge nodes are often distributed across different physical regions, sit on different LANs from the cloud nodes, have poorer network conditions, and may lose contact with the cloud control plane at any time\n * Edge pod: a pod running on an edge node"
- },
- {
- "heading": "Overview",
- "data": "In edge computing scenarios, cloud-edge and edge-edge communication is a common requirement, but current edge computing frameworks have not solved this problem yet; FabEdge tries to solve it.\n FabEdge is a CNI implementation that works on the edge side. It does not replace CNI implementations such as Flannel or Calico, but cooperates with them to enable cloud-edge communication. Communication on the cloud side is still handled by CNI tools such as Flannel and Calico, while FabEdge establishes VPN tunnels and configures iptables and routes so that pods in different networks can communicate with each other."
- },
- {
- "heading": "Goals",
- "data": "* Pods can reach each other by IP address\n * Cloud nodes and edge pods can reach each other by IP address\n * Edge nodes and cloud pods can reach each other by IP address\n * Edge pods can access cloud Services (ClusterIP)\n * Cloud pods can access edge Services (ClusterIP)"
- },
- {
- "heading": "Solution",
- "data": "The basic principle is simple: establish bidirectional VPN tunnels between nodes to connect the LANs distributed across different regions, then configure routes and iptables to steer the data a pod sends through a tunnel to the pod on the target node; this enables both cloud-edge and edge-edge communication.\n Cloud nodes are usually in one network, so there is no need for each of them to build tunnels to edge nodes; instead one cloud node, called the connector, acts as the cloud-edge gateway.\n Building VPN tunnels consumes resources, and many edge scenarios have a large number of edge nodes; if every node built tunnels to every other node, resources would be wasted. To solve this, a mechanism is needed to manage the network topology so that only the nodes that need to communicate build tunnels with each other.\n Even after tunnels are built and routes and rules configured, pod-to-pod communication is merely possible: due to certain limitations, edge nodes cannot run the Flannel or Calico components, so FabEdge also has to take on IPAM for edge pods.\n To summarize, FabEdge has to solve the following problems:\n * Build bidirectional VPN tunnels between cloud and edge and between edges, connecting multiple LANs\n * Create routing rules and iptables rules\n * Manage IP allocation for edge pods\n * Manage communication between edge nodes"
- },
- {
- "heading": "VPN Selection",
- "data": "Among the many VPNs we chose strongSwan, because tunnel creation and destruction can be managed dynamically through its vici protocol, and at deployment time it can run in a separate container instead of sharing a container with the agent, with configuration loading controlled via signals."
- },
- {
- "heading": "Routing Rules and iptables Rules",
- "data": "To aid understanding, here is a network topology diagram. The cluster has 4 nodes: node1 and node2 are cloud nodes that communicate over flannel, and node2 is selected as the connector node; edge1 and edge2 are edge nodes that can communicate with each other directly. The cluster's ServiceClusterIPRange is 10.234.0.0/18.\n "
- },
- {
- "heading": "Edge Nodes",
- "data": "Taking edge1 as an example, FabEdge creates routes in a routing table with ID 220, for example:\n 192.168.0.254 is edge1's default gateway; the other CIDRs are the ServiceClusterIPRange and the PodCIDRs assigned to other nodes. On the surface, traffic to 10.234.0.0/16 has 192.168.0.254 as its next hop, but it is actually intercepted by strongswan and sent to the corresponding nodes through the established tunnels.\n Routing table 220 is created by strongswan, and its priority is higher than that of the default routing table:\n Besides the routing rules, iptables rules are also required; the fabedge agent defines the following rules in the filter and nat tables:\n filter table:\n nat table:\n Here 10.234.67.0/24 is the PodCIDR assigned to edge1. The filter-table rules ensure that traffic whose source or destination address is in this CIDR can be forwarded; the nat-table rules ensure that edge pods are NATed when they access the external network, but not when they access other pods and services.\n FABEDGE-PEER-CIDR is an ipset whose entries are the addresses of the other nodes, their PodCIDRs, and the ServiceClusterIPRange"
- },
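The edge-side nat-table logic described above — NAT only for destinations outside the FABEDGE-PEER-CIDR ipset — can be sketched in a few lines of Python (illustrative only; the `peer_cidrs` contents are example values, and the real decision is of course made by kernel iptables/ipset, not userspace code):

```python
import ipaddress

# CIDRs that would populate the FABEDGE-PEER-CIDR ipset:
# other nodes' addresses, their PodCIDRs, and the ServiceClusterIPRange.
peer_cidrs = [
    ipaddress.ip_network("10.234.64.0/24"),  # another node's PodCIDR (example)
    ipaddress.ip_network("10.234.0.0/18"),   # ServiceClusterIPRange
]

def needs_snat(dst_ip: str) -> bool:
    """Mirror of the nat-table rule: edge pods are NATed when reaching
    the external network, but not when reaching other pods/services."""
    dst = ipaddress.ip_address(dst_ip)
    return not any(dst in cidr for cidr in peer_cidrs)

print(needs_snat("10.234.0.10"))  # service IP -> no SNAT -> False
print(needs_snat("8.8.8.8"))      # external   -> SNAT    -> True
```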
- {
- "heading": "The connector Node",
- "data": "FabEdge likewise creates routes in routing table 220:\n 10.234.67.0/24 and 10.234.68.0/24 are the edge nodes' PodCIDRs, and 10.22.48.254 is the cloud side's default gateway. Just as on the edge side, traffic destined for edge pods is intercepted by strongswan and sent to the edge nodes through the tunnels.\n FabEdge does not modify the connector node's default routing table, whose content is as follows:\n As shown above, requests to cloud pods are handled by flannel.\n Below are the iptables rules FabEdge generates on the connector node:\n filter table:\n nat table:\n The filter-table rules mainly ensure that strongswan traffic and cloud traffic are not rejected; the nat table is more involved:\n * No NAT between cloud pods and edge pods;\n * No NAT between cloud pods and edge nodes;\n * SNAT when edge pods access cloud nodes, to avoid rp_filter problems;\n * SNAT when edge nodes access cloud pods, otherwise the return packets cannot find their way back to the edge node."
- },
- {
- "heading": "Non-connector Cloud Nodes",
- "data": "Because only the connector can build tunnels on the cloud side, the other cloud nodes must route their edge-bound traffic to the connector node. Taking node1 as an example, its routing rules are as follows:\n These rules are still created in routing table 220, but on these nodes the table is created by fabedge-cloud-agent rather than strongswan. 10.234.65.0 is the address of the flannel.1 device on the connector node.\n The main routing table is as follows:\n As the two tables show, FabEdge routes all traffic destined for edge nodes to the connector node's flannel.1 interface, and from there it reaches edge pods through the tunnels created by strongswan."
- },
- {
- "heading": "Edge Pod IPAM",
- "data": "FabEdge does not develop its own CNI plugins like other CNI implementations do; instead it uses the host-local, bridge, portmap, bandwidth and other plugins provided by [CNI Plugins](https://github.com/containernetworking/plugins), which already cover the basic IPAM needs.\n The fabedge-agent component generates the following CNI configuration on the edge side"
- },
- {
- "heading": "Edge Node PodCIDR Allocation",
- "data": "Depending on the CNI implementation, in some scenarios FabEdge has to take on PodCIDR allocation for edge nodes; the allocated PodCIDR is stored in the edge node's annotations, for example:\n Currently FabEdge only supports the Flannel and Calico CNI implementations; when Calico is used, an extra CIDR must be provided for FabEdge to allocate PodCIDRs from."
- },
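As a sketch of what CIDR-based allocation looks like (not FabEdge's actual code; the pool value and the per-node /24 size are assumptions for illustration), Python's `ipaddress` module can carve disjoint per-node PodCIDRs out of a supplied pool:

```python
import ipaddress

# Hypothetical pool handed to FabEdge, e.g. when running with Calico.
pool = ipaddress.ip_network("10.234.64.0/18")

# Carve fixed-size per-node PodCIDRs (/24 here, an assumption) out of the pool.
subnets = pool.subnets(new_prefix=24)

allocations = {}
for node in ["edge1", "edge2"]:
    # Each edge node gets the next free subnet; the result would be
    # recorded in the node's annotations.
    allocations[node] = str(next(subnets))

print(allocations)  # {'edge1': '10.234.64.0/24', 'edge2': '10.234.65.0/24'}
```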
- {
- "heading": "Edge Node Communication Management",
- "data": "We introduced a CRD named Community. Users can use Communities to manage communication between nodes: only edge pods within the same Community can communicate with each other. Because cloud-edge communication is needed frequently, cloud-edge tunnels are established by default.\n The following example creates a Community named all-edge-nodes that allows the edge1 and edge2 nodes to communicate. Note that the names used when configuring a community differ from the node names: they are actually endpoint names, and an endpoint name is composed of \"cluster-name.node-name\"."
- },
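Since a Community references endpoint names of the form "cluster-name.node-name" rather than bare node names, building a member list can be sketched as follows (illustrative; the `endpoint_name` helper is an assumption, not a FabEdge API):

```python
def endpoint_name(cluster: str, node: str) -> str:
    # An endpoint name is "<cluster name>.<node name>".
    return f"{cluster}.{node}"

# Members for a Community like all-edge-nodes in a cluster named "harry".
members = [endpoint_name("harry", n) for n in ["edge1", "edge2"]]
print(members)  # ['harry.edge1', 'harry.edge2']
```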
- {
- "heading": "Service Access",
- "data": "The solution above does not mention how service access (ClusterIP type) is implemented, because in some scenarios it is not a problem.\n First, cloud pods can access edge services by domain name, and edge pods can access any cloud service; but edge pods accessing edge services falls into two cases:\n * The edge side can run kube-proxy: then as long as an edge node has built tunnels to the edge nodes running the service backends, the edge service can be accessed by domain name;\n * The edge side cannot run kube-proxy: then even if the edge node has built tunnels to the edge nodes running the service backends, or even if a backend and the client pod are on the same node, they cannot communicate, because the domain name resolves to a ClusterIP, which would have to be relayed through the connector, and the connector was not designed for that role.\n For the second case, FabEdge provides a small component named fab-proxy, which lets pods on an edge node access, by domain name, services whose backends are on the same node; cross-node access is not supported.\n For better access to edge services, it is still best to run kube-proxy on the edge nodes."
- },
- {
- "heading": "FabEdge Components",
- "data": ""
- },
- {
- "heading": "fabedge-agent",
- "data": "fabedge-agent runs on edge nodes; it is mainly responsible for generating the CNI configuration, maintaining iptables, ipset and ipvs data, configuring routing tables, and managing strongswan tunnels."
- },
- {
- "heading": "fabedge-connector",
- "data": "fabedge-connector runs on designated cloud nodes; it is mainly responsible for maintaining iptables, ipset and ipvs data on the connector node, configuring routing tables, and managing strongswan tunnels. It also passes some routing information to fabedge-cloud-agent"
- },
- {
- "heading": "fabedge-cloud-agent",
- "data": "fabedge-cloud-agent runs on the non-connector cloud nodes; it is mainly responsible for maintaining iptables and ipset data and configuring routing tables on its node. fabedge-cloud-agent obtains some routing information from fabedge-connector."
- },
- {
- "heading": "fabedge-operator",
- "data": "fabedge-operator takes on the following duties: * Deploying fabedge-agent for each edge node; * Allocating PodCIDRs to edge nodes in certain scenarios; * Generating certificates and private keys for strongswan on edge nodes and the connector node; * Managing tunnel configuration for fabedge-agent and fabedge-connector; * Generating each node's IPVS configuration for fabedge-agent when the fab-proxy feature is enabled"
- },
- {
- "additional_info": "[toc] * \u4e91\u7aef\u8282\u70b9\uff1a\u4e00\u4e2a\u8fd0\u884c\u5728\u4e91\u7aef\u7684kubernetes\u8282\u70b9\uff0c\u901a\u5e38\u8ddf\u4e00\u7fa4\u4e91\u7aef\u8282\u70b9\u4f4d\u4e8e\u540c\u4e00\u4e2a\u6570\u636e\u4e2d\u5fc3\uff0c\u5171\u4eab\u540c\u4e00\u4e2a\u7f51\u7edc\uff0c\u5927\u591a\u6570Kubernetes\u7ba1\u7406\u7ec4\u4ef6\u4e5f\u90fd\u8fd0\u884c\u5728\u4e91\u7aef\u8282\u70b9\u4e0a * \u4e91\u7aefPod\uff1a\u8fd0\u884c\u5728\u4e91\u7aef\u8282\u70b9\u7684Pod * \u8fb9\u7f18\u8282\u70b9\uff1a\u8fb9\u7f18\u8282\u70b9\u662f\u76f8\u5bf9\u4e8e\u4e91\u8ba1\u7b97\u6570\u636e\u4e2d\u5fc3\u7684\u8282\u70b9\uff0c\u8fd9\u4e9b\u8282\u70b9\u5f80\u5f80\u5206\u5e03\u5728\u4e0d\u540c\u7684\u7269\u7406\u533a\u57df\uff0c\u8ddf\u4e91\u7aef\u8282\u70b9\u4f4d\u4e8e\u4e0d\u540c\u7684\u5c40\u57df\u7f51\uff0c\u7f51\u7edc\u73af\u5883\u4e5f\u8f83\u5dee\uff0c\u968f\u65f6\u53ef\u80fd\u8ddf\u4e91\u7aef\u63a7\u5236\u8282\u70b9\u5931\u8054 * \u8fb9\u7f18Pod\uff1a\u8fd0\u884c\u5728\u8fb9\u7f18\u8282\u70b9\u7684Pod \u5728\u8fb9\u7f18\u8ba1\u7b97\u573a\u666f\u4e2d\uff0c\u4e91\u8fb9\u901a\u4fe1\u53ca\u8fb9\u8fb9\u901a\u4fe1\u662f\u4e2a\u5e38\u89c1\u7684\u9700\u6c42\uff0c\u4f46\u76ee\u524d\u7684\u8fb9\u7f18\u8ba1\u7b97\u6846\u67b6\u90fd\u5c1a\u672a\u89e3\u51b3\u8fd9\u4e2a\u95ee\u9898\uff0cFabEdge\u5c1d\u8bd5\u89e3\u51b3\u8fd9\u4e2a\u95ee\u9898\u3002 FabEdge\u662f\u4e00\u4e2a\u8fb9\u7f18\u7aef\u5de5\u4f5c\u7684CNI\u5b9e\u73b0\uff0c\u5b83\u5e76\u4e0d\u66ff\u4ee3Flannel, Calico\u7b49CNI\u5b9e\u73b0\uff0c\u800c\u662f\u4e0e\u8fd9\u4e9bCNI\u5b9e\u73b0\u76f8\u4e92\u914d\u5408\u5b9e\u73b0\u4e91\u8fb9\u901a\u4fe1\u3002\u5728\u4e91\u7aef\u7684\u901a\u4fe1\u4f9d\u7136\u7531Flannel\uff0cCalico\u7b49CNI\u5de5\u5177\u8d1f\u8d23\uff0cFabEdge\u901a\u8fc7\u5efa\u7acbVPN\u96a7\u9053\uff0c\u914d\u7f6eiptables\u548c\u8def\u7531\uff0c\u4f7f\u5f97\u5904\u4e8e\u4e0d\u540c\u7f51\u7edc\u7684Pod\u53ef\u4ee5\u76f8\u4e92\u901a\u4fe1\u3002 * 
Pods can reach each other by IP address * Cloud nodes and edge pods can reach each other by IP address * Edge nodes and cloud pods can reach each other by IP address * Edge pods can access cloud Services (ClusterIP) * Cloud pods can access edge Services (ClusterIP) The basic principle is simple: establish bidirectional VPN tunnels between nodes to connect the LANs spread across different regions, then configure routes and iptables to steer data sent by pods through the tunnels to pods on the target nodes, which enables both cloud-edge and edge-edge communication. Cloud nodes are usually in the same network, so there is no need for every one of them to establish tunnels to the edge nodes; instead the cloud side has a node called the connector, which acts as the cloud-edge gateway. Establishing VPN tunnels consumes resources, and many edge scenarios involve a large number of edge nodes; if every node established tunnels to every other node, unnecessary resources might be wasted. To solve this, a mechanism is needed to manage the network topology so that only the nodes that need to communicate do so. 
Even with the tunnels, routes, and rules in place, pod-to-pod communication has only become possible. Due to certain limitations, edge nodes cannot run the Flannel or Calico components, so FabEdge also has to take on IPAM for edge pods. To sum up, FabEdge has to solve the following problems: * establish bidirectional VPN tunnels for cloud-edge and edge-edge traffic, connecting multiple LANs * create routing rules and iptables rules * manage IP allocation for edge pods * manage communication between edge nodes Among the many VPNs we chose strongswan, because it can manage tunnel creation and destruction dynamically over the vici protocol, and at deployment time it can run in a separate container instead of sharing a container with the agent and having configuration reloads driven by signals. To make this easier to understand, here is a network topology: the cluster has 4 nodes, of which node1 and node2 are cloud nodes communicating over flannel, with node2 chosen as the connector node; edge1 and edge2 are edge nodes that can communicate with each other directly. The cluster's ServiceClusterIPRange is 10.234.0.0/18.  Taking edge1 as an example, FabEdge creates routes in a routing table with ID 220, for example: ``` 10.234.0.0/18 via 
192.168.0.254 dev eth0 10.234.64.0/24 via 192.168.0.254 dev eth0 10.234.65.0/24 via 192.168.0.254 dev eth0 10.234.68.0/24 via 192.168.0.254 dev eth0 ``` 192.168.0.254 is edge1's default gateway, and the other CIDRs are the ServiceClusterIPRange and the PodCIDRs allocated to other nodes. On the surface the next hop for this traffic is 192.168.0.254, but it is actually intercepted by strongswan and sent through the established tunnels to the corresponding nodes. Routing table 220 is created by strongswan, and its priority is higher than the main routing table: ``` 0:\tfrom all lookup local 220:\tfrom all lookup 220 32766:\tfrom all lookup main 32767:\tfrom all lookup default ``` Besides the routing rules, iptables rules are also needed; fabedge-agent defines the following rules in the filter and nat tables: filter table: ``` -N FABEDGE-FORWARD -A FORWARD -j FABEDGE-FORWARD -A FABEDGE-FORWARD -s 10.234.67.0/24 -j ACCEPT -A FABEDGE-FORWARD -d 10.234.67.0/24 -j ACCEPT ``` nat table: ``` -N FABEDGE-NAT-OUTGOING -A POSTROUTING -j FABEDGE-NAT-OUTGOING -A FABEDGE-NAT-OUTGOING -s 10.234.67.0/24 -m set --match-set FABEDGE-PEER-CIDR dst -j RETURN -A FABEDGE-NAT-OUTGOING -s 10.234.67.0/24 -d 10.234.67.0/24 -j RETURN -A FABEDGE-NAT-OUTGOING -s 10.234.67.0/24 -j MASQUERADE ``` Here 10.234.67.0/24 is the PodCIDR allocated to edge1. The rules in the filter table ensure that traffic whose source or destination falls in this CIDR can be forwarded; the rules in the nat table ensure that edge pods are NATed when they access the internet but not when they access other pods and services. 
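The FABEDGE-NAT-OUTGOING decision described above can be sketched in Python (a simplified model for illustration, not FabEdge code; the CIDRs are the example values from this topology):

```python
import ipaddress

# Hypothetical model of the FABEDGE-NAT-OUTGOING chain on edge1:
# traffic from the edge PodCIDR is masqueraded unless it targets a
# peer CIDR or stays inside the PodCIDR itself.
EDGE_POD_CIDR = ipaddress.ip_network('10.234.67.0/24')
PEER_CIDRS = [ipaddress.ip_network(c) for c in
              ['10.234.0.0/18', '10.234.64.0/24', '10.234.68.0/24']]

def nat_action(src, dst):
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if src not in EDGE_POD_CIDR:
        return 'PASS'          # the chain only matches edge pod sources
    if any(dst in c for c in PEER_CIDRS):
        return 'RETURN'        # peer pods/services: no NAT
    if dst in EDGE_POD_CIDR:
        return 'RETURN'        # local pod-to-pod: no NAT
    return 'MASQUERADE'        # everything else (e.g. the internet) is SNATed

print(nat_action('10.234.67.5', '10.234.64.9'))  # another node's PodCIDR
print(nat_action('10.234.67.5', '8.8.8.8'))      # external address
```

Note how the RETURN rules for peer CIDRs must precede the unconditional MASQUERADE, exactly as in the rule order shown above.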
FABEDGE-PEER-CIDR is an ipset whose members are the addresses of the other nodes, their PodCIDRs, and the ServiceClusterIPRange: ``` Name: FABEDGE-PEER-CIDR Type: hash:net Revision: 6 Header: family inet hashsize 1024 maxelem 65536 Size in memory: 952 References: 1 Number of entries: 9 Members: 10.234.65.0/24 10.22.48.17 10.234.64.0/24 10.234.0.0/18 10.22.48.16 192.168.0.2 10.234.68.0/24 ``` On the cloud side, FabEdge likewise creates routes in routing table 220: ``` 10.234.67.0/24 via 10.22.48.254 dev eth0 10.234.68.0/24 via 10.22.48.254 dev eth0 ``` 10.234.67.0/24 and 10.234.68.0/24 are the PodCIDRs of the edge nodes, and 10.22.48.254 is the cloud side's default gateway; just as on the edge side, traffic destined for edge pods is intercepted by strongswan and sent to the edge nodes through the tunnels. FabEdge does not modify the connector node's main routing table, whose content is as follows: ``` default via 10.22.48.254 dev eth0 proto dhcp metric 100 10.234.64.0/24 via 10.234.64.0 dev flannel.1 onlink 10.234.65.0/24 dev cni0 proto kernel scope link src 10.234.65.1 ``` As you can see, requests to cloud pods are still handled by flannel. Below are the iptables rules FabEdge generates on the connector node: filter table: ``` -N FABEDGE-FORWARD -N FABEDGE-INPUT -A INPUT -j FABEDGE-INPUT -A FORWARD -j FABEDGE-FORWARD -A FABEDGE-FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FABEDGE-FORWARD -m set --match-set FABEDGE-CLOUD-POD-CIDR src -j ACCEPT -A FABEDGE-FORWARD -m set --match-set FABEDGE-CLOUD-POD-CIDR dst -j ACCEPT -A FABEDGE-FORWARD -m set --match-set FABEDGE-CLOUD-NODE-CIDR src -j ACCEPT -A FABEDGE-FORWARD -m set --match-set FABEDGE-CLOUD-NODE-CIDR dst -j ACCEPT -A FABEDGE-INPUT -p udp -m udp 
--dport 500 -j ACCEPT -A FABEDGE-INPUT -p udp -m udp --dport 4500 -j ACCEPT -A FABEDGE-INPUT -p esp -j ACCEPT -A FABEDGE-INPUT -p ah -j ACCEPT ``` nat table: ``` -N FABEDGE-POSTROUTING -A POSTROUTING -j FABEDGE-POSTROUTING -A FABEDGE-POSTROUTING -m set --match-set FABEDGE-CLOUD-POD-CIDR src -m set --match-set FABEDGE-EDGE-POD-CIDR dst -j ACCEPT -A FABEDGE-POSTROUTING -m set --match-set FABEDGE-EDGE-POD-CIDR src -m set --match-set FABEDGE-CLOUD-POD-CIDR dst -j ACCEPT -A FABEDGE-POSTROUTING -m set --match-set FABEDGE-CLOUD-POD-CIDR src -m set --match-set FABEDGE-EDGE-NODE-CIDR dst -j ACCEPT -A FABEDGE-POSTROUTING -m set --match-set FABEDGE-EDGE-POD-CIDR src -m set --match-set FABEDGE-CLOUD-NODE-CIDR dst -j MASQUERADE -A FABEDGE-POSTROUTING -m set --match-set FABEDGE-EDGE-NODE-CIDR src -m set --match-set FABEDGE-CLOUD-POD-CIDR dst -j MASQUERADE ``` The filter table rules mainly ensure that strongswan traffic and cloud traffic are not rejected; the nat table is more involved: * traffic between cloud pods and edge pods is not NATed; * traffic between cloud pods and edge nodes is not NATed; * edge pods are SNATed when accessing cloud nodes, to avoid rp_filter problems; * edge nodes are SNATed when accessing cloud pods, otherwise the reply packets could not find their way back to the edge node. Because only the connector can establish tunnels on the cloud side, the other cloud nodes need their edge-bound traffic routed to the connector node. Taking node1 as an example, its routing rules are as follows: ``` 10.234.67.0/24 via 10.234.65.0 dev flannel.1 onlink 10.234.68.0/24 via 10.234.65.0 dev flannel.1 onlink ``` 
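The effect of the rule priorities (local, then 220, then main) together with these table-220 routes on node1 can be modeled as a two-stage lookup. This is an illustrative sketch, not kernel behaviour code; the tables hold the example routes from this document:

```python
import ipaddress

# Simplified model of node1's routing decision: table 220 is consulted
# before the main table, so edge-bound traffic is steered to the
# connector via flannel.1 while everything else follows the main table.
TABLE_220 = {
    '10.234.67.0/24': ('10.234.65.0', 'flannel.1'),
    '10.234.68.0/24': ('10.234.65.0', 'flannel.1'),
}
MAIN = {
    '10.234.64.0/24': (None, 'cni0'),
    '0.0.0.0/0': ('10.22.48.254', 'eth0'),
}

def route(dst):
    dst = ipaddress.ip_address(dst)
    for table in (TABLE_220, MAIN):   # rule priority 220 before 32766 (main)
        # longest-prefix match within each table
        hits = [n for n in table if dst in ipaddress.ip_network(n)]
        if hits:
            return table[max(hits, key=lambda n: int(n.split('/')[1]))]
    return None

print(route('10.234.68.7'))   # edge pod -> via the connector over flannel.1
print(route('10.234.64.3'))   # local cloud pod -> handled by cni0 as usual
```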
These rules are also created in routing table 220, but on these nodes the table is no longer created by strongswan; it is created by fabedge-cloud-agent. 10.234.65.0 is the address of the connector node's flannel.1 device. The main table's routes are as follows: ``` default via 10.22.48.254 dev eth0 proto dhcp metric 100 10.22.48.0/24 dev eth0 proto kernel scope link src 10.22.48.16 metric 100 10.234.64.0/24 dev cni0 proto kernel scope link src 10.234.64.1 10.234.65.0/24 via 10.234.65.0 dev flannel.1 onlink 10.234.66.0/24 via 10.234.66.0 dev flannel.1 onlink ``` As the two tables show, FabEdge routes all edge-bound traffic to the connector node's flannel.1 interface, and from there it reaches edge pods through the tunnels created by strongswan. Unlike other CNI implementations, FabEdge does not ship a CNI plugin of its own; instead it uses the host-local, bridge, portmap, bandwidth, and other plugins provided by [CNI Plugins](https://github.com/containernetworking/plugins), which already cover the basic IPAM needs. The fabedge-agent component generates the following CNI configuration on the edge side: ``` { \"cniVersion\": \"0.3.1\", \"name\": \"fabedge\", \"plugins\": [ { \"type\": \"bridge\", \"bridge\": \"br-fabedge\", \"isDefaultGateway\": true, \"forceAddress\": true, \"hairpinMode\": true, \"mtu\": 1400, \"ipam\": { \"type\": \"host-local\", \"ranges\": [ [ { \"subnet\": \"10.234.67.0/24\" } ] ] } }, { \"type\": \"portmap\", \"capabilities\": { \"portMappings\": true } }, { \"type\": \"bandwidth\", \"capabilities\": { \"bandwidth\": true } } ] } ``` 
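The host-local allocation used in this configuration can be sketched as follows. This is a deliberately simplified model (sequential allocation with the first host reserved for the bridge gateway); the real CNI host-local plugin also persists leases on disk and reuses freed addresses:

```python
import ipaddress

# Minimal host-local-style IPAM over edge1's subnet (illustrative sketch,
# not the CNI host-local plugin's actual code).
class HostLocalIPAM:
    def __init__(self, subnet):
        self.subnet = ipaddress.ip_network(subnet)
        self.pool = self.subnet.hosts()   # skips network/broadcast addresses
        self.allocated = {}
        self.gateway = next(self.pool)    # first host goes to the bridge

    def add(self, container_id):
        ip = next(self.pool)
        self.allocated[container_id] = ip
        return str(ip)

    def delete(self, container_id):
        # simplified: the real plugin returns the address to the pool
        self.allocated.pop(container_id, None)

ipam = HostLocalIPAM('10.234.67.0/24')
print(ipam.gateway)        # 10.234.67.1, assigned to br-fabedge
print(ipam.add('pod-a'))   # 10.234.67.2
```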
Depending on the CNI implementation, in some scenarios FabEdge has to take over PodCIDR allocation for edge nodes; the allocated PodCIDR is stored in the edge node's annotations, for example: ``` fabedge.io/subnets: 10.233.103.192/26 ``` Currently FabEdge supports only the Flannel and Calico CNI implementations; when using Calico, an extra CIDR must be provided for FabEdge to allocate PodCIDRs from. We introduced a CRD called Community; users can use Communities to manage communication between nodes, and only edge pods within the same Community can communicate with each other. Because cloud-edge communication is a frequent need, cloud-edge tunnels are established by default. ``` type Community struct { metav1.TypeMeta `json:\",inline\"` metav1.ObjectMeta `json:\"metadata,omitempty\"` Spec CommunitySpec `json:\"spec,omitempty\"` } type CommunitySpec struct { Members []string `json:\"members,omitempty\"` } ``` The following example creates a Community named all-edge-nodes that lets the nodes edge1 and edge2 communicate. Note that the names used when configuring a community are not node names; they are endpoint names, and an endpoint name has the form \"cluster-name.node-name\". ``` apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: all-edge-nodes spec: members: - beijing.edge1 - beijing.edge2 ``` 
The solution above does not mention how service access (the ClusterIP type) is implemented, because in some scenarios it is not a problem. First, cloud pods can access edge services by domain name, and edge pods can access any cloud service; but edge pods accessing edge services falls into two cases: * the edge side can run kube-proxy; in this case, as long as an edge node has established tunnels to the edge nodes running the service backends, edge services can be accessed by domain name; * the edge side cannot run kube-proxy; in this case, even if the edge node has established tunnels to the edge nodes running the service backends, and even if a backend is on the same node as the client pod, they cannot communicate, because the domain name resolves to a ClusterIP, which would have to be relayed through the connector, and the connector was not designed to take on that role. For the second case, FabEdge provides a small component named fab-proxy, which lets pods on an edge node access, by domain name, services whose backends are on the same node; cross-node access is not supported. For better access to edge services, it is still best to run kube-proxy on the edge nodes.  
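The fab-proxy constraint described above can be sketched as a lookup that only resolves a ClusterIP when a backend lives on the local node. All names and addresses below are hypothetical illustration data, not FabEdge's implementation:

```python
# Sketch of the fab-proxy limitation: a ClusterIP can be rewritten to a
# backend endpoint only when that backend runs on the same edge node,
# because cross-node ClusterIP traffic would need a relay that the
# connector does not provide.
SERVICES = {'10.234.0.10': ['10.234.67.5', '10.234.68.9']}  # ClusterIP -> endpoints
LOCAL_POD_CIDR = '10.234.67.'  # this node's PodCIDR prefix (illustrative)

def resolve(cluster_ip):
    local = [ep for ep in SERVICES.get(cluster_ip, [])
             if ep.startswith(LOCAL_POD_CIDR)]
    return local[0] if local else None   # None: unreachable without kube-proxy

print(resolve('10.234.0.10'))  # picks the same-node backend 10.234.67.5
```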
fabedge-agent runs on edge nodes and is mainly responsible for generating CNI configuration, maintaining iptables, ipset, and IPVS data, configuring routing tables, and managing strongswan tunnels. fabedge-connector runs on a specific cloud node and is mainly responsible for maintaining the connector node's iptables, ipset, and IPVS data, configuring routing tables, and managing strongswan tunnels; it also passes some routing information to fabedge-cloud-agent. fabedge-cloud-agent runs on the non-connector cloud nodes and is mainly responsible for its node's iptables and ipset data and routing table configuration; it obtains some routing information from fabedge-connector. fabedge-operator is responsible for the following: * deploying a fabedge-agent for each edge node; * in some scenarios, allocating PodCIDRs for edge nodes; * generating certificates and private keys for the strongswan instances on edge nodes and the connector node; * managing tunnel configuration for fabedge-agent and fabedge-connector; * generating the node's IPVS configuration for fabedge-agent when the fab-proxy feature is enabled"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "FAQ.md"
- },
- "content": [
- {
- "heading": "Frequently Asked Questions (FAQ)",
- "data": ""
- },
- {
- "heading": "Is FabEdge another CNI implementation?",
- "data": "Not exactly; at least it is not a general-purpose CNI implementation. It is designed to resolve the network communication issues of edge computing. On the cloud side, it still relies on Flannel or Calico to provide network communication, but on the edge side it is FabEdge doing the work. Maybe one day we can make Flannel or Calico run on the edge side as well."
- },
- {
- "heading": "Which CNI implementations can FabEdge work together with?",
- "data": "For now, FabEdge can only work with Flannel and Calico. FabEdge works under the vxlan mode of Flannel, as well as the vxlan or IPIP mode of Calico. Note that when working with Calico, you cannot use etcd as Calico's backend storage."
- },
- {
- "heading": "What's the size of PodCIDR for each edge node? Can I change it? How?",
- "data": "Well, it's up to your Kubernetes settings and the CNI you use:\n * Flannel. Flannel doesn't allocate PodCIDRs for worker nodes itself; instead, it uses each node's PodCIDR field, which is allocated by Kubernetes. In this situation, FabEdge will also use the nodes' PodCIDRs. If you want to change the size, you have to set it up during the deployment of Kubernetes.\n * Calico. Calico allocates a PodCIDR for each node itself, but since FabEdge is unable to change Calico's settings, we decided to allocate PodCIDRs for edge nodes ourselves, which is why you need to provide a value for the `edge-pod-cidr` parameter. To change the size of each PodCIDR, set the `edge-cidr-mask-size` parameter:\n ```shell\n curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\\n --cluster-name beijing \\\n --cluster-role host \\\n --cluster-zone beijing \\\n --cluster-region beijing \\\n --connectors beijing \\\n --edges edge1,edge2,edge3 \\\n --connector-public-addresses 10.22.45.16 \\\n --cni-type calico \\\n --edge-pod-cidr 10.234.0.0/16 \\ # the address pool for edge pods; must not overlap with calico's\n --edge-cidr-mask-size 26 \\ # the mask size of each edge node's PodCIDR\n --chart fabedge/fabedge\n ```\n If you choose to [install FabEdge manually](https://github.com/FabEdge/fabedge/blob/main/docs/manually-install.md), you may take the following values.yaml as an example:\n PS: Later configuration examples will not state whether they apply to script or manual installation. In addition, some parameters can only be configured in manual installation mode, and no script installation example is provided for them."
- },
- {
- "heading": "Can cloud pods and edge pods communicate to each other by default? Can I disable this feature?",
- "data": "Yes, cloud pods and edge pods can communicate by default and you can't disable it."
- },
- {
- "heading": "The traffic of cloud-edge communication is handled by the connector node. Is there a Single Point Of Failure?",
- "data": "Yes; for now there is no HA solution for the connector, but we're working on it."
- },
- {
- "heading": "Is the edge-to-edge communication enabled by default?",
- "data": "No. FabEdge uses VPN tunnels to enable communication between different networks, but establishing tunnels consumes resources. Since not all edge nodes need to communicate with each other, FabEdge provides the Community CRD to manage edge-to-edge communication and avoid unnecessary consumption. Please check out [this](https://github.com/FabEdge/fabedge/blob/main/docs/user-guide.md#use-community) guide to find out how to use communities."
- },
- {
- "heading": "Is edge-to-edge communication possible across networks?",
- "data": "Yes, but before FabEdge v0.8.0 it didn't work well. Since v0.8.0, FabEdge has a hole-punching feature which helps edge nodes establish VPN tunnels across different networks. This feature is disabled by default; you can enable it as follows:\n or:"
- },
- {
- "heading": "Do edge nodes located within the same network need to establish tunnels to communicate with each other?",
- "data": "Yes, by default. But if these nodes sit behind the same router, try the auto-networking feature, which works like the host-gw mode of Flannel: each edge node finds peers under the same router using UDP multicast and generates routes for edge pods. You can enable it as follows:"
- },
- {
- "heading": "Can different nodes communicate? Can I use SSH to access nodes?",
- "data": "No, FabEdge does not implement communication between nodes. On the one hand it would be somewhat troublesome; on the other hand, we do not want the security measures between individual networks to be breached because of FabEdge.\n FabEdge doesn't provide SSH capability."
- },
- {
- "heading": "How do edge pods access services?",
- "data": "It depends on your edge computing framework:\n * OpenYurt/SuperEdge. They have their own coredns and kube-proxy pods running on edge nodes, and FabEdge only provides network communication.\n * KubeEdge. Before v0.8.0, FabEdge didn't do much for this, but you could deploy coredns and kube-proxy on edge nodes yourself. Since v0.8.0, FabEdge has integrated coredns and kube-proxy into fabedge-agent.\n For now, the coredns integrated into fabedge-agent is 1.8.0 and the kube-proxy is 1.22.5; if you want to use a different coredns or kube-proxy, you can turn them off:\n or"
- },
- {
- "heading": "My cluster's domain is not cluster.local, what should I do?",
- "data": "If your cluster uses KubeEdge, you need to provide your cluster domain to FabEdge when deploying it:"
- },
- {
- "heading": "I don't want to use node-role.kubernetes.io/edge to label edge nodes",
- "data": "By default FabEdge uses node-role.kubernetes.io/edge to recognize edge nodes, but you can use whatever labels you like; just provide them when deploying FabEdge:\n Don't change these parameters after you have deployed FabEdge, otherwise FabEdge might not work properly."
- },
- {
- "heading": "I can't use 500 and 4500 as public ports for connector, what should I do?",
- "data": "Don't worry; since FabEdge v0.8.0, you can configure the connector's public port. It is worth mentioning that this doesn't change the listen ports of the connector's strongswan, but rather the port which the strongswan on edge nodes uses to establish tunnels. In addition, there is no need to map a public port for 500: when a tunnel is created using a non-500 port, only port 4500 is actually used, so only the connector's port 4500 needs to be mapped. The configuration is as follows:\n or\n It is also worth mentioning that this feature might hurt communication performance; check out [NAT Traversal](https://docs.strongswan.org/docs/5.9/features/natTraversal.html) for why."
- },
- {
- "heading": "Why are fabdns and service-hub running in a single-cluster scenario, and can they be removed?",
- "data": "If you install FabEdge using the quickstart script, it installs them. If you have only one cluster, it's better to disable them:\n or"
- },
- {
- "heading": "Can the network addresses of each cluster overlap in a multi-cluster scenario?",
- "data": "No. Neither the container network addresses nor the host network addresses may overlap. Even if you have only one cluster for now, make sure its network addresses won't overlap with those of other clusters."
- },
- {
- "heading": "I want to configure strongswan, how can I do that?",
- "data": "Sorry, for now FabEdge doesn't provide a way to do that; you may build your own strongswan image and configure it inside the image."
- },
- {
- "additional_info": "Not exactly, at least it is not another CNI implementation for general purpose. It's designed for resolving issues of network communication for edge computing. On the cloud side, it still relies on Flannel or Calico to ensure network communication. But on the edge side, it is FabEdge doing the work, maybe one day we can make Flannel or Calico to be running on the edge side. For now, FabEdge can only work with Flannel and Calico. FabEdge can work under the vxlan mode of Flannel, as well as the vxlan or IPIP mode of Calico. Furthermore, when working with Calico, you cannot use etcd as the backend storage of Calico. Well, it's up to your Kubernetes settings and the CNI you use: * Flannel. Flannel doesn't allocate PodCIDR for work node itself, instead, it uses the PodCIDR field of each node and the PodCIDR is allocated by Kubernetes. In this situation, FabEdge will also use the PodCIDR of nodes. If you want to change the size, you have to set it up during the deployment of Kubernetes. * Calico. Calico will allocate PodCIDR for each node itself, but since FabEdge is unable to change the settings of Calico, we decide to allocate PodCIDR for edge nodes ourself, that is the reason why you need to provide the value for `edge-pod-cidr` parameter. 
To change the size of PodCIDR, you need to set `edge-cidr-mask-size` parameter: ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region beijing \\ --connectors beijing \\ --edges edge1,edge2,edge3 \\ --connector-public-addresses 10.22.45.16 \\ --cni-type calico \\ --edge-pod-cidr 10.234.0.0/16 \\ # the address pool for edge pod, don't overlap with calico's --edge-cidr-mask-size 26 \\ # it's the network mask's size --chart fabedge/fabedge ``` If you choose to [install FabEdge manually](https://github.com/FabEdge/fabedge/blob/main/docs/manually-install.md), you may take the following values.yaml as an example: ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"calico\" # configure edge pod CIDR and mask size for edge pods edgePodCIDR: \"10.234.0.0/16\" edgeCIDRMaskSize: 26 connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" ``` PS: Script or manual installation will not be prompted when configuration examples are provided later. In addition, some parameters can only be configured in the manual installation mode, and no example of script installation is provided. Yes, cloud pods and edge pods can communicate by default and you can't disable it. Yes, for now there is no HA solution for connector, we're still working on it. No, FabEdge uses VPN tunnel to make communication between different available networks. However, it will use some resources to establish VPN tunnels. As not all edge nodes need to communicated, FabEdge provides community CRD to manage communication between edge-to-edge communication to avoid unnecessary consumption. Please check out [this](https://github.com/FabEdge/fabedge/blob/main/docs/user-guide.md#use-community) and find out how to use community. 
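The effect of the `edge-pod-cidr` and `edge-cidr-mask-size` parameters mentioned above can be checked with a quick computation. This is a sketch using Python's ipaddress module to reason about the example values, not FabEdge code:

```python
import ipaddress

# What --edge-pod-cidr 10.234.0.0/16 plus --edge-cidr-mask-size 26
# implies: how many /26 PodCIDRs fit in the pool, and how many pod
# addresses each edge node receives.
pool = ipaddress.ip_network('10.234.0.0/16')
subnets = list(pool.subnets(new_prefix=26))
print(len(subnets))              # 1024 edge nodes can be served
print(subnets[0])                # 10.234.0.0/26
print(subnets[0].num_addresses)  # 64 addresses per node
```

Choosing the mask size is therefore a trade-off between the number of edge nodes the pool can serve and the number of pods each node can run.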
Yes, but before FabEdge v0.8.0 it didn't work well. Since v0.8.0, we implemented hole-punching feature which can help edge nodes to establish VPN tunnels across different networks. This feature is disabled by default, you can enable it as following: ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region beijing \\ --connectors beijing \\ --edges edge1,edge2,edge3 \\ --connector-public-addresses 10.22.45.16 \\ --connector-as-mediator true \\ # enable hole-punching feature --chart fabedge/fabedge ``` or\uff1a ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" # enable hole-punching feature connectorAsMediator: true connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" ``` By default, yes it is. But if these nodes use the same router, please try auto-networking feature, it works like the host-gw mode of Flannel. Each edge node find peers under the same router using UDP multicast and generate routes for edge pods. You can enable it as following: ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" connectorPublicAddresses: - 10.22.46.33 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" # enable auto networking AUTO_NETWORKING: \"true\" # fabedge-agent use this address to multicast, this value is also the default value which can limit the # multicast range to the router, normally you don't need to change it. 
MULTICAST_ADDRESS: \"239.40.20.81:18080\" # multicast token, each edge node can communicate with another one which has the same token MULTICAST_TOKEN: \"SdY3MTJDHKUkJsHU\" ``` No, FabEdge does not implement communication between nodes, which is a bit troublesome on the one hand. On the other hand, we do not want the security measures between individual networks to be breached because of FabEdge. FabEdge doesn't provide SSH capability. It depends on your edge computing framekwork\uff1a * OpenYurt/SuperEdge. They will have their own coredns and kube-proxy pods running on edge nodes and FabEdge only provide network communication. * KubeEdge\u3002Before v0.8.0, FabEdge didn't do much for this, but you can deploy coredns and kube-proxy on edge nodes by yourself. Since v0.8.0, FabEdge have integrated coredns and kube-proxy into fabedge-agent. For now, the coredns integrated to fabedge-agent is 1.8.0 and kube-proxy is 1.22.5, if you want use different coredns and kube-proxy, you can turn them off: ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region beijing \\ --connectors beijing \\ --edges edge1,edge2,edge3 \\ --connector-public-addresses 10.22.45.16 \\ --enable-proxy false \\ # disable kube-proxy --enable-dns false \\ # disable coredns --chart fabedge/fabedge ``` or ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: ENABLE_PROXY: \"false\" ENABLE_DNS: \"false\" ``` If your cluster uses KubeEdge, you need to provide your cluster domain to FabEdge when deploying it: ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: 
ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" # configure cluster domain DNS_CLUSTER_DOMAIN: \"your.domain\" ``` By default FabEdge uses node-role.kubernetes.io/edge to recognize edge nodes, but you can use what you like, just provide it when deploying FabEdge: ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 # configure labels which are used to recognize edge nodes. Format key=value, value can be blank edgeLabels: - edge-node= # Here is an enable label, sometimes you may only want fabedge-agent to running on some edge nodes, # you can give fabedge-enable=true label to those nodes. - fabedge-enable=true # You can also use different labels to mark connector node connectorLabels: - connector-node= agent: args: ENABLE_PROXY: \"false\" ENABLE_DNS: \"false\" ``` Don't change those parameters after you have deployed FabEdge, otherwise FabEdge might work improperly. Don't worry, since FabEdge v0.8.0, you can configure connector's public port. It is worth mentioning this doesn't change the listen ports of connector's strongswan, but change the port which strongswan of edge nodes use to establish tunnels. In addition, there is no need to map the public network port for 500. When a tunnel is created using a non-500 port, only port 4500 is actually used, so only port 4500 of the connector needs to be mapped. 
The configuration is as follows: ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region beijing \\ --connectors beijing \\ --edges edge1,edge2 \\ --connector-public-addresses 10.40.20.181 \\ --connector-public-port 45000 \\ --chart fabedge/fabedge ``` or ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" # configure connector's public port connectorPublicPort: 45000 connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" ``` It is also worth mentioning that this feature might hurt communication performance; check out [NAT Traversal](https://docs.strongswan.org/docs/5.9/features/natTraversal.html) for the reason. If you install FabEdge with the quickstart script, fabdns and service-hub are installed by default; if you have only one cluster, it's better to disable them. ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region beijing \\ --connectors beijing \\ --edges edge1,edge2 \\ --connector-public-addresses 10.40.20.181 \\ --enable-fabdns false \\ --chart fabedge/fabedge ``` or ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" fabDNS: # disable fabdns and service-hub create: false ``` No. Not only must the network addresses of the container networks not overlap, but the host networks must not either; even if you have only one cluster, make sure their network addresses won't overlap. Sorry, for now FabEdge doesn't provide any way to configure strongswan; you can build your own strongswan image and do the configuration inside it."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "FAQ_zh.md"
- },
- "content": [
- {
- "heading": "Frequently Asked Questions",
- "data": ""
- },
- {
- "heading": "Is FabEdge a CNI implementation?",
- "data": "No, at least not a CNI in the conventional sense. Its design goal is to solve network communication in edge scenarios: on the cloud side, CNIs such as Flannel and Calico are still in charge, while the edge side is handled by FabEdge for now. Perhaps one day we can make Flannel and Calico work on edge nodes."
- },
- {
- "heading": "Which CNIs is FabEdge compatible with?",
- "data": "Currently only Flannel and Calico are supported: Flannel's vxlan mode, and Calico's IPIP and vxlan modes. Also, when working with Calico, Calico's storage backend must not be etcd."
- },
- {
- "heading": "How large is the network segment allocated on the edge side? Can it be adjusted, and how?",
- "data": "It depends on the configuration used when deploying Kubernetes and the CNI implementation:\n * Flannel. Flannel does not allocate PodCIDRs to nodes itself but uses the PodCIDRs allocated by Kubernetes; in this case FabEdge also uses each node's own PodCIDR value. To adjust each node's PodCIDR size, modify the corresponding configuration when deploying Kubernetes.\n * Calico. Calico allocates PodCIDRs to nodes, but since FabEdge cannot influence this process, it manages the edge-side PodCIDR allocation itself, which is why you need to configure the edgePodCIDR parameter at deployment. To change the PodCIDR size, modify the `edge-cidr-mask-size` value, e.g.:\n ```shell\n curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\\n --cluster-name beijing \\\n --cluster-role host \\\n --cluster-zone beijing \\\n --cluster-region beijing \\\n --connectors beijing \\\n --edges edge1,edge2,edge3 \\\n --connector-public-addresses 10.22.45.16 \\\n --cni-type calico \\\n --edge-pod-cidr 10.234.0.0/16 \\ # provide the PodCIDR pool for the edge side\n --edge-cidr-mask-size 26 \\ # note this is a mask length, following the convention of the corresponding Kubernetes parameter\n --chart fabedge/fabedge\n ```\n If you use [manual installation](https://github.com/FabEdge/fabedge/blob/main/docs/manually-install_zh.md), refer to the following values.yaml configuration:\n 
*Note: in the configuration examples below, we will no longer distinguish between script installation and manual installation. Also, some parameters can only be configured via manual installation; in those cases no script example is provided.*"
- },
- {
- "heading": "Can cloud and edge communicate by default? Can this be turned off?",
- "data": "Yes, cloud-edge communication is enabled by default and cannot be turned off."
- },
- {
- "heading": "All cloud-edge traffic is handled by the connector node; is there a single point of failure?",
- "data": "Since v1.0.0, FabEdge supports connector high availability."
- },
- {
- "heading": "Can edge nodes communicate with each other by default?",
- "data": "No. FabEdge uses VPN tunnels to connect the isolated networks in edge scenarios, and creating VPN tunnels consumes some computing resources. Since not all edge nodes need to be connected, FabEdge uses the Community CRD to manage edge-to-edge communication in order to reduce unnecessary overhead. See [here](https://github.com/FabEdge/fabedge/blob/main/docs/user-guide_zh.md#fabedge%E7%94%A8%E6%88%B7%E6%89%8B%E5%86%8C) to learn how to use Community."
- },
- {
- "heading": "Can edge-to-edge communication cross networks?",
- "data": "Yes, but the implementation before FabEdge v0.8.0 didn't work very well. FabEdge v0.8.0 implemented hole punching, which solves cross-network edge-to-edge communication much better. This feature is disabled by default and can be enabled as follows:\n or:"
- },
- {
- "heading": "Do edge nodes within the same network also need tunnels to communicate?",
- "data": "By default, yes. If these nodes are behind the same router, you can try FabEdge's auto-networking feature. It works similarly to flannel's host-gw mode: it discovers other edge nodes via UDP multicast and generates routes for the containers on those nodes, with performance close to the host network. Enable it as follows:"
- },
- {
- "heading": "Note: because the install script has no corresponding option, no script example is provided here.",
- "data": ""
- },
- {
- "heading": "Can nodes communicate with each other directly? Is SSH access to other nodes supported?",
- "data": "FabEdge does not implement node-to-node communication: on the one hand it is a bit troublesome; on the other hand, we do not want the security measures between networks to be breached because of FabEdge.\n As for SSH access, FabEdge doesn't implement that either."
- },
- {
- "heading": "How does FabEdge solve service access on the edge side?",
- "data": "It depends on the edge computing framework you choose:\n * OpenYurt/SuperEdge. These frameworks run their own kube-proxy and coredns on edge nodes; FabEdge only provides network communication.\n * KubeEdge. Before v0.8.0, FabEdge didn't solve this problem well, but you could run coredns and kube-proxy on edge nodes yourself. Since v0.8.0, FabEdge has integrated coredns and kube-proxy into fabedge-agent, providing both capabilities to edge-side containers.\n The coredns version currently integrated into fabedge-agent is 1.8.0, the highest available version we could find that is compatible with metaServer; kube-proxy is 1.22.5. If you prefer to deploy coredns and kube-proxy yourself, you can turn these features off:\n or"
- },
- {
- "heading": "My cluster domain is not cluster.local; what should I do?",
- "data": "If your cluster uses OpenYurt or SuperEdge, nothing needs to be done. If your cluster uses KubeEdge, you can provide your cluster domain when deploying FabEdge:"
- },
- {
- "heading": "I don't want to use node-role.kubernetes.io/edge to mark edge nodes",
- "data": "By default FabEdge uses the node-role.kubernetes.io/edge label to recognize edge nodes, but you can use any label you like; just provide the corresponding configuration at deployment:\n Note that it is best not to change the label parameters after deployment, otherwise FabEdge might not work properly."
- },
- {
- "heading": "My cloud-side public network cannot use ports 500 and 4500",
- "data": "Don't worry: since FabEdge v0.8.0, you can change the connector's public port. Note that this doesn't change the listening ports of the connector's strongswan, which still listens on 500 and 4500, but changes the port which edge nodes use when establishing tunnels to the cloud. In addition, there is no need to map a public port for 500: when a tunnel is created over a non-500 port, only port 4500 is actually involved, so only the connector's port 4500 needs to be mapped. Configure it as follows:\n or\n Note that using this approach reduces communication performance; see [NAT Traversal](https://docs.strongswan.org/docs/5.9/features/natTraversal.html) for the reason."
- },
- {
- "heading": "Why are fabdns and service-hub running in a single-cluster scenario? Can I do without them?",
- "data": "When installing with the script, these two components are installed by default. In a single-cluster scenario you can do without them; configure the parameters as follows:\n or"
- },
- {
- "heading": "In multi-cluster communication scenarios, can the network segments of the clusters overlap?",
- "data": "No. Not only must the container network segments not overlap, but the host networks must not either. Even in a single-cluster scenario, the host network address spaces must not overlap."
- },
- {
- "heading": "I want to modify strongswan's configuration; how do I do it?",
- "data": "Sorry, FabEdge cannot configure strongswan yet. You can build your own strongswan image and do the configuration inside the image."
- },
- {
- "additional_info": "No, at least not a CNI in the conventional sense. Its design goal is to solve network communication in edge scenarios: on the cloud side, CNIs such as Flannel and Calico are still in charge, while the edge side is handled by FabEdge for now. Perhaps one day we can make Flannel and Calico work on edge nodes. Currently only Flannel and Calico are supported: Flannel's vxlan mode, and Calico's IPIP and vxlan modes; also, when working with Calico, Calico's storage backend must not be etcd. It depends on the configuration used when deploying Kubernetes and the CNI implementation: * Flannel. Flannel does not allocate PodCIDRs to nodes itself but uses the PodCIDRs allocated by Kubernetes; in this case FabEdge also uses each node's own PodCIDR value. To adjust each node's PodCIDR size, modify the corresponding configuration when deploying Kubernetes. * Calico. Calico allocates PodCIDRs to nodes, but since FabEdge cannot influence this process, it manages the edge-side PodCIDR allocation itself, which is why you need to configure the edgePodCIDR parameter at deployment. To change the PodCIDR size, modify the `edge-cidr-mask-size` value, e.g.: ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ 
--cluster-role host \\ --cluster-zone beijing \\ --cluster-region beijing \\ --connectors beijing \\ --edges edge1,edge2,edge3 \\ --connector-public-addresses 10.22.45.16 \\ --cni-type calico \\ --edge-pod-cidr 10.234.0.0/16 \\ # provide the PodCIDR pool for the edge side --edge-cidr-mask-size 26 \\ # note this is a mask length, following the convention of the corresponding Kubernetes parameter --chart fabedge/fabedge ``` If you use [manual installation](https://github.com/FabEdge/fabedge/blob/main/docs/manually-install_zh.md), refer to the following values.yaml configuration: ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"calico\" # configure edgePodCIDR and the mask length edgePodCIDR: \"10.234.0.0/16\" edgeCIDRMaskSize: 26 connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: # the following two parameters only need to be enabled in KubeEdge environments ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" ``` *Note: in the configuration examples below, we will no longer distinguish between script installation and manual installation. Also, some parameters can only be configured via manual installation; in those cases no script example is provided.* Yes, cloud-edge communication is enabled by default and cannot be turned off. Since v1.0.0, FabEdge supports connector high availability. 
No. FabEdge uses VPN tunnels to connect the isolated networks in edge scenarios, and creating VPN tunnels consumes some computing resources. Since not all edge nodes need to be connected, FabEdge uses the Community CRD to manage edge-to-edge communication in order to reduce unnecessary overhead; see [here](https://github.com/FabEdge/fabedge/blob/main/docs/user-guide_zh.md#fabedge%E7%94%A8%E6%88%B7%E6%89%8B%E5%86%8C) to learn how to use Community. Yes, it can cross networks, but the implementation before FabEdge v0.8.0 didn't work very well; FabEdge v0.8.0 implemented hole punching, which solves cross-network edge-to-edge communication much better. This feature is disabled by default and can be enabled as follows: ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region beijing \\ --connectors beijing \\ --edges edge1,edge2,edge3 \\ --connector-public-addresses 10.22.45.16 \\ --connector-as-mediator true \\ # this parameter enables hole punching --chart fabedge/fabedge ``` or: ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" # enable hole punching connectorAsMediator: true connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: # the following two parameters only need to be enabled in KubeEdge environments ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" ``` 
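To illustrate the Community CRD mentioned above, here is a rough sketch of a node-type Community that allows a group of edge nodes to build tunnels to each other; the community name and member endpoint names (usually in cluster-name.node-name form) are assumptions for illustration: ```yaml apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: my-edge-community spec: members: - beijing.edge1 - beijing.edge2 ``` 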
By default, yes. If these nodes are behind the same router, you can try FabEdge's auto-networking feature. It works similarly to flannel's host-gw mode: it discovers other edge nodes via UDP multicast and generates routes for the containers on those nodes, with performance close to the host network. Enable it as follows: ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" connectorPublicAddresses: - 10.22.46.33 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" # enable auto networking AUTO_NETWORKING: \"true\" # the address fabedge-agent uses for multicast; the value below is the default, it limits the multicast scope to nodes under the same router and rarely needs to be changed MULTICAST_ADDRESS: \"239.40.20.81:18080\" # multicast token; each edge node only communicates with other edge nodes holding the same token MULTICAST_TOKEN: \"SdY3MTJDHKUkJsHU\" ``` FabEdge does not implement node-to-node communication: on the one hand it is a bit troublesome; on the other hand, we do not want the security measures between networks to be breached because of FabEdge. As for SSH access, FabEdge doesn't implement that either. It depends on the edge computing framework you choose: * 
OpenYurt/SuperEdge. These frameworks run their own kube-proxy and coredns on edge nodes; FabEdge only provides network communication. * KubeEdge. Before v0.8.0, FabEdge didn't solve this problem well, but you could run coredns and kube-proxy on edge nodes yourself. Since v0.8.0, FabEdge has integrated coredns and kube-proxy into fabedge-agent, providing both capabilities to edge-side containers. The coredns version currently integrated into fabedge-agent is 1.8.0, the highest available version we could find that is compatible with metaServer; kube-proxy is 1.22.5. If you prefer to deploy coredns and kube-proxy yourself, you can turn these features off: ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region beijing \\ --connectors beijing \\ --edges edge1,edge2,edge3 \\ --connector-public-addresses 10.22.45.16 \\ --enable-proxy false \\ # disable kube-proxy --enable-dns false \\ # disable coredns --chart fabedge/fabedge ``` or ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: ENABLE_PROXY: \"false\" ENABLE_DNS: \"false\" ``` 
If your cluster uses OpenYurt or SuperEdge, nothing needs to be done. If your cluster uses KubeEdge, you can provide your cluster domain when deploying FabEdge: ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" # configure the cluster domain here DNS_CLUSTER_DOMAIN: \"your.domain\" ``` By default FabEdge uses the node-role.kubernetes.io/edge label to recognize edge nodes, but you can use any label you like; just provide the corresponding configuration at deployment: ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 # configure the labels used to recognize edge nodes; the format is key=value edgeLabels: - edge-node= # here is an enable label: if only some nodes need to run fabedge, you can give just those nodes the fabedge-enable=true label - fabedge-enable=true # configure the labels used to recognize connector nodes connectorLabels: - connector-node= agent: args: ENABLE_PROXY: \"false\" ENABLE_DNS: \"false\" ``` Note that it is best not to change the label parameters after deployment, otherwise FabEdge might not work properly. 
Don't worry: since FabEdge v0.8.0, you can change the connector's public port. Note that this doesn't change the listening ports of the connector's strongswan, which still listens on 500 and 4500, but changes the port which edge nodes use when establishing tunnels to the cloud. In addition, there is no need to map a public port for 500: when a tunnel is created over a non-500 port, only port 4500 is actually involved, so only the connector's port 4500 needs to be mapped. Configure it as follows: ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region beijing \\ --connectors beijing \\ --edges edge1,edge2 \\ --connector-public-addresses 10.40.20.181 \\ --connector-public-port 45000 \\ # configure the connector's public port --chart fabedge/fabedge ``` or ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" # configure the connector's public port connectorPublicPort: 45000 connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: # the following two parameters only need to be enabled in KubeEdge environments ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" ``` Note that using this approach reduces communication performance; see [NAT Traversal](https://docs.strongswan.org/docs/5.9/features/natTraversal.html) for the reason. 
When installing with the script, these two components are installed by default. In a single-cluster scenario you can do without them; configure the parameters as follows: ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region beijing \\ --connectors beijing \\ --edges edge1,edge2 \\ --connector-public-addresses 10.40.20.181 \\ --enable-fabdns false \\ --chart fabedge/fabedge ``` or ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"flannel\" connectorPublicAddresses: - 10.22.45.16 clusterCIDR: - 10.233.64.0/18 serviceClusterIPRange: - 10.233.0.0/18 agent: args: # the following two parameters only need to be enabled in KubeEdge environments ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" fabDNS: # disable fabdns and service-hub create: false ``` No. Not only must the container network segments not overlap, but the host networks must not either. Even in a single-cluster scenario, the host network address spaces must not overlap. Sorry, FabEdge cannot configure strongswan yet; you can build your own strongswan image and do the configuration inside the image."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "feature_request.md"
- },
- "content": [
- {
- "heading": "Is your feature request related to a problem? Please describe.",
- "data": "A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]"
- },
- {
- "heading": "Describe the solution you'd like",
- "data": "A clear and concise description of what you want to happen."
- },
- {
- "heading": "Describe alternatives you've considered",
- "data": "A clear and concise description of any alternative solutions or features you've considered."
- },
- {
- "heading": "Additional context",
- "data": "Add any other context or screenshots about the feature request here."
- },
- {
- "additional_info": "--- name: Feature request about: Suggest an idea for this project title: '' labels: '' assignees: '' --- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] A clear and concise description of what you want to happen. A clear and concise description of any alternative solutions or features you've considered. Add any other context or screenshots about the feature request here."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "get-started-v0.5.0.md"
- },
- "content": [
- {
- "heading": "Quickstart for FabEdge v0.5.0",
- "data": "[toc]"
- },
- {
- "heading": "Terminology",
- "data": "- **Cloud Cluster**: a standard k8s cluster located at the cloud side, providing cloud computing capability.\n - **Edge Cluster**: a standard k8s cluster located at the edge side, providing edge computing capability.\n - **Edge Node**: a k8s node located at the edge side, joined to the cloud cluster using a framework such as KubeEdge.\n - **Host Cluster**: a cloud cluster selected to manage cross-cluster communication. The first cluster deployed by FabEdge must be the host cluster.\n - **Member Cluster**: an edge cluster registered into the host cluster; it reports its network information to the host cluster.\n - **Community**: a K8S CRD defined by FabEdge; there are two types:\n - **Node Type**: defines the communication between nodes within the same cluster\n - **Cluster Type**: defines cross-cluster communication"
- },
- {
- "heading": "Prerequisite",
- "data": "- Kubernetes (v1.18.8, v1.22.7)\n - Flannel (v0.14.0) or Calico (v3.16.5)\n - KubeEdge (v1.5), SuperEdge (v0.5.0), or OpenYurt (v0.4.1)"
- },
- {
- "heading": "Preparation",
- "data": "1. Make sure the following ports are allowed by the firewall or security group.\n - ESP (IP protocol 50), UDP/500, UDP/4500\n 2. Collect the configuration of the current cluster\n ```shell\n $ curl -s http://116.62.127.76/installer/v0.5.0/get_cluster_info.sh | bash -\n This may take some time. Please wait.\n \n clusterDNS : 169.254.25.10\n clusterDomain : root-cluster\n cluster-cidr : 10.233.64.0/18\n service-cluster-ip-range : 10.233.0.0/18\n ```"
- },
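The values printed by `get_cluster_info.sh` are needed in several later steps (edgecore config, Calico pools). A minimal sketch of capturing them into shell variables, assuming the `key : value` output format shown above; the here-string is a stand-in for the real script output:

```shell
# Parse get_cluster_info.sh-style "key : value" output into variables.
# The sample text below mirrors the example output in the guide.
cluster_info='clusterDNS : 169.254.25.10
clusterDomain : root-cluster
cluster-cidr : 10.233.64.0/18
service-cluster-ip-range : 10.233.0.0/18'

get_field() {
  # Print the value following "<key> : " for the given key.
  printf '%s\n' "$cluster_info" | awk -v k="$1" '$1 == k { print $3 }'
}

CLUSTER_DNS=$(get_field clusterDNS)
CLUSTER_DOMAIN=$(get_field clusterDomain)
CLUSTER_CIDR=$(get_field cluster-cidr)
SERVICE_CIDR=$(get_field service-cluster-ip-range)
echo "clusterDNS=$CLUSTER_DNS clusterDomain=$CLUSTER_DOMAIN"
```

In practice you would pipe the real script output into the same parsing.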
- {
- "heading": "Deploy FabEdge on the host cluster",
- "data": "1. Label **all** edge nodes\n ```shell\n $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge=\n node/edge1 labeled\n $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge=\n node/edge2 labeled\n \n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 22h v1.18.2\n edge2 Ready edge 22h v1.18.2\n master Ready master 22h v1.18.2\n node1 Ready 22h v1.18.2\n ```\n 2. Deploy FabEdge\n ```shell\n $ curl 116.62.127.76/installer/v0.5.0/install.sh | bash -s -- --cluster-name beijing --cluster-role host --cluster-zone beijing --cluster-region china --connectors node1 --connector-public-addresses 10.22.46.47 --chart http://116.62.127.76/fabedge-0.5.0.tgz\n ```\n > Note\uff1a\n > **--connectors**: names of k8s nodes which connectors are located\n > **--connector-public-addresses**: ip addresses of k8s nodes which connectors are located\n 3. Verify the deployment\n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n 
fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```\n 4. Create community for edges which need to communicate with each other\n ```shell\n $ cat > node-community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: beijing-edge-nodes # community name\n spec:\n members:\n - beijing.edge1 # format\uff1a{cluster name}.{edge node name}\n - beijing.edge2\n EOF\n \n $ kubectl apply -f node-community.yaml\n ```\n 5. Update the [edge computing framework](#%E5%92%8C%E8%BE%B9%E7%BC%98%E8%AE%A1%E7%AE%97%E6%A1%86%E6%9E%B6%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) dependent configuration\n 5. Update the [CNI](#%E5%92%8CCNI%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) dependent configuration"
- },
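The community step above can be scripted when there are many edge nodes. A sketch that generates `node-community.yaml` for a list of nodes, following the `{cluster name}.{edge node name}` member format; the cluster and node names are the examples from the text:

```shell
# Generate a node-type Community manifest for several edge nodes.
cluster=beijing
nodes="edge1 edge2"

{
  echo "apiVersion: fabedge.io/v1alpha1"
  echo "kind: Community"
  echo "metadata:"
  echo "  name: ${cluster}-edge-nodes"
  echo "spec:"
  echo "  members:"
  for n in $nodes; do
    # member format: {cluster name}.{edge node name}
    echo "  - ${cluster}.${n}"
  done
} > node-community.yaml

cat node-community.yaml
```

Apply the generated file with `kubectl apply -f node-community.yaml` as in the guide.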
- {
- "heading": "Deploy FabEdge in the member cluster",
- "data": "If any member cluster, register it in the host cluster first, then deploy FabEdge in it.\n 1. in the **host cluster**\uff0ccreate an edge cluster named \"shanghai\". Get the token for registration.\n \n ```shell\n # Run in the host cluster\n $ cat > shanghai.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n metadata:\n name: shanghai # cluster name\n EOF\n \n $ kubectl apply -f shanghai.yaml\n \n $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}'\n eyJ------omitted-----9u0\n ```\n 2. Label **all** edge nodes\n ```shell\n $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge=\n node/edge1 labeled\n $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge=\n node/edge2 labeled\n \n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 22h v1.18.2\n edge2 Ready edge 22h v1.18.2\n master Ready master 22h v1.18.2\n node1 Ready 22h v1.18.2\n ```\n 3. Deploy FabEdge in the member cluster\n \n ```shell\n curl 116.62.127.76/installer/v0.5.0/install.sh | bash -s -- --cluster-name shanghai --cluster-role member --cluster-zone shanghai --cluster-region china --connectors node1 --chart http://116.62.127.76/fabedge-0.5.0.tgz --server-serviceHub-api-server https://10.22.46.47:30304 --host-operator-api-server https://10.22.46.47:30303 --connector-public-addresses 10.22.46.26 --init-token eyJ------omitted-----9u0\n ```\n > Note\uff1a\n > **--server-serviceHub-api-server**: endpoint of serviceHub in the host cluster\n > **--host-operator-api-server**: endpoint of operator-api in the host cluster\n > **--connector-public-addresses**: ip address of k8s nodes on which connectors are located in the member cluster\n > **--init-token**: token when the member cluster is added in the host cluster\n 4. 
Verify the deployment\n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```"
- },
- {
- "heading": "Enable multi-cluster communication",
- "data": "1. in the **host cluster**\uff0ccreate a community for all clusters which need to communicate with each other\n ```shell\n $ cat > community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: all-clusters\n spec:\n members:\n - shanghai.connector # format: {cluster name}.connector\n - beijing.connector # format: {cluster name}.connector\n EOF\n \n $ kubectl apply -f community.yaml\n ```"
- },
- {
- "heading": "Enable multi-cluster service discovery",
- "data": "the DNS components need to be modified\n - if `nodelocaldns` is used\uff0cmodify `nodelocaldns` only,\n - if SuperEdge `edge-coredns` is used\uff0cmodify `coredns` and `edge-coredns`,\n - modify `coredns` for others\n 1. Update `nodelocaldns`\n ```shell\n $ kubectl -n kube-system edit cm nodelocaldns\n global:53 {\n errors\n cache 30\n reload\n bind 169.254.25.10 # local bind address\n forward . 10.233.12.205 # cluset-ip of fab-dns service\n }\n ```\n 2. Update `edge-coredns`\n ```shell\n $ kubectl -n edge-system edit cm edge-coredns\n global {\n forward . 10.244.51.126 # cluset-ip of fab-dns service\n }\n ```\n 3. Update `coredns `\n ```shell\n $ kubectl -n kube-system edit cm coredns\n global {\n forward . 10.109.72.43 # cluset-ip of fab-dns service\n }\n ```\n \n 4. Reboot coredns\uff0cedge-coredns or nodelocaldns to take effect"
- },
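The three edits above all add the same `global` stanza to a Corefile. A sketch of appending it non-interactively; the Corefile content here is a minimal stand-in, and the service IP is the hypothetical example value from the text, so look up the cluster-ip of your own fab-dns service first:

```shell
# Append a "global" forward stanza so *.global queries go to fab-dns.
fabdns_ip=10.109.72.43   # replace with the cluster-ip of your fab-dns service

# Minimal stand-in for the existing Corefile being edited.
cat > Corefile << EOF
.:53 {
    errors
    forward . /etc/resolv.conf
}
EOF

cat >> Corefile << EOF
global {
    forward . ${fabdns_ip}
}
EOF

grep -A1 '^global' Corefile
```

The same stanza works for `coredns`, `edge-coredns`, and (with `global:53 { ... }` and a `bind`) `nodelocaldns`, as shown in the guide.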
- {
- "heading": "Edge computing framework depend configuration",
- "data": ""
- },
- {
- "heading": "for KubeEdge",
- "data": "1. Make sure `nodelocaldns` is running on all edge nodes\n ```shell\n $ kubectl get po -n kube-system -o wide | grep nodelocaldns\n nodelocaldns-cz5h2 1/1 Running 0 56m 10.22.46.47 master \n nodelocaldns-nk26g 1/1 Running 0 47m 10.22.46.23 edge1 \n nodelocaldns-wqpbw 1/1 Running 0 17m 10.22.46.20 node1 \n ```\n 2. Update `edgecore` for all edge nodes\n ```shell\n $ vi /etc/kubeedge/config/edgecore.yaml\n \n # edgeMesh must be disabled\n edgeMesh:\n enable: false\n \n edged:\n enable: true\n cniBinDir: /opt/cni/bin\n cniCacheDirs: /var/lib/cni/cache\n cniConfDir: /etc/cni/net.d\n networkPluginName: cni\n networkPluginMTU: 1500\n clusterDNS: 169.254.25.10 # clusterDNS of get_cluster_info script output\n clusterDomain: \"root-cluster\" # clusterDomain of get_cluster_info script output\n ```\n > **clusterDNS**\uff1aif no nodelocaldns\uff0ccoredns service can be used.\n 3. Reboot `edgecore` on all edge nodes\n ```shell\n $ systemctl restart edgecore\n ```"
- },
- {
- "heading": "for SuperEdge",
- "data": "1. Verify the service\uff0cif not ready\uff0cto rebuild the Pod\n ```shell\n $ kubectl get po -n edge-system\n application-grid-controller-84d64b86f9-29svc 1/1 Running 0 15h\n application-grid-wrapper-master-pvkv8 1/1 Running 0 15h\n application-grid-wrapper-node-dqxwv 1/1 Running 0 15h\n application-grid-wrapper-node-njzth 1/1 Running 0 15h\n edge-coredns-edge1-5758f9df57-r27nf 0/1 Running 8 15h\n edge-coredns-edge2-84fd9cfd98-79hzp 0/1 Running 8 15h\n edge-coredns-master-f8bf9975c-77nds 1/1 Running 0 15h\n edge-health-7h29k 1/1 Running 3 15h\n edge-health-admission-86c5c6dd6-r65r5 1/1 Running 0 15h\n edge-health-wcptf 1/1 Running 3 15h\n tunnel-cloud-6557fcdd67-v9h96 1/1 Running 1 15h\n tunnel-coredns-7d8b48c7ff-hhc29 1/1 Running 0 15h\n tunnel-edge-dtb9j 1/1 Running 0 15h\n tunnel-edge-zxfn6 1/1 Running 0 15h\n \n $ kubectl delete po -n edge-system edge-coredns-edge1-5758f9df57-r27nf\n pod \"edge-coredns-edge1-5758f9df57-r27nf\" deleted\n \n $ kubectl delete po -n edge-system edge-coredns-edge2-84fd9cfd98-79hzp\n pod \"edge-coredns-edge2-84fd9cfd98-79hzp\" deleted\n ```\n 2. By default the master node has the taint of `node-role.kubernetes.io/master:NoSchedule`\uff0cwhich prevents fabedge-cloud-agent to start. It caused pods on the master node cannot communicate with the other Pods on the other nodes. If needed, to modify the DamonSet of fabedge-cloud-agent to tolerant this taint\u3002"
- },
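A sketch of the toleration change from step 2. The namespace (`fabedge`) and DaemonSet name (`fabedge-cloud-agent`) are assumptions based on the pod names shown earlier, so verify both in your cluster before applying:

```shell
# JSON patch adding a toleration for the master taint to the DaemonSet.
# Apply (assumed names) with:
#   kubectl -n fabedge patch ds fabedge-cloud-agent --type=json --patch-file toleration-patch.json
cat > toleration-patch.json << 'EOF'
[
  {
    "op": "add",
    "path": "/spec/template/spec/tolerations/-",
    "value": {
      "key": "node-role.kubernetes.io/master",
      "operator": "Exists",
      "effect": "NoSchedule"
    }
  }
]
EOF
cat toleration-patch.json
```

Alternatively, edit the DaemonSet directly with `kubectl edit` and add the same toleration under `spec.template.spec.tolerations`.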
- {
- "heading": "CNI-dependent Configurations",
- "data": ""
- },
- {
- "heading": "for Calico",
- "data": "Regardless the cluster role, add all Pod and Service network segments of all other clusters to the cluster with Calico, which prevents Calico from doing source address translation.\n one example with the clusters of: host (Calico) + member1 (Calico) + member2 (Flannel)\n * on the host (Calico) cluster, to add the addresses of the member (Calico) cluster and the member(Flannel) cluster\n * on the member1 (Calico) cluster, to add the addresses of the host (Calico) cluster and the member(Flannel) cluster\n * on the member2 (Flannel) cluster, there is NO any configuration required.\n ```shell\n $ cat > cluster-cidr-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-cluster-cidr\n spec:\n blockSize: 26\n cidr: 10.233.64.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f cluster-cidr-pool.yaml\n \n $ cat > service-cluster-ip-range-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-service-cluster-ip-range\n spec:\n blockSize: 26\n cidr: 10.233.0.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml\n ```"
- },
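Writing one pool per remote CIDR gets repetitive across clusters. A sketch that generates the disabled IPPool manifests, reusing the names and CIDRs from the example above:

```shell
# Emit a disabled, non-NAT Calico IPPool manifest for a remote CIDR.
make_pool() {  # usage: make_pool <pool-name> <cidr>
  cat << EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: $1
spec:
  blockSize: 26
  cidr: $2
  natOutgoing: false
  disabled: true
  ipipMode: Always
EOF
}

make_pool cluster-beijing-cluster-cidr 10.233.64.0/18 > cluster-cidr-pool.yaml
make_pool cluster-beijing-service-cluster-ip-range 10.233.0.0/18 > service-cluster-ip-range-pool.yaml
# Create each pool with: calicoctl.sh create -f <file>
```

Run `make_pool` once per remote Pod CIDR and once per remote Service CIDR, for every other cluster.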
- {
- "heading": "FAQ",
- "data": "1. If asymmetric routes exist, to disable **rp_filter** on all cloud node ```shell $ sudo for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 >$i; done # save the configuration. $ sudo vi /etc/sysctl.conf net.ipv4.conf.default.rp_filter=0 net.ipv4.conf.all.rp_filter=0 ``` 1. If Error with\uff1a\u201cError: cannot re-use a name that is still in use\u201d. to uninstall fabedge and try again. ```shell $ helm uninstall -n fabedge fabedge release \"fabedge\" uninstalled ```"
- },
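The rp_filter loop from the FAQ, demonstrated against a scratch directory so the loop shape can be seen without root; the real command targets `/proc/sys/net/ipv4/conf/*/rp_filter` and must run as root:

```shell
# Mimic /proc/sys/net/ipv4/conf with a temp directory (stand-in paths).
confdir=$(mktemp -d)
mkdir -p "$confdir/all" "$confdir/default" "$confdir/eth0"
echo 1 > "$confdir/all/rp_filter"
echo 1 > "$confdir/default/rp_filter"
echo 1 > "$confdir/eth0/rp_filter"

# Disable reverse-path filtering on every interface entry.
for f in "$confdir"/*/rp_filter; do
  echo 0 > "$f"
done

cat "$confdir"/*/rp_filter
```

Remember that writes to `/proc` do not survive a reboot, which is why the FAQ also sets the `net.ipv4.conf.*.rp_filter=0` keys in `/etc/sysctl.conf`.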
- {
- "additional_info": "[toc] - **Cloud Cluster**\uff1aa standard k8s cluster, located at the cloud side, providing the cloud computing capability. - **Edge Cluster**: a standard k8s cluster, located at the edge side, providing the edge computing capability. - **Edge Node**: a k8s node, located at the edge side, joining the cloud cluster using the framework, such as KubeEdge. - **Host Cluster**: a selective cloud cluster, used to manage the cross-cluster communication. The 1st cluster deployed by FabEdge must be host cluster. - **Member Cluster**: an edge cluster, registered into the host cluster, reports the network information to host cluster. - **Community**\uff1aK8S CRD defined by FabEdge\uff0cthere are two types\uff1a - **Node Type**\uff1a to define the communication between nodes within the same cluster - **Cluster Type**\uff1ato define the cross-cluster communication - Kubernetes (v1.18.8\uff0c1.22.7) - Flannel (v0.14.0) or Calico (v3.16.5) - KubeEdge \uff08v1.5\uff09or SuperEdge\uff08v0.5.0\uff09or OpenYurt\uff08 v0.4.1\uff09 1. Make sure the following ports are allowed by firewall or security group. - ESP(50)\uff0cUDP/500\uff0cUDP/4500 2. Collect the configuration of the current cluster ```shell $ curl -s http://116.62.127.76/installer/v0.5.0/get_cluster_info.sh | bash - This may take some time. Please wait. clusterDNS : 169.254.25.10 clusterDomain : root-cluster cluster-cidr : 10.233.64.0/18 service-cluster-ip-range : 10.233.0.0/18 ``` 1. Label **all** edge nodes ```shell $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge= node/edge1 labeled $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge= node/edge2 labeled $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 22h v1.18.2 edge2 Ready edge 22h v1.18.2 master Ready master 22h v1.18.2 node1 Ready 22h v1.18.2 ``` 2. 
Deploy FabEdge ```shell $ curl 116.62.127.76/installer/v0.5.0/install.sh | bash -s -- --cluster-name beijing --cluster-role host --cluster-zone beijing --cluster-region china --connectors node1 --connector-public-addresses 10.22.46.47 --chart http://116.62.127.76/fabedge-0.5.0.tgz ``` > Note\uff1a > **--connectors**: names of k8s nodes which connectors are located > **--connector-public-addresses**: ip addresses of k8s nodes which connectors are located 3. Verify the deployment ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.18.2 edge2 Ready edge 5h21m v1.18.2 master Ready master 5h29m v1.18.2 node1 Ready connector 5h23m v1.18.2 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h nodelocaldns-fmx7f 1/1 Running 0 17h nodelocaldns-kcz6b 1/1 Running 0 17h nodelocaldns-pwpm4 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-edge1 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s ``` 4. 
Create community for edges which need to communicate with each other ```shell $ cat > node-community.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: beijing-edge-nodes # community name spec: members: - beijing.edge1 # format\uff1a{cluster name}.{edge node name} - beijing.edge2 EOF $ kubectl apply -f node-community.yaml ``` 5. Update the [edge computing framework](#%E5%92%8C%E8%BE%B9%E7%BC%98%E8%AE%A1%E7%AE%97%E6%A1%86%E6%9E%B6%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) dependent configuration 5. Update the [CNI](#%E5%92%8CCNI%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) dependent configuration If any member cluster, register it in the host cluster first, then deploy FabEdge in it. 1. in the **host cluster**\uff0ccreate an edge cluster named \"shanghai\". Get the token for registration. ```shell # Run in the host cluster $ cat > shanghai.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Cluster metadata: name: shanghai # cluster name EOF $ kubectl apply -f shanghai.yaml $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}' eyJ------omitted-----9u0 ``` 2. Label **all** edge nodes ```shell $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge= node/edge1 labeled $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge= node/edge2 labeled $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 22h v1.18.2 edge2 Ready edge 22h v1.18.2 master Ready master 22h v1.18.2 node1 Ready 22h v1.18.2 ``` 3. 
Deploy FabEdge in the member cluster ```shell curl 116.62.127.76/installer/v0.5.0/install.sh | bash -s -- --cluster-name shanghai --cluster-role member --cluster-zone shanghai --cluster-region china --connectors node1 --chart http://116.62.127.76/fabedge-0.5.0.tgz --server-serviceHub-api-server https://10.22.46.47:30304 --host-operator-api-server https://10.22.46.47:30303 --connector-public-addresses 10.22.46.26 --init-token eyJ------omitted-----9u0 ``` > Note\uff1a > **--server-serviceHub-api-server**: endpoint of serviceHub in the host cluster > **--host-operator-api-server**: endpoint of operator-api in the host cluster > **--connector-public-addresses**: ip address of k8s nodes on which connectors are located in the member cluster > **--init-token**: token when the member cluster is added in the host cluster 4. Verify the deployment ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.18.2 edge2 Ready edge 5h21m v1.18.2 master Ready master 5h29m v1.18.2 node1 Ready connector 5h23m v1.18.2 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h nodelocaldns-fmx7f 1/1 Running 0 17h nodelocaldns-kcz6b 1/1 Running 0 17h nodelocaldns-pwpm4 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-edge1 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 
Running 0 9m19s ``` 1. in the **host cluster**\uff0ccreate a community for all clusters which need to communicate with each other ```shell $ cat > community.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: all-clusters spec: members: - shanghai.connector # format: {cluster name}.connector - beijing.connector # format: {cluster name}.connector EOF $ kubectl apply -f community.yaml ``` the DNS components need to be modified - if `nodelocaldns` is used\uff0cmodify `nodelocaldns` only, - if SuperEdge `edge-coredns` is used\uff0cmodify `coredns` and `edge-coredns`, - modify `coredns` for others 1. Update `nodelocaldns` ```shell $ kubectl -n kube-system edit cm nodelocaldns global:53 { errors cache 30 reload bind 169.254.25.10 # local bind address forward . 10.233.12.205 # cluset-ip of fab-dns service } ``` 2. Update `edge-coredns` ```shell $ kubectl -n edge-system edit cm edge-coredns global { forward . 10.244.51.126 # cluset-ip of fab-dns service } ``` 3. Update `coredns ` ```shell $ kubectl -n kube-system edit cm coredns global { forward . 10.109.72.43 # cluset-ip of fab-dns service } ``` 4. Reboot coredns\uff0cedge-coredns or nodelocaldns to take effect 1. Make sure `nodelocaldns` is running on all edge nodes ```shell $ kubectl get po -n kube-system -o wide | grep nodelocaldns nodelocaldns-cz5h2 1/1 Running 0 56m 10.22.46.47 master nodelocaldns-nk26g 1/1 Running 0 47m 10.22.46.23 edge1 nodelocaldns-wqpbw 1/1 Running 0 17m 10.22.46.20 node1 ``` 2. 
Update `edgecore` for all edge nodes ```shell $ vi /etc/kubeedge/config/edgecore.yaml # edgeMesh must be disabled edgeMesh: enable: false edged: enable: true cniBinDir: /opt/cni/bin cniCacheDirs: /var/lib/cni/cache cniConfDir: /etc/cni/net.d networkPluginName: cni networkPluginMTU: 1500 clusterDNS: 169.254.25.10 # clusterDNS of get_cluster_info script output clusterDomain: \"root-cluster\" # clusterDomain of get_cluster_info script output ``` > **clusterDNS**\uff1aif no nodelocaldns\uff0ccoredns service can be used. 3. Reboot `edgecore` on all edge nodes ```shell $ systemctl restart edgecore ``` 1. Verify the service\uff0cif not ready\uff0cto rebuild the Pod ```shell $ kubectl get po -n edge-system application-grid-controller-84d64b86f9-29svc 1/1 Running 0 15h application-grid-wrapper-master-pvkv8 1/1 Running 0 15h application-grid-wrapper-node-dqxwv 1/1 Running 0 15h application-grid-wrapper-node-njzth 1/1 Running 0 15h edge-coredns-edge1-5758f9df57-r27nf 0/1 Running 8 15h edge-coredns-edge2-84fd9cfd98-79hzp 0/1 Running 8 15h edge-coredns-master-f8bf9975c-77nds 1/1 Running 0 15h edge-health-7h29k 1/1 Running 3 15h edge-health-admission-86c5c6dd6-r65r5 1/1 Running 0 15h edge-health-wcptf 1/1 Running 3 15h tunnel-cloud-6557fcdd67-v9h96 1/1 Running 1 15h tunnel-coredns-7d8b48c7ff-hhc29 1/1 Running 0 15h tunnel-edge-dtb9j 1/1 Running 0 15h tunnel-edge-zxfn6 1/1 Running 0 15h $ kubectl delete po -n edge-system edge-coredns-edge1-5758f9df57-r27nf pod \"edge-coredns-edge1-5758f9df57-r27nf\" deleted $ kubectl delete po -n edge-system edge-coredns-edge2-84fd9cfd98-79hzp pod \"edge-coredns-edge2-84fd9cfd98-79hzp\" deleted ``` 2. By default the master node has the taint of `node-role.kubernetes.io/master:NoSchedule`\uff0cwhich prevents fabedge-cloud-agent to start. It caused pods on the master node cannot communicate with the other Pods on the other nodes. 
If needed, to modify the DamonSet of fabedge-cloud-agent to tolerant this taint\u3002 Regardless the cluster role, add all Pod and Service network segments of all other clusters to the cluster with Calico, which prevents Calico from doing source address translation. one example with the clusters of: host (Calico) + member1 (Calico) + member2 (Flannel) * on the host (Calico) cluster, to add the addresses of the member (Calico) cluster and the member(Flannel) cluster * on the member1 (Calico) cluster, to add the addresses of the host (Calico) cluster and the member(Flannel) cluster * on the member2 (Flannel) cluster, there is NO any configuration required. ```shell $ cat > cluster-cidr-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-cluster-cidr spec: blockSize: 26 cidr: 10.233.64.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f cluster-cidr-pool.yaml $ cat > service-cluster-ip-range-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-service-cluster-ip-range spec: blockSize: 26 cidr: 10.233.0.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml ``` 1. If asymmetric routes exist, to disable **rp_filter** on all cloud node ```shell $ sudo for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 >$i; done # save the configuration. $ sudo vi /etc/sysctl.conf net.ipv4.conf.default.rp_filter=0 net.ipv4.conf.all.rp_filter=0 ``` 1. If Error with\uff1a\u201cError: cannot re-use a name that is still in use\u201d. to uninstall fabedge and try again. ```shell $ helm uninstall -n fabedge fabedge release \"fabedge\" uninstalled ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "get-started-v0.5.0_zh.md"
- },
- "content": [
- {
- "heading": "FabEdge v0.5.0 \u5feb\u901f\u5b89\u88c5\u6307\u5357",
- "data": "[toc]"
- },
- {
- "heading": "\u6982\u5ff5",
- "data": "- **\u4e91\u7aef\u96c6\u7fa4**\uff1a\u6807\u51c6\u7684K8S\u96c6\u7fa4\uff0c\u4f4d\u4e8e\u4e91\u7aef\uff0c\u63d0\u4f9b\u4e91\u7aef\u7684\u8ba1\u7b97\u80fd\u529b\n - **\u8fb9\u7f18\u8282\u70b9**\uff1a\u901a\u8fc7KubeEdge\u7b49\u8fb9\u7f18\u8ba1\u7b97\u6846\u67b6\uff0c\u52a0\u5165\u4e91\u7aef\u96c6\u7fa4\u7684\u8fb9\u7f18\u4fa7\u8282\u70b9\uff0c\u63d0\u4f9b\u8fb9\u7f18\u8ba1\u7b97\u80fd\u529b\n - **\u8fb9\u7f18\u96c6\u7fa4**\uff1a\u6807\u51c6\u7684K8S\u96c6\u7fa4\uff0c\u4f4d\u4e8e\u8fb9\u7f18\u4fa7\uff0c\u63d0\u4f9b\u8fb9\u7f18\u8ba1\u7b97\u80fd\u529b\n - **\u4e3b\u96c6\u7fa4**\uff1a\u4e00\u4e2a\u9009\u5b9a\u7684\u4e91\u7aef\u96c6\u7fa4\uff0c\u7528\u4e8e\u7ba1\u7406\u5176\u5b83\u96c6\u7fa4\u7684\u8de8\u96c6\u7fa4\u901a\u8baf\uff0cFabEdge\u90e8\u7f72\u7684\u7b2c\u4e00\u4e2a\u96c6\u7fa4\u5fc5\u987b\u662f\u4e3b\u96c6\u7fa4\n - **\u6210\u5458\u96c6\u7fa4**\uff1a\u4e00\u4e2a\u8fb9\u7f18\u96c6\u7fa4\uff0c\u6ce8\u518c\u5230\u4e3b\u96c6\u7fa4\uff0c\u4e0a\u62a5\u672c\u96c6\u7fa4\u7aef\u70b9\u7f51\u7edc\u914d\u7f6e\u4fe1\u606f\u7528\u4e8e\u591a\u96c6\u7fa4\u901a\u8baf\n - **Community**\uff1aFabEdge\u5b9a\u4e49\u7684CRD\uff0c\u5206\u4e3a\u4e24\u7c7b\uff1a\n - **\u8282\u70b9\u7c7b\u578b**\uff1a\u5b9a\u4e49\u96c6\u7fa4\u5185\u591a\u4e2a\u8fb9\u7f18\u8282\u70b9\u4e4b\u95f4\u7684\u901a\u8baf\n - **\u96c6\u7fa4\u7c7b\u578b**\uff1a\u5b9a\u4e49\u591a\u4e2a\u8fb9\u7f18\u96c6\u7fa4\u4e4b\u95f4\u7684\u901a\u8baf"
- },
- {
- "heading": "\u524d\u63d0\u6761\u4ef6",
- "data": "- Kubernetes (v1.18.8\uff0c1.22.7)\n - Flannel (v0.14.0) \u6216\u8005 Calico (v3.16.5)\n - KubeEdge \uff08v1.5\uff09\u6216\u8005 SuperEdge\uff08v0.5.0\uff09\u6216\u8005 OpenYurt\uff08 v0.4.1\uff09"
- },
- {
- "heading": "\u73af\u5883\u51c6\u5907",
- "data": "1. \u786e\u4fdd\u9632\u706b\u5899\u6216\u5b89\u5168\u7ec4\u5141\u8bb8\u4ee5\u4e0b\u534f\u8bae\u548c\u7aef\u53e3\n - ESP(50)\uff0cUDP/500\uff0cUDP/4500\n 2. \u83b7\u53d6\u96c6\u7fa4\u914d\u7f6e\u4fe1\u606f\uff0c\u4f9b\u540e\u9762\u4f7f\u7528\n \n ```shell\n $ curl -s http://116.62.127.76/installer/v0.5.0/get_cluster_info.sh | bash -\n This may take some time. Please wait.\n \n clusterDNS : 169.254.25.10\n clusterDomain : root-cluster\n cluster-cidr : 10.233.64.0/18\n service-cluster-ip-range : 10.233.0.0/18\n ```"
- },
- {
- "heading": "\u5728\u4e3b\u96c6\u7fa4\u90e8\u7f72FabEdge",
- "data": "1. \u4e3a**\u6240\u6709\u8fb9\u7f18\u8282\u70b9**\u6dfb\u52a0\u6807\u7b7e\n ```shell\n $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge=\n node/edge1 labeled\n $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge=\n node/edge2 labeled\n \n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 22h v1.18.2\n edge2 Ready edge 22h v1.18.2\n master Ready master 22h v1.18.2\n node1 Ready 22h v1.18.2\n ```\n 2. \u5b89\u88c5FabEdge\n ```shell\n $ curl 116.62.127.76/installer/v0.5.0/install.sh | bash -s -- --cluster-name beijing --cluster-role host --cluster-zone beijing --cluster-region china --connectors node1 --connector-public-addresses 10.22.46.47 --chart http://116.62.127.76/fabedge-0.5.0.tgz\n ```\n > \u8bf4\u660e\uff1a\n > **--cluster-name**: \u96c6\u7fa4\u540d\u79f0\n > **--cluster-role**: \u96c6\u7fa4\u89d2\u8272\n > **--cluster-zone**: \u96c6\u7fa4\u6240\u5728\u7684\u533a\n > **--cluster-region**: \u96c6\u7fa4\u6240\u5728\u7684\u533a\u57df\n > **--connectors**: connectors\u6240\u5728\u8282\u70b9\u4e3b\u673a\u540d\n > **--connector-public-addresses**: connectors\u6240\u5728\u8282\u70b9\u7684ip\u5730\u5740\uff0c\u4ece\u8fb9\u7f18\u8282\u70b9\u5fc5\u987b\u7f51\u7edc\u53ef\u8fbe\n 3. 
\u786e\u8ba4\u90e8\u7f72\u6b63\u5e38\n \n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```\n \n 4. \u4e3a\u9700\u8981\u901a\u8baf\u7684\u8fb9\u7f18\u8282\u70b9\u521b\u5efaCommunity\n \n ```shell\n $ cat > node-community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: beijing-edge-nodes\n spec:\n members:\n - beijing.edge1\n - beijing.edge2\n EOF\n \n $ kubectl apply -f node-community.yaml\n ```\n 5. \u6839\u636e\u4f7f\u7528\u7684[\u8fb9\u7f18\u8ba1\u7b97\u6846\u67b6](#%E5%92%8C%E8%BE%B9%E7%BC%98%E8%AE%A1%E7%AE%97%E6%A1%86%E6%9E%B6%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE)\u4fee\u6539\u76f8\u5173\u914d\u7f6e\n 5. \u6839\u636e\u4f7f\u7528\u7684[CNI](#%E5%92%8CCNI%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE)\u4fee\u6539\u76f8\u5173\u914d\u7f6e"
- },
- {
- "heading": "\u5728\u6210\u5458\u96c6\u7fa4\u90e8\u7f72FabEdge",
- "data": "\u5982\u679c\u6709\u6210\u5458\u96c6\u7fa4\uff0c\u5148\u5728\u4e3b\u96c6\u7fa4\u6ce8\u518c\u6240\u6709\u7684\u6210\u5458\u96c6\u7fa4\uff0c\u7136\u540e\u5728\u6bcf\u4e2a\u6210\u5458\u96c6\u7fa4\u90e8\u7f72FabEdge\n 1. \u5728**\u4e3b\u96c6\u7fa4**\u6dfb\u52a0\u4e00\u4e2a\u540d\u5b57\u53eb\u201cshanghai\u201d\u7684\u6210\u5458\u96c6\u7fa4\uff0c\u83b7\u53d6Token\u4f9b\u6ce8\u518c\u4f7f\u7528\n ```shell\n $ cat > shanghai.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n metadata:\n name: shanghai # \u96c6\u7fa4\u540d\u5b57\n EOF\n \n $ kubectl apply -f shanghai.yaml\n \n $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}'\n eyJ------\u7701\u7565\u5185\u5bb9-----9u0\n ```\n 2. \u4e3a**\u6240\u6709\u8fb9\u7f18\u8282\u70b9**\u6dfb\u52a0\u6807\u7b7e\n ```shell\n $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge=\n node/edge1 labeled\n $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge=\n node/edge2 labeled\n \n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 22h v1.18.2\n edge2 Ready edge 22h v1.18.2\n master Ready master 22h v1.18.2\n node1 Ready 22h v1.18.2\n ```\n 3. 
\u5728**\u6210\u5458\u96c6\u7fa4**\u5b89\u88c5FabEdage\n \n ```shell\n curl 116.62.127.76/installer/v0.5.0/install.sh | bash -s -- --cluster-name shanghai --cluster-role member --cluster-zone shanghai --cluster-region china --connectors node1 --chart http://116.62.127.76/fabedge-0.5.0.tgz --server-serviceHub-api-server https://10.22.46.47:30304 --host-operator-api-server https://10.22.46.47:30303 --connector-public-addresses 10.22.46.26 --init-token ey...Jh\n ```\n > \u8bf4\u660e\uff1a\n > **--cluster-name**: \u96c6\u7fa4\u540d\u79f0\n > **--cluster-role**: \u96c6\u7fa4\u89d2\u8272\n > **--cluster-zone\uff1a** \u96c6\u7fa4\u6240\u5728\u7684\u533a\n > **--cluster-region\uff1a**\u96c6\u7fa4\u6240\u5728\u7684\u533a\u57df\n > **--server-serviceHub-api-server**: host\u96c6\u7fa4serviceHub\u670d\u52a1\u7684\u5730\u5740\u548c\u7aef\u53e3\n > **--host-operator-api-server**: host\u96c6\u7fa4operator-api\u670d\u52a1\u7684\u5730\u5740\u548c\u7aef\u53e3\n > **--connector-public-addresses**: member\u96c6\u7fa4connectors\u6240\u5728\u8282\u70b9\u7684ip\u5730\u5740\n > **--init-token**: host\u96c6\u7fa4\u83b7\u53d6\u7684token\n 4. 
\u786e\u8ba4\u90e8\u7f72\u6b63\u5e38\n \n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```"
- },
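The token-retrieval pipeline above ends in `awk 'END{print}'`, which simply prints the last line of its input — useful because the go-template output can be preceded by other output or blank lines. A standalone sketch of that behavior (the token string below is a placeholder, not a real token):

```shell
# awk's END block runs after the last input line; $0 still holds that
# line, so `awk 'END{print}'` emits only the final line of its input.
printf 'some leading output\neyJ-placeholder-token\n' | awk 'END{print}'
# prints: eyJ-placeholder-token
```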
- {
- "heading": "\u542f\u7528\u591a\u96c6\u7fa4\u901a\u8baf",
- "data": "1. \u5728\u4e3b\u96c6\u7fa4\uff0c\u628a\u6240\u6709\u987b\u8981\u901a\u8baf\u7684\u96c6\u7fa4\u52a0\u5165\u4e00\u4e2aCommunity\n ```shell\n # \u5728master\u8282\u70b9\u64cd\u4f5c\n $ cat > community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: all-clusters\n spec:\n members:\n - shanghai.connector # {\u96c6\u7fa4\u540d\u79f0}.connector\n - beijing.connector # {\u96c6\u7fa4\u540d\u79f0}.connector\n EOF\n \n $ kubectl apply -f community.yaml\n ```"
- },
- {
- "heading": "\u542f\u7528\u591a\u96c6\u7fa4\u670d\u52a1\u53d1\u73b0",
- "data": "\u4fee\u6539\u7684\u96c6\u7fa4DNS\u7ec4\u4ef6\uff1a\n 1\uff09\u5982\u679c\u4f7f\u7528\u4e86nodelocaldns\uff0c\u53ea\u9700\u8981\u4fee\u6539nodelocaldns, \u5176\u5b83\u914d\u7f6e\u4e0d\u52a8\n 2\uff09\u5982\u679c\u4f7f\u7528SuperEdge\uff0c\u4fee\u6539coredns\u548cedge-coredns\uff0c\u5176\u5b83\u914d\u7f6e\u4e0d\u52a8\n 3\uff09\u5176\u5b83\u60c5\u51b5\u53ea\u9700\u8981\u4fee\u6539coredns\n 1. \u914d\u7f6enodelocaldns\n \n ```shell\n $ kubectl -n kube-system edit cm nodelocaldns\n global:53 {\n errors\n cache 30\n reload\n bind 169.254.25.10 # \u672c\u5730bind\u5730\u5740\uff0c\u53c2\u8003\u5176\u5b83\u914d\u7f6e\u6bb5\u4e2d\u7684bind\n forward . 10.233.12.205 # fabdns\u7684service IP\u5730\u5740\n }\n ```\n 2. \u914d\u7f6eedge-coredns\n ```shell\n $ kubectl -n edge-system edit cm edge-coredns\n global {\n forward . 10.244.51.126 # fabdns\u7684service IP\u5730\u5740\n }\n ```\n 3. \u914d\u7f6ecoredns\n ```shell\n $ kubectl -n kube-system edit cm coredns\n global {\n forward . 10.109.72.43 # fabdns\u7684service IP\u5730\u5740\n }\n ```\n 4. \u91cd\u542fcoredns\u3001edge-coredns\u548cnodelocaldns\u4f7f\u914d\u7f6e\u751f\u6548"
- },
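The three DNS edits above share one pattern: add a `global` zone that forwards queries to the fabdns service IP. A minimal sketch that emits such a snippet — `fabdns_global_zone` is a hypothetical helper, and the IP argument must be the fabdns service's cluster IP from your own cluster:

```shell
# Emit the `global` zone block used in the nodelocaldns/edge-coredns/coredns
# edits above. fabdns_global_zone is a hypothetical helper name.
fabdns_global_zone() {
    cat << EOF
global {
    forward . $1    # cluster IP of the fabdns service
}
EOF
}

fabdns_global_zone 10.109.72.43
```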
- {
- "heading": "\u4e0e\u8fb9\u7f18\u8ba1\u7b97\u6846\u67b6\u76f8\u5173\u7684\u914d\u7f6e",
- "data": ""
- },
- {
- "heading": "\u5982\u679c\u4f7f\u7528KubeEdge",
- "data": "1. \u786e\u8ba4nodelocaldns\u5728**\u8fb9\u7f18\u8282\u70b9**\u6b63\u5e38\u8fd0\u884c\n ```shell\n $ kubectl get po -n kube-system -o wide | grep nodelocaldns\n nodelocaldns-cz5h2 1/1 Running 0 56m 10.22.46.47 master \n nodelocaldns-nk26g 1/1 Running 0 47m 10.22.46.23 edge1 \n nodelocaldns-wqpbw 1/1 Running 0 17m 10.22.46.20 node1 \n ```\n 2. \u5728**\u6bcf\u4e2a\u8fb9\u7f18\u8282\u70b9**\u4e0a\u4fee\u6539edgecore\u914d\u7f6e\n ```shell\n $ vi /etc/kubeedge/config/edgecore.yaml\n \n # \u5fc5\u987b\u7981\u7528edgeMesh\n edgeMesh:\n enable: false\n \n edged:\n enable: true\n cniBinDir: /opt/cni/bin\n cniCacheDirs: /var/lib/cni/cache\n cniConfDir: /etc/cni/net.d\n networkPluginName: cni\n networkPluginMTU: 1500\n clusterDNS: 169.254.25.10 # get_cluster_info\u811a\u672c\u8f93\u51fa\u7684clusterDNS\n clusterDomain: \"root-cluster\" # get_cluster_info\u811a\u672c\u8f93\u51fa\u7684clusterDomain\n ```\n > **clusterDNS**\uff1a\u5982\u679c\u6ca1\u6709\u542f\u7528nodelocaldns\uff0c\u8bf7\u4f7f\u7528coredns service\u7684\u5730\u5740\n 3. \u5728**\u6bcf\u4e2a\u8fb9\u7f18\u8282\u70b9**\u4e0a\u91cd\u542fedgecore\n \n ```shell\n $ systemctl restart edgecore\n ```"
- },
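Before restarting edgecore, you can sanity-check that edgeMesh is actually disabled, since a still-enabled edgeMesh is an easy misconfiguration to miss. A sketch that greps the config; it runs against a sample file here, so point `EDGECORE_CONF` at `/etc/kubeedge/config/edgecore.yaml` on a real edge node:

```shell
# Verify edgeMesh is disabled in the edgecore config.
# A sample file stands in for /etc/kubeedge/config/edgecore.yaml here.
EDGECORE_CONF=$(mktemp)
cat > "$EDGECORE_CONF" << EOF
edgeMesh:
  enable: false
edged:
  enable: true
  clusterDNS: 169.254.25.10
EOF

if grep -A1 '^edgeMesh:' "$EDGECORE_CONF" | grep -q 'enable: false'; then
    echo "edgeMesh disabled: OK"
else
    echo "edgeMesh still enabled; FabEdge requires it off" >&2
fi
```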
- {
- "heading": "\u5982\u679c\u4f7f\u7528SuperEdge",
- "data": "1. \u68c0\u67e5\u670d\u52a1\u72b6\u6001\uff0c\u5982\u679c\u4e0dReady\uff0c\u8981\u5220\u9664Pod\u91cd\u5efa\n ```shell\n # \u5728master\u8282\u70b9\u6267\u884c\n $ kubectl get po -n edge-system\n application-grid-controller-84d64b86f9-29svc 1/1 Running 0 15h\n application-grid-wrapper-master-pvkv8 1/1 Running 0 15h\n application-grid-wrapper-node-dqxwv 1/1 Running 0 15h\n application-grid-wrapper-node-njzth 1/1 Running 0 15h\n edge-coredns-edge1-5758f9df57-r27nf 0/1 Running 8 15h\n edge-coredns-edge2-84fd9cfd98-79hzp 0/1 Running 8 15h\n edge-coredns-master-f8bf9975c-77nds 1/1 Running 0 15h\n edge-health-7h29k 1/1 Running 3 15h\n edge-health-admission-86c5c6dd6-r65r5 1/1 Running 0 15h\n edge-health-wcptf 1/1 Running 3 15h\n tunnel-cloud-6557fcdd67-v9h96 1/1 Running 1 15h\n tunnel-coredns-7d8b48c7ff-hhc29 1/1 Running 0 15h\n tunnel-edge-dtb9j 1/1 Running 0 15h\n tunnel-edge-zxfn6 1/1 Running 0 15h\n \n $ kubectl delete po -n edge-system edge-coredns-edge1-5758f9df57-r27nf\n pod \"edge-coredns-edge1-5758f9df57-r27nf\" deleted\n \n $ kubectl delete po -n edge-system edge-coredns-edge2-84fd9cfd98-79hzp\n pod \"edge-coredns-edge2-84fd9cfd98-79hzp\" deleted\n ```\n 2. SupeEdge\u7684master\u8282\u70b9\u4e0a\u9ed8\u8ba4\u5e26\u6709\u6c61\u70b9\uff1anode-role.kubernetes.io/master:NoSchedule\uff0c \u6240\u4ee5\u4e0d\u4f1a\u542f\u52a8fabedge-cloud-agent\uff0c\u5bfc\u81f4\u4e0d\u80fd\u548cmaster\u8282\u70b9\u4e0a\u7684Pod\u901a\u8baf\u3002\u5982\u679c\u9700\u8981\uff0c\u53ef\u4ee5\u4fee\u6539fabedge-cloud-agent\u7684DaemonSet\u914d\u7f6e\uff0c\u5bb9\u5fcd\u8fd9\u4e2a\u6c61\u70b9\u3002"
- },
- {
- "heading": "\u4e0eCNI\u76f8\u5173\u7684\u914d\u7f6e",
- "data": ""
- },
- {
- "heading": "\u5982\u679c\u4f7f\u7528Calico",
- "data": "\u4e0d\u8bba\u662f\u4ec0\u4e48\u96c6\u7fa4\u89d2\u8272, \u53ea\u8981\u96c6\u7fa4\u4f7f\u7528Calico\uff0c\u5c31\u8981\u5c06\u5176\u5b83\u6240\u6709\u96c6\u7fa4\u7684Pod\u548cService\u7684\u7f51\u6bb5\u52a0\u5165\u5f53\u524d\u96c6\u7fa4\u7684Calico\u914d\u7f6e, \u00a0\u9632\u6b62Calico\u505a\u6e90\u5730\u5740\u8f6c\u6362\uff0c\u5bfc\u81f4\u4e0d\u80fd\u901a\u8baf\u3002\n \u4f8b\u5982: host (Calico) \u00a0+ member1 (Calico) + member2 (Flannel)\n - \u5728host (Calico) \u96c6\u7fa4\u7684master\u8282\u70b9\u64cd\u4f5c\uff0c\u5c06member1 (Calico)\uff0cmember2 (Flannel)\u5730\u5740\u914d\u7f6e\u5230host\u96c6\u7fa4\u7684Calico\u4e2d\u3002\n - \u5728member1 (Calico)\u96c6\u7fa4\u7684master\u8282\u70b9\u64cd\u4f5c\uff0c\u5c06host (Calico) \uff0cmember2 (Flannel)\u5730\u5740\u914d\u7f6e\u5230member1\u96c6\u7fa4\u7684Calico\u4e2d\u3002\n - \u5728member2 (Flannel)\u65e0\u9700\u4efb\u4f55\u64cd\u4f5c\u3002\n ```shell\n $ cat > cluster-cidr-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-cluster-cidr\n spec:\n blockSize: 26\n cidr: 10.233.64.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f cluster-cidr-pool.yaml\n \n $ cat > service-cluster-ip-range-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-service-cluster-ip-range\n spec:\n blockSize: 26\n cidr: 10.233.0.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml\n ```\n > **cidr**: \u88ab\u6dfb\u52a0\u96c6\u7fa4\u7684get_cluster_info.sh\u8f93\u51fa\u7684cluster-cidr\u548cservice-cluster-ip-range"
- },
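The two IPPool manifests above differ only in name and cidr, so the pattern can be factored into a small generator — `remote_cluster_pool` is a hypothetical helper, and the `disabled: true` / `natOutgoing: false` pair is the point: Calico learns the remote CIDR (so it skips source NAT) but never allocates addresses from it:

```shell
# Emit a disabled Calico IPPool for a remote cluster's CIDR, mirroring the
# manifests above. remote_cluster_pool NAME CIDR is a hypothetical helper.
remote_cluster_pool() {
    cat << EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: $1
spec:
  blockSize: 26
  cidr: $2
  natOutgoing: false
  disabled: true
  ipipMode: Always
EOF
}

remote_cluster_pool cluster-beijing-cluster-cidr 10.233.64.0/18
# pipe the output to: calicoctl.sh create -f -
```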
- {
- "heading": "\u5e38\u89c1\u95ee\u9898",
- "data": "1. \u6709\u7684\u7f51\u7edc\u73af\u5883\u5b58\u5728\u975e\u5bf9\u79f0\u8def\u7531\uff0c\u987b\u8981\u5728\u4e91\u7aef\u6240\u6709\u8282\u70b9\u5173\u95edrp_filter ```shell $ sudo for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 >$i; done # \u4fdd\u5b58\u914d\u7f6e $ sudo vi /etc/sysctl.conf net.ipv4.conf.default.rp_filter=0 net.ipv4.conf.all.rp_filter=0 ``` 2. \u62a5\u9519\uff1a\u201cError: cannot re-use a name that is still in use\u201d\u3002\u8fd9\u662f\u56e0\u4e3afabedge\u5df2\u7ecf\u5b89\u88c5\uff0c\u4f7f\u7528\u4ee5\u4e0b\u547d\u4ee4\u5378\u8f7d\u540e\u91cd\u8bd5\u3002 ```shell $ helm uninstall -n fabedge fabedge release \"fabedge\" uninstalled ```"
- },
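The rp_filter loop can be rehearsed on a scratch directory before touching `/proc`; note that a plain `sudo for ...` one-liner fails because `for` is a shell keyword rather than a command sudo can execute, so each write is wrapped in `sh -c`. A sketch:

```shell
# Rehearse the rp_filter loop on a scratch tree. On a real cloud node, set
# CONF_ROOT=/proc/sys/net/ipv4/conf and prefix the sh -c call with sudo.
CONF_ROOT=$(mktemp -d)
mkdir -p "$CONF_ROOT/all" "$CONF_ROOT/eth0"
echo 1 > "$CONF_ROOT/all/rp_filter"
echo 1 > "$CONF_ROOT/eth0/rp_filter"

for f in "$CONF_ROOT"/*/rp_filter; do
    sh -c "echo 0 > $f"   # sudo sh -c "..." on a real node
done

cat "$CONF_ROOT/all/rp_filter"
# prints: 0
```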
- {
- "additional_info": "[toc] - **\u4e91\u7aef\u96c6\u7fa4**\uff1a\u6807\u51c6\u7684K8S\u96c6\u7fa4\uff0c\u4f4d\u4e8e\u4e91\u7aef\uff0c\u63d0\u4f9b\u4e91\u7aef\u7684\u8ba1\u7b97\u80fd\u529b - **\u8fb9\u7f18\u8282\u70b9**\uff1a\u901a\u8fc7KubeEdge\u7b49\u8fb9\u7f18\u8ba1\u7b97\u6846\u67b6\uff0c\u52a0\u5165\u4e91\u7aef\u96c6\u7fa4\u7684\u8fb9\u7f18\u4fa7\u8282\u70b9\uff0c\u63d0\u4f9b\u8fb9\u7f18\u8ba1\u7b97\u80fd\u529b - **\u8fb9\u7f18\u96c6\u7fa4**\uff1a\u6807\u51c6\u7684K8S\u96c6\u7fa4\uff0c\u4f4d\u4e8e\u8fb9\u7f18\u4fa7\uff0c\u63d0\u4f9b\u8fb9\u7f18\u8ba1\u7b97\u80fd\u529b - **\u4e3b\u96c6\u7fa4**\uff1a\u4e00\u4e2a\u9009\u5b9a\u7684\u4e91\u7aef\u96c6\u7fa4\uff0c\u7528\u4e8e\u7ba1\u7406\u5176\u5b83\u96c6\u7fa4\u7684\u8de8\u96c6\u7fa4\u901a\u8baf\uff0cFabEdge\u90e8\u7f72\u7684\u7b2c\u4e00\u4e2a\u96c6\u7fa4\u5fc5\u987b\u662f\u4e3b\u96c6\u7fa4 - **\u6210\u5458\u96c6\u7fa4**\uff1a\u4e00\u4e2a\u8fb9\u7f18\u96c6\u7fa4\uff0c\u6ce8\u518c\u5230\u4e3b\u96c6\u7fa4\uff0c\u4e0a\u62a5\u672c\u96c6\u7fa4\u7aef\u70b9\u7f51\u7edc\u914d\u7f6e\u4fe1\u606f\u7528\u4e8e\u591a\u96c6\u7fa4\u901a\u8baf - **Community**\uff1aFabEdge\u5b9a\u4e49\u7684CRD\uff0c\u5206\u4e3a\u4e24\u7c7b\uff1a - **\u8282\u70b9\u7c7b\u578b**\uff1a\u5b9a\u4e49\u96c6\u7fa4\u5185\u591a\u4e2a\u8fb9\u7f18\u8282\u70b9\u4e4b\u95f4\u7684\u901a\u8baf - **\u96c6\u7fa4\u7c7b\u578b**\uff1a\u5b9a\u4e49\u591a\u4e2a\u8fb9\u7f18\u96c6\u7fa4\u4e4b\u95f4\u7684\u901a\u8baf - Kubernetes (v1.18.8\uff0c1.22.7) - Flannel (v0.14.0) \u6216\u8005 Calico (v3.16.5) - KubeEdge \uff08v1.5\uff09\u6216\u8005 SuperEdge\uff08v0.5.0\uff09\u6216\u8005 OpenYurt\uff08 v0.4.1\uff09 1. \u786e\u4fdd\u9632\u706b\u5899\u6216\u5b89\u5168\u7ec4\u5141\u8bb8\u4ee5\u4e0b\u534f\u8bae\u548c\u7aef\u53e3 - ESP(50)\uff0cUDP/500\uff0cUDP/4500 2. \u83b7\u53d6\u96c6\u7fa4\u914d\u7f6e\u4fe1\u606f\uff0c\u4f9b\u540e\u9762\u4f7f\u7528 ```shell $ curl -s http://116.62.127.76/installer/v0.5.0/get_cluster_info.sh | bash - This may take some time. Please wait. 
clusterDNS : 169.254.25.10 clusterDomain : root-cluster cluster-cidr : 10.233.64.0/18 service-cluster-ip-range : 10.233.0.0/18 ``` 1. \u4e3a**\u6240\u6709\u8fb9\u7f18\u8282\u70b9**\u6dfb\u52a0\u6807\u7b7e ```shell $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge= node/edge1 labeled $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge= node/edge2 labeled $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 22h v1.18.2 edge2 Ready edge 22h v1.18.2 master Ready master 22h v1.18.2 node1 Ready 22h v1.18.2 ``` 2. \u5b89\u88c5FabEdge ```shell $ curl 116.62.127.76/installer/v0.5.0/install.sh | bash -s -- --cluster-name beijing --cluster-role host --cluster-zone beijing --cluster-region china --connectors node1 --connector-public-addresses 10.22.46.47 --chart http://116.62.127.76/fabedge-0.5.0.tgz ``` > \u8bf4\u660e\uff1a > **--cluster-name**: \u96c6\u7fa4\u540d\u79f0 > **--cluster-role**: \u96c6\u7fa4\u89d2\u8272 > **--cluster-zone**: \u96c6\u7fa4\u6240\u5728\u7684\u533a > **--cluster-region**: \u96c6\u7fa4\u6240\u5728\u7684\u533a\u57df > **--connectors**: connectors\u6240\u5728\u8282\u70b9\u4e3b\u673a\u540d > **--connector-public-addresses**: connectors\u6240\u5728\u8282\u70b9\u7684ip\u5730\u5740\uff0c\u4ece\u8fb9\u7f18\u8282\u70b9\u5fc5\u987b\u7f51\u7edc\u53ef\u8fbe 3. 
\u786e\u8ba4\u90e8\u7f72\u6b63\u5e38 ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.18.2 edge2 Ready edge 5h21m v1.18.2 master Ready master 5h29m v1.18.2 node1 Ready connector 5h23m v1.18.2 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h nodelocaldns-fmx7f 1/1 Running 0 17h nodelocaldns-kcz6b 1/1 Running 0 17h nodelocaldns-pwpm4 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-edge1 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s ``` 4. \u4e3a\u9700\u8981\u901a\u8baf\u7684\u8fb9\u7f18\u8282\u70b9\u521b\u5efaCommunity ```shell $ cat > node-community.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: beijing-edge-nodes spec: members: - beijing.edge1 - beijing.edge2 EOF $ kubectl apply -f node-community.yaml ``` 5. \u6839\u636e\u4f7f\u7528\u7684[\u8fb9\u7f18\u8ba1\u7b97\u6846\u67b6](#%E5%92%8C%E8%BE%B9%E7%BC%98%E8%AE%A1%E7%AE%97%E6%A1%86%E6%9E%B6%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE)\u4fee\u6539\u76f8\u5173\u914d\u7f6e 5. 
\u6839\u636e\u4f7f\u7528\u7684[CNI](#%E5%92%8CCNI%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE)\u4fee\u6539\u76f8\u5173\u914d\u7f6e \u5982\u679c\u6709\u6210\u5458\u96c6\u7fa4\uff0c\u5148\u5728\u4e3b\u96c6\u7fa4\u6ce8\u518c\u6240\u6709\u7684\u6210\u5458\u96c6\u7fa4\uff0c\u7136\u540e\u5728\u6bcf\u4e2a\u6210\u5458\u96c6\u7fa4\u90e8\u7f72FabEdge 1. \u5728**\u4e3b\u96c6\u7fa4**\u6dfb\u52a0\u4e00\u4e2a\u540d\u5b57\u53eb\u201cshanghai\u201d\u7684\u6210\u5458\u96c6\u7fa4\uff0c\u83b7\u53d6Token\u4f9b\u6ce8\u518c\u4f7f\u7528 ```shell $ cat > shanghai.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Cluster metadata: name: shanghai # \u96c6\u7fa4\u540d\u5b57 EOF $ kubectl apply -f shanghai.yaml $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}' eyJ------\u7701\u7565\u5185\u5bb9-----9u0 ``` 2. \u4e3a**\u6240\u6709\u8fb9\u7f18\u8282\u70b9**\u6dfb\u52a0\u6807\u7b7e ```shell $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge= node/edge1 labeled $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge= node/edge2 labeled $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 22h v1.18.2 edge2 Ready edge 22h v1.18.2 master Ready master 22h v1.18.2 node1 Ready 22h v1.18.2 ``` 3. 
\u5728**\u6210\u5458\u96c6\u7fa4**\u5b89\u88c5FabEdage ```shell curl 116.62.127.76/installer/v0.5.0/install.sh | bash -s -- --cluster-name shanghai --cluster-role member --cluster-zone shanghai --cluster-region china --connectors node1 --chart http://116.62.127.76/fabedge-0.5.0.tgz --server-serviceHub-api-server https://10.22.46.47:30304 --host-operator-api-server https://10.22.46.47:30303 --connector-public-addresses 10.22.46.26 --init-token ey...Jh ``` > \u8bf4\u660e\uff1a > **--cluster-name**: \u96c6\u7fa4\u540d\u79f0 > **--cluster-role**: \u96c6\u7fa4\u89d2\u8272 > **--cluster-zone\uff1a** \u96c6\u7fa4\u6240\u5728\u7684\u533a > **--cluster-region\uff1a**\u96c6\u7fa4\u6240\u5728\u7684\u533a\u57df > **--server-serviceHub-api-server**: host\u96c6\u7fa4serviceHub\u670d\u52a1\u7684\u5730\u5740\u548c\u7aef\u53e3 > **--host-operator-api-server**: host\u96c6\u7fa4operator-api\u670d\u52a1\u7684\u5730\u5740\u548c\u7aef\u53e3 > **--connector-public-addresses**: member\u96c6\u7fa4connectors\u6240\u5728\u8282\u70b9\u7684ip\u5730\u5740 > **--init-token**: host\u96c6\u7fa4\u83b7\u53d6\u7684token 4. 
\u786e\u8ba4\u90e8\u7f72\u6b63\u5e38 ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.18.2 edge2 Ready edge 5h21m v1.18.2 master Ready master 5h29m v1.18.2 node1 Ready connector 5h23m v1.18.2 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h nodelocaldns-fmx7f 1/1 Running 0 17h nodelocaldns-kcz6b 1/1 Running 0 17h nodelocaldns-pwpm4 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-edge1 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s ``` 1. 
\u5728\u4e3b\u96c6\u7fa4\uff0c\u628a\u6240\u6709\u987b\u8981\u901a\u8baf\u7684\u96c6\u7fa4\u52a0\u5165\u4e00\u4e2aCommunity ```shell # \u5728master\u8282\u70b9\u64cd\u4f5c $ cat > community.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: all-clusters spec: members: - shanghai.connector # {\u96c6\u7fa4\u540d\u79f0}.connector - beijing.connector # {\u96c6\u7fa4\u540d\u79f0}.connector EOF $ kubectl apply -f community.yaml ``` \u4fee\u6539\u7684\u96c6\u7fa4DNS\u7ec4\u4ef6\uff1a 1\uff09\u5982\u679c\u4f7f\u7528\u4e86nodelocaldns\uff0c\u53ea\u9700\u8981\u4fee\u6539nodelocaldns, \u5176\u5b83\u914d\u7f6e\u4e0d\u52a8 2\uff09\u5982\u679c\u4f7f\u7528SuperEdge\uff0c\u4fee\u6539coredns\u548cedge-coredns\uff0c\u5176\u5b83\u914d\u7f6e\u4e0d\u52a8 3\uff09\u5176\u5b83\u60c5\u51b5\u53ea\u9700\u8981\u4fee\u6539coredns 1. \u914d\u7f6enodelocaldns ```shell $ kubectl -n kube-system edit cm nodelocaldns global:53 { errors cache 30 reload bind 169.254.25.10 # \u672c\u5730bind\u5730\u5740\uff0c\u53c2\u8003\u5176\u5b83\u914d\u7f6e\u6bb5\u4e2d\u7684bind forward . 10.233.12.205 # fabdns\u7684service IP\u5730\u5740 } ``` 2. \u914d\u7f6eedge-coredns ```shell $ kubectl -n edge-system edit cm edge-coredns global { forward . 10.244.51.126 # fabdns\u7684service IP\u5730\u5740 } ``` 3. \u914d\u7f6ecoredns ```shell $ kubectl -n kube-system edit cm coredns global { forward . 10.109.72.43 # fabdns\u7684service IP\u5730\u5740 } ``` 4. \u91cd\u542fcoredns\u3001edge-coredns\u548cnodelocaldns\u4f7f\u914d\u7f6e\u751f\u6548 1. \u786e\u8ba4nodelocaldns\u5728**\u8fb9\u7f18\u8282\u70b9**\u6b63\u5e38\u8fd0\u884c ```shell $ kubectl get po -n kube-system -o wide | grep nodelocaldns nodelocaldns-cz5h2 1/1 Running 0 56m 10.22.46.47 master nodelocaldns-nk26g 1/1 Running 0 47m 10.22.46.23 edge1 nodelocaldns-wqpbw 1/1 Running 0 17m 10.22.46.20 node1 ``` 2. 
\u5728**\u6bcf\u4e2a\u8fb9\u7f18\u8282\u70b9**\u4e0a\u4fee\u6539edgecore\u914d\u7f6e ```shell $ vi /etc/kubeedge/config/edgecore.yaml # \u5fc5\u987b\u7981\u7528edgeMesh edgeMesh: enable: false edged: enable: true cniBinDir: /opt/cni/bin cniCacheDirs: /var/lib/cni/cache cniConfDir: /etc/cni/net.d networkPluginName: cni networkPluginMTU: 1500 clusterDNS: 169.254.25.10 # get_cluster_info\u811a\u672c\u8f93\u51fa\u7684clusterDNS clusterDomain: \"root-cluster\" # get_cluster_info\u811a\u672c\u8f93\u51fa\u7684clusterDomain ``` > **clusterDNS**\uff1a\u5982\u679c\u6ca1\u6709\u542f\u7528nodelocaldns\uff0c\u8bf7\u4f7f\u7528coredns service\u7684\u5730\u5740 3. \u5728**\u6bcf\u4e2a\u8fb9\u7f18\u8282\u70b9**\u4e0a\u91cd\u542fedgecore ```shell $ systemctl restart edgecore ``` 1. \u68c0\u67e5\u670d\u52a1\u72b6\u6001\uff0c\u5982\u679c\u4e0dReady\uff0c\u8981\u5220\u9664Pod\u91cd\u5efa ```shell # \u5728master\u8282\u70b9\u6267\u884c $ kubectl get po -n edge-system application-grid-controller-84d64b86f9-29svc 1/1 Running 0 15h application-grid-wrapper-master-pvkv8 1/1 Running 0 15h application-grid-wrapper-node-dqxwv 1/1 Running 0 15h application-grid-wrapper-node-njzth 1/1 Running 0 15h edge-coredns-edge1-5758f9df57-r27nf 0/1 Running 8 15h edge-coredns-edge2-84fd9cfd98-79hzp 0/1 Running 8 15h edge-coredns-master-f8bf9975c-77nds 1/1 Running 0 15h edge-health-7h29k 1/1 Running 3 15h edge-health-admission-86c5c6dd6-r65r5 1/1 Running 0 15h edge-health-wcptf 1/1 Running 3 15h tunnel-cloud-6557fcdd67-v9h96 1/1 Running 1 15h tunnel-coredns-7d8b48c7ff-hhc29 1/1 Running 0 15h tunnel-edge-dtb9j 1/1 Running 0 15h tunnel-edge-zxfn6 1/1 Running 0 15h $ kubectl delete po -n edge-system edge-coredns-edge1-5758f9df57-r27nf pod \"edge-coredns-edge1-5758f9df57-r27nf\" deleted $ kubectl delete po -n edge-system edge-coredns-edge2-84fd9cfd98-79hzp pod \"edge-coredns-edge2-84fd9cfd98-79hzp\" deleted ``` 2. 
SupeEdge\u7684master\u8282\u70b9\u4e0a\u9ed8\u8ba4\u5e26\u6709\u6c61\u70b9\uff1anode-role.kubernetes.io/master:NoSchedule\uff0c \u6240\u4ee5\u4e0d\u4f1a\u542f\u52a8fabedge-cloud-agent\uff0c\u5bfc\u81f4\u4e0d\u80fd\u548cmaster\u8282\u70b9\u4e0a\u7684Pod\u901a\u8baf\u3002\u5982\u679c\u9700\u8981\uff0c\u53ef\u4ee5\u4fee\u6539fabedge-cloud-agent\u7684DaemonSet\u914d\u7f6e\uff0c\u5bb9\u5fcd\u8fd9\u4e2a\u6c61\u70b9\u3002 \u4e0d\u8bba\u662f\u4ec0\u4e48\u96c6\u7fa4\u89d2\u8272, \u53ea\u8981\u96c6\u7fa4\u4f7f\u7528Calico\uff0c\u5c31\u8981\u5c06\u5176\u5b83\u6240\u6709\u96c6\u7fa4\u7684Pod\u548cService\u7684\u7f51\u6bb5\u52a0\u5165\u5f53\u524d\u96c6\u7fa4\u7684Calico\u914d\u7f6e, \u00a0\u9632\u6b62Calico\u505a\u6e90\u5730\u5740\u8f6c\u6362\uff0c\u5bfc\u81f4\u4e0d\u80fd\u901a\u8baf\u3002 \u4f8b\u5982: host (Calico) \u00a0+ member1 (Calico) + member2 (Flannel) - \u5728host (Calico) \u96c6\u7fa4\u7684master\u8282\u70b9\u64cd\u4f5c\uff0c\u5c06member1 (Calico)\uff0cmember2 (Flannel)\u5730\u5740\u914d\u7f6e\u5230host\u96c6\u7fa4\u7684Calico\u4e2d\u3002 - \u5728member1 (Calico)\u96c6\u7fa4\u7684master\u8282\u70b9\u64cd\u4f5c\uff0c\u5c06host (Calico) \uff0cmember2 (Flannel)\u5730\u5740\u914d\u7f6e\u5230member1\u96c6\u7fa4\u7684Calico\u4e2d\u3002 - \u5728member2 (Flannel)\u65e0\u9700\u4efb\u4f55\u64cd\u4f5c\u3002 ```shell $ cat > cluster-cidr-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-cluster-cidr spec: blockSize: 26 cidr: 10.233.64.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f cluster-cidr-pool.yaml $ cat > service-cluster-ip-range-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-service-cluster-ip-range spec: blockSize: 26 cidr: 10.233.0.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml ``` > **cidr**: 
\u88ab\u6dfb\u52a0\u96c6\u7fa4\u7684get_cluster_info.sh\u8f93\u51fa\u7684cluster-cidr\u548cservice-cluster-ip-range 1. \u6709\u7684\u7f51\u7edc\u73af\u5883\u5b58\u5728\u975e\u5bf9\u79f0\u8def\u7531\uff0c\u987b\u8981\u5728\u4e91\u7aef\u6240\u6709\u8282\u70b9\u5173\u95edrp_filter ```shell $ sudo for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 >$i; done # \u4fdd\u5b58\u914d\u7f6e $ sudo vi /etc/sysctl.conf net.ipv4.conf.default.rp_filter=0 net.ipv4.conf.all.rp_filter=0 ``` 2. \u62a5\u9519\uff1a\u201cError: cannot re-use a name that is still in use\u201d\u3002\u8fd9\u662f\u56e0\u4e3afabedge\u5df2\u7ecf\u5b89\u88c5\uff0c\u4f7f\u7528\u4ee5\u4e0b\u547d\u4ee4\u5378\u8f7d\u540e\u91cd\u8bd5\u3002 ```shell $ helm uninstall -n fabedge fabedge release \"fabedge\" uninstalled ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "get-started-v0.6.0.md"
- },
- "content": [
- {
- "heading": "Getting Started",
- "data": "[toc]"
- },
- {
- "heading": "Terminology",
- "data": "- **Cloud Cluster**\uff1aa standard k8s cluster, located at the cloud side, providing the cloud computing capability.\n - **Edge Cluster**: a standard k8s cluster, located at the edge side, providing the edge computing capability.\n - **Connector Node**: a k8s node, located at the cloud side, connector is responsible for communication between cloud side and edge side. Since connector node will have large traffic burden, it's better not to run other programs on them.\n - **Edge Node**: a k8s node, located at the edge side, joining the cloud cluster using the framework, such as KubeEdge.\n - **Host Cluster**: a selective cloud cluster, used to manage the cross-cluster communication. The 1st cluster deployed by FabEdge must be host cluster.\n - **Member Cluster**: an edge cluster, registered into the host cluster, reports the network information to host cluster.\n - **Community**\uff1aK8S CRD defined by FabEdge\uff0cthere are two types\uff1a\n - **Node Type**\uff1a to define the communication between nodes within the same cluster\n - **Cluster Type**\uff1ato define the cross-cluster communication"
- },
- {
- "heading": "Prerequisite",
- "data": "- Kubernetes (v1.18.8\uff0c1.22.7)\n - Flannel (v0.14.0) or Calico (v3.16.5)\n - KubeEdge \uff08v1.5\uff09or SuperEdge\uff08v0.5.0\uff09or OpenYurt\uff08 v0.4.1\uff09"
- },
- {
- "heading": "Preparation",
- "data": "1. Make sure the following ports are allowed by firewall or security group.\n - ESP(50)\uff0cUDP/500\uff0cUDP/4500\n 2. Collect the configuration of the current cluster\n ```shell\n $ curl -s http://116.62.127.76/installer/v0.6.0/get_cluster_info.sh | bash -\n This may take some time. Please wait.\n \n clusterDNS : 169.254.25.10\n clusterDomain : root-cluster\n cluster-cidr : 10.233.64.0/18\n service-cluster-ip-range : 10.233.0.0/18\n ```"
- },
- {
- "heading": "Deploy FabEdge on the host cluster",
- "data": "1. Deploy FabEdge\n ```shell\n $ curl 116.62.127.76/installer/v0.6.0/quickstart.sh | bash -s -- \\\n --cluster-name beijing \\\n --cluster-role host \\\n --cluster-zone beijing \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.47 \\\n --chart http://116.62.127.76/fabedge-0.6.0.tgz\n ```\n > Note\uff1a\n > **--connectors**: The names of k8s nodes which connectors are located, those nodes will be labeled as node-role.kubernetes.io/connector\n > **--edges:** The names of edge nodes\uff0cthose nodes will be labeled as node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: The range of IPv4 addresses for the edge pod, if you use Calico, this is required. Please make sure the value is not overlapped with cluster CIDR of your cluster.\n > **--connector-public-addresses**: ip addresses of k8s nodes which connectors are located\n 2. Verify the deployment\n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n 
fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```\n 3. Create community for edges which need to communicate with each other\n ```shell\n $ cat > node-community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: beijing-edge-nodes # community name\n spec:\n members:\n - beijing.edge1 # format\uff1a{cluster name}.{edge node name}\n - beijing.edge2\n EOF\n \n $ kubectl apply -f node-community.yaml\n ```\n 4. Update the [edge computing framework](#%E5%92%8C%E8%BE%B9%E7%BC%98%E8%AE%A1%E7%AE%97%E6%A1%86%E6%9E%B6%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) dependent configuration\n 5. Update the [CNI](#%E5%92%8CCNI%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) dependent configuration"
- },
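The note above asks that `--edge-pod-cidr` not overlap the cluster CIDR; a quick pre-flight check can catch that before installing. A sketch in plain POSIX shell arithmetic (IPv4 only; `cidr_overlap` is a hypothetical helper, and the sample CIDRs are illustrative):

```shell
# Report whether two IPv4 CIDRs overlap: compare both network addresses
# under the shorter (less specific) of the two prefixes.
ip2int() {
    IFS=. read -r a b c d << EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

cidr_overlap() {
    n1=$(ip2int "${1%/*}"); p1=${1#*/}
    n2=$(ip2int "${2%/*}"); p2=${2#*/}
    p=$p1; [ "$p2" -lt "$p" ] && p=$p2
    mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
    [ $(( n1 & mask )) -eq $(( n2 & mask )) ] && echo overlap || echo ok
}

cidr_overlap 10.10.0.0/16 10.10.64.0/18   # prints: overlap
cidr_overlap 10.10.0.0/18 10.10.64.0/18   # prints: ok
```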
- {
- "heading": "Deploy FabEdge in the member cluster",
- "data": "If any member cluster, register it in the host cluster first, then deploy FabEdge in it.\n 1. in the **host cluster**\uff0ccreate an edge cluster named \"shanghai\". Get the token for registration.\n \n ```shell\n # Run in the host cluster\n $ cat > shanghai.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n metadata:\n name: shanghai # cluster name\n EOF\n \n $ kubectl apply -f shanghai.yaml\n \n $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}'\n eyJ------omitted-----9u0\n ```\n 3. Deploy FabEdge in the member cluster\n \n ```shell\n curl 116.62.127.76/installer/v0.6.0/quickstart.sh | bash -s -- \\\n --cluster-name shanghai \\\n --cluster-role member \\\n --cluster-zone shanghai \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.26 \\\n --chart http://116.62.127.76/fabedge-0.6.0.tgz \\\n --service-hub-api-server https://10.22.46.47:30304 \\\n --operator-api-server https://10.22.46.47:30303 \\\n --init-token ey...Jh\n ```\n > Note:\n > **--connectors**: The names of k8s nodes which connectors are located, those nodes will be labeled as node-role.kubernetes.io/connector\n > **--edges:** The names of edge nodes\uff0cthose nodes will be labeled as node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: The range of IPv4 addresses for the edge pod, if you use Calico, this is required. Please make sure the value is not overlapped with cluster CIDR of your cluster.\n > **--connector-public-addresses**: ip address of k8s nodes on which connectors are located in the member cluster\n > **--init-token**: token when the member cluster is added in the host cluster\n > **--service-hub-api-server**: endpoint of serviceHub in the host cluster\n > **--operator-api-server**: endpoint of operator-api in the host cluster\n \n 4. 
Verify the deployment\n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```"
- },
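The notes above require that `--edge-pod-cidr` not overlap the cluster CIDR. Two IPv4 CIDRs overlap exactly when they agree on the shorter of the two prefixes, which can be checked with a small pure-bash sketch (a hypothetical helper, not part of the FabEdge installer):

```shell
# Check whether two IPv4 CIDRs overlap (hypothetical helper, bash only).
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Overlap iff both networks agree on the shorter of the two prefixes.
cidr_overlap() {
  local n1=${1%/*} l1=${1#*/} n2=${2%/*} l2=${2#*/}
  local min=$(( l1 < l2 ? l1 : l2 ))
  local mask=$(( min == 0 ? 0 : (0xFFFFFFFF << (32 - min)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$n1") & mask )) -eq $(( $(ip2int "$n2") & mask )) ]
}

# The sample cluster-cidr and service-cluster-ip-range are adjacent, not overlapping.
cidr_overlap 10.233.0.0/18 10.233.64.0/18 && echo overlap || echo ok   # prints "ok"
```

Running the check on a candidate `--edge-pod-cidr` against the `cluster-cidr` reported by `get_cluster_info.sh` catches the misconfiguration before installation.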
- {
- "heading": "Enable multi-cluster communication",
- "data": "1. In the **host cluster**, create a community for all clusters that need to communicate with each other\n ```shell\n $ cat > community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: all-clusters\n spec:\n members:\n - shanghai.connector # format: {cluster name}.connector\n - beijing.connector # format: {cluster name}.connector\n EOF\n \n $ kubectl apply -f community.yaml\n ```"
- },
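The member names above follow the `{cluster name}.connector` convention, so a cluster-type Community manifest for any number of clusters can be generated with a small shell function (a hypothetical helper; the manifest fields match the example above):

```shell
# Emit a cluster-type Community manifest for the given clusters.
# Hypothetical helper; member names follow the {cluster name}.connector convention.
make_community() {
  local name=$1; shift
  printf 'apiVersion: fabedge.io/v1alpha1\nkind: Community\nmetadata:\n  name: %s\nspec:\n  members:\n' "$name"
  local c
  for c in "$@"; do
    printf '    - %s.connector\n' "$c"
  done
}

make_community all-clusters beijing shanghai
```

The output can be piped straight into `kubectl apply -f -` on the host cluster.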
- {
- "heading": "Enable multi-cluster service discovery",
- "data": "The cluster DNS components need to be modified:\n - if `nodelocaldns` is used, modify `nodelocaldns` only,\n - if SuperEdge `edge-coredns` is used, modify `coredns` and `edge-coredns`,\n - otherwise modify `coredns`\n 1. Update `nodelocaldns`\n ```shell\n $ kubectl -n kube-system edit cm nodelocaldns\n global:53 {\n errors\n cache 30\n reload\n bind 169.254.25.10 # local bind address\n forward . 10.233.12.205 # cluster IP of the fab-dns service\n }\n ```\n 2. Update `edge-coredns`\n ```shell\n $ kubectl -n edge-system edit cm edge-coredns\n global {\n forward . 10.244.51.126 # cluster IP of the fab-dns service\n }\n ```\n 3. Update `coredns`\n ```shell\n $ kubectl -n kube-system edit cm coredns\n global {\n forward . 10.109.72.43 # cluster IP of the fab-dns service\n }\n ```\n \n 4. Restart coredns, edge-coredns, or nodelocaldns for the changes to take effect"
- },
- {
- "heading": "Edge computing framework dependent configuration",
- "data": ""
- },
- {
- "heading": "for KubeEdge",
- "data": "1. Make sure `nodelocaldns` is running on all edge nodes\n ```shell\n $ kubectl get po -n kube-system -o wide | grep nodelocaldns\n nodelocaldns-cz5h2 1/1 Running 0 56m 10.22.46.47 master \n nodelocaldns-nk26g 1/1 Running 0 47m 10.22.46.23 edge1 \n nodelocaldns-wqpbw 1/1 Running 0 17m 10.22.46.20 node1 \n ```\n 2. Update `edgecore` on all edge nodes\n ```shell\n $ vi /etc/kubeedge/config/edgecore.yaml\n \n # edgeMesh must be disabled\n edgeMesh:\n enable: false\n \n edged:\n enable: true\n cniBinDir: /opt/cni/bin\n cniCacheDirs: /var/lib/cni/cache\n cniConfDir: /etc/cni/net.d\n networkPluginName: cni\n networkPluginMTU: 1500\n clusterDNS: 169.254.25.10 # clusterDNS from the get_cluster_info script output\n clusterDomain: \"root-cluster\" # clusterDomain from the get_cluster_info script output\n ```\n > **clusterDNS**: if nodelocaldns is not used, the coredns service address can be used instead.\n 3. Restart `edgecore` on all edge nodes\n ```shell\n $ systemctl restart edgecore\n ```"
- },
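To double-check the edgecore settings against the `get_cluster_info` output, the relevant keys can be pulled out with awk. The snippet below runs against an inline sample so it is self-contained; on an edge node, point it at `/etc/kubeedge/config/edgecore.yaml` instead:

```shell
# Self-contained sample; on an edge node use /etc/kubeedge/config/edgecore.yaml.
cat > /tmp/edgecore-sample.yaml << 'EOF'
edgeMesh:
  enable: false
edged:
  enable: true
  clusterDNS: 169.254.25.10
  clusterDomain: "root-cluster"
EOF

# Print the DNS-related keys for comparison with the get_cluster_info output.
awk -F': *' '$1 ~ /clusterDNS|clusterDomain/ {print $1 "=" $2}' /tmp/edgecore-sample.yaml

# Confirm edgeMesh is disabled.
grep -A1 '^edgeMesh:' /tmp/edgecore-sample.yaml | grep -q 'enable: false' && echo 'edgeMesh disabled'
```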
- {
- "heading": "for SuperEdge",
- "data": "1. Check the services; if a pod is not ready, delete it so it is rebuilt\n ```shell\n $ kubectl get po -n edge-system\n application-grid-controller-84d64b86f9-29svc 1/1 Running 0 15h\n application-grid-wrapper-master-pvkv8 1/1 Running 0 15h\n application-grid-wrapper-node-dqxwv 1/1 Running 0 15h\n application-grid-wrapper-node-njzth 1/1 Running 0 15h\n edge-coredns-edge1-5758f9df57-r27nf 0/1 Running 8 15h\n edge-coredns-edge2-84fd9cfd98-79hzp 0/1 Running 8 15h\n edge-coredns-master-f8bf9975c-77nds 1/1 Running 0 15h\n edge-health-7h29k 1/1 Running 3 15h\n edge-health-admission-86c5c6dd6-r65r5 1/1 Running 0 15h\n edge-health-wcptf 1/1 Running 3 15h\n tunnel-cloud-6557fcdd67-v9h96 1/1 Running 1 15h\n tunnel-coredns-7d8b48c7ff-hhc29 1/1 Running 0 15h\n tunnel-edge-dtb9j 1/1 Running 0 15h\n tunnel-edge-zxfn6 1/1 Running 0 15h\n \n $ kubectl delete po -n edge-system edge-coredns-edge1-5758f9df57-r27nf\n pod \"edge-coredns-edge1-5758f9df57-r27nf\" deleted\n \n $ kubectl delete po -n edge-system edge-coredns-edge2-84fd9cfd98-79hzp\n pod \"edge-coredns-edge2-84fd9cfd98-79hzp\" deleted\n ```\n 2. By default the master node has the taint `node-role.kubernetes.io/master:NoSchedule`, which prevents fabedge-cloud-agent from starting there, so pods on the master node cannot communicate with pods on other nodes. If needed, modify the fabedge-cloud-agent DaemonSet to tolerate this taint."
- },
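One way to apply the change in step 2 is to add a toleration to the fabedge-cloud-agent DaemonSet's pod template, for example via `kubectl -n fabedge edit ds fabedge-cloud-agent`. A sketch of the fragment, using standard Kubernetes toleration fields:

```yaml
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
```

With this toleration the agent can schedule onto the tainted master, so pods there become reachable from other nodes.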
- {
- "heading": "CNI-dependent Configurations",
- "data": ""
- },
- {
- "heading": "for Calico",
- "data": "Regardless of the cluster role, every cluster that uses Calico must add the Pod and Service network segments of all other clusters to its Calico configuration; this prevents Calico from performing source address translation, which would break communication.\n An example with the clusters: host (Calico) + member1 (Calico) + member2 (Flannel)\n * on the host (Calico) cluster, add the addresses of the member1 (Calico) and member2 (Flannel) clusters\n * on the member1 (Calico) cluster, add the addresses of the host (Calico) and member2 (Flannel) clusters\n * on the member2 (Flannel) cluster, no configuration is required.\n ```shell\n $ cat > cluster-cidr-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-cluster-cidr\n spec:\n blockSize: 26\n cidr: 10.233.64.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f cluster-cidr-pool.yaml\n \n $ cat > service-cluster-ip-range-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-service-cluster-ip-range\n spec:\n blockSize: 26\n cidr: 10.233.0.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml\n ```"
- },
- {
- "heading": "FAQ",
- "data": "1. If asymmetric routes exist, disable **rp_filter** on all cloud nodes\n ```shell\n $ for i in /proc/sys/net/ipv4/conf/*/rp_filter; do sudo sh -c \"echo 0 > $i\"; done\n \n # save the configuration\n $ sudo vi /etc/sysctl.conf\n net.ipv4.conf.default.rp_filter=0\n net.ipv4.conf.all.rp_filter=0\n ```\n 2. If you get the error \"Error: cannot re-use a name that is still in use\", FabEdge is already installed; uninstall it and try again.\n ```shell\n $ helm uninstall -n fabedge fabedge\n release \"fabedge\" uninstalled\n ```"
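The rp_filter write needs root for each file, so the redirection has to happen inside a root shell. The loop below dry-runs the pattern against scratch files so it can be tried anywhere; on a real node, substitute `/proc/sys/net/ipv4/conf/*/rp_filter` and wrap the write in `sudo sh -c`:

```shell
# Dry run of the rp_filter loop against scratch files.
# On a real node: for i in /proc/sys/net/ipv4/conf/*/rp_filter; do sudo sh -c "echo 0 > $i"; done
mkdir -p /tmp/rpf/all /tmp/rpf/default
echo 1 > /tmp/rpf/all/rp_filter
echo 1 > /tmp/rpf/default/rp_filter

for i in /tmp/rpf/*/rp_filter; do echo 0 > "$i"; done

cat /tmp/rpf/all/rp_filter /tmp/rpf/default/rp_filter   # prints two zeros
```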
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "get-started-v0.6.0_zh.md"
- },
- "content": [
- {
- "heading": "FabEdge Quick Installation Guide",
- "data": "[toc]"
- },
- {
- "heading": "Concepts",
- "data": "- **Cloud Cluster**: a standard k8s cluster, located at the cloud side, providing cloud computing capability\n - **Connector Node**: a standard k8s node, located at the cloud side, responsible for communication between the cloud side and the edge side; since it may carry heavy traffic, avoid running other programs on it.\n - **Edge Node**: a node that joins the cloud cluster via an edge computing framework such as KubeEdge, located at the edge side, providing edge computing capability\n - **Edge Cluster**: a standard k8s cluster, located at the edge side, providing edge computing capability\n - **Host Cluster**: a selected cloud cluster used to manage the cross-cluster communication of other clusters; the first cluster deployed by FabEdge must be the host cluster\n - **Member Cluster**: an edge cluster registered into the host cluster, which reports its endpoint network configuration for multi-cluster communication\n - **Community**: a CRD defined by FabEdge, with two types:\n - **Node type**: defines communication between edge nodes within the same cluster\n - **Cluster type**: defines communication between edge clusters"
- },
- {
- "heading": "Prerequisites",
- "data": "- Kubernetes (v1.18.8, v1.22.7)\n - Flannel (v0.14.0) or Calico (v3.16.5)\n - KubeEdge (v1.5) or SuperEdge (v0.5.0) or OpenYurt (v0.4.1)"
- },
- {
- "heading": "Environment Preparation",
- "data": "1. Make sure the firewall or security group allows the following protocols and ports\n - ESP(50), UDP/500, UDP/4500\n 2. Collect the cluster configuration for later use\n \n ```shell\n $ curl -s http://116.62.127.76/installer/v0.6.0/get_cluster_info.sh | bash -\n This may take some time. Please wait.\n \n clusterDNS : 169.254.25.10\n clusterDomain : root-cluster\n cluster-cidr : 10.233.64.0/18\n service-cluster-ip-range : 10.233.0.0/18\n ```"
- },
- {
- "heading": "Deploy FabEdge in the Host Cluster",
- "data": "1. Install FabEdge\n ```shell\n $ curl 116.62.127.76/installer/v0.6.0/quickstart.sh | bash -s -- \\\n --cluster-name beijing \\\n --cluster-role host \\\n --cluster-zone beijing \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.47 \\\n --chart http://116.62.127.76/fabedge-0.6.0.tgz\n ```\n > Note:\n > **--connectors**: hostnames of the nodes where connectors run; those nodes will be labeled node-role.kubernetes.io/connector\n > **--edges:** names of the edge nodes; those nodes will be labeled node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: the network segment allocated to edge pods; required when using Calico, and it must not overlap with the cluster's cluster-cidr\n > **--connector-public-addresses**: public IP addresses of the nodes where connectors run; they must be reachable from the edge nodes\n \n 2. Verify the deployment\n \n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```\n \n 3. Create a Community for the edge nodes that need to communicate with each other\n \n ```shell\n $ cat > node-community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: beijing-edge-nodes\n spec:\n members:\n - beijing.edge1\n - beijing.edge2\n EOF\n \n $ kubectl apply -f node-community.yaml\n ```\n 4. Update the configuration for the [edge computing framework](#%E5%92%8C%E8%BE%B9%E7%BC%98%E8%AE%A1%E7%AE%97%E6%A1%86%E6%9E%B6%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) in use\n 5. Update the configuration for the [CNI](#%E5%92%8CCNI%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) in use"
- },
- {
- "heading": "Deploy FabEdge in a Member Cluster",
- "data": "If there are member clusters, first register all of them in the host cluster, then deploy FabEdge in each member cluster\n 1. In the **host cluster**, add a member cluster named \"shanghai\" and get the token for registration\n ```shell\n $ cat > shanghai.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n metadata:\n name: shanghai # cluster name\n EOF\n \n $ kubectl apply -f shanghai.yaml\n \n $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}'\n eyJ------omitted-----9u0\n ```\n 2. Install FabEdge in the **member cluster**\n \n ```shell\n curl 116.62.127.76/installer/v0.6.0/quickstart.sh | bash -s -- \\\n --cluster-name shanghai \\\n --cluster-role member \\\n --cluster-zone shanghai \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.26 \\\n --chart http://116.62.127.76/fabedge-0.6.0.tgz \\\n --service-hub-api-server https://10.22.46.47:30304 \\\n --operator-api-server https://10.22.46.47:30303 \\\n --init-token ey...Jh\n ```\n > Note:\n > **--connectors**: hostnames of the nodes where connectors run; those nodes will be labeled node-role.kubernetes.io/connector\n > **--edges:** names of the edge nodes; those nodes will be labeled node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: the network segment allocated to edge pods; required when using Calico, and it must not overlap with the cluster's cluster-cidr\n > **--connector-public-addresses**: IP addresses of the nodes where connectors run in the member cluster\n > **--service-hub-api-server**: address and port of the serviceHub service in the host cluster\n > **--operator-api-server**: address and port of the operator-api service in the host cluster\n > **--init-token**: the token obtained from the host cluster\n \n 3. Verify the deployment\n \n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```"
- },
- {
- "heading": "Enable multi-cluster communication",
- "data": "1. In the host cluster, add all clusters that need to communicate with each other into one Community\n ```shell\n # Run on the master node\n $ cat > community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: all-clusters\n spec:\n members:\n - shanghai.connector # {cluster name}.connector\n - beijing.connector # {cluster name}.connector\n EOF\n \n $ kubectl apply -f community.yaml\n ```"
- },
- {
- "heading": "Enable multi-cluster service discovery",
- "data": "The cluster DNS components to modify:\n 1) if nodelocaldns is used, modify only nodelocaldns and leave everything else unchanged\n 2) if SuperEdge is used, modify coredns and edge-coredns and leave everything else unchanged\n 3) otherwise, modify only coredns\n 1. Configure nodelocaldns\n \n ```shell\n $ kubectl -n kube-system edit cm nodelocaldns\n global:53 {\n errors\n cache 30\n reload\n bind 169.254.25.10 # local bind address; see the bind in the other config sections\n forward . 10.233.12.205 # service IP address of fabdns\n }\n ```\n 2. Configure edge-coredns\n ```shell\n $ kubectl -n edge-system edit cm edge-coredns\n global {\n forward . 10.244.51.126 # service IP address of fabdns\n }\n ```\n 3. Configure coredns\n ```shell\n $ kubectl -n kube-system edit cm coredns\n global {\n forward . 10.109.72.43 # service IP address of fabdns\n }\n ```\n 4. Restart coredns, edge-coredns, and nodelocaldns for the configuration to take effect"
- },
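All three edits above add the same `global` forward block, so the append can be scripted. The sketch below works on an inline sample Corefile so it is runnable as-is; the fab-dns service IP is a placeholder to replace with the real cluster IP:

```shell
# Append a "global" forward block to a Corefile-style config.
# FABDNS_IP is a placeholder; use the real cluster IP of the fab-dns service.
FABDNS_IP=10.233.12.205
cat > /tmp/Corefile << 'EOF'
.:53 {
    errors
    forward . /etc/resolv.conf
}
EOF

cat >> /tmp/Corefile << EOF
global {
    forward . ${FABDNS_IP}
}
EOF

grep -A2 '^global' /tmp/Corefile
```

In a cluster the same block would be added to the nodelocaldns, edge-coredns, or coredns ConfigMap via `kubectl edit cm` as shown above.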
- {
- "heading": "Edge computing framework dependent configuration",
- "data": ""
- },
- {
- "heading": "If using KubeEdge",
- "data": "1. Make sure nodelocaldns is running on the **edge nodes**\n ```shell\n $ kubectl get po -n kube-system -o wide | grep nodelocaldns\n nodelocaldns-cz5h2 1/1 Running 0 56m 10.22.46.47 master \n nodelocaldns-nk26g 1/1 Running 0 47m 10.22.46.23 edge1 \n nodelocaldns-wqpbw 1/1 Running 0 17m 10.22.46.20 node1 \n ```\n 2. Update the edgecore configuration on **each edge node**\n ```shell\n $ vi /etc/kubeedge/config/edgecore.yaml\n \n # edgeMesh must be disabled\n edgeMesh:\n enable: false\n \n edged:\n enable: true\n cniBinDir: /opt/cni/bin\n cniCacheDirs: /var/lib/cni/cache\n cniConfDir: /etc/cni/net.d\n networkPluginName: cni\n networkPluginMTU: 1500\n clusterDNS: 169.254.25.10 # clusterDNS from the get_cluster_info script output\n clusterDomain: \"root-cluster\" # clusterDomain from the get_cluster_info script output\n ```\n > **clusterDNS**: if nodelocaldns is not enabled, use the address of the coredns service\n 3. Restart edgecore on **each edge node**\n \n ```shell\n $ systemctl restart edgecore\n ```"
- },
- {
- "heading": "If using SuperEdge",
- "data": "1. Check the service status; if a pod is not Ready, delete it so it is rebuilt\n ```shell\n # Run on the master node\n $ kubectl get po -n edge-system\n application-grid-controller-84d64b86f9-29svc 1/1 Running 0 15h\n application-grid-wrapper-master-pvkv8 1/1 Running 0 15h\n application-grid-wrapper-node-dqxwv 1/1 Running 0 15h\n application-grid-wrapper-node-njzth 1/1 Running 0 15h\n edge-coredns-edge1-5758f9df57-r27nf 0/1 Running 8 15h\n edge-coredns-edge2-84fd9cfd98-79hzp 0/1 Running 8 15h\n edge-coredns-master-f8bf9975c-77nds 1/1 Running 0 15h\n edge-health-7h29k 1/1 Running 3 15h\n edge-health-admission-86c5c6dd6-r65r5 1/1 Running 0 15h\n edge-health-wcptf 1/1 Running 3 15h\n tunnel-cloud-6557fcdd67-v9h96 1/1 Running 1 15h\n tunnel-coredns-7d8b48c7ff-hhc29 1/1 Running 0 15h\n tunnel-edge-dtb9j 1/1 Running 0 15h\n tunnel-edge-zxfn6 1/1 Running 0 15h\n \n $ kubectl delete po -n edge-system edge-coredns-edge1-5758f9df57-r27nf\n pod \"edge-coredns-edge1-5758f9df57-r27nf\" deleted\n \n $ kubectl delete po -n edge-system edge-coredns-edge2-84fd9cfd98-79hzp\n pod \"edge-coredns-edge2-84fd9cfd98-79hzp\" deleted\n ```\n 2. By default the SuperEdge master node has the taint node-role.kubernetes.io/master:NoSchedule, so fabedge-cloud-agent will not start there, and pods on the master node cannot communicate with pods on other nodes. If needed, modify the fabedge-cloud-agent DaemonSet to tolerate this taint."
- },
- {
- "heading": "CNI-related Configuration",
- "data": ""
- },
- {
- "heading": "If using Calico",
- "data": "Regardless of the cluster role, as long as a cluster uses Calico, add the Pod and Service network segments of all other clusters to the Calico configuration of the current cluster, to prevent Calico from doing source address translation, which would break communication.\n For example: host (Calico) + member1 (Calico) + member2 (Flannel)\n - On the master node of the host (Calico) cluster, add the addresses of member1 (Calico) and member2 (Flannel) to the Calico configuration of the host cluster.\n - On the master node of the member1 (Calico) cluster, add the addresses of host (Calico) and member2 (Flannel) to the Calico configuration of member1.\n - No action is needed on member2 (Flannel).\n ```shell\n $ cat > cluster-cidr-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-cluster-cidr\n spec:\n blockSize: 26\n cidr: 10.233.64.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f cluster-cidr-pool.yaml\n \n $ cat > service-cluster-ip-range-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-service-cluster-ip-range\n spec:\n blockSize: 26\n cidr: 10.233.0.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml\n ```\n > **cidr**: the cluster-cidr and service-cluster-ip-range reported by get_cluster_info.sh of the cluster being added"
- },
- {
- "heading": "FAQ",
- "data": "1. Some network environments have asymmetric routes; in that case, rp_filter must be disabled on all cloud nodes\n ```shell\n $ sudo sh -c 'for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 > $i; done'\n \n # Persist the configuration\n $ sudo vi /etc/sysctl.conf\n net.ipv4.conf.default.rp_filter=0\n net.ipv4.conf.all.rp_filter=0\n ```\n 2. Error: \"Error: cannot re-use a name that is still in use\". This means FabEdge is already installed; uninstall it with the following command and retry.\n ```shell\n $ helm uninstall -n fabedge fabedge\n release \"fabedge\" uninstalled\n ```"
- },
- {
- "additional_info": ""
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "get-started-v0.7.0.md"
- },
- "content": [
- {
- "heading": "Getting Started",
- "data": "[toc]"
- },
- {
- "heading": "Terminology",
- "data": "- **Cloud Cluster**: a standard k8s cluster, located at the cloud side, providing cloud computing capability.\n - **Edge Cluster**: a standard k8s cluster, located at the edge side, providing edge computing capability.\n - **Connector Node**: a k8s node, located at the cloud side, responsible for the communication between the cloud side and the edge side. Since connector nodes can carry heavy traffic, it is better not to run other programs on them.\n - **Edge Node**: a k8s node, located at the edge side, which joins the cloud cluster via an edge computing framework such as KubeEdge.\n - **Host Cluster**: a designated cloud cluster, used to manage cross-cluster communication. The first cluster where FabEdge is deployed must be the host cluster.\n - **Member Cluster**: an edge cluster, registered into the host cluster, which reports its network information to the host cluster.\n - **Community**: a K8S CRD defined by FabEdge; there are two types:\n - **Node Type**: defines the communication between nodes within the same cluster\n - **Cluster Type**: defines the cross-cluster communication"
- },
- {
- "heading": "Prerequisite",
- "data": "- Kubernetes (v1.18.8, 1.22.7)\n - Flannel (v0.14.0) or Calico (v3.16.5)\n - KubeEdge (v1.5) or SuperEdge (v0.5.0) or OpenYurt (v0.4.1)"
- },
- {
- "heading": "Preparation",
- "data": "1. Make sure the following protocols and ports are allowed by the firewall or security group.\n - ESP(50), UDP/500, UDP/4500\n 2. Collect the configuration of the current cluster for later use\n ```shell\n $ curl -s http://116.62.127.76/installer/v0.6.0/get_cluster_info.sh | bash -\n This may take some time. Please wait.\n \n clusterDNS : 169.254.25.10\n clusterDomain : root-cluster\n cluster-cidr : 10.233.64.0/18\n service-cluster-ip-range : 10.233.0.0/18\n ```"
- },
- {
- "heading": "Deploy FabEdge on the host cluster",
- "data": "1. Deploy FabEdge\n ```shell\n $ curl 116.62.127.76/installer/v0.6.0/quickstart.sh | bash -s -- \\\n --cluster-name beijing \\\n --cluster-role host \\\n --cluster-zone beijing \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.47 \\\n --chart http://116.62.127.76/fabedge-0.6.0.tgz\n ```\n > Note:\n > **--connectors**: the names of the k8s nodes where connectors are located; those nodes will be labeled node-role.kubernetes.io/connector\n > **--edges:** the names of the edge nodes; those nodes will be labeled node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: the range of IPv4 addresses for edge Pods; required if you use Calico. Make sure this range does not overlap with the cluster CIDR of your cluster.\n > **--connector-public-addresses**: the public IP addresses of the k8s nodes where connectors are located; they must be reachable from the edge nodes\n 2. Verify the deployment\n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```\n 3. Create a community for the edge nodes which need to communicate with each other\n ```shell\n $ cat > node-community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: beijing-edge-nodes # community name\n spec:\n members:\n - beijing.edge1 # format: {cluster name}.{edge node name}\n - beijing.edge2\n EOF\n \n $ kubectl apply -f node-community.yaml\n ```\n 4. Update the [edge computing framework](#%E5%92%8C%E8%BE%B9%E7%BC%98%E8%AE%A1%E7%AE%97%E6%A1%86%E6%9E%B6%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) dependent configuration\n 5. Update the [CNI](#%E5%92%8CCNI%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) dependent configuration"
- },
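The `--edge-pod-cidr` note above requires that the edge Pod range not overlap the cluster CIDR. Before running the installer, a quick local check can catch an overlap; this is a sketch of ours (the `cidr_overlap` helper is not part of FabEdge), based on the fact that two IPv4 CIDRs overlap exactly when they agree on the shorter of the two prefixes:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# cidr_overlap CIDR1 CIDR2 -> exit status 0 iff the two ranges overlap.
cidr_overlap() {
  local n1="${1%/*}" p1="${1#*/}" n2="${2%/*}" p2="${2#*/}" p mask
  p=$(( p1 < p2 ? p1 : p2 ))                       # shorter prefix wins
  mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$n1") & mask )) -eq $(( $(ip2int "$n2") & mask )) ]
}

# cluster-cidr from get_cluster_info.sh vs. a candidate --edge-pod-cidr
cidr_overlap 10.233.64.0/18 10.234.0.0/16 && echo "overlap" || echo "ok"
```

Run it once per pair (cluster-cidr and service-cluster-ip-range against the candidate edge Pod range) before invoking quickstart.sh.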
- {
- "heading": "Deploy FabEdge in the member cluster",
- "data": "If there are member clusters, register each of them in the host cluster first, then deploy FabEdge in it.\n 1. In the **host cluster**, create an edge cluster named \"shanghai\" and get the token for registration.\n \n ```shell\n # Run in the host cluster\n $ cat > shanghai.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n metadata:\n name: shanghai # cluster name\n EOF\n \n $ kubectl apply -f shanghai.yaml\n \n $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}'\n eyJ------omitted-----9u0\n ```\n 2. Deploy FabEdge in the member cluster\n \n ```shell\n curl 116.62.127.76/installer/v0.6.0/quickstart.sh | bash -s -- \\\n --cluster-name shanghai \\\n --cluster-role member \\\n --cluster-zone shanghai \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.26 \\\n --chart http://116.62.127.76/fabedge-0.6.0.tgz \\\n --service-hub-api-server https://10.22.46.47:30304 \\\n --operator-api-server https://10.22.46.47:30303 \\\n --init-token ey...Jh\n ```\n > Note:\n > **--connectors**: the names of the k8s nodes where connectors are located; those nodes will be labeled node-role.kubernetes.io/connector\n > **--edges:** the names of the edge nodes; those nodes will be labeled node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: the range of IPv4 addresses for edge Pods; required if you use Calico. Make sure this range does not overlap with the cluster CIDR of your cluster.\n > **--connector-public-addresses**: the IP addresses of the k8s nodes where connectors are located in the member cluster\n > **--init-token**: the token obtained when the member cluster was registered in the host cluster\n > **--service-hub-api-server**: endpoint of the serviceHub service in the host cluster\n > **--operator-api-server**: endpoint of the operator-api service in the host cluster\n \n 3. Verify the deployment\n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```"
- },
- {
- "heading": "Enable multi-cluster communication",
- "data": "1. In the **host cluster**, create a community for all clusters which need to communicate with each other\n ```shell\n $ cat > community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: all-clusters\n spec:\n members:\n - shanghai.connector # format: {cluster name}.connector\n - beijing.connector # format: {cluster name}.connector\n EOF\n \n $ kubectl apply -f community.yaml\n ```"
- },
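Community manifests follow a fixed member-naming pattern: `{cluster name}.connector` for cluster-type communities and `{cluster name}.{edge node name}` for node-type ones. When the member list grows, generating the manifest keeps it consistent; a sketch (the `render_community` helper is ours, not a FabEdge tool):

```shell
# Render a FabEdge Community manifest. Members are passed as
# already-qualified names, e.g. beijing.connector or beijing.edge1.
render_community() {
  local name="$1"; shift
  printf 'apiVersion: fabedge.io/v1alpha1\nkind: Community\nmetadata:\n  name: %s\nspec:\n  members:\n' "$name"
  local m
  for m in "$@"; do
    printf '  - %s\n' "$m"
  done
}

render_community all-clusters shanghai.connector beijing.connector
```

Once the output looks right it can be piped straight into `kubectl apply -f -`.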
- {
- "heading": "Enable multi-cluster service discovery",
- "data": "The DNS components that need to be modified:\n - if `nodelocaldns` is used, modify `nodelocaldns` only; leave the other configurations unchanged\n - if SuperEdge `edge-coredns` is used, modify `coredns` and `edge-coredns`\n - otherwise, modify `coredns` only\n 1. Update `nodelocaldns`\n ```shell\n $ kubectl -n kube-system edit cm nodelocaldns\n global:53 {\n errors\n cache 30\n reload\n bind 169.254.25.10 # local bind address\n forward . 10.233.12.205 # cluster IP of the fab-dns service\n }\n ```\n 2. Update `edge-coredns`\n ```shell\n $ kubectl -n edge-system edit cm edge-coredns\n global {\n forward . 10.244.51.126 # cluster IP of the fab-dns service\n }\n ```\n 3. Update `coredns`\n ```shell\n $ kubectl -n kube-system edit cm coredns\n global {\n forward . 10.109.72.43 # cluster IP of the fab-dns service\n }\n ```\n \n 4. Restart coredns, edge-coredns, or nodelocaldns for the configuration to take effect"
- },
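All three edits above add the same `global:53` stanza and differ only in the fab-dns cluster IP (readable via `kubectl -n fabedge get svc <fab-dns service> -o jsonpath='{.spec.clusterIP}'`; the exact service name depends on your release). A small renderer avoids copy-paste drift between the three ConfigMaps; this is a sketch, and `render_global_zone` is our name:

```shell
# Render the global:53 zone that forwards cross-cluster names to fab-dns.
# $1 is the cluster IP of the fab-dns service.
render_global_zone() {
  printf 'global:53 {\n    errors\n    cache 30\n    forward . %s\n}\n' "$1"
}

render_global_zone 10.233.12.205
```

Paste the rendered stanza into the nodelocaldns, edge-coredns, or coredns ConfigMap as shown above, then restart the component.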
- {
- "heading": "Edge computing framework-dependent configuration",
- "data": ""
- },
- {
- "heading": "for KubeEdge",
- "data": "1. Make sure `nodelocaldns` is running on all edge nodes\n ```shell\n $ kubectl get po -n kube-system -o wide | grep nodelocaldns\n nodelocaldns-cz5h2 1/1 Running 0 56m 10.22.46.47 master \n nodelocaldns-nk26g 1/1 Running 0 47m 10.22.46.23 edge1 \n nodelocaldns-wqpbw 1/1 Running 0 17m 10.22.46.20 node1 \n ```\n 2. Update the `edgecore` configuration on all edge nodes\n ```shell\n $ vi /etc/kubeedge/config/edgecore.yaml\n \n # edgeMesh must be disabled\n edgeMesh:\n enable: false\n \n edged:\n enable: true\n cniBinDir: /opt/cni/bin\n cniCacheDirs: /var/lib/cni/cache\n cniConfDir: /etc/cni/net.d\n networkPluginName: cni\n networkPluginMTU: 1500\n clusterDNS: 169.254.25.10 # clusterDNS from the get_cluster_info script output\n clusterDomain: \"root-cluster\" # clusterDomain from the get_cluster_info script output\n ```\n > **clusterDNS**: if nodelocaldns is not enabled, use the address of the coredns service.\n 3. Restart `edgecore` on all edge nodes\n ```shell\n $ systemctl restart edgecore\n ```"
- },
- {
- "heading": "for SuperEdge",
- "data": "1. Check the service status; if a Pod is not Ready, delete it so that it gets rebuilt\n ```shell\n $ kubectl get po -n edge-system\n application-grid-controller-84d64b86f9-29svc 1/1 Running 0 15h\n application-grid-wrapper-master-pvkv8 1/1 Running 0 15h\n application-grid-wrapper-node-dqxwv 1/1 Running 0 15h\n application-grid-wrapper-node-njzth 1/1 Running 0 15h\n edge-coredns-edge1-5758f9df57-r27nf 0/1 Running 8 15h\n edge-coredns-edge2-84fd9cfd98-79hzp 0/1 Running 8 15h\n edge-coredns-master-f8bf9975c-77nds 1/1 Running 0 15h\n edge-health-7h29k 1/1 Running 3 15h\n edge-health-admission-86c5c6dd6-r65r5 1/1 Running 0 15h\n edge-health-wcptf 1/1 Running 3 15h\n tunnel-cloud-6557fcdd67-v9h96 1/1 Running 1 15h\n tunnel-coredns-7d8b48c7ff-hhc29 1/1 Running 0 15h\n tunnel-edge-dtb9j 1/1 Running 0 15h\n tunnel-edge-zxfn6 1/1 Running 0 15h\n \n $ kubectl delete po -n edge-system edge-coredns-edge1-5758f9df57-r27nf\n pod \"edge-coredns-edge1-5758f9df57-r27nf\" deleted\n \n $ kubectl delete po -n edge-system edge-coredns-edge2-84fd9cfd98-79hzp\n pod \"edge-coredns-edge2-84fd9cfd98-79hzp\" deleted\n ```\n 2. By default, the SuperEdge master node has the taint `node-role.kubernetes.io/master:NoSchedule`, which prevents fabedge-cloud-agent from starting there, so Pods on the master node cannot communicate with Pods on other nodes. If needed, modify the fabedge-cloud-agent DaemonSet to tolerate this taint."
- },
- {
- "heading": "CNI-dependent Configurations",
- "data": ""
- },
- {
- "heading": "for Calico",
- "data": "Regardless of the cluster role, as long as a cluster uses Calico, add the Pod and Service network segments of all other clusters to its Calico configuration, which prevents Calico from doing source address translation.\n An example with the clusters: host (Calico) + member1 (Calico) + member2 (Flannel)\n * on the host (Calico) cluster, add the addresses of the member1 (Calico) cluster and the member2 (Flannel) cluster\n * on the member1 (Calico) cluster, add the addresses of the host (Calico) cluster and the member2 (Flannel) cluster\n * on the member2 (Flannel) cluster, no configuration is required.\n ```shell\n $ cat > cluster-cidr-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-cluster-cidr\n spec:\n blockSize: 26\n cidr: 10.233.64.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f cluster-cidr-pool.yaml\n \n $ cat > service-cluster-ip-range-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-service-cluster-ip-range\n spec:\n blockSize: 26\n cidr: 10.233.0.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml\n ```\n > **cidr**: the cluster-cidr and service-cluster-ip-range reported by get_cluster_info.sh of the cluster being added"
- },
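Each remote cluster contributes two pools (its cluster-cidr and its service-cluster-ip-range, both from get_cluster_info.sh), so the manifests above can be templated instead of hand-edited per cluster. A sketch (`render_remote_pool` is our name, not a Calico or FabEdge command):

```shell
# Render a Calico IPPool for a remote cluster's CIDR. disabled:true plus
# natOutgoing:false stops Calico from allocating addresses from that range
# or NAT-ing traffic sent to it, mirroring the manifests above.
render_remote_pool() {
  local name="$1" cidr="$2"
  printf 'apiVersion: projectcalico.org/v3\nkind: IPPool\nmetadata:\n  name: %s\nspec:\n  blockSize: 26\n  cidr: %s\n  natOutgoing: false\n  disabled: true\n  ipipMode: Always\n' "$name" "$cidr"
}

# e.g. for a member cluster "shanghai" whose cluster-cidr is 10.234.64.0/18:
render_remote_pool cluster-shanghai-cluster-cidr 10.234.64.0/18
```

Pipe the output into `calicoctl.sh create -f -` as in the manifests above, once per remote CIDR.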
- {
- "heading": "FAQ",
- "data": "1. If asymmetric routes exist, disable **rp_filter** on all cloud nodes\n ```shell\n $ sudo sh -c 'for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 > $i; done'\n \n # save the configuration\n $ sudo vi /etc/sysctl.conf\n net.ipv4.conf.default.rp_filter=0\n net.ipv4.conf.all.rp_filter=0\n ```\n 2. If you get the error \"Error: cannot re-use a name that is still in use\", FabEdge is already installed; uninstall it and try again.\n ```shell\n $ helm uninstall -n fabedge fabedge\n release \"fabedge\" uninstalled\n ```"
- }
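Note that `sudo` executes a single program, not shell syntax, so a `for` loop cannot be prefixed with `sudo` directly; the loop has to be wrapped in a shell invocation. A self-contained illustration of the pattern (demonstrated on temp files in place of the `/proc/sys/net/ipv4/conf/*/rp_filter` paths):

```shell
# sudo cannot run `for ...` directly; run the whole loop inside one shell instead.
# Here temp files stand in for the /proc/sys entries so the pattern can be tried safely.
dir=$(mktemp -d)
touch "$dir/eth0" "$dir/eth1"
sh -c "for f in $dir/*; do echo 0 > \$f; done"
cat "$dir/eth0"   # prints 0
```

On a real node the same shape applies: `sudo sh -c 'for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 > $i; done'`.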
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "get-started-v0.7.0_zh.md"
- },
- "content": [
- {
- "heading": "FabEdge Quick Installation Guide",
- "data": "[toc]"
- },
- {
- "heading": "Concepts",
- "data": "- **Cloud Cluster**: a standard K8S cluster, located at the cloud side, providing cloud computing capability\n - **Connector Node**: a standard k8s node, located at the cloud side, responsible for the communication between the cloud side and the edge side. Because it may carry heavy traffic, avoid running other programs on it.\n - **Edge Node**: a node that joins the cloud cluster through an edge computing framework such as KubeEdge, providing edge computing capability\n - **Edge Cluster**: a standard K8S cluster, located at the edge side, providing edge computing capability\n - **Host Cluster**: a selected cloud cluster, used to manage the cross-cluster communication of the other clusters; the first cluster deployed by FabEdge must be the host cluster\n - **Member Cluster**: an edge cluster, registered into the host cluster, which reports its endpoint network configuration for multi-cluster communication\n - **Community**: a CRD defined by FabEdge, with two types:\n - **Node type**: defines the communication between multiple edge nodes within the same cluster\n - **Cluster type**: defines the communication between multiple edge clusters"
- },
- {
- "heading": "Prerequisites",
- "data": "- Kubernetes (v1.18.8, 1.22.7)\n - Flannel (v0.14.0) or Calico (v3.16.5)\n - KubeEdge (v1.5) or SuperEdge (v0.5.0) or OpenYurt (v0.4.1)"
- },
- {
- "heading": "Environment Preparation",
- "data": "1. Make sure the firewall or security group allows the following protocols and ports\n - ESP(50), UDP/500, UDP/4500\n 2. Get the cluster configuration for later use\n \n ```shell\n $ curl -s http://116.62.127.76/installer/v0.6.0/get_cluster_info.sh | bash -\n This may take some time. Please wait.\n \n clusterDNS : 169.254.25.10\n clusterDomain : root-cluster\n cluster-cidr : 10.233.64.0/18\n service-cluster-ip-range : 10.233.0.0/18\n ```"
- },
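If the nodes' firewall is managed directly with iptables, the openings required by step 1 could look like the following. This is a sketch only; adapt it to whatever firewall or cloud security-group tooling you actually use:

```shell
# Allow the IPsec traffic used by FabEdge tunnels (requires root):
sudo iptables -A INPUT -p esp -j ACCEPT                # ESP (IP protocol 50)
sudo iptables -A INPUT -p udp --dport 500  -j ACCEPT   # IKE
sudo iptables -A INPUT -p udp --dport 4500 -j ACCEPT   # IKE NAT traversal
```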
- {
- "heading": "Deploy FabEdge in the Host Cluster",
- "data": "1. Install FabEdge\n ```shell\n $ curl 116.62.127.76/installer/v0.6.0/quickstart.sh | bash -s -- \\\n --cluster-name beijing \\\n --cluster-role host \\\n --cluster-zone beijing \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.47 \\\n --chart http://116.62.127.76/fabedge-0.6.0.tgz\n ```\n > Note:\n > **--connectors**: hostnames of the nodes where connectors run; these nodes will be labeled node-role.kubernetes.io/connector\n > **--edges:** names of the edge nodes; these nodes will be labeled node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: the network segment to allocate to edge Pods. Required when Calico is used; make sure it does not overlap with the cluster-cidr of the cluster\n > **--connector-public-addresses**: public IP addresses of the connector nodes; they must be reachable from the edge nodes\n \n 2. Verify the deployment\n \n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```\n \n 3. Create a Community for the edge nodes that need to communicate with each other\n \n ```shell\n $ cat > node-community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: beijing-edge-nodes\n spec:\n members:\n - beijing.edge1\n - beijing.edge2\n EOF\n \n $ kubectl apply -f node-community.yaml\n ```\n 4. Update the configuration required by the [edge computing framework](#%E5%92%8C%E8%BE%B9%E7%BC%98%E8%AE%A1%E7%AE%97%E6%A1%86%E6%9E%B6%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) in use\n 5. Update the configuration required by the [CNI](#%E5%92%8CCNI%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) in use"
- },
- {
- "heading": "Deploy FabEdge in a Member Cluster",
- "data": "If there are member clusters, register all of them in the host cluster first, then deploy FabEdge in each member cluster\n 1. In the **host cluster**, add a member cluster named \"shanghai\" and get the token used for registration\n ```shell\n $ cat > shanghai.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n metadata:\n name: shanghai # cluster name\n EOF\n \n $ kubectl apply -f shanghai.yaml\n \n $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}'\n eyJ------omitted-----9u0\n ```\n 2. Install FabEdge in the **member cluster**\n \n ```shell\n curl 116.62.127.76/installer/v0.6.0/quickstart.sh | bash -s -- \\\n --cluster-name shanghai \\\n --cluster-role member \\\n --cluster-zone shanghai \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.26 \\\n --chart http://116.62.127.76/fabedge-0.6.0.tgz \\\n --service-hub-api-server https://10.22.46.47:30304 \\\n --operator-api-server https://10.22.46.47:30303 \\\n --init-token ey...Jh\n ```\n > Note:\n > **--connectors**: hostnames of the nodes where connectors run; these nodes will be labeled node-role.kubernetes.io/connector\n > **--edges:** names of the edge nodes; these nodes will be labeled node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: the network segment to allocate to edge Pods. Required when Calico is used; make sure it does not overlap with the cluster-cidr of the cluster\n > **--connector-public-addresses**: IP addresses of the connector nodes in the member cluster\n > **--service-hub-api-server**: address and port of the serviceHub service in the host cluster\n > **--operator-api-server**: address and port of the operator-api service in the host cluster\n > **--init-token**: the token obtained from the host cluster\n \n 3. Verify the deployment\n \n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.18.2\n edge2 Ready edge 5h21m v1.18.2\n master Ready master 5h29m v1.18.2\n node1 Ready connector 5h23m v1.18.2\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n nodelocaldns-fmx7f 1/1 Running 0 17h\n nodelocaldns-kcz6b 1/1 Running 0 17h\n nodelocaldns-pwpm4 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-edge1 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```"
- },
- {
- "heading": "Enable Multi-Cluster Communication",
- "data": "1. In the host cluster, add all clusters that need to communicate with each other into one Community\n ```shell\n # run on the master node\n $ cat > community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: all-clusters\n spec:\n members:\n - shanghai.connector # {cluster name}.connector\n - beijing.connector # {cluster name}.connector\n EOF\n \n $ kubectl apply -f community.yaml\n ```"
- },
- {
- "heading": "Enable Multi-Cluster Service Discovery",
- "data": "The cluster DNS components to modify:\n 1) If nodelocaldns is used, modify only nodelocaldns and leave everything else unchanged\n 2) If SuperEdge is used, modify coredns and edge-coredns and leave everything else unchanged\n 3) In all other cases, modify only coredns\n 1. Configure nodelocaldns\n \n ```shell\n $ kubectl -n kube-system edit cm nodelocaldns\n global:53 {\n errors\n cache 30\n reload\n bind 169.254.25.10 # local bind address; see the bind in the other config sections\n forward . 10.233.12.205 # cluster IP of the fabdns service\n }\n ```\n 2. Configure edge-coredns\n ```shell\n $ kubectl -n edge-system edit cm edge-coredns\n global {\n forward . 10.244.51.126 # cluster IP of the fabdns service\n }\n ```\n 3. Configure coredns\n ```shell\n $ kubectl -n kube-system edit cm coredns\n global {\n forward . 10.109.72.43 # cluster IP of the fabdns service\n }\n ```\n 4. Restart coredns, edge-coredns, and nodelocaldns for the changes to take effect"
- },
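The fabdns cluster IP used in the `forward` lines above can be looked up rather than copied by hand. A sketch; the exact Service name is an assumption here, so list the services in the fabedge namespace first to confirm it:

```shell
# Find the fabdns Service and read its cluster IP (service name assumed, verify first)
kubectl -n fabedge get svc | grep -i fabdns
kubectl -n fabedge get svc fab-dns -o jsonpath='{.spec.clusterIP}'
```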
- {
- "heading": "Edge-Computing-Framework-dependent Configurations",
- "data": ""
- },
- {
- "heading": "If Using KubeEdge",
- "data": "1. Make sure nodelocaldns is running normally on the **edge nodes**\n ```shell\n $ kubectl get po -n kube-system -o wide | grep nodelocaldns\n nodelocaldns-cz5h2 1/1 Running 0 56m 10.22.46.47 master \n nodelocaldns-nk26g 1/1 Running 0 47m 10.22.46.23 edge1 \n nodelocaldns-wqpbw 1/1 Running 0 17m 10.22.46.20 node1 \n ```\n 2. Modify the edgecore configuration on **every edge node**\n ```shell\n $ vi /etc/kubeedge/config/edgecore.yaml\n \n # edgeMesh must be disabled\n edgeMesh:\n enable: false\n \n edged:\n enable: true\n cniBinDir: /opt/cni/bin\n cniCacheDirs: /var/lib/cni/cache\n cniConfDir: /etc/cni/net.d\n networkPluginName: cni\n networkPluginMTU: 1500\n clusterDNS: 169.254.25.10 # the clusterDNS value printed by the get_cluster_info script\n clusterDomain: \"root-cluster\" # the clusterDomain value printed by the get_cluster_info script\n ```\n > **clusterDNS**: if nodelocaldns is not enabled, use the address of the coredns service\n 3. Restart edgecore on **every edge node**\n \n ```shell\n $ systemctl restart edgecore\n ```"
- },
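Once edgecore has restarted, name resolution from an edge node can be spot-checked against the configured clusterDNS address. A sketch, using the clusterDNS (169.254.25.10) and clusterDomain (root-cluster) values from the get_cluster_info output; any in-cluster service name works as the query target:

```shell
# Query the nodelocaldns listener directly from an edge node
nslookup kubernetes.default.svc.root-cluster 169.254.25.10
```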
- {
- "heading": "If Using SuperEdge",
- "data": "1. Check the service status; if a Pod is not Ready, delete it so it gets rebuilt\n ```shell\n # run on the master node\n $ kubectl get po -n edge-system\n application-grid-controller-84d64b86f9-29svc 1/1 Running 0 15h\n application-grid-wrapper-master-pvkv8 1/1 Running 0 15h\n application-grid-wrapper-node-dqxwv 1/1 Running 0 15h\n application-grid-wrapper-node-njzth 1/1 Running 0 15h\n edge-coredns-edge1-5758f9df57-r27nf 0/1 Running 8 15h\n edge-coredns-edge2-84fd9cfd98-79hzp 0/1 Running 8 15h\n edge-coredns-master-f8bf9975c-77nds 1/1 Running 0 15h\n edge-health-7h29k 1/1 Running 3 15h\n edge-health-admission-86c5c6dd6-r65r5 1/1 Running 0 15h\n edge-health-wcptf 1/1 Running 3 15h\n tunnel-cloud-6557fcdd67-v9h96 1/1 Running 1 15h\n tunnel-coredns-7d8b48c7ff-hhc29 1/1 Running 0 15h\n tunnel-edge-dtb9j 1/1 Running 0 15h\n tunnel-edge-zxfn6 1/1 Running 0 15h\n \n $ kubectl delete po -n edge-system edge-coredns-edge1-5758f9df57-r27nf\n pod \"edge-coredns-edge1-5758f9df57-r27nf\" deleted\n \n $ kubectl delete po -n edge-system edge-coredns-edge2-84fd9cfd98-79hzp\n pod \"edge-coredns-edge2-84fd9cfd98-79hzp\" deleted\n ```\n 2. The master node of SuperEdge carries the taint node-role.kubernetes.io/master:NoSchedule by default, so fabedge-cloud-agent does not start there and communication with the Pods on the master node is not possible. If needed, modify the fabedge-cloud-agent DaemonSet to tolerate this taint."
- },
- {
- "heading": "CNI-dependent Configurations",
- "data": ""
- },
- {
- "heading": "If Using Calico",
- "data": "Regardless of the cluster role, as long as a cluster uses Calico, add the Pod and Service network segments of all other clusters to that cluster's Calico configuration. This prevents Calico from performing source address translation, which would break communication.\n Example: host (Calico) + member1 (Calico) + member2 (Flannel)\n - On the master node of the host (Calico) cluster, add the addresses of member1 (Calico) and member2 (Flannel) to the host cluster's Calico configuration.\n - On the master node of the member1 (Calico) cluster, add the addresses of host (Calico) and member2 (Flannel) to member1's Calico configuration.\n - Nothing needs to be done on member2 (Flannel).\n ```shell\n $ cat > cluster-cidr-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-cluster-cidr\n spec:\n blockSize: 26\n cidr: 10.233.64.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f cluster-cidr-pool.yaml\n \n $ cat > service-cluster-ip-range-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-service-cluster-ip-range\n spec:\n blockSize: 26\n cidr: 10.233.0.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml\n ```\n > **cidr**: the cluster-cidr and service-cluster-ip-range printed by get_cluster_info.sh of the cluster being added"
- },
- {
- "heading": "FAQ",
- "data": "1. Some network environments have asymmetric routes; in that case, disable rp_filter on all cloud nodes ```shell $ sudo sh -c 'for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 > $i; done' # persist the configuration $ sudo vi /etc/sysctl.conf net.ipv4.conf.default.rp_filter=0 net.ipv4.conf.all.rp_filter=0 ``` 2. Error: \"cannot re-use a name that is still in use\". This means fabedge is already installed; uninstall it with the following command and retry. ```shell $ helm uninstall -n fabedge fabedge release \"fabedge\" uninstalled ```"
- },
- {
- "additional_info": "[toc] - **Cloud cluster**: a standard K8s cluster located at the cloud side, providing cloud computing capability - **Connector node**: a standard k8s node at the cloud side, responsible for communication between the cloud side and the edge side; since it may carry heavy traffic, avoid running other programs on it. - **Edge node**: a node that joins the cloud cluster through an edge computing framework such as KubeEdge, providing edge computing capability - **Edge cluster**: a standard K8s cluster located at the edge side, providing edge computing capability - **Host cluster**: a selected cloud cluster used to manage cross-cluster communication among the other clusters; the first cluster FabEdge is deployed to must be the host cluster - **Member cluster**: an edge cluster registered into the host cluster, which reports its endpoint network configuration for multi-cluster communication - **Community**: a CRD defined by FabEdge, with two types: - **Node type**: defines communication between edge nodes within a cluster - **Cluster type**: defines communication between edge clusters - Kubernetes (v1.18.8, v1.22.7) - Flannel (v0.14.0) or Calico (v3.16.5) - KubeEdge (v1.5) or SuperEdge (v0.5.0) or OpenYurt (v0.4.1) 1. 
Make sure the firewall or security group allows the following protocols and ports - ESP(50), UDP/500, UDP/4500 2. Collect the cluster configuration for later use ```shell $ curl -s http://116.62.127.76/installer/v0.6.0/get_cluster_info.sh | bash - This may take some time. Please wait. clusterDNS : 169.254.25.10 clusterDomain : root-cluster cluster-cidr : 10.233.64.0/18 service-cluster-ip-range : 10.233.0.0/18 ``` 3. Install FabEdge ```shell $ curl 116.62.127.76/installer/v0.6.0/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region china \\ --connectors node1 \\ --edges edge1,edge2 \\ --edge-pod-cidr 10.233.0.0/16 \\ --connector-public-addresses 10.22.46.47 \\ --chart http://116.62.127.76/fabedge-0.6.0.tgz ``` > Note: > **--connectors**: hostnames of the connector nodes; the specified nodes will be labeled node-role.kubernetes.io/connector > **--edges:** names of the edge nodes; the specified nodes will be labeled node-role.kubernetes.io/edge > **--edge-pod-cidr**: the CIDR used to allocate addresses to edge Pods; required when using Calico, and it must not overlap the cluster's cluster-cidr > **--connector-public-addresses**: public IP address of the connector node, which must be reachable from the edge nodes 4. 
Verify the deployment ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.18.2 edge2 Ready edge 5h21m v1.18.2 master Ready master 5h29m v1.18.2 node1 Ready connector 5h23m v1.18.2 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h nodelocaldns-fmx7f 1/1 Running 0 17h nodelocaldns-kcz6b 1/1 Running 0 17h nodelocaldns-pwpm4 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-edge1 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s ``` 5. Create a Community for the edge nodes that need to communicate ```shell $ cat > node-community.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: beijing-edge-nodes spec: members: - beijing.edge1 - beijing.edge2 EOF $ kubectl apply -f node-community.yaml ``` 6. Update the configuration for the [edge computing framework](#%E5%92%8C%E8%BE%B9%E7%BC%98%E8%AE%A1%E7%AE%97%E6%A1%86%E6%9E%B6%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) in use 7. 
Update the configuration for the [CNI](#%E5%92%8CCNI%E7%9B%B8%E5%85%B3%E7%9A%84%E9%85%8D%E7%BD%AE) in use. If there are member clusters, first register all of them in the host cluster, then deploy FabEdge in each member cluster 1. In the **host cluster**, add a member cluster named \"shanghai\" and get the token for registration ```shell $ cat > shanghai.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Cluster metadata: name: shanghai # cluster name EOF $ kubectl apply -f shanghai.yaml $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}' eyJ------omitted-----9u0 ``` 2. Install FabEdge in the **member cluster** ```shell curl 116.62.127.76/installer/v0.6.0/quickstart.sh | bash -s -- \\ --cluster-name shanghai \\ --cluster-role member \\ --cluster-zone shanghai \\ --cluster-region china \\ --connectors node1 \\ --edges edge1,edge2 \\ --edge-pod-cidr 10.233.0.0/16 \\ --connector-public-addresses 10.22.46.26 \\ --chart http://116.62.127.76/fabedge-0.6.0.tgz \\ --service-hub-api-server https://10.22.46.47:30304 \\ --operator-api-server https://10.22.46.47:30303 \\ --init-token ey...Jh ``` > Note: > **--connectors**: hostnames of the connector nodes; the specified nodes will be labeled node-role.kubernetes.io/connector > **--edges:** names of the edge nodes; the specified nodes will be labeled node-role.kubernetes.io/edge > **--edge-pod-cidr**: the CIDR used to allocate addresses to edge Pods; required when using Calico, and it must not overlap the cluster's cluster-cidr > 
**--connector-public-addresses**: IP address of the node where the member cluster's connectors are located > **--service-hub-api-server**: address and port of the host cluster's serviceHub service > **--operator-api-server**: address and port of the host cluster's operator-api service > **--init-token**: the token obtained from the host cluster 3. Verify the deployment ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.18.2 edge2 Ready edge 5h21m v1.18.2 master Ready master 5h29m v1.18.2 node1 Ready connector 5h23m v1.18.2 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h nodelocaldns-fmx7f 1/1 Running 0 17h nodelocaldns-kcz6b 1/1 Running 0 17h nodelocaldns-pwpm4 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-edge1 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s ``` 1. 
In the host cluster, add all clusters that need to communicate into one Community ```shell # run on the master node $ cat > community.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: all-clusters spec: members: - shanghai.connector # {cluster name}.connector - beijing.connector # {cluster name}.connector EOF $ kubectl apply -f community.yaml ``` Update the cluster DNS components: 1) if nodelocaldns is used, modify only nodelocaldns and leave the rest unchanged 2) if SuperEdge is used, modify coredns and edge-coredns and leave the rest unchanged 3) otherwise modify only coredns 1. Configure nodelocaldns ```shell $ kubectl -n kube-system edit cm nodelocaldns global:53 { errors cache 30 reload bind 169.254.25.10 # local bind address; see the bind in the other config sections forward . 10.233.12.205 # service IP of fabdns } ``` 2. Configure edge-coredns ```shell $ kubectl -n edge-system edit cm edge-coredns global { forward . 10.244.51.126 # service IP of fabdns } ``` 3. Configure coredns ```shell $ kubectl -n kube-system edit cm coredns global { forward . 10.109.72.43 # service IP of fabdns } ``` 4. Restart coredns, edge-coredns and nodelocaldns for the changes to take effect 1. Verify nodelocaldns is running normally on the **edge nodes** ```shell $ kubectl get po -n kube-system -o wide | grep nodelocaldns nodelocaldns-cz5h2 1/1 Running 0 56m 10.22.46.47 master nodelocaldns-nk26g 1/1 Running 0 47m 10.22.46.23 edge1 nodelocaldns-wqpbw 1/1 Running 0 17m 10.22.46.20 node1 ``` 2. 
Edit the edgecore configuration on **each edge node** ```shell $ vi /etc/kubeedge/config/edgecore.yaml # edgeMesh must be disabled edgeMesh: enable: false edged: enable: true cniBinDir: /opt/cni/bin cniCacheDirs: /var/lib/cni/cache cniConfDir: /etc/cni/net.d networkPluginName: cni networkPluginMTU: 1500 clusterDNS: 169.254.25.10 # clusterDNS from the get_cluster_info script output clusterDomain: \"root-cluster\" # clusterDomain from the get_cluster_info script output ``` > **clusterDNS**: if nodelocaldns is not enabled, use the address of the coredns service 3. Restart edgecore on **each edge node** ```shell $ systemctl restart edgecore ``` 1. Check the service status; if a Pod is not Ready, delete it so it is recreated ```shell # run on the master node $ kubectl get po -n edge-system application-grid-controller-84d64b86f9-29svc 1/1 Running 0 15h application-grid-wrapper-master-pvkv8 1/1 Running 0 15h application-grid-wrapper-node-dqxwv 1/1 Running 0 15h application-grid-wrapper-node-njzth 1/1 Running 0 15h edge-coredns-edge1-5758f9df57-r27nf 0/1 Running 8 15h edge-coredns-edge2-84fd9cfd98-79hzp 0/1 Running 8 15h edge-coredns-master-f8bf9975c-77nds 1/1 Running 0 15h edge-health-7h29k 1/1 Running 3 15h edge-health-admission-86c5c6dd6-r65r5 1/1 Running 0 15h edge-health-wcptf 1/1 Running 3 15h tunnel-cloud-6557fcdd67-v9h96 1/1 Running 1 15h tunnel-coredns-7d8b48c7ff-hhc29 1/1 Running 0 15h tunnel-edge-dtb9j 1/1 Running 0 15h tunnel-edge-zxfn6 1/1 Running 0 15h $ kubectl delete po -n edge-system edge-coredns-edge1-5758f9df57-r27nf pod \"edge-coredns-edge1-5758f9df57-r27nf\" deleted $ kubectl delete po -n edge-system edge-coredns-edge2-84fd9cfd98-79hzp pod \"edge-coredns-edge2-84fd9cfd98-79hzp\" deleted ``` 2. 
SuperEdge's master node carries the taint node-role.kubernetes.io/master:NoSchedule by default, so fabedge-cloud-agent does not start there and Pods on the master node cannot be reached. If needed, modify the fabedge-cloud-agent DaemonSet to tolerate this taint. Regardless of the cluster role, any cluster that uses Calico must add the Pod and Service CIDRs of all other clusters to its own Calico configuration, to prevent Calico from performing source address translation, which would break communication. For example: host (Calico) + member1 (Calico) + member2 (Flannel) - On the master node of the host (Calico) cluster, add the addresses of member1 (Calico) and member2 (Flannel) to the host cluster's Calico configuration. - On the master node of the member1 (Calico) cluster, add the addresses of host (Calico) and member2 (Flannel) to member1's Calico configuration. - No action is needed on member2 (Flannel). ```shell $ cat > cluster-cidr-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-cluster-cidr spec: blockSize: 26 cidr: 10.233.64.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f cluster-cidr-pool.yaml $ cat > service-cluster-ip-range-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-service-cluster-ip-range spec: blockSize: 26 cidr: 10.233.0.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml ``` > **cidr**: 
the cluster-cidr and service-cluster-ip-range reported by get_cluster_info.sh on the cluster being added 1. Some network environments have asymmetric routes, so rp_filter must be disabled on all cloud nodes ```shell $ sudo sh -c 'for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 > $i; done' # persist the setting $ sudo vi /etc/sysctl.conf net.ipv4.conf.default.rp_filter=0 net.ipv4.conf.all.rp_filter=0 ``` 2. Error: \"Error: cannot re-use a name that is still in use\". FabEdge is already installed; uninstall it with the following command and retry. ```shell $ helm uninstall -n fabedge fabedge release \"fabedge\" uninstalled ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "get-started.md"
- },
- "content": [
- {
- "heading": "Getting Started",
- "data": "[toc]"
- },
- {
- "heading": "Terminology",
- "data": "- **Cloud Cluster**: a standard K8s cluster, located at the cloud side, providing cloud computing capability.\n - **Edge Cluster**: a standard K8s cluster, located at the edge side, providing edge computing capability.\n - **Connector Node**: a K8s node at the cloud side; the connector is responsible for communication between the cloud side and the edge side. Since a connector node may carry heavy traffic, it's better not to run other programs on it.\n - **Edge Node**: a K8s node at the edge side, joined to the cloud cluster through an edge computing framework such as KubeEdge.\n - **Host Cluster**: a selected cloud cluster used to manage cross-cluster communication. The first cluster deployed by FabEdge must be the host cluster.\n - **Member Cluster**: an edge cluster registered into the host cluster, which reports its network information to the host cluster.\n - **Community**: a K8s CRD defined by FabEdge, with two types:\n - **Node Type**: defines the communication between nodes within the same cluster\n - **Cluster Type**: defines the cross-cluster communication"
- },
- {
- "heading": "Prerequisite",
- "data": "- Kubernetes (v1.22.5+)\n - Flannel (v0.14.0) or Calico (v3.16.5)\n - KubeEdge (>= v1.9.0) or SuperEdge(v0.8.0) or OpenYurt(>= v1.2.0)"
- },
- {
- "heading": "PS1: For Flannel, only VXLAN mode is supported. Dual-stack environments are supported.",
- "data": "*PS2: For Calico, only IPIP mode is supported. IPv4-only environments are supported.*"
- },
- {
- "heading": "Preparation",
- "data": "1. Make sure the following ports are allowed by the firewall or security group.\n - ESP(50), UDP/500, UDP/4500\n 2. Turn off firewalld if your machine has it.\n \n 3. Collect the configuration of the current cluster\n ```shell\n $ curl -s https://fabedge.github.io/helm-chart/scripts/get_cluster_info.sh | bash -\n This may take some time. Please wait.\n \n clusterDNS : 169.254.25.10\n clusterDomain : cluster.local\n cluster-cidr : 10.233.64.0/18\n service-cluster-ip-range : 10.233.0.0/18\n ```"
- },
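The values printed by `get_cluster_info.sh` are reused in later steps (clusterDNS and clusterDomain for edgecore, the CIDRs for the Calico IPPools). A minimal sketch of capturing them into shell variables; the `get_field` helper and the embedded sample output are illustrative, not part of the script itself:

```shell
# Sample "key : value" output in the format printed by get_cluster_info.sh;
# in a real cluster, capture the script output into this variable instead.
cluster_info='clusterDNS : 169.254.25.10
clusterDomain : cluster.local
cluster-cidr : 10.233.64.0/18
service-cluster-ip-range : 10.233.0.0/18'

# get_field is a hypothetical helper: look up one key, print its value.
get_field() {
  echo "$cluster_info" | awk -v k="$1" '$1 == k { print $3 }'
}

CLUSTER_DNS=$(get_field clusterDNS)
CLUSTER_CIDR=$(get_field cluster-cidr)
echo "$CLUSTER_DNS $CLUSTER_CIDR"
```

These variables can then be substituted into edgecore.yaml and the IPPool manifests instead of copying values by hand.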
- {
- "heading": "Deploy FabEdge on the host cluster",
- "data": "1. Use helm to add the fabedge repo:\n ```shell\n helm repo add fabedge https://fabedge.github.io/helm-chart\n ```\n \n 2. Deploy FabEdge\n ```shell\n $ curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\\n --cluster-name beijing \\\n --cluster-role host \\\n --cluster-zone beijing \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.47 \\\n --chart fabedge/fabedge\n ```\n > Note:\n > **--connectors**: The names of the k8s nodes where connectors run; those nodes will be labeled as node-role.kubernetes.io/connector\n > **--edges:** The names of the edge nodes; those nodes will be labeled as node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: The range of IPv4 addresses for edge pods; required if you use Calico. Make sure the value does not overlap with the cluster CIDR of your cluster.\n > **--connector-public-addresses**: IP addresses of the k8s nodes where connectors run\n \n *PS: The `quickstart.sh` script has more parameters; the example above uses only the necessary ones. Execute `quickstart.sh --help` to see all of them.*\n 3. 
Verify the deployment\n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2\n edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2\n master Ready master 5h29m v1.22.5\n node1 Ready connector 5h23m v1.22.5\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7dd5ccf489-5dc29 1/1 Running 0 24h\n fabedge-agent-bvnvj 2/2 Running 2 (23h ago) 24h\n fabedge-agent-c9bsx 2/2 Running 2 (23h ago) 24h\n fabedge-cloud-agent-lgqkw 1/1 Running 3 (24h ago) 24h\n fabedge-connector-54c78b5444-9dkt6 2/2 Running 0 24h\n fabedge-operator-767bc6c58b-rk7mr 1/1 Running 0 24h\n service-hub-7fd4659b89-h522c 1/1 Running 0 24h\n ```\n \n 4. Create a community for edges that need to communicate with each other\n ```shell\n $ cat > all-edges.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: beijing-edge-nodes # community name\n spec:\n members:\n - beijing.edge1 # format: {cluster name}.{edge node name}\n - beijing.edge2\n EOF\n \n $ kubectl apply -f all-edges.yaml\n ```\n 5. Update the [edge computing framework](#edge-computing-framework-dependent-configuration) dependent configuration\n 6. Update the [CNI](#cni-dependent-configurations) dependent configuration"
- },
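Checking the long pod listings by eye is error-prone. A small sketch that flags any pod whose READY count is incomplete or whose STATUS is not Running; the `not_ready` helper name and the sample listing are illustrative:

```shell
# Print the name of every pod that is not fully ready.
# Against a real cluster, feed it: kubectl get po -n fabedge --no-headers
not_ready() {
  awk '{ split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") print $1 }'
}

not_ready << 'EOF'
fabdns-7dd5ccf489-5dc29              1/1   Running   0   24h
fabedge-agent-bvnvj                  1/2   Running   2   24h
fabedge-operator-767bc6c58b-rk7mr    1/1   Running   0   24h
EOF
# prints: fabedge-agent-bvnvj
```

An empty result means every fabedge pod is up; any printed name is a pod to investigate with `kubectl describe`.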
- {
- "heading": "Deploy FabEdge in the member cluster",
- "data": "If you have any member cluster, register it in the host cluster first, then deploy FabEdge in it. Before you do that, make sure none of the host network and container network addresses of those clusters overlap.\n 1. In the **host cluster**, create an edge cluster named \"shanghai\". Get the token for registration.\n \n ```shell\n # Run in the host cluster\n $ cat > shanghai.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n metadata:\n name: shanghai # cluster name\n EOF\n \n $ kubectl apply -f shanghai.yaml\n \n $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}'\n eyJ------omitted-----9u0\n ```\n 2. Use helm to add the fabedge repo:\n \n ```shell\n helm repo add fabedge https://fabedge.github.io/helm-chart\n ```\n \n 3. Deploy FabEdge in the member cluster\n \n ```shell\n curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\\n --cluster-name shanghai \\\n --cluster-role member \\\n --cluster-zone shanghai \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.26 \\\n --chart fabedge/fabedge \\\n --service-hub-api-server https://10.22.46.47:30304 \\\n --operator-api-server https://10.22.46.47:30303 \\\n --init-token ey...Jh\n ```\n > Note:\n > **--connectors**: The names of the k8s nodes where connectors run; those nodes will be labeled as node-role.kubernetes.io/connector\n > **--edges:** The names of the edge nodes; those nodes will be labeled as node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: The range of IPv4 addresses for edge pods; required if you use Calico. 
Make sure the value does not overlap with the cluster CIDR of your cluster.\n > **--connector-public-addresses**: IP addresses of the k8s nodes where connectors run in the member cluster\n > **--init-token**: the token obtained when the member cluster was added in the host cluster\n > **--service-hub-api-server**: endpoint of serviceHub in the host cluster\n > **--operator-api-server**: endpoint of operator-api in the host cluster\n \n 4. Verify the deployment\n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2\n edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2\n master Ready master 5h29m v1.22.5\n node1 Ready connector 5h23m v1.22.5\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-m55h5 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```"
- },
- {
- "heading": "Enable multi-cluster communication",
- "data": "1. In the **host cluster**\uff0ccreate a community for all clusters which need to communicate with each other\n ```shell\n $ cat > community.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: all-clusters\n spec:\n members:\n - shanghai.connector # format: {cluster name}.connector\n - beijing.connector # format: {cluster name}.connector\n EOF\n \n $ kubectl apply -f community.yaml\n ```"
- },
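Cluster-type community members must use the `{cluster name}.connector` format shown in the manifest above. A small sketch that generates the member list from plain cluster names; `members_for` is an illustrative helper, not part of FabEdge:

```shell
# Emit one "  - <cluster>.connector" member line per cluster name,
# suitable for pasting under spec.members in a Community manifest.
members_for() {
  for c in "$@"; do
    echo "  - $c.connector"
  done
}

members_for shanghai beijing
# prints:
#   - shanghai.connector
#   - beijing.connector
```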
- {
- "heading": "Enable multi-cluster service discovery",
- "data": "Change the coredns configmap:\n ```shell\n $ kubectl -n kube-system edit cm coredns\n global {\n forward . 10.109.72.43 # cluster-ip of fab-dns service\n }\n .:53 {\n ...\n }\n ```"
- },
- {
- "heading": "Restart coredns",
- "data": "1. Restart coredns for the change to take effect"
- },
- {
- "heading": "Edge computing framework dependent configuration",
- "data": ""
- },
- {
- "heading": "KubeEdge",
- "data": ""
- },
- {
- "heading": "cloudcore",
- "data": "1. Enable dynamicController of cloudcore:\n ```\n dynamicController:\n enable: true\n ```\n This item is in the cloudcore configuration file cloudcore.yaml; its location depends on your environment.\n 2. Make sure cloudcore has permission to access **endpointslices** resources (only if cloudcore runs in the cluster):\n ```\n kubectl edit clusterrole cloudcore\n apiVersion: rbac.authorization.k8s.io/v1\n kind: ClusterRole\n metadata:\n labels:\n app.kubernetes.io/managed-by: Helm\n k8s-app: kubeedge\n kubeedge: cloudcore\n name: cloudcore\n rules:\n - apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - list\n - watch\n ```\n 3. Restart cloudcore."
- },
- {
- "heading": "edgecore",
- "data": "1. Update `edgecore` on all edge nodes (KubeEdge < v1.12.0)\n ```shell\n $ vi /etc/kubeedge/config/edgecore.yaml\n edged:\n enable: true\n ...\n networkPluginName: cni\n networkPluginMTU: 1500\n clusterDNS: 169.254.25.10\n clusterDomain: \"cluster.local\" # clusterDomain from get_cluster_info script output\n metaManager:\n metaServer:\n enable: true\n ```\n or (KubeEdge >= v1.12.2)\n ```yaml\n $ vi /etc/kubeedge/config/edgecore.yaml\n edged:\n enable: true\n ...\n networkPluginName: cni\n networkPluginMTU: 1500\n tailoredKubeletConfig:\n clusterDNS: [\"169.254.25.10\"]\n clusterDomain: \"cluster.local\" # clusterDomain from get_cluster_info script output\n metaManager:\n metaServer:\n enable: true\n ```\n 2. Restart `edgecore` on all edge nodes\n ```shell\n $ systemctl restart edgecore\n ```"
- },
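Which of the two edgecore forms applies depends on the KubeEdge version. A sketch of the version comparison used to pick one (pure shell via `sort -V`); the `version_ge` helper is illustrative:

```shell
# version_ge A B: succeeds when version A >= version B in version-sort order.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kubeedge_version=1.12.2   # e.g. taken from the output of: edgecore --version
if version_ge "$kubeedge_version" "1.12.0"; then
  echo "use tailoredKubeletConfig for clusterDNS/clusterDomain"
else
  echo "use top-level clusterDNS/clusterDomain under edged"
fi
```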
- {
- "heading": "CNI dependent Configurations",
- "data": ""
- },
- {
- "heading": "for Calico",
- "data": "Since v0.7.0, fabedge can manage calico ippools for CIDRs from other clusters; this function is enabled when you use quickstart.sh to install fabedge. If you prefer to configure ippools yourself, pass `--auto-keep-ippools false` when you install fabedge. If you let fabedge configure ippools, the following content can be skipped.\n Regardless of the cluster role, add all Pod and Service network segments of all other clusters to every cluster that uses Calico, which prevents Calico from doing source address translation.\n An example with the clusters: host (Calico) + member1 (Calico) + member2 (Flannel)\n * on the host (Calico) cluster, add the addresses of the member1 (Calico) cluster and the member2 (Flannel) cluster\n * on the member1 (Calico) cluster, add the addresses of the host (Calico) cluster and the member2 (Flannel) cluster\n * on the member2 (Flannel) cluster, no configuration is required.\n ```shell\n $ cat > cluster-cidr-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-cluster-cidr\n spec:\n blockSize: 26\n cidr: 10.233.64.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f cluster-cidr-pool.yaml\n \n $ cat > service-cluster-ip-range-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-service-cluster-ip-range\n spec:\n blockSize: 26\n cidr: 10.233.0.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml\n ```\n > **cidr** should be one of the following values:\n >\n > * edge-pod-cidr of the current cluster\n > * cluster-cidr parameter of another cluster\n > * service-cluster-ip-range of another cluster"
- },
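Since one disabled IPPool is needed per foreign CIDR, the two manifests above can be generated instead of hand-edited. A sketch of such a generator; the `make_ippool` helper name is illustrative, and the field values mirror the examples above:

```shell
# Emit a disabled Calico IPPool manifest for one foreign CIDR,
# mirroring the manifests above (no outgoing NAT, pool disabled, IPIP on).
make_ippool() {
  pool_name=$1 pool_cidr=$2
  cat << EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: $pool_name
spec:
  blockSize: 26
  cidr: $pool_cidr
  natOutgoing: false
  disabled: true
  ipipMode: Always
EOF
}

make_ippool cluster-beijing-cluster-cidr 10.233.64.0/18
# in a real cluster, pipe the output into: calicoctl.sh create -f -
```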
- {
- "heading": "More Documents",
- "data": "* This document introduces how to install FabEdge via a script that helps you try FabEdge quickly, but we recommend reading [Manually Install](./manually-install-zh.md), which may fit a production environment better. * FabEdge also provides other features; read the [FAQ](./FAQ.md) to find out more."
- },
- {
- "additional_info": "[toc] - **Cloud Cluster**:a standard k8s cluster, located at the cloud side, providing the cloud computing capability. - **Edge Cluster**: a standard k8s cluster, located at the edge side, providing the edge computing capability. - **Connector Node**: a k8s node, located at the cloud side, connector is responsible for communication between the cloud side and edge side. Since a connector node will have a large traffic burden, it's better not to run other programs on them. - **Edge Node**: a k8s node, located at the edge side, joining the cloud cluster using the framework, such as KubeEdge. - **Host Cluster**: a selective cloud cluster, used to manage cross-cluster communication. The 1st cluster deployed by FabEdge must be the host cluster. - **Member Cluster**: an edge cluster, registered into the host cluster, reports the network information to the host cluster. - **Community**: an K8S CRD defined by FabEdge\uff0c there are two types: - **Node Type**: to define the communication between nodes within the same cluster - **Cluster Type**: to define the cross-cluster communication - Kubernetes (v1.22.5+) - Flannel (v0.14.0) or Calico (v3.16.5) - KubeEdge (>= v1.9.0) or SuperEdge(v0.8.0) or OpenYurt(>= v1.2.0) *PS2: For calico, only IPIP mode is supported. Support IPv4 environment only.* 1. Make sure the following ports are allowed by the firewall or security group. - ESP(50)\uff0cUDP/500\uff0cUDP/4500 2. Turn off firewalld if your machine has it. 3. Collect the configuration of the current cluster ```shell $ curl -s https://fabedge.github.io/helm-chart/scripts/get_cluster_info.sh | bash - This may take some time. Please wait. clusterDNS : 169.254.25.10 clusterDomain : cluster.local cluster-cidr : 10.233.64.0/18 service-cluster-ip-range : 10.233.0.0/18 ``` 1. Use helm to add fabedge repo: ```shell helm repo add fabedge https://fabedge.github.io/helm-chart ``` 1. 
Deploy FabEdge ```shell $ curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region china \\ --connectors node1 \\ --edges edge1,edge2 \\ --edge-pod-cidr 10.233.0.0/16 \\ --connector-public-addresses 10.22.46.47 \\ --chart fabedge/fabedge ``` > Note: > **--connectors**: The names of k8s nodes in which connectors are located, those nodes will be labeled as node-role.kubernetes.io/connector > **--edges:** The names of edge nodes\uff0c those nodes will be labeled as node-role.kubernetes.io/edge > **--edge-pod-cidr**: The range of IPv4 addresses for the edge pod, it is required if you use Calico. Please make sure the value is not overlapped with cluster CIDR of your cluster. > **--connector-public-addresses**: IP addresses of k8s nodes which connectors are located *PS: The `quickstart.sh` script has more parameters\uff0c the example above only uses the necessary parameters, execute `quickstart.sh --help` to check all of them.* 2. 
Verify the deployment ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2 edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2 master Ready master 5h29m v1.22.5 node1 Ready connector 5h23m v1.22.5 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7dd5ccf489-5dc29 1/1 Running 0 24h fabedge-agent-bvnvj 2/2 Running 2 (23h ago) 24h fabedge-agent-c9bsx 2/2 Running 2 (23h ago) 24h fabedge-cloud-agent-lgqkw 1/1 Running 3 (24h ago) 24h fabedge-connector-54c78b5444-9dkt6 2/2 Running 0 24h fabedge-operator-767bc6c58b-rk7mr 1/1 Running 0 24h service-hub-7fd4659b89-h522c 1/1 Running 0 24h ``` 3. Create a community for edges that need to communicate with each other ```shell $ cat > all-edges.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: beijing-edge-nodes # community name spec: members: - beijing.edge1 # format:{cluster name}.{edge node name} - beijing.edge2 EOF $ kubectl apply -f all-edges.yaml ``` 4. Update the [edge computing framework](#edge-computing-framework-dependent-configuration) dependent configuration 5. Update the [CNI](#cni-dependent-configurations) dependent configuration If you have any member cluster, register it in the host cluster first, then deploy FabEdge in it. Before you that, you'd better to make sure none of the addresses of host network and container network of those clusters overlap. 1. 
In the **host cluster**\uff0ccreate an edge cluster named \"shanghai\". Get the token for registration. ```shell # Run in the host cluster $ cat > shanghai.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Cluster metadata: name: shanghai # cluster name EOF $ kubectl apply -f shanghai.yaml $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}' eyJ------omitted-----9u0 ``` 3. Use helm to add fabedge repo: ```shell helm repo add fabedge https://fabedge.github.io/helm-chart ``` 3. Deploy FabEdge in the member cluster ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name shanghai \\ --cluster-role member \\ --cluster-zone shanghai \\ --cluster-region china \\ --connectors node1 \\ --edges edge1,edge2 \\ --edge-pod-cidr 10.233.0.0/16 \\ --connector-public-addresses 10.22.46.26 \\ --chart fabedge/fabedge \\ --service-hub-api-server https://10.22.46.47:30304 \\ --operator-api-server https://10.22.46.47:30303 \\ --init-token ey...Jh ``` > Note: > **--connectors**: The names of k8s nodes in which connectors are located, those nodes will be labeled as node-role.kubernetes.io/connector > **--edges:** The names of edge nodes\uff0c those nodes will be labeled as node-role.kubernetes.io/edge > **--edge-pod-cidr**: The range of IPv4 addresses for the edge pod, if you use Calico, this is required. Please make sure the value is not overlapped with cluster CIDR of your cluster. > **--connector-public-addresses**: ip address of k8s nodes on which connectors are located in the member cluster > **--init-token**: token when the member cluster is added in the host cluster > **--service-hub-api-server**: endpoint of serviceHub in the host cluster > **--operator-api-server**: endpoint of operator-api in the host cluster 4. 
Verify the deployment ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2 edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2 master Ready master 5h29m v1.22.5 node1 Ready connector 5h23m v1.22.5 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-m55h5 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s ``` 1. In the **host cluster**, create a community for all clusters which need to communicate with each other ```shell $ cat > community.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: all-clusters spec: members: - shanghai.connector # format: {cluster name}.connector - beijing.connector # format: {cluster name}.connector EOF $ kubectl apply -f community.yaml ``` Change the coredns configmap: ```shell $ kubectl -n kube-system edit cm coredns global { forward . 10.109.72.43 # cluster-ip of fab-dns service } .:53 { ... } ``` 1. Restart coredns for the change to take effect 1. Enable dynamicController of cloudcore: ``` dynamicController: enable: true ``` This configuration item is in the cloudcore configuration file cloudcore.yaml; locate the file according to your environment. 2. 
Make sure cloudcore has permission to access **endpointslices** resources (only if cloudcore is running in the cluster): ``` kubectl edit clusterrole cloudcore apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: app.kubernetes.io/managed-by: Helm k8s-app: kubeedge kubeedge: cloudcore name: cloudcore rules: - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - get - list - watch ``` 3. Restart cloudcore. 1. Update `edgecore` on all edge nodes (KubeEdge < v1.12.0) ```shell $ vi /etc/kubeedge/config/edgecore.yaml edged: enable: true ... networkPluginName: cni networkPluginMTU: 1500 clusterDNS: 169.254.25.10 clusterDomain: \"cluster.local\" # clusterDomain from get_cluster_info script output metaManager: metaServer: enable: true ``` or (KubeEdge >= v1.12.2) ```yaml $ vi /etc/kubeedge/config/edgecore.yaml edged: enable: true ... networkPluginName: cni networkPluginMTU: 1500 tailoredKubeletConfig: clusterDNS: [\"169.254.25.10\"] clusterDomain: \"cluster.local\" # clusterDomain from get_cluster_info script output metaManager: metaServer: enable: true ``` 2. Restart `edgecore` on all edge nodes ```shell $ systemctl restart edgecore ``` Since v0.7.0, FabEdge can manage Calico ippools for CIDRs from other clusters; this function is enabled when you use quickstart.sh to install FabEdge. If you prefer to configure ippools yourself, pass `--auto-keep-ippools false` when you install FabEdge. If you let FabEdge configure ippools, the following content can be skipped. Regardless of the cluster role, on every cluster that uses Calico, add the Pod and Service network segments of all other clusters to its Calico configuration; this prevents Calico from performing source address translation, which would break communication. 
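To see which pools Calico currently manages, and later to confirm that cross-cluster pools were created with NAT disabled, you can inspect the IPPool resources (a minimal sketch using the `calicoctl.sh` wrapper shown elsewhere in this guide; `default-ipv4-ippool` is the name Calico typically gives its default pool and may differ in your cluster):

```shell
# List every IPPool with its CIDR and flags; pools added for other
# clusters should show NAT=false and DISABLED=true so Calico neither
# SNATs this traffic nor assigns pod IPs from those ranges
calicoctl.sh get ippool -o wide

# Show the full spec of a single pool for a detailed check
calicoctl.sh get ippool default-ipv4-ippool -o yaml
```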
An example with three clusters: host (Calico) + member1 (Calico) + member2 (Flannel) * on the host (Calico) cluster, add the addresses of the member1 (Calico) cluster and the member2 (Flannel) cluster * on the member1 (Calico) cluster, add the addresses of the host (Calico) cluster and the member2 (Flannel) cluster * on the member2 (Flannel) cluster, no configuration is required. ```shell $ cat > cluster-cidr-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-cluster-cidr spec: blockSize: 26 cidr: 10.233.64.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f cluster-cidr-pool.yaml $ cat > service-cluster-ip-range-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-service-cluster-ip-range spec: blockSize: 26 cidr: 10.233.0.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml ``` > **cidr** should be one of the following values: > > * edge-pod-cidr of the current cluster > * cluster-cidr parameter of another cluster > * service-cluster-ip-range of another cluster * This document introduces how to install FabEdge via a script, which helps you try FabEdge quickly, but we recommend reading [Manual Installation](./manually-install-zh.md), which fits production environments better. * FabEdge also provides other features; read the [FAQ](./FAQ.md) to find out more."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "get-started_zh.md"
- },
- "content": [
- {
- "heading": "FabEdge Quick Installation Guide",
- "data": "[toc]"
- },
- {
- "heading": "Concepts",
- "data": "- **Cloud cluster**: a standard K8s cluster located on the cloud side, providing cloud computing capability\n - **Connector node**: a standard K8s node on the cloud side, responsible for communication between the cloud and the edge. Because it may carry heavy traffic, avoid running other programs on it.\n - **Edge node**: an edge-side node that joins the cloud cluster via an edge computing framework such as KubeEdge, providing edge computing capability\n - **Edge cluster**: a standard K8s cluster located on the edge side, providing edge computing capability\n - **Host cluster**: a selected cloud cluster that manages cross-cluster communication for the other clusters. The first cluster FabEdge is deployed in must be the host cluster\n - **Member cluster**: an edge cluster registered to the host cluster, which reports its endpoint network configuration for multi-cluster communication\n - **Community**: a CRD defined by FabEdge, with two types:\n - **Node type**: defines communication between edge nodes within a cluster\n - **Cluster type**: defines communication between edge clusters"
- },
- {
- "heading": "Prerequisites",
- "data": "- Kubernetes (v1.22.5+)\n - Flannel (v0.14.0) or Calico (v3.16.5)\n - KubeEdge (>= v1.9.0) or SuperEdge (v0.8.0) or OpenYurt (>= v1.2.0)\n *Note 1: Flannel currently supports only VXLAN mode; dual-stack environments are supported.*\n *Note 2: Calico currently supports only IPIP mode with the kube backend datastore (the default); dual-stack environments are not supported.*"
- },
- {
- "heading": "Environment Preparation",
- "data": "1. Make sure the firewall or security group allows the following protocols and ports\n - ESP(50), UDP/500, UDP/4500\n \n 2. If firewalld is running on the machine, it is best to disable it\n \n 3. Collect the cluster configuration for later use\n \n ```shell\n $ curl -s https://fabedge.github.io/helm-chart/scripts/get_cluster_info.sh | bash -\n This may take some time. Please wait.\n \n clusterDNS : 169.254.25.10\n clusterDomain : cluster.local\n cluster-cidr : 10.233.64.0/18\n service-cluster-ip-range : 10.233.0.0/18\n ```"
- },
- {
- "heading": "Deploy FabEdge in the Host Cluster",
- "data": "1. Add the fabedge repo with helm:\n ```shell\n helm repo add fabedge https://fabedge.github.io/helm-chart\n ```\n \n 2. Install FabEdge\n ```shell\n $ curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\\n --cluster-name beijing \\\n --cluster-role host \\\n --cluster-zone beijing \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.47 \\\n --chart fabedge/fabedge\n ```\n > Note:\n > **--connectors**: hostnames of the nodes where connectors run; these nodes will be labeled node-role.kubernetes.io/connector\n > **--edges:** names of the edge nodes; these nodes will be labeled node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: the CIDR used to allocate addresses to edge Pods. Required when Calico is used; make sure it does not overlap with the cluster's cluster-cidr\n > **--connector-public-addresses**: public IP addresses of the connector nodes, which must be reachable from the edge nodes\n \n *Note: the `quickstart.sh` script has many parameters; the example above uses only the most common ones. Run `quickstart.sh --help` for the full list.*\n \n 3. Verify the deployment\n \n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2\n edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2\n master Ready master 5h29m v1.22.5\n node1 Ready connector 5h23m v1.22.5\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n nginx-proxy-node1 1/1 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7dd5ccf489-5dc29 1/1 Running 0 24h\n fabedge-agent-bvnvj 2/2 Running 2 (23h ago) 24h\n fabedge-agent-c9bsx 2/2 Running 2 (23h ago) 24h\n fabedge-cloud-agent-lgqkw 1/1 Running 3 (24h ago) 24h\n fabedge-connector-54c78b5444-9dkt6 2/2 Running 0 24h\n fabedge-operator-767bc6c58b-rk7mr 1/1 Running 0 24h\n service-hub-7fd4659b89-h522c 1/1 Running 0 24h\n ```\n \n 4. Create a Community for the edge nodes that need to communicate with each other\n \n ```shell\n $ cat > all-edges.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: beijing-edge-nodes\n spec:\n members:\n - beijing.edge1\n - beijing.edge2\n EOF\n \n $ kubectl apply -f all-edges.yaml\n ```\n 5. Update the configuration for your [edge computing framework](#\u8fb9\u7f18\u8ba1\u7b97\u6846\u67b6\u76f8\u5173\u7684\u914d\u7f6e)\n 6. Update the configuration for your [CNI](#CNI\u76f8\u5173\u7684\u914d\u7f6e)"
- },
- {
- "heading": "Deploy FabEdge in a Member Cluster",
- "data": "If there are member clusters, first register all of them in the host cluster, then deploy FabEdge in each member cluster. Before deploying, make sure the host network and container network addresses of the clusters do not overlap.\n 1. In the **host cluster**, add a member cluster named \"shanghai\" and get the token for registration\n ```shell\n $ cat > shanghai.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n metadata:\n name: shanghai # cluster name\n EOF\n \n $ kubectl apply -f shanghai.yaml\n \n $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}'\n eyJ------omitted-----9u0\n ```\n 2. Add the fabedge repo with helm:\n \n ```shell\n helm repo add fabedge https://fabedge.github.io/helm-chart\n ```\n \n 3. Install FabEdge in the **member cluster**\n \n ```shell\n curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\\n --cluster-name shanghai \\\n --cluster-role member \\\n --cluster-zone shanghai \\\n --cluster-region china \\\n --connectors node1 \\\n --edges edge1,edge2 \\\n --edge-pod-cidr 10.233.0.0/16 \\\n --connector-public-addresses 10.22.46.26 \\\n --chart fabedge/fabedge \\\n --service-hub-api-server https://10.22.46.47:30304 \\\n --operator-api-server https://10.22.46.47:30303 \\\n --init-token ey...Jh\n ```\n > Note:\n > **--connectors**: hostnames of the nodes where connectors run; these nodes will be labeled node-role.kubernetes.io/connector\n > **--edges:** names of the edge nodes; these nodes will be labeled node-role.kubernetes.io/edge\n > **--edge-pod-cidr**: the CIDR used to allocate addresses to edge Pods; required when Calico is used. Before v1.0.0, this value must not overlap with the cluster's cluster-cidr; since v1.0.0, it is recommended to be a subset of cluster-cidr, but it must not overlap with the value of CALICO_IPV4POOL_CIDR\n > **--connector-public-addresses**: IP addresses of the nodes where the member cluster's connectors run\n > **--service-hub-api-server**: address and port of the serviceHub service in the host cluster\n > **--operator-api-server**: address and port of the operator-api service in the host cluster\n > **--init-token**: the token obtained from the host cluster\n \n 4. Verify the deployment\n \n ```shell\n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2\n edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2\n master Ready master 5h29m v1.22.5\n node1 Ready connector 5h23m v1.22.5\n \n $ kubectl get po -n kube-system\n NAME READY STATUS RESTARTS AGE\n calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h\n calico-node-7dkwj 1/1 Running 0 16h\n calico-node-q95qp 1/1 Running 0 16h\n coredns-86978d8c6f-qwv49 1/1 Running 0 17h\n kube-apiserver-master 1/1 Running 0 17h\n kube-controller-manager-master 1/1 Running 0 17h\n kube-proxy-ls9d7 1/1 Running 0 17h\n kube-proxy-wj8j9 1/1 Running 0 17h\n kube-scheduler-master 1/1 Running 0 17h\n metrics-server-894c64767-f4bvr 2/2 Running 0 17h\n \n $ kubectl get po -n fabedge\n NAME READY STATUS RESTARTS AGE\n fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s\n fabedge-agent-m55h5 2/2 Running 0 8m18s\n fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s\n fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s\n fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s\n service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s\n ```"
- },
- {
- "heading": "Enable Multi-Cluster Communication",
- "data": "1. In the host cluster, add all clusters that need to communicate with each other into one Community\n ```shell\n # run on the master node\n $ cat > all-edges.yaml << EOF\n apiVersion: fabedge.io/v1alpha1\n kind: Community\n metadata:\n name: all-edges\n spec:\n members:\n - shanghai.connector # {cluster name}.connector\n - beijing.connector # {cluster name}.connector\n EOF\n \n $ kubectl apply -f all-edges.yaml\n ```"
- },
- {
- "heading": "Enable Multi-Cluster Service Discovery",
- "data": "Modify the cluster's coredns configuration:"
- },
- {
- "heading": "Add this configuration",
- "data": "```shell\n $ kubectl -n kube-system edit cm coredns\n global {\n forward . 10.109.72.43 # service IP of the fab-dns service\n }\n .:53 {\n ...\n }\n ```\n Restart coredns for the change to take effect."
- },
- {
- "heading": "Edge Computing Framework Dependent Configuration",
- "data": ""
- },
- {
- "heading": "KubeEdge",
- "data": ""
- },
- {
- "heading": "cloudcore",
- "data": "1. Enable the dynamicController of cloudcore:\n ```yaml\n dynamicController:\n enable: true\n ```\n This configuration item is in the cloudcore configuration file cloudcore.yaml; locate the file according to your environment.\n 2. Make sure cloudcore has permission to access endpointslices resources (only needed when cloudcore runs as a Pod):\n ```\n kubectl edit clusterrole cloudcore\n apiVersion: rbac.authorization.k8s.io/v1\n kind: ClusterRole\n metadata:\n labels:\n app.kubernetes.io/managed-by: Helm\n k8s-app: kubeedge\n kubeedge: cloudcore\n name: cloudcore\n rules:\n - apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - list\n - watch\n ```\n 3. Restart cloudcore"
- },
- {
- "heading": "edgecore",
- "data": "1. On **every edge node**, modify the edgecore configuration (KubeEdge < v1.12.0)\n ```shell\n $ vi /etc/kubeedge/config/edgecore.yaml\n edged:\n enable: true\n ...\n networkPluginName: cni\n networkPluginMTU: 1500\n clusterDNS: 169.254.25.10\n clusterDomain: \"cluster.local\" # clusterDomain from the get_cluster_info script output\n metaManager:\n metaServer:\n enable: true\n ```\n or (KubeEdge >= v1.12.2)\n ```yaml\n $ vi /etc/kubeedge/config/edgecore.yaml\n edged:\n enable: true\n ...\n networkPluginName: cni\n networkPluginMTU: 1500\n tailoredKubeletConfig:\n clusterDNS: [\"169.254.25.10\"]\n clusterDomain: \"cluster.local\" # clusterDomain from the get_cluster_info script output\n metaManager:\n metaServer:\n enable: true\n ```\n 2. On **every edge node**, restart edgecore\n ```shell\n $ systemctl restart edgecore\n ```"
- },
- {
- "heading": "CNI Dependent Configuration",
- "data": ""
- },
- {
- "heading": "If Calico Is Used",
- "data": "Since v0.7.0, FabEdge can automatically maintain Calico ippools; this feature is enabled automatically when FabEdge is installed with `quickstart.sh`. If you prefer to manage Calico ippools yourself, pass the `--auto-keep-ippools false` option at installation time. When automatic maintenance of Calico ippools is enabled, the following content can be skipped.\n Regardless of the cluster role, as long as a cluster uses Calico, add its own edge-pod-cidr and the Pod and Service CIDRs of all other clusters to its Calico configuration, to prevent Calico from performing source address translation, which would break communication.\n For example: host (Calico) + member1 (Calico) + member2 (Flannel)\n - On the master node of the host (Calico) cluster, add the addresses of member1 (Calico) and member2 (Flannel) to the host cluster's Calico configuration.\n - On the master node of the member1 (Calico) cluster, add the addresses of host (Calico) and member2 (Flannel) to the member1 cluster's Calico configuration.\n - No action is needed on member2 (Flannel).\n ```shell\n $ cat > cluster-cidr-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-cluster-cidr\n spec:\n blockSize: 26\n cidr: 10.233.64.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f cluster-cidr-pool.yaml\n \n $ cat > service-cluster-ip-range-pool.yaml << EOF\n apiVersion: projectcalico.org/v3\n kind: IPPool\n metadata:\n name: cluster-beijing-service-cluster-ip-range\n spec:\n blockSize: 26\n cidr: 10.233.0.0/18\n natOutgoing: false\n disabled: true\n ipipMode: Always\n EOF\n \n $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml\n ```\n > The **cidr** parameter must be one of the following values:\n >\n > * the edge-pod-cidr of the current cluster\n > * the cluster-cidr of another cluster\n > * the service-cluster-ip-range of another cluster"
- },
- {
- "heading": "More Resources",
- "data": "* This document installs FabEdge via a script so you can try it quickly, but we recommend reading [Manual Installation](./manually-install_zh.md), which is better suited for production deployments. * FabEdge has many other features, documented in the [FAQ](./FAQ_zh.md). * If you use multi-cluster communication, read [Creating a Global Service](https://github.com/FabEdge/fab-dns/blob/main/docs/how-to-create-globalservice.md) to learn how to access services across clusters."
- },
- {
- "additional_info": "[toc] - **\u4e91\u7aef\u96c6\u7fa4**\uff1a\u6807\u51c6\u7684K8S\u96c6\u7fa4\uff0c\u4f4d\u4e8e\u4e91\u7aef\uff0c\u63d0\u4f9b\u4e91\u7aef\u7684\u8ba1\u7b97\u80fd\u529b - **Connector\u8282\u70b9**\uff1a\u6807\u51c6\u7684k8s\u8282\u70b9\uff0c\u4f4d\u4e8e\u4e91\u7aef\uff0c\u8d1f\u8d23\u4e91\u7aef\u548c\u8fb9\u7f18\u7aef\u901a\u4fe1\uff0c\u56e0\u4e3a\u53ef\u80fd\u4f1a\u627f\u8f7d\u5f88\u591a\u6d41\u91cf\uff0c\u5c3d\u91cf\u4e0d\u8981\u5728\u8be5\u8282\u70b9\u8fd0\u884c\u5176\u4ed6\u7a0b\u5e8f\u3002 - **\u8fb9\u7f18\u8282\u70b9**\uff1a\u901a\u8fc7KubeEdge\u7b49\u8fb9\u7f18\u8ba1\u7b97\u6846\u67b6\uff0c\u52a0\u5165\u4e91\u7aef\u96c6\u7fa4\u7684\u8fb9\u7f18\u4fa7\u8282\u70b9\uff0c\u63d0\u4f9b\u8fb9\u7f18\u8ba1\u7b97\u80fd\u529b - **\u8fb9\u7f18\u96c6\u7fa4**\uff1a\u6807\u51c6\u7684K8S\u96c6\u7fa4\uff0c\u4f4d\u4e8e\u8fb9\u7f18\u4fa7\uff0c\u63d0\u4f9b\u8fb9\u7f18\u8ba1\u7b97\u80fd\u529b - **\u4e3b\u96c6\u7fa4**\uff1a\u4e00\u4e2a\u9009\u5b9a\u7684\u4e91\u7aef\u96c6\u7fa4\uff0c\u7528\u4e8e\u7ba1\u7406\u5176\u5b83\u96c6\u7fa4\u7684\u8de8\u96c6\u7fa4\u901a\u8baf\uff0cFabEdge\u90e8\u7f72\u7684\u7b2c\u4e00\u4e2a\u96c6\u7fa4\u5fc5\u987b\u662f\u4e3b\u96c6\u7fa4 - **\u6210\u5458\u96c6\u7fa4**\uff1a\u4e00\u4e2a\u8fb9\u7f18\u96c6\u7fa4\uff0c\u6ce8\u518c\u5230\u4e3b\u96c6\u7fa4\uff0c\u4e0a\u62a5\u672c\u96c6\u7fa4\u7aef\u70b9\u7f51\u7edc\u914d\u7f6e\u4fe1\u606f\u7528\u4e8e\u591a\u96c6\u7fa4\u901a\u8baf - **Community**\uff1aFabEdge\u5b9a\u4e49\u7684CRD\uff0c\u5206\u4e3a\u4e24\u7c7b\uff1a - **\u8282\u70b9\u7c7b\u578b**\uff1a\u5b9a\u4e49\u96c6\u7fa4\u5185\u591a\u4e2a\u8fb9\u7f18\u8282\u70b9\u4e4b\u95f4\u7684\u901a\u8baf - **\u96c6\u7fa4\u7c7b\u578b**\uff1a\u5b9a\u4e49\u591a\u4e2a\u8fb9\u7f18\u96c6\u7fa4\u4e4b\u95f4\u7684\u901a\u8baf - Kubernetes (v1.22.5+) - Flannel (v0.14.0 ) \u6216\u8005 Calico (v3.16.5) - KubeEdge \uff08>= v1.9.0\uff09\u6216\u8005 SuperEdge\uff08v0.8.0\uff09\u6216\u8005 OpenYurt\uff08 >= v1.2.0\uff09 *\u6ce81\uff1a 
Flannel\u76ee\u524d\u4ec5\u652f\u6301Vxlan\u6a21\u5f0f\uff0c\u652f\u6301\u53cc\u6808\u73af\u5883\u3002* *\u6ce82\uff1a Calico\u76ee\u524d\u4ec5\u652f\u6301IPIP\u6a21\u5f0f\uff0ckube backend\u5b58\u50a8(\u9ed8\u8ba4)\uff0c\u4e0d\u652f\u6301\u53cc\u6808\u73af\u5883\u3002* 1. \u786e\u4fdd\u9632\u706b\u5899\u6216\u5b89\u5168\u7ec4\u5141\u8bb8\u4ee5\u4e0b\u534f\u8bae\u548c\u7aef\u53e3 - ESP(50)\uff0cUDP/500\uff0cUDP/4500 2. \u5982\u679c\u673a\u5668\u4e0a\u6709firewalld\uff0c\u4e5f\u6700\u597d\u5173\u95ed 3. \u83b7\u53d6\u96c6\u7fa4\u914d\u7f6e\u4fe1\u606f\uff0c\u4f9b\u540e\u9762\u4f7f\u7528 ```shell $ curl -s https://fabedge.github.io/helm-chart/scripts/get_cluster_info.sh | bash - This may take some time. Please wait. clusterDNS : 169.254.25.10 clusterDomain : cluster.local cluster-cidr : 10.233.64.0/18 service-cluster-ip-range : 10.233.0.0/18 ``` 1. \u7528helm\u6dfb\u52a0fabedge repo\uff1a ```shell helm repo add fabedge https://fabedge.github.io/helm-chart ``` 1. \u5b89\u88c5FabEdge ```shell $ curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region china \\ --connectors node1 \\ --edges edge1,edge2 \\ --edge-pod-cidr 10.233.0.0/16 \\ --connector-public-addresses 10.22.46.47 \\ --chart fabedge/fabedge ``` > \u8bf4\u660e\uff1a > **--connectors**: connector\u6240\u5728\u8282\u70b9\u4e3b\u673a\u540d\uff0c\u6307\u5b9a\u7684\u8282\u70b9\u4f1a\u88ab\u6253\u4e0anode-role.kubernetes.io/connector\u6807\u7b7e > **--edges:** \u8fb9\u7f18\u8282\u70b9\u540d\u79f0\uff0c\u6307\u5b9a\u7684\u8282\u70b9\u4f1a\u88ab\u6253\u4e0anode-role.kubernetes.io/edge\u6807\u7b7e > **--edge-pod-cidr**: \u7528\u6765\u5206\u914d\u7ed9\u8fb9\u7f18Pod\u7684\u7f51\u6bb5, \u4f7f\u7528Calico\u65f6\u5fc5\u987b\u914d\u7f6e\uff0c\u5e76\u786e\u4fdd\u8fd9\u4e2a\u503c\u4e0d\u80fd\u8ddf\u96c6\u7fa4\u7684cluster-cidr\u53c2\u6570\u91cd\u53e0 > **--connector-public-addresses**: 
connector\u6240\u5728\u8282\u70b9\u7684\u516c\u7f51IP\u5730\u5740\uff0c\u4ece\u8fb9\u7f18\u8282\u70b9\u5fc5\u987b\u7f51\u7edc\u53ef\u8fbe *\u6ce8\uff1a`quickstart.sh`\u811a\u672c\u6709\u5f88\u591a\u53c2\u6570\uff0c\u4ee5\u4e0a\u5b9e\u4f8b\u4ec5\u4ee5\u6700\u5e38\u7528\u7684\u53c2\u6570\u4e3e\u4f8b\uff0c\u6267\u884c`quickstart.sh --help`\u67e5\u8be2\u3002* 3. \u786e\u8ba4\u90e8\u7f72\u6b63\u5e38 ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2 edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2 master Ready master 5h29m v1.22.5 node1 Ready connector 5h23m v1.22.5 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7dd5ccf489-5dc29 1/1 Running 0 24h fabedge-agent-bvnvj 2/2 Running 2 (23h ago) 24h fabedge-agent-c9bsx 2/2 Running 2 (23h ago) 24h fabedge-cloud-agent-lgqkw 1/1 Running 3 (24h ago) 24h fabedge-connector-54c78b5444-9dkt6 2/2 Running 0 24h fabedge-operator-767bc6c58b-rk7mr 1/1 Running 0 24h service-hub-7fd4659b89-h522c 1/1 Running 0 24h ``` 4. \u4e3a\u9700\u8981\u901a\u8baf\u7684\u8fb9\u7f18\u8282\u70b9\u521b\u5efaCommunity ```shell $ cat > all-edges.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: beijing-edge-nodes spec: members: - beijing.edge1 - beijing.edge2 EOF $ kubectl apply -f all-edges.yaml ``` 4. 
\u6839\u636e\u4f7f\u7528\u7684[\u8fb9\u7f18\u8ba1\u7b97\u6846\u67b6](#\u8fb9\u7f18\u8ba1\u7b97\u6846\u67b6\u76f8\u5173\u7684\u914d\u7f6e)\u4fee\u6539\u76f8\u5173\u914d\u7f6e 5. \u6839\u636e\u4f7f\u7528\u7684[CNI](#CNI\u76f8\u5173\u7684\u914d\u7f6e)\u4fee\u6539\u76f8\u5173\u914d\u7f6e \u5982\u679c\u6709\u6210\u5458\u96c6\u7fa4\uff0c\u5148\u5728\u4e3b\u96c6\u7fa4\u6ce8\u518c\u6240\u6709\u7684\u6210\u5458\u96c6\u7fa4\uff0c\u7136\u540e\u5728\u6bcf\u4e2a\u6210\u5458\u96c6\u7fa4\u90e8\u7f72FabEdge\u3002\u5728\u90e8\u7f72\u524d\uff0c\u8981\u6ce8\u610f\u786e\u4fdd\u5404\u4e2a\u96c6\u7fa4\u7684\u4e3b\u673a\u7f51\u7edc\u5730\u5740\u53ca\u5bb9\u5668\u7f51\u7edc\u5730\u5740\u4e0d\u8981\u91cd\u53e0\u3002 1. \u5728**\u4e3b\u96c6\u7fa4**\u6dfb\u52a0\u4e00\u4e2a\u540d\u5b57\u53eb\u201cshanghai\u201d\u7684\u6210\u5458\u96c6\u7fa4\uff0c\u83b7\u53d6Token\u4f9b\u6ce8\u518c\u4f7f\u7528 ```shell $ cat > shanghai.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Cluster metadata: name: shanghai # \u96c6\u7fa4\u540d\u5b57 EOF $ kubectl apply -f shanghai.yaml $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}' eyJ------\u7701\u7565\u5185\u5bb9-----9u0 ``` 3. \u7528helm\u6dfb\u52a0fabedge repo\uff1a ```shell helm repo add fabedge https://fabedge.github.io/helm-chart ``` 3. 
\u5728**\u6210\u5458\u96c6\u7fa4**\u5b89\u88c5FabEdage ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name shanghai \\ --cluster-role member \\ --cluster-zone shanghai \\ --cluster-region china \\ --connectors node1 \\ --edges edge1,edge2 \\ --edge-pod-cidr 10.233.0.0/16 \\ --connector-public-addresses 10.22.46.26 \\ --chart fabedge/fabedge \\ --service-hub-api-server https://10.22.46.47:30304 \\ --operator-api-server https://10.22.46.47:30303 \\ --init-token ey...Jh ``` > \u8bf4\u660e\uff1a > **--connectors**: connector\u6240\u5728\u8282\u70b9\u4e3b\u673a\u540d\uff0c\u6307\u5b9a\u7684\u8282\u70b9\u4f1a\u88ab\u6253\u4e0anode-role.kubernetes.io/connector\u6807\u7b7e > **--edges:** \u8fb9\u7f18\u8282\u70b9\u540d\u79f0\uff0c\u6307\u5b9a\u7684\u8282\u70b9\u4f1a\u88ab\u6253\u4e0anode-role.kubernetes.io/edge\u6807\u7b7e > **--edge-pod-cidr**: \u7528\u6765\u5206\u914d\u7ed9\u8fb9\u7f18Pod\u7684\u7f51\u6bb5, \u4f7f\u7528Calico\u65f6\u5fc5\u987b\u914d\u7f6e\u3002\u5728v1.0.0\u7248\u672c\u524d\uff0c\u8fd9\u4e2a\u503c\u4e0d\u80fd\u8ddf\u96c6\u7fa4\u7684cluster-cidr\u53c2\u6570\u91cd\u53e0\uff0c\u4ecev1.0.0\u8d77\uff0c\u5efa\u8bae\u8be5\u503c\u662fcluster-cidr\u7684\u5b50\u96c6\uff0c\u4f46\u4e0d\u8981\u8ddfCALICO_IPV4POOL_CIDR/\u91cc\u7684\u503c\u91cd\u53e0\u3002 > **--connector-public-addresses**: member\u96c6\u7fa4connectors\u6240\u5728\u8282\u70b9\u7684ip\u5730\u5740 > **--service-hub-api-server**: host\u96c6\u7fa4serviceHub\u670d\u52a1\u7684\u5730\u5740\u548c\u7aef\u53e3 > **--operator-api-server**: host\u96c6\u7fa4operator-api\u670d\u52a1\u7684\u5730\u5740\u548c\u7aef\u53e3 > **--init-token**: host\u96c6\u7fa4\u83b7\u53d6\u7684token 4. 
\u786e\u8ba4\u90e8\u7f72\u6b63\u5e38 ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2 edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2 master Ready master 5h29m v1.22.5 node1 Ready connector 5h23m v1.22.5 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-m55h5 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s ``` 1. \u5728\u4e3b\u96c6\u7fa4\uff0c\u628a\u6240\u6709\u987b\u8981\u901a\u8baf\u7684\u96c6\u7fa4\u52a0\u5165\u4e00\u4e2aCommunity ```shell # \u5728master\u8282\u70b9\u64cd\u4f5c $ cat > all-edges.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: all-edges spec: members: - shanghai.connector # {\u96c6\u7fa4\u540d\u79f0}.connector - beijing.connector # {\u96c6\u7fa4\u540d\u79f0}.connector EOF $ kubectl apply -f all-edges.yaml ``` \u4fee\u6539\u96c6\u7fa4\u7684coredns\u914d\u7f6e\uff1a ```shell $ kubectl -n kube-system edit cm coredns global { forward . 10.109.72.43 # fabdns\u7684service IP\u5730\u5740 } .:53 { ... } ``` 1. 
Enable cloudcore's dynamicController: ```yaml dynamicController: enable: true ``` This setting lives in cloudcore's configuration file cloudcore.yaml; locate the file in your own environment. 2. Make sure cloudcore has permission to access endpointslices resources (only needed when cloudcore runs as a Pod): ``` kubectl edit clusterrole cloudcore apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: app.kubernetes.io/managed-by: Helm k8s-app: kubeedge kubeedge: cloudcore name: cloudcore rules: - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - get - list - watch ``` 3. Restart cloudcore 1. Edit the edgecore configuration on **every edge node** (kubeedge < v1.12.0) ```shell $ vi /etc/kubeedge/config/edgecore.yaml edged: enable: true ... networkPluginName: cni networkPluginMTU: 1500 clusterDNS: 169.254.25.10 clusterDomain: \"cluster.local\" # the clusterDomain printed by the get_cluster_info script metaManager: metaServer: enable: true ``` or (kubeedge >= v1.12.2) ```yaml $ vi /etc/kubeedge/config/edgecore.yaml edged: enable: true ... networkPluginName: cni networkPluginMTU: 1500 tailoredKubeletConfig: clusterDNS: [\"169.254.25.10\"] clusterDomain: \"cluster.local\" # the clusterDomain printed by the get_cluster_info script metaManager: metaServer: enable: true ``` 2. 
Restart edgecore on **every edge node** ```shell $ systemctl restart edgecore ``` Since v0.7.0, fabedge can maintain calico ippools automatically; when fabedge is installed with `quickstart.sh`, this feature is enabled by default. If you prefer to manage calico ippools yourself, disable the feature at install time with the `--auto-keep-ippools false` option. With automatic ippool maintenance enabled, the rest of this section can be skipped. Whatever the cluster role is, as long as a cluster uses Calico, you must add this cluster's EdgePodCIDR and the Pod and Service CIDRs of all other clusters to the current cluster's Calico configuration; this prevents Calico from performing source NAT, which would break communication. For example: host (Calico) + member1 (Calico) + member2 (Flannel) - On the master node of the host (Calico) cluster, add the CIDRs of member1 (Calico) and member2 (Flannel) to the host cluster's Calico configuration. - On the master node of the member1 (Calico) cluster, add the CIDRs of host (Calico) and member2 (Flannel) to member1's Calico configuration. - No action is needed on member2 (Flannel). ```shell $ cat > cluster-cidr-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-cluster-cidr spec: blockSize: 26 cidr: 10.233.64.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f cluster-cidr-pool.yaml $ cat > service-cluster-ip-range-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-service-cluster-ip-range spec: blockSize: 26 cidr: 10.233.0.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml ``` > The **cidr** parameter is one of the following: > > * this cluster's edge-pod-cidr > * another cluster's cluster-cidr > * another cluster's service-cluster-ip-range * This article installs FabEdge by script, which lets you try it quickly, but we recommend reading [Manually Install](./manually-install_zh.md), which is better suited for production deployments. * FabEdge has many features; they are documented in the [FAQ](./FAQ_zh.md). * If you use the multi-cluster communication feature, read [Create a Global Service](https://github.com/FabEdge/fab-dns/blob/main/docs/how-to-create-globalservice.md) to learn how to access services across clusters.
- }
- ]
- },
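The notes above say that since v1.0.0 the `--edge-pod-cidr` value should be a subset of the cluster's cluster-cidr (while not overlapping the CALICO_IPV4POOL_CIDR value). A quick containment check can be done in pure bash before running `quickstart.sh`; the helper below is our own illustration, not part of FabEdge:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not from FabEdge): check whether one IPv4 CIDR is
# contained in another, to sanity-check --edge-pod-cidr against cluster-cidr.

ip2int() {  # dotted quad -> 32-bit integer
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

cidr_contains() {  # usage: cidr_contains OUTER INNER  -> prints yes/no
  local outer_ip=${1%/*} outer_len=${1#*/}
  local inner_ip=${2%/*} inner_len=${2#*/}
  # The inner prefix must be at least as long as the outer one...
  if (( inner_len < outer_len )); then echo no; return; fi
  # ...and share the outer network bits.
  local mask=$(( (0xFFFFFFFF << (32 - outer_len)) & 0xFFFFFFFF ))
  if (( ($(ip2int "$outer_ip") & mask) == ($(ip2int "$inner_ip") & mask) )); then
    echo yes
  else
    echo no
  fi
}

cidr_contains 10.233.0.0/16 10.233.64.0/18   # edge-pod-cidr inside cluster-cidr
cidr_contains 10.233.0.0/16 10.244.0.0/18    # a CIDR outside cluster-cidr
```

A "no" for the first call would mean the chosen edge-pod-cidr is not a subset of cluster-cidr and should be revised before installing.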
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "install-k8s-and-kubeedge.md"
- },
- "content": [
- {
- "heading": "Deploy the k8s cluster",
- "data": ""
- },
- {
- "heading": "Installation requirements",
- "data": "- Follow the [minimum requirements for kubeadm](https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin): Master && Node need at least 2C2G and no less than 10G of disk space;\n > \u26a0\ufe0fNote: use as clean a system as possible to avoid installation errors caused by other factors."
- },
- {
- "heading": "Supported operating systems",
- "data": "- **Ubuntu 18.04 (recommended)**\n - Ubuntu 20.04\n - CentOS 7.9\n - CentOS 7.8"
- },
- {
- "heading": "Deploy the k8s cluster",
- "data": ""
- },
- {
- "heading": "Install the k8s master node",
- "data": "Taking Ubuntu 18.04.5 as an example, run the following command:\n > \u26a0\ufe0fNote: if loading takes a long time, the network may be slow; please be patient\n If the following message appears, the installation succeeded:"
- },
- {
- "heading": "Add k8s edge nodes",
- "data": "Parameters:\n * ansible_hostname specifies the hostname of the edge node\n * ansible_user sets the username for the edge node\n * ansible_password sets the password for the edge node\n * ansible_host sets the IP address of the edge node\n For example, to set the edge node's hostname to edge1, username to root, password to pwd111 and IP to 10.22.45.26, run:\n ```shell\n sudo curl http://116.62.127.76/FabEdge/fabedge/main/deploy/cluster/add-edge-node.sh | bash -s -- --host-vars ansible_hostname=edge1 ansible_user=root ansible_password=pwd111 ansible_host=10.22.45.26\n ```\n If the following message appears, the installation succeeded:"
- },
- {
- "heading": "Verify the node was added successfully",
- "data": "> \u26a0\ufe0fNote: if the edge node has no password configured, you need to set up an ssh certificate instead. > > To configure the ssh certificate on the master node: > > ```shell > sudo docker exec -it installer bash > sudo ssh-copy-id {edge-node-IP} > ```"
- },
- {
- "additional_info": "- Follow the [minimum requirements for kubeadm](https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin): Master && Node need at least 2C2G and no less than 10G of disk space; > \u26a0\ufe0fNote: use as clean a system as possible to avoid installation errors caused by other factors. - **Ubuntu 18.04 (recommended)** - Ubuntu 20.04 - CentOS 7.9 - CentOS 7.8 Taking Ubuntu 18.04.5 as an example, run the following command: ```shell sudo curl http://116.62.127.76/FabEdge/fabedge/main/deploy/cluster/install-k8s.sh | bash - ``` > \u26a0\ufe0fNote: if loading takes a long time, the network may be slow; please be patient If the following message appears, the installation succeeded: ``` PLAY RECAP ********************************************************************* master : ok=15 changed=13 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` ```shell sudo curl http://116.62.127.76/FabEdge/fabedge/main/deploy/cluster/add-edge-node.sh | bash -s -- --host-vars ansible_hostname={hostname} ansible_user={username} ansible_password={password} ansible_host={edge-node-IP} ``` Parameters: * ansible_hostname specifies the hostname of the edge node * ansible_user sets the username for the edge node * ansible_password sets the password for the edge node * ansible_host sets the IP address of the edge node For example, to set the edge node's hostname to edge1, username to root, password to pwd111 and IP to 10.22.45.26, run: ```shell sudo curl http://116.62.127.76/FabEdge/fabedge/main/deploy/cluster/add-edge-node.sh | bash -s -- --host-vars ansible_hostname=edge1 ansible_user=root ansible_password=pwd111 ansible_host=10.22.45.26 ``` If the following message appears, the installation succeeded: ``` PLAY RECAP ********************************************************************* edge1 : ok=13 changed=10 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ``` ```shell sudo kubectl get node NAME STATUS ROLES AGE VERSION edge1 Ready agent,edge 22m v1.19.3-kubeedge-v1.5.0 master Ready master,node 32m v1.19.7 ``` > \u26a0\ufe0fNote: if the edge node has no password configured, you need to set up an ssh certificate instead. > > To configure the ssh certificate on the master node: > > ```shell > sudo docker exec -it installer bash > sudo ssh-copy-id {edge-node-IP} > ```"
- }
- ]
- },
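The `--host-vars` flags above take one `key=value` pair per setting. When several edge nodes are added, building the command from a small inventory keeps the parameters consistent; the `add-edge-node.sh` name and flag spellings come from the document, while the wrapper function and the sample inventory are our own hypothetical sketch:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper: render the add-edge-node.sh invocation for several
# edge nodes from a "hostname,user,password,ip" list. Only the --host-vars
# flag names are taken from the document; the rest is illustrative.

build_add_edge_cmd() {  # usage: build_add_edge_cmd hostname user password ip
  printf 'bash add-edge-node.sh --host-vars ansible_hostname=%s ansible_user=%s ansible_password=%s ansible_host=%s\n' \
    "$1" "$2" "$3" "$4"
}

# Print (not run) the command for each node in the sample inventory.
while IFS=, read -r name user pass ip; do
  build_add_edge_cmd "$name" "$user" "$pass" "$ip"
done << 'EOF'
edge1,root,pwd111,10.22.45.26
edge2,root,pwd222,10.22.45.27
EOF
```

In practice you would replace the `echo`-style rendering with an actual pipe of the downloaded script into bash, one node at a time.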
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "integrate-with-k3s.md"
- },
- "content": [
- {
- "heading": "Integrating FabEdge with K3S",
- "data": "K3S is a lightweight Kubernetes distribution, particularly well suited to edge computing and cloud-edge architectures; it can also be used as a standard K8S cluster.\n [FabEdge](https://github.com/FabEdge/fabedge) is a kubernetes-based container network solution designed specifically for edge computing scenarios. It conforms to the CNI specification, integrates seamlessly with any K8S environment, and solves problems such as cloud-edge collaboration, edge-edge collaboration and service discovery in edge computing scenarios."
- },
- {
- "heading": "Prerequisites",
- "data": "- This tutorial is based on a cloud master node (a cloud VM) and an edge node (an intranet server with Internet access); the network is one-way. Because K3S has extensive official adaptations, the steps also apply to other architectures; for example, you can build your own edge cluster from a cloud node plus various ARM devices such as a Raspberry Pi\n - (**Important**) the cloud VM has an intranet IP \"nodeip\" and a public IP \"publicip\"; substitute your own server IPs\n - This tutorial is based on the latest K3S release and uses docker as the container runtime\n - There are no particular environment requirements; Ubuntu or CentOS both work. The steps below assume the root user (K3S also supports deployment by other users with root privileges)\n - Make sure every edge node can reach the cloud node that runs the connector\n - If there is a firewall or security group, ESP(50), UDP/500 and UDP/4500 must be allowed"
- },
- {
- "heading": "Deploy the K3S cluster",
- "data": "1. Install the master; run on the master node"
- },
- {
- "heading": "Install docker",
- "data": ""
- },
- {
- "heading": "Install the k3s master, using publicip and the docker runtime",
- "data": ""
- },
- {
- "heading": "After a successful deployment you can check",
- "data": ""
- },
- {
- "heading": "Get the token used for nodes to join",
- "data": "2. Install the node; run on the intranet server"
- },
- {
- "heading": "Install docker",
- "data": ""
- },
- {
- "heading": "Join the cluster with the token and remove the flannel component",
- "data": ""
- },
- {
- "heading": "After successfully joining the cluster",
- "data": ""
- },
- {
- "heading": "Configure kubectl authentication with the apiserver",
- "data": "3. A cloud-plus-edge cluster is now assembled, but it has no network connectivity yet; next we need to deploy fabedge. Also note the cluster configuration, which uses the K3S defaults\n 4. Install helm"
- },
- {
- "heading": "Install and deploy FabEdge",
- "data": "1. Label the **edge nodes**\n 2. Pick the **cloud node** that will run the connector and label it\n 3. Prepare the values.yaml file\n > Notes:\n >\n > **connectorPublicAddresses**: address of the master node; make sure it is reachable from the edge nodes\n >\n > **connectorSubnets**: the CIDR used by services in the cloud cluster, 10.43.0.0/16 by default for K3S\n >\n > **edgeLabels**: the label added to the edge nodes earlier\n >\n > **cniType**: the type of cni plugin used in the cluster\n 4. Install fabedge\n > If you get the error \"Error: cannot re-use a name that is still in use\", the fabedge helm chart is already installed; uninstall it with the following command and retry.\n >```shell\n > $ helm uninstall -n fabedge fabedge\n > release \"fabedge\" uninstalled\n >```"
- },
- {
- "heading": "Verify after deployment",
- "data": "1. On the **management node**, verify that the FabEdge services are healthy 2. Note that FabEdge must be configured before business pods are deployed; pods created earlier must be deleted and recreated before they can communicate"
- },
- {
- "additional_info": "K3S is a lightweight Kubernetes distribution, particularly well suited to edge computing and cloud-edge architectures; it can also be used as a standard K8S cluster. [FabEdge](https://github.com/FabEdge/fabedge) is a kubernetes-based container network solution designed specifically for edge computing scenarios. It conforms to the CNI specification, integrates seamlessly with any K8S environment, and solves problems such as cloud-edge collaboration, edge-edge collaboration and service discovery in edge computing scenarios. - This tutorial is based on a cloud master node (a cloud VM) and an edge node (an intranet server with Internet access); the network is one-way. Because K3S has extensive official adaptations, the steps also apply to other architectures; for example, you can build your own edge cluster from a cloud node plus various ARM devices such as a Raspberry Pi - (**Important**) the cloud VM has an intranet IP \"nodeip\" and a public IP \"publicip\"; substitute your own server IPs - This tutorial is based on the latest K3S release and uses docker as the container runtime - There are no particular environment requirements; Ubuntu or CentOS both work. The steps below assume the root user (K3S also supports deployment by other users with root privileges) - Make sure every edge node can reach the cloud node that runs the connector - If there is a firewall or security group, ESP(50), UDP/500 and UDP/4500 must be allowed 1. Install the master; run on the master node ```shell curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn INSTALL_K3S_EXEC='--docker' sh -s - --node-external-ip publicip kubectl get po -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-7448499f4d-9ljl8 1/1 Running 0 113s kube-system metrics-server-86cbb8457f-4b8wn 1/1 Running 0 113s kube-system local-path-provisioner-5ff76fc89d-bwhcv 1/1 Running 0 113s kube-system helm-install-traefik-crd-tpvv8 0/1 Completed 0 114s kube-system svclb-traefik-8zfbw 2/2 Running 0 108s kube-system helm-install-traefik-szn6g 0/1 Completed 1 114s kube-system traefik-97b44b794-fhd5g 1/1 Running 0 108s cat /var/lib/rancher/k3s/server/node-token tokenxxxxxx ``` 2. Install the node; run on the intranet server ```shell curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn INSTALL_K3S_EXEC='--docker --flannel-backend none' K3S_URL=https://publicip:6443 K3S_TOKEN=tokenxxxxxx sh - kubectl get node NAME STATUS ROLES AGE VERSION master Ready control-plane,master 11m v1.21.5+k3s2 nodename Ready 25s v1.21.5+k3s2 mkdir -p $HOME/.kube sudo cp -f /etc/rancher/k3s/k3s.yaml $HOME/.kube/config ``` 3. A cloud-plus-edge cluster is now assembled, but it has no network connectivity yet; next we need to deploy fabedge. Also note the cluster configuration, which uses the K3S defaults ```shell cluster-cidr : 10.42.0.0/16 service-cluster-ip-range : 10.43.0.0/16 ``` 4. Install helm ```shell wget https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz tar -xf helm-v3.6.3-linux-amd64.tar.gz cp linux-amd64/helm /usr/bin/helm ``` 1. Label the **edge nodes** ```shell kubectl label no --overwrite=true nodename node-role.kubernetes.io/edge= node/nodename labeled kubectl get node NAME STATUS ROLES AGE VERSION master Ready control-plane,master 26m v1.21.5+k3s2 nodename Ready edge 15m v1.21.5+k3s2 ``` 2. Pick the **cloud node** that will run the connector and label it ```shell kubectl label no --overwrite=true master node-role.kubernetes.io/connector= node/master labeled kubectl get node NAME STATUS ROLES AGE VERSION master Ready connector,control-plane,master 30m v1.21.5+k3s2 nodename Ready edge 19m v1.21.5+k3s2 ``` 3. Prepare the values.yaml file ```shell operator: connectorPublicAddresses: publicip connectorSubnets: 10.43.0.0/16 edgeLabels: node-role.kubernetes.io/edge masqOutgoing: true enableProxy: false cniType: flannel ``` > Notes: > > **connectorPublicAddresses**: address of the master node; make sure it is reachable from the edge nodes > > **connectorSubnets**: the CIDR used by services in the cloud cluster, 10.43.0.0/16 by default for K3S > > **edgeLabels**: the label added to the edge nodes earlier > > **cniType**: the type of cni plugin used in the cluster 4. Install fabedge ```shell helm install fabedge --create-namespace -n fabedge -f values.yaml http://116.62.127.76/fabedge-0.3.0.tgz ``` > If you get the error \"Error: cannot re-use a name that is still in use\", the fabedge helm chart is already installed; uninstall it with the following command and retry. >```shell > $ helm uninstall -n fabedge fabedge > release \"fabedge\" uninstalled >``` 1. On the **management node**, verify that the FabEdge services are healthy ```shell kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE cert-xhmxj 0/1 Completed 0 3m7s fabedge-operator-5b97448c9b-zl5zg 1/1 Running 0 3m1s fabedge-agent-nodename 2/2 Running 0 2m58s connector-6fffdbbc64-4wz86 2/2 Running 0 3m1s ``` 2. Note that FabEdge must be configured before business pods are deployed; pods created earlier must be deleted and recreated before they can communicate"
- }
- ]
- },
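The values.yaml in step 3 above only needs the connector's public address changed per environment. A small render helper makes that substitution explicit; the function name and the placeholder address are our own assumptions, while the keys and defaults come from the document:

```shell
#!/usr/bin/env bash
# Sketch (ours, not FabEdge's): render the step-3 values.yaml with your real
# connector address substituted. The 203.0.113.10 address is a placeholder.

render_values() {  # usage: render_values <connector-public-ip>
  cat << EOF
operator:
  connectorPublicAddresses: $1
  connectorSubnets: 10.43.0.0/16
  edgeLabels: node-role.kubernetes.io/edge
  masqOutgoing: true
  enableProxy: false
  cniType: flannel
EOF
}

# Write the file that `helm install ... -f values.yaml` consumes.
render_values 203.0.113.10 > values.yaml
```

This keeps the only environment-specific field (connectorPublicAddresses) in one place instead of hand-editing the file on every cluster.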
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "MAINTAINERS.md"
- },
- "content": [
- {
- "heading": "FabEdge Maintainers",
- "data": ""
- },
- {
- "heading": "Current",
- "data": "| Maintainer | GitHub ID | Affiliation | Email |\n | -------------------- | ------------------------------------------------------- | ----------- |-----------------|\n | Jianbo Yan | [yanjianbo1983](https://github.com/yanjianbo1983) | BoCloud | yanjianbo@beyondcent.com |\n | Zhen Tang | [lostcharlie](https://github.com/lostcharlie) | [ISCAS](http://www.is.cas.cn/) | tangzhen12@otcaix.iscas.ac.cn |"
- },
- {
- "heading": "Emeritus Maintainers",
- "data": "* [haotaogeng](https://github.com/haotaogeng)"
- },
- {
- "additional_info": "| Maintainer | GitHub ID | Affiliation | Email | | -------------------- | ------------------------------------------------------- | ----------- |-----------------| | Jianbo Yan | [yanjianbo1983](https://github.com/yanjianbo1983) | BoCloud | yanjianbo@beyondcent.com | | Zhen Tang | [lostcharlie](https://github.com/lostcharlie) | [ISCAS](http://www.is.cas.cn/) | tangzhen12@otcaix.iscas.ac.cn | * [haotaogeng](https://github.com/haotaogeng)"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "manually-install.md"
- },
- "content": [
- {
- "heading": "Manually Install",
- "data": "This article will show you how to install FabEdge without `quickstart.sh`. The cluster uses KubeEdge and Calico and will be used as the host cluster. Some settings may not suit your case; you may need to change them according to your environment."
- },
- {
- "heading": "PS: For how to configure edge frameworks and DNS, please check out [Get Started](./get-started.md); we won't repeat it here.",
- "data": ""
- },
- {
- "heading": "Prerequisite",
- "data": "- Kubernetes (v1.22.5+)\n - Flannel (v0.14.0) or Calico (v3.16.5)\n - KubeEdge (>= v1.9.0) or SuperEdge (v0.8.0) or OpenYurt (>= v1.2.0)\n - Helm3"
- },
- {
- "heading": "Deploy FabEdge",
- "data": "1. Make sure the following ports are allowed by firewall or security group.\n - ESP(50), UDP/500, UDP/4500\n \n 2. Collect the configuration of the current cluster\n \n ```shell\n $ curl -s https://fabedge.github.io/helm-chart/scripts/get_cluster_info.sh | bash -\n This may take some time. Please wait.\n \n clusterDNS : 169.254.25.10\n clusterDomain : cluster.local\n cluster-cidr : 10.233.64.0/18\n service-cluster-ip-range : 10.233.0.0/18\n ```\n 3. Label connector nodes:\n ```shell\n $ kubectl label node --overwrite=true node1 node-role.kubernetes.io/connector=\n node/node1 labeled\n \n $ kubectl get no node1\n NAME STATUS ROLES AGE VERSION\n node1 Ready connector 22h v1.18.2\n ```\n 4. Label all edge nodes:\n ```shell\n $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge=\n node/edge1 labeled\n $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge=\n node/edge2 labeled\n \n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2\n edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2\n master Ready master 5h29m v1.22.5\n node1 Ready connector 5h23m v1.22.5\n ```\n 5. Make sure no CNI pods will run on edge nodes, taking Calico as an example:\n ```yaml\n cat > /tmp/cni-ds.patch.yaml << EOF\n spec:\n template:\n spec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n - key: node-role.kubernetes.io/edge\n operator: DoesNotExist\n EOF\n kubectl patch ds -n kube-system calico-node --patch-file /tmp/cni-ds.patch.yaml\n ```\n 6. Add fabedge repo using helm:\n ```shell\n helm repo add fabedge https://fabedge.github.io/helm-chart\n ```\n 7. Prepare your `values.yaml`"
- },
- {
- "heading": "P.S.: The code snippet above shows part of `values.yaml`; you can get the complete `values.yaml` example by executing `helm show values fabedge/fabedge`.",
- "data": "8. Deploy FabEdge\n ```shell\n helm install fabedge fabedge/fabedge -n fabedge --create-namespace -f values.yaml\n ```\n If the following pods are running, you've made it."
- },
- {
- "heading": "PS: fabedge-connector and fabedge-operator are necessary, fabedge-agent-XXX will only run on edge nodes, fabedge-cloud-agent will only run on non-connector and non-edge nodes, fabdns and service-hub will be installed only if fabdns.create is true",
- "data": ""
- },
- {
- "additional_info": "This article will show you how to install FabEdge without `quickstart.sh`. The cluster uses KubeEdge and Calico and will be used as the host cluster. Some settings may not suit your case; you may need to change them according to your environment. - Kubernetes (v1.22.5+) - Flannel (v0.14.0) or Calico (v3.16.5) - KubeEdge (>= v1.9.0) or SuperEdge (v0.8.0) or OpenYurt (>= v1.2.0) - Helm3 1. Make sure the following ports are allowed by firewall or security group. - ESP(50), UDP/500, UDP/4500 2. Collect the configuration of the current cluster ```shell $ curl -s https://fabedge.github.io/helm-chart/scripts/get_cluster_info.sh | bash - This may take some time. Please wait. clusterDNS : 169.254.25.10 clusterDomain : cluster.local cluster-cidr : 10.233.64.0/18 service-cluster-ip-range : 10.233.0.0/18 ``` 3. Label connector nodes: ```shell $ kubectl label node --overwrite=true node1 node-role.kubernetes.io/connector= node/node1 labeled $ kubectl get no node1 NAME STATUS ROLES AGE VERSION node1 Ready connector 22h v1.18.2 ``` 4. Label all edge nodes: ```shell $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge= node/edge1 labeled $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge= node/edge2 labeled $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2 edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2 master Ready master 5h29m v1.22.5 node1 Ready connector 5h23m v1.22.5 ``` 5. Make sure no CNI pods will run on edge nodes, taking Calico as an example: ```yaml cat > /tmp/cni-ds.patch.yaml << EOF spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/os operator: In values: - linux - key: node-role.kubernetes.io/edge operator: DoesNotExist EOF kubectl patch ds -n kube-system calico-node --patch-file /tmp/cni-ds.patch.yaml ``` 6. Add fabedge repo using helm: ```shell helm repo add fabedge https://fabedge.github.io/helm-chart ``` 7. Prepare your `values.yaml` ```yaml cluster: name: beijing role: host region: beijing zone: beijing cniType: \"calico\" # edgePodCIDR is not necessary if your CNI is flannel; # Avoid a CIDR overlapping the cluster-cidr argument of your cluster edgePodCIDR: \"10.234.64.0/18\" # It's the value of \"cluster-cidr\" fetched in Step 2 clusterCIDR: \"10.233.64.0/18\" # Usually connector should be accessible by fabedge-agent by port 500, # if you can't map public port 500, change this parameter. connectorPublicPort: 500 # If your edge nodes are behind NAT networks and are hard to establish # tunnels between them, set this parameter to true, this will let connector # work also as a mediator to help edge nodes to establish tunnels. connectorAsMediator: false connectorPublicAddresses: - 10.22.48.16 # It's the value of \"service-cluster-ip-range\" fetched in Step 2 serviceClusterIPRange: - 10.234.0.0/18 fabDNS: # If you need multi-cluster service discovery, set create to true create: true agent: args: # If your cluster uses superedge or openyurt, set them to false; # If your cluster uses kubeedge, it's better to set them to true ENABLE_PROXY: \"true\" ENABLE_DNS: \"true\" ``` 8. Deploy FabEdge ```shell helm install fabedge fabedge/fabedge -n fabedge --create-namespace -f values.yaml ``` If the following pods are running, you've made it. ```shell $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-bvnvj 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "manually-install_zh.md"
- },
- "content": [
- {
- "heading": "Manual Installation",
- "data": "This document shows how to install FabEdge without using the `quickstart.sh` script. The environment used here is a KubeEdge + Calico combination, and the cluster is the host cluster; some settings and parameters may not suit your environment, so adjust them as needed."
- },
- {
- "heading": "Note: For notes on edge frameworks and DNS configuration, see [Get Started](./get-started_zh.md); they are not repeated here.",
- "data": ""
- },
- {
- "heading": "Prerequisites",
- "data": "- Kubernetes (v1.22.5+)\n - Flannel (v0.14.0) or Calico (v3.16.5)\n - KubeEdge (>= v1.9.0), SuperEdge (v0.8.0), or OpenYurt (>= v1.2.0)\n - Helm3"
- },
- {
- "heading": "Install FabEdge",
- "data": "1. Make sure the firewall or security group allows the following protocols and ports\n - ESP(50), UDP/500, UDP/4500\n \n 2. Get the cluster configuration for later use\n \n ```shell\n $ curl -s https://fabedge.github.io/helm-chart/scripts/get_cluster_info.sh | bash -\n This may take some time. Please wait.\n \n clusterDNS : 169.254.25.10\n clusterDomain : cluster.local\n cluster-cidr : 10.233.64.0/18\n service-cluster-ip-range : 10.233.0.0/18\n ```\n 3. Label the connector node\n ```shell\n $ kubectl label node --overwrite=true node1 node-role.kubernetes.io/connector=\n node/node1 labeled\n \n $ kubectl get no node1\n NAME STATUS ROLES AGE VERSION\n node1 Ready connector 22h v1.18.2\n ```\n 4. Label the edge nodes (newly added nodes need this too):\n ```shell\n $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge=\n node/edge1 labeled\n $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge=\n node/edge2 labeled\n \n $ kubectl get no\n NAME STATUS ROLES AGE VERSION\n edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2\n edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2\n master Ready master 5h29m v1.22.5\n node1 Ready connector 5h23m v1.22.5\n ```\n 5. Make sure no CNI components run on edge nodes; Calico is used as an example\n ```yaml\n cat > /tmp/cni-ds.patch.yaml << EOF\n spec:\n template:\n spec:\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: kubernetes.io/os\n operator: In\n values:\n - linux\n - key: node-role.kubernetes.io/edge\n operator: DoesNotExist\n EOF\n kubectl patch ds -n kube-system calico-node --patch-file /tmp/cni-ds.patch.yaml\n ```\n 6. Add the fabedge repo with helm:\n ```shell\n helm repo add fabedge https://fabedge.github.io/helm-chart\n ```\n 7. Prepare `values.yaml`"
- },
- {
- "heading": "Note: The sample `values.yaml` is not complete; the full values file can be obtained by running `helm show values fabedge/fabedge`.",
- "data": "8. Install FabEdge\n ```shell\n helm install fabedge fabedge/fabedge -n fabedge --create-namespace -f values.yaml\n ```\n If the following pods are running normally, the installation succeeded"
- },
- {
- "heading": "Note: fabedge-connector and fabedge-operator must be present; fabedge-agent-XXX runs only on edge nodes; fabedge-cloud-agent runs only on nodes that are neither connector nor edge nodes; fabdns and service-hub are installed only when fabdns.create is true.",
- "data": ""
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "multi-cluster-communication.md"
- },
- "content": [
- {
- "heading": "Multi-Cluster Communication Design",
- "data": ""
- },
- {
- "heading": "Overview",
- "data": "Multi-cluster communication lets multiple heterogeneous clusters, distributed across different networks, access each other's nodes and services (currently only the cloud nodes of each cluster can communicate with one another).\n Among the communicating clusters there must be exactly one host cluster; the other clusters are member clusters of the host. Each member cluster registers with the host cluster the endpoint information it needs for external communication (currently limited to the Connector).\n Clusters cannot communicate directly; the user organizes the clusters that need to communicate into communities on the host cluster, and only clusters in the same community can access each other.\n The addresses of nodes, Pods, and services must not overlap across clusters."
- },
- {
- "heading": "Cluster Categories",
- "data": ""
- },
- {
- "heading": "By system",
- "data": "* Regular Kubernetes clusters\n * Edge computing clusters (KubeEdge/OpenYurt/SuperEdge)."
- },
- {
- "heading": "By topology",
- "data": "* All nodes are in one region and on one LAN: typically a Kubernetes cluster inside an enterprise or in the cloud running various services, or possibly a small cluster serving some facility.\n * The management nodes are in the cloud while the edge nodes are distributed across multiple locations on different networks."
- },
- {
- "heading": "By role",
- "data": "There can be multiple communicating clusters, but they fall into two roles: host cluster and member cluster, and there must be exactly one host cluster.\n The host cluster:\n * Issues certificates. The root certificates of all clusters are stored in the host cluster; member clusters can request certificates from it for their Connector and edge-node Agents.\n * Centrally stores the endpoint information exposed by each cluster, currently limited to the Connector.\n * Manages communities; clusters that need to communicate must be in the same community.\n * Distributes the endpoint information of the communicating clusters to the other clusters.\n A member cluster:\n * Requests endpoint certificates for itself from the host cluster.\n * Provides the host cluster with the endpoint information it exposes.\n * Obtains the endpoint information of other clusters through the host cluster and establishes communication with them."
- },
- {
- "heading": "Custom Resources",
- "data": "To manage multi-cluster communication, some CRDs need to be added or modified."
- },
- {
- "heading": "Community",
- "data": "Community was originally used to manage communication between edge nodes within a single cluster; it can now also be used to manage multiple clusters that need to communicate, though cross-cluster communication between edge nodes is not yet supported. Currently the member types within one community must be uniform: either edge nodes of the local cluster, or the connectors of the various clusters."
- },
- {
- "heading": "Cluster",
- "data": "Cluster is the data structure the host cluster uses to record the endpoint information of other clusters. It has the following fields:\n * name. The cluster name; each member cluster declares its identity with it when accessing the host cluster.\n * token. Used for member cluster initialization; the token is generated by the host cluster's operator.\n * endpoints. The information of all endpoints in the cluster that need to communicate with other clusters; the Connector endpoint must be uploaded. Each endpoint has:\n * Name. Must be unique; it is recommended that communicating clusters also keep their cluster domains unique.\n * PublicAddresses. The public addresses the cluster uses for external communication; they must be reachable by other clusters and by this cluster's edge nodes (if any).\n * Subnets. Mainly the cluster's PodCIDRs, but also includes the cluster's ServiceCIDR.\n * NodeCIDRs. The IP addresses of all cloud nodes inside the cluster.\n * Type. The endpoint type: Connector or EdgeNode."
- },
- {
- "heading": "Name Management",
- "data": "Every cluster has a Connector, and node names may also collide across clusters; but when communities are used, member names must be unique. To achieve this, each cluster rewrites the names when reporting its endpoint information, adding the cluster name as a prefix; for example, the Connector of cluster1 is renamed cluster1.connector."
- }
- ]
- },
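The Cluster and Community resources described in the preceding entry can be sketched as Kubernetes custom resources. This is an illustrative sketch only: the `fabedge.io/v1alpha1` group/version and the exact field names and casing are assumptions inferred from the field descriptions above, not copied from the FabEdge CRD definitions.

```yaml
# Hypothetical sketch; field names are inferred from the prose above,
# not taken from the actual FabEdge CRDs.
apiVersion: fabedge.io/v1alpha1
kind: Cluster
metadata:
  name: cluster1                    # identity the member cluster declares to the host
spec:
  token: "<generated-by-host-operator>"
  endpoints:
    - name: cluster1.connector      # cluster-name prefix keeps member names unique
      type: Connector
      publicAddresses:
        - 10.22.48.16               # must be reachable by other clusters and local edge nodes
      subnets:
        - 10.233.64.0/18            # PodCIDRs
        - 10.233.0.0/18             # ServiceCIDR
      nodeCIDRs:
        - 10.22.48.16               # cloud node IPs
---
# Clusters can communicate only when grouped into the same community.
apiVersion: fabedge.io/v1alpha1
kind: Community
metadata:
  name: connectors
spec:
  members:
    - cluster1.connector
    - cluster2.connector
```

Under this sketch, cluster1 and cluster2 can reach each other only because both connectors are members of the same Community, and the member types are uniform (all connectors), matching the community rule described above.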
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "README.md"
- },
- "content": [
- {
- "additional_info": "This source file was originally from: [k8s.io/kubernetes@/v1.21.0](https://github.com/kubernetes/kubernetes/tree/v1.21.0) We added two ipset types: `hash:ip` and `hash:net`"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "README_zh.md"
- },
- "content": [
- {
- "heading": "FabEdge",
- "data": "[](https://github.com/FabEdge/fabedge/actions/workflows/main.yml)\n [](https://github.com/fabedge/fabedge/releases)\n [](https://github.com/FabEdge/fabedge/blob/main/LICENSE)\n \n FabEdge is a container networking solution built on Kubernetes and focused on edge computing, supporting mainstream edge computing frameworks such as KubeEdge, SuperEdge, and OpenYurt. FabEdge aims to solve problems in edge computing scenarios such as complex network management, difficult cross-cluster communication, and the lack of topology-aware service discovery, enabling cloud-edge and edge-edge business collaboration. FabEdge supports weak networks such as 4G/5G and WiFi, and suits IoT, Internet of Vehicles, smart city, and similar scenarios.\n FabEdge supports not only edge nodes (remote nodes joined to the cluster through edge computing frameworks such as KubeEdge) but also edge clusters (independent Kubernetes clusters).\n FabEdge is a sandbox project hosted by CNCF."
- },
- {
- "heading": "Features",
- "data": "* **Automatic address management**: automatically manages edge node subnets and edge container IP addresses.\n * **Cloud-edge and edge-edge collaboration**: establishes secure cloud-edge and edge-edge tunnels, enabling business collaboration between cloud and edge and among edges.\n * **Flexible tunnel management**: using the custom resource \"Community\", edge-edge tunnels can be controlled flexibly according to business needs.\n * **Topology-aware routing**: uses the nearest available service endpoint, reducing service access latency."
- },
- {
- "heading": "Advantages",
- "data": "* **Standard**: complies with the Kubernetes CNI specification and works with any protocol and any application.\n * **Secure**: uses mature, stable IPSec technology and a secure certificate-based authentication system.\n * **Easy to use**: uses the Operator pattern to automatically manage addresses, nodes, certificates, and more, minimizing manual intervention."
- },
- {
- "heading": "How It Works",
- "data": " \n * Edge computing frameworks such as KubeEdge establish the control plane and join edge nodes to the cloud Kubernetes cluster, so that resources such as Pods can be scheduled to edge nodes; on top of this, FabEdge builds a layer-3 data forwarding plane so that Pods can communicate with each other directly.\n * The cloud side can be any Kubernetes cluster; currently supported CNIs include Calico and Flannel.\n * FabEdge uses secure tunnel technology; currently IPSec is supported.\n * FabEdge consists of these components: Operator, Connector, Agent, and Cloud-Agent.\n * The Operator runs on any cloud node. By watching Kubernetes resources such as nodes and services, it maintains for each Agent a ConfigMap containing the routing information that Agent needs (such as subnets, endpoints, and load-balancing rules) and a Secret containing the CA certificate, node certificate, and so on. The Operator also manages the Agents themselves, including creation, update, and deletion.\n * The Connector runs on selected cloud nodes and manages the tunnels initiated from edge nodes, forwarding traffic between edge nodes and the cloud cluster. Traffic forwarding from the Connector node to other non-Connector cloud nodes still relies on the cloud CNI.\n * The Cloud-Agent runs on all nodes in the cluster that are neither edge nor Connector nodes; it manages the routes from its node to remote peers.\n * The Agent runs on every edge node. Using the information in its own ConfigMap and Secret, it initiates tunnels to the cloud Connector and to other edge nodes, and manages the routes, load balancing, and iptables rules on its node.\n * Fab-DNS runs in all FabEdge clusters; by intercepting DNS requests, it provides topology-aware cross-cluster service discovery."
- },
- {
- "heading": "Differences Between FabEdge and Traditional CNIs",
- "data": "FabEdge complements existing CNIs such as Calico and Flannel; they solve different problems. As the architecture diagram above shows, traditional plugins such as Calico run in the cloud Kubernetes cluster and forward traffic between cloud nodes, while FabEdge, as a complement, extends the network to edge nodes and edge clusters, enabling cloud-edge and edge-edge communication."
- },
- {
- "heading": "User Manual",
- "data": "* [Get Started](docs/get-started_zh.md)\n * [User Guide](docs/user-guide.md)\n * [FAQ](docs/FAQ_zh.md)\n * [Uninstall FabEdge](docs/uninstall.md)\n * [Troubleshooting Guide](docs/troubleshooting-guide.md)"
- },
- {
- "heading": "Community Meetings",
- "data": "Bi-weekly meetings (Thursday afternoon in the first and fourth weeks of each month)\n Meeting resources:\n [Meeting notes and agenda](https://shimo.im/docs/Wwt9TdGqgVvpDHJt)\n [Meeting recordings: bilibili channel](https://space.bilibili.com/524926244?spm_id_from=333.1007.0.0)"
- },
- {
- "heading": "Contact",
- "data": "\u00b7 Email: fabedge@beyondcent.com\n \u00b7 Scan the QR code to join the WeChat group\n "
- },
- {
- "heading": "Contributing",
- "data": "If you are interested in becoming a contributor and joining the development of FabEdge, see [CONTRIBUTING](./CONTRIBUTING.md) for details on how to submit patches and the contribution workflow.\n Please make sure to read and observe our [Code of Conduct](./CODE_OF_CONDUCT.md)."
- },
- {
- "heading": "License",
- "data": "FabEdge is licensed under the Apache 2.0 License."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "roadmap.md"
- },
- "content": [
- {
- "heading": "FabEdge Roadmap",
- "data": ""
- },
- {
- "heading": "Q3 2021",
- "data": "- Support KubeEdge/SuperEdge/OpenYurt\n - Automatic management of node certificates\n - Air-gap installation\n - Support Flannel/Calico\n - Support IPv4\n - Support IPSec tunnels"
- },
- {
- "heading": "Q4 2021",
- "data": "- Support Edge Cluster\n - Support topology-aware service discovery"
- },
- {
- "heading": "v0.6.0",
- "data": "- Support IPv6\n - Implement a flexible way to configure fabedge-agent\n - Support auto networking of edge nodes in a LAN"
- },
- {
- "heading": "v0.7.0",
- "data": "- Change the naming strategy of fabedge-agent pods\n - Add commonName validation for fabedge-agent certificates\n - Implement node-specific configuration of fabedge-agent arguments\n - Let fabedge-agent configure the sysctl parameters it needs\n - Let fabedge-operator manage calico ippools for CIDRs"
- },
- {
- "heading": "v0.8.0",
- "data": "* Support setting strongswan's port\n * Support strongswan hole punching\n * Release fabctl, a CLI that helps diagnose networking problems\n * Integrate fabedge-agent with coredns and kube-proxy"
- },
- {
- "heading": "v0.9.0",
- "data": "* Implement connector high availability\n * Improve the dual-stack implementation\n * Improve iptables rule configuration (ensure the order of rules)"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "troubleshooting-guide.md"
- },
- "content": [
- {
- "heading": "FabEdge Troubleshooting Guide",
- "data": "English | [\u4e2d\u6587](troubleshooting-guide_zh.md)\n [toc]"
- },
- {
- "heading": "Verify Kubernetes is normal",
- "data": ""
- },
- {
- "heading": "Verify FabEdge is normal",
- "data": "If the FabEdge service is abnormal, check the pod logs."
- },
- {
- "heading": "Execute on the master node, using the correct pod name",
- "data": ""
- },
- {
- "heading": "Verify the tunnel is successfully established",
- "data": ""
- },
- {
- "heading": "Execute on master node.",
- "data": "If the tunnel cannot be established, check whether the firewall opens the related ports. For details, see the [installation guide](get-started.md)."
- },
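The tunnel check above comes down to confirming that every IKE SA listed by `swanctl --list-sas` reports ESTABLISHED. A minimal sketch of that check (an illustrative helper, not part of FabEdge; the second SA in the sample is invented for contrast):

```python
import re

def established_tunnels(swanctl_output: str) -> dict:
    """Map each IKE SA name in `swanctl --list-sas` output to True when ESTABLISHED."""
    status = {}
    # SA header lines look like: "edge1: #20, ESTABLISHED, IKEv2, ..."
    for line in swanctl_output.splitlines():
        m = re.match(r"\s*(\S+):\s*#\d+,\s*(\w+),", line)
        if m:
            name, state = m.groups()
            status[name] = state == "ESTABLISHED"
    return status

sample = """edge1: #20, ESTABLISHED, IKEv2, 107a2d290f61e317_i b427bcf8f64c43a1_r
  local  'C=CN, O=fabedge.io, CN=cloud-connector' @ 10.20.8.23[4500]
edge2: #21, CONNECTING, IKEv2, 0011223344556677_i 8899aabbccddeeff_r"""
print(established_tunnels(sample))  # → {'edge1': True, 'edge2': False}
```

Any SA stuck in a state other than ESTABLISHED points back to the firewall/port checks described here.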
- {
- "heading": "Check routing table and xfrm policy",
- "data": ""
- },
- {
- "heading": "Run on the connector node.",
- "data": ""
- },
- {
- "heading": "Run on edge nodes",
- "data": ""
- },
- {
- "heading": "Run on non-connector nodes in the cloud",
- "data": "> Note: If the **edge node** has interfaces such as cni, flannel residue exists and you need to restart the node."
- },
- {
- "heading": "Check iptables",
- "data": ""
- },
- {
- "heading": "Run on the connector node",
- "data": ""
- },
- {
- "heading": "Run on edge nodes.",
- "data": "Check whether the environment has host firewall DROP rules, especially INPUT and FORWARD chains."
- },
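The DROP-rule check above can be automated by scanning `iptables -S` output for rules that jump to DROP on the INPUT or FORWARD chains. A small illustrative sketch (the sample ruleset is hypothetical):

```python
def drop_rules(iptables_s: str, chains=("INPUT", "FORWARD")) -> list:
    """Return rules from `iptables -S` output that jump to DROP on the given chains."""
    hits = []
    for line in iptables_s.splitlines():
        parts = line.split()
        # Appended rules look like: -A INPUT ... -j DROP
        if len(parts) >= 2 and parts[0] == "-A" and parts[1] in chains and "-j" in parts:
            j = parts.index("-j")
            if j + 1 < len(parts) and parts[j + 1] == "DROP":
                hits.append(line)
    return hits

sample = """-P INPUT ACCEPT
-A INPUT -s 10.0.0.0/8 -j ACCEPT
-A FORWARD -i eth0 -j DROP
-A OUTPUT -j DROP"""
print(drop_rules(sample))  # → ['-A FORWARD -i eth0 -j DROP']
```

Any rule it flags is a candidate for blocking FabEdge tunnel traffic and should be reviewed.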
- {
- "heading": "Verify the certificates",
- "data": "FabEdge-related certificates, including the CA, Connector, and Agent certificates, are stored in Secrets and managed automatically by the Operator. If a certificate-related error occurs, you can verify them as follows."
- },
- {
- "heading": "Execute on the master node.",
- "data": ""
- },
- {
- "heading": "Start a container for cert.",
- "data": ""
- },
- {
- "heading": "Get the ID of the container you just started.",
- "data": ""
- },
- {
- "heading": "Copy the executable to the host.",
- "data": ""
- },
- {
- "heading": "Check out related secret.",
- "data": ""
- },
- {
- "heading": "Verify related secret.",
- "data": ""
- },
- {
- "heading": "Collect diagnostic information",
- "data": "You can use the script below to quickly collect the diagnostic information. If support is needed, please submit the generated files."
- },
- {
- "heading": "Run on master node",
- "data": ""
- },
- {
- "heading": "Run on connector node",
- "data": ""
- },
- {
- "heading": "Run on edge node",
- "data": ""
- },
- {
- "heading": "Run on other nodes",
- "data": ""
- },
- {
- "additional_info": "English | [\u4e2d\u6587](troubleshooting-guide_zh.md)\n [toc]\n ```shell\n kubectl get po -n kube-system\n kubectl get no\n ```\n If the FabEdge service is abnormal, check the pod logs.\n ```shell\n kubectl get po -n fabedge\n kubectl describe po -n fabedge fabedge-operator-xxx\n kubectl describe po -n fabedge fabedge-connector-xxx\n kubectl describe po -n fabedge fabedge-agent-xxx\n kubectl logs --tail=50 -n fabedge fabedge-operator-5fc5c4b56-glgjh\n kubectl logs --tail=50 -n fabedge fabedge-connector-68b6867bbf-m66vt -c strongswan\n kubectl logs --tail=50 -n fabedge fabedge-connector-68b6867bbf-m66vt -c connector\n kubectl logs --tail=50 -n fabedge fabedge-agent-edge1 -c strongswan\n kubectl logs --tail=50 -n fabedge fabedge-agent-edge1 -c agent\n ```\n ```shell\n kubectl exec -n fabedge fabedge-connector-xxx -c strongswan -- swanctl --list-conns\n kubectl exec -n fabedge fabedge-connector-xxx -c strongswan -- swanctl --list-sas\n kubectl exec -n fabedge fabedge-agent-xxx -c strongswan -- swanctl --list-conns\n kubectl exec -n fabedge fabedge-agent-xxx -c strongswan -- swanctl --list-sas\n ```\n If the tunnel cannot be established, check whether the firewall opens the related ports. For details, see the [install](get-started.md).\n ```shell\n ip l\n ip r\n ip r s t 220\n ip x p\n ip x s\n ip l\n ip r\n ip r s t 220\n ip x p\n ip x s\n ip l\n ip r\n ```\n > Note: If the **edge node** has interfaces such as cni, flannel residue exists and you need to restart the node.\n ```shell\n iptables -S\n iptables -L -nv --line-numbers\n iptables -t nat -S\n iptables -t nat -L -nv --line-numbers\n iptables -S\n iptables -L -nv --line-numbers\n iptables -t nat -S\n iptables -t nat -L -nv --line-numbers\n ```\n Check whether the environment has host firewall DROP rules, especially on the INPUT and FORWARD chains.\n FabEdge-related certificates, including the CA, Connector, and Agent certificates, are stored in Secrets and managed automatically by the Operator. If a certificate-related error occurs, you can verify them as follows.\n ```shell\n docker run fabedge/cert\n docker ps -a | grep cert\n 65ceb57d6656 fabedge/cert \"/usr/local/bin/fabe\u2026\" 15 seconds ago\n docker cp 65ceb57d6656:/usr/local/bin/fabedge-cert .\n kubectl get secret -n fabedge\n NAME TYPE DATA AGE\n api-server-tls kubernetes.io/tls 4 3d22h\n cert-token-csffn kubernetes.io/service-account-token 3 3d22h\n connector-tls kubernetes.io/tls 4 3d22h\n default-token-rq9mv kubernetes.io/service-account-token 3 3d22h\n fabedge-agent-tls-edge1 kubernetes.io/tls 4 3d22h\n fabedge-agent-tls-edge2 kubernetes.io/tls 4 3d22h\n fabedge-ca Opaque 2 3d22h\n fabedge-operator-token-tb8qb kubernetes.io/service-account-token 3 3d22h\n ./fabedge-cert verify -s connector-tls\n Your cert is ok\n ./fabedge-cert verify -s fabedge-agent-tls-edge1\n Your cert is ok\n ```\n You can use the script below to quickly collect the diagnostic information. If support is needed, please submit the generated files.\n ```shell\n curl http://116.62.127.76/checker.sh | bash -s master | tee /tmp/master-checker.log\n curl http://116.62.127.76/checker.sh | bash -s connector | tee /tmp/connector-checker.log\n curl http://116.62.127.76/checker.sh | bash -s edge | tee /tmp/edge-checker.log\n curl http://116.62.127.76/checker.sh | bash | tee /tmp/node-checker.log\n ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "troubleshooting-guide_zh.md"
- },
- "content": [
- {
- "heading": "FabEdge Troubleshooting Guide",
- "data": "[toc]"
- },
- {
- "heading": "Verify the Kubernetes environment is normal",
- "data": "If Kubernetes is abnormal, troubleshoot on your own until the problem is resolved, then continue to the next step"
- },
- {
- "heading": "Verify the FabEdge services are normal",
- "data": "If the FabEdge services are abnormal, check the related logs"
- },
- {
- "heading": "Execute on the master node, using the correct pod name",
- "data": ""
- },
- {
- "heading": "Verify the tunnel is established successfully",
- "data": ""
- },
- {
- "heading": "Execute on the master node",
- "data": "If the tunnel cannot be established, verify that the firewall opens the related ports; see the [installation guide](get-started.md) for details"
- },
- {
- "heading": "Check the routing table",
- "data": ""
- },
- {
- "heading": "Run on the connector node",
- "data": ""
- },
- {
- "heading": "Run on edge nodes",
- "data": ""
- },
- {
- "heading": "Run on non-connector nodes in the cloud",
- "data": "If the **edge node** has interfaces such as cni, flannel residue exists and you need to restart the **edge node**"
- },
- {
- "heading": "Check iptables",
- "data": ""
- },
- {
- "heading": "Run on the connector node",
- "data": ""
- },
- {
- "heading": "Run on edge nodes",
- "data": "Check whether the environment has host firewall DROP rules, especially on the INPUT and FORWARD chains"
- },
- {
- "heading": "Verify the certificates",
- "data": "FabEdge-related certificates, including the CA, Connector, and Agent certificates, are stored in Secrets and maintained automatically by the Operator. If a certificate-related error occurs, you can verify them manually as follows."
- },
- {
- "heading": "Execute on the master node",
- "data": ""
- },
- {
- "heading": "Start a cert container",
- "data": ""
- },
- {
- "heading": "Get the ID of the container just started",
- "data": ""
- },
- {
- "heading": "Copy the executable to the host",
- "data": ""
- },
- {
- "heading": "Check the related Secrets",
- "data": ""
- },
- {
- "heading": "Verify the related Secrets",
- "data": ""
- },
- {
- "heading": "Troubleshooting tools",
- "data": "You can also use the script below to quickly collect the information above; if community support is needed, please submit the generated files."
- },
- {
- "heading": "Execute on the master node:",
- "data": ""
- },
- {
- "heading": "Execute on the connector node:",
- "data": ""
- },
- {
- "heading": "Execute on edge nodes:",
- "data": ""
- },
- {
- "heading": "Execute on other nodes:",
- "data": ""
- },
- {
- "additional_info": "[toc]\n ```shell\n kubectl get po -n kube-system\n kubectl get no\n ```\n If Kubernetes is abnormal, troubleshoot on your own until the problem is resolved, then continue to the next step.\n If the FabEdge services are abnormal, check the related logs.\n ```shell\n kubectl get po -n fabedge\n kubectl describe po -n fabedge fabedge-operator-xxx\n kubectl describe po -n fabedge fabedge-connector-xxx\n kubectl describe po -n fabedge fabedge-agent-xxx\n kubectl logs --tail=50 -n fabedge fabedge-operator-5fc5c4b56-glgjh\n kubectl logs --tail=50 -n fabedge fabedge-connector-68b6867bbf-m66vt -c strongswan\n kubectl logs --tail=50 -n fabedge fabedge-connector-68b6867bbf-m66vt -c connector\n kubectl logs --tail=50 -n fabedge fabedge-agent-edge1 -c strongswan\n kubectl logs --tail=50 -n fabedge fabedge-agent-edge1 -c agent\n ```\n ```shell\n kubectl exec -n fabedge fabedge-connector-xxx -c strongswan -- swanctl --list-conns\n kubectl exec -n fabedge fabedge-connector-xxx -c strongswan -- swanctl --list-sas\n kubectl exec -n fabedge fabedge-agent-xxx -c strongswan -- swanctl --list-conns\n kubectl exec -n fabedge fabedge-agent-xxx -c strongswan -- swanctl --list-sas\n ```\n If the tunnel cannot be established, verify that the firewall opens the related ports; see the [installation guide](get-started.md) for details.\n ```shell\n ip l\n ip r\n ip r s t 220\n ip x p\n ip x s\n ip l\n ip r\n ip r s t 220\n ip x p\n ip x s\n ip l\n ip r\n ```\n If the **edge node** has interfaces such as cni, flannel residue exists and you need to restart the **edge node**.\n ```shell\n iptables -S\n iptables -L -nv --line-numbers\n iptables -t nat -S\n iptables -t nat -L -nv --line-numbers\n iptables -S\n iptables -L -nv --line-numbers\n iptables -t nat -S\n iptables -t nat -L -nv --line-numbers\n ```\n Check whether the environment has host firewall DROP rules, especially on the INPUT and FORWARD chains.\n FabEdge-related certificates, including the CA, Connector, and Agent certificates, are stored in Secrets and maintained automatically by the Operator. If a certificate-related error occurs, you can verify them manually as follows.\n ```shell\n docker run fabedge/cert\n docker ps -a | grep cert\n 65ceb57d6656 fabedge/cert \"/usr/local/bin/fabe\u2026\" 15 seconds ago\n docker cp 65ceb57d6656:/usr/local/bin/fabedge-cert .\n kubectl get secret -n fabedge\n NAME TYPE DATA AGE\n api-server-tls kubernetes.io/tls 4 3d22h\n cert-token-csffn kubernetes.io/service-account-token 3 3d22h\n connector-tls kubernetes.io/tls 4 3d22h\n default-token-rq9mv kubernetes.io/service-account-token 3 3d22h\n fabedge-agent-tls-edge1 kubernetes.io/tls 4 3d22h\n fabedge-agent-tls-edge2 kubernetes.io/tls 4 3d22h\n fabedge-ca Opaque 2 3d22h\n fabedge-operator-token-tb8qb kubernetes.io/service-account-token 3 3d22h\n ./fabedge-cert verify -s connector-tls\n Your cert is ok\n ./fabedge-cert verify -s fabedge-agent-tls-edge1\n Your cert is ok\n ```\n You can also use the script below to quickly collect the information above; if community support is needed, please submit the generated files.\n ```shell\n curl http://116.62.127.76/checker.sh | bash -s master | tee /tmp/master-checker.log\n curl http://116.62.127.76/checker.sh | bash -s connector | tee /tmp/connector-checker.log\n curl http://116.62.127.76/checker.sh | bash -s edge | tee /tmp/edge-checker.log\n curl http://116.62.127.76/checker.sh | bash | tee /tmp/node-checker.log\n ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "uninstall.md"
- },
- "content": [
- {
- "heading": "Uninstall FabEdge",
- "data": "English | [\u4e2d\u6587](uninstall_zh.md)\n 1. Delete helm release\n 2. Delete other resources\n 3. Delete namespace\n 4. Delete all FabEdge configuration files from all edge nodes\n 5. Delete all fabedge images on all nodes\n 6. Delete CustomResourceDefinitions"
- },
- {
- "additional_info": "English | [\u4e2d\u6587](uninstall_zh.md) 1. Delete helm release ``` $ helm uninstall fabedge -n fabedge ``` 2. Delete other resources ``` $ kubectl -n fabedge delete cm --all $ kubectl -n fabedge delete pods --all $ kubectl -n fabedge delete secret --all $ kubectl -n fabedge delete job.batch --all ``` 3. Delete namespace ``` $ kubectl delete namespace fabedge ``` 4. Delete all FabEdge configuration files from all edge nodes ``` $ rm -f /etc/cni/net.d/fabedge.* ``` 5. Delete all fabedge images on all nodes ``` $ docker images | grep fabedge | awk '{print $3}' | xargs -I{} docker rmi {} ``` 6. Delete CustomResourceDefinitions ``` $ kubectl delete CustomResourceDefinition \"clusters.fabedge.io\" $ kubectl delete CustomResourceDefinition \"communities.fabedge.io\" $ kubectl delete ClusterRole \"fabedge-operator\" $ kubectl delete ClusterRoleBinding \"fabedge-operator\" ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "uninstall_zh.md"
- },
- "content": [
- {
- "heading": "Uninstall FabEdge",
- "data": "1. Use helm to delete the main resources ```shell $ helm uninstall fabedge -n fabedge ``` 2. Delete other resources ```shell $ kubectl -n fabedge delete cm --all $ kubectl -n fabedge delete pods --all $ kubectl -n fabedge delete secret --all $ kubectl -n fabedge delete job.batch --all ``` 3. Delete the namespace ```shell $ kubectl delete namespace fabedge ``` 4. Delete `fabedge.conf` from all edge nodes ```shell $ rm -f /etc/cni/net.d/fabedge.* ``` 5. Delete all fabedge-related images on all nodes 6. Delete the CustomResourceDefinitions"
- },
- {
- "additional_info": "1. Use helm to delete the main resources ```shell $ helm uninstall fabedge -n fabedge ``` 2. Delete other resources ```shell $ kubectl -n fabedge delete cm --all $ kubectl -n fabedge delete pods --all $ kubectl -n fabedge delete secret --all $ kubectl -n fabedge delete job.batch --all ``` 3. Delete the namespace ```shell $ kubectl delete namespace fabedge ``` 4. Delete `fabedge.conf` from all edge nodes ```shell $ rm -f /etc/cni/net.d/fabedge.* ``` 5. Delete all fabedge-related images on all nodes ```shell $ docker images | grep fabedge | awk '{print $3}' | xargs -I{} docker rmi {} ``` 6. Delete the CustomResourceDefinitions ```shell $ kubectl delete CustomResourceDefinition \"clusters.fabedge.io\" $ kubectl delete CustomResourceDefinition \"communities.fabedge.io\" $ kubectl delete ClusterRole \"fabedge-operator\" $ kubectl delete ClusterRoleBinding \"fabedge-operator\" ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "use-nginx-to-proxy-connector.md"
- },
- "content": [
- {
- "heading": "Using nginx to proxy the FabEdge Connector",
- "data": "[toc]"
- },
- {
- "heading": "Background",
- "data": "The Connector is FabEdge's cloud-side component that terminates IPSec VPN tunnels. For security or other reasons, it may need to be used together with a reverse proxy or load balancer."
- },
- {
- "heading": "Test environment",
- "data": "\n FabEdge configuration:\n Nginx configuration:"
- },
- {
- "heading": "\u7701\u7565\u65e0\u5173\u914d\u7f6e",
- "data": "> \u7248\u672c\uff1anginx/1.21.4 built by gcc 10.2.1 20210110 (Debian 10.2.1-6)\n FabEdge connector@node1\u72b6\u6001\uff1a\n > \u6ce8\u610f\uff1anode1\u4e0a\uff0c \u6240\u6709\u7684\u96a7\u9053\u5bf9\u7aef\u5730\u5740\u90fd\u662f10.20.8.24\uff0c \u4e5f\u5c31\u662fnginx\u7684\u5730\u5740\u3002\n FabEdge agent@edge1\u7684\u72b6\u6001\uff1a\n > \u6ce8\u610f\uff1aedge1\u4e0a\uff0c \u5230\u4e91\u7aefConnector\u7684\u96a7\u9053\u5bf9\u7aef\u5730\u5740\u662f10.20.8.24\uff0c \u4e5f\u5c31\u662fnginx\u7684\u5730\u5740\u3002"
- },
- {
- "heading": "\u5b9e\u9a8c\u7ed3\u679c",
- "data": ""
- },
- {
- "additional_info": "[ t o c ] C o n n e c t o r \u662f F a b E d g e \u4e91 \u7aef \u7ec4 \u4ef6 \uff0c \u7528 \u4e8e \u7ec8 \u7ed3 I P S e c V P N \u96a7 \u9053 \u3002 \u56e0 \u4e3a \u5b89 \u5168 \u6216 \u5176 \u5b83 \u539f \u56e0 \uff0c \u6709 \u53ef \u80fd \u987b \u8981 \u548c \u53cd \u5411 \u4ee3 \u7406 \u6216 \u8d1f \u8f7d \u5747 \u8861 \u5668 \u914d \u5408 \u4f7f \u7528 \u3002 ! [ f a b e d g e - w i t h - l b ] ( f a b e d g e - w i t h - l b . p n g ) F a b E d g e \u7684 \u914d \u7f6e \uff1a ` ` ` s h e l l $ c a t v a l u e s . y a m l o p e r a t o r : c o n n e c t o r P u b l i c A d d r e s s e s : 1 0 . 2 0 . 8 . 2 4 # n g i n x \u4ee3 \u7406 \u7684 \u5730 \u5740 c o n n e c t o r S u b n e t s : 1 0 . 9 6 . 0 . 0 / 1 2 e d g e L a b e l s : n o d e - r o l e . k u b e r n e t e s . i o / e d g e m a s q O u t g o i n g : t r u e e n a b l e P r o x y : f a l s e c n i T y p e : f l a n n e l ` ` ` N g i n x \u7684 \u914d \u7f6e \uff1a ` ` ` s h e l l s t r e a m { u p s t r e a m i s a k m p { s e r v e r 1 0 . 2 0 . 8 . 2 3 : 5 0 0 ; } u p s t r e a m i p s e c - n a t - t { s e r v e r 1 0 . 2 0 . 8 . 2 3 : 4 5 0 0 ; } s e r v e r { l i s t e n 5 0 0 u d p ; p r o x y _ p a s s i s a k m p ; } s e r v e r { l i s t e n 4 5 0 0 u d p ; p r o x y _ p a s s i p s e c - n a t - t ; } } ` ` ` > \u7248 \u672c \uff1a n g i n x / 1 . 2 1 . 4 b u i l t b y g c c 1 0 . 2 . 1 2 0 2 1 0 1 1 0 ( D e b i a n 1 0 . 2 . 1 - 6 ) F a b E d g e c o n n e c t o r @ n o d e 1 \u72b6 \u6001 \uff1a ` ` ` s h e l l r o o t @ n o d e 1 : ~ $ d o c k e r e x e c $ s t r o n g s w a n c t l - - l i s t - s a s e d g e 1 : # 2 0 , E S T A B L I S H E D , I K E v 2 , 1 0 7 a 2 d 2 9 0 f 6 1 e 3 1 7 _ i b 4 2 7 b c f 8 f 6 4 c 4 3 a 1 _ r * l o c a l ' C = C N , O = f a b e d g e . i o , C N = c l o u d - c o n n e c t o r ' @ 1 0 . 2 0 . 8 . 2 3 [ 4 5 0 0 ] r e m o t e ' C = C N , O = f a b e d g e . i o , C N = e d g e 1 ' @ 1 0 . 2 0 . 8 . 
2 4 [ 4 6 7 8 8 ] A E S _ C B C - 1 2 8 / H M A C _ S H A 2 _ 2 5 6 _ 1 2 8 / P R F _ A E S 1 2 8 _ X C B C / E C P _ 2 5 6 e s t a b l i s h e d 2 0 7 4 s a g o , r e k e y i n g i n 1 1 4 6 0 s e d g e 1 - p 2 p : # 9 9 , r e q i d 1 , I N S T A L L E D , T U N N E L - i n - U D P , E S P : A E S _ G C M _ 1 6 - 1 2 8 i n s t a l l e d 2 0 7 4 s a g o , r e k e y i n g i n 1 2 0 3 s , e x p i r e s i n 1 8 8 6 s i n c 1 2 2 f 8 4 2 , 1 3 4 8 3 6 5 b y t e s , 1 6 0 5 7 p a c k e t s , 0 s a g o o u t c 6 1 e e d 0 2 , 1 3 5 4 6 8 6 b y t e s , 1 6 1 2 1 p a c k e t s , 0 s a g o l o c a l 1 0 . 9 6 . 0 . 0 / 1 2 1 9 2 . 1 6 8 . 0 . 0 / 2 4 1 9 2 . 1 6 8 . 1 . 0 / 2 4 r e m o t e 1 9 2 . 1 6 8 . 2 . 0 / 2 4 e d g e 1 - n 2 p : # 1 0 0 , r e q i d 2 , I N S T A L L E D , T U N N E L - i n - U D P , E S P : A E S _ G C M _ 1 6 - 1 2 8 i n s t a l l e d 2 0 7 4 s a g o , r e k e y i n g i n 1 2 9 2 s , e x p i r e s i n 1 8 8 6 s i n c 1 6 3 9 5 6 5 , 2 5 2 b y t e s , 3 p a c k e t s , 1 8 7 4 s a g o o u t c c 1 0 a a 8 b , 2 5 2 b y t e s , 3 p a c k e t s , 1 8 7 4 s a g o l o c a l 1 0 . 2 0 . 8 . 2 3 / 3 2 1 0 . 2 0 . 8 . 2 4 / 3 2 r e m o t e 1 9 2 . 1 6 8 . 2 . 0 / 2 4 e d g e 1 - p 2 n : # 1 0 1 , r e q i d 3 , I N S T A L L E D , T U N N E L - i n - U D P , E S P : A E S _ G C M _ 1 6 - 1 2 8 i n s t a l l e d 2 0 7 4 s a g o , r e k e y i n g i n 1 4 1 2 s , e x p i r e s i n 1 8 8 6 s i n c c 4 5 e 5 a 7 , 0 b y t e s , 0 p a c k e t s o u t c 4 4 f 7 8 9 7 , 0 b y t e s , 0 p a c k e t s l o c a l 1 0 . 9 6 . 0 . 0 / 1 2 1 9 2 . 1 6 8 . 0 . 0 / 2 4 1 9 2 . 1 6 8 . 1 . 0 / 2 4 r e m o t e 1 0 . 2 0 . 8 . 6 / 3 2 r o o t @ n o d e 1 : ~ # i p x p l d s t 1 9 2 . 1 6 8 . 2 . 0 / 2 4 s r c 1 0 . 2 0 . 8 . 2 4 / 3 2 d s t 1 9 2 . 1 6 8 . 2 . 0 / 2 4 d i r o u t p r i o r i t y 3 7 1 3 2 7 t m p l s r c 1 0 . 2 0 . 8 . 2 3 d s t 1 0 . 2 0 . 8 . 2 4 p r o t o e s p s p i 0 x c c 1 0 a a 8 b r e q i d 2 m o d e t u n n e l s r c 1 0 . 2 0 . 8 . 
2 3 / 3 2 d s t 1 9 2 . 1 6 8 . 2 . 0 / 2 4 d i r o u t p r i o r i t y 3 7 1 3 2 7 t m p l s r c 1 0 . 2 0 . 8 . 2 3 d s t 1 0 . 2 0 . 8 . 2 4 p r o t o e s p s p i 0 x c c 1 0 a a 8 b r e q i d 2 m o d e t u n n e l s r c 1 9 2 . 1 6 8 . 1 . 0 / 2 4 d s t 1 9 2 . 1 6 8 . 2 . 0 / 2 4 d i r o u t p r i o r i t y 3 7 5 4 2 3 t m p l s r c 1 0 . 2 0 . 8 . 2 3 d s t 1 0 . 2 0 . 8 . 2 4 p r o t o e s p s p i 0 x c 6 1 e e d 0 2 r e q i d 1 m o d e t u n n e l s r c 1 9 2 . 1 6 8 . 0 . 0 / 2 4 d s t 1 9 2 . 1 6 8 . 2 . 0 / 2 4 d i r o u t p r i o r i t y 3 7 5 4 2 3 t m p l s r c 1 0 . 2 0 . 8 . 2 3 d s t 1 0 . 2 0 . 8 . 2 4 p r o t o e s p s p i 0 x c 6 1 e e d 0 2 r e q i d 1 m o d e t u n n e l s r c 1 0 . 9 6 . 0 . 0 / 1 2 d s t 1 9 2 . 1 6 8 . 2 . 0 / 2 4 d i r o u t p r i o r i t y 3 8 1 5 6 7 t m p l s r c 1 0 . 2 0 . 8 . 2 3 d s t 1 0 . 2 0 . 8 . 2 4 p r o t o e s p s p i 0 x c 6 1 e e d 0 2 r e q i d 1 m o d e t u n n e l ` ` ` > \u6ce8 \u610f \uff1a n o d e 1 \u4e0a \uff0c \u6240 \u6709 \u7684 \u96a7 \u9053 \u5bf9 \u7aef \u5730 \u5740 \u90fd \u662f 1 0 . 2 0 . 8 . 2 4 \uff0c \u4e5f \u5c31 \u662f n g i n x \u7684 \u5730 \u5740 \u3002 F a b E d g e a g e n t @ e d g e 1 \u7684 \u72b6 \u6001 \uff1a ` ` ` s h e l l r o o t @ e d g e 1 : ~ # d o c k e r e x e c $ s t r o n g s w a n c t l - - l i s t - s a s c l o u d - c o n n e c t o r : # 1 , E S T A B L I S H E D , I K E v 2 , 1 0 7 a 2 d 2 9 0 f 6 1 e 3 1 7 _ i * b 4 2 7 b c f 8 f 6 4 c 4 3 a 1 _ r l o c a l ' C = C N , O = f a b e d g e . i o , C N = e d g e 1 ' @ 1 0 . 2 0 . 8 . 6 [ 4 5 0 0 ] r e m o t e ' C = C N , O = f a b e d g e . i o , C N = c l o u d - c o n n e c t o r ' @ 1 0 . 2 0 . 8 . 
2 4 [ 4 5 0 0 ] A E S _ C B C - 1 2 8 / H M A C _ S H A 2 _ 2 5 6 _ 1 2 8 / P R F _ A E S 1 2 8 _ X C B C / E C P _ 2 5 6 e s t a b l i s h e d 2 2 4 0 s a g o , r e k e y i n g i n 1 2 0 3 4 s c l o u d - c o n n e c t o r - p 2 p : # 4 , r e q i d 1 , I N S T A L L E D , T U N N E L - i n - U D P , E S P : A E S _ G C M _ 1 6 - 1 2 8 i n s t a l l e d 2 2 4 0 s a g o , r e k e y i n g i n 1 1 9 6 s , e x p i r e s i n 1 7 2 0 s i n c 6 1 e e d 0 2 , 1 4 5 9 8 5 4 b y t e s , 1 7 3 7 3 p a c k e t s , 0 s a g o o u t c 1 2 2 f 8 4 2 , 1 4 6 2 9 4 1 b y t e s , 1 7 4 2 1 p a c k e t s , 0 s a g o l o c a l 1 9 2 . 1 6 8 . 2 . 0 / 2 4 r e m o t e 1 0 . 9 6 . 0 . 0 / 1 2 1 9 2 . 1 6 8 . 0 . 0 / 2 4 1 9 2 . 1 6 8 . 1 . 0 / 2 4 c l o u d - c o n n e c t o r - p 2 n : # 5 , r e q i d 3 , I N S T A L L E D , T U N N E L - i n - U D P , E S P : A E S _ G C M _ 1 6 - 1 2 8 i n s t a l l e d 2 2 4 0 s a g o , r e k e y i n g i n 1 0 4 8 s , e x p i r e s i n 1 7 2 0 s i n c c 1 0 a a 8 b , 2 5 2 b y t e s , 3 p a c k e t s , 2 0 4 0 s a g o o u t c 1 6 3 9 5 6 5 , 2 5 2 b y t e s , 3 p a c k e t s , 2 0 4 0 s a g o l o c a l 1 9 2 . 1 6 8 . 2 . 0 / 2 4 r e m o t e 1 0 . 2 0 . 8 . 2 3 / 3 2 1 0 . 2 0 . 8 . 2 4 / 3 2 c l o u d - c o n n e c t o r - n 2 p : # 6 , r e q i d 2 , I N S T A L L E D , T U N N E L - i n - U D P , E S P : A E S _ G C M _ 1 6 - 1 2 8 i n s t a l l e d 2 2 4 0 s a g o , r e k e y i n g i n 1 1 4 1 s , e x p i r e s i n 1 7 2 0 s i n c 4 4 f 7 8 9 7 , 0 b y t e s , 0 p a c k e t s o u t c c 4 5 e 5 a 7 , 0 b y t e s , 0 p a c k e t s , 2 2 3 1 s a g o l o c a l 1 0 . 2 0 . 8 . 6 / 3 2 r e m o t e 1 0 . 9 6 . 0 . 0 / 1 2 1 9 2 . 1 6 8 . 0 . 0 / 2 4 1 9 2 . 1 6 8 . 1 . 0 / 2 4 r o o t @ e d g e 1 : ~ # i p x p l d s t 1 9 2 . 1 6 8 . 1 . 0 / 2 4 s r c 1 0 . 2 0 . 8 . 6 / 3 2 d s t 1 9 2 . 1 6 8 . 1 . 0 / 2 4 d i r o u t p r i o r i t y 3 7 1 3 2 7 t m p l s r c 1 0 . 2 0 . 8 . 6 d s t 1 0 . 2 0 . 8 . 
2 4 p r o t o e s p s p i 0 x c c 4 5 e 5 a 7 r e q i d 2 m o d e t u n n e l s r c 1 9 2 . 1 6 8 . 2 . 0 / 2 4 d s t 1 9 2 . 1 6 8 . 1 . 0 / 2 4 d i r o u t p r i o r i t y 3 7 5 4 2 3 t m p l s r c 1 0 . 2 0 . 8 . 6 d s t 1 0 . 2 0 . 8 . 2 4 p r o t o e s p s p i 0 x c 1 2 2 f 8 4 2 r e q i d 1 m o d e t u n n e l ` ` ` > \u6ce8 \u610f \uff1a e d g e 1 \u4e0a \uff0c \u5230 \u4e91 \u7aef C o n n e c t o r \u7684 \u96a7 \u9053 \u5bf9 \u7aef \u5730 \u5740 \u662f 1 0 . 2 0 . 8 . 2 4 \uff0c \u4e5f \u5c31 \u662f n g i n x \u7684 \u5730 \u5740 \u3002 ` ` ` s h e l l r o o t @ m a s t e r : ~ # k u b e c t l g e t p o - o w i d e N A M E R E A D Y S T A T U S R E S T A R T S A G E I P N O D E N O M I N A T E D N O D E n g i n x - e d g e 1 1 / 1 R u n n i n g 0 6 2 m 1 9 2 . 1 6 8 . 2 . 7 e d g e 1 < n o n e > n g i n x - n o d e 1 1 / 1 R u n n i n g 0 6 2 m 1 9 2 . 1 6 8 . 1 . 7 n o d e 1 < n o n e > r o o t @ m a s t e r : ~ # k u b e c t l e x e c n g i n x - e d g e 1 - - i p r d e f a u l t v i a 1 9 2 . 1 6 8 . 2 . 1 d e v e t h 0 1 9 2 . 1 6 8 . 2 . 0 / 2 4 d e v e t h 0 p r o t o k e r n e l s c o p e l i n k s r c 1 9 2 . 1 6 8 . 2 . 7 r o o t @ m a s t e r : ~ # k u b e c t l e x e c n g i n x - e d g e 1 - - c u r l - s 1 9 2 . 1 6 8 . 1 . 7 P r a q m a N e t w o r k M u l t i T o o l ( w i t h N G I N X ) - n g i n x - n o d e 1 - 1 9 2 . 1 6 8 . 1 . 7 - H T T P : 8 0 , H T T P S : 4 4 3 ` ` `"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "user-guide.md"
- },
- "content": [
- {
- "heading": "FabEdge User Guide",
- "data": "English | [\u4e2d\u6587](user-guide_zh.md)\n [toc]"
- },
- {
- "heading": "Networking Management",
- "data": ""
- },
- {
- "heading": "Use community",
- "data": "By default, the pods on the edge node can only access the pods in cloud nodes. For the pods on the edge nodes to communicate with each other directly without going through the cloud, we can define a community.\n Communities can also be used to organize multiple clusters which need to communicate with each other.\n Assume there are two clusters, `beijng` and `shanghai`. in the `beijing` cluster, there are there edge nodes of `edge1`, `edge2`, and `edge3`\n Create the following community to enable the communication between edge pods on the nodes of edge1/2/3 in cluster `beijing`\n Create the following community to enable the communication between `beijing` cluster and `shanghai` cluster"
- },
- {
- "heading": "Auto networking",
- "data": "To facilitate networking management, FabEdge provides a feature called Auto Networking which works under LAN, it uses direct routing to let pods running edge nodes in a LAN to communicate. You need to enable it at installation, check out [manually-install](manually-install.md) for how to install fabedge manually, here is only reference values.yaml:\n PS: Auto networking only works for edge nodes under the same router. When some nodes are in the same LAN and the same community, they will prefer auto networking."
- },
- {
- "heading": "Register member cluster",
- "data": "It is required to register the endpoint information of each member cluster into the host cluster for cross-cluster communication.\n 1. Create a cluster resource in the host cluster:\n ```yaml\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n metadata:\n name: beijing\n ```\n 2. Get the token\n ```shell\n # kubectl describe cluster beijing\n Name: beijing\n Namespace:\n Kind: Cluster\n Spec:\n Token: eyJhbGciOi--omitted--4PebW68A\n ```\n \n 3. Deploy FabEdge in the member cluster using the token.\n ```yaml\n # kubectl get cluster beijing -o yaml\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n name: beijing\n spec:\n endPoints:\n - id: C=CN, O=fabedge.io, CN=beijing.connector\n name: beijing.connector\n nodeSubnets:\n - 10.20.8.12\n - 10.20.8.38\n publicAddresses:\n - 10.20.8.12\n subnets:\n - 10.233.0.0/18\n - 10.233.70.0/24\n - 10.233.90.0/24\n type: Connector\n token: eyJhbGciOi--omit--4PebW68A\n ```"
- },
- {
- "heading": "Assign public address for edge node",
- "data": "In the public cloud, the virtual machine has only private address, which prevents from FabEdge establishing the edge-to-edge tunnels. In this case, the user can apply a public address for the virtual machine and add it to the annotation of the edge node. FabEdge will use this public address to establish the tunnel instead of the private one."
- },
- {
- "heading": "assign public address of 60.247.88.194 to node edge1",
- "data": ""
- },
- {
- "heading": "Create GlobalService",
- "data": "GlobalService is used to export a local/standard k8s service (ClusterIP or Headless) for other clusters to access it. And it provides the topology-aware service discovery capability.\n 1. create a service, e.g. namespace: default, name: web\n 2. Label it with : `fabedge.io/global-service: true`\n 3. It can be accessed by the domain name: `web.defaut.svc.global`"
- },
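The three steps above amount to adding one label to an ordinary Service. A minimal sketch of such a manifest (the `web` name and `default` namespace come from the example above; the port and selector are illustrative assumptions):

```yaml
# A plain ClusterIP Service exported as a GlobalService.
# The only FabEdge-specific part is the fabedge.io/global-service label.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
  labels:
    fabedge.io/global-service: "true"   # export this service to other clusters
spec:
  selector:
    app: web          # assumed pod selector
  ports:
    - port: 80        # assumed service port
```

Once synced, other clusters should be able to reach it via `web.default.svc.global`.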
- {
- "heading": "Configure fabedge-agent for a specific node",
- "data": "Normally every fabedge-agent's arguments are the same, but FabEdge allows you configure arguments for a fabedge-agent on a specific node. You only need to provide fabedge agent arguments on annotations of the node, fabedge-operator will change the fabege-agent arguments. For example:\n The format of agent argument in node annotations is \"argument.fabedge.io/argument-name\", complete fabedge-agent arguments are listed [here](https://github.com/FabEdge/fabedge/blob/main/pkg/agent/config.go#L63)"
- },
- {
- "heading": "Disable fabedge-agent on specific node",
- "data": "fabedge-operator by default will create a fabedge-agent pod for each edge node, but FabEdge allows you to forbid it on specific nodes. First, you need to change edge labels, check out [manually-install](manually-install.md) for how to install FabEdge manually, here is only reference values.yaml Assume you have two edge nodes: edge1 and edge2, and you want only edge1 to have fabedge-agent, execute the command: Then you will have only edge1 have fabedge-agent running on it."
- },
- {
- "additional_info": "English | [\u4e2d\u6587](user-guide_zh.md) [toc] By default, the pods on the edge node can only access the pods in cloud nodes. For the pods on the edge nodes to communicate with each other directly without going through the cloud, we can define a community. Communities can also be used to organize multiple clusters which need to communicate with each other. Assume there are two clusters, `beijng` and `shanghai`. in the `beijing` cluster, there are there edge nodes of `edge1`, `edge2`, and `edge3` Create the following community to enable the communication between edge pods on the nodes of edge1/2/3 in cluster `beijing` ```yaml apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: all-edge-nodes spec: members: - beijing.edge1 - beijing.edge2 - beijing.edge3 ``` Create the following community to enable the communication between `beijing` cluster and `shanghai` cluster ```yaml apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: connectors spec: members: - beijing.connector - shanghai.connector ``` To facilitate networking management, FabEdge provides a feature called Auto Networking which works under LAN, it uses direct routing to let pods running edge nodes in a LAN to communicate. You need to enable it at installation, check out [manually-install](manually-install.md) for how to install fabedge manually, here is only reference values.yaml: ```yaml agent: args: AUTO_NETWORKING: \"true\" # enable auto-networking feature MULTICAST_TOKEN: \"1b1bb567\" # make sure the token is unique, only nodes with the same token can compose a network MULTICAST_ADDRESS: \"239.40.20.81:18080\" # fabedge-agent uses this address to multicast endpoints information ``` PS: Auto networking only works for edge nodes under the same router. When some nodes are in the same LAN and the same community, they will prefer auto networking. 
It is required to register the endpoint information of each member cluster into the host cluster for cross-cluster communication. 1. Create a cluster resource in the host cluster: ```yaml apiVersion: fabedge.io/v1alpha1 kind: Cluster metadata: name: beijing ``` 2. Get the token ```shell # kubectl describe cluster beijing Name: beijing Namespace: Kind: Cluster Spec: Token: eyJhbGciOi--omitted--4PebW68A ``` 3. Deploy FabEdge in the member cluster using the token. ```yaml # kubectl get cluster beijing -o yaml apiVersion: fabedge.io/v1alpha1 kind: Cluster name: beijing spec: endPoints: - id: C=CN, O=fabedge.io, CN=beijing.connector name: beijing.connector nodeSubnets: - 10.20.8.12 - 10.20.8.38 publicAddresses: - 10.20.8.12 subnets: - 10.233.0.0/18 - 10.233.70.0/24 - 10.233.90.0/24 type: Connector token: eyJhbGciOi--omit--4PebW68A ``` In the public cloud, a virtual machine usually has only a private address, which prevents FabEdge from establishing edge-to-edge tunnels. In this case, the user can apply for a public address for the virtual machine and add it to the annotations of the edge node. FabEdge will use this public address to establish the tunnel instead of the private one. ```shell kubectl annotate node edge1 \"fabedge.io/node-public-addresses=60.247.88.194\" ``` GlobalService is used to export a local/standard k8s service (ClusterIP or Headless) for other clusters to access, and it provides topology-aware service discovery capability. 1. Create a service, e.g. namespace: default, name: web 2. Label it with: `fabedge.io/global-service: true` 3. It can be accessed by the domain name `web.default.svc.global` Normally every fabedge-agent's arguments are the same, but FabEdge allows you to configure the arguments of the fabedge-agent on a specific node. You only need to provide fabedge-agent arguments in the annotations of the node, and fabedge-operator will change the fabedge-agent arguments accordingly. 
For example: ```shell kubectl annotate node edge1 argument.fabedge.io/enable-proxy=false # disable fab-proxy ``` The format of an agent argument in node annotations is \"argument.fabedge.io/argument-name\"; the complete fabedge-agent arguments are listed [here](https://github.com/FabEdge/fabedge/blob/main/pkg/agent/config.go#L63) fabedge-operator by default creates a fabedge-agent pod for each edge node, but FabEdge allows you to forbid this on specific nodes. First, you need to change the edge labels; check out [manually-install](manually-install.md) for how to install FabEdge manually. Here is a reference values.yaml: ```yaml cluster: # fabedge-operator will get edge nodes with edge labels, you can change it as you like edgeLabels: - node-role.kubernetes.io/edge= - agent.fabedge.io/enabled=true ``` Assume you have two edge nodes, edge1 and edge2, and you want only edge1 to have fabedge-agent; execute the commands: ```shell kubectl label node edge1 node-role.kubernetes.io/edge= kubectl label node edge1 agent.fabedge.io/enabled=true ``` Then only edge1 will have fabedge-agent running on it."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "user-guide_zh.md"
- },
- "content": [
- {
- "heading": "FabEdge\u7528\u6237\u624b\u518c",
- "data": "[toc]"
- },
- {
- "heading": "\u7f51\u7edc\u7ba1\u7406",
- "data": ""
- },
- {
- "heading": "\u4f7f\u7528\u793e\u533a",
- "data": "\u5728\u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u8fb9\u7f18\u8282\u70b9\u7684Pod\u53ea\u80fd\u8bbf\u95ee\u4e91\u7aefPod\u548c\u8282\u70b9\uff0c\u8fb9\u7f18\u8282\u70b9\u4e0a\u7684Pod\u4e4b\u95f4\u4e0d\u80fd\u4e92\u901a\uff0c\u8fd9\u662f\u4e3a\u4e86\u907f\u514d\u8fb9\u7f18\u8282\u70b9\u5efa\u7acb\u592a\u591a\u96a7\u9053\u9020\u6210\u4e0d\u5fc5\u8981\u7684\u6d6a\u8d39\u3002\u4e3a\u4e86\u4f7f\u9700\u8981\u901a\u4fe1\u7684\u8fb9\u7f18\u8282\u70b9\u53ef\u4ee5\u76f8\u4e92\u8bbf\u95ee\uff0c\u6211\u4eec\u63d0\u51fa\u4e86\u793e\u533a\u8fd9\u4e2a\u6982\u5ff5\uff0c\u5f53\u51e0\u4e2a\u8fb9\u7f18\u8282\u70b9\u9700\u8981\u76f8\u4e92\u901a\u4fe1\u65f6\uff0c\u53ef\u4ee5\u5efa\u7acb\u4e00\u4e2a\u793e\u533a\uff0c\u628a\u9700\u8981\u901a\u4fe1\u7684\u8282\u70b9\u653e\u5165\u793e\u533a\u6210\u5458\u5217\u8868\uff0c\u90a3\u4e48\u8fd9\u4e9b\u793e\u533a\u6210\u5458\u5c31\u53ef\u4ee5\u76f8\u4e92\u8bbf\u95ee\u4e86\u3002\n \u5728\u591a\u96c6\u7fa4\u901a\u4fe1\u5b9e\u73b0\u540e\uff0c\u793e\u533a\u4e5f\u53ef\u4ee5\u7528\u6765\u7ec4\u7ec7\u9700\u8981\u76f8\u4e92\u901a\u4fe1\u7684\u96c6\u7fa4\u3002\n \u521b\u5efa\u4e00\u4e2a\u793e\u533a\u975e\u5e38\u7b80\u5355\uff0c\u5047\u8bbe\u6211\u4eec\u73b0\u5728\u6709\u4e00\u4e2a\u8fb9\u7f18\u96c6\u7fa4\uff0c\u90e8\u7f72\u65f6\u4e3a\u96c6\u7fa4\u547d\u540d\u4e3abeijing\uff0c\u96c6\u7fa4\u91cc\u67093\u4e2a\u8fb9\u7f18\u8282\u70b9edge1, edge2, edge3\uff0c\u4e3a\u4e86\u4f7f\u4e09\u8005\u53ef\u4ee5\u76f8\u4e92\u8bbf\u95ee\uff0c\n \u521b\u5efa\u5982\u4e0b\u793e\u533a:\n _\u6ce8: \u793e\u533a\u6210\u5458\u7684\u540d\u5b57\u4e0d\u662f\u8282\u70b9\u540d\u79f0\uff0c\u800c\u662f\u7aef\u70b9\u540d\uff0c\u4e00\u4e2a\u8282\u70b9\u7684\u7aef\u70b9\u540d\"\u96c6\u7fa4\u540d.\u8282\u70b9\u540d\"\u8fd9\u6837\u7684\u683c\u5f0f\u751f\u6210\u7684\u3002_\n 
\u5047\u8bbe\u6211\u4eec\u8fd8\u6709\u53e6\u5916\u4e00\u4e2a\u8fb9\u7f18\u96c6\u7fa4\uff0c\u90e8\u7f72\u65f6\u4e3a\u96c6\u7fa4\u547d\u540d\u4e3ashanghai\uff0c\u6211\u4eec\u73b0\u5728\u9700\u8981\u5c06beijing\u548cshanghai\u4e24\u4e2a\u96c6\u7fa4\u901a\u4fe1\uff0c\u521b\u5efa\u5982\u4e0b\u96c6\u7fa4:"
- },
- {
- "heading": "\u6ce8: \u8de8\u96c6\u7fa4\u901a\u4fe1\u4e3b\u8981\u662f\u7531connector\u5b9e\u73b0\uff0c\u6240\u4ee5\u6210\u5458\u540d\u79f0\u662f\u5404\u4e2a\u96c6\u7fa4\u7684connector\u7684\u7aef\u70b9\u540d",
- "data": ""
- },
- {
- "heading": "\u81ea\u52a8\u7ec4\u7f51",
- "data": "\u4e3a\u4e86\u51cf\u5c11\u7528\u6237\u7ba1\u7406\u7f51\u7edc\u7684\u8d1f\u62c5\uff0cFabEdge\u63d0\u4f9b\u4e86\u5c40\u57df\u7f51\u81ea\u52a8\u7ec4\u7f51\u7684\u529f\u80fd\uff0c\u81ea\u52a8\u7ec4\u7f51\u4f1a\u901a\u8fc7\u76f4\u8fde\u8def\u7531(direct routing)\u7684\u65b9\u5f0f\u8ba9\u8fb9\u7f18Pod\u76f8\u4e92\u901a\u4fe1\u3002\u8981\u4f7f\u7528\u8fd9\u4e2a\u529f\u80fd\u9700\u8981\u5728\u5b89\u88c5\u65f6\u5f00\u542f\uff0c\u5177\u4f53\u7684\u5b89\u88c5\u65b9\u5f0f\u53c2\u8003[\u624b\u52a8\u5b89\u88c5](manually-install_zh.md)\uff0c \u4e0b\u9762\u7684\u914d\u7f6e\u6587\u4ef6\u4f9b\u53c2\u8003\uff0c\u8bf7\u6839\u636e\u81ea\u5df1\u7684\u73af\u5883\u8c03\u6574\uff1a"
- },
- {
- "heading": "\u6ce81\uff1a \u81ea\u52a8\u7ec4\u7f51\u4ec5\u9650\u4e8e\u540c\u4e00\u8def\u7531\u5668\u4e0b\u7684\u8282\u70b9\u53ef\u7528\uff0c\u5f53\u4e24\u4e2a\u8282\u70b9\u5373\u5728\u4e00\u4e2a\u8def\u7531\u5668\u4e0b\uff0c\u53c8\u5904\u4e8e\u540c\u4e00\u4e2a\u793e\u533a\uff0c\u4f1a\u4f18\u5148\u4f7f\u7528\u81ea\u52a8\u7ec4\u7f51\u529f\u80fd\u3002",
- "data": ""
- },
- {
- "heading": "\u6ce8\u518c\u8fb9\u7f18\u96c6\u7fa4",
- "data": "\u591a\u96c6\u7fa4\u901a\u4fe1\u9700\u8981\u628a\u5404\u4e2a\u96c6\u7fa4\u7684\u7aef\u70b9\u4fe1\u606f\u5728\u4e3b\u96c6\u7fa4\u6ce8\u518c\uff1a\n 1. \u5728\u4e3b\u96c6\u7fa4\u521b\u5efa\u4e00\u4e2acluster\u8d44\u6e90:\n ```yaml\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n metadata:\n name: beijing\n ```\n \n 2. \u67e5\u770btoken\n ```shell\n # kubectl describe cluster beijing\n Name: beijing\n Namespace:\n Kind: Cluster\n Spec:\n Token: eyJhbGciOi--\u7701\u7565--4PebW68A\n ```\n \n *\u6ce8: token\u7531fabedge-operator\u8d1f\u8d23\u751f\u6210\uff0c\u8be5token\u6709\u6548\u671f\u5185\u4f7f\u7528\u8be5token\u8fdb\u884c\u6210\u5458\u96c6\u7fa4\u521d\u59cb\u5316*\n \n 3. \u5728\u6210\u5458\u96c6\u7fa4\u90e8\u7f72FabEdge\uff0c\u90e8\u7f72\u65f6\u4f7f\u7528\u7b2c\u4e00\u6b65\u751f\u6210\u7684token, \u6210\u5458\u96c6\u7fa4\u7684operator\u4f1a\u628a\u672c\u96c6\u7fa4\u7684connector\u4fe1\u606f\u4e0a\u62a5\u81f3\u4e3b\u96c6\u7fa4\u3002\n ```yaml\n # kubectl get cluster beijing -o yaml\n apiVersion: fabedge.io/v1alpha1\n kind: Cluster\n name: beijing\n spec:\n endPoints:\n - id: C=CN, O=fabedge.io, CN=beijing.connector\n name: beijing.connector\n nodeSubnets:\n - 10.20.8.12\n - 10.20.8.38\n publicAddresses:\n - 10.20.8.12\n subnets:\n - 10.233.0.0/18\n - 10.233.70.0/24\n - 10.233.90.0/24\n type: Connector\n token: eyJhbGciOi--\u7701\u7565--4PebW68A\n ```"
- },
- {
- "heading": "\u4e3a\u8fb9\u7f18\u8282\u70b9\u6307\u5b9a\u516c\u7f51\u5730\u5740",
- "data": "\u5bf9\u4e8e\u516c\u6709\u4e91\u7684\u573a\u666f\uff0c\u4e91\u4e3b\u673a\u4e00\u822c\u53ea\u914d\u7f6e\u4e86\u79c1\u6709\u5730\u5740\uff0c\u5bfc\u81f4FabEdge\u65e0\u6cd5\u5efa\u7acb\u8fb9\u7f18\u5230\u8fb9\u7f18\u7684\u96a7\u9053\u3002\u8fd9\u79cd\u60c5\u51b5\u4e0b\u53ef\u4ee5\u4e3a\u4e91\u4e3b\u673a\u7533\u8bf7\u4e00\u4e2a\u516c\u7f51\u5730\u5740\uff0c\u52a0\u5165\u8282\u70b9\u7684\u6ce8\u89e3\uff0cFabEdge\u5c06\u81ea\u52a8\u4f7f\u7528\u8fd9\u4e2a\u516c\u7f51\u5730\u5740\u5efa\u7acb\u96a7\u9053\uff0c\u800c\u4e0d\u662f\u79c1\u6709\u5730\u5740\u3002"
- },
- {
- "heading": "\u4e3a\u8fb9\u7f18\u8282\u70b9edge1\u6307\u5b9a\u516c\u7f51\u5730\u574060.247.88.194",
- "data": ""
- },
- {
- "heading": "\u521b\u5efa\u5168\u5c40\u670d\u52a1",
- "data": "\u5168\u5c40\u670d\u52a1\u628a\u672c\u96c6\u7fa4\u7684\u4e00\u4e2a\u666e\u901a\u7684Service \uff08Headless \u6216 ClusterIP\uff09\uff0c\u66b4\u9732\u7ed9\u5176\u5b83\u96c6\u7fa4\u8bbf\u95ee\uff0c\u5e76\u4e14\u63d0\u4f9b\u57fa\u4e8e\u62d3\u6251\u7684\u670d\u52a1\u53d1\u73b0\u80fd\u529b\u3002\n 1. \u521b\u5efa\u4e00\u4e2ak8s\u7684\u670d\u52a1\uff0c \u6bd4\u5982\uff0c\u547d\u540d\u7a7a\u95f4\u662fdefault\uff0c service\u7684\u540d\u5b57\u662fweb\n 2. \u4e3aweb\u670d\u52a1\u6dfb\u52a0\u6807\u7b7e\uff1a`fabedge.io/global-service: true`\n 3. \u6240\u6709\u96c6\u7fa4\u53ef\u4ee5\u901a\u8fc7\u57df\u540d\uff1a`web.default.svc.global`, \u5c31\u8fd1\u8bbf\u95ee\u5230web\u7684\u670d\u52a1\u3002\n \u66f4\u591a\u5185\u5bb9\u8bf7\u53c2\u8003[\u5982\u4f55\u521b\u5efa\u5168\u5c40\u670d\u52a1](https://github.com/FabEdge/fab-dns/blob/main/docs/how-to-create-globalservice.md)\u53ca[\u793a\u4f8b](https://github.com/FabEdge/fab-dns/tree/main/examples)"
- },
- {
- "heading": "FabEdge Agent\u8282\u70b9\u7ea7\u53c2\u6570\u914d\u7f6e",
- "data": "\u901a\u5e38fabedge-agent\u7684\u542f\u52a8\u53c2\u6570\u90fd\u662f\u4e00\u81f4\u7684\uff0c\u4f46fabedge\u5141\u8bb8\u60a8\u5bf9\u7279\u5b9a\u8282\u70b9\u7684fabedge-agent\u6307\u5b9a\u53c2\u6570\uff0c\u60a8\u4ec5\u9700\u5728\u8282\u70b9\u7684annotations\u914d\u7f6efabedge-agent\u53c2\u6570\uff0cfabedge-operator\u4f1a\u81ea\u52a8\u66f4\u65b0\u76f8\u5e94\u7684fabedge-agent pod\u3002\u4f8b\u5982:\n \u6bcf\u4e00\u4e2a\u53c2\u6570\u7684\u683c\u5f0f\u90fd\u662f\"argument.fabedge.io/argument-name\"\uff0c\u8be6\u7ec6\u7684\u53c2\u6570\u5217\u8868\u53c2\u8003[\u8fd9\u91cc](https://github.com/FabEdge/fabedge/blob/main/pkg/agent/config.go#L63)"
- },
- {
- "heading": "\u7981\u6b62\u5728\u7279\u5b9a\u8282\u70b9\u4e0a\u8fd0\u884cfabedge-agent",
- "data": "fabedge-operator\u9ed8\u8ba4\u4f1a\u4e3a\u6bcf\u4e00\u4e2a\u8fb9\u7f18\u8282\u70b9\u521b\u5efa\u4e00\u4e2afabedge-agent pod\uff0c\u4f46fabedge\u5141\u8bb8\u60a8\u901a\u8fc7\u914d\u7f6e\u6807\u7b7e\u7684\u65b9\u5f0f\u6765\u7981\u6b62fabedge-oeprator\u4e3a\u6307\u5b9a\u8282\u70b9\u521b\u5efafabedge-operator\u3002\u9996\u5148\u60a8\u9700\u8981\u5728\u5b89\u88c5fabedge\u4fee\u6539\u8fb9\u7f18\u8282\u70b9\u6807\u7b7e\uff0c\u5177\u4f53\u5b89\u88c5\u65b9\u5f0f\u53c2\u8003[\u624b\u52a8\u5b89\u88c5](manually-install_zh.md)\uff0c\u4e0b\u9762\u7684\u914d\u7f6e\u6587\u4ef6\u4f9b\u53c2\u8003\uff0c\u8bf7\u6839\u636e\u81ea\u5df1\u7684\u73af\u5883\u8c03\u6574\uff1a \u5047\u5982\u60a8\u6709\u4e24\u4e2a\u8fb9\u7f18\u8282\u70b9edge1\u4e0eedge2\uff0c\u60a8\u4ec5\u9700\u8981edge1\u8fd0\u884cfabedge-agent\uff0c\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4: \u5c31\u4f1a\u53ea\u5728edge1\u8fd0\u884cfabedge-agent\u3002"
- },
- {
- "additional_info": "[toc] \u5728\u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u8fb9\u7f18\u8282\u70b9\u7684Pod\u53ea\u80fd\u8bbf\u95ee\u4e91\u7aefPod\u548c\u8282\u70b9\uff0c\u8fb9\u7f18\u8282\u70b9\u4e0a\u7684Pod\u4e4b\u95f4\u4e0d\u80fd\u4e92\u901a\uff0c\u8fd9\u662f\u4e3a\u4e86\u907f\u514d\u8fb9\u7f18\u8282\u70b9\u5efa\u7acb\u592a\u591a\u96a7\u9053\u9020\u6210\u4e0d\u5fc5\u8981\u7684\u6d6a\u8d39\u3002\u4e3a\u4e86\u4f7f\u9700\u8981\u901a\u4fe1\u7684\u8fb9\u7f18\u8282\u70b9\u53ef\u4ee5\u76f8\u4e92\u8bbf\u95ee\uff0c\u6211\u4eec\u63d0\u51fa\u4e86\u793e\u533a\u8fd9\u4e2a\u6982\u5ff5\uff0c\u5f53\u51e0\u4e2a\u8fb9\u7f18\u8282\u70b9\u9700\u8981\u76f8\u4e92\u901a\u4fe1\u65f6\uff0c\u53ef\u4ee5\u5efa\u7acb\u4e00\u4e2a\u793e\u533a\uff0c\u628a\u9700\u8981\u901a\u4fe1\u7684\u8282\u70b9\u653e\u5165\u793e\u533a\u6210\u5458\u5217\u8868\uff0c\u90a3\u4e48\u8fd9\u4e9b\u793e\u533a\u6210\u5458\u5c31\u53ef\u4ee5\u76f8\u4e92\u8bbf\u95ee\u4e86\u3002 \u5728\u591a\u96c6\u7fa4\u901a\u4fe1\u5b9e\u73b0\u540e\uff0c\u793e\u533a\u4e5f\u53ef\u4ee5\u7528\u6765\u7ec4\u7ec7\u9700\u8981\u76f8\u4e92\u901a\u4fe1\u7684\u96c6\u7fa4\u3002 \u521b\u5efa\u4e00\u4e2a\u793e\u533a\u975e\u5e38\u7b80\u5355\uff0c\u5047\u8bbe\u6211\u4eec\u73b0\u5728\u6709\u4e00\u4e2a\u8fb9\u7f18\u96c6\u7fa4\uff0c\u90e8\u7f72\u65f6\u4e3a\u96c6\u7fa4\u547d\u540d\u4e3abeijing\uff0c\u96c6\u7fa4\u91cc\u67093\u4e2a\u8fb9\u7f18\u8282\u70b9edge1, edge2, edge3\uff0c\u4e3a\u4e86\u4f7f\u4e09\u8005\u53ef\u4ee5\u76f8\u4e92\u8bbf\u95ee\uff0c \u521b\u5efa\u5982\u4e0b\u793e\u533a: ```yaml apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: all-edge-nodes spec: members: - beijing.edge1 - beijing.edge2 - beijing.edge3 ``` _\u6ce8: \u793e\u533a\u6210\u5458\u7684\u540d\u5b57\u4e0d\u662f\u8282\u70b9\u540d\u79f0\uff0c\u800c\u662f\u7aef\u70b9\u540d\uff0c\u4e00\u4e2a\u8282\u70b9\u7684\u7aef\u70b9\u540d\"\u96c6\u7fa4\u540d.\u8282\u70b9\u540d\"\u8fd9\u6837\u7684\u683c\u5f0f\u751f\u6210\u7684\u3002_ 
Suppose we have another edge cluster, named shanghai at deployment time, and we now need the beijing and shanghai clusters to communicate with each other; create the following Community: ```yaml apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: connectors spec: members: - beijing.connector - shanghai.connector ``` To reduce the burden of managing the network, FabEdge provides LAN auto-networking, which lets edge pods communicate with each other via direct routing. This feature must be enabled at installation time; for installation details see [manual installation](manually-install_zh.md). The following configuration is for reference; adjust it for your own environment: ```yaml agent: args: AUTO_NETWORKING: \"true\" # enable auto-networking MULTICAST_TOKEN: \"1b1bb567\" # networking token; keep it unique within the LAN, only nodes holding the same token will form a network MULTICAST_ADDRESS: \"239.40.20.81:18080\" # the address fabedge-agent uses to broadcast networking information ``` Multi-cluster communication requires registering each cluster's endpoint information in the host cluster: 1. Create a Cluster resource in the host cluster: ```yaml apiVersion: fabedge.io/v1alpha1 kind: Cluster metadata: name: beijing ``` 2. Check the token: ```shell # kubectl describe cluster beijing Name: beijing Namespace: Kind: Cluster Spec: Token: eyJhbGciOi--omitted--4PebW68A ``` *Note: the token is generated by fabedge-operator; use it within its validity period to initialize member clusters.* 3. Deploy FabEdge in the member cluster, using the token generated in step 1; the member cluster's operator will report the cluster's connector information to the host cluster. ```yaml # kubectl get cluster beijing -o yaml apiVersion: fabedge.io/v1alpha1 kind: Cluster name: beijing spec: endPoints: - id: C=CN, O=fabedge.io, CN=beijing.connector name: beijing.connector nodeSubnets: - 10.20.8.12 - 10.20.8.38 publicAddresses: - 10.20.8.12 subnets: - 10.233.0.0/18 - 10.233.70.0/24 - 10.233.90.0/24 type: Connector token: eyJhbGciOi--omitted--4PebW68A ``` In public cloud scenarios, cloud hosts are usually configured with only a private address, which prevents FabEdge from establishing edge-to-edge tunnels. In that case you can request a public address for the cloud host and add it to the node's annotations; FabEdge will then use this public address, rather than the private one, to establish tunnels. ```shell kubectl annotate node edge1 \"fabedge.io/node-public-addresses=60.247.88.194\" ``` A global service exposes an ordinary Service (Headless or ClusterIP) in the local cluster to other clusters and provides topology-based service discovery. 1. Create a k8s Service; for example, with namespace default and name web. 2. Add the label `fabedge.io/global-service: true` to the web service. 3. All clusters can then access the web service via the domain name `web.default.svc.global`, being routed to the nearest endpoints. For more details see [how to create a global service](https://github.com/FabEdge/fab-dns/blob/main/docs/how-to-create-globalservice.md) and the [examples](https://github.com/FabEdge/fab-dns/tree/main/examples). Normally the fabedge-agent startup arguments are identical on all nodes, but fabedge lets you specify arguments for the fabedge-agent on a particular node: just configure the fabedge-agent arguments in the node's annotations and fabedge-operator will update the corresponding fabedge-agent pod automatically. For example: ```shell kubectl annotate node edge1 argument.fabedge.io/enable-proxy=false # disable fab-proxy ``` Each argument has the format \"argument.fabedge.io/argument-name\"; for the full argument list see [here](https://github.com/FabEdge/fabedge/blob/main/pkg/agent/config.go#L63). By default fabedge-operator creates a fabedge-agent pod for each edge node, but fabedge lets you prevent fabedge-operator from creating a fabedge-agent for specific nodes by configuring labels. First you need to adjust the edge node labels when installing fabedge; for installation details see [manual installation](manually-install_zh.md). The following configuration is for reference; adjust it for your own environment: ```yaml cluster: # fabedge-operator uses edgeLabels to find edge nodes; modify the following to suit your needs edgeLabels: - node-role.kubernetes.io/edge= - agent.fabedge.io/enabled=true ``` Suppose you have two edge nodes, edge1 and edge2, and you only need edge1 to run fabedge-agent; execute the following commands: ```shell kubectl label node edge1 node-role.kubernetes.io/edge= kubectl label node edge1 agent.fabedge.io/enabled=true ``` fabedge-agent will then run only on edge1."
- }
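The per-node annotation convention described above ("argument.fabedge.io/argument-name") can be sketched as follows. This is a minimal illustration of the mapping from annotations to agent flags; the helper name and the exact flag rendering are assumptions for illustration, not FabEdge's actual implementation:

```python
# Sketch: turn node annotations of the form
#   argument.fabedge.io/<name>=<value>
# into fabedge-agent command-line flags "--<name>=<value>".
# This mirrors the convention described above; it is NOT the
# actual fabedge-operator code.

ANNOTATION_PREFIX = "argument.fabedge.io/"

def agent_args_from_annotations(annotations: dict) -> list:
    args = []
    for key, value in sorted(annotations.items()):
        if key.startswith(ANNOTATION_PREFIX):
            # strip the prefix, keep the argument name and value
            name = key[len(ANNOTATION_PREFIX):]
            args.append(f"--{name}={value}")
    return args

print(agent_args_from_annotations({
    "argument.fabedge.io/enable-proxy": "false",  # from the example above
    "node-role.kubernetes.io/edge": "",           # unrelated label, ignored
}))
```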
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FabEdge",
- "file_name": "\u8de8\u96c6\u7fa4\u670d\u52a1\u8bbf\u95ee\u4ecb\u7ecd.md"
- },
- "content": [
- {
- "heading": "FabEdge\u8de8\u96c6\u7fa4\u670d\u52a1\u8bbf\u95ee",
- "data": "FabEdge has supported multi-edge-cluster communication since 0.4.0, but clusters could only reach each other by IP, even when the access target is a service, which is quite at odds with everyday Kubernetes usage. In fact, ever since the need for multi-cluster communication arose, so has the need for cross-cluster service discovery and access, and the open source community has kept working on this problem: * [Multi-cluster Service APIs](https://github.com/kubernetes-sigs/mcs-api) * [Lighthouse](https://submariner.io/getting-started/architecture/service-discovery/) * [Cilium Load-balancing & Service Discovery](https://docs.cilium.io/en/stable/gettingstarted/clustermesh/services/) Given that these solutions already exist, why does FabEdge propose its own? For the following reasons: * mcs-api is only a set of APIs; other implementers must handle exporting and importing service information between clusters. * Lighthouse depends on submariner, and submariner is not designed for edge scenarios. * Cilium is an all-in-one solution that cannot coexist with other CNIs; it is not edge-oriented either. The component that provides cross-cluster service access for FabEdge is called [FabDNS](https://github.com/FabEdge/fab-dns), and it tries to achieve the following goals: * It allows one cluster to access services provided by other clusters; the service types are limited to ClusterIP and Headless. * A service can be deployed inside a single cluster or spread across multiple clusters. * It provides topology-aware DNS resolution, so that callers can access the nearest service endpoints. FabDNS has two components, service-hub and fab-dns, and it also provides a CRD, GlobalService. If a cluster wants to offer a service to other clusters, it must first mark that service as a global service. service-hub is responsible for exporting and importing global services between clusters; fab-dns provides address resolution for global services inside the cluster. When deploying FabDNS, each cluster must be annotated with topology information, i.e. region and zone; FabDNS's topology awareness is based on this information.  Taking the figure above as an example, there are three clusters; the Beijing cluster is the host cluster, and the service-hubs of the Shanghai and Suzhou clusters both exchange global service information through the Beijing cluster's service-hub. The Beijing and Shanghai clusters each expose an nginx service and a mysql service, all assumed to be in the default namespace. If a pod in Shanghai or Beijing accesses nginx.default.global, only pods in its own cluster respond, because the zone matches. If a pod in the Suzhou cluster accesses nginx.default.global, it is served by the pods behind Shanghai's nginx. Why? Because Suzhou and Shanghai are both in region south"
- },
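The zone/region matching described above can be sketched as a simple preference order: endpoints in the caller's zone first, then the caller's region, then any endpoint. This is an illustration of the idea only, not FabDNS's actual algorithm, and the endpoint fields are assumed for the example:

```python
# Sketch of FabDNS-style topology-aware endpoint selection:
# prefer same zone, then same region, then fall back to all endpoints.
# Illustrative only; not the real fab-dns implementation.

def pick_endpoints(endpoints, client_region, client_zone):
    same_zone = [e for e in endpoints
                 if e["region"] == client_region and e["zone"] == client_zone]
    if same_zone:
        return same_zone
    same_region = [e for e in endpoints if e["region"] == client_region]
    if same_region:
        return same_region
    return endpoints

# The nginx global service from the example above, exposed by two clusters:
nginx = [
    {"cluster": "beijing", "region": "north", "zone": "beijing"},
    {"cluster": "shanghai", "region": "south", "zone": "shanghai"},
]

# A pod in Suzhou (region "south", zone "suzhou") is served by Shanghai,
# because the regions match even though the zones do not:
print(pick_endpoints(nginx, "south", "suzhou"))
```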
- {
- "additional_info": "FabEdge has supported multi-edge-cluster communication since 0.4.0, but clusters could only reach each other by IP, even when the access target is a service, which is quite at odds with everyday Kubernetes usage. In fact, ever since the need for multi-cluster communication arose, so has the need for cross-cluster service discovery and access, and the open source community has kept working on this problem: * [Multi-cluster Service APIs](https://github.com/kubernetes-sigs/mcs-api) * [Lighthouse](https://submariner.io/getting-started/architecture/service-discovery/) * [Cilium Load-balancing & Service Discovery](https://docs.cilium.io/en/stable/gettingstarted/clustermesh/services/) Given that these solutions already exist, why does FabEdge propose its own? For the following reasons: * mcs-api is only a set of APIs; other implementers must handle exporting and importing service information between clusters. * Lighthouse depends on submariner, and submariner is not designed for edge scenarios. * Cilium is an all-in-one solution that cannot coexist with other CNIs; it is not edge-oriented either. The component that provides cross-cluster service access for FabEdge is called [FabDNS](https://github.com/FabEdge/fab-dns), and it tries to achieve the following goals: * It allows one cluster to access services provided by other clusters; the service types are limited to ClusterIP and Headless. * A service can be deployed inside a single cluster or spread across multiple clusters. * It provides topology-aware DNS resolution, so that callers can access the nearest service endpoints. FabDNS has two components, service-hub and fab-dns, and it also provides a CRD, GlobalService. If a cluster wants to offer a service to other clusters, it must first mark that service as a global service. service-hub is responsible for exporting and importing global services between clusters; fab-dns provides address resolution for global services inside the cluster. When deploying FabDNS, each cluster must be annotated with topology information, i.e. region and zone; FabDNS's topology awareness is based on this information.  Taking the figure above as an example, there are three clusters; the Beijing cluster is the host cluster, and the service-hubs of the Shanghai and Suzhou clusters both exchange global service information through the Beijing cluster's service-hub. The Beijing and Shanghai clusters each expose an nginx service and a mysql service, all assumed to be in the default namespace. If a pod in Shanghai or Beijing accesses nginx.default.global, only pods in its own cluster respond, because the zone matches. If a pod in the Suzhou cluster accesses nginx.default.global, it is served by the pods behind Shanghai's nginx. Why? Because Suzhou and Shanghai are both in region south"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FD.io",
- "file_name": "dpdk_crypto_ipsec_doc.md"
- },
- "content": [
- {
- "heading": "VPP IPSec implementation using DPDK Cryptodev API {#dpdk_crypto_ipsec_doc}",
- "data": "This document is meant to contain all related information about implementation and usability."
- },
- {
- "heading": "VPP IPsec with DPDK Cryptodev",
- "data": "DPDK Cryptodev is an asynchronous crypto API that supports both Hardware and Software implementations (for more details refer to [DPDK Cryptography Device Library documentation](http://dpdk.org/doc/guides/prog_guide/cryptodev_lib.html)).\n When there are enough Cryptodev resources for all workers, the node graph is reconfigured by adding and changing the default next nodes.\n The following nodes are added:\n * dpdk-crypto-input : polling input node, dequeuing from crypto devices.\n * dpdk-esp-encrypt : internal node.\n * dpdk-esp-decrypt : internal node.\n * dpdk-esp-encrypt-post : internal node.\n * dpdk-esp-decrypt-post : internal node.\n Set new default next nodes:\n * for esp encryption: esp-encrypt -> dpdk-esp-encrypt\n * for esp decryption: esp-decrypt -> dpdk-esp-decrypt"
- },
- {
- "heading": "How to enable VPP IPSec with DPDK Cryptodev support",
- "data": "When building DPDK with VPP, Cryptodev support is always enabled.\n Additionally, on x86_64 platforms, DPDK is built with SW crypto support."
- },
- {
- "heading": "Crypto Resources allocation",
- "data": "VPP allocates crypto resources based on a best effort approach:\n * first allocate Hardware crypto resources, then Software.\n * if there are not enough crypto resources for all workers, the graph node is not modified and the default VPP IPsec implementation based on OpenSSL is used. The following message is displayed:\n 0: dpdk_ipsec_init: not enough Cryptodevs, default to OpenSSL IPsec"
- },
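The best-effort policy above can be modeled in a few lines: hand out hardware devices first, then software devices, and if the combined pool cannot cover every worker, assign nothing and keep the OpenSSL path. This is an illustrative model only, not VPP's actual allocation code:

```python
# Sketch of VPP's best-effort Cryptodev allocation: HW resources first,
# then SW; if there are not enough for all workers, fall back entirely
# to the OpenSSL-based IPsec implementation. Illustrative only.

def assign_cryptodevs(workers, hw_devs, sw_devs):
    pool = list(hw_devs) + list(sw_devs)  # hardware first, then software
    if len(pool) < len(workers):
        # "0: dpdk_ipsec_init: not enough Cryptodevs, default to OpenSSL IPsec"
        return None
    return dict(zip(workers, pool))

# Two workers, one HW queue pair and one SW (virtual) PMD: both are served.
mapping = assign_cryptodevs(["w0", "w1"], ["0000:85:01.0"], ["crypto_aesni_mb0"])
print(mapping)

# Three workers but only two resources: no mapping, OpenSSL is used.
print(assign_cryptodevs(["w0", "w1", "w2"], ["0000:85:01.0"], ["crypto_aesni_mb0"]))
```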
- {
- "heading": "Configuration example",
- "data": "To enable DPDK Cryptodev the user just needs to provide cryptodevs in the startup.conf.\n Below is an example startup.conf; it is not meant to be a default configuration:\n In the above configuration:\n * 0000:81:01.0 and 0000:81:01.1 are Ethernet device BDFs.\n * 0000:85:01.0 and 0000:85:01.1 are Crypto device BDFs; they require the same driver binding as DPDK Ethernet devices but do not support any extra configuration options.\n * Two AESNI-MB Software (Virtual) Cryptodev PMDs are created in NUMA node 1.\n For further details refer to the [DPDK Crypto Device Driver documentation](http://dpdk.org/doc/guides/cryptodevs/index.html)"
- },
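The example startup.conf that the section above discusses (recovered from this document's unabridged text) is:

```
dpdk {
  dev 0000:81:00.0
  dev 0000:81:00.1
  dev 0000:85:01.0
  dev 0000:85:01.1
  vdev crypto_aesni_mb0,socket_id=1
  vdev crypto_aesni_mb1,socket_id=1
}
```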
- {
- "heading": "Operational data",
- "data": "The following CLI command displays the Cryptodev/Worker mapping:\n show crypto device mapping [verbose]"
- },
- {
- "heading": "nasm",
- "data": "Building the DPDK Crypto Libraries requires the open source project nasm (The Netwide Assembler) to be installed. Recommended version of nasm is 2.12.02. Minimum supported version of nasm is 2.11.06. Use the following command to determine the current nasm version: nasm -v CentOS 7.3 and earlier and Fedora 21 and earlier use unsupported versions of nasm. Use the following set of commands to build a supported version: wget http://www.nasm.us/pub/nasm/releasebuilds/2.12.02/nasm-2.12.02.tar.bz2 tar -xjvf nasm-2.12.02.tar.bz2 cd nasm-2.12.02/ ./configure make sudo make install"
- },
- {
- "additional_info": "This document is meant to contain all related information about implementation and usability. DPDK Cryptodev is an asynchronous crypto API that supports both Hardware and Software implementations (for more details refer to [DPDK Cryptography Device Library documentation](http://dpdk.org/doc/guides/prog_guide/cryptodev_lib.html)). When there are enough Cryptodev resources for all workers, the node graph is reconfigured by adding and changing the default next nodes. The following nodes are added: * dpdk-crypto-input : polling input node, dequeuing from crypto devices. * dpdk-esp-encrypt : internal node. * dpdk-esp-decrypt : internal node. * dpdk-esp-encrypt-post : internal node. * dpdk-esp-decrypt-post : internal node. Set new default next nodes: * for esp encryption: esp-encrypt -> dpdk-esp-encrypt * for esp decryption: esp-decrypt -> dpdk-esp-decrypt When building DPDK with VPP, Cryptodev support is always enabled. Additionally, on x86_64 platforms, DPDK is built with SW crypto support. VPP allocates crypto resources based on a best effort approach: * first allocate Hardware crypto resources, then Software. * if there are not enough crypto resources for all workers, the graph node is not modified and the default VPP IPsec implementation based in OpenSSL is used. The following message is displayed: 0: dpdk_ipsec_init: not enough Cryptodevs, default to OpenSSL IPsec To enable DPDK Cryptodev the user just need to provide cryptodevs in the startup.conf. Below is an example startup.conf, it is not meant to be a default configuration: ``` dpdk { dev 0000:81:00.0 dev 0000:81:00.1 dev 0000:85:01.0 dev 0000:85:01.1 vdev crypto_aesni_mb0,socket_id=1 vdev crypto_aesni_mb1,socket_id=1 } ``` In the above configuration: * 0000:81:01.0 and 0000:81:01.1 are Ethernet device BDFs. * 0000:85:01.0 and 0000:85:01.1 are Crypto device BDFs and they require the same driver binding as DPDK Ethernet devices but they do not support any extra configuration options. 
* Two AESNI-MB Software (Virtual) Cryptodev PMDs are created in NUMA node 1. For further details refer to [DPDK Crypto Device Driver documentation](http://dpdk.org/doc/guides/cryptodevs/index.html) The following CLI command displays the Cryptodev/Worker mapping: show crypto device mapping [verbose] Building the DPDK Crypto Libraries requires the open source project nasm (The Netwide Assembler) to be installed. Recommended version of nasm is 2.12.02. Minimum supported version of nasm is 2.11.06. Use the following command to determine the current nasm version: nasm -v CentOS 7.3 and earlier and Fedora 21 and earlier use unsupported versions of nasm. Use the following set of commands to build a supported version: wget http://www.nasm.us/pub/nasm/releasebuilds/2.12.02/nasm-2.12.02.tar.bz2 tar -xjvf nasm-2.12.02.tar.bz2 cd nasm-2.12.02/ ./configure make sudo make install"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FD.io",
- "file_name": "index_entry.md"
- },
- "content": [
- {
- "heading": "Copyright (c) 2016 Comcast Cable Communications Management, LLC.",
- "data": ""
- },
- {
- "heading": "Licensed under the Apache License, Version 2.0 (the \"License\");",
- "data": ""
- },
- {
- "heading": "you may not use this file except in compliance with the License.",
- "data": ""
- },
- {
- "heading": "You may obtain a copy of the License at:",
- "data": ""
- },
- {
- "heading": "http://www.apache.org/licenses/LICENSE-2.0",
- "data": ""
- },
- {
- "heading": "Unless required by applicable law or agreed to in writing, software",
- "data": ""
- },
- {
- "heading": "distributed under the License is distributed on an \"AS IS\" BASIS,",
- "data": ""
- },
- {
- "heading": "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
- "data": ""
- },
- {
- "heading": "See the License for the specific language governing permissions and",
- "data": ""
- },
- {
- "heading": "limitations under the License.",
- "data": ""
- },
- {
- "heading": "}",
- "data": "{{ \"* [%s](@ref %s)\" % (item[\"name\"], meta[\"label\"]) }}"
- },
- {
- "additional_info": "{# {{ \"* [%s](@ref %s)\" % (item[\"name\"], meta[\"label\"])"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FD.io",
- "file_name": "index_header.md"
- },
- "content": [
- {
- "heading": "}",
- "data": ""
- },
- {
- "heading": "Startup Configuration {{'{#'}}syscfg}",
- "data": "The VPP network stack comes with several configuration options that can be\n provided either on the command line or in a configuration file.\n Specific applications built on the stack have been known to require a dozen\n arguments, depending on requirements. This section describes commonly-used\n options and parameters.\n You can find command-line argument parsers in the source code by searching for\n instances of the `VLIB_CONFIG_FUNCTION` macro. The invocation\n `VLIB_CONFIG_FUNCTION(foo_config, \"foo\")` will cause the function\n `foo_config` to receive all the options and values supplied in a parameter\n block named \"`foo`\", for example: `foo { arg1 arg2 arg3 ... }`.\n @todo Tell the nice people where this document lives so that they might\n help improve it!"
- },
- {
- "heading": "Command-line arguments",
- "data": "Parameters are grouped by a section name. When providing more than one\n parameter to a section, all parameters for that section must be wrapped in\n curly braces.\n Which will produce output similar to this:\n \n _______ _ _ _____ ___\n __/ __/ _ \\ (_)__ | | / / _ \\/ _ \\\n _/ _// // / / / _ \\ | |/ / ___/ ___/\n /_/ /____(_)_/\\___/ |___/_/ /_/\n \n vpp# \n When providing only one such parameter the braces are optional. For example,\n the following command argument, `unix interactive`, does not have braces:\n The command line can be presented as a single string or as several; anything\n given on the command line is concatenated with spaces into a single string\n before parsing.\n VPP applications must be able to locate their own executable images. The\n simplest way to ensure this will work is to invoke a VPP application by giving\n its absolute path; for example: `/usr/bin/vpp `. At startup, VPP\n applications parse through their own ELF-sections (primarily) to make lists\n of init, configuration, and exit handlers.\n When developing with VPP, in _gdb_ it's often sufficient to start an application\n like this at the `(gdb)` prompt:"
- },
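The command-line examples this section refers to (recovered from this document's unabridged text) are:

```
/usr/bin/vpp unix { interactive cli-listen 127.0.0.1:5002 }
```

and, with only one parameter and therefore no braces:

```
/usr/bin/vpp unix interactive
```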
- {
- "heading": "Configuration file",
- "data": "It is also possible to supply parameters in a startup configuration file, the\n path of which is provided to the VPP application on its command line.\n The format of the configuration file is a simple text file with the same\n content as the command line, but with the benefit of being able to use newlines\n to make the content easier to read. For example:\n VPP is then instructed to load this file with the `-c` option:"
- },
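The configuration-file example and the corresponding invocation (recovered from this document's unabridged text) are:

```
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen localhost:5002
}
api-trace {
  on
}
dpdk {
  dev 0000:03:00.0
}
```

```
/usr/bin/vpp -c /etc/vpp/startup.conf
```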
- {
- "heading": "Index of startup command sections",
- "data": "[TOC]"
- },
- {
- "additional_info": "{# The VPP network stack comes with several configuration options that can be provided either on the command line or in a configuration file. Specific applications built on the stack have been known to require a dozen arguments, depending on requirements. This section describes commonly-used options and parameters. You can find command-line argument parsers in the source code by searching for instances of the `VLIB_CONFIG_FUNCTION` macro. The invocation `VLIB_CONFIG_FUNCTION(foo_config, \"foo\")` will cause the function `foo_config` to receive all the options and values supplied in a parameter block named \"`foo`\", for example: `foo { arg1 arg2 arg3 ... }`. @todo Tell the nice people where this document lives so that they might help improve it! Parameters are grouped by a section name. When providing more than one parameter to a section all parameters for that section must be wrapped in curly braces. ``` /usr/bin/vpp unix { interactive cli-listen 127.0.0.1:5002 } ``` Which will produce output similar to this: _______ _ _ _____ ___ __/ __/ _ \\ (_)__ | | / / _ \\/ _ \\ _/ _// // / / / _ \\ | |/ / ___/ ___/ /_/ /____(_)_/\\___/ |___/_/ /_/ vpp# When providing only one such parameter the braces are optional. For example, the following command argument, `unix interactive` does not have braces: ``` /usr/bin/vpp unix interactive ``` The command line can be presented as a single string or as several; anything given on the command line is concatenated with spaces into a single string before parsing. VPP applications must be able to locate their own executable images. The simplest way to ensure this will work is to invoke a VPP application by giving its absolute path; for example: `/usr/bin/vpp `. At startup, VPP applications parse through their own ELF-sections (primarily) to make lists of init, configuration, and exit handlers. 
When developing with VPP, in _gdb_ it's often sufficient to start an application like this at the `(gdb)` prompt: ``` run unix interactive ``` It is also possible to supply parameters in a startup configuration file the path of which is provided to the VPP application on its command line. The format of the configuration file is a simple text file with the same content as the command line but with the benefit of being able to use newlines to make the content easier to read. For example: ``` unix { nodaemon log /var/log/vpp/vpp.log full-coredump cli-listen localhost:5002 } api-trace { on } dpdk { dev 0000:03:00.0 } ``` VPP is then instructed to load this file with the `-c` option: ``` /usr/bin/vpp -c /etc/vpp/startup.conf ``` [TOC]"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FD.io",
- "file_name": "index_section.md"
- },
- "content": [
- {
- "heading": "}",
- "data": "clis/{{ this.page_label(group) }}"
- },
- {
- "additional_info": "{# clis/{{ this.page_label(group)"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FD.io",
- "file_name": "item_format.md"
- },
- "content": [
- {
- "heading": "}",
- "data": "{% set v = item['value'] %}\n {{ \"@section %s %s\" % (meta['label'], item['name']) }}\n {% if 'siphon_block' in item['meta'] %}\n {% set sb = item[\"meta\"][\"siphon_block\"] %}\n {% if sb %}\n {# Extracted from the code in /*? ... ?*/ blocks #}"
- },
- {
- "heading": "Description",
- "data": "{{ sb }}\n {% endif %}\n {% endif %}\n {% if \"name\" in meta or \"function\" in item %}\n {# Gives some developer-useful linking #}"
- },
- {
- "heading": "Declaration and implementation",
- "data": "{% if \"name\" in meta %} {{ \"Declaration: @ref %s (@ref %s line %d)\" % (meta['name'], meta[\"file\"], item[\"meta\"][\"line_start\"]) }} {% endif %} {% if \"function\" in item %} {{ \"Implementation: @ref %s.\" % item[\"function\"] }} {% endif %} {% endif %}"
- },
- {
- "additional_info": "{# {% set v = item['value'] %} {{ \"@section %s %s\" % (meta['label'], item['name']) }} {% if 'siphon_block' in item['meta'] %} {% set sb = item[\"meta\"][\"siphon_block\"] %} {% if sb %} {# Extracted from the code in /*? ... ?*/ blocks #} {{ sb }} {% endif %} {% endif %} {% if \"name\" in meta or \"function\" in item %} {# Gives some developer-useful linking #} {% if \"name\" in meta %} {{ \"Declaration: @ref %s (@ref %s line %d)\" % (meta['name'], meta[\"file\"], item[\"meta\"][\"line_start\"]) }} {% endif %} {% if \"function\" in item %} {{ \"Implementation: @ref %s.\" % item[\"function\"] }} {% endif %} {% endif %}"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FD.io",
- "file_name": "item_header.md"
- },
- "content": [
- {
- "heading": "}",
- "data": "{{ \"@page %s %s\" % (this.page_label(group), this.page_title(group)) }}"
- },
- {
- "additional_info": "{# {{ \"@page %s %s\" % (this.page_label(group), this.page_title(group))"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FD.io",
- "file_name": "qos_doc.md"
- },
- "content": [
- {
- "heading": "QoS Hierarchical Scheduler {#qos_doc}",
- "data": "The Quality-of-Service (QoS) scheduler performs egress-traffic management by\n prioritizing the transmission of packets of different service types and\n subscribers based on Service Level Agreements (SLAs). The QoS scheduler can\n be enabled on one or more NIC output interfaces depending upon the\n requirement."
- },
- {
- "heading": "Overview",
- "data": "The QoS scheduler supports a number of scheduling and shaping levels which\n construct a hierarchical tree. The first level in the hierarchy is the port (i.e.\n the physical interface), which constitutes the root node of the tree. The\n subsequent level is the subport, which represents a group of\n users/subscribers. The individual user/subscriber is represented by the pipe\n at the next level. Each user can have different traffic types based on\n criteria such as loss rate, jitter, and latency. These traffic types are\n represented at the traffic-class level in the form of different traffic\n classes. The last level contains a number of queues which are grouped together\n to host the packets of the specific class type of traffic.\n The QoS scheduler implementation requires flow classification, enqueue and\n dequeue operations. Flow classification is a mandatory stage for HQoS, where\n incoming packets are classified by mapping the packet fields information to a\n 5-tuple (HQoS subport, pipe, traffic class, queue within traffic class, and\n color) and storing that information in the mbuf sched field. The enqueue operation\n uses this information to determine the queue for storing the packet; at\n this stage, if the specific queue is full, QoS drops the packet. The dequeue\n operation consists of scheduling the packet based on its length and available\n credits, and handing the scheduled packet over to the output interface.\n For more information on the QoS Scheduler, please refer to the DPDK Programmer's Guide:\n http://dpdk.org/doc/guides/prog_guide/qos_framework.html"
- },
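The classification stage described above can be sketched in a few lines. This is a hypothetical mapping only (the field choices and bit positions are illustrative assumptions, not VPP's actual, configurable pktfield layout): it derives the 5-tuple (subport, pipe, traffic class, queue, color) that the enqueue stage consumes.

```python
def classify(inner_vlan: int, dscp: int) -> tuple:
    """Map packet fields onto the HQoS 5-tuple (illustrative, not VPP code)."""
    subport = 0                 # single subport in the default configuration
    pipe = inner_vlan & 0xFFF   # 4K pipes -> 12 bits
    tc = (dscp >> 2) & 0x3      # 4 traffic classes -> 2 bits
    queue = dscp & 0x3          # 4 queues per traffic class -> 2 bits
    color = 0                   # assume green; a policer could mark yellow/red
    return (subport, pipe, tc, queue, color)

print(classify(42, 0b001011))
```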
- {
- "heading": "QoS Scheduler Parameters",
- "data": "The following illustrates the default HQoS configuration for each 10GbE output\n port:\n Single subport (subport 0):\n - Subport rate set to 100% of port rate\n - Each of the 4 traffic classes has rate set to 100% of port rate\n 4K pipes per subport 0 (pipes 0 .. 4095) with identical configuration:\n - Pipe rate set to 1/4K of port rate\n - Each of the 4 traffic classes has rate set to 100% of pipe rate\n - Within each traffic class, the byte-level WRR weights for the 4 queues are set to 1:1:1:1"
- },
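As a quick sanity check of the defaults above (assuming a 10GbE port rate of 1250000000 bytes/second and integer division), a pipe rate of 1/4K of the port rate works out to the 305175 bytes/second figure used in the sample pipe profile:

```python
PORT_RATE = 1_250_000_000   # bytes/second for a 10GbE port
N_PIPES = 4096              # 4K pipes per subport 0

subport_rate = PORT_RATE            # subport rate = 100% of port rate
pipe_rate = PORT_RATE // N_PIPES    # pipe rate = 1/4K of port rate

print(pipe_rate)  # 305175
```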
- {
- "heading": "Port configuration",
- "data": "```\n port {\n rate 1250000000 /* Assuming 10GbE port */\n frame_overhead 24 /* Overhead fields per Ethernet frame:\n * 7B (Preamble) +\n * 1B (Start of Frame Delimiter (SFD)) +\n * 4B (Frame Check Sequence (FCS)) +\n * 12B (Inter Frame Gap (IFG)) */\n mtu 1522 /* Assuming Ethernet/IPv4 pkt (FCS not included) */\n n_subports_per_port 1 /* Number of subports per output interface */\n n_pipes_per_subport 4096 /* Number of pipes (users/subscribers) */\n queue_sizes 64 64 64 64 /* Packet queue size for each traffic class.\n * All queues within the same pipe traffic class\n * have the same size. */\n }\n ```"
- },
- {
- "heading": "Subport configuration",
- "data": "```\n subport 0 {\n tb_rate 1250000000 /* Subport level token bucket rate (bytes per second) */\n tb_size 1000000 /* Subport level token bucket size (bytes) */\n tc0_rate 1250000000 /* Subport level token bucket rate for traffic class 0 (bytes per second) */\n tc1_rate 1250000000 /* Subport level token bucket rate for traffic class 1 (bytes per second) */\n tc2_rate 1250000000 /* Subport level token bucket rate for traffic class 2 (bytes per second) */\n tc3_rate 1250000000 /* Subport level token bucket rate for traffic class 3 (bytes per second) */\n tc_period 10 /* Time interval for refilling the token bucket associated with traffic class (Milliseconds) */\n pipe 0 4095 profile 0 /* pipes (users/subscribers) configured with pipe profile 0 */\n }\n ```"
- },
- {
- "heading": "Pipe configuration",
- "data": "```\n pipe_profile 0 {\n tb_rate 305175 /* Pipe level token bucket rate (bytes per second) */\n tb_size 1000000 /* Pipe level token bucket size (bytes) */\n tc0_rate 305175 /* Pipe level token bucket rate for traffic class 0 (bytes per second) */\n tc1_rate 305175 /* Pipe level token bucket rate for traffic class 1 (bytes per second) */\n tc2_rate 305175 /* Pipe level token bucket rate for traffic class 2 (bytes per second) */\n tc3_rate 305175 /* Pipe level token bucket rate for traffic class 3 (bytes per second) */\n tc_period 40 /* Time interval for refilling the token bucket associated with traffic class at pipe level (Milliseconds) */\n tc3_oversubscription_weight 1 /* Weight for traffic class 3 oversubscription */\n tc0_wrr_weights 1 1 1 1 /* Pipe queue WRR weights for traffic class 0 */\n tc1_wrr_weights 1 1 1 1 /* Pipe queue WRR weights for traffic class 1 */\n tc2_wrr_weights 1 1 1 1 /* Pipe queue WRR weights for traffic class 2 */\n tc3_wrr_weights 1 1 1 1 /* Pipe queue WRR weights for traffic class 3 */\n }\n ```"
- },
- {
- "heading": "Random Early Detection (RED) parameters per traffic class and color (Green / Yellow / Red)",
- "data": "```\n red {\n tc0_wred_min 48 40 32 /* Minimum threshold for traffic class 0 queue (min_th) in number of packets */\n tc0_wred_max 64 64 64 /* Maximum threshold for traffic class 0 queue (max_th) in number of packets */\n tc0_wred_inv_prob 10 10 10 /* Inverse of packet marking probability for traffic class 0 queue (maxp = 1 / maxp_inv) */\n tc0_wred_weight 9 9 9 /* Traffic Class 0 queue weight */\n tc1_wred_min 48 40 32 /* Minimum threshold for traffic class 1 queue (min_th) in number of packets */\n tc1_wred_max 64 64 64 /* Maximum threshold for traffic class 1 queue (max_th) in number of packets */\n tc1_wred_inv_prob 10 10 10 /* Inverse of packet marking probability for traffic class 1 queue (maxp = 1 / maxp_inv) */\n tc1_wred_weight 9 9 9 /* Traffic Class 1 queue weight */\n tc2_wred_min 48 40 32 /* Minimum threshold for traffic class 2 queue (min_th) in number of packets */\n tc2_wred_max 64 64 64 /* Maximum threshold for traffic class 2 queue (max_th) in number of packets */\n tc2_wred_inv_prob 10 10 10 /* Inverse of packet marking probability for traffic class 2 queue (maxp = 1 / maxp_inv) */\n tc2_wred_weight 9 9 9 /* Traffic Class 2 queue weight */\n tc3_wred_min 48 40 32 /* Minimum threshold for traffic class 3 queue (min_th) in number of packets */\n tc3_wred_max 64 64 64 /* Maximum threshold for traffic class 3 queue (max_th) in number of packets */\n tc3_wred_inv_prob 10 10 10 /* Inverse of packet marking probability for traffic class 3 queue (maxp = 1 / maxp_inv) */\n tc3_wred_weight 9 9 9 /* Traffic Class 3 queue weight */\n }\n ```"
- },
- {
- "heading": "DPDK QoS Scheduler Integration in VPP",
- "data": "The Hierarchical Quality-of-Service (HQoS) scheduler object can be viewed as\n part of the logical NIC output interface. To enable HQoS on a specific output\n interface, the VPP startup.conf file must be configured accordingly. The\n output interface that requires HQoS should have the \"hqos\" parameter\n specified in the dpdk section. An optional \"hqos-thread\" parameter can be\n used to associate the output interface with a specific HQoS thread. In the\n cpu section of the config file, \"corelist-hqos-threads\" assigns logical cpu\n cores to run the HQoS threads. An HQoS thread can run multiple HQoS objects,\n each associated with a different output interface. Instead of writing packets\n directly to the NIC TX queue, worker threads write them to software queues.\n The HQoS threads read the software queues and enqueue the packets to HQoS\n objects, as well as dequeue packets from the HQoS objects and write them to\n the NIC output interfaces. The worker threads need to be able to send packets\n to any output interface; therefore, each HQoS object associated with a NIC\n output interface should have as many software queues as there are worker\n threads.\n The following illustrates a sample startup configuration file with 4 worker\n threads feeding 2 HQoS threads, each handling the QoS scheduler for 1 output\n interface."
- },
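The software-queue sizing rule above (every worker thread must be able to reach every HQoS-enabled interface, so each HQoS object keeps one software queue per worker) can be captured in a one-line sketch; the function name is illustrative, not VPP code:

```python
def total_swq_count(n_workers: int, n_hqos_interfaces: int) -> int:
    # one software queue per (worker thread, HQoS object) pair
    return n_workers * n_hqos_interfaces

# The sample startup.conf above: 4 worker threads, 2 hqos-enabled devices.
print(total_swq_count(4, 2))  # 8
```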
- {
- "heading": "QoS scheduler CLI Commands",
- "data": "Each QoS scheduler instance is initialised with the default parameters required\n to configure the hqos port, subport, pipe and queues. Some of the parameters\n can be re-configured at run time through CLI commands."
- },
- {
- "heading": "Configuration",
- "data": "The following commands can be used to configure QoS scheduler parameters.\n The command below can be used to set the subport level parameters such as\n token bucket rate (bytes per second), token bucket size (bytes), traffic\n class rates (bytes per second) and token update period (milliseconds).\n For setting the pipe profile, the following command can be used.\n To assign a QoS scheduler instance to a specific thread, the following\n command can be used.\n The command below is used to set the packet fields required for classifying\n the incoming packet. As a result of the classification process, the packet\n field information will be mapped to a 5-tuple (subport, pipe, traffic class,\n queue, color) and stored in the packet mbuf.\n The DSCP table entries used for identifying the traffic class and queue can be set using the command below:"
- },
- {
- "heading": "Show Command",
- "data": "The QoS scheduler configuration can be displayed using the command below.\n The QoS scheduler placement over the logical cpu cores can be displayed using\n the command below."
- },
- {
- "heading": "QoS Scheduler Binary APIs",
- "data": "This section explains the available binary APIs for configuring QoS scheduler parameters at run time. The following API can be used to set the pipe profile of a pipe that belongs to a given subport: The data structures used to set the pipe profile parameters are as follows: The following API can be used to set the subport level parameters, for example token bucket rate (bytes per second), token bucket size (bytes), traffic class rate (bytes per second) and token update period. The data structures used to set the subport level parameters are as follows: The following API can be used to set a DSCP table entry. The DSCP table has 64 entries to map the packet DSCP field onto a traffic class and hqos input queue. The data structures used for setting DSCP table entries are given below."
- },
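As an illustration of the message layout only (not the real VPP binary-API encoding, whose framing, message IDs and byte order are defined by VPP itself), the six u32 fields of the sw_interface_set_dpdk_hqos_pipe request listed above can be packed like this hypothetical sketch:

```python
import struct

def pack_hqos_pipe(client_index: int, context: int, sw_if_index: int,
                   subport: int, pipe: int, profile: int) -> bytes:
    # six consecutive u32 fields, big-endian for illustration;
    # the actual wire format used by VPP is not shown here
    return struct.pack(">6I", client_index, context, sw_if_index,
                       subport, pipe, profile)

msg = pack_hqos_pipe(1, 2, 5, 0, 100, 0)
print(len(msg))  # 24 bytes: 6 x u32
```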
- {
- "additional_info": "The Quality-of-Service (QoS) scheduler performs egress-traffic management by prioritizing the transmission of the packets of different type services and subscribers based on the Service Level Agreements (SLAs). The QoS scheduler can be enabled on one or more NIC output interfaces depending upon the requirement. The QoS scheduler supports a number of scheduling and shaping levels which construct hierarchical-tree. The first level in the hierarchy is port (i.e. the physical interface) that constitutes the root node of the tree. The subsequent level is subport which represents the group of the users/subscribers. The individual user/subscriber is represented by the pipe at the next level. Each user can have different traffic type based on the criteria of specific loss rate, jitter, and latency. These traffic types are represented at the traffic-class level in the form of different traffic- classes. The last level contains number of queues which are grouped together to host the packets of the specific class type traffic. The QoS scheduler implementation requires flow classification, enqueue and dequeue operations. The flow classification is mandatory stage for HQoS where incoming packets are classified by mapping the packet fields information to 5-tuple (HQoS subport, pipe, traffic class, queue within traffic class, and color) and storing that information in mbuf sched field. The enqueue operation uses this information to determine the queue for storing the packet, and at this stage, if the specific queue is full, QoS drops the packet. The dequeue operation consists of scheduling the packet based on its length and available credits, and handing over the scheduled packet to the output interface. 
For more information on QoS Scheduler, please refer DPDK Programmer's Guide- http://dpdk.org/doc/guides/prog_guide/qos_framework.html Following illustrates the default HQoS configuration for each 10GbE output port: Single subport (subport 0): - Subport rate set to 100% of port rate - Each of the 4 traffic classes has rate set to 100% of port rate 4K pipes per subport 0 (pipes 0 .. 4095) with identical configuration: - Pipe rate set to 1/4K of port rate - Each of the 4 traffic classes has rate set to 100% of pipe rate - Within each traffic class, the byte-level WRR weights for the 4 queues are set to 1:1:1:1 ``` port { rate 1250000000 /* Assuming 10GbE port */ frame_overhead 24 /* Overhead fields per Ethernet frame: * 7B (Preamble) + * 1B (Start of Frame Delimiter (SFD)) + * 4B (Frame Check Sequence (FCS)) + * 12B (Inter Frame Gap (IFG)) */ mtu 1522 /* Assuming Ethernet/IPv4 pkt (FCS not included) */ n_subports_per_port 1 /* Number of subports per output interface */ n_pipes_per_subport 4096 /* Number of pipes (users/subscribers) */ queue_sizes 64 64 64 64 /* Packet queue size for each traffic class. * All queues within the same pipe traffic class * have the same size. Queues from different * pipes serving the same traffic class have * the same size. 
*/ } ``` ``` subport 0 { tb_rate 1250000000 /* Subport level token bucket rate (bytes per second) */ tb_size 1000000 /* Subport level token bucket size (bytes) */ tc0_rate 1250000000 /* Subport level token bucket rate for traffic class 0 (bytes per second) */ tc1_rate 1250000000 /* Subport level token bucket rate for traffic class 1 (bytes per second) */ tc2_rate 1250000000 /* Subport level token bucket rate for traffic class 2 (bytes per second) */ tc3_rate 1250000000 /* Subport level token bucket rate for traffic class 3 (bytes per second) */ tc_period 10 /* Time interval for refilling the token bucket associated with traffic class (Milliseconds) */ pipe 0 4095 profile 0 /* pipes (users/subscribers) configured with pipe profile 0 */ } ``` ``` pipe_profile 0 { tb_rate 305175 /* Pipe level token bucket rate (bytes per second) */ tb_size 1000000 /* Pipe level token bucket size (bytes) */ tc0_rate 305175 /* Pipe level token bucket rate for traffic class 0 (bytes per second) */ tc1_rate 305175 /* Pipe level token bucket rate for traffic class 1 (bytes per second) */ tc2_rate 305175 /* Pipe level token bucket rate for traffic class 2 (bytes per second) */ tc3_rate 305175 /* Pipe level token bucket rate for traffic class 3 (bytes per second) */ tc_period 40 /* Time interval for refilling the token bucket associated with traffic class at pipe level (Milliseconds) */ tc3_oversubscription_weight 1 /* Weight traffic class 3 oversubscription */ tc0_wrr_weights 1 1 1 1 /* Pipe queues WRR weights for traffic class 0 */ tc1_wrr_weights 1 1 1 1 /* Pipe queues WRR weights for traffic class 1 */ tc2_wrr_weights 1 1 1 1 /* Pipe queues WRR weights for traffic class 2 */ tc3_wrr_weights 1 1 1 1 /* Pipe queues WRR weights for traffic class 3 */ } ``` ``` red { tc0_wred_min 48 40 32 /* Minimum threshold for traffic class 0 queue (min_th) in number of packets */ tc0_wred_max 64 64 64 /* Maximum threshold for traffic class 0 queue (max_th) in number of packets */ tc0_wred_inv_prob 10 10 
10 /* Inverse of packet marking probability for traffic class 0 queue (maxp = 1 / maxp_inv) */ tc0_wred_weight 9 9 9 /* Traffic Class 0 queue weight */ tc1_wred_min 48 40 32 /* Minimum threshold for traffic class 1 queue (min_th) in number of packets */ tc1_wred_max 64 64 64 /* Maximum threshold for traffic class 1 queue (max_th) in number of packets */ tc1_wred_inv_prob 10 10 10 /* Inverse of packet marking probability for traffic class 1 queue (maxp = 1 / maxp_inv) */ tc1_wred_weight 9 9 9 /* Traffic Class 1 queue weight */ tc2_wred_min 48 40 32 /* Minimum threshold for traffic class 2 queue (min_th) in number of packets */ tc2_wred_max 64 64 64 /* Maximum threshold for traffic class 2 queue (max_th) in number of packets */ tc2_wred_inv_prob 10 10 10 /* Inverse of packet marking probability for traffic class 2 queue (maxp = 1 / maxp_inv) */ tc2_wred_weight 9 9 9 /* Traffic Class 2 queue weight */ tc3_wred_min 48 40 32 /* Minimum threshold for traffic class 3 queue (min_th) in number of packets */ tc3_wred_max 64 64 64 /* Maximum threshold for traffic class 3 queue (max_th) in number of packets */ tc3_wred_inv_prob 10 10 10 /* Inverse of packet marking probability for traffic class 3 queue (maxp = 1 / maxp_inv) */ tc3_wred_weight 9 9 9 /* Traffic Class 3 queue weight */ } ``` The Hierarchical Quality-of-Service (HQoS) scheduler object could be seen as part of the logical NIC output interface. To enable HQoS on specific output interface, vpp startup.conf file has to be configured accordingly. The output interface that requires HQoS, should have \"hqos\" parameter specified in dpdk section. Another optional parameter \"hqos-thread\" has been defined which can be used to associate the output interface with specific hqos thread. In cpu section of the config file, \"corelist-hqos-threads\" is introduced to assign logical cpu cores to run the HQoS threads. A HQoS thread can run multiple HQoS objects each associated with different output interfaces. 
All worker threads instead of writing packets to NIC TX queue directly, write the packets to a software queues. The hqos_threads read the software queues, and enqueue the packets to HQoS objects, as well as dequeue packets from HQOS objects and write them to NIC output interfaces. The worker threads need to be able to send the packets to any output interface, therefore, each HQoS object associated with NIC output interface should have software queues equal to worker threads count. Following illustrates the sample startup configuration file with 4x worker threads feeding 2x hqos threads that handle each QoS scheduler for 1x output interface. ``` dpdk { socket-mem 16384,16384 dev 0000:02:00.0 { num-rx-queues 2 hqos } dev 0000:06:00.0 { num-rx-queues 2 hqos } num-mbufs 1000000 } cpu { main-core 0 corelist-workers 1, 2, 3, 4 corelist-hqos-threads 5, 6 } ``` Each QoS scheduler instance is initialised with default parameters required to configure hqos port, subport, pipe and queues. Some of the parameters can be re-configured in run-time through CLI commands. Following commands can be used to configure QoS scheduler parameters. The command below can be used to set the subport level parameters such as token bucket rate (bytes per seconds), token bucket size (bytes), traffic class rates (bytes per seconds) and token update period (Milliseconds). ``` set dpdk interface hqos subport subport [rate ] [bktsize ] [tc0 ] [tc1 ] [tc2 ] [tc3 ] [period ] ``` For setting the pipe profile, following command can be used. ``` set dpdk interface hqos pipe subport pipe profile ``` To assign QoS scheduler instance to the specific thread, following command can be used. ``` set dpdk interface hqos placement thread ``` The command below is used to set the packet fields required for classifying the incoming packet. As a result of classification process, packet field information will be mapped to 5 tuples (subport, pipe, traffic class, pipe, color) and stored in packet mbuf. 
``` set dpdk interface hqos pktfield id subport|pipe|tc offset mask ``` The DSCP table entries used for identifying the traffic class and queue can be set using the command below; ``` set dpdk interface hqos tctbl entry tc queue ``` The QoS Scheduler configuration can displayed using the command below. ``` vpp# show dpdk interface hqos TenGigabitEthernet2/0/0 Thread: Input SWQ size = 4096 packets Enqueue burst size = 256 packets Dequeue burst size = 220 packets Packet field 0: slab position = 0, slab bitmask = 0x0000000000000000 (subport) Packet field 1: slab position = 40, slab bitmask = 0x0000000fff000000 (pipe) Packet field 2: slab position = 8, slab bitmask = 0x00000000000000fc (tc) Packet field 2 tc translation table: ([Mapped Value Range]: tc/queue tc/queue ...) [ 0 .. 15]: 0/0 0/1 0/2 0/3 1/0 1/1 1/2 1/3 2/0 2/1 2/2 2/3 3/0 3/1 3/2 3/3 [16 .. 31]: 0/0 0/1 0/2 0/3 1/0 1/1 1/2 1/3 2/0 2/1 2/2 2/3 3/0 3/1 3/2 3/3 [32 .. 47]: 0/0 0/1 0/2 0/3 1/0 1/1 1/2 1/3 2/0 2/1 2/2 2/3 3/0 3/1 3/2 3/3 [48 .. 63]: 0/0 0/1 0/2 0/3 1/0 1/1 1/2 1/3 2/0 2/1 2/2 2/3 3/0 3/1 3/2 3/3 Port: Rate = 1250000000 bytes/second MTU = 1514 bytes Frame overhead = 24 bytes Number of subports = 1 Number of pipes per subport = 4096 Packet queue size: TC0 = 64, TC1 = 64, TC2 = 64, TC3 = 64 packets Number of pipe profiles = 1 Subport 0: Rate = 120000000 bytes/second Token bucket size = 1000000 bytes Traffic class rate: TC0 = 120000000, TC1 = 120000000, TC2 = 120000000, TC3 = 120000000 bytes/second TC period = 10 milliseconds Pipe profile 0: Rate = 305175 bytes/second Token bucket size = 1000000 bytes Traffic class rate: TC0 = 305175, TC1 = 305175, TC2 = 305175, TC3 = 305175 bytes/second TC period = 40 milliseconds TC0 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1 TC1 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1 TC2 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1 TC3 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1 ``` The QoS Scheduler placement over the logical cpu cores can be displayed using below command. 
``` vpp# show dpdk interface hqos placement Thread 5 (vpp_hqos-threads_0 at lcore 5): TenGigabitEthernet2/0/0 queue 0 Thread 6 (vpp_hqos-threads_1 at lcore 6): TenGigabitEthernet4/0/1 queue 0 ``` This section explains the available binary APIs for configuring QoS scheduler parameters in run-time. The following API can be used to set the pipe profile of a pipe that belongs to a given subport: ``` sw_interface_set_dpdk_hqos_pipe rx | sw_if_index subport pipe profile ``` The data structures used for set the pipe profile parameter are as follows; ``` /** \\\\brief DPDK interface HQoS pipe profile set request @param client_index - opaque cookie to identify the sender @param context - sender context, to match reply w/ request @param sw_if_index - the interface @param subport - subport ID @param pipe - pipe ID within its subport @param profile - pipe profile ID */ define sw_interface_set_dpdk_hqos_pipe { u32 client_index; u32 context; u32 sw_if_index; u32 subport; u32 pipe; u32 profile; }; /** \\\\brief DPDK interface HQoS pipe profile set reply @param context - sender context, to match reply w/ request @param retval - request return code */ define sw_interface_set_dpdk_hqos_pipe_reply { u32 context; i32 retval; }; ``` The following API can be used to set the subport level parameters, for example- token bucket rate (bytes per seconds), token bucket size (bytes), traffic class rate (bytes per seconds) and tokens update period. 
``` sw_interface_set_dpdk_hqos_subport rx | sw_if_index subport [rate ] [bktsize ] [tc0 ] [tc1 ] [tc2 ] [tc3 ] [period ] ``` The data structures used for set the subport level parameter are as follows; ``` /** \\\\brief DPDK interface HQoS subport parameters set request @param client_index - opaque cookie to identify the sender @param context - sender context, to match reply w/ request @param sw_if_index - the interface @param subport - subport ID @param tb_rate - subport token bucket rate (measured in bytes/second) @param tb_size - subport token bucket size (measured in credits) @param tc_rate - subport traffic class 0 .. 3 rates (measured in bytes/second) @param tc_period - enforcement period for rates (measured in milliseconds) */ define sw_interface_set_dpdk_hqos_subport { u32 client_index; u32 context; u32 sw_if_index; u32 subport; u32 tb_rate; u32 tb_size; u32 tc_rate[4]; u32 tc_period; }; /** \\\\brief DPDK interface HQoS subport parameters set reply @param context - sender context, to match reply w/ request @param retval - request return code */ define sw_interface_set_dpdk_hqos_subport_reply { u32 context; i32 retval; }; ``` The following API can be used set the DSCP table entry. The DSCP table have 64 entries to map the packet DSCP field onto traffic class and hqos input queue. ``` sw_interface_set_dpdk_hqos_tctbl rx | sw_if_index entry tc queue ``` The data structures used for setting DSCP table entries are given below. ``` /** \\\\brief DPDK interface HQoS tctbl entry set request @param client_index - opaque cookie to identify the sender @param context - sender context, to match reply w/ request @param sw_if_index - the interface @param entry - entry index ID @param tc - traffic class (0 .. 3) @param queue - traffic class queue (0 .. 
3) */ define sw_interface_set_dpdk_hqos_tctbl { u32 client_index; u32 context; u32 sw_if_index; u32 entry; u32 tc; u32 queue; }; /** \\\\brief DPDK interface HQoS tctbl entry set reply @param context - sender context, to match reply w/ request @param retval - request return code */ define sw_interface_set_dpdk_hqos_tctbl_reply { u32 context; i32 retval; }; ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FD.io",
- "file_name": "README.md"
- },
- "content": [
- {
- "heading": "Introduction",
- "data": "The VPP platform is an extensible framework that provides out-of-the-box\n production quality switch/router functionality. It is the open source version\n of Cisco's Vector Packet Processing (VPP) technology: a high performance,\n packet-processing stack that can run on commodity CPUs.\n The benefits of this implementation of VPP are its high performance, proven\n technology, modularity, flexibility, and rich feature set.\n For more information on VPP and its features please visit the\n [FD.io website](http://fd.io/) and\n [What is VPP?](https://wiki.fd.io/view/VPP/What_is_VPP%3F) pages."
- },
- {
- "heading": "Changes",
- "data": "Details of the changes leading up to this version of VPP can be found under\n doc/releasenotes."
- },
- {
- "heading": "Directory layout",
- "data": "| Directory name | Description |\n | ---------------------- | ------------------------------------------- |\n | build-data | Build metadata |\n | build-root | Build output directory |\n | docs | Sphinx Documentation |\n | dpdk | DPDK patches and build infrastructure |\n | extras/libmemif | Client library for memif |\n | src/examples | VPP example code |\n | src/plugins | VPP bundled plugins directory |\n | src/svm | Shared virtual memory allocation library |\n | src/tests | Standalone tests (not part of test harness) |\n | src/vat | VPP API test program |\n | src/vlib | VPP application library |\n | src/vlibapi | VPP API library |\n | src/vlibmemory | VPP Memory management |\n | src/vnet | VPP networking |\n | src/vpp | VPP application |\n | src/vpp-api | VPP application API bindings |\n | src/vppinfra | VPP core library |\n | src/vpp/api | Not-yet-relocated API bindings |\n | test | Unit tests and Python test harness |"
- },
- {
- "heading": "Getting started",
- "data": "In general anyone interested in building, developing or running VPP should\n consult the [VPP wiki](https://wiki.fd.io/view/VPP) for more complete\n documentation.\n In particular, readers are recommended to take a look at [Pulling, Building, Running, Hacking, Pushing](https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code), which provides extensive step-by-step\n coverage of the topic.\n For the impatient, some salient information is distilled below."
- },
- {
- "heading": "Quick-start: On an existing Linux host",
- "data": "To install system dependencies, build VPP and then install it, simply run the\n build script. This should be performed by a non-privileged user with `sudo`\n access from the project base directory:\n ./extras/vagrant/build.sh\n If you want a more fine-grained approach because you intend to do some\n development work, the `Makefile` in the root directory of the source tree\n provides several convenience shortcuts as `make` targets that may be of\n interest. To see the available targets run:\n make"
- },
- {
- "heading": "Quick-start: Vagrant",
- "data": "The directory `extras/vagrant` contains a `Vagrantfile` and supporting\n scripts to bootstrap a working VPP inside a Vagrant-managed Virtual Machine.\n This VM can then be used to test concepts with VPP or as a development\n platform to extend VPP. Some obvious caveats apply when using a VM for VPP\n since its performance will never match that of bare metal; if your work is\n timing or performance sensitive, consider using bare metal in addition to or\n instead of the VM.\n For this to work you will need a working installation of Vagrant. Instructions\n for this can be found [on the Setting up Vagrant wiki page](https://wiki.fd.io/view/DEV/Setting_Up_Vagrant)."
- },
- {
- "heading": "More information",
- "data": "Several modules provide documentation, see @subpage user_doc for more end-user-oriented information. Also see @subpage dev_doc for developer notes. Visit the [VPP wiki](https://wiki.fd.io/view/VPP) for details on more advanced building strategies and other development notes."
- },
- {
- "additional_info": "Vector Packet Processing ======================== The VPP platform is an extensible framework that provides out-of-the-box production quality switch/router functionality. It is the open source version of Cisco's Vector Packet Processing (VPP) technology: a high performance, packet-processing stack that can run on commodity CPUs. The benefits of this implementation of VPP are its high performance, proven technology, its modularity and flexibility, and rich feature set. For more information on VPP and its features please visit the [FD.io website](http://fd.io/) and [What is VPP?](https://wiki.fd.io/view/VPP/What_is_VPP%3F) pages. Details of the changes leading up to this version of VPP can be found under doc/releasenotes. | Directory name | Description | | ---------------------- | ------------------------------------------- | | build-data | Build metadata | | build-root | Build output directory | | docs | Sphinx Documentation | | dpdk | DPDK patches and build infrastructure | | extras/libmemif | Client library for memif | | src/examples | VPP example code | | src/plugins | VPP bundled plugins directory | | src/svm | Shared virtual memory allocation library | | src/tests | Standalone tests (not part of test harness) | | src/vat | VPP API test program | | src/vlib | VPP application library | | src/vlibapi | VPP API library | | src/vlibmemory | VPP Memory management | | src/vnet | VPP networking | | src/vpp | VPP application | | src/vpp-api | VPP application API bindings | | src/vppinfra | VPP core library | | src/vpp/api | Not-yet-relocated API bindings | | test | Unit tests and Python test harness | In general anyone interested in building, developing or running VPP should consult the [VPP wiki](https://wiki.fd.io/view/VPP) for more complete documentation. 
In particular, readers are recommended to take a look at [Pulling, Building, Running, Hacking, Pushing](https://wiki.fd.io/view/VPP/Pulling,_Building,_Run ning,_Hacking_and_Pushing_VPP_Code) which provides extensive step-by-step coverage of the topic. For the impatient, some salient information is distilled below. To install system dependencies, build VPP and then install it, simply run the build script. This should be performed a non-privileged user with `sudo` access from the project base directory: ./extras/vagrant/build.sh If you want a more fine-grained approach because you intend to do some development work, the `Makefile` in the root directory of the source tree provides several convenience shortcuts as `make` targets that may be of interest. To see the available targets run: make The directory `extras/vagrant` contains a `VagrantFile` and supporting scripts to bootstrap a working VPP inside a Vagrant-managed Virtual Machine. This VM can then be used to test concepts with VPP or as a development platform to extend VPP. Some obvious caveats apply when using a VM for VPP since its performance will never match that of bare metal; if your work is timing or performance sensitive, consider using bare metal in addition or instead of the VM. For this to work you will need a working installation of Vagrant. Instructions for this can be found [on the Setting up Vagrant wiki page] (https://wiki.fd.io/view/DEV/Setting_Up_Vagrant). Several modules provide documentation, see @subpage user_doc for more end-user-oriented information. Also see @subpage dev_doc for developer notes. Visit the [VPP wiki](https://wiki.fd.io/view/VPP) for details on more advanced building strategies and other development notes."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "FD.io",
- "file_name": "spec.md"
- },
- "content": [
- {
- "heading": "Packet Forger JSON Specification Rev 0.1",
- "data": ""
- },
- {
- "heading": "0. Change Logs",
- "data": "2021-10, initialized by Zhang, Qi"
- },
- {
- "heading": "1. Parse Graph",
- "data": "A Parse Graph is a unidirectional graph. It consists of a set of nodes and edges. A node represents a network protocol header, and an edge represents the linkage of two protocol headers that are adjacent in the packet. An example parse graph with 5 nodes and 6 edges:\n [](https://mermaid-js.github.io/mermaid-live-editor/edit#eyJjb2RlIjoiZ3JhcGggVERcbiAgICBBKChNQUMpKSAtLT4gQigoSVB2NCkpXG4gICAgQSgoTUFDKSkgLS0-IEMoKElQdjYpKVxuICAgIEIgLS0-IEQoKFRDUCkpXG4gICAgQyAtLT4gRCgoVENQKSlcbiAgICBCIC0tPiBFKChVRFApKVxuICAgIEMgLS0-IEUoKFVEUCkpXG4gICAgIiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRhcmtcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjpmYWxzZX0)\n A Node or an Edge is described by a json object. There is no json representation of the parse graph itself; software should load all the json objects of nodes and edges and then build the parse graph logic in memory."
- },
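A minimal sketch of what "build the parse graph logic in memory" can look like, using abbreviated node/edge objects in the shape this spec defines (the loader code itself is an illustrative assumption, not part of the spec):

```python
# Abbreviated node/edge objects mirroring the spec's examples.
objects = [
    {"type": "node", "name": "mac"},
    {"type": "node", "name": "ipv4"},
    {"type": "node", "name": "udp"},
    {"type": "edge", "start": "mac", "end": "ipv4"},
    {"type": "edge", "start": "ipv4", "end": "udp"},
]

nodes = {o["name"] for o in objects if o["type"] == "node"}
edges: dict = {}
for o in objects:
    if o["type"] == "edge":
        # adjacency list: which headers may follow which
        edges.setdefault(o["start"], []).append(o["end"])

print(edges["mac"])   # ['ipv4']
```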
- {
- "heading": "2. Node",
- "data": "A json object of a Node includes the properties below:"
- },
- {
- "heading": "type",
- "data": "This should always be \"node\"."
- },
- {
- "heading": "name",
- "data": "This is the name of the protocol."
- },
- {
- "heading": "layout",
- "data": "This is an array of the fields in the protocol header, which also implies the bit order. For example, the json object of a mac header is as below:\n ```\n {\n \"type\" : \"node\",\n \"name\" : \"mac\",\n \"layout\" : [\n {\n \"name\" : \"src\",\n \"size\" : \"48\",\n \"format\" : \"mac\"\n },\n {\n \"name\" : \"dst\",\n \"size\" : \"48\",\n \"format\" : \"mac\"\n },\n {\n \"name\" : \"ethertype\",\n \"size\" : \"16\"\n }\n ]\n }\n ```\n For each field, the following properties can be defined:\n * **name**\n The name of the field. Typically it should be unique among all fields in the same node, except when it is \"reserved\".\n * **size**\n Size of the field. Note that the unit is \"bit\", not \"byte\".\n Sometimes a field's size is decided by another field's value. For example, a geneve header's \"options\" field's size is decided by the \"optlen\" field's value, so we have:\n ```\n \"name\" : \"geneve\",\n \"layout\" : [\n ......\n {\n \"name\" : \"reserved\",\n \"size\" : \"8\"\n },\n {\n \"name\" : \"options\",\n \"size\" : \"optlen<<5\"\n }\n ],\n ```\n Each increment of \"optlen\" means a 4-byte (32-bit) increase of the size of \"options\", so the bit value is shifted left by 5.\n * **format**\n Defines the input string format of the value. All formats are described in the section **Input Format**, which also describes the default format used when it is not explicitly defined.\n * **default**\n Defines the default value of the field when a protocol header instance is created by the node. If not defined, the default value is always 0. The default value can be overwritten when forging a packet with a specific value for the field. For example, we define the default ipv4 addresses as below:\n ```\n \"name\" : \"ipv4\",\n \"layout\" : [\n ......\n {\n \"name\" : \"src\",\n \"size\" : \"32\",\n \"format\" : \"ipv4\",\n \"default\" : \"1.1.1.1\"\n },\n {\n \"name\" : \"dst\",\n \"size\" : \"32\",\n \"format\" : \"ipv4\",\n \"default\" : \"2.2.2.2\"\n }\n ]\n ```\n * **readonly**\n Defines whether a field is read-only. Typically it is used together with \"default\". For example, the version of an IPv4 header should be 4 and can't be overwritten.\n ```\n \"name\" : \"ipv4\",\n \"layout\" : [\n {\n \"name\" : \"version\",\n \"size\" : \"4\",\n \"default\" : \"4\",\n \"readonly\" : \"true\"\n },\n ......\n ],\n ```\n A reserved field is implicitly \"readonly\" and should always be 0.\n * **optional**\n A field can be optional, depending on a flag in another field. For example, the GRE header has a couple of optional fields.\n ```\n \"name\" : \"gre\",\n \"layout\" : [\n {\n \"name\" : \"c\",\n \"size\" : \"1\"\n },\n {\n \"name\" : \"reserved\",\n \"size\" : \"1\"\n },\n {\n \"name\" : \"k\",\n \"size\" : \"1\"\n },\n {\n \"name\" : \"s\",\n \"size\" : \"1\"\n },\n ......\n {\n \"name\" : \"checksum\",\n \"size\" : \"16\",\n \"optional\" : \"c=1\"\n },\n {\n \"name\" : \"reserved\",\n \"size\" : \"16\",\n \"optional\" : \"c=1\"\n },\n {\n \"name\" : \"key\",\n \"size\" : \"32\",\n \"optional\" : \"k=1\"\n },\n {\n \"name\" : \"sequencenumber\",\n \"size\" : \"32\",\n \"optional\" : \"s=1\"\n }\n ]\n ```\n The expression of an optional field can use \"**&**\" or \"**|**\" to combine multiple conditions. For example, the gtpu header has the optional fields below.\n ```\n \"name\" : \"gtpu\",\n \"layout\" : [\n ......\n {\n \"name\" : \"e\",\n \"size\" : \"1\"\n },\n {\n \"name\" : \"s\",\n \"size\" : \"1\"\n },\n {\n \"name\" : \"pn\",\n \"size\" : \"1\"\n },\n ......\n {\n \"name\" : \"teid\",\n \"size\" : \"16\"\n },\n {\n \"name\" : \"sequencenumber\",\n \"size\" : \"16\",\n \"optional\" : \"e=1|s=1|pn=1\"\n },\n ......\n ]\n ```\n * **autoincrease**\n Some fields' values cover the length of the payload or the size of an optional field in the same header, so they should be auto-increased during packet forging. For example, the \"totallength\" field of the ipv4 header is an autoincrease field.\n ```\n \"name\" : \"ipv4\",\n \"layout\" : [\n ......\n {\n \"name\" : \"totallength\",\n \"size\" : \"16\",\n \"default\" : \"20\",\n \"autoincrease\" : \"true\"\n },\n ......\n ]\n ```\n A field which is autoincrease is implicitly readonly.\n * **increaselength**\n Typically this should only be enabled for an optional field, to trigger another field's autoincrease. For example, the gtpc \"messagelength\" field covers all the data starting from the \"teid\" field, so its default size is 4 bytes (covering sequencenumber + 8 reserved bits), and it should be increased if \"teid\" exists or any payload is appended.\n ```\n \"name\" : \"gtpc\",\n \"layout\" : [\n ......\n {\n \"name\" : \"messagelength\",\n \"size\" : \"16\",\n \"default\" : \"4\",\n \"autoincrease\" : \"true\"\n },\n {\n \"name\" : \"teid\",\n \"size\" : \"32\",\n \"optional\" : \"t=1\",\n \"increaselength\" : \"true\"\n },\n {\n \"name\" : \"sequencenumber\",\n \"size\" : \"24\"\n },\n {\n \"name\" : \"reserved\",\n \"size\" : \"8\"\n }\n ]\n ```"
- },
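The flag semantics above can be sketched in a few lines. A minimal illustration (the helper name `field_present` is hypothetical, and treating `&` as binding tighter than `|` is an assumption the spec leaves open):

```python
# Sketch (not part of the spec): decide whether an optional field is
# present by evaluating its "optional" expression against the values of
# the flag fields parsed so far. "|" is OR, "&" is AND; "&" is assumed
# to bind tighter than "|".

def field_present(expr, flags):
    """expr: condition string such as "c=1" or "e=1|s=1|pn=1".
    flags: dict mapping flag-field names to their integer values."""
    if not expr:
        return True  # no condition: the field is always present
    if "|" in expr:
        return any(field_present(c, flags) for c in expr.split("|"))
    if "&" in expr:
        return all(field_present(c, flags) for c in expr.split("&"))
    name, _, value = expr.partition("=")
    return flags.get(name.strip(), 0) == int(value)

# GRE: checksum is present only when c=1;
# gtpu: sequencenumber is present when e, s or pn is set
print(field_present("c=1", {"c": 1}))                            # True
print(field_present("e=1|s=1|pn=1", {"e": 0, "s": 0, "pn": 1}))  # True
```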
- {
- "heading": "**attributes",
- "data": "This defines an array of attributes. An attribute does not define data belonging to the current protocol header, but it impacts the behaviour when the actions of an edge involving this protocol header are applied. For example, a geneve node has the attribute \"udpport\" which defines the udp tunnel port, so when geneve is appended after a udp header, the udp header's dst port is expected to be changed to this value.\n ```\n \"name\" : \"geneve\",\n \"fields\" : [\n ......\n ],\n \"attributes\" : [\n {\n \"name\" : \"udpport\",\n \"size\" : \"16\",\n \"default\" : \"6081\"\n }\n ]\n ```\n An attribute can only have the properties below, which take the same effect as when they appear in a field:\n * name\n * size (must be a fixed value)\n * default\n * format"
- },
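As a rough illustration of the attribute mechanism described above (the helper and the dict layout are hypothetical, not part of the spec):

```python
# Sketch: when a node carrying a "udpport" attribute (such as geneve) is
# appended after a udp header, rewrite the udp header's dst port to the
# attribute's value. Helper name and header representation are illustrative.

def apply_udpport_attribute(udp_header, next_node):
    """Rewrite udp dst port if next_node defines a udpport attribute."""
    for attr in next_node.get("attributes", []):
        if attr["name"] == "udpport":
            udp_header["dst"] = int(attr["default"])

geneve = {"name": "geneve",
          "attributes": [{"name": "udpport", "size": "16", "default": "6081"}]}
udp = {"name": "udp", "dst": 0}
apply_udpport_attribute(udp, geneve)
print(udp["dst"])  # 6081
```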
- {
- "heading": "3. Edge",
- "data": "A json object of an Edge includes the below properties:\n * **type**\n This should always be \"edge\".\n * **start**\n This is the start node of the edge.\n * **end**\n This is the end node of the edge.\n * **actions**\n This is an array of actions that should be applied during packet forging.\n For example, when appending an ipv4 header after a mac header, the \"ethertype\" field of mac should be set to \"0x0800\":\n ```\n {\n \"type\" : \"edge\",\n \"start\" : \"mac\",\n \"end\" : \"ipv4\",\n \"actions\" : [\n {\n \"dst\" : \"start.ethertype\",\n \"src\" : \"0x0800\"\n }\n ]\n }\n ```\n Each action should have two properties:\n * **dst**\n This describes the target field to set; it is formatted as \"node.field\", where node must be \"start\" or \"end\".\n * **src**\n This describes the value to set; it can be a constant value or use the same \"node.field\" format as dst.\n For example, when appending a vlan header after mac, we have the below actions:\n ```\n {\n \"type\" : \"edge\",\n \"start\" : \"mac\",\n \"end\" : \"vlan\",\n \"actions\" : [\n {\n \"dst\" : \"start.ethertype\",\n \"src\" : \"end.tpid\"\n },\n {\n \"dst\" : \"end.ethertype\",\n \"src\" : \"start.ethertype\"\n }\n ]\n }\n ```\n To avoid duplication, multiple edges can be aggregated into one json object if their actions are the same: multiple node names can be listed in **start** or **end** with the separator \"**,**\".\n For example, ipv6 and all ipv6 extension headers share the same actions when a udp header is appended:\n ```\n {\n \"type\" : \"edge\",\n \"start\" : \"ipv6,ipv6srh,ipv6crh16,ipv6crh32\",\n \"end\" : \"udp\",\n \"actions\" : [\n {\n \"dst\" : \"start.nextheader\",\n \"src\" : \"17\"\n }\n ]\n }\n ```\n Another example: gre and nvgre share the same actions when appended after an ipv4 header:\n ```\n {\n \"type\" : \"edge\",\n \"start\" : \"ipv4\",\n \"end\" : \"gre,nvgre\",\n \"actions\" : [\n {\n \"dst\" : \"start.protocol\",\n \"src\" : \"47\"\n }\n ]\n }\n ```"
- },
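The action semantics above can be sketched as follows. One subtlety worth making explicit: the mac/vlan example swaps ethertype values, so an implementation has to read every `src` before writing any `dst`, otherwise the second action would copy the already-overwritten value (helper names are illustrative, not part of the spec):

```python
# Sketch: apply an edge's actions when appending one header after another.
# "start.field" / "end.field" references are resolved against the two
# header instances; any other src is treated as a constant.

def apply_edge(edge, start_hdr, end_hdr):
    hdrs = {"start": start_hdr, "end": end_hdr}

    def resolve(src):
        if src.startswith(("start.", "end.")):
            node, _, field = src.partition(".")
            return hdrs[node][field]
        return src  # constant, e.g. "0x0800" or "17"

    # read all sources first, then write, so swap-style actions work
    pending = [(a["dst"], resolve(a["src"])) for a in edge["actions"]]
    for dst, value in pending:
        node, _, field = dst.partition(".")
        hdrs[node][field] = value

mac = {"ethertype": "0x0800"}
vlan = {"tpid": "0x8100", "ethertype": "0"}
edge = {"type": "edge", "start": "mac", "end": "vlan",
        "actions": [{"dst": "start.ethertype", "src": "end.tpid"},
                    {"dst": "end.ethertype", "src": "start.ethertype"}]}
apply_edge(edge, mac, vlan)
print(mac["ethertype"], vlan["ethertype"])  # 0x8100 0x0800
```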
- {
- "heading": "4. Path",
- "data": "A path defines a sequence of nodes and is the input parameter for packet forging; packet forging should fail if the path can't be recognised as a subgraph of the parse graph.\n A json object of a path should include the below properties:"
- },
- {
- "heading": "**type",
- "data": "This should always be \"path\"."
- },
- {
- "heading": "**stack",
- "data": "This is an array of node configurations, which also implies the protocol header sequence of the packet. Below is an example to forge an ipv4 / udp packet with default values.\n ```\n {\n \"type\" : \"path\",\n \"stack\" : [\n {\n \"header\" : \"mac\"\n },\n {\n \"header\" : \"ipv4\"\n },\n {\n \"header\" : \"udp\"\n }\n ]\n }\n ```\n A node configuration can have the below properties:\n * **header**\n This is a protocol name (a node name).\n * **fields**\n This is an array of 3-member tuples:\n * **name**\n The name of a field or attribute that belongs to the node; note that a readonly field should not be selected.\n * **value**\n The value to set the field or attribute to.\n * **mask**\n This is optional; if it is not defined, the corresponding bits of the mask should be set to 0. It should be ignored for an attribute."
- },
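A sketch of the validation rule stated above — forging fails unless every adjacent pair of headers in the stack is connected by an edge, honouring the comma-separated node lists that `start`/`end` may carry (helper names are hypothetical):

```python
# Sketch: check that a path's header stack is a walk in the parse graph,
# i.e. each adjacent header pair is connected by some edge. Edge objects
# may list several comma-separated node names in "start"/"end".

def edge_matches(edge, start, end):
    return (start in edge["start"].split(",") and
            end in edge["end"].split(","))

def validate_path(path, edges):
    stack = [node["header"] for node in path["stack"]]
    for a, b in zip(stack, stack[1:]):
        if not any(edge_matches(e, a, b) for e in edges):
            return False  # forging should fail: no edge a -> b
    return True

edges = [
    {"type": "edge", "start": "mac", "end": "ipv4", "actions": []},
    {"type": "edge", "start": "ipv4", "end": "udp", "actions": []},
]
path = {"type": "path", "stack": [{"header": "mac"},
                                  {"header": "ipv4"},
                                  {"header": "udp"}]}
print(validate_path(path, edges))  # True
```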
- {
- "heading": "**actions",
- "data": "This is optional. When this json file is the input of a flow adding command, it can be used directly as the flow rule's action.\n An example to forge an ipv4 packet with src ip address 192.168.0.1 and dst ip address 192.168.0.2, also using full ip address masks:\n ```\n {\n \"type\" : \"path\",\n \"stack\" : [\n {\n \"header\" : \"mac\"\n },\n {\n \"header\" : \"ipv4\",\n \"fields\" : [\n {\n \"name\" : \"src\",\n \"value\" : \"192.168.0.1\",\n \"mask\" : \"255.255.255.255\"\n },\n {\n \"name\" : \"dst\",\n \"value\" : \"192.168.0.2\",\n \"mask\" : \"255.255.255.255\"\n }\n ]\n }\n ],\n \"actions\" : \"redirect-to-queue 3\"\n }\n ```"
- },
- {
- "heading": "5. Input Format",
- "data": "Every field or attribute is associated with an **Input Format**, so the software can figure out how to parse a default value in a node or a configured value in a path.\n Currently there are 8 predefined formats; customised formats are not supported."
- },
- {
- "heading": "**u8",
- "data": "Accepts a number from 0 to 255, or hex from 0x0 to 0xff."
- },
- {
- "heading": "**u16",
- "data": "Accepts a number from 0 to 65535, or hex from 0x0 to 0xffff."
- },
- {
- "heading": "**u32",
- "data": "Accepts a number from 0 to 4294967295, or hex from 0x0 to 0xffffffff."
- },
- {
- "heading": "**u64",
- "data": "Accepts a number from 0 to 2^64 - 1, or hex from 0x0 to 0xffffffffffffffff."
- },
- {
- "heading": "**mac",
- "data": "Accepts xx:xx:xx:xx:xx:xx, where x is a hex digit from 0 to f."
- },
- {
- "heading": "**ipv4",
- "data": "Accepts n.n.n.n, where n is a number from 0 to 255."
- },
- {
- "heading": "**ipv6",
- "data": "Accepts xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where x is a hex digit from 0 to f."
- },
- {
- "heading": "**bytearray",
- "data": "Accepts u8,u8,u8,...\n If a format is not defined for a field or attribute, the default format is selected based on size as below, and the MSB should be ignored by software if the value exceeds the limit.\n | Size | Default Format |\n | ------------- | -------------- |\n | 1 - 8 | u8 |\n | 9 - 16 | u16 |\n | 17 - 32 | u32 |\n | 33 - 64 | u64 |\n | > 64 | bytearray |\n | variable size | bytearray |"
- },
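The default-format table lends itself to a direct translation; a minimal sketch (the function name is illustrative), where a non-numeric size expression such as `optlen<<5` falls into the variable-size row:

```python
# Sketch: pick the default input format from a field's bit size, per the
# default-format table. Variable-size fields (size given as an
# expression rather than a number) fall back to bytearray.

def default_format(size):
    try:
        bits = int(size)
    except ValueError:
        return "bytearray"  # variable size, e.g. "optlen<<5"
    if bits <= 8:
        return "u8"
    if bits <= 16:
        return "u16"
    if bits <= 32:
        return "u32"
    if bits <= 64:
        return "u64"
    return "bytearray"

print(default_format("4"))          # u8  (ipv4 "version")
print(default_format("16"))         # u16 (ipv4 "totallength")
print(default_format("optlen<<5"))  # bytearray (geneve "options")
```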
- {
- "additional_info": "2021-10, initialized by Zhang, Qi"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "add-nftables-implementation.md"
- },
- "content": [
- {
- "heading": "Add nftables implementation to flannel",
- "data": "Date: 2024-02-01"
- },
- {
- "heading": "Status",
- "data": "Writing"
- },
- {
- "heading": "Context",
- "data": "At the moment, flannel uses iptables to masquerade and route packets.\n Our implementation is based on the library from coreos (https://github.com/coreos/go-iptables).\n There are several issues with using iptables in flannel:\n * performance: packets are matched against a list of rules, so performance is O(n). This isn't very important for flannel because it uses few iptables rules anyway.\n * stability:\n ** rules must be purged and then re-added every time flannel needs to change a rule, to keep the correct order\n ** there can be interference with other k8s components that also use iptables (kube-proxy, kube-router...)\n * deprecation: nftables is pushed as the replacement for iptables in the kernel and in future distros, including future RHEL releases.\n References:\n - https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/3866-nftables-proxy/README.md#motivation"
- },
- {
- "heading": "Current state",
- "data": "In the flannel code, all references to iptables are wrapped in the `iptables` package.\n The package provides the type `IPTableRule` to represent an individual rule. This type is almost entirely internal to the package, so it would be easy to refactor the code to hide it in favor of a more abstract type that works for both iptables and nftables rules.\n Unfortunately, the package doesn't provide an interface, so in order to provide both an iptables-based and an nftables-based implementation, this needs to be refactored.\n The package includes several Go interfaces (`IPTables`, `IPTablesError`) that are used for testing."
- },
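A sketch (in Python for brevity — flannel itself is written in Go, and none of these names are flannel's actual API) of the interface shape the refactoring could introduce: one abstraction both rule engines satisfy, selected once at startup:

```python
# Hypothetical sketch of the refactoring target: a single interface that
# an iptables-based and an nftables-based backend both implement, chosen
# exactly once at startup by the optional CLI flag. All names here are
# illustrative, not flannel's real types.
from abc import ABC, abstractmethod

class TrafficRuleManager(ABC):
    """What flannel needs from either rule engine."""

    @abstractmethod
    def setup_masquerade(self, pod_cidr: str, cluster_cidr: str) -> None:
        """Install masquerading rules (FLANNEL-POSTRTG equivalent)."""

    @abstractmethod
    def setup_forwarding(self, flannel_network: str) -> None:
        """Install forwarding rules (FLANNEL-FWD equivalent)."""

    @abstractmethod
    def teardown(self) -> None:
        """Remove this engine's rules (also used to reset the other mode)."""

class IptablesManager(TrafficRuleManager):
    def setup_masquerade(self, pod_cidr, cluster_cidr): pass
    def setup_forwarding(self, flannel_network): pass
    def teardown(self): pass

class NftablesManager(TrafficRuleManager):
    def setup_masquerade(self, pod_cidr, cluster_cidr): pass
    def setup_forwarding(self, flannel_network): pass
    def teardown(self): pass

def new_manager(use_nftables: bool) -> TrafficRuleManager:
    # mutually exclusive at runtime: exactly one engine is instantiated
    return NftablesManager() if use_nftables else IptablesManager()
```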
- {
- "heading": "Requirements",
- "data": "Ideally, flannel will include both an iptables and an nftables implementation. These need to coexist in the code but will be mutually exclusive at runtime.\n The choice of which implementation to use will be triggered by an optional CLI flag.\n iptables will remain the default for the time being.\n Using nftables is an opportunity to optimise the rules deployed by flannel, but we need to be careful about backward compatibility with the current backend.\n Starting flannel in either mode should reset the other mode as best as possible, to ensure that users don't need to reboot if they change mode."
- },
- {
- "heading": "Architecture",
- "data": "Currently, flannel uses two dedicated chains for its own rules: `FLANNEL-POSTRTG` and `FLANNEL-FWD`.\n * flannel adds rules to the `FORWARD` and `POSTROUTING` chains to direct traffic to its own chains.\n * rules in `FLANNEL-POSTRTG` are used to manage masquerading of the traffic to/from the pods\n * rules in `FLANNEL-FWD` are used to ensure that traffic to and from the flannel network can be forwarded\n With nftables, flannel would have its own dedicated table (`flannel`) with arbitrary chains and rules as needed.\n see https://wiki.nftables.org/wiki-nftables/index.php/Performing_Network_Address_Translation_(NAT)"
- },
- {
- "heading": "!! untested example",
- "data": "```\n table flannel {\n chain flannel-postrtg {\n type nat hook postrouting priority 0;\n # kube-proxy\n meta mark 0x4000/0x4000 return\n # don't NAT traffic within overlay network\n ip saddr $pod_cidr ip daddr $cluster_cidr return\n ip saddr $cluster_cidr ip daddr $pod_cidr return\n # Prevent performing Masquerade on external traffic which arrives from a Node that owns the container/pod IP address\n ip saddr != $pod_cidr ip daddr $cluster_cidr return\n # NAT if it's not multicast traffic\n ip saddr $cluster_cidr ip daddr != 224.0.0.0/4 nat\n # Masquerade anything headed towards flannel from the host\n ip saddr != $cluster_cidr ip daddr $cluster_cidr nat\n }\n chain flannel-fwd {\n type filter hook input priority 0; policy drop;\n # allow traffic to be forwarded if it is to or from the flannel network range\n ip saddr flannelNetwork accept\n ip daddr flannelNetwork accept\n }\n }\n ```"
- },
- {
- "heading": "nftables library",
- "data": "We can either:\n * call the `nft` executable directly\n * use https://github.com/kubernetes-sigs/knftables which is developed for kube-proxy and should cover our use case"
- },
- {
- "heading": "Implementation steps",
- "data": "* refactor the current iptables code to better encapsulate iptables calls in the dedicated package\n * implement an nftables mode that is the exact equivalent of the current iptables code\n * add similar unit test and e2e test coverage\n * try to optimize the code using nftables-specific features\n * integrate the new flag in k3s"
- },
- {
- "heading": "Decision",
- "data": ""
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "backends.md"
- },
- "content": [
- {
- "heading": "Backends",
- "data": "Flannel may be paired with several different backends. Once set, the backend should not be changed at runtime.\n VXLAN is the recommended choice. host-gw is recommended for more experienced users who want the performance improvement and whose infrastructure supports it (typically it can't be used in cloud environments). UDP is suggested for debugging only, or for very old kernels that don't support VXLAN.\n In case `firewalld` is enabled on the node, the port used by the backend needs to be opened with `firewall-cmd`.\n For more information on configuration options for Tencent, see [TencentCloud VPC Backend for Flannel][tencentcloud-vpc]."
- },
- {
- "heading": "Recommended backends",
- "data": ""
- },
- {
- "heading": "VXLAN",
- "data": "Use in-kernel VXLAN to encapsulate the packets.\n Type and options:\n * `Type` (string): `vxlan`\n * `VNI` (number): VXLAN Identifier (VNI) to be used. On Linux, defaults to 1. On Windows, should be greater than or equal to 4096.\n * `Port` (number): UDP port to use for sending encapsulated packets. On Linux, defaults to the kernel default, currently 8472; on Windows, must be 4789.\n * `GBP` (Boolean): Enable [VXLAN Group Based Policy](https://github.com/torvalds/linux/commit/3511494ce2f3d3b77544c79b87511a4ddb61dc89). Defaults to `false`. GBP is not supported on Windows.\n * `DirectRouting` (Boolean): Enable direct routes (like `host-gw`) when the hosts are on the same subnet. VXLAN will only be used to encapsulate packets to hosts on different subnets. Defaults to `false`. DirectRouting is not supported on Windows.\n * `MTU` (number): Desired MTU for the outgoing packets. If not defined, the MTU of the external interface is used.\n * `MacPrefix` (String): Only used on Windows; sets the MAC prefix. Defaults to `0E-2A`.\n Starting with Ubuntu 21.10, vxlan support on Raspberry Pi has been moved into a separate kernel module."
- },
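For reference, a hypothetical flannel `net-conf.json` selecting the VXLAN backend with the options above might look like this (the network CIDR and option values are illustrative, not defaults to copy):

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan",
    "VNI": 1,
    "Port": 8472,
    "DirectRouting": false
  }
}
```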
- {
- "heading": "host-gw",
- "data": "Use host-gw to create IP routes to subnets via remote machine IPs. Requires direct layer2 connectivity between hosts running flannel.\n host-gw provides good performance with few dependencies and easy setup.\n Type:\n * `Type` (string): `host-gw`"
- },
- {
- "heading": "WireGuard",
- "data": "Use in-kernel [WireGuard](https://www.wireguard.com) to encapsulate and encrypt the packets.\n Type:\n * `Type` (string): `wireguard`\n * `PSK` (string): Optional. The pre-shared key to use. Use `wg genpsk` to generate a key.\n * `ListenPort` (int): Optional. The UDP port to listen on. Default is `51820`.\n * `ListenPortV6` (int): Optional. The UDP port to listen on for ipv6. Default is `51821`.\n * `MTU` (number): Desired MTU for the outgoing packets. If not defined, the MTU of the external interface is used.\n * `Mode` (string): Optional.\n * separate - Use separate wireguard tunnels for ipv4 and ipv6 (default)\n * auto - Single wireguard tunnel for both address families; autodetermine the preferred peer address\n * ipv4 - Single wireguard tunnel for both address families; use ipv4 for the peer addresses\n * ipv6 - Single wireguard tunnel for both address families; use ipv6 for the peer addresses\n * `PersistentKeepaliveInterval` (int): Optional. Default is 0 (disabled).\n If no private key was generated before, the private key is written to `/run/flannel/wgkey`. You can use the environment variable `WIREGUARD_KEY_FILE` to change this path.\n The static names of the interfaces are `flannel-wg` and `flannel-wg-v6`. WireGuard tools like `wg show` can be used to debug interfaces and peers.\n Users of kernels < 5.6 need to [install](https://www.wireguard.com/install/) an additional WireGuard package."
- },
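Similarly, a hypothetical `net-conf.json` selecting the WireGuard backend (option values are illustrative; all keys come from the option list above):

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "wireguard",
    "ListenPort": 51820,
    "Mode": "separate",
    "PersistentKeepaliveInterval": 25
  }
}
```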
- {
- "heading": "UDP",
- "data": "Use UDP only for debugging if your network and kernel prevent you from using VXLAN or host-gw.\n Type and options:\n * `Type` (string): `udp`\n * `Port` (number): UDP port to use for sending encapsulated packets. Defaults to 8285."
- },
- {
- "heading": "Experimental backends",
- "data": "The following options are experimental and unsupported at this time."
- },
- {
- "heading": "Alloc",
- "data": "Alloc performs subnet allocation with no forwarding of data packets.\n Type:\n * `Type` (string): `alloc`"
- },
- {
- "heading": "TencentCloud VPC",
- "data": "Use TencentCloud VPC to create IP routes in a [TencentCloud VPC route table](https://intl.cloud.tencent.com/product/vpc) when running in a TencentCloud VPC. This removes the need to create a separate flannel interface.\n Requirements:\n * Running on a CVM instance that is in a TencentCloud VPC.\n * Requires the `accessid` and `keysecret` permissions.\n * `Type` (string): `tencent-vpc`\n * `AccessKeyID` (string): API access key ID. Can also be configured with the environment variable ACCESS_KEY_ID.\n * `AccessKeySecret` (string): API access key secret. Can also be configured with the environment variable ACCESS_KEY_SECRET.\n Route Limits: TencentCloud VPC limits the number of entries per route table to 50.\n [tencentcloud-vpc]: https://github.com/flannel-io/flannel/blob/master/Documentation/tencentcloud-vpc-backend.md"
- },
- {
- "heading": "IPIP",
- "data": "Use in-kernel IPIP to encapsulate the packets.\n IPIP is the simplest kind of tunnel. It has the lowest overhead, but can encapsulate only IPv4 unicast traffic, so you will not be able to set up OSPF, RIP or any other multicast-based protocol.\n Type:\n * `Type` (string): `ipip`\n * `DirectRouting` (Boolean): Enable direct routes (like `host-gw`) when the hosts are on the same subnet. IPIP will only be used to encapsulate packets to hosts on different subnets. Defaults to `false`.\n Note that there may exist two ipip tunnel devices, `tunl0` and `flannel.ipip`; this is expected and not a bug.\n `tunl0` is created automatically per network namespace by the ipip kernel module when the module is loaded. It is the namespace default IPIP device with attributes local=any and remote=any.\n When receiving IPIP protocol packets, the kernel forwards them to `tunl0` as a fallback device if it can't find another device whose local/remote attributes match their src/dst ip address more precisely.\n `flannel.ipip` is created by flannel to achieve a one-to-many ipip network."
- },
- {
- "heading": "IPSec",
- "data": "Use in-kernel IPSec to encapsulate and encrypt the packets.\n [Strongswan](https://www.strongswan.org) is used as the IKEv2 daemon. A single pre-shared key is used for the initial key exchange between hosts and then Strongswan ensures that keys are rotated at regular intervals.\n Type:\n * `Type` (string): `ipsec`\n * `PSK` (string): Required. The pre-shared key to use. It needs to be at least 96 characters long. One method for generating this key is to run `dd if=/dev/urandom count=48 bs=1 status=none | xxd -p -c 48`\n * `UDPEncap` (Boolean): Optional, defaults to false. Forces the use of UDP encapsulation of packets, which can help with some NAT gateways.\n * `ESPProposal` (string): Optional, defaults to `aes128gcm16-sha256-prfsha256-ecp256`. Change this string to choose another ESP Proposal.\n Hint:\n Add rules to your firewall: allow ESP (IP protocol 50), UDP port 500 (for IKE, to manage encryption keys) and UDP port 4500 (for IPSEC NAT-Traversal mode)."
- },
- {
- "heading": "Troubleshooting",
- "data": "Logging:\n * When flannel is run from a container, the Strongswan tools are installed. `swanctl` can be used for interacting with the charon and it provides a logs command.\n * Charon logs are also written to the stdout of the flannel process.\n Troubleshooting:\n * `ip xfrm state` can be used to interact with the kernel's security association database. This can be used to show the current security associations (SA) and whether a host is successfully establishing ipsec connections to other hosts.\n * `ip xfrm policy` can be used to show the installed policies. Flannel installs three policies for each host it connects to. Flannel will not restore policies that are manually deleted (unless flannel is restarted). It will also not delete stale policies on startup. They can be removed by rebooting your host or by removing all ipsec state with `ip xfrm state flush && ip xfrm policy flush` and restarting flannel."
- },
- {
- "additional_info": "Flannel may be paired with several different backends. Once set, the backend should not be changed at runtime. VXLAN is the recommended choice. host-gw is recommended for more experienced users who want the performance improvement and whose infrastructure supports it (typically it can't be used in cloud environments). UDP is suggested for debugging only or for very old kernels that don't support VXLAN. If `firewalld` is enabled on the node, the port used by the backend needs to be opened with `firewall-cmd`: ``` firewall-cmd --permanent --zone=public --add-port=[port]/udp ``` For more information on configuration options for Tencent see [TencentCloud VPC Backend for Flannel][tencentcloud-vpc] Use in-kernel VXLAN to encapsulate the packets. Type and options: * `Type` (string): `vxlan` * `VNI` (number): VXLAN Identifier (VNI) to be used. On Linux, defaults to 1. On Windows should be greater than or equal to 4096. * `Port` (number): UDP port to use for sending encapsulated packets. On Linux, defaults to the kernel default, currently 8472, but on Windows, must be 4789. * `GBP` (Boolean): Enable [VXLAN Group Based Policy](https://github.com/torvalds/linux/commit/3511494ce2f3d3b77544c79b87511a4ddb61dc89). Defaults to `false`. GBP is not supported on Windows. * `DirectRouting` (Boolean): Enable direct routes (like `host-gw`) when the hosts are on the same subnet. VXLAN will only be used to encapsulate packets to hosts on different subnets. Defaults to `false`. DirectRouting is not supported on Windows. * `MTU` (number): Desired MTU for the outgoing packets. If not defined, the MTU of the external interface is used. * `MacPrefix` (String): Windows only; sets the MAC prefix. Defaults to `0E-2A`. Starting with Ubuntu 21.10, vxlan support on Raspberry Pi has been moved into a separate kernel module. ``` sudo apt install linux-modules-extra-raspi ``` Use host-gw to create IP routes to subnets via remote machine IPs. 
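For instance, the `firewall-cmd` template mentioned above expands as follows for the default Linux VXLAN port (a sketch; 8472 is the current kernel default noted above — substitute 8285 for the `udp` backend or 51820 for `wireguard`):

```shell
# Build the firewall-cmd invocation for the VXLAN backend's default Linux port.
# Adjust `port` for the backend actually in use.
port=8472
cmd="firewall-cmd --permanent --zone=public --add-port=${port}/udp"
echo "$cmd"
```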
Requires direct layer2 connectivity between hosts running flannel. host-gw provides good performance, with few dependencies, and easy setup. Type: * `Type` (string): `host-gw` Use in-kernel [WireGuard](https://www.wireguard.com) to encapsulate and encrypt the packets. Type: * `Type` (string): `wireguard` * `PSK` (string): Optional. The pre-shared key to use. Use `wg genpsk` to generate a key. * `ListenPort` (int): Optional. The UDP port to listen on. Default is `51820`. * `ListenPortV6` (int): Optional. The UDP port to listen on for ipv6. Default is `51821`. * `MTU` (number): Desired MTU for the outgoing packets. If not defined, the MTU of the external interface is used. * `Mode` (string): Optional. * separate - Use separate wireguard tunnels for ipv4 and ipv6 (default) * auto - Single wireguard tunnel for both address families; autodetermine the preferred peer address * ipv4 - Single wireguard tunnel for both address families; use ipv4 for the peer addresses * ipv6 - Single wireguard tunnel for both address families; use ipv6 for the peer addresses * `PersistentKeepaliveInterval` (int): Optional. Default is 0 (disabled). If no private key was generated beforehand, the private key is written to `/run/flannel/wgkey`. You can use the environment variable `WIREGUARD_KEY_FILE` to change this path. The static names of the interfaces are `flannel-wg` and `flannel-wg-v6`. WireGuard tools like `wg show` can be used to debug interfaces and peers. Users of kernels < 5.6 need to [install](https://www.wireguard.com/install/) an additional WireGuard package. Use UDP only for debugging if your network and kernel prevent you from using VXLAN or host-gw. Type and options: * `Type` (string): `udp` * `Port` (number): UDP port to use for sending encapsulated packets. Defaults to 8285. The following options are experimental and unsupported at this time. Alloc performs subnet allocation with no forwarding of data packets. 
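The `wg genpsk` command mentioned above emits a base64-encoded 32-byte key. Where the wireguard tools are not yet installed, an equivalent key can be sketched with standard utilities (an assumption-laden approximation, not the canonical command; assumes `base64` and `/dev/urandom` are available):

```shell
# Generate a WireGuard-compatible pre-shared key: 32 random bytes, base64-encoded.
# This mirrors the shape of `wg genpsk` output (44 base64 characters).
psk="$(head -c 32 /dev/urandom | base64)"
echo "${#psk}"   # 44
```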
Type: * `Type` (string): `alloc` Use TencentCloud VPC to create IP routes in a [TencentCloud VPC route table](https://intl.cloud.tencent.com/product/vpc) when running in a TencentCloud VPC. This removes the need to create a separate flannel interface. Requirements: * Running on a CVM instance that is in a TencentCloud VPC. * Requires the `accessid` and `keysecret` permissions. * `Type` (string): `tencent-vpc` * `AccessKeyID` (string): API access key ID. Can also be configured with the environment variable ACCESS_KEY_ID. * `AccessKeySecret` (string): API access key secret. Can also be configured with the environment variable ACCESS_KEY_SECRET. Route Limits: TencentCloud VPC limits the number of entries per route table to 50. [tencentcloud-vpc]: https://github.com/flannel-io/flannel/blob/master/Documentation/tencentcloud-vpc-backend.md Use in-kernel IPIP to encapsulate the packets. IPIP is the simplest kind of tunnel. It has the lowest overhead, but can encapsulate only IPv4 unicast traffic, so you will not be able to set up OSPF, RIP or any other multicast-based protocol. Type: * `Type` (string): `ipip` * `DirectRouting` (Boolean): Enable direct routes (like `host-gw`) when the hosts are on the same subnet. IPIP will only be used to encapsulate packets to hosts on different subnets. Defaults to `false`. Note that there may exist two ipip tunnel devices, `tunl0` and `flannel.ipip`; this is expected and not a bug. `tunl0` is created automatically per network namespace by the ipip kernel module when the module is loaded. It is the namespace default IPIP device with attributes local=any and remote=any. When receiving IPIP protocol packets, the kernel forwards them to `tunl0` as a fallback device if it can't find another device whose local/remote attributes match their src/dst ip address more precisely. `flannel.ipip` is created by flannel to achieve a one-to-many ipip network. Use in-kernel IPSec to encapsulate and encrypt the packets. 
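The IPSec `PSK` described below must be at least 96 characters long. The documented `dd | xxd` pipeline can be approximated with POSIX `od` where `xxd` is unavailable (an equivalent sketch under that assumption, not the canonical command):

```shell
# 48 random bytes rendered as hex yields the required 96-character PSK.
psk="$(head -c 48 /dev/urandom | od -An -tx1 | tr -d ' \n')"
echo "${#psk}"   # 96
```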
[Strongswan](https://www.strongswan.org) is used as the IKEv2 daemon. A single pre-shared key is used for the initial key exchange between hosts and then Strongswan ensures that keys are rotated at regular intervals. Type: * `Type` (string): `ipsec` * `PSK` (string): Required. The pre-shared key to use. It needs to be at least 96 characters long. One method for generating this key is to run `dd if=/dev/urandom count=48 bs=1 status=none | xxd -p -c 48` * `UDPEncap` (Boolean): Optional, defaults to false. Forces the use of UDP encapsulation of packets, which can help with some NAT gateways. * `ESPProposal` (string): Optional, defaults to `aes128gcm16-sha256-prfsha256-ecp256`. Change this string to choose another ESP Proposal. Hint: Add rules to your firewall: allow ESP (IP protocol 50), UDP port 500 (for IKE, to manage encryption keys) and UDP port 4500 (for IPSEC NAT-Traversal mode). Logging * When flannel is run from a container, the Strongswan tools are installed. `swanctl` can be used for interacting with the charon and it provides a logs command. * Charon logs are also written to the stdout of the flannel process. * `ip xfrm state` can be used to interact with the kernel's security association database. This can be used to show the current security associations (SA) and whether a host is successfully establishing ipsec connections to other hosts. * `ip xfrm policy` can be used to show the installed policies. Flannel installs three policies for each host it connects to. Flannel will not restore policies that are manually deleted (unless flannel is restarted). It will also not delete stale policies on startup. They can be removed by rebooting your host or by removing all ipsec state with `ip xfrm state flush && ip xfrm policy flush` and restarting flannel."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "building.md"
- },
- "content": [
- {
- "heading": "Building flannel",
- "data": "The most reliable way to build flannel is by using Docker."
- },
- {
- "heading": "Building in a Docker container",
- "data": "To build flannel in a container run `make dist/flanneld-amd64`.\n You will now have a `flanneld-amd64` binary in the `dist` directory."
- },
- {
- "heading": "Building for other platforms",
- "data": "If you're not running `amd64` then you need to manually set `ARCH` before running `make`. For example, to produce a\n `flanneld-s390x` binary and image, run\n * ARCH=s390x make image\n If you want to cross-compile for a different platform (e.g. you're running `amd64` but you want to produce `arm` binaries) then you need the qemu-static binaries to be present in `/usr/bin`. They can be installed on Ubuntu with\n * `sudo apt-get install qemu-user-static`\n Then you should be able to set the ARCH as above\n * ARCH=arm make image"
- },
- {
- "heading": "Building a multi-arch image",
- "data": "To build the multi-arch image of flannel locally, you need to install [Docker buildx](https://github.com/docker/buildx).\n Then you can use the `make build-multi-arch` target.\n If you don't already have a builder running locally, you can use the `make buildx-create-builder` target to start it.\n See the [buildx documentation](https://docs.docker.com/reference/cli/docker/buildx/) for more details."
- },
- {
- "heading": "Running the tests locally",
- "data": "To run the end-to-end tests locally, you need to install [Docker compose](https://docs.docker.com/compose/install/)."
- },
- {
- "heading": "Building manually",
- "data": "1. Make sure you have the required dependencies installed on your machine. * On Ubuntu, run `sudo apt-get install linux-libc-dev golang gcc`. If the installed golang version is not 1.7 or higher, download the newest golang and install it manually. To build flannel.exe on Windows, mingw-w64 is also needed: run `sudo apt-get install mingw-w64`. * On Fedora/Redhat, run `sudo yum install kernel-headers golang gcc glibc-static`. 2. Git clone the flannel repo. It MUST be placed in your GOPATH under `github.com/flannel-io/flannel`: `cd $GOPATH/src; git clone https://github.com/flannel-io/flannel.git` 3. Run the build script, ensuring that `CGO_ENABLED=1`: `cd flannel; CGO_ENABLED=1 make dist/flanneld` for Linux usage. Run the build script, ensuring that `CGO_ENABLED=1`: `cd flannel; CGO_ENABLED=1 make dist/flanneld.exe` for Windows usage."
- },
- {
- "additional_info": "The most reliable way to build flannel is by using Docker. To build flannel in a container run `make dist/flanneld-amd64`. You will now have a `flanneld-amd64` binary in the `dist` directory. If you're not running `amd64` then you need to manually set `ARCH` before running `make`. For example, to produce a `flanneld-s390x` binary and image, run * ARCH=s390x make image If you want to cross-compile for a different platform (e.g. you're running `amd64` but you want to produce `arm` binaries) then you need the qemu-static binaries to be present in `/usr/bin`. They can be installed on Ubuntu with * `sudo apt-get install qemu-user-static` Then you should be able to set the ARCH as above * ARCH=arm make image To build the multi-arch image of flannel locally, you need to install [Docker buildx](https://github.com/docker/buildx). Then you can use the following target: ``` make build-multi-arch ``` If you don't already have a builder running locally, you can use this target to start it: ``` make buildx-create-builder ``` See the [buildx documentation](https://docs.docker.com/reference/cli/docker/buildx/) for more details. To run the end-to-end tests locally, you need to install [Docker compose](https://docs.docker.com/compose/install/). 1. Make sure you have the required dependencies installed on your machine. * On Ubuntu, run `sudo apt-get install linux-libc-dev golang gcc`. If the installed golang version is not 1.7 or higher, download the newest golang and install it manually. To build flannel.exe on Windows, mingw-w64 is also needed: run `sudo apt-get install mingw-w64` * On Fedora/Redhat, run `sudo yum install kernel-headers golang gcc glibc-static`. 2. Git clone the flannel repo. It MUST be placed in your GOPATH under `github.com/flannel-io/flannel`: `cd $GOPATH/src; git clone https://github.com/flannel-io/flannel.git` 3. Run the build script, ensuring that `CGO_ENABLED=1`: `cd flannel; CGO_ENABLED=1 make dist/flanneld` for Linux usage. 
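The GOPATH layout required by step 2 can be sketched as follows (a throwaway `GOPATH` under `/tmp` is used purely for illustration):

```shell
# The clone MUST live at $GOPATH/src/github.com/flannel-io/flannel.
GOPATH=/tmp/gopath-demo          # illustrative throwaway path
repo="$GOPATH/src/github.com/flannel-io/flannel"
mkdir -p "$(dirname "$repo")"    # create the parent directories for the clone
echo "$repo"                     # where `git clone` must place the repo
```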
Run the build script, ensuring that `CGO_ENABLED=1`: `cd flannel; CGO_ENABLED=1 make dist/flanneld.exe` for windows usage."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "code-of-conduct.md"
- },
- "content": [
- {
- "heading": "CoreOS Community Code of Conduct",
- "data": ""
- },
- {
- "heading": "Contributor Code of Conduct",
- "data": "As contributors and maintainers of this project, and in the interest of\n fostering an open and welcoming community, we pledge to respect all people who\n contribute through reporting issues, posting feature requests, updating\n documentation, submitting pull requests or patches, and other activities.\n We are committed to making participation in this project a harassment-free\n experience for everyone, regardless of level of experience, gender, gender\n identity and expression, sexual orientation, disability, personal appearance,\n body size, race, ethnicity, age, religion, or nationality.\n Examples of unacceptable behavior by participants include:\n * The use of sexualized language or imagery\n * Personal attacks\n * Trolling or insulting/derogatory comments\n * Public or private harassment\n * Publishing others' private information, such as physical or electronic addresses, without explicit permission\n * Other unethical or unprofessional conduct.\n Project maintainers have the right and responsibility to remove, edit, or\n reject comments, commits, code, wiki edits, issues, and other contributions\n that are not aligned to this Code of Conduct. By adopting this Code of Conduct,\n project maintainers commit themselves to fairly and consistently applying these\n principles to every aspect of managing this project. Project maintainers who do\n not follow or enforce the Code of Conduct may be permanently removed from the\n project team.\n This code of conduct applies both within project spaces and in public spaces\n when an individual is representing the project or its community.\n Instances of abusive, harassing, or otherwise unacceptable behavior may be\n reported by contacting a project maintainer, Brandon Philips\n , and/or Rithu John .\n This Code of Conduct is adapted from the Contributor Covenant\n (http://contributor-covenant.org), version 1.2.0, available at\n http://contributor-covenant.org/version/1/2/0/"
- },
- {
- "heading": "CoreOS Events Code of Conduct",
- "data": "CoreOS events are working conferences intended for professional networking and collaboration in the CoreOS community. Attendees are expected to behave according to professional standards and in accordance with their employer\u2019s policies on appropriate workplace behavior. While at CoreOS events or related social networking opportunities, attendees should not engage in discriminatory or offensive speech or actions including but not limited to gender, sexuality, race, age, disability, or religion. Speakers should be especially aware of these concerns. CoreOS does not condone any statements by speakers contrary to these standards. CoreOS reserves the right to deny entrance and/or eject from an event (without refund) any individual found to be engaging in discriminatory or offensive speech or actions. Please bring any concerns to the immediate attention of designated on-site staff, Brandon Philips , and/or Rithu John ."
- },
- {
- "additional_info": "As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, or nationality. Examples of unacceptable behavior by participants include: * The use of sexualized language or imagery * Personal attacks * Trolling or insulting/derogatory comments * Public or private harassment * Publishing others' private information, such as physical or electronic addresses, without explicit permission * Other unethical or unprofessional conduct. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers commit themselves to fairly and consistently applying these principles to every aspect of managing this project. Project maintainers who do not follow or enforce the Code of Conduct may be permanently removed from the project team. This code of conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a project maintainer, Brandon Philips , and/or Rithu John . 
This Code of Conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 1.2.0, available at http://contributor-covenant.org/version/1/2/0/ CoreOS events are working conferences intended for professional networking and collaboration in the CoreOS community. Attendees are expected to behave according to professional standards and in accordance with their employer\u2019s policies on appropriate workplace behavior. While at CoreOS events or related social networking opportunities, attendees should not engage in discriminatory or offensive speech or actions including but not limited to gender, sexuality, race, age, disability, or religion. Speakers should be especially aware of these concerns. CoreOS does not condone any statements by speakers contrary to these standards. CoreOS reserves the right to deny entrance and/or eject from an event (without refund) any individual found to be engaging in discriminatory or offensive speech or actions. Please bring any concerns to the immediate attention of designated on-site staff, Brandon Philips , and/or Rithu John ."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "configuration.md"
- },
- "content": [
- {
- "heading": "Configuration",
- "data": "If the --kube-subnet-mgr argument is true, flannel reads its configuration from `/etc/kube-flannel/net-conf.json`.\n If the --kube-subnet-mgr argument is false, flannel reads its configuration from etcd.\n By default, it will read the configuration from `/coreos.com/network/config` (which can be overridden using `--etcd-prefix`).\n Use the `etcdctl` utility to set values in etcd.\n The value of the config is a JSON dictionary with the following keys:\n * `Network` (string): IPv4 network in CIDR format to use for the entire flannel network. (Mandatory if EnableIPv4 is true)\n * `IPv6Network` (string): IPv6 network in CIDR format to use for the entire flannel network. (Mandatory if EnableIPv6 is true)\n * `EnableIPv4` (bool): Enables ipv4 support.\n Defaults to `true`\n * `EnableIPv6` (bool): Enables ipv6 support.\n Defaults to `false`\n * `EnableNFTables` (bool): (EXPERIMENTAL) If set to true, flannel uses nftables instead of iptables to masquerade the traffic.\n Defaults to `false`\n * `SubnetLen` (integer): The size of the subnet allocated to each host.\n Defaults to 24 (i.e. /24) unless `Network` was configured to be smaller than a /22 in which case it is two less than the network.\n * `SubnetMin` (string): The beginning of the IP range which the subnet allocation should start with.\n Defaults to the second subnet of `Network`.\n * `SubnetMax` (string): The end of the IP range at which the subnet allocation should end.\n Defaults to the last subnet of `Network`.\n * `IPv6SubnetLen` (integer): The size of the ipv6 subnet allocated to each host.\n Defaults to 64 (i.e. /64) unless `Ipv6Network` was configured to be smaller than a /62 in which case it is two less than the network.\n * `IPv6SubnetMin` (string): The beginning of the IPv6 range which the subnet allocation should start with.\n Defaults to the second subnet of `Ipv6Network`.\n * `IPv6SubnetMax` (string): The end of the IPv6 range at which the subnet allocation should end.\n Defaults to the last subnet of `Ipv6Network`.\n * `Backend` (dictionary): Type of backend to use and specific configurations for that backend.\n The list of available backends and the keys that can be put into this dictionary are listed in [Backends](backends.md).\n Defaults to the `vxlan` backend.\n Subnet leases have a duration of 24 hours. Leases are renewed within 1 hour of their expiration,\n unless a different renewal margin is set with the ``--subnet-lease-renew-margin`` option."
- },
- {
- "heading": "Example configuration JSON",
- "data": "The following configuration illustrates the use of most options with `udp` backend."
- },
- {
- "heading": "Key command line options",
- "data": "MTU is calculated and set automatically by flannel. It then reports that value in `subnet.env`. This value can be changed as [backend](backends.md) config."
- },
- {
- "heading": "Environment variables",
- "data": "The command line options outlined above can also be specified via environment variables.\n For example `--etcd-endpoints=http://10.0.0.2:2379` is equivalent to the `FLANNELD_ETCD_ENDPOINTS=http://10.0.0.2:2379` environment variable.\n Any command line option can be turned into an environment variable by prefixing it with `FLANNELD_`, stripping leading dashes, converting to uppercase and replacing all other dashes with underscores.\n `EVENT_QUEUE_DEPTH` is another environment variable that indicates the kubernetes scale. Set `EVENT_QUEUE_DEPTH` to match your cluster's node count. If not set, the default value is 5000."
- },
- {
- "heading": "Health Check",
- "data": "Flannel provides a health check HTTP endpoint `healthz`. Currently this endpoint will blindly\n return HTTP status OK (i.e. 200) when flannel is running. This feature is disabled by default.\n Setting `healthz-port` to a non-zero value will enable a healthz server for flannel."
- },
- {
- "heading": "Dual-stack",
- "data": "Flannel supports dual-stack mode. This means pods and services can use ipv4 and ipv6 at the same time. Currently, dual-stack is only supported for the vxlan, wireguard or host-gw (linux) backends.\n Requirements:\n * v1.0.1 of the flannel binary from [containernetworking/plugins](https://github.com/containernetworking/plugins)\n * Nodes must have an ipv4 and ipv6 address on the main interface\n * Nodes must have an ipv4 and ipv6 default route\n * vxlan support for ipv6 tunnels requires kernel version >= 3.12\n Configuration:\n * Set \"EnableIPv6\": true and the \"IPv6Network\", for example \"IPv6Network\": \"2001:cafe:42:0::/56\", in the net-conf.json of the kube-flannel-cfg ConfigMap or in `/coreos.com/network/config` for etcd\n If everything works as expected, flanneld should generate a `/run/flannel/subnet.env` file with the IPV6 subnet and network. For example:"
- },
- {
- "heading": "IPv6 only",
- "data": "To use an IPv6-only environment, use the same configuration as the Dual-stack section to enable IPv6 and add \"EnableIPv4\": false in the net-conf.json of the kube-flannel-cfg ConfigMap. In case of an IPv6-only setup, please use the docker.io IPv6-only endpoint as described in the following link: https://www.docker.com/blog/beta-ipv6-support-on-docker-hub-registry/"
- },
- {
- "heading": "nftables mode",
- "data": "To enable `nftables` mode in flannel, set `EnableNFTables` to true in flannel configuration. Note: to test with kube-proxy, use kubeadm with the following configuration:"
- },
- {
- "additional_info": "If the --kube-subnet-mgr argument is true, flannel reads its configuration from `/etc/kube-flannel/net-conf.json`. If the --kube-subnet-mgr argument is false, flannel reads its configuration from etcd. By default, it will read the configuration from `/coreos.com/network/config` (which can be overridden using `--etcd-prefix`). Use the `etcdctl` utility to set values in etcd. The value of the config is a JSON dictionary with the following keys: * `Network` (string): IPv4 network in CIDR format to use for the entire flannel network. (Mandatory if EnableIPv4 is true) * `IPv6Network` (string): IPv6 network in CIDR format to use for the entire flannel network. (Mandatory if EnableIPv6 is true) * `EnableIPv4` (bool): Enables ipv4 support. Defaults to `true` * `EnableIPv6` (bool): Enables ipv6 support. Defaults to `false` * `EnableNFTables` (bool): (EXPERIMENTAL) If set to true, flannel uses nftables instead of iptables to masquerade the traffic. Defaults to `false` * `SubnetLen` (integer): The size of the subnet allocated to each host. Defaults to 24 (i.e. /24) unless `Network` was configured to be smaller than a /22 in which case it is two less than the network. * `SubnetMin` (string): The beginning of the IP range which the subnet allocation should start with. Defaults to the second subnet of `Network`. * `SubnetMax` (string): The end of the IP range at which the subnet allocation should end. Defaults to the last subnet of `Network`. * `IPv6SubnetLen` (integer): The size of the ipv6 subnet allocated to each host. Defaults to 64 (i.e. /64) unless `Ipv6Network` was configured to be smaller than a /62 in which case it is two less than the network. * `IPv6SubnetMin` (string): The beginning of the IPv6 range which the subnet allocation should start with. Defaults to the second subnet of `Ipv6Network`. * `IPv6SubnetMax` (string): The end of the IPv6 range at which the subnet allocation should end. Defaults to the last subnet of `Ipv6Network`. 
* `Backend` (dictionary): Type of backend to use and specific configurations for that backend. The list of available backends and the keys that can be put into this dictionary are listed in [Backends](backends.md). Defaults to the `vxlan` backend. Subnet leases have a duration of 24 hours. Leases are renewed within 1 hour of their expiration, unless a different renewal margin is set with the ``--subnet-lease-renew-margin`` option. The following configuration illustrates the use of most options with the `udp` backend. ```json { \"Network\": \"10.0.0.0/8\", \"SubnetLen\": 20, \"SubnetMin\": \"10.10.0.0\", \"SubnetMax\": \"10.99.0.0\", \"Backend\": { \"Type\": \"udp\", \"Port\": 7890 } } ``` ```bash --public-ip=\"\": IP accessible by other nodes for inter-host communication. Defaults to the IP of the interface being used for communication. --etcd-endpoints=http://127.0.0.1:4001: a comma-delimited list of etcd endpoints. --etcd-prefix=/coreos.com/network: etcd prefix. --etcd-keyfile=\"\": SSL key file used to secure etcd communication. --etcd-certfile=\"\": SSL certification file used to secure etcd communication. --etcd-cafile=\"\": SSL Certificate Authority file used to secure etcd communication. --kube-subnet-mgr: Contact the Kubernetes API for subnet assignment instead of etcd. --iface=\"\": interface to use (IP or name) for inter-host communication. Defaults to the interface for the default route on the machine. This can be specified multiple times to check each option in order. Returns the first match found. --iface-regex=\"\": regex expression to match the first interface to use (IP or name) for inter-host communication. If unspecified, will default to the interface for the default route on the machine. This can be specified multiple times to check each regex in order. Returns the first match found. This option is superseded by the iface option and will only be used if nothing matches any option specified in the iface options. 
--iface-can-reach=\"\": detect the interface to use (IP or name) for inter-host communication based on which interface would be used to reach the provided IP. This is exactly the interface chosen by the command \"ip route get \" (example: --iface-can-reach=192.168.1.1 selects the interface that can reach 192.168.1.1) --iptables-resync=5: resync period for iptables rules, in seconds. Defaults to 5 seconds; if you see a large amount of contention for the iptables lock, increasing this will probably help. --subnet-file=/run/flannel/subnet.env: filename where env variables (subnet and MTU values) will be written to. --net-config-path=/etc/kube-flannel/net-conf.json: path to the network configuration file to use --subnet-lease-renew-margin=60: subnet lease renewal margin, in minutes. --ip-masq=false: setup IP masquerade for traffic destined for outside the flannel network. Flannel assumes that the default policy is ACCEPT in the NAT POSTROUTING chain. -v=0: log level for V logs. Set to 1 to see messages related to data path. --healthz-ip=\"0.0.0.0\": The IP address for healthz server to listen (default \"0.0.0.0\") --healthz-port=0: The port for healthz server to listen (0 to disable) --version: print version and exit ``` MTU is calculated and set automatically by flannel. It then reports that value in `subnet.env`. This value can be changed as [backend](backends.md) config. The command line options outlined above can also be specified via environment variables. For example `--etcd-endpoints=http://10.0.0.2:2379` is equivalent to the `FLANNELD_ETCD_ENDPOINTS=http://10.0.0.2:2379` environment variable. Any command line option can be turned into an environment variable by prefixing it with `FLANNELD_`, stripping leading dashes, converting to uppercase and replacing all other dashes with underscores. `EVENT_QUEUE_DEPTH` is another environment variable that indicates the kubernetes scale. Set `EVENT_QUEUE_DEPTH` to match your cluster's node count. If not set, the default value is 5000. 
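The flag-to-environment-variable rule described above can be sketched mechanically (`--etcd-endpoints` is the example from the text):

```shell
# FLANNELD_ prefix + flag name without leading dashes, uppercased, dashes -> underscores.
flag="--etcd-endpoints"
name="FLANNELD_$(printf '%s' "${flag#--}" | tr 'a-z-' 'A-Z_')"
echo "$name"   # FLANNELD_ETCD_ENDPOINTS
```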
Flannel provides a health check HTTP endpoint `healthz`. Currently this endpoint will blindly return HTTP status OK (i.e. 200) when flannel is running. This feature is disabled by default. Setting `healthz-port` to a non-zero value will enable a healthz server for flannel. Flannel supports dual-stack mode. This means pods and services can use IPv4 and IPv6 at the same time. Currently, dual-stack is only supported for the vxlan, wireguard or host-gw (linux) backends. Requirements: * v1.0.1 of the flannel binary from [containernetworking/plugins](https://github.com/containernetworking/plugins) * Nodes must have an IPv4 and IPv6 address on the main interface * Nodes must have an IPv4 and IPv6 default route * vxlan support for IPv6 tunnels requires kernel version >= 3.12 Configuration: * Set \"EnableIPv6\": true and the \"IPv6Network\", for example \"IPv6Network\": \"2001:cafe:42:0::/56\", in the net-conf.json of the kube-flannel-cfg ConfigMap or in `/coreos.com/network/config` for etcd If everything works as expected, flanneld should generate a `/run/flannel/subnet.env` file with the IPv6 subnet and network. For example: ```bash FLANNEL_NETWORK=10.42.0.0/16 FLANNEL_SUBNET=10.42.0.1/24 FLANNEL_IPV6_NETWORK=2001:cafe:42::/56 FLANNEL_IPV6_SUBNET=2001:cafe:42::1/64 FLANNEL_MTU=1450 FLANNEL_IPMASQ=true ``` To use an IPv6-only environment, use the same configuration as the dual-stack section to enable IPv6 and add \"EnableIPv4\": false in the net-conf.json of the kube-flannel-cfg ConfigMap. In an IPv6-only setup, please use the docker.io IPv6-only endpoint as described in the following link: https://www.docker.com/blog/beta-ipv6-support-on-docker-hub-registry/ To enable `nftables` mode in flannel, set `EnableNFTables` to true in the flannel configuration. 
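Because `subnet.env` is a plain KEY=VALUE file, scripts can source it directly to pick up the node's subnet and MTU. A small sketch (the file path and sample values below mirror the example above; a real script would source `/run/flannel/subnet.env`):

```shell
# Write a sample subnet.env and source it, as a consumer script would.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.42.0.0/16
FLANNEL_SUBNET=10.42.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
. /tmp/subnet.env
echo "node subnet=$FLANNEL_SUBNET mtu=$FLANNEL_MTU"
```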
Note: to test with kube-proxy, use kubeadm with the following configuration: ```yaml apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration kubernetesVersion: v1.29.0 controllerManager: extraArgs: feature-gates: NFTablesProxyMode=true --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration mode: \"nftables\" featureGates: NFTablesProxyMode: true ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "CONTRIBUTING.md"
- },
- "content": [
- {
- "heading": "How to Contribute",
- "data": "CoreOS projects are [Apache 2.0 licensed](LICENSE) and accept contributions via\n GitHub pull requests. This document outlines some of the conventions on\n development workflow, commit message formatting, contact points and other\n resources to make it easier to get your contribution accepted."
- },
- {
- "heading": "Certificate of Origin",
- "data": "By contributing to this project you agree to the Developer Certificate of\n Origin (DCO). This document was created by the Linux Kernel community and is a\n simple statement that you, as a contributor, have the legal right to make the\n contribution. See the [DCO](DCO) file for details."
- },
- {
- "heading": "Getting Started",
- "data": "- Fork the repository on GitHub\n - Read the [README](README.md) for build and test instructions\n - Play with the project, submit bugs, submit patches!"
- },
- {
- "heading": "Contribution Flow",
- "data": "This is a rough outline of what a contributor's workflow looks like:\n - Create a topic branch from where you want to base your work (usually master).\n - Make commits of logical units.\n - Make sure your commit messages are in the proper format (see below).\n - Push your changes to a topic branch in your fork of the repository.\n - Make sure the tests pass, and add any new tests as appropriate.\n - Submit a pull request to the original repository.\n Thanks for your contributions!"
- },
- {
- "heading": "Format of the Commit Message",
- "data": "We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why. The format can be described more formally as follows: The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools."
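The 70-character subject-line rule above is easy to check mechanically; a minimal shell sketch, using the example subject from this guide:

```shell
# Verify a commit subject line stays within the 70-character limit.
subject="scripts: add the test-cluster command"
if [ "${#subject}" -le 70 ]; then
  echo "subject length OK"
else
  echo "subject too long"
fi
```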
- },
- {
- "additional_info": "CoreOS projects are [Apache 2.0 licensed](LICENSE) and accept contributions via GitHub pull requests. This document outlines some of the conventions on development workflow, commit message formatting, contact points and other resources to make it easier to get your contribution accepted. By contributing to this project you agree to the Developer Certificate of Origin (DCO). This document was created by the Linux Kernel community and is a simple statement that you, as a contributor, have the legal right to make the contribution. See the [DCO](DCO) file for details. - Fork the repository on GitHub - Read the [README](README.md) for build and test instructions - Play with the project, submit bugs, submit patches! This is a rough outline of what a contributor's workflow looks like: - Create a topic branch from where you want to base your work (usually master). - Make commits of logical units. - Make sure your commit messages are in the proper format (see below). - Push your changes to a topic branch in your fork of the repository. - Make sure the tests pass, and add any new tests as appropriate. - Submit a pull request to the original repository. Thanks for your contributions! We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why. ``` scripts: add the test-cluster command this uses tmux to setup a test cluster that you can easily kill and start for debugging. Fixes #38 ``` The format can be described more formally as follows: ``` : ``` The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "extension.md"
- },
- "content": [
- {
- "heading": "Extension",
- "data": "The `extension` backend provides an easy way for prototyping new backend types for flannel.\n It is _not_ recommended for production use, for example it doesn't have a built in retry mechanism.\n This backend has the following configuration\n * `Type` (string): `extension`\n * `PreStartupCommand` (string): Command to run before allocating a network to this host\n * The stdout of the process is captured and passed to the stdin of the SubnetAdd/Remove commands.\n * `PostStartupCommand` (string): Command to run after allocating a network to this host\n * The following environment variable is set\n * SUBNET - The subnet of the remote host that was added.\n * `SubnetAddCommand` (string): Command to run when a subnet is added\n * stdin - The output from `PreStartupCommand` is passed in.\n * The following environment variables are set\n * SUBNET - The ipv4 subnet of the remote host that was added.\n * IPV6SUBNET - The ipv6 subnet of the remote host that was added.\n * PUBLIC_IP - The public IP of the remote host.\n * PUBLIC_IPV6 - The public IPv6 of the remote host.\n * `SubnetRemoveCommand`(string): Command to run when a subnet is removed\n * stdin - The output from `PreStartupCommand` is passed in.\n * The following environment variables are set\n * SUBNET - The ipv4 subnet of the remote host that was removed.\n * IPV6SUBNET - The ipv6 subnet of the remote host that was removed.\n * PUBLIC_IP - The public IP of the remote host.\n * PUBLIC_IPV6 - The public IPv6 of the remote host.\n All commands are run through the `sh` shell and are run with the same permissions as the flannel daemon."
- },
- {
- "heading": "Simple example (host-gw)",
- "data": "To replicate the functionality of the host-gw plugin, there's no need for a startup command.\n The backend just needs to manage the route to subnets when they are added or removed.\n An example"
- },
- {
- "heading": "Complex example (vxlan)",
- "data": "VXLAN is more complex. It needs to store the MAC address of the vxlan device when it's created and to make it available to the flannel daemon running on other hosts. The address of the vxlan device also needs to be set _after_ the subnet has been allocated. An example"
- },
- {
- "additional_info": "The `extension` backend provides an easy way for prototyping new backend types for flannel. It is _not_ recommended for production use, for example it doesn't have a built in retry mechanism. This backend has the following configuration * `Type` (string): `extension` * `PreStartupCommand` (string): Command to run before allocating a network to this host * The stdout of the process is captured and passed to the stdin of the SubnetAdd/Remove commands. * `PostStartupCommand` (string): Command to run after allocating a network to this host * The following environment variable is set * SUBNET - The subnet of the remote host that was added. * `SubnetAddCommand` (string): Command to run when a subnet is added * stdin - The output from `PreStartupCommand` is passed in. * The following environment variables are set * SUBNET - The ipv4 subnet of the remote host that was added. * IPV6SUBNET - The ipv6 subnet of the remote host that was added. * PUBLIC_IP - The public IP of the remote host. * PUBLIC_IPV6 - The public IPv6 of the remote host. * `SubnetRemoveCommand`(string): Command to run when a subnet is removed * stdin - The output from `PreStartupCommand` is passed in. * The following environment variables are set * SUBNET - The ipv4 subnet of the remote host that was removed. * IPV6SUBNET - The ipv6 subnet of the remote host that was removed. * PUBLIC_IP - The public IP of the remote host. * PUBLIC_IPV6 - The public IPv6 of the remote host. All commands are run through the `sh` shell and are run with the same permissions as the flannel daemon. To replicate the functionality of the host-gw plugin, there's no need for a startup command. The backend just needs to manage the route to subnets when they are added or removed. 
An example ```json { \"Network\": \"10.0.0.0/16\", \"Backend\": { \"Type\": \"extension\", \"SubnetAddCommand\": \"ip route add $SUBNET via $PUBLIC_IP\", \"SubnetRemoveCommand\": \"ip route del $SUBNET via $PUBLIC_IP\" } } ``` VXLAN is more complex. It needs to store the MAC address of the vxlan device when it's created and to make it available to the flannel daemon running on other hosts. The address of the vxlan device also needs to be set _after_ the subnet has been allocated. An example ```json { \"Network\": \"10.50.0.0/16\", \"Backend\": { \"Type\": \"extension\", \"PreStartupCommand\": \"export VNI=1; export IF_NAME=flannel-vxlan; ip link del $IF_NAME 2>/dev/null; ip link add $IF_NAME type vxlan id $VNI dstport 8472 && cat /sys/class/net/$IF_NAME/address\", \"PostStartupCommand\": \"export IF_NAME=flannel-vxlan; export SUBNET_IP=`echo $SUBNET | cut -d'/' -f 1`; ip addr add $SUBNET_IP/32 dev $IF_NAME && ip link set $IF_NAME up\", \"SubnetAddCommand\": \"export SUBNET_IP=`echo $SUBNET | cut -d'/' -f 1`; export IF_NAME=flannel-vxlan; read VTEP; ip route add $SUBNET nexthop via $SUBNET_IP dev $IF_NAME onlink && ip neigh replace $SUBNET_IP dev $IF_NAME lladdr $VTEP && bridge fdb add $VTEP dev $IF_NAME self dst $PUBLIC_IP\" } } ```"
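The key mechanism in the vxlan example above is that whatever `PreStartupCommand` prints (the device MAC) is delivered on `SubnetAddCommand`'s stdin, where `read VTEP` picks it up. That hand-off can be demonstrated in isolation (the MAC address is illustrative):

```shell
# PreStartupCommand's stdout becomes SubnetAddCommand's stdin;
# `read VTEP` captures it for use in the fdb/neigh commands.
echo "c6:d2:32:6f:8f:44" | sh -c 'read VTEP; echo "fdb entry for $VTEP"'
```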
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "integrations.md"
- },
- "content": [
- {
- "heading": "Integrations",
- "data": "This document tracks projects that integrate with flannel. [Join the community](https://github.com/flannel-io/flannel/) and help us keep the list current."
- },
- {
- "heading": "Projects",
- "data": "[Kubernetes](https://kubernetes.io/docs/admin/networking/#flannel): Container orchestration platform with options for [flannel as an overlay](https://kubernetes.io/docs/admin/networking/#flannel). [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel): Kubernetes CNI plugin that uses Calico for network policies and intra-node communications and Flannel for inter-node communications. [K3s](https://k3s.io/): Kubernetes distribution with flannel embedded as CNI. [RKE2](https://docs.rke2.io/): Kubernetes distribution packed with Canal as default CNI."
- },
- {
- "additional_info": "This document tracks projects that integrate with flannel. [Join the community](https://github.com/flannel-io/flannel/) and help us keep the list current. [Kubernetes](https://kubernetes.io/docs/admin/networking/#flannel): Container orchestration platform with options for [flannel as an overlay](https://kubernetes.io/docs/admin/networking/#flannel). [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel): Kubernetes CNI plugin that uses Calico for network policies and intra-node communications and Flannel for inter-node communications. [K3s](https://k3s.io/): Kubernetes distribution with flannel embedded as CNI. [RKE2](https://docs.rke2.io/): Kubernetes distribution packed with Canal as default CNI."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "ISSUE_TEMPLATE.md"
- },
- "content": [
- {
- "heading": "Expected Behavior",
- "data": "\n "
- },
- {
- "heading": "Current Behavior",
- "data": "\n "
- },
- {
- "heading": "Possible Solution",
- "data": "\n "
- },
- {
- "heading": "Steps to Reproduce (for bugs)",
- "data": "\n \n 1.\n 2.\n 3.\n 4."
- },
- {
- "heading": "Context",
- "data": "\n "
- },
- {
- "heading": "Your Environment",
- "data": " * Flannel version: * Backend used (e.g. vxlan or udp): * Etcd version: * Kubernetes version (if used): * Operating System and version: * Link to your project (optional):"
- },
- {
- "additional_info": " 1. 2. 3. 4. * Flannel version: * Backend used (e.g. vxlan or udp): * Etcd version: * Kubernetes version (if used): * Operating System and version: * Link to your project (optional):"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "kubernetes.md"
- },
- "content": [
- {
- "heading": "kubeadm",
- "data": "For information on deploying flannel manually, using the Kubernetes installer toolkit kubeadm, see [Installing Kubernetes on Linux with kubeadm][kubeadm].\n NOTE: If `kubeadm` is used, then pass `--pod-network-cidr=10.244.0.0/16` to `kubeadm init` to ensure that the `podCIDR` is set."
- },
- {
- "heading": "kube-flannel.yaml",
- "data": "The `flannel` manifest defines five things:\n 1. A `kube-flannel` with PodSecurity level set to *privileged*.\n 2. A ClusterRole and ClusterRoleBinding for Role Based Access Control (RBAC).\n 3. A service account for `flannel` to use.\n 4. A ConfigMap containing both a CNI configuration and a `flannel` configuration. The `network` in the `flannel` configuration should match the pod network CIDR. The choice of `backend` is also made here and defaults to VXLAN.\n 5. A DaemonSet for every architecture to deploy the `flannel` pod on each Node. The pod has two containers 1) the `flannel` daemon itself, and 2) an initContainer for deploying the CNI configuration to a location that the `kubelet` can read.\n When you run pods, they will be allocated IP addresses from the pod network CIDR. No matter which node those pods end up on, they will be able to communicate with each other."
- },
- {
- "heading": "Notes on securing flannel deployment",
- "data": "As of Kubernetes v1.21, the [PodSecurityPolicy API was deprecated](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) and it will be removed in v1.25. Thus, the `flannel` manifest does not use PodSecurityPolicy anymore.\n If you wish to use the [Pod Security Admission Controller](https://kubernetes.io/docs/concepts/security/pod-security-admission/) which was introduced to [replace PodSecurityPolicy](https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/), you will need to deploy `flannel` in a namespace which allows the deployment of pods with `privileged` level. The `baseline` level is insufficient to deploy `flannel` and you will see the following error message:\n The `kube-flannel.yaml` manifest deploys `flannel` in the `kube-flannel` namespace and enables the `privileged` level for this namespace.\n Thus, you will need to restrict access to this namespace if you wish to secure your cluster.\n If you want to deploy `flannel` securely in a shared namespace or want more fine-grained control over the pods deployed in your cluster, you can use a 3rd-party admission controller like [Kubewarden](https://kubewarden.io). Kubewarden provides policies that can replace features of PodSecurityPolicy like [capabilities-psp-policy](https://github.com/kubewarden/capabilities-psp-policy) and [hostpaths-psp-policy](https://github.com/kubewarden/hostpaths-psp-policy).\n Other options include [Kyverno](https://kyverno.io/policies/pod-security/) and [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper)."
- },
- {
- "heading": "Annotations",
- "data": "* `flannel.alpha.coreos.com/public-ip`, `flannel.alpha.coreos.com/public-ipv6`: Define the used public IP of the node. If configured when Flannel starts it'll be used as the `public-ip` and `public-ipv6` flag.\n * `flannel.alpha.coreos.com/public-ip-overwrite`, `flannel.alpha.coreos.com/public-ipv6-overwrite`: Allows to overwrite the public IP of a node. Useful if the public IP can not determined from the node, e.G. because it is behind a NAT. It can be automatically set to a nodes `ExternalIP` using the [flannel-node-annotator](https://github.com/alvaroaleman/flannel-node-annotator).\n See also the \"NAT\" section in [troubleshooting](./troubleshooting.md) if UDP checksums seem corrupted."
- },
- {
- "heading": "Older versions of Kubernetes",
- "data": "`kube-flannel.yaml` has some features that aren't compatible with older versions of Kubernetes, though flanneld itself should work with any version of Kubernetes."
- },
- {
- "heading": "For Kubernetes v1.6~v1.15",
- "data": "If you see errors saying `found invalid field...` when you try to apply `kube-flannel.yaml` then you can try the \"legacy\" manifest file\n * `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-old-manifests/kube-flannel-legacy.yml`\n This file does not bundle RBAC permissions. If you need those, run\n * `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-old-manifests/kube-flannel-rbac.yml`\n If you didn't apply the `kube-flannel-rbac.yml` manifest and you need to, you'll see errors in your flanneld logs about failing to connect.\n * `Failed to create SubnetManager: error retrieving pod spec...`"
- },
- {
- "heading": "For Kubernetes v1.16",
- "data": "`kube-flannel.yaml` uses `ClusterRole` & `ClusterRoleBinding` of `rbac.authorization.k8s.io/v1`. When you use Kubernetes v1.16, you should replace `rbac.authorization.k8s.io/v1` to `rbac.authorization.k8s.io/v1beta1` because `rbac.authorization.k8s.io/v1` had become GA from Kubernetes v1.17."
- },
- {
- "heading": "For Kubernetes <= v1.24",
- "data": "As of Kubernetes v1.21, the [PodSecurityPolicy API was deprecated](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) and it will be removed in v1.25. Thus, the `flannel` manifest does not use PodSecurityPolicy anymore.\n If you still wish to use it, you can use `kube-flannel-psp.yaml` instead of `kube-flannel.yaml`. Please note that if you use a Kubernetes version >= 1.21, you will see a deprecation warning for the PodSecurityPolicy API."
- },
- {
- "heading": "Troubleshooting",
- "data": "See [troubleshooting](troubleshooting.md) [kubeadm]: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/"
- },
- {
- "additional_info": "For information on deploying flannel manually, using the Kubernetes installer toolkit kubeadm, see [Installing Kubernetes on Linux with kubeadm][kubeadm]. NOTE: If `kubeadm` is used, then pass `--pod-network-cidr=10.244.0.0/16` to `kubeadm init` to ensure that the `podCIDR` is set. The `flannel` manifest defines five things: 1. A `kube-flannel` with PodSecurity level set to *privileged*. 2. A ClusterRole and ClusterRoleBinding for Role Based Access Control (RBAC). 3. A service account for `flannel` to use. 4. A ConfigMap containing both a CNI configuration and a `flannel` configuration. The `network` in the `flannel` configuration should match the pod network CIDR. The choice of `backend` is also made here and defaults to VXLAN. 5. A DaemonSet for every architecture to deploy the `flannel` pod on each Node. The pod has two containers 1) the `flannel` daemon itself, and 2) an initContainer for deploying the CNI configuration to a location that the `kubelet` can read. When you run pods, they will be allocated IP addresses from the pod network CIDR. No matter which node those pods end up on, they will be able to communicate with each other. As of Kubernetes v1.21, the [PodSecurityPolicy API was deprecated](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) and it will be removed in v1.25. Thus, the `flannel` manifest does not use PodSecurityPolicy anymore. If you wish to use the [Pod Security Admission Controller](https://kubernetes.io/docs/concepts/security/pod-security-admission/) which was introduced to [replace PodSecurityPolicy](https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/), you will need to deploy `flannel` in a namespace which allows the deployment of pods with `privileged` level. 
The `baseline` level is insufficient to deploy `flannel` and you will see the following error message: ``` Error creating: non-default capabilities (container \"kube-flannel\" must not include \"NET_ADMIN\", \"NET_RAW\" in securityContext.capabilities.add), host namespaces (hostNetwork=true), hostPath volumes (volumes \"run\", \"cni-plugin\", \"cni\", \"xtables-lock\") ``` The `kube-flannel.yaml` manifest deploys `flannel` in the `kube-flannel` namespace and enables the `privileged` level for this namespace. Thus, you will need to restrict access to this namespace if you wish to secure your cluster. If you want to deploy `flannel` securely in a shared namespace or want more fine-grained control over the pods deployed in your cluster, you can use a 3rd-party admission controller like [Kubewarden](https://kubewarden.io). Kubewarden provides policies that can replace features of PodSecurityPolicy like [capabilities-psp-policy](https://github.com/kubewarden/capabilities-psp-policy) and [hostpaths-psp-policy](https://github.com/kubewarden/hostpaths-psp-policy). Other options include [Kyverno](https://kyverno.io/policies/pod-security/) and [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper). * `flannel.alpha.coreos.com/public-ip`, `flannel.alpha.coreos.com/public-ipv6`: Define the used public IP of the node. If configured when Flannel starts it'll be used as the `public-ip` and `public-ipv6` flag. * `flannel.alpha.coreos.com/public-ip-overwrite`, `flannel.alpha.coreos.com/public-ipv6-overwrite`: Allows to overwrite the public IP of a node. Useful if the public IP can not determined from the node, e.G. because it is behind a NAT. It can be automatically set to a nodes `ExternalIP` using the [flannel-node-annotator](https://github.com/alvaroaleman/flannel-node-annotator). See also the \"NAT\" section in [troubleshooting](./troubleshooting.md) if UDP checksums seem corrupted. 
`kube-flannel.yaml` has some features that aren't compatible with older versions of Kubernetes, though flanneld itself should work with any version of Kubernetes. If you see errors saying `found invalid field...` when you try to apply `kube-flannel.yaml` then you can try the \"legacy\" manifest file * `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-old-manifests/kube-flannel-legacy.yml` This file does not bundle RBAC permissions. If you need those, run * `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-old-manifests/kube-flannel-rbac.yml` If you didn't apply the `kube-flannel-rbac.yml` manifest and you need to, you'll see errors in your flanneld logs about failing to connect. * `Failed to create SubnetManager: error retrieving pod spec...` `kube-flannel.yaml` uses `ClusterRole` & `ClusterRoleBinding` of `rbac.authorization.k8s.io/v1`. When you use Kubernetes v1.16, you should replace `rbac.authorization.k8s.io/v1` to `rbac.authorization.k8s.io/v1beta1` because `rbac.authorization.k8s.io/v1` had become GA from Kubernetes v1.17. As of Kubernetes v1.21, the [PodSecurityPolicy API was deprecated](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) and it will be removed in v1.25. Thus, the `flannel` manifest does not use PodSecurityPolicy anymore. If you still wish to use it, you can use `kube-flannel-psp.yaml` instead of `kube-flannel.yaml`. Please note that if you use a Kubernetes version >= 1.21, you will see a deprecation warning for the PodSecurityPolicy API. See [troubleshooting](troubleshooting.md) [kubeadm]: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "PULL_REQUEST_TEMPLATE.md"
- },
- "content": [
- {
- "heading": "Description",
- "data": ""
- },
- {
- "heading": "Todos",
- "data": "- [ ] Tests\n - [ ] Documentation\n - [ ] Release note"
- },
- {
- "heading": "Release Note",
- "data": ""
- },
- {
- "additional_info": " - [ ] Tests - [ ] Documentation - [ ] Release note ```release-note None required ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "README.md"
- },
- "content": [
- {
- "heading": "Install",
- "data": "sudo snap install flannel --classic ([Don't have snapd installed?](https://snapcraft.io/docs/core/install))"
- },
- {
- "additional_info": " Flannel This is the snap for Flannel, a network fabric for containers, designed for Kubernetes. It works on Ubuntu, Fedora, Debian, and other major Linux distributions.
Published for with \ud83d\udc9d by Snapcrafters
sudo snap install flannel --classic ([Don't have snapd installed?](https://snapcraft.io/docs/core/install))"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "reporting_bugs.md"
- },
- "content": [
- {
- "heading": "Reporting bugs",
- "data": "If any part of the flannel project has bugs or documentation mistakes, please let us know by [opening an issue][flannel-issue]. Before creating a bug report, please check that an issue reporting the same problem does not already exist.\n To make the bug report accurate and easy to understand, please try to create bug reports that are:\n - Specific. Include as many details as possible: which version, what environment, what configuration, etc.\n - Reproducible. Include the steps to reproduce the problem. We understand some issues might be hard to reproduce; please include the steps that might lead to the problem.\n - Isolated. Please try to isolate and reproduce the bug with minimum dependencies. It would significantly slow down bug fixing if too many dependencies are involved in a bug report. Debugging external systems that rely on flannel is out of scope, but we are happy to provide guidance in the right direction or help with using flannel itself.\n - Unique. Do not duplicate an existing bug report.\n - Scoped. One bug per report. Do not follow up with another bug inside one report.\n It may be worthwhile to read [Elika Etemad\u2019s article on filing good bug reports][filing-good-bugs] before creating a bug report.\n We might ask for further information to locate a bug. A duplicated bug report will be closed."
- },
- {
- "heading": "Frequently asked questions",
- "data": ""
- },
- {
- "heading": "How to get a stack trace",
- "data": ""
- },
- {
- "heading": "How to get flannel version",
- "data": "[flannel-issue]: https://github.com/flannel-io/flannel/issues/new [filing-good-bugs]: http://fantasai.inkedblade.net/style/talks/filing-good-bugs/"
- },
- {
- "additional_info": "If any part of the flannel project has bugs or documentation mistakes, please let us know by [opening an issue][flannel-issue]. Before creating a bug report, please check that an issue reporting the same problem does not already exist. To make the bug report accurate and easy to understand, please try to create bug reports that are: - Specific. Include as many details as possible: which version, what environment, what configuration, etc. - Reproducible. Include the steps to reproduce the problem. We understand some issues might be hard to reproduce; please include the steps that might lead to the problem. - Isolated. Please try to isolate and reproduce the bug with minimum dependencies. It would significantly slow down bug fixing if too many dependencies are involved in a bug report. Debugging external systems that rely on flannel is out of scope, but we are happy to provide guidance in the right direction or help with using flannel itself. - Unique. Do not duplicate an existing bug report. - Scoped. One bug per report. Do not follow up with another bug inside one report. It may be worthwhile to read [Elika Etemad\u2019s article on filing good bug reports][filing-good-bugs] before creating a bug report. We might ask for further information to locate a bug. A duplicated bug report will be closed. ``` bash $ kill -QUIT $PID ``` ``` bash $ flannel --version ``` [flannel-issue]: https://github.com/flannel-io/flannel/issues/new [filing-good-bugs]: http://fantasai.inkedblade.net/style/talks/filing-good-bugs/"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "reservations.md"
- },
- "content": [
- {
- "heading": "Leases and Reservations",
- "data": ""
- },
- {
- "heading": "Leases",
- "data": "When flannel starts up, it ensures that the host has a subnet lease. If there is\n an existing lease then it's used, otherwise one is assigned.\n Leases can be viewed by checking the contents of etcd. e.g.\n This shows that there is a single lease (`10.5.52.0/24`) which will expire in 74737 seconds. flannel will attempt to renew the lease before it expires, but if flannel is not running for an extended period then the lease will be lost.\n The `\"PublicIP\"` value is how flannel knows to reuse this lease when restarted.\n This means that if the public IP changes, then the flannel subnet will change too.\n In case a host is unable to renew its lease before the lease expires (e.g. a host takes a long time to restart and the timing lines up with when the lease would normally be renewed), flannel will then attempt to renew the last lease that it has saved in its subnet config file (which, unless specified, is located at `/var/run/flannel/subnet.env`).\n In this case, if flannel fails to retrieve an existing lease from etcd, it will attempt to renew the lease specified in `FLANNEL_SUBNET` (`10.5.52.1/24`). It will only renew this lease if the subnet specified is valid for the current etcd network configuration; otherwise it will allocate a new lease."
- },
- {
- "heading": "Reservations",
- "data": "flannel also supports reservations for the subnet assigned to a host. Reservations\n allow a fixed subnet to be used for a given host.\n The only difference between a lease and a reservation is the etcd TTL value. Simply\n removing the TTL from a lease will convert it to a reservation. e.g."
- },
- {
- "heading": "update the value without any lease option (--lease).",
- "data": ""
- },
- {
- "additional_info": "When flannel starts up, it ensures that the host has a subnet lease. If there is an existing lease then it's used, otherwise one is assigned. Leases can be viewed by checking the contents of etcd. e.g. ``` $ export ETCDCTL_API=3 $ etcdctl get /coreos.com/network/subnets --prefix --keys-only /coreos.com/network/subnets/10.5.52.0-24 $ etcdctl get /coreos.com/network/subnets/10.5.52.0-24 /coreos.com/network/subnets/10.5.52.0-24 {\"PublicIP\":\"192.168.64.3\",\"PublicIPv6\":null,\"BackendType\":\"vxlan\",\"BackendData\":{\"VNI\":1,\"VtepMAC\":\"c6:d2:32:6f:8f:44\"}} $ etcdctl lease list found 1 leases 694d854330fc5110 $ etcdctl lease timetolive --keys 694d854330fc5110 lease 694d854330fc5110 granted with TTL(86400s), remaining(74737s), attached keys([/coreos.com/network/subnets/10.5.52.0-24]) ``` This shows that there is a single lease (`10.5.52.0/24`) which will expire in 74737 seconds. flannel will attempt to renew the lease before it expires, but if flannel is not running for an extended period then the lease will be lost. The `\"PublicIP\"` value is how flannel knows to reuse this lease when restarted. This means that if the public IP changes, then the flannel subnet will change too. In case a host is unable to renew its lease before the lease expires (e.g. a host takes a long time to restart and the timing lines up with when the lease would normally be renewed), flannel will then attempt to renew the last lease that it has saved in its subnet config file (which, unless specified, is located at `/var/run/flannel/subnet.env`) ```bash cat /var/run/flannel/subnet.env FLANNEL_NETWORK=10.5.0.0/16 FLANNEL_SUBNET=10.5.52.1/24 FLANNEL_MTU=1450 FLANNEL_IPMASQ=false ``` In this case, if flannel fails to retrieve an existing lease from etcd, it will attempt to renew the lease specified in `FLANNEL_SUBNET` (`10.5.52.1/24`). It will only renew this lease if the subnet specified is valid for the current etcd network configuration; otherwise it will allocate a new lease. flannel also supports reservations for the subnet assigned to a host. Reservations allow a fixed subnet to be used for a given host. The only difference between a lease and a reservation is the etcd TTL value. Simply removing the TTL from a lease will convert it to a reservation. e.g. ``` $ export ETCDCTL_API=3 $ etcdctl put /coreos.com/network/subnets/10.5.1.0-24 $(etcdctl get /coreos.com/network/subnets/10.5.1.0-24) ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "running.md"
- },
- "content": [
- {
- "heading": "Running flannel",
- "data": "Once you have pushed the configuration JSON to `etcd`, you can start `flanneld`. If you published your config at the default location, you can start `flanneld` with no arguments.\n Flannel will acquire a subnet lease, configure its routes based on other leases in the overlay network and start routing packets.\n It will also monitor `etcd` for new members of the network and adjust the routes accordingly.\n After flannel has acquired the subnet and configured the backend, it will write out an environment variable file (`/run/flannel/subnet.env` by default) with the subnet address and MTU that it supports.\n For more information on checking the IP range for a specific host, see [Leases and Reservations](https://github.com/flannel-io/flannel/blob/master/Documentation/reservations.md)."
- },
- {
- "heading": "Multiple networks",
- "data": "Flanneld does not support running multiple networks from a single daemon (it did previously as an experimental feature).\n However, it does support running multiple daemons on the same host with different configurations. The `-subnet-file` and `-etcd-prefix` options should be used to \"namespace\" the different daemons.\n For example"
- },
- {
- "heading": "Running manually",
- "data": "1. Download a `flannel` binary.\n 2. Run the binary.\n 3. Run `etcd`. Follow the instructions on the [etcd page](https://etcd.io/docs/v3.5/quickstart/), or, if you have docker, just do\n 4. Observe that `flannel` can now talk to `etcd`, but can't find any config. So write some config. Either get `etcdctl` from the [etcd page](https://etcd.io/docs/v3.5/quickstart/), or use `docker` again.\n Now that `flannel` is running, it has created a VXLAN tunnel device on the host and written a subnet config file\n Each time flannel is restarted, it will attempt to access the `FLANNEL_SUBNET` value written in this subnet config file. This prevents each host from needing to update its network information in case a host is unable to renew its lease before it expires (e.g. a host was restarting during the time flannel would normally renew its lease).\n The `FLANNEL_SUBNET` value is also only used if it is valid for the etcd network config. For instance, a `FLANNEL_SUBNET` value of `10.5.72.1/24` will not be used if the etcd network value is set to `10.6.0.0/16` since it is not within that network range.\n Subnet config value is `10.5.72.1/24`\n etcd network value is `10.6.0.0/16`. Since `10.5.72.1/24` is outside of this network, a new lease will be allocated."
- },
- {
- "heading": "Interface selection",
- "data": "Flannel uses the interface selected to register itself in the datastore.\n The important options are:\n * `-iface string`: Interface to use (IP or name) for inter-host communication.\n * `-public-ip string`: IP accessible by other nodes for inter-host communication.\n The combination of the defaults, the autodetection and these two flags ultimately result in the following being determined:\n * An interface (used for MTU detection and selecting the VTEP MAC in VXLAN).\n * An IP address for that interface.\n * A public IP that can be used for reaching this node. In `host-gw` it should match the interface address."
- },
- {
- "heading": "Making changes at runtime",
- "data": "Please be aware of the following flannel runtime limitations.\n * The datastore type cannot be changed.\n * The backend type cannot be changed. (It can be changed if you stop all workloads and restart all flannel daemons.)\n * You can change the subnetlen/subnetmin/subnetmax with a daemon restart. (Subnets can be changed with caution. If pods are already using IP addresses outside the new range they will stop working.)\n * The clusterwide network range cannot be changed (without downtime)."
- },
- {
- "heading": "Docker integration",
- "data": "The Docker daemon accepts the `--bip` argument to configure the subnet of the docker0 bridge.\n It also accepts `--mtu` to set the MTU for docker0 and the veth devices that it will be creating.\n Because flannel writes out the acquired subnet and MTU values into a file, the script starting Docker can source the values and pass them to the Docker daemon:\n Systemd users can use the `EnvironmentFile` directive in the `.service` file to pull in `/run/flannel/subnet.env`\n If you want to leave the default docker0 network as it is and instead create a new network that will use flannel, you can do so like this:"
- },
- {
- "heading": "Running on Vagrant",
- "data": "Vagrant has a tendency to give the default interface (one with the default route) a non-unique IP (often 10.0.2.15).\n This causes flannel to register multiple nodes with the same IP.\n To work around this issue, use `--iface` option to specify the interface that has a unique IP."
- },
- {
- "heading": "Zero-downtime restarts",
- "data": "When running with a backend other than `udp`, the kernel is providing the data path with `flanneld` acting as the control plane. As such, `flanneld` can be restarted (even to do an upgrade) without disturbing existing flows. However, in the case of the `vxlan` backend, this needs to be done within a few seconds, as ARP entries can start to time out, requiring the flannel daemon to refresh them. Also, to avoid interruptions during restart, the configuration must not be changed (e.g. VNI, --iface values)."
- },
- {
- "additional_info": "Once you have pushed configuration JSON to `etcd`, you can start `flanneld`. If you published your config at the default location, you can start `flanneld` with no arguments. Flannel will acquire a subnet lease, configure its routes based on other leases in the overlay network and start routing packets. It will also monitor `etcd` for new members of the network and adjust the routes accordingly. After flannel has acquired the subnet and configured backend, it will write out an environment variable file (`/run/flannel/subnet.env` by default) with subnet address and MTU that it supports. For more information on checking the IP range for a specific host, see [Leases and Reservations](https://github.com/flannel-io/flannel/blob/master/Documentation/reservations.md). Flanneld does not support running multiple networks from a single daemon (it did previously as an experimental feature). However, it does support running multiple daemons on the same host with different configurations. The `-subnet-file` and `-etcd-prefix` options should be used to \"namespace\" the different daemons. For example ``` flanneld -subnet-file /vxlan.env -etcd-prefix=/vxlan/network ``` 1. Download a `flannel` binary. ```bash wget https://github.com/flannel-io/flannel/releases/latest/download/flanneld-amd64 && chmod +x flanneld-amd64 ``` 2. Run the binary. ```bash sudo ./flanneld-amd64 # it will hang waiting to talk to etcd ``` 3. Run `etcd`. Follow the instructions on the [etcd page](https://etcd.io/docs/v3.5/quickstart/), or, if you have docker just do ```bash docker run --rm --net=host quay.io/coreos/etcd ``` 4. Observe that `flannel` can now talk to `etcd`, but can't find any config. So write some config. Either get `etcdctl` from the [etcd page](https://etcd.io/docs/v3.5/quickstart/), or use `docker` again. 
```bash docker run --rm -e ETCDCTL_API=3 --net=host quay.io/coreos/etcd etcdctl put /coreos.com/network/config '{ \"Network\": \"10.5.0.0/16\", \"Backend\": {\"Type\": \"vxlan\"}}' ``` Now `flannel` is running, it has created a VXLAN tunnel device on the host and written a subnet config file ```bash cat /run/flannel/subnet.env FLANNEL_NETWORK=10.5.0.0/16 FLANNEL_SUBNET=10.5.72.1/24 FLANNEL_MTU=1450 FLANNEL_IPMASQ=false ``` Each time flannel is restarted, it will attempt to access the `FLANNEL_SUBNET` value written in this subnet config file. This prevents each host from needing to update its network information in case a host is unable to renew its lease before it expires (e.g. a host was restarting during the time flannel would normally renew its lease). The `FLANNEL_SUBNET` value is also only used if it is valid for the etcd network config. For instance, a `FLANNEL_SUBNET` value of `10.5.72.1/24` will not be used if the etcd network value is set to `10.6.0.0/16` since it is not within that network range. Subnet config value is `10.5.72.1/24` ```bash cat /run/flannel/subnet.env FLANNEL_NETWORK=10.5.0.0/16 FLANNEL_SUBNET=10.5.72.1/24 FLANNEL_MTU=1450 FLANNEL_IPMASQ=false ``` etcd network value is `10.6.0.0/16`. Since `10.5.72.1/24` is outside of this network, a new lease will be allocated. ```bash export ETCDCTL_API=3 etcdctl get /coreos.com/network/config { \"Network\": \"10.6.0.0/16\", \"Backend\": {\"Type\": \"vxlan\"}} ``` Flannel uses the interface selected to register itself in the datastore. The important options are: * `-iface string`: Interface to use (IP or name) for inter-host communication. * `-public-ip string`: IP accessible by other nodes for inter-host communication. The combination of the defaults, the autodetection and these two flags ultimately result in the following being determined: * An interface (used for MTU detection and selecting the VTEP MAC in VXLAN). * An IP address for that interface. 
* A public IP that can be used for reaching this node. In `host-gw` it should match the interface address. Please be aware of the following flannel runtime limitations. * The datastore type cannot be changed. * The backend type cannot be changed. (It can be changed if you stop all workloads and restart all flannel daemons.) * You can change the subnetlen/subnetmin/subnetmax with a daemon restart. (Subnets can be changed with caution. If pods are already using IP addresses outside the new range they will stop working.) * The clusterwide network range cannot be changed (without downtime). Docker daemon accepts `--bip` argument to configure the subnet of the docker0 bridge. It also accepts `--mtu` to set the MTU for docker0 and veth devices that it will be creating. Because flannel writes out the acquired subnet and MTU values into a file, the script starting Docker can source in the values and pass them to Docker daemon: ```bash source /run/flannel/subnet.env docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} & ``` Systemd users can use `EnvironmentFile` directive in the `.service` file to pull in `/run/flannel/subnet.env` If you want to leave default docker0 network as it is and instead create a new network that will be using flannel you do so like this: ```bash source /run/flannel/subnet.env docker network create --attachable=true --subnet=${FLANNEL_SUBNET} -o \"com.docker.network.driver.mtu\"=${FLANNEL_MTU} flannel ``` Vagrant has a tendency to give the default interface (one with the default route) a non-unique IP (often 10.0.2.15). This causes flannel to register multiple nodes with the same IP. To work around this issue, use `--iface` option to specify the interface that has a unique IP. When running with a backend other than `udp`, the kernel is providing the data path with `flanneld` acting as the control plane. As such, `flanneld` can be restarted (even to do an upgrade) without disturbing existing flows. 
However in the case of `vxlan` backend, this needs to be done within a few seconds as ARP entries can start to timeout requiring the flannel daemon to refresh them. Also, to avoid interruptions during restart, the configuration must not be changed (e.g. VNI, --iface values)."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "tencentcloud-vpc-backend.md"
- },
- "content": [
- {
- "heading": "TencentCloud VPC Backend for Flannel",
- "data": "There are only two differences from using the Alibaba Cloud backend: 1. Tencent Cloud needs a routing table to be created, while Alibaba Cloud creates a switch 2. In network/config, the backend type is \"tencent-vpc\""
- },
- {
- "additional_info": "There are only two differences between the usage method and Alibaba Cloud: 1. Tencent Cloud needs to create a routing table, while Alibaba Cloud creates a switch 2. In network/config, backend-type is \"tencent-vpc\""
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "troubleshooting.md"
- },
- "content": [
- {
- "heading": "Troubleshooting",
- "data": ""
- },
- {
- "heading": "General",
- "data": ""
- },
- {
- "heading": "Connectivity",
- "data": "In Docker v1.13 and later, the default iptables forwarding policy was changed to `DROP`. For more detail on the Docker change, see the Docker [documentation](https://docs.docker.com/engine/userguide/networking/default_network/container-communication/#container-communication-between-hosts).\n This problem manifests itself as connectivity problems between containers running on different hosts. To resolve it, upgrade to the latest version of flannel."
- },
- {
- "heading": "Logging",
- "data": "Flannel uses the `klog` library but only supports logging to stderr. The severity level can't be changed but the verbosity can be changed with the `-v` option. Flannel does not make extensive use of the verbosity level but increasing the value from `0` (the default) will result in some additional logs. To get the most detailed logs, use `-v=10`\n When running under systemd (e.g. on CoreOS Container Linux) the logs can be viewed with `journalctl -u flanneld`\n When flannel is running as a pod on Kubernetes, the logs can be viewed with `kubectl logs --namespace kube-flannel -c kube-flannel`. You can find the pod IDs with `kubectl get pod --namespace kube-flannel -l app=flannel`"
- },
- {
- "heading": "Interface selection and the public IP.",
- "data": "Most backends require that each node has a unique \"public IP\" address. This address is chosen when flannel starts. Because leases are tied to the public address, if the address changes, flannel must be restarted.\n The interface chosen and the public IP in use is logged out during startup, e.g."
- },
- {
- "heading": "Vagrant",
- "data": "Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.\n This may lead to problems with flannel. By default, flannel selects the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this issue, pass the `--iface=eth1` flag to flannel so that the second interface is chosen."
- },
- {
- "heading": "NAT",
- "data": "When the public IP is behind NAT, the UDP checksum fields of the VXLAN packets can be corrupted.\n In that case, try running the following commands to avoid corrupted checksums:\n To automate the command above via udev, create `/etc/udev/rules.d/90-flannel.rules` as follows:\n "
- },
- {
- "heading": "Permissions",
- "data": "Depending on the backend being used, flannel may need to run with super user permissions. Examples include creating VXLAN devices or programming routes. If you see errors similar to the following, confirm that the user running flannel has the right permissions (or try running with `sudo`).\n * `Error adding route...`\n * `Add L2 failed`\n * `Failed to set up IP Masquerade`\n * `Error registering network: operation not permitted`"
- },
- {
- "heading": "Performance",
- "data": ""
- },
- {
- "heading": "Control plane",
- "data": "Flannel is known to scale to a very large number of hosts. A delay in contacting pods in a newly created host may indicate control plane problems. Flannel doesn't need much CPU or RAM but the first thing to check would be that it has adequate resources available. Flannel is also reliant on the performance of the datastore, either etcd or the Kubernetes API server. Check that they are performing well."
- },
- {
- "heading": "Data plane",
- "data": "Flannel relies on the underlying network, so that's the first thing to check if you're seeing poor data plane performance.\n There are two flannel-specific choices that can have a big impact on performance:\n 1) The type of backend. For example, if encapsulation is used, `vxlan` will always perform better than `udp`. For maximum data plane performance, avoid encapsulation.\n 2) The size of the MTU can have a large impact. To achieve maximum raw bandwidth, a network supporting a large MTU should be used. Flannel writes an MTU setting to the `subnet.env` file. This file is read by either the Docker daemon or the CNI flannel plugin, which does the networking for individual containers. To troubleshoot, first ensure that the network interface that flannel is using has the right MTU. Then check that the correct MTU is written to `subnet.env`. Finally, check that the containers have the correct MTU on their virtual ethernet device."
- },
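The MTU relationship described in point 2 can be illustrated with a quick sketch. The 50-byte figure is the standard VXLAN-over-IPv4 overhead (outer IPv4 + UDP + VXLAN header + inner Ethernet); this is a back-of-the-envelope illustration, not flannel's actual MTU-detection code.

```python
# Illustrative MTU arithmetic for VXLAN encapsulation over IPv4.
VXLAN_OVERHEAD = 20 + 8 + 8 + 14  # outer IPv4 + UDP + VXLAN header + inner Ethernet

def overlay_mtu(iface_mtu, overhead=VXLAN_OVERHEAD):
    """MTU containers should use so encapsulated frames still fit
    on the underlying interface."""
    return iface_mtu - overhead

print(overlay_mtu(1500))  # 1450, the FLANNEL_MTU commonly seen in subnet.env
print(overlay_mtu(9000))  # a jumbo-frame underlay leaves 8950 for the overlay
```

If the value in `subnet.env` differs from the interface MTU minus the backend's overhead, that mismatch is a good place to start troubleshooting.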
- {
- "heading": "Firewalls",
- "data": "When using the `udp` backend, flannel uses UDP port 8285 for sending encapsulated packets.\n When using the `vxlan` backend, the kernel uses UDP port 8472 for sending encapsulated packets.\n Make sure that your firewall rules allow this traffic for all hosts participating in the overlay network.\n Make sure that your firewall rules also allow traffic from the pod network CIDR to reach your Kubernetes master node."
- },
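As a quick reference, the ports above can be captured in a small lookup table. This is an illustrative sketch (the helper name is made up), e.g. for generating allow-rules for a set of overlay hosts:

```python
# UDP ports used by flannel's data plane, as listed above.
FLANNEL_BACKEND_PORTS = {
    "udp": 8285,    # flanneld userspace encapsulation
    "vxlan": 8472,  # kernel VXLAN encapsulation
}

def firewall_rules(backend, hosts):
    """Produce one human-readable allow-rule per ordered host pair
    (hypothetical helper, not a flannel API)."""
    port = FLANNEL_BACKEND_PORTS[backend]
    return [
        f"allow udp/{port} from {src} to {dst}"
        for src in hosts for dst in hosts if src != dst
    ]

for rule in firewall_rules("vxlan", ["10.0.0.1", "10.0.0.2"]):
    print(rule)
```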
- {
- "heading": "Kubernetes Specific",
- "data": "The flannel kube subnet manager relies on the fact that each node already has a `podCIDR` defined.\n You can check the podCidr for your nodes with one of the following two commands\n * `kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'`\n * `kubectl get nodes -o template --template={{.spec.podCIDR}}`\n If your nodes do not have a podCIDR, then either use the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=` controller-manager command-line options.\n If `kubeadm` is being used then pass `--pod-network-cidr=10.244.0.0/16` to `kubeadm init` which will ensure that all nodes are automatically assigned a `podCIDR`.\n It's possible (but not generally recommended) to manually set the `podCIDR` to a fixed value for each node. The node subnet ranges must not overlap.\n * `kubectl patch node -p '{\"spec\":{\"podCIDR\":\"\"}}'`"
- },
- {
- "heading": "Log messages",
- "data": "* `failed to read net conf` - flannel expects to be able to read the net conf from \"/etc/kube-flannel/net-conf.json\". In the provided manifest, this is set up in the `kube-flannel-cfg` ConfigMap. * `error parsing subnet config` - The net conf is malformed. Double check that it has the right content and is valid JSON. * `node pod cidr not assigned` - The node doesn't have a `podCIDR` defined. See above for more info. * `Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-abc123': the server does not allow access to the requested resource` - The kubernetes cluster has RBAC enabled. Run `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-old-manifests/kube-flannel-rbac.yml`"
- },
- {
- "additional_info": "In Docker v1.13 and later, the default iptables forwarding policy was changed to `DROP`. For more detail on the Docker change, see the Docker [documentation](https://docs.docker.com/engine/userguide/networking/default_network/container-communication/#container-communication-between-hosts). This problems manifests itself as connectivity problems between containers running on different hosts. To resolve it upgrade to the latest version of flannel. Flannel uses the `klog` library but only supports logging to stderr. The severity level can't be changed but the verbosity can be changed with the `-v` option. Flannel does not make extensive use of the verbosity level but increasing the value from `0` (the default) will result in some additional logs. To get the most detailed logs, use `-v=10` ``` -v value log level for V logs -vmodule value comma-separated list of pattern=N settings for file-filtered logging -log_backtrace_at value when logging hits line file:N, emit a stack trace ``` When running under systemd (e.g. on CoreOS Container Linux) the logs can be viewed with `journalctl -u flanneld` When flannel is running as a pod on Kubernetes, the logs can be viewed with `kubectl logs --namespace kube-flannel -c kube-flannel`. You can find the pod IDs with `kubectl get pod --namespace kube-flannel -l app=flannel` Most backends require that each node has a unique \"public IP\" address. This address is chosen when flannel starts. Because leases are tied to the public address, if the address changes, flannel must be restarted. The interface chosen and the public IP in use is logged out during startup, e.g. ``` I0629 14:28:35.866793 5522 main.go:386] Determining IP address of default interface I0629 14:28:35.866987 5522 main.go:399] Using interface with name enp62s0u1u2 and address 172.24.17.174 I0629 14:28:35.867000 5522 main.go:412] Using 10.10.10.10 as external address ``` Vagrant typically assigns two interfaces to all VMs. 
The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed. This may lead to problems with flannel. By default, flannel selects the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this issue, pass the `--iface=eth1` flag to flannel so that the second interface is chosen. When the public IP is behind NAT, the UDP checksum fields of the VXLAN packets can be corrupted. In that case, try running the following commands to avoid corrupted checksums: ```bash /usr/sbin/ethtool -K flannel.1 tx-checksum-ip-generic off ``` To automate the command above via udev, create `/etc/udev/rules.d/90-flannel.rules` as follows: ``` SUBSYSTEM==\"net\", ACTION==\"add|change|move\", ENV{INTERFACE}==\"flannel.1\", RUN+=\"/usr/sbin/ethtool -K flannel.1 tx-checksum-ip-generic off\" ``` Depending on the backend being used, flannel may need to run with super user permissions. Examples include creating VXLAN devices or programming routes. If you see errors similar to the following, confirm that the user running flannel has the right permissions (or try running with `sudo)`. * `Error adding route...` * `Add L2 failed` * `Failed to set up IP Masquerade` * `Error registering network: operation not permitted` Flannel is known to scale to a very large number of hosts. A delay in contacting pods in a newly created host may indicate control plane problems. Flannel doesn't need much CPU or RAM but the first thing to check would be that it has adequate resources available. Flannel is also reliant on the performance of the datastore, either etcd or the Kubernetes API server. Check that they are performing well. Flannel relies on the underlying network so that's the first thing to check if you're seeing poor data plane performance. There are two flannel specific choices that can have a big impact on performance 1) The type of backend. 
For example, if encapsulation is used, `vxlan` will always perform better than `udp`. For maximum data plane performance, avoid encapsulation. 2) The size of the MTU can have a large impact. To achieve maximum raw bandwidth, a network supporting a large MTU should be used. Flannel writes an MTU setting to the `subnet.env` file. This file is read by either the Docker daemon or the CNI flannel plugin which does the networking for individual containers. To troubleshoot, first ensure that the network interface that flannel is using has the right MTU. Then check that the correct MTU is written to the `subnet.env`. Finally, check that the containers have the correct MTU on their virtual ethernet device. When using `udp` backend, flannel uses UDP port 8285 for sending encapsulated packets. When using `vxlan` backend, kernel uses UDP port 8472 for sending encapsulated packets. Make sure that your firewall rules allow this traffic for all hosts participating in the overlay network. Make sure that your firewall rules allow traffic from pod network cidr visit your kubernetes master node. The flannel kube subnet manager relies on the fact that each node already has a `podCIDR` defined. You can check the podCidr for your nodes with one of the following two commands * `kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'` * `kubectl get nodes -o template --template={{.spec.podCIDR}}` If your nodes do not have a podCIDR, then either use the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=` controller-manager command-line options. If `kubeadm` is being used then pass `--pod-network-cidr=10.244.0.0/16` to `kubeadm init` which will ensure that all nodes are automatically assigned a `podCIDR`. It's possible (but not generally recommended) to manually set the `podCIDR` to a fixed value for each node. The node subnet ranges must not overlap. 
* `kubectl patch node -p '{\"spec\":{\"podCIDR\":\"\"}}'` * `failed to read net conf` - flannel expects to be able to read the net conf from \"/etc/kube-flannel/net-conf.json\". In the provided manifest, this is set up in the `kube-flannel-cfg` ConfigMap. * `error parsing subnet config` - The net conf is malformed. Double check that it has the right content and is valid JSON. * `node pod cidr not assigned` - The node doesn't have a `podCIDR` defined. See above for more info. * `Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-abc123': the server does not allow access to the requested resource` - The kubernetes cluster has RBAC enabled. Run `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-old-manifests/kube-flannel-rbac.yml`"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Flannel",
- "file_name": "upgrade.md"
- },
- "content": [
- {
- "heading": "Upgrade",
- "data": "Flannel upgrade/downgrade procedure\n \n There are different ways of changing the flannel version in a running cluster:"
- },
- {
- "heading": "Remove old resources definitions and install a new one.",
- "data": "* Pros: Cleanest way of managing resources of the flannel deployment, and no manual validation required as long as no additional resources were created by administrators/operators\n * Cons: Massive networking outage within the cluster during the version change"
- },
- {
- "heading": "1. Delete all the flannel resources using kubectl",
- "data": ""
- },
- {
- "heading": "2. Install the newer version of flannel and reboot the nodes",
- "data": ""
- },
- {
- "heading": "On the fly version",
- "data": "* Pros: Less disruptive way of changing the flannel version, easier to do\n * Cons: Some versions may have changes which can't simply be replaced and may need resource cleanup and/or renames; manual resource comparison is required\n If the update is done from a version newer than 0.20.2, it can be done using kubectl\n In case of an error on the labeling, follow the previous way."
- },
- {
- "heading": "Using the helm repository",
- "data": "From version 0.21.4, flannel is published in a helm repository at `https://flannel-io.github.io/flannel/`, making it possible to manage the update directly with helm."
- },
- {
- "additional_info": "Flannel upgrade/downgrade procedure There are different ways of changing the flannel version in a running cluster: * Pros: Cleanest way of managing resources of the flannel deployment, and no manual validation required as long as no additional resources were created by administrators/operators * Cons: Massive networking outage within the cluster during the version change ```bash kubectl -n kube-flannel delete daemonset kube-flannel-ds kubectl -n kube-flannel delete configmap kube-flannel-cfg kubectl -n kube-flannel delete serviceaccount flannel kubectl delete clusterrolebinding.rbac.authorization.k8s.io flannel kubectl delete clusterrole.rbac.authorization.k8s.io flannel kubectl delete namespace kube-flannel ``` * Pros: Less disruptive way of changing the flannel version, easier to do * Cons: Some versions may have changes which can't simply be replaced and may need resource cleanup and/or renames; manual resource comparison is required If the update is done from a version newer than 0.20.2, it can be done using kubectl ```bash kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml ``` In case of an error on the labeling, follow the previous way. From version 0.21.4, flannel is published in a helm repository at `https://flannel-io.github.io/flannel/`, making it possible to manage the update directly with helm. ```bash helm upgrade flannel --set podCidr=\"10.244.0.0/16\" --namespace kube-flannel flannel/flannel ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Kilo",
- "file_name": "annotations.md"
- },
- "content": [
- {
- "heading": "Annotations",
- "data": "The following annotations can be added to any Kubernetes Node object to configure the Kilo network.\n |Name|type|examples|\n |----|----|-------|\n |[kilo.squat.ai/force-endpoint](#force-endpoint)|host:port|`55.55.55.55:51820`, `example.com:1337`|\n |[kilo.squat.ai/force-internal-ip](#force-internal-ip)|CIDR|`55.55.55.55/32`, `\"-\"`,`\"\"`|\n |[kilo.squat.ai/leader](#leader)|string|`\"\"`, `true`|\n |[kilo.squat.ai/location](#location)|string|`gcp-east`, `lab`|\n |[kilo.squat.ai/persistent-keepalive](#persistent-keepalive)|uint|`10`|\n |[kilo.squat.ai/allowed-location-ips](#allowed-location-ips)|CIDR|`66.66.66.66/32`|"
- },
- {
- "heading": "force-endpoint",
- "data": "In order to create links between locations, Kilo requires at least one node in each location to have an endpoint, i.e. a `host:port` combination, that is routable from the other locations.\n If the locations are in different cloud providers or in different private networks, then the `host` portion of the endpoint should be a publicly accessible IP address, or a DNS name that resolves to a public IP, so that the other locations can route packets to it.\n The Kilo agent running on each node will use heuristics to automatically detect an external IP address for the node and correctly configure its endpoint; however, in some circumstances it may be necessary to explicitly configure the endpoint to use, for example:\n * _no automatic public IP on ethernet device_: on some cloud providers it is common for nodes to be allocated a public IP address but for the Ethernet devices to only be automatically configured with the private network address; in this case the allocated public IP address should be specified;\n * _multiple public IP addresses_: if a node has multiple public IPs but one is preferred, then the preferred IP address should be specified;\n * _IPv6_: if a node has both public IPv4 and IPv6 addresses and the Kilo network should operate over IPv6, then the IPv6 address should be specified;\n * _dynamic IP address_: if a node has a dynamically allocated public IP address, for example an IP leased from a network provider, then a dynamic DNS name can be given and Kilo will periodically look up the IP to keep the endpoint up-to-date;\n * _override port_: if a node should listen on a specific port that is different from the mesh's default WireGuard port, then this annotation can be used to override the port; this can be useful, for example, to ensure that two nodes operating behind the same port-forwarded NAT gateway can each be allocated a different port."
- },
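The annotation's `host:port` form can be split mechanically. The sketch below is a hypothetical parser: the name `split_endpoint` and the bracketed-IPv6 form `[addr]:port` are assumptions for illustration, not Kilo's actual code.

```python
# Illustrative parser for a force-endpoint value; split_endpoint and
# the bracketed-IPv6 handling are assumptions, not Kilo's actual code.
def split_endpoint(value):
    if value.startswith("["):                 # "[2001:db8::1]:51820"
        host, _, port = value.rpartition("]:")
        return host.lstrip("["), int(port)
    host, _, port = value.rpartition(":")     # "example.com:1337"
    return host, int(port)
```

The same split covers the two example annotation values from the table above, `55.55.55.55:51820` and `example.com:1337`.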
- {
- "heading": "force-internal-ip",
- "data": "Kilo routes packets destined for nodes inside the same logical location using the node's internal IP address.\n The Kilo agent running on each node will use heuristics to automatically detect a private IP address for the node; however, in some circumstances it may be necessary to explicitly configure the IP address, for example:\n * _multiple private IP addresses_: if a node has multiple private IPs but one is preferred, then the preferred IP address should be specified;\n * _IPv6_: if a node has both private IPv4 and IPv6 addresses and the Kilo network should operate over IPv6, then the IPv6 address should be specified.\n * _disable private IP with \"-\" or \"\"_: a node has a private and public address, but the private address ought to be ignored."
- },
- {
- "heading": "leader",
- "data": "By default, Kilo creates a network mesh at the data-center granularity.\n This means that one leader node is selected from each location to be an edge server and act as the gateway to other locations; the network topology will be a full mesh between leaders.\n Kilo automatically selects the leader for each location in a stable and deterministic manner to avoid churn in the network configuration, while giving preference to nodes that are known to have public IP addresses.\n In some situations it may be desirable to manually select the leader for a location, for example:\n * _firewall_: Kilo requires an open UDP port, which defaults to 51820, to communicate between locations; if only one node is configured to have that port open, then that node should be given the leader annotation;\n * _bandwidth_: if certain nodes in the cluster have a higher bandwidth or lower latency Internet connection, then those nodes should be given the leader annotation.\n > **Note**: multiple nodes within a single location can be given the leader annotation; in this case, Kilo will select one leader from the set of annotated nodes."
- },
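The "stable and deterministic" selection described above can be illustrated with a toy sketch. This is a hypothetical heuristic, not Kilo's real selection logic: it prefers nodes known to have a public IP and breaks ties by the lexicographically smallest node name so repeated reconciliations agree.

```python
# Hypothetical illustration of a stable, deterministic leader choice;
# Kilo's real selection logic lives in its source code.
def pick_leader(nodes):
    """nodes: iterable of (name, has_public_ip) pairs."""
    names = [name for name, public in nodes if public]
    if not names:                  # no public IPs known: fall back to all
        names = [name for name, _ in nodes]
    return min(names)
```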
- {
- "heading": "location",
- "data": "Kilo allows nodes in different logical or physical locations to route packets to one another.\n In order to know what connections to create, Kilo needs to know which nodes are in each location.\n Kilo will try to infer each node's location from the [topology.kubernetes.io/region](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion) node label.\n If the label is not present for a node, for example if running a bare-metal cluster or on an unsupported cloud provider, then the location annotation should be specified.\n > **Note**: all nodes without a defined location will be considered to be in the default location `\"\"`."
- },
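The fallback order above can be captured in a few lines. This is an illustrative sketch only; it assumes the `kilo.squat.ai/location` annotation takes precedence over the `topology.kubernetes.io/region` label, with `""` as the default location.

```python
# Illustrative sketch of location inference, not Kilo's actual code.
DEFAULT_LOCATION = ""

def infer_location(labels, annotations):
    return (annotations.get("kilo.squat.ai/location")
            or labels.get("topology.kubernetes.io/region")
            or DEFAULT_LOCATION)
```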
- {
- "heading": "persistent-keepalive",
- "data": "In certain deployments, cluster nodes may be located behind NAT or a firewall, e.g. edge nodes located behind a commodity router.\n In these scenarios, the nodes behind NAT can send packets to the nodes outside of the NATed network; however, the outside nodes can only send packets into the NATed network as long as the NAT mapping remains valid.\n In order for a node behind NAT to receive packets from nodes outside of the NATed network, it must maintain the NAT mapping by regularly sending packets to those nodes, i.e. by sending _keepalives_.\n The frequency of these keepalive packets can be controlled by setting the persistent-keepalive annotation on the node behind NAT.\n The annotated node will use the specified value as the persistent-keepalive interval for all of its peers.\n For more background, [see the WireGuard documentation on NAT and firewall traversal](https://www.wireguard.com/quickstart/#nat-and-firewall-traversal-persistence)."
- },
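In WireGuard terms, the annotation value ends up as a per-peer `PersistentKeepalive` setting. The helper below is hypothetical, not part of Kilo; it only shows how the annotation's integer value maps onto a peer-config line, with 0 disabling the feature.

```python
# Hypothetical rendering of the persistent-keepalive annotation into a
# WireGuard peer configuration line; not Kilo's actual code.
def keepalive_config(annotation_value):
    seconds = int(annotation_value)
    if seconds <= 0:
        return ""  # 0 disables keepalives
    return "PersistentKeepalive = %d" % seconds
```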
- {
- "heading": "allowed-location-ips",
- "data": "It is possible to add allowed-location-ips to a location by annotating any node within that location. Adding allowed-location-ips to a location makes these IPs routable from other locations as well. In an example deployment of Kilo with two locations A and B, a printer in location A can be accessible from nodes and pods in location B. Additionally, Kilo Peers can use the printer in location A."
- },
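Conceptually, Kilo gathers the allowed-location-ips announced by any node in a location and makes those CIDRs routable from other locations. The sketch below illustrates that aggregation; the function name and the comma-separated annotation format are assumptions for illustration, not Kilo's documented behavior.

```python
import ipaddress

# Hypothetical aggregation of allowed-location-ips CIDRs across the
# nodes of one location; the comma-separated format is an assumption.
def location_allowed_ips(per_node_annotations):
    cidrs = set()
    for annotations in per_node_annotations:
        value = annotations.get("kilo.squat.ai/allowed-location-ips", "")
        for raw in value.split(","):
            raw = raw.strip()
            if raw:
                cidrs.add(str(ipaddress.ip_network(raw)))
    return sorted(cidrs)
```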
- {
- "additional_info": "The following annotations can be added to any Kubernetes Node object to configure the Kilo network. |Name|type|examples| |----|----|-------| |[kilo.squat.ai/force-endpoint](#force-endpoint)|host:port|`55.55.55.55:51820`, `example.com:1337`| |[kilo.squat.ai/force-internal-ip](#force-internal-ip)|CIDR|`55.55.55.55/32`, `\"-\"`,`\"\"`| |[kilo.squat.ai/leader](#leader)|string|`\"\"`, `true`| |[kilo.squat.ai/location](#location)|string|`gcp-east`, `lab`| |[kilo.squat.ai/persistent-keepalive](#persistent-keepalive)|uint|`10`| |[kilo.squat.ai/allowed-location-ips](#allowed-location-ips)|CIDR|`66.66.66.66/32`| In order to create links between locations, Kilo requires at least one node in each location to have an endpoint, i.e. a `host:port` combination, that is routable from the other locations. If the locations are in different cloud providers or in different private networks, then the `host` portion of the endpoint should be a publicly accessible IP address, or a DNS name that resolves to a public IP, so that the other locations can route packets to it. 
The Kilo agent running on each node will use heuristics to automatically detect an external IP address for the node and correctly configure its endpoint; however, in some circumstances it may be necessary to explicitly configure the endpoint to use, for example: * _no automatic public IP on ethernet device_: on some cloud providers it is common for nodes to be allocated a public IP address but for the Ethernet devices to only be automatically configured with the private network address; in this case the allocated public IP address should be specified; * _multiple public IP addresses_: if a node has multiple public IPs but one is preferred, then the preferred IP address should be specified; * _IPv6_: if a node has both public IPv4 and IPv6 addresses and the Kilo network should operate over IPv6, then the IPv6 address should be specified; * _dynamic IP address_: if a node has a dynamically allocated public IP address, for example an IP leased from a network provider, then a dynamic DNS name can be given and Kilo will periodically look up the IP to keep the endpoint up-to-date; * _override port_: if a node should listen on a specific port that is different from the mesh's default WireGuard port, then this annotation can be used to override the port; this can be useful, for example, to ensure that two nodes operating behind the same port-forwarded NAT gateway can each be allocated a different port. Kilo routes packets destined for nodes inside the same logical location using the node's internal IP address. 
The Kilo agent running on each node will use heuristics to automatically detect a private IP address for the node; however, in some circumstances it may be necessary to explicitly configure the IP address, for example: * _multiple private IP addresses_: if a node has multiple private IPs but one is preferred, then the preferred IP address should be specified; * _IPv6_: if a node has both private IPv4 and IPv6 addresses and the Kilo network should operate over IPv6, then the IPv6 address should be specified. * _disable private IP with \"-\" or \"\"_: a node has a private and public address, but the private address ought to be ignored. By default, Kilo creates a network mesh at the data-center granularity. This means that one leader node is selected from each location to be an edge server and act as the gateway to other locations; the network topology will be a full mesh between leaders. Kilo automatically selects the leader for each location in a stable and deterministic manner to avoid churn in the network configuration, while giving preference to nodes that are known to have public IP addresses. In some situations it may be desirable to manually select the leader for a location, for example: * _firewall_: Kilo requires an open UDP port, which defaults to 51820, to communicate between locations; if only one node is configured to have that port open, then that node should be given the leader annotation; * _bandwidth_: if certain nodes in the cluster have a higher bandwidth or lower latency Internet connection, then those nodes should be given the leader annotation. > **Note**: multiple nodes within a single location can be given the leader annotation; in this case, Kilo will select one leader from the set of annotated nodes. Kilo allows nodes in different logical or physical locations to route packets to one another. In order to know what connections to create, Kilo needs to know which nodes are in each location. 
Kilo will try to infer each node's location from the [topology.kubernetes.io/region](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion) node label. If the label is not present for a node, for example if running a bare-metal cluster or on an unsupported cloud provider, then the location annotation should be specified. > **Note**: all nodes without a defined location will be considered to be in the default location `\"\"`. In certain deployments, cluster nodes may be located behind NAT or a firewall, e.g. edge nodes located behind a commodity router. In these scenarios, the nodes behind NAT can send packets to the nodes outside of the NATed network; however, the outside nodes can only send packets into the NATed network as long as the NAT mapping remains valid. In order for a node behind NAT to receive packets from nodes outside of the NATed network, it must maintain the NAT mapping by regularly sending packets to those nodes, i.e. by sending _keepalives_. The frequency of these keepalive packets can be controlled by setting the persistent-keepalive annotation on the node behind NAT. The annotated node will use the specified value as the persistent-keepalive interval for all of its peers. For more background, [see the WireGuard documentation on NAT and firewall traversal](https://www.wireguard.com/quickstart/#nat-and-firewall-traversal-persistence). It is possible to add allowed-location-ips to a location by annotating any node within that location. Adding allowed-location-ips to a location makes these IPs routable from other locations as well. In an example deployment of Kilo with two locations A and B, a printer in location A can be accessible from nodes and pods in location B. Additionally, Kilo Peers can use the printer in location A."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Kilo",
- "file_name": "api.md"
- },
- "content": [
- {
- "heading": "API",
- "data": "This document is a reference of the API types introduced by Kilo.\n > **Note**: this document is generated from code comments. When contributing a change to this document, please do so by changing the code comments."
- },
- {
- "heading": "Table of Contents",
- "data": "* [DNSOrIP](#dnsorip)\n * [Peer](#peer)\n * [PeerEndpoint](#peerendpoint)\n * [PeerList](#peerlist)\n * [PeerSpec](#peerspec)"
- },
- {
- "heading": "DNSOrIP",
- "data": "DNSOrIP represents either a DNS name or an IP address. When both are given, the IP address, as it is more specific, overrides the DNS name.\n | Field | Description | Scheme | Required |\n | ----- | ----------- | ------ | -------- |\n | dns | DNS must be a valid RFC 1123 subdomain. | string | false |\n | ip | IP must be a valid IP address. | string | false |\n [Back to TOC](#table-of-contents)"
- },
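The precedence rule above fits in one line: when both fields are set, the more specific IP wins. `resolve_dns_or_ip` is a hypothetical helper, not part of Kilo's API.

```python
# Hypothetical helper expressing the DNSOrIP precedence contract:
# a set IP overrides the DNS name; either field may be unset.
def resolve_dns_or_ip(dns, ip):
    return ip or dns or ""
```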
- {
- "heading": "Peer",
- "data": "Peer is a WireGuard peer that should have access to the VPN.\n | Field | Description | Scheme | Required |\n | ----- | ----------- | ------ | -------- |\n | metadata | Standard object\u2019s metadata. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#objectmeta-v1-meta) | false |\n | spec | Specification of the desired behavior of the Kilo Peer. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status | [PeerSpec](#peerspec) | true |\n [Back to TOC](#table-of-contents)"
- },
- {
- "heading": "PeerEndpoint",
- "data": "PeerEndpoint represents a WireGuard endpoint, which is an IP:port tuple.\n | Field | Description | Scheme | Required |\n | ----- | ----------- | ------ | -------- |\n | dnsOrIP | DNSOrIP is a DNS name or an IP address. | [DNSOrIP](#dnsorip) | true |\n | port | Port must be a valid port number. | uint32 | true |\n [Back to TOC](#table-of-contents)"
- },
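Joining the two required fields back into the `IP:port` tuple that PeerEndpoint represents can be sketched as follows. `format_endpoint` is a hypothetical helper; the schema types the port as uint32, though usable UDP ports are 1-65535.

```python
# Hypothetical helper joining a resolved DNSOrIP host with a port into
# the "IP:port" tuple described by PeerEndpoint; not Kilo's code.
def format_endpoint(host, port):
    if not 0 < port <= 65535:
        raise ValueError("invalid port: %d" % port)
    return "%s:%d" % (host, port)
```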
- {
- "heading": "PeerList",
- "data": "PeerList is a list of peers.\n | Field | Description | Scheme | Required |\n | ----- | ----------- | ------ | -------- |\n | metadata | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta) | false |\n | items | List of peers. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md | [][Peer](#peer) | true |\n [Back to TOC](#table-of-contents)"
- },
- {
- "heading": "PeerSpec",
- "data": "PeerSpec is the description and configuration of a peer. | Field | Description | Scheme | Required | | ----- | ----------- | ------ | -------- | | allowedIPs | AllowedIPs is the list of IP addresses that are allowed for the given peer's tunnel. | []string | true | | endpoint | Endpoint is the initial endpoint for connections to the peer. | *[PeerEndpoint](#peerendpoint) | false | | persistentKeepalive | PersistentKeepalive is the interval in seconds of the emission of keepalive packets by the peer. This defaults to 0, which disables the feature. | int | false | | presharedKey | PresharedKey is the optional symmetric encryption key for the peer. | string | false | | publicKey | PublicKey is the WireGuard public key for the peer. | string | true | [Back to TOC](#table-of-contents)"
- },
- {
- "additional_info": "This document is a reference of the API types introduced by Kilo. > **Note**: this document is generated from code comments. When contributing a change to this document, please do so by changing the code comments. * [DNSOrIP](#dnsorip) * [Peer](#peer) * [PeerEndpoint](#peerendpoint) * [PeerList](#peerlist) * [PeerSpec](#peerspec) DNSOrIP represents either a DNS name or an IP address. When both are given, the IP address, as it is more specific, overrides the DNS name. | Field | Description | Scheme | Required | | ----- | ----------- | ------ | -------- | | dns | DNS must be a valid RFC 1123 subdomain. | string | false | | ip | IP must be a valid IP address. | string | false | [Back to TOC](#table-of-contents) Peer is a WireGuard peer that should have access to the VPN. | Field | Description | Scheme | Required | | ----- | ----------- | ------ | -------- | | metadata | Standard object\u2019s metadata. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#objectmeta-v1-meta) | false | | spec | Specification of the desired behavior of the Kilo Peer. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status | [PeerSpec](#peerspec) | true | [Back to TOC](#table-of-contents) PeerEndpoint represents a WireGuard endpoint, which is an IP:port tuple. | Field | Description | Scheme | Required | | ----- | ----------- | ------ | -------- | | dnsOrIP | DNSOrIP is a DNS name or an IP address. | [DNSOrIP](#dnsorip) | true | | port | Port must be a valid port number. | uint32 | true | [Back to TOC](#table-of-contents) PeerList is a list of peers. | Field | Description | Scheme | Required | | ----- | ----------- | ------ | -------- | | metadata | Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta) | false | | items | List of peers. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md | [][Peer](#peer) | true | [Back to TOC](#table-of-contents) PeerSpec is the description and configuration of a peer. | Field | Description | Scheme | Required | | ----- | ----------- | ------ | -------- | | allowedIPs | AllowedIPs is the list of IP addresses that are allowed for the given peer's tunnel. | []string | true | | endpoint | Endpoint is the initial endpoint for connections to the peer. | *[PeerEndpoint](#peerendpoint) | false | | persistentKeepalive | PersistentKeepalive is the interval in seconds of the emission of keepalive packets by the peer. This defaults to 0, which disables the feature. | int | false | | presharedKey | PresharedKey is the optional symmetric encryption key for the peer. | string | false | | publicKey | PublicKey is the WireGuard public key for the peer. | string | true | [Back to TOC](#table-of-contents)"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Kilo",
- "file_name": "bash_completions.md"
- },
- "content": [
- {
- "heading": "Generating Bash Completions For Your cobra.Command",
- "data": "Please refer to [Shell Completions](shell_completions.md) for details."
- },
- {
- "heading": "Bash legacy dynamic completions",
- "data": "For backward compatibility, Cobra still supports its legacy dynamic completion solution (described below). Unlike the `ValidArgsFunction` solution, the legacy solution will only work for Bash shell-completion and not for other shells. This legacy solution can be used along-side `ValidArgsFunction` and `RegisterFlagCompletionFunc()`, as long as both solutions are not used for the same command. This provides a path to gradually migrate from the legacy solution to the new solution. **Note**: Cobra's default `completion` command uses bash completion V2. If you are currently using Cobra's legacy dynamic completion solution, you should not use the default `completion` command but continue using your own. The legacy solution allows you to inject bash functions into the bash completion script. Those bash functions are responsible for providing the completion choices for your own completions. Some code that works in kubernetes: And then I set that in my command definition: The `BashCompletionFunction` option is really only valid/useful on the root command. Doing the above will cause `__kubectl_custom_func()` (`__<command-use>_custom_func()`) to be called when the built-in processor was unable to find a solution. In the case of kubernetes a valid command might look something like `kubectl get pod [mypod]`. If you type `kubectl get pod [tab][tab]` the `__kubectl_custom_func()` will run because the cobra.Command only understood \"kubectl\" and \"get.\" `__kubectl_custom_func()` will see that the cobra.Command is \"kubectl_get\" and will thus call another helper `__kubectl_get_resource()`. `__kubectl_get_resource` will look at the 'nouns' collected. In our example the only noun will be `pod`. So it will call `__kubectl_parse_get pod`. `__kubectl_parse_get` will actually call out to kubernetes and get any pods. It will then set `COMPREPLY` to valid pods! 
Similarly, for flags: In addition, add the `__kubectl_get_namespaces` implementation in the `BashCompletionFunction` value, e.g.:"
- },
- {
- "additional_info": "Please refer to [Shell Completions](shell_completions.md) for details. For backward compatibility, Cobra still supports its legacy dynamic completion solution (described below). Unlike the `ValidArgsFunction` solution, the legacy solution will only work for Bash shell-completion and not for other shells. This legacy solution can be used along-side `ValidArgsFunction` and `RegisterFlagCompletionFunc()`, as long as both solutions are not used for the same command. This provides a path to gradually migrate from the legacy solution to the new solution. **Note**: Cobra's default `completion` command uses bash completion V2. If you are currently using Cobra's legacy dynamic completion solution, you should not use the default `completion` command but continue using your own. The legacy solution allows you to inject bash functions into the bash completion script. Those bash functions are responsible for providing the completion choices for your own completions. Some code that works in kubernetes: ```bash const ( bash_completion_func = `__kubectl_parse_get() { local kubectl_output out if kubectl_output=$(kubectl get --no-headers \"$1\" 2>/dev/null); then out=($(echo \"${kubectl_output}\" | awk '{print $1}')) COMPREPLY=( $( compgen -W \"${out[*]}\" -- \"$cur\" ) ) fi } __kubectl_get_resource() { if [[ ${#nouns[@]} -eq 0 ]]; then return 1 fi __kubectl_parse_get ${nouns[${#nouns[@]} -1]} if [[ $? -eq 0 ]]; then return 0 fi } __kubectl_custom_func() { case ${last_command} in kubectl_get | kubectl_describe | kubectl_delete | kubectl_stop) __kubectl_get_resource return ;; *) ;; esac } `) ``` And then I set that in my command definition: ```go cmds := &cobra.Command{ Use: \"kubectl\", Short: \"kubectl controls the Kubernetes cluster manager\", Long: `kubectl controls the Kubernetes cluster manager. 
Find more information at https://github.com/GoogleCloudPlatform/kubernetes.`, Run: runHelp, BashCompletionFunction: bash_completion_func, } ``` The `BashCompletionFunction` option is really only valid/useful on the root command. Doing the above will cause `__kubectl_custom_func()` (`__<command-use>_custom_func()`) to be called when the built-in processor was unable to find a solution. In the case of kubernetes a valid command might look something like `kubectl get pod [mypod]`. If you type `kubectl get pod [tab][tab]` the `__kubectl_custom_func()` will run because the cobra.Command only understood \"kubectl\" and \"get.\" `__kubectl_custom_func()` will see that the cobra.Command is \"kubectl_get\" and will thus call another helper `__kubectl_get_resource()`. `__kubectl_get_resource` will look at the 'nouns' collected. In our example the only noun will be `pod`. So it will call `__kubectl_parse_get pod`. `__kubectl_parse_get` will actually call out to kubernetes and get any pods. It will then set `COMPREPLY` to valid pods! Similarly, for flags: ```go annotation := make(map[string][]string) annotation[cobra.BashCompCustom] = []string{\"__kubectl_get_namespaces\"} flag := &pflag.Flag{ Name: \"namespace\", Usage: usage, Annotations: annotation, } cmd.Flags().AddFlag(flag) ``` In addition, add the `__kubectl_get_namespaces` implementation in the `BashCompletionFunction` value, e.g.: ```bash __kubectl_get_namespaces() { local template template=\"{{ range .items }}{{ .metadata.name }} {{ end }}\" local kubectl_out if kubectl_out=$(kubectl get -o template --template=\"${template}\" namespace 2>/dev/null); then COMPREPLY=( $( compgen -W \"${kubectl_out[*]}\" -- \"$cur\" ) ) fi } ```"
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Kilo",
- "file_name": "building_kilo.md"
- },
- "content": [
- {
- "heading": "Build and Test Kilo",
- "data": "This document describes how you can build and test Kilo.\n To follow along, you need to install the following utilities:\n - `go`: not for building, but for formatting the code and running the unit tests\n - `make`\n - `jq`\n - `git`\n - `curl`\n - `docker`"
- },
- {
- "heading": "Getting Started",
- "data": "Clone the Repository and `cd` into it."
- },
- {
- "heading": "Build",
- "data": "For consistency, the Kilo binaries are compiled in a Docker container, so make sure the `docker` package is installed and the daemon is running."
- },
- {
- "heading": "Compile Binaries",
- "data": "To compile the `kg` and `kgctl` binaries run:\n Binaries are always placed in a directory corresponding to the local system's OS and architecture following the pattern `bin/<os>/<arch>/`, so on an AMD64 machine running Linux, the binaries will be stored in `bin/linux/amd64/`.\n You can build the binaries for a different architecture by setting the `ARCH` environment variable before invoking `make`, e.g.:\n Likewise, to build `kg` for another OS, set the `OS` environment variable before invoking `make`:"
- },
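The output layout described above (OS first, then architecture, as in `bin/linux/amd64/`) can be expressed as a tiny path helper. `binary_path` is hypothetical; only `kg` and `kgctl` are real binary names from the repository.

```python
# Hypothetical helper mirroring the documented output layout:
# binaries land under bin/, then the OS, then the architecture.
def binary_path(os_name, arch, binary):
    return "bin/%s/%s/%s" % (os_name, arch, binary)
```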
- {
- "heading": "Test",
- "data": "To execute the unit tests, run:\n To lint the code in the repository, run:\n To execute basic end to end tests, run:\n > **Note**: The end to end tests are currently flaky, so try running them again if they fail.\n To instead run all of the tests with a single command, run:"
- },
- {
- "heading": "Build and Push the Container Images",
- "data": "If you want to build containers for a processor architecture that is different from your computer's, then you will first need to configure QEMU as the interpreter for binaries built for non-native architectures: Set the `$IMAGE` environment variable to `<your Docker Hub user name>/kilo`. This way the generated container images and manifests will be named accordingly. By skipping this step, you will be able to tag images but will not be able to push the containers and manifests to your own Docker Hub. If you want to use a different container registry, run: To build containers with the `kg` image for `arm`, `arm64` and `amd64`, run: Push the container images and build a manifest with: To tag and push the manifest with `latest`, run: Now you can deploy the custom build of Kilo to your cluster. If you are already running Kilo, change the image from `squat/kilo` to `[registry/]<your Docker Hub user name>/kilo[:sha]`."
- },
- {
- "additional_info": "This document describes how you can build and test Kilo. To follow along, you need to install the following utilities: - `go`: not for building, but for formatting the code and running the unit tests - `make` - `jq` - `git` - `curl` - `docker` Clone the Repository and `cd` into it. ```shell git clone https://github.com/squat/kilo.git cd kilo ``` For consistency, the Kilo binaries are compiled in a Docker container, so make sure the `docker` package is installed and the daemon is running. To compile the `kg` and `kgctl` binaries run: ```shell make ``` Binaries are always placed in a directory corresponding to the local system's OS and architecture following the pattern `bin/<os>/<arch>/`, so on an AMD64 machine running Linux, the binaries will be stored in `bin/linux/amd64/`. You can build the binaries for a different architecture by setting the `ARCH` environment variable before invoking `make`, e.g.: ```shell ARCH=<arch> make ``` Likewise, to build `kg` for another OS, set the `OS` environment variable before invoking `make`: ```shell OS=<os> make ``` To execute the unit tests, run: ```shell make unit ``` To lint the code in the repository, run: ```shell make lint ``` To execute basic end to end tests, run: ```shell make e2e ``` > **Note**: The end to end tests are currently flaky, so try running them again if they fail. To instead run all of the tests with a single command, run: ```shell make test ``` If you want to build containers for a processor architecture that is different from your computer's, then you will first need to configure QEMU as the interpreter for binaries built for non-native architectures: ```shell docker run --rm --privileged multiarch/qemu-user-static --reset -p yes ``` Set the `$IMAGE` environment variable to `<your Docker Hub user name>/kilo`. This way the generated container images and manifests will be named accordingly. By skipping this step, you will be able to tag images but will not be able to push the containers and manifests to your own Docker Hub. 
```shell export IMAGE=<your Docker Hub user name>/kilo ``` If you want to use a different container registry, run: ```shell export REGISTRY=<registry> ``` To build containers with the `kg` image for `arm`, `arm64` and `amd64`, run: ```shell make all-container ``` Push the container images and build a manifest with: ```shell make manifest ``` To tag and push the manifest with `latest`, run: ```shell make manifest-latest ``` Now you can deploy the custom build of Kilo to your cluster. If you are already running Kilo, change the image from `squat/kilo` to `[registry/]<your Docker Hub user name>/kilo[:sha]`."
- }
- ]
- },
- {
- "tag": {
- "category": "Runtime",
- "subcategory": "Cloud Native Network",
- "project_name": "Kilo",
- "file_name": "building_website.md"
- },
- "content": [
- {
- "heading": "Build and Test the Website",
- "data": "You may have noticed that the `markdown` files in the `/docs` directory are also displayed on [Kilo's website](https://kilo.squat.ai/).\n If you want to add documentation to Kilo, you can start a local webserver to see how the website will look."
- },
- {
- "heading": "Requirements",
- "data": "Install [yarn](https://yarnpkg.com/getting-started/install)."
- },
- {
- "heading": "Build and Run",
- "data": "The markdown files for the website are located in `/website/docs` and are generated from the like-named markdown files in the `/docs` directory and from the corresponding header files without the `.md` extension in the `/website/docs` directory.\n To generate the markdown files in `/website/docs`, run:\n Next, build the website itself by installing the `node_modules` and building the website's HTML from the generated markdown:\n Now, start the website server with:\n This command should have opened a browser window with the website; if not, open your browser and point it to `http://localhost:3000`.\n If you make changes to any of the markdown files in `/docs` and want to reload the local `node` server, run:\n You can execute the above while the node server is running and the website will be rebuilt and reloaded automatically."
- },
- {
- "heading": "Add a New File to the Docs",
- "data": "If you add a new file to the `/docs` directory, you also need to create a corresponding header file containing the front-matter in `/website/docs/`. Then, regenerate the markdown for the website with the command: Edit `/website/sidebars.js` accordingly. > **Note**: The `id` in the header file `/website/docs/