| Unnamed: 0 (int64) | id (float64) | type (string) | created_at (string) | repo (string) | repo_url (string) | action (string) | title (string) | labels (string) | body (string) | index (string) | text_combine (string) | label (string) | text (string) | binary_label (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
410,977 | 12,004,716,443 | IssuesEvent | 2020-04-09 12:07:54 | AY1920S2-CS2103T-F09-4/main | https://api.github.com/repos/AY1920S2-CS2103T-F09-4/main | closed | Implement Help Window with help for basic commands for each page | priority.Medium type.Epic | Rather than just giving the link to our UG, I think we should at least give some basic commands to the user in the help window popup.
Each view to have a personalized help window popup. | 1.0 | Implement Help Window with help for basic commands for each page - Rather than just giving the link to our UG, I think we should at least give some basic commands to the user in the help window popup.
Each view to have a personalized help window popup. | non_infrastructure | implement help window with help for basic commands for each page rather than just giving the link to our ug i think we should at least give some basic commands to the user in the help window popup each view to have a personalized help window popup | 0 |
943 | 3,006,482,602 | IssuesEvent | 2015-07-27 10:41:59 | Itseez/opencv | https://api.github.com/repos/Itseez/opencv | opened | Download for dll pdb files | affected: master auto-transferred bug category: infrastructure priority: normal | Transferred from http://code.opencv.org/issues/3876
```
|| Philipp Hasper on 2014-08-22 08:50
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: infrastructure
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: Any / Windows
```
Download for dll pdb files
-----------
```
(This is actually not a bug but a feature request; where is the right place to put requests?)
It would be really nice to provide the pdb files for the OpenCV dlls as a separate download. I know that the filesize is big and that you can compile it yourself but
# Debugging own code without the OpenCV debug information is really really hard - I would even say useless (somewhere deep down the code an assertion containing 2-5 conditions failed. And now?)
# Storage is cheap and the internet is fast nowadays
# Actually, if you compress the pdb files they are not that big - 80 MB for one version
# Building OpenCV takes time if you want to include all performance-boosting libraries
# Switching to a new OpenCV version would take half the time if you didn't have to build it yourself
So as a conclusion: I think the prebuilt library is not suitable for debugging. So if you are so nice as to provide prebuilt files, why not deliver the full package?
```
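The compile-it-yourself route the reporter mentions boils down to a CMake configure-and-build. As an illustrative sketch (not part of the original issue, and assuming CMake and Visual Studio are installed), a Windows build that emits the DLLs together with their PDB files might look like:

```shell
# Configure OpenCV as a release build that still emits PDB debug symbols.
# RelWithDebInfo keeps compiler optimizations while writing .pdb files.
cmake -S opencv -B build -G "Visual Studio 16 2019" -D CMAKE_BUILD_TYPE=RelWithDebInfo

# Build on all available cores; the multithreaded build discussed in the
# comments below is what keeps the rebuild time reasonable.
cmake --build build --config RelWithDebInfo --parallel
```

With an MSVC generator, the DLLs and their matching PDBs typically land together under the build tree (e.g. `build/bin/RelWithDebInfo/`); the exact paths depend on the generator and configuration chosen.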
History
-------
##### Steven Puttemans on 2014-08-22 09:21
```
Afaik PDB files are system and configuration specific. So it is kind of hard to deliver them for each OS version - OpenCV version - VS version combination out there. That is why they decided not to include this in the public downloads. Actually, if you configure everything well and build multithreaded, rebuilding OpenCV shouldn't take more than half an hour, even with all optimizations.
- Assignee set to Roman Donchenko
- Status changed from New to Open
```
##### Philipp Hasper on 2014-08-22 09:55
```
Building time itself is indeed no big problem, you are right; but collecting all the necessary libraries is rather tedious. I did not know that PDB files are OS dependent, I'm new to this concept. Why does the staticlib folder contain PDB files then?
```
##### Vladislav Vinogradov on 2014-09-30 11:19
```
- Category set to infrastructure
``` | 1.0 | Download for dll pdb files - Transferred from http://code.opencv.org/issues/3876
```
|| Philipp Hasper on 2014-08-22 08:50
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: infrastructure
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: Any / Windows
```
Download for dll pdb files
-----------
```
(This is actually not a bug but a feature request; where is the right place to put requests?)
It would be really nice to provide the pdb files for the OpenCV dlls as a separate download. I know that the filesize is big and that you can compile it yourself but
# Debugging own code without the OpenCV debug information is really really hard - I would even say useless (somewhere deep down the code an assertion containing 2-5 conditions failed. And now?)
# Storage is cheap and the internet is fast nowadays
# Actually, if you compress the pdb files they are not that big - 80 MB for one version
# Building OpenCV takes time if you want to include all performance-boosting libraries
# Switching to a new OpenCV version would take half the time if you didn't have to build it yourself
So as a conclusion: I think the prebuilt library is not suitable for debugging. So if you are so nice as to provide prebuilt files, why not deliver the full package?
```
History
-------
##### Steven Puttemans on 2014-08-22 09:21
```
Afaik PDB files are system and configuration specific. So it is kind of hard to deliver them for each OS version - OpenCV version - VS version combination out there. That is why they decided not to include this in the public downloads. Actually, if you configure everything well and build multithreaded, rebuilding OpenCV shouldn't take more than half an hour, even with all optimizations.
- Assignee set to Roman Donchenko
- Status changed from New to Open
```
##### Philipp Hasper on 2014-08-22 09:55
```
Building time itself is indeed no big problem, you are right; but collecting all the necessary libraries is rather tedious. I did not know that PDB files are OS dependent, I'm new to this concept. Why does the staticlib folder contain PDB files then?
```
##### Vladislav Vinogradov on 2014-09-30 11:19
```
- Category set to infrastructure
``` | infrastructure | download for dll pdb files transferred from philipp hasper on priority normal affected branch master dev category infrastructure tracker bug difficulty pr platform any windows download for dll pdb files this is actually not a bug but a feature request where is the right place to put requests it would be really nice to provide the pdb files for the opencv dlls as a separate download i know that the filesize is big and that you can compile it yourself but debugging own code without the opencv debug information is really really hard i would even say useless somewhere deep down the code an assertion containing conditions failed and now storage is cheap and the internet is fast nowadays actually if you compress the pdb files they are not that big mb for one version building opencv takes time if you want to include all performance boosting libraries switching to a new opencv version would take half the time if you didn t have to build it yourself so as a conclusion i think the prebuild libaray is not suitable for debugging so if you are so nice to provide prebuild files why not deliver the full package history steven puttemans on afaik pdb files are system and configuration specific so it is kind of hard to deliver them for each os version opencv version vs studio version combination out there that is why they decided not to include this into the public downloads actually if you configure everything good and you build your system multithreaded rebuilding opencv should nt take more than half an hour even with all optimizations assignee set to roman donchenko status changed from new to open philipp hasper on building time itself is indeed no big problem you are right but collecting all the necessary libraries is rather tedious i did not know that pdb files are os dependent i m new to this concept why does the staticlib folder contain pdb files then vladislav vinogradov on category set to infrastructure | 1 |
201,456 | 15,207,537,226 | IssuesEvent | 2021-02-17 00:25:40 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | opened | Stabilize TestFunctional/parallel/DockerEnv integration test | kind/failing-test kind/flake | TestFunctional/parallel/DockerEnv flakes when run in GitHub actions.
Example of failed run:
```
2021-02-17T00:01:22.7911009Z helpers_test.go:240: <<< TestFunctional/parallel/DockerEnv FAILED: start of post-mortem logs <<<
2021-02-17T00:01:22.7912978Z helpers_test.go:241: ======> post-mortem[TestFunctional/parallel/DockerEnv]: minikube logs <======
2021-02-17T00:01:22.7915014Z helpers_test.go:243: (dbg) Run: ./minikube-linux-arm64 -p functional-20210216235525-2779755 logs -n 25
2021-02-17T00:01:23.4137669Z === CONT TestFunctional/parallel/TunnelCmd/serial/WaitService
2021-02-17T00:01:23.4140067Z helpers_test.go:335: "nginx-svc" [e262f289-58b0-4c41-aad0-b1f27b215a87] Running
2021-02-17T00:01:25.7045729Z === CONT TestFunctional/parallel/DockerEnv
2021-02-17T00:01:25.7048581Z helpers_test.go:243: (dbg) Done: ./minikube-linux-arm64 -p functional-20210216235525-2779755 logs -n 25: (2.912515242s)
2021-02-17T00:01:25.7127524Z helpers_test.go:248: TestFunctional/parallel/DockerEnv logs:
2021-02-17T00:01:25.7129836Z -- stdout --
2021-02-17T00:01:25.7130708Z * ==> Docker <==
2021-02-17T00:01:25.7131897Z * -- Logs begin at Tue 2021-02-16 23:57:11 UTC, end at Wed 2021-02-17 00:01:23 UTC. --
2021-02-17T00:01:25.7133743Z * Feb 16 23:58:20 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:20.769589195Z" level=error msg="stream copy error: reading from a closed fifo"
2021-02-17T00:01:25.7137288Z * Feb 16 23:58:20 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:20.944284964Z" level=error msg="82f970ae90ca4670a6bb734aee75fec4db961a63fea4557488a658b950d32d9a cleanup: failed to delete container from containerd: no such container"
2021-02-17T00:01:25.7142581Z * Feb 16 23:58:20 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:20.945330107Z" level=error msg="Handler for POST /v1.40/containers/82f970ae90ca4670a6bb734aee75fec4db961a63fea4557488a658b950d32d9a/start returned error: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: process_linux.go:448: writing syncT 'resume' caused: write init-p: broken pipe: unknown"
2021-02-17T00:01:25.7147074Z * Feb 16 23:58:21 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:21.103453944Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
2021-02-17T00:01:25.7155725Z * Feb 17 00:00:57 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:57.998953278Z" level=info msg="ignoring event" container=8fd1325a18ee143988be3547727d8bb1983f6642dca67c24eaa2e156fbdcedf8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7160823Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.007947254Z" level=info msg="ignoring event" container=61b9482f4323073909d3a860ac936509130f852404978d404900ec38e54e0200 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7164620Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.015376624Z" level=info msg="ignoring event" container=6136c721d0d7941a59afa82cc01c7280ca7b2d7261f750f68192a95b65f4844a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7169285Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.057495247Z" level=info msg="ignoring event" container=0f7cb48a86e1bdc0327f62552c3aabf601652416c933f2b744808cbd149eb4bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7175111Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.065156312Z" level=info msg="ignoring event" container=4bf1331ef083bce8b8a4165534423ed97620ceb9a4843cf81a4f7085c2a22ef6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7180025Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.099294398Z" level=info msg="ignoring event" container=cc20d60fcb71c66921736e5e823362b244abe438582875b5aa83ff0b4cb7ad11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7184353Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.110503491Z" level=info msg="ignoring event" container=5125090049bcd369f289c201a30a074c0cc5d55cc354b91a8f6cd5f2adff9e99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7188513Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.135775116Z" level=info msg="ignoring event" container=adda7f4f538315d4d60bb5b953421c1b20eaf7a100fe60f28de1dcab458915d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7192663Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.174797694Z" level=info msg="ignoring event" container=76a5892bf39bb79234bdec9a159fbf32376330a142812a83e967896484ec4b56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7196657Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.193196620Z" level=info msg="ignoring event" container=42999b8184eca1699b899b5d048429eef0cb313b9a0bce3f9d103641b909aab1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7217822Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.193237973Z" level=info msg="ignoring event" container=65452e92862d23b63a6bca266a66dfdd3feb1d39dbe07ceafc55c2c945fa25ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7224584Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.193264828Z" level=info msg="ignoring event" container=f299474e9f3cffff18b490e6578c55864bf4a171d0d0a402e40f1ffd1c4bfbb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7229128Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.398598093Z" level=info msg="ignoring event" container=f2e3cd415a888cf60100fbf5fb58a54a47731dddeb32463a3d8e1aa8ac3a8d09 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7234766Z * Feb 17 00:01:02 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:02.992413832Z" level=info msg="ignoring event" container=c6da32b7234d60e5a536b37b5d58cc2ee7f094c01a560cf7b71ad239de05a89e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7239798Z * Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:05.449034180Z" level=error msg="Handler for GET /v1.40/containers/aa25c43bff27d671e4dd7215cb95bd9abe1a7f4227ad5c564af3797019a42c70/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
2021-02-17T00:01:25.7246258Z * Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:05.450072234Z" level=error msg="Handler for GET /v1.40/containers/aa25c43bff27d671e4dd7215cb95bd9abe1a7f4227ad5c564af3797019a42c70/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
2021-02-17T00:01:25.7252441Z * Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
2021-02-17T00:01:25.7256664Z * Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
2021-02-17T00:01:25.7260462Z * Feb 17 00:01:15 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:15.207118651Z" level=info msg="ignoring event" container=c48c263e44c9c2f75bc3f7c5a42c1ff3b9db3bbe83f3a81c18c1553d091d6d80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7264918Z * Feb 17 00:01:15 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:15.269276332Z" level=info msg="ignoring event" container=f61b22da999cb0b63e1389394cad98ba5abdc954f772c957f6a5c3f0458c294e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7268345Z * Feb 17 00:01:15 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:15.855757804Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
2021-02-17T00:01:25.7269703Z *
2021-02-17T00:01:25.7270226Z * ==> container status <==
2021-02-17T00:01:25.7271062Z * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
2021-02-17T00:01:25.7274144Z * f59bd12c23289 nginx@sha256:c2ce58e024275728b00a554ac25628af25c54782865b3487b11c21cafb7fabda 5 seconds ago Running nginx 0 8472496ad852e
2021-02-17T00:01:25.7276688Z * a07ef8bb4a5d8 95d99817fc335 8 seconds ago Running kube-apiserver 0 b501d6f1e4173
2021-02-17T00:01:25.7279449Z * 79e4f0230c9d5 db91994f4ee8f 8 seconds ago Running coredns 1 8416187a50e92
2021-02-17T00:01:25.7302430Z * 60cc59b481124 788e63d07298d 24 seconds ago Running kube-proxy 1 febddf7be60d8
2021-02-17T00:01:25.7305102Z * aa25c43bff27d 84bee7cc4870e 24 seconds ago Running storage-provisioner 1 ce148f582a8ed
2021-02-17T00:01:25.7307307Z * 9f35eeb44c8f7 60d957e44ec8a 25 seconds ago Running kube-scheduler 1 e03ded6bf9e51
2021-02-17T00:01:25.7309672Z * 71db52d9a3e8f 3a1a2b528610a 25 seconds ago Running kube-controller-manager 1 db4a886c25f5b
2021-02-17T00:01:25.7311714Z * f61b22da999cb 95d99817fc335 25 seconds ago Exited kube-apiserver 1 c48c263e44c9c
2021-02-17T00:01:25.7314122Z * 8f607bf42a9f1 05b738aa1bc63 25 seconds ago Running etcd 1 f0319c08752b2
2021-02-17T00:01:25.7315925Z * 65452e92862d2 84bee7cc4870e 2 minutes ago Exited storage-provisioner 0 42999b8184eca
2021-02-17T00:01:25.7317392Z * c6da32b7234d6 db91994f4ee8f 3 minutes ago Exited coredns 0 76a5892bf39bb
2021-02-17T00:01:25.7319364Z * 5125090049bcd 788e63d07298d 3 minutes ago Exited kube-proxy 0 0f7cb48a86e1b
2021-02-17T00:01:25.7321227Z * f299474e9f3cf 60d957e44ec8a 3 minutes ago Exited kube-scheduler 0 6136c721d0d79
2021-02-17T00:01:25.7323886Z * 4bf1331ef083b 3a1a2b528610a 3 minutes ago Exited kube-controller-manager 0 61b9482f43230
2021-02-17T00:01:25.7325479Z * 8fd1325a18ee1 05b738aa1bc63 3 minutes ago Exited etcd 0 cc20d60fcb71c
2021-02-17T00:01:25.7326394Z *
2021-02-17T00:01:25.7327387Z * ==> coredns [79e4f0230c9d] <==
2021-02-17T00:01:25.7328034Z * .:53
2021-02-17T00:01:25.7329015Z * [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
2021-02-17T00:01:25.7330417Z * CoreDNS-1.7.0
2021-02-17T00:01:25.7331152Z * linux/arm64, go1.14.4, f59c03d
2021-02-17T00:01:25.7332015Z * [INFO] plugin/ready: Still waiting on: "kubernetes"
2021-02-17T00:01:25.7333190Z *
2021-02-17T00:01:25.7333766Z * ==> coredns [c6da32b7234d] <==
2021-02-17T00:01:25.7336756Z * E0217 00:00:57.837669 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=243&timeout=7m7s&timeoutSeconds=427&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.7340796Z * E0217 00:00:57.837865 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=580&timeout=8m26s&timeoutSeconds=506&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.7344504Z * E0217 00:00:57.837884 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=201&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.7346573Z * .:53
2021-02-17T00:01:25.7347430Z * [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
2021-02-17T00:01:25.7348771Z * CoreDNS-1.7.0
2021-02-17T00:01:25.7349380Z * linux/arm64, go1.14.4, f59c03d
2021-02-17T00:01:25.7350146Z * [INFO] SIGTERM: Shutting down servers then terminating
2021-02-17T00:01:25.7357211Z * [INFO] plugin/health: Going into lameduck mode for 5s
2021-02-17T00:01:25.7357865Z *
2021-02-17T00:01:25.7358391Z * ==> describe nodes <==
2021-02-17T00:01:25.7359539Z * Name: functional-20210216235525-2779755
2021-02-17T00:01:25.7361176Z * Roles: control-plane,master
2021-02-17T00:01:25.7362058Z * Labels: beta.kubernetes.io/arch=arm64
2021-02-17T00:01:25.7362912Z * beta.kubernetes.io/os=linux
2021-02-17T00:01:25.7363837Z * kubernetes.io/arch=arm64
2021-02-17T00:01:25.7365065Z * kubernetes.io/hostname=functional-20210216235525-2779755
2021-02-17T00:01:25.7366056Z * kubernetes.io/os=linux
2021-02-17T00:01:25.7367032Z * minikube.k8s.io/commit=3bdb549339cf69353b01a489c6dbe349d7066bcf
2021-02-17T00:01:25.7368468Z * minikube.k8s.io/name=functional-20210216235525-2779755
2021-02-17T00:01:25.7369498Z * minikube.k8s.io/updated_at=2021_02_16T23_58_02_0700
2021-02-17T00:01:25.7370515Z * minikube.k8s.io/version=v1.17.1
2021-02-17T00:01:25.7371725Z * node-role.kubernetes.io/control-plane=
2021-02-17T00:01:25.7372950Z * node-role.kubernetes.io/master=
2021-02-17T00:01:25.7374401Z * Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
2021-02-17T00:01:25.7375667Z * node.alpha.kubernetes.io/ttl: 0
2021-02-17T00:01:25.7377172Z * volumes.kubernetes.io/controller-managed-attach-detach: true
2021-02-17T00:01:25.7378443Z * CreationTimestamp: Tue, 16 Feb 2021 23:57:59 +0000
2021-02-17T00:01:25.7379123Z * Taints: <none>
2021-02-17T00:01:25.7379720Z * Unschedulable: false
2021-02-17T00:01:25.7380293Z * Lease:
2021-02-17T00:01:25.7381276Z * HolderIdentity: functional-20210216235525-2779755
2021-02-17T00:01:25.7382189Z * AcquireTime: <unset>
2021-02-17T00:01:25.7382846Z * RenewTime: Wed, 17 Feb 2021 00:01:22 +0000
2021-02-17T00:01:25.7383454Z * Conditions:
2021-02-17T00:01:25.7384418Z * Type Status LastHeartbeatTime LastTransitionTime Reason Message
2021-02-17T00:01:25.7385801Z * ---- ------ ----------------- ------------------ ------ -------
2021-02-17T00:01:25.7387207Z * MemoryPressure False Wed, 17 Feb 2021 00:01:14 +0000 Tue, 16 Feb 2021 23:57:53 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
2021-02-17T00:01:25.7388988Z * DiskPressure False Wed, 17 Feb 2021 00:01:14 +0000 Tue, 16 Feb 2021 23:57:53 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
2021-02-17T00:01:25.7391386Z * PIDPressure False Wed, 17 Feb 2021 00:01:14 +0000 Tue, 16 Feb 2021 23:57:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
2021-02-17T00:01:25.7393003Z * Ready True Wed, 17 Feb 2021 00:01:14 +0000 Wed, 17 Feb 2021 00:01:14 +0000 KubeletReady kubelet is posting ready status
2021-02-17T00:01:25.7393880Z * Addresses:
2021-02-17T00:01:25.7394552Z * InternalIP: 192.168.82.108
2021-02-17T00:01:25.7395608Z * Hostname: functional-20210216235525-2779755
2021-02-17T00:01:25.7396388Z * Capacity:
2021-02-17T00:01:25.7396890Z * cpu: 2
2021-02-17T00:01:25.7397931Z * ephemeral-storage: 40474572Ki
2021-02-17T00:01:25.7398790Z * hugepages-1Gi: 0
2021-02-17T00:01:25.7399565Z * hugepages-2Mi: 0
2021-02-17T00:01:25.7400634Z * hugepages-32Mi: 0
2021-02-17T00:01:25.7401430Z * hugepages-64Ki: 0
2021-02-17T00:01:25.7402117Z * memory: 8038232Ki
2021-02-17T00:01:25.7402643Z * pods: 110
2021-02-17T00:01:25.7403315Z * Allocatable:
2021-02-17T00:01:25.7403837Z * cpu: 2
2021-02-17T00:01:25.7404668Z * ephemeral-storage: 40474572Ki
2021-02-17T00:01:25.7405521Z * hugepages-1Gi: 0
2021-02-17T00:01:25.7406290Z * hugepages-2Mi: 0
2021-02-17T00:01:25.7407084Z * hugepages-32Mi: 0
2021-02-17T00:01:25.7407867Z * hugepages-64Ki: 0
2021-02-17T00:01:25.7408463Z * memory: 8038232Ki
2021-02-17T00:01:25.7408978Z * pods: 110
2021-02-17T00:01:25.7409493Z * System Info:
2021-02-17T00:01:25.7410124Z * Machine ID: 46f6444822754a889e4650f359992409
2021-02-17T00:01:25.7411089Z * System UUID: 50408af4-47b4-4574-ab83-34615404919a
2021-02-17T00:01:25.7412556Z * Boot ID: b0b00e66-2c54-4a1e-86bd-8109c5527bb8
2021-02-17T00:01:25.7413773Z * Kernel Version: 5.4.0-1029-aws
2021-02-17T00:01:25.7414457Z * OS Image: Ubuntu 20.04.1 LTS
2021-02-17T00:01:25.7415104Z * Operating System: linux
2021-02-17T00:01:25.7415768Z * Architecture: arm64
2021-02-17T00:01:25.7416552Z * Container Runtime Version: docker://20.10.2
2021-02-17T00:01:25.7417329Z * Kubelet Version: v1.20.2
2021-02-17T00:01:25.7422174Z * Kube-Proxy Version: v1.20.2
2021-02-17T00:01:25.7422887Z * PodCIDR: 10.244.0.0/24
2021-02-17T00:01:25.7424961Z * PodCIDRs: 10.244.0.0/24
2021-02-17T00:01:25.7426007Z * Non-terminated Pods: (8 in total)
2021-02-17T00:01:25.7427015Z * Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
2021-02-17T00:01:25.7428818Z * --------- ---- ------------ ---------- --------------- ------------- ---
2021-02-17T00:01:25.7430054Z * default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12s
2021-02-17T00:01:25.7432070Z * kube-system coredns-74ff55c5b-9jwcl 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 3m6s
2021-02-17T00:01:25.7433776Z * kube-system etcd-functional-20210216235525-2779755 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 3m18s
2021-02-17T00:01:25.7436038Z * kube-system kube-apiserver-functional-20210216235525-2779755 250m (12%) 0 (0%) 0 (0%) 0 (0%) 1s
2021-02-17T00:01:25.7438714Z * kube-system kube-controller-manager-functional-20210216235525-2779755 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m18s
2021-02-17T00:01:25.7441004Z * kube-system kube-proxy-lvfk2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m6s
2021-02-17T00:01:25.7442826Z * kube-system kube-scheduler-functional-20210216235525-2779755 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m18s
2021-02-17T00:01:25.7444679Z * kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m1s
2021-02-17T00:01:25.7445772Z * Allocated resources:
2021-02-17T00:01:25.7446773Z * (Total limits may be over 100 percent, i.e., overcommitted.)
2021-02-17T00:01:25.7447817Z * Resource Requests Limits
2021-02-17T00:01:25.7448680Z * -------- -------- ------
2021-02-17T00:01:25.7449248Z * cpu 750m (37%) 0 (0%)
2021-02-17T00:01:25.7449794Z * memory 170Mi (2%) 170Mi (2%)
2021-02-17T00:01:25.7450633Z * ephemeral-storage 100Mi (0%) 0 (0%)
2021-02-17T00:01:25.7451508Z * hugepages-1Gi 0 (0%) 0 (0%)
2021-02-17T00:01:25.7452334Z * hugepages-2Mi 0 (0%) 0 (0%)
2021-02-17T00:01:25.7453331Z * hugepages-32Mi 0 (0%) 0 (0%)
2021-02-17T00:01:25.7454167Z * hugepages-64Ki 0 (0%) 0 (0%)
2021-02-17T00:01:25.7454751Z * Events:
2021-02-17T00:01:25.7455385Z * Type Reason Age From Message
2021-02-17T00:01:25.7456283Z * ---- ------ ---- ---- -------
2021-02-17T00:01:25.7458703Z * Normal NodeHasSufficientMemory 3m34s (x4 over 3m34s) kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientMemory
2021-02-17T00:01:25.7461425Z * Normal NodeHasNoDiskPressure 3m34s (x5 over 3m34s) kubelet Node functional-20210216235525-2779755 status is now: NodeHasNoDiskPressure
2021-02-17T00:01:25.7463881Z * Normal NodeHasSufficientPID 3m34s (x4 over 3m34s) kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientPID
2021-02-17T00:01:25.7465381Z * Normal Starting 3m18s kubelet Starting kubelet.
2021-02-17T00:01:25.7467180Z * Normal NodeHasSufficientMemory 3m18s kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientMemory
2021-02-17T00:01:25.7470701Z * Normal NodeHasNoDiskPressure 3m18s kubelet Node functional-20210216235525-2779755 status is now: NodeHasNoDiskPressure
2021-02-17T00:01:25.7473635Z * Normal NodeHasSufficientPID 3m18s kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientPID
2021-02-17T00:01:25.7476442Z * Normal NodeNotReady 3m18s kubelet Node functional-20210216235525-2779755 status is now: NodeNotReady
2021-02-17T00:01:25.7479244Z * Normal NodeAllocatableEnforced 3m18s kubelet Updated Node Allocatable limit across pods
2021-02-17T00:01:25.7481460Z * Normal NodeReady 3m8s kubelet Node functional-20210216235525-2779755 status is now: NodeReady
2021-02-17T00:01:25.7484473Z * Normal Starting 3m4s kube-proxy Starting kube-proxy.
2021-02-17T00:01:25.7486650Z * Normal Starting 15s kube-proxy Starting kube-proxy.
2021-02-17T00:01:25.7489122Z * Normal Starting 12s kubelet Starting kubelet.
2021-02-17T00:01:25.7491506Z * Normal NodeHasSufficientMemory 11s kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientMemory
2021-02-17T00:01:25.7494691Z * Normal NodeHasNoDiskPressure 11s kubelet Node functional-20210216235525-2779755 status is now: NodeHasNoDiskPressure
2021-02-17T00:01:25.7497382Z * Normal NodeHasSufficientPID 11s kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientPID
2021-02-17T00:01:25.7499463Z * Normal NodeNotReady 11s kubelet Node functional-20210216235525-2779755 status is now: NodeNotReady
2021-02-17T00:01:25.7501329Z * Normal NodeAllocatableEnforced 11s kubelet Updated Node Allocatable limit across pods
2021-02-17T00:01:25.7503230Z * Normal NodeReady 10s kubelet Node functional-20210216235525-2779755 status is now: NodeReady
2021-02-17T00:01:25.7504206Z *
2021-02-17T00:01:25.7504664Z * ==> dmesg <==
2021-02-17T00:01:25.7505447Z * [ +0.000862] FS-Cache: O-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7506428Z * [ +0.000668] FS-Cache: N-cookie c=000000002b1f8ab3 [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7507440Z * [ +0.001050] FS-Cache: N-cookie d=00000000866407ee n=000000005e953fae
2021-02-17T00:01:25.7508334Z * [ +0.000918] FS-Cache: N-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7509257Z * [ +0.013502] FS-Cache: Duplicate cookie detected
2021-02-17T00:01:25.7510297Z * [ +0.000689] FS-Cache: O-cookie c=00000000b1a9545c [p=000000008bc3ac66 fl=226 nc=0 na=1]
2021-02-17T00:01:25.7511311Z * [ +0.001078] FS-Cache: O-cookie d=00000000866407ee n=00000000cc8b7d72
2021-02-17T00:01:25.7512210Z * [ +0.000937] FS-Cache: O-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7513190Z * [ +0.000674] FS-Cache: N-cookie c=000000002b1f8ab3 [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7514304Z * [ +0.001054] FS-Cache: N-cookie d=00000000866407ee n=000000001a6a5283
2021-02-17T00:01:25.7515254Z * [ +0.000854] FS-Cache: N-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7516179Z * [ +1.733025] FS-Cache: Duplicate cookie detected
2021-02-17T00:01:25.7517215Z * [ +0.000664] FS-Cache: O-cookie c=00000000524c02db [p=000000008bc3ac66 fl=226 nc=0 na=1]
2021-02-17T00:01:25.7518215Z * [ +0.001123] FS-Cache: O-cookie d=00000000866407ee n=000000000f2cbff9
2021-02-17T00:01:25.7519104Z * [ +0.000853] FS-Cache: O-key=[8] 'd41c040000000000'
2021-02-17T00:01:25.7520198Z * [ +0.000669] FS-Cache: N-cookie c=00000000dc53534f [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7521252Z * [ +0.001112] FS-Cache: N-cookie d=00000000866407ee n=0000000012ba97ce
2021-02-17T00:01:25.7522153Z * [ +0.000856] FS-Cache: N-key=[8] 'd41c040000000000'
2021-02-17T00:01:25.7523086Z * [ +0.346794] FS-Cache: Duplicate cookie detected
2021-02-17T00:01:25.7525413Z * [ +0.000654] FS-Cache: O-cookie c=000000002f236a72 [p=000000008bc3ac66 fl=226 nc=0 na=1]
2021-02-17T00:01:25.7526628Z * [ +0.001105] FS-Cache: O-cookie d=00000000866407ee n=000000005ebbc510
2021-02-17T00:01:25.7527582Z * [ +0.000843] FS-Cache: O-key=[8] 'd71c040000000000'
2021-02-17T00:01:25.7528541Z * [ +0.000636] FS-Cache: N-cookie c=00000000d47b852c [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7529542Z * [ +0.001088] FS-Cache: N-cookie d=00000000866407ee n=00000000a03ebc34
2021-02-17T00:01:25.7530447Z * [ +0.000888] FS-Cache: N-key=[8] 'd71c040000000000'
2021-02-17T00:01:25.7531006Z *
2021-02-17T00:01:25.7531497Z * ==> etcd [8f607bf42a9f] <==
2021-02-17T00:01:25.7532319Z * 2021-02-17 00:00:59.424921 I | embed: initial cluster =
2021-02-17T00:01:25.7533605Z * 2021-02-17 00:00:59.463680 I | etcdserver: restarting member 8bf199ee24c8c3e2 in cluster f398ff6fd447e89b at commit index 641
2021-02-17T00:01:25.7534821Z * raft2021/02/17 00:00:59 INFO: 8bf199ee24c8c3e2 switched to configuration voters=()
2021-02-17T00:01:25.7535811Z * raft2021/02/17 00:00:59 INFO: 8bf199ee24c8c3e2 became follower at term 2
2021-02-17T00:01:25.7537171Z * raft2021/02/17 00:00:59 INFO: newRaft 8bf199ee24c8c3e2 [peers: [], term: 2, commit: 641, applied: 0, lastindex: 641, lastterm: 2]
2021-02-17T00:01:25.7538580Z * 2021-02-17 00:00:59.502906 W | auth: simple token is not cryptographically signed
2021-02-17T00:01:25.7539912Z * 2021-02-17 00:00:59.527298 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2021-02-17T00:01:25.7541254Z * raft2021/02/17 00:00:59 INFO: 8bf199ee24c8c3e2 switched to configuration voters=(10084010288757654498)
2021-02-17T00:01:25.7543183Z * 2021-02-17 00:00:59.532654 I | etcdserver/membership: added member 8bf199ee24c8c3e2 [https://192.168.82.108:2380] to cluster f398ff6fd447e89b
2021-02-17T00:01:25.7544745Z * 2021-02-17 00:00:59.532875 N | etcdserver/membership: set the initial cluster version to 3.4
2021-02-17T00:01:25.7545990Z * 2021-02-17 00:00:59.533626 I | etcdserver/api: enabled capabilities for version 3.4
2021-02-17T00:01:25.7547957Z * 2021-02-17 00:00:59.551573 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2021-02-17T00:01:25.7549818Z * 2021-02-17 00:00:59.555394 I | embed: listening for metrics on http://127.0.0.1:2381
2021-02-17T00:01:25.7550924Z * 2021-02-17 00:00:59.555771 I | embed: listening for peers on 192.168.82.108:2380
2021-02-17T00:01:25.7551837Z * raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 is starting a new election at term 2
2021-02-17T00:01:25.7552803Z * raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 became candidate at term 3
2021-02-17T00:01:25.7554198Z * raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 received MsgVoteResp from 8bf199ee24c8c3e2 at term 3
2021-02-17T00:01:25.7555376Z * raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 became leader at term 3
2021-02-17T00:01:25.7556461Z * raft2021/02/17 00:01:00 INFO: raft.node: 8bf199ee24c8c3e2 elected leader 8bf199ee24c8c3e2 at term 3
2021-02-17T00:01:25.7579690Z * 2021-02-17 00:01:00.898434 I | etcdserver: published {Name:functional-20210216235525-2779755 ClientURLs:[https://192.168.82.108:2379]} to cluster f398ff6fd447e89b
2021-02-17T00:01:25.7581550Z * 2021-02-17 00:01:00.898584 I | embed: ready to serve client requests
2021-02-17T00:01:25.7582599Z * 2021-02-17 00:01:00.901801 I | embed: serving client requests on 192.168.82.108:2379
2021-02-17T00:01:25.7583620Z * 2021-02-17 00:01:00.902770 I | embed: ready to serve client requests
2021-02-17T00:01:25.7584648Z * 2021-02-17 00:01:00.909680 I | embed: serving client requests on 127.0.0.1:2379
2021-02-17T00:01:25.7585744Z * 2021-02-17 00:01:22.638430 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7586427Z *
2021-02-17T00:01:25.7586932Z * ==> etcd [8fd1325a18ee] <==
2021-02-17T00:01:25.7587815Z * 2021-02-16 23:57:52.464452 I | embed: ready to serve client requests
2021-02-17T00:01:25.7588828Z * 2021-02-16 23:57:52.465545 I | embed: ready to serve client requests
2021-02-17T00:01:25.7589843Z * 2021-02-16 23:57:52.466747 I | embed: serving client requests on 127.0.0.1:2379
2021-02-17T00:01:25.7590885Z * 2021-02-16 23:57:52.472829 I | embed: serving client requests on 192.168.82.108:2379
2021-02-17T00:01:25.7591972Z * 2021-02-16 23:58:16.034982 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7593110Z * 2021-02-16 23:58:19.927093 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7594266Z * 2021-02-16 23:58:29.925620 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7595407Z * 2021-02-16 23:58:39.925441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7596558Z * 2021-02-16 23:58:49.925551 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7597699Z * 2021-02-16 23:58:59.925554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7598850Z * 2021-02-16 23:59:09.925573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7600433Z * 2021-02-16 23:59:19.925451 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7601706Z * 2021-02-16 23:59:29.925560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7602914Z * 2021-02-16 23:59:39.925644 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7604104Z * 2021-02-16 23:59:49.925434 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7605590Z * 2021-02-16 23:59:59.925511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7607200Z * 2021-02-17 00:00:09.925431 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7608417Z * 2021-02-17 00:00:19.925514 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7609579Z * 2021-02-17 00:00:29.928682 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7610726Z * 2021-02-17 00:00:39.925484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7612079Z * 2021-02-17 00:00:49.925819 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7613336Z * 2021-02-17 00:00:57.845440 N | pkg/osutil: received terminated signal, shutting down...
2021-02-17T00:01:25.7615432Z * WARNING: 2021/02/17 00:00:57 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2021-02-17T00:01:25.7617547Z * 2021-02-17 00:00:57.854805 I | etcdserver: skipped leadership transfer for single voting member cluster
2021-02-17T00:01:25.7619372Z * WARNING: 2021/02/17 00:00:57 grpc: addrConn.createTransport failed to connect to {192.168.82.108:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.82.108:2379: connect: connection refused". Reconnecting...
2021-02-17T00:01:25.7620784Z *
2021-02-17T00:01:25.7621255Z * ==> kernel <==
2021-02-17T00:01:25.7621870Z * 00:01:24 up 27 days, 21:57, 0 users, load average: 4.69, 3.22, 1.88
2021-02-17T00:01:25.7623252Z * Linux functional-20210216235525-2779755 5.4.0-1029-aws #30-Ubuntu SMP Tue Oct 20 10:08:09 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
2021-02-17T00:01:25.7624316Z * PRETTY_NAME="Ubuntu 20.04.1 LTS"
2021-02-17T00:01:25.7624843Z *
2021-02-17T00:01:25.7625598Z * ==> kube-apiserver [a07ef8bb4a5d] <==
2021-02-17T00:01:25.7626685Z * I0217 00:01:22.429463 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
2021-02-17T00:01:25.7628103Z * I0217 00:01:22.429636 1 available_controller.go:475] Starting AvailableConditionController
2021-02-17T00:01:25.7629643Z * I0217 00:01:22.429648 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
2021-02-17T00:01:25.7630913Z * I0217 00:01:22.481887 1 controller.go:86] Starting OpenAPI controller
2021-02-17T00:01:25.7632060Z * I0217 00:01:22.482086 1 naming_controller.go:291] Starting NamingConditionController
2021-02-17T00:01:25.7633385Z * I0217 00:01:22.482141 1 establishing_controller.go:76] Starting EstablishingController
2021-02-17T00:01:25.7635145Z * I0217 00:01:22.482342 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
2021-02-17T00:01:25.7638216Z * I0217 00:01:22.482710 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
2021-02-17T00:01:25.7640369Z * I0217 00:01:22.482746 1 crd_finalizer.go:266] Starting CRDFinalizer
2021-02-17T00:01:25.7641773Z * I0217 00:01:22.691475 1 crdregistration_controller.go:111] Starting crd-autoregister controller
2021-02-17T00:01:25.7643443Z * I0217 00:01:22.691631 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
2021-02-17T00:01:25.7645426Z * I0217 00:01:22.691734 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
2021-02-17T00:01:25.7647394Z * I0217 00:01:22.692223 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
2021-02-17T00:01:25.7648930Z * I0217 00:01:22.891716 1 shared_informer.go:247] Caches are synced for crd-autoregister
2021-02-17T00:01:25.7650524Z * E0217 00:01:22.920693 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
2021-02-17T00:01:25.7652389Z * I0217 00:01:22.933643 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2021-02-17T00:01:25.7653992Z * I0217 00:01:22.942843 1 cache.go:39] Caches are synced for AvailableConditionController controller
2021-02-17T00:01:25.7655259Z * I0217 00:01:22.949532 1 cache.go:39] Caches are synced for autoregister controller
2021-02-17T00:01:25.7656435Z * I0217 00:01:22.950206 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
2021-02-17T00:01:25.7657765Z * I0217 00:01:22.950929 1 apf_controller.go:266] Running API Priority and Fairness config worker
2021-02-17T00:01:25.7659097Z * I0217 00:01:22.997605 1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
2021-02-17T00:01:25.7660689Z * I0217 00:01:23.000657 1 shared_informer.go:247] Caches are synced for node_authorizer
2021-02-17T00:01:25.7661983Z * I0217 00:01:23.421440 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2021-02-17T00:01:25.7663706Z * I0217 00:01:23.421480 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2021-02-17T00:01:25.7665360Z * I0217 00:01:23.452034 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
2021-02-17T00:01:25.7666275Z *
2021-02-17T00:01:25.7667112Z * ==> kube-apiserver [f61b22da999c] <==
2021-02-17T00:01:25.7668152Z * I0217 00:01:08.748768 1 establishing_controller.go:76] Starting EstablishingController
2021-02-17T00:01:25.7669877Z * I0217 00:01:08.748780 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
2021-02-17T00:01:25.7672503Z * I0217 00:01:08.748796 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
2021-02-17T00:01:25.7674800Z * I0217 00:01:08.748815 1 crd_finalizer.go:266] Starting CRDFinalizer
2021-02-17T00:01:25.7676562Z * I0217 00:01:08.748843 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
2021-02-17T00:01:25.7780570Z * I0217 00:01:08.748899 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
2021-02-17T00:01:25.7782716Z * I0217 00:01:08.786444 1 crdregistration_controller.go:111] Starting crd-autoregister controller
2021-02-17T00:01:25.7784338Z * I0217 00:01:08.786463 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
2021-02-17T00:01:25.7785589Z * I0217 00:01:08.909921 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
2021-02-17T00:01:25.7787186Z * I0217 00:01:08.909953 1 shared_informer.go:247] Caches are synced for crd-autoregister
2021-02-17T00:01:25.7788609Z * I0217 00:01:08.909971 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2021-02-17T00:01:25.7790663Z * I0217 00:01:08.922944 1 cache.go:39] Caches are synced for AvailableConditionController controller
2021-02-17T00:01:25.7792008Z * I0217 00:01:08.923459 1 apf_controller.go:266] Running API Priority and Fairness config worker
2021-02-17T00:01:25.7793095Z * I0217 00:01:08.923869 1 cache.go:39] Caches are synced for autoregister controller
2021-02-17T00:01:25.7794133Z * I0217 00:01:08.981381 1 shared_informer.go:247] Caches are synced for node_authorizer
2021-02-17T00:01:25.7796762Z * I0217 00:01:09.570927 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2021-02-17T00:01:25.7798660Z * I0217 00:01:09.570951 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2021-02-17T00:01:25.7800407Z * I0217 00:01:09.604291 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
2021-02-17T00:01:25.7801842Z * I0217 00:01:10.573333 1 controller.go:609] quota admission added evaluator for: serviceaccounts
2021-02-17T00:01:25.7803075Z * I0217 00:01:10.590931 1 controller.go:609] quota admission added evaluator for: deployments.apps
2021-02-17T00:01:25.7804464Z * I0217 00:01:10.637711 1 controller.go:609] quota admission added evaluator for: daemonsets.apps
2021-02-17T00:01:25.7805940Z * I0217 00:01:10.651065 1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
2021-02-17T00:01:25.7807770Z * I0217 00:01:10.656298 1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
2021-02-17T00:01:25.7809965Z * I0217 00:01:12.307049 1 controller.go:609] quota admission added evaluator for: events.events.k8s.io
2021-02-17T00:01:25.7811581Z * I0217 00:01:12.980312 1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
2021-02-17T00:01:25.7812543Z *
2021-02-17T00:01:25.7813566Z * ==> kube-controller-manager [4bf1331ef083] <==
2021-02-17T00:01:25.7815339Z * I0216 23:58:17.971424 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
2021-02-17T00:01:25.7817523Z * I0216 23:58:17.971431 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
2021-02-17T00:01:25.7818952Z * I0216 23:58:17.978572 1 shared_informer.go:247] Caches are synced for daemon sets
2021-02-17T00:01:25.7819890Z * I0216 23:58:17.980580 1 shared_informer.go:247] Caches are synced for job
2021-02-17T00:01:25.7820824Z * I0216 23:58:17.996091 1 shared_informer.go:247] Caches are synced for endpoint
2021-02-17T00:01:25.7821839Z * I0216 23:58:18.001644 1 shared_informer.go:247] Caches are synced for bootstrap_signer
2021-02-17T00:01:25.7823371Z * I0216 23:58:18.044493 1 range_allocator.go:373] Set node functional-20210216235525-2779755 PodCIDR to [10.244.0.0/24]
2021-02-17T00:01:25.7824571Z * I0216 23:58:18.070873 1 shared_informer.go:247] Caches are synced for attach detach
2021-02-17T00:01:25.7825574Z * I0216 23:58:18.072738 1 shared_informer.go:247] Caches are synced for deployment
2021-02-17T00:01:25.7826572Z * I0216 23:58:18.081999 1 shared_informer.go:247] Caches are synced for disruption
2021-02-17T00:01:25.7827544Z * I0216 23:58:18.082020 1 disruption.go:339] Sending events to api server.
2021-02-17T00:01:25.7829562Z * I0216 23:58:18.126753 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
2021-02-17T00:01:25.7832456Z * I0216 23:58:18.126787 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lvfk2"
2021-02-17T00:01:25.7834109Z * I0216 23:58:18.128131 1 shared_informer.go:247] Caches are synced for ReplicaSet
2021-02-17T00:01:25.7835159Z * I0216 23:58:18.128311 1 shared_informer.go:247] Caches are synced for persistent volume
2021-02-17T00:01:25.7836208Z * I0216 23:58:18.171707 1 shared_informer.go:247] Caches are synced for resource quota
2021-02-17T00:01:25.7837214Z * I0216 23:58:18.197220 1 shared_informer.go:247] Caches are synced for resource quota
2021-02-17T00:01:25.7839315Z * I0216 23:58:18.217117 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-9jwcl"
2021-02-17T00:01:25.7842651Z * I0216 23:58:18.226678 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5gqfl"
2021-02-17T00:01:25.7844831Z * I0216 23:58:18.349094 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
2021-02-17T00:01:25.7846126Z * I0216 23:58:18.620820 1 shared_informer.go:247] Caches are synced for garbage collector
2021-02-17T00:01:25.7847521Z * I0216 23:58:18.620850 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
2021-02-17T00:01:25.7848864Z * I0216 23:58:18.649273 1 shared_informer.go:247] Caches are synced for garbage collector
2021-02-17T00:01:25.7851179Z * I0216 23:58:18.882898 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
2021-02-17T00:01:25.7854323Z * I0216 23:58:18.895151 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-5gqfl"
2021-02-17T00:01:25.7855824Z *
2021-02-17T00:01:25.7856719Z * ==> kube-controller-manager [71db52d9a3e8] <==
2021-02-17T00:01:25.7857721Z * I0217 00:01:14.738870 1 shared_informer.go:247] Caches are synced for token_cleaner
2021-02-17T00:01:25.7859363Z * I0217 00:01:14.891621 1 node_ipam_controller.go:91] Sending events to api server.
2021-02-17T00:01:25.7861902Z * W0217 00:01:15.156379 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7865221Z * W0217 00:01:15.156463 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7869025Z * E0217 00:01:22.784748 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: serviceaccounts is forbidden: User "system:kube-controller-manager" cannot list resource "serviceaccounts" in API group "" at the cluster scope
2021-02-17T00:01:25.7872509Z * E0217 00:01:22.790669 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" at the cluster scope
2021-02-17T00:01:25.7874776Z * I0217 00:01:24.895161 1 range_allocator.go:82] Sending events to api server.
2021-02-17T00:01:25.7876058Z * I0217 00:01:24.895278 1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
2021-02-17T00:01:25.7877370Z * I0217 00:01:24.895312 1 controllermanager.go:554] Started "nodeipam"
2021-02-17T00:01:25.7878397Z * I0217 00:01:24.895870 1 node_ipam_controller.go:159] Starting ipam controller
2021-02-17T00:01:25.7879379Z * I0217 00:01:24.895886 1 shared_informer.go:240] Waiting for caches to sync for node
2021-02-17T00:01:25.7880778Z * I0217 00:01:24.896247 1 shared_informer.go:240] Waiting for caches to sync for resource quota
2021-02-17T00:01:25.7884134Z * W0217 00:01:24.924651 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="functional-20210216235525-2779755" does not exist
2021-02-17T00:01:25.7886039Z * I0217 00:01:24.939044 1 shared_informer.go:247] Caches are synced for service account
2021-02-17T00:01:25.7887049Z * I0217 00:01:24.948379 1 shared_informer.go:247] Caches are synced for crt configmap
2021-02-17T00:01:25.7888466Z * I0217 00:01:24.959433 1 shared_informer.go:247] Caches are synced for namespace
2021-02-17T00:01:25.7889640Z * I0217 00:01:24.988829 1 shared_informer.go:247] Caches are synced for expand
2021-02-17T00:01:25.7891265Z * I0217 00:01:24.989039 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
2021-02-17T00:01:25.7892490Z * I0217 00:01:24.989085 1 shared_informer.go:247] Caches are synced for bootstrap_signer
2021-02-17T00:01:25.7893652Z * I0217 00:01:24.991191 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
2021-02-17T00:01:25.7894759Z * I0217 00:01:24.996098 1 shared_informer.go:247] Caches are synced for node
2021-02-17T00:01:25.7895708Z * I0217 00:01:24.996226 1 range_allocator.go:172] Starting range CIDR allocator
2021-02-17T00:01:25.7896727Z * I0217 00:01:24.996250 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
2021-02-17T00:01:25.7897796Z * I0217 00:01:24.996281 1 shared_informer.go:247] Caches are synced for cidrallocator
2021-02-17T00:01:25.7898762Z * I0217 00:01:25.010171 1 shared_informer.go:247] Caches are synced for TTL
2021-02-17T00:01:25.7899424Z *
2021-02-17T00:01:25.7900114Z * ==> kube-proxy [5125090049bc] <==
2021-02-17T00:01:25.7900900Z * I0216 23:58:20.295618 1 node.go:172] Successfully retrieved node IP: 192.168.82.108
2021-02-17T00:01:25.7902345Z * I0216 23:58:20.295698 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.82.108), assume IPv4 operation
2021-02-17T00:01:25.7903508Z * W0216 23:58:20.375690 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
2021-02-17T00:01:25.7904476Z * I0216 23:58:20.375784 1 server_others.go:185] Using iptables Proxier.
2021-02-17T00:01:25.7905287Z * I0216 23:58:20.380839 1 server.go:650] Version: v1.20.2
2021-02-17T00:01:25.7906097Z * I0216 23:58:20.381254 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2021-02-17T00:01:25.7908418Z * I0216 23:58:20.381323 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2021-02-17T00:01:25.7910303Z * I0216 23:58:20.381353 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2021-02-17T00:01:25.7911576Z * I0216 23:58:20.387720 1 config.go:315] Starting service config controller
2021-02-17T00:01:25.7912573Z * I0216 23:58:20.387738 1 shared_informer.go:240] Waiting for caches to sync for service config
2021-02-17T00:01:25.7913832Z * I0216 23:58:20.396779 1 config.go:224] Starting endpoint slice config controller
2021-02-17T00:01:25.7914894Z * I0216 23:58:20.398104 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
2021-02-17T00:01:25.7915977Z * I0216 23:58:20.487872 1 shared_informer.go:247] Caches are synced for service config
2021-02-17T00:01:25.7917021Z * I0216 23:58:20.498664 1 shared_informer.go:247] Caches are synced for endpoint slice config
2021-02-17T00:01:25.7918167Z *
2021-02-17T00:01:25.7919260Z * ==> kube-proxy [60cc59b48112] <==
2021-02-17T00:01:25.7920192Z * I0217 00:01:08.977998 1 node.go:172] Successfully retrieved node IP: 192.168.82.108
2021-02-17T00:01:25.7921737Z * I0217 00:01:08.978264 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.82.108), assume IPv4 operation
2021-02-17T00:01:25.7922891Z * W0217 00:01:09.011677 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
2021-02-17T00:01:25.7923879Z * I0217 00:01:09.015080 1 server_others.go:185] Using iptables Proxier.
2021-02-17T00:01:25.7924683Z * I0217 00:01:09.015281 1 server.go:650] Version: v1.20.2
2021-02-17T00:01:25.7925641Z * I0217 00:01:09.015739 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2021-02-17T00:01:25.7926575Z * I0217 00:01:09.016506 1 config.go:315] Starting service config controller
2021-02-17T00:01:25.7927566Z * I0217 00:01:09.016523 1 shared_informer.go:240] Waiting for caches to sync for service config
2021-02-17T00:01:25.7928582Z * I0217 00:01:09.018671 1 config.go:224] Starting endpoint slice config controller
2021-02-17T00:01:25.7929646Z * I0217 00:01:09.018683 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
2021-02-17T00:01:25.7930704Z * I0217 00:01:09.116645 1 shared_informer.go:247] Caches are synced for service config
2021-02-17T00:01:25.7931759Z * I0217 00:01:09.118788 1 shared_informer.go:247] Caches are synced for endpoint slice config
2021-02-17T00:01:25.7934073Z * W0217 00:01:15.157304 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.EndpointSlice ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7937031Z * W0217 00:01:15.157404 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7941666Z * E0217 00:01:16.183512 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=592": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7944391Z *
2021-02-17T00:01:25.7945234Z * ==> kube-scheduler [9f35eeb44c8f] <==
2021-02-17T00:01:25.7947201Z * W0217 00:01:15.156806 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSINode ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7949992Z * W0217 00:01:15.156846 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7953592Z * W0217 00:01:15.156889 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7957052Z * W0217 00:01:15.156928 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodDisruptionBudget ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7960809Z * W0217 00:01:15.156963 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7963961Z * W0217 00:01:15.157001 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7967061Z * W0217 00:01:15.157040 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7969980Z * W0217 00:01:15.157080 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7972995Z * W0217 00:01:15.159086 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7977058Z * E0217 00:01:15.974491 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.82.108:8441/api/v1/replicationcontrollers?resourceVersion=582": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7980931Z * E0217 00:01:16.044069 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.82.108:8441/apis/storage.k8s.io/v1/storageclasses?resourceVersion=582": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7985612Z * E0217 00:01:16.108933 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.82.108:8441/apis/apps/v1/statefulsets?resourceVersion=582": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7989175Z * E0217 00:01:16.189947 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.82.108:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&resourceVersion=603": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7992944Z * E0217 00:01:22.803828 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.7997070Z * E0217 00:01:22.804049 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2021-02-17T00:01:25.8002135Z * E0217 00:01:22.804260 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2021-02-17T00:01:25.8007098Z * E0217 00:01:22.804335 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2021-02-17T00:01:25.8011639Z * E0217 00:01:22.808221 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2021-02-17T00:01:25.8015822Z * E0217 00:01:22.808327 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8019266Z * E0217 00:01:22.808379 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8023285Z * E0217 00:01:22.808528 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
2021-02-17T00:01:25.8026913Z * E0217 00:01:22.808701 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2021-02-17T00:01:25.8029729Z * E0217 00:01:22.808829 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2021-02-17T00:01:25.8032548Z * E0217 00:01:22.808930 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2021-02-17T00:01:25.8035850Z * E0217 00:01:22.809065 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.8038112Z *
2021-02-17T00:01:25.8039062Z * ==> kube-scheduler [f299474e9f3c] <==
2021-02-17T00:01:25.8040797Z * I0216 23:57:55.081863 1 serving.go:331] Generated self-signed cert in-memory
2021-02-17T00:01:25.8043980Z * W0216 23:57:59.499312 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
2021-02-17T00:01:25.8048744Z * W0216 23:57:59.499512 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
2021-02-17T00:01:25.8051373Z * W0216 23:57:59.499600 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
2021-02-17T00:01:25.8053808Z * W0216 23:57:59.499678 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
2021-02-17T00:01:25.8055545Z * I0216 23:57:59.540243 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
2021-02-17T00:01:25.8057889Z * I0216 23:57:59.542337 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-02-17T00:01:25.8060665Z * I0216 23:57:59.542978 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-02-17T00:01:25.8062627Z * I0216 23:57:59.543078 1 tlsconfig.go:240] Starting DynamicServingCertificateController
2021-02-17T00:01:25.8065478Z * E0216 23:57:59.543966 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8069178Z * E0216 23:57:59.545114 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2021-02-17T00:01:25.8073152Z * E0216 23:57:59.545362 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
2021-02-17T00:01:25.8080136Z * E0216 23:57:59.546148 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2021-02-17T00:01:25.8087699Z * E0216 23:57:59.548607 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2021-02-17T00:01:25.8092265Z * E0216 23:57:59.549854 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2021-02-17T00:01:25.8095729Z * E0216 23:57:59.550907 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2021-02-17T00:01:25.8099195Z * E0216 23:57:59.562699 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2021-02-17T00:01:25.8103457Z * E0216 23:57:59.563182 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.8107607Z * E0216 23:57:59.563416 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.8111158Z * E0216 23:57:59.563845 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2021-02-17T00:01:25.8114406Z * E0216 23:57:59.564278 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8118791Z * E0216 23:58:00.417302 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2021-02-17T00:01:25.8122746Z * I0216 23:58:01.043172 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-02-17T00:01:25.8124072Z *
2021-02-17T00:01:25.8124550Z * ==> kubelet <==
2021-02-17T00:01:25.8125445Z * -- Logs begin at Tue 2021-02-16 23:57:11 UTC, end at Wed 2021-02-17 00:01:25 UTC. --
2021-02-17T00:01:25.8130830Z * Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.780173 15475 status_manager.go:550] Failed to get status for pod "kube-controller-manager-functional-20210216235525-2779755_kube-system(57b8c22dbe6410e4bd36cf14b0f8bdc7)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20210216235525-2779755": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.8136807Z * Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.844570 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-9jwcl through plugin: invalid network status for
2021-02-17T00:01:25.8140037Z * Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.849077 15475 pod_container_deletor.go:79] Container "8416187a50e920331a49c3bbf146074f2c32bb228f3808050534a82ffd8dbef7" not found in pod's containers
2021-02-17T00:01:25.8144851Z * Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.979475 15475 status_manager.go:550] Failed to get status for pod "nginx-svc_default(e262f289-58b0-4c41-aad0-b1f27b215a87)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/default/pods/nginx-svc": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.8148781Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.026809 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8151970Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.074189 15475 pod_container_deletor.go:79] Container "b501d6f1e41731aba59158fda9d32800d305eba9db75cacf081ac9ef75c2233b" not found in pod's containers
2021-02-17T00:01:25.8155789Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.081372 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8158881Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.082831 15475 pod_container_deletor.go:79] Container "8472496ad852e71254ec44abcc9960f802b1dd67d9bdf2ffb853cc3c07c4cb42" not found in pod's containers
2021-02-17T00:01:25.8162532Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.134223 15475 pod_container_deletor.go:79] Container "c48c263e44c9c2f75bc3f7c5a42c1ff3b9db3bbe83f3a81c18c1553d091d6d80" not found in pod's containers
2021-02-17T00:01:25.8166210Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:16.137587 15475 scope.go:95] [topologymanager] RemoveContainer - Container ID: f2e3cd415a888cf60100fbf5fb58a54a47731dddeb32463a3d8e1aa8ac3a8d09
2021-02-17T00:01:25.8170743Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.241862 15475 status_manager.go:550] Failed to get status for pod "kube-proxy-lvfk2_kube-system(22e1a123-9634-4fec-8a72-1034b1968f87)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lvfk2": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.8175424Z * Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:17.140694 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-9jwcl through plugin: invalid network status for
2021-02-17T00:01:25.8178903Z * Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:17.157909 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8182170Z * Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:17.163462 15475 kubelet.go:1621] Trying to delete pod kube-apiserver-functional-20210216235525-2779755_kube-system bba41377-5d01-4c3c-984c-eb882846f88c
2021-02-17T00:01:25.8186597Z * Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:17.579165 15475 request.go:655] Throttling request took 1.064157484s, request: GET:https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkube-proxy-token-b877h&resourceVersion=582
2021-02-17T00:01:25.8191001Z * Feb 17 00:01:19 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:19.227718 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8195727Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.791224 15475 reflector.go:138] object-"kube-system"/"coredns-token-z5pj2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-z5pj2" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8201706Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.796483 15475 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8208750Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.797132 15475 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jhjgp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jhjgp" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8214870Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.798204 15475 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8220588Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.798735 15475 reflector.go:138] object-"kube-system"/"kube-proxy-token-b877h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-b877h" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8226671Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.800718 15475 reflector.go:138] object-"default"/"default-token-8ljbt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8ljbt" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8231533Z * Feb 17 00:01:23 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:23.099246 15475 kubelet.go:1625] Deleted mirror pod "kube-apiserver-functional-20210216235525-2779755_kube-system(bba41377-5d01-4c3c-984c-eb882846f88c)" because it is outdated
2021-02-17T00:01:25.8236168Z * Feb 17 00:01:23 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:23.262057 15475 kubelet.go:1621] Trying to delete pod kube-apiserver-functional-20210216235525-2779755_kube-system bba41377-5d01-4c3c-984c-eb882846f88c
2021-02-17T00:01:25.8239838Z * Feb 17 00:01:24 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:24.525885 15475 kubelet.go:1621] Trying to delete pod kube-apiserver-functional-20210216235525-2779755_kube-system bba41377-5d01-4c3c-984c-eb882846f88c
2021-02-17T00:01:25.8242367Z *
2021-02-17T00:01:25.8243238Z * ==> storage-provisioner [65452e92862d] <==
2021-02-17T00:01:25.8244306Z * I0216 23:58:24.667869 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
2021-02-17T00:01:25.8245640Z * I0216 23:58:24.753470 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
2021-02-17T00:01:25.8248407Z * I0216 23:58:24.753506 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
2021-02-17T00:01:25.8250329Z * I0216 23:58:24.799196 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
2021-02-17T00:01:25.8254181Z * I0216 23:58:24.809684 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e3dffc90-2681-49a4-b60e-fc0704798284", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210216235525-2779755_b9199e52-32a5-4177-a720-95d6ad979d84 became leader
2021-02-17T00:01:25.8257942Z * I0216 23:58:24.809802 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210216235525-2779755_b9199e52-32a5-4177-a720-95d6ad979d84!
2021-02-17T00:01:25.8260411Z * I0216 23:58:24.911352 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_functional-20210216235525-2779755_b9199e52-32a5-4177-a720-95d6ad979d84!
2021-02-17T00:01:25.8264198Z * E0217 00:00:57.833611 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.PersistentVolumeClaim: Get "https://10.96.0.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.8268438Z * E0217 00:00:57.833656 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.StorageClass: Get "https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=457&timeout=8m26s&timeoutSeconds=506&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.8272578Z * E0217 00:00:57.833681 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.PersistentVolume: Get "https://10.96.0.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=7m7s&timeoutSeconds=427&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.8274749Z *
2021-02-17T00:01:25.8275633Z * ==> storage-provisioner [aa25c43bff27] <==
2021-02-17T00:01:25.8276737Z * I0217 00:01:05.578036 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
2021-02-17T00:01:25.8278083Z * I0217 00:01:08.947015 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
2021-02-17T00:01:25.8280179Z * I0217 00:01:08.982874 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
2021-02-17T00:01:25.8281263Z
2021-02-17T00:01:25.8281896Z -- /stdout --
2021-02-17T00:01:25.8283569Z helpers_test.go:250: (dbg) Run: ./minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210216235525-2779755 -n functional-20210216235525-2779755
2021-02-17T00:01:26.1719266Z helpers_test.go:257: (dbg) Run: kubectl --context functional-20210216235525-2779755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
2021-02-17T00:01:26.2753885Z helpers_test.go:263: non-running pods: kube-apiserver-functional-20210216235525-2779755
2021-02-17T00:01:26.2755978Z helpers_test.go:265: ======> post-mortem[TestFunctional/parallel/DockerEnv]: describe non-running pods <======
2021-02-17T00:01:26.2758367Z helpers_test.go:268: (dbg) Run: kubectl --context functional-20210216235525-2779755 describe pod kube-apiserver-functional-20210216235525-2779755
2021-02-17T00:01:26.3845029Z helpers_test.go:268: (dbg) Non-zero exit: kubectl --context functional-20210216235525-2779755 describe pod kube-apiserver-functional-20210216235525-2779755: exit status 1 (108.714473ms)
2021-02-17T00:01:26.3846817Z
2021-02-17T00:01:26.3847290Z ** stderr **
2021-02-17T00:01:26.3849016Z Error from server (NotFound): pods "kube-apiserver-functional-20210216235525-2779755" not found
2021-02-17T00:01:26.3850295Z
2021-02-17T00:01:26.3850744Z ** /stderr **
2021-02-17T00:01:26.3852677Z helpers_test.go:270: kubectl --context functional-20210216235525-2779755 describe pod kube-apiserver-functional-20210216235525-2779755: exit status 1
``` | 1.0 | Stabilize TestFunctional/parallel/DockerEnv integration test - TestFunctional/parallel/DockerEnv flakes when run in GitHub actions.
Example of failed run:
```
2021-02-17T00:01:22.7911009Z helpers_test.go:240: <<< TestFunctional/parallel/DockerEnv FAILED: start of post-mortem logs <<<
2021-02-17T00:01:22.7912978Z helpers_test.go:241: ======> post-mortem[TestFunctional/parallel/DockerEnv]: minikube logs <======
2021-02-17T00:01:22.7915014Z helpers_test.go:243: (dbg) Run: ./minikube-linux-arm64 -p functional-20210216235525-2779755 logs -n 25
2021-02-17T00:01:23.4137669Z === CONT TestFunctional/parallel/TunnelCmd/serial/WaitService
2021-02-17T00:01:23.4140067Z helpers_test.go:335: "nginx-svc" [e262f289-58b0-4c41-aad0-b1f27b215a87] Running
2021-02-17T00:01:25.7045729Z === CONT TestFunctional/parallel/DockerEnv
2021-02-17T00:01:25.7048581Z helpers_test.go:243: (dbg) Done: ./minikube-linux-arm64 -p functional-20210216235525-2779755 logs -n 25: (2.912515242s)
2021-02-17T00:01:25.7127524Z helpers_test.go:248: TestFunctional/parallel/DockerEnv logs:
2021-02-17T00:01:25.7129836Z -- stdout --
2021-02-17T00:01:25.7130708Z * ==> Docker <==
2021-02-17T00:01:25.7131897Z * -- Logs begin at Tue 2021-02-16 23:57:11 UTC, end at Wed 2021-02-17 00:01:23 UTC. --
2021-02-17T00:01:25.7133743Z * Feb 16 23:58:20 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:20.769589195Z" level=error msg="stream copy error: reading from a closed fifo"
2021-02-17T00:01:25.7137288Z * Feb 16 23:58:20 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:20.944284964Z" level=error msg="82f970ae90ca4670a6bb734aee75fec4db961a63fea4557488a658b950d32d9a cleanup: failed to delete container from containerd: no such container"
2021-02-17T00:01:25.7142581Z * Feb 16 23:58:20 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:20.945330107Z" level=error msg="Handler for POST /v1.40/containers/82f970ae90ca4670a6bb734aee75fec4db961a63fea4557488a658b950d32d9a/start returned error: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: process_linux.go:448: writing syncT 'resume' caused: write init-p: broken pipe: unknown"
2021-02-17T00:01:25.7147074Z * Feb 16 23:58:21 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:21.103453944Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
2021-02-17T00:01:25.7155725Z * Feb 17 00:00:57 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:57.998953278Z" level=info msg="ignoring event" container=8fd1325a18ee143988be3547727d8bb1983f6642dca67c24eaa2e156fbdcedf8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7160823Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.007947254Z" level=info msg="ignoring event" container=61b9482f4323073909d3a860ac936509130f852404978d404900ec38e54e0200 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7164620Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.015376624Z" level=info msg="ignoring event" container=6136c721d0d7941a59afa82cc01c7280ca7b2d7261f750f68192a95b65f4844a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7169285Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.057495247Z" level=info msg="ignoring event" container=0f7cb48a86e1bdc0327f62552c3aabf601652416c933f2b744808cbd149eb4bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7175111Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.065156312Z" level=info msg="ignoring event" container=4bf1331ef083bce8b8a4165534423ed97620ceb9a4843cf81a4f7085c2a22ef6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7180025Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.099294398Z" level=info msg="ignoring event" container=cc20d60fcb71c66921736e5e823362b244abe438582875b5aa83ff0b4cb7ad11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7184353Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.110503491Z" level=info msg="ignoring event" container=5125090049bcd369f289c201a30a074c0cc5d55cc354b91a8f6cd5f2adff9e99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7188513Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.135775116Z" level=info msg="ignoring event" container=adda7f4f538315d4d60bb5b953421c1b20eaf7a100fe60f28de1dcab458915d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7192663Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.174797694Z" level=info msg="ignoring event" container=76a5892bf39bb79234bdec9a159fbf32376330a142812a83e967896484ec4b56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7196657Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.193196620Z" level=info msg="ignoring event" container=42999b8184eca1699b899b5d048429eef0cb313b9a0bce3f9d103641b909aab1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7217822Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.193237973Z" level=info msg="ignoring event" container=65452e92862d23b63a6bca266a66dfdd3feb1d39dbe07ceafc55c2c945fa25ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7224584Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.193264828Z" level=info msg="ignoring event" container=f299474e9f3cffff18b490e6578c55864bf4a171d0d0a402e40f1ffd1c4bfbb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7229128Z * Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.398598093Z" level=info msg="ignoring event" container=f2e3cd415a888cf60100fbf5fb58a54a47731dddeb32463a3d8e1aa8ac3a8d09 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7234766Z * Feb 17 00:01:02 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:02.992413832Z" level=info msg="ignoring event" container=c6da32b7234d60e5a536b37b5d58cc2ee7f094c01a560cf7b71ad239de05a89e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7239798Z * Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:05.449034180Z" level=error msg="Handler for GET /v1.40/containers/aa25c43bff27d671e4dd7215cb95bd9abe1a7f4227ad5c564af3797019a42c70/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
2021-02-17T00:01:25.7246258Z * Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:05.450072234Z" level=error msg="Handler for GET /v1.40/containers/aa25c43bff27d671e4dd7215cb95bd9abe1a7f4227ad5c564af3797019a42c70/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
2021-02-17T00:01:25.7252441Z * Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
2021-02-17T00:01:25.7256664Z * Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
2021-02-17T00:01:25.7260462Z * Feb 17 00:01:15 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:15.207118651Z" level=info msg="ignoring event" container=c48c263e44c9c2f75bc3f7c5a42c1ff3b9db3bbe83f3a81c18c1553d091d6d80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7264918Z * Feb 17 00:01:15 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:15.269276332Z" level=info msg="ignoring event" container=f61b22da999cb0b63e1389394cad98ba5abdc954f772c957f6a5c3f0458c294e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7268345Z * Feb 17 00:01:15 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:15.855757804Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
2021-02-17T00:01:25.7269703Z *
2021-02-17T00:01:25.7270226Z * ==> container status <==
2021-02-17T00:01:25.7271062Z * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
2021-02-17T00:01:25.7274144Z * f59bd12c23289 nginx@sha256:c2ce58e024275728b00a554ac25628af25c54782865b3487b11c21cafb7fabda 5 seconds ago Running nginx 0 8472496ad852e
2021-02-17T00:01:25.7276688Z * a07ef8bb4a5d8 95d99817fc335 8 seconds ago Running kube-apiserver 0 b501d6f1e4173
2021-02-17T00:01:25.7279449Z * 79e4f0230c9d5 db91994f4ee8f 8 seconds ago Running coredns 1 8416187a50e92
2021-02-17T00:01:25.7302430Z * 60cc59b481124 788e63d07298d 24 seconds ago Running kube-proxy 1 febddf7be60d8
2021-02-17T00:01:25.7305102Z * aa25c43bff27d 84bee7cc4870e 24 seconds ago Running storage-provisioner 1 ce148f582a8ed
2021-02-17T00:01:25.7307307Z * 9f35eeb44c8f7 60d957e44ec8a 25 seconds ago Running kube-scheduler 1 e03ded6bf9e51
2021-02-17T00:01:25.7309672Z * 71db52d9a3e8f 3a1a2b528610a 25 seconds ago Running kube-controller-manager 1 db4a886c25f5b
2021-02-17T00:01:25.7311714Z * f61b22da999cb 95d99817fc335 25 seconds ago Exited kube-apiserver 1 c48c263e44c9c
2021-02-17T00:01:25.7314122Z * 8f607bf42a9f1 05b738aa1bc63 25 seconds ago Running etcd 1 f0319c08752b2
2021-02-17T00:01:25.7315925Z * 65452e92862d2 84bee7cc4870e 2 minutes ago Exited storage-provisioner 0 42999b8184eca
2021-02-17T00:01:25.7317392Z * c6da32b7234d6 db91994f4ee8f 3 minutes ago Exited coredns 0 76a5892bf39bb
2021-02-17T00:01:25.7319364Z * 5125090049bcd 788e63d07298d 3 minutes ago Exited kube-proxy 0 0f7cb48a86e1b
2021-02-17T00:01:25.7321227Z * f299474e9f3cf 60d957e44ec8a 3 minutes ago Exited kube-scheduler 0 6136c721d0d79
2021-02-17T00:01:25.7323886Z * 4bf1331ef083b 3a1a2b528610a 3 minutes ago Exited kube-controller-manager 0 61b9482f43230
2021-02-17T00:01:25.7325479Z * 8fd1325a18ee1 05b738aa1bc63 3 minutes ago Exited etcd 0 cc20d60fcb71c
2021-02-17T00:01:25.7326394Z *
2021-02-17T00:01:25.7327387Z * ==> coredns [79e4f0230c9d] <==
2021-02-17T00:01:25.7328034Z * .:53
2021-02-17T00:01:25.7329015Z * [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
2021-02-17T00:01:25.7330417Z * CoreDNS-1.7.0
2021-02-17T00:01:25.7331152Z * linux/arm64, go1.14.4, f59c03d
2021-02-17T00:01:25.7332015Z * [INFO] plugin/ready: Still waiting on: "kubernetes"
2021-02-17T00:01:25.7333190Z *
2021-02-17T00:01:25.7333766Z * ==> coredns [c6da32b7234d] <==
2021-02-17T00:01:25.7336756Z * E0217 00:00:57.837669 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=243&timeout=7m7s&timeoutSeconds=427&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.7340796Z * E0217 00:00:57.837865 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=580&timeout=8m26s&timeoutSeconds=506&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.7344504Z * E0217 00:00:57.837884 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=201&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.7346573Z * .:53
2021-02-17T00:01:25.7347430Z * [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
2021-02-17T00:01:25.7348771Z * CoreDNS-1.7.0
2021-02-17T00:01:25.7349380Z * linux/arm64, go1.14.4, f59c03d
2021-02-17T00:01:25.7350146Z * [INFO] SIGTERM: Shutting down servers then terminating
2021-02-17T00:01:25.7357211Z * [INFO] plugin/health: Going into lameduck mode for 5s
2021-02-17T00:01:25.7357865Z *
2021-02-17T00:01:25.7358391Z * ==> describe nodes <==
2021-02-17T00:01:25.7359539Z * Name: functional-20210216235525-2779755
2021-02-17T00:01:25.7361176Z * Roles: control-plane,master
2021-02-17T00:01:25.7362058Z * Labels: beta.kubernetes.io/arch=arm64
2021-02-17T00:01:25.7362912Z * beta.kubernetes.io/os=linux
2021-02-17T00:01:25.7363837Z * kubernetes.io/arch=arm64
2021-02-17T00:01:25.7365065Z * kubernetes.io/hostname=functional-20210216235525-2779755
2021-02-17T00:01:25.7366056Z * kubernetes.io/os=linux
2021-02-17T00:01:25.7367032Z * minikube.k8s.io/commit=3bdb549339cf69353b01a489c6dbe349d7066bcf
2021-02-17T00:01:25.7368468Z * minikube.k8s.io/name=functional-20210216235525-2779755
2021-02-17T00:01:25.7369498Z * minikube.k8s.io/updated_at=2021_02_16T23_58_02_0700
2021-02-17T00:01:25.7370515Z * minikube.k8s.io/version=v1.17.1
2021-02-17T00:01:25.7371725Z * node-role.kubernetes.io/control-plane=
2021-02-17T00:01:25.7372950Z * node-role.kubernetes.io/master=
2021-02-17T00:01:25.7374401Z * Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
2021-02-17T00:01:25.7375667Z * node.alpha.kubernetes.io/ttl: 0
2021-02-17T00:01:25.7377172Z * volumes.kubernetes.io/controller-managed-attach-detach: true
2021-02-17T00:01:25.7378443Z * CreationTimestamp: Tue, 16 Feb 2021 23:57:59 +0000
2021-02-17T00:01:25.7379123Z * Taints: <none>
2021-02-17T00:01:25.7379720Z * Unschedulable: false
2021-02-17T00:01:25.7380293Z * Lease:
2021-02-17T00:01:25.7381276Z * HolderIdentity: functional-20210216235525-2779755
2021-02-17T00:01:25.7382189Z * AcquireTime: <unset>
2021-02-17T00:01:25.7382846Z * RenewTime: Wed, 17 Feb 2021 00:01:22 +0000
2021-02-17T00:01:25.7383454Z * Conditions:
2021-02-17T00:01:25.7384418Z * Type Status LastHeartbeatTime LastTransitionTime Reason Message
2021-02-17T00:01:25.7385801Z * ---- ------ ----------------- ------------------ ------ -------
2021-02-17T00:01:25.7387207Z * MemoryPressure False Wed, 17 Feb 2021 00:01:14 +0000 Tue, 16 Feb 2021 23:57:53 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
2021-02-17T00:01:25.7388988Z * DiskPressure False Wed, 17 Feb 2021 00:01:14 +0000 Tue, 16 Feb 2021 23:57:53 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
2021-02-17T00:01:25.7391386Z * PIDPressure False Wed, 17 Feb 2021 00:01:14 +0000 Tue, 16 Feb 2021 23:57:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
2021-02-17T00:01:25.7393003Z * Ready True Wed, 17 Feb 2021 00:01:14 +0000 Wed, 17 Feb 2021 00:01:14 +0000 KubeletReady kubelet is posting ready status
2021-02-17T00:01:25.7393880Z * Addresses:
2021-02-17T00:01:25.7394552Z * InternalIP: 192.168.82.108
2021-02-17T00:01:25.7395608Z * Hostname: functional-20210216235525-2779755
2021-02-17T00:01:25.7396388Z * Capacity:
2021-02-17T00:01:25.7396890Z * cpu: 2
2021-02-17T00:01:25.7397931Z * ephemeral-storage: 40474572Ki
2021-02-17T00:01:25.7398790Z * hugepages-1Gi: 0
2021-02-17T00:01:25.7399565Z * hugepages-2Mi: 0
2021-02-17T00:01:25.7400634Z * hugepages-32Mi: 0
2021-02-17T00:01:25.7401430Z * hugepages-64Ki: 0
2021-02-17T00:01:25.7402117Z * memory: 8038232Ki
2021-02-17T00:01:25.7402643Z * pods: 110
2021-02-17T00:01:25.7403315Z * Allocatable:
2021-02-17T00:01:25.7403837Z * cpu: 2
2021-02-17T00:01:25.7404668Z * ephemeral-storage: 40474572Ki
2021-02-17T00:01:25.7405521Z * hugepages-1Gi: 0
2021-02-17T00:01:25.7406290Z * hugepages-2Mi: 0
2021-02-17T00:01:25.7407084Z * hugepages-32Mi: 0
2021-02-17T00:01:25.7407867Z * hugepages-64Ki: 0
2021-02-17T00:01:25.7408463Z * memory: 8038232Ki
2021-02-17T00:01:25.7408978Z * pods: 110
2021-02-17T00:01:25.7409493Z * System Info:
2021-02-17T00:01:25.7410124Z * Machine ID: 46f6444822754a889e4650f359992409
2021-02-17T00:01:25.7411089Z * System UUID: 50408af4-47b4-4574-ab83-34615404919a
2021-02-17T00:01:25.7412556Z * Boot ID: b0b00e66-2c54-4a1e-86bd-8109c5527bb8
2021-02-17T00:01:25.7413773Z * Kernel Version: 5.4.0-1029-aws
2021-02-17T00:01:25.7414457Z * OS Image: Ubuntu 20.04.1 LTS
2021-02-17T00:01:25.7415104Z * Operating System: linux
2021-02-17T00:01:25.7415768Z * Architecture: arm64
2021-02-17T00:01:25.7416552Z * Container Runtime Version: docker://20.10.2
2021-02-17T00:01:25.7417329Z * Kubelet Version: v1.20.2
2021-02-17T00:01:25.7422174Z * Kube-Proxy Version: v1.20.2
2021-02-17T00:01:25.7422887Z * PodCIDR: 10.244.0.0/24
2021-02-17T00:01:25.7424961Z * PodCIDRs: 10.244.0.0/24
2021-02-17T00:01:25.7426007Z * Non-terminated Pods: (8 in total)
2021-02-17T00:01:25.7427015Z * Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
2021-02-17T00:01:25.7428818Z * --------- ---- ------------ ---------- --------------- ------------- ---
2021-02-17T00:01:25.7430054Z * default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12s
2021-02-17T00:01:25.7432070Z * kube-system coredns-74ff55c5b-9jwcl 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 3m6s
2021-02-17T00:01:25.7433776Z * kube-system etcd-functional-20210216235525-2779755 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 3m18s
2021-02-17T00:01:25.7436038Z * kube-system kube-apiserver-functional-20210216235525-2779755 250m (12%) 0 (0%) 0 (0%) 0 (0%) 1s
2021-02-17T00:01:25.7438714Z * kube-system kube-controller-manager-functional-20210216235525-2779755 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m18s
2021-02-17T00:01:25.7441004Z * kube-system kube-proxy-lvfk2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m6s
2021-02-17T00:01:25.7442826Z * kube-system kube-scheduler-functional-20210216235525-2779755 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m18s
2021-02-17T00:01:25.7444679Z * kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m1s
2021-02-17T00:01:25.7445772Z * Allocated resources:
2021-02-17T00:01:25.7446773Z * (Total limits may be over 100 percent, i.e., overcommitted.)
2021-02-17T00:01:25.7447817Z * Resource Requests Limits
2021-02-17T00:01:25.7448680Z * -------- -------- ------
2021-02-17T00:01:25.7449248Z * cpu 750m (37%) 0 (0%)
2021-02-17T00:01:25.7449794Z * memory 170Mi (2%) 170Mi (2%)
2021-02-17T00:01:25.7450633Z * ephemeral-storage 100Mi (0%) 0 (0%)
2021-02-17T00:01:25.7451508Z * hugepages-1Gi 0 (0%) 0 (0%)
2021-02-17T00:01:25.7452334Z * hugepages-2Mi 0 (0%) 0 (0%)
2021-02-17T00:01:25.7453331Z * hugepages-32Mi 0 (0%) 0 (0%)
2021-02-17T00:01:25.7454167Z * hugepages-64Ki 0 (0%) 0 (0%)
2021-02-17T00:01:25.7454751Z * Events:
2021-02-17T00:01:25.7455385Z * Type Reason Age From Message
2021-02-17T00:01:25.7456283Z * ---- ------ ---- ---- -------
2021-02-17T00:01:25.7458703Z * Normal NodeHasSufficientMemory 3m34s (x4 over 3m34s) kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientMemory
2021-02-17T00:01:25.7461425Z * Normal NodeHasNoDiskPressure 3m34s (x5 over 3m34s) kubelet Node functional-20210216235525-2779755 status is now: NodeHasNoDiskPressure
2021-02-17T00:01:25.7463881Z * Normal NodeHasSufficientPID 3m34s (x4 over 3m34s) kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientPID
2021-02-17T00:01:25.7465381Z * Normal Starting 3m18s kubelet Starting kubelet.
2021-02-17T00:01:25.7467180Z * Normal NodeHasSufficientMemory 3m18s kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientMemory
2021-02-17T00:01:25.7470701Z * Normal NodeHasNoDiskPressure 3m18s kubelet Node functional-20210216235525-2779755 status is now: NodeHasNoDiskPressure
2021-02-17T00:01:25.7473635Z * Normal NodeHasSufficientPID 3m18s kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientPID
2021-02-17T00:01:25.7476442Z * Normal NodeNotReady 3m18s kubelet Node functional-20210216235525-2779755 status is now: NodeNotReady
2021-02-17T00:01:25.7479244Z * Normal NodeAllocatableEnforced 3m18s kubelet Updated Node Allocatable limit across pods
2021-02-17T00:01:25.7481460Z * Normal NodeReady 3m8s kubelet Node functional-20210216235525-2779755 status is now: NodeReady
2021-02-17T00:01:25.7484473Z * Normal Starting 3m4s kube-proxy Starting kube-proxy.
2021-02-17T00:01:25.7486650Z * Normal Starting 15s kube-proxy Starting kube-proxy.
2021-02-17T00:01:25.7489122Z * Normal Starting 12s kubelet Starting kubelet.
2021-02-17T00:01:25.7491506Z * Normal NodeHasSufficientMemory 11s kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientMemory
2021-02-17T00:01:25.7494691Z * Normal NodeHasNoDiskPressure 11s kubelet Node functional-20210216235525-2779755 status is now: NodeHasNoDiskPressure
2021-02-17T00:01:25.7497382Z * Normal NodeHasSufficientPID 11s kubelet Node functional-20210216235525-2779755 status is now: NodeHasSufficientPID
2021-02-17T00:01:25.7499463Z * Normal NodeNotReady 11s kubelet Node functional-20210216235525-2779755 status is now: NodeNotReady
2021-02-17T00:01:25.7501329Z * Normal NodeAllocatableEnforced 11s kubelet Updated Node Allocatable limit across pods
2021-02-17T00:01:25.7503230Z * Normal NodeReady 10s kubelet Node functional-20210216235525-2779755 status is now: NodeReady
2021-02-17T00:01:25.7504206Z *
2021-02-17T00:01:25.7504664Z * ==> dmesg <==
2021-02-17T00:01:25.7505447Z * [ +0.000862] FS-Cache: O-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7506428Z * [ +0.000668] FS-Cache: N-cookie c=000000002b1f8ab3 [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7507440Z * [ +0.001050] FS-Cache: N-cookie d=00000000866407ee n=000000005e953fae
2021-02-17T00:01:25.7508334Z * [ +0.000918] FS-Cache: N-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7509257Z * [ +0.013502] FS-Cache: Duplicate cookie detected
2021-02-17T00:01:25.7510297Z * [ +0.000689] FS-Cache: O-cookie c=00000000b1a9545c [p=000000008bc3ac66 fl=226 nc=0 na=1]
2021-02-17T00:01:25.7511311Z * [ +0.001078] FS-Cache: O-cookie d=00000000866407ee n=00000000cc8b7d72
2021-02-17T00:01:25.7512210Z * [ +0.000937] FS-Cache: O-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7513190Z * [ +0.000674] FS-Cache: N-cookie c=000000002b1f8ab3 [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7514304Z * [ +0.001054] FS-Cache: N-cookie d=00000000866407ee n=000000001a6a5283
2021-02-17T00:01:25.7515254Z * [ +0.000854] FS-Cache: N-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7516179Z * [ +1.733025] FS-Cache: Duplicate cookie detected
2021-02-17T00:01:25.7517215Z * [ +0.000664] FS-Cache: O-cookie c=00000000524c02db [p=000000008bc3ac66 fl=226 nc=0 na=1]
2021-02-17T00:01:25.7518215Z * [ +0.001123] FS-Cache: O-cookie d=00000000866407ee n=000000000f2cbff9
2021-02-17T00:01:25.7519104Z * [ +0.000853] FS-Cache: O-key=[8] 'd41c040000000000'
2021-02-17T00:01:25.7520198Z * [ +0.000669] FS-Cache: N-cookie c=00000000dc53534f [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7521252Z * [ +0.001112] FS-Cache: N-cookie d=00000000866407ee n=0000000012ba97ce
2021-02-17T00:01:25.7522153Z * [ +0.000856] FS-Cache: N-key=[8] 'd41c040000000000'
2021-02-17T00:01:25.7523086Z * [ +0.346794] FS-Cache: Duplicate cookie detected
2021-02-17T00:01:25.7525413Z * [ +0.000654] FS-Cache: O-cookie c=000000002f236a72 [p=000000008bc3ac66 fl=226 nc=0 na=1]
2021-02-17T00:01:25.7526628Z * [ +0.001105] FS-Cache: O-cookie d=00000000866407ee n=000000005ebbc510
2021-02-17T00:01:25.7527582Z * [ +0.000843] FS-Cache: O-key=[8] 'd71c040000000000'
2021-02-17T00:01:25.7528541Z * [ +0.000636] FS-Cache: N-cookie c=00000000d47b852c [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7529542Z * [ +0.001088] FS-Cache: N-cookie d=00000000866407ee n=00000000a03ebc34
2021-02-17T00:01:25.7530447Z * [ +0.000888] FS-Cache: N-key=[8] 'd71c040000000000'
2021-02-17T00:01:25.7531006Z *
2021-02-17T00:01:25.7531497Z * ==> etcd [8f607bf42a9f] <==
2021-02-17T00:01:25.7532319Z * 2021-02-17 00:00:59.424921 I | embed: initial cluster =
2021-02-17T00:01:25.7533605Z * 2021-02-17 00:00:59.463680 I | etcdserver: restarting member 8bf199ee24c8c3e2 in cluster f398ff6fd447e89b at commit index 641
2021-02-17T00:01:25.7534821Z * raft2021/02/17 00:00:59 INFO: 8bf199ee24c8c3e2 switched to configuration voters=()
2021-02-17T00:01:25.7535811Z * raft2021/02/17 00:00:59 INFO: 8bf199ee24c8c3e2 became follower at term 2
2021-02-17T00:01:25.7537171Z * raft2021/02/17 00:00:59 INFO: newRaft 8bf199ee24c8c3e2 [peers: [], term: 2, commit: 641, applied: 0, lastindex: 641, lastterm: 2]
2021-02-17T00:01:25.7538580Z * 2021-02-17 00:00:59.502906 W | auth: simple token is not cryptographically signed
2021-02-17T00:01:25.7539912Z * 2021-02-17 00:00:59.527298 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2021-02-17T00:01:25.7541254Z * raft2021/02/17 00:00:59 INFO: 8bf199ee24c8c3e2 switched to configuration voters=(10084010288757654498)
2021-02-17T00:01:25.7543183Z * 2021-02-17 00:00:59.532654 I | etcdserver/membership: added member 8bf199ee24c8c3e2 [https://192.168.82.108:2380] to cluster f398ff6fd447e89b
2021-02-17T00:01:25.7544745Z * 2021-02-17 00:00:59.532875 N | etcdserver/membership: set the initial cluster version to 3.4
2021-02-17T00:01:25.7545990Z * 2021-02-17 00:00:59.533626 I | etcdserver/api: enabled capabilities for version 3.4
2021-02-17T00:01:25.7547957Z * 2021-02-17 00:00:59.551573 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2021-02-17T00:01:25.7549818Z * 2021-02-17 00:00:59.555394 I | embed: listening for metrics on http://127.0.0.1:2381
2021-02-17T00:01:25.7550924Z * 2021-02-17 00:00:59.555771 I | embed: listening for peers on 192.168.82.108:2380
2021-02-17T00:01:25.7551837Z * raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 is starting a new election at term 2
2021-02-17T00:01:25.7552803Z * raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 became candidate at term 3
2021-02-17T00:01:25.7554198Z * raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 received MsgVoteResp from 8bf199ee24c8c3e2 at term 3
2021-02-17T00:01:25.7555376Z * raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 became leader at term 3
2021-02-17T00:01:25.7556461Z * raft2021/02/17 00:01:00 INFO: raft.node: 8bf199ee24c8c3e2 elected leader 8bf199ee24c8c3e2 at term 3
2021-02-17T00:01:25.7579690Z * 2021-02-17 00:01:00.898434 I | etcdserver: published {Name:functional-20210216235525-2779755 ClientURLs:[https://192.168.82.108:2379]} to cluster f398ff6fd447e89b
2021-02-17T00:01:25.7581550Z * 2021-02-17 00:01:00.898584 I | embed: ready to serve client requests
2021-02-17T00:01:25.7582599Z * 2021-02-17 00:01:00.901801 I | embed: serving client requests on 192.168.82.108:2379
2021-02-17T00:01:25.7583620Z * 2021-02-17 00:01:00.902770 I | embed: ready to serve client requests
2021-02-17T00:01:25.7584648Z * 2021-02-17 00:01:00.909680 I | embed: serving client requests on 127.0.0.1:2379
2021-02-17T00:01:25.7585744Z * 2021-02-17 00:01:22.638430 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7586427Z *
2021-02-17T00:01:25.7586932Z * ==> etcd [8fd1325a18ee] <==
2021-02-17T00:01:25.7587815Z * 2021-02-16 23:57:52.464452 I | embed: ready to serve client requests
2021-02-17T00:01:25.7588828Z * 2021-02-16 23:57:52.465545 I | embed: ready to serve client requests
2021-02-17T00:01:25.7589843Z * 2021-02-16 23:57:52.466747 I | embed: serving client requests on 127.0.0.1:2379
2021-02-17T00:01:25.7590885Z * 2021-02-16 23:57:52.472829 I | embed: serving client requests on 192.168.82.108:2379
2021-02-17T00:01:25.7591972Z * 2021-02-16 23:58:16.034982 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7593110Z * 2021-02-16 23:58:19.927093 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7594266Z * 2021-02-16 23:58:29.925620 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7595407Z * 2021-02-16 23:58:39.925441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7596558Z * 2021-02-16 23:58:49.925551 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7597699Z * 2021-02-16 23:58:59.925554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7598850Z * 2021-02-16 23:59:09.925573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7600433Z * 2021-02-16 23:59:19.925451 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7601706Z * 2021-02-16 23:59:29.925560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7602914Z * 2021-02-16 23:59:39.925644 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7604104Z * 2021-02-16 23:59:49.925434 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7605590Z * 2021-02-16 23:59:59.925511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7607200Z * 2021-02-17 00:00:09.925431 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7608417Z * 2021-02-17 00:00:19.925514 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7609579Z * 2021-02-17 00:00:29.928682 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7610726Z * 2021-02-17 00:00:39.925484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7612079Z * 2021-02-17 00:00:49.925819 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7613336Z * 2021-02-17 00:00:57.845440 N | pkg/osutil: received terminated signal, shutting down...
2021-02-17T00:01:25.7615432Z * WARNING: 2021/02/17 00:00:57 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2021-02-17T00:01:25.7617547Z * 2021-02-17 00:00:57.854805 I | etcdserver: skipped leadership transfer for single voting member cluster
2021-02-17T00:01:25.7619372Z * WARNING: 2021/02/17 00:00:57 grpc: addrConn.createTransport failed to connect to {192.168.82.108:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.82.108:2379: connect: connection refused". Reconnecting...
2021-02-17T00:01:25.7620784Z *
2021-02-17T00:01:25.7621255Z * ==> kernel <==
2021-02-17T00:01:25.7621870Z * 00:01:24 up 27 days, 21:57, 0 users, load average: 4.69, 3.22, 1.88
2021-02-17T00:01:25.7623252Z * Linux functional-20210216235525-2779755 5.4.0-1029-aws #30-Ubuntu SMP Tue Oct 20 10:08:09 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
2021-02-17T00:01:25.7624316Z * PRETTY_NAME="Ubuntu 20.04.1 LTS"
2021-02-17T00:01:25.7624843Z *
2021-02-17T00:01:25.7625598Z * ==> kube-apiserver [a07ef8bb4a5d] <==
2021-02-17T00:01:25.7626685Z * I0217 00:01:22.429463 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
2021-02-17T00:01:25.7628103Z * I0217 00:01:22.429636 1 available_controller.go:475] Starting AvailableConditionController
2021-02-17T00:01:25.7629643Z * I0217 00:01:22.429648 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
2021-02-17T00:01:25.7630913Z * I0217 00:01:22.481887 1 controller.go:86] Starting OpenAPI controller
2021-02-17T00:01:25.7632060Z * I0217 00:01:22.482086 1 naming_controller.go:291] Starting NamingConditionController
2021-02-17T00:01:25.7633385Z * I0217 00:01:22.482141 1 establishing_controller.go:76] Starting EstablishingController
2021-02-17T00:01:25.7635145Z * I0217 00:01:22.482342 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
2021-02-17T00:01:25.7638216Z * I0217 00:01:22.482710 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
2021-02-17T00:01:25.7640369Z * I0217 00:01:22.482746 1 crd_finalizer.go:266] Starting CRDFinalizer
2021-02-17T00:01:25.7641773Z * I0217 00:01:22.691475 1 crdregistration_controller.go:111] Starting crd-autoregister controller
2021-02-17T00:01:25.7643443Z * I0217 00:01:22.691631 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
2021-02-17T00:01:25.7645426Z * I0217 00:01:22.691734 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
2021-02-17T00:01:25.7647394Z * I0217 00:01:22.692223 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
2021-02-17T00:01:25.7648930Z * I0217 00:01:22.891716 1 shared_informer.go:247] Caches are synced for crd-autoregister
2021-02-17T00:01:25.7650524Z * E0217 00:01:22.920693 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
2021-02-17T00:01:25.7652389Z * I0217 00:01:22.933643 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2021-02-17T00:01:25.7653992Z * I0217 00:01:22.942843 1 cache.go:39] Caches are synced for AvailableConditionController controller
2021-02-17T00:01:25.7655259Z * I0217 00:01:22.949532 1 cache.go:39] Caches are synced for autoregister controller
2021-02-17T00:01:25.7656435Z * I0217 00:01:22.950206 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
2021-02-17T00:01:25.7657765Z * I0217 00:01:22.950929 1 apf_controller.go:266] Running API Priority and Fairness config worker
2021-02-17T00:01:25.7659097Z * I0217 00:01:22.997605 1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
2021-02-17T00:01:25.7660689Z * I0217 00:01:23.000657 1 shared_informer.go:247] Caches are synced for node_authorizer
2021-02-17T00:01:25.7661983Z * I0217 00:01:23.421440 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2021-02-17T00:01:25.7663706Z * I0217 00:01:23.421480 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2021-02-17T00:01:25.7665360Z * I0217 00:01:23.452034 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
2021-02-17T00:01:25.7666275Z *
2021-02-17T00:01:25.7667112Z * ==> kube-apiserver [f61b22da999c] <==
2021-02-17T00:01:25.7668152Z * I0217 00:01:08.748768 1 establishing_controller.go:76] Starting EstablishingController
2021-02-17T00:01:25.7669877Z * I0217 00:01:08.748780 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
2021-02-17T00:01:25.7672503Z * I0217 00:01:08.748796 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
2021-02-17T00:01:25.7674800Z * I0217 00:01:08.748815 1 crd_finalizer.go:266] Starting CRDFinalizer
2021-02-17T00:01:25.7676562Z * I0217 00:01:08.748843 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
2021-02-17T00:01:25.7780570Z * I0217 00:01:08.748899 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
2021-02-17T00:01:25.7782716Z * I0217 00:01:08.786444 1 crdregistration_controller.go:111] Starting crd-autoregister controller
2021-02-17T00:01:25.7784338Z * I0217 00:01:08.786463 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
2021-02-17T00:01:25.7785589Z * I0217 00:01:08.909921 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
2021-02-17T00:01:25.7787186Z * I0217 00:01:08.909953 1 shared_informer.go:247] Caches are synced for crd-autoregister
2021-02-17T00:01:25.7788609Z * I0217 00:01:08.909971 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2021-02-17T00:01:25.7790663Z * I0217 00:01:08.922944 1 cache.go:39] Caches are synced for AvailableConditionController controller
2021-02-17T00:01:25.7792008Z * I0217 00:01:08.923459 1 apf_controller.go:266] Running API Priority and Fairness config worker
2021-02-17T00:01:25.7793095Z * I0217 00:01:08.923869 1 cache.go:39] Caches are synced for autoregister controller
2021-02-17T00:01:25.7794133Z * I0217 00:01:08.981381 1 shared_informer.go:247] Caches are synced for node_authorizer
2021-02-17T00:01:25.7796762Z * I0217 00:01:09.570927 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2021-02-17T00:01:25.7798660Z * I0217 00:01:09.570951 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2021-02-17T00:01:25.7800407Z * I0217 00:01:09.604291 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
2021-02-17T00:01:25.7801842Z * I0217 00:01:10.573333 1 controller.go:609] quota admission added evaluator for: serviceaccounts
2021-02-17T00:01:25.7803075Z * I0217 00:01:10.590931 1 controller.go:609] quota admission added evaluator for: deployments.apps
2021-02-17T00:01:25.7804464Z * I0217 00:01:10.637711 1 controller.go:609] quota admission added evaluator for: daemonsets.apps
2021-02-17T00:01:25.7805940Z * I0217 00:01:10.651065 1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
2021-02-17T00:01:25.7807770Z * I0217 00:01:10.656298 1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
2021-02-17T00:01:25.7809965Z * I0217 00:01:12.307049 1 controller.go:609] quota admission added evaluator for: events.events.k8s.io
2021-02-17T00:01:25.7811581Z * I0217 00:01:12.980312 1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
2021-02-17T00:01:25.7812543Z *
2021-02-17T00:01:25.7813566Z * ==> kube-controller-manager [4bf1331ef083] <==
2021-02-17T00:01:25.7815339Z * I0216 23:58:17.971424 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
2021-02-17T00:01:25.7817523Z * I0216 23:58:17.971431 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
2021-02-17T00:01:25.7818952Z * I0216 23:58:17.978572 1 shared_informer.go:247] Caches are synced for daemon sets
2021-02-17T00:01:25.7819890Z * I0216 23:58:17.980580 1 shared_informer.go:247] Caches are synced for job
2021-02-17T00:01:25.7820824Z * I0216 23:58:17.996091 1 shared_informer.go:247] Caches are synced for endpoint
2021-02-17T00:01:25.7821839Z * I0216 23:58:18.001644 1 shared_informer.go:247] Caches are synced for bootstrap_signer
2021-02-17T00:01:25.7823371Z * I0216 23:58:18.044493 1 range_allocator.go:373] Set node functional-20210216235525-2779755 PodCIDR to [10.244.0.0/24]
2021-02-17T00:01:25.7824571Z * I0216 23:58:18.070873 1 shared_informer.go:247] Caches are synced for attach detach
2021-02-17T00:01:25.7825574Z * I0216 23:58:18.072738 1 shared_informer.go:247] Caches are synced for deployment
2021-02-17T00:01:25.7826572Z * I0216 23:58:18.081999 1 shared_informer.go:247] Caches are synced for disruption
2021-02-17T00:01:25.7827544Z * I0216 23:58:18.082020 1 disruption.go:339] Sending events to api server.
2021-02-17T00:01:25.7829562Z * I0216 23:58:18.126753 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
2021-02-17T00:01:25.7832456Z * I0216 23:58:18.126787 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lvfk2"
2021-02-17T00:01:25.7834109Z * I0216 23:58:18.128131 1 shared_informer.go:247] Caches are synced for ReplicaSet
2021-02-17T00:01:25.7835159Z * I0216 23:58:18.128311 1 shared_informer.go:247] Caches are synced for persistent volume
2021-02-17T00:01:25.7836208Z * I0216 23:58:18.171707 1 shared_informer.go:247] Caches are synced for resource quota
2021-02-17T00:01:25.7837214Z * I0216 23:58:18.197220 1 shared_informer.go:247] Caches are synced for resource quota
2021-02-17T00:01:25.7839315Z * I0216 23:58:18.217117 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-9jwcl"
2021-02-17T00:01:25.7842651Z * I0216 23:58:18.226678 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5gqfl"
2021-02-17T00:01:25.7844831Z * I0216 23:58:18.349094 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
2021-02-17T00:01:25.7846126Z * I0216 23:58:18.620820 1 shared_informer.go:247] Caches are synced for garbage collector
2021-02-17T00:01:25.7847521Z * I0216 23:58:18.620850 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
2021-02-17T00:01:25.7848864Z * I0216 23:58:18.649273 1 shared_informer.go:247] Caches are synced for garbage collector
2021-02-17T00:01:25.7851179Z * I0216 23:58:18.882898 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
2021-02-17T00:01:25.7854323Z * I0216 23:58:18.895151 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-5gqfl"
2021-02-17T00:01:25.7855824Z *
2021-02-17T00:01:25.7856719Z * ==> kube-controller-manager [71db52d9a3e8] <==
2021-02-17T00:01:25.7857721Z * I0217 00:01:14.738870 1 shared_informer.go:247] Caches are synced for token_cleaner
2021-02-17T00:01:25.7859363Z * I0217 00:01:14.891621 1 node_ipam_controller.go:91] Sending events to api server.
2021-02-17T00:01:25.7861902Z * W0217 00:01:15.156379 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7865221Z * W0217 00:01:15.156463 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7869025Z * E0217 00:01:22.784748 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: serviceaccounts is forbidden: User "system:kube-controller-manager" cannot list resource "serviceaccounts" in API group "" at the cluster scope
2021-02-17T00:01:25.7872509Z * E0217 00:01:22.790669 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" at the cluster scope
2021-02-17T00:01:25.7874776Z * I0217 00:01:24.895161 1 range_allocator.go:82] Sending events to api server.
2021-02-17T00:01:25.7876058Z * I0217 00:01:24.895278 1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
2021-02-17T00:01:25.7877370Z * I0217 00:01:24.895312 1 controllermanager.go:554] Started "nodeipam"
2021-02-17T00:01:25.7878397Z * I0217 00:01:24.895870 1 node_ipam_controller.go:159] Starting ipam controller
2021-02-17T00:01:25.7879379Z * I0217 00:01:24.895886 1 shared_informer.go:240] Waiting for caches to sync for node
2021-02-17T00:01:25.7880778Z * I0217 00:01:24.896247 1 shared_informer.go:240] Waiting for caches to sync for resource quota
2021-02-17T00:01:25.7884134Z * W0217 00:01:24.924651 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="functional-20210216235525-2779755" does not exist
2021-02-17T00:01:25.7886039Z * I0217 00:01:24.939044 1 shared_informer.go:247] Caches are synced for service account
2021-02-17T00:01:25.7887049Z * I0217 00:01:24.948379 1 shared_informer.go:247] Caches are synced for crt configmap
2021-02-17T00:01:25.7888466Z * I0217 00:01:24.959433 1 shared_informer.go:247] Caches are synced for namespace
2021-02-17T00:01:25.7889640Z * I0217 00:01:24.988829 1 shared_informer.go:247] Caches are synced for expand
2021-02-17T00:01:25.7891265Z * I0217 00:01:24.989039 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
2021-02-17T00:01:25.7892490Z * I0217 00:01:24.989085 1 shared_informer.go:247] Caches are synced for bootstrap_signer
2021-02-17T00:01:25.7893652Z * I0217 00:01:24.991191 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
2021-02-17T00:01:25.7894759Z * I0217 00:01:24.996098 1 shared_informer.go:247] Caches are synced for node
2021-02-17T00:01:25.7895708Z * I0217 00:01:24.996226 1 range_allocator.go:172] Starting range CIDR allocator
2021-02-17T00:01:25.7896727Z * I0217 00:01:24.996250 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
2021-02-17T00:01:25.7897796Z * I0217 00:01:24.996281 1 shared_informer.go:247] Caches are synced for cidrallocator
2021-02-17T00:01:25.7898762Z * I0217 00:01:25.010171 1 shared_informer.go:247] Caches are synced for TTL
2021-02-17T00:01:25.7899424Z *
2021-02-17T00:01:25.7900114Z * ==> kube-proxy [5125090049bc] <==
2021-02-17T00:01:25.7900900Z * I0216 23:58:20.295618 1 node.go:172] Successfully retrieved node IP: 192.168.82.108
2021-02-17T00:01:25.7902345Z * I0216 23:58:20.295698 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.82.108), assume IPv4 operation
2021-02-17T00:01:25.7903508Z * W0216 23:58:20.375690 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
2021-02-17T00:01:25.7904476Z * I0216 23:58:20.375784 1 server_others.go:185] Using iptables Proxier.
2021-02-17T00:01:25.7905287Z * I0216 23:58:20.380839 1 server.go:650] Version: v1.20.2
2021-02-17T00:01:25.7906097Z * I0216 23:58:20.381254 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2021-02-17T00:01:25.7908418Z * I0216 23:58:20.381323 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2021-02-17T00:01:25.7910303Z * I0216 23:58:20.381353 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2021-02-17T00:01:25.7911576Z * I0216 23:58:20.387720 1 config.go:315] Starting service config controller
2021-02-17T00:01:25.7912573Z * I0216 23:58:20.387738 1 shared_informer.go:240] Waiting for caches to sync for service config
2021-02-17T00:01:25.7913832Z * I0216 23:58:20.396779 1 config.go:224] Starting endpoint slice config controller
2021-02-17T00:01:25.7914894Z * I0216 23:58:20.398104 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
2021-02-17T00:01:25.7915977Z * I0216 23:58:20.487872 1 shared_informer.go:247] Caches are synced for service config
2021-02-17T00:01:25.7917021Z * I0216 23:58:20.498664 1 shared_informer.go:247] Caches are synced for endpoint slice config
2021-02-17T00:01:25.7918167Z *
2021-02-17T00:01:25.7919260Z * ==> kube-proxy [60cc59b48112] <==
2021-02-17T00:01:25.7920192Z * I0217 00:01:08.977998 1 node.go:172] Successfully retrieved node IP: 192.168.82.108
2021-02-17T00:01:25.7921737Z * I0217 00:01:08.978264 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.82.108), assume IPv4 operation
2021-02-17T00:01:25.7922891Z * W0217 00:01:09.011677 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
2021-02-17T00:01:25.7923879Z * I0217 00:01:09.015080 1 server_others.go:185] Using iptables Proxier.
2021-02-17T00:01:25.7924683Z * I0217 00:01:09.015281 1 server.go:650] Version: v1.20.2
2021-02-17T00:01:25.7925641Z * I0217 00:01:09.015739 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2021-02-17T00:01:25.7926575Z * I0217 00:01:09.016506 1 config.go:315] Starting service config controller
2021-02-17T00:01:25.7927566Z * I0217 00:01:09.016523 1 shared_informer.go:240] Waiting for caches to sync for service config
2021-02-17T00:01:25.7928582Z * I0217 00:01:09.018671 1 config.go:224] Starting endpoint slice config controller
2021-02-17T00:01:25.7929646Z * I0217 00:01:09.018683 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
2021-02-17T00:01:25.7930704Z * I0217 00:01:09.116645 1 shared_informer.go:247] Caches are synced for service config
2021-02-17T00:01:25.7931759Z * I0217 00:01:09.118788 1 shared_informer.go:247] Caches are synced for endpoint slice config
2021-02-17T00:01:25.7934073Z * W0217 00:01:15.157304 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.EndpointSlice ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7937031Z * W0217 00:01:15.157404 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7941666Z * E0217 00:01:16.183512 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=592": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7944391Z *
2021-02-17T00:01:25.7945234Z * ==> kube-scheduler [9f35eeb44c8f] <==
2021-02-17T00:01:25.7947201Z * W0217 00:01:15.156806 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSINode ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7949992Z * W0217 00:01:15.156846 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7953592Z * W0217 00:01:15.156889 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7957052Z * W0217 00:01:15.156928 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodDisruptionBudget ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7960809Z * W0217 00:01:15.156963 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7963961Z * W0217 00:01:15.157001 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7967061Z * W0217 00:01:15.157040 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7969980Z * W0217 00:01:15.157080 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7972995Z * W0217 00:01:15.159086 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7977058Z * E0217 00:01:15.974491 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.82.108:8441/api/v1/replicationcontrollers?resourceVersion=582": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7980931Z * E0217 00:01:16.044069 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.82.108:8441/apis/storage.k8s.io/v1/storageclasses?resourceVersion=582": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7985612Z * E0217 00:01:16.108933 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.82.108:8441/apis/apps/v1/statefulsets?resourceVersion=582": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7989175Z * E0217 00:01:16.189947 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.82.108:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&resourceVersion=603": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7992944Z * E0217 00:01:22.803828 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.7997070Z * E0217 00:01:22.804049 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2021-02-17T00:01:25.8002135Z * E0217 00:01:22.804260 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2021-02-17T00:01:25.8007098Z * E0217 00:01:22.804335 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2021-02-17T00:01:25.8011639Z * E0217 00:01:22.808221 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2021-02-17T00:01:25.8015822Z * E0217 00:01:22.808327 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8019266Z * E0217 00:01:22.808379 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8023285Z * E0217 00:01:22.808528 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
2021-02-17T00:01:25.8026913Z * E0217 00:01:22.808701 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2021-02-17T00:01:25.8029729Z * E0217 00:01:22.808829 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2021-02-17T00:01:25.8032548Z * E0217 00:01:22.808930 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2021-02-17T00:01:25.8035850Z * E0217 00:01:22.809065 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.8038112Z *
2021-02-17T00:01:25.8039062Z * ==> kube-scheduler [f299474e9f3c] <==
2021-02-17T00:01:25.8040797Z * I0216 23:57:55.081863 1 serving.go:331] Generated self-signed cert in-memory
2021-02-17T00:01:25.8043980Z * W0216 23:57:59.499312 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
2021-02-17T00:01:25.8048744Z * W0216 23:57:59.499512 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
2021-02-17T00:01:25.8051373Z * W0216 23:57:59.499600 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
2021-02-17T00:01:25.8053808Z * W0216 23:57:59.499678 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
2021-02-17T00:01:25.8055545Z * I0216 23:57:59.540243 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
2021-02-17T00:01:25.8057889Z * I0216 23:57:59.542337 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-02-17T00:01:25.8060665Z * I0216 23:57:59.542978 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-02-17T00:01:25.8062627Z * I0216 23:57:59.543078 1 tlsconfig.go:240] Starting DynamicServingCertificateController
2021-02-17T00:01:25.8065478Z * E0216 23:57:59.543966 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8069178Z * E0216 23:57:59.545114 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2021-02-17T00:01:25.8073152Z * E0216 23:57:59.545362 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
2021-02-17T00:01:25.8080136Z * E0216 23:57:59.546148 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2021-02-17T00:01:25.8087699Z * E0216 23:57:59.548607 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2021-02-17T00:01:25.8092265Z * E0216 23:57:59.549854 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2021-02-17T00:01:25.8095729Z * E0216 23:57:59.550907 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2021-02-17T00:01:25.8099195Z * E0216 23:57:59.562699 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2021-02-17T00:01:25.8103457Z * E0216 23:57:59.563182 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.8107607Z * E0216 23:57:59.563416 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.8111158Z * E0216 23:57:59.563845 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2021-02-17T00:01:25.8114406Z * E0216 23:57:59.564278 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8118791Z * E0216 23:58:00.417302 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2021-02-17T00:01:25.8122746Z * I0216 23:58:01.043172 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-02-17T00:01:25.8124072Z *
2021-02-17T00:01:25.8124550Z * ==> kubelet <==
2021-02-17T00:01:25.8125445Z * -- Logs begin at Tue 2021-02-16 23:57:11 UTC, end at Wed 2021-02-17 00:01:25 UTC. --
2021-02-17T00:01:25.8130830Z * Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.780173 15475 status_manager.go:550] Failed to get status for pod "kube-controller-manager-functional-20210216235525-2779755_kube-system(57b8c22dbe6410e4bd36cf14b0f8bdc7)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20210216235525-2779755": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.8136807Z * Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.844570 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-9jwcl through plugin: invalid network status for
2021-02-17T00:01:25.8140037Z * Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.849077 15475 pod_container_deletor.go:79] Container "8416187a50e920331a49c3bbf146074f2c32bb228f3808050534a82ffd8dbef7" not found in pod's containers
2021-02-17T00:01:25.8144851Z * Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.979475 15475 status_manager.go:550] Failed to get status for pod "nginx-svc_default(e262f289-58b0-4c41-aad0-b1f27b215a87)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/default/pods/nginx-svc": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.8148781Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.026809 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8151970Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.074189 15475 pod_container_deletor.go:79] Container "b501d6f1e41731aba59158fda9d32800d305eba9db75cacf081ac9ef75c2233b" not found in pod's containers
2021-02-17T00:01:25.8155789Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.081372 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8158881Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.082831 15475 pod_container_deletor.go:79] Container "8472496ad852e71254ec44abcc9960f802b1dd67d9bdf2ffb853cc3c07c4cb42" not found in pod's containers
2021-02-17T00:01:25.8162532Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.134223 15475 pod_container_deletor.go:79] Container "c48c263e44c9c2f75bc3f7c5a42c1ff3b9db3bbe83f3a81c18c1553d091d6d80" not found in pod's containers
2021-02-17T00:01:25.8166210Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:16.137587 15475 scope.go:95] [topologymanager] RemoveContainer - Container ID: f2e3cd415a888cf60100fbf5fb58a54a47731dddeb32463a3d8e1aa8ac3a8d09
2021-02-17T00:01:25.8170743Z * Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.241862 15475 status_manager.go:550] Failed to get status for pod "kube-proxy-lvfk2_kube-system(22e1a123-9634-4fec-8a72-1034b1968f87)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lvfk2": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.8175424Z * Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:17.140694 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-9jwcl through plugin: invalid network status for
2021-02-17T00:01:25.8178903Z * Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:17.157909 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8182170Z * Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:17.163462 15475 kubelet.go:1621] Trying to delete pod kube-apiserver-functional-20210216235525-2779755_kube-system bba41377-5d01-4c3c-984c-eb882846f88c
2021-02-17T00:01:25.8186597Z * Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:17.579165 15475 request.go:655] Throttling request took 1.064157484s, request: GET:https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkube-proxy-token-b877h&resourceVersion=582
2021-02-17T00:01:25.8191001Z * Feb 17 00:01:19 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:19.227718 15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8195727Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.791224 15475 reflector.go:138] object-"kube-system"/"coredns-token-z5pj2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-z5pj2" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8201706Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.796483 15475 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8208750Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.797132 15475 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jhjgp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jhjgp" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8214870Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.798204 15475 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8220588Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.798735 15475 reflector.go:138] object-"kube-system"/"kube-proxy-token-b877h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-b877h" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8226671Z * Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.800718 15475 reflector.go:138] object-"default"/"default-token-8ljbt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8ljbt" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8231533Z * Feb 17 00:01:23 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:23.099246 15475 kubelet.go:1625] Deleted mirror pod "kube-apiserver-functional-20210216235525-2779755_kube-system(bba41377-5d01-4c3c-984c-eb882846f88c)" because it is outdated
2021-02-17T00:01:25.8236168Z * Feb 17 00:01:23 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:23.262057 15475 kubelet.go:1621] Trying to delete pod kube-apiserver-functional-20210216235525-2779755_kube-system bba41377-5d01-4c3c-984c-eb882846f88c
2021-02-17T00:01:25.8239838Z * Feb 17 00:01:24 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:24.525885 15475 kubelet.go:1621] Trying to delete pod kube-apiserver-functional-20210216235525-2779755_kube-system bba41377-5d01-4c3c-984c-eb882846f88c
2021-02-17T00:01:25.8242367Z *
2021-02-17T00:01:25.8243238Z * ==> storage-provisioner [65452e92862d] <==
2021-02-17T00:01:25.8244306Z * I0216 23:58:24.667869 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
2021-02-17T00:01:25.8245640Z * I0216 23:58:24.753470 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
2021-02-17T00:01:25.8248407Z * I0216 23:58:24.753506 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
2021-02-17T00:01:25.8250329Z * I0216 23:58:24.799196 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
2021-02-17T00:01:25.8254181Z * I0216 23:58:24.809684 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e3dffc90-2681-49a4-b60e-fc0704798284", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210216235525-2779755_b9199e52-32a5-4177-a720-95d6ad979d84 became leader
2021-02-17T00:01:25.8257942Z * I0216 23:58:24.809802 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210216235525-2779755_b9199e52-32a5-4177-a720-95d6ad979d84!
2021-02-17T00:01:25.8260411Z * I0216 23:58:24.911352 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_functional-20210216235525-2779755_b9199e52-32a5-4177-a720-95d6ad979d84!
2021-02-17T00:01:25.8264198Z * E0217 00:00:57.833611 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.PersistentVolumeClaim: Get "https://10.96.0.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.8268438Z * E0217 00:00:57.833656 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.StorageClass: Get "https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=457&timeout=8m26s&timeoutSeconds=506&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.8272578Z * E0217 00:00:57.833681 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.PersistentVolume: Get "https://10.96.0.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=7m7s&timeoutSeconds=427&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.8274749Z *
2021-02-17T00:01:25.8275633Z * ==> storage-provisioner [aa25c43bff27] <==
2021-02-17T00:01:25.8276737Z * I0217 00:01:05.578036 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
2021-02-17T00:01:25.8278083Z * I0217 00:01:08.947015 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
2021-02-17T00:01:25.8280179Z * I0217 00:01:08.982874 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
2021-02-17T00:01:25.8281263Z
2021-02-17T00:01:25.8281896Z -- /stdout --
2021-02-17T00:01:25.8283569Z helpers_test.go:250: (dbg) Run: ./minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210216235525-2779755 -n functional-20210216235525-2779755
2021-02-17T00:01:26.1719266Z helpers_test.go:257: (dbg) Run: kubectl --context functional-20210216235525-2779755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
2021-02-17T00:01:26.2753885Z helpers_test.go:263: non-running pods: kube-apiserver-functional-20210216235525-2779755
2021-02-17T00:01:26.2755978Z helpers_test.go:265: ======> post-mortem[TestFunctional/parallel/DockerEnv]: describe non-running pods <======
2021-02-17T00:01:26.2758367Z helpers_test.go:268: (dbg) Run: kubectl --context functional-20210216235525-2779755 describe pod kube-apiserver-functional-20210216235525-2779755
2021-02-17T00:01:26.3845029Z helpers_test.go:268: (dbg) Non-zero exit: kubectl --context functional-20210216235525-2779755 describe pod kube-apiserver-functional-20210216235525-2779755: exit status 1 (108.714473ms)
2021-02-17T00:01:26.3846817Z
2021-02-17T00:01:26.3847290Z ** stderr **
2021-02-17T00:01:26.3849016Z Error from server (NotFound): pods "kube-apiserver-functional-20210216235525-2779755" not found
2021-02-17T00:01:26.3850295Z
2021-02-17T00:01:26.3850744Z ** /stderr **
2021-02-17T00:01:26.3852677Z helpers_test.go:270: kubectl --context functional-20210216235525-2779755 describe pod kube-apiserver-functional-20210216235525-2779755: exit status 1
``` | non_infrastructure | stabilize testfunctional parallel dockerenv integration test … (lowercased, punctuation-stripped `text` column: a preprocessed duplicate of the issue title and the log body above)
client ca file kubelet logs begin at tue utc end at wed utc feb functional kubelet status manager go failed to get status for pod kube controller manager functional kube system get dial tcp connect connection refused feb functional kubelet docker sandbox go failed to read pod ip from plugin docker couldn t find network status for kube system coredns through plugin invalid network status for feb functional kubelet pod container deletor go container not found in pod s containers feb functional kubelet status manager go failed to get status for pod nginx svc default get dial tcp connect connection refused feb functional kubelet docker sandbox go failed to read pod ip from plugin docker couldn t find network status for default nginx svc through plugin invalid network status for feb functional kubelet pod container deletor go container not found in pod s containers feb functional kubelet docker sandbox go failed to read pod ip from plugin docker couldn t find network status for default nginx svc through plugin invalid network status for feb functional kubelet pod container deletor go container not found in pod s containers feb functional kubelet pod container deletor go container not found in pod s containers feb functional kubelet scope go removecontainer container id feb functional kubelet status manager go failed to get status for pod kube proxy kube system get dial tcp connect connection refused feb functional kubelet docker sandbox go failed to read pod ip from plugin docker couldn t find network status for kube system coredns through plugin invalid network status for feb functional kubelet docker sandbox go failed to read pod ip from plugin docker couldn t find network status for default nginx svc through plugin invalid network status for feb functional kubelet kubelet go trying to delete pod kube apiserver functional kube system feb functional kubelet request go throttling request took request get feb functional kubelet docker sandbox go failed to read pod ip 
from plugin docker couldn t find network status for default nginx svc through plugin invalid network status for feb functional kubelet reflector go object kube system coredns token failed to watch secret failed to list secret secrets coredns token is forbidden user system node functional cannot list resource secrets in api group in the namespace kube system no relationship found between node functional and this object feb functional kubelet reflector go object kube system kube proxy failed to watch configmap failed to list configmap configmaps kube proxy is forbidden user system node functional cannot list resource configmaps in api group in the namespace kube system no relationship found between node functional and this object feb functional kubelet reflector go object kube system storage provisioner token jhjgp failed to watch secret failed to list secret secrets storage provisioner token jhjgp is forbidden user system node functional cannot list resource secrets in api group in the namespace kube system no relationship found between node functional and this object feb functional kubelet reflector go object kube system coredns failed to watch configmap failed to list configmap configmaps coredns is forbidden user system node functional cannot list resource configmaps in api group in the namespace kube system no relationship found between node functional and this object feb functional kubelet reflector go object kube system kube proxy token failed to watch secret failed to list secret secrets kube proxy token is forbidden user system node functional cannot list resource secrets in api group in the namespace kube system no relationship found between node functional and this object feb functional kubelet reflector go object default default token failed to watch secret failed to list secret secrets default token is forbidden user system node functional cannot list resource secrets in api group in the namespace default no relationship found between node functional and 
this object feb functional kubelet kubelet go deleted mirror pod kube apiserver functional kube system because it is outdated feb functional kubelet kubelet go trying to delete pod kube apiserver functional kube system feb functional kubelet kubelet go trying to delete pod kube apiserver functional kube system storage provisioner storage provisioner go initializing the minikube storage provisioner storage provisioner go storage provisioner initialized now starting service leaderelection go attempting to acquire leader lease kube system io minikube hostpath leaderelection go successfully acquired lease kube system io minikube hostpath event go event objectreference kind endpoints namespace kube system name io minikube hostpath uid apiversion resourceversion fieldpath type normal reason leaderelection functional became leader controller go starting provisioner controller io minikube hostpath functional controller go started provisioner controller io minikube hostpath functional reflector go pkg mod io client go tools cache reflector go failed to watch persistentvolumeclaim get dial tcp connect connection refused reflector go pkg mod io client go tools cache reflector go failed to watch storageclass get dial tcp connect connection refused reflector go pkg mod io client go tools cache reflector go failed to watch persistentvolume get dial tcp connect connection refused storage provisioner storage provisioner go initializing the minikube storage provisioner storage provisioner go storage provisioner initialized now starting service leaderelection go attempting to acquire leader lease kube system io minikube hostpath stdout helpers test go dbg run minikube linux status format apiserver p functional n functional helpers test go dbg run kubectl context functional get po o jsonpath items metadata name a field selector status phase running helpers test go non running pods kube apiserver functional helpers test go post mortem describe non running pods helpers test go dbg run 
kubectl context functional describe pod kube apiserver functional helpers test go dbg non zero exit kubectl context functional describe pod kube apiserver functional exit status stderr error from server notfound pods kube apiserver functional not found stderr helpers test go kubectl context functional describe pod kube apiserver functional exit status | 0 |
108,499 | 4,346,315,163 | IssuesEvent | 2016-07-29 15:35:42 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | drake-visualizer needs test coverage in CI | configuration: mac priority: high team: software core type: continuous integration | https://github.com/RobotLocomotion/drake/issues/2731 should have been caught immediately by CI -- drake-visualizer simply failed to run on homebrew mac. We do not currently have any test coverage for running the drake-visualizer.
| 1.0 | drake-visualizer needs test coverage in CI - https://github.com/RobotLocomotion/drake/issues/2731 should have been caught immediately by CI -- drake-visualizer simply failed to run on homebrew mac. We do not currently have any test coverage for running the drake-visualizer.
| non_infrastructure | drake visualizer needs test coverage in ci should have been caught immediately by ci drake visualizer simply failed to run on homebrew mac we do not currently have any test coverage for running the drake visualizer | 0 |
259,185 | 19,589,476,823 | IssuesEvent | 2022-01-05 11:11:47 | cloudflare/cloudflare-docs | https://api.github.com/repos/cloudflare/cloudflare-docs | closed | [Images] Framework integration docs for Images | documentation product:images | ### Expected Behavior
Images to also have a "Integration with Frameworks" section like Image Resizing (https://developers.cloudflare.com/images/image-resizing/integration-with-frameworks).
### Actual Behavior
It does not have this currently.
### Section that requires update
https://developers.cloudflare.com/images/cloudflare-images
### Additional information
Now that you can serve Images from `/cdn-cgi/imagedelivery` it also deserves a page so people know how to link this with frameworks like NextJS. Obviously, the width x height doesn't really matter here but the whole "serving optimized images" of `next/image` is suitable for Images. | 1.0 | [Images] Framework integration docs for Images - ### Expected Behavior
Images to also have a "Integration with Frameworks" section like Image Resizing (https://developers.cloudflare.com/images/image-resizing/integration-with-frameworks).
### Actual Behavior
It does not have this currently.
### Section that requires update
https://developers.cloudflare.com/images/cloudflare-images
### Additional information
Now that you can serve Images from `/cdn-cgi/imagedelivery` it also deserves a page so people know how to link this with frameworks like NextJS. Obviously, the width x height doesn't really matter here but the whole "serving optimized images" of `next/image` is suitable for Images. | non_infrastructure | framework integration docs for images expected behavior images to also have a integration with frameworks section like image resizing actual behavior it does not have this currently section that requires update additional information now that you can serve images from cdn cgi imagedelivery it also deserves a page so people know how to link this with frameworks like nextjs obviously the width x height doesn t really matter here but the whole serving optimized images of next image is suitable for images | 0 |
389 | 2,676,622,781 | IssuesEvent | 2015-03-25 18:39:27 | btopro/elmsln | https://api.github.com/repos/btopro/elmsln | opened | handsfree drush-create-site issue | bug infrastructure | after running handsfree installer, it seems like you need to process running /usr/local/bin/drush-create-site/drush-create-site manually on the server and it works; if you let the crontab pick it up it fails... even tho both are root.... very odd. | 1.0 | handsfree drush-create-site issue - after running handsfree installer, it seems like you need to process running /usr/local/bin/drush-create-site/drush-create-site manually on the server and it works; if you let the crontab pick it up it fails... even tho both are root.... very odd. | infrastructure | handsfree drush create site issue after running handsfree installer it seems like you need to process running usr local bin drush create site drush create site manually on the server and it works if you let the crontab pick it up it fails even tho both are root very odd | 1 |
132,855 | 28,369,435,758 | IssuesEvent | 2023-04-12 15:54:06 | TIDES-transit/TIDES | https://api.github.com/repos/TIDES-transit/TIDES | closed | 🐛💻 – workflows not running on schema change PRs | 🐛 bug 💻 code | **Describe the problem**
For example, the validate_table_schemas workflow has never run. And validate_?? hasn't run since ??.
The workflow `on>pull_request>paths` reference `spec/.*.spec.json` but the actual schema files follow the pattern `spec/.*.schema.json`
| 1.0 | 🐛💻 – workflows not running on schema change PRs - **Describe the problem**
For example, the validate_table_schemas workflow has never run. And validate_?? hasn't run since ??.
The workflow `on>pull_request>paths` reference `spec/.*.spec.json` but the actual schema files follow the pattern `spec/.*.schema.json`
| non_infrastructure | 🐛💻 – workflows not running on schema change prs describe the problem for example the validate table schemas workflow have never run and validate hasn t run since the workflow on pull request paths reference spec spec json but the actual schema files follow the pattern spec schema json | 0 |
4,428 | 5,068,480,536 | IssuesEvent | 2016-12-24 17:34:54 | TechnionYP5777/UpAndGo | https://api.github.com/repos/TechnionYP5777/UpAndGo | closed | some communication problems | infrastructure | Hello, guys! My telephone isn't working so I can't check whatsapp or telegram.
So, if somebody wants to contact me, please, do it via git or facebook.
I'll remove this issue when I'll find a way to participate in aforementioned messengers again. | 1.0 | some communication problems - Hello, guys! My telephone isn't working so I can't check whatsapp or telegram.
So, if somebody wants to contact me, please, do it via git or facebook.
I'll remove this issue when I'll find a way to participate in aforementioned messengers again. | infrastructure | some communication problems hello guys my telephone isn t working so i can t check whatsapp or telegram so if somebody wants to contact me please do it via git or facebook i ll remove this issue when i ll find a way to participate in aforementioned messengers again | 1 |
22,880 | 15,595,279,977 | IssuesEvent | 2021-03-18 14:44:23 | RasaHQ/rasa | https://api.github.com/repos/RasaHQ/rasa | closed | Reduce warnings output in CI | area:rasa-oss :ferris_wheel: area:rasa-oss/infrastructure :bullettrain_front: effort:enable-squad/2 priority:high type:maintenance :wrench: | **Description of Problem**:
When running our CI builds there are currently a lot of warnings in the output in the CI. This makes it harder to detect errors / problems in the builds and also makes us blind to new warnings caused by our changes (broken window theory).
**Overview of the Solution**:
* I think there is a feature in `pytest` which will mark a test as failed if it emits an unhandled warning (see [here](https://github.com/pytest-dev/pytest/issues/1173#issuecomment-431247482))
**Definition of Done**:
- [ ] all expected warnings are handled in the tests using `pytest.warns`
- [ ] no unexpected warnings are emitted
| 1.0 | Reduce warnings output in CI - **Description of Problem**:
When running our CI builds there are currently a lot of warnings in the output in the CI. This makes it harder to detect errors / problems in the builds and also makes us blind to new warnings caused by our changes (broken window theory).
**Overview of the Solution**:
* I think there is a feature in `pytest` which will mark a test as failed if it emits an unhandled warning (see [here](https://github.com/pytest-dev/pytest/issues/1173#issuecomment-431247482))
**Definition of Done**:
- [ ] all expected warnings are handled in the tests using `pytest.warns`
- [ ] no unexpected warnings are emitted
| infrastructure | reduce warnings output in ci description of problem when running our ci builds there are currently a lot of warnings in the output in the ci this makes it harder to detect errors problems in the builds and also makes us blind to new warnings caused by our changes broken window theory overview of the solution i think there is a feature in pytest which will mark a test as failed if it emits an unhandled warning see definition of done all expected warnings are handled in the tests using pytest warns no unexpected warnings are emitted | 1 |
301,102 | 26,016,479,413 | IssuesEvent | 2022-12-21 08:56:26 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: cdc/ledger failed | C-test-failure O-robot O-roachtest branch-master release-blocker | roachtest.cdc/ledger [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8046095?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8046095?buildTab=artifacts#/cdc/ledger) on master @ [10266a323f94c3cf397d5de590e512d987e63e22](https://github.com/cockroachdb/cockroach/commits/10266a323f94c3cf397d5de590e512d987e63e22):
```
test artifacts and logs in: /artifacts/cdc/ledger/run_1
(test_impl.go:291).Fatal: output in run_085534.283597394_n4_workload_init_ledger: ./workload init ledger {pgurl:2} returned: COMMAND_PROBLEM: exit status 1
(test_impl.go:314).Errorf: error shutting down prometheus/grafana: context canceled
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=16</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/cdc
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*cdc/ledger.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: cdc/ledger failed - roachtest.cdc/ledger [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8046095?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8046095?buildTab=artifacts#/cdc/ledger) on master @ [10266a323f94c3cf397d5de590e512d987e63e22](https://github.com/cockroachdb/cockroach/commits/10266a323f94c3cf397d5de590e512d987e63e22):
```
test artifacts and logs in: /artifacts/cdc/ledger/run_1
(test_impl.go:291).Fatal: output in run_085534.283597394_n4_workload_init_ledger: ./workload init ledger {pgurl:2} returned: COMMAND_PROBLEM: exit status 1
(test_impl.go:314).Errorf: error shutting down prometheus/grafana: context canceled
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=16</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/cdc
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*cdc/ledger.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_infrastructure | roachtest cdc ledger failed roachtest cdc ledger with on master test artifacts and logs in artifacts cdc ledger run test impl go fatal output in run workload init ledger workload init ledger pgurl returned command problem exit status test impl go errorf error shutting down prometheus grafana context canceled parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb cdc | 0 |
273,996 | 23,801,687,167 | IssuesEvent | 2022-09-03 11:53:11 | arunmr1980/academic-integrator | https://api.github.com/repos/arunmr1980/academic-integrator | closed | Lesson plan session link getting cleared. | bug dependency for prod testing | **Login Info + Browser Details:**
http://demo.admin.greenchalk.in/academics/curriculum
**Actual Behaviour:**
While creating a lesson plan, add a link and link name, save the form,
Once we click on the edit lesson plan the link getting cleared.
**Steps to Reproduce:**
1. Navigate to LESSON PLAN
2. Click on 'View Subject Plan'
3. Create a lesson plan with a link and link name
4. Save the plan
5. Click on 'Edit Lesson Plan'
6. Check the link field
7. Link getting cleared
**Screenshot:**

| 1.0 | Lesson plan session link getting cleared. - **Login Info + Browser Details:**
http://demo.admin.greenchalk.in/academics/curriculum
**Actual Behaviour:**
While creating a lesson plan, add a link and a link name and save the form.
Once we click on 'Edit Lesson Plan', the link gets cleared.
**Steps to Reproduce:**
1. Navigate to LESSON PLAN
2. Click on 'View Subject Plan'
3. Create a lesson plan with a link and link name
4. Save the plan
5. Click on 'Edit Lesson Plan'
6. Check the link field
7. Link getting cleared
**Screenshot:**

| non_infrastructure | lesson plan session link getting cleared login info browser details actual behaviour while creating a lesson plan add a link and link name save the form once we click on the edit lesson plan the link getting cleared steps to reproduce navigate to lesson plan click on view subject plan create a lesson plan with a link and link name save the plan click on edit lesson plan check the link field link getting cleared screenshot | 0 |
332,284 | 24,340,087,051 | IssuesEvent | 2022-10-01 15:42:38 | ppap1771/Face-Recognition | https://api.github.com/repos/ppap1771/Face-Recognition | opened | Update readme after reviewing the code. | documentation good first issue Hactoberfest | The initial commit has been made and as people contribute we can add the updated features to the readme file. | 1.0 | Update readme after reviewing the code. - The initial commit has been made and as people contribute we can add the updated features to the readme file. | non_infrastructure | update readme after reviewing the code the initial commit has been made and as people contribute we can add the updated features to the readme file | 0 |
203,746 | 15,888,504,474 | IssuesEvent | 2021-04-10 07:40:08 | dankamongmen/notcurses | https://api.github.com/repos/dankamongmen/notcurses | closed | ncvisual_render ought use ncvisual_geom | bitmaps documentation enhancement | `ncvisual_geom()` allows callers to know how large their blit will be before doing it. It reproduces the calculations of `ncvisual_render()`, duplicating code (and i'm pretty sure it's broken for bitmaps). `ncvisual_render()` ought use `ncvisual_geom()` to get its calculations, ensuring that there's one source of truth.
also, it ought be possible to find out which blitter is going to be used. don't we do this somewhere? | 1.0 | ncvisual_render ought use ncvisual_geom - `ncvisual_geom()` allows callers to know how large their blit will be before doing it. It reproduces the calculations of `ncvisual_render()`, duplicating code (and i'm pretty sure it's broken for bitmaps). `ncvisual_render()` ought use `ncvisual_geom()` to get its calculations, ensuring that there's one source of truth.
also, it ought be possible to find out which blitter is going to be used. don't we do this somewhere? | non_infrastructure | ncvisual render ought use ncvisual geom ncvisual geom allows callers to know how large their blit will be before doing it it reproduces the calculations of ncvisual render duplicating code and i m pretty sure it s broken for bitmaps ncvisual render ought use ncvisual geom to get its calculations ensuring that there s one source of truth also it ought be possible to find out which blitter is going to be used don t we do this somewhere | 0 |
2,787 | 2,637,027,146 | IssuesEvent | 2015-03-10 10:11:54 | andymost/WidgetWatcher | https://api.github.com/repos/andymost/WidgetWatcher | closed | Extract page serving | Test enhancement | * Add a folder for pages
* Add a test hook for cleaning the folder (look into how this can be done for only some tests, if wrapping it in a hook does not work out)
* Wrap page serving so that serving is performed directly from the test
| 1.0 | Extract page serving - * Add a folder for pages
* Add a test hook for cleaning the folder (look into how this can be done for only some tests, if wrapping it in a hook does not work out)
* Wrap page serving so that serving is performed directly from the test
| non_infrastructure | extract page serving add a folder for pages add a test hook for cleaning the folder look into how this can be done for only some tests if wrapping it in a hook does not work out wrap page serving so that serving is performed directly from the test | 0
21,419 | 14,547,179,194 | IssuesEvent | 2020-12-15 22:29:28 | robotology/QA | https://api.github.com/repos/robotology/QA | closed | Noise insulation for power supplies | hardware infrastructure | Hi,
I'm currently trying to get a quiet noise-reducing cabinet for the iCub power supplies.
We're still having the old XFR35-35 and XFR60-46 power supplies here and I just measured
a noise level of about 55dB with a smart phone app (not sure how accurate that one is).
I was wondering if anybody had experience with this issue - I do know that the Osaka group are
using a quiet rack enclosure of a Japanese manufacturer and I have their details.
But obviously I need a European retailer and am currently speaking to the British 'The Rack People'. I'm considering the 4U Orion Acoustic Mini.
So does any of the European iCub users have experience with noise insulation for their power suppies?
And could anyone tell me
- the average power consumption of the iCub when sitting / attached to its frame and moving the arms
- the BTU rating (BTU = British Thermal Unit) of the power supplies.
I searched online, but I could only find info on the XFR35-35, and no BTU rating.
Knowing the average and/or peak power consumption might be enough information for the retailer to give me qualified support. As I'm just upgrading our whole infrastructure I can't start it up and have a look at the display of the power supplies.
Many thanks.
Frank
| 1.0 | Noise insulation for power supplies - Hi,
I'm currently trying to get a quiet noise-reducing cabinet for the iCub power supplies.
We're still having the old XFR35-35 and XFR60-46 power supplies here and I just measured
a noise level of about 55dB with a smart phone app (not sure how accurate that one is).
I was wondering if anybody had experience with this issue - I do know that the Osaka group are
using a quiet rack enclosure of a Japanese manufacturer and I have their details.
But obviously I need a European retailer and am currently speaking to the British 'The Rack People'. I'm considering the 4U Orion Acoustic Mini.
So does any of the European iCub users have experience with noise insulation for their power suppies?
And could anyone tell me
- the average power consumption of the iCub when sitting / attached to its frame and moving the arms
- the BTU rating (BTU = British Thermal Unit) of the power supplies.
I searched online, but I could only find info on the XFR35-35, and no BTU rating.
Knowing the average and/or peak power consumption might be enough information for the retailer to give me qualified support. As I'm just upgrading our whole infrastructure I can't start it up and have a look at the display of the power supplies.
Many thanks.
Frank
| infrastructure | noise insulation for power supplies hi i m currently trying to get a quiet noise reducing cabinet for the icub power supplies we re still having the old and power supplies here and i just measured a noise level of about with a smart phone app not sure how accurate that one is i was wondering if anybody had experience with this issue i do know that the osaka group are using a quiet rack enclosure of a japanese manufacturer and i have their details but obviously i need a european retailer and am currently speaking to the british the rack people i m considering the orion acoustic mini so does any of the european icub users have experience with noise insulation for their power suppies and could anyone tell me the average power consumption of the icub when sitting attached to its frame and moving the arms the btu rating btu british thermal unit of the power supplies i searched online but i could hardly only get info on the but no btu rating knowing the average and or peak power consumption might be enough information for the retailer to give me qualified support as i m just ugrading our whole infrastructure i can t start it up and have a look at the display of the power supplies many thanks frank | 1 |
32,613 | 7,552,531,868 | IssuesEvent | 2018-04-19 00:51:37 | dickschoeller/gedbrowser | https://api.github.com/repos/dickschoeller/gedbrowser | closed | Tests of API controllers, crud classes and helpers | code smell in progress | Right now the tests are:
* all through the controllers
* don't check behaviors well
Fix to:
* Test the helpers directly
* Test the CRUDs directly
* Really check the results
* Don't check the JSON from the controllers
Coverage issues:
* ApiFamily
* SaveController
* ApiSource
* GedWriter
* ApiSubmitter | 1.0 | Tests of API controllers, crud classes and helpers - Right now the tests are:
* all through the controllers
* don't check behaviors well
Fix to:
* Test the helpers directly
* Test the CRUDs directly
* Really check the results
* Don't check the JSON from the controllers
Coverage issues:
* ApiFamily
* SaveController
* ApiSource
* GedWriter
* ApiSubmitter | non_infrastructure | tests of api controllers crud classes and helpers right now the tests are all through the controllers don t check behaviors well fix to test the helpers directly test the cruds directly really check the results don t check the json from the controllers coverage issues apifamily savecontroller apisource gedwriter apisubmitter | 0 |
22,730 | 15,414,986,558 | IssuesEvent | 2021-03-05 01:32:08 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | reopened | EditorFeatures.Cocoa is missing PDBs | Area-Infrastructure untriaged | Need to ensure that EditorFeatures.Cocoa is including PDBs in the package, at least. Ideally for VS Mac if we can embed the PDBs and embed all sources, that would be even better. | 1.0 | EditorFeatures.Cocoa is missing PDBs - Need to ensure that EditorFeatures.Cocoa is including PDBs in the package, at least. Ideally for VS Mac if we can embed the PDBs and embed all sources, that would be even better. | infrastructure | editorfeatures cocoa is missing pdbs need to ensure that editorfeatures cocoa is including pdbs in the package at least ideally for vs mac if we can embed the pdbs and embed all sources that would be even better | 1 |
324,070 | 9,883,422,406 | IssuesEvent | 2019-06-24 19:21:49 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Improve debuggability of syntax errors in `/etc/zulip/settings.py` files | area: production installer priority: high | As discussed in #6993, it's common for a broken `settings.py` file to mean that Django doesn't start, and thus `/var/log/zulip/errors.log` remains empty; instead, a user needs to know to look in `/var/log/zulip/django.log` to see why Django didn't start. I think with some clever coding in `zproject/wsgi.py`, we might be able to ensure that something gets logged into `errors.log` in the event that Django can't start up due to a `settings.py` import failure.
This would be valuable to fix since users who don't see anything in an error log when the server 500s can quickly become discouraged about Zulip's quality. If fixing this in the clever way I suggested turns out to be difficult, we should instead (or in addition, perhaps) invest in documenting `django.log` as a place to look if `errors.log` is empty. | 1.0 | Improve debuggability of syntax errors in `/etc/zulip/settings.py` files - As discussed in #6993, it's common for a broken `settings.py` file to mean that Django doesn't start, and thus `/var/log/zulip/errors.log` remains empty; instead, a user needs to know to look in `/var/log/zulip/django.log` to see why Django didn't start. I think with some clever coding in `zproject/wsgi.py`, we might be able to ensure that something gets logged into `errors.log` in the event that Django can't start up due to a `settings.py` import failure.
This would be valuable to fix since users who don't see anything in an error log when the server 500s can quickly become discouraged about Zulip's quality. If fixing this in the clever way I suggested turns out to be difficult, we should instead (or in addition, perhaps) invest in documenting `django.log` as a place to look if `errors.log` is empty. | non_infrastructure | improve debuggability of syntax errors in etc zulip settings py files as discussed in it s common for a broken settings py file to mean that django doesn t start and thus var log zulip errors log remains empty instead a user needs to know to look in var log zulip django log to see why django didn t start i think with some clever coding in zproject wsgi py we might be able to ensure that something gets logged into errors log in the event that django can t start up due to a settings py import failure this would be valuable to fix since users who don t see anything in an error log when the server can quickly become discouraged about zulip s quality if fixing this in the clever way i suggested turns out to be difficult we should instead or in addition perhaps invest in documenting django log as a place to look if errors log is empty | 0 |
60,531 | 3,130,457,747 | IssuesEvent | 2015-09-09 09:31:31 | PICOGH/Webviewer | https://api.github.com/repos/PICOGH/Webviewer | opened | Replace geothermal map layers | enhancement priority: 2 (normal) Use case Leiden | Based on new data, derived from a new potential contour. I have sent this to you. | 1.0 | Replace geothermal map layers - Based on new data, derived from a new potential contour. I have sent this to you. | non_infrastructure | replace geothermal map layers based on new data derived from a new potential contour i have sent this to you | 0
10,465 | 8,575,700,862 | IssuesEvent | 2018-11-12 18:02:10 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Running performance tests ignores the xunitmethodname property | area-Infrastructure | Using either xunitoptions property with -method as shown below:
`msbuild /t:rebuildandtest /p:Performance=true /p:ConfigurationGroup=Release /p:TargetOS=Windows_NT /p:xunitoptions="-method System.Tests.Perf_Environment.ExpandEnvironmentVariables"`
or using xunitmethodname as shown below:
`msbuild /t:rebuildandtest /p:Performance=true /p:ConfigurationGroup=Release /p:TargetOS=Windows_NT /p:xunitmethodname=System.Tests.Perf_Environment.ExpandEnvironmentVariables`
Should run the single specified test but the combination of commands above will run all tests instead.
I'm creating this issue to track the problem. Will need to confirm if it has to do with the combination of using `Performance=true` along with specifying the method name.
cc: @ViktorHofer @Anipik @safern @leotsarev | 1.0 | Running performance tests ignores the xunitmethodname property - Using either xunitoptions property with -method as shown below:
`msbuild /t:rebuildandtest /p:Performance=true /p:ConfigurationGroup=Release /p:TargetOS=Windows_NT /p:xunitoptions="-method System.Tests.Perf_Environment.ExpandEnvironmentVariables"`
or using xunitmethodname as shown below:
`msbuild /t:rebuildandtest /p:Performance=true /p:ConfigurationGroup=Release /p:TargetOS=Windows_NT /p:xunitmethodname=System.Tests.Perf_Environment.ExpandEnvironmentVariables`
Should run the single specified test but the combination of commands above will run all tests instead.
I'm creating this issue to track the problem. Will need to confirm if it has to do with the combination of using `Performance=true` along with specifying the method name.
cc: @ViktorHofer @Anipik @safern @leotsarev | infrastructure | running performance tests ignores the xunitmethodname property using either xunitoptions property with method as shown below msbuild t rebuildandtest p performance true p configurationgroup release p targetos windows nt p xunitoptions method system tests perf environment expandenvironmentvariables or using xunitmethodname as shown below msbuild t rebuildandtest p performance true p configurationgroup release p targetos windows nt p xunitmethodname system tests perf environment expandenvironmentvariables should run the single specified test but the combination of commands above will run all tests instead i m creating this issue to track the problem will need to confirm if it has to do with the combination of using performance true along with specifying the method name cc viktorhofer anipik safern leotsarev | 1 |
58,525 | 3,089,700,240 | IssuesEvent | 2015-08-25 23:06:42 | google/googlemock | https://api.github.com/repos/google/googlemock | closed | Fix warning C4628 in MSVS2010 | auto-migrated OpSys-Windows Priority-Medium Type-Enhancement | ```
What steps will reproduce the problem?
1. Compile chromium with /Wall with MSVC2010 SP1.
What is the expected output? What do you see instead?
...\src\testing\gmock\include\gmock/gmock-actions.h(116): warning C4628:
digraphs not supported with -Ze. Character sequence '<:' not interpreted as
alternate token for '['
is generated.
Ref: http://msdn.microsoft.com/en-us/library/5xk7ehw0.aspx
Which version of Google Mock are you using? On what operating system?
r403
```
Original issue reported on code.google.com by `maruel@google.com` on 29 Nov 2011 at 9:22 | 1.0 | Fix warning C4628 in MSVS2010 - ```
What steps will reproduce the problem?
1. Compile chromium with /Wall with MSVC2010 SP1.
What is the expected output? What do you see instead?
...\src\testing\gmock\include\gmock/gmock-actions.h(116): warning C4628:
digraphs not supported with -Ze. Character sequence '<:' not interpreted as
alternate token for '['
is generated.
Ref: http://msdn.microsoft.com/en-us/library/5xk7ehw0.aspx
Which version of Google Mock are you using? On what operating system?
r403
```
Original issue reported on code.google.com by `maruel@google.com` on 29 Nov 2011 at 9:22 | non_infrastructure | fix warning in what steps will reproduce the problem compile chromium with wall with what is the expected output what do you see instead src testing gmock include gmock gmock actions h warning digraphs not supported with ze character sequence not interpreted as alternate token for is generated ref which version of google mock are you using on what operating system original issue reported on code google com by maruel google com on nov at | 0 |
157,492 | 13,690,768,741 | IssuesEvent | 2020-09-30 14:46:06 | AY2021S1-CS2103-T16-4/tp | https://api.github.com/repos/AY2021S1-CS2103-T16-4/tp | closed | Update developer guide. | documentation type.Documentation | update developer guide for update target user profile, user stories, value proposition, use cases, NFRs, and glossary in DeveloperGuide.md | 2.0 | Update developer guide. - update developer guide for update target user profile, user stories, value proposition, use cases, NFRs, and glossary in DeveloperGuide.md | non_infrastructure | update developer guide update developer guide for update target user profile user stories value proposition use cases nfrs and glossary in developerguide md | 0 |
612,197 | 19,006,771,060 | IssuesEvent | 2021-11-23 01:38:35 | WiIIiam278/HuskHomes2 | https://api.github.com/repos/WiIIiam278/HuskHomes2 | closed | Option to display countdown numbers as titles instead of action bar | type: feature request priority: low | > If possible, nice how could you add titles to the screen while teleporting?
Original Issue #33 - ReferTV - May 4th, 2021
| 1.0 | Option to display countdown numbers as titles instead of action bar - > If possible, nice how could you add titles to the screen while teleporting?
Original Issue #33 - ReferTV - May 4th, 2021
| non_infrastructure | option to display countdown numbers as titles instead of action bar if possible nice how could you add titles to the screen while teleporting original issue refertv may | 0 |
335,872 | 24,481,841,747 | IssuesEvent | 2022-10-08 23:46:14 | mapmapteam/mapmap | https://api.github.com/repos/mapmapteam/mapmap | opened | Discuss about the deprecation warning in the README file and update it | bug needs_documentation documentation priority_high | In dde2acf4ebcfd8ee9fce5bb6565aded8d109f73b a message has been added to the README file for the project.
See https://github.com/mapmapteam/mapmap/commit/dde2acf4ebcfd8ee9fce5bb6565aded8d109f73b#commitcomment-86226021
## Discussion
It would be best to discuss it with the other developers before committing a message like this in the main README file of the project. At least, asking for some code reviews would have been appropriate.
There are plans to fund the development efforts for MapMap, and there is a lot of interest for the project, so I would not say that the project is dead.
Therefore, I think that this message should be removed until a discussion has happened. And then, a discussion about this should be planned. | 2.0 | Discuss about the deprecation warning in the README file and update it - In dde2acf4ebcfd8ee9fce5bb6565aded8d109f73b a message has been added to the README file for the project.
See https://github.com/mapmapteam/mapmap/commit/dde2acf4ebcfd8ee9fce5bb6565aded8d109f73b#commitcomment-86226021
## Discussion
It would be best to discuss it with the other developers before committing a message like this in the main README file of the project. At least, asking for some code reviews would have been appropriate.
There are plans to fund the development efforts for MapMap, and there is a lot of interest for the project, so I would not say that the project is dead.
Therefore, I think that this message should be removed until a discussion has happened. And then, a discussion about this should be planned. | non_infrastructure | discuss about the deprecation warning in the readme file and update it in a message has been added to the readme file for the project see discussion it would be best to discuss it with the other developers before committing a message like this in the main readme file of the project at least asking for some code reviews would have been appropriate there are plans to fund the development efforts for mapmap and there is a lot of interest for the project so i would not say that the project is dead therefore i think that this message should be removed until a discussion has happened and then a discussion about this should be planned | 0 |
360,025 | 25,267,812,949 | IssuesEvent | 2022-11-16 06:44:23 | Rpc-h/RPCh | https://api.github.com/repos/Rpc-h/RPCh | closed | Spec of RPCh exit | documentation | <!--- Please DO NOT remove the automatically added 'new issue' label -->
<!--- Provide a general summary of the issue in the Title above -->
<!--
Provide a clear and concise description of what this epic achieves.
-->
### Description
RPCh exit node is connected to a locally running HOPRd node.
Its job is to listen to incoming messages, reconstruct requests, perform requests to providers and return back the response.
### Relevant issues
- create spec
### Specs
https://docs.google.com/document/d/1H52hM0utG1x-85QWWoMSvfpL9Y7rih7cpiNSj-b-RMY/edit?usp=sharing | 1.0 | Spec of RPCh exit - <!--- Please DO NOT remove the automatically added 'new issue' label -->
<!--- Provide a general summary of the issue in the Title above -->
<!--
Provide a clear and concise description of what this epic achieves.
-->
### Description
RPCh exit node is connected to a locally running HOPRd node.
Its job is to listen to incoming messages, reconstruct requests, perform requests to providers and return back the response.
### Relevant issues
- create spec
### Specs
https://docs.google.com/document/d/1H52hM0utG1x-85QWWoMSvfpL9Y7rih7cpiNSj-b-RMY/edit?usp=sharing | non_infrastructure | spec of rpch exit provide a clear and concise description of what this epic achieves description rpch exit node is connected to a locally running hoprd node its job is to listen to incoming messages reconstruct requests perform requests to providers and return back the response relevant issues create spec specs | 0 |
11,417 | 9,181,014,575 | IssuesEvent | 2019-03-05 09:13:07 | coq/coq | https://api.github.com/repos/coq/coq | closed | 32bit Windows failure to build lablgtk | kind: ci-failure kind: infrastructure platform: Windows | Link to build error log: https://coq.gitlab.io/-/coq/-/jobs/156128814/artifacts/artifacts/buildlogs/lablgtk-2.18.6-make-world_err.txt (for job https://gitlab.com/coq/coq/-/jobs/156128814).
```
flexlink -chain mingw -stack 16777216 -o "lablgtk.cmxs" "-L." "-LC:/ci/cygwin32_3421_8273/usr/i686-w64-mingw32/sys-root/mingw/libocaml" "lablgtk.cmxs.startup.o" "lablgtk.a" "-llablgtk2" "-lgtk-win32-2.0" "-lgdk-win32-2.0" "-lgdi32" "-limm32" "-lshell32" "-lole32" "-lpangocairo-1.0" "-lpangoft2-1.0" "-lharfbuzz" "-lm" "-lpangowin32-1.0" "-lgdi32" "-lusp10" "-lpango-1.0" "-lm" "-latk-1.0" "-lcairo" "-lz" "-lpixman-1" "-lfontconfig" "-lexpat" "-lfreetype" "-lbz2" "-lpng16" "-lz" "-lexpat" "-lfreetype" "-lbz2" "-lpng16" "-lz" "-lgdk_pixbuf-2.0" "-lm" "-lpng16" "-lz" "-lgio-2.0" "-lz" "-lgmodule-2.0" "-lgobject-2.0" "-lffi" "-lglib-2.0" "-lintl" "-lws2_32" "-lole32" "-lwinmm" "-lshlwapi" "-lpcre" "-lintl" "-lpcre"
The command line is too long.
** Fatal error: Error during linking
```
I don't know yet if this is a one off or a new problem because only the latest master build has failed.
I've restarted it, let's see: https://gitlab.com/coq/coq/-/jobs/156366664 | 1.0 | 32bit Windows failure to build lablgtk - Link to build error log: https://coq.gitlab.io/-/coq/-/jobs/156128814/artifacts/artifacts/buildlogs/lablgtk-2.18.6-make-world_err.txt (for job https://gitlab.com/coq/coq/-/jobs/156128814).
```
flexlink -chain mingw -stack 16777216 -o "lablgtk.cmxs" "-L." "-LC:/ci/cygwin32_3421_8273/usr/i686-w64-mingw32/sys-root/mingw/libocaml" "lablgtk.cmxs.startup.o" "lablgtk.a" "-llablgtk2" "-lgtk-win32-2.0" "-lgdk-win32-2.0" "-lgdi32" "-limm32" "-lshell32" "-lole32" "-lpangocairo-1.0" "-lpangoft2-1.0" "-lharfbuzz" "-lm" "-lpangowin32-1.0" "-lgdi32" "-lusp10" "-lpango-1.0" "-lm" "-latk-1.0" "-lcairo" "-lz" "-lpixman-1" "-lfontconfig" "-lexpat" "-lfreetype" "-lbz2" "-lpng16" "-lz" "-lexpat" "-lfreetype" "-lbz2" "-lpng16" "-lz" "-lgdk_pixbuf-2.0" "-lm" "-lpng16" "-lz" "-lgio-2.0" "-lz" "-lgmodule-2.0" "-lgobject-2.0" "-lffi" "-lglib-2.0" "-lintl" "-lws2_32" "-lole32" "-lwinmm" "-lshlwapi" "-lpcre" "-lintl" "-lpcre"
The command line is too long.
** Fatal error: Error during linking
```
I don't know yet if this is a one off or a new problem because only the latest master build has failed.
I've restarted it, let's see: https://gitlab.com/coq/coq/-/jobs/156366664 | infrastructure | windows failure to build lablgtk link to build error log for job flexlink chain mingw stack o lablgtk cmxs l lc ci usr sys root mingw libocaml lablgtk cmxs startup o lablgtk a lgtk lgdk lpangocairo lharfbuzz lm lpango lm latk lcairo lz lpixman lfontconfig lexpat lfreetype lz lexpat lfreetype lz lgdk pixbuf lm lz lgio lz lgmodule lgobject lffi lglib lintl lwinmm lshlwapi lpcre lintl lpcre the command line is too long fatal error error during linking i don t know yet if this is a one off or a new problem because only the latest master build has failed i ve restarted it let s see | 1 |
176,139 | 6,556,844,064 | IssuesEvent | 2017-09-06 15:22:02 | byu-oit/home_d8 | https://api.github.com/repos/byu-oit/home_d8 | closed | Make sure redirects are pointing from old site to new | Top Priority | Pages include
1. http://home.byu.edu/home/content/directories - going to https://lambda.byu.edu/ae/prod/person/cgi/personLookup.cgi?showEmpStd=S directly now (per our discussion)
2. https://home.byu.edu/home/colleges - going to our Colleges & Department page
3. http://home.byu.edu/home/content/libraries - going to http://www-test.byu.edu/byu-libraries
https://home.byu.edu/home/content/galleries-and-museums goes to http://www-test.byu.edu/galleries-and-museums
4. https://home.byu.edu/webapp/mymap/register.htm we can use the url: https://y.byu.edu/ry/ae/prod/mymap/cgi/register.cgi which still prompts for login and then takes them to their registration screen. | 1.0 | Make sure redirects are pointing from old site to new - Pages include
1. http://home.byu.edu/home/content/directories - going to https://lambda.byu.edu/ae/prod/person/cgi/personLookup.cgi?showEmpStd=S directly now (per our discussion)
2. https://home.byu.edu/home/colleges - going to our Colleges & Department page
3. http://home.byu.edu/home/content/libraries - going to http://www-test.byu.edu/byu-libraries
https://home.byu.edu/home/content/galleries-and-museums goes to http://www-test.byu.edu/galleries-and-museums
4. https://home.byu.edu/webapp/mymap/register.htm we can use the url: https://y.byu.edu/ry/ae/prod/mymap/cgi/register.cgi which still prompts for login and then takes them to their registration screen. | non_infrastructure | make sure redirects are pointing from old site to new pages include going to directly now per our discussion going to our colleges department page going to goes to we can use the url which still prompts for login and then takes them to their registration screen | 0 |
33,237 | 27,323,154,215 | IssuesEvent | 2023-02-24 22:03:06 | NAnt2/NAnt2 | https://api.github.com/repos/NAnt2/NAnt2 | closed | [NAntContrib] Mono build fails at MSI and SourceSafe tasks with mono 3.2.3++ | infrastructure | <a href="https://github.com/dguder"><img src="https://avatars3.githubusercontent.com/u/395019?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [dguder](https://github.com/dguder)**
_Saturday Jan 31, 2015 at 20:57 GMT_
_Originally opened as https://github.com/nant/nantcontrib/issues/36_
----
Since Mono 3.2.3 the build fails with error CS0571, where CSC has no issues about this code. Since MSI and SourceSafe is not so important at mono on linux these tasks might be dropped on mono build
| 1.0 | [NAntContrib] Mono build fails at MSI and SourceSafe tasks with mono 3.2.3++ - <a href="https://github.com/dguder"><img src="https://avatars3.githubusercontent.com/u/395019?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [dguder](https://github.com/dguder)**
_Saturday Jan 31, 2015 at 20:57 GMT_
_Originally opened as https://github.com/nant/nantcontrib/issues/36_
----
Since Mono 3.2.3 the build fails with error CS0571, where CSC has no issues about this code. Since MSI and SourceSafe is not so important at mono on linux these tasks might be dropped on mono build
| infrastructure | mono build fails at msi and sourcesafe tasks with mono issue by saturday jan at gmt originally opened as since mono the build fails with error where csc has no issues about this code since msi and sourcesafe is not so important at mono on linux these taks might be dropped on mono build | 1 |
34,973 | 30,644,041,626 | IssuesEvent | 2023-07-25 02:03:50 | ministryofjustice/data-platform | https://api.github.com/repos/ministryofjustice/data-platform | closed | 🎨 Refactor GitHub management | identity-access-management Data Platform Core Infrastructure stale | Currently it's a bit complicated to decipher what teams and repos are created, and how things relate
This ⬇️ is **WIP**
| Name | Parent | Exists |
|:---:|:---:|:---:|
| data-platform | n/a | [Yes](https://github.com/orgs/ministryofjustice/teams/data-platform) - created manually |
| data-platform-architects | data-platform | No |
| data-platform-cloud-platform-development | data-platform | [Yes](https://github.com/orgs/ministryofjustice/teams/data-platform-cloud-platform-development) - created manually |
| data-platform-core-infrastructure | data-platform | [Yes](https://github.com/orgs/ministryofjustice/teams/data-platform-core-infrastructure) - created manually |
| 1.0 | 🎨 Refactor GitHub management - Currently it's a bit complicated to decipher what teams and repos are created, and how things relate
This ⬇️ is **WIP**
| Name | Parent | Exists |
|:---:|:---:|:---:|
| data-platform | n/a | [Yes](https://github.com/orgs/ministryofjustice/teams/data-platform) - created manually |
| data-platform-architects | data-platform | No |
| data-platform-cloud-platform-development | data-platform | [Yes](https://github.com/orgs/ministryofjustice/teams/data-platform-cloud-platform-development) - created manually |
| data-platform-core-infrastructure | data-platform | [Yes](https://github.com/orgs/ministryofjustice/teams/data-platform-core-infrastructure) - created manually |
| infrastructure | 🎨 refactor github management currently it s a bit complicatedto decipher what teams and repos are created and how things relate this ⬇️ is wip name parent exists data platform n a created manually data platform architects data platform no data platform cloud platform development data platform created manually data platform core infrastructure data platform created manually | 1 |
7,238 | 6,836,115,400 | IssuesEvent | 2017-11-10 05:40:59 | moment/luxon | https://api.github.com/repos/moment/luxon | opened | CI not failing when tests fail | infrastructure | Noticed a couple of times that CI no longer fails if a test fails. Maybe an issue with `gulp-jest` not making `gulp` return with a nonzero exit code? | 1.0 | CI not failing when tests fail - Noticed a couple of times that CI no longer fails if a test fails. Maybe an issue with `gulp-jest` not making `gulp` return with a nonzero exit code? | infrastructure | ci not failing when tests fail noticed a couple of times that ci no longer fails if a test fails maybe an issue with gulp jest not making gulp return with a nonzero exit code | 1 |
164,310 | 6,223,914,481 | IssuesEvent | 2017-07-10 13:10:13 | GluuFederation/oxAuth | https://api.github.com/repos/GluuFederation/oxAuth | closed | Custom script migration to conform 3.1.x | enhancement High priority | We need to replace all Seam classes imports in custom scripts with equivalents in 3.1.0 code.
Also we need to replace page.xml files with JSF 2.2 faces.xml files (more info in issue #501)
This issue depends on #501 and #502 | 1.0 | Custom script migration to conform 3.1.x - We need to replace all Seam classes imports in custom scripts with equivalents in 3.1.0 code.
Also we need to replace page.xml files with JSF 2.2 faces.xml files (more info in issue #501)
This issue depends on #501 and #502 | non_infrastructure | custom script migration to conform x we need to replace all seam classes imports in custom scripts with equivalents in code also we need to replace page xml files with jsf faces xml files more info in issue this issue depends on and | 0 |
7,118 | 10,468,853,406 | IssuesEvent | 2019-09-22 16:34:51 | WSU-4110/cOUT | https://api.github.com/repos/WSU-4110/cOUT | opened | NR 3: Usability of Application | Features Nonfunctional Requirements | |NR 3: Usability of Application|
|---|
|**Goal**: The application must be easy to use.
**Stakeholders:** Students and Teachers|
| **Description:** The application must be user-friendly by being easy to use and quick to learn. The application should be simple for everyone to use as students can range from any age (Undergraduates, Graduate Students, PhD Students) and teachers with little technical backgrounds can also use the application. The application must also be effective to achieve the goal of a simple messaging forum on both the student and teacher side. It must also be useful in that all requirements of basic messaging as well as anonymity should be accomplished with optimal efficiency. |
| **Origin:** Anika Taufiq|
| **Version:** 1.0 |
|**Date:** 09/21/2019
| **Priority:** 3 |
| 1.0 | NR 3: Usability of Application - |NR 3: Usability of Application|
|---|
|**Goal**: The application must be easy to use.
**Stakeholders:** Students and Teachers|
| **Description:** The application must be user-friendly by being easy to use and quick to learn. The application should be simple for everyone to use as students can range from any age (Undergraduates, Graduate Students, PhD Students) and teachers with little technical backgrounds can also use the application. The application must also be effective to achieve the goal of a simple messaging forum on both the student and teacher side. It must also be useful in that all requirements of basic messaging as well as anonymity should be accomplished with optimal efficiency. |
| **Origin:** Anika Taufiq|
| **Version:** 1.0 |
|**Date:** 09/21/2019
| **Priority:** 3 |
| non_infrastructure | nr usability of application nr usability of application goal the application must be easy to use stakeholders students and teachers description the application must be user friendly by being easy to use and quick to learn the application should be simple for everyone to use as students can range from any age undergraduates graduate students phd students and teachers with little technical backgrounds can also use the application the application must also be effective to achieve the goal of a simple messaging forum on both the student and teacher side it must also be useful in that all requirements of basic messaging as well as anonymity should be accomplished with optimal efficiency origin anika taufiq version date priority | 0 |
686,881 | 23,507,525,505 | IssuesEvent | 2022-08-18 13:49:55 | twisted/twisted | https://api.github.com/repos/twisted/twisted | closed | TypeError: 'DelayedCall' object is not iterable | core bug priority-normal new | |<img alt="allenap's avatar" src="https://avatars.githubusercontent.com/u/0?s=50" width="50" height="50">| allenap reported|
|-|-|
|Trac ID|trac#8307|
|Type|defect|
|Created|2016-04-26 16:06:50Z|
```
Python 3.5.1+ (default, Mar 30 2016, 22:46:26)
[GCC 5.3.1 20160330] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from twisted.internet.base import DelayedCall
>>> dc = DelayedCall(1, lambda: None, (), {}, lambda dc: None, lambda dc: None)
>>> dc.debug = True
>>> dc.cancel()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gavin/GitHub/twisted/twisted/internet/base.py", line 94, in cancel
self._str = bytes(self)
TypeError: 'DelayedCall' object is not iterable
```
```
$ tail -n1 twisted/_version.py
version = versions.Version('twisted', 16, 1, 1)
```
<details><summary>Searchable metadata</summary>
```
trac-id__8307 8307
type__defect defect
reporter__allenap allenap
priority__normal normal
milestone__None None
branch__
branch_author__
status__new new
resolution__None None
component__core core
keywords__None None
time__1461686810312194 1461686810312194
changetime__1462290490013595 1462290490013595
version__None None
owner__None None
```
</details>
| 1.0 | TypeError: 'DelayedCall' object is not iterable - |<img alt="allenap's avatar" src="https://avatars.githubusercontent.com/u/0?s=50" width="50" height="50">| allenap reported|
|-|-|
|Trac ID|trac#8307|
|Type|defect|
|Created|2016-04-26 16:06:50Z|
```
Python 3.5.1+ (default, Mar 30 2016, 22:46:26)
[GCC 5.3.1 20160330] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from twisted.internet.base import DelayedCall
>>> dc = DelayedCall(1, lambda: None, (), {}, lambda dc: None, lambda dc: None)
>>> dc.debug = True
>>> dc.cancel()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gavin/GitHub/twisted/twisted/internet/base.py", line 94, in cancel
self._str = bytes(self)
TypeError: 'DelayedCall' object is not iterable
```
```
$ tail -n1 twisted/_version.py
version = versions.Version('twisted', 16, 1, 1)
```
<details><summary>Searchable metadata</summary>
```
trac-id__8307 8307
type__defect defect
reporter__allenap allenap
priority__normal normal
milestone__None None
branch__
branch_author__
status__new new
resolution__None None
component__core core
keywords__None None
time__1461686810312194 1461686810312194
changetime__1462290490013595 1462290490013595
version__None None
owner__None None
```
</details>
| non_infrastructure | typeerror delayedcall object is not iterable allenap reported trac id trac type defect created python default mar on linux type help copyright credits or license for more information from twisted internet base import delayedcall dc delayedcall lambda none lambda dc none lambda dc none dc debug true dc cancel traceback most recent call last file line in file home gavin github twisted twisted internet base py line in cancel self str bytes self typeerror delayedcall object is not iterable tail twisted version py version versions version twisted searchable metadata trac id type defect defect reporter allenap allenap priority normal normal milestone none none branch branch author status new new resolution none none component core core keywords none none time changetime version none none owner none none | 0 |
22,116 | 6,229,408,619 | IssuesEvent | 2017-07-11 03:45:44 | XceedBoucherS/TestImport5 | https://api.github.com/repos/XceedBoucherS/TestImport5 | closed | DateTimePicker: Date cannot be entered by keyboard | CodePlex | <b>Philvx[CodePlex]</b> <br />After entering the first number the Cursor is automatically moved to the beginning and it is impossible to enter any more numbers. The attached video shows this behavior. After clicking the part of the date I wanted to change I entered a single number
which you cannot see unfortunately. But at least you can see the cursor jumping to the front immediately upon my key press making further input impossible.I am using version 1.8 (NuGet package) of the Extended WPF Toolkit. This problem did not exist in version
1.7.
| 1.0 | DateTimePicker: Date cannot be entered by keyboard - <b>Philvx[CodePlex]</b> <br />After entering the first number the Cursor is automatically moved to the beginning and it is impossible to enter any more numbers.The attached videos shows this behavior. After clicking the part of the date I wanted to change I entered a single number
which you cannot see unfortunately. But at least you can see the cursor jumping to the front immediately upon my key press making further input impossible.I am using version 1.8 (NuGet package) of the Extended WPF Toolkit. This problem did not exist in version
1.7.
| non_infrastructure | datetimepicker date cannot be entered by keyboard philvx after entering the first number the cursor is automatically moved to the beginning and it is impossible to enter any more numbers the attached videos shows this behavior after clicking the part of the date i wanted to change i entered a single number which you cannot see unfortunately but at least you can see the cursor jumping to the front immediately upon my key press making further input impossible i am using version nuget package of the extended wpf toolkit this problem did not exist in version | 0 |
394,148 | 11,632,690,868 | IssuesEvent | 2020-02-28 06:04:28 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 satging-1337] Web Election: no candidates for title election | Category: Web Priority: High Status: Fixed | 1. Create title
2. Start Election for title

3. Enter the election like a candidate


4. Open site for voting
No candidates

| 1.0 | [0.9.0 satging-1337] Web Election: no candidates for title election - 1. Create title
2. Start Election for title

3. Enter the election like a candidate


4. Open site for voting
No candidates

| non_infrastructure | web election no candidates for title election create title start election for title enter the election like a candidate open site for voting no candidates | 0 |
8,284 | 3,703,067,419 | IssuesEvent | 2016-02-29 19:04:08 | stkent/amplify | https://api.github.com/repos/stkent/amplify | closed | Install time doesn't seem to be working correctly | bug code difficulty-medium | After integrating Amplify and running it with the default options it seems to be prompting right away. The desired behavior should wait one week before prompting. | 1.0 | Install time doesn't seem to be working correctly - After integrating Amplify and running it with the default options it seems to be prompting right away. The desired behavior should wait one week before prompting. | non_infrastructure | install time doesn t seem to be working correctly after integrating amplify and running it with the default options it seems to be prompting right away the desired behavior should wait one week before prompting | 0 |
73,986 | 19,916,399,747 | IssuesEvent | 2022-01-25 23:22:09 | o3de/o3de | https://api.github.com/repos/o3de/o3de | closed | [Pre Release] Ensure version number updates to CMake parameters are complete | sig/build | Ensure the Build SIG updates CMake parameters with the new version number.
- Successor task: [#7018]
Discord: [#sig-build](https://discord.com/channels/805939474655346758/816043576034328636)
Build SIG meetings: [O3DE Calendar](https://lists.o3de.org/g/o3de-calendar/calendar)
Build SIG meeting agendas: [GHI](https://github.com/o3de/sig-build/issues) | 1.0 | [Pre Release] Ensure version number updates to CMake parameters are complete - Ensure the Build SIG updates CMake parameters with the new version number.
- Successor task: [#7018]
Discord: [#sig-build](https://discord.com/channels/805939474655346758/816043576034328636)
Build SIG meetings: [O3DE Calendar](https://lists.o3de.org/g/o3de-calendar/calendar)
Build SIG meeting agendas: [GHI](https://github.com/o3de/sig-build/issues) | non_infrastructure | ensure version number updates to cmake parameters are complete ensure the build sig updates cmake parameters with the new version number successor task discord build sig meetings build sig meeting agendas | 0 |
24,076 | 16,823,323,059 | IssuesEvent | 2021-06-17 15:23:15 | emory-libraries/blacklight-catalog | https://api.github.com/repos/emory-libraries/blacklight-catalog | closed | Create aws cloudwatch alerts for Solr backups | Infrastructure | Create alerting for backup failures. Alerts should route to #dlp-alerts in the EUL Slack workspace. | 1.0 | Create aws cloudwatch alerts for Solr backups - Create alerting for backup failures. Alerts should route to #dlp-alerts in the EUL Slack workspace. | infrastructure | create aws cloudwatch alerts for solr backups create alerting for backup failures alerts should route to dlp alerts in the eul slack workspace | 1 |
7,610 | 7,018,205,337 | IssuesEvent | 2017-12-21 12:48:36 | SatelliteQE/robottelo | https://api.github.com/repos/SatelliteQE/robottelo | closed | Some tests deselected even if they have no bz skip decorator | 6.3 Bug High Infrastructure | Note:
tests.foreman.api.test_docker.test_positive_update_url
log deselection reported 2 times
this test has no bz skip
seems the get_func_name in helpers do not take in consideration the class name
```
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_docker.test_positive_update_url
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_docker.test_positive_publish_with_docker_repo_composite
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_docker.test_positive_create_using_cv
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_docker.test_positive_read_container_log
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_docker.test_positive_update_url
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_hostgroup.test_positive_update_arch
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_hostgroup.test_positive_update_content_source
``` | 1.0 | Some tests deselected even if they have no bz skip decorator - Note:
tests.foreman.api.test_docker.test_positive_update_url
log deselection reported 2 times
this test has no bz skip
seems the get_func_name in helpers do not take in consideration the class name
```
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_docker.test_positive_update_url
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_docker.test_positive_publish_with_docker_repo_composite
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_docker.test_positive_create_using_cv
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_docker.test_positive_read_container_log
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_docker.test_positive_update_url
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_hostgroup.test_positive_update_arch
2017-12-15 20:59:59 - conftest - DEBUG - Deselected test tests.foreman.api.test_hostgroup.test_positive_update_content_source
``` | infrastructure | some tests deselected even if they have no bz skip decorator note tests foreman api test docker test positive update url log deselection reported times this test has no bz skip seems the get func name in helpers do not take in consideration the class name conftest debug deselected test tests foreman api test docker test positive update url conftest debug deselected test tests foreman api test docker test positive publish with docker repo composite conftest debug deselected test tests foreman api test docker test positive create using cv conftest debug deselected test tests foreman api test docker test positive read container log conftest debug deselected test tests foreman api test docker test positive update url conftest debug deselected test tests foreman api test hostgroup test positive update arch conftest debug deselected test tests foreman api test hostgroup test positive update content source | 1 |
13,312 | 10,198,615,589 | IssuesEvent | 2019-08-13 06:04:52 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | MultiFieldMultiZoneManagementBasic.apsimx is very slow to run | interface/infrastructure refactor | C:\ApsimX\Tests\Simulation\MultiZoneManagement\MultiFieldMultiZoneManagementBasic.apsimx
This file is very slow to run. It has many zones. After profiling it seems that it spends about 6 minutes in one method connecting events.
Events.cs: Lines 36-46.
| 1.0 | MultiFieldMultiZoneManagementBasic.apsimx is very slow to run - C:\ApsimX\Tests\Simulation\MultiZoneManagement\MultiFieldMultiZoneManagementBasic.apsimx
This file is very slow to run. It has many zones. After profiling it seems that it spends about 6 minutes in one method connecting events.
Events.cs: Lines 36-46.
| infrastructure | multifieldmultizonemanagementbasic apsimx is very slow to run c apsimx tests simulation multizonemanagement multifieldmultizonemanagementbasic apsimx this file is very slow to run it has many zones after profiling it seems that it spends about minutes in one method connecting events events cs lines | 1 |
11,120 | 8,951,974,688 | IssuesEvent | 2019-01-25 15:22:39 | stiftungswo/izivi | https://api.github.com/repos/stiftungswo/izivi | closed | MobX und Formik einbauen | infrastructure | Um das Handling der Frontend-Forms zu vereinfachen, sollen wie im neuen Dime auch Formik und MobX eingebaut werden.
Man könnte mit MobX auch #137 umsetzen. | 1.0 | MobX und Formik einbauen - Um das Handling der Frontend-Forms zu vereinfachen, sollen wie im neuen Dime auch Formik und MobX eingebaut werden.
Man könnte mit MobX auch #137 umsetzen. | infrastructure | mobx und formik einbauen um das handling der frontend forms zu vereinfachen sollen wie im neuen dime auch formik und mobx eingebaut werden man könnte mit mobx auch umsetzen | 1 |
4,450 | 5,095,343,913 | IssuesEvent | 2017-01-03 14:56:19 | hzi-braunschweig/SORMAS-Open | https://api.github.com/repos/hzi-braunschweig/SORMAS-Open | opened | Custom field to select a case, contact or event | Infrastructure optional sormas-ui | E.g. when creating a new task or contact.
Should come with filters and more information (Name, Age, LGA, Classification, Report date) so the supervisor can deal with all the data. | 1.0 | Custom field to select a case, contact or event - E.g. when creating a new task or contact.
Should come with filters and more information (Name, Age, LGA, Classification, Report date) so the supervisor can deal with all the data. | infrastructure | custom field to select a case contact or event e g when creating a new task or contact should come with filters and more information name age lga classification report date so the supervisor can deal with all the data | 1 |
755,206 | 26,420,965,977 | IssuesEvent | 2023-01-13 20:25:54 | redhat-developer/vscode-openshift-tools | https://api.github.com/repos/redhat-developer/vscode-openshift-tools | closed | Git repository validation never ends on Mac M1 | priority/critical os/darwin kind/bug arch/arm64 | It happens because macos-release package used in @octokit/rest v16.30.1 fails to detect macos release name and exception is not handled properly on git import vebview side. | 1.0 | Git repository validation never ends on Mac M1 - It happens because macos-release package used in @octokit/rest v16.30.1 fails to detect macos release name and exception is not handled properly on git import vebview side. | non_infrastructure | git repository validation never ends on mac it happens because macos release package used in octokit rest fails to detect macos release name and exception is not handled properly on git import vebview side | 0 |
17,635 | 12,488,007,785 | IssuesEvent | 2020-05-31 12:12:16 | AdamsLair/duality-docs | https://api.github.com/repos/AdamsLair/duality-docs | opened | Transform informative old forum postings into docs pages | Infrastructure Page Request Task | ### Summary
Since the [old forum](https://forum.duality2d.net/) will [go offline](https://github.com/AdamsLair/duality/issues/707#issuecomment-636461203) at some point, we should consider to rescue any worthwhile content that might still be in there.
### Analysis
- While a lot of info on the forum is quite old and may be outdated, there may be topics where it's still the exclusive (persistent) source of documentation in the forum of support thread postings.
- As a first step, we could gather links to the topics that contain valuable info that is not otherwise documented.
- In a second step, we can then aggregate them into page request issues, with a backwards link here to keep an overview.
- Writing those pages will not be a simple copy-paste operation, but could be a great opportunity to extend the current docs on topics that have been asked in the past. | 1.0 | Transform informative old forum postings into docs pages - ### Summary
Since the [old forum](https://forum.duality2d.net/) will [go offline](https://github.com/AdamsLair/duality/issues/707#issuecomment-636461203) at some point, we should consider to rescue any worthwhile content that might still be in there.
### Analysis
- While a lot of info on the forum is quite old and may be outdated, there may be topics where it's still the exclusive (persistent) source of documentation in the forum of support thread postings.
- As a first step, we could gather links to the topics that contain valuable info that is not otherwise documented.
- In a second step, we can then aggregate them into page request issues, with a backwards link here to keep an overview.
- Writing those pages will not be a simple copy-paste operation, but could be a great opportunity to extend the current docs on topics that have been asked in the past. | infrastructure | transform informative old forum postings into docs pages summary since the will at some point we should consider to rescue any worthwhile content that might still be in there analysis while a lot of info on the forum is quite old and may be outdated there may be topics where it s still the exclusive persistent source of documentation in the forum of support thread postings as a first step we could gather links to the topics that contain valuable info that is not otherwise documented in a second step we can then aggregate them into page request issues with a backwards link here to keep an overview writing those pages will not be a simple copy paste operation but could be a great opportunity to extend the current docs on topics that have been asked in the past | 1 |
229,003 | 17,497,071,348 | IssuesEvent | 2021-08-10 02:54:37 | VikeLabs/courseup | https://api.github.com/repos/VikeLabs/courseup | closed | Community Contributions Documentation | documentation | Add documentation to the appropriate places (README etc.) about how to contribute code to this project. | 1.0 | Community Contributions Documentation - Add documentation to the appropriate places (README etc.) about how to contribute code to this project. | non_infrastructure | community contributions documentation add documentation to the appropriate places readme etc about how to contribute code to this project | 0 |
8,855 | 7,698,021,860 | IssuesEvent | 2018-05-18 21:04:23 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | test-backend argument parsing doesn't work with options at end | area: testing-infrastructure bug in progress | I've been finding this pretty annoying recently:
```
(zulip-py3-venv) tabbott@zaset:~/zulip$ test-backend test_decorators --coverage
-- Running tests in serial mode.
Traceback (most recent call last):
File "/home/tabbott/zulip/tools/test-backend", line 366, in <module>
failures, failed_tests = test_runner.run_tests(suites, full_suite=full_suite)
File "/home/tabbott/zulip/zerver/lib/test_runner.py", line 459, in run_tests
self.test_imports(test_labels, suite)
File "/home/tabbott/zulip/zerver/lib/test_runner.py", line 435, in test_imports
check_import_error(test_name)
File "/home/tabbott/zulip/zerver/lib/test_runner.py", line 362, in check_import_error
raise exc from exc # Disable exception chaining in Python 3.
File "/home/tabbott/zulip/zerver/lib/test_runner.py", line 360, in check_import_error
__import__(test_name)
ImportError: No module named '--coverage'
``` | 1.0 | test-backend argument parsing doesn't work with options at end - I've been finding this pretty annoying recently:
```
(zulip-py3-venv) tabbott@zaset:~/zulip$ test-backend test_decorators --coverage
-- Running tests in serial mode.
Traceback (most recent call last):
File "/home/tabbott/zulip/tools/test-backend", line 366, in <module>
failures, failed_tests = test_runner.run_tests(suites, full_suite=full_suite)
File "/home/tabbott/zulip/zerver/lib/test_runner.py", line 459, in run_tests
self.test_imports(test_labels, suite)
File "/home/tabbott/zulip/zerver/lib/test_runner.py", line 435, in test_imports
check_import_error(test_name)
File "/home/tabbott/zulip/zerver/lib/test_runner.py", line 362, in check_import_error
raise exc from exc # Disable exception chaining in Python 3.
File "/home/tabbott/zulip/zerver/lib/test_runner.py", line 360, in check_import_error
__import__(test_name)
ImportError: No module named '--coverage'
``` | infrastructure | test backend argument parsing doesn t work with options at end i ve been finding this pretty annoying recently zulip venv tabbott zaset zulip test backend test decorators coverage running tests in serial mode traceback most recent call last file home tabbott zulip tools test backend line in failures failed tests test runner run tests suites full suite full suite file home tabbott zulip zerver lib test runner py line in run tests self test imports test labels suite file home tabbott zulip zerver lib test runner py line in test imports check import error test name file home tabbott zulip zerver lib test runner py line in check import error raise exc from exc disable exception chaining in python file home tabbott zulip zerver lib test runner py line in check import error import test name importerror no module named coverage | 1 |
21,389 | 14,542,313,499 | IssuesEvent | 2020-12-15 15:34:17 | robotology/QA | https://api.github.com/repos/robotology/QA | closed | Could not find a package configuration file provided by "YARP_robottestingframework" | infrastructure software | Hi dear,
I am trying to install using "robotology_superbuild" in Ubuntu 18.04, but during **make** I am encountered with the following error. Please help to sort it out.
[ 81%] Performing configure step for 'icub-tests'
Not searching for unused variables given on the command line.
loading initial cache file /home/wasif/robotology-superbuild/build/robotology/icub-tests/CMakeFiles/YCMTmp/icub-tests-cache-Release.cmake
CMake Error at /home/wasif/opt/yarp/lib/cmake/YARP/YARPConfig.cmake:183 (find_package):
Could not find a package configuration file provided by
"YARP_robottestingframework" with any of the following names:
YARP_robottestingframeworkConfig.cmake
yarp_robottestingframework-config.cmake
Add the installation prefix of "YARP_robottestingframework" to
CMAKE_PREFIX_PATH or set "YARP_robottestingframework_DIR" to a directory
containing one of the above files. If "YARP_robottestingframework"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
CMakeLists.txt:25 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/wasif/robotology-superbuild/build/robotology/icub-tests/CMakeFiles/CMakeOutput.log".
CMakeFiles/icub-tests.dir/build.make:101: recipe for target 'robotology/icub-tests/CMakeFiles/YCMStamp/icub-tests-configure' failed
make[2]: *** [robotology/icub-tests/CMakeFiles/YCMStamp/icub-tests-configure] Error 1
CMakeFiles/Makefile2:1173: recipe for target 'CMakeFiles/icub-tests.dir/all' failed
make[1]: *** [CMakeFiles/icub-tests.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
| 1.0 | Could not find a package configuration file provided by "YARP_robottestingframework" - Hi dear,
I am trying to install using "robotology_superbuild" in Ubuntu 18.04, but during **make** I am encountered with the following error. Please help to sort it out.
[ 81%] Performing configure step for 'icub-tests'
Not searching for unused variables given on the command line.
loading initial cache file /home/wasif/robotology-superbuild/build/robotology/icub-tests/CMakeFiles/YCMTmp/icub-tests-cache-Release.cmake
CMake Error at /home/wasif/opt/yarp/lib/cmake/YARP/YARPConfig.cmake:183 (find_package):
Could not find a package configuration file provided by
"YARP_robottestingframework" with any of the following names:
YARP_robottestingframeworkConfig.cmake
yarp_robottestingframework-config.cmake
Add the installation prefix of "YARP_robottestingframework" to
CMAKE_PREFIX_PATH or set "YARP_robottestingframework_DIR" to a directory
containing one of the above files. If "YARP_robottestingframework"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
CMakeLists.txt:25 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/wasif/robotology-superbuild/build/robotology/icub-tests/CMakeFiles/CMakeOutput.log".
CMakeFiles/icub-tests.dir/build.make:101: recipe for target 'robotology/icub-tests/CMakeFiles/YCMStamp/icub-tests-configure' failed
make[2]: *** [robotology/icub-tests/CMakeFiles/YCMStamp/icub-tests-configure] Error 1
CMakeFiles/Makefile2:1173: recipe for target 'CMakeFiles/icub-tests.dir/all' failed
make[1]: *** [CMakeFiles/icub-tests.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
| infrastructure | could not find a package configuration file provided by yarp robottestingframework hi dear i am trying to install using robotology superbuild in ubuntu but during make i am encountered with the following error please help to sort it out performing configure step for icub tests not searching for unused variables given on the command line loading initial cache file home wasif robotology superbuild build robotology icub tests cmakefiles ycmtmp icub tests cache release cmake cmake error at home wasif opt yarp lib cmake yarp yarpconfig cmake find package could not find a package configuration file provided by yarp robottestingframework with any of the following names yarp robottestingframeworkconfig cmake yarp robottestingframework config cmake add the installation prefix of yarp robottestingframework to cmake prefix path or set yarp robottestingframework dir to a directory containing one of the above files if yarp robottestingframework provides a separate development package or sdk be sure it has been installed call stack most recent call first cmakelists txt find package configuring incomplete errors occurred see also home wasif robotology superbuild build robotology icub tests cmakefiles cmakeoutput log cmakefiles icub tests dir build make recipe for target robotology icub tests cmakefiles ycmstamp icub tests configure failed make error cmakefiles recipe for target cmakefiles icub tests dir all failed make error makefile recipe for target all failed make error | 1 |
24,175 | 16,987,857,748 | IssuesEvent | 2021-06-30 16:19:52 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Build failed: Validate-DotNet/main #runtime-95694-.NET6Preview6 | area-Infrastructure untriaged | Build [#runtime-95694-.NET6Preview6](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=1210240) partiallySucceeded
## :warning: : internal / Validate-DotNet partiallySucceeded
### Summary
**Finished** - Tue, 29 Jun 2021 04:03:11 GMT
**Duration** - 298 minutes
**Requested for** - DotNet Bot
**Reason** - manual
### Details
#### Validation Ring
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1210240/logs/252) - (NETCORE_ENGINEERING_TELEMETRY=CheckSymbols) Missing symbols for 2 modules in the package D:\workspace\_work\1\a\signed\shipping\assets\symbols\Microsoft.NETCore.App.Runtime.win-arm.6.0.0-preview.6.21325.12.symbols.nupkg
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1210240/logs/252) - (NETCORE_ENGINEERING_TELEMETRY=CheckSymbols) Missing symbols for 2 modules in the package D:\workspace\_work\1\a\signed\shipping\assets\symbols\Microsoft.NETCore.App.Runtime.win-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1210240/logs/252) - (NETCORE_ENGINEERING_TELEMETRY=CheckSymbols) Symbols missing for 2/308 packages
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1210240/logs/252) - PowerShell exited with code '1'.
#### Required Validation Ring
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1210240/logs/237) - Number of checksums and assets don't match. Checksums: 182. Assets: 786. Assets with no corresponding checksum are:
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.runtime.native.System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.runtime.native.System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Runtime.CompilerServices.Unsafe.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Security.Cryptography.Pkcs.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Security.Cryptography.ProtectedData.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Security.Cryptography.Xml.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Security.Permissions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Diagnostics.DiagnosticSource.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Data.OleDb.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Data.Odbc.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Configuration.ConfigurationManager.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
Runtime/6.0.0-preview.6.21325.12/runtime-productVersion.txt
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Net.Http.WinHttpHandler.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Net.Http.Json.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Reflection.Context.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Numerics.Tensors.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.NETCore.App.Runtime.Mono.LLVM.osx-x64
Microsoft.NETCore.App.Runtime.Mono.osx-arm64
Microsoft.NETCore.App.Runtime.Mono.osx-x64
Microsoft.NETCore.App.Runtime.Mono.tvossimulator-arm64
Microsoft.NETCore.App.Runtime.osx-x64
runtime.osx-x64.Microsoft.NETCore.DotNetAppHost
runtime.osx-x64.Microsoft.NETCore.DotNetHost
runtime.win-arm.Microsoft.NETCore.DotNetAppHost
runtime.osx-x64.Microsoft.NETCore.ILAsm
runtime.osx-x64.Microsoft.NETCore.DotNetHostPolicy
runtime.osx-x64.Microsoft.NETCore.ILDAsm
runtime.osx-x64.Microsoft.NETCore.DotNetHostResolver
runtime.osx-x64.Microsoft.NETCore.TestHost
runtime.win-arm64.Microsoft.NETCore.DotNetAppHost
runtime.osx-x64.runtime.native.System.IO.Ports
runtime.win-arm.Microsoft.NETCore.DotNetHost
runtime.win-arm.Microsoft.NETCore.DotNetHostPolicy
runtime.win-arm.Microsoft.NETCore.DotNetHostResolver
runtime.win-arm.Microsoft.NETCore.ILAsm
runtime.win-arm.Microsoft.NETCore.ILDAsm
runtime.win-arm.Microsoft.NETCore.TestHost
runtime.win-arm64.Microsoft.NETCore.DotNetHost
runtime.win-arm64.Microsoft.NETCore.DotNetHostPolicy
runtime.win-arm64.Microsoft.NETCore.ILAsm
runtime.win-arm64.Microsoft.NETCore.DotNetHostResolver
runtime.linux-musl-x64.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-musl-x64.Microsoft.NETCore.DotNetHostResolver
runtime.linux-musl-x64.Microsoft.NETCore.ILAsm
Microsoft.Extensions.Logging.Console
Microsoft.Extensions.Logging.EventSource
Microsoft.Extensions.Logging.TraceSource
Microsoft.Extensions.Logging.EventLog
Microsoft.Extensions.Options
Microsoft.Extensions.Options.ConfigurationExtensions
Microsoft.NETCore.App.Crossgen2.linux-arm64
Microsoft.NETCore.App.Composite
Microsoft.NET.Runtime.iOS.Sample.Mono
Microsoft.NET.Runtime.MonoAOTCompiler.Task
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.maccatalyst-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.maccatalyst-x64
Microsoft.NETCore.App.Runtime.linux-musl-arm
Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.browser-wasm
Microsoft.NETCore.App.Runtime.linux-arm
Microsoft.NETCore.App.Runtime.Mono.android-arm
Microsoft.NETCore.App.Host.win-arm64
Microsoft.NETCore.App.Host.win-x64
Microsoft.NETCore.App.Host.win-x86
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-arm64
assets/symbols/Microsoft.NETCore.App.PGO.6.0.0-preview.6.21325.12.symbols.nupkg
runtime.win-x64.Microsoft.NETCore.ILDAsm
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.Win32.Registry.AccessControl
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.runtime.native.System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
runtime.linux-arm.Microsoft.NETCore.DotNetAppHost
Microsoft.Win32.SystemEvents
Microsoft.Windows.Compatibility
Microsoft.XmlSerializer.Generator
runtime.linux-arm.Microsoft.NETCore.DotNetHost
runtime.linux-arm.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-arm.Microsoft.NETCore.DotNetHostResolver
runtime.linux-arm.Microsoft.NETCore.ILDAsm
runtime.linux-arm.Microsoft.NETCore.ILAsm
runtime.linux-arm64.Microsoft.NETCore.DotNetAppHost
runtime.linux-arm.Microsoft.NETCore.TestHost
runtime.linux-arm.runtime.native.System.IO.Ports
runtime.linux-arm64.Microsoft.NETCore.DotNetHost
runtime.linux-arm64.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-arm64.Microsoft.NETCore.DotNetHostResolver
runtime.linux-arm64.Microsoft.NETCore.ILAsm
runtime.linux-arm64.Microsoft.NETCore.ILDAsm
runtime.linux-arm64.Microsoft.NETCore.TestHost
runtime.linux-musl-arm.Microsoft.NETCore.DotNetAppHost
runtime.linux-arm64.runtime.native.System.IO.Ports
Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.linux-x64
runtime.linux-musl-arm.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-musl-arm.Microsoft.NETCore.DotNetHostResolver
Microsoft.NETCore.TestHost
Microsoft.NETCore.Platforms
Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.osx-x64
Microsoft.NETCore.App.Runtime.Mono.LLVM.linux-x64
Microsoft.NETCore.App.Runtime.Mono.tvossimulator-x64
Microsoft.NETCore.App.Runtime.win-arm64
runtime.linux-musl-arm.Microsoft.NETCore.ILAsm
Microsoft.NETCore.App.Runtime.Mono.win-x86
Microsoft.NETCore.App.Runtime.win-arm
Microsoft.NETCore.App.Runtime.osx-arm64
Microsoft.NETCore.App.Runtime.win-x86
Microsoft.NETCore.App.Runtime.win-x64
Microsoft.NETCore.App.Runtime.Mono.win-x64
Microsoft.NETCore.DotNetHost
Microsoft.NETCore.App.Runtime.Mono.linux-musl-x64
Microsoft.NETCore.DotNetAppHost
Microsoft.NETCore.DotNetHostPolicy
Microsoft.NETCore.DotNetHostResolver
Microsoft.NETCore.ILAsm
runtime.linux-musl-arm.Microsoft.NETCore.ILDAsm
runtime.linux-x64.Microsoft.NETCore.DotNetAppHost
runtime.linux-musl-x64.Microsoft.NETCore.ILDAsm
runtime.linux-musl-arm.Microsoft.NETCore.TestHost
runtime.linux-x64.Microsoft.NETCore.DotNetHost
runtime.linux-x64.Microsoft.NETCore.ILAsm
runtime.linux-x64.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-x64.Microsoft.NETCore.ILDAsm
runtime.linux-x64.Microsoft.NETCore.DotNetHostResolver
Microsoft.NETCore.App.Runtime.Mono.iossimulator-x86
runtime.osx-arm64.Microsoft.NETCore.DotNetAppHost
runtime.linux-x64.runtime.native.System.IO.Ports
runtime.native.System.IO.Ports
Microsoft.NETCore.App.Runtime.Mono.linux-arm
Microsoft.NETCore.App.Runtime.Mono.linux-arm64
runtime.osx-arm64.Microsoft.NETCore.DotNetHost
runtime.osx-arm64.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-musl-x64.Microsoft.NETCore.TestHost
Microsoft.Extensions.Logging.Debug
Microsoft.Extensions.Logging
Microsoft.Extensions.Logging.Abstractions
Microsoft.Extensions.Logging.Configuration
Microsoft.NET.Runtime.Android.Sample.Mono
Microsoft.NETCore.App.Crossgen2.linux-arm
Microsoft.NET.Runtime.RuntimeConfigParser.Task
Microsoft.NET.Runtime.wasm.Sample.Mono
Microsoft.NET.Runtime.WebAssembly.Sdk
Microsoft.NET.Sdk.IL
Microsoft.NET.Workload.Mono.ToolChain.Manifest-6.0.100
Microsoft.NETCore.App.Crossgen2.linux-musl-arm
Microsoft.Bcl.AsyncInterfaces
Microsoft.Extensions.Caching.Memory
Microsoft.Extensions.Caching.Abstractions
Microsoft.Extensions.Configuration.Abstractions
Microsoft.Extensions.Configuration
Microsoft.Extensions.Configuration.Binder
Microsoft.Extensions.Configuration.CommandLine
Microsoft.Extensions.Configuration.EnvironmentVariables
Microsoft.Extensions.Configuration.Xml
Microsoft.Extensions.Configuration.FileExtensions
Microsoft.Extensions.Configuration.Json
Microsoft.Extensions.Configuration.UserSecrets
Microsoft.Extensions.DependencyInjection
Microsoft.Extensions.DependencyInjection.Specification.Tests
Microsoft.Extensions.FileProviders.Composite
Microsoft.Extensions.DependencyInjection.Abstractions
Microsoft.Extensions.DependencyModel
Microsoft.Extensions.FileProviders.Abstractions
Microsoft.Extensions.FileProviders.Physical
Microsoft.Extensions.FileSystemGlobbing
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-x64
Microsoft.NETCore.App.Runtime.Mono.android-x86
Microsoft.NETCore.App.Host.linux-arm64
Microsoft.NETCore.App.Host.win-arm
runtime.win-x64.Microsoft.NETCore.TestHost
runtime.win-x86.Microsoft.NETCore.DotNetHost
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.NETCore.App.Host.linux-musl-x64
Microsoft.NETCore.App.Host.linux-musl-arm64
Microsoft.NETCore.App.Host.linux-x64
Microsoft.NETCore.App.Host.osx-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.browser-wasm
Microsoft.NETCore.App.Host.osx-x64
Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-arm
Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-arm64
Microsoft.NETCore.App.Ref
Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-x64
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.maccatalyst-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.tvossimulator-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Text.Encodings.Web.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Threading.Tasks.Dataflow.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Windows.Extensions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Speech.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.ServiceProcess.ServiceController.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.ServiceModel.Syndication.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.CodeDom.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Collections.Immutable.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.ComponentModel.Composition.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.ComponentModel.Composition.Registration.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.AttributedModel.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.Convention.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.Hosting.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.Runtime.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.TypedParts.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.NETCore.App.Runtime.Mono.tvos-arm64
runtime.win-arm64.Microsoft.NETCore.ILDAsm
runtime.win-arm64.Microsoft.NETCore.TestHost
runtime.win-x64.Microsoft.NETCore.DotNetHost
runtime.win-x64.Microsoft.NETCore.DotNetHostPolicy
runtime.win-x64.Microsoft.NETCore.ILAsm
runtime.win-x64.Microsoft.NETCore.DotNetAppHost
runtime.win-x64.Microsoft.NETCore.DotNetHostResolver
runtime.osx-arm64.Microsoft.NETCore.ILAsm
runtime.osx-arm64.Microsoft.NETCore.TestHost
runtime.linux-musl-arm64.Microsoft.NETCore.DotNetAppHost
runtime.osx-arm64.Microsoft.NETCore.ILDAsm
runtime.osx-arm64.Microsoft.NETCore.DotNetHostResolver
runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHostResolver
runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHost
runtime.linux-musl-arm64.Microsoft.NETCore.ILAsm
runtime.linux-musl-x64.Microsoft.NETCore.DotNetAppHost
runtime.linux-musl-arm64.Microsoft.NETCore.ILDAsm
runtime.linux-musl-arm64.Microsoft.NETCore.TestHost
runtime.linux-musl-x64.Microsoft.NETCore.DotNetHost
runtime.linux-x64.Microsoft.NETCore.TestHost
Microsoft.NETCore.App.Crossgen2.linux-x64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-x86
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvossimulator-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvossimulator-x64
Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-arm64
Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-arm
Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-x64
Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-x86
Microsoft.NETCore.App.Crossgen2.osx-x64
Microsoft.NETCore.App.Crossgen2.win-x86
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvossimulator-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Composite.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Runtime.iOS.Sample.Mono.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Runtime.MonoAOTCompiler.Task.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Runtime.RuntimeConfigParser.Task.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Runtime.wasm.Sample.Mono.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Runtime.WebAssembly.Sdk.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Sdk.IL.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Workload.Mono.ToolChain.Manifest-6.0.100.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-musl-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-musl-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.android-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Text.Json.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Threading.AccessControl.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Threading.Channels.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Text.Encoding.CodePages.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Diagnostics.EventLog.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Diagnostics.PerformanceCounter.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.DirectoryServices.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Drawing.Common.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.DirectoryServices.AccountManagement.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.DirectoryServices.Protocols.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Formats.Asn1.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.IO.Packaging.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Formats.Cbor.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.IO.Hashing.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.IO.Pipelines.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Management.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Memory.Data.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Reflection.Metadata.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Reflection.MetadataLoadContext.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Resources.Extensions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Runtime.Caching.6.0.0-preview.6.21325.12.symbols.nupkg
Runtime/6.0.0-preview.6.21325.12/productVersion.txt
runtime.linux-musl-arm.Microsoft.NETCore.DotNetHost
Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.linux-arm64
Microsoft.NETCore.ILDAsm
Microsoft.NETCore.App.Runtime.Mono.linux-x64
Microsoft.NETCore.App.Runtime.Mono.LLVM.linux-arm64
Microsoft.NETCore.App.Runtime.Mono.maccatalyst-x64
Microsoft.NETCore.App.Runtime.Mono.maccatalyst-arm64
Microsoft.ILVerification
Microsoft.Extensions.Options.DataAnnotations
Microsoft.Extensions.Primitives
Microsoft.IO.Redist
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvos-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64
Microsoft.NETCore.App.Runtime.Mono.android-arm64
Microsoft.NETCore.App.Runtime.linux-musl-x64
Microsoft.NETCore.App.Runtime.Mono.ios-arm
Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-x86
Microsoft.NETCore.App.Runtime.Mono.browser-wasm
Microsoft.NETCore.App.Runtime.Mono.ios-arm64
Microsoft.NETCore.App.Runtime.Mono.iossimulator-x64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm
assets/symbols/runtime.linux-arm64.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.NETCore.App.Crossgen2.osx-arm64
Microsoft.NETCore.App.Crossgen2.win-arm
Microsoft.NETCore.App.Crossgen2.win-arm64
Microsoft.NETCore.App.Crossgen2.win-x64
Microsoft.NETCore.App.Host.linux-arm
Microsoft.NETCore.App.Host.linux-musl-arm
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-arm
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-x64
Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.browser-wasm
assets/symbols/Microsoft.NETCore.App.Host.win-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Ref.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.browser-wasm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvos-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.win-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-musl-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.osx-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.win-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.win-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.BrowserDebugHost.Transport.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Private.CoreFx.OOB.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Win32.Registry.AccessControl.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Win32.SystemEvents.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Windows.Compatibility.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.XmlSerializer.Generator.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.win-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.win-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.tvossimulator-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.browser-wasm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.ios-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.iossimulator-x86.6.0.0-preview.6.21325.12.symbols.nupkg
runtime.win-x86.Microsoft.NETCore.DotNetAppHost
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.linux-musl-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.tvos-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
System.Composition.Convention
System.Composition.Hosting
System.Diagnostics.PerformanceCounter
System.Drawing.Common
System.DirectoryServices.AccountManagement
System.DirectoryServices.Protocols
assets/symbols/Microsoft.Extensions.DependencyInjection.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.FileSystemGlobbing.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Hosting.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Hosting.Systemd.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Hosting.WindowsServices.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.Console.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.Configuration.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.EventLog.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.native.System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.runtime.native.System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.browser-wasm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.browser-wasm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-musl-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.win-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.win-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.win-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.android-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.iossimulator-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.NETCore.App.Crossgen2.linux-musl-arm64
Microsoft.Extensions.Http
Microsoft.Extensions.Hosting.WindowsServices
Microsoft.Extensions.Hosting.Systemd
dotnet-ilverify
Microsoft.Extensions.Hosting.Abstractions
Microsoft.Diagnostics.Tracing.EventSource.Redist
Microsoft.Extensions.Hosting
Microsoft.Extensions.Configuration.Ini
Microsoft.NETCore.App.Crossgen2.linux-musl-x64
Microsoft.NETCore.App.Runtime.linux-musl-arm64
Microsoft.NETCore.App.Runtime.linux-x64
Microsoft.NETCore.App.Runtime.Mono.android-x64
Microsoft.NETCore.App.Runtime.Mono.iossimulator-arm64
Microsoft.NETCore.App.Runtime.linux-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-x86
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.maccatalyst-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvossimulator-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.win-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.win-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-musl-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.win-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-musl-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.ios-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.iossimulator-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.linux-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.maccatalyst-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.maccatalyst-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.osx-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
System.Net.Http.Json
assets/symbols/Microsoft.NET.Runtime.Android.Sample.Mono.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Net.HostModel.PGO.6.0.0-preview.6.21325.12.symbols.nupkg
System.Memory.Data
assets/symbols/Microsoft.IO.Redist.6.0.0-preview.6.21325.12.symbols.nupkg
System.Net.Http.WinHttpHandler
System.Numerics.Tensors
System.Reflection.Metadata
System.Reflection.Context
System.Runtime.Caching
System.Resources.Extensions
System.Reflection.MetadataLoadContext
System.Security.Cryptography.Pkcs
System.Security.Cryptography.Xml
System.Runtime.CompilerServices.Unsafe
System.Security.Cryptography.ProtectedData
System.Security.Permissions
System.ServiceModel.Syndication
System.Text.Encodings.Web
System.ServiceProcess.ServiceController
System.Speech
System.Text.Json
System.Text.Encoding.CodePages
System.Threading.Channels
System.Threading.AccessControl
System.Threading.Tasks.Dataflow
System.Windows.Extensions
System.Management
System.IO.Ports
assets/symbols/Microsoft.NET.HostModel.6.0.0-preview.6.21325.12.symbols.nupkg
System.IO.Packaging
System.IO.Pipelines
runtime.win-x86.Microsoft.NETCore.DotNetHostPolicy
runtime.win-x86.Microsoft.NETCore.ILAsm
runtime.win-x86.Microsoft.NETCore.ILDAsm
runtime.win-x86.Microsoft.NETCore.DotNetHostResolver
runtime.win-x86.Microsoft.NETCore.TestHost
System.CodeDom
System.Collections.Immutable
System.ComponentModel.Composition
System.ComponentModel.Composition.Registration
System.Composition
System.Composition.AttributedModel
System.Composition.Runtime
System.Data.Odbc
System.Configuration.ConfigurationManager
System.Data.OleDb
System.DirectoryServices
System.Diagnostics.DiagnosticSource
System.Diagnostics.EventLog
System.Formats.Cbor
System.IO.Hashing
System.Composition.TypedParts
assets/symbols/Microsoft.Extensions.DependencyModel.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.DependencyInjection.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.DependencyInjection.Specification.Tests.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.FileProviders.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.HostFactoryResolver.Sources.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.FileProviders.Composite.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.FileProviders.Physical.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Hosting.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.TraceSource.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Options.DataAnnotations.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.ILVerification.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/ILCompiler.Reflection.ReadyToRun.Experimental.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.win-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
System.Formats.Asn1
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Http.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.Debug.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.EventSource.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Options.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Options.ConfigurationExtensions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Primitives.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.UserSecrets.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.Json.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/dotnet-pgo.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/dotnet-ilverify.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.FileExtensions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.EnvironmentVariables.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.CommandLine.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Caching.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.AspNetCore.Internal.Transport.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Caching.Memory.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-musl-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-musl-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-musl-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.osx-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.osx-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.win-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.android-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.android-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.Xml.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.Ini.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.Binder.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Diagnostics.Tracing.EventSource.Redist.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Bcl.AsyncInterfaces.6.0.0-preview.6.21325.12.symbols.nupkg
### Changes
- [d1d1c930](https://dev.azure.com/dnceng/internal/_git/dotnet-release/commit/d1d1c930f41b432c33af6325285ef0b92c714ede) - Michelle McDaniel - Merged PR 15883: Remove -preview# from the aka.ms channel name for releases
# Build failed: Validate-DotNet/main #runtime-95694-.NET6Preview6

Build [#runtime-95694-.NET6Preview6](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=1210240) partiallySucceeded
## :warning: : internal / Validate-DotNet partiallySucceeded
### Summary
**Finished** - Tue, 29 Jun 2021 04:03:11 GMT
**Duration** - 298 minutes
**Requested for** - DotNet Bot
**Reason** - manual
### Details
#### Validation Ring
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1210240/logs/252) - (NETCORE_ENGINEERING_TELEMETRY=CheckSymbols) Missing symbols for 2 modules in the package D:\workspace\_work\1\a\signed\shipping\assets\symbols\Microsoft.NETCore.App.Runtime.win-arm.6.0.0-preview.6.21325.12.symbols.nupkg
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1210240/logs/252) - (NETCORE_ENGINEERING_TELEMETRY=CheckSymbols) Missing symbols for 2 modules in the package D:\workspace\_work\1\a\signed\shipping\assets\symbols\Microsoft.NETCore.App.Runtime.win-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1210240/logs/252) - (NETCORE_ENGINEERING_TELEMETRY=CheckSymbols) Symbols missing for 2/308 packages
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1210240/logs/252) - PowerShell exited with code '1'.
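The CheckSymbols failures above report managed modules whose PDBs are absent from a `.symbols.nupkg`. As a rough illustration only (not the actual validation tool, which also matches PDB signatures/GUIDs rather than just file names), a name-based check could be sketched like this, relying on the fact that a nupkg is an ordinary zip archive:

```python
import zipfile
from pathlib import PurePosixPath

def modules_missing_symbols(nupkg_path):
    """Return DLL entries in a symbols nupkg that have no same-named .pdb.

    Sketch only: compares file stems, whereas real symbol validation
    also verifies that each PDB actually matches its module.
    """
    with zipfile.ZipFile(nupkg_path) as pkg:
        names = pkg.namelist()
    # Stems of every PDB shipped in the package, e.g. "System.Text.Json"
    pdb_stems = {PurePosixPath(n).stem for n in names if n.endswith(".pdb")}
    # Any DLL whose stem has no corresponding PDB is reported as missing
    return sorted(
        PurePosixPath(n).name
        for n in names
        if n.endswith(".dll") and PurePosixPath(n).stem not in pdb_stems
    )
```

For a package like `Microsoft.NETCore.App.Runtime.win-arm.6.0.0-preview.6.21325.12.symbols.nupkg`, this would list the two modules the log flags as missing symbols.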
#### Required Validation Ring
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1210240/logs/237) - Number of checksums and assets don't match. Checksums: 182. Assets: 786. Assets with no corresponding checksum are:
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.runtime.native.System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.runtime.native.System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Runtime.CompilerServices.Unsafe.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Security.Cryptography.Pkcs.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Security.Cryptography.ProtectedData.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Security.Cryptography.Xml.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Security.Permissions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Diagnostics.DiagnosticSource.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Data.OleDb.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Data.Odbc.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Configuration.ConfigurationManager.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
Runtime/6.0.0-preview.6.21325.12/runtime-productVersion.txt
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Net.Http.WinHttpHandler.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Net.Http.Json.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Reflection.Context.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Numerics.Tensors.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.NETCore.App.Runtime.Mono.LLVM.osx-x64
Microsoft.NETCore.App.Runtime.Mono.osx-arm64
Microsoft.NETCore.App.Runtime.Mono.osx-x64
Microsoft.NETCore.App.Runtime.Mono.tvossimulator-arm64
Microsoft.NETCore.App.Runtime.osx-x64
runtime.osx-x64.Microsoft.NETCore.DotNetAppHost
runtime.osx-x64.Microsoft.NETCore.DotNetHost
runtime.win-arm.Microsoft.NETCore.DotNetAppHost
runtime.osx-x64.Microsoft.NETCore.ILAsm
runtime.osx-x64.Microsoft.NETCore.DotNetHostPolicy
runtime.osx-x64.Microsoft.NETCore.ILDAsm
runtime.osx-x64.Microsoft.NETCore.DotNetHostResolver
runtime.osx-x64.Microsoft.NETCore.TestHost
runtime.win-arm64.Microsoft.NETCore.DotNetAppHost
runtime.osx-x64.runtime.native.System.IO.Ports
runtime.win-arm.Microsoft.NETCore.DotNetHost
runtime.win-arm.Microsoft.NETCore.DotNetHostPolicy
runtime.win-arm.Microsoft.NETCore.DotNetHostResolver
runtime.win-arm.Microsoft.NETCore.ILAsm
runtime.win-arm.Microsoft.NETCore.ILDAsm
runtime.win-arm.Microsoft.NETCore.TestHost
runtime.win-arm64.Microsoft.NETCore.DotNetHost
runtime.win-arm64.Microsoft.NETCore.DotNetHostPolicy
runtime.win-arm64.Microsoft.NETCore.ILAsm
runtime.win-arm64.Microsoft.NETCore.DotNetHostResolver
runtime.linux-musl-x64.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-musl-x64.Microsoft.NETCore.DotNetHostResolver
runtime.linux-musl-x64.Microsoft.NETCore.ILAsm
Microsoft.Extensions.Logging.Console
Microsoft.Extensions.Logging.EventSource
Microsoft.Extensions.Logging.TraceSource
Microsoft.Extensions.Logging.EventLog
Microsoft.Extensions.Options
Microsoft.Extensions.Options.ConfigurationExtensions
Microsoft.NETCore.App.Crossgen2.linux-arm64
Microsoft.NETCore.App.Composite
Microsoft.NET.Runtime.iOS.Sample.Mono
Microsoft.NET.Runtime.MonoAOTCompiler.Task
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.maccatalyst-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.maccatalyst-x64
Microsoft.NETCore.App.Runtime.linux-musl-arm
Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.browser-wasm
Microsoft.NETCore.App.Runtime.linux-arm
Microsoft.NETCore.App.Runtime.Mono.android-arm
Microsoft.NETCore.App.Host.win-arm64
Microsoft.NETCore.App.Host.win-x64
Microsoft.NETCore.App.Host.win-x86
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-arm64
assets/symbols/Microsoft.NETCore.App.PGO.6.0.0-preview.6.21325.12.symbols.nupkg
runtime.win-x64.Microsoft.NETCore.ILDAsm
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.Win32.Registry.AccessControl
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.runtime.native.System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
runtime.linux-arm.Microsoft.NETCore.DotNetAppHost
Microsoft.Win32.SystemEvents
Microsoft.Windows.Compatibility
Microsoft.XmlSerializer.Generator
runtime.linux-arm.Microsoft.NETCore.DotNetHost
runtime.linux-arm.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-arm.Microsoft.NETCore.DotNetHostResolver
runtime.linux-arm.Microsoft.NETCore.ILDAsm
runtime.linux-arm.Microsoft.NETCore.ILAsm
runtime.linux-arm64.Microsoft.NETCore.DotNetAppHost
runtime.linux-arm.Microsoft.NETCore.TestHost
runtime.linux-arm.runtime.native.System.IO.Ports
runtime.linux-arm64.Microsoft.NETCore.DotNetHost
runtime.linux-arm64.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-arm64.Microsoft.NETCore.DotNetHostResolver
runtime.linux-arm64.Microsoft.NETCore.ILAsm
runtime.linux-arm64.Microsoft.NETCore.ILDAsm
runtime.linux-arm64.Microsoft.NETCore.TestHost
runtime.linux-musl-arm.Microsoft.NETCore.DotNetAppHost
runtime.linux-arm64.runtime.native.System.IO.Ports
Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.linux-x64
runtime.linux-musl-arm.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-musl-arm.Microsoft.NETCore.DotNetHostResolver
Microsoft.NETCore.TestHost
Microsoft.NETCore.Platforms
Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.osx-x64
Microsoft.NETCore.App.Runtime.Mono.LLVM.linux-x64
Microsoft.NETCore.App.Runtime.Mono.tvossimulator-x64
Microsoft.NETCore.App.Runtime.win-arm64
runtime.linux-musl-arm.Microsoft.NETCore.ILAsm
Microsoft.NETCore.App.Runtime.Mono.win-x86
Microsoft.NETCore.App.Runtime.win-arm
Microsoft.NETCore.App.Runtime.osx-arm64
Microsoft.NETCore.App.Runtime.win-x86
Microsoft.NETCore.App.Runtime.win-x64
Microsoft.NETCore.App.Runtime.Mono.win-x64
Microsoft.NETCore.DotNetHost
Microsoft.NETCore.App.Runtime.Mono.linux-musl-x64
Microsoft.NETCore.DotNetAppHost
Microsoft.NETCore.DotNetHostPolicy
Microsoft.NETCore.DotNetHostResolver
Microsoft.NETCore.ILAsm
runtime.linux-musl-arm.Microsoft.NETCore.ILDAsm
runtime.linux-x64.Microsoft.NETCore.DotNetAppHost
runtime.linux-musl-x64.Microsoft.NETCore.ILDAsm
runtime.linux-musl-arm.Microsoft.NETCore.TestHost
runtime.linux-x64.Microsoft.NETCore.DotNetHost
runtime.linux-x64.Microsoft.NETCore.ILAsm
runtime.linux-x64.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-x64.Microsoft.NETCore.ILDAsm
runtime.linux-x64.Microsoft.NETCore.DotNetHostResolver
Microsoft.NETCore.App.Runtime.Mono.iossimulator-x86
runtime.osx-arm64.Microsoft.NETCore.DotNetAppHost
runtime.linux-x64.runtime.native.System.IO.Ports
runtime.native.System.IO.Ports
Microsoft.NETCore.App.Runtime.Mono.linux-arm
Microsoft.NETCore.App.Runtime.Mono.linux-arm64
runtime.osx-arm64.Microsoft.NETCore.DotNetHost
runtime.osx-arm64.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-musl-x64.Microsoft.NETCore.TestHost
Microsoft.Extensions.Logging.Debug
Microsoft.Extensions.Logging
Microsoft.Extensions.Logging.Abstractions
Microsoft.Extensions.Logging.Configuration
Microsoft.NET.Runtime.Android.Sample.Mono
Microsoft.NETCore.App.Crossgen2.linux-arm
Microsoft.NET.Runtime.RuntimeConfigParser.Task
Microsoft.NET.Runtime.wasm.Sample.Mono
Microsoft.NET.Runtime.WebAssembly.Sdk
Microsoft.NET.Sdk.IL
Microsoft.NET.Workload.Mono.ToolChain.Manifest-6.0.100
Microsoft.NETCore.App.Crossgen2.linux-musl-arm
Microsoft.Bcl.AsyncInterfaces
Microsoft.Extensions.Caching.Memory
Microsoft.Extensions.Caching.Abstractions
Microsoft.Extensions.Configuration.Abstractions
Microsoft.Extensions.Configuration
Microsoft.Extensions.Configuration.Binder
Microsoft.Extensions.Configuration.CommandLine
Microsoft.Extensions.Configuration.EnvironmentVariables
Microsoft.Extensions.Configuration.Xml
Microsoft.Extensions.Configuration.FileExtensions
Microsoft.Extensions.Configuration.Json
Microsoft.Extensions.Configuration.UserSecrets
Microsoft.Extensions.DependencyInjection
Microsoft.Extensions.DependencyInjection.Specification.Tests
Microsoft.Extensions.FileProviders.Composite
Microsoft.Extensions.DependencyInjection.Abstractions
Microsoft.Extensions.DependencyModel
Microsoft.Extensions.FileProviders.Abstractions
Microsoft.Extensions.FileProviders.Physical
Microsoft.Extensions.FileSystemGlobbing
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-x64
Microsoft.NETCore.App.Runtime.Mono.android-x86
Microsoft.NETCore.App.Host.linux-arm64
Microsoft.NETCore.App.Host.win-arm
runtime.win-x64.Microsoft.NETCore.TestHost
runtime.win-x86.Microsoft.NETCore.DotNetHost
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.NETCore.App.Host.linux-musl-x64
Microsoft.NETCore.App.Host.linux-musl-arm64
Microsoft.NETCore.App.Host.linux-x64
Microsoft.NETCore.App.Host.osx-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.browser-wasm
Microsoft.NETCore.App.Host.osx-x64
Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-arm
Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-arm64
Microsoft.NETCore.App.Ref
Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-x64
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.maccatalyst-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.tvossimulator-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Text.Encodings.Web.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Threading.Tasks.Dataflow.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Windows.Extensions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Speech.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.ServiceProcess.ServiceController.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.ServiceModel.Syndication.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.CodeDom.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Collections.Immutable.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.ComponentModel.Composition.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.ComponentModel.Composition.Registration.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.AttributedModel.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.Convention.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.Hosting.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.Runtime.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Composition.TypedParts.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.NETCore.App.Runtime.Mono.tvos-arm64
runtime.win-arm64.Microsoft.NETCore.ILDAsm
runtime.win-arm64.Microsoft.NETCore.TestHost
runtime.win-x64.Microsoft.NETCore.DotNetHost
runtime.win-x64.Microsoft.NETCore.DotNetHostPolicy
runtime.win-x64.Microsoft.NETCore.ILAsm
runtime.win-x64.Microsoft.NETCore.DotNetAppHost
runtime.win-x64.Microsoft.NETCore.DotNetHostResolver
runtime.osx-arm64.Microsoft.NETCore.ILAsm
runtime.osx-arm64.Microsoft.NETCore.TestHost
runtime.linux-musl-arm64.Microsoft.NETCore.DotNetAppHost
runtime.osx-arm64.Microsoft.NETCore.ILDAsm
runtime.osx-arm64.Microsoft.NETCore.DotNetHostResolver
runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHostPolicy
runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHostResolver
runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHost
runtime.linux-musl-arm64.Microsoft.NETCore.ILAsm
runtime.linux-musl-x64.Microsoft.NETCore.DotNetAppHost
runtime.linux-musl-arm64.Microsoft.NETCore.ILDAsm
runtime.linux-musl-arm64.Microsoft.NETCore.TestHost
runtime.linux-musl-x64.Microsoft.NETCore.DotNetHost
runtime.linux-x64.Microsoft.NETCore.TestHost
Microsoft.NETCore.App.Crossgen2.linux-x64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-x86
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvossimulator-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvossimulator-x64
Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-arm64
Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-arm
Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-x64
Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-x86
Microsoft.NETCore.App.Crossgen2.osx-x64
Microsoft.NETCore.App.Crossgen2.win-x86
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvossimulator-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Composite.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Runtime.iOS.Sample.Mono.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Runtime.MonoAOTCompiler.Task.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Runtime.RuntimeConfigParser.Task.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Runtime.wasm.Sample.Mono.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Runtime.WebAssembly.Sdk.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Sdk.IL.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NET.Workload.Mono.ToolChain.Manifest-6.0.100.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-musl-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-musl-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.android-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Text.Json.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Threading.AccessControl.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Threading.Channels.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Text.Encoding.CodePages.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Diagnostics.EventLog.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Diagnostics.PerformanceCounter.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.DirectoryServices.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Drawing.Common.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.DirectoryServices.AccountManagement.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.DirectoryServices.Protocols.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Formats.Asn1.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.IO.Packaging.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Formats.Cbor.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.IO.Hashing.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.IO.Pipelines.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Management.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Memory.Data.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Reflection.Metadata.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Reflection.MetadataLoadContext.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Resources.Extensions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/System.Runtime.Caching.6.0.0-preview.6.21325.12.symbols.nupkg
Runtime/6.0.0-preview.6.21325.12/productVersion.txt
runtime.linux-musl-arm.Microsoft.NETCore.DotNetHost
Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.linux-arm64
Microsoft.NETCore.ILDAsm
Microsoft.NETCore.App.Runtime.Mono.linux-x64
Microsoft.NETCore.App.Runtime.Mono.LLVM.linux-arm64
Microsoft.NETCore.App.Runtime.Mono.maccatalyst-x64
Microsoft.NETCore.App.Runtime.Mono.maccatalyst-arm64
Microsoft.ILVerification
Microsoft.Extensions.Options.DataAnnotations
Microsoft.Extensions.Primitives
Microsoft.IO.Redist
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvos-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64
Microsoft.NETCore.App.Runtime.Mono.android-arm64
Microsoft.NETCore.App.Runtime.linux-musl-x64
Microsoft.NETCore.App.Runtime.Mono.ios-arm
Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-x86
Microsoft.NETCore.App.Runtime.Mono.browser-wasm
Microsoft.NETCore.App.Runtime.Mono.ios-arm64
Microsoft.NETCore.App.Runtime.Mono.iossimulator-x64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm
assets/symbols/runtime.linux-arm64.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.NETCore.App.Crossgen2.osx-arm64
Microsoft.NETCore.App.Crossgen2.win-arm
Microsoft.NETCore.App.Crossgen2.win-arm64
Microsoft.NETCore.App.Crossgen2.win-x64
Microsoft.NETCore.App.Host.linux-arm
Microsoft.NETCore.App.Host.linux-musl-arm
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-arm
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-x64
Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.browser-wasm
assets/symbols/Microsoft.NETCore.App.Host.win-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Ref.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.browser-wasm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.ios-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvos-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.win-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-musl-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.osx-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.win-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.win-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.BrowserDebugHost.Transport.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Private.CoreFx.OOB.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Win32.Registry.AccessControl.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Win32.SystemEvents.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Windows.Compatibility.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.XmlSerializer.Generator.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.CrossOsDiag.Private.CoreCLR.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.win-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-arm.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.win-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.tvossimulator-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.browser-wasm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.ios-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.iossimulator-x86.6.0.0-preview.6.21325.12.symbols.nupkg
runtime.win-x86.Microsoft.NETCore.DotNetAppHost
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.linux-musl-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.tvos-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
System.Composition.Convention
System.Composition.Hosting
System.Diagnostics.PerformanceCounter
System.Drawing.Common
System.DirectoryServices.AccountManagement
System.DirectoryServices.Protocols
assets/symbols/Microsoft.Extensions.DependencyInjection.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.FileSystemGlobbing.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Hosting.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Hosting.Systemd.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Hosting.WindowsServices.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.Console.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.Configuration.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.EventLog.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.native.System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-arm64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.Microsoft.NETCore.TestHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.osx-x64.runtime.native.System.IO.Ports.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-x64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-arm64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.android-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.linux-x64.Cross.browser-wasm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.iossimulator-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.browser-wasm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-musl-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.win-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.win-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.win-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.android-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.iossimulator-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-arm64.Microsoft.NETCore.ILDAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.DotNetHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.DotNetHostResolver.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.DotNetHostPolicy.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.linux-musl-x64.Microsoft.NETCore.ILAsm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/runtime.win-x86.Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
Microsoft.NETCore.App.Crossgen2.linux-musl-arm64
Microsoft.Extensions.Http
Microsoft.Extensions.Hosting.WindowsServices
Microsoft.Extensions.Hosting.Systemd
dotnet-ilverify
Microsoft.Extensions.Hosting.Abstractions
Microsoft.Diagnostics.Tracing.EventSource.Redist
Microsoft.Extensions.Hosting
Microsoft.Extensions.Configuration.Ini
Microsoft.NETCore.App.Crossgen2.linux-musl-x64
Microsoft.NETCore.App.Runtime.linux-musl-arm64
Microsoft.NETCore.App.Runtime.linux-x64
Microsoft.NETCore.App.Runtime.Mono.android-x64
Microsoft.NETCore.App.Runtime.Mono.iossimulator-arm64
Microsoft.NETCore.App.Runtime.linux-arm64
Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.android-x86
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.maccatalyst-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.osx-x64.Cross.tvossimulator-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.AOT.win-x64.Cross.android-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.win-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Crossgen2.win-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-musl-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.win-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-musl-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.ios-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.iossimulator-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.linux-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.maccatalyst-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.maccatalyst-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.osx-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.linux-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
System.Net.Http.Json
assets/symbols/Microsoft.NET.Runtime.Android.Sample.Mono.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Net.HostModel.PGO.6.0.0-preview.6.21325.12.symbols.nupkg
System.Memory.Data
assets/symbols/Microsoft.IO.Redist.6.0.0-preview.6.21325.12.symbols.nupkg
System.Net.Http.WinHttpHandler
System.Numerics.Tensors
System.Reflection.Metadata
System.Reflection.Context
System.Runtime.Caching
System.Resources.Extensions
System.Reflection.MetadataLoadContext
System.Security.Cryptography.Pkcs
System.Security.Cryptography.Xml
System.Runtime.CompilerServices.Unsafe
System.Security.Cryptography.ProtectedData
System.Security.Permissions
System.ServiceModel.Syndication
System.Text.Encodings.Web
System.ServiceProcess.ServiceController
System.Speech
System.Text.Json
System.Text.Encoding.CodePages
System.Threading.Channels
System.Threading.AccessControl
System.Threading.Tasks.Dataflow
System.Windows.Extensions
System.Management
System.IO.Ports
assets/symbols/Microsoft.NET.HostModel.6.0.0-preview.6.21325.12.symbols.nupkg
System.IO.Packaging
System.IO.Pipelines
runtime.win-x86.Microsoft.NETCore.DotNetHostPolicy
runtime.win-x86.Microsoft.NETCore.ILAsm
runtime.win-x86.Microsoft.NETCore.ILDAsm
runtime.win-x86.Microsoft.NETCore.DotNetHostResolver
runtime.win-x86.Microsoft.NETCore.TestHost
System.CodeDom
System.Collections.Immutable
System.ComponentModel.Composition
System.ComponentModel.Composition.Registration
System.Composition
System.Composition.AttributedModel
System.Composition.Runtime
System.Data.Odbc
System.Configuration.ConfigurationManager
System.Data.OleDb
System.DirectoryServices
System.Diagnostics.DiagnosticSource
System.Diagnostics.EventLog
System.Formats.Cbor
System.IO.Hashing
System.Composition.TypedParts
assets/symbols/Microsoft.Extensions.DependencyModel.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.DependencyInjection.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.DependencyInjection.Specification.Tests.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.FileProviders.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.HostFactoryResolver.Sources.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.FileProviders.Composite.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.FileProviders.Physical.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Hosting.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.TraceSource.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Options.DataAnnotations.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.ILVerification.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/ILCompiler.Reflection.ReadyToRun.Experimental.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.win-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
System.Formats.Asn1
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.AOT.osx-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.LLVM.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Http.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.Debug.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Logging.EventSource.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Options.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Options.ConfigurationExtensions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Primitives.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.UserSecrets.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.Json.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/dotnet-pgo.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/dotnet-ilverify.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.FileExtensions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.EnvironmentVariables.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.CommandLine.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Caching.Abstractions.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.AspNetCore.Internal.Transport.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Caching.Memory.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-musl-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-musl-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-musl-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Host.osx-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.osx-arm64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.win-x86.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.DotNetAppHost.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.linux-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.android-arm.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.NETCore.App.Runtime.Mono.android-x64.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.Xml.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.Ini.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Extensions.Configuration.Binder.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Diagnostics.Tracing.EventSource.Redist.6.0.0-preview.6.21325.12.symbols.nupkg
assets/symbols/Microsoft.Bcl.AsyncInterfaces.6.0.0-preview.6.21325.12.symbols.nupkg
### Changes
- [d1d1c930](https://dev.azure.com/dnceng/internal/_git/dotnet-release/commit/d1d1c930f41b432c33af6325285ef0b92c714ede) - Michelle McDaniel - Merged PR 15883: Remove -preview# from the aka.ms channel name for releases
symbols nupkg assets symbols runtime win microsoft netcore dotnethostpolicy preview symbols nupkg assets symbols system diagnostics eventlog preview symbols nupkg assets symbols system diagnostics performancecounter preview symbols nupkg assets symbols system directoryservices preview symbols nupkg assets symbols system drawing common preview symbols nupkg assets symbols system directoryservices accountmanagement preview symbols nupkg assets symbols system directoryservices protocols preview symbols nupkg assets symbols system formats preview symbols nupkg assets symbols system io packaging preview symbols nupkg assets symbols system formats cbor preview symbols nupkg assets symbols system io hashing preview symbols nupkg assets symbols system io pipelines preview symbols nupkg assets symbols system management preview symbols nupkg assets symbols system io ports preview symbols nupkg assets symbols system memory data preview symbols nupkg assets symbols system reflection metadata preview symbols nupkg assets symbols system reflection metadataloadcontext preview symbols nupkg assets symbols system resources extensions preview symbols nupkg assets symbols system runtime caching preview symbols nupkg runtime preview productversion txt runtime linux musl arm microsoft netcore dotnethost microsoft netcore app runtime mono llvm aot linux microsoft netcore ildasm microsoft netcore app runtime mono linux microsoft netcore app runtime mono llvm linux microsoft netcore app runtime mono maccatalyst microsoft netcore app runtime mono maccatalyst microsoft ilverification microsoft extensions options dataannotations microsoft extensions primitives microsoft io redist microsoft netcore app runtime aot osx cross tvos microsoft netcore app runtime aot osx cross ios microsoft netcore app runtime mono android microsoft netcore app runtime linux musl microsoft netcore app runtime mono ios arm microsoft netcore app runtime aot linux cross android microsoft netcore app runtime mono 
browser wasm microsoft netcore app runtime mono ios microsoft netcore app runtime mono iossimulator microsoft netcore app runtime aot osx cross ios arm assets symbols runtime linux microsoft crossosdiag private coreclr preview symbols nupkg microsoft netcore app osx microsoft netcore app win arm microsoft netcore app win microsoft netcore app win microsoft netcore app host linux arm microsoft netcore app host linux musl arm microsoft netcore app runtime aot osx cross android arm microsoft netcore app runtime aot osx cross android microsoft netcore app runtime aot linux cross browser wasm assets symbols microsoft netcore app host win preview symbols nupkg assets symbols microsoft netcore app ref preview symbols nupkg assets symbols microsoft netcore app runtime aot osx cross android preview symbols nupkg assets symbols microsoft netcore app runtime aot osx cross browser wasm preview symbols nupkg assets symbols microsoft netcore app runtime aot osx cross ios preview symbols nupkg assets symbols microsoft netcore app runtime aot osx cross tvos preview symbols nupkg assets symbols microsoft netcore app runtime aot win cross android arm preview symbols nupkg assets symbols microsoft netcore app runtime aot win cross android preview symbols nupkg assets symbols microsoft netcore app runtime aot win cross android preview symbols nupkg assets symbols microsoft netcore app host win preview symbols nupkg assets symbols microsoft netcore app linux musl preview symbols nupkg assets symbols microsoft netcore app osx preview symbols nupkg assets symbols microsoft netcore app win preview symbols nupkg assets symbols microsoft netcore app runtime linux preview symbols nupkg assets symbols microsoft netcore app runtime win arm preview symbols nupkg assets symbols runtime linux arm microsoft netcore dotnetapphost preview symbols nupkg assets symbols microsoft netcore browserdebughost transport preview symbols nupkg assets symbols microsoft netcore dotnethost preview symbols nupkg 
assets symbols microsoft netcore dotnethostpolicy preview symbols nupkg assets symbols microsoft netcore ildasm preview symbols nupkg assets symbols microsoft netcore dotnethostresolver preview symbols nupkg assets symbols microsoft netcore ilasm preview symbols nupkg assets symbols microsoft netcore testhost preview symbols nupkg assets symbols microsoft private corefx oob preview symbols nupkg assets symbols microsoft registry accesscontrol preview symbols nupkg assets symbols microsoft systemevents preview symbols nupkg assets symbols microsoft windows compatibility preview symbols nupkg assets symbols microsoft xmlserializer generator preview symbols nupkg assets symbols runtime linux arm microsoft crossosdiag private coreclr preview symbols nupkg assets symbols runtime linux arm microsoft netcore dotnethost preview symbols nupkg assets symbols runtime linux arm microsoft netcore dotnethostpolicy preview symbols nupkg assets symbols runtime linux arm microsoft netcore dotnethostresolver preview symbols nupkg assets symbols runtime linux arm microsoft netcore ilasm preview symbols nupkg assets symbols microsoft netcore app runtime mono win preview symbols nupkg assets symbols runtime linux arm microsoft netcore ildasm preview symbols nupkg assets symbols runtime linux arm microsoft netcore testhost preview symbols nupkg assets symbols microsoft netcore app runtime mono win preview symbols nupkg assets symbols microsoft netcore app runtime mono tvossimulator preview symbols nupkg assets symbols microsoft netcore app runtime mono browser wasm preview symbols nupkg assets symbols microsoft netcore app runtime mono ios preview symbols nupkg assets symbols microsoft netcore app runtime mono iossimulator preview symbols nupkg runtime win microsoft netcore dotnetapphost assets symbols microsoft netcore app runtime mono linux musl preview symbols nupkg assets symbols microsoft netcore app runtime mono llvm aot linux preview symbols nupkg assets symbols microsoft netcore 
app runtime mono tvos preview symbols nupkg system composition convention system composition hosting system diagnostics performancecounter system drawing common system directoryservices accountmanagement system directoryservices protocols assets symbols microsoft extensions dependencyinjection preview symbols nupkg assets symbols microsoft extensions filesystemglobbing preview symbols nupkg assets symbols microsoft extensions hosting preview symbols nupkg assets symbols microsoft extensions hosting systemd preview symbols nupkg assets symbols microsoft extensions hosting windowsservices preview symbols nupkg assets symbols microsoft extensions logging console preview symbols nupkg assets symbols microsoft extensions logging configuration preview symbols nupkg assets symbols microsoft extensions logging eventlog preview symbols nupkg assets symbols runtime native system io ports preview symbols nupkg assets symbols runtime osx microsoft netcore dotnethost preview symbols nupkg assets symbols runtime osx microsoft netcore ilasm preview symbols nupkg assets symbols runtime osx microsoft netcore dotnethostresolver preview symbols nupkg assets symbols runtime win arm microsoft netcore dotnetapphost preview symbols nupkg assets symbols runtime osx microsoft netcore dotnethostpolicy preview symbols nupkg assets symbols runtime osx microsoft netcore dotnethostresolver preview symbols nupkg assets symbols runtime osx microsoft netcore ilasm preview symbols nupkg assets symbols runtime osx microsoft netcore ildasm preview symbols nupkg assets symbols runtime osx microsoft netcore testhost preview symbols nupkg assets symbols runtime osx runtime native system io ports preview symbols nupkg assets symbols runtime win arm microsoft netcore dotnethost preview symbols nupkg assets symbols runtime win arm microsoft netcore dotnethostpolicy preview symbols nupkg assets symbols runtime win arm microsoft netcore dotnethostresolver preview symbols nupkg assets symbols runtime win arm 
microsoft netcore ilasm preview symbols nupkg assets symbols runtime win arm microsoft netcore ildasm preview symbols nupkg assets symbols runtime linux microsoft netcore dotnethostpolicy preview symbols nupkg assets symbols runtime linux microsoft netcore dotnethost preview symbols nupkg assets symbols runtime win microsoft netcore dotnethostresolver preview symbols nupkg assets symbols runtime win microsoft netcore dotnethost preview symbols nupkg assets symbols runtime win microsoft netcore dotnethost preview symbols nupkg assets symbols runtime win microsoft netcore dotnethostpolicy preview symbols nupkg assets symbols microsoft netcore app runtime aot linux cross android preview symbols nupkg assets symbols microsoft netcore app runtime aot linux cross browser wasm preview symbols nupkg assets symbols microsoft netcore app runtime aot osx cross android preview symbols nupkg assets symbols microsoft netcore app runtime aot osx cross iossimulator preview symbols nupkg assets symbols microsoft netcore app runtime aot win cross browser wasm preview symbols nupkg assets symbols microsoft netcore app linux musl arm preview symbols nupkg assets symbols microsoft netcore app win arm preview symbols nupkg assets symbols microsoft netcore app win preview symbols nupkg assets symbols microsoft netcore app host linux arm preview symbols nupkg assets symbols microsoft netcore app host linux preview symbols nupkg assets symbols microsoft netcore app host win arm preview symbols nupkg assets symbols microsoft netcore app runtime osx preview symbols nupkg assets symbols microsoft netcore app runtime mono android preview symbols nupkg assets symbols microsoft netcore app runtime mono iossimulator preview symbols nupkg assets symbols microsoft netcore app runtime mono linux preview symbols nupkg assets symbols runtime linux musl microsoft netcore dotnetapphost preview symbols nupkg assets symbols runtime linux musl microsoft netcore ilasm preview symbols nupkg assets symbols 
runtime linux musl microsoft netcore ildasm preview symbols nupkg assets symbols runtime linux musl microsoft netcore dotnethost preview symbols nupkg assets symbols runtime linux musl microsoft netcore dotnethostresolver preview symbols nupkg assets symbols runtime linux musl microsoft netcore dotnethostpolicy preview symbols nupkg assets symbols runtime linux musl microsoft netcore ilasm preview symbols nupkg assets symbols runtime win microsoft netcore dotnetapphost preview symbols nupkg microsoft netcore app linux musl microsoft extensions http microsoft extensions hosting windowsservices microsoft extensions hosting systemd dotnet ilverify microsoft extensions hosting abstractions microsoft diagnostics tracing eventsource redist microsoft extensions hosting microsoft extensions configuration ini microsoft netcore app linux musl microsoft netcore app runtime linux musl microsoft netcore app runtime linux microsoft netcore app runtime mono android microsoft netcore app runtime mono iossimulator microsoft netcore app runtime linux microsoft netcore app runtime aot osx cross android assets symbols microsoft netcore app runtime aot osx cross maccatalyst preview symbols nupkg assets symbols microsoft netcore app runtime aot osx cross tvossimulator preview symbols nupkg assets symbols microsoft netcore app runtime aot win cross android preview symbols nupkg assets symbols microsoft netcore app host win preview symbols nupkg assets symbols microsoft netcore app linux preview symbols nupkg assets symbols microsoft netcore app linux preview symbols nupkg assets symbols microsoft netcore app win preview symbols nupkg assets symbols microsoft netcore app runtime linux musl arm preview symbols nupkg assets symbols microsoft netcore app runtime win preview symbols nupkg assets symbols microsoft netcore app runtime linux musl preview symbols nupkg assets symbols microsoft netcore app runtime mono ios arm preview symbols nupkg assets symbols microsoft netcore app runtime mono 
iossimulator preview symbols nupkg assets symbols microsoft netcore app runtime mono linux arm preview symbols nupkg assets symbols microsoft netcore app runtime mono llvm aot linux preview symbols nupkg assets symbols microsoft netcore app runtime mono llvm linux preview symbols nupkg assets symbols microsoft netcore app runtime mono llvm osx preview symbols nupkg assets symbols microsoft netcore app runtime mono maccatalyst preview symbols nupkg assets symbols microsoft netcore app runtime mono maccatalyst preview symbols nupkg assets symbols microsoft netcore app runtime mono osx preview symbols nupkg assets symbols microsoft netcore app runtime mono osx preview symbols nupkg assets symbols microsoft netcore app runtime mono linux preview symbols nupkg system net http json assets symbols microsoft net runtime android sample mono preview symbols nupkg assets symbols microsoft net hostmodel pgo preview symbols nupkg system memory data assets symbols microsoft io redist preview symbols nupkg system net http winhttphandler system numerics tensors system reflection metadata system reflection context system runtime caching system resources extensions system reflection metadataloadcontext system security cryptography pkcs system security cryptography xml system runtime compilerservices unsafe system security cryptography protecteddata system security permissions system servicemodel syndication system text encodings web system serviceprocess servicecontroller system speech system text json system text encoding codepages system threading channels system threading accesscontrol system threading tasks dataflow system windows extensions system management system io ports assets symbols microsoft net hostmodel preview symbols nupkg system io packaging system io pipelines runtime win microsoft netcore dotnethostpolicy runtime win microsoft netcore ilasm runtime win microsoft netcore ildasm runtime win microsoft netcore dotnethostresolver runtime win microsoft netcore testhost 
system codedom system collections immutable system componentmodel composition system componentmodel composition registration system composition system composition attributedmodel system composition runtime system data odbc system configuration configurationmanager system data oledb system directoryservices system diagnostics diagnosticsource system diagnostics eventlog system formats cbor system io hashing system composition typedparts assets symbols microsoft extensions dependencymodel preview symbols nupkg assets symbols microsoft extensions dependencyinjection abstractions preview symbols nupkg assets symbols microsoft extensions dependencyinjection specification tests preview symbols nupkg assets symbols microsoft extensions fileproviders abstractions preview symbols nupkg assets symbols microsoft extensions hostfactoryresolver sources preview symbols nupkg assets symbols microsoft extensions fileproviders composite preview symbols nupkg assets symbols microsoft extensions fileproviders physical preview symbols nupkg assets symbols microsoft extensions hosting abstractions preview symbols nupkg assets symbols microsoft extensions logging preview symbols nupkg assets symbols microsoft extensions logging abstractions preview symbols nupkg assets symbols microsoft extensions logging tracesource preview symbols nupkg assets symbols microsoft extensions options dataannotations preview symbols nupkg assets symbols microsoft ilverification preview symbols nupkg assets symbols ilcompiler reflection readytorun experimental preview symbols nupkg assets symbols microsoft netcore app runtime win preview symbols nupkg system formats assets symbols microsoft netcore app runtime mono llvm aot osx preview symbols nupkg assets symbols microsoft netcore app runtime mono llvm linux preview symbols nupkg assets symbols microsoft extensions http preview symbols nupkg assets symbols microsoft extensions logging debug preview symbols nupkg assets symbols microsoft extensions logging 
eventsource preview symbols nupkg assets symbols microsoft extensions options preview symbols nupkg assets symbols microsoft extensions options configurationextensions preview symbols nupkg assets symbols microsoft extensions primitives preview symbols nupkg assets symbols microsoft extensions configuration usersecrets preview symbols nupkg assets symbols microsoft extensions configuration json preview symbols nupkg assets symbols dotnet pgo preview symbols nupkg assets symbols dotnet ilverify preview symbols nupkg assets symbols microsoft extensions configuration fileextensions preview symbols nupkg assets symbols microsoft extensions configuration environmentvariables preview symbols nupkg assets symbols microsoft extensions configuration preview symbols nupkg assets symbols microsoft extensions configuration commandline preview symbols nupkg assets symbols microsoft extensions configuration abstractions preview symbols nupkg assets symbols microsoft extensions caching abstractions preview symbols nupkg assets symbols microsoft aspnetcore internal transport preview symbols nupkg assets symbols microsoft extensions caching memory preview symbols nupkg assets symbols microsoft netcore app host linux musl arm preview symbols nupkg assets symbols microsoft netcore app host linux musl preview symbols nupkg assets symbols microsoft netcore app host linux musl preview symbols nupkg assets symbols microsoft netcore app host linux preview symbols nupkg assets symbols microsoft netcore app host osx preview symbols nupkg assets symbols microsoft netcore app runtime osx preview symbols nupkg assets symbols microsoft netcore app runtime win preview symbols nupkg assets symbols microsoft netcore dotnetapphost preview symbols nupkg assets symbols microsoft netcore app runtime linux preview symbols nupkg assets symbols microsoft netcore app runtime mono android arm preview symbols nupkg assets symbols microsoft netcore app runtime mono android preview symbols nupkg assets 
symbols microsoft extensions configuration xml preview symbols nupkg assets symbols microsoft extensions configuration ini preview symbols nupkg assets symbols microsoft extensions configuration binder preview symbols nupkg assets symbols microsoft diagnostics tracing eventsource redist preview symbols nupkg assets symbols microsoft bcl asyncinterfaces preview symbols nupkg changes michelle mcdaniel merged pr remove preview from the aka ms channel name for releases | 1 |
434,181 | 30,445,660,541 | IssuesEvent | 2023-07-15 16:26:46 | Alarm-Siren/6502-kicad-library | https://api.github.com/repos/Alarm-Siren/6502-kicad-library | closed | Make library compatible with Kicad's Package & Content Manager (PCM) | enhancement compatibility documentation | Make library compatible with Kicad's Package & Content Manager (PCM)
Update README file with new installation instructions accordingly. | 1.0 | Make library compatible with Kicad's Package & Content Manager (PCM) - Make library compatible with Kicad's Package & Content Manager (PCM)
Update README file with new installation instructions accordingly. | non_infrastructure | make library compatible with kicad s package content manager pcm make library compatible with kicad s package content manager pcm update readme file with new installation instructions accordingly | 0 |
506,286 | 14,661,506,320 | IssuesEvent | 2020-12-29 03:56:57 | kevin-hanselman/dud | https://api.github.com/repos/kevin-hanselman/dud | closed | checkout: copy strategy fails when artifact checked out as link | bug low priority | ```
$ ./duc init
$ ./duc add 50mb_random.bin
$ ./duc commit
$ ./duc checkout --copy
stage checkout failed: checkout "50mb_random.bin": open 50mb_random.bin: file exists
```
The `checkout` command should be smart enough to remove the link prior to attempting the checkout.
Note that this is different than adding a `--force` flag, which would unconditionally remove the workspace file before checkout. | 1.0 | checkout: copy strategy fails when artifact checked out as link - ```
$ ./duc init
$ ./duc add 50mb_random.bin
$ ./duc commit
$ ./duc checkout --copy
stage checkout failed: checkout "50mb_random.bin": open 50mb_random.bin: file exists
```
The `checkout` command should be smart enough to remove the link prior to attempting the checkout.
Note that this is different than adding a `--force` flag, which would unconditionally remove the workspace file before checkout. | non_infrastructure | checkout copy strategy fails when artifact checked out as link duc init duc add random bin duc commit duc checkout copy stage checkout failed checkout random bin open random bin file exists the checkout command should be smart enough to remove the link prior to attempting the checkout note that this is different than adding a force flag which would unconditionally remove the workspace file before checkout | 0 |
28,201 | 12,807,267,977 | IssuesEvent | 2020-07-03 11:07:01 | GovernIB/notib | https://api.github.com/repos/GovernIB/notib | closed | Error in requests coming from the Helium-Notib integration | Lloc:WebServices Prioritat:Molt_Alta Tipus:Error | Notifications made from Helium give an error:

| 1.0 | Error in requests coming from the Helium-Notib integration - Notifications made from Helium give an error:

| non_infrastructure | error in requests coming from the helium notib integration notifications made from helium give an error | 0
5,870 | 6,018,373,563 | IssuesEvent | 2017-06-07 12:10:40 | AdguardTeam/AdguardFilters | https://api.github.com/repos/AdguardTeam/AdguardFilters | closed | There should be a way to have specific rules for different product types | Infrastructure | Windows/Mac/Android/Extension rules could be slightly different.
For instance, Win/Mac/Android have `$$` rules support, while the extension doesn't. | 1.0 | There should be a way to have specific rules for different product types - Windows/Mac/Android/Extension rules could be slightly different.
For instance, Win/Mac/Android have `$$` rules support, while the extension doesn't. | infrastructure | there should be a way to have specific rules for different product types windows mac android extension rules could be slightly different for instance win mac android have rules support while extension don t | 1
33,008 | 6,149,768,223 | IssuesEvent | 2017-06-27 20:52:32 | Azure/azure-iot-sdk-c | https://api.github.com/repos/Azure/azure-iot-sdk-c | closed | broken link to build instructions | documentation fix checked in | Link to instructions to build the example referenced in readme at https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iothub_client_sample_http/mbed
Has markdown to a link https://github.com/Azure/azure-iot-sdk-c/blob/doc/get_started/mbed-freescale-k64f-c.md which is missing.
| 1.0 | broken link to build instructions - Link to instructions to build the example referenced in readme at https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iothub_client_sample_http/mbed
Has markdown to a link https://github.com/Azure/azure-iot-sdk-c/blob/doc/get_started/mbed-freescale-k64f-c.md which is missing.
| non_infrastructure | broken link to build instructions link to instructions to build the example referenced in readme at has markdown to a link which is missing | 0 |
6,152 | 6,199,107,389 | IssuesEvent | 2017-07-05 20:46:38 | amnh-library/API-Portal | https://api.github.com/repos/amnh-library/API-Portal | reopened | Weekly library data scraping | systems & infrastructure | Set up cron in dev for scraping content - Scripts should fire off WEEKLY on weekends | 1.0 | Weekly library data scraping - Set up cron in dev for scraping content - Scripts should fire off WEEKLY on weekends | infrastructure | weekly library data scraping set up cron in dev for scraping content scripts should fire off weekly on weekends | 1 |
25,381 | 18,670,667,475 | IssuesEvent | 2021-10-30 16:45:48 | battlecode/battlecode21 | https://api.github.com/repos/battlecode/battlecode21 | closed | Automate tournament-challonge updating | infrastructure | Track down @arvid220u's mysterious script, integrate it into tournament runner.
Be careful not to reveal information before we mean to | 1.0 | Automate tournament-challonge updating - Track down @arvid220u's mysterious script, integrate it into tournament runner.
Be careful not to reveal information before we mean to | infrastructure | automate tournament challonge updating track down s mysterious script integrate it into tournament runner be careful not to reveal information before we mean to | 1 |
237,288 | 7,758,159,862 | IssuesEvent | 2018-05-31 18:39:50 | zom/Zom-iOS | https://api.github.com/repos/zom/Zom-iOS | closed | In 'Add friend' view, auto-add '@home.zom.im' when friends are manually entered | FOR REVIEW enhancement high-priority | @N-Pex We've discussed this briefly. In the 'add friends' view, if someone only enters a name (ex: carrie) and no server (ex: @home.zom.im), will you add the logic to automatically add the '@home.zom.im' to the entry, so that the user has a better chance of actually adding someone?
Let me know if you have questions. We do this on Android already. | 1.0 | In 'Add friend' view, auto-add '@home.zom.im' when friends are manually entered - @N-Pex We've discussed this briefly. In the 'add friends' view, if someone only enters a name (ex: carrie) and no server (ex: @home.zom.im), will you add the logic to automatically add the '@home.zom.im' to the entry, so that the user has a better chance of actually adding someone?
Let me know if you have questions. We do this on Android already. | non_infrastructure | in add friend view auto add home zom im when friends are manually entered n pex we ve discussed this briefly in the add friends view if someone only enters a name ex carrie and no server ex home zom im will you add the logic to automatically add the home zom im to the entry so that the user has a better chance of actually adding someone let me know if you have questions we do this on android already | 0 |
129,047 | 17,671,180,546 | IssuesEvent | 2021-08-23 06:23:29 | elementary/switchboard-plug-onlineaccounts | https://api.github.com/repos/elementary/switchboard-plug-onlineaccounts | opened | Provide way for Flatpak to retrieve credentials | Priority: Wishlist Needs Design | ## Problem
Flatpaked apps using EDS are not able to retrieve credentials from the host system. That's why we did not Flatpak Mail till now (https://github.com/elementary/mail/issues/591) and also ran into problems with Tasks for certain operations (https://github.com/elementary/tasks/issues/263).
## Proposal
Ideally we should re-use an existing way to retrieve the credentials from the host system.
## Prior Art
@Marukesu found a way to escape the sandbox and retrieve the credentials from the host system: https://github.com/elementary/tasks/issues/263#issuecomment-903379945
| 1.0 | Provide way for Flatpak to retrieve credentials - ## Problem
Flatpaked apps using EDS are not able to retrieve credentials from the host system. That's why we did not Flatpak Mail till now (https://github.com/elementary/mail/issues/591) and also ran into problems with Tasks for certain operations (https://github.com/elementary/tasks/issues/263).
## Proposal
Ideally we should re-use an existing way to retrieve the credentials from the host system.
## Prior Art
@Marukesu found a way to escape the sandbox and retrieve the credentials from the host system: https://github.com/elementary/tasks/issues/263#issuecomment-903379945
| non_infrastructure | provide way for flatpak to retrieve credentials problem flatpacked apps using eds are not able to retrieve credentials from the host system thats why we did not flatpack mail till now and also ran into problems with tasks for certain operations proposal ideally we should re use an existing way to retrieve the credentials from the host system prior art marukesu found a way to escape the sandbox and retrieve the credentials from the host system | 0 |
27,148 | 21,213,570,735 | IssuesEvent | 2022-04-11 03:53:02 | woocommerce/woocommerce | https://api.github.com/repos/woocommerce/woocommerce | opened | Standardize `lint` Executor | type: task tool: monorepo infrastructure | <!-- This form is for other issue types specific to the WooCommerce plugin. This is not a support portal. -->
**Prerequisites (mark completed items with an [x]):**
- [x] I have checked that my issue type is not listed here https://github.com/woocommerce/woocommerce/issues/new/choose
- [x] My issue is not a security issue, support request, bug report, enhancement or feature request (Please use the link above if it is).
**Issue Description:**
There should be a single `lint` executor that runs linting against all of the code in a project. [Like the `build` executor](https://github.com/woocommerce/woocommerce/issues/32551), we should be relying on Nx plugins where possible. We should standardize `lint-{language}` for each language and then have a central linting executor that depends on them.
| 1.0 | Standardize `lint` Executor - <!-- This form is for other issue types specific to the WooCommerce plugin. This is not a support portal. -->
**Prerequisites (mark completed items with an [x]):**
- [x] I have checked that my issue type is not listed here https://github.com/woocommerce/woocommerce/issues/new/choose
- [x] My issue is not a security issue, support request, bug report, enhancement or feature request (Please use the link above if it is).
**Issue Description:**
There should be a single `lint` executor that runs linting against all of the code in a project. [Like the `build` executor](https://github.com/woocommerce/woocommerce/issues/32551), we should be relying on Nx plugins where possible. We should standardize `lint-{language}` for each language and then have a central linting executor that depends on them.
| infrastructure | standardize lint executor prerequisites mark completed items with an i have checked that my issue type is not listed here my issue is not a security issue support request bug report enhancement or feature request please use the link above if it is issue description there should be a single lint executor that runs linting against all of the code in a project we should be relying on nx plugins where possible we should standardize lint language for each language and then have a central linting executor that depends on them | 1 |
342,688 | 10,320,559,572 | IssuesEvent | 2019-08-30 20:55:23 | wevote/WebApp | https://api.github.com/repos/wevote/WebApp | opened | Address & Elections: Button styling improvements | Difficulty: Easy Priority: 2 | Two things to tackle in:
WebApp/src/js/components/Ballot/BallotElectionListWithFilters.jsx
(CAUTION: there is another component with similar name -- make sure to work on this one "WithFilters")
1) Make "Cancel" and "Save" buttons wider so each takes up 50% of the width
2) Left align the election name, and right align the election date. Please make sure to make this change for the Upcoming Elections as well as for the Prior Elections.

Please test in both desktop and mobile.
| 1.0 | Address & Elections: Button styling improvements - Two things to tackle in:
WebApp/src/js/components/Ballot/BallotElectionListWithFilters.jsx
(CAUTION: there is another component with similar name -- make sure to work on this one "WithFilters")
1) Make "Cancel" and "Save" buttons wider so each takes up 50% of the width
2) Left align the election name, and right align the election date. Please make sure to make this change for the Upcoming Elections as well as for the Prior Elections.

Please test in both desktop and mobile.
| non_infrastructure | address elections button styling improvements two things to tackle in webapp src js components ballot ballotelectionlistwithfilters jsx caution there is another component with similar name make sure to work on this one withfilters make cancel and save buttons wider so each takes up of the width left align the election name and right align the election date please make sure to make this change for the upcoming elections as well as for the prior elections please test in both desktop and mobile | 0
5,161 | 5,477,353,407 | IssuesEvent | 2017-03-12 07:08:04 | danielricci/gosling-engine | https://api.github.com/repos/danielricci/gosling-engine | closed | Dispatcher passing extra args | enhancement Infrastructure | _From @danielricci on January 29, 2017 1:48_
There needs to be a way to pass in extra arguments when dispatching.
Look at the Message object and look at the args field, this isn't passed in yet
What we need is for a custom interface to extend the ActionListener, and an abstract class based on it that requires the setArgs; in there you will have an args. See if this will work or if we can do this within an interface.
_Copied from original issue: danielricci/chess#67_ | 1.0 | Dispatcher passing extra args - _From @danielricci on January 29, 2017 1:48_
There needs to be a way to pass in extra arguments when dispatching.
Look at the Message object and look at the args field, this isn't passed in yet
What we need is for a custom interface to extend the ActionListener, and an abstract class based on it that requires the setArgs; in there you will have an args. See if this will work or if we can do this within an interface.
_Copied from original issue: danielricci/chess#67_ | infrastructure | dispatcher passing extra args from danielricci on january there needs to be a way to pass in extra arguments when dispatching look at the message object and look at the args field this isnt passed in yet what we need is for a custom interface to extend the actionlistener and have an abstract class with that that requires the setargs and in there you will have an args see if this will work or if we can do this within an interface copied from original issue danielricci chess | 1
25,875 | 19,318,708,199 | IssuesEvent | 2021-12-14 01:16:11 | bootstrapworld/curriculum | https://api.github.com/repos/bootstrapworld/curriculum | closed | fall 2021 page numbers are missing?? | Infrastructure | The deploy script seems to have overwritten some of our content in fall2021, instead of creating spring2022. Need to address this asap. | 1.0 | fall 2021 page numbers are missing?? - The deploy script seems to have overwritten some of our content in fall2021, instead of creating spring2022. Need to address this asap. | infrastructure | fall page numbers are missing the deploy script seems to have overwritten some of our content in instead of creating need to address this asap | 1 |
31,201 | 25,409,684,168 | IssuesEvent | 2022-11-22 17:52:01 | celestiaorg/test-infra | https://api.github.com/repos/celestiaorg/test-infra | opened | testground/infra: Missing pod(s) in the test run | bug infrastructure | Now, in the EKS cluster, we can rarely see that a set of test pods can be missing out of nowhere
Some sample logs:
```shell
Nov 22 17:48:27.938574 DEBUG testplan pods state {"runner": "cluster:k8s", "run_id": "cdugl9926un8din2mr60", "running_for": "2m2s", "succeeded": 0, "running": 7, "pending": 1, "failed": 0, "unknown": 0}
Nov 22 17:48:29.948165 DEBUG testplan pods state {"runner": "cluster:k8s", "run_id": "cdugl9926un8din2mr60", "running_for": "2m4s", "succeeded": 0, "running": 8, "pending": 0, "failed": 0, "unknown": 0}
Nov 22 17:48:29.948214 INFO all testplan instances in `Running` state {"runner": "cluster:k8s", "run_id": "cdugl9926un8din2mr60", "took": "2m4s"}
Nov 22 17:49:22.248158 DEBUG testplan pods state {"runner": "cluster:k8s", "run_id": "cdugl9926un8din2mr60", "running_for": "2m56s", "succeeded": 0, "running": 8, "pending": 0, "failed": 0, "unknown": 0}
Nov 22 17:49:24.258297 DEBUG testplan pods state {"runner": "cluster:k8s", "run_id": "cdugl9926un8din2mr60", "running_for": "2m58s", "succeeded": 0, "running": 7, "pending": 0, "failed": 0, "unknown": 0}
``` | 1.0 | testground/infra: Missing pod(s) in the test run - Now, in the EKS cluster, we can rarely see that a set of test pods can be missing out of nowhere
Some sample logs:
```shell
Nov 22 17:48:27.938574 DEBUG testplan pods state {"runner": "cluster:k8s", "run_id": "cdugl9926un8din2mr60", "running_for": "2m2s", "succeeded": 0, "running": 7, "pending": 1, "failed": 0, "unknown": 0}
Nov 22 17:48:29.948165 DEBUG testplan pods state {"runner": "cluster:k8s", "run_id": "cdugl9926un8din2mr60", "running_for": "2m4s", "succeeded": 0, "running": 8, "pending": 0, "failed": 0, "unknown": 0}
Nov 22 17:48:29.948214 INFO all testplan instances in `Running` state {"runner": "cluster:k8s", "run_id": "cdugl9926un8din2mr60", "took": "2m4s"}
Nov 22 17:49:22.248158 DEBUG testplan pods state {"runner": "cluster:k8s", "run_id": "cdugl9926un8din2mr60", "running_for": "2m56s", "succeeded": 0, "running": 8, "pending": 0, "failed": 0, "unknown": 0}
Nov 22 17:49:24.258297 DEBUG testplan pods state {"runner": "cluster:k8s", "run_id": "cdugl9926un8din2mr60", "running_for": "2m58s", "succeeded": 0, "running": 7, "pending": 0, "failed": 0, "unknown": 0}
``` | infrastructure | testground infra missing pod s in the test run now in the eks cluster we can rarely see that a set of test pods can be missing out of nowhere some sample logs shell nov testplan pods state runner cluster run id running for succeeded running pending failed unknown nov testplan pods state runner cluster run id running for succeeded running pending failed unknown nov all testplan instances in running state runner cluster run id took nov testplan pods state runner cluster run id running for succeeded running pending failed unknown nov testplan pods state runner cluster run id running for succeeded running pending failed unknown | 1 |
7,322 | 7,888,627,332 | IssuesEvent | 2018-06-27 22:59:49 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | SwapWithPreviewAction command switch to swap deployment slots failing! | app-service/svc cxp in-progress product-issue triaged | My app service is
running on Azure,
has authentication enabled (API Management - ClientId and Secret), and
has a deployment slot called staging.
Before we enabled the authentication we could run the following powershell command to swap staging and production slots:
Switch-AzureRmWebAppSlot -SourceSlotName "staging" -DestinationSlotName "production" -Name "app1" -ResourceGroupName "group1" -verbose -SwapWithPreviewAction ApplySlotConfig
Swap with preview allows us to verify the deployed code works with the production configuration settings before switching users over to the newly deployed version.
However, after we enabled authentication to protect our app, we now receive the following error using the SwapWithPreviewAction
===============================
Switch-AzureRmWebAppSlot : Swap with Preview cannot be used when one of the slots has site authentication enabled.
At line:3 char:19
+ ... e-Command { Switch-AzureRmWebAppSlot -SourceSlotName "staging" -Desti ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [Switch-AzureRmWebAppSlot], CloudException
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.WebApps.Cmdlets.DeploymentSlots.SwitchAzureWebAppSlot
==================================
Does anyone know if it is possible to run swaps with authentication enabled? If so, how?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 49f5ea0e-d951-7e7f-142e-3509b2ec5eda
* Version Independent ID: eea6bd7f-8790-caa0-aed8-bc0e79680bbd
* Content: [Set up staging environments for web apps in Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/web-sites-staged-publishing)
* Content Source: [articles/app-service/web-sites-staged-publishing.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/web-sites-staged-publishing.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin** | 1.0 | SwapWithPreviewAction command switch to swap deployment slots failing! - My app service is
running on Azure,
has authentication enabled (API Management - ClientId and Secret), and
has a deployment slot called staging.
Before we enabled the authentication we could run the following powershell command to swap staging and production slots:
Switch-AzureRmWebAppSlot -SourceSlotName "staging" -DestinationSlotName "production" -Name "app1" -ResourceGroupName "group1" -verbose -SwapWithPreviewAction ApplySlotConfig
Swap with preview allows us to verify the deployed code works with the production configuration settings before switching users over to the newly deployed version.
However, after we enabled authentication to protect our app, we now receive the following error using the SwapWithPreviewAction
===============================
Switch-AzureRmWebAppSlot : Swap with Preview cannot be used when one of the slots has site authentication enabled.
At line:3 char:19
+ ... e-Command { Switch-AzureRmWebAppSlot -SourceSlotName "staging" -Desti ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [Switch-AzureRmWebAppSlot], CloudException
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.WebApps.Cmdlets.DeploymentSlots.SwitchAzureWebAppSlot
==================================
Does anyone know if it is possible to run swaps with authentication enabled? If so, how?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 49f5ea0e-d951-7e7f-142e-3509b2ec5eda
* Version Independent ID: eea6bd7f-8790-caa0-aed8-bc0e79680bbd
* Content: [Set up staging environments for web apps in Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/web-sites-staged-publishing)
* Content Source: [articles/app-service/web-sites-staged-publishing.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/web-sites-staged-publishing.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin** | non_infrastructure | swapwithpreviewaction command switch to swap deployment slots failing my app service is running on azure has authentication enabled api management clientid and secret has a deployment slot called staging before we enabled the authentication we could run the following powershell command to swap staging and production slots switch azurermwebappslot sourceslotname staging destinationslotname production name resourcegroupname verbose swapwithpreviewaction applyslotconfig swap with preview allows us to verify the deployed code works with the production configuration settings before switching users over to the newly deployed version however after we enabled authentication to protect our app we now receive the following error using the swapwithpreviewaction switch azurermwebappslot swap with preview cannot be used when one of the slots has site authentication enabled at line char e command switch azurermwebappslot sourceslotname staging desti categoryinfo closeerror cloudexception fullyqualifiederrorid microsoft azure commands webapps cmdlets deploymentslots switchazurewebappslot does anyone know if it is possible to run swaps with authentication enabled if so how document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service github login cephalin microsoft alias cephalin | 0 |
28,218 | 23,094,238,369 | IssuesEvent | 2022-07-26 17:53:23 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | [wasm-mt] place multi-threaded wasm build into separate artifacts folders | arch-wasm area-Infrastructure-mono | Build artifacts for the `/p:WasmEnableThreads=true` should go into a separate output folder. In other words it should be possible to build with `/p:WasmEnableThreads=false` in the same tree and without the build artifacts interfering with each other.
Needed by https://github.com/dotnet/runtime/issues/68508
Part of #68162 | 1.0 | [wasm-mt] place multi-threaded wasm build into separate artifacts folders - Build artifacts for the `/p:WasmEnableThreads=true` should go into a separate output folder. In other words it should be possible to build with `/p:WasmEnableThreads=false` in the same tree and without the build artifacts interfering with each other.
Needed by https://github.com/dotnet/runtime/issues/68508
Part of #68162 | infrastructure | place multi threaded wasm build into separate artifacts folders build artifacts for the p wasmenablethreads true should go into a separate output folder in other words it should be possible to build with p wasmenablethreads false in the same tree and without the build artifacts interfering with each other needed by part of | 1 |
377,894 | 26,274,584,765 | IssuesEvent | 2023-01-06 20:31:15 | shared-recruiting-co/shared-recruiting-co | https://api.github.com/repos/shared-recruiting-co/shared-recruiting-co | closed | In-App Documentation | documentation enhancement | **Is your feature request related to a problem? Please describe.**
As a user, I want to read documentation to learn what I can do with SRC
**Describe the solution you'd like**
Create a new docs site, either as a subdomain `docs.sharedrecruiting.co` or as a path prefix `sharedrecruiting.co/docs`. Keep the documentation non-technical. Technical folks can come here 😄
**Describe alternatives you've considered**
I think a hosted provider like Gitbooks is too heavy handed for our needs and worse for SEO.
**Additional context**
Leverage existing documentation templates and libraries
- https://tailwindui.com/templates/syntax
- https://markdoc.dev/
- https://docusaurus.io/
| 1.0 | In-App Documentation - **Is your feature request related to a problem? Please describe.**
As a user, I want to read documentation to learn what I can do with SRC
**Describe the solution you'd like**
Create a new docs site, either as a subdomain `docs.sharedrecruiting.co` or as a path prefix `sharedrecruiting.co/docs`. Keep the documentation non-technical. Technical folks can come here 😄
**Describe alternatives you've considered**
I think a hosted provider like Gitbooks is too heavy handed for our needs and worse for SEO.
**Additional context**
Leverage existing documentation templates and libraries
- https://tailwindui.com/templates/syntax
- https://markdoc.dev/
- https://docusaurus.io/
| non_infrastructure | in app documentation is your feature request related to a problem please describe as a user i want to read documentation to learn what i can do with src describe the solution you d like create a new docs site either as a subdomain docs sharedrecruiting co or as a path prefix sharedrecruiting co docs keep the documentation non technical technical folks can come here 😄 describe alternatives you ve considered i think a hosted provider like gitbooks is too heavy handed for our needs and worse for seo additional context leverage existing documentation templates and libraries | 0 |
29,883 | 24,369,510,694 | IssuesEvent | 2022-10-03 17:59:14 | romcal/romcal | https://api.github.com/repos/romcal/romcal | opened | Revise list of labels | infrastructure | I think we should change the list of labels:
- `bug`/`feature` scopes (`bugFeatureScope`):
- `martyrology` or `sanctorale` (we should choose one and use it consistently):
- any bug/feature related to the list of saints and their metadata;
- `non-personal celebrations`;
- any bug/feature related to the list of non-personal celebrations and their metadata;
- `temporale`;
- any bug/feature related to Proper of Time and related metadata;
- `localization`;
- any bug/feature related to localisation and related configuration;
- `celebrations`;
- any bug/feature related to celebrations and their definitions within calendars;
- `calendars`;
- any bug/feature related to calendars, their metadata and configuration (but not to celebrations);
- `tests`;
- any bug/feature related to tests of any kind.
- `infrastructure` scopes (`infraScope`):
- `dependencies` or `deps`;
- any bug/feature related to tests of any kind.
List of issues:
- `bug: ${bugFeatureScope}`:
- `typo` label should be moved into this label;
- `feature: ${bugFeatureScope}` or `feat: ${bugFeatureScope}`;
- `enhancement: ${bugFeatureScope}`;
- `discussion`:
- should be auto-moved to GitHub Discussions;
- `question` label should be merged into `discussion`;
- `duplicate`:
- should be auto-closed;
- `documentation`:
  - should this be changed to an `infrastructure` scope (`infraScope`)?
- `infrastructure: ${infraScope}` or `infra: ${infraScope}`;
- `help wanted`;
- `invalid`:
- when should this label be used?
- unless we specify what should be labelled as `invalid`, we should remove this label;
- `wontfix`.
Any suggestion for improvement is welcome. :wink: | 1.0 | Revise list of labels - I think we should change the list of labels:
- `bug`/`feature` scopes (`bugFeatureScope`):
- `martyrology` or `sanctorale` (we should choose one and use it consistently):
- any bug/feature related to the list of saints and their metadata;
- `non-personal celebrations`;
- any bug/feature related to the list of non-personal celebrations and their metadata;
- `temporale`;
- any bug/feature related to Proper of Time and related metadata;
- `localization`;
- any bug/feature related to localisation and related configuration;
- `celebrations`;
- any bug/feature related to celebrations and their definitions within calendars;
- `calendars`;
- any bug/feature related to calendars, their metadata and configuration (but not to celebrations);
- `tests`;
- any bug/feature related to tests of any kind.
- `infrastructure` scopes (`infraScope`):
- `dependencies` or `deps`;
- any bug/feature related to tests of any kind.
List of issues:
- `bug: ${bugFeatureScope}`:
- `typo` label should be moved into this label;
- `feature: ${bugFeatureScope}` or `feat: ${bugFeatureScope}`;
- `enhancement: ${bugFeatureScope}`;
- `discussion`:
- should be auto-moved to GitHub Discussions;
- `question` label should be merged into `discussion`;
- `duplicate`:
- should be auto-closed;
- `documentation`:
  - should this be changed to an `infrastructure` scope (`infraScope`)?
- `infrastructure: ${infraScope}` or `infra: ${infraScope}`;
- `help wanted`;
- `invalid`:
- when should this label be used?
- unless we specify what should be labelled as `invalid`, we should remove this label;
- `wontfix`.
Any suggestion for improvement is welcome. :wink: | infrastructure | revise list of labels i think we should change the list of labels bug feature scopes bugfeaturescope martyrology or sanctorale we should choose one and use it consistently any bug feature related to the list of saints and their metadata non personal celebrations any bug feature related to the list of non personal celebrations and their metadata temporale any bug feature related to proper of time and related metadata localization any bug feature related to localisation and related configuration celebrations any bug feature related to celebrations and their definitions within calendars calendars any bug feature related to calendars their metadata and configuration but not to celebrations tests any bug feature related to tests of any kind infrastructure scopes infrascope dependencies or deps any bug feature related to tests of any kind list of issues bug bugfeaturescope typo label should be moved into this label feature bugfeaturescope or feat bugfeaturescope enhancement bugfeaturescope discussion should be auto moved to github discussions question label should be merged into discussion duplicate should be auto closed documentation should this be changed to an infrastructure scope infrascope infrastructure infrascope or infra infrascope help wanted invalid when should this label be used unless we specify what should be labelled as invalid we should remove this label wontfix any suggestion for improvement is welcome wink | 1
34,411 | 29,804,905,960 | IssuesEvent | 2023-06-16 10:53:11 | UnitTestBot/UTBotJava | https://api.github.com/repos/UnitTestBot/UTBotJava | opened | Plugin version must not contain branch name | ctg-bug comp-infrastructure | **Description**
A change was made so that the generated plugin version contains the first 4 symbols of the branch name.
But inside the plugin, the version should follow the naming convention:
[yyyy.mm.<minor-version-indicator>]
Now the branch name is included in the plugin version,
and thus IDEA suggests updating the plugin to a previously released version.
**To Reproduce**
1. Install [UnitTestBot plugin built from main ](https://github.com/UnitTestBot/UTBotJava/actions/runs/5277843850) in IntelliJ IDEA 2023.1
2. Restart IDEA and open a project
3. Open IDEA -> Settings -> Plugins -> Installed
**Expected behavior**
Plugin version should be `2023.6.4608`
No update should be suggested.
**Actual behavior**
Plugin version is `2023.6.4608`
Update to previously released version `2023.3` is suggested.
**Screenshots, logs**
<img width="474" alt="Screenshot 2023-06-16 at 13 51 30" src="https://github.com/UnitTestBot/UTBotJava/assets/37301492/4d9db16e-f8fc-493e-a434-186db1009a1e">
**Environment**
IntelliJ IDEA version - 2023.1.2 CE
**Additional context**
Related to:
- #2013
| 1.0 | Plugin version must not contain branch name - **Description**
A change was made so that the generated plugin version contains the first 4 symbols of the branch name.
But inside the plugin, the version should follow the naming convention:
[yyyy.mm.<minor-version-indicator>]
Now the branch name is included in the plugin version,
and thus IDEA suggests updating the plugin to a previously released version.
**To Reproduce**
1. Install [UnitTestBot plugin built from main ](https://github.com/UnitTestBot/UTBotJava/actions/runs/5277843850) in IntelliJ IDEA 2023.1
2. Restart IDEA and open a project
3. Open IDEA -> Settings -> Plugins -> Installed
**Expected behavior**
Plugin version should be `2023.6.4608`
No update should be suggested.
**Actual behavior**
Plugin version is `2023.6.4608`
Update to previously released version `2023.3` is suggested.
**Screenshots, logs**
<img width="474" alt="Screenshot 2023-06-16 at 13 51 30" src="https://github.com/UnitTestBot/UTBotJava/assets/37301492/4d9db16e-f8fc-493e-a434-186db1009a1e">
**Environment**
IntelliJ IDEA version - 2023.1.2 CE
**Additional context**
Related to:
- #2013
| infrastructure | plugin version must not contain branch name description it was made that generated plugin should contain the first symbols of branch name but inside plugin should be named according to naming convention now branch name is included into plugin version and thus idea suggests to update plugin with previous released version to reproduce install in intellij idea restart idea and open a project open idea settings plugins installed expected behavior plugin version should be no update should be suggested actual behavior plugin version is update to previously released version is suggested screenshots logs img width alt screenshot at src environment intellij idea version ce additional context related to | 1 |
174,548 | 21,300,220,905 | IssuesEvent | 2022-04-15 01:24:00 | Dko1905/refinedPasswordManager | https://api.github.com/repos/Dko1905/refinedPasswordManager | opened | CVE-2022-22968 (Low) detected in spring-context-5.3.1.jar | security vulnerability | ## CVE-2022-22968 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-context-5.3.1.jar</b></p></summary>
<p>Spring Context</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-context/5.3.1/736836c8098981ddabd309a0c15f967594da62bc/spring-context-5.3.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.4.0-SNAPSHOT.jar (Root Library)
- spring-webmvc-5.3.1.jar
- :x: **spring-context-5.3.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.18, 5.2.0 - 5.2.20, and older unsupported versions, the patterns for disallowedFields on a DataBinder are case sensitive which means a field is not effectively protected unless it is listed with both upper and lower case for the first character of the field, including upper and lower case for the first character of all nested fields within the property path
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22968>CVE-2022-22968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22968">https://tanzu.vmware.com/security/cve-2022-22968</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-context:5.2.21,5.3.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-22968 (Low) detected in spring-context-5.3.1.jar - ## CVE-2022-22968 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-context-5.3.1.jar</b></p></summary>
<p>Spring Context</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-context/5.3.1/736836c8098981ddabd309a0c15f967594da62bc/spring-context-5.3.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.4.0-SNAPSHOT.jar (Root Library)
- spring-webmvc-5.3.1.jar
- :x: **spring-context-5.3.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.18, 5.2.0 - 5.2.20, and older unsupported versions, the patterns for disallowedFields on a DataBinder are case sensitive which means a field is not effectively protected unless it is listed with both upper and lower case for the first character of the field, including upper and lower case for the first character of all nested fields within the property path
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22968>CVE-2022-22968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22968">https://tanzu.vmware.com/security/cve-2022-22968</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-context:5.2.21,5.3.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve low detected in spring context jar cve low severity vulnerability vulnerable library spring context jar spring context library home page a href path to dependency file build gradle kts path to vulnerable library home wss scanner gradle caches modules files org springframework spring context spring context jar dependency hierarchy spring boot starter web snapshot jar root library spring webmvc jar x spring context jar vulnerable library found in base branch master vulnerability details in spring framework versions and older unsupported versions the patterns for disallowedfields on a databinder are case sensitive which means a field is not effectively protected unless it is listed with both upper and lower case for the first character of the field including upper and lower case for the first character of all nested fields within the property path publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring context step up your open source security game with whitesource | 0 |
491,129 | 14,145,667,519 | IssuesEvent | 2020-11-10 18:03:42 | processing/p5.js-web-editor | https://api.github.com/repos/processing/p5.js-web-editor | closed | Code folding no longer works in editor | good first issue help wanted priority:medium type:bug | #### Nature of issue?
- Found a bug
#### Details about the bug:
- Web browser and version: Google Chrome 83.0.4103.61 (Official Build) (64-bit)
- Operating System: Linux ( Ubuntu 18.04.4 LTS )
Code folding no longer works in the editor. The arrows in the gutter that are usually used to toggle folding don't respond to clicks. | 1.0 | Code folding no longer works in editor - #### Nature of issue?
- Found a bug
#### Details about the bug:
- Web browser and version: Google Chrome 83.0.4103.61 (Official Build) (64-bit)
- Operating System: Linux ( Ubuntu 18.04.4 LTS )
Code folding no longer works in the editor. The arrows in the gutter that are usually used to toggle folding don't respond to clicks. | non_infrastructure | code folding no longer works in editor nature of issue found a bug details about the bug web browser and version google chrome official build bit operating system linux ubuntu lts code folding no longer works in the editor the arrows in the gutter that are usually used to toggle folding don t respond to clicks | 0 |
3,729 | 4,514,646,115 | IssuesEvent | 2016-09-05 00:31:25 | jquery/esprima | https://api.github.com/repos/jquery/esprima | closed | Drop support for Node.js 0.10 | infrastructure | After 2016-10-01, we need not support it. Reference:
[Node.js LTS schedule](https://github.com/nodejs/LTS#lts_schedule).
At that time, it is possible to use the latest version of TypeScript formatter (right now it's still [blocked](https://github.com/jquery/esprima/pull/1522#issuecomment-242276843) by Node.js 0.10). | 1.0 | Drop support for Node.js 0.10 - After 2016-10-01, we need not support it. Reference:
[Node.js LTS schedule](https://github.com/nodejs/LTS#lts_schedule).
At that time, it is possible to use the latest version of TypeScript formatter (right now it's still [blocked](https://github.com/jquery/esprima/pull/1522#issuecomment-242276843) by Node.js 0.10). | infrastructure | drop support for node js after we need not support it reference at that time it is possible to use the latest version of typescript formatter right now it s still by node js | 1 |
14,777 | 11,138,976,041 | IssuesEvent | 2019-12-21 00:58:47 | NixOS/nixpkgs | https://api.github.com/repos/NixOS/nixpkgs | closed | 🚨 New commits are not being built by Hydra, and channels updates are stopped due to a down database server. 🚨 | 1.severity: blocker 1.severity: channel blocker infrastructure | At the time of writing, trying to connect to https://hydra.nixos.org/ give the following error:
```
DBIx::Class::Storage::DBI::catch {...} (): DBI Connection failed: DBI connect('dbname=hydra;host=10.254.1.2;user=hydra;','',...) failed: could not connect to server: Connection timed out
Is the server running on host "10.254.1.2" and accepting
TCP/IP connections on port 5432? at /nix/store/bf666x0vqgdaxfal1ywvi9any2vx9q7x-hydra-perl-deps/lib/perl5/site_perl/5.30.0/DBIx/Class/Storage/DBI.pm line 1517. at /nix/store/8271gj7cpcbz4mf6s0kc6905fnfh1bng-hydra-0.1.20191108.4779757/libexec/hydra/lib/Hydra/Helper/CatalystUtils.pm line 418
```
cc @edolstra @grahamc @FRidh (not sure who else is on the infrastructure team) | 1.0 | 🚨 New commits are not being built by Hydra, and channels updates are stopped due to a down database server. 🚨 - At the time of writing, trying to connect to https://hydra.nixos.org/ give the following error:
```
DBIx::Class::Storage::DBI::catch {...} (): DBI Connection failed: DBI connect('dbname=hydra;host=10.254.1.2;user=hydra;','',...) failed: could not connect to server: Connection timed out
Is the server running on host "10.254.1.2" and accepting
TCP/IP connections on port 5432? at /nix/store/bf666x0vqgdaxfal1ywvi9any2vx9q7x-hydra-perl-deps/lib/perl5/site_perl/5.30.0/DBIx/Class/Storage/DBI.pm line 1517. at /nix/store/8271gj7cpcbz4mf6s0kc6905fnfh1bng-hydra-0.1.20191108.4779757/libexec/hydra/lib/Hydra/Helper/CatalystUtils.pm line 418
```
cc @edolstra @grahamc @FRidh (not sure who else is on the infrastructure team) | infrastructure | 🚨 new commits are not being built by hydra and channels updates are stopped due to a down database server 🚨 at the time of writing trying to connect to give the following error dbix class storage dbi catch dbi connection failed dbi connect dbname hydra host user hydra failed could not connect to server connection timed out is the server running on host and accepting tcp ip connections on port at nix store hydra perl deps lib site perl dbix class storage dbi pm line at nix store hydra libexec hydra lib hydra helper catalystutils pm line cc edolstra grahamc fridh not sure who else is on the infrastructure team | 1 |
300,509 | 25,973,177,890 | IssuesEvent | 2022-12-19 12:57:51 | DucTrann1310/FeedbackOnline | https://api.github.com/repos/DucTrann1310/FeedbackOnline | opened | [BugID_86]_GUI_Học viên chưa feedback_Nội dung label ko đúng | bug Open Low Cosmetic UI_Label/Message Acceptance Testing | Actual output: label 'Quản lý học viên chưa Feedback'
Expected output: label 'Quản lý học viên chưa feedback'
-------------------------- | 1.0 | [BugID_86]_GUI_Học viên chưa feedback_Nội dung label ko đúng - Actual output: label 'Quản lý học viên chưa Feedback'
Expected output: label 'Quản lý học viên chưa feedback'
-------------------------- | non_infrastructure | gui học viên chưa feedback nội dung label ko đúng actual output label quản lý học viên chưa feedback expected output label quản lý học viên chưa feedback | 0 |
25,944 | 19,484,143,294 | IssuesEvent | 2021-12-26 01:56:57 | tophat/syrupy | https://api.github.com/repos/tophat/syrupy | opened | Switch to Poetry (or alternative) for syrupy dependency management | infrastructure | We've been having too many issues with pip-tools. Right now CI is failing because of conflicts that pip-tools can't easily resolve. I want to explore some alternatives such as https://python-poetry.org/ which are supposed to be easier to use. | 1.0 | Switch to Poetry (or alternative) for syrupy dependency management - We've been having too many issues with pip-tools. Right now CI is failing because of conflicts that pip-tools can't easily resolve. I want to explore some alternatives such as https://python-poetry.org/ which are supposed to be easier to use. | infrastructure | switch to poetry or alternative for syrupy dependency management we ve been having too many issues with pip tools right now ci is failing because of conflicts that pip tools can t easily resolve i want to explore some alternatives such as which are supposed to be easier to use | 1 |
403,411 | 27,416,513,272 | IssuesEvent | 2023-03-01 14:08:02 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | opened | Documentation: Confusing experience when learning about registering blocks | [Feature] Block API [Type] Developer Documentation | ### Description
The experience of starting learning about block development might be confusing for folks depending on how they start interacting with learning materials. We should ensure the message is unified and directs folks to the place we find the most expected.
### Step-by-step reproduction instructions
## Google
1. Go to [Google](https://www.google.com/) and type `gutenberg register block`.
2. Open both links leading to Block Editor Handbook.
3. Check their content.
## npm
1. Go visit [`@wordpress/blocks`](https://www.npmjs.com/package/@wordpress/blocks) package on npm.
2. Read the content of README file.
## ChatGPT
1. Go to [ChatGPT](https://chat.openai.com).
3. Try a prompt asking the bot to generate a WordPress plugin with a block that contains PHP, JavaScript, CSS and JSON files.
### Screenshots, screen recording, code snippet
## Google
<img width="745" alt="Screenshot 2023-03-01 at 14 59 29" src="https://user-images.githubusercontent.com/699132/222163299-61c1c19b-2bad-4f5b-941c-4fa74a594f1c.png">
## npm
<img width="846" alt="Screenshot 2023-03-01 at 14 59 00" src="https://user-images.githubusercontent.com/699132/222163285-e73ac084-bcc7-4745-8052-d110464e017c.png">
### Environment info
_No response_
### Please confirm that you have searched existing issues in the repo.
Yes
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
Yes | 1.0 | Documentation: Confusing experience when learning about registering blocks - ### Description
The experience of starting learning about block development might be confusing for folks depending on how they start interacting with learning materials. We should ensure the message is unified and directs folks to the place we find the most expected.
### Step-by-step reproduction instructions
## Google
1. Go to [Google](https://www.google.com/) and type `gutenberg register block`.
2. Open both links leading to Block Editor Handbook.
3. Check their content.
## npm
1. Go visit [`@wordpress/blocks`](https://www.npmjs.com/package/@wordpress/blocks) package on npm.
2. Read the content of README file.
## ChatGPT
1. Go to [ChatGPT](https://chat.openai.com).
3. Try a prompt asking the bot to generate a WordPress plugin with a block that contains PHP, JavaScript, CSS and JSON files.
### Screenshots, screen recording, code snippet
## Google
<img width="745" alt="Screenshot 2023-03-01 at 14 59 29" src="https://user-images.githubusercontent.com/699132/222163299-61c1c19b-2bad-4f5b-941c-4fa74a594f1c.png">
## npm
<img width="846" alt="Screenshot 2023-03-01 at 14 59 00" src="https://user-images.githubusercontent.com/699132/222163285-e73ac084-bcc7-4745-8052-d110464e017c.png">
### Environment info
_No response_
### Please confirm that you have searched existing issues in the repo.
Yes
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
Yes | non_infrastructure | documentation confusing experience when learning about registering blocks description the experience of starting learning about block development might be confusing for folks depending on how they start interacting with learning materials we should ensure the message is unified and directs folks to the place we find the most expected step by step reproduction instructions google go to and type gutenberg register block open both links leading to block editor handbook check their content npm go visit package on npm read the content of readme file chatgpt go to try a prompt asking the bot to generate a wordpress plugin with a block that contains php javascript css and json files screenshots screen recording code snippet google img width alt screenshot at src npm img width alt screenshot at src environment info no response please confirm that you have searched existing issues in the repo yes please confirm that you have tested with all plugins deactivated except gutenberg yes | 0 |
3,849 | 4,651,669,967 | IssuesEvent | 2016-10-03 11:05:51 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | tools/bot/pub.py messes with the file system | area-infrastructure Type: bug | 1) it deletes the docs/language directory
2) it creates a .test-outcome.log file which is not ignored by default
I'm running this from a mac. Using git. | 1.0 | tools/bot/pub.py messes with the file system - 1) it deletes the docs/language directory
2) it creates a .test-outcome.log file which is not ignored by default
I'm running this from a mac. Using git. | infrastructure | tools bot pub py messes with the file system it deletes the docs language directory it creates a test outcome log file which is not ignored by default i m running this from a mac using git | 1 |
310,323 | 26,711,334,063 | IssuesEvent | 2023-01-28 00:36:52 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [SEGURANCA] [REMOTO] [REDTEAM] Analista de Segurança (Redteam) na [GC SECURITY] | LINUX Norma ISO 27001 REMOTO SEGURANÇA SCRIPT PCI HELP WANTED PROGRAMACAO PENTEST OWASP TOP 10 REDTEAM Stale | <!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Descrição da vaga
- Nós estamos em busca de um Analista de Segurança (Redteam)
- Aqui na GC você vai:
- Realizar a avaliação de segurança em aplicações WEB, aplicativos mobile e infraestrutura (redes e servidores);
- Atender a clientes com alto nível de segurança, portanto skills para bypass em WAF e firewall serão necessários;
- Gerar evidências e descrição das vulnerabilidades encontradas, mas não é necessário gerar relatórios gerenciais.
## Local
- Remoto
## Benefícios
- Salário baseado em pesquisa reconhecida no mercado
- VR (R$33,00/dia útil)
- VT ou vaga de estacionamento
- Plano de saúde e odontológico
- Inglês in company
- Snacks no escritório
- Ambiente descontraído e colaborativo com igualdade de oportunidades
- Possibilidade de trabalho remoto (para pessoas de outras cidades/estados)
## Requisitos
**Obrigatórios:**
- Ter experiência com pentest de 2 a 3 anos;
- Ter experiencia comprovada em ferramentas de pentest como burp, frida, nmap, sqlmap, nuclei, amass e outras;
- Ter conhecimentos sólidos em Linux, programação/script e frameworks de segurança (PCI, OWASP TOP 10, NIST e ISO 27001).
**Diferenciais:**
- Tiver conhecimento em Code Review, OSINT e Forense Digital
## Contratação
- a combinar
## Nossa empresa
- A GC Security atua desde 2008 para construir um ambientes digitais mais seguros. Nossa abordagem é diferente, testamos a segurança da informação das empresas com a mesma tecnologia e táticas utilizadas pelo cibercrime. Sabemos que não existe uma solução única para aumentar a resiliência das empresas, por isso aliamos tecnologia de ponta e os melhores profissionais do mercado para enxergar falhas e vulnerabilidades que os outros não veem. Se você quer usar seu talento para construir um mundo mais seguro, veio ao lugar certo!
## Como se candidatar
- [Clique aqui para se candidatar](https://hipsters.jobs/job/17956/analista-de-seguran%C3%A7a-redteam/)
| 1.0 | [SEGURANCA] [REMOTO] [REDTEAM] Analista de Segurança (Redteam) na [GC SECURITY] - <!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Descrição da vaga
- Nós estamos em busca de um Analista de Segurança (Redteam)
- Aqui na GC você vai:
- Realizar a avaliação de segurança em aplicações WEB, aplicativos mobile e infraestrutura (redes e servidores);
- Atender a clientes com alto nível de segurança, portanto skills para bypass em WAF e firewall serão necessários;
- Gerar evidências e descrição das vulnerabilidades encontradas, mas não é necessário gerar relatórios gerenciais.
## Local
- Remoto
## Benefícios
- Salário baseado em pesquisa reconhecida no mercado
- VR (R$33,00/dia útil)
- VT ou vaga de estacionamento
- Plano de saúde e odontológico
- Inglês in company
- Snacks no escritório
- Ambiente descontraído e colaborativo com igualdade de oportunidades
- Possibilidade de trabalho remoto (para pessoas de outras cidades/estados)
## Requisitos
**Obrigatórios:**
- Ter experiência com pentest de 2 a 3 anos;
- Ter experiencia comprovada em ferramentas de pentest como burp, frida, nmap, sqlmap, nuclei, amass e outras;
- Ter conhecimentos sólidos em Linux, programação/script e frameworks de segurança (PCI, OWASP TOP 10, NIST e ISO 27001).
**Diferenciais:**
- Tiver conhecimento em Code Review, OSINT e Forense Digital
## Contratação
- a combinar
## Nossa empresa
- A GC Security atua desde 2008 para construir um ambientes digitais mais seguros. Nossa abordagem é diferente, testamos a segurança da informação das empresas com a mesma tecnologia e táticas utilizadas pelo cibercrime. Sabemos que não existe uma solução única para aumentar a resiliência das empresas, por isso aliamos tecnologia de ponta e os melhores profissionais do mercado para enxergar falhas e vulnerabilidades que os outros não veem. Se você quer usar seu talento para construir um mundo mais seguro, veio ao lugar certo!
## Como se candidatar
- [Clique aqui para se candidatar](https://hipsters.jobs/job/17956/analista-de-seguran%C3%A7a-redteam/)
| non_infrastructure | analista de segurança redteam na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na descrição da vaga nós estamos em busca de um analista de segurança redteam aqui na gc você vai realizar a avaliação de segurança em aplicaçoes web aplicativos mobile e infraestrutura redes e servidores atender a clientes com alto nivel de segurança portanto skills para bypass em waf e firewall serão necessários gerar evidencais e descrição das vulnerabilidades encontradas mas não é necessário gerar relatórios gerenciais local remoto benefícios salário baseado em pesquisa reconhecida no mercado vr r dia útil vt ou vaga de estacionamento plano de saúde e odontológico inglês in company snacks no escritório ambiente descontraído e colaborativo com igualdade de oportunidades possibilidade de trabalho remoto para pessoas de outras cidades estados requisitos obrigatórios ter experiência com pentest de a anos ter experiencia comprovada em ferramentas de pentest como burp frida nmap sqlmap nuclei amass e outras ter conhecimentos sólidos em linux programação script e frameworks de segurança pci owasp top nist e iso diferenciais tiver conhecimento em code review osint e forense digital contratação a combinar nossa empresa a gc security atua desde para construir um ambientes digitais mais seguros nossa abordagem é diferente testamos a segurança da informação das empresas com a mesma tecnologia e táticas utilizadas pelo cibercrime sabemos que não existe uma solução única para aumentar a resiliência das empresas por isso aliamos tecnologia de ponta e os melhores profissionais do mercado para enxergar falhas e vulnerabilidades que os outros não veem se você quer usar seu talento para construir um mundo mais seguro veio ao lugar certo como se candidatar | 0 |
2,909 | 3,961,140,470 | IssuesEvent | 2016-05-02 11:20:33 | openSUSE/daps | https://api.github.com/repos/openSUSE/daps | closed | Introduce Version Scheme for SUSEDoc Documents | docbook5 in progress infrastructure validation | ## Problem
Currently, when dealing with SUSEDoc documents, we add the version identifier into the `version` attribute.
This works most of the time. However, we never really raised the version number in the past. Although care was taken to provide compatibility between old and new versions, some validation errors seem to have slipped through. This makes searching and fixing unnecessary hard and cumbersome.
We should follow the [Naming and versioning DocBook customizations](http://docbook.org/docs/howto/#cust-naming).
## Solution
We could introduce the following rules:
* Old versions of SUSEDoc stay the same. They never change.
* Only bugfixes can be added to an old version. Bugfixes are typos or mistyped order
* Whenever we introduce changes we raise the version number.
* Incompatible changes raise the major version.
* Compatible changes raise the minor version.
When you customize the schema, use the following syntax to identify your DocBook derivation:
```
base_version-[subset|extension|variant] [name[-version]?]+
```
For SUSEDoc, we could use this:
```
5.1-subset SUSEDoc-0.9
``` | 1.0 | Introduce Version Scheme for SUSEDoc Documents - ## Problem
Currently, when dealing with SUSEDoc documents, we add the version identifier into the `version` attribute.
This works most of the time. However, we never really raised the version number in the past. Although care was taken to provide compatibility between old and new versions, some validation errors seem to have slipped through. This makes searching and fixing unnecessary hard and cumbersome.
We should follow the [Naming and versioning DocBook customizations](http://docbook.org/docs/howto/#cust-naming).
## Solution
We could introduce the following rules:
* Old versions of SUSEDoc stay the same. They never change.
* Only bugfixes can be added to an old version. Bugfixes are typos or mistyped order
* Whenever we introduce changes we raise the version number.
* Incompatible changes raise the major version.
* Compatible changes raise the minor version.
When you customize the schema, use the following syntax to identify your DocBook derivation:
```
base_version-[subset|extension|variant] [name[-version]?]+
```
For SUSEDoc, we could use this:
```
5.1-subset SUSEDoc-0.9
``` | infrastructure | introduce version scheme for susedoc documents problem currently when dealing with susedoc documents we add the version identifier into the version attribute this works most of the time however we never really raised the version number in the past although care was taken to provide compatibility between old and new versions some validation errors seem to have slipped through this makes searching and fixing unnecessary hard and cumbersome we should follow the solution we could introduce the following rules old versions of susedoc stay the same they never change only bugfixes can be added to an old version bugfixes are typos or mistyped order whenever we introduce changes we raise the version number incompatible changes raise the major version compatible changes raise the minor version when you customize the schema use the following syntax to identify your docbook derivation base version for susedoc we could use this subset susedoc | 1 |
20,289 | 13,792,704,901 | IssuesEvent | 2020-10-09 13:58:26 | bkochuna/ners570f20-Lab06 | https://api.github.com/repos/bkochuna/ners570f20-Lab06 | closed | Decide on infrastructure tools | infrastructure | The infrastructure tools to be decided upon by the team are the configure and build system (e.g. autotools, CMake, raw makefiles, etc.), and testing infrastructure (what tools will you use to define and execute tests), what systems or services will you use for running the tests. | 1.0 | Decide on infrastructure tools - The infrastructure tools to be decided upon by the team are the configure and build system (e.g. autotools, CMake, raw makefiles, etc.), and testing infrastructure (what tools will you use to define and execute tests), what systems or services will you use for running the tests. | infrastructure | decide on infrastructure tools the infrastructure tools to be decided upon by the team are the configure and build system e g autotools cmake raw makefiles etc and testing infrastructure what tools will you use to define and execute tests what systems or services will you use for running the tests | 1 |
518,954 | 15,037,869,598 | IssuesEvent | 2021-02-02 16:48:55 | eddieantonio/predictive-text-studio | https://api.github.com/repos/eddieantonio/predictive-text-studio | closed | Google Sheets URL Failing on Landing Page | bug 🔥 High priority | When adding a sharable link to the Google Sheets input field on the landing page
`Error: Could not connect to Google Sheets` is displayed. | 1.0 | Google Sheets URL Failing on Landing Page - When adding a sharable link to the Google Sheets input field on the landing page
`Error: Could not connect to Google Sheets` is displayed. | non_infrastructure | google sheets url failing on landing page when adding a sharable link to the google sheets input field on the landing page error could not connect to google sheets is displayed | 0 |
3,348 | 4,240,682,876 | IssuesEvent | 2016-07-06 14:11:58 | Graylog2/graylog2-server | https://api.github.com/repos/Graylog2/graylog2-server | opened | More lenient PluginLoader Version checks during pre-release | infrastructure | The plugin loading mechanism is currently very strict about plugins' required server versions.
During pre-release this creates unnecessary work updating plugin metadata all the time, because we cannot simply leave them on the next release version (e.g. server is `2.1.0-alpha.2-SNAPSHOT` but plugins require `2.1.0`).
As long as the server is strict when it is not in a pre-release version itself, I cannot see any immediate problems this would cause.
Essentially the server should do the following:
* Check its own version if it is pre-release.
* If yes, take its version without pre-release and compare plugin's required version as it does not (`greaterThanOrEqualTo`)
* If no, take its version as it is and compare to plugin's required version.
Does anyone see any problems with this approch? | 1.0 | More lenient PluginLoader Version checks during pre-release - The plugin loading mechanism is currently very strict about plugins' required server versions.
During pre-release this creates unnecessary work updating plugin metadata all the time, because we cannot simply leave them on the next release version (e.g. server is `2.1.0-alpha.2-SNAPSHOT` but plugins require `2.1.0`).
As long as the server is strict when it is not in a pre-release version itself, I cannot see any immediate problems this would cause.
Essentially the server should do the following:
* Check its own version if it is pre-release.
* If yes, take its version without pre-release and compare plugin's required version as it does not (`greaterThanOrEqualTo`)
* If no, take its version as it is and compare to plugin's required version.
Does anyone see any problems with this approch? | infrastructure | more lenient pluginloader version checks during pre release the plugin loading mechanism is currently very strict about plugins required server versions during pre release this creates unnecessary work updating plugin metadata all the time because we cannot simply leave them on the next release version e g server is alpha snapshot but plugins require as long as the server is strict when it is not in a pre release version itself i cannot see any immediate problems this would cause essentially the server should do the following check its own version if it is pre release if yes take its version without pre release and compare plugin s required version as it does not greaterthanorequalto if no take its version as it is and compare to plugin s required version does anyone see any problems with this approch | 1 |
35,780 | 9,660,013,699 | IssuesEvent | 2019-05-20 14:38:20 | syndesisio/syndesis | https://api.github.com/repos/syndesisio/syndesis | opened | Migrate to the React code base | cat/build cat/process group/ui | ## This is a...
<pre><code>
[x ] Feature request
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Documentation issue or request
</code></pre>
## Description
We need to move the code from https://github.com/syndesisio/syndesis-react to this repo.
The plan is to:
1. rename `ui` to `ui-angular`
2. making all fail and keep track of all the intervention point
3. move react code to `ui-react`
4. fix all the pointers to go to `ui-react`
| 1.0 | Migrate to the React code base - ## This is a...
<pre><code>
[x ] Feature request
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Documentation issue or request
</code></pre>
## Description
We need to move the code from https://github.com/syndesisio/syndesis-react to this repo.
The plan is to:
1. rename `ui` to `ui-angular`
2. making all fail and keep track of all the intervention point
3. move react code to `ui-react`
4. fix all the pointers to go to `ui-react`
| non_infrastructure | migrate to the react code base this is a feature request regression a behavior that used to work and stopped working in a new release bug report documentation issue or request description we need to move the code from to this repo the plan is to rename ui to ui angular making all fail and keep track of all the intervention point move react code to ui react fix all the pointers to go to ui react | 0 |
340,520 | 10,273,142,401 | IssuesEvent | 2019-08-23 18:26:04 | fgpv-vpgf/fgpv-vpgf | https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf | closed | Apply negative number values filters to map | bug-type: unexpected behavior priority: high problem: bug type: corrective | To reproduce:
Use any layer with lat/long columns. Typing in a negative value to filter lat/long columns and applying to map will not work (this cannot be reproduced until #3698 is merged, so just trust me until then)
The problem looks to be in the `sqlNodeToAqlNode` in `query.js`, where support is required to handle prefixes such as `-`. If anyone can think of additional prefixes that may need handling, comment below. | 1.0 | Apply negative number values filters to map - To reproduce:
Use any layer with lat/long columns. Typing in a negative value to filter lat/long columns and applying to map will not work (this cannot be reproduced until #3698 is merged, so just trust me until then)
The problem looks to be in the `sqlNodeToAqlNode` in `query.js`, where support is required to handle prefixes such as `-`. If anyone can think of additional prefixes that may need handling, comment below. | non_infrastructure | apply negative number values filters to map to reproduce use any layer with lat long columns typing in a negative value to filter lat long columns and applying to map will not work this cannot be reproduced until is merged so just trust me until then the problem looks to be in the sqlnodetoaqlnode in query js where support is required to handle prefixes such as if anyone can think of additional prefixes that may need handling comment below | 0 |
66,595 | 16,658,542,509 | IssuesEvent | 2021-06-06 00:25:21 | spack/spack | https://api.github.com/repos/spack/spack | closed | veloc 1.4, 1.3: build fails: transfer_module.cpp: too many arguments to function 'int AXL_Init()' | build-error e4s ecp | `veloc@1.4` (and `@1.3`) fails to build using:
* spack@develop (087110bcb013566f6ba392d4c271e891f4b3a2b1 from `Thu Apr 29 16:43:01 2021 +0200`)
* Ubuntu 20.04 - GCC 9.3.0
* Ubuntu 18.04 - GCC 7.5.0
* RHEL 8 - GCC 8.3.1
* RHEL 7 - GCC 9.3.0
Using container: `ecpe4s/ubuntu20.04-runner-x86_64:2021-03-10`
Concrete spec: [veloc-oqsntu.spec.yaml.txt](https://github.com/spack/spack/files/6400144/veloc-oqsntu.spec.yaml.txt)
Build log: [veloc-build-out.txt](https://github.com/spack/spack/files/6400165/veloc-build-out.txt)
```
$> spack mirror add E4S https://cache.e4s.io
$> spack buildcache keys -it
$> spack install --cache-only --only dependencies --include-build-deps -f ./veloc-oqsntu.spec.yaml
... OK
$> spack install --no-cache -f ./veloc-oqsntu.spec.yaml
...
==> Installing veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y
==> Fetching https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/_source-cache/archive/d5/d5d12aedb9e97f079c4428aaa486bfa4e31fe1db547e103c52e76c8ec906d0a8.zip
############################################################################################################################################################################################ 100.0%
==> No patches needed for veloc
==> veloc: Executing phase: 'cmake'
==> veloc: Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
'make' '-j16'
4 errors found in build log:
75 [ 47%] Linking C executable heatdis_original
76 cd /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-build-oqsntu5/test && /opt/spack/opt/spack/linux-ubuntu20.04-x86_64/gcc-9.3.0/cmake-3.19.7-7zkgd
4xkg62fl5x2upq4mof5dkkkg3u4/bin/cmake -E cmake_link_script CMakeFiles/heatdis_original.dir/link.txt --verbose=1
77 /opt/spack/lib/spack/env/gcc/gcc -O2 -g -DNDEBUG CMakeFiles/heatdis_original.dir/heatdis_original.c.o -o heatdis_original -Wl,-rpath,/opt/spack/opt/spack/linux-ubuntu20.04-x86_64/gc
c-9.3.0/mpich-3.4.1-hm77n22t37spis2wa4wssqtmqnvuhfz6/lib -lm /opt/spack/opt/spack/linux-ubuntu20.04-x86_64/gcc-9.3.0/mpich-3.4.1-hm77n22t37spis2wa4wssqtmqnvuhfz6/lib/libmpi.so
78 make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-build-oqsntu5'
79 [ 47%] Built target heatdis_original
80 /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp: In constructor 'transfer_module_t::transfer_module_t(const con
fig_t&)':
>> 81 /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp:54:28: error: too many arguments to function 'int AXL_Init()'
82 54 | int ret = AXL_Init(NULL);
83 | ^
84 In file included from /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.hpp:12,
85 from /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp:1:
86 /opt/spack/opt/spack/linux-ubuntu20.04-x86_64/gcc-9.3.0/axl-0.4.0-kv7mn663t4uj5aw6ssv26zgfzzgt3xev/include/axl.h:58:5: note: declared here
87 58 | int AXL_Init (void);
88 | ^~~~~~~~
89 /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp: In function 'int axl_transfer_file(axl_xfer_t, const string&,
const string&)':
>> 90 /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp:68:45: error: too few arguments to function 'int AXL_Create(axl
_xfer_t, const char*, const char*)'
91 68 | int id = AXL_Create(type, source.c_str());
92 | ^
93 In file included from /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.hpp:12,
94 from /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp:1:
95 /opt/spack/opt/spack/linux-ubuntu20.04-x86_64/gcc-9.3.0/axl-0.4.0-kv7mn663t4uj5aw6ssv26zgfzzgt3xev/include/axl.h:73:5: note: declared here
96 73 | int AXL_Create (axl_xfer_t xtype, const char* name, const char* state_file);
97 | ^~~~~~~~~~
>> 98 make[2]: *** [src/modules/CMakeFiles/veloc-modules.dir/build.make:111: src/modules/CMakeFiles/veloc-modules.dir/transfer_module.cpp.o] Error 1
99 make[2]: *** Waiting for unfinished jobs....
100 make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-build-oqsntu5'
>> 101 make[1]: *** [CMakeFiles/Makefile2:281: src/modules/CMakeFiles/veloc-modules.dir/all] Error 2
102 make[1]: Leaving directory '/tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-build-oqsntu5'
103 make: *** [Makefile:163: all] Error 2
See build log for details:
/tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-build-out.txt
```
@gonsie
| 1.0 | veloc 1.4, 1.3: build fails: transfer_module.cpp: too many arguments to function 'int AXL_Init()' - `veloc@1.4` (and `@1.3`) fails to build using:
* spack@develop (087110bcb013566f6ba392d4c271e891f4b3a2b1 from `Thu Apr 29 16:43:01 2021 +0200`)
* Ubuntu 20.04 - GCC 9.3.0
* Ubuntu 18.04 - GCC 7.5.0
* RHEL 8 - GCC 8.3.1
* RHEL 7 - GCC 9.3.0
Using container: `ecpe4s/ubuntu20.04-runner-x86_64:2021-03-10`
Concrete spec: [veloc-oqsntu.spec.yaml.txt](https://github.com/spack/spack/files/6400144/veloc-oqsntu.spec.yaml.txt)
Build log: [veloc-build-out.txt](https://github.com/spack/spack/files/6400165/veloc-build-out.txt)
```
$> spack mirror add E4S https://cache.e4s.io
$> spack buildcache keys -it
$> spack install --cache-only --only dependencies --include-build-deps -f ./veloc-oqsntu.spec.yaml
... OK
$> spack install --no-cache -f ./veloc-oqsntu.spec.yaml
...
==> Installing veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y
==> Fetching https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/_source-cache/archive/d5/d5d12aedb9e97f079c4428aaa486bfa4e31fe1db547e103c52e76c8ec906d0a8.zip
############################################################################################################################################################################################ 100.0%
==> No patches needed for veloc
==> veloc: Executing phase: 'cmake'
==> veloc: Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
'make' '-j16'
4 errors found in build log:
75 [ 47%] Linking C executable heatdis_original
76 cd /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-build-oqsntu5/test && /opt/spack/opt/spack/linux-ubuntu20.04-x86_64/gcc-9.3.0/cmake-3.19.7-7zkgd
4xkg62fl5x2upq4mof5dkkkg3u4/bin/cmake -E cmake_link_script CMakeFiles/heatdis_original.dir/link.txt --verbose=1
77 /opt/spack/lib/spack/env/gcc/gcc -O2 -g -DNDEBUG CMakeFiles/heatdis_original.dir/heatdis_original.c.o -o heatdis_original -Wl,-rpath,/opt/spack/opt/spack/linux-ubuntu20.04-x86_64/gc
c-9.3.0/mpich-3.4.1-hm77n22t37spis2wa4wssqtmqnvuhfz6/lib -lm /opt/spack/opt/spack/linux-ubuntu20.04-x86_64/gcc-9.3.0/mpich-3.4.1-hm77n22t37spis2wa4wssqtmqnvuhfz6/lib/libmpi.so
78 make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-build-oqsntu5'
79 [ 47%] Built target heatdis_original
80 /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp: In constructor 'transfer_module_t::transfer_module_t(const con
fig_t&)':
>> 81 /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp:54:28: error: too many arguments to function 'int AXL_Init()'
82 54 | int ret = AXL_Init(NULL);
83 | ^
84 In file included from /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.hpp:12,
85 from /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp:1:
86 /opt/spack/opt/spack/linux-ubuntu20.04-x86_64/gcc-9.3.0/axl-0.4.0-kv7mn663t4uj5aw6ssv26zgfzzgt3xev/include/axl.h:58:5: note: declared here
87 58 | int AXL_Init (void);
88 | ^~~~~~~~
89 /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp: In function 'int axl_transfer_file(axl_xfer_t, const string&,
const string&)':
>> 90 /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp:68:45: error: too few arguments to function 'int AXL_Create(axl
_xfer_t, const char*, const char*)'
91 68 | int id = AXL_Create(type, source.c_str());
92 | ^
93 In file included from /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.hpp:12,
94 from /tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-src/src/modules/transfer_module.cpp:1:
95 /opt/spack/opt/spack/linux-ubuntu20.04-x86_64/gcc-9.3.0/axl-0.4.0-kv7mn663t4uj5aw6ssv26zgfzzgt3xev/include/axl.h:73:5: note: declared here
96 73 | int AXL_Create (axl_xfer_t xtype, const char* name, const char* state_file);
97 | ^~~~~~~~~~
>> 98 make[2]: *** [src/modules/CMakeFiles/veloc-modules.dir/build.make:111: src/modules/CMakeFiles/veloc-modules.dir/transfer_module.cpp.o] Error 1
99 make[2]: *** Waiting for unfinished jobs....
100 make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-build-oqsntu5'
>> 101 make[1]: *** [CMakeFiles/Makefile2:281: src/modules/CMakeFiles/veloc-modules.dir/all] Error 2
102 make[1]: Leaving directory '/tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-build-oqsntu5'
103 make: *** [Makefile:163: all] Error 2
See build log for details:
/tmp/root/spack-stage/spack-stage-veloc-1.4-oqsntu54uhbqae6uw3vlvkdbxzzaet5y/spack-build-out.txt
```
@gonsie
| non_infrastructure | veloc build fails transfer module cpp too many arguments to function int axl init veloc and fails to build using spack develop from thu apr ubuntu gcc ubuntu gcc rhel gcc rhel gcc using container runner concrete spec build log spack mirror add spack buildcache keys it spack install cache only only dependencies include build deps f veloc oqsntu spec yaml ok spack install no cache f veloc oqsntu spec yaml installing veloc fetching no patches needed for veloc veloc executing phase cmake veloc executing phase build error processerror command exited with status make errors found in build log linking c executable heatdis original cd tmp root spack stage spack stage veloc spack build test opt spack opt spack linux gcc cmake bin cmake e cmake link script cmakefiles heatdis original dir link txt verbose opt spack lib spack env gcc gcc g dndebug cmakefiles heatdis original dir heatdis original c o o heatdis original wl rpath opt spack opt spack linux gc c mpich lib lm opt spack opt spack linux gcc mpich lib libmpi so make leaving directory tmp root spack stage spack stage veloc spack build built target heatdis original tmp root spack stage spack stage veloc spack src src modules transfer module cpp in constructor transfer module t transfer module t const con fig t tmp root spack stage spack stage veloc spack src src modules transfer module cpp error too many arguments to function int axl init int ret axl init null in file included from tmp root spack stage spack stage veloc spack src src modules transfer module hpp from tmp root spack stage spack stage veloc spack src src modules transfer module cpp opt spack opt spack linux gcc axl include axl h note declared here int axl init void tmp root spack stage spack stage veloc spack src src modules transfer module cpp in function int axl transfer file axl xfer t const string const string tmp root spack stage spack stage veloc spack src src modules transfer module cpp error too few arguments to function int 
axl create axl xfer t const char const char int id axl create type source c str in file included from tmp root spack stage spack stage veloc spack src src modules transfer module hpp from tmp root spack stage spack stage veloc spack src src modules transfer module cpp opt spack opt spack linux gcc axl include axl h note declared here int axl create axl xfer t xtype const char name const char state file make error make waiting for unfinished jobs make leaving directory tmp root spack stage spack stage veloc spack build make error make leaving directory tmp root spack stage spack stage veloc spack build make error see build log for details tmp root spack stage spack stage veloc spack build out txt gonsie | 0 |
78,051 | 14,944,225,639 | IssuesEvent | 2021-01-26 00:55:47 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | Multiplayer - Recover Shuttle > Lost Drone in Level > Server Lobby > Restarted Round > No Shuttle | Bug Code Networking | - [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
After paying to Recover Shuttle and starting the level, we lost the drone in that level; returning to the Server Lobby to restart the round resulted in no shuttle.
Recovered:
https://www.twitch.tv/videos/804257031?t=01h56m41s
We lost the drone in the level as we played, and then we decided to server lobby to save scum.
Server Lobbied:
https://www.twitch.tv/videos/804257031?t=02h49m26s
Restarted:
https://www.twitch.tv/videos/804257031?t=02h49m53s
Missing Shuttle:
https://www.twitch.tv/videos/804257031?t=02h50m21s
**Version**
0.1100.0.6 | 1.0 | Multiplayer - Recover Shuttle > Lost Drone in Level > Server Lobby > Restarted Round > No Shuttle - - [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
After paying to Recover Shuttle and starting the level, we lost the drone in that level; returning to the Server Lobby to restart the round resulted in no shuttle.
Recovered:
https://www.twitch.tv/videos/804257031?t=01h56m41s
We lost the drone in the level as we played, and then we decided to server lobby to save scum.
Server Lobbied:
https://www.twitch.tv/videos/804257031?t=02h49m26s
Restarted:
https://www.twitch.tv/videos/804257031?t=02h49m53s
Missing Shuttle:
https://www.twitch.tv/videos/804257031?t=02h50m21s
**Version**
0.1100.0.6 | non_infrastructure | multiplayer recover shuttle lost drone in level server lobby restarted round no shuttle i have searched the issue tracker to check if the issue has already been reported description when paying to recover shuttle start level lost it in this level returning back to the server lobby to restart the round resulted in no shuttle recovered we lost the drone in the level as we played and then we decided to server lobby to save scum server lobbied restarted missing shuttle version | 0 |
7,157 | 6,797,607,832 | IssuesEvent | 2017-11-01 23:51:46 | vmware/docker-volume-vsphere | https://api.github.com/repos/vmware/docker-volume-vsphere | closed | Upgrading docker version in CI testbeds | component/ci-infrastructure mustfix P0 | Currently we have two setups with 17.06 and 17.03.02 docker.
Docker stable release 17.09.0-ce (2017-09-26) is out and 17.03.02 should be upgraded to 17.09 (TOT - https://docs.docker.com/release-notes/docker-ce/)
/CC @tusharnt | 1.0 | Upgrading docker version in CI testbeds - Currently we have two setups with 17.06 and 17.03.02 docker.
Docker stable release 17.09.0-ce (2017-09-26) is out and 17.03.02 should be upgraded to 17.09 (TOT - https://docs.docker.com/release-notes/docker-ce/)
/CC @tusharnt | infrastructure | upgrading docker version in ci testbeds currently we have two setups with and docker docker stable release ce is out and should be upgraded to tot cc tusharnt | 1 |
213,680 | 24,016,301,765 | IssuesEvent | 2022-09-15 01:16:20 | Baneeishaque/PropertyFinder-final-v2 | https://api.github.com/repos/Baneeishaque/PropertyFinder-final-v2 | closed | CVE-2021-35065 (High) detected in glob-parent-2.0.0.tgz - autoclosed | security vulnerability | ## CVE-2021-35065 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-2.0.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- jest-23.4.2.tgz (Root Library)
- jest-cli-23.6.0.tgz
- micromatch-2.3.11.tgz
- parse-glob-3.0.4.tgz
- glob-base-0.3.0.tgz
- :x: **glob-parent-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Baneeishaque/PropertyFinder-final-v2/commit/7b628d8bb1eff423d4bbc77e1c4c6a35c84a82da">7b628d8bb1eff423d4bbc77e1c4c6a35c84a82da</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package glob-parent before 6.0.1 are vulnerable to Regular Expression Denial of Service (ReDoS)
<p>Publish Date: 2021-06-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065>CVE-2021-35065</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cj88-88mr-972w">https://github.com/advisories/GHSA-cj88-88mr-972w</a></p>
<p>Release Date: 2021-06-22</p>
<p>Fix Resolution (glob-parent): 6.0.1</p>
<p>Direct dependency fix Resolution (jest): 24.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-35065 (High) detected in glob-parent-2.0.0.tgz - autoclosed - ## CVE-2021-35065 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-2.0.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- jest-23.4.2.tgz (Root Library)
- jest-cli-23.6.0.tgz
- micromatch-2.3.11.tgz
- parse-glob-3.0.4.tgz
- glob-base-0.3.0.tgz
- :x: **glob-parent-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Baneeishaque/PropertyFinder-final-v2/commit/7b628d8bb1eff423d4bbc77e1c4c6a35c84a82da">7b628d8bb1eff423d4bbc77e1c4c6a35c84a82da</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package glob-parent before 6.0.1 are vulnerable to Regular Expression Denial of Service (ReDoS)
<p>Publish Date: 2021-06-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065>CVE-2021-35065</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cj88-88mr-972w">https://github.com/advisories/GHSA-cj88-88mr-972w</a></p>
<p>Release Date: 2021-06-22</p>
<p>Fix Resolution (glob-parent): 6.0.1</p>
<p>Direct dependency fix Resolution (jest): 24.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve high detected in glob parent tgz autoclosed cve high severity vulnerability vulnerable library glob parent tgz strips glob magic from a string to provide the parent path library home page a href path to dependency file package json path to vulnerable library node modules glob parent package json dependency hierarchy jest tgz root library jest cli tgz micromatch tgz parse glob tgz glob base tgz x glob parent tgz vulnerable library found in head commit a href vulnerability details the package glob parent before are vulnerable to regular expression denial of service redos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent direct dependency fix resolution jest step up your open source security game with mend | 0 |
73,850 | 3,421,937,587 | IssuesEvent | 2015-12-08 20:51:49 | google/paco | https://api.github.com/repos/google/paco | closed | Server ios experiment lists should not paginate | Component-iOS Component-Server Priority-High | The iOS client cannot handle pagination yet. Make sure that when the requestor is an iOS device that we return the whole list. | 1.0 | Server ios experiment lists should not paginate - The iOS client cannot handle pagination yet. Make sure that when the requestor is an iOS device that we return the whole list. | non_infrastructure | server ios experiment lists should not paginate the ios client cannot handle pagination yet make sure that when the requestor is an ios device that we return the whole list | 0 |
15,830 | 11,725,760,326 | IssuesEvent | 2020-03-10 13:30:36 | Tribler/tribler | https://api.github.com/repos/Tribler/tribler | opened | Separate version_id for the Core and the GUI | enhancement infrastructure | Currently, the GUI receives version_id from the Core with the `NTFY.TRIBLER_STARTED` event. Later, this version_id is used when sending crash reports through the GUI. However, this creates two time-windows where the GUI does not know the Core version and does not even know its own version! If the GUI or the Core crashes in one of these moments (namely, important start-up moments), the reports will be sent with `None` as version_id.
The proper way to solve this is to:
1. send version_id with each crash-JSON-notification-message going from the Core to the GUI
2. put version_id in the GUI separately and adjust version_id-changing logic in Jenkins jobs accordingly
3. make the GUI decide what version to use in a report based on what generated the crash (the Core or the GUI itself), and include both components' versions in the report.
4. ensure the reporter dialog is able to catch errors in the GUI as early as possible.
| 1.0 | Separate version_id for the Core and the GUI - Currently, the GUI receives version_id from the Core with the `NTFY.TRIBLER_STARTED` event. Later, this version_id is used when sending crash reports through the GUI. However, this creates two time-windows where the GUI does not know the Core version and does not even know its own version! If the GUI or the Core crashes in one of these moments (namely, important start-up moments), the reports will be sent with `None` as version_id.
The proper way to solve this is to:
1. send version_id with each crash-JSON-notification-message going from the Core to the GUI
2. put version_id in the GUI separately and adjust version_id-changing logic in Jenkins jobs accordingly
3. make the GUI decide what version to use in a report based on what generated the crash (the Core or the GUI itself), and include both components' versions in the report.
4. ensure the reporter dialog is able to catch errors in the GUI as early as possible.
| infrastructure | separate version id for the core and the gui currently the gui receives version id from the core with ntfy tribler started event later this version id is used when sending crash reports through the gui however this creates two time windows where the gui does not know the core version and does not even know its own version if the gui or the core crashes in one of these moments namely important start up moments the reports will be sent with none as version id the proper way to solve this is to send version id with each crash json notification message going from the core to the gui put version id in the gui separately and adjust version id changing logic it jenkins jobs accordingly make the gui decide what version to use in a report based on what generated the crash the core or the gui itself and include both components versions in the report ensure the reporter dialog is able to catch errors in the gui as early as possible | 1 |
35,176 | 30,819,174,561 | IssuesEvent | 2023-08-01 15:13:14 | cal-itp/eligibility-server | https://api.github.com/repos/cal-itp/eligibility-server | closed | Make availability/uptime check route through FrontDoor | infrastructure | Originally from @afeld in https://github.com/cal-itp/eligibility-server/pull/222#pullrequestreview-1207290155
> We currently have the uptime check accessing the App Service directly, not through Front Door. Thinking we should change it to the latter to make it end-to-end. This would remove the need for allowing those requests to the App Service, though not a big deal to leave them. | 1.0 | Make availability/uptime check route through FrontDoor - Originally from @afeld in https://github.com/cal-itp/eligibility-server/pull/222#pullrequestreview-1207290155
> We currently have the uptime check accessing the App Service directly, not through Front Door. Thinking we should change it to the latter to make it end-to-end. This would remove the need for allowing those requests to the App Service, though not a big deal to leave them. | infrastructure | make availability uptime check route through frontdoor originally from afeld in we currently have the uptime check accessing the app service directly not through front door thinking we should change it to the latter to make it end to end this would remove the need for allowing those requests to the app service though not a big deal to leave them | 1 |
15,031 | 11,303,124,774 | IssuesEvent | 2020-01-17 19:20:00 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | arm64 CoreCLR release tests on Alpine.38.arm64.open consistently failing | area-Infrastructure-coreclr | The netcoreapp5.0-Linux-Release-arm64-CoreCLR_release test run is consistently failing on CI runs of the primary pipelines. The console logs just give a 127 exit code and no other actionable messages that I can see:
```
You are using pip version 19.0.2, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
+ export 'PYTHONPATH=:/root/helix/scripts'
+ cd /root/helix/work/workitem
+ mkdir -p /home/helixbot/dotnetbuild/dumps/
+ /root/helix/work/correlation/scripts/a13a31b99c7746148e665a8f7e80f46f/execute.sh
+ ./RunTests.sh --runtime-path /root/helix/work/correlation
----- start Fri Jan 17 11:35:25 UTC 2020 =============== To repro directly: =====================================================
pushd .
/root/helix/work/correlation/dotnet exec --runtimeconfig Common.Tests.runtimeconfig.json --depsfile Common.Tests.deps.json xunit.console.dll Common.Tests.dll -xml testResults.xml -nologo -nocolor -notrait category=IgnoreForCI -notrait category=OuterLoop -notrait category=failing -notrait category=nonnetcoreapptests -notrait category=nonlinuxtests
popd
===========================================================================================================
/root/helix/work/workitem /root/helix/work/workitem
Error relocating /root/helix/work/correlation/dotnet: _ZNSt7__cxx1118basic_stringstreamIcSt11char_traitsIcESaIcEEC1Ev: symbol not found
/root/helix/work/workitem
----- end Fri Jan 17 11:35:25 UTC 2020 ----- exit code 127 ----------------------------------------------------------
Looking around for any Linux dump...
... found no dump in /root/helix/work/workitem
```
Possibly need to disable this leg entirely to get the pipeline job passing again.
Example Builds:
- https://dnceng.visualstudio.com/public/_build/results?buildId=487475&view=ms.vss-test-web.build-test-results-tab
- https://dnceng.visualstudio.com/public/_build/results?buildId=487740&view=ms.vss-test-web.build-test-results-tab
| 1.0 | arm64 CoreCLR release tests on Alpine.38.arm64.open consistently failing - The netcoreapp5.0-Linux-Release-arm64-CoreCLR_release test run is consistently failing on CI runs of the primary pipelines. The console logs just give a 127 exit code and no other actionable messages that I can see:
```
You are using pip version 19.0.2, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
+ export 'PYTHONPATH=:/root/helix/scripts'
+ cd /root/helix/work/workitem
+ mkdir -p /home/helixbot/dotnetbuild/dumps/
+ /root/helix/work/correlation/scripts/a13a31b99c7746148e665a8f7e80f46f/execute.sh
+ ./RunTests.sh --runtime-path /root/helix/work/correlation
----- start Fri Jan 17 11:35:25 UTC 2020 =============== To repro directly: =====================================================
pushd .
/root/helix/work/correlation/dotnet exec --runtimeconfig Common.Tests.runtimeconfig.json --depsfile Common.Tests.deps.json xunit.console.dll Common.Tests.dll -xml testResults.xml -nologo -nocolor -notrait category=IgnoreForCI -notrait category=OuterLoop -notrait category=failing -notrait category=nonnetcoreapptests -notrait category=nonlinuxtests
popd
===========================================================================================================
/root/helix/work/workitem /root/helix/work/workitem
Error relocating /root/helix/work/correlation/dotnet: _ZNSt7__cxx1118basic_stringstreamIcSt11char_traitsIcESaIcEEC1Ev: symbol not found
/root/helix/work/workitem
----- end Fri Jan 17 11:35:25 UTC 2020 ----- exit code 127 ----------------------------------------------------------
Looking around for any Linux dump...
... found no dump in /root/helix/work/workitem
```
Possibly need to disable this leg entirely to get the pipeline job passing again.
Example Builds:
- https://dnceng.visualstudio.com/public/_build/results?buildId=487475&view=ms.vss-test-web.build-test-results-tab
- https://dnceng.visualstudio.com/public/_build/results?buildId=487740&view=ms.vss-test-web.build-test-results-tab
| infrastructure | coreclr release tests on apline open consistently failing the linux release coreclr release test run is consistently failing on ci runs of the primary pipelines the console logs just give a exit code and no other actionable messages that i can see you are using pip version however version is available you should consider upgrading via the pip install upgrade pip command export pythonpath root helix scripts cd root helix work workitem mkdir p home helixbot dotnetbuild dumps root helix work correlation scripts execute sh runtests sh runtime path root helix work correlation start fri jan utc to repro directly pushd root helix work correlation dotnet exec runtimeconfig common tests runtimeconfig json depsfile common tests deps json xunit console dll common tests dll xml testresults xml nologo nocolor notrait category ignoreforci notrait category outerloop notrait category failing notrait category nonnetcoreapptests notrait category nonlinuxtests popd root helix work workitem root helix work workitem error relocating root helix work correlation dotnet symbol not found root helix work workitem end fri jan utc exit code looking around for any linux dump found no dump in root helix work workitem possibly need to disable this leg entirely to get the pipeline job passing again example builds | 1 |
810,491 | 30,245,692,652 | IssuesEvent | 2023-07-06 16:17:23 | elastic/security-docs | https://api.github.com/repos/elastic/security-docs | opened | Add Lateral Movement Detection analytics package info to release notes | enhancement release-notes Feature: Entity Analytics v8.9.0 Priority: Medium Effort: Small | Related issue: https://github.com/elastic/infosec/issues/14114
### Description
We have released a new version of the Lateral Movement Detection analytics package in 8.9 adding the ability to detect RDP threats. This should be added to the `8.9` release notes. | 1.0 | Add Lateral Movement Detection analytics package info to release notes - Related issue: https://github.com/elastic/infosec/issues/14114
### Description
We have released a new version of the Lateral Movement Detection analytics package in 8.9 adding the ability to detect RDP threats. This should be added to the `8.9` release notes. | non_infrastructure | add lateral movement detection analytics package info to release notes related issue description we have released a new version of the lateral movement detection analytics package in adding the ability to detect rdp threats this should be added to the release notes | 0 |
275,697 | 30,285,063,412 | IssuesEvent | 2023-07-08 15:12:09 | yaobinwen/robin_on_rails | https://api.github.com/repos/yaobinwen/robin_on_rails | closed | CSSLP: Study Chapter 8: Design Processes | security | # Description
This is a sub-issue of #94.
- [x] Study the textbook.
- [x] Make the notes.
- [x] Make the notes about threat tree model. | True | CSSLP: Study Chapter 8: Design Processes - # Description
This is a sub-issue of #94.
- [x] Study the textbook.
- [x] Make the notes.
- [x] Make the notes about threat tree model. | non_infrastructure | csslp study chapter design processes description this is a sub issue of study the textbook make the notes make the notes about threat tree model | 0 |
8,102 | 7,229,545,167 | IssuesEvent | 2018-02-11 20:48:15 | kaitai-io/kaitai_struct | https://api.github.com/repos/kaitai-io/kaitai_struct | closed | issues labels | infrastructure | I realize that pretty much everything else is higher priority than... labels... colors, but if you would want to spare 5 minutes to look at Construct labels and maybe adjust some colors? :wink:
https://github.com/construct/construct/labels
- remove duplicate, invalid, wontfix
- adjust colors to more ahem bright
If you feel like this issue is just too unimportant, just close it. | 1.0 | issues labels - I realize that pretty much everything else is higher priority than... labels... colors, but if you would want to spare 5 minutes to look at Construct labels and maybe adjust some colors? :wink:
https://github.com/construct/construct/labels
- remove duplicate, invalid, wontfix
- adjust colors to more ahem bright
If you feel like this issue is just too unimportant, just close it. | infrastructure | issues labels i realize that pretty much everything else is higher priority than labels colors but if you would want to spare minutes to look at construct labels and maybe adjust some colors wink remove duplicate invalid wontfix adjust colors to more ahem bright if you feel like this issue is just too unimportant just close it | 1 |
5,797 | 5,961,751,965 | IssuesEvent | 2017-05-29 18:53:27 | emberjs/guides | https://api.github.com/repos/emberjs/guides | closed | Warn users they are reading old documentation for < V2.0 | infrastructure | After reading a few pages of the guide for 1.10 I realized that I was on the wrong version. Is there a way the guides could warn readers they are reading old versions? Maybe something like symfony does: http://symfony.com/legacy
I was taken to the guide from google searching and started reading about resource and route and was very confused.
| 1.0 | Warn users they are reading old documentation for < V2.0 - After reading a few pages of the guide for 1.10 I realized that I was on the wrong version. Is there a way the guides could warn readers they are reading old versions? Maybe something like symfony does: http://symfony.com/legacy
I was taken to the guide from google searching and started reading about resource and route and was very confused.
| infrastructure | warn users they are reading old documentation for after reading a few pages of the guide for i realized that i was on the wrong version is there a way the guides could warn readers they are reading old versions maybe something like symfony does i was taken to the guide from google searching and started reading about resource and route and was very confused | 1 |
1,532 | 3,265,056,164 | IssuesEvent | 2015-10-22 14:45:06 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Mono intermittently segfaults | Area-Infrastructure Contributor Pain Resolution-Not Applicable | http://dotnet-ci.cloudapp.net/job/roslyn_future_lin_dbg_unit32/20/
```
00:21:38.744 Stacktrace:
00:21:38.744
00:21:38.744
00:21:38.744 Native stacktrace:
00:21:38.744
00:21:38.748 mono() [0x4a1dd8]
00:21:38.748 mono() [0x4f739e]
00:21:38.748 mono() [0x422d28]
00:21:38.748 /lib/x86_64-linux-gnu/libpthread.so.0(+0x10340) [0x7ffc40338340]
00:21:38.748 mono(mono_object_isinst+0x2d) [0x5ae5bd]
00:21:38.748 mono() [0x54aa31]
00:21:38.748 [0x410dfd17]
00:21:38.753
00:21:38.753 Debug info from gdb:
00:21:38.753
00:21:38.764
00:21:38.764 =================================================================
00:21:38.764 Got a SIGSEGV while executing native code. This usually indicates
00:21:38.764 a fatal error in the mono runtime or one of the native libraries
00:21:38.764 used by your application.
00:21:38.764 =================================================================
``` | 1.0 | Mono intermittently segfaults - http://dotnet-ci.cloudapp.net/job/roslyn_future_lin_dbg_unit32/20/
```
00:21:38.744 Stacktrace:
00:21:38.744
00:21:38.744
00:21:38.744 Native stacktrace:
00:21:38.744
00:21:38.748 mono() [0x4a1dd8]
00:21:38.748 mono() [0x4f739e]
00:21:38.748 mono() [0x422d28]
00:21:38.748 /lib/x86_64-linux-gnu/libpthread.so.0(+0x10340) [0x7ffc40338340]
00:21:38.748 mono(mono_object_isinst+0x2d) [0x5ae5bd]
00:21:38.748 mono() [0x54aa31]
00:21:38.748 [0x410dfd17]
00:21:38.753
00:21:38.753 Debug info from gdb:
00:21:38.753
00:21:38.764
00:21:38.764 =================================================================
00:21:38.764 Got a SIGSEGV while executing native code. This usually indicates
00:21:38.764 a fatal error in the mono runtime or one of the native libraries
00:21:38.764 used by your application.
00:21:38.764 =================================================================
``` | infrastructure | mono intermittently segfaults stacktrace native stacktrace mono mono mono lib linux gnu libpthread so mono mono object isinst mono debug info from gdb got a sigsegv while executing native code this usually indicates a fatal error in the mono runtime or one of the native libraries used by your application | 1 |
54,169 | 13,448,858,530 | IssuesEvent | 2020-09-08 16:01:56 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | opened | Health services have the wrong URL setting | Defect VAMC system | **Describe the defect**
Facility health services have the following path: `facility-health-services/[node:title]`
System health services have `[node:field_administration:entity:name]/system-health-services/[node:title]`
**To Reproduce**
Steps to reproduce the behavior:
1. Go to /admin/content
2. Filter by Facility or VAMC system health services content type
3. Review URLs
**Expected behavior**
Facility health service URLs should be `[VAMC-system-path]/[health-services]/[node-title]`
VAMC system health service URLs should be `[VAMC-system-path]/[health-services]/[taxonomy-term]`
eg pittsburgh-health-care/health-services/system/radiology
AC
- [ ] New systems
- [ ] Existing nodes should be updated
| 1.0 | Health services have the wrong URL setting - **Describe the defect**
Facility health services have the following path: `facility-health-services/[node:title]`
System health services have `[node:field_administration:entity:name]/system-health-services/[node:title]`
**To Reproduce**
Steps to reproduce the behavior:
1. Go to /admin/content
2. Filter by Facility or VAMC system health services content type
3. Review URLs
**Expected behavior**
Facility health service URLs should be `[VAMC-system-path]/[health-services]/[node-title]`
VAMC system health service URLs should be `[VAMC-system-path]/[health-services]/[taxonomy-term]`
eg pittsburgh-health-care/health-services/system/radiology
AC
- [ ] New systems
- [ ] Existing nodes should be updated
| non_infrastructure | health services have the wrong url setting describe the defect facility health services have the following path facility health services system health services have system health services to reproduce steps to reproduce the behavior go to admin content filter by facility or vamc system health services content type review urls expected behavior facility health service urls should be vamc system health service urls should be eg pittsburgh health care health services system radiology ac new systems existing nodes should be updated | 0 |
18,511 | 13,041,370,740 | IssuesEvent | 2020-07-28 20:15:24 | sass/dart-sass | https://api.github.com/repos/sass/dart-sass | closed | Compilation failures with new string_scanner version | infrastructure needs info | Language changes like "throw" returning the "Never" type are causing compilation failures of dart-sass when upgrading string_scanner.
This should be fixed ASAP.
The following have been reported:
ERROR: third_party/dart/sass/lib/src/parse/parser.dart:447
Dead code. #dead_code
ERROR: third_party/dart/sass/lib/src/parse/parser.dart:487
Dead code. #dead_code
ERROR: third_party/dart/sass/lib/src/parse/sass.dart:343
Dead code. #dead_code
ERROR: third_party/dart/sass/lib/src/parse/selector.dart:265
Dead code. #dead_code
ERROR: third_party/dart/sass/lib/src/parse/stylesheet.dart:182
Dead code. #dead_code
ERROR: third_party/dart/sass/lib/src/parse/stylesheet.dart:2147
Dead code. #dead_code
| 1.0 | Compilation failures with new string_scanner version - Language changes like "throw" returning the "Never" type are causing compilation failures of dart-sass when upgrading string_scanner.
This should be fixed ASAP.
The following have been reported:
ERROR: third_party/dart/sass/lib/src/parse/parser.dart:447
Dead code. #dead_code
ERROR: third_party/dart/sass/lib/src/parse/parser.dart:487
Dead code. #dead_code
ERROR: third_party/dart/sass/lib/src/parse/sass.dart:343
Dead code. #dead_code
ERROR: third_party/dart/sass/lib/src/parse/selector.dart:265
Dead code. #dead_code
ERROR: third_party/dart/sass/lib/src/parse/stylesheet.dart:182
Dead code. #dead_code
ERROR: third_party/dart/sass/lib/src/parse/stylesheet.dart:2147
Dead code. #dead_code
| infrastructure | compilation failures with new string scanner version language changes like throw returning the never type are causing compilation failures of dart sass when upgrading string scanner this should be fixed asap the following have been reported error third party dart sass lib src parse parser dart dead code dead code error third party dart sass lib src parse parser dart dead code dead code error third party dart sass lib src parse sass dart dead code dead code error third party dart sass lib src parse selector dart dead code dead code error third party dart sass lib src parse stylesheet dart dead code dead code error third party dart sass lib src parse stylesheet dart dead code dead code | 1 |
129,066 | 18,070,756,722 | IssuesEvent | 2021-09-21 02:25:22 | gdcorp-action-public-forks/actions-set-secret | https://api.github.com/repos/gdcorp-action-public-forks/actions-set-secret | opened | CVE-2021-3807 (Medium) detected in ansi-regex-4.1.0.tgz, ansi-regex-5.0.0.tgz | security vulnerability | ## CVE-2021-3807 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ansi-regex-4.1.0.tgz</b>, <b>ansi-regex-5.0.0.tgz</b></p></summary>
<p>
<details><summary><b>ansi-regex-4.1.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz</a></p>
<p>Path to dependency file: actions-set-secret/package.json</p>
<p>Path to vulnerable library: actions-set-secret/node_modules/table/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.6.0.tgz (Root Library)
- table-5.4.6.tgz
- string-width-3.1.0.tgz
- strip-ansi-5.2.0.tgz
- :x: **ansi-regex-4.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>ansi-regex-5.0.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p>
<p>Path to dependency file: actions-set-secret/package.json</p>
<p>Path to vulnerable library: actions-set-secret/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.6.0.tgz (Root Library)
- strip-ansi-6.0.0.tgz
- :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"4.1.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"eslint:7.6.0;table:5.4.6;string-width:3.1.0;strip-ansi:5.2.0;ansi-regex:4.1.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"},{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"5.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"eslint:7.6.0;strip-ansi:6.0.0;ansi-regex:5.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3807","vulnerabilityDetails":"ansi-regex is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-3807 (Medium) detected in ansi-regex-4.1.0.tgz, ansi-regex-5.0.0.tgz - ## CVE-2021-3807 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ansi-regex-4.1.0.tgz</b>, <b>ansi-regex-5.0.0.tgz</b></p></summary>
<p>
<details><summary><b>ansi-regex-4.1.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz</a></p>
<p>Path to dependency file: actions-set-secret/package.json</p>
<p>Path to vulnerable library: actions-set-secret/node_modules/table/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.6.0.tgz (Root Library)
- table-5.4.6.tgz
- string-width-3.1.0.tgz
- strip-ansi-5.2.0.tgz
- :x: **ansi-regex-4.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>ansi-regex-5.0.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p>
<p>Path to dependency file: actions-set-secret/package.json</p>
<p>Path to vulnerable library: actions-set-secret/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.6.0.tgz (Root Library)
- strip-ansi-6.0.0.tgz
- :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"4.1.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"eslint:7.6.0;table:5.4.6;string-width:3.1.0;strip-ansi:5.2.0;ansi-regex:4.1.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"},{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"5.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"eslint:7.6.0;strip-ansi:6.0.0;ansi-regex:5.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3807","vulnerabilityDetails":"ansi-regex is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve medium detected in ansi regex tgz ansi regex tgz cve medium severity vulnerability vulnerable libraries ansi regex tgz ansi regex tgz ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file actions set secret package json path to vulnerable library actions set secret node modules table node modules ansi regex package json dependency hierarchy eslint tgz root library table tgz string width tgz strip ansi tgz x ansi regex tgz vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file actions set secret package json path to vulnerable library actions set secret node modules ansi regex package json dependency hierarchy eslint tgz root library strip ansi tgz x 
ansi regex tgz vulnerable library found in base branch master vulnerability details ansi regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansi regex isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree eslint table string width strip ansi ansi regex isminimumfixversionavailable true minimumfixversion ansi regex packagetype javascript node js packagename ansi regex packageversion packagefilepaths istransitivedependency true dependencytree eslint strip ansi ansi regex isminimumfixversionavailable true minimumfixversion ansi regex basebranches vulnerabilityidentifier cve vulnerabilitydetails ansi regex is vulnerable to inefficient regular expression complexity vulnerabilityurl | 0 |
54,731 | 30,330,003,453 | IssuesEvent | 2023-07-11 05:22:22 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | storage: `MVCCValueMerger` is allocation heavy | C-performance A-storage T-storage | In a high-throughput, write-heavy workload (`kv33/2kb`), we see that **22.5%** of go heap allocated memory (by size) come from `storage.MVCCValueMerger`.
This is expensive and puts pressure on the Go GC, leading to more frequent GCs and higher foreground tail latency. Can we improve it? Here are some ideas:
- **(easy, high impact)** properly size `meta.RawBytes` in `MVCCValueMerger.Finish` and then `MarshalToSizedBuffer` into it instead of `append`
- **(easy)** hang `merged roachpb.InternalTimeSeriesData` off `MVCCValueMerger` to escape to heap
- **(easy)** hang `meta` enginepb.MVCCMetadataSubsetForMergeSerialization off `MVCCValueMerger` to escape to heap
- **(easy)** don't let `timeSeriesOp` loop iteration variable escape, instead use `&t.timeSeriesOps[i]`
- **(medium)** re-use `MVCCValueMerger.meta.RawBytes` across calls to `deserializeMVCCValueAndAppend` by not resetting it in `protoutil.Unmarshal`
- **(hard)** batch allocate inner primitive slices in `InternalTimeSeriesData`
<img width="1761" alt="Screenshot 2023-07-11 at 1 04 58 AM" src="https://github.com/cockroachdb/cockroach/assets/5438456/5cff4637-21b9-452f-8c6b-65b1ed78eeef">
[heap_profile.pb.gz](https://github.com/cockroachdb/cockroach/files/12010865/heap_profile.pb.gz) (use the `alloc_space` sample) | True | storage: `MVCCValueMerger` is allocation heavy - In a high-throughput, write-heavy workload (`kv33/2kb`), we see that **22.5%** of go heap allocated memory (by size) come from `storage.MVCCValueMerger`.
This is expensive and puts pressure on the Go GC, leading to more frequent GCs and higher foreground tail latency. Can we improve it? Here are some ideas:
- **(easy, high impact)** properly size `meta.RawBytes` in `MVCCValueMerger.Finish` and then `MarshalToSizedBuffer` into it instead of `append`
- **(easy)** hang `merged roachpb.InternalTimeSeriesData` off `MVCCValueMerger` to escape to heap
- **(easy)** hang `meta` enginepb.MVCCMetadataSubsetForMergeSerialization off `MVCCValueMerger` to escape to heap
- **(easy)** don't let `timeSeriesOp` loop iteration variable escape, instead use `&t.timeSeriesOps[i]`
- **(medium)** re-use `MVCCValueMerger.meta.RawBytes` across calls to `deserializeMVCCValueAndAppend` by not resetting it in `protoutil.Unmarshal`
- **(hard)** batch allocate inner primitive slices in `InternalTimeSeriesData`
<img width="1761" alt="Screenshot 2023-07-11 at 1 04 58 AM" src="https://github.com/cockroachdb/cockroach/assets/5438456/5cff4637-21b9-452f-8c6b-65b1ed78eeef">
[heap_profile.pb.gz](https://github.com/cockroachdb/cockroach/files/12010865/heap_profile.pb.gz) (use the `alloc_space` sample) | non_infrastructure | storage mvccvaluemerger is allocation heavy in a high throughput write heavy workload we see that of go heap allocated memory by size come from storage mvccvaluemerger this is expensive and puts pressure on the go gc leading to more frequent gcs and higher foreground tail latency can we improve it here are some ideas easy high impact properly size meta rawbytes in mvccvaluemerger finish and then marshaltosizedbuffer into it instead of append easy hang merged roachpb internaltimeseriesdata off mvccvaluemerger to escape to heap easy hang meta enginepb mvccmetadatasubsetformergeserialization off mvccvaluemerger to escape to heap easy don t let timeseriesop loop iteration variable escape instead use t timeseriesops medium re use mvccvaluemerger meta rawbytes across calls to deserializemvccvalueandappend by not resetting it in protoutil unmarshal hard batch allocate inner primitive slices in internaltimeseriesdata img width alt screenshot at am src use the alloc space sample | 0 |
9,482 | 8,000,028,694 | IssuesEvent | 2018-07-22 11:03:10 | procxx/kepka | https://api.github.com/repos/procxx/kepka | opened | Error installing from AUR | bug infrastructure | when installing **kepka** from AUR error at the end of the installation:
```
rm: невозможно удалить '/tmp/pamac-build-andreyk/kepka-git/pkg/kepka-git/usr/share/kservices5/tg.protocol': Это каталог
```
there is a line in PKGBUILD:
```
# I don't want to add conflicts=('telegram-desktop') thus I will not install tg.protocol.
rm -f "$pkgdir/usr/share/kservices5/tg.protocol"
```
| 1.0 | Error installing from AUR - when installing **kepka** from AUR error at the end of the installation:
```
rm: невозможно удалить '/tmp/pamac-build-andreyk/kepka-git/pkg/kepka-git/usr/share/kservices5/tg.protocol': Это каталог
```
there is a line in PKGBUILD:
```
# I don't want to add conflicts=('telegram-desktop') thus I will not install tg.protocol.
rm -f "$pkgdir/usr/share/kservices5/tg.protocol"
```
| infrastructure | error installing from aur when installing kepka from aur error at the end of the installation rm невозможно удалить tmp pamac build andreyk kepka git pkg kepka git usr share tg protocol это каталог there is a line in pkgbuild i don t want to add conflicts telegram desktop thus i will not install tg protocol rm f pkgdir usr share tg protocol | 1 |