Dataset schema (per-column dtype and value statistics):

| column | dtype / kind | min | max |
|---|---|---|---|
| Unnamed: 0 | int64 | 0 | 832k |
| id | float64 | 2.49B | 32.1B |
| type | stringclasses | 1 value | |
| created_at | stringlengths | 19 | 19 |
| repo | stringlengths | 7 | 112 |
| repo_url | stringlengths | 36 | 141 |
| action | stringclasses | 3 values | |
| title | stringlengths | 1 | 744 |
| labels | stringlengths | 4 | 574 |
| body | stringlengths | 9 | 211k |
| index | stringclasses | 10 values | |
| text_combine | stringlengths | 96 | 211k |
| label | stringclasses | 2 values | |
| text | stringlengths | 96 | 188k |
| binary_label | int64 | 0 | 1 |
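A quick way to sanity-check a dump like this is to load it and compare the dtypes and class counts against the schema above. A minimal sketch in pandas; the file name `github_issues.csv` is a placeholder, not part of the dataset.

```python
# Minimal sketch: load the dump and sanity-check it against the schema table.
import pandas as pd

df = pd.read_csv("github_issues.csv")  # placeholder file name

# Raw event fields plus the derived text_combine / label / text / binary_label.
print(df.dtypes)
print(df["action"].value_counts())  # stringclasses 3: opened / closed / ...
print(df["label"].value_counts())   # process vs. non_process
assert df["binary_label"].isin([0, 1]).all()
```

The sample rows below are shown one labelled block per row, with fields named as in the schema table.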
Unnamed: 0: 258,245 | id: 22,295,795,432 | type: IssuesEvent | created_at: 2022-06-13 01:18:23
repo: backend-br/vagas | repo_url: https://api.github.com/repos/backend-br/vagas | action: closed
title: [remote] Senior Web Developer (Fullstack with PHP experience) @ Feedz
labels: PJ PHP DevOps Presencial Testes automatizados CI CTO FullStack Stale
body:
🚀 ABOUT FEEDZ:
Our purpose is to create happier workplaces! That's why starting with our own nest is so important! We are determined Parrots who care about one another; the hunger for results is palpable and honest feedback is routine.
We have a work environment focused on cooperation between Devs; we trust our collaborators' work, and that's why we believe our developers should have autonomy!
We value the freedom to speak your mind and constant feedback, and we have a truly diverse team 💙 We also offer flexible hours. Need to go to the bank? Need to leave early to pick up the kids from school? No problem.
We are looking for a Senior Fullstack Developer (PHP and JS) to join our Product team, working directly with our tech leads and CTO, solving challenges that directly affect more than 100 thousand users and helping us raise the team's seniority.
Here at Feedz we highly value diversity and want Parrots who want to fly with us, regardless of ethnicity, gender, sexuality, nationality, age, or disability! Come join the nest 💛
📚 RESPONSIBILITIES AND ACTIVITIES:
Apply good practices agreed with the team during development (writing coherent code aligned with Clean Code; developing with a focus on security, patterns, CI/CD, automation, etc.);
Develop new modules for the platform and help with architecture decisions;
Identify opportunities to improve the performance of the installed base;
Develop and apply automated tests;
Mentor the development team in partnership with the tech leads;
Manage demand information, keeping project statuses up to date.
🥇 REQUIREMENTS:
Roughly 10 years of experience with web development (full stack, with PHP experience);
Experience with systems handling high data volumes;
Experience with automated tests (unit, integration, and/or acceptance);
Enjoy working on a team. A lone developer doesn't make a summer!
🎯 NICE TO HAVE:
Experience with e-commerce or SaaS;
Experience with DevOps processes (Bitbucket, GitLab, CI/CD, or related).
✨ BENEFITS:
100% home office (we are not going back to in-person work!!)
Happy Parrot allowance (R$100/month to invest in the Parrot's personal development or physical or mental health)
Bills allowance (R$160/month)
Parrot Wedding allowance (R$250 when Parrots get married)
Hatchling allowance (R$250 on the birth or adoption of little Parrots)
Happy Bird(day) (a day off during your birthday month)
Discounts at companies partnered with ACATE
iFood Office (an in-app voucher for some internal events)
💰 SALARY TO BE DEFINED (PJ contract)
Apply at: https://enliztjob.app.link/VRrBpK1Y0ob
index: 1.0
text_combine: title + " - " + body (verbatim duplicate of the two fields above)
label: non_process
text:
pessoa desenvolvedora web sênior fullstack com vivência em php feedz 🚀 a feedz nosso propósito é criar ambientes de trabalho mais felizes por isso começar pelo nosso próprio ninho é tão importante somos parrots determinados e que se importam uns com os outros a sede de resultado é latente e o feedback sincero é rotina aqui temos um ambiente de trabalho focado na cooperação entre devs confiamos no trabalho de nossos colaboradores e por isso acreditamos que nossos desenvolvedores devem ter autonomia prezamos pela liberdade de falar o que se pensa e pelos feedbacks constantes além de ter um time realmente diverso 💙 temos também flexibilidade de horários tem que ir ao banco precisa sair mais cedo para pegar os filhos na escola sem problema estamos buscando uma pessoa para o cargo de pessoa desenvolvedora fullstack sênior php e js que irá atuar no nosso time de produto trabalhando diretamente com nossos tech leads e cto resolvendo desafios que impactam mais de mil usuários diretamente e nos ajudando a evoluir a senioridade do time aqui na feedz valorizamos muito a diversidade e queremos parrots que queiram voar conosco independente da etnia gênero sexualidade nacionalidade idade ou deficiência vem pro ninho 💛 📚 responsabilidades e atividades aplicar boas práticas alinhadas com o time no desenvolvimento criando códigos coerentes e alinhados ao clean code desenvolvendo com foco em segurança patterns ci cd automação etc desenvolver novos módulos na plataforma e auxiliar nas definições de arquitetura identificar pontos de melhoria na performance da base instalada desenvolvimento e aplicação de testes automatizados mentorear o time de desenvolvimento em parceria com tech leads fazer a gestão da informação das demandas atualizando o status dos projetos 🥇 pré requisitos aproximadamente anos de experiência com desenvolvimento web full stack com vivência em php experiência com sistemas de alto volume de dados vivência com testes automatizados unitários integração e ou aceitação gostar de trabalhar em time pessoa desenvolvedora sozinha não faz verão 🎯diferenciais experiência com e commerce ou saas vivência em processos de devops bitbucket gitlab ci cd ou relacionados ✨ benefícios home office não iremos voltar ao trabalho presencial auxílio parrot feliz r mês de investimento no desenvolvimento pessoal saúde física ou mental do parrot auxílio contas r mês auxílio parrot casamenteiro r quando parrots casam auxílio ninhada r no nascimento ou adoção de pequenos parrots happy bird day um dia de folga no mês de aniversário descontos nas empresas conveniadas com a acate ifood office voucher para usar dentro do app em alguns eventos internos 💰 salário a definir pj candidate se em
binary_label: 0
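The `text` field of the record above looks like a derived column: the title and body lowercased, with URLs, digits, and punctuation stripped and whitespace collapsed (emoji survive). The actual preprocessing code is not given anywhere in the dump, so the following is a guess at the transformation, not the dataset's real pipeline.

```python
import re

def normalize(title: str, body: str) -> str:
    """Hypothetical reconstruction of the dataset's `text` column."""
    text = f"{title} {body}".lower()
    text = re.sub(r"https?://\S+", " ", text)  # drop URLs entirely
    text = re.sub(r"\d+", " ", text)           # drop digits ("100 mil" -> "mil")
    # Drop ASCII punctuation and markdown markers; keep letters and emoji.
    text = re.sub(r"[!-/:-@\[-`{-~]", " ", text)
    return re.sub(r"\s+", " ", text).strip()
```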
Unnamed: 0: 19,561 | id: 25,884,837,239 | type: IssuesEvent | created_at: 2022-12-14 13:52:44
repo: aiidateam/aiida-core | repo_url: https://api.github.com/repos/aiidateam/aiida-core | action: opened
title: Restrict or officially support process inputs storing data in process node attributes
labels: requires discussion type/feature request priority/nice-to-have topic/processes
body:
In #5801 a new keyword, `is_metadata`, was added to the `InputPort` constructor. By default it is `False`; when set to `True`, it signals that inputs to this port will not be linked to the process node as `Data` nodes, but will instead be stored on the process node itself in some form. The port is currently only used internally, by the ports in the `metadata` namespace, and the data is mostly stored in the node's attributes, with some exceptions, such as the label and description, which are stored in columns of the database model.
Currently, the feature is not officially documented. We should decide whether to allow plugins to add `is_metadata` ports and have that data automatically stored in the node's attributes, or whether this should be disallowed, with any such ports added outside of the AiiDA base classes raising an error.
index: 1.0
text_combine: title + " - " + body (verbatim duplicate of the two fields above)
label: process
text:
restrict or officially support process inputs storing data in process node attributes in a new keyword is metadata was added to the inputport constructor by default it is false but when set to true it signals that inputs to this port will not be linked up as data nodes to the process node but the data will be stored on the process node itself somehow the port is currently only used internally by all the ports in the metadata namespace and the data is mostly stored in the node s attributes with some exceptions like the label and description that are stored in columns of the database model currently the feature is not officially documented we should decide whether we should allow plugins to add is metadata ports and have that data automatically stored in the node s attribute or whether this should be disallowed and any added ports outside of the aiida base classes will raise an error
binary_label: 1
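For context on the AiiDA issue above, here is a rough sketch of what a plugin-declared `is_metadata` port could look like. The `is_metadata` keyword comes from the issue (added in #5801); whether `spec.input()` forwards it to the `InputPort` constructor like this for plugin-defined ports is precisely the undocumented behaviour the issue asks to either support or forbid, so treat this as an illustration of the question, not working API.

```python
from aiida import orm
from aiida.engine import CalcJob

class ExampleCalcJob(CalcJob):
    """Illustration only: the is_metadata usage below is not official API."""

    @classmethod
    def define(cls, spec):
        super().define(spec)
        # Regular input: linked to the process node as a `Data` node.
        spec.input('parameters', valid_type=orm.Dict)
        # Metadata-style input: per the issue, values passed to such a port
        # would be stored in the process node's attributes rather than being
        # linked as `Data` nodes -- exactly the behaviour under discussion.
        spec.input('flavour', valid_type=str, is_metadata=True)
```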
Unnamed: 0: 86,704 | id: 24,930,474,919 | type: IssuesEvent | created_at: 2022-10-31 11:10:35
repo: cilium/cilium | repo_url: https://api.github.com/repos/cilium/cilium | action: closed
title: Image Signing
labels: kind/feature area/build cncf/mentorship
body:
## Proposal / RFE
**Is your feature request related to a problem?**
Kubernetes itself is currently adding support for signing release artifacts (including cluster component images) using [cosign](https://github.com/sigstore/cosign). It would be excellent if Cilium signed its release images as well during the build pipeline, so that users who want more secure clusters could use an admission controller (like [connaisseur](https://github.com/sse-secure-systems/connaisseur)) that validates image signatures as an additional layer of security.
Connaisseur allows enabling verification for whitelisted namespaces; because Cilium is typically deployed into the kube-system namespace, this would make it possible to turn on verification for kube-system once the Kubernetes images are signed as well.
https://github.com/kubernetes/enhancements/issues/3031
https://github.com/kubernetes/release/issues/2227
https://github.com/kubernetes/release/issues/2383
https://github.com/sigstore/cosign
https://github.com/sse-secure-systems/connaisseur
index: 1.0
text_combine: title + " - " + body (verbatim duplicate of the two fields above)
label: non_process
text:
image signing proposal rfe is your feature request related to a problem kubernetes itself is currently adding support for signing release artifacts including cluster component images using it would be excellent if cilium could sign their release images as well during the build pipeline so that those users desiring more secure clusters could use an admission controller like that validates image signatures as an additional layer of security connaisseur allows for enabling verification for whitelisted namespaces and because cilium is typically deployed into the kube system namespace it would enable enabling verification for the kube system namespace once the kubernetes images are signed as well
binary_label: 0
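On the consumer side of the Cilium request above, signature validation with cosign looks roughly like the sketch below. The `cosign verify --key` invocation is standard cosign CLI; the image tag and key path are illustrative, and whether any given Cilium tag is actually signed depends on the release pipeline.

```python
import subprocess

def verify_image(image: str, pubkey: str = "cosign.pub") -> bool:
    """Return True if cosign verifies the signature on `image`."""
    result = subprocess.run(
        ["cosign", "verify", "--key", pubkey, image],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # Illustrative tag; signing coverage depends on the release in question.
    print(verify_image("quay.io/cilium/cilium:v1.13.0"))
```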
Unnamed: 0: 297,843 | id: 9,182,304,000 | type: IssuesEvent | created_at: 2019-03-05 12:30:40
repo: servicemesher/istio-official-translation | repo_url: https://api.github.com/repos/servicemesher/istio-official-translation | action: closed
title: content/docs/setup/kubernetes/multicluster-install/gateways/index.md
labels: lang/zh pending priority/P0 sync/update version/1.1
body:
File path: content/docs/setup/kubernetes/multicluster-install/gateways/index.md
[Source](https://github.com/istio/istio.github.io/tree/master/content/docs/setup/kubernetes/multicluster-install/gateways/index.md)
[Page](https://istio.io//docs/setup/kubernetes/multicluster-install/gateways/index.htm)
```diff
diff --git a/content/docs/setup/kubernetes/multicluster-install/gateways/index.md b/content/docs/setup/kubernetes/multicluster-install/gateways/index.md
index da715ad7..2cd15e01 100644
--- a/content/docs/setup/kubernetes/multicluster-install/gateways/index.md
+++ b/content/docs/setup/kubernetes/multicluster-install/gateways/index.md
@@ -30,57 +30,46 @@ on **each** Kubernetes cluster.
* The IP address of the `istio-ingressgateway` service in each cluster must
be accessible from every other cluster.
-* A **Root CA**. Cross cluster communication requires mutual TLS connection
- between services. To enable mutual TLS communication across clusters, each
+* A **Root CA**. Cross cluster communication requires mTLS connection
+ between services. To enable mTLS communication across clusters, each
cluster's Citadel will be configured with intermediate CA credentials
- generated by a shared root CA. For illustration purposes, we use a
- sample root CA certificate available in the Istio installation
+ generated by a shared root CA. For illustration purposes, we will use a
+ sample root CA certificate available as part of Istio install
under the `samples/certs` directory.
-## Deploy the Istio control plane in each cluster
+## Deploy Istio control plane in each cluster
-1. Generate intermediate CA certificates for each cluster's Citadel from your
- organization's root CA. The shared root CA enables mutual TLS communication
- across different clusters.
+1. Generate intermediate CA certs for each cluster's Citadel from your
+organization's root CA. The shared root CA enables mTLS communication
+across different clusters. For illustration purposes, we will use
+the sample root certificates as the intermediate certificate.
- > For illustration purposes, the following instructions use the root certificate from
- > the Istio samples directory as the intermediate certificates.
-
-1. In **every cluster**, create a Kubernetes secret for your generated CA certificates
+1. In every cluster, create a Kubernetes secret for your generated CA certs
using a command similar to the following:
{{< text bash >}}
$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
- --from-file=@samples/certs/ca-cert.pem@ \
- --from-file=@samples/certs/ca-key.pem@ \
- --from-file=@samples/certs/root-cert.pem@ \
- --from-file=@samples/certs/cert-chain.pem@
+ --from-file=samples/certs/ca-cert.pem \
+ --from-file=samples/certs/ca-key.pem \
+ --from-file=samples/certs/root-cert.pem \
+ --from-file=samples/certs/cert-chain.pem
{{< /text >}}
-1. Update Helm’s dependencies by following step 2 in the
- [Installation with Helm](/docs/setup/kubernetes/helm-install/#installation-steps) instructions.
-
-1. Generate a multicluster-gateways Istio configuration file using `helm`:
+1. Install the Istio control plane in every cluster using the following commands:
{{< text bash >}}
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
- -f @install/kubernetes/helm/istio/values-istio-multicluster-gateways.yaml@ > $HOME/istio.yaml
- {{< /text >}}
-
- For further details and customization options, refer to the
- [Installation with Helm](/docs/setup/kubernetes/helm-install/) instructions.
-
-1. Run the following command in **every cluster** to deploy an identical Istio control plane
- configuration in all of them.
-
- {{< text bash >}}
+ -f install/kubernetes/helm/istio/values-istio-multicluster-gateways.yaml > $HOME/istio.yaml
$ kubectl apply -f $HOME/istio.yaml
{{< /text >}}
-## Setup DNS
+For further details and customization options, refer to the [Installation
+with Helm](/docs/setup/kubernetes/helm-install/) instructions.
-Providing DNS resolution for services in remote clusters will allow
+## Configure DNS
+
+Providing a DNS resolution for services in remote clusters will allow
existing applications to function unmodified, as applications typically
expect to resolve services by their DNS names and access the resulting
IP. Istio itself does not use the DNS for routing requests between
@@ -88,15 +77,13 @@ services. Services local to a cluster share a common DNS suffix
(e.g., `svc.cluster.local`). Kubernetes DNS provides DNS resolution for these
services.
-To provide a similar setup for services from remote clusters, we name
+To provide a similar setup for services from remote clusters, we will name
services from remote clusters in the format
`<name>.<namespace>.global`. Istio also ships with a CoreDNS server that
will provide DNS resolution for these services. In order to utilize this
DNS, Kubernetes' DNS needs to be configured to point to CoreDNS as the DNS
-server for the `.global` DNS domain. Create one of the following ConfigMaps
-or update an existing one:
-
-For clusters that use kube-dns:
+server for the `.global` DNS domain. Create the following ConfigMap (or
+update an existing one):
{{< text bash >}}
$ kubectl apply -f - <<EOF
@@ -111,63 +98,161 @@ data:
EOF
{{< /text >}}
-For clusters that use CoreDNS:
+## Adding services from other clusters
-{{< text bash >}}
-$ kubectl apply -f - <<EOF
-apiVersion: v1
-kind: ConfigMap
+Each service in the remote cluster that needs to be accessed from a given
+cluster requires a `ServiceEntry` configuration. The host used in the
+service entry should be of the form `<name>.<namespace>.global` where name
+and namespace correspond to the remote service's name and namespace
+respectively. In order to provide DNS resolution for services under the
+`*.global` domain, you need to assign these services an IP address. We
+suggest assigning an IP address from the 127.255.0.0/16 subnet. These IPs
+are non-routable outside of a pod. Application traffic for these IPs will
+be captured by the sidecar and routed to the appropriate remote service
+
+> Each service (in the .global DNS domain) must have a unique IP within the cluster.
+
+For example, the diagram above depicts two services `foo.ns1` in `cluster1`
+and `bar.ns2` in `cluster2`. In order to access `bar.ns2` from `cluster1`,
+add the following service entry to `cluster1`:
+
+{{< text yaml >}}
+apiVersion: networking.istio.io/v1alpha3
+kind: ServiceEntry
metadata:
- name: coredns
- namespace: kube-system
-data:
- Corefile: |
- .:53 {
- errors
- health
- kubernetes cluster.local in-addr.arpa ip6.arpa {
- pods insecure
- upstream
- fallthrough in-addr.arpa ip6.arpa
- }
- prometheus :9153
- proxy . /etc/resolv.conf
- cache 30
- loop
- reload
- loadbalance
- }
- global:53 {
- errors
- cache 30
- proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
- }
-EOF
+ name: bar-ns2
+spec:
+ hosts:
+ # must be of form name.namespace.global
+ - bar.ns2.global
+ # Treat remote cluster services as part of the service mesh
+ # as all clusters in the service mesh share the same root of trust.
+ location: MESH_INTERNAL
+ ports:
+ - name: http1
+ number: 8080
+ protocol: http
+ - name: tcp2
+ number: 9999
+ protocol: tcp
+ resolution: DNS
+ addresses:
+ # the IP address to which bar.ns2.global will resolve to
+ # must be unique for each remote service, within a given cluster.
+ # This address need not be routable. Traffic for this IP will be captured
+ # by the sidecar and routed appropriately.
+ - 127.255.0.2
+ endpoints:
+ # This is the routable address of the ingress gateway in cluster2 that
+ # sits in front of bar.ns2 service. Traffic from the sidecar will be routed
+ # to this address.
+ - address: <IPofCluster2IngressGateway>
+ ports:
+ http1: 15443 # Do not change this port value
+ tcp2: 15443 # Do not change this port value
+{{< /text >}}
+
+If you wish to route all egress traffic from `cluster1` via a dedicated
+egress gateway, use the following service entry for `bar.ns2`
+
+{{< text yaml >}}
+apiVersion: networking.istio.io/v1alpha3
+kind: ServiceEntry
+metadata:
+ name: bar-ns2
+spec:
+ hosts:
+ # must be of form name.namespace.global
+ - bar.ns2.global
+ location: MESH_INTERNAL
+ ports:
+ - name: http1
+ number: 8080
+ protocol: http
+ - name: tcp2
+ number: 9999
+ protocol: tcp
+ resolution: DNS
+ addresses:
+ - 127.255.0.2
+ endpoints:
+ - address: <IPofCluster2IngressGateway>
+ network: external
+ ports:
+ http1: 15443 # Do not change this port value
+ tcp2: 15443 # Do not change this port value
+ - address: istio-egressgateway.istio-system.svc.cluster.local
+ ports:
+ http1: 15443
+ tcp2: 15443
{{< /text >}}
-## Configure application services
+Verify the setup by trying to access `bar.ns2.global` or `bar.ns2` from any
+pod on `cluster1`. Both DNS names should resolve to 127.255.0.2, the
+address used in the service entry configuration.
-Every service in a given cluster that needs to be accessed from a different remote
-cluster requires a `ServiceEntry` configuration in the remote cluster.
-The host used in the service entry should be of the form `<name>.<namespace>.global`
-where name and namespace correspond to the service's name and namespace respectively.
-Visit our [multicluster using gateways](/docs/examples/multicluster/gateways/)
-example for detailed configuration instructions.
+The configurations above will result in all traffic in `cluster1` for
+`bar.ns2.global` on *any port* to be routed to the endpoint
+`<IPofCluster2IngressGateway>:15443` over an mTLS connection.
-## Uninstalling
+The gateway for port 15443 is a special SNI-aware Envoy that has been
+preconfigured and installed as part of the Istio installation step
+described in the prerequisite section. Traffic entering port 15443 will be
+load balanced among pods of the appropriate internal service of the target
+cluster (in this case, `bar.ns2`).
-Uninstall Istio by running the following commands on **every cluster**:
+> Do not create a Gateway configuration for port 15443.
-{{< text bash >}}
-$ kubectl delete -f $HOME/istio.yaml
-$ kubectl delete ns istio-system
+## Version-aware routing to remote services
+
+If the remote service being added has multiple versions, add one or more
+labels to the service entry endpoint, and follow the steps outlined in the
+[request routing](/docs/tasks/traffic-management/request-routing/) section
+to create appropriate virtual services and destination rules. For example,
+
+{{< text yaml >}}
+apiVersion: networking.istio.io/v1alpha3
+kind: ServiceEntry
+metadata:
+ name: bar-ns2
+spec:
+ hosts:
+ # must be of form name.namespace.global
+ - bar.ns2.global
+ location: MESH_INTERNAL
+ ports:
+ - name: http1
+ number: 8080
+ protocol: http
+ - name: tcp2
+ number: 9999
+ protocol: tcp
+ resolution: DNS
+ addresses:
+ # the IP address to which bar.ns2.global will resolve to
+ # must be unique for each service.
+ - 127.255.0.2
+ endpoints:
+ - address: <IPofCluster2IngressGateway>
+ labels:
+ version: beta
+ some: thing
+ foo: bar
+ ports:
+ http1: 15443 # Do not change this port value
+ tcp2: 15443 # Do not change this port value
{{< /text >}}
+Use destination rules to create subsets for `bar.ns2` service with
+appropriate label selectors. The set of steps to follow are identical to
+those used for a local service.
+
## Summary
-Using Istio gateways, a common root CA, and service entries, you can configure
-a single Istio service mesh across multiple Kubernetes clusters.
-Once configured this way, traffic can be transparently routed to remote clusters
+Using Istio gateways, a common root CA, and service entries, you configured
+a single Istio service mesh across multiple Kubernetes clusters. Although
+the above procedure involved a certain amount of manual work, the entire
+process could be automated by creating service entries for each service in
+the system, with a unique IP allocated from the 127.255.0.0/16 subnet. Once
+configured this way, traffic can be transparently routed to remote clusters
without any application involvement.
-Although this approach requires a certain amount of manual configuration for
-remote service access, the service entry creation process could be automated.
```
index: 1.0
text_combine: title + " - " + body (verbatim duplicate of the two fields above, including the full diff)
label: non_process
text:
content docs setup kubernetes multicluster install gateways index md 文件路径:content docs setup kubernetes multicluster install gateways index md diff diff git a content docs setup kubernetes multicluster install gateways index md b content docs setup kubernetes multicluster install gateways index md index a content docs setup kubernetes multicluster install gateways index md b content docs setup kubernetes multicluster install gateways index md on each kubernetes cluster the ip address of the istio ingressgateway service in each cluster must be accessible from every other cluster a root ca cross cluster communication requires mutual tls connection between services to enable mutual tls communication across clusters each a root ca cross cluster communication requires mtls connection between services to enable mtls communication across clusters each cluster s citadel will be configured with intermediate ca credentials generated by a shared root ca for illustration purposes we use a sample root ca certificate available in the istio installation generated by a shared root ca for illustration purposes we will use a sample root ca certificate available as part of istio install under the samples certs directory deploy the istio control plane in each cluster deploy istio control plane in each cluster generate intermediate ca certificates for each cluster s citadel from your organization s root ca the shared root ca enables mutual tls communication across different clusters generate intermediate ca certs for each cluster s citadel from your organization s root ca the shared root ca enables mtls communication across different clusters for illustration purposes we will use the sample root certificates as the intermediate certificate for illustration purposes the following instructions use the root certificate from the istio samples directory as the intermediate certificates in every cluster create a kubernetes secret for your generated ca certificates in every cluster create a kubernetes secret for your generated ca certs using a command similar to the following kubectl create namespace istio system kubectl create secret generic cacerts n istio system from file samples certs ca cert pem from file samples certs ca key pem from file samples certs root cert pem from file samples certs cert chain pem from file samples certs ca cert pem from file samples certs ca key pem from file samples certs root cert pem from file samples certs cert chain pem update helm’s dependencies by following step in the docs setup kubernetes helm install installation steps instructions generate a multicluster gateways istio configuration file using helm install the istio control plane in every cluster using the following commands helm template install kubernetes helm istio name istio namespace istio system f install kubernetes helm istio values istio multicluster gateways yaml home istio yaml for further details and customization options refer to the docs setup kubernetes helm install instructions run the following command in every cluster to deploy an identical istio control plane configuration in all of them f install kubernetes helm istio values istio multicluster gateways yaml home istio yaml kubectl apply f home istio yaml setup dns for further details and customization options refer to the installation with helm docs setup kubernetes helm install instructions providing dns resolution for services in remote clusters will allow configure dns providing a dns resolution for services in remote clusters will allow existing 
applications to function unmodified as applications typically expect to resolve services by their dns names and access the resulting ip istio itself does not use the dns for routing requests between services services local to a cluster share a common dns suffix e g svc cluster local kubernetes dns provides dns resolution for these services to provide a similar setup for services from remote clusters we name to provide a similar setup for services from remote clusters we will name services from remote clusters in the format global istio also ships with a coredns server that will provide dns resolution for these services in order to utilize this dns kubernetes dns needs to be configured to point to coredns as the dns server for the global dns domain create one of the following configmaps or update an existing one for clusters that use kube dns server for the global dns domain create the following configmap or update an existing one kubectl apply f eof data eof for clusters that use coredns adding services from other clusters kubectl apply f eof apiversion kind configmap each service in the remote cluster that needs to be accessed from a given cluster requires a serviceentry configuration the host used in the service entry should be of the form global where name and namespace correspond to the remote service s name and namespace respectively in order to provide dns resolution for services under the global domain you need to assign these services an ip address we suggest assigning an ip address from the subnet these ips are non routable outside of a pod application traffic for these ips will be captured by the sidecar and routed to the appropriate remote service each service in the global dns domain must have a unique ip within the cluster for example the diagram above depicts two services foo in and bar in in order to access bar from add the following service entry to apiversion networking istio io kind serviceentry metadata name coredns namespace kube system data corefile errors health kubernetes cluster local in addr arpa arpa pods insecure upstream fallthrough in addr arpa arpa prometheus proxy etc resolv conf cache loop reload loadbalance global errors cache proxy kubectl get svc n istio system istiocoredns o jsonpath spec clusterip eof name bar spec hosts must be of form name namespace global bar global treat remote cluster services as part of the service mesh as all clusters in the service mesh share the same root of trust location mesh internal ports name number protocol http name number protocol tcp resolution dns addresses the ip address to which bar global will resolve to must be unique for each remote service within a given cluster this address need not be routable traffic for this ip will be captured by the sidecar and routed appropriately endpoints this is the routable address of the ingress gateway in that sits in front of bar service traffic from the sidecar will be routed to this address address ports do not change this port value do not change this port value if you wish to route all egress traffic from via a dedicated egress gateway use the following service entry for bar apiversion networking istio io kind serviceentry metadata name bar spec hosts must be of form name namespace global bar global location mesh internal ports name number protocol http name number protocol tcp resolution dns addresses endpoints address network external ports do not change this port value do not change this port value address istio egressgateway istio system svc cluster local ports configure 
application services verify the setup by trying to access bar global or bar from any pod on both dns names should resolve to the address used in the service entry configuration every service in a given cluster that needs to be accessed from a different remote cluster requires a serviceentry configuration in the remote cluster the host used in the service entry should be of the form global where name and namespace correspond to the service s name and namespace respectively visit our docs examples multicluster gateways example for detailed configuration instructions the configurations above will result in all traffic in for bar global on any port to be routed to the endpoint over an mtls connection uninstalling the gateway for port is a special sni aware envoy that has been preconfigured and installed as part of the istio installation step described in the prerequisite section traffic entering port will be load balanced among pods of the appropriate internal service of the target cluster in this case bar uninstall istio by running the following commands on every cluster do not create a gateway configuration for port kubectl delete f home istio yaml kubectl delete ns istio system version aware routing to remote services if the remote service being added has multiple versions add one or more labels to the service entry endpoint and follow the steps outlined in the docs tasks traffic management request routing section to create appropriate virtual services and destination rules for example apiversion networking istio io kind serviceentry metadata name bar spec hosts must be of form name namespace global bar global location mesh internal ports name number protocol http name number protocol tcp resolution dns addresses the ip address to which bar global will resolve to must be unique for each service endpoints address labels version beta some thing foo bar ports do not change this port value do not change this port value use destination rules to create subsets for bar service with appropriate label selectors the set of steps to follow are identical to those used for a local service summary using istio gateways a common root ca and service entries you can configure a single istio service mesh across multiple kubernetes clusters once configured this way traffic can be transparently routed to remote clusters using istio gateways a common root ca and service entries you configured a single istio service mesh across multiple kubernetes clusters although the above procedure involved a certain amount of manual work the entire process could be automated by creating service entries for each service in the system with a unique ip allocated from the subnet once configured this way traffic can be transparently routed to remote clusters without any application involvement although this approach requires a certain amount of manual configuration for remote service access the service entry creation process could be automated
binary_label: 0
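The summary of the diff above notes that the manual steps "could be automated by creating service entries for each service in the system, with a unique IP allocated from the 127.255.0.0/16 subnet". A minimal sketch of that automation; the service list and gateway address are placeholders, and the single http port is a simplification of the two-port example in the diff.

```python
import ipaddress
import yaml  # PyYAML

def service_entry(name: str, namespace: str, address: str, gateway_ip: str) -> dict:
    """Build one ServiceEntry for a remote service <name>.<namespace>.global."""
    return {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "ServiceEntry",
        "metadata": {"name": f"{name}-{namespace}"},
        "spec": {
            "hosts": [f"{name}.{namespace}.global"],
            "location": "MESH_INTERNAL",
            "ports": [{"name": "http1", "number": 8080, "protocol": "http"}],
            "resolution": "DNS",
            "addresses": [address],  # non-routable; captured by the sidecar
            "endpoints": [{"address": gateway_ip,
                           "ports": {"http1": 15443}}],  # do not change 15443
        },
    }

# Allocate sequential unique IPs from 127.255.0.0/16 for each remote service.
pool = ipaddress.ip_network("127.255.0.0/16").hosts()
next(pool)  # skip 127.255.0.1, matching the doc's first example of 127.255.0.2
for name, ns in [("bar", "ns2"), ("baz", "ns3")]:  # placeholder service list
    print(yaml.dump(service_entry(name, ns, str(next(pool)), "<gateway-ip>")))
```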
Unnamed: 0: 105,276 | id: 9,049,734,336 | type: IssuesEvent | created_at: 2019-02-12 06:08:36
repo: rancher/rancher | repo_url: https://api.github.com/repos/rancher/rancher | action: closed
title: Use cluster elasticsearch pod as project logging target
labels: area/logging kind/bug priority/2 status/resolved status/to-test team/cn version/2.0
body:
**Rancher versions:**
2.0.2
**Infrastructure Stack versions:**
Google Cloud Kubernetes 1.10 cluster
**Steps to Reproduce:**
We tried to install version 6.2.4 of Elasticsearch. It works and is ready to accept connections, but we cannot use this Elasticsearch endpoint, within our cluster project, as the target for Rancher 2 project logging.
The service is up and running:
![image](https://user-images.githubusercontent.com/7421739/41708383-a2c22584-7531-11e8-9e3f-50e4dcd9e3ec.png)
But Rancher can't reach it:
![image](https://user-images.githubusercontent.com/7421739/41708513-09e2b186-7532-11e8-81ff-a1ba03cf0f7c.png)
![image](https://user-images.githubusercontent.com/7421739/41708527-1873e742-7532-11e8-8a19-4d6f26b7587a.png)
![image](https://user-images.githubusercontent.com/7421739/41708541-23c8ecb6-7532-11e8-8a3f-74dee3359001.png)
Even pointing at the cluster IP directly doesn't work:
![image](https://user-images.githubusercontent.com/7421739/41708878-2b6333c4-7533-11e8-9ed5-62f39b2c55f2.png)
Exposing the service through the Google interface and defining a clusterIP mapping on port 9200 didn't help.
![image](https://user-images.githubusercontent.com/7421739/41757538-7ad32fbe-75e6-11e8-8167-0e27cd01c84d.png)
![image](https://user-images.githubusercontent.com/7421739/41757557-8757f244-75e6-11e8-9940-530a49005b88.png)
Please note I can easily ping the service from another service using both the clusterIP and the svc.cluster.local address.
index: 1.0
text_combine: title + " - " + body (verbatim duplicate of the two fields above)
label: non_process
text:
use cluster elasticsearch pod as project logging target rancher versions infrastructure stack versions google cloud kubernetes cluster steps to reproduce we tried to install version of elasticsearch it works and is ready to accept connexions but we cannot use this elasticsearch endpoint within our cluster project as target for rancher project logging target service is up and running but rancher can t reach it even pointing the direct cluster ip doesn t work exposing the service through google interface and defining a clusterip mapping on port didn t help please note i can easily ping the service from another service using both clusterip svc cluster local address
binary_label: 0
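For symptoms like the Rancher report above, a quick probe from another pod helps distinguish "Elasticsearch is unreachable" from "the logging target is misconfigured". A sketch using requests; both hosts below are placeholders for the reporter's actual service DNS name and cluster IP.

```python
import requests

# Placeholder hosts: the service's svc.cluster.local name and its cluster IP.
for host in ("elasticsearch.my-project.svc.cluster.local", "10.0.0.42"):
    url = f"http://{host}:9200"
    try:
        r = requests.get(url, timeout=5)
        # The Elasticsearch root endpoint returns cluster metadata as JSON.
        print(url, r.status_code, r.json().get("cluster_name"))
    except requests.RequestException as exc:
        print(url, "unreachable:", exc)
```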
Unnamed: 0: 22,678 | id: 31,925,974,651 | type: IssuesEvent | created_at: 2023-09-19 01:49:28
repo: prusa3d/Prusa-Firmware | repo_url: https://api.github.com/repos/prusa3d/Prusa-Firmware | action: closed
title: MK2.5S 3.6.0 & some 3.7.0RC1 firmware Issues
labels: testing MK2.5 FW 3.7.0 processing stale-issue
body:
Hello Firmware team,
I hope this message is placed in the proper place for you guys to see.
Yesterday I made the following post on the Prusa Community FB Group, and since GitHub is the place for firmware I decided to raise it as an issue.
OK, it seems I am out of ideas and need the mind-hive's brainstorming:
For our MK2.5 we had bought an MMU2 which was only slightly tested and is currently not running, which means we qualified for an S upgrade that arrived some time last week. This printer runs on a full bear frame that was running flawlessly with a filament sensor, and when the 3.5.1 from Stephen Armstrong was uploaded it was even better.
Since we received the S upgrade, I decided yesterday to install it and, in the process, clean the Z and X bearings and check all plugs and wires on the Rambo.
Since it was all taken apart, I decided to run the wizard, which constantly failed on EX-Heater/thermistor. -> Stopped it and ran a preheat... guess what, it runs fine! -> Checked the thermistor, tried a few more times: fail, fail, fail. Changed the thermistor, failed once more. -> Contacted support and was given advice (that made no sense) to factory reset and burn the firmware again; I did that and guess what? Same issue! I decided to ditch the wizard and go with XYZ calibration. WHAT THE... the whole thing has been working with no hiccups for a long, long time, and each month it receives its love and care with occasional changes of nozzle, thermistors, or whatever might be needed. On the MK52 bed calibration, point 1/4 fails to be found 3 times; on the 4th try it fails to find the second point, and on the 5th it finds all of them. (YES, THE PROBE IS OK, since after this the Z height was tuned to -0.390.) -> Tested a couple more preheats, and on PET (right before it had done ABS) I got BED PREHEAT FAILED, and on the second attempt, with 40+ C on the bed, a BED MINITEMP ERROR. -> I flashed the 3.6.0 MK2.5S firmware once more and ran everything again (no wizard, just XYZ); it runs OK, I got the preheat error once, and at the moment it is printing.
Any ideas?
Since yesterday I have the following results:
With more than 15 hours of prints on this S-upgraded printer since the "rebuild", it still fails to pass the hotend test in the wizard (while the hotend measures and functions properly), still gives the BED MINITEMP error on the strangest occasions, and has also given me BED PREHEAT FAILED twice more.
All in all, this has nothing to do with my rebuild but rather with the firmware itself. I found that even though Slic3rPE reports firmware upload success, from the advanced tab I can see an AVRdude error at the beginning of the update. I can provide any additional information required, but please don't ask me to go to online support, since no matter how much those people want to help, plenty of them don't have more than preset responses to post questions.
index: 1.0
text_combine: title + " - " + body (verbatim duplicate of the two fields above)
|
process
|
some firmware issues hello firmware team i hope this message is placed on the proper place for you guys to see yesterday i made the following post on the prusa community fb group and since github is the place for firmware i decided to raise it to issue ok seems i am out of ideas and need the mindhives brainstorming on our we had bought an which was only slightly tested and currently not running which means that we qualified for an s upgrade that was received some time last week this printer runs on a full bear frame that was running flawlessly with a filament sensor and when the from stephen armstrong was uploaded it was even better since we received the s upgrade decided yesterday to install it and in the process clean z and x bearing and check all plug and wires on the rambo since it was all taken apart i decide to run the wizard that constantly failed on ex heater thermistor stop it and run a preheat guess what runs fine check thermistor try a few times again fail fail fail change thermistor fail once more contact support i am given advice that made no sense to factory reset and burn firmware again i do that and guess same issue decide to ditch wizard and go with xyz what the the whole thing its been working with no hiccups for a long long time and each month it receives its love and care with occasional changes on nozzle thermistors or what might be needed on the bed calibration fails to be found times on the try fails to find second and on the finds all yes probe is ok since after this z height was tuned to test a couple of more preheats and on pet right before it had done abs i get bed preheat failed and on second attempt i get with c on the bed a bed minitemp error i do the for the firmware once more run all once more no wizard but xyz runs ok got once the preheat error and at the moment its printing any ideas since yesterday i have the following results with more than of prints on this s upgrade printer since the rebuild still fails to pass hotend test on wizard while hotend measure and functions properly still gives bed minitemp error on strangest occasions and also has given me twice more on preheat bed preheat failed all in all this has nothing to do with my rebuild but rather the firmware itself i found that even though gives firmware upload success from the advanced tab i can see an avrdude error at the beginning of the update i can provide any additional information required but please dont ask me to go on online support since no matter how much those people want to help plenty of them dont have more than preset responses to post questions
| 1
|
433,594
| 12,507,453,622
|
IssuesEvent
|
2020-06-02 14:09:03
|
rucio/rucio
|
https://api.github.com/repos/rucio/rucio
|
opened
|
Logging in protocols
|
Core & Internals Monitoring & Logging Priority: High
|
Motivation
----------
This issue tracks all issues related to logging in the protocols. It is a follow-up on #3472. A minimal sketch of the intended pattern follows the checklist below.
- [ ] xrootd
- [ ] srm
- [ ] storm
- [ ] gsiftp
- [ ] webdav
- [ ] gfal implementation
- [ ] others
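For illustration, a minimal sketch of the pattern this implies, with an injectable logger per protocol instance (class and method names here are illustrative, not Rucio's actual code):
```python
import logging

class ProtocolBase:
    """Hypothetical protocol base class with an injectable logger."""

    def __init__(self, rse_settings, logger=None):
        self.rse_settings = rse_settings
        # fall back to a module-level logger when none is passed in
        self.logger = logger or logging.getLogger(__name__)

    def get(self, pfn, dest):
        self.logger.debug("fetching %s -> %s", pfn, dest)
        raise NotImplementedError
```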
|
1.0
|
Logging in protocols - Motivation
----------
This issue tracks all issues related to logging in the protocols. It is a follow-up on #3472.
- [ ] xrootd
- [ ] srm
- [ ] storm
- [ ] gsiftp
- [ ] webdav
- [ ] gfal implementation
- [ ] others
|
non_process
|
logging in protocols motivation this issue tracks all issues related to logging in the protocols it is a follow up on xrootd srm storm gsiftp webdav gfal implementation others
| 0
|
20,239
| 26,845,669,903
|
IssuesEvent
|
2023-02-03 06:40:52
|
sebastianbergmann/phpunit-documentation-english
|
https://api.github.com/repos/sebastianbergmann/phpunit-documentation-english
|
closed
|
Document the development process for the PHPUnit documentation
|
process
|
- [x] Figure out a strategy (branching and tagging) for maintaining multiple versions of the documentation (ReadTheDocs does not render branches other than `master` and requires tags for alternative versions)
- [x] Document how to contribute to an existing language edition of the documentation
- [ ] Document how to build the documentation on a local machine
- [x] Document how to start a new language edition of the documentation
|
1.0
|
Document the development process for the PHPUnit documentation - - [x] Figure out a strategy (branching and tagging) for maintaining multiple versions of the documentation (ReadTheDocs does not render branches other than `master` and requires tags for alternative versions)
- [x] Document how to contribute to an existing language edition of the documentation
- [ ] Document how to build the documentation on a local machine
- [x] Document how to start a new language edition of the documentation
|
process
|
document the development process for the phpunit documentation figure out a strategy branching and tagging for maintaining multiple versions of the documentation readthedocs does not render branches other than master and requires tags for alternative versions document how to contribute to an existing language edition of the documentation document how to build the documentation on a local machine document how to start a new language edition of the documentation
| 1
|
5,202
| 7,976,141,289
|
IssuesEvent
|
2018-07-17 11:42:15
|
wcmc-its/ReCiter
|
https://api.github.com/repos/wcmc-its/ReCiter
|
closed
|
For each of an author’s aliases, modify initial query based on lexical rules
|
On Hold Phase: Information Retrieval Phase: Preprocessing
|
### Background and approach
We have identified six special circumstances that may pose challenges when using authors’ names to identify publications. These are 1) the author has a nickname; 2) the author’s last name has changed, most often due to marriage; 3) the author’s name has a suffix; 4) the author’s last name contains a space or hyphen; 5) the author’s first name consists of a single letter; and 6) the author uses their middle name as though it were their first name.
When institutional records indicate that the author has a nickname (<b>circumstance 1</b>), or the last name has changed due to marriage (<b>circumstance 2</b>), these must be supplied to ReCiter as alternate name representations (aliases).
The aliases are to be retrieved by querying for rows that match the target author’s cwid in the rc_identity_directory table.
Author names affected by circumstances 3 through 6 have one or more lexical signals that can be recognized by ReCiter. Once one of these circumstances is recognized for any of an author’s aliases, ReCiter must then use a set of rules that correspond to the circumstance to generate name variants from the alias. Each of these name variants for all of the author’s aliases is combined using an OR query to maximize the recall of the PubMed query.
For the purposes of this project, code to support circumstances 1 and 2 is not required; for authors affected by either of these circumstances, multiple name representations will be supplied to ReCiter from institutional data as aliases. All aliases are then passed through the code below to deal with circumstances 3 through 6 should they exist.
### Circumstance 3. The author’s name has a suffix.
When an author’s name includes a suffix of JR, II, III, or IV, generate a lexical variant with the suffix and another without it. For example, if the author’s name is Mark Jones II, these two variants should be added to the query:
- Jones Mark[au] OR
- Jones Mark II[au]
### Circumstance 4. The author’s last name contains a space or hyphen
If the last name contains a space, modify the PubMed query so that a name’s possible variants are included. Add the following variations to the query:
- FirstTokenFromLastName, FirstInitial[au] OR
- FirstTokenFromLastName-LastTokenFromLastName, FirstInitial[au]
- FirstTokenFromLastName LastTokenFromLastName, FirstInitial[au]
If the last name contains a hyphen, add the following variations to the query:
- FirstTokenFromLastName, FirstInitial[au] OR
- FirstTokenFromLastName-LastTokenFromLastName, FirstInitial[au]
Examples
- CWID ses9022, first name: Selin; last name: Somersan Karakaya. The PubMed query would be: "Somersan Karakaya S[au] OR Somersan-Karakaya S[au] OR Somersan S[au]"
- CWID csjohnso, first name: Carol; last name: Storey-Johnson. The PubMed query would be "Storey-Johnson C[au] OR Storey C[au]"
### Circumstance 5. The author’s first name consists of a single letter
If the first name consists of a single letter, add the following variations to the query:
- LastName FirstInitial[au] OR
- LastName MiddleInitial[au] OR
- LastName MiddleInitialFirstInitial[au] OR
- LastName FirstInitialMiddleInitial[au]
Examples:
- W. Clay Bracken
- M. Flint Beal
After the rules for circumstances 3 through 6 have been applied when appropriate for each of a target author’s aliases, all variations are combined into a single query. A name in which several of these conditions occur may generate a number of possible permutations. This is okay.
### Circumstance 6. The author’s first name contains a space or hyphen
When the author's first name contains a space, this may indicate that the author uses his or her middle name as a first name.
This query can be used to identify such cases, along with cases in which the first name field only includes a first initial:
```sql
select * from rc_identity
where (char_length(first_name) < 2 OR
       (char_length(first_name) = 2 and first_name like '%.') OR
       (mid(first_name, 2, 1) = ' ')
      )
```
An example is J David Warren, for whom "J David" appears in the first name field, while the middle name field is null.
Add the following variations to the query:
- LastName, FirstLetterOfFirstTokenFromFirstName[au] OR
- LastName, FirstLetterOfSecondTokenFromFirstName[au]
In the case of J David Warren, the updated query would be:
Warren J[AU] OR Warren D[AU]
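To make the space/hyphen rules of circumstance 4 concrete, here is a minimal sketch in Python (function and variable names are illustrative, not ReCiter's actual implementation):
```python
def last_name_variants(last_name: str, first_initial: str) -> list:
    """Generate PubMed author-query variants for circumstance 4."""
    variants = []
    if " " in last_name:
        first_token, last_token = last_name.split(" ", 1)
        variants.append(f"{first_token} {first_initial}[au]")
        variants.append(f"{first_token}-{last_token} {first_initial}[au]")
        variants.append(f"{first_token} {last_token} {first_initial}[au]")
    elif "-" in last_name:
        first_token, last_token = last_name.split("-", 1)
        variants.append(f"{first_token} {first_initial}[au]")
        variants.append(f"{first_token}-{last_token} {first_initial}[au]")
    else:
        variants.append(f"{last_name} {first_initial}[au]")
    return variants

# combined into a single OR query, as described above:
print(" OR ".join(last_name_variants("Somersan Karakaya", "S")))
# -> Somersan S[au] OR Somersan-Karakaya S[au] OR Somersan Karakaya S[au]
```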
|
1.0
|
For each of an author’s aliases, modify initial query based on lexical rules - ### Background and approach
We have identified six special circumstances that may pose challenges when using authors’ names to identify publications. These are 1) the author has a nickname; 2) the author’s last name has changed, most often due to marriage; 3) the author’s name has a suffix; 4) the author’s last name contains a space or hyphen; 5) the author’s first name consists of a single letter; and 6) the author uses their middle name as though it were their first name.
When institutional records indicate that the author has a nickname (<b>circumstance 1</b>), or the last name has changed due to marriage (<b>circumstance 2</b>), these must be supplied to ReCiter as alternate name representations (aliases).
The aliases are to be retrieved by querying for rows that match the target author’s cwid in the rc_identity_directory table.
Author names affected by circumstances 3 through 6 have one or more lexical signals that can be recognized by ReCiter. Once one of these circumstances is recognized for any of an author’s aliases, ReCiter must then use a set of rules that correspond to the circumstance to generate name variants from the alias. Each of these name variants for all of the author’s aliases is combined using an OR query to maximize the recall of the PubMed query.
For the purposes of this project, code to support circumstances 1 and 2 is not required; for authors affected by either of these circumstances, multiple name representations will be supplied to ReCiter from institutional data as aliases. All aliases are then passed through the code below to deal with circumstances 3 through 6 should they exist.
### Circumstance 3. The author’s name has a suffix.
When an author’s name includes a suffix of JR, II, III, or IV, generate a lexical variant with the suffix and another without it. For example, if the author’s name is Mark Jones II, these two variants should be added to the query:
- Jones Mark[au] OR
- Jones Mark II[au]
### Circumstance 4. The author’s last name contains a space or hyphen
If the last name contains a space, modify the PubMed query so that a name’s possible variants are included. Add the following variations to the query:
- FirstTokenFromLastName, FirstInitial[au] OR
- FirstTokenFromLastName-LastTokenFromLastName, FirstInitial[au]
- FirstTokenFromLastName LastTokenFromLastName, FirstInitial[au]
If the last name contains a hyphen, add the following variations to the query:
- FirstTokenFromLastName, FirstInitial[au] OR
- FirstTokenFromLastName-LastTokenFromLastName, FirstInitial[au]
Examples
- CWID ses9022, first name: Selin; last name: Somersan Karakaya. The PubMed query would be: "Somersan Karakaya S[au] OR Somersan-Karakaya S[au] OR Somersan S[au]"
- CWID csjohnso, first name: Carol; last name: Storey-Johnson. The PubMed query would be "Storey-Johnson C[au] OR Storey C[au]"
### Circumstance 5. The author’s first name consists of a single letter
If the first name consists of a single letter, add the following variations to the query:
- LastName FirstInitial[au] OR
- LastName MiddleInitial[au] OR
- LastName MiddleInitialFirstInitial[au] OR
- LastName FirstInitialMiddleInitial[au]
Examples:
- W. Clay Bracken
- M. Flint Beal
After the rules for circumstances 3 through 6 have been applied when appropriate for each of a target author’s aliases, all variations are combined into a single query. A name in which several of these conditions occur may generate a number of possible permutations. This is okay.
### Circumstance 6. The author’s first name contains a space or hyphen
When the author's first name contains a space, this may indicate that the author uses his or her middle name as a first name.
This query can be used to identify such cases, along with cases in which the first name field only includes a first initial:
```sql
select * from rc_identity
where (char_length(first_name) < 2 OR
       (char_length(first_name) = 2 and first_name like '%.') OR
       (mid(first_name, 2, 1) = ' ')
      )
```
An example is J David Warren, for whom "J David" appears in the first name field, while the middle name field is null.
Add the following variations to the query:
- LastName, FirstLetterOfFirstTokenFromFirstName[au] OR
- LastName, FirstLetterOfSecondTokenFromFirstName[au]
In the case of J David Warren, the updated query would be:
Warren J[AU] OR Warren D[AU]
|
process
|
for each of an author’s aliases modify initial query based on lexical rules background and approach we have identified six special circumstances that may pose challenges when using authors’ names to identify publications these are the author has a nickname the author’s last name has changed most often due to marriage the author’s name has a suffix the author’s last name contains a space or hyphen the author’s first name consists of a single letter and the author uses their middle name as though it were their first name when institutional records indicate that the author has a nickname circumstance or the last name has changed due to marriage circumstance these must be supplied to reciter as alternate name representations aliases the aliases are to be retrieved by querying for rows that match the target author’s cwid in the rc identity directory table author names affected by circumstances and have one or more lexical signals that can be recognized by reciter once one of these circumstances is recognized for any of an author’s aliases reciter must then use a set of rules that correspond to the circumstance to generate name variants from the alias each of these name variants for all of the author’s aliases is combined using an or query to maximize the recall of the pubmed query for the purposes of this project code to support circumstances and is not required for authors affected by either of these circumstances multiple name representations will be supplied to reciter from institutional data as aliases all aliases are then passed through the code below to deal with circumstances and should they exist circumstance the author’s name has a suffix when an author’s name includes a suffix of jr ii iii or iv generate a lexical variant with the suffix and another without it for example if the author’s name is mark jones ii these two variants should be added to the query jones mark or jones mark ii circumstance the author’s last name contains a space or hyphen if the last name contains a space modify the pubmed query so that a name’s possible variants are included add the following variations to the query firsttokenfromlastname firstinitial or firsttokenfromlastname lasttokenfromlastname firstinitial firsttokenfromlastname lasttokenfromlastname firstinitial if the last name contains a hyphen add the following variations to the query firsttokenfromlastname firstinitial or firsttokenfromlastname lasttokenfromlastname firstinitial examples cwid first name selin last name somersan karakaya the pubmed query would be somersan karakaya s or somersan karakaya s or somersan s cwid csjohnso first name carol last name storey johnson the pubmed query would be storey johnson c or storey c circumstance the author’s first name consists of a single letter if the first name consists of a single letter add the following variations to the query lastname firstinitial or lastname middleinitial or lastname middleinitialfirstinitial or lastname firstinitialmiddleinitial examples w clay bracken m flint beal after the rules for circumstances and have been applied when appropriate for each of a target author’s aliases all variations are combined into a single query a name in which several of these conditions occur may generate a number of possible permutations this is okay circumstance the author’s first name contains a space or hyphen when the author s first name contains a space this may indicate that the author uses his or her middle name as a first name this query can be used to identify such cases along with cases in which the first name field only includes a 
first initial select from rc identity where char length first name or char length first name and first name like or mid first name an example is j david warren for whom j david appears in the first name field while the middle name field is null add the following variations to the query lastname firstletteroffirsttokenfromfirstname or lastname firstletterofsecondtokenfromfirstname in the case of j david warren the updated query would be warren j or warren d
| 1
|
7,266
| 10,421,312,183
|
IssuesEvent
|
2019-09-16 05:35:58
|
trynmaps/metrics-mvp
|
https://api.github.com/repos/trynmaps/metrics-mvp
|
closed
|
Refactor the codebase to pull out all SF-specific strings in constants files
|
Backend Process
|
Part of the OT For Any City project; [see the spec](https://docs.google.com/document/d/1VoGpaReLsnudHk2LOR-BaGZAAb1p_jI1Le8cugpSj1U/edit#).
Basically, make it so that we could easily swap this to any other city with little more than some string changes.
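A minimal sketch of what this could look like (module and constant names below are hypothetical, not taken from the actual codebase):
```python
# constants.py -- every city-specific string lives in one place.
AGENCY_ID = "sf-muni"                 # hypothetical value
AGENCY_NAME = "San Francisco Muni"    # hypothetical value
TIMEZONE = "America/Los_Angeles"      # hypothetical value
DEFAULT_ROUTE_PREFIX = "SF"           # hypothetical value

# Any other module imports these instead of hard-coding strings,
# so swapping cities means editing only this file:
#   from constants import AGENCY_ID, TIMEZONE
```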
|
1.0
|
Refactor the codebase to pull out all SF-specific strings in constants files - Part of the OT For Any City project; [see the spec](https://docs.google.com/document/d/1VoGpaReLsnudHk2LOR-BaGZAAb1p_jI1Le8cugpSj1U/edit#).
Basically, make it so that we could easily swap this to any other city with little more than some string changes.
|
process
|
refactor the codebase to pull out all sf specific strings in constants files part of the ot for any city project basically make it so that we could easily swap this to any other city with little more than some string changes
| 1
|
69,071
| 30,028,671,595
|
IssuesEvent
|
2023-06-27 08:09:03
|
gradido/gradido
|
https://api.github.com/repos/gradido/gradido
|
opened
|
🔧 [Refactor][frontend] - ContributionMessagesFormular.vue Doctor
|
refactor service: admin frontend
|
<!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates -->
## 🔧 Refactor ticket
❗ @vue/runtime-dom missing for Vue 2
Vue 2 does not have JSX types definitions, so template type checking will not work correctly. You can resolve this problem by installing @vue/runtime-dom and adding it to your project's devDependencies.
vue: /home/tulex/Entwicklung/Projekte/gradido/admin/node_modules/vue/package.json
|
1.0
|
🔧 [Refactor][frontend] - ContributionMessagesFormular.vue Doctor - <!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates -->
## 🔧 Refactor ticket
❗ @vue/runtime-dom missing for Vue 2
Vue 2 does not have JSX types definitions, so template type checking will not work correctly. You can resolve this problem by installing @vue/runtime-dom and adding it to your project's devDependencies.
vue: /home/tulex/Entwicklung/Projekte/gradido/admin/node_modules/vue/package.json
|
non_process
|
🔧 contributionmessagesformular vue doctor 🔧 refactor ticket ❗ vue runtime dom missing for vue vue does not have jsx types definitions so template type checking will not work correctly you can resolve this problem by installing vue runtime dom and adding it to your project s devdependencies vue home tulex entwicklung projekte gradido admin node modules vue package json
| 0
|
329,328
| 28,215,661,627
|
IssuesEvent
|
2023-04-05 08:46:34
|
AY2223S2-CS2103T-F12-1/tp
|
https://api.github.com/repos/AY2223S2-CS2103T-F12-1/tp
|
reopened
|
[PE-D][Tester A] Default AddressBook unable to load upon opening jar file
|
wontfix Tester A
|
When executing `java -jar docedex.jar`, the following warning pops up:

As such, a blank doctor and patient list is provided instead as shown instead of the photo given in User Guide under Quick Start section:

-------------
Labels: `type.FunctionalityBug` `severity.Low`
original: FireRadical22/ped#1
|
1.0
|
[PE-D][Tester A] Default AddressBook unable to load upon opening jar file - When executing `java -jar docedex.jar`, the following warning pops up:

As such, a blank doctor and patient list is provided instead as shown instead of the photo given in User Guide under Quick Start section:

-------------
Labels: `type.FunctionalityBug` `severity.Low`
original: FireRadical22/ped#1
|
non_process
|
default addressbook unable to load upon opening jar file when executing java jar docedex jar the following warning pops up as such a blank doctor and patient list is provided instead as shown instead of the photo given in user guide under quick start section labels type functionalitybug severity low original ped
| 0
|
1,556
| 4,159,631,598
|
IssuesEvent
|
2016-06-17 09:50:22
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
Remove Rollback from GUI
|
process_wontfix type_feature
|
Remove rollback vDisk and vMachine from the GUI. Users should use the clone functionality to accomplish the same result.
|
1.0
|
Remove Rollback from GUI - Remove rollback vDisk and vMachine from the GUI. Users should use the clone functionality to accomplish the same result.
|
process
|
remove rollback from gui remove rollback vdisk and vmachine from the gui users should use the clone functionality to accomplish the same result
| 1
|
297,783
| 22,392,685,150
|
IssuesEvent
|
2022-06-17 09:15:48
|
samuel-watson/glmmr
|
https://api.github.com/repos/samuel-watson/glmmr
|
opened
|
Update help files
|
bug documentation
|
@ypan1988 Could you add any errors or required changes to the help files to this thread?
|
1.0
|
Update help files - @ypan1988 Could you add any errors or required changes to the help files to this thread?
|
non_process
|
update help files could you add any errors or required changes to the help files to this thread
| 0
|
191,805
| 6,843,073,450
|
IssuesEvent
|
2017-11-12 11:13:41
|
Z3r0byte/Magis
|
https://api.github.com/repos/Z3r0byte/Magis
|
closed
|
Auto silent permission
|
bug high priority
|
The background service sometimes crashes because it doesn't have permission to change the do-not-disturb state.
|
1.0
|
Auto silent permission - The background service sometimes crashes because it doesn't have permission to change the do-not-disturb state.
|
non_process
|
auto silent permission the background service sometimes crashes because it doesnt have permission to change the do not disturb state
| 0
|
549,879
| 16,101,471,837
|
IssuesEvent
|
2021-04-27 09:50:08
|
dotnet/templating
|
https://api.github.com/repos/dotnet/templating
|
closed
|
As a templating owner, I want to move to System.CommandLine, to modernize and simplify the codebase
|
Cost:M Priority:1 User Story parent:1240372 triaged
|
The _user story_ collects all the work required to move to use of System.CommandLine.
Issues to be done after migration to System.CommandLine:
- [ ] https://github.com/dotnet/templating/issues/2348
- [ ] [feature] auto-completion for template parameters
- [ ] implement the sub commands
- [ ] https://github.com/dotnet/templating/issues/2191
The issues that should be fixed after moving to new parser:
- [ ] https://github.com/dotnet/templating/issues/1544
|
1.0
|
As a templating owner, I want to move to System.CommandLine, to modernize and simplify the codebase - The _user story_ collects all the work required to move to use of System.CommandLine.
Issues to be done after migration to System.CommandLine:
- [ ] https://github.com/dotnet/templating/issues/2348
- [ ] [feature] auto-completion for template parameters
- [ ] implement the sub commands
- [ ] https://github.com/dotnet/templating/issues/2191
The issues that should be fixed after moving to new parser:
- [ ] https://github.com/dotnet/templating/issues/1544
|
non_process
|
as a templating owner i want to move to system commandline to modernize and simplify the codebase the user story collects all the work required to move to use of system commandline issues to be done after migration to system commandline auto completion for template parameters implement the sub commands the issues that should be fixed after moving to new parser
| 0
|
10,435
| 13,220,066,632
|
IssuesEvent
|
2020-08-17 11:42:42
|
km4ack/pi-build
|
https://api.github.com/repos/km4ack/pi-build
|
closed
|
Conky started incorrectly
|
in process
|
I'm creating a separate issue for this, which was mentioned in #60
Conky is currently being started from user pi's crontab. That assumes autologin and all sorts of other badness (like locking things down to the pi user) and race conditions.
The proper way to start applications that should fire up upon login is using ~/.config/autostart.
My suggestion: take this snippet, put it in ~/.local/share/applications/ and then symlink that file into ~/.config/autostart. This means it will both show up in the menu (under hamradio) and autostart.
conky.desktop:
```
[Desktop Entry]
Name=Conky
Comment=Conky
GenericName=Conky Screen Background Monitor
Exec=conky
Type=Application
Encoding=UTF-8
Terminal=false
Categories=HamRadio
Keywords=Radio
```
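For illustration, one possible way to script the suggested install step (a sketch, assuming the paths above; not part of pi-build itself):
```python
from pathlib import Path

DESKTOP_ENTRY = """[Desktop Entry]
Name=Conky
Comment=Conky
GenericName=Conky Screen Background Monitor
Exec=conky
Type=Application
Encoding=UTF-8
Terminal=false
Categories=HamRadio
Keywords=Radio
"""

apps_dir = Path.home() / ".local/share/applications"
autostart_dir = Path.home() / ".config/autostart"
apps_dir.mkdir(parents=True, exist_ok=True)
autostart_dir.mkdir(parents=True, exist_ok=True)

desktop_file = apps_dir / "conky.desktop"
desktop_file.write_text(DESKTOP_ENTRY)

link = autostart_dir / "conky.desktop"
if not link.exists():
    link.symlink_to(desktop_file)  # shows in the menu *and* autostarts
```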
|
1.0
|
Conky started incorrectly - I'm creating a separate issue for this, which was mentioned in #60
Conky is currently being started from user pi's crontab. That assumes autologin and all sorts of other badness (like locking things down to the pi user) and race conditions.
The proper way to start applications that should fire up upon login is using ~/.config/autostart.
My suggestion: take this snippet, put it in ~/.local/share/applications/ and then symlink that file into ~/.config/autostart. This means it will both show up in the menu (under hamradio) and autostart.
conky.desktop:
```
[Desktop Entry]
Name=Conky
Comment=Conky
GenericName=Conky Screen Background Monitor
Exec=conky
Type=Application
Encoding=UTF-8
Terminal=false
Categories=HamRadio
Keywords=Radio
```
|
process
|
conky started incorrectly i m creating a separate issue for this which was mentioned in conky is currently being started from user pi s crontab that assumes autologin and all sorts of other badness like locking things down to the pi user and race conditions the proper way to start applications that should fire up upon login is using config autostart my suggestion take this snippet put it in local share applications and then symlink that file into config autostart this means it will both show up in the menu under hamradio as well as autostart conky desktop name conky comment conky genericname conky screen background monitor exec conky type application encoding utf terminal false categories hamradio keywords radio
| 1
|
9,817
| 12,826,393,193
|
IssuesEvent
|
2020-07-06 16:28:36
|
obinnaokechukwu/internship-2020
|
https://api.github.com/repos/obinnaokechukwu/internship-2020
|
opened
|
Test OBS multicasting with multiple streaming services
|
process
|
Test OBS multicasting with multiple streaming services: YouTube, Facebook Live, etc.
|
1.0
|
Test OBS multicasting with multiple streaming services - Test OBS multicasting with multiple streaming services: YouTube, Facebook Live, etc.
|
process
|
test obs multicasting with multiple streaming services test obs multicasting with multiple streaming services youtube facebook live etc
| 1
|
627,597
| 19,909,527,348
|
IssuesEvent
|
2022-01-25 15:53:10
|
enviroCar/enviroCar-app
|
https://api.github.com/repos/enviroCar/enviroCar-app
|
closed
|
BUG while deleting the car, if no options selected
|
bug 3 - Done Priority - 3 - Low
|
There is a bug when you do not select an option in the middle of deleting the car: the radio button unchecks anyway.
Possible solution:
uncheck the radio button only when the car is actually deleted
https://user-images.githubusercontent.com/75211982/143727223-19126b72-7575-490c-bbf4-47e45aff52a1.mp4
|
1.0
|
BUG while deleting the car, if no options selected - There is a bug when you do not select an option in the middle of deleting the car: the radio button unchecks anyway.
Possible solution:
uncheck the radio button only when the car is actually deleted
https://user-images.githubusercontent.com/75211982/143727223-19126b72-7575-490c-bbf4-47e45aff52a1.mp4
|
non_process
|
bug while deleting the car if no options selected there is a bug when you do not select an option in the middle of deleting the car and the radio button unchecks possible solution uncheck the radio button only when the car is getting deleted
| 0
|
7,824
| 10,997,065,873
|
IssuesEvent
|
2019-12-03 08:21:02
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Missing docs for copy_fields and truncate_fields processors
|
:Processors libbeat needs_docs
|
There is no documentation [listed](https://www.elastic.co/guide/en/beats/filebeat/7.2/defining-processors.html#processors) for the [`copy_fields`](https://github.com/elastic/beats/pull/11303) and [`truncate_fields`](https://github.com/elastic/beats/pull/11297) processors.
|
1.0
|
Missing docs for copy_fields and truncate_fields processors - There is no documentation [listed](https://www.elastic.co/guide/en/beats/filebeat/7.2/defining-processors.html#processors) for the [`copy_fields`](https://github.com/elastic/beats/pull/11303) and [`truncate_fields`](https://github.com/elastic/beats/pull/11297) processors.
|
process
|
missing docs for copy fields and truncate fields processors there is no documentation for the and processors
| 1
|
2,691
| 5,540,276,797
|
IssuesEvent
|
2017-03-22 09:41:47
|
g8os/ays_template_g8os
|
https://api.github.com/repos/g8os/ays_template_g8os
|
closed
|
Actor template for G8OS Store
|
process_wontfix type_feature
|
- [x] node.g8os
- [x] container.g8os
- [x] ardb-server
- [x] ardb-cluster
- [x] disk.g8os
- [ ] volume.blockstor
|
1.0
|
Actor template for G8OS Store - - [x] node.g8os
- [x] container.g8os
- [x] ardb-server
- [x] ardb-cluster
- [x] disk.g8os
- [ ] volume.blockstor
|
process
|
actor template for store node container ardb server ardb cluster disk volume blockstor
| 1
|
9,987
| 13,036,168,734
|
IssuesEvent
|
2020-07-28 11:45:15
|
solid/process
|
https://api.github.com/repos/solid/process
|
closed
|
Record roadmap and agree on process
|
process proposal
|
There are various kanban boards, diagrams and links explaining what various parties are working on. For example, [here](https://www.w3.org/DesignIssues/diagrams/solid/2018-soild-work.svg), [here](https://github.com/solid/specification/projects/1), and [here](https://github.com/solid/process/projects/1).
There is a repository that was set up some time ago called [roadmap](https://github.com/solid/roadmap); however, it was not clear what the scope of that repository was or how to co-create and review what should go there.
In this issue I propose having a conversation about how to set up the practicalities of the roadmap.
Here is a first draft of the roadmap process proposal.
# Roadmap
The Solid roadmap explains the upcoming plans and needs of the various parties involved in implementing Solid.
The [Solid specification project board](https://github.com/solid/specification/projects/1) is the best place to keep up to date on how Solid development is evolving and what is in the pipeline. The Solid roadmap is not the place to read about Solid development, however; the roadmap is a place to read about the implementation of Solid.
The [roadmap repository readme](https://github.com/solid/roadmap/blob/master/README.md) gives an overview of the categories of tasks on the roadmap.
Each .md file on the [roadmap repository](https://github.com/solid/roadmap) defines the scope of a particular task on the roadmap and a description of the profile of a person needed to complete that task.
Anyone can make a suggestion to the roadmap repository. Roadmap proposals are to be reviewed by Tim Berners-Lee as Director of Solid to be officially incorporated into the roadmap.
The status of the tasks is tracked on the [roadmap kanban project board](https://github.com/solid/roadmap/projects/1) which the Solid Manager is responsible for keeping up to date.
|
1.0
|
Record roadmap and agree on process - There are various kanban boards, diagrams and links explaining what various parties are working on. For example, [here](https://www.w3.org/DesignIssues/diagrams/solid/2018-soild-work.svg), [here](https://github.com/solid/specification/projects/1), and [here](https://github.com/solid/process/projects/1).
There is a repository that was set up some time ago called [roadmap](https://github.com/solid/roadmap); however, it was not clear what the scope of that repository was or how to co-create and review what should go there.
In this issue I propose having a conversation about how to set up the practicalities of the roadmap.
Here is a first draft of the roadmap process proposal.
# Roadmap
The Solid roadmap explains the upcoming plans and needs of the various parties involved in implementing Solid.
The [Solid specification project board](https://github.com/solid/specification/projects/1) is the best place to keep up to date on how Solid development is evolving and what is in the pipeline. The Solid roadmap is not the place to read about Solid development, however; the roadmap is a place to read about the implementation of Solid.
The [roadmap repository readme](https://github.com/solid/roadmap/blob/master/README.md) gives an overview of the categories of tasks on the roadmap.
Each .md file on the [roadmap repository](https://github.com/solid/roadmap) defines the scope of a particular task on the roadmap and a description of the profile of a person needed to complete that task.
Anyone can make a suggestion to the roadmap repository. Roadmap proposals are to be reviewed by Tim Berners-Lee as Director of Solid to be officially incorporated into the roadmap.
The status of the tasks is tracked on the [roadmap kanban project board](https://github.com/solid/roadmap/projects/1) which the Solid Manager is responsible for keeping up to date.
|
process
|
record roadmap and agree on process there are various kanban boards diagrams and links explaining what various parties are working on for example and there is a repository that was set up some time ago called roadmap however it was not clear what the scope of that repository was or how to co create and review what should go there in this issue i propose having a conversation about how to set up the practicalities of the roadmap here is a first draft of the roadmap process proposal roadmap the solid roadmap explains the upcoming plans and needs of the various parties involved in implementing solid the is the best place to keep up to date on how solid development is evolving and what is in the pipeline the solid roadmap is not the place to read about solid development however the roadmap is a place to read about the implementation of solid the gives an overview of the categories of tasks on the roadmap each md file on the defines the scope of a particular task on the roadmap and a description of the profile of a person needed to complete that task anyone can make a suggestion to the roadmap repository roadmap proposals are to be reviewed by tim berners lee as director of solid to be officially incorporated into the roadmap the status of the tasks is tracked on the which the solid manager is responsible for keeping up to date
| 1
|
13,453
| 15,930,548,755
|
IssuesEvent
|
2021-04-14 01:09:37
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
Several libraries have lingering rubocop issues after the Ruby 3 migration
|
type: process
|
It seems to affect handwritten parts (samples, acceptance tests) of generated libraries. I know google-cloud-vision is affected, and there was at least one other.
|
1.0
|
Several libraries have lingering rubocop issues after the Ruby 3 migration - It seems to affect handwritten parts (samples, acceptance tests) of generated libraries. I know google-cloud-vision is affected, and there was at least one other.
|
process
|
several libraries have lingering rubocop issues after the ruby migration it seems to affect handwritten parts samples acceptance tests of generated libraries i know google cloud vision is affected and there was at least one other
| 1
|
15,098
| 18,835,501,353
|
IssuesEvent
|
2021-11-11 00:06:14
|
crim-ca/weaver
|
https://api.github.com/repos/crim-ca/weaver
|
closed
|
[Feature] Cache WPS-1/2 endpoint
|
triage/enhancement process/wps1 process/wps2 triage/feature
|
In order to avoid uselessly recreating the PyWPS Service, which requires re-fetching and re-creating every underlying DB process on each WPS-1/2 request, we could use caching to quickly return the matching response. A rough sketch of the idea follows the list below.
Things that this caching needs to consider (maybe others as well... to test):
- request parameters to differentiate between various WPS calls (GetCapabilities, DescribeProcess, etc.)
- request response format according to Accept/Accept-Language headers
- any process update/addition (we could reset the cache flag when other routes submit this kind of modifications)
- requests for status must always be re-executed, since we want a changing response over time although it is the same request (maybe in this case we can filter the fetched DB processes to only this ID, to at least reduce the number of items to generate/parse)
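A rough sketch of the kind of cache keying this implies (plain in-process dict with a TTL; names are illustrative, not Weaver's actual API):
```python
import time

_CACHE = {}
TTL_SECONDS = 60

def cache_key(params, headers):
    """Differentiate WPS calls (GetCapabilities, DescribeProcess, ...)
    and the response format negotiated via the Accept headers."""
    return (
        tuple(sorted((k.lower(), str(v)) for k, v in params.items())),
        headers.get("Accept", ""),
        headers.get("Accept-Language", ""),
    )

def cached_wps_response(params, headers, build_response):
    if params.get("request", "").lower() == "getstatus":
        return build_response()  # status must always be recomputed
    key = cache_key(params, headers)
    hit = _CACHE.get(key)
    if hit is not None and time.time() - hit[0] < TTL_SECONDS:
        return hit[1]
    response = build_response()
    _CACHE[key] = (time.time(), response)
    return response

def invalidate_cache():
    """Call whenever another route adds or updates a process."""
    _CACHE.clear()
```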
|
2.0
|
[Feature] Cache WPS-1/2 endpoint - In order to avoid uselessly recreating the PyWPS Service, which requires re-fetching and re-creating every underlying DB process on each WPS-1/2 request, we could use caching to quickly return the matching response.
Things that this caching needs to consider (maybe others as well... to test):
- request parameters to differentiate between various WPS calls (GetCapabilities, DescribeProcess, etc.)
- request response format according to Accept/Accept-Language headers
- any process update/addition (we could reset the cache flag when other routes submit this kind of modifications)
- requests for status must always be re-executed, since we want a changing response over time although it is the same request (maybe in this case we can filter the fetched DB processes to only this ID, to at least reduce the number of items to generate/parse)
|
process
|
cache wps endpoint in order to avoid uselessly recreating the pywps service which requires re fetch and re creation of every underlying db processes on each wps request we could use caching to quickly return the matching response things that this caching needs to consider maybe others also to test request parameters to differentiate between various wps calls getcapabilities describeprocess etc request response format according to accept accept language headers any process update addition we could reset the cache flag when other routes submit this kind of modifications request status must always redo since we want changing response over time although it is the same request maybe in this case we can filter fetched db processes to only this id to at least reduce the number of items to generate parse
| 1
|
11,551
| 14,434,433,215
|
IssuesEvent
|
2020-12-07 07:04:19
|
slok/kahoy
|
https://api.github.com/repos/slok/kahoy
|
closed
|
Allow waiting with a custom command
|
manager processor resources
|
At this moment we are able to sleep a specific amount of time between group applies. It would be nice to have a user-specified way of waiting, for example using raw kubectl or a script; a sketch of the idea follows.
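A hedged sketch of the idea (Kahoy itself is written in Go; this Python pseudocode just shows the shape, and the function and parameter names are made up):
```python
import subprocess
import time
from typing import Optional

def wait_between_groups(wait_cmd: Optional[str], sleep_seconds: float) -> None:
    """Run a user-supplied wait command between group applies, else sleep."""
    if wait_cmd:
        # e.g. "kubectl rollout status deployment/my-app" or a custom script
        subprocess.run(wait_cmd, shell=True, check=True)
    else:
        time.sleep(sleep_seconds)
```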
|
1.0
|
Allow waiting with a custom command - At this moment we are able to sleep a specific amount of time between group applies. It would be nice to have a user-specified way of waiting, for example using raw kubectl or a script.
|
process
|
allow waiting with a custom command at this moment we are able to sleep a specific amount of time between group applies would be nice to have a user specified way of waiting for example using raw kubectl or a script
| 1
|
73,085
| 9,640,048,019
|
IssuesEvent
|
2019-05-16 14:44:39
|
microsoftgraph/microsoft-graph-docs
|
https://api.github.com/repos/microsoftgraph/microsoft-graph-docs
|
closed
|
immutable-id
|
area: outlook request: documentation
|
Good to know: The message ID is not something you want to rely on. I found this announcement after searching for a while:
https://developer.microsoft.com/en-us/outlook/blogs/announcing-immutable-id-for-outlook-resources-in-microsoft-graph/
Summary: add this header to your request: `Prefer: IdType="ImmutableId"` to get (more stable) IDs that will only change in case of archiving or an actual move to another mailbox, etc.
If you have a lot of (historic) data, you can even convert the existing IDs to the immutable structure with this endpoint: POST https://graph.microsoft.com/beta/me/translateExchangeIds
Read more in the source / announcement:
https://developer.microsoft.com/en-us/outlook/blogs/announcing-immutable-id-for-outlook-resources-in-microsoft-graph/
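For example, with a plain HTTP call (a sketch based on the announcement above; the endpoint path and token are placeholders):
```python
import requests

headers = {
    "Authorization": "Bearer <access-token>",  # placeholder
    "Prefer": 'IdType="ImmutableId"',
}
resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages",
    headers=headers,
)
resp.raise_for_status()
for message in resp.json()["value"]:
    # immutable IDs only change on archiving or a move to another mailbox
    print(message["id"])
```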
@Microsoft, I was really missing this important detail on this page.
Kind regards
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 1dd382e4-95b6-3ccf-6a15-02d0cc7ed841
* Version Independent ID: 276cebbe-10bc-100e-f9b1-51b2ba30c64b
* Content: [message resource type - Microsoft Graph v1.0](https://docs.microsoft.com/en-us/graph/api/resources/message?view=graph-rest-1.0#feedback)
* Content Source: [api-reference/v1.0/resources/message.md](https://github.com/microsoftgraph/microsoft-graph-docs/blob/master/api-reference/v1.0/resources/message.md)
* Product: **outlook**
* Technology: **microsoft-graph**
* GitHub Login: @angelgolfer-ms
* Microsoft Alias: **MSGraphDocsVteam**
|
1.0
|
immutable-id - Good to know: The message ID is not something you want to rely on. I found this announcement after searching for a while:
https://developer.microsoft.com/en-us/outlook/blogs/announcing-immutable-id-for-outlook-resources-in-microsoft-graph/
Summary: add this header to your request: `Prefer: IdType="ImmutableId"` to get (more stable) IDs that will only change in case of archiving or an actual move to another mailbox, etc.
If you have a lot of (historic) data, you can even convert the existing IDs to the immutable structure with this endpoint: POST https://graph.microsoft.com/beta/me/translateExchangeIds
Read more in the source / announcement:
https://developer.microsoft.com/en-us/outlook/blogs/announcing-immutable-id-for-outlook-resources-in-microsoft-graph/
@Microsoft, I was really missing this important detail on this page.
Kind regards
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 1dd382e4-95b6-3ccf-6a15-02d0cc7ed841
* Version Independent ID: 276cebbe-10bc-100e-f9b1-51b2ba30c64b
* Content: [message resource type - Microsoft Graph v1.0](https://docs.microsoft.com/en-us/graph/api/resources/message?view=graph-rest-1.0#feedback)
* Content Source: [api-reference/v1.0/resources/message.md](https://github.com/microsoftgraph/microsoft-graph-docs/blob/master/api-reference/v1.0/resources/message.md)
* Product: **outlook**
* Technology: **microsoft-graph**
* GitHub Login: @angelgolfer-ms
* Microsoft Alias: **MSGraphDocsVteam**
|
non_process
|
immutable id good to know the message id is not something you want to rely on i found this announcement after searching for a while summary add this in the headers of your request prefer idtype immutableid to get more stable id s that will only change in case of archiving or an actual move to another mailbox etc if you got a lot of historic data you can even convert the existing id s to the immutable structure with this endpoint post read more in the source announcement microsoft i was really missing this important detail on this page kind regards document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product outlook technology microsoft graph github login angelgolfer ms microsoft alias msgraphdocsvteam
| 0
|
14,795
| 18,071,657,954
|
IssuesEvent
|
2021-09-21 04:06:26
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
opened
|
Firestore: Fill in missing samples for custom objects and array updates
|
type: process api: firestore samples
|
On https://firebase.google.com/docs/firestore/manage-data/add-data if you switch the sample tabs to Ruby, you'll see that samples are missing for custom objects and array updates. I'm not sure whether we actually have these features implemented for Ruby. Can you determine the status, and if these are implemented, create the needed samples, and if not, let me know and we'll find out whether we should implement them.
|
1.0
|
Firestore: Fill in missing samples for custom objects and array updates - On https://firebase.google.com/docs/firestore/manage-data/add-data if you switch the sample tabs to Ruby, you'll see that samples are missing for custom objects and array updates. I'm not sure whether we actually have these features implemented for Ruby. Can you determine the status, and if these are implemented, create the needed samples, and if not, let me know and we'll find out whether we should implement them.
|
process
|
firestore fill in missing samples for custom objects and array updates on if you switch the sample tabs to ruby you ll see that samples are missing for custom objects and array updates i m not sure whether we actually have these features implemented for ruby can you determine the status and if these are implemented create the needed samples and if not let me know and we ll find out whether we should implement them
| 1
|
12,277
| 14,790,006,775
|
IssuesEvent
|
2021-01-12 11:23:05
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Mobile] Email template > App name should be displayed
|
Bug P2 Process: Tested dev
|
AR : App id is displayed in the email template
ER : App name should be displayed in the email template for the following
1. Account lock
2. Forgot password

|
1.0
|
[Mobile] Email template > App name should be displayed - AR : App id is displayed in the email template
ER : App name should be displayed in the email template for the following
1. Account lock
2. Forgot password

|
process
|
email template app name should be displayed ar app id is displayed in the email template er app name should be displayed in the email template for the following account lock forgot password
| 1
|
9,155
| 12,216,579,538
|
IssuesEvent
|
2020-05-01 15:26:00
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Refactor fields algorithm - allow configuring a source layer
|
Feature Request Processing
|
When configuring a refactor fields algorithm, there is an expression builder available to help configuring the expression.
Within the expression builder, there are currently no fields available. In a perfect world, the available fields would be determined from the connected input. Introducing such a system might be nice, but it is quite complex, and even then it might not be 100% reliable in every situation.
I would therefore propose a second layer combo box next to the current possibility to set the "target layer" where a "source layer" can be specified which is used in the expression builders. This helps not only to have the names of the fields available but also to get available values like code lists etc. from the source layer.
|
1.0
|
Refactor fields algorithm - allow configuring a source layer - When configuring a refactor fields algorithm, there is an expression builder available to help configuring the expression.
Within the expression builder, there are currently no fields available. In a perfect world, the available fields would be determined from the connected input. Introducing such a system might be nice, but it is quite complex, and even then it might not be 100% reliable in every situation.
I would therefore propose a second layer combo box next to the current possibility to set the "target layer" where a "source layer" can be specified which is used in the expression builders. This helps not only to have the names of the fields available but also to get available values like code lists etc. from the source layer.
|
process
|
refactor fields algorithm allow configuring a source layer when configuring a refactor fields algorithm there is an expression builder available to help configuring the expression within the expression builder there are currently no fields available in a perfect world the available fields would be determined from the connected input introducing such a system might be nice but is quite complex and even then might not be able to be completely reliable in every situation i would therefore propose a second layer combo box next to the current possibility to set the target layer where a source layer can be specified which is used in the expression builders this helps not only to have the names of the fields available but also to get available values like code lists etc from the source layer
| 1
|
52,090
| 7,747,535,131
|
IssuesEvent
|
2018-05-30 03:56:33
|
ESCOMP/cesm
|
https://api.github.com/repos/ESCOMP/cesm
|
closed
|
Get rid of ">" in quick start instructions?
|
documentation
|
I feel that the ">" at the start of shell commands in the quick start instructions is more troublesome than helpful, because it means you can't copy and paste commands. At least one other person (I think it was Cecile Hannay, but I may be wrong) felt the same way. This is a low-priority cleanup fix, but would be nice to do if others agree.
@bertinia are you attached to these ">" characters?
|
1.0
|
Get rid of ">" in quick start instructions? - I feel that the ">" at the start of shell commands in the quick start instructions is more troublesome than helpful, because it means you can't copy and paste commands. At least one other person (I think it was Cecile Hannay, but I may be wrong) felt the same way. This is a low-priority cleanup fix, but would be nice to do if others agree.
@bertinia are you attached to these ">" characters?
|
non_process
|
get rid of in quick start instructions i feel that the at the start of shell commands in the quick start instructions is more troublesome than helpful because it means you can t copy and paste commands at least one other person i think it was cecile hannay but i may be wrong felt the same way this is a low priority cleanup fix but would be nice to do if others agree bertinia are you attached to these characters
| 0
|
8,354
| 11,503,118,744
|
IssuesEvent
|
2020-02-12 20:26:57
|
ESIPFed/sweet
|
https://api.github.com/repos/ESIPFed/sweet
|
opened
|
[BUG] Issues with periglacial semantics
|
Cryosphere alignment bug enhancement phenomena (macroscale) process (microscale)
|
A bug report assumes that you are not trying to introduce a new feature, but instead change an existing one... hopefully correcting some latent error in the process.
## Description
SWEET's phenCryo TTL file includes :
```
### http://sweetontology.net/phenCryo/Glaciation
sophcr:Glaciation rdf:type owl:Class ;
rdfs:subClassOf sophcr:Accumulation ;
owl:disjointWith sophcr:GlacierRetreat ;
rdfs:label "glaciation"@en .
### http://sweetontology.net/phenCryo/Periglacial
sophcr:Periglacial rdf:type owl:Class ;
rdfs:subClassOf sophcr:GlacialProcess ;
rdfs:label "periglacial"@en .
### http://sweetontology.net/phenCryo/Periglaciation
sophcr:Periglaciation rdf:type owl:Class ;
rdfs:subClassOf sophcr:Glaciation ;
rdfs:label "periglaciation"@en .
```
Our work in the Semantic Harmonization Cluster and with the GCW / Polar Semantics team notes that periglacial processes or phenomena - by definition - cannot be subclasses of glacial counterparts.
## What would you like to see changed?
Periglacial phenomena should be siblings of their glacial counterparts; however, note that periglacial processes need not be accumulation processes. This suggests that periglacial should be a subclass of "physical process". This change makes the class more general, which may cause some issues for users who have used it in a precise manner.
Note that this is part of the ENVO/SWEET alignment.
Relevant ENVO classes include:
http://purl.obolibrary.org/obo/ENVO_01001659
http://purl.obolibrary.org/obo/ENVO_01001641
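A minimal rdflib sketch of the proposed re-parenting (the file path and the PhysicalProcess IRI below are guesses, not confirmed against SWEET's source layout):
```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

SOPHCR = Namespace("http://sweetontology.net/phenCryo/")
SOPH = Namespace("http://sweetontology.net/phen/")  # assumed home of PhysicalProcess

g = Graph()
g.parse("src/phenCryo.ttl", format="turtle")  # hypothetical path

# drop the glacial parent and re-parent under a generic physical process
g.remove((SOPHCR.Periglacial, RDFS.subClassOf, SOPHCR.GlacialProcess))
g.add((SOPHCR.Periglacial, RDFS.subClassOf, SOPH.PhysicalProcess))

g.serialize(destination="src/phenCryo.ttl", format="turtle")
```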
## Attribution
If you would like a nano-attribution, please use the [following guidance](https://github.com/ESIPFed/sweet/wiki/SWEET-Class-Annotations-Proposal).
The guidance doesn't seem to be up. The ESIP Semantics Harmonization Cluster can be credited.
|
1.0
|
[BUG] Issues with periglacial semantics - A bug report assumes that you are not trying to introduce a new feature, but instead change an existing one... hopefully correcting some latent error in the process.
## Description
SWEET's phenCryo TTL file includes :
```
### http://sweetontology.net/phenCryo/Glaciation
sophcr:Glaciation rdf:type owl:Class ;
rdfs:subClassOf sophcr:Accumulation ;
owl:disjointWith sophcr:GlacierRetreat ;
rdfs:label "glaciation"@en .
### http://sweetontology.net/phenCryo/Periglacial
sophcr:Periglacial rdf:type owl:Class ;
rdfs:subClassOf sophcr:GlacialProcess ;
rdfs:label "periglacial"@en .
### http://sweetontology.net/phenCryo/Periglaciation
sophcr:Periglaciation rdf:type owl:Class ;
rdfs:subClassOf sophcr:Glaciation ;
rdfs:label "periglaciation"@en .
```
Our work in the Semantic Harmonization Cluster and with the GCW / Polar Semantics team notes that periglacial processes or phenomena - by definition - cannot be subclasses of glacial counterparts.
## What would you like to see changed?
Periglacial phenomena should be siblings of their glacial counterparts; however, note that periglacial processes need not be accumulation processes. This suggests that periglacial should be a subclass of "physical process". This change makes the class more general, which may cause some issues for users who have used it in a precise manner.
Note that this is part of the ENVO/SWEET alignment.
Relevant ENVO classes include:
http://purl.obolibrary.org/obo/ENVO_01001659
http://purl.obolibrary.org/obo/ENVO_01001641
## Attribution
If you would like a nano-attribution, please use the [following guidance](https://github.com/ESIPFed/sweet/wiki/SWEET-Class-Annotations-Proposal).
The guidance doesn't seem to be up. The ESIP Semantics Harmonization Cluster can be credited.
|
process
|
issues with periglacial semantics a bug report assumes that you are not trying to introduced a new feature but instead change an existing one hopefully correcting some latent error in the process description sweet s phencryo ttl file includes sophcr glaciation rdf type owl class rdfs subclassof sophcr accumulation owl disjointwith sophcr glacierretreat rdfs label glaciation en sophcr periglacial rdf type owl class rdfs subclassof sophcr glacialprocess rdfs label periglacial en sophcr periglaciation rdf type owl class rdfs subclassof sophcr glaciation rdfs label periglaciation en our work in the semantic harmonization cluster and with the gcw polar semantics team notes that periglacial processes or phenomena by definition cannot be subclasses of glacial counterparts what would you like to see changed periglacial phenomena should be siblings of their glacial counterparts however note that periglacial processes need not be accumulation processes this suggests that periglacial should be a subclass of physical process this change makes the class more general which may cause some issues for users who have used it in a precise manner note that this is part of the envo sweet alignment relevant envo classes include attribution if you would like a nano attribution please use the the guidance doesn t seem to be up the esip semantics harmonization cluster can be credited
| 1
|
6,786
| 9,920,706,895
|
IssuesEvent
|
2019-06-30 12:00:55
|
Project-Cartographer/H2PC_TagExtraction
|
https://api.github.com/repos/Project-Cartographer/H2PC_TagExtraction
|
closed
|
Undo processing on bitmap
|
bug post-processing
|
Copied from #10
* Appears to be missing bitmap data a majority of the time among other things. Seems to be an issue with the extractor not loading the resource maps. Fixed by 0600328
* Beyond this, even when the bitmap data is internal, it may still not be written, for reasons not yet determined.
* Something is preventing them from being compressed in H2Tool. This means that extracted bitmaps can't be reused until the cause is found and fixed. Worked around by Project-Cartographer/H2Codez@f4adbbc
|
1.0
|
Undo processing on bitmap - Copied from #10
* Appears to be missing bitmap data a majority of the time among other things. Seems to be an issue with the extractor not loading the resource maps. Fixed by 0600328
* Beyond this, even when the bitmap data is internal, it may still not be written, for reasons not yet determined.
* Something is preventing them from being compressed in H2Tool. This means that extracted bitmaps can't be reused until the cause is found and fixed. Worked around by Project-Cartographer/H2Codez@f4adbbc
|
process
|
undo processing on bitmap copied from appears to be missing bitmap data a majority of the time among other things seems to be an issue with the extractor not loading the resource maps fixed by has issues beyond this where even if the bitmap data is internal then it may not be written for whatever reason something is preventing them from being compressed in this means that extracted bitmaps can t be reused until the cause is found and fixed worked around by project cartographer
| 1
|
12,489
| 14,952,666,109
|
IssuesEvent
|
2021-01-26 15:47:50
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Explore Opps Loss error on first open
|
Process Heating bug
|
For a newly created/imported mod, the phast object doesn't have "exploreOppsShow____" yet, which causes an error.
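A minimal sketch of the likely fix, shown here in Python even though the real code is TypeScript; the flag names are hypothetical stand-ins for the `exploreOppsShow____` set:
```python
def ensure_explore_opps_defaults(phast: dict) -> dict:
    # Hypothetical flag names; the real properties follow "exploreOppsShow____".
    for key in ("exploreOppsShowEnergy", "exploreOppsShowLoss"):
        phast.setdefault(key, False)
    return phast

mod = {}  # a newly created/imported mod, before any flags exist
ensure_explore_opps_defaults(mod)
print(mod["exploreOppsShowLoss"])  # False instead of a missing-property error
```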
|
1.0
|
Explore Opps Loss error on first open - For a newly created/imported mod, the phast object doesn't have "exploreOppsShow____" yet, which causes an error.
|
process
|
explore opps loss error on first open for a newly created imported mod the phast object doesn t have exploreoppsshow yet causing error
| 1
|
10,556
| 13,341,116,808
|
IssuesEvent
|
2020-08-28 15:25:46
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
stageDependencies example is incorrect
|
Pri2 devops-cicd-process/tech devops/prod doc-bug
|
I wasted a bunch of time because this stageDependencies example is incorrect:
```yaml
- stage: B
condition: and(succeeded(), ne(dependencies.A.A1.outputs['printvar.skipsubsequent'], 'true'))
dependsOn: A
jobs:
- job: B1
steps:
- script: echo hello from Stage B
- job: B2
condition: ne(stageDependencies.A.A1.outputs['stagevar.stageexists'], 'true')
steps:
- script: echo hello from Stage B2
```
Fortunately this github comment provides excellent info about what works and doesn't:
https://github.com/microsoft/azure-pipelines-tasks/issues/4743#issuecomment-643469964
Bottom line - it should be:
```yaml
- stage: B
condition: and(succeeded(), ne(dependencies.A.outputs['A1.printvar.skipsubsequent'], 'true'))
dependsOn: A
jobs:
- job: B1
steps:
- script: echo hello from Stage B
- job: B2
condition: ne(stageDependencies.A.outputs['A1.stagevar.stageexists'], 'true')
steps:
- script: echo hello from Stage B2
```
Note that I was able to make the `stageDependencies.StageName.JobName.outputs(StepName.VariableName)` syntax work for a job condition, but not for a stage condition.
For a stage condition, the syntax is: `stageDependencies.StageName.outputs(JobName.StepName.VariableName)`. Fortunately the new docs help explain this structure (thanks!), but this leftover example is still incorrect.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
stageDependencies example is incorrect - I wasted a bunch of time because this stageDependencies example is incorrect:
```yaml
- stage: B
condition: and(succeeded(), ne(dependencies.A.A1.outputs['printvar.skipsubsequent'], 'true'))
dependsOn: A
jobs:
- job: B1
steps:
- script: echo hello from Stage B
- job: B2
condition: ne(stageDependencies.A.A1.outputs['stagevar.stageexists'], 'true')
steps:
- script: echo hello from Stage B2
```
Fortunately this github comment provides excellent info about what works and doesn't:
https://github.com/microsoft/azure-pipelines-tasks/issues/4743#issuecomment-643469964
Bottom line - it should be:
```yaml
- stage: B
condition: and(succeeded(), ne(dependencies.A.outputs['A1.printvar.skipsubsequent'], 'true'))
dependsOn: A
jobs:
- job: B1
steps:
- script: echo hello from Stage B
- job: B2
condition: ne(stageDependencies.A.outputs['A1.stagevar.stageexists'], 'true')
steps:
- script: echo hello from Stage B2
```
Note that I was able to make the `stageDependencies.StageName.JobName.outputs(StepName.VariableName)` syntax work for a job condition, but not for a stage condition.
For a stage condition, the syntax is: `stageDependencies.StageName.outputs(JobName.StepName.VariableName)`. Fortunately the new docs help explain this structure (thanks!), but this leftover example is still incorrect.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
stagedependencies example is incorrect i wasted a bunch of time because this stagedependencies example is incorrect yaml stage b condition and succeeded ne dependencies a outputs true dependson a jobs job steps script echo hello from stage b job condition ne stagedependencies a outputs true steps script echo hello from stage fortunately this github comment provides excellent info about what works and doesn t bottom line it should be yaml stage b condition and succeeded ne dependencies a outputs true dependson a jobs job steps script echo hello from stage b job condition ne stagedependencies a outputs true steps script echo hello from stage note that i was able to make the stagedependencies stagename jobname outputs stepname variablename syntax work for a job condition but not for a stage condition for a stage condition but syntax is stagedependencies stagename outputs jobname stepname variablename fortunately the new docs help explain this structure thanks but this leftover example is still incorrect document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
256,735
| 8,128,496,914
|
IssuesEvent
|
2018-08-17 12:03:30
|
aowen87/BAR
|
https://api.github.com/repos/aowen87/BAR
|
closed
|
OS X dylibs have the wrong dylib names after installation, sometimes the wrong name
|
Bug Likelihood: 5 - Always Priority: Normal Severity: 3 - Major Irritation
|
When building VisIt on OS X against QT previously installed elsewhere,
the names inside many of the plugins end up incorrect, like:
@executable_path@executable_path/../lib/QtGui.framework/Versions/4/QtGui @executable_path@executable_path/../lib/QtOpenGL.framework/Versions/4/QtOpenGL @executable_path@executable_path/../lib/QtCore.framework/Versions/4/QtCore,
in /volatile/visit-r12128-ser/2.1.0/darwin-i386/plugins/plots/libVWellBorePlot.dylib.
Attached is a file that shows all of the corrections that have to be made
on the installed dylibs.
I suspect some overzealous script in VisIt is doing this, but I don't
know where.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 334
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: OS X dylibs have the wrong dylib names after installation, sometimes the wrong name
Assigned to: Brad Whitlock
Category:
Target version: 2.1
Author: John Cary
Start: 08/05/2010
Due date:
% Done: 0
Estimated time:
Created: 08/05/2010 02:12 pm
Updated: 08/31/2010 03:20 pm
Likelihood: 5 - Always
Severity: 3 - Major Irritation
Found in version: trunk
Impact:
Expected Use:
OS: OSX
Support Group: Any
Description:
When building VisIt on OS X against QT previously installed elsewhere,
the names inside many of the plugins end up incorrect, like:
@executable_path@executable_path/../lib/QtGui.framework/Versions/4/QtGui @executable_path@executable_path/../lib/QtOpenGL.framework/Versions/4/QtOpenGL @executable_path@executable_path/../lib/QtCore.framework/Versions/4/QtCore,
in /volatile/visit-r12128-ser/2.1.0/darwin-i386/plugins/plots/libVWellBorePlot.dylib.
Attached is a file that shows all of the corrections that have to be made
on the installed dylibs.
I suspect some overzealous script in VisIt is doing this, but I don't
know where.
Comments:
I changed the osxfixup script so its regular expressions only match paths at the start of a string. This allows it to replace /path/to/lib/libfoo.dylib with a relative path while not changing @executable_path/../lib/libfoo.dylib.
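A rough Python sketch of that anchoring fix (the absolute prefix below is a placeholder, not VisIt's actual install path):
```python
import re

def fix_dylib_name(name: str) -> str:
    # Anchored with ^, so only names *starting* with the absolute prefix are
    # rewritten; an already-relative "@executable_path/..." name is untouched.
    return re.sub(r"^/path/to/lib/", "@executable_path/../lib/", name)

print(fix_dylib_name("/path/to/lib/libfoo.dylib"))             # rewritten
print(fix_dylib_name("@executable_path/../lib/libfoo.dylib"))  # unchanged
```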
|
1.0
|
OS X dylibs have the wrong dylib names after installation, sometimes the wrong name - When building VisIt on OS X against QT previously installed elsewhere,
the names inside many of the plugins end up incorrect, like:
@executable_path@executable_path/../lib/QtGui.framework/Versions/4/QtGui @executable_path@executable_path/../lib/QtOpenGL.framework/Versions/4/QtOpenGL @executable_path@executable_path/../lib/QtCore.framework/Versions/4/QtCore,
in /volatile/visit-r12128-ser/2.1.0/darwin-i386/plugins/plots/libVWellBorePlot.dylib.
Attached is a file that shows all of the corrections that have to be made
on the installed dylibs.
I suspect some overzealous script in VisIt is doing this, but I don't
know where.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 334
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: OS X dylibs have the wrong dylib names after installation, sometimes the wrong name
Assigned to: Brad Whitlock
Category:
Target version: 2.1
Author: John Cary
Start: 08/05/2010
Due date:
% Done: 0
Estimated time:
Created: 08/05/2010 02:12 pm
Updated: 08/31/2010 03:20 pm
Likelihood: 5 - Always
Severity: 3 - Major Irritation
Found in version: trunk
Impact:
Expected Use:
OS: OSX
Support Group: Any
Description:
When building VisIt on OS X against QT previously installed elsewhere,
the names inside many of the plugins end up incorrect, like:
@executable_path@executable_path/../lib/QtGui.framework/Versions/4/QtGui @executable_path@executable_path/../lib/QtOpenGL.framework/Versions/4/QtOpenGL @executable_path@executable_path/../lib/QtCore.framework/Versions/4/QtCore,
in /volatile/visit-r12128-ser/2.1.0/darwin-i386/plugins/plots/libVWellBorePlot.dylib.
Attached is a file that shows all of the corrections that have to be made
on the installed dylibs.
I suspect some overzealous script in VisIt is doing this, but I don't
know where.
Comments:
I changed the osxfixup script so its regular expressions only match paths at the start of a string. This allows it to replace /path/to/lib/libfoo.dylib with a relative path while not changing @executable_path/../lib/libfoo.dylib.
|
non_process
|
os x dylibs have the wrong dylib names after installation sometimes the wrong name when building visit on os x against qt previously installed elsewhere the names inside many of the plugins end up incorrect like executable path executable path lib qtgui framework versions qtgui executable path executable path lib qtopengl framework versions qtopengl executable path executable path lib qtcore framework versions qtcore in volatile visit ser darwin plugins plots libvwellboreplot dylib attached is a file that shows all of the corrections that have to be made on the installed dylibs i suspect some overzealous script in visit is doing this but i don t know where redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority normal subject os x dylibs have the wrong dylib names after installation sometimes the wrong name assigned to brad whitlock category target version author john cary start due date done estimated time created pm updated pm likelihood always severity major irritation found in version trunk impact expected use os osx support group any description when building visit on os x against qt previously installed elsewhere the names inside many of the plugins end up incorrect like executable path executable path lib qtgui framework versions qtgui executable path executable path lib qtopengl framework versions qtopengl executable path executable path lib qtcore framework versions qtcore in volatile visit ser darwin plugins plots libvwellboreplot dylib attached is a file that shows all of the corrections that have to be made on the installed dylibs i suspect some overzealous script in visit is doing this but i don t know where comments i changed the osxfixup script so its regular expressions only match paths at the start of a string this allows it to replace path to lib libfoo dylib with a relative path while not changing executable path lib libfoo dylib
| 0
|
859
| 3,317,658,071
|
IssuesEvent
|
2015-11-06 22:53:00
|
pwittchen/ReactiveNetwork
|
https://api.github.com/repos/pwittchen/ReactiveNetwork
|
closed
|
Release v. 0.1.3
|
release process
|
**Note**: Consider resolving remaining open tasks before releasing new version.
**Initial release notes**:
- fixed bug with incorrect status after going back from background inside the sample app reported in issue #31
- fixed RxJava usage in sample app
- fixed RxJava usage in code snippets in `README.md`
- added static code analysis
- updated code formatting
- added a sample app in Kotlin
**Things to do**:
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
1.0
|
Release v. 0.1.3 - **Note**: Consider resolving remaining open tasks before releasing new version.
**Initial release notes**:
- fixed bug with incorrect status after going back from background inside the sample app reported in issue #31
- fixed RxJava usage in sample app
- fixed RxJava usage in code snippets in `README.md`
- added static code analysis
- updated code formatting
- added a sample app in Kotlin
**Things to do**:
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
process
|
release v note consider resolving remaining open tasks before releasing new version initial release notes fixed bug with incorrect status after going back from background inside the sample app reported in issue fixed rxjava usage in sample app fixed rxjava usage in code snippets in readme md added static code analysis updated code formatting added sample sample app in kotlin things to do bump library version upload archives to maven central close and release artifact on maven central update changelog md after maven sync bump library version in readme md create new github release
| 1
|
158,238
| 12,407,007,089
|
IssuesEvent
|
2020-05-21 20:14:18
|
PointCloudLibrary/pcl
|
https://api.github.com/repos/PointCloudLibrary/pcl
|
closed
|
kfpcs_ia unit test is failing spuriously on multiple platforms
|
module: registration module: test status: stale
|
I've seen it happening on the Ubuntu 16.04 and Windows x64 images.
```
2018-12-05T03:04:22.2409061Z test 86
2018-12-05T03:04:22.2409768Z Start 86: kfpcs_ia
2018-12-05T03:04:22.2411330Z
2018-12-05T03:04:22.2411626Z 86: Test command: D:\a\build\bin\test_kfpcs_ia.exe "D:/a/1/s/test/office1_keypoints.pcd" "D:/a/1/s/test/office2_keypoints.pcd"
2018-12-05T03:04:22.2411819Z 86: Test timeout computed to be: 10000000
2018-12-05T03:04:22.2579513Z 86: [==========] Running 1 test from 1 test case.
2018-12-05T03:04:22.2580064Z 86: [----------] Global test environment set-up.
2018-12-05T03:04:22.2580290Z 86: [----------] 1 test from PCL
2018-12-05T03:04:22.2580481Z 86: [ RUN ] PCL.KFPCSInitialAlignment
2018-12-05T03:05:15.7471516Z 86: d:\a\1\s\test\registration\test_kfpcs_ia.cpp(95): error: The difference between angle3d and 0.f is 3.0308370590209961, which exceeds max_angle3d, where
2018-12-05T03:05:15.7474876Z 86: angle3d evaluates to 3.0308370590209961,
2018-12-05T03:05:15.7475391Z 86: 0.f evaluates to 0, and
2018-12-05T03:05:15.7475576Z 86: max_angle3d evaluates to 0.1745000034570694.
2018-12-05T03:05:15.7475781Z 86: d:\a\1\s\test\registration\test_kfpcs_ia.cpp(96): error: The difference between translation3d and 0.f is 1.8633067607879639, which exceeds max_translation3d, where
2018-12-05T03:05:15.7475963Z 86: translation3d evaluates to 1.8633067607879639,
2018-12-05T03:05:15.7476159Z 86: 0.f evaluates to 0, and
2018-12-05T03:05:15.7476347Z 86: max_translation3d evaluates to 1.
2018-12-05T03:05:15.7494280Z 86: [ FAILED ] PCL.KFPCSInitialAlignment (53491 ms)
2018-12-05T03:05:15.7494419Z 86: [----------] 1 test from PCL (53491 ms total)
2018-12-05T03:05:15.7494492Z 86:
2018-12-05T03:05:15.7494549Z 86: [----------] Global test environment tear-down
2018-12-05T03:05:15.7494608Z 86: [==========] 1 test from 1 test case ran. (53491 ms total)
2018-12-05T03:05:15.7494663Z 86: [ PASSED ] 0 tests.
2018-12-05T03:05:15.7494745Z 86: [ FAILED ] 1 test, listed below:
2018-12-05T03:05:15.7494800Z 86: [ FAILED ] PCL.KFPCSInitialAlignment
2018-12-05T03:05:15.7494849Z 86:
2018-12-05T03:05:15.7494916Z 86: 1 FAILED TEST
2018-12-05T03:05:15.7524715Z 86/106 Test #86: kfpcs_ia ...............................***Failed 53.51 sec
```
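Restated as a small Python sketch (values copied from the log above), the failing checks require the estimated transform to sit within these tolerances of zero, and it does not:
```python
angle3d, max_angle3d = 3.0308370590209961, 0.1745000034570694
translation3d, max_translation3d = 1.8633067607879639, 1.0

print(abs(angle3d - 0.0) <= max_angle3d)              # False -> assertion fails
print(abs(translation3d - 0.0) <= max_translation3d)  # False -> assertion fails
```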
|
1.0
|
kfpcs_ia unit test is failing spuriously on multiple platforms - I've seen it happening on the Ubuntu 16.04 and Windows x64 images.
```
2018-12-05T03:04:22.2409061Z test 86
2018-12-05T03:04:22.2409768Z Start 86: kfpcs_ia
2018-12-05T03:04:22.2411330Z
2018-12-05T03:04:22.2411626Z 86: Test command: D:\a\build\bin\test_kfpcs_ia.exe "D:/a/1/s/test/office1_keypoints.pcd" "D:/a/1/s/test/office2_keypoints.pcd"
2018-12-05T03:04:22.2411819Z 86: Test timeout computed to be: 10000000
2018-12-05T03:04:22.2579513Z 86: [==========] Running 1 test from 1 test case.
2018-12-05T03:04:22.2580064Z 86: [----------] Global test environment set-up.
2018-12-05T03:04:22.2580290Z 86: [----------] 1 test from PCL
2018-12-05T03:04:22.2580481Z 86: [ RUN ] PCL.KFPCSInitialAlignment
2018-12-05T03:05:15.7471516Z 86: d:\a\1\s\test\registration\test_kfpcs_ia.cpp(95): error: The difference between angle3d and 0.f is 3.0308370590209961, which exceeds max_angle3d, where
2018-12-05T03:05:15.7474876Z 86: angle3d evaluates to 3.0308370590209961,
2018-12-05T03:05:15.7475391Z 86: 0.f evaluates to 0, and
2018-12-05T03:05:15.7475576Z 86: max_angle3d evaluates to 0.1745000034570694.
2018-12-05T03:05:15.7475781Z 86: d:\a\1\s\test\registration\test_kfpcs_ia.cpp(96): error: The difference between translation3d and 0.f is 1.8633067607879639, which exceeds max_translation3d, where
2018-12-05T03:05:15.7475963Z 86: translation3d evaluates to 1.8633067607879639,
2018-12-05T03:05:15.7476159Z 86: 0.f evaluates to 0, and
2018-12-05T03:05:15.7476347Z 86: max_translation3d evaluates to 1.
2018-12-05T03:05:15.7494280Z 86: [ FAILED ] PCL.KFPCSInitialAlignment (53491 ms)
2018-12-05T03:05:15.7494419Z 86: [----------] 1 test from PCL (53491 ms total)
2018-12-05T03:05:15.7494492Z 86:
2018-12-05T03:05:15.7494549Z 86: [----------] Global test environment tear-down
2018-12-05T03:05:15.7494608Z 86: [==========] 1 test from 1 test case ran. (53491 ms total)
2018-12-05T03:05:15.7494663Z 86: [ PASSED ] 0 tests.
2018-12-05T03:05:15.7494745Z 86: [ FAILED ] 1 test, listed below:
2018-12-05T03:05:15.7494800Z 86: [ FAILED ] PCL.KFPCSInitialAlignment
2018-12-05T03:05:15.7494849Z 86:
2018-12-05T03:05:15.7494916Z 86: 1 FAILED TEST
2018-12-05T03:05:15.7524715Z 86/106 Test #86: kfpcs_ia ...............................***Failed 53.51 sec
```
|
non_process
|
kfpcs ia unit test is failing spuriously on multiple platforms i ve seen it happening on the ubuntu and windows images test start kfpcs ia test command d a build bin test kfpcs ia exe d a s test keypoints pcd d a s test keypoints pcd test timeout computed to be running test from test case global test environment set up test from pcl pcl kfpcsinitialalignment d a s test registration test kfpcs ia cpp error the difference between and f is which exceeds max where evaluates to f evaluates to and max evaluates to d a s test registration test kfpcs ia cpp error the difference between and f is which exceeds max where evaluates to f evaluates to and max evaluates to pcl kfpcsinitialalignment ms test from pcl ms total global test environment tear down test from test case ran ms total tests test listed below pcl kfpcsinitialalignment failed test test kfpcs ia failed sec
| 0
|
6,985
| 3,072,199,927
|
IssuesEvent
|
2015-08-19 15:48:31
|
saltstack/salt
|
https://api.github.com/repos/saltstack/salt
|
closed
|
Document `master_finger` more prominently
|
Bug Documentation High Severity P2
|
Technically, without either pre-seeding the master public key to new minions or setting `master_finger`, minions are susceptible to a MITM attack at first connection. This is a convenience trade-off, but a community member pointed out that neither this trade-off nor its solution is featured prominently in our documentation.
Add mentions of `master_finger` to introductory documentation and anywhere we talk about security.
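Conceptually, the check amounts to comparing a hash of the master's public key against the pre-shared `master_finger` value before trusting the first connection. A simplified Python sketch (Salt's real fingerprint format and algorithm follow its `hash_type` setting, so this is illustrative only):
```python
import hashlib

def key_fingerprint(pub_pem: bytes) -> str:
    digest = hashlib.sha256(pub_pem).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def master_is_trusted(pub_pem: bytes, master_finger: str) -> bool:
    # Refuse the first connection unless the presented key matches the
    # fingerprint that was distributed out of band.
    return key_fingerprint(pub_pem) == master_finger
```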
|
1.0
|
Document `master_finger` more prominently - Technically, without either pre-seeding the master public key to new minions or setting `master_finger`, minions are susceptible to a MITM attack at first connection. This is a convenience trade-off, but a community member pointed out that neither this trade-off nor its solution is featured prominently in our documentation.
Add mentions of `master_finger` to introductory documentation and anywhere we talk about security.
|
non_process
|
document master finger more prominently technically without either pre seeding the master public key to new minions or setting master finger minions are susceptible to a mitm attack at first connection this is a convenience trade off but a community member pointed out that this trade off is not featured prominently in our documentation nor the solution add mentions of master finger to introductory documentation and anywhere we talk about security
| 0
|
251,558
| 27,182,442,934
|
IssuesEvent
|
2023-02-18 20:07:47
|
PalisadoesFoundation/talawa
|
https://api.github.com/repos/PalisadoesFoundation/talawa
|
closed
|
Bump lint from 1.6.0 to 2.0.1
|
bug good first issue points 02 security dependencies
|
Bump lint from 1.6.0 to 2.0.1
We need to do this upgrade to improve the performance, reliability and security of the code base.
1. This upgrade will require you to fix other dependencies. You can verify this with the command `flutter pub get`
1. All tests will need to pass after this is completed
1. All existing functionality will need to be maintained after this is completed
This is tied to PR https://github.com/PalisadoesFoundation/talawa/pull/1489
Your PR will need to be merged beforehand so that we can successfully close the one mentioned previously
|
True
|
Bump lint from 1.6.0 to 2.0.1 - Bump lint from 1.6.0 to 2.0.1
We need to do this upgrade to improve the performance, reliability and security of the code base.
1. This upgrade will require you to fix other dependencies. You can verify this with the command `flutter pub get`
1. All tests will need to pass after this is completed
1. All existing functionality will need to be maintained after this is completed
This is tied to PR https://github.com/PalisadoesFoundation/talawa/pull/1489
Your PR will need to be merged beforehand so that we can successfully close the one mentioned previously
|
non_process
|
bump lint from to bump lint from to we need to do this upgrade to improve the performance reliability and security of the code base this upgrade will require you to fix other dependencies you can verify this with the command flutter pub get all tests will need to pass after this is completed all existing functionality will need be maintained after this is completed this is tied to pr your pr will need to be merged beforehand so that we can successfully close the one mentioned previously
| 0
|
9,336
| 12,341,054,014
|
IssuesEvent
|
2020-05-14 21:06:56
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
opened
|
Business Metrics
|
P3 enhancement process
|
**Problem**
We are missing a lot of business-level metrics that would be useful to us and various other teams. We should add these to the mirror node so anyone who runs it can verify the network status with their own application and data.
**Solution**
Add the following metrics:
- Add type=balance to `hedera.mirror.parse.duration`
- Transaction latency, calculated as consensusTimestamp - transactionId.transactionValidStart. Until we have events, this is the best approximation (see the sketch below).
- Top K accounts by TPS
- Top K topics by TPS
- Transfer Volume
**Alternatives**
**Additional Context**
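As additional context, a minimal sketch of the latency arithmetic (timestamps illustrative; the mirror node itself is Java, so this shows only the calculation):
```python
from datetime import datetime, timezone

def transaction_latency_seconds(consensus_timestamp: datetime,
                                valid_start: datetime) -> float:
    # latency = consensusTimestamp - transactionId.transactionValidStart
    return (consensus_timestamp - valid_start).total_seconds()

valid_start = datetime(2020, 5, 14, 21, 6, 50, tzinfo=timezone.utc)
consensus = datetime(2020, 5, 14, 21, 6, 56, tzinfo=timezone.utc)
print(transaction_latency_seconds(consensus, valid_start))  # 6.0
```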
|
1.0
|
Business Metrics - **Problem**
We are missing a lot of business-level metrics that would be useful to us and various other teams. We should add these to the mirror node so anyone who runs it can verify the network status with their own application and data.
**Solution**
Add the following metrics:
- Add type=balance to `hedera.mirror.parse.duration`
- Transaction latency, calculated as consensusTimestamp - transactionId.transactionValidStart. Until we have events, this is the best approximation.
- Top K accounts by TPS
- Top K topics by TPS
- Transfer Volume
**Alternatives**
**Additional Context**
|
process
|
business metrics problem we are missing a lot of business level metrics that would be useful to us and various other teams we should add this to mirror node so anyone that runs it can verify the network status with their own application and data solution add the following metrics add type balance to hedera mirror parse duration transaction latency as calculated by consensustimestamp transactionid transactionvalidstart until we have events this is best approximation top k accounts by tps top k topics by tps transfer volume alternatives additional context
| 1
|
9,873
| 12,885,802,732
|
IssuesEvent
|
2020-07-13 08:31:36
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Strings end with '\n' cause Chrome only test failures
|
browser: chrome pkg/driver process: tests stage: work in progress type: bug
|
### Current behavior:
The following tests from `tests/integration/commands/actions/type_spec.coffee` fail on Chrome 73 but pass on Electron 59:
* `can wrap cursor to next line in [contenteditable] with {rightarrow}`
* `can wrap cursor to prev line in [contenteditable] with {leftarrow}`
* `can use {rightarrow} and nested elements`
* `enter and \n should act the same for [contenteditable]`
* `can type into [contenteditable] with existing <div>`
* `can type into [contenteditable] with existing <p>`
* `collapses selection to start on {leftarrow}`
* `collapses selection to end on {rightarrow}`
* `should move cursor to the start of each line in contenteditable`
* `should move cursor to the end of each line in contenteditable`
* `up and down arrow on contenteditable`
* `downarrow ignores current selection`
* `inserts new line into [contenteditable]`
* `inserts new line into [contenteditable] from midline`
The test failures have a common pattern: The expected result strings all end with `\n` but the actual result does not end with `\n`.

### Desired behavior:
All tests from `type_spec.coffee` pass on both Chrome and Electron.
### Steps to reproduce: (app code and test code)
* Clone Cypress' `develop` branch.
* Install Chrome 73.
* Open 3 terminals running the following commands:
Terminal 1:
```shell
cd cypress/packages/driver
npm start
```
Terminal 2:
```shell
cd cypress/packages/runner
npm run watch
```
Terminal 3:
```shell
cd cypress/packages/driver
npm run cypress:open
```
* Select "Chrome 73" as the browser.
* Click `type_spec.coffee` under command > actions.
* Wait for the tests to finish. :)
### Versions
Chrome Version 73.0.3683.103 (Official Build) (64-bit)
Mac OS (latest released version)
Cypress 3.2.0
|
1.0
|
Strings end with '\n' cause Chrome only test failures - ### Current behavior:
The following tests from `tests/integration/commands/actions/type_spec.coffee` fail on Chrome 73 but pass on Electron 59:
* `can wrap cursor to next line in [contenteditable] with {rightarrow}`
* `can wrap cursor to prev line in [contenteditable] with {leftarrow}`
* `can use {rightarrow} and nested elements`
* `enter and \n should act the same for [contenteditable]`
* `can type into [contenteditable] with existing <div>`
* `can type into [contenteditable] with existing <p>`
* `collapses selection to start on {leftarrow}`
* `collapses selection to end on {rightarrow}`
* `should move cursor to the start of each line in contenteditable`
* `should move cursor to the end of each line in contenteditable`
* `up and down arrow on contenteditable`
* `downarrow ignores current selection`
* `inserts new line into [contenteditable]`
* `inserts new line into [contenteditable] from midline`
The test failures have a common pattern: The expected result strings all end with `\n` but the actual result does not end with `\n`.

### Desired behavior:
All tests from `type_spec.coffee` pass on both Chrome and Electron.
### Steps to reproduce: (app code and test code)
* Clone Cypress' `develop` branch.
* Install Chrome 73.
* Open 3 terminals running the following commands:
Terminal 1:
```shell
cd cypress/packages/driver
npm start
```
Terminal 2:
```shell
cd cypress/packages/runner
npm run watch
```
Terminal 3:
```shell
cd cypress/packages/driver
npm run cypress:open
```
* Select "Chrome 73" as the browser.
* Click `type_spec.coffee` under command > actions.
* Wait for the tests to finish. :)
### Versions
Chrome Version 73.0.3683.103 (Official Build) (64-bit)
Mac OS (latest released version)
Cypress 3.2.0
|
process
|
strings end with n cause chrome only test failures current behavior the following tests from tests integration commands actions type spec coffee fail on chrome but pass on electron can wrap cursor to next line in with rightarrow can wrap cursor to prev line in with leftarrow failed can use rightarrow and nested elementsfailed enter and n should act the same for can type into with existing can type into with existing failed collapses selection to start on leftarrow collapses selection to end on rightarrow should move cursor to the start of each line in contenteditable should move cursor to the end of each line in contenteditable up and down arrow on contenteditablefailed downarrow ignores current selectionfailed inserts new line into inserts new line into from midline the test failures have a common pattern the expected result strings all end with n but the actual result does not end with n desired behavior all tests from type spec coffee pass on both chrome and electron steps to reproduce app code and test code clone cypress develop branch install chrome open terminals running the following commands terminal shell cd cypress package driver npm start terminal shell cd cypress package runner npm run watch terminal shell cd cypress packages driver npm run cypress open select chrome as the browser click type spec coffee under command actions wait for the tests to finish versions chrome version official build bit mac os latest released version cypress
| 1
|
88,326
| 15,800,773,173
|
IssuesEvent
|
2021-04-03 01:13:23
|
SmartBear/readyapi-swagger-assertion-plugin
|
https://api.github.com/repos/SmartBear/readyapi-swagger-assertion-plugin
|
opened
|
CVE-2021-21350 (High) detected in xstream-1.3.1.jar
|
security vulnerability
|
## CVE-2021-21350 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: readyapi-swagger-assertion-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/thoughtworks/xstream/1.3.1/xstream-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-1.7.0.jar (Root Library)
- ready-api-soapui-1.7.0.jar
- :x: **xstream-1.3.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21350>CVE-2021-21350</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq">https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.thoughtworks.xstream","packageName":"xstream","packageVersion":"1.3.1","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:1.7.0;com.smartbear:ready-api-soapui:1.7.0;com.thoughtworks.xstream:xstream:1.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.thoughtworks.xstream:xstream:1.4.16"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-21350","vulnerabilityDetails":"XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream\u0027s security framework with a whitelist limited to the minimal required types. If you rely on XStream\u0027s default blacklist of the Security Framework, you will have to use at least version 1.4.16.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21350","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-21350 (High) detected in xstream-1.3.1.jar - ## CVE-2021-21350 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: readyapi-swagger-assertion-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/thoughtworks/xstream/1.3.1/xstream-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-1.7.0.jar (Root Library)
- ready-api-soapui-1.7.0.jar
- :x: **xstream-1.3.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21350>CVE-2021-21350</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq">https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.thoughtworks.xstream","packageName":"xstream","packageVersion":"1.3.1","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:1.7.0;com.smartbear:ready-api-soapui:1.7.0;com.thoughtworks.xstream:xstream:1.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.thoughtworks.xstream:xstream:1.4.16"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-21350","vulnerabilityDetails":"XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream\u0027s security framework with a whitelist limited to the minimal required types. If you rely on XStream\u0027s default blacklist of the Security Framework, you will have to use at least version 1.4.16.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21350","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in xstream jar cve high severity vulnerability vulnerable library xstream jar path to dependency file readyapi swagger assertion plugin pom xml path to vulnerable library home wss scanner repository thoughtworks xstream xstream jar dependency hierarchy ready api soapui pro jar root library ready api soapui jar x xstream jar vulnerable library found in base branch master vulnerability details xstream is a java library to serialize objects to xml and back again in xstream before version there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream no user is affected who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types if you rely on xstream s default blacklist of the security framework you will have to use at least version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com thoughtworks xstream xstream isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com smartbear ready api soapui pro com smartbear ready api soapui com thoughtworks xstream xstream isminimumfixversionavailable true minimumfixversion com thoughtworks xstream xstream basebranches vulnerabilityidentifier cve vulnerabilitydetails xstream is a java library to serialize objects to xml and back again in xstream before version there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream no user is affected who followed the recommendation to setup xstream security framework with a whitelist limited to the minimal required types if you rely on xstream default blacklist of the security framework you will have to use at least version vulnerabilityurl
| 0
|
11,007
| 4,128,041,026
|
IssuesEvent
|
2016-06-10 02:54:30
|
TEAMMATES/teammates
|
https://api.github.com/repos/TEAMMATES/teammates
|
closed
|
Re-organize FileHelper classes
|
a-CodeQuality m.Aspect
|
There are two `FileHelper`s, one for production (reading input stream etc.) and one for non-production (reading files etc.), but they're not very well-organized right now. Also, there are some self-defined functions that can actually fit in either one of these classes.
|
1.0
|
Re-organize FileHelper classes - There are two `FileHelper`s, one for production (reading input stream etc.) and one for non-production (reading files etc.), but they're not very well-organized right now. Also, there are some self-defined functions that can actually fit in either one of these classes.
|
non_process
|
re organize filehelper classes there are two filehelper s one for production reading input stream etc and one for non production reading files etc but they re not very well organized right now also there are some self defined functions that can actually fit in either one of these classes
| 0
|
38,931
| 12,624,098,717
|
IssuesEvent
|
2020-06-14 03:45:48
|
scriptex/react-svg-donuts
|
https://api.github.com/repos/scriptex/react-svg-donuts
|
closed
|
CVE-2020-13822 (High) detected in elliptic-6.5.2.tgz
|
security vulnerability
|
## CVE-2020-13822 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.2.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/react-svg-donuts/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/react-svg-donuts/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- webpack-4.43.0.tgz (Root Library)
- node-libs-browser-2.2.1.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.2.0.tgz
- :x: **elliptic-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scriptex/react-svg-donuts/commit/16e5b3f75fa65549e86a0891e03657872edc0ee4">16e5b3f75fa65549e86a0891e03657872edc0ee4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Elliptic package 6.5.2 for Node.js allows ECDSA signature malleability via variations in encoding, leading '\0' bytes, or integer overflows. This could conceivably have a security-relevant impact if an application relied on a single canonical signature.
<p>Publish Date: 2020-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13822>CVE-2020-13822</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-13822 (High) detected in elliptic-6.5.2.tgz - ## CVE-2020-13822 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.2.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/react-svg-donuts/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/react-svg-donuts/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- webpack-4.43.0.tgz (Root Library)
- node-libs-browser-2.2.1.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.2.0.tgz
- :x: **elliptic-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scriptex/react-svg-donuts/commit/16e5b3f75fa65549e86a0891e03657872edc0ee4">16e5b3f75fa65549e86a0891e03657872edc0ee4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Elliptic package 6.5.2 for Node.js allows ECDSA signature malleability via variations in encoding, leading '\0' bytes, or integer overflows. This could conceivably have a security-relevant impact if an application relied on a single canonical signature.
<p>Publish Date: 2020-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13822>CVE-2020-13822</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in elliptic tgz cve high severity vulnerability vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file tmp ws scm react svg donuts package json path to vulnerable library tmp ws scm react svg donuts node modules elliptic package json dependency hierarchy webpack tgz root library node libs browser tgz crypto browserify tgz browserify sign tgz x elliptic tgz vulnerable library found in head commit a href vulnerability details the elliptic package for node js allows ecdsa signature malleability via variations in encoding leading bytes or integer overflows this could conceivably have a security relevant impact if an application relied on a single canonical signature publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource
| 0
|
6,562
| 9,648,880,214
|
IssuesEvent
|
2019-05-17 17:30:39
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Update text on Next Steps page of application
|
Apply Process Approved Requirements Ready State Dept.
|
Who: Internship applicants
What: Reviewing next steps
Why: To state accurately what is and is not required for their application
Acceptance criteria:
Issue: The next steps page indicates that references are required but they are actually not.
Please update the next steps page Step 2 as follows:
Tell us about your work, military or other experience.
- Remove the subtext on reference data
|
1.0
|
Update text on Next Steps page of application - Who: Internship applicants
What: Reviewing next steps
Why: To state accurately what is and is not required for their application
Acceptance criteria:
Issue: The next steps page indicates that references are required but they are actually not.
Please update the next steps page Step 2 as follows:
Tell us about your work, military or other experience.
- Remove the subtext on reference data
|
process
|
update text on next steps page of application who internship applicants what reviewing next steps why to properly state what is required and not for their application acceptance criteria issue the next steps page indicates that references are required but they are actually not please update the next steps page step as follows tell us about your work military or other experience remove the subtext on reference data
| 1
|
17,055
| 22,474,525,761
|
IssuesEvent
|
2022-06-22 11:03:57
|
opensafely-core/job-server
|
https://api.github.com/repos/opensafely-core/job-server
|
opened
|
Add Project Number to Applications List
|
application-process
|
When looking at the list of applications it would be useful to show Project.number for applications which have an approved project with a number.
We should also be able to search on that page (and the home page) for project numbers.
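A hypothetical Django sketch of the requested search, assuming models named `Application` and `Project` with fields `name` and `number` (guesses, not the actual schema):
```python
from django.db.models import Q

def search_applications(qs, query: str):
    # Match on application name, and on Project.number when the query is numeric.
    filters = Q(name__icontains=query)
    if query.isdigit():
        filters |= Q(project__number=int(query))
    return qs.filter(filters)
```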
|
1.0
|
Add Project Number to Applications List - When looking at the list of applications it would be useful to show Project.number for applications which have an approved project with a number.
We should also be able to search on that page (and the home page) for project numbers.
|
process
|
add project number to applications list when looking at the list of applications it would be useful to show project number for applications which have an approved project with a number we should also be able to search on that page and the home page for project numbers
| 1
|
285,862
| 31,155,707,356
|
IssuesEvent
|
2023-08-16 13:00:41
|
nidhi7598/linux-4.1.15_CVE-2018-5873
|
https://api.github.com/repos/nidhi7598/linux-4.1.15_CVE-2018-5873
|
opened
|
CVE-2017-17852 (High) detected in linuxlinux-4.1.52
|
Mend: dependency security vulnerability
|
## CVE-2017-17852 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/verifier.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/verifier.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
kernel/bpf/verifier.c in the Linux kernel through 4.14.8 allows local users to cause a denial of service (memory corruption) or possibly have unspecified other impact by leveraging mishandling of 32-bit ALU ops.
<p>Publish Date: 2017-12-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-17852>CVE-2017-17852</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2017-17852">https://www.linuxkernelcves.com/cves/CVE-2017-17852</a></p>
<p>Release Date: 2017-12-23</p>
<p>Fix Resolution: v4.15-rc5,v4.14.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
non_process
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files kernel bpf verifier c kernel bpf verifier c vulnerability details kernel bpf verifier c in the linux kernel through allows local users to cause a denial of service memory corruption or possibly have unspecified other impact by leveraging mishandling of bit alu ops publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
171,146
| 20,925,842,343
|
IssuesEvent
|
2022-03-24 22:47:05
|
scriptex/material-tetris
|
https://api.github.com/repos/scriptex/material-tetris
|
opened
|
CVE-2021-44906 (High) detected in minimist-1.2.5.tgz
|
security vulnerability
|
## CVE-2021-44906 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-1.2.5.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- tslint-6.1.3.tgz (Root Library)
- mkdirp-0.5.5.tgz
- :x: **minimist-1.2.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scriptex/material-tetris/commit/259e37068ceb8417aee02167c39a29cfdee81ad2">259e37068ceb8417aee02167c39a29cfdee81ad2</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
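For context, this class of bug can be demonstrated with a short script. A minimal sketch, assuming a vulnerable minimist <=1.2.5 is installed; the crafted argument strings are illustrative, not taken from this report:

```ts
// Sketch of the bug class only, not an exact PoC for this CVE: on affected
// versions, setKey() follows attacker-controlled key paths such as
// "constructor.prototype" and can write through to Object.prototype.
import minimist from "minimist";

// Hypothetical crafted CLI arguments.
const argv = minimist(["--x.constructor.prototype.polluted", "yes"]);

const victim: Record<string, unknown> = {};
// Prints "yes" if the prototype was polluted; undefined on a fixed version (>=1.2.6).
console.log((victim as any).polluted);
```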
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44906">https://nvd.nist.gov/vuln/detail/CVE-2021-44906</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
non_process
|
cve high detected in minimist tgz cve high severity vulnerability vulnerable library minimist tgz parse argument options library home page a href path to dependency file package json path to vulnerable library node modules minimist package json dependency hierarchy tslint tgz root library mkdirp tgz x minimist tgz vulnerable library found in head commit a href vulnerability details minimist is vulnerable to prototype pollution via file index js function setkey lines publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bumperlane public service contracts prerelease cloudscribe templates virteom tenant mobile bluetooth prerelease showingvault dotnet sdk prerelease envisia dotnet templates yarnpkg yarn virteom tenant mobile framework uwp prerelease virteom tenant mobile framework ios prerelease bumperlane public api clientmodule prerelease vuejs netcore dianoga virteom tenant mobile bluetooth ios prerelease virteom public utilities prerelease indianadavy vuejswebapitemplate csharp nordron angulartemplate virteom tenant mobile framework prerelease virteom tenant mobile bluetooth android prerelease dotnet scaffold raml parser corevuewebtest dotnetng template sitecoremaster truedynamicplaceholders virteom tenant mobile framework android prerelease fable template elmish react blazorpolyfill build fable snowpack template bumperlane public api client prerelease yarn msbuild blazor tailwindcss bunit bridge aws tslint safe template gr pagerender razor midiator webclient step up your open source security game with whitesource
| 0
|
9,320
| 12,338,259,540
|
IssuesEvent
|
2020-05-14 16:10:39
|
DiSSCo/user-stories
|
https://api.github.com/repos/DiSSCo/user-stories
|
opened
|
interoperability between my CMS and UCAS
|
1. NH museum 2. Collection Management 4. Data processing ICEDIG-SURVEY Specimen level
|
As a Curator, I want to add annotated information from a Unified Curation and Annotation System (UCAS) to my collection management system (CMS) so that I can update information on my specimens in my CMS; for this I need interoperability between my CMS and UCAS
|
1.0
|
process
|
interoperability between my cms and ucas as a curator i want to add annotated information from an unified curation and annotation system ucas to my collection management system cms so that i can update infomation on my specimens in my cms for this i need interoperability between my cms and ucas
| 1
|
19,739
| 26,087,548,038
|
IssuesEvent
|
2022-12-26 06:11:12
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
java.lang.UnsatisfiedLinkError: dlopen failed: "/data/user/0/com.google.mediapipe.examples.facemesh/incrementallib/libmediapipe_jni.so" is 32-bit instead of 64-bit
|
more data needed type: support / not a bug (process)
|

Hello,
Any one have an idea on this
i have been using android-ndk-r21b-linux-x86_64 NDK in Ubuntu 20.02 64 bit.
|
1.0
|
process
|
java lang unsatisfiedlinkerror dlopen failed data user com google mediapipe examples facemesh incrementallib libmediapipe jni so is bit instead of bit hello any one have an idea on this i have been using android ndk linux ndk in ubuntu bit
| 1
|
10,905
| 13,684,880,957
|
IssuesEvent
|
2020-09-30 06:04:05
|
prisma/migrate
|
https://api.github.com/repos/prisma/migrate
|
closed
|
Prisma doesn't create tables in SQLite
|
process/candidate
|
## Bug description
<!-- A clear and concise description of what the bug is. -->
When I want to run a query on the `user` table (or any other table), `prisma` says the table does not exist.
**error**:
```
The table `dev.User` does not exist in the current database.
at iy.request (/home/hamidb80/Documents/programming/projects/dairyto/backend/node_modules/@prisma/client/runtime/src/runtime/getPrismaClient.ts:1181:15)
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:17816) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 4)
(node:17816) UnhandledPromiseRejectionWarning: Error:
Invalid `prisma.user.findMany()` invocation in
/home/hamidb80/Documents/programming/projects/dairyto/backend/src/views/main.ts:17:31
The table `dev.User` does not exist in the current database.
at iy.request (/home/hamidb80/Documents/programming/projects/dairyto/backend/node_modules/@prisma/client/runtime/src/runtime/getPrismaClient.ts:1181:15)
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:17816) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 6)
prisma:query SELECT 1
prisma:query SELECT `dev`.`User`.`id`, `dev`.`User`.`email`, `dev`.`User`.`name` FROM `dev`.`User` WHERE 1=1 LIMIT ? OFFSET ?
^C
```
I opened the database with a database manager (`dbeaver` in this case); there was only one table (`_Migrations`), which was empty.
## How to reproduce
**You have 2 choices:**
1. you can clone my repo on your computer:
`https://github.com/hamidb80/diaryto-backend/tree/a83327c108186ee7c12ca712ddd0253323f0da1d`
run these commands:
```
npx prisma generate --schema=./src/database/schema.prisma
npx prisma migrate up --schema=./src/database/schema.prisma --experimental
npx prisma migrate save --name init --schema=./src/database/schema.prisma --experimental
```
run the server by this:
```
npm start
```
and go to `http://127.0.0.1:<port>/api/` [the default value of port is 8000]
2. put my `Prisma schema` file in a new project
after that, enter these commands in the terminal
```
npx prisma generate --schema=<schema.prisma path>
npx prisma migrate up --schema=<schema.prisma path> --experimental
npx prisma migrate save --name init --schema=<schema.prisma path> --experimental
```
and then do a query or check the database
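For reference, the failing query from the error output boils down to a standalone script like the following. A minimal sketch, assuming the client was generated from the schema below; only `prisma.user.findMany()` and the quoted error come from the report:

```ts
// Minimal repro sketch: run after `npx prisma generate`; on the broken setup
// this throws "The table `dev.User` does not exist in the current database."
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main(): Promise<void> {
  const users = await prisma.user.findMany();
  console.log(users);
}

main()
  .catch((e) => console.error(e))
  .finally(() => prisma.$disconnect());
```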
## Expected behavior
get records from the tables correctly
## Prisma information
Prisma schema:
```prisma
datasource db {
provider = "sqlite"
url = "file:./dev.db"
}
generator client {
provider = "prisma-client-js"
}
model User {
id Int @id @default(autoincrement())
email String @unique
name String?
}
model Video {
id Int @id @default(autoincrement())
path String @unique
title String?
}
```
## Environment & setup
- OS: ubuntu mate 20.04.1 LTS
- Database: SQLite
- Node.js version: v10.19.0
- Prisma version: ^2.7.1
|
1.0
|
process
|
prisma doesn t create tables in sqlite bug description when i want to do a query from user table or any other tables prisma says the table does not exist error the table dev user does not exist in the current database at iy request home documents programming projects dairyto backend node modules prisma client runtime src runtime getprismaclient ts at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id node unhandledpromiserejectionwarning error invalid prisma user findmany invocation in home documents programming projects dairyto backend src views main ts the table dev user does not exist in the current database at iy request home documents programming projects dairyto backend node modules prisma client runtime src runtime getprismaclient ts at process tickcallback internal process next tick js node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch rejection id prisma query select prisma query select dev user id dev user email dev user name from dev user where limit offset c i opened the database by a database manager software dbeaver in this case there was only one table migrations which was empty how to reproduce you have choices you can clone my repo on your computer run these commands npx prisma generate schema src database schema prisma npx prisma migrate up schema src database schema prisma experimental npx prisma migrate save name init schema src database schema prisma experimental run the server by this npm start and go to put my prisma schema file in a new project after that enter these commands in the terminal npx prisma generate schema npx prisma migrate up schema experimental npx prisma migrate save name init schema experimental and then do a query or check the database expected behavior get records from the tables correctly prisma information prisma schema sql datasource db provider sqlite url file dev db generator client provider prisma client js model user id int id default autoincrement email string unique name string model video id int id default autoincrement path string unique title string environment setup os ubuntu mate lts database sqlite node js version prisma version
| 1
|
180,952
| 13,979,404,908
|
IssuesEvent
|
2020-10-27 00:02:57
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
☂️ Migrate framework tests to null safety
|
P3 a: null-safety a: tests framework passed first triage
|
Prerequisites (cc @jonahwilliams):
- [x] migrate the framework _[in progress by @a14n]_
- [x] publish platform/file/process in SDK allowlist, publish on pub, _[in progress by @jonahwilliams]_
- [x] migrate flutter_goldens/flutter_goldens_client, https://github.com/flutter/flutter/issues/53908 _[@jonahwilliams signed up for this]_
- [x] migrate flutter_test, https://github.com/flutter/flutter/issues/53908, _[in progress by @goderbauer: #66663]_
It doesn't make sense to start migrating the tests before all the bullets above are done.
The long pole that will block flutter_test migration is the migration of the framework.
|
1.0
|
non_process
|
☂️ migrate framework tests to null safety prerequisites cc jonahwilliams migrate the framework publish platform file process in sdk allowlist publish on pub migrate flutter goldens flutter goldens client migrate flutter test it doesn t make sense to start migrating the tests before all the bullets above are done the long pole that will block flutter test migration is the migration of the framework
| 0
|
163,284
| 13,914,549,207
|
IssuesEvent
|
2020-10-20 22:24:29
|
KSP-KOS/KOS
|
https://api.github.com/repos/KSP-KOS/KOS
|
closed
|
file open error results in crash.
|
documentation
|
When I do `local f is open("0:/craft/survey.csv").` for a file that is absent, my script crashes and an error is printed in the kOS terminal, contrary to the docs. Which behavior is correct? (I guess the docs probably are.)
```
OPEN(PATH)
Will return a VolumeFile or VolumeDirectory representing the item pointed to by PATH.
It will return a Boolean false if there’s nothing present under the given path. Also see
Volume:OPEN.
```
|
1.0
|
non_process
|
file open error results in crash when i do local f is open craft survey csv for a file that is absent i get my script crashed and an error printed in the kos terminal contrary to the docs which behavior is correct i guess the docs probably are open path will return a volumefile or volumedirectory representing the item pointed to by path it will return a boolean false if there’s nothing present under the given path also see volume open
| 0
|
1,911
| 4,746,363,368
|
IssuesEvent
|
2016-10-21 10:46:43
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
opened
|
Extract clinical study reports from EMA
|
Collectors help wanted Processors
|
Yesterday [EMA announced it would give access to its clinical study reports](http://www.ema.europa.eu/ema/index.jsp?curl=pages/news_and_events/news/2016/10/news_detail_002624.jsp&mid=WC0b01ac058004d5c1). This is great news! Now we need to extract this information and provide it through OpenTrials.
|
1.0
|
process
|
extract clinical study reports from ema yesterday this is great news now we need to extract this information and provide it through opentrials
| 1
|
625
| 3,091,438,586
|
IssuesEvent
|
2015-08-26 13:13:06
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
On the main portal, implement validation of fields marked as Email address + highlight such fields with a red border (including the phone number field) and display the error text on the right if the field is not valid.
|
active hi priority In process of testing test
|
IMPORTANT: do this exactly like in the task: https://github.com/e-government-ua/i/issues/375
only this time not for the phone number
(in essence, the mechanism there is already in place and will be reused; what remains is to write the piece of format validation)
in the file:
\i\central-js\client\app\service\built-in\controllers\servicebuiltinbankid.controller.js
embed:
function checkEmail(email) {
var re = /^([\w-]+(?:\.[\w-]+)*)@((?:[\w-]+\.)*\w[\w-]{0,66})\.([a-z]{2,6}(?:\.[a-z]{2})?)$/i;
return re.test(email);
}
having looked at how the "markers" object is used to check which fields need to be validated
IMPORTANT: Add drawing of a "red border" around fields, with an explanation of the error to the right, if a field is not valid (for some fields this is already implemented and even blocks submitting... but so far only for required fields and numeric fields.. and it is needed for the phone number and the email as well)
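A usage sketch of the wiring described above; the field shape and function names besides `checkEmail` are hypothetical, not from the real codebase:

```ts
// Hypothetical wiring: validate a field marked as an email via `markers`
// and report the error text that drives the red border described above.
function checkEmail(email: string): boolean {
  const re = /^([\w-]+(?:\.[\w-]+)*)@((?:[\w-]+\.)*\w[\w-]{0,66})\.([a-z]{2,6}(?:\.[a-z]{2})?)$/i;
  return re.test(email);
}

// Illustrative shape of a marked form field.
interface MarkedField {
  id: string;
  value: string;
  marker: "email" | "phone";
}

function validateField(field: MarkedField): string | null {
  if (field.marker === "email" && !checkEmail(field.value)) {
    return "Invalid email address"; // caller draws the red border and shows this text
  }
  return null;
}
```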
|
1.0
|
process
|
на главном портале реализовать валидацию поля которое промаркеровано как электронный адрес подкрашивать красной рамочкой такие поля в т ч по номеру телефона и справа отображать текст ошибки если поле таки не валидно важно сделать точь в точь как в задаче только уже не для номера телефона по сути механизм уже там готовый будет использоваться останется дописать кусок валидации формата телефона в файле i central js client app service built in controllers servicebuiltinbankid controller js встроить function checkemail email var re w i return re test email посмотрев как через обьект markers проверяется какие поля нужно валидировать важно добавить отрисовку красной рамочки вокруг полей и справа пояснения ошибки если поле не валидно для некоторых полей такое уже реализовано и даже не дает сабмитмть но только для обязательных полей пока и для номерных полей а нужно и для номера телефона и для почты
| 1
|
7,640
| 10,736,590,585
|
IssuesEvent
|
2019-10-29 11:13:32
|
Viir/Kalmit
|
https://api.github.com/repos/Viir/Kalmit
|
closed
|
Expand automatic tests of the web host to cover sending different content-type with HTTP responses
|
interface-between-process-and-host
|
Similar to the tests in https://github.com/Viir/Kalmit/commit/27d0c1036719ff30b8e020fb81769be512422dbe
|
1.0
|
process
|
expand automatic tests of the web host to cover sending different content type with http responses similar to the tests in
| 1
|
11,136
| 13,957,691,926
|
IssuesEvent
|
2020-10-24 08:10:33
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
RO - Harvesting request
|
Geoportal Harvesting process RO - Romania
|
Dear Angelo,
Can you please start a new harvesting of the Romanian Geoportal? We made some changes and we want to see the outcome.
Thank you in advance for your help.
Best Regards,
Simona Bunea
|
1.0
|
process
|
ro harvesting request dear angelo can you please start a new harvesting from romanian geoportal we did some changes and we want to see the outcome thank you in advance for your help best regards simona bunea
| 1
|
15,179
| 18,952,573,956
|
IssuesEvent
|
2021-11-18 16:33:40
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
[iOS/tvOS] System.Diagnostics.Tests.ProcessTests.TestGetProcesses fails on devices
|
area-System.Diagnostics.Process os-ios os-tvos
|
This [test](https://github.com/dotnet/runtime/blob/defa26b9e1159f1a2a3470c52084410edff982b6/src/libraries/System.Diagnostics.Process/tests/ProcessTests.cs#L1108-L1124) fails with:
```
System.ComponentModel.Win32Exception : Could not get all running Process IDs.
Stack trace
at Interop.libproc.proc_listallpids()
at System.Diagnostics.ProcessManager.GetProcessInfos(String machineName)
at System.Diagnostics.Process.GetProcesses(String machineName)
at System.Diagnostics.Process.GetProcesses()
at System.Diagnostics.Tests.ProcessTests.TestGetProcesses()
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
```
I suspect we'll need to bring in the [headers](https://opensource.apple.com/source/xnu/xnu-2422.1.72/libsyscall/wrappers/libproc/libproc.h.auto.html) for proc_listallpids
|
1.0
|
process
|
system diagnostics tests processtests testgetprocesses fails on devices this fails with system componentmodel could not get all running process ids stack trace at interop libproc proc listallpids at system diagnostics processmanager getprocessinfos string machinename at system diagnostics process getprocesses string machinename at system diagnostics process getprocesses at system diagnostics tests processtests testgetprocesses at system reflection runtimemethodinfo invoke object obj bindingflags invokeattr binder binder object parameters cultureinfo culture i suspect we ll need to bring in the for proc listallpids
| 1
|
21,067
| 28,015,034,228
|
IssuesEvent
|
2023-03-27 21:49:01
|
GoogleCloudPlatform/vertex-ai-samples
|
https://api.github.com/repos/GoogleCloudPlatform/vertex-ai-samples
|
closed
|
Custom training that uses BigQuery Python library should use an explicit PROJECT_ID
|
type: process type: cleanup
|
## Expected Behavior
Some notebooks that demonstrate custom training use an inferred project:
```
from google.cloud import bigquery
client = bigquery.Client()
```
This will cause permission issues. See
https://cloud.google.com/vertex-ai/docs/workbench/managed/executor#explicit-project-selection.
## Solution
Use an explicit project id:
```
import os
from google.cloud import bigquery
project_number = os.environ["CLOUD_ML_PROJECT_ID"]
client = bigquery.Client(project=project_number)
```
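The same explicit-project pattern applies in other client libraries. For example, a Node.js/TypeScript sketch, assuming `@google-cloud/bigquery` and the same `CLOUD_ML_PROJECT_ID` environment variable provided by the executor:

```ts
// Explicit project selection with the Node.js BigQuery client (sketch).
import { BigQuery } from "@google-cloud/bigquery";

// Assumption: the executor exposes the project id via CLOUD_ML_PROJECT_ID,
// mirroring the Python example above.
const projectId = process.env.CLOUD_ML_PROJECT_ID;
const client = new BigQuery({ projectId });
```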
|
1.0
|
process
|
custom training that uses bigquery python library should use an explicit project id expected behavior some notebooks that demonstrate custom training use an inferred project from google cloud import bigquery client bigquery client this will cause permission issues see see solution use an explicit project id import os from google cloud import bigquery project number os environ client bigquery client project project number
| 1
|
285,587
| 24,679,213,163
|
IssuesEvent
|
2022-10-18 19:42:39
|
celestiaorg/test-infra
|
https://api.github.com/repos/celestiaorg/test-infra
|
closed
|
monorepo: multiple test-plans implementation in a monorepo
|
enhancement question test
|
We need to think about whether it is viable to turn `test-infra` into a monorepo that contains all the test-plan implementations
|
1.0
|
non_process
|
monorepo multiple test plans implementation in a monorepo we need to think about either it is viable to start creating test infra into a monorepo that contains all test plans implementation
| 0
|
39,059
| 8,575,195,580
|
IssuesEvent
|
2018-11-12 16:38:48
|
yiisoft/yii2
|
https://api.github.com/repos/yiisoft/yii2
|
closed
|
Error on database configuration in second test running in codeception unit test
|
Codeception
|
### What steps will reproduce the problem?
I created a test class like below:
```php
<?php
namespace common\tests\unit\models;
use common\models\Developer;
use common\tests\fixtures\DeveloperFixture;
use Faker\Factory;
class DeveloperTest extends \Codeception\Test\Unit
{
/**
* @var \common\tests\UnitTester
*/
protected $tester;
/**
* @return array
*/
public function _fixtures()
{
return [
'developer' => [
'class' => DeveloperFixture::class,
'dataFile' => codecept_data_dir() . 'developer.php'
]
];
}
/**
* Test validation rules of developer model.
* This method use validationRuleDataProvider() to create some test cases.
*
* @see DeveloperTest::validationRuleDataProvider() Related test cases.
*/
public function testValidationRules()
{
$output = new \Codeception\Lib\Console\Output([]);
foreach ($this->validationRuleDataProvider() as $key => $example) {
list($attribute, $value, $expect) = $example;
$output->writeln("\n\n_____________ RUN testValidationRules case 0} __________");
$output->writeln("Test attribute is: {$attribute}");
$output->writeln("Test value is: {$value}");
$output->writeln("Expected value is: {$expect}");
$developer = new Developer();
$developer->$attribute = $value;
$validateValue = $developer->validate([$attribute]);
if (!$validateValue) {
$output->write("Validation error for {$attribute} is :\n\t");
$output->writeln($developer->getErrors($attribute));
$output->write("\n\n");
}
if ($expect) {
$this->assertTrue($validateValue, 'Validation rule accepted.');
} else {
$this->assertFalse($validateValue, 'Validation rule rejected.');
}
}
}
/**
* Data provider for testValidationRules.
*
* @see DeveloperTest::testValidationRules() Test that use this provider.
*
* @return array
*/
protected function validationRuleDataProvider()
{
$faker_fa = Factory::create('fa_IR');
$faker_en = Factory::create();
return [
['name', $faker_fa->realText(100), 1],
['description', $faker_fa->realText(200), 1],
['name', $faker_fa->numberBetween(100), 0],
['description', $faker_fa->numberBetween(20000000), 0],
['name', $faker_en->realText(100), 1],
['description', $faker_en->realText(200), 1],
['name', $faker_en->numberBetween(2000000), 0],
['description', $faker_en->numberBetween(20000), 0],
['name', $faker_fa->realText(200), 1],
['description', $faker_fa->realText(2000), 1],
['name', $faker_fa->boolean, 0],
['description', $faker_en->boolean, 0],
['name', $faker_en->realText(200), 1],
['description', $faker_en->realText(2000), 1],
['name', $faker_en->unixTime, 0],
['description', $faker_en->numberBetween(2147000000), 0],
];
}
/**
* Test to saving user in database.
* We are using Factory object to create dynamic test cases.
*/
public function testSaving()
{
// use the factory to create a Faker\Generator instance
$faker_fa = Factory::create('fa_IR');
$faker_en = Factory::create();
$developer = new Developer([
'name' => $faker_en->company,
'description' => $faker_fa->realText()
]);
$saveStatus = $developer->save();
$this->assertTrue($saveStatus, 'Developer object saved into database.');
}
/**
* Test update action of Developer model.
* We are using Factory object to create dynamic test cases.
*/
protected function testUpdate()
{
$index = rand(0, 9);
/** @var Developer $developer */
$sample = $this->tester->grabFixture('developer')->data['developer' . $index];
$developer = Developer::findOne(['name' => $sample['name']]);
$this->assertNotNull($developer, "No developer object found.");
// use the factory to create a Faker\Generator instance
$faker_fa = Factory::create('fa_IR');
$faker_en = Factory::create();
$developer->name = $faker_en->company;
$developer->description = $faker_fa->realText();
$this->assertTrue($developer->save(), "Developer object updated.");
}
public function _before(){
}
public function _after(){
}
}
```
My Codeception configuration file is as follows:
```yaml
namespace: common\tests
actor_suffix: Tester
paths:
tests: tests
output: tests/_output
data: tests/_data
support: tests/_support
settings:
bootstrap: _bootstrap.php
colors: true
memory_limit: 1024M
modules:
config:
Yii2:
configFile: 'config/test-local.php'
```
And my unit suite configuration file is like this:
```yaml
suite_namespace: common\tests\unit
actor: UnitTester
bootstrap: false
modules:
enabled:
- Yii2:
part: fixtures
```
### What is the expected result?
When I run a command like `codecept -c common run unit models/DeveloperTest --steps --debug -vvv`, all tests should run and pass.
### What do you get instead?
I get the errors below, which say that the `mongodb` and `i18n` components are not configured.
This problem happens in the second test method of a run, e.g. `testSaving` and `testUpdate` in the `DeveloperTest` class.
The first method runs correctly.
```cmd
✔ DeveloperTest: Validation rules (0.44s)
[TransactionForcer] no longer watching new connections
[yii\db\Connection::open] 'Opening DB connection: mysql:host=mysql;dbname=gamestore_test'
[ConnectionWatcher] Connection opened!
Destroying application
[ConnectionWatcher] no longer watching new connections
[ConnectionWatcher] closing all (2) connections
- DeveloperTest: Saving Destroying application
Starting application
E DeveloperTest: Saving
E DeveloperTest: Saving (0.05s)
Destroying application
Suite done, restoring $_SERVER to original
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Time: 1.68 seconds, Memory: 30.00MB
There were 2 errors:
---------
1) DeveloperTest: Saving
Test tests/unit/models/DeveloperTest.php:testSaving
[yii\base\InvalidConfigException] Unexpected configuration type for the "mongodb" component: boolean
#1 /app/vendor/yiisoft/yii2/di/ServiceLocator.php:208
#2 /app/vendor/yiisoft/yii2/di/ServiceLocator.php:261
#3 /app/vendor/yiisoft/yii2/base/Component.php:180
#4 /app/vendor/yiisoft/yii2/BaseYii.php:546
#5 /app/vendor/yiisoft/yii2/base/BaseObject.php:107
#6 /app/vendor/yiisoft/yii2/base/Application.php:206
#7 yii\base\Application->__construct
#8 /app/vendor/yiisoft/yii2/di/Container.php:383
#9 /app/vendor/yiisoft/yii2/di/Container.php:156
#10 /app/vendor/yiisoft/yii2/BaseYii.php:349
---------
2) DeveloperTest: Saving
Test tests/unit/models/DeveloperTest.php:testSaving
[yii\base\InvalidConfigException] Unknown component ID: i18n
#1 /app/vendor/yiisoft/yii2/di/ServiceLocator.php:139
#2 /app/vendor/yiisoft/yii2/base/Module.php:742
#3 /app/vendor/yiisoft/yii2/base/Application.php:580
#4 /app/vendor/yiisoft/yii2/BaseYii.php:526
#5 /app/vendor/yiisoft/yii2/validators/RequiredValidator.php:60
#6 /app/vendor/yiisoft/yii2/base/BaseObject.php:109
#7 yii\base\BaseObject->__construct
#8 /app/vendor/yiisoft/yii2/di/Container.php:383
#9 /app/vendor/yiisoft/yii2/di/Container.php:156
#10 /app/vendor/yiisoft/yii2/BaseYii.php:349
ERRORS!
Tests: 2, Assertions: 16, Errors: 2.
```
### Additional info
| Q | A
| ---------------- | ---
| Yii version | 2.0.15.1
| PHP version | 7.2.7
| Operating system | Linux alpine
|
1.0
|
non_process
|
error on database configuration in second test running in codeception unit test what steps will reproduce the problem i created a test class like below php php namespace common tests unit models use common models developer use common tests fixtures developerfixture use faker factory class developertest extends codeception test unit var common tests unittester protected tester return array public function fixtures return developer class developerfixture class datafile codecept data dir developer php test validation rules of developer model this method use validationruledataprovider to create some test cases see developertest validationruledataprovider related test cases public function testvalidationrules output new codeception lib console output foreach this validationruledataprovider as key example list attribute value expect example output writeln n n run testvalidationrules case output writeln test attribute is attribute output writeln test value is value output writeln expected value is expect developer new developer developer attribute value validatevalue developer validate if validatevalue output write validation error for attribute is n t output writeln developer geterrors attribute output write n n if expect this asserttrue validatevalue validation rule accepted else this assertfalse validatevalue validation rule rejected data provider for testvalidationrules see developertest testvalidationrules test that use this provider return array protected function validationruledataprovider faker fa factory create fa ir faker en factory create return test to saving user in database we are using factory object to create dynamic test cases public function testsaving use the factory to create a faker generator instance faker fa factory create fa ir faker en factory create developer new developer name faker en company description faker fa realtext savestatus developer save this asserttrue savestatus developer object saved into database test update action of developer model we are using factory object to create dynamic test cases protected function testupdate index rand var developer developer sample this tester grabfixture developer data developer developer findone this assertnotnull developer no developer object found use the factory to create a faker generator instance faker fa factory create fa ir faker en factory create developer name faker en company developer description faker fa realtext this asserttrue developer save developer object updated public function before public function after my codeception configuration file is like below php namespace common tests actor suffix tester paths tests tests output tests output data tests data support tests support settings bootstrap bootstrap php colors true memory limit modules config configfile config test local php and my unit suit configuration file is like this php suite namespace common tests unit actor unittester bootstrap false modules enabled part fixtures what is the expected result when i run command like codecept c common run unit models developertest steps debug vvv all tests should be run and pass what do you get instead i got below errors that say mongodb and component not configured this problem happen in second test function running like testsave and testupdate in developertest class first method run correctly cmd ✔ developertest validation rules no longer watching new connections opening db connection mysql host mysql dbname gamestore test connection opened destroying application no longer watching new connections closing all connections developertest saving destroying application starting application e developertest saving e developertest saving destroying application suite done restoring server to original time seconds memory there were errors developertest saving test tests unit models developertest php testsaving unexpected configuration type for the mongodb component boolean app vendor yiisoft di servicelocator php app vendor yiisoft di servicelocator php app vendor yiisoft base component php app vendor yiisoft baseyii php app vendor yiisoft base baseobject php app vendor yiisoft base application php yii base application construct app vendor yiisoft di container php app vendor yiisoft di container php app vendor yiisoft baseyii php developertest saving test tests unit models developertest php testsaving unknown component id app vendor yiisoft di servicelocator php app vendor yiisoft base module php app vendor yiisoft base application php app vendor yiisoft baseyii php app vendor yiisoft validators requiredvalidator php app vendor yiisoft base baseobject php yii base baseobject construct app vendor yiisoft di container php app vendor yiisoft di container php app vendor yiisoft baseyii php errors tests assertions errors additional info q a yii version php version operating system linux alpine
| 0
|
5,750
| 8,596,850,027
|
IssuesEvent
|
2018-11-15 16:58:35
|
cityofaustin/techstack
|
https://api.github.com/repos/cityofaustin/techstack
|
closed
|
Get Permittingatx analytics
|
Content Type: Process Page Size: XS Team: Content
|
Reach out to Rachel Crist to get access to the analytics for permittingatx to better inform the IA on alpha as we transition.
|
1.0
|
Get Permittingatx analytics - Reach out to Rachel Crist to get access to the analytics for permittingatx to better inform the IA on alpha as we transition.
|
process
|
get permittingatx analytics reach out to rachel crist to get access to the analytics for permittingatx to better inform the ia on alpha as we transition
| 1
|
211,089
| 7,198,282,040
|
IssuesEvent
|
2018-02-05 12:13:39
|
kubernetes/federation
|
https://api.github.com/repos/kubernetes/federation
|
closed
|
Address outstanding DNS review comments in #26694
|
area/federation lifecycle/stale milestone/removed priority/backlog sig/multicluster team/control-plane (deprecated - do not use)
|
<a href="https://github.com/quinton-hoole"><img src="https://avatars0.githubusercontent.com/u/10390785?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [quinton-hoole](https://github.com/quinton-hoole)**
_Monday Jun 06, 2016 at 23:42 GMT_
_Originally opened as https://github.com/kubernetes/kubernetes/issues/26921_
----
See https://github.com/kubernetes/kubernetes/pull/26694 for details. Specifically:
1. Don't call ensureDnsRecords() if the DNS provider has not been initialized.
2. Don't discard errors returned by getClusterZoneNames()
cc: @mfanjie FYI
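The controller code in question is Go, but the two review points are easy to sketch. Here is a minimal Python-flavoured illustration with hypothetical stand-ins (`sync_service`, `get_cluster_zone_names`, `ensure_dns_records`) that do not match the real function names or signatures:
```python
def get_cluster_zone_names(dns_provider):
    # Hypothetical stub for the real zone lookup; may raise on failure.
    return dns_provider["zones"]

def ensure_dns_records(service, zones):
    # Hypothetical stub for the real DNS record reconciliation.
    print(f"ensuring DNS records for {service!r} in zones {zones}")

def sync_service(service, dns_provider):
    # Point 1: don't call ensure_dns_records() if the DNS provider
    # has not been initialized.
    if dns_provider is None:
        return
    # Point 2: don't discard errors from get_cluster_zone_names();
    # let them propagate instead of swallowing them.
    zones = get_cluster_zone_names(dns_provider)
    ensure_dns_records(service, zones)

sync_service("my-service", None)              # no-op: provider not initialized
sync_service("my-service", {"zones": ["a"]})  # reconciles records
```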
|
1.0
|
Address outstanding DNS review comments in #26694 - <a href="https://github.com/quinton-hoole"><img src="https://avatars0.githubusercontent.com/u/10390785?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [quinton-hoole](https://github.com/quinton-hoole)**
_Monday Jun 06, 2016 at 23:42 GMT_
_Originally opened as https://github.com/kubernetes/kubernetes/issues/26921_
----
See https://github.com/kubernetes/kubernetes/pull/26694 for details. Specifically:
1. Don't call ensureDnsRecords() if the DNS provider has not been initialized.
2. Don't discard errors returned by getClusterZoneNames()
cc: @mfanjie FYI
|
non_process
|
address outstanding dns review comments in issue by monday jun at gmt originally opened as see for details specifically don t call ensurednsrecords if the dns provider has not been initialized don t discard errors returned by getclusterzonenames cc mfanjie fyi
| 0
|
12,775
| 15,162,005,239
|
IssuesEvent
|
2021-02-12 09:57:13
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
[BE] Salesforce log support MVP set
|
p0 team:data processing
|
## Description
As an analyst, I want to be able to pull important Salesforce audit logs via a SaaS log experience.
## Acceptance Criteria
- Support for MVP set of Salesforce log types
|
1.0
|
[BE] Salesforce log support MVP set - ## Description
As an analyst, I want to be able to pull important Salesforce audit logs via a SaaS log experience.
## Acceptance Criteria
- Support for MVP set of Salesforce log types
|
process
|
salesforce log support mvp set description as an analyst i want to be able to pull important salesforce audit logs via a saas log experience acceptance criteria support for mvp set of salesforce log types
| 1
|
11,110
| 13,957,680,134
|
IssuesEvent
|
2020-10-24 08:07:02
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
SE: Validation in Inspire Geoportal and Reference validator
|
Geoportal Harvesting process SE - Sweden
|
We are now working quite hard to get all metadata, services and data to be approved by Inspire Geoportal.
But we have some doubts on what will happen during the monitoring in December.
So this ticket is mostly to get some clarifications.
We understand the recent upgrades for the Reference validator have not added any additional stricter rules, only bugfixes.
But we still see that records which do not fulfill all rules in the Inspire Reference validator are anyhow accepted by the Inspire Geoportal.
When the monitoring in December starts, can we be assured that the same numbers as we now see in e.g. the Thematic Viewer will be approved during monitoring?
We have some doubts there, since I think the Inspire Geoportal is not using the Reference validator for validation, but the monitoring process will use the Reference validator.
We have tried to understand the information we have got during summer (see image).
But there are still some gaps.
Kind Regards
Michael Östling / Lantmäteriet (SE)
|
1.0
|
SE: Validation in Inspire Geoportal and Reference validator - We are now working quite hard to get all metadata, services and data to be approved by Inspire Geoportal.
But we have some doubts on what will happen during the monitoring in December.
So this ticket is mostly to get some clarifications.
We understand the recent upgrades for the Reference validator have not added any additional stricter rules, only bugfixes.
But we still see that records which do not fulfill all rules in the Inspire Reference validator are anyhow accepted by the Inspire Geoportal.
When the monitoring in December starts, can we be assured that the same numbers as we now see in e.g. the Thematic Viewer will be approved during monitoring?
We have some doubts there, since I think the Inspire Geoportal is not using the Reference validator for validation, but the monitoring process will use the Reference validator.
We have tried to understand the information we have got during summer (see image).
But there are still some gaps.
Kind Regards
Michael Östling / Lantmäteriet (SE)
|
process
|
se validation in inspire geoportal and reference validator we are now working quite hard to get all metadata services and data to be approved by inspire geoportal but we have some doubts on what will happen during the monitoring in december so this ticket is more to get some clarifications we understand the recent upgrades for reference validator have not added any additional stricter rules only bugfixes but still we see records that do not fulfill all rules in inspire reference validator are anyhow accepted by inspire geoportal when the monitoring in december starts can we be assured that same numbers as we now see in eg thematic viewer will be approved during monitoring we have some doubts there since i think the inspire geoportal is not using the reference validator for validation but the monitoring process will use the reference validator we have tried to understand the information we have got during summer see image but there are still some gaps kind regards michael ouml stling lantm auml teriet se
| 1
|
10,839
| 13,621,675,695
|
IssuesEvent
|
2020-09-24 01:23:26
|
nion-software/nionswift
|
https://api.github.com/repos/nion-software/nionswift
|
closed
|
Add a processing command to combine sources into an RGB image
|
f - displays f - processing feature type - enhancement
|
Ideally each input should allow you to choose what color it is mapped to and whether the data is scaled or used as direct values. You should be able to add/remove channels (a minimal mixing sketch follows the notes below).
Notes 2020-08-28:
- handle inputs of two different sizes
- handle complex data or RGB image inputs
- handle collections that have 2D datums
Would be nice:
- option to have any number of input channels
- option to choose color of each channel
- option to choose whether to auto scale input data
- option to set relative intensity of each channel
- option to add legend to output
Computation notes:
- computations do not currently allow lists of tuples (input image, mixer color, auto scaling enabled, intensity) as inputs
- this feature would be required for all but the simplest color mixer
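This is not Nion Swift's actual computation API, just a minimal numpy sketch of the proposed mixing behaviour, assuming same-shaped 2D inputs (input resizing, complex/RGB inputs and legends are omitted):
```python
import numpy as np

def combine_to_rgb(channels, colors, autoscale=True, intensities=None):
    """Mix any number of same-shaped 2D inputs into one RGB image.

    channels    -- list of 2D arrays
    colors      -- one (r, g, b) mixer colour in [0, 1] per channel
    autoscale   -- scale each input to [0, 1]; otherwise use direct values
    intensities -- optional per-channel relative intensity factors
    """
    channels = [np.asarray(c, dtype=float) for c in channels]
    if intensities is None:
        intensities = [1.0] * len(channels)
    rgb = np.zeros(channels[0].shape + (3,))
    for data, color, gain in zip(channels, colors, intensities):
        if autoscale:
            lo, hi = data.min(), data.max()
            data = (data - lo) / (hi - lo) if hi > lo else np.zeros_like(data)
        rgb += gain * data[..., np.newaxis] * np.asarray(color, dtype=float)
    return np.clip(rgb, 0.0, 1.0)

# Example: map two synthetic inputs to red and green.
rgb = combine_to_rgb([np.random.rand(64, 64), np.random.rand(64, 64)],
                     colors=[(1, 0, 0), (0, 1, 0)])
```
A real implementation would also need the list-of-tuples computation inputs noted above.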
|
1.0
|
Add a processing command to combine sources into an RGB image - Ideally each input should allow you to choose what color it is mapped to and whether the data is scaled or used as direct values. You should be able to add/remove channels.
Notes 2020-08-28:
- handle inputs of two different sizes
- handle complex data or RGB image inputs
- handle collections that have 2D datums
Would be nice:
- option to have any number of input channels
- option to choose color of each channel
- option to choose whether to auto scale input data
- option to set relative intensity of each channel
- option to add legend to output
Computation notes:
- computations do not currently allow lists of tuples (input image, mixer color, auto scaling enabled, intensity) as inputs
- this feature would be required for all but the simplest color mixer
|
process
|
add a processing command to combine sources into an rgb image ideally each input should allow you to choose what color it is mapped to and whether the data is scaled or used as direct values you should be able to add remove channels notes handle inputs of two different sizes handle complex data or rgb image inputs handle collections that have datums would be nice option to have any number of input channels option to choose color of each channel option to choose whether to auto scale input data option to set relative intensity of each channel option to add legend to output computation notes computations do not currently allow lists of tuples input image mixer color auto scaling enabled intensity as inputs this feature would be required for all but the simplest color mixer
| 1
|
20,755
| 27,488,877,438
|
IssuesEvent
|
2023-03-04 11:19:08
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Ubuntu: `stdout` and `stderr` data is not emitted on child processes (with piped `stdio`)
|
child_process
|
### Version
`v18.0.0-nightly20220311d8c4e375f2`
### Platform
`Linux parallels-Parallels-Virtual-Platform 5.13.0-37-generic #42~20.04.1-Ubuntu SMP Tue Mar 15 15:44:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux`
### Subsystem
_No response_
### What steps will reproduce the bug?
Run `node main.js` having following two files in same folder:
_child.js_
```javascript
console.log("Some text");
const interval = setInterval(() => {
console.log("Other text")
}, 100)
setTimeout(() => clearInterval(interval), 1000);
```
_main.js_
```javascript
const childProcess = require('child_process');
const child = childProcess.spawn('node', ['child.js'], { stdio: 'pipe'});
if (child.stdout) {
console.log("Attach std listeners")
child.stdout.on('data', data => console.log("Child stdout:", String(data)));
child.stderr.on('data', data => console.log("Child stderr:", String(data)));
}
child.on('close', (...args) => { console.log("Child closed", args); });
```
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior?
Expected output (e.g. observable on macOS) is:
```
Attach std listeners
Child stdout: Some text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child closed [ 0, null ]
```
### What do you see instead?
```
Attach std listeners
CLOSED [ 0, null ]
```
### Additional information
When `stdio` is set to `inherit` then the child output is exposed as expected.
I observe it on every Node.js version I've tried (v14, v16, v17, and latest build)
|
1.0
|
Ubuntu: `stdout` and `stderr` data is not emitted on child processes (with piped `stdio`) - ### Version
`v18.0.0-nightly20220311d8c4e375f2`
### Platform
`Linux parallels-Parallels-Virtual-Platform 5.13.0-37-generic #42~20.04.1-Ubuntu SMP Tue Mar 15 15:44:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux`
### Subsystem
_No response_
### What steps will reproduce the bug?
Run `node main.js` having following two files in same folder:
_child.js_
```javascript
console.log("Some text");
const interval = setInterval(() => {
console.log("Other text")
}, 100)
setTimeout(() => clearInterval(interval), 1000);
```
_main.js_
```javascript
const childProcess = require('child_process');
const child = childProcess.spawn('node', ['child.js'], { stdio: 'pipe'});
if (child.stdout) {
console.log("Attach std listeners")
child.stdout.on('data', data => console.log("Child stdout:", String(data)));
child.stderr.on('data', data => console.log("Child stderr:", String(data)));
}
child.on('close', (...args) => { console.log("Child closed", args); });
```
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior?
Expected output (e.g. observable on macOS) is:
```
Attach std listeners
Child stdout: Some text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child stdout: Other text
Child closed [ 0, null ]
```
### What do you see instead?
```
Attach std listeners
CLOSED [ 0, null ]
```
### Additional information
When `stdio` is set to `inherit` then the child output is exposed as expected.
I observe it on every Node.js version I've tried (v14, v16, v17, and latest build)
|
process
|
ubuntu stdout and stderr data is not emitted on child processes with piped stdio version platform linux parallels parallels virtual platform generic ubuntu smp tue mar utc gnu linux subsystem no response what steps will reproduce the bug run node main js having following two files in same folder child js javascript console log some text const interval setinterval console log other text settimeout clearinterval interval main js javascript const childprocess require child process const child childprocess spawn node stdio pipe if child stdout console log attach std listeners child stdout on data data console log child stdout string data child stderr on data data console log child stderr string data child on close args console log child closed args how often does it reproduce is there a required condition always what is the expected behavior expected output e g observable on macos is attach std listeners child stdout some text child stdout other text child stdout other text child stdout other text child stdout other text child stdout other text child stdout other text child stdout other text child stdout other text child stdout other text child closed what do you see instead attach std listeners closed additional information when stdio is set to inherit then the child output is exposed as expected i observe it on every node js version i ve tried and latest build
| 1
|
255,335
| 27,484,911,010
|
IssuesEvent
|
2023-03-04 01:33:36
|
panasalap/linux-4.1.15
|
https://api.github.com/repos/panasalap/linux-4.1.15
|
closed
|
CVE-2020-27152 (Medium) detected in linux-stable-rtv4.1.33 - autoclosed
|
security vulnerability
|
## CVE-2020-27152 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/aae4c2fa46027fd4c477372871df090c6b94f3f1">aae4c2fa46027fd4c477372871df090c6b94f3f1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/ioapic.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/ioapic.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ioapic_lazy_update_eoi in arch/x86/kvm/ioapic.c in the Linux kernel before 5.9.2. It has an infinite loop related to improper interaction between a resampler and edge triggering, aka CID-77377064c3a9.
<p>Publish Date: 2020-11-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-27152>CVE-2020-27152</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-11-06</p>
<p>Fix Resolution: v5.9.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-27152 (Medium) detected in linux-stable-rtv4.1.33 - autoclosed - ## CVE-2020-27152 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/aae4c2fa46027fd4c477372871df090c6b94f3f1">aae4c2fa46027fd4c477372871df090c6b94f3f1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/ioapic.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/ioapic.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ioapic_lazy_update_eoi in arch/x86/kvm/ioapic.c in the Linux kernel before 5.9.2. It has an infinite loop related to improper interaction between a resampler and edge triggering, aka CID-77377064c3a9.
<p>Publish Date: 2020-11-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-27152>CVE-2020-27152</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-11-06</p>
<p>Fix Resolution: v5.9.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files arch kvm ioapic c arch kvm ioapic c vulnerability details an issue was discovered in ioapic lazy update eoi in arch kvm ioapic c in the linux kernel before it has an infinite loop related to improper interaction between a resampler and edge triggering aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend
| 0
|
2,047
| 4,857,090,318
|
IssuesEvent
|
2016-11-12 12:06:20
|
lxde/lxqt
|
https://api.github.com/repos/lxde/lxqt
|
reopened
|
kalarm crash after closing in LXQT
|
invalid/dup/rejected wont-process-this
|
**How to reproduce:**
1. Start Kalarm
2. Close Kalarm.
3. get Crash
**Software used**
1. Fedora 25
2. lxqt 0.11
3. Qt 5.7
4. kalarm-16.08.2-1.fc25
**Expected behavior**
Kalarm exit without crash
**Actual behavior**
kalarm crashes after exit
Backtrace is here: http://pastebin.com/3ay0FxxL
More info here https://bugs.kde.org/show_bug.cgi?id=372223
|
1.0
|
kalarm crash after closing in LXQT - **How to reproduce:**
1. Start Kalarm
2. Close Kalarm.
3. get Crash
**Software used**
1. Fedora 25
2. lxqt 0.11
3. Qt 5.7
4. kalarm-16.08.2-1.fc25
**Expected behavior**
Kalarm exit without crash
**Actual behavior**
kalarm crashes after exit
Backtrace is here: http://pastebin.com/3ay0FxxL
More info here https://bugs.kde.org/show_bug.cgi?id=372223
|
process
|
kalarm crash after closing in lxqt how to reproduce start kalarm close kalarm get crash software used fedora lxqt qt kalarm expected behavior kalarm exit without crash actual behavior kalarm crashes after exit backtrace is here more info here
| 1
|
11,324
| 14,140,458,334
|
IssuesEvent
|
2020-11-10 11:14:01
|
kubeflow/website
|
https://api.github.com/repos/kubeflow/website
|
closed
|
[Release 1.1] Release Website 1.1
|
area/docs kind/process lifecycle/stale priority/p0
|
Opening this issue to track doing a release of the website for Kubeflow 1.1
* We still need to find an owner to drive this (see kubeflow/kubeflow#5022)
* The first step is cutting a 1.0 branch from master providing a stable link to the docs so we can begin updating the docs on master.
* Docs for versioning the website: https://github.com/kubeflow/kubeflow/blob/master/docs_dev/releasing.md#version-the-website
|
1.0
|
[Release 1.1] Release Website 1.1 - Opening this issue to track doing a release of the website for Kubeflow 1.1
* We still need to find an owner to drive this (see kubeflow/kubeflow#5022)
* The first step is cutting a 1.0 branch from master providing a stable link to the docs so we can begin updating the docs on master.
* Docs for versioning the website: https://github.com/kubeflow/kubeflow/blob/master/docs_dev/releasing.md#version-the-website
|
process
|
release website opening this issue to track doing a release of the website for kubeflow we still need to find an owner to drive this see kubeflow kubeflow the first step is cutting a branch from master providing a stable link to the docs so we can begin updating the docs on master docs for versioning the website
| 1
|
18,733
| 24,627,815,100
|
IssuesEvent
|
2022-10-16 18:44:57
|
benthosdev/benthos
|
https://api.github.com/repos/benthosdev/benthos
|
closed
|
why deprecate the parquet component
|
question processors needs more info
|
Hello,
I see in the documentation that this component is marked as deprecated. It is being used in our project. What is the main reason for the deprecation?
|
1.0
|
why deprecate the parquet component - Hello,
I see in the documentation that this component is marked as deprecated. It is being used in our project. What is the main reason for the deprecation?
|
process
|
why deprecate the parquet component hello i see in the documentation that this component is marked as deprecated it is being used in our project what is the main reason for the deprecation
| 1
|
7,566
| 10,682,242,740
|
IssuesEvent
|
2019-10-22 04:27:55
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][processing] Add new algorithm "Print layout map extent to layer"
|
3.8 Automatic new feature Processing Alg
|
Original commit: https://github.com/qgis/QGIS/commit/92c8fddac2bca80c60ff0978d34dc01e9db5bb79 by nyalldawson
This algorithm creates a polygon layer containing the extent of a print layout map item, with attributes specifying the map size (in layout units), scale and rotation.
The main use case is when you want to create an advanced overview indicator and the inbuilt layout tools don't suffice.
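For reference, here is a minimal geometric sketch in plain Python (not the QGIS implementation) of how such an extent polygon relates to the map item's size, scale and rotation; it assumes a metre-based CRS and millimetre layout units:
```python
import math

def map_extent_polygon(center, size_layout_units, scale, rotation_deg):
    """Corner ring of a rotated map-item extent (illustrative only).

    center            -- (x, y) map coordinates of the map item's centre
    size_layout_units -- (width, height) of the item in layout units (mm)
    scale             -- map scale denominator (e.g. 25000 for 1:25,000)
    rotation_deg      -- map rotation in degrees
    """
    # Layout mm -> ground units at the given scale (assumes metres).
    w = size_layout_units[0] / 1000.0 * scale
    h = size_layout_units[1] / 1000.0 * scale
    t = math.radians(rotation_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    corners = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [
        (center[0] + x * cos_t - y * sin_t, center[1] + x * sin_t + y * cos_t)
        for x, y in corners
    ]

# A 200 mm x 100 mm map at 1:25,000, rotated 15 degrees:
ring = map_extent_polygon((500000, 6400000), (200, 100), 25000, 15)
```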
|
1.0
|
[FEATURE][processing] Add new algorithm "Print layout map extent to layer" - Original commit: https://github.com/qgis/QGIS/commit/92c8fddac2bca80c60ff0978d34dc01e9db5bb79 by nyalldawson
This algorithm creates a polygon layer containing the extent of a print layout map item, with attributes specifying the map size (in layout units), scale and rotation.
The main use case is when you want to create an advanced overview indicator and the inbuilt layout tools don't suffice.
|
process
|
add new algorithm print layout map extent to layer original commit by nyalldawson this algorithm creates a polygon layer containing the extent of a print layout map item with attributes specifying the map size in layout units scale and rotation the main use case is when you want to create an advanced overview indicator and the inbuilt layout tools to do this don t suffice
| 1
|
156,209
| 24,583,862,853
|
IssuesEvent
|
2022-10-13 17:52:24
|
woocommerce/woocommerce-android
|
https://api.github.com/repos/woocommerce/woocommerce-android
|
closed
|
Long store names in My Store
|
type: bug feature: stats good first issue category: design
|
If a store has a name long enough to wrap to another line, it overlaps the "My Store" heading.

|
1.0
|
Long store names in My Store - If a store has a name long enough to wrap to another line, it overlaps the "My Store" heading.

|
non_process
|
long store names in my store if a store has a name long enough to wrap to another line it overlaps the my store heading
| 0
|
16,086
| 20,254,340,740
|
IssuesEvent
|
2022-02-14 21:18:24
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
Multi-target GPR predicts only 1 std when normalize_y=False
|
Bug module:gaussian_process
|
### Describe the bug
Supposed to have been fixed in [#20761](https://github.com/scikit-learn/scikit-learn/pull/20761)?
See #22199
When using a GPR model for multi-target data, if we don't set normalize_y=True then the shape of the predicted standard deviation is (n_samples,) instead of (n_samples, n_targets) and similarly for the covariance.
### Steps/Code to Reproduce
```
import numpy as np
import sklearn
from sklearn.gaussian_process import GaussianProcessRegressor as GPR
print(sklearn.__version__)
X_train = np.random.rand(7,3)
Y_train = np.random.randn(7,2)
X_test = np.random.rand(4,3)
# ---- WORKING CODE ---- #
model = GPR(normalize_y=True)
model.fit(X_train, Y_train)
Y_pred, Y_std = model.predict(X_test, return_std=True)
print(Y_pred.shape, Y_std.shape)
# ---- BROKEN CODE ---- #
model = GPR()
model.fit(X_train, Y_train)
Y_pred, Y_std = model.predict(X_test, return_std=True)
print(Y_pred.shape, Y_std.shape)
```
### Expected Results
Should get Y_std.shape = (n_samples, n_targets) = (4,2)
### Actual Results
Get Y_std.shape = (n_samples,) = (4,)
### Versions
System:
python: 3.9.5 | packaged by conda-forge | (default, Jun 19 2021, 00:27:35) [Clang 11.1.0 ]
executable: /Users/tnakam10/opt/anaconda3/envs/aerofusion/bin/python
machine: macOS-11.6.1-x86_64-i386-64bit
Python dependencies:
pip: 21.3.1
setuptools: 60.5.0
sklearn: 1.0.2
numpy: 1.19.5
scipy: 1.7.3
Cython: None
pandas: 1.3.5
matplotlib: 3.5.1
joblib: 1.1.0
threadpoolctl: 3.0.0
Built with OpenMP: True
|
1.0
|
Multi-target GPR predicts only 1 std when normalize_y=False - ### Describe the bug
Supposed to have been fixed in [#20761](https://github.com/scikit-learn/scikit-learn/pull/20761)?
See #22199
When using a GPR model for multi-target data, if we don't set normalize_y=True then the shape of the predicted standard deviation is (n_samples,) instead of (n_samples, n_targets) and similarly for the covariance.
### Steps/Code to Reproduce
```
import numpy as np
import sklearn
from sklearn.gaussian_process import GaussianProcessRegressor as GPR
print(sklearn.__version__)
X_train = np.random.rand(7,3)
Y_train = np.random.randn(7,2)
X_test = np.random.rand(4,3)
# ---- WORKING CODE ---- #
model = GPR(normalize_y=True)
model.fit(X_train, Y_train)
Y_pred, Y_std = model.predict(X_test, return_std=True)
print(Y_pred.shape, Y_std.shape)
# ---- BROKEN CODE ---- #
model = GPR()
model.fit(X_train, Y_train)
Y_pred, Y_std = model.predict(X_test, return_std=True)
print(Y_pred.shape, Y_std.shape)
```
### Expected Results
Should get Y_std.shape = (n_samples, n_targets) = (4,2)
### Actual Results
Get Y_std.shape = (n_samples,) = (4,)
### Versions
System:
python: 3.9.5 | packaged by conda-forge | (default, Jun 19 2021, 00:27:35) [Clang 11.1.0 ]
executable: /Users/tnakam10/opt/anaconda3/envs/aerofusion/bin/python
machine: macOS-11.6.1-x86_64-i386-64bit
Python dependencies:
pip: 21.3.1
setuptools: 60.5.0
sklearn: 1.0.2
numpy: 1.19.5
scipy: 1.7.3
Cython: None
pandas: 1.3.5
matplotlib: 3.5.1
joblib: 1.1.0
threadpoolctl: 3.0.0
Built with OpenMP: True
|
process
|
multi target gpr predicts only std when normalize y false describe the bug supposed to have been fixed in see when using a gpr model for multi target data if we don t set normalize y true then the shape of the predicted standard deviation is n samples instead of n samples n targets and similarly for the covariance steps code to reproduce import numpy as np import sklearn from sklearn gaussian process import gaussianprocessregressor as gpr print sklearn version x train np random rand y train np random randn x test np random rand working code model gpr normalize y true model fit x train y train y pred y std model predict x test return std true print y pred shape y std shape broken code model gpr model fit x train y train y pred y std model predict x test return std true print y pred shape y std shape expected results should get y std shape n samples n targets actual results get y std shape n samples versions system python packaged by conda forge default jun executable users opt envs aerofusion bin python machine macos python dependencies pip setuptools sklearn numpy scipy cython none pandas matplotlib joblib threadpoolctl built with openmp true
| 1
|
14,382
| 17,401,801,377
|
IssuesEvent
|
2021-08-02 20:52:20
|
googleapis/python-datastore
|
https://api.github.com/repos/googleapis/python-datastore
|
opened
|
Split out system tests into separate Kokoro job
|
type: process
|
Working to reduce CI latency. Here are timings on my local machine (note the pre-run with `--install-only` to avoid measuring virtualenv creation time):
```bash
$ for job in $(nox --list | grep "^\*" | cut -d " " -f 2); do
echo $job;
nox -e $job --install-only;
time nox -re $job;
done
nox > Running session lint
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint
nox > python -m pip install flake8 black==19.10b0
nox > Skipping black run, as --install-only is set.
nox > Skipping flake8 run, as --install-only is set.
nox > Session lint was successful.
nox > Running session lint
nox > Re-using existing virtual environment at .nox/lint.
nox > python -m pip install flake8 black==19.10b0
nox > black --check docs google tests noxfile.py setup.py
All done! ✨ 🍰 ✨
64 files would be left unchanged.
nox > flake8 google tests
nox > Session lint was successful.
real 0m2.613s
user 0m6.705s
sys 0m0.242s
blacken
nox > Running session blacken
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/blacken
nox > python -m pip install black==19.10b0
nox > Skipping black run, as --install-only is set.
nox > Session blacken was successful.
nox > Running session blacken
nox > Re-using existing virtual environment at .nox/blacken.
nox > python -m pip install black==19.10b0
nox > black docs google tests noxfile.py setup.py
All done! ✨ 🍰 ✨
64 files left unchanged.
nox > Session blacken was successful.
real 0m0.912s
user 0m0.834s
sys 0m0.081s
lint_setup_py
nox > Running session lint_setup_py
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint_setup_py
nox > python -m pip install docutils pygments
nox > Skipping python run, as --install-only is set.
nox > Session lint_setup_py was successful.
nox > Running session lint_setup_py
nox > Re-using existing virtual environment at .nox/lint_setup_py.
nox > python -m pip install docutils pygments
nox > python setup.py check --restructuredtext --strict
running check
nox > Session lint_setup_py was successful.
real 0m1.064s
user 0m0.922s
sys 0m0.142s
unit-3.6
nox > Running session unit-3.6
nox > Creating virtual environment (virtualenv) using python3.6 in .nox/unit-3-6
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.6 was successful.
nox > Running session unit-3.6
nox > Re-using existing virtual environment at .nox/unit-3-6.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > py.test --quiet --junitxml=unit_3.6_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 12%]
........................................................................ [ 24%]
........................................................................ [ 37%]
........................................................................ [ 49%]
........................................................................ [ 62%]
........................................................................ [ 74%]
........................................................................ [ 87%]
........................................................................ [ 99%]
.. [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/unit_3.6_sponge_log.xml -
578 passed in 5.67s
nox > Session unit-3.6 was successful.
real 0m9.988s
user 0m9.432s
sys 0m0.538s
unit-3.7
nox > Running session unit-3.7
nox > Creating virtual environment (virtualenv) using python3.7 in .nox/unit-3-7
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.7 was successful.
nox > Running session unit-3.7
nox > Re-using existing virtual environment at .nox/unit-3-7.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > py.test --quiet --junitxml=unit_3.7_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 12%]
........................................................................ [ 24%]
........................................................................ [ 37%]
........................................................................ [ 49%]
........................................................................ [ 62%]
........................................................................ [ 74%]
........................................................................ [ 87%]
........................................................................ [ 99%]
.. [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/unit_3.7_sponge_log.xml -
578 passed in 5.54s
nox > Session unit-3.7 was successful.
real 0m9.483s
user 0m8.993s
sys 0m0.482s
unit-3.8
nox > Running session unit-3.8
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/unit-3-8
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.8 was successful.
nox > Running session unit-3.8
nox > Re-using existing virtual environment at .nox/unit-3-8.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > py.test --quiet --junitxml=unit_3.8_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 12%]
........................................................................ [ 24%]
........................................................................ [ 37%]
........................................................................ [ 49%]
........................................................................ [ 62%]
........................................................................ [ 74%]
........................................................................ [ 87%]
........................................................................ [ 99%]
.. [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/unit_3.8_sponge_log.xml -
578 passed in 5.29s
nox > Session unit-3.8 was successful.
real 0m8.927s
user 0m8.447s
sys 0m0.473s
unit-3.9
nox > Running session unit-3.9
nox > Creating virtual environment (virtualenv) using python3.9 in .nox/unit-3-9
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.9 was successful.
nox > Running session unit-3.9
nox > Re-using existing virtual environment at .nox/unit-3-9.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > py.test --quiet --junitxml=unit_3.9_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 12%]
........................................................................ [ 24%]
........................................................................ [ 37%]
........................................................................ [ 49%]
........................................................................ [ 62%]
........................................................................ [ 74%]
........................................................................ [ 87%]
........................................................................ [ 99%]
.. [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/unit_3.9_sponge_log.xml -
578 passed in 5.72s
nox > Session unit-3.9 was successful.
real 0m9.258s
user 0m8.727s
sys 0m0.521s
system-3.8(disable_grpc=False)
nox > Running session system-3.8(disable_grpc=False)
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/system-3-8-disable_grpc-false
nox > python -m pip install --pre grpcio
nox > python -m pip install mock pytest google-cloud-testutils -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session system-3.8(disable_grpc=False) was successful.
nox > Running session system-3.8(disable_grpc=False)
nox > Re-using existing virtual environment at .nox/system-3-8-disable_grpc-false.
nox > python -m pip install --pre grpcio
nox > python -m pip install mock pytest google-cloud-testutils -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > py.test --quiet --junitxml=system_3.8_sponge_log.xml tests/system
............................... [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/system_3.8_sponge_log.xml -
31 passed in 80.59s (0:01:20)
nox > Session system-3.8(disable_grpc=False) was successful.
real 1m23.987s
user 0m11.917s
sys 0m2.127s
system-3.8(disable_grpc=True)
nox > Running session system-3.8(disable_grpc=True)
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/system-3-8-disable_grpc-true
nox > python -m pip install --pre grpcio
nox > python -m pip install mock pytest google-cloud-testutils -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session system-3.8(disable_grpc=True) was successful.
nox > Running session system-3.8(disable_grpc=True)
nox > Re-using existing virtual environment at .nox/system-3-8-disable_grpc-true.
nox > python -m pip install --pre grpcio
nox > python -m pip install mock pytest google-cloud-testutils -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > py.test --quiet --junitxml=system_3.8_sponge_log.xml tests/system
............................... [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/system_3.8_sponge_log.xml -
31 passed in 63.67s (0:01:03)
nox > Session system-3.8(disable_grpc=True) was successful.
real 1m7.158s
user 0m11.968s
sys 0m0.528s
cover
nox > Running session cover
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/cover
nox > python -m pip install coverage pytest-cov
nox > Skipping coverage run, as --install-only is set.
nox > Skipping coverage run, as --install-only is set.
nox > Session cover was successful.
nox > Running session cover
nox > Re-using existing virtual environment at .nox/cover.
nox > python -m pip install coverage pytest-cov
nox > coverage report --show-missing --fail-under=100
Name Stmts Miss Branch BrPart Cover Missing
---------------------------------------------------------------------------------------------------------------------------------
google/cloud/datastore/__init__.py 9 0 0 0 100%
google/cloud/datastore/_app_engine_key_pb2.py 23 0 2 0 100%
google/cloud/datastore/_gapic.py 15 0 2 0 100%
google/cloud/datastore/_http.py 72 0 12 0 100%
google/cloud/datastore/batch.py 106 0 32 0 100%
google/cloud/datastore/client.py 230 0 118 0 100%
google/cloud/datastore/entity.py 23 0 6 0 100%
google/cloud/datastore/helpers.py 196 0 130 0 100%
google/cloud/datastore/key.py 202 0 76 0 100%
google/cloud/datastore/query.py 205 0 64 0 100%
google/cloud/datastore/transaction.py 51 0 10 0 100%
google/cloud/datastore/version.py 1 0 0 0 100%
google/cloud/datastore_admin_v1/services/__init__.py 0 0 0 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/__init__.py 3 0 0 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/async_client.py 90 0 20 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/client.py 178 0 58 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/pagers.py 42 0 10 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/transports/__init__.py 9 0 0 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/transports/base.py 46 0 8 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/transports/grpc.py 71 0 20 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/transports/grpc_asyncio.py 74 0 20 0 100%
google/cloud/datastore_admin_v1/types/__init__.py 3 0 0 0 100%
google/cloud/datastore_admin_v1/types/datastore_admin.py 74 0 0 0 100%
google/cloud/datastore_admin_v1/types/index.py 27 0 0 0 100%
google/cloud/datastore_v1/services/__init__.py 0 0 0 0 100%
google/cloud/datastore_v1/services/datastore/__init__.py 3 0 0 0 100%
google/cloud/datastore_v1/services/datastore/async_client.py 123 0 40 0 100%
google/cloud/datastore_v1/services/datastore/client.py 214 0 84 0 100%
google/cloud/datastore_v1/services/datastore/transports/__init__.py 9 0 0 0 100%
google/cloud/datastore_v1/services/datastore/transports/base.py 49 0 8 0 100%
google/cloud/datastore_v1/services/datastore/transports/grpc.py 78 0 24 0 100%
google/cloud/datastore_v1/services/datastore/transports/grpc_asyncio.py 81 0 24 0 100%
google/cloud/datastore_v1/types/__init__.py 4 0 0 0 100%
google/cloud/datastore_v1/types/datastore.py 76 0 0 0 100%
google/cloud/datastore_v1/types/entity.py 35 0 0 0 100%
google/cloud/datastore_v1/types/query.py 80 0 0 0 100%
tests/unit/__init__.py 0 0 0 0 100%
tests/unit/gapic/datastore_admin_v1/__init__.py 0 0 0 0 100%
tests/unit/gapic/datastore_admin_v1/test_datastore_admin.py 560 0 20 0 100%
tests/unit/gapic/datastore_v1/__init__.py 0 0 0 0 100%
tests/unit/gapic/datastore_v1/test_datastore.py 691 0 6 0 100%
tests/unit/test__gapic.py 32 0 2 0 100%
tests/unit/test__http.py 522 0 58 0 100%
tests/unit/test_batch.py 377 0 12 0 100%
tests/unit/test_client.py 1025 0 54 0 100%
tests/unit/test_entity.py 167 0 0 0 100%
tests/unit/test_helpers.py 741 0 14 0 100%
tests/unit/test_key.py 535 0 4 0 100%
tests/unit/test_query.py 583 0 24 0 100%
tests/unit/test_transaction.py 276 0 6 0 100%
---------------------------------------------------------------------------------------------------------------------------------
TOTAL 8011 0 968 0 100%
nox > coverage erase
nox > Session cover was successful.
real 0m2.396s
user 0m2.273s
sys 0m0.125s
docs
nox > Running session docs
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/docs
nox > python -m pip install -e .
nox > python -m pip install sphinx==4.0.1 alabaster recommonmark
nox > Skipping sphinx-build run, as --install-only is set.
nox > Session docs was successful.
nox > Running session docs
nox > Re-using existing virtual environment at .nox/docs.
nox > python -m pip install -e .
nox > python -m pip install sphinx==4.0.1 alabaster recommonmark
nox > sphinx-build -W -T -N -b html -d docs/_build/doctrees/ docs/ docs/_build/html/
Running Sphinx v4.0.1
making output directory... done
[autosummary] generating autosummary for: README.rst, UPGRADING.md, admin_client.rst, batches.rst, changelog.md, client.rst, entities.rst, helpers.rst, index.rst, keys.rst, queries.rst, transactions.rst
loading intersphinx inventory from https://python.readthedocs.org/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-auth/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-api-core/latest/objects.inv...
loading intersphinx inventory from https://grpc.github.io/grpc/python/objects.inv...
loading intersphinx inventory from https://proto-plus-python.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/protobuf/latest/objects.inv...
intersphinx inventory has moved: https://python.readthedocs.org/en/latest/objects.inv -> https://python.readthedocs.io/en/latest/objects.inv
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 12 source files that are out of date
updating environment: [new config] 12 added, 0 changed, 0 removed
reading sources... [ 8%] README
reading sources... [ 16%] UPGRADING
/home/tseaver/projects/agendaless/Google/src/python-datastore/.nox/docs/lib/python3.8/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 25%] admin_client
reading sources... [ 33%] batches
reading sources... [ 41%] changelog
/home/tseaver/projects/agendaless/Google/src/python-datastore/.nox/docs/lib/python3.8/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 50%] client
reading sources... [ 58%] entities
reading sources... [ 66%] helpers
reading sources... [ 75%] index
reading sources... [ 83%] keys
reading sources... [ 91%] queries
reading sources... [100%] transactions
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [ 8%] README
writing output... [ 16%] UPGRADING
writing output... [ 25%] admin_client
writing output... [ 33%] batches
writing output... [ 41%] changelog
writing output... [ 50%] client
writing output... [ 58%] entities
writing output... [ 66%] helpers
writing output... [ 75%] index
writing output... [ 83%] keys
writing output... [ 91%] queries
writing output... [100%] transactions
generating indices... genindex py-modindex done
highlighting module code... [ 12%] google.cloud.datastore.batch
highlighting module code... [ 25%] google.cloud.datastore.client
highlighting module code... [ 37%] google.cloud.datastore.entity
highlighting module code... [ 50%] google.cloud.datastore.helpers
highlighting module code... [ 62%] google.cloud.datastore.key
highlighting module code... [ 75%] google.cloud.datastore.query
highlighting module code... [ 87%] google.cloud.datastore.transaction
highlighting module code... [100%] google.cloud.datastore_admin_v1.services.datastore_admin.client
writing additional pages... search done
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build succeeded.
The HTML pages are in docs/_build/html.
nox > Session docs was successful.
real 0m7.417s
user 0m6.868s
sys 0m0.355s
```
Given that the system tests run twice, at ~1.5 minutes each, ISTM it would be good to break them out into a separate Kokoro job, running in parallel with the other tests.
This change will require updates to the `google3` internal configuration for Kokoro, similar to those @tswast made to enable them for https://github.com/googleapis/python-bigtable/pull/390.
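For illustration, here is a hypothetical `noxfile.py` excerpt showing the shape of the split. The real change also needs the internal Kokoro job config, and the `GOOGLE_CLOUD_DISABLE_GRPC` variable name is an assumption here:
```python
# Hypothetical noxfile.py excerpt: keep the system tests in their own
# sessions so a dedicated Kokoro job can run just `nox -s system`,
# in parallel with the presubmit job running lint/unit/cover/docs.
import nox

@nox.session(python="3.8")
@nox.parametrize("disable_grpc", [False, True])
def system(session, disable_grpc):
    session.install("--pre", "grpcio")
    session.install("mock", "pytest", "google-cloud-testutils")
    session.install("-e", ".")
    env = {"GOOGLE_CLOUD_DISABLE_GRPC": "True"} if disable_grpc else {}
    session.run("py.test", "--quiet", "tests/system", env=env)
```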
|
1.0
|
Split out system tests into separate Kokoro job - Working to reduce CI latency. Here are timings on my local machine (note the pre-run with `--install-only` to avoid measuring virtualenv creation time):
```bash
$ for job in $(nox --list | grep "^\*" | cut -d " " -f 2); do
echo $job;
nox -e $job --install-only;
time nox -re $job;
done
nox > Running session lint
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint
nox > python -m pip install flake8 black==19.10b0
nox > Skipping black run, as --install-only is set.
nox > Skipping flake8 run, as --install-only is set.
nox > Session lint was successful.
nox > Running session lint
nox > Re-using existing virtual environment at .nox/lint.
nox > python -m pip install flake8 black==19.10b0
nox > black --check docs google tests noxfile.py setup.py
All done! ✨ 🍰 ✨
64 files would be left unchanged.
nox > flake8 google tests
nox > Session lint was successful.
real 0m2.613s
user 0m6.705s
sys 0m0.242s
blacken
nox > Running session blacken
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/blacken
nox > python -m pip install black==19.10b0
nox > Skipping black run, as --install-only is set.
nox > Session blacken was successful.
nox > Running session blacken
nox > Re-using existing virtual environment at .nox/blacken.
nox > python -m pip install black==19.10b0
nox > black docs google tests noxfile.py setup.py
All done! ✨ 🍰 ✨
64 files left unchanged.
nox > Session blacken was successful.
real 0m0.912s
user 0m0.834s
sys 0m0.081s
lint_setup_py
nox > Running session lint_setup_py
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/lint_setup_py
nox > python -m pip install docutils pygments
nox > Skipping python run, as --install-only is set.
nox > Session lint_setup_py was successful.
nox > Running session lint_setup_py
nox > Re-using existing virtual environment at .nox/lint_setup_py.
nox > python -m pip install docutils pygments
nox > python setup.py check --restructuredtext --strict
running check
nox > Session lint_setup_py was successful.
real 0m1.064s
user 0m0.922s
sys 0m0.142s
unit-3.6
nox > Running session unit-3.6
nox > Creating virtual environment (virtualenv) using python3.6 in .nox/unit-3-6
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.6 was successful.
nox > Running session unit-3.6
nox > Re-using existing virtual environment at .nox/unit-3-6.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.6.txt
nox > py.test --quiet --junitxml=unit_3.6_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 12%]
........................................................................ [ 24%]
........................................................................ [ 37%]
........................................................................ [ 49%]
........................................................................ [ 62%]
........................................................................ [ 74%]
........................................................................ [ 87%]
........................................................................ [ 99%]
.. [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/unit_3.6_sponge_log.xml -
578 passed in 5.67s
nox > Session unit-3.6 was successful.
real 0m9.988s
user 0m9.432s
sys 0m0.538s
unit-3.7
nox > Running session unit-3.7
nox > Creating virtual environment (virtualenv) using python3.7 in .nox/unit-3-7
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.7 was successful.
nox > Running session unit-3.7
nox > Re-using existing virtual environment at .nox/unit-3-7.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.7.txt
nox > py.test --quiet --junitxml=unit_3.7_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 12%]
........................................................................ [ 24%]
........................................................................ [ 37%]
........................................................................ [ 49%]
........................................................................ [ 62%]
........................................................................ [ 74%]
........................................................................ [ 87%]
........................................................................ [ 99%]
.. [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/unit_3.7_sponge_log.xml -
578 passed in 5.54s
nox > Session unit-3.7 was successful.
real 0m9.483s
user 0m8.993s
sys 0m0.482s
unit-3.8
nox > Running session unit-3.8
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/unit-3-8
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.8 was successful.
nox > Running session unit-3.8
nox > Re-using existing virtual environment at .nox/unit-3-8.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > py.test --quiet --junitxml=unit_3.8_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 12%]
........................................................................ [ 24%]
........................................................................ [ 37%]
........................................................................ [ 49%]
........................................................................ [ 62%]
........................................................................ [ 74%]
........................................................................ [ 87%]
........................................................................ [ 99%]
.. [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/unit_3.8_sponge_log.xml -
578 passed in 5.29s
nox > Session unit-3.8 was successful.
real 0m8.927s
user 0m8.447s
sys 0m0.473s
unit-3.9
nox > Running session unit-3.9
nox > Creating virtual environment (virtualenv) using python3.9 in .nox/unit-3-9
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session unit-3.9 was successful.
nox > Running session unit-3.9
nox > Re-using existing virtual environment at .nox/unit-3-9.
nox > python -m pip install asyncmock pytest-asyncio -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > python -m pip install mock pytest pytest-cov -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.9.txt
nox > py.test --quiet --junitxml=unit_3.9_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 12%]
........................................................................ [ 24%]
........................................................................ [ 37%]
........................................................................ [ 49%]
........................................................................ [ 62%]
........................................................................ [ 74%]
........................................................................ [ 87%]
........................................................................ [ 99%]
.. [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/unit_3.9_sponge_log.xml -
578 passed in 5.72s
nox > Session unit-3.9 was successful.
real 0m9.258s
user 0m8.727s
sys 0m0.521s
system-3.8(disable_grpc=False)
nox > Running session system-3.8(disable_grpc=False)
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/system-3-8-disable_grpc-false
nox > python -m pip install --pre grpcio
nox > python -m pip install mock pytest google-cloud-testutils -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session system-3.8(disable_grpc=False) was successful.
nox > Running session system-3.8(disable_grpc=False)
nox > Re-using existing virtual environment at .nox/system-3-8-disable_grpc-false.
nox > python -m pip install --pre grpcio
nox > python -m pip install mock pytest google-cloud-testutils -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > py.test --quiet --junitxml=system_3.8_sponge_log.xml tests/system
............................... [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/system_3.8_sponge_log.xml -
31 passed in 80.59s (0:01:20)
nox > Session system-3.8(disable_grpc=False) was successful.
real 1m23.987s
user 0m11.917s
sys 0m2.127s
system-3.8(disable_grpc=True)
nox > Running session system-3.8(disable_grpc=True)
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/system-3-8-disable_grpc-true
nox > python -m pip install --pre grpcio
nox > python -m pip install mock pytest google-cloud-testutils -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > Skipping py.test run, as --install-only is set.
nox > Session system-3.8(disable_grpc=True) was successful.
nox > Running session system-3.8(disable_grpc=True)
nox > Re-using existing virtual environment at .nox/system-3-8-disable_grpc-true.
nox > python -m pip install --pre grpcio
nox > python -m pip install mock pytest google-cloud-testutils -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > python -m pip install -e . -c /home/tseaver/projects/agendaless/Google/src/python-datastore/testing/constraints-3.8.txt
nox > py.test --quiet --junitxml=system_3.8_sponge_log.xml tests/system
............................... [100%]
- generated xml file: /home/tseaver/projects/agendaless/Google/src/python-datastore/system_3.8_sponge_log.xml -
31 passed in 63.67s (0:01:03)
nox > Session system-3.8(disable_grpc=True) was successful.
real 1m7.158s
user 0m11.968s
sys 0m0.528s
cover
nox > Running session cover
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/cover
nox > python -m pip install coverage pytest-cov
nox > Skipping coverage run, as --install-only is set.
nox > Skipping coverage run, as --install-only is set.
nox > Session cover was successful.
nox > Running session cover
nox > Re-using existing virtual environment at .nox/cover.
nox > python -m pip install coverage pytest-cov
nox > coverage report --show-missing --fail-under=100
Name Stmts Miss Branch BrPart Cover Missing
---------------------------------------------------------------------------------------------------------------------------------
google/cloud/datastore/__init__.py 9 0 0 0 100%
google/cloud/datastore/_app_engine_key_pb2.py 23 0 2 0 100%
google/cloud/datastore/_gapic.py 15 0 2 0 100%
google/cloud/datastore/_http.py 72 0 12 0 100%
google/cloud/datastore/batch.py 106 0 32 0 100%
google/cloud/datastore/client.py 230 0 118 0 100%
google/cloud/datastore/entity.py 23 0 6 0 100%
google/cloud/datastore/helpers.py 196 0 130 0 100%
google/cloud/datastore/key.py 202 0 76 0 100%
google/cloud/datastore/query.py 205 0 64 0 100%
google/cloud/datastore/transaction.py 51 0 10 0 100%
google/cloud/datastore/version.py 1 0 0 0 100%
google/cloud/datastore_admin_v1/services/__init__.py 0 0 0 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/__init__.py 3 0 0 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/async_client.py 90 0 20 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/client.py 178 0 58 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/pagers.py 42 0 10 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/transports/__init__.py 9 0 0 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/transports/base.py 46 0 8 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/transports/grpc.py 71 0 20 0 100%
google/cloud/datastore_admin_v1/services/datastore_admin/transports/grpc_asyncio.py 74 0 20 0 100%
google/cloud/datastore_admin_v1/types/__init__.py 3 0 0 0 100%
google/cloud/datastore_admin_v1/types/datastore_admin.py 74 0 0 0 100%
google/cloud/datastore_admin_v1/types/index.py 27 0 0 0 100%
google/cloud/datastore_v1/services/__init__.py 0 0 0 0 100%
google/cloud/datastore_v1/services/datastore/__init__.py 3 0 0 0 100%
google/cloud/datastore_v1/services/datastore/async_client.py 123 0 40 0 100%
google/cloud/datastore_v1/services/datastore/client.py 214 0 84 0 100%
google/cloud/datastore_v1/services/datastore/transports/__init__.py 9 0 0 0 100%
google/cloud/datastore_v1/services/datastore/transports/base.py 49 0 8 0 100%
google/cloud/datastore_v1/services/datastore/transports/grpc.py 78 0 24 0 100%
google/cloud/datastore_v1/services/datastore/transports/grpc_asyncio.py 81 0 24 0 100%
google/cloud/datastore_v1/types/__init__.py 4 0 0 0 100%
google/cloud/datastore_v1/types/datastore.py 76 0 0 0 100%
google/cloud/datastore_v1/types/entity.py 35 0 0 0 100%
google/cloud/datastore_v1/types/query.py 80 0 0 0 100%
tests/unit/__init__.py 0 0 0 0 100%
tests/unit/gapic/datastore_admin_v1/__init__.py 0 0 0 0 100%
tests/unit/gapic/datastore_admin_v1/test_datastore_admin.py 560 0 20 0 100%
tests/unit/gapic/datastore_v1/__init__.py 0 0 0 0 100%
tests/unit/gapic/datastore_v1/test_datastore.py 691 0 6 0 100%
tests/unit/test__gapic.py 32 0 2 0 100%
tests/unit/test__http.py 522 0 58 0 100%
tests/unit/test_batch.py 377 0 12 0 100%
tests/unit/test_client.py 1025 0 54 0 100%
tests/unit/test_entity.py 167 0 0 0 100%
tests/unit/test_helpers.py 741 0 14 0 100%
tests/unit/test_key.py 535 0 4 0 100%
tests/unit/test_query.py 583 0 24 0 100%
tests/unit/test_transaction.py 276 0 6 0 100%
---------------------------------------------------------------------------------------------------------------------------------
TOTAL 8011 0 968 0 100%
nox > coverage erase
nox > Session cover was successful.
real 0m2.396s
user 0m2.273s
sys 0m0.125s
docs
nox > Running session docs
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/docs
nox > python -m pip install -e .
nox > python -m pip install sphinx==4.0.1 alabaster recommonmark
nox > Skipping sphinx-build run, as --install-only is set.
nox > Session docs was successful.
nox > Running session docs
nox > Re-using existing virtual environment at .nox/docs.
nox > python -m pip install -e .
nox > python -m pip install sphinx==4.0.1 alabaster recommonmark
nox > sphinx-build -W -T -N -b html -d docs/_build/doctrees/ docs/ docs/_build/html/
Running Sphinx v4.0.1
making output directory... done
[autosummary] generating autosummary for: README.rst, UPGRADING.md, admin_client.rst, batches.rst, changelog.md, client.rst, entities.rst, helpers.rst, index.rst, keys.rst, queries.rst, transactions.rst
loading intersphinx inventory from https://python.readthedocs.org/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-auth/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-api-core/latest/objects.inv...
loading intersphinx inventory from https://grpc.github.io/grpc/python/objects.inv...
loading intersphinx inventory from https://proto-plus-python.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/protobuf/latest/objects.inv...
intersphinx inventory has moved: https://python.readthedocs.org/en/latest/objects.inv -> https://python.readthedocs.io/en/latest/objects.inv
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 12 source files that are out of date
updating environment: [new config] 12 added, 0 changed, 0 removed
reading sources... [ 8%] README
reading sources... [ 16%] UPGRADING
/home/tseaver/projects/agendaless/Google/src/python-datastore/.nox/docs/lib/python3.8/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 25%] admin_client
reading sources... [ 33%] batches
reading sources... [ 41%] changelog
/home/tseaver/projects/agendaless/Google/src/python-datastore/.nox/docs/lib/python3.8/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 50%] client
reading sources... [ 58%] entities
reading sources... [ 66%] helpers
reading sources... [ 75%] index
reading sources... [ 83%] keys
reading sources... [ 91%] queries
reading sources... [100%] transactions
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [ 8%] README
writing output... [ 16%] UPGRADING
writing output... [ 25%] admin_client
writing output... [ 33%] batches
writing output... [ 41%] changelog
writing output... [ 50%] client
writing output... [ 58%] entities
writing output... [ 66%] helpers
writing output... [ 75%] index
writing output... [ 83%] keys
writing output... [ 91%] queries
writing output... [100%] transactions
generating indices... genindex py-modindex done
highlighting module code... [ 12%] google.cloud.datastore.batch
highlighting module code... [ 25%] google.cloud.datastore.client
highlighting module code... [ 37%] google.cloud.datastore.entity
highlighting module code... [ 50%] google.cloud.datastore.helpers
highlighting module code... [ 62%] google.cloud.datastore.key
highlighting module code... [ 75%] google.cloud.datastore.query
highlighting module code... [ 87%] google.cloud.datastore.transaction
highlighting module code... [100%] google.cloud.datastore_admin_v1.services.datastore_admin.client
writing additional pages... search done
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build succeeded.
The HTML pages are in docs/_build/html.
nox > Session docs was successful.
real 0m7.417s
user 0m6.868s
sys 0m0.355s
```
Given that the system tests run twice, at ~1.5 minutes each, ISTM it would be good to break them out into a separate Kokoro job, running in parallel with the other tests.
This change will require updates to the `google3` internal configuration for Kokoro, similar to those @tswast made to enable them for https://github.com/googleapis/python-bigtable/pull/390.
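A minimal sketch of how that split might look in a `noxfile.py`, assuming the default session list is what the main Kokoro presubmit runs; the session layout and the `GOOGLE_CLOUD_DISABLE_GRPC` toggle are illustrative assumptions, not this repo's actual configuration:
```python
# Hypothetical noxfile.py excerpt: keep the slow system tests out of the
# default session list so a dedicated Kokoro job can run them in parallel
# via `nox -s system`.
import nox

# Plain `nox` now runs only the fast sessions.
nox.options.sessions = ["lint", "blacken", "lint_setup_py", "unit", "cover", "docs"]


@nox.session(python="3.8")
@nox.parametrize("disable_grpc", [False, True])
def system(session, disable_grpc):
    """System tests against a live backend (~1.5 minutes per run above)."""
    session.install("--pre", "grpcio")
    session.install("mock", "pytest", "google-cloud-testutils")
    session.install("-e", ".")
    env = {"GOOGLE_CLOUD_DISABLE_GRPC": "true"} if disable_grpc else {}
    session.run("py.test", "--quiet", "tests/system", env=env)
```
With the defaults trimmed this way, the dedicated Kokoro job could invoke `nox -s system` on its own worker while the existing job runs the remaining sessions, so neither waits on the other.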
|
process
|
split out system tests into separate kokoro job working to reduce ci latency here are timings on my local machine note the pre run with install only to avoid measuring virtualenv creation time bash for job in nox list grep cut d f do echo job nox e job install only time nox re job done nox running session lint nox creating virtual environment virtualenv using in nox lint nox python m pip install black nox skipping black run as install only is set nox skipping run as install only is set nox session lint was successful nox running session lint nox re using existing virtual environment at nox lint nox python m pip install black nox black check docs google tests noxfile py setup py all done ✨ 🍰 ✨ files would be left unchanged nox google tests nox session lint was successful real user sys blacken nox running session blacken nox creating virtual environment virtualenv using in nox blacken nox python m pip install black nox skipping black run as install only is set nox session blacken was successful nox running session blacken nox re using existing virtual environment at nox blacken nox python m pip install black nox black docs google tests noxfile py setup py all done ✨ 🍰 ✨ files left unchanged nox session blacken was successful real user sys lint setup py nox running session lint setup py nox creating virtual environment virtualenv using in nox lint setup py nox python m pip install docutils pygments nox skipping python run as install only is set nox session lint setup py was successful nox running session lint setup py nox re using existing virtual environment at nox lint setup py nox python m pip install docutils pygments nox python setup py check restructuredtext strict running check nox session lint setup py was successful real user sys unit nox running session unit nox creating virtual environment virtualenv using in nox unit nox python m pip install asyncmock pytest asyncio c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install mock pytest pytest cov c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore testing constraints txt nox skipping py test run as install only is set nox session unit was successful nox running session unit nox re using existing virtual environment at nox unit nox python m pip install asyncmock pytest asyncio c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install mock pytest pytest cov c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore testing constraints txt nox py test quiet junitxml unit sponge log xml cov google cloud cov tests unit cov append cov config coveragerc cov report cov fail under tests unit generated xml file home tseaver projects agendaless google src python datastore unit sponge log xml passed in nox session unit was successful real user sys unit nox running session unit nox creating virtual environment virtualenv using in nox unit nox python m pip install asyncmock pytest asyncio c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install mock pytest pytest cov c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore 
testing constraints txt nox skipping py test run as install only is set nox session unit was successful nox running session unit nox re using existing virtual environment at nox unit nox python m pip install asyncmock pytest asyncio c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install mock pytest pytest cov c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore testing constraints txt nox py test quiet junitxml unit sponge log xml cov google cloud cov tests unit cov append cov config coveragerc cov report cov fail under tests unit generated xml file home tseaver projects agendaless google src python datastore unit sponge log xml passed in nox session unit was successful real user sys unit nox running session unit nox creating virtual environment virtualenv using in nox unit nox python m pip install asyncmock pytest asyncio c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install mock pytest pytest cov c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore testing constraints txt nox skipping py test run as install only is set nox session unit was successful nox running session unit nox re using existing virtual environment at nox unit nox python m pip install asyncmock pytest asyncio c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install mock pytest pytest cov c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore testing constraints txt nox py test quiet junitxml unit sponge log xml cov google cloud cov tests unit cov append cov config coveragerc cov report cov fail under tests unit generated xml file home tseaver projects agendaless google src python datastore unit sponge log xml passed in nox session unit was successful real user sys unit nox running session unit nox creating virtual environment virtualenv using in nox unit nox python m pip install asyncmock pytest asyncio c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install mock pytest pytest cov c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore testing constraints txt nox skipping py test run as install only is set nox session unit was successful nox running session unit nox re using existing virtual environment at nox unit nox python m pip install asyncmock pytest asyncio c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install mock pytest pytest cov c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore testing constraints txt nox py test quiet junitxml unit sponge log xml cov google cloud cov tests unit cov append cov config coveragerc cov report cov fail under tests unit generated xml file home tseaver projects agendaless google src python datastore unit sponge log xml passed in nox session unit was successful real user sys system disable grpc 
false nox running session system disable grpc false nox creating virtual environment virtualenv using in nox system disable grpc false nox python m pip install pre grpcio nox python m pip install mock pytest google cloud testutils c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore testing constraints txt nox skipping py test run as install only is set nox session system disable grpc false was successful nox running session system disable grpc false nox re using existing virtual environment at nox system disable grpc false nox python m pip install pre grpcio nox python m pip install mock pytest google cloud testutils c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore testing constraints txt nox py test quiet junitxml system sponge log xml tests system generated xml file home tseaver projects agendaless google src python datastore system sponge log xml passed in nox session system disable grpc false was successful real user sys system disable grpc true nox running session system disable grpc true nox creating virtual environment virtualenv using in nox system disable grpc true nox python m pip install pre grpcio nox python m pip install mock pytest google cloud testutils c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore testing constraints txt nox skipping py test run as install only is set nox session system disable grpc true was successful nox running session system disable grpc true nox re using existing virtual environment at nox system disable grpc true nox python m pip install pre grpcio nox python m pip install mock pytest google cloud testutils c home tseaver projects agendaless google src python datastore testing constraints txt nox python m pip install e c home tseaver projects agendaless google src python datastore testing constraints txt nox py test quiet junitxml system sponge log xml tests system generated xml file home tseaver projects agendaless google src python datastore system sponge log xml passed in nox session system disable grpc true was successful real user sys cover nox running session cover nox creating virtual environment virtualenv using in nox cover nox python m pip install coverage pytest cov nox skipping coverage run as install only is set nox skipping coverage run as install only is set nox session cover was successful nox running session cover nox re using existing virtual environment at nox cover nox python m pip install coverage pytest cov nox coverage report show missing fail under name stmts miss branch brpart cover missing google cloud datastore init py google cloud datastore app engine key py google cloud datastore gapic py google cloud datastore http py google cloud datastore batch py google cloud datastore client py google cloud datastore entity py google cloud datastore helpers py google cloud datastore key py google cloud datastore query py google cloud datastore transaction py google cloud datastore version py google cloud datastore admin services init py google cloud datastore admin services datastore admin init py google cloud datastore admin services datastore admin async client py google cloud datastore admin services datastore admin client py google cloud datastore admin 
services datastore admin pagers py google cloud datastore admin services datastore admin transports init py google cloud datastore admin services datastore admin transports base py google cloud datastore admin services datastore admin transports grpc py google cloud datastore admin services datastore admin transports grpc asyncio py google cloud datastore admin types init py google cloud datastore admin types datastore admin py google cloud datastore admin types index py google cloud datastore services init py google cloud datastore services datastore init py google cloud datastore services datastore async client py google cloud datastore services datastore client py google cloud datastore services datastore transports init py google cloud datastore services datastore transports base py google cloud datastore services datastore transports grpc py google cloud datastore services datastore transports grpc asyncio py google cloud datastore types init py google cloud datastore types datastore py google cloud datastore types entity py google cloud datastore types query py tests unit init py tests unit gapic datastore admin init py tests unit gapic datastore admin test datastore admin py tests unit gapic datastore init py tests unit gapic datastore test datastore py tests unit test gapic py tests unit test http py tests unit test batch py tests unit test client py tests unit test entity py tests unit test helpers py tests unit test key py tests unit test query py tests unit test transaction py total nox coverage erase nox session cover was successful real user sys docs nox running session docs nox creating virtual environment virtualenv using in nox docs nox python m pip install e nox python m pip install sphinx alabaster recommonmark nox skipping sphinx build run as install only is set nox session docs was successful nox running session docs nox re using existing virtual environment at nox docs nox python m pip install e nox python m pip install sphinx alabaster recommonmark nox sphinx build w t n b html d docs build doctrees docs docs build html running sphinx making output directory done generating autosummary for readme rst upgrading md admin client rst batches rst changelog md client rst entities rst helpers rst index rst keys rst queries rst transactions rst loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from intersphinx inventory has moved building targets for po files that are out of date building targets for source files that are out of date updating environment added changed removed reading sources readme reading sources upgrading home tseaver projects agendaless google src python datastore nox docs lib site packages recommonmark parser py userwarning container node skipped type document warn container node skipped type format mdnode t reading sources admin client reading sources batches reading sources changelog home tseaver projects agendaless google src python datastore nox docs lib site packages recommonmark parser py userwarning container node skipped type document warn container node skipped type format mdnode t reading sources client reading sources entities reading sources helpers reading sources index reading sources keys reading sources queries reading sources transactions looking for now outdated files none found pickling environment done checking consistency done preparing documents done writing output readme writing 
output upgrading writing output admin client writing output batches writing output changelog writing output client writing output entities writing output helpers writing output index writing output keys writing output queries writing output transactions generating indices genindex py modindex done highlighting module code google cloud datastore batch highlighting module code google cloud datastore client highlighting module code google cloud datastore entity highlighting module code google cloud datastore helpers highlighting module code google cloud datastore key highlighting module code google cloud datastore query highlighting module code google cloud datastore transaction highlighting module code google cloud datastore admin services datastore admin client writing additional pages search done copying static files done copying extra files done dumping search index in english code en done dumping object inventory done build succeeded the html pages are in docs build html nox session docs was successful real user sys given that the system tests run twice in minutes each istm it would be good to break them out into a separate kokoro job running in parallel with the other test this change will require updates to the internal configuration for kokoro similar to those tswast made to enable them for
| 1
|
12,723
| 14,995,835,267
|
IssuesEvent
|
2021-01-29 14:49:46
|
samhocevar/wincompose
|
https://api.github.com/repos/samhocevar/wincompose
|
closed
|
WinCompose does not work in Blender
|
bug in 3rd party incompatibility keyboard hook
|
When entering a compose sequence into a text field in [Blender](https://www.blender.org/), nothing happens. These fields do accept Unicode, as copying and pasting from another source works perfectly.
|
True
|
WinCompose does not work in Blender - When entering a compose sequence into a text field in [Blender](https://www.blender.org/), nothing happens. These fields do accept Unicode, as copying and pasting from another source works perfectly.
|
non_process
|
wincompose does not work in blender when entering a compose sequence into a text field in nothing happens these fields do accept unicode as copy and pasting from another source works perfectly
| 0
|
481
| 2,911,386,909
|
IssuesEvent
|
2015-06-22 09:14:14
|
haskell-distributed/distributed-process
|
https://api.github.com/repos/haskell-distributed/distributed-process
|
closed
|
Add "cookie" or other identification mechanism to SimpleLocalnet
|
distributed-process-simplelocalnet Feature Request
|
so that we have multiple independent Cloud Haskell applications running on the same network.
|
1.0
|
Add "cookie" or other identification mechanism to SimpleLocalnet - so that we have multiple independent Cloud Haskell applications running on the same network.
|
process
|
add cookie or other identification mechanism to simplelocalnet so that we have multiple independent cloud haskell applications running on the same network
| 1
|
21,675
| 30,120,354,812
|
IssuesEvent
|
2023-06-30 14:42:15
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Persistent processes should be using machine scope consistently
|
bug terminal-process
|
Several of the storage service calls don't use the machine scope:
https://github.com/microsoft/vscode/blob/9b710048023e546fbdfe9ef34dbfebf10fd71d27/src/vs/workbench/contrib/terminal/electron-sandbox/localTerminalBackend.ts#L270-L299
Found while investigating https://github.com/microsoft/vscode/issues/133542
|
1.0
|
Persistent processes should be using machine scope consistently - Several of the storage service calls don't use the machine scope:
https://github.com/microsoft/vscode/blob/9b710048023e546fbdfe9ef34dbfebf10fd71d27/src/vs/workbench/contrib/terminal/electron-sandbox/localTerminalBackend.ts#L270-L299
Found while investigating https://github.com/microsoft/vscode/issues/133542
|
process
|
process persistent should be using machine scope consistently several of storage service don t use machine found while investigating
| 1
|
219,572
| 24,501,523,332
|
IssuesEvent
|
2022-10-10 13:10:24
|
nidhi7598/linux-3.0.35
|
https://api.github.com/repos/nidhi7598/linux-3.0.35
|
opened
|
CVE-2018-9517 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2018-9517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In pppol2tp_connect, there is possible memory corruption due to a use after free. This could lead to local escalation of privilege with System execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android kernel. Android ID: A-38159931.
<p>Publish Date: 2018-12-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-9517>CVE-2018-9517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-9517">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-9517</a></p>
<p>Release Date: 2018-12-07</p>
<p>Fix Resolution: v4.14</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-9517 (High) detected in multiple libraries - ## CVE-2018-9517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b>, <b>linuxlinux-3.0.40</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In pppol2tp_connect, there is possible memory corruption due to a use after free. This could lead to local escalation of privilege with System execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android kernel. Android ID: A-38159931.
<p>Publish Date: 2018-12-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-9517>CVE-2018-9517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-9517">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-9517</a></p>
<p>Release Date: 2018-12-07</p>
<p>Fix Resolution: v4.14</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries linuxlinux linuxlinux linuxlinux vulnerability details in connect there is possible memory corruption due to a use after free this could lead to local escalation of privilege with system execution privileges needed user interaction is not needed for exploitation product android versions android kernel android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
21,368
| 29,194,080,814
|
IssuesEvent
|
2023-05-20 00:31:52
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Remote] Data Analyst at Coodesh
|
SALVADOR PJ BANCO DE DADOS PYTHON SCRUM SQL PL/SQL ETL REQUISITOS REMOTO PROCESSOS GITHUB UMA BI SCRIPT MANUTENÇÃO NEGÓCIOS DATA WAREHOUSE Stale
|
## Job description:
This is an opening from a partner of the Coodesh platform; by applying you will get access to the complete information about the company and its benefits.
Watch for the redirect that will take you to the url [https://coodesh.com](https://coodesh.com/vagas/data-analyst-141612791?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p><strong>Base Service</strong> is looking for a <strong><ins>Data Analyst</ins></strong> to join its team!</p>
<p>We operate in the financial-technology market, offering strategic consulting, growth mentoring, and funding to accelerate the growth of businesses in the payments segment, combining technology, human expertise, and innovative management.</p>
<p></p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop and maintain PL/SQL queries, procedures, views, functions, and triggers;</li>
<li>Tune query performance;</li>
<li>Import and export data;</li>
<li>Work on development and analysis of ETL processes;</li>
<li>Automate and improve recurring processes;</li>
<li>Handle large volumes of structured data;</li>
<li>Propose improvements to the business rules gathered with the client;</li>
<li>Develop PL/SQL using the SQL Developer and PL/SQL Developer tools;</li>
<li>Analyze, write, and validate SQL scripts;</li>
<li>Perform data extraction and analysis.</li>
</ul>
## Base:
<p>We are a holding company operating in the financial-technology market, offering strategic consulting, growth mentoring, and funding to accelerate the growth of businesses in the payments segment, combining technology, human expertise, and innovative management.</p>
</p>
## Skills:
- PL/SQL
- SCRUM
- Python
- BI
- Database Views
- Database Triggers
- ETL
## Location:
100% Remote
## Requirements:
- Knowledge of PL/SQL;
- Knowledge of the Scrum methodology;
- Data modeling.
## Nice to have:
- Familiarity with the DBT development framework;
- Some knowledge of the Python programming language;
- Cloud computing;
- Cloud data warehouses and BI tools.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Data Analyst at Base](https://coodesh.com/vagas/data-analyst-141612791?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow the process and receive all of its interactions there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the position you applied to. This will notify the **Recruiter** responsible for the company's hiring process.
## Labels
#### Allocation
Remote
#### Contract type
PJ
#### Category
Databases
|
1.0
|
[Remote] Data Analyst at Coodesh - ## Job description:
This is an opening from a partner of the Coodesh platform; by applying you will get access to the complete information about the company and its benefits.
Watch for the redirect that will take you to the url [https://coodesh.com](https://coodesh.com/vagas/data-analyst-141612791?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p><strong>Base Service</strong> is looking for a <strong><ins>Data Analyst</ins></strong> to join its team!</p>
<p>We operate in the financial-technology market, offering strategic consulting, growth mentoring, and funding to accelerate the growth of businesses in the payments segment, combining technology, human expertise, and innovative management.</p>
<p></p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop and maintain PL/SQL queries, procedures, views, functions, and triggers;</li>
<li>Tune query performance;</li>
<li>Import and export data;</li>
<li>Work on development and analysis of ETL processes;</li>
<li>Automate and improve recurring processes;</li>
<li>Handle large volumes of structured data;</li>
<li>Propose improvements to the business rules gathered with the client;</li>
<li>Develop PL/SQL using the SQL Developer and PL/SQL Developer tools;</li>
<li>Analyze, write, and validate SQL scripts;</li>
<li>Perform data extraction and analysis.</li>
</ul>
## Base:
<p>We are a holding company operating in the financial-technology market, offering strategic consulting, growth mentoring, and funding to accelerate the growth of businesses in the payments segment, combining technology, human expertise, and innovative management.</p>
</p>
## Skills:
- PL/SQL
- SCRUM
- Python
- BI
- Database Views
- Database Triggers
- ETL
## Location:
100% Remote
## Requirements:
- Knowledge of PL/SQL;
- Knowledge of the Scrum methodology;
- Data modeling.
## Nice to have:
- Familiarity with the DBT development framework;
- Some knowledge of the Python programming language;
- Cloud computing;
- Cloud data warehouses and BI tools.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Data Analyst at Base](https://coodesh.com/vagas/data-analyst-141612791?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow the process and receive all of its interactions there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the position you applied to. This will notify the **Recruiter** responsible for the company's hiring process.
## Labels
#### Allocation
Remote
#### Contract type
PJ
#### Category
Databases
|
process
|
data analyst na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a base service está em busca de data analyst para compor seu time atuamos no mercado financeiro tecnológico oferecendo consultoria estratégica mentoria de crescimento e aporte para acelerar o crescimento de negócios do segmento de meios de pagamentos aliando tecnologia expertise humana e gestão inovadora responsabilidades desenvolver e dar manutenção de consultas pl sql procedures views functions e triggers tuning de performance de queries importação e exportação de dados atuar com desenvolvimento e análise em processos etl automatização e aperfeiçoamento de processos recorrentes manipulação de grandes volumes de dados estruturados propor melhorias nas regras de negócio levantadas junto ao cliente desenvolvimento pl sql utilizando as ferramentas sql developer e pl sql developer analisar elaborar e validar script sql realizar extração e análise de dados base somos uma holding que atua no mercado financeiro tecnológico oferecendo consultoria estratégica mentoria de crescimento e aporte para acelerar o crescimento de negócios do segmento de meios de pagamentos aliando tecnologia expertise humana e gestão inovadora habilidades pl sql scrum python bi database views database triggers etl local remoto requisitos conhecimento em pl sql conhecimento metodologia scrum modelagem diferenciais conhecer o framework de desenvolvimento dbt algum conhecimento em linguagem de programação python computação em nuvem cloud data warehouse e ferramentas de bi como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto regime pj categoria banco de dados
| 1
|
464,056
| 13,305,481,284
|
IssuesEvent
|
2020-08-25 18:36:58
|
SynBioDex/SBOLExplorer
|
https://api.github.com/repos/SynBioDex/SBOLExplorer
|
closed
|
Add to SynBioHub admin the ability to set the update interval
|
enhancement priority
|
This should update the cron job that runs the clustering, page rank, and indexing steps.
|
1.0
|
Add to SynBioHub admin the ability to set the update interval - This should update the cron job that runs the clustering, page rank, and indexing steps.
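A minimal sketch of how the admin-chosen interval could be applied, assuming the pipeline is triggered from a user crontab and using the `python-crontab` package; the command path and comment tag below are hypothetical:
```python
# Hypothetical helper: rewrite the cron entry that runs the clustering,
# page rank, and indexing pipeline at an admin-chosen interval (in hours).
from crontab import CronTab  # pip install python-crontab


def set_update_interval(hours: int) -> None:
    cron = CronTab(user=True)
    # Remove any entry we previously installed, identified by its comment tag.
    cron.remove_all(comment="sbolexplorer-update")
    job = cron.new(
        command="python /opt/SBOLExplorer/update.py",  # hypothetical path
        comment="sbolexplorer-update",
    )
    job.setall(f"0 */{hours} * * *")  # e.g. hours=6 -> run every 6 hours
    cron.write()
```
The SynBioHub admin page could then simply validate the submitted interval and call this helper.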
|
non_process
|
add to synbiohub admin the ability to set the update interval this should update the cron job that runs the clustering page rank and index
| 0
|
222,890
| 24,711,433,164
|
IssuesEvent
|
2022-10-20 01:21:42
|
alexcorvi/apexo
|
https://api.github.com/repos/alexcorvi/apexo
|
opened
|
CVE-2022-3517 (High) detected in minimatch-3.0.4.tgz
|
security vulnerability
|
## CVE-2022-3517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- transform-pouch-1.1.5.tgz (Root Library)
- es3ify-0.2.2.tgz
- jstransform-11.0.3.tgz
- commoner-0.10.8.tgz
- glob-5.0.15.tgz
- :x: **minimatch-3.0.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alexcorvi/apexo/commit/7949f651007c28e1da9a0589a24114575a601e08">7949f651007c28e1da9a0589a24114575a601e08</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service.
<p>Publish Date: 2022-10-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517>CVE-2022-3517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-17</p>
<p>Fix Resolution: minimatch - 3.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-3517 (High) detected in minimatch-3.0.4.tgz - ## CVE-2022-3517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- transform-pouch-1.1.5.tgz (Root Library)
- es3ify-0.2.2.tgz
- jstransform-11.0.3.tgz
- commoner-0.10.8.tgz
- glob-5.0.15.tgz
- :x: **minimatch-3.0.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alexcorvi/apexo/commit/7949f651007c28e1da9a0589a24114575a601e08">7949f651007c28e1da9a0589a24114575a601e08</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service.
<p>Publish Date: 2022-10-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517>CVE-2022-3517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-17</p>
<p>Fix Resolution: minimatch - 3.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in minimatch tgz cve high severity vulnerability vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file package json path to vulnerable library node modules minimatch package json dependency hierarchy transform pouch tgz root library tgz jstransform tgz commoner tgz glob tgz x minimatch tgz vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability was found in the minimatch package this flaw allows a regular expression denial of service redos when calling the braceexpand function with specific arguments resulting in a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution minimatch step up your open source security game with mend
| 0
|
263,019
| 23,031,439,412
|
IssuesEvent
|
2022-07-22 14:16:52
|
nhn-on7/marketgg-shop
|
https://api.github.com/repos/nhn-on7/marketgg-shop
|
closed
|
Write additional repository TCs
|
Test
|
## Overview
1. Write repository TCs for the category classification table
2. Write repository TCs for categories
3. Review the TCs
## To-do
- [x] Write repository TCs for the category classification table
- [x] Write repository TCs for categories
- [x] Review
|
1.0
|
Write additional repository TCs - ## Overview
1. Write repository TCs for the category classification table
2. Write repository TCs for categories
3. Review the TCs
## To-do
- [x] Write repository TCs for the category classification table
- [x] Write repository TCs for categories
- [x] Review
|
non_process
|
repository tc 추가 작성 overview 카테고리 분류표 repository tc 작성 카테고리 repository tc 작성 tc 점검 to do 카테고리 분류표 repository tc 작성 카테고리 repository tc 작성 점검
| 0
|
14,097
| 16,987,911,682
|
IssuesEvent
|
2021-06-30 16:23:41
|
CesiumGS/cesium
|
https://api.github.com/repos/CesiumGS/cesium
|
closed
|
Postprocessing is initially disabled and will fail if enabled again
|
category - post-processing type - bug
|
<!--
Thanks for helping us improve Cesium! Please describe what the expected behavior is vs what actually happens.
Creating a Sandcastle example (https://cesiumjs.org/Cesium/Build/Apps/Sandcastle/) that reproduces the issue helps us a lot in tracking down bugs. Paste the link you get from the "Share" button in Sandcastle below.
-->
Click the example's "enable" checkbox.
Sandcastle example:
[https://sandcastle.cesium.com/#c=fVVRb+I4EP4rFk+hUAMt190rUN0VWN3qdrfVstp7iYScxIBbx45sh5au+t9vxgmQQO5aqfWMZ76Z+cYz2TJDtoK/cEMmRPEXMuVW5Cn96XVB2Iq9PNXKMaG4CVtd8itUhNiNzmXypxIpc/yWOJPzbqje26NQhWoLqJm2wgmtALfEnDLj4MTUNV0Znc742nBug8vB1TXtfxgObwa/d8lwSPu/9a8/9G8QCoFyIwEjbFHaW7A0k3zGHOulOuHS9gror0yVpyUc6VpGYQu9fRHUGRY/82SunHA7gCrVHGXBLWVJEviaFEuhFIjXRWlfwO3h5NU+8G1BAgFjcfR4rzOwMmydQpTFhiXcLHRuYk4maBi2ciVW2qTE+orM1YzEWmrzg7+63PARCQGkRTqFMYDthFqTLY+vyHbpCqOp1iYRCui3J/axVtYRoRz5e/792/zL8p/Psx9/QeWDm1NkLRKSQmcDPLXrl7/qIpbrM7COZwAWv6XLTLxy+Z0BOaTnFchtpo2jby+j/3CHvKDzTD5qi81oKIdcIstB01WXfKR9cuFzaDdHuCZsy4F6/pPJnGMI0AV92m+yhyaQAJkSYNgfwb9xjTTQdDrtc78GbmpwTwXc0xncUyPc/0DiT62gzoSUvFzNguqr6dao7RR9uvCkB6JLntptatbRqDnM+7m6QVVLpDchK6mZC2qv7KJWcBPna7n8BChTTL1ozzCoAnfJ4LxZZTIwXMXrhpl0j0bDerLHkbYxV5xWrhYOQIsJ9wN+XHCPJ0ZBOdD1ob1tHOJi2NuhOs46JvAVNwMk45G4YpHkCQAwaf1q9LZl9Gel42edu2I3BQfv/c5zWsuIITuJjnOMT9fczSXH4/3uM5TTKm3CFjqd4rIsk7t7oWBs1vaI390jo49/rBgN9x68nWMR7aIGsSJHV7ph9uFFAWcZN24XoFO7vd+DJ/ELJSGY9UNkudkiG9U8vPvBzOaRjY2IeJBnCcx5pTuYKbANBCJ/q1zF/qtyZheUuVS6T8smAI33UDZnqlJOeYfwgNwA5/sVqlbXP9qxdTvJ7/YZ/yFSXHO4+gP4MDkOaxz3Uy/K4VvjaLxPHDx7VddxIrZEJJOGLyuJJbMWbla5lAvxxsPW3bgH9meuMHXY2AcYGsl2aLYZ3H0plJTScQ/EZs/DozlUMnbIw0FERaSTXUWBKlOTUZPczQsGxz04n9+OhcpyR9wu41jshsfPkX6FKoFodhnB09yrcUrKbviKT+BAUY0OYi0/kCsF7KvGX5SLv/8C](https://sandcastle.cesium.com/#c=fVVRb+I4EP4rFk+hUAMt190rUN0VWN3qdrfVstp7iYScxIBbx45sh5au+t9vxgmQQO5aqfWMZ76Z+cYz2TJDtoK/cEMmRPEXMuVW5Cn96XVB2Iq9PNXKMaG4CVtd8itUhNiNzmXypxIpc/yWOJPzbqje26NQhWoLqJm2wgmtALfEnDLj4MTUNV0Znc742nBug8vB1TXtfxgObwa/d8lwSPu/9a8/9G8QCoFyIwEjbFHaW7A0k3zGHOulOuHS9gror0yVpyUc6VpGYQu9fRHUGRY/82SunHA7gCrVHGXBLWVJEviaFEuhFIjXRWlfwO3h5NU+8G1BAgFjcfR4rzOwMmydQpTFhiXcLHRuYk4maBi2ciVW2qTE+orM1YzEWmrzg7+63PARCQGkRTqFMYDthFqTLY+vyHbpCqOp1iYRCui3J/axVtYRoRz5e/792/zL8p/Psx9/QeWDm1NkLRKSQmcDPLXrl7/qIpbrM7COZwAWv6XLTLxy+Z0BOaTnFchtpo2jby+j/3CHvKDzTD5qi81oKIdcIstB01WXfKR9cuFzaDdHuCZsy4F6/pPJnGMI0AV92m+yhyaQAJkSYNgfwb9xjTTQdDrtc78GbmpwTwXc0xncUyPc/0DiT62gzoSUvFzNguqr6dao7RR9uvCkB6JLntptatbRqDnM+7m6QVVLpDchK6mZC2qv7KJWcBPna7n8BChTTL1ozzCoAnfJ4LxZZTIwXMXrhpl0j0bDerLHkbYxV5xWrhYOQIsJ9wN+XHCPJ0ZBOdD1ob1tHOJi2NuhOs46JvAVNwMk45G4YpHkCQAwaf1q9LZl9Gel42edu2I3BQfv/c5zWsuIITuJjnOMT9fczSXH4/3uM5TTKm3CFjqd4rIsk7t7oWBs1vaI390jo49/rBgN9x68nWMR7aIGsSJHV7ph9uFFAWcZN24XoFO7vd+DJ/ELJSGY9UNkudkiG9U8vPvBzOaRjY2IeJBnCcx5pTuYKbANBCJ/q1zF/qtyZheUuVS6T8smAI33UDZnqlJOeYfwgNwA5/sVqlbXP9qxdTvJ7/YZ/yFSXHO4+gP4MDkOaxz3Uy/K4VvjaLxPHDx7VddxIrZEJJOGLyuJJbMWbla5lAvxxsPW3bgH9meuMHXY2AcYGsl2aLYZ3H0plJTScQ/EZs/DozlUMnbIw0FERaSTXUWBKlOTUZPczQsGxz04n9+OhcpyR9wu41jshsfPkX6FKoFodhnB09yrcUrKbviKT+BAUY0OYi0/kCsF7KvGX5SLv/8C)
However, if the enable property is true initially, it's fine.
```diff
var viewer = new Cesium.Viewer("cesiumContainer", {
shouldAnimate: true,
});
var position = Cesium.Cartesian3.fromDegrees(-123.0744619, 44.0503706);
var url = "../SampleData/models/CesiumMan/Cesium_Man.glb";
viewer.trackedEntity = viewer.entities.add({
name: url,
position: position,
model: {
uri: url,
},
});
var fragmentShaderSource =
"uniform sampler2D colorTexture; \n" +
"varying vec2 v_textureCoordinates; \n" +
"const int KERNEL_WIDTH = 16; \n" +
"void main(void) \n" +
"{ \n" +
" vec2 step = czm_pixelRatio / czm_viewport.zw; \n" +
" vec2 integralPos = v_textureCoordinates - mod(v_textureCoordinates, 8.0 * step); \n" +
" vec3 averageValue = vec3(0.0); \n" +
" for (int i = 0; i < KERNEL_WIDTH; i++) \n" +
" { \n" +
" for (int j = 0; j < KERNEL_WIDTH; j++) \n" +
" { \n" +
" averageValue += texture2D(colorTexture, integralPos + step * vec2(i, j)).rgb; \n" +
" } \n" +
" } \n" +
" averageValue /= float(KERNEL_WIDTH * KERNEL_WIDTH); \n" +
" gl_FragColor = vec4(averageValue, 1.0); \n" +
"} \n";
const postProcess = viewer.scene.postProcessStages.add(
new Cesium.PostProcessStage({
fragmentShader: fragmentShaderSource,
})
);
var viewModel = {
- enabled: false,
+ enabled: true,
};
Cesium.knockout.track(viewModel);
var toolbar = document.getElementById("toolbar");
Cesium.knockout.applyBindings(viewModel, toolbar);
for (var name in viewModel) {
if (viewModel.hasOwnProperty(name)) {
Cesium.knockout
.getObservable(viewModel, name)
.subscribe(updatePostProcess);
}
}
function updatePostProcess() {
postProcess.enabled = Boolean(viewModel.enabled);
}
updatePostProcess();
```
Browser: Chrome 86
Operating System: Windows 10
BTW, the fragment shader is from [https://sandcastle.cesium.com/?src=Custom%20Post%20Process.html&label=Post%20Processing](https://sandcastle.cesium.com/?src=Custom%20Post%20Process.html&label=Post%20Processing)
<!--
If you can also contribute a fix, we'd absolutely appreciate it! Fixing a bug in Cesium often means fixing a bug for thousands of applications and millions of end users.
Check out the contributor guide to get started:
https://github.com/CesiumGS/cesium/blob/master/CONTRIBUTING.md
Just let us know you're working on it and we'd be happy to provide advice and feedback.
-->
|
1.0
|
Postprocessing is initially disabled and will fail if enabled again - <!--
Thanks for helping us improve Cesium! Please describe what the expected behavior is vs what actually happens.
Creating a Sandcastle example (https://cesiumjs.org/Cesium/Build/Apps/Sandcastle/) that reproduces the issue helps us a lot in tracking down bugs. Paste the link you get from the "Share" button in Sandcastle below.
-->
Click the example's "enable" checkbox.
Sandcastle example:
[https://sandcastle.cesium.com/#c=fVVRb+I4EP4rFk+hUAMt190rUN0VWN3qdrfVstp7iYScxIBbx45sh5au+t9vxgmQQO5aqfWMZ76Z+cYz2TJDtoK/cEMmRPEXMuVW5Cn96XVB2Iq9PNXKMaG4CVtd8itUhNiNzmXypxIpc/yWOJPzbqje26NQhWoLqJm2wgmtALfEnDLj4MTUNV0Znc742nBug8vB1TXtfxgObwa/d8lwSPu/9a8/9G8QCoFyIwEjbFHaW7A0k3zGHOulOuHS9gror0yVpyUc6VpGYQu9fRHUGRY/82SunHA7gCrVHGXBLWVJEviaFEuhFIjXRWlfwO3h5NU+8G1BAgFjcfR4rzOwMmydQpTFhiXcLHRuYk4maBi2ciVW2qTE+orM1YzEWmrzg7+63PARCQGkRTqFMYDthFqTLY+vyHbpCqOp1iYRCui3J/axVtYRoRz5e/792/zL8p/Psx9/QeWDm1NkLRKSQmcDPLXrl7/qIpbrM7COZwAWv6XLTLxy+Z0BOaTnFchtpo2jby+j/3CHvKDzTD5qi81oKIdcIstB01WXfKR9cuFzaDdHuCZsy4F6/pPJnGMI0AV92m+yhyaQAJkSYNgfwb9xjTTQdDrtc78GbmpwTwXc0xncUyPc/0DiT62gzoSUvFzNguqr6dao7RR9uvCkB6JLntptatbRqDnM+7m6QVVLpDchK6mZC2qv7KJWcBPna7n8BChTTL1ozzCoAnfJ4LxZZTIwXMXrhpl0j0bDerLHkbYxV5xWrhYOQIsJ9wN+XHCPJ0ZBOdD1ob1tHOJi2NuhOs46JvAVNwMk45G4YpHkCQAwaf1q9LZl9Gel42edu2I3BQfv/c5zWsuIITuJjnOMT9fczSXH4/3uM5TTKm3CFjqd4rIsk7t7oWBs1vaI390jo49/rBgN9x68nWMR7aIGsSJHV7ph9uFFAWcZN24XoFO7vd+DJ/ELJSGY9UNkudkiG9U8vPvBzOaRjY2IeJBnCcx5pTuYKbANBCJ/q1zF/qtyZheUuVS6T8smAI33UDZnqlJOeYfwgNwA5/sVqlbXP9qxdTvJ7/YZ/yFSXHO4+gP4MDkOaxz3Uy/K4VvjaLxPHDx7VddxIrZEJJOGLyuJJbMWbla5lAvxxsPW3bgH9meuMHXY2AcYGsl2aLYZ3H0plJTScQ/EZs/DozlUMnbIw0FERaSTXUWBKlOTUZPczQsGxz04n9+OhcpyR9wu41jshsfPkX6FKoFodhnB09yrcUrKbviKT+BAUY0OYi0/kCsF7KvGX5SLv/8C](https://sandcastle.cesium.com/#c=fVVRb+I4EP4rFk+hUAMt190rUN0VWN3qdrfVstp7iYScxIBbx45sh5au+t9vxgmQQO5aqfWMZ76Z+cYz2TJDtoK/cEMmRPEXMuVW5Cn96XVB2Iq9PNXKMaG4CVtd8itUhNiNzmXypxIpc/yWOJPzbqje26NQhWoLqJm2wgmtALfEnDLj4MTUNV0Znc742nBug8vB1TXtfxgObwa/d8lwSPu/9a8/9G8QCoFyIwEjbFHaW7A0k3zGHOulOuHS9gror0yVpyUc6VpGYQu9fRHUGRY/82SunHA7gCrVHGXBLWVJEviaFEuhFIjXRWlfwO3h5NU+8G1BAgFjcfR4rzOwMmydQpTFhiXcLHRuYk4maBi2ciVW2qTE+orM1YzEWmrzg7+63PARCQGkRTqFMYDthFqTLY+vyHbpCqOp1iYRCui3J/axVtYRoRz5e/792/zL8p/Psx9/QeWDm1NkLRKSQmcDPLXrl7/qIpbrM7COZwAWv6XLTLxy+Z0BOaTnFchtpo2jby+j/3CHvKDzTD5qi81oKIdcIstB01WXfKR9cuFzaDdHuCZsy4F6/pPJnGMI0AV92m+yhyaQAJkSYNgfwb9xjTTQdDrtc78GbmpwTwXc0xncUyPc/0DiT62gzoSUvFzNguqr6dao7RR9uvCkB6JLntptatbRqDnM+7m6QVVLpDchK6mZC2qv7KJWcBPna7n8BChTTL1ozzCoAnfJ4LxZZTIwXMXrhpl0j0bDerLHkbYxV5xWrhYOQIsJ9wN+XHCPJ0ZBOdD1ob1tHOJi2NuhOs46JvAVNwMk45G4YpHkCQAwaf1q9LZl9Gel42edu2I3BQfv/c5zWsuIITuJjnOMT9fczSXH4/3uM5TTKm3CFjqd4rIsk7t7oWBs1vaI390jo49/rBgN9x68nWMR7aIGsSJHV7ph9uFFAWcZN24XoFO7vd+DJ/ELJSGY9UNkudkiG9U8vPvBzOaRjY2IeJBnCcx5pTuYKbANBCJ/q1zF/qtyZheUuVS6T8smAI33UDZnqlJOeYfwgNwA5/sVqlbXP9qxdTvJ7/YZ/yFSXHO4+gP4MDkOaxz3Uy/K4VvjaLxPHDx7VddxIrZEJJOGLyuJJbMWbla5lAvxxsPW3bgH9meuMHXY2AcYGsl2aLYZ3H0plJTScQ/EZs/DozlUMnbIw0FERaSTXUWBKlOTUZPczQsGxz04n9+OhcpyR9wu41jshsfPkX6FKoFodhnB09yrcUrKbviKT+BAUY0OYi0/kCsF7KvGX5SLv/8C)
However, if the enable property is true initially, it's fine.
```diff
var viewer = new Cesium.Viewer("cesiumContainer", {
shouldAnimate: true,
});
var position = Cesium.Cartesian3.fromDegrees(-123.0744619, 44.0503706);
var url = "../SampleData/models/CesiumMan/Cesium_Man.glb";
viewer.trackedEntity = viewer.entities.add({
name: url,
position: position,
model: {
uri: url,
},
});
var fragmentShaderSource =
"uniform sampler2D colorTexture; \n" +
"varying vec2 v_textureCoordinates; \n" +
"const int KERNEL_WIDTH = 16; \n" +
"void main(void) \n" +
"{ \n" +
" vec2 step = czm_pixelRatio / czm_viewport.zw; \n" +
" vec2 integralPos = v_textureCoordinates - mod(v_textureCoordinates, 8.0 * step); \n" +
" vec3 averageValue = vec3(0.0); \n" +
" for (int i = 0; i < KERNEL_WIDTH; i++) \n" +
" { \n" +
" for (int j = 0; j < KERNEL_WIDTH; j++) \n" +
" { \n" +
" averageValue += texture2D(colorTexture, integralPos + step * vec2(i, j)).rgb; \n" +
" } \n" +
" } \n" +
" averageValue /= float(KERNEL_WIDTH * KERNEL_WIDTH); \n" +
" gl_FragColor = vec4(averageValue, 1.0); \n" +
"} \n";
const postProcess = viewer.scene.postProcessStages.add(
new Cesium.PostProcessStage({
fragmentShader: fragmentShaderSource,
})
);
var viewModel = {
- enabled: false,
+ enabled: true,
};
Cesium.knockout.track(viewModel);
var toolbar = document.getElementById("toolbar");
Cesium.knockout.applyBindings(viewModel, toolbar);
for (var name in viewModel) {
if (viewModel.hasOwnProperty(name)) {
Cesium.knockout
.getObservable(viewModel, name)
.subscribe(updatePostProcess);
}
}
function updatePostProcess() {
postProcess.enabled = Boolean(viewModel.enabled);
}
updatePostProcess();
```
Browser: Chrome 86
Operating System: Windows 10
BTW, the fragment shader is from [https://sandcastle.cesium.com/?src=Custom%20Post%20Process.html&label=Post%20Processing](https://sandcastle.cesium.com/?src=Custom%20Post%20Process.html&label=Post%20Processing)
<!--
If you can also contribute a fix, we'd absolutely appreciate it! Fixing a bug in Cesium often means fixing a bug for thousands of applications and millions of end users.
Check out the contributor guide to get started:
https://github.com/CesiumGS/cesium/blob/master/CONTRIBUTING.md
Just let us know you're working on it and we'd be happy to provide advice and feedback.
-->
|
process
|
postprocessing is initially disabled and will fail if enabled again thanks for helping us improve cesium please describe what the expected behavior is vs what actually happens creating a sandcastle example that reproduces the issue helps us a lot in tracking down bugs paste the link you get from the share button in sandcastle below click the example s enable checkbox sandcastle example however if the enable property is true initially it s fine diff var viewer new cesium viewer cesiumcontainer shouldanimate true var position cesium fromdegrees var url sampledata models cesiumman cesium man glb viewer trackedentity viewer entities add name url position position model uri url var fragmentshadersource uniform colortexture n varying v texturecoordinates n const int kernel width n void main void n n step czm pixelratio czm viewport zw n integralpos v texturecoordinates mod v texturecoordinates step n averagevalue n for int i i kernel width i n n for int j j kernel width j n n averagevalue colortexture integralpos step i j rgb n n n averagevalue float kernel width kernel width n gl fragcolor averagevalue n n const postprocess viewer scene postprocessstages add new cesium postprocessstage fragmentshader fragmentshadersource var viewmodel enabled false enabled true cesium knockout track viewmodel var toolbar document getelementbyid toolbar cesium knockout applybindings viewmodel toolbar for var name in viewmodel if viewmodel hasownproperty name cesium knockout getobservable viewmodel name subscribe updatepostprocess function updatepostprocess postprocess enabled boolean viewmodel enabled updatepostprocess browser chrome operating system windows btw the fragment shader is from if you can also contribute a fix we d absolutely appreciate it fixing a bug in cesium often means fixing a bug for thousands of applications and millions of end users check out the contributor guide to get started just let us know you re working on it and we d be happy to provide advice and feedback
| 1
|
100,902
| 16,490,595,332
|
IssuesEvent
|
2021-05-25 02:43:29
|
EcommEasy/EcommEasy-Admin
|
https://api.github.com/repos/EcommEasy/EcommEasy-Admin
|
opened
|
CVE-2021-23369 (High) detected in handlebars-4.1.1.tgz
|
security vulnerability
|
## CVE-2021-23369 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.1.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz</a></p>
<p>Path to dependency file: /EcommEasy-Admin/package.json</p>
<p>Path to vulnerable library: EcommEasy-Admin/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- :x: **handlebars-4.1.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 is vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23369>CVE-2021-23369</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution: handlebars - 4.7.7</p>
</p>
</details>
<p></p>
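As context for the advisory text, here is a minimal sketch of the safe usage pattern it implies, assuming nothing about this repository's code: the RCE requires compiling attacker-controlled template source, so untrusted input should only ever flow in as data, and the package should be upgraded to 4.7.7.
```ts
import Handlebars from "handlebars";

// Stand-in for user-controlled input; in the vulnerable pattern this string
// would be passed to Handlebars.compile() instead.
const userSuppliedValue = "world";

const template = Handlebars.compile("Hello, {{name}}!"); // trusted template source only
const rendered = template({ name: userSuppliedValue });  // untrusted values as data

console.log(rendered); // "Hello, world!"
```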
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23369 (High) detected in handlebars-4.1.1.tgz - ## CVE-2021-23369 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.1.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz</a></p>
<p>Path to dependency file: /EcommEasy-Admin/package.json</p>
<p>Path to vulnerable library: EcommEasy-Admin/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- :x: **handlebars-4.1.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 is vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23369>CVE-2021-23369</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution: handlebars - 4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file ecommeasy admin package json path to vulnerable library ecommeasy admin node modules handlebars package json dependency hierarchy x handlebars tgz vulnerable library vulnerability details the package handlebars before are vulnerable to remote code execution rce when selecting certain compiling options to compile templates coming from an untrusted source publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource
| 0
|
8,582
| 11,755,245,596
|
IssuesEvent
|
2020-03-13 09:08:10
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
reopened
|
Export the current Prisma version
|
kind/feature process/next-milestone
|
I'd like to be able to access the current Prisma version:
```
import { version } from "PrismaClient"
console.log(version)
```
which would print
```
2.0.0-alpha.785
```
The main use case for this is to [verify our deployment platforms are not out of sync](https://github.com/prisma/prisma2-e2e-tests/issues/51) and actually test the latest Prisma version. Also, we think it may be useful if the version string is printed somewhere (maybe additionally in a comment in the first few lines), so users can easily verify that the generated code is indeed generated by a given Prisma version.
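A minimal sketch of that deployment check, assuming the export ships exactly as proposed above; the import path is the one from this issue and EXPECTED_PRISMA_VERSION is a hypothetical environment variable:
```ts
// Hypothetical guard run at deploy time: fail fast if the platform's Prisma
// version drifts from the one we expect to be testing.
import { version } from "PrismaClient";

const expected = process.env.EXPECTED_PRISMA_VERSION;
if (expected && version !== expected) {
  throw new Error(`Prisma version drift: got ${version}, expected ${expected}`);
}
```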
|
1.0
|
Export the current Prisma version - I'd like to be able to access the current Prisma version:
```
import { version } from "PrismaClient"
console.log(version)
```
which would print
```
2.0.0-alpha.785
```
The main use case for this is to [verify our deployment platforms are not out of sync](https://github.com/prisma/prisma2-e2e-tests/issues/51) and actually test the latest Prisma version. Also, we think it may be useful if the version string is printed somewhere (maybe additionally in a comment in the first few lines), so users can easily verify that the generated code is indeed generated by a given Prisma version.
|
process
|
export the current prisma version i d like to be able to access the current prisma version import version from prismaclient console log version which would print alpha the main use case for this is to and actually test the latest prisma version also we think it may be useful if the version string is printed somewhere maybe additionally in a comment in the first few lines so users can easily verify that the generated code is indeed generated by a given prisma version
| 1
|
19,039
| 25,042,616,554
|
IssuesEvent
|
2022-11-04 23:03:37
|
USGS-WiM/StreamStats
|
https://api.github.com/repos/USGS-WiM/StreamStats
|
opened
|
BP: Create URL parameter
|
Batch Processor
|
Part of #1455
Add a URL parameter such as https://streamstats.usgs.gov/ss?bp=submit, so that when a user accesses that link, it automatically opens the BP modal.
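A minimal sketch of the client-side wiring, with `openBatchProcessorModal()` standing in for whatever hook the app actually exposes; the parameter name and value are the ones proposed above:
```ts
// Hypothetical app hook; the real StreamStats code would supply this.
declare function openBatchProcessorModal(): void;

// On page load, open the Batch Processor modal when ?bp=submit is present.
const params = new URLSearchParams(window.location.search);
if (params.get("bp") === "submit") {
  openBatchProcessorModal();
}
```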
|
1.0
|
BP: Create URL parameter - Part of #1455
Add a URL parameter such as https://streamstats.usgs.gov/ss?bp=submit, so that when a user accesses that link, it automatically opens the BP modal.
|
process
|
bp create url parameter part of add a url parameter such as so that when a user accesses that link it automatically opens the bp modal
| 1
|
62,127
| 3,172,494,644
|
IssuesEvent
|
2015-09-23 08:38:00
|
xcat2/xcat-core
|
https://api.github.com/repos/xcat2/xcat-core
|
closed
|
xCAT provision Sles11.2 will hang when executing the remoteshell script
|
priority:normal type:bug
|
xCAT provision of Sles11.2 will hang and wait for an input passphrase when executing the "ssh-keygen -y -f /etc/ssh/ssh_host_ecdsa_key > /etc/ssh/ssh_host_ecdsa_key.pub" command in the remoteshell script.
Root Cause: Sles11.2 installs the openssh-5.1p1-41.57.1 built-in package, and this version of openssh doesn't support the ecdsa key type.
So an openssh support check is needed before ecdsa key generation. In the remoteshell script, at line 283, we will add the "ssh-keygen -t ecdsa -y -f /etc/ssh/ssh_host_ecdsa_key -P "" " command and check the result to judge whether it is supported or not.
The changes follow:
```diff
star@userver:~/xcat-core$ git diff
diff --git a/xCAT/postscripts/remoteshell b/xCAT/postscripts/remoteshell
index 8177bee..e4eb788 100755
--- a/xCAT/postscripts/remoteshell
+++ b/xCAT/postscripts/remoteshell
@@ -281,9 +281,14 @@ if [ -f /etc/ssh/ssh_host_ecdsa_key ]; then
     if ! grep "PRIVATE KEY" /etc/ssh/ssh_host_ecdsa_key > /dev/null 2>&1 ; then
         rm /etc/ssh/ssh_host_ecdsa_key
     else
-        ssh-keygen -y -f /etc/ssh/ssh_host_ecdsa_key > /etc/ssh/ssh_host_ecdsa_key.pub
-        chmod 644 /etc/ssh/ssh_host_ecdsa_key.pub
-        chown root /etc/ssh/ssh_host_ecdsa_key.pub
+        ssh-keygen -t ecdsa -y -f /etc/ssh/ssh_host_ecdsa_key -P ""
+        if [ "x$?" = "x0" ]; then
+            ssh-keygen -y -f /etc/ssh/ssh_host_ecdsa_key > /etc/ssh/ssh_host_ecdsa_key.pub
+            chmod 644 /etc/ssh/ssh_host_ecdsa_key.pub
+            chown root /etc/ssh/ssh_host_ecdsa_key.pub
+        else
+            rm -fr /etc/ssh/ssh_host_ecdsa_key
+        fi
     fi
```
|
1.0
|
xCAT provision Sles11.2 will hang when executing the remoteshell script - xCAT provision of Sles11.2 will hang and wait for an input passphrase when executing the "ssh-keygen -y -f /etc/ssh/ssh_host_ecdsa_key > /etc/ssh/ssh_host_ecdsa_key.pub" command in the remoteshell script.
Root Cause: Sles11.2 installs the openssh-5.1p1-41.57.1 built-in package, and this version of openssh doesn't support the ecdsa key type.
So an openssh support check is needed before ecdsa key generation. In the remoteshell script, at line 283, we will add the "ssh-keygen -t ecdsa -y -f /etc/ssh/ssh_host_ecdsa_key -P "" " command and check the result to judge whether it is supported or not.
The changes follow:
```diff
star@userver:~/xcat-core$ git diff
diff --git a/xCAT/postscripts/remoteshell b/xCAT/postscripts/remoteshell
index 8177bee..e4eb788 100755
--- a/xCAT/postscripts/remoteshell
+++ b/xCAT/postscripts/remoteshell
@@ -281,9 +281,14 @@ if [ -f /etc/ssh/ssh_host_ecdsa_key ]; then
     if ! grep "PRIVATE KEY" /etc/ssh/ssh_host_ecdsa_key > /dev/null 2>&1 ; then
         rm /etc/ssh/ssh_host_ecdsa_key
     else
-        ssh-keygen -y -f /etc/ssh/ssh_host_ecdsa_key > /etc/ssh/ssh_host_ecdsa_key.pub
-        chmod 644 /etc/ssh/ssh_host_ecdsa_key.pub
-        chown root /etc/ssh/ssh_host_ecdsa_key.pub
+        ssh-keygen -t ecdsa -y -f /etc/ssh/ssh_host_ecdsa_key -P ""
+        if [ "x$?" = "x0" ]; then
+            ssh-keygen -y -f /etc/ssh/ssh_host_ecdsa_key > /etc/ssh/ssh_host_ecdsa_key.pub
+            chmod 644 /etc/ssh/ssh_host_ecdsa_key.pub
+            chown root /etc/ssh/ssh_host_ecdsa_key.pub
+        else
+            rm -fr /etc/ssh/ssh_host_ecdsa_key
+        fi
     fi
```
|
non_process
|
xcat provision will hang on when excuting remoteshell script xcat provision will hang on and wait for input passphrase when executing ssh keygen y f etc ssh ssh host ecdsa key etc ssh ssh host ecdsa key pub command in remoteshell script root cause install openssh build in package and this version openssh don t support ecdsa key type so there needs a openssh support check before ecdsa key generation in remoteshell script line we will add ssh keygen t ecdsa y f etc ssh ssh host ecdsa key p command and check the result to judge support or not follow is changes star userver xcat core git diff diff git a xcat postscripts remoteshell b xcat postscripts remoteshell index a xcat postscripts remoteshell b xcat postscripts remoteshell if then if grep private key etc ssh ssh host ecdsa key dev null then rm etc ssh ssh host ecdsa key else ssh keygen y f etc ssh ssh host ecdsa key etc ssh ssh host ecdsa key pub chmod etc ssh ssh host ecdsa key pub chown root etc ssh ssh host ecdsa key pub ssh keygen t ecdsa y f etc ssh ssh host ecdsa key p if then ssh keygen y f etc ssh ssh host ecdsa key etc ssh ssh host ecdsa key pub chmod etc ssh ssh host ecdsa key pub chown root etc ssh ssh host ecdsa key pub else rm fr etc ssh ssh host ecdsa key fi fi
| 0
|
85,923
| 10,697,196,261
|
IssuesEvent
|
2019-10-23 16:02:05
|
async-rs/async-std
|
https://api.github.com/repos/async-rs/async-std
|
closed
|
Design of async channels
|
api design
|
It's time to port `crossbeam-channel` to futures.
Previous discussions:
- https://github.com/crossbeam-rs/crossbeam/issues/314
- https://github.com/crossbeam-rs/crossbeam-channel/issues/39
- https://github.com/crossbeam-rs/crossbeam-channel/issues/61
cc @matklad @BurntSushi @glaebhoerl
Instead of copying `crossbeam-channel`'s API directly, I'm thinking perhaps we should design async channels a bit differently.
In our previous discussions, we figured that dropped receivers should disconnect the channel and make send operations fail for the following reason. If a receiver thread panics, the sending side needs a way to stop producing messages and terminate. If dropped receivers disconnect the channel, the sending side will usually panic due to an attempt of sending a message into the disconnected channel.
In Go, sending into a channel is not a fallible operation even if there are no more receivers. That is because Go only uses bounded channels so they will eventually fill up and the sending side will then block on the channel, attempting to send another message into the channel while it's full. Fortunately, Go's scheduler has a deadlock detection mechanism so it will realize the sending side is deadlocked and will thus make the goroutine fail.
In `async-std`, we could implement a similar kind of deadlock detection: a task is deadlocked if it's sleeping and there are no more wakers associated with it, or if all tasks are suddenly put to sleep. Therefore, channel disconnection from the receiver side is not such a crucial feature and can simplify the API a lot.
If we were to have only bounded channels and infallible send operations, the API could look like this:
```rust
fn new(cap: usize) -> (Sender<T>, Receiver<T>);
struct Sender<T>;
struct Receiver<T>;
impl<T> Sender<T> {
async fn send(&self, msg: T);
}
impl<T> Receiver<T> {
fn try_recv(&self) -> Option<T>;
async fn recv(&self) -> Option<T>;
}
impl<T> Clone for Sender<T> {}
impl<T> Clone for Receiver<T> {}
impl<T> Stream for Receiver<T> {
type Item = Option<T>;
}
```
This is a very simple and ergonomic API that is easy to learn.
In our previous discussions, we also had the realization that bounded channels are typically more suitable for CSP-based concurrency models, while unbounded channels are a better fit for actor-based concurrency models. Even `futures` and `tokio` expose the `mpsc::channel()` constructor for bounded channels as the "default" and most ergonomic one, while unbounded channels are discouraged with a more verbose API and are presented in the docs sort of as the type of channel we should reach for in more exceptional situations.
Another benefit of the API as presented above is that it is relatively easy to implement and we could have a working implementation very soon.
As for selection, I can imagine having a `select` macro similar to the one in the `futures` crate that could be used as follows (this example is adapted from our `a-chat` tutorial):
```rust
loop {
select! {
msg = rx.recv() => stream.write_all(msg.unwrap().as_bytes()).await?;
shutdown.recv() => break,
}
}
```
What does everyone think?
|
1.0
|
Design of async channels - It's time to port `crossbeam-channel` to futures.
Previous discussions:
- https://github.com/crossbeam-rs/crossbeam/issues/314
- https://github.com/crossbeam-rs/crossbeam-channel/issues/39
- https://github.com/crossbeam-rs/crossbeam-channel/issues/61
cc @matklad @BurntSushi @glaebhoerl
Instead of copying `crossbeam-channel`'s API directly, I'm thinking perhaps we should design async channels a bit differently.
In our previous discussions, we figured that dropped receivers should disconnect the channel and make send operations fail for the following reason. If a receiver thread panics, the sending side needs a way to stop producing messages and terminate. If dropped receivers disconnect the channel, the sending side will usually panic due to an attempt of sending a message into the disconnected channel.
In Go, sending into a channel is not a fallible operation even if there are no more receivers. That is because Go only uses bounded channels so they will eventually fill up and the sending side will then block on the channel, attempting to send another message into the channel while it's full. Fortunately, Go's scheduler has a deadlock detection mechanism so it will realize the sending side is deadlocked and will thus make the goroutine fail.
In `async-std`, we could implement a similar kind of deadlock detection: a task is deadlocked if it's sleeping and there are no more wakers associated with it, or if all tasks are suddenly put to sleep. Therefore, channel disconnection from the receiver side is not such a crucial feature and can simplify the API a lot.
If we were to have only bounded channels and infallible send operations, the API could look like this:
```rust
fn new(cap: usize) -> (Sender<T>, Receiver<T>);
struct Sender<T>;
struct Receiver<T>;
impl<T> Sender<T> {
async fn send(&self, msg: T);
}
impl<T> Receiver<T> {
fn try_recv(&self) -> Option<T>;
async fn recv(&self) -> Option<T>;
}
impl<T> Clone for Sender<T> {}
impl<T> Clone for Receiver<T> {}
impl<T> Stream for Receiver<T> {
type Item = Option<T>;
}
```
This is a very simple and ergonomic API that is easy to learn.
In our previous discussions, we also had the realization that bounded channels are typically more suitable for CSP-based concurrency models, while unbounded channels are a better fit for actor-based concurrency models. Even `futures` and `tokio` expose the `mpsc::channel()` constructor for bounded channels as the "default" and most ergonomic one, while unbounded channels are discouraged with a more verbose API and are presented in the docs sort of as the type of channel we should reach for in more exceptional situations.
Another benefit of the API as presented above is that it is relatively easy to implement and we could have a working implementation very soon.
As for selection, I can imagine having a `select` macro similar to the one in the `futures` crate that could be used as follows (this example is adapted from our `a-chat` tutorial):
```rust
loop {
select! {
msg = rx.recv() => stream.write_all(msg.unwrap().as_bytes()).await?;
shutdown.recv() => break,
}
}
```
What does everyone think?
|
non_process
|
design of async channels it s time to port crossbeam channel to futures previous discussions cc matklad burntsushi glaebhoerl instead of copying crossbeam channel s api directly i m thinking perhaps we should design async channels a bit differently in our previous discussions we figured that dropped receivers should disconnect the channel and make send operations fail for the following reason if a receiver thread panics the sending side needs a way to stop producing messages and terminate if dropped receivers disconnect the channel the sending side will usually panic due to an attempt of sending a message into the disconnected channel in go sending into a channel is not a fallible operation even if there are no more receivers that is because go only uses bounded channels so they will eventually fill up and the sending side will then block on the channel attempting to send another message into the channel while it s full fortunately go s scheduler has a deadlock detection mechanism so it will realize the sending side is deadlocked and will thus make the goroutine fail in async std we could implement a similar kind of deadlock detection a task is deadlocked if it s sleeping and there are no more wakers associated with it or if all tasks are suddenly put to sleep therefore channel disconnection from the receiver side is not such a crucial feature and can simplify the api a lot if we were to have only bounded channels and infallible send operations the api could look like this rust fn new cap usize sender receiver struct sender struct receiver impl sender async fn send self msg t impl receiver fn try recv self option async fn recv self option impl clone for sender impl clone for receiver impl stream for receiver type item option this is a very simple and ergonomic api that is easy to learn in our previous discussions we also had the realization that bounded channels are typically more suitable for csp based concurrency models while unbounded channels are a better fit for actor based concurrency models even futures and tokio expose the mpsc channel constructor for bounded channels as the default and most ergonomic one while unbounded channels are discouraged with a more verbose api and are presented in the docs sort of as the type of channel we should reach for in more exceptional situations another benefit of the api as presented above is that it is relatively easy to implement and we could have a working implementation very soon as for selection i can imagine having a select macro similar to the one in the futures crate that could be used as follows this example is adapted from our a chat tutorial rust loop select msg rx recv stream write all msg unwrap as bytes await shutdown recv break what does everyone think
| 0
|
18,026
| 24,032,798,837
|
IssuesEvent
|
2022-09-15 16:19:42
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
opened
|
Your .repo-metadata.json files have a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* api_shortname 'dns' invalid in google-cloud-dns/.repo-metadata.json
* api_shortname 'unknown' invalid in google-cloud-location/.repo-metadata.json
* api_shortname field missing from google-cloud-resource_manager/.repo-metadata.json
* api_shortname 'unknown' invalid in google-iam-v1/.repo-metadata.json
* api_shortname 'unknown' invalid in grafeas-v1/.repo-metadata.json
* api_shortname 'unknown' invalid in grafeas/.repo-metadata.json
* must have required property 'library_type' in stackdriver/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
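For illustration only, a sketch of the shape a passing entry might take, written as a TypeScript object mirroring a .repo-metadata.json file. Both values are assumptions chosen to satisfy the rules above: api_shortname must match the subdomain of the API's hostName in the API index, and library_type must be one of the schema's allowed values.
```ts
// Illustrative .repo-metadata.json content (values are assumptions, not
// taken from this repository).
const repoMetadata = {
  api_shortname: "cloudresourcemanager", // subdomain of cloudresourcemanager.googleapis.com
  library_type: "GAPIC_AUTO",            // one of the schema's permitted library types
};

export default repoMetadata;
```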
|
1.0
|
Your .repo-metadata.json files have a problem 🤒 - You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* api_shortname 'dns' invalid in google-cloud-dns/.repo-metadata.json
* api_shortname 'unknown' invalid in google-cloud-location/.repo-metadata.json
* api_shortname field missing from google-cloud-resource_manager/.repo-metadata.json
* api_shortname 'unknown' invalid in google-iam-v1/.repo-metadata.json
* api_shortname 'unknown' invalid in grafeas-v1/.repo-metadata.json
* api_shortname 'unknown' invalid in grafeas/.repo-metadata.json
* must have required property 'library_type' in stackdriver/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json files have a problem 🤒 you have a problem with your repo metadata json files result of scan 📈 api shortname dns invalid in google cloud dns repo metadata json api shortname unknown invalid in google cloud location repo metadata json api shortname field missing from google cloud resource manager repo metadata json api shortname unknown invalid in google iam repo metadata json api shortname unknown invalid in grafeas repo metadata json api shortname unknown invalid in grafeas repo metadata json must have required property library type in stackdriver repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
|
1,234
| 3,774,393,696
|
IssuesEvent
|
2016-03-17 09:04:27
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
[Process] Incremental output missing since 3.0.2
|
Bug Process Status: Needs Review
|
Hi, starting with Symfony 3.0.2, getIncrementalOutput() does not return the produced incremental output in some circumstances.
I traced the cause down to PR #17423. I also discovered that if you call getOutput() before getIncrementalOutput(), the latter works properly. I guess this is also the reason why it works in unit tests, as [getOutput](https://github.com/symfony/symfony/blob/master/src/Symfony/Component/Process/Tests/ProcessTest.php#L353) is called there beforehand to detect whether some new output was already provided.
The difference is most probably in the readPipes() call inside the getOutput() method. However, the fix is a bit beyond my knowledge, so at least here is a test case (part of the `ProcessTest::testIncrementalOutput()` test).
This test passes:
```php
// ...
foreach (array('foo', 'bar') as $s) {
sleep(1);
$p->getOutput();
$this->assertSame($s, $p->getIncrementalOutput());
flock($h, LOCK_UN);
}
// ...
```
This does not: (note missing getOutput() call)
```php
// ...
foreach (array('foo', 'bar') as $s) {
sleep(1);
$this->assertSame($s, $p->getIncrementalOutput());
flock($h, LOCK_UN);
}
// ...
```
Is there anything more I can provide? cc @romainneutron @nicolas-grekas
Thanks!
|
1.0
|
[Process] Incremental output missing since 3.0.2 - Hi, starting with Symfony 3.0.2, getIncrementalOutput() does not return the produced incremental output in some circumstances.
I traced the cause down to PR #17423. I also discovered that if you call getOutput() before getIncrementalOutput(), the latter works properly. I guess this is also the reason why it works in unit tests, as [getOutput](https://github.com/symfony/symfony/blob/master/src/Symfony/Component/Process/Tests/ProcessTest.php#L353) is called there beforehand to detect whether some new output was already provided.
The difference is most probably in the readPipes() call inside the getOutput() method. However, the fix is a bit beyond my knowledge, so at least here is a test case (part of the `ProcessTest::testIncrementalOutput()` test).
This test passes:
```php
// ...
foreach (array('foo', 'bar') as $s) {
sleep(1);
$p->getOutput();
$this->assertSame($s, $p->getIncrementalOutput());
flock($h, LOCK_UN);
}
// ...
```
This does not: (note missing getOutput() call)
```php
// ...
foreach (array('foo', 'bar') as $s) {
sleep(1);
$this->assertSame($s, $p->getIncrementalOutput());
flock($h, LOCK_UN);
}
// ...
```
Is there anything more I can provide? cc @romainneutron @nicolas-grekas
Thanks!
|
process
|
incremental output missing since hi starting with symfony getincrementaloutput does not contain the produced incremental output in some circumstances i traced the cause down to pr i also discovered that if you call getoutput before the getincerementaloutput the later one works properly i guess this is also the reason why it works in unit tests as the is called there before to detect if some new output was already provided the difference is most probably in the readpipes call inside the getoutput method however the fix is a bit beyond my knowledge so at least there is testcase part of processtest testincrementaloutput test this test passes php foreach array foo bar as s sleep p getoutput this assertsame s p getincrementaloutput flock h lock un this does not note missing getoutput call php foreach array foo bar as s sleep this assertsame s p getincrementaloutput flock h lock un is there anything more i can provide cc romainneutron nicolas grekas thanks
| 1
|
17,604
| 23,427,733,713
|
IssuesEvent
|
2022-08-14 16:42:13
|
vortexntnu/Vortex-CV
|
https://api.github.com/repos/vortexntnu/Vortex-CV
|
closed
|
Object Component detection (Image Processing)
|
enhancement high priority Image Processing Object Detection
|
**Time estimate:** 10 hours
**Description of task:**
The full Hough transform detector doesn't work well when there isn't enough derivative gradient for the Canny detector to find lines in objects; therefore we can use the transform to detect a partial component of objects first, e.g. short horizontal lines.
Needs a way to rule out false positives. This might be achieved with #59.
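A generic sketch of the partial-component idea, not the team's actual pipeline: given a binary edge map from any edge detector, collect short horizontal runs of edge pixels as candidate components. The length thresholds are illustrative assumptions.
```ts
// edges[y][x] = true where the edge detector fired.
type EdgeMap = boolean[][];

// Collect horizontal runs of edge pixels whose length falls inside a band,
// i.e. the "short horizontal lines" the issue proposes detecting first.
function shortHorizontalRuns(edges: EdgeMap, minLen = 8, maxLen = 40) {
  const runs: { y: number; xStart: number; length: number }[] = [];
  for (let y = 0; y < edges.length; y++) {
    let start = -1;
    for (let x = 0; x <= edges[y].length; x++) {
      const on = x < edges[y].length && edges[y][x];
      if (on && start < 0) start = x; // run begins
      if (!on && start >= 0) {        // run ends
        const length = x - start;
        if (length >= minLen && length <= maxLen) {
          runs.push({ y, xStart: start, length });
        }
        start = -1;
      }
    }
  }
  return runs;
}
```
Runs that repeat at consistent spacing across rows could then feed the false-positive filtering mentioned above.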
|
1.0
|
Object Component detection (Image Processing) - **Time estimate:** 10 hours
**Description of task:**
The full Hough transform detector doesn't work well when there isn't enough derivative gradient for the Canny detector to find lines in objects; therefore we can use the transform to detect a partial component of objects first, e.g. short horizontal lines.
Needs a way to rule out false positives. This might be achieved with #59.
|
process
|
object component detection image processing time estimate hours description of task the full hough transform detector doesn t work well when there isn t enough derivative gradient for the canny detector to find lines in objects therefore we can use the transform to detect a partial component of objects first e g short horizontal lines needs a way to rule out false positives this might be achieved with
| 1
|
17,705
| 23,589,710,848
|
IssuesEvent
|
2022-08-23 14:19:48
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Generate XYZ tiles freezes on / never advances past 1% progress
|
Feedback Processing Bug
|
### What is the bug or the crash?
Attempting to make tiles out of a small geoTIFF (113 MB).
When I run the program, it does not give any feedback (no failure error), but it does not advance past 1% progress.
OS is Mac and here is the dialog box with QGIS version 3.22.9-Białowieża and other details
<img width="1047" alt="Screen Shot 2022-08-23 at 8 19 07 AM" src="https://user-images.githubusercontent.com/21337146/186159262-f2a87b65-5418-4817-b6c9-f321b00fe9d2.png">
### Steps to reproduce the issue
Extent: Calculate from layer
-68.189041875,112.567014849,19.997011249,77.552965717 [EPSG:4326]
Min Zoom 12
Max Zoom 21
DPI 96
Background color #000000
0% opacity
Tile format JPG
Quality 75
Metatile size: originally used the default of 4, but I have tried 20 and the same thing happens
Here is the log, there is never any additional info in the log, because it just stays at 1%:
QGIS version: 3.22.9-Białowieża
QGIS code revision: a8e9e6fae5
Qt version: 5.14.2
Python version: 3.8.7
GDAL version: 3.2.1
GEOS version: 3.9.1-CAPI-1.14.2
PROJ version: Rel. 6.3.2, May 1st, 2020
Algorithm started at: 2022-08-23T08:39:24
Algorithm 'Generate XYZ tiles (Directory)' starting…
Input parameters:
{ 'BACKGROUND_COLOR' : QColor(0, 0, 0, 0), 'DPI' : 96, 'EXTENT' : '-68.189041875,112.567014849,19.997011249,77.552965717 [EPSG:4326]', 'METATILESIZE' : 2, 'OUTPUT_DIRECTORY' : '/Users/irlipton/Downloads', 'OUTPUT_HTML' : '/Users/irlipton/Downloads/index.html', 'QUALITY' : 75, 'TILE_FORMAT' : 1, 'TILE_HEIGHT' : 256, 'TILE_WIDTH' : 256, 'TMS_CONVENTION' : False, 'ZOOM_MAX' : 21, 'ZOOM_MIN' : 12 }
### Versions
3.22.9-Białowieża
QGIS code revision: a8e9e6fae5
Qt version: 5.14.2
Python version: 3.8.7
GDAL version: 3.2.1
GEOS version: 3.9.1-CAPI-1.14.2
PROJ version: Rel. 6.3.2, May 1st, 2020
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
No tile folders or index.html show up in designated file path
|
1.0
|
Generate XYZ tiles freezes on / never advances past 1% progress - ### What is the bug or the crash?
Attempting to make tiles out of a small geoTIFF (113 MB).
When I run the program, it does not give any feedback (no failure error), but it does not advance past 1% progress.
OS is Mac and here is the dialog box with QGIS version 3.22.9-Białowieża and other details
<img width="1047" alt="Screen Shot 2022-08-23 at 8 19 07 AM" src="https://user-images.githubusercontent.com/21337146/186159262-f2a87b65-5418-4817-b6c9-f321b00fe9d2.png">
### Steps to reproduce the issue
Extent: Calculate from layer
-68.189041875,112.567014849,19.997011249,77.552965717 [EPSG:4326]
Min Zoom 12
Max Zoom 21
DPI 96
Background color #000000
0% opacity
Tile format JPG
Quality 75
Metatile size: originally used the default of 4, but I have tried 20 and the same thing happens
Here is the log, there is never any additional info in the log, because it just stays at 1%:
QGIS version: 3.22.9-Białowieża
QGIS code revision: a8e9e6fae5
Qt version: 5.14.2
Python version: 3.8.7
GDAL version: 3.2.1
GEOS version: 3.9.1-CAPI-1.14.2
PROJ version: Rel. 6.3.2, May 1st, 2020
Algorithm started at: 2022-08-23T08:39:24
Algorithm 'Generate XYZ tiles (Directory)' starting…
Input parameters:
{ 'BACKGROUND_COLOR' : QColor(0, 0, 0, 0), 'DPI' : 96, 'EXTENT' : '-68.189041875,112.567014849,19.997011249,77.552965717 [EPSG:4326]', 'METATILESIZE' : 2, 'OUTPUT_DIRECTORY' : '/Users/irlipton/Downloads', 'OUTPUT_HTML' : '/Users/irlipton/Downloads/index.html', 'QUALITY' : 75, 'TILE_FORMAT' : 1, 'TILE_HEIGHT' : 256, 'TILE_WIDTH' : 256, 'TMS_CONVENTION' : False, 'ZOOM_MAX' : 21, 'ZOOM_MIN' : 12 }
### Versions
3.22.9-Białowieża
QGIS code revision: a8e9e6fae5
Qt version: 5.14.2
Python version: 3.8.7
GDAL version: 3.2.1
GEOS version: 3.9.1-CAPI-1.14.2
PROJ version: Rel. 6.3.2, May 1st, 2020
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
No tile folders or index.html show up in designated file path
|
process
|
generate xyz tiles freezes on never advances past progress what is the bug or the crash attempting to make tiles out of a small geotiff mb when i run the program it does not give any feedback no fail error but does not advance past progress os is mac and here is the dialog box with qgis version białowieża and other details img width alt screen shot at am src steps to reproduce the issue extent calculate from layer min zoom max zoom dpi background color opacity tile format jpg quality metatile size originally used default but have tried same thing happens here is the log there is never any additional info in the log because it just stays at qgis version białowieża qgis code revision qt version python version gdal version geos version capi proj version rel may algorithm started at algorithm generate xyz tiles directory starting… input parameters background color qcolor dpi extent metatilesize output directory users irlipton downloads output html users irlipton downloads index html quality tile format tile height tile width tms convention false zoom max zoom min versions białowieża qgis code revision qt version python version gdal version geos version capi proj version rel may supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no tile folders or index html show up in designated file path
| 1
|
11,286
| 3,000,557,177
|
IssuesEvent
|
2015-07-24 02:58:16
|
TerriaJS/terriajs
|
https://api.github.com/repos/TerriaJS/terriajs
|
closed
|
Gazetteer search results should show more details of location
|
Design and/or UX
|
Using the gazetteer search for a suburb or city name will almost always return multiple results (e.g. cities/suburbs with the same name in different areas). It currently isn't possible to distinguish these results without clicking on one and seeing where it goes on the map. The search results should show additional info about the location (e.g. one option would be: if suburb, show city and state; if city, show state) to allow the user to quickly get to the intended location.
|
1.0
|
Gazetteer search results should show more details of location - Using the gazetteer search for a suburb or city name will almost always return multiple results (e.g. cities/suburbs with the same name in different areas). It currently isn't possible to distinguish these results without clicking on one and seeing where it goes on the map. The search results should show additional info about the location (e.g. one option would be: if suburb, show city and state; if city, show state) to allow the user to quickly get to the intended location.
|
non_process
|
gazetteer search results should show more details of location using the gazetteer search for a suburb or city name will almost always return multiple results eg cities suburbs with the same name in different areas it currently isn t possible to distinguish these results without clicking on it and seeing where it goes on the map the search results should show additional info about the location eg one option would be to do if suburb show city and state if city show state to allow the user to quickly get to the intended location
| 0
|
134,170
| 18,435,846,351
|
IssuesEvent
|
2021-10-14 12:58:33
|
vindi/pyboleto
|
https://api.github.com/repos/vindi/pyboleto
|
opened
|
Buffer overflow vulnerability
|
security
|
**Describe the security vulnerability (if there is a CVE, include it as a reference)**
CVE-2020-10379
In Pillow before 6.2.3 and 7.x before 7.0.1, there are two Buffer Overflows in libImaging/TiffDecode.c.
References
https://nvd.nist.gov/vuln/detail/CVE-2020-10379
python-pillow/Pillow#4538
python-pillow/Pillow@46f4a34#diff-9478f2787e3ae9668a15123b165c23ac
https://github.com/python-pillow/Pillow/commits/master/src/libImaging
https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/BEBCPE4F2VHTIT6EZA2YZQZLPVDEBJGD/
https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/HOKHNWV2VS5GESY7IBD237E7C6T3I427/
https://pillow.readthedocs.io/en/stable/releasenotes/6.2.3.html
https://pillow.readthedocs.io/en/stable/releasenotes/7.1.0.html
https://snyk.io/vuln/SNYK-PYTHON-PILLOW-574577
https://usn.ubuntu.com/4430-2/
**Classify the fix priority according to the severity of the vulnerability** 30 days
|
True
|
Buffer overflow vulnerability - **Describe the security vulnerability (if there is a CVE, include it as a reference)**
CVE-2020-10379
In Pillow before 6.2.3 and 7.x before 7.0.1, there are two Buffer Overflows in libImaging/TiffDecode.c.
References
https://nvd.nist.gov/vuln/detail/CVE-2020-10379
python-pillow/Pillow#4538
python-pillow/Pillow@46f4a34#diff-9478f2787e3ae9668a15123b165c23ac
https://github.com/python-pillow/Pillow/commits/master/src/libImaging
https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/BEBCPE4F2VHTIT6EZA2YZQZLPVDEBJGD/
https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/HOKHNWV2VS5GESY7IBD237E7C6T3I427/
https://pillow.readthedocs.io/en/stable/releasenotes/6.2.3.html
https://pillow.readthedocs.io/en/stable/releasenotes/7.1.0.html
https://snyk.io/vuln/SNYK-PYTHON-PILLOW-574577
https://usn.ubuntu.com/4430-2/
**Classify the fix priority according to the severity of the vulnerability** 30 days
|
non_process
|
vulnerabilidade buffer overflow descreva a vulnerabilidade de segurança se houver cve coloque como referência cve in pillow before and x before there are two buffer overflows in libimaging tiffdecode c references python pillow pillow python pillow pillow diff classifique a prioridade de correção de acordo com a severidade da vulnerabilidade dias
| 0
|
22,055
| 30,573,033,523
|
IssuesEvent
|
2023-07-21 01:10:03
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
roblox-pyc 1.19.77 has 4 GuardDog issues
|
guarddog silent-process-execution
|
https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```json
{
"dependency": "roblox-pyc",
"version": "1.19.77",
"result": {
"issues": 4,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.19.77/src/robloxpy.py:122",
"code": " subprocess.call([\"npm\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.19.77/src/robloxpy.py:128",
"code": " subprocess.call([\"rbxtsc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.19.77/src/robloxpy.py:167",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.19.77/src/robloxpy.py:174",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp079pz3j2/roblox-pyc"
}
}
```
|
1.0
|
roblox-pyc 1.19.77 has 4 GuardDog issues - https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```json
{
"dependency": "roblox-pyc",
"version": "1.19.77",
"result": {
"issues": 4,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.19.77/src/robloxpy.py:122",
"code": " subprocess.call([\"npm\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.19.77/src/robloxpy.py:128",
"code": " subprocess.call([\"rbxtsc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.19.77/src/robloxpy.py:167",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.19.77/src/robloxpy.py:174",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp079pz3j2/roblox-pyc"
}
}
```
|
process
|
roblox pyc has guarddog issues dependency roblox pyc version result issues errors results silent process execution location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp roblox pyc
| 1
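GuardDog flags the calls above because routing stdout, stderr, and stdin to /dev/null makes an external binary's behaviour invisible. A minimal sketch of a more transparent way to run the same version probes follows; `shutil.which` and `subprocess.run` are standard-library calls, and the tool list simply mirrors the flagged code.

```python
import shutil
import subprocess
from typing import Optional

def probe_tool_version(tool: str) -> Optional[str]:
    """Check a CLI dependency without silencing it: capture the output so
    a failure can be inspected instead of vanishing into /dev/null."""
    if shutil.which(tool) is None:  # binary not on PATH at all
        return None
    result = subprocess.run(
        [tool, "--version"],
        capture_output=True,  # keep stdout/stderr instead of discarding them
        text=True,
        check=False,
    )
    return (result.stdout or result.stderr).strip() or None

for tool in ("npm", "rbxtsc", "luarocks", "moonc"):
    print(f"{tool}: {probe_tool_version(tool) or 'not found'}")
```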
|
16,062
| 20,203,281,420
|
IssuesEvent
|
2022-02-11 17:20:19
|
IIIF/cookbook-recipes
|
https://api.github.com/repos/IIIF/cookbook-recipes
|
closed
|
Process: issue per use case?
|
meta: process meta: discuss
|
The placeholder canvas idea came from AV, where "poster image" is a well-known term. However, the same modeling pattern is easily applied to other scenarios, such as a journal title with many issues ... and a poster to display for the title before the issues load up.
The proposal (from the AV table) is that the journal example should be a separate issue with a separate recipe that refers back to the original model.
(This comes from #13)
|
1.0
|
Process: issue per use case? -
The placeholder canvas idea came from AV, where "poster image" is a well-known term. However, the same modeling pattern is easily applied to other scenarios, such as a journal title with many issues ... and a poster to display for the title before the issues load up.
The proposal (from the AV table) is that the journal example should be a separate issue with a separate recipe that refers back to the original model.
(This comes from #13)
|
process
|
process issue per use case the placeholder canvas idea came from av where poster image is a well known term however the same modeling pattern is easily applied to other scenarios such as a journal title with many issues and a poster to display for the title before the issues load up the proposal from the av table is that the journal example should be a separate issue with a separate recipe that refers back to the original model this comes from
| 1
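For context on the record above: the placeholder-canvas pattern it discusses became the `placeholderCanvas` property in the IIIF Presentation 3.0 API. Below is a minimal sketch of the journal-title variant, written as a Python dict for readability; every URI is a hypothetical placeholder.

```python
import json

# Hedged sketch: a journal-title Manifest whose placeholderCanvas paints a
# poster image to display before the issues load. All URIs are hypothetical.
manifest = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "id": "https://example.org/iiif/journal/manifest",
    "type": "Manifest",
    "label": {"en": ["Example Journal"]},
    "placeholderCanvas": {
        "id": "https://example.org/iiif/journal/placeholder",
        "type": "Canvas",
        "width": 1200,
        "height": 1600,
        "items": [{
            "id": "https://example.org/iiif/journal/placeholder/page",
            "type": "AnnotationPage",
            "items": [{
                "id": "https://example.org/iiif/journal/placeholder/anno",
                "type": "Annotation",
                "motivation": "painting",
                "body": {
                    "id": "https://example.org/iiif/journal/poster.jpg",
                    "type": "Image",
                    "format": "image/jpeg",
                },
                "target": "https://example.org/iiif/journal/placeholder",
            }],
        }],
    },
    "items": [],  # Canvases for the individual journal issues go here.
}
print(json.dumps(manifest, indent=2))
```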
|
20,984
| 4,651,653,745
|
IssuesEvent
|
2016-10-03 11:01:30
|
99xt/aws-userpool-boilerplate
|
https://api.github.com/repos/99xt/aws-userpool-boilerplate
|
opened
|
Create Guide for verification process
|
Documentation
|
For the added verification process, make a guide using code snippets and explanations.
|
1.0
|
Create Guide for verification process - For the added verification process, make a guide using code snippets and explanations.
|
non_process
|
create guide for verification process for the added verification process make a guide with using code snippets and explanations
| 0
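The issue above asks for a verification guide with code snippets. As a hedged illustration only, assuming the boilerplate's verification step is Cognito's standard confirmation-code flow, the two relevant boto3 calls look like this; `CLIENT_ID` is a hypothetical placeholder, while `confirm_sign_up` and `resend_confirmation_code` are real `cognito-idp` operations.

```python
import boto3

CLIENT_ID = "your-app-client-id"  # hypothetical placeholder
cognito = boto3.client("cognito-idp")

def confirm_user(username: str, code: str) -> None:
    """Complete sign-up verification with the code sent to the user."""
    cognito.confirm_sign_up(
        ClientId=CLIENT_ID,
        Username=username,
        ConfirmationCode=code,
    )

def resend_code(username: str) -> None:
    """Re-send the confirmation code if the original one expired."""
    cognito.resend_confirmation_code(ClientId=CLIENT_ID, Username=username)
```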
|
58,229
| 6,584,171,619
|
IssuesEvent
|
2017-09-13 09:12:51
|
minishift/minishift
|
https://api.github.com/repos/minishift/minishift
|
opened
|
Improve isolation of features by using before- and afterFeature() hooks
|
component/integration-test kind/task priority/minor
|
The beforeFeature() and afterFeature() hooks were recently added to Godog. We could update to the newest Godog and improve the isolation of individual features. Right now we do cleanup and checks only before the whole suite, but we should do them for individual features to make sure nothing leaks into the following features, for example when we manipulate the config file.
|
1.0
|
Improve isolation of features by using before- and afterFeature() hooks - The beforeFeature() and afterFeature() hooks were recently added to Godog. We could update to the newest Godog and improve the isolation of individual features. Right now we do cleanup and checks only before the whole suite, but we should do them for individual features to make sure nothing leaks into the following features, for example when we manipulate the config file.
|
non_process
|
improve isolation of features by using before and afterfeature hooks the beforefeature and afterfeature hooks have been added into godog recently we could update to newest godog and improve the isolation of individual features right now we are doing cleaning and checks just before the whole suite but we should do it on individual features to make sure there is nothing which leaks into following features for example when we manipulate the config file
| 0
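The issue above concerns Go's Godog hooks, so any Python snippet can only be an analogue; still, the isolation idea maps directly onto pytest's module-scoped autouse fixtures, with one test module standing in for one feature. A hedged sketch:

```python
import pytest

# Analogue of beforeFeature()/afterFeature(): placed in conftest.py, this
# module-scoped autouse fixture runs setup once per test module and cleanup
# when the module finishes, so state such as a manipulated config file
# cannot leak into the next "feature".
@pytest.fixture(scope="module", autouse=True)
def isolate_feature(tmp_path_factory):
    config = tmp_path_factory.mktemp("config") / "config.json"
    config.write_text("{}")  # fresh config before the feature runs
    yield config
    config.unlink(missing_ok=True)  # cleanup afterwards (Python 3.8+)
```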
|
69,017
| 8,368,132,011
|
IssuesEvent
|
2018-10-04 14:04:31
|
cityofaustin/techstack
|
https://api.github.com/repos/cityofaustin/techstack
|
closed
|
Refine Service Page design
|
Content type: Service Page Resident Interface Size: XL Team: Design + Research
|
Refine the Service Page design, updating the design system as we go.
Current mockup: https://xd.adobe.com/view/cfb1ac5a-4640-4832-4d44-ad95b6fac902-b980/
See #650.
|
1.0
|
Refine Service Page design - Refine the Service Page design, updating the design system as we go.
Current mockup: https://xd.adobe.com/view/cfb1ac5a-4640-4832-4d44-ad95b6fac902-b980/
See #650.
|
non_process
|
refine service page design updating design system as we go refine service page design current mockup see
| 0
|