Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8,953
| 12,059,403,982
|
IssuesEvent
|
2020-04-15 19:12:42
|
fablabbcn/fablabs.io
|
https://api.github.com/repos/fablabbcn/fablabs.io
|
closed
|
Improve the e-mail notifications to Referee Labs
|
Approval Process enhancement
|
The [e-mail notifications](https://github.com/fablabbcn/fablabs/tree/d8f2ab6aa1808288735d25f92a7fe6864c4af511/app/views/referee_mailer) to `Referee Labs` should be improved, at least in this way:
- Add not just the `Lab` creator name, but all the `Lab` employees
- Add more text explaining what the `Referee Lab` is supposed to do
- Add a link to online documentation about the approval process (to be done)
- Add a link to the GitHub repo, if they need to open any issue
- Add a link to the `Discuss` section, if there's something to be discussed about the `Lab` to be approved
- Add greetings at the end of the e-mail
|
1.0
|
Improve the e-mail notifications to Referee Labs - The [e-mail notifications](https://github.com/fablabbcn/fablabs/tree/d8f2ab6aa1808288735d25f92a7fe6864c4af511/app/views/referee_mailer) to `Referee Labs` should be improved, at least in this way:
- Add not just the `Lab` creator name, but all the `Lab` employees
- Add more text explaining what the `Referee Lab` is supposed to do
- Add a link to online documentation about the approval process (to be done)
- Add a link to the GitHub repo, if they need to open any issue
- Add a link to the `Discuss` section, if there's something to be discussed about the `Lab` to be approved
- Add greetings at the end of the e-mail
|
process
|
improve the e mail notifications to referee labs the to referee labs should be improved at least in this way add not just the lab creator name but all the lab employees add more text explaining what the referee lab is supposed to do add a link to online documentation about the approval process to be done add a link to the github repo if they need to open any issue add a link to the discuss section if there s something to be discussed about the lab to be approved add greetings at the end of the e mail
| 1
|
19,959
| 26,436,698,162
|
IssuesEvent
|
2023-01-15 13:32:27
|
ThomasHSimm/Pesticide
|
https://api.github.com/repos/ThomasHSimm/Pesticide
|
closed
|
Derive "region" variable from the address
|
feature engineering pre-processing
|
The adress data is too unique and not useful for ML, but aggregated up to region, it might be. Therefore it would be best to create a derived variable column, `region`.
Addresses can be categorised as "NE" "NW" "SE" etc, which can later be one-hot-encoded.
|
1.0
|
Derive "region" variable from the address - The adress data is too unique and not useful for ML, but aggregated up to region, it might be. Therefore it would be best to create a derived variable column, `region`.
Addresses can be categorised as "NE" "NW" "SE" etc, which can later be one-hot-encoded.
|
process
|
derive region variable from the address the adress data is too unique and not useful for ml but aggregated up to region it might be therefore it would be best to create a derived variable column region addresses can be categorised as ne nw se etc which can later be one hot encoded
| 1
|
275,454
| 30,246,199,220
|
IssuesEvent
|
2023-07-06 16:39:56
|
kyverno/kyverno
|
https://api.github.com/repos/kyverno/kyverno
|
closed
|
Vulnerabilities detected
|
security
|
High or critical vulnerabilities detected. Scan results are below:
{"SchemaVersion":2,"ArtifactName":"ghcr.io/kyverno/kyverno:latest","ArtifactType":"container_image","Metadata":{"OS":{"Family":"alpine","Name":"3.18.0"},"ImageID":"sha256:787a01dd6a5a9f28f6f5632b263ba65cacdede3d0d1a69d3fbe8286e988ee4a1","DiffIDs":["sha256:93a1cc8181f0588a11761569adefd18b8d0fa345ff263ff96aaccf8ce54a41e2","sha256:ffe56a1c5f3878e9b5f803842adb9e2ce81584b6bd027e8599582aefe14a975b","sha256:c0187fd6370013aec7f90eb41f8681ce9320ffd05215d7c92f0b0c4717b168f4"],"RepoTags":["ghcr.io/kyverno/kyverno:latest"],"RepoDigests":["ghcr.io/kyverno/kyverno@sha256:26fefdc82340326e110cf0dfb41f7f9e176c395c1bccabd575b2e0c5a2f97b2f"],"ImageConfig":{"architecture":"amd64","author":"github.com/ko-build/ko","created":"2023-06-05T10:34:41Z","history":[{"author":"apko","created":"2023-06-05T10:34:41Z","created_by":"apko","comment":"This is an apko single-layer image"},{"author":"ko","created":"0001-01-01T00:00:00Z","created_by":"ko build ko://github.com/kyverno/kyverno/cmd/kyverno","comment":"kodata contents, at $KO_DATA_PATH"},{"author":"ko","created":"0001-01-01T00:00:00Z","created_by":"ko build ko://github.com/kyverno/kyverno/cmd/kyverno","comment":"go build output, at /ko-app/kyverno"}],"os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:93a1cc8181f0588a11761569adefd18b8d0fa345ff263ff96aaccf8ce54a41e2","sha256:ffe56a1c5f3878e9b5f803842adb9e2ce81584b6bd027e8599582aefe14a975b","sha256:c0187fd6370013aec7f90eb41f8681ce9320ffd05215d7c92f0b0c4717b168f4"]},"config":{"Entrypoint":["/ko-app/kyverno"],"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/ko-app","SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt","KO_DATA_PATH=/var/run/ko"],"User":"65532"}}},"Results":[{"Target":"ghcr.io/kyverno/kyverno:latest (alpine 3.18.0)","Class":"os-pkgs","Type":"alpine"},{"Target":"ko-app/kyverno","Class":"lang-pkgs","Type":"gobinary","Vulnerabilities":[{"VulnerabilityID":"CVE-2023-33959","PkgName":"github.com/notaryproject/notation-go","InstalledVersion":"v1.0.0-rc.3","FixedVersion":"1.0.0-rc.6","Layer":{"Digest":"sha256:bc711d9ec018f26400809477f1f4164852f7f9b8fc87cd792b9e268d11422ff1","DiffID":"sha256:c0187fd6370013aec7f90eb41f8681ce9320ffd05215d7c92f0b0c4717b168f4"},"SeveritySource":"ghsa","PrimaryURL":"https://avd.aquasec.com/nvd/cve-2023-33959","DataSource":{"ID":"ghsa","Name":"GitHub Security Advisory Go","URL":"https://github.com/advisories?query=type%3Areviewed+ecosystem%3Ago"},"Title":"notation-go's verification bypass can cause users to verify the wrong artifact","Description":"notation is a CLI tool to sign and verify OCI artifacts and container images. An attacker who has compromised a registry can cause users to verify the wrong artifact. The problem has been fixed in the release v1.0.0-rc.6. Users should upgrade their notation-go library to v1.0.0-rc.6 or above. Users unable to upgrade may restrict container registries to a set of secure and trusted container registries.","Severity":"HIGH","CweIDs":["CWE-347"],"CVSS":{"ghsa":{"V3Vector":"CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H","V3Score":8.8},"nvd":{"V3Vector":"CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H","V3Score":8.8}},"References":["https://github.com/advisories/GHSA-xhg5-42rf-296r","https://github.com/notaryproject/notation-go/releases/tag/v1.0.0-rc.6","https://github.com/notaryproject/notation-go/security/advisories/GHSA-xhg5-42rf-296r","https://nvd.nist.gov/vuln/detail/CVE-2023-33959"],"PublishedDate":"2023-06-06T19:15:00Z","LastModifiedDate":"2023-06-16T03:31:00Z"},{"VulnerabilityID":"CVE-2023-30551","PkgName":"github.com/sigstore/rekor","InstalledVersion":"v1.0.1","FixedVersion":"1.1.1","Layer":{"Digest":"sha256:bc711d9ec018f26400809477f1f4164852f7f9b8fc87cd792b9e268d11422ff1","DiffID":"sha256:c0187fd6370013aec7f90eb41f8681ce9320ffd05215d7c92f0b0c4717b168f4"},"SeveritySource":"ghsa","PrimaryURL":"https://avd.aquasec.com/nvd/cve-2023-30551","DataSource":{"ID":"ghsa","Name":"GitHub Security Advisory Go","URL":"https://github.com/advisories?query=type%3Areviewed+ecosystem%3Ago"},"Title":"Rekor's compressed archives can result in OOM conditions","Description":"Rekor is an open source software supply chain transparency log. Rekor prior to version 1.1.1 may crash due to out of memory (OOM) conditions caused by reading archive metadata files into memory without checking their sizes first. Verification of a JAR file submitted to Rekor can cause an out of memory crash if files within the META-INF directory of the JAR are sufficiently large. Parsing of an APK file submitted to Rekor can cause an out of memory crash if the .SIGN or .PKGINFO files within the APK are sufficiently large. The OOM crash has been patched in Rekor version 1.1.1. There are no known workarounds.","Severity":"HIGH","CweIDs":["CWE-770"],"CVSS":{"ghsa":{"V3Vector":"CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H","V3Score":7.5},"nvd":{"V3Vector":"CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H","V3Score":7.5}},"References":["https://github.com/advisories/GHSA-2h5h-59f5-c5x9","https://github.com/sigstore/rekor/commit/cf42ace82667025fe128f7a50cf6b4cdff51cc48","https://github.com/sigstore/rekor/releases/tag/v1.1.1","https://github.com/sigstore/rekor/security/advisories/GHSA-2h5h-59f5-c5x9","https://nvd.nist.gov/vuln/detail/CVE-2023-30551"],"PublishedDate":"2023-05-08T16:15:00Z","LastModifiedDate":"2023-05-12T16:27:00Z"}]}]}
|
True
|
Vulnerabilities detected - High or critical vulnerabilities detected. Scan results are below:
{"SchemaVersion":2,"ArtifactName":"ghcr.io/kyverno/kyverno:latest","ArtifactType":"container_image","Metadata":{"OS":{"Family":"alpine","Name":"3.18.0"},"ImageID":"sha256:787a01dd6a5a9f28f6f5632b263ba65cacdede3d0d1a69d3fbe8286e988ee4a1","DiffIDs":["sha256:93a1cc8181f0588a11761569adefd18b8d0fa345ff263ff96aaccf8ce54a41e2","sha256:ffe56a1c5f3878e9b5f803842adb9e2ce81584b6bd027e8599582aefe14a975b","sha256:c0187fd6370013aec7f90eb41f8681ce9320ffd05215d7c92f0b0c4717b168f4"],"RepoTags":["ghcr.io/kyverno/kyverno:latest"],"RepoDigests":["ghcr.io/kyverno/kyverno@sha256:26fefdc82340326e110cf0dfb41f7f9e176c395c1bccabd575b2e0c5a2f97b2f"],"ImageConfig":{"architecture":"amd64","author":"github.com/ko-build/ko","created":"2023-06-05T10:34:41Z","history":[{"author":"apko","created":"2023-06-05T10:34:41Z","created_by":"apko","comment":"This is an apko single-layer image"},{"author":"ko","created":"0001-01-01T00:00:00Z","created_by":"ko build ko://github.com/kyverno/kyverno/cmd/kyverno","comment":"kodata contents, at $KO_DATA_PATH"},{"author":"ko","created":"0001-01-01T00:00:00Z","created_by":"ko build ko://github.com/kyverno/kyverno/cmd/kyverno","comment":"go build output, at /ko-app/kyverno"}],"os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:93a1cc8181f0588a11761569adefd18b8d0fa345ff263ff96aaccf8ce54a41e2","sha256:ffe56a1c5f3878e9b5f803842adb9e2ce81584b6bd027e8599582aefe14a975b","sha256:c0187fd6370013aec7f90eb41f8681ce9320ffd05215d7c92f0b0c4717b168f4"]},"config":{"Entrypoint":["/ko-app/kyverno"],"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/ko-app","SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt","KO_DATA_PATH=/var/run/ko"],"User":"65532"}}},"Results":[{"Target":"ghcr.io/kyverno/kyverno:latest (alpine 3.18.0)","Class":"os-pkgs","Type":"alpine"},{"Target":"ko-app/kyverno","Class":"lang-pkgs","Type":"gobinary","Vulnerabilities":[{"VulnerabilityID":"CVE-2023-33959","PkgName":"github.com/notaryproject/notation-go","InstalledVersion":"v1.0.0-rc.3","FixedVersion":"1.0.0-rc.6","Layer":{"Digest":"sha256:bc711d9ec018f26400809477f1f4164852f7f9b8fc87cd792b9e268d11422ff1","DiffID":"sha256:c0187fd6370013aec7f90eb41f8681ce9320ffd05215d7c92f0b0c4717b168f4"},"SeveritySource":"ghsa","PrimaryURL":"https://avd.aquasec.com/nvd/cve-2023-33959","DataSource":{"ID":"ghsa","Name":"GitHub Security Advisory Go","URL":"https://github.com/advisories?query=type%3Areviewed+ecosystem%3Ago"},"Title":"notation-go's verification bypass can cause users to verify the wrong artifact","Description":"notation is a CLI tool to sign and verify OCI artifacts and container images. An attacker who has compromised a registry can cause users to verify the wrong artifact. The problem has been fixed in the release v1.0.0-rc.6. Users should upgrade their notation-go library to v1.0.0-rc.6 or above. Users unable to upgrade may restrict container registries to a set of secure and trusted container registries.","Severity":"HIGH","CweIDs":["CWE-347"],"CVSS":{"ghsa":{"V3Vector":"CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H","V3Score":8.8},"nvd":{"V3Vector":"CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H","V3Score":8.8}},"References":["https://github.com/advisories/GHSA-xhg5-42rf-296r","https://github.com/notaryproject/notation-go/releases/tag/v1.0.0-rc.6","https://github.com/notaryproject/notation-go/security/advisories/GHSA-xhg5-42rf-296r","https://nvd.nist.gov/vuln/detail/CVE-2023-33959"],"PublishedDate":"2023-06-06T19:15:00Z","LastModifiedDate":"2023-06-16T03:31:00Z"},{"VulnerabilityID":"CVE-2023-30551","PkgName":"github.com/sigstore/rekor","InstalledVersion":"v1.0.1","FixedVersion":"1.1.1","Layer":{"Digest":"sha256:bc711d9ec018f26400809477f1f4164852f7f9b8fc87cd792b9e268d11422ff1","DiffID":"sha256:c0187fd6370013aec7f90eb41f8681ce9320ffd05215d7c92f0b0c4717b168f4"},"SeveritySource":"ghsa","PrimaryURL":"https://avd.aquasec.com/nvd/cve-2023-30551","DataSource":{"ID":"ghsa","Name":"GitHub Security Advisory Go","URL":"https://github.com/advisories?query=type%3Areviewed+ecosystem%3Ago"},"Title":"Rekor's compressed archives can result in OOM conditions","Description":"Rekor is an open source software supply chain transparency log. Rekor prior to version 1.1.1 may crash due to out of memory (OOM) conditions caused by reading archive metadata files into memory without checking their sizes first. Verification of a JAR file submitted to Rekor can cause an out of memory crash if files within the META-INF directory of the JAR are sufficiently large. Parsing of an APK file submitted to Rekor can cause an out of memory crash if the .SIGN or .PKGINFO files within the APK are sufficiently large. The OOM crash has been patched in Rekor version 1.1.1. There are no known workarounds.","Severity":"HIGH","CweIDs":["CWE-770"],"CVSS":{"ghsa":{"V3Vector":"CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H","V3Score":7.5},"nvd":{"V3Vector":"CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H","V3Score":7.5}},"References":["https://github.com/advisories/GHSA-2h5h-59f5-c5x9","https://github.com/sigstore/rekor/commit/cf42ace82667025fe128f7a50cf6b4cdff51cc48","https://github.com/sigstore/rekor/releases/tag/v1.1.1","https://github.com/sigstore/rekor/security/advisories/GHSA-2h5h-59f5-c5x9","https://nvd.nist.gov/vuln/detail/CVE-2023-30551"],"PublishedDate":"2023-05-08T16:15:00Z","LastModifiedDate":"2023-05-12T16:27:00Z"}]}]}
|
non_process
|
vulnerabilities detected high or critical vulnerabilities detected scan results are below schemaversion artifactname ghcr io kyverno kyverno latest artifacttype container image metadata os family alpine name imageid diffids repotags repodigests imageconfig architecture author github com ko build ko created history os linux rootfs type layers diff ids config entrypoint env user results cvss ghsa cvss av n ac l pr n ui r s u c h i h a h nvd cvss av n ac l pr n ui r s u c h i h a h references publisheddate lastmodifieddate vulnerabilityid cve pkgname github com sigstore rekor installedversion fixedversion layer digest diffid severitysource ghsa primaryurl security advisory go url compressed archives can result in oom conditions description rekor is an open source software supply chain transparency log rekor prior to version may crash due to out of memory oom conditions caused by reading archive metadata files into memory without checking their sizes first verification of a jar file submitted to rekor can cause an out of memory crash if files within the meta inf directory of the jar are sufficiently large parsing of an apk file submitted to rekor can cause an out of memory crash if the sign or pkginfo files within the apk are sufficiently large the oom crash has been patched in rekor version there are no known workarounds severity high cweids cvss ghsa cvss av n ac l pr n ui n s u c n i n a h nvd cvss av n ac l pr n ui n s u c n i n a h references publisheddate lastmodifieddate
| 0
|
138,004
| 20,264,897,759
|
IssuesEvent
|
2022-02-15 11:05:08
|
Automattic/woocommerce-payments
|
https://api.github.com/repos/Automattic/woocommerce-payments
|
closed
|
Allow transactions list to be filtered by APM types
|
needs design size: small component: alternative payment methods
|
As of https://github.com/Automattic/woocommerce-payments/pull/691, we're hiding all transaction types that start with `payment`, as we do not support APMs yet. Once we start supporting them, we need to let them be used as filters.
|
1.0
|
Allow transactions list to be filtered by APM types - As of https://github.com/Automattic/woocommerce-payments/pull/691, we're hiding all transaction types that start with `payment`, as we do not support APMs yet. Once we start supporting them, we need to let them be used as filters.
|
non_process
|
allow transactions list to be filtered by apm types as of we re hiding all transaction types that start with payment as we do not support apms yet once we start supporting them we need to let them be used as filters
| 0
|
3,733
| 6,733,143,597
|
IssuesEvent
|
2017-10-18 13:58:51
|
york-region-tpss/stp
|
https://api.github.com/repos/york-region-tpss/stp
|
closed
|
Warranty Assessment Dashboard - Assign Inspector
|
form process workflow
|
create a form to assign an inspector to a warranty contract
|
1.0
|
Warranty Assessment Dashboard - Assign Inspector - create a form to assign an inspector to a warranty contract
|
process
|
warranty assessment dashboard assign inspector create a form to assign an inspector to a warranty contract
| 1
|
151,129
| 13,391,303,517
|
IssuesEvent
|
2020-09-02 22:15:19
|
kubesphere/kubesphere
|
https://api.github.com/repos/kubesphere/kubesphere
|
closed
|
Install KubeSphere on GKE
|
area/documentation stale
|
This guide walks you throungh the steps of KubeSphere minimal installation on Google Kubernetes Engine:
https://kubesphere.io/docs/v2.1/en/installation/install-on-gke/
|
1.0
|
Install KubeSphere on GKE - This guide walks you throungh the steps of KubeSphere minimal installation on Google Kubernetes Engine:
https://kubesphere.io/docs/v2.1/en/installation/install-on-gke/
|
non_process
|
install kubesphere on gke this guide walks you throungh the steps of kubesphere minimal installation on google kubernetes engine
| 0
|
24,900
| 4,128,039,528
|
IssuesEvent
|
2016-06-10 02:53:39
|
ParmEd/ParmEd
|
https://api.github.com/repos/ParmEd/ParmEd
|
closed
|
NetCDFReporter slows down simulation substantially
|
defect
|
After ~1 million steps have run, the overhead of adding to a NetCDF file via the canonical scipy implementation is debilitating. See [the report by George Pantelopulos on the OpenMM forums](https://simtk.org/forums/viewtopic.php?f=161&t=6388&view=unread#unread).
The solution here is to re-enable `netCDF4` support for writing. The existing `netcdf` module should always be faster reading, so this is really only applicable for writing.
|
1.0
|
NetCDFReporter slows down simulation substantially - After ~1 million steps have run, the overhead of adding to a NetCDF file via the canonical scipy implementation is debilitating. See [the report by George Pantelopulos on the OpenMM forums](https://simtk.org/forums/viewtopic.php?f=161&t=6388&view=unread#unread).
The solution here is to re-enable `netCDF4` support for writing. The existing `netcdf` module should always be faster reading, so this is really only applicable for writing.
|
non_process
|
netcdfreporter slows down simulation substantially after million steps have run the overhead of adding to a netcdf file via the canonical scipy implementation is debilitating see the solution here is to re enable support for writing the existing netcdf module should always be faster reading so this is really only applicable for writing
| 0
|
4,253
| 7,188,903,813
|
IssuesEvent
|
2018-02-02 11:56:19
|
GoogleCloudPlatform/google-cloud-python
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
|
opened
|
Promote 'TimestampWithNanos' from 'spanner' to 'api_core'
|
api: core type: process
|
It would be useful for other services as they begin to render nanosecond-resolution timestamps.
See: #4807
|
1.0
|
Promote 'TimestampWithNanos' from 'spanner' to 'api_core' - It would be useful for other services as they begin to render nanosecond-resolution timestamps.
See: #4807
|
process
|
promote timestampwithnanos from spanner to api core it would be useful for other services as they begin to render nanosecond resolution timestamps see
| 1
|
14,056
| 16,860,384,095
|
IssuesEvent
|
2021-06-21 12:19:51
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
2.23.0 can not recognize `binaryTargets: env("..")` inside `generator client` section
|
bug/2-confirmed kind/regression process/candidate topic: cli-format topic: cli-generate topic: env
|
### Bug description
`prisma [format|generate]` is not working with `env("BINARY_TARGETS")` in latest prisma (2.23.0) π’
**ERROR** π
```
Error: Schema Parsing P1012
Get config
error: Expected a String value, but received functional value "env".
--> schema.prisma:8
|
7 | provider = "prisma-client-js"
8 | binaryTargets = env("BINARY_TARGETS")
|
```
@janpio
### How to reproduce
just run `prisma format` or `prisma generate` with schema below
### Expected behavior
should works
### Prisma information
```prisma
datasource db {
provider = "mysql"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
binaryTargets = env("BINARY_TARGETS")
}
model User {
id Int @id
}
```
### Environment & setup
- OS: Mac OS
- Database: all
- Node.js version: v14.16.0
### Prisma Version
```
prisma : 2.23.0
@prisma/client : 2.23.0
Current platform : darwin
Query Engine : query-engine adf5e8cba3daf12d456d911d72b6e9418681b28b
Migration Engine : migration-engine-cli adf5e8cba3daf12d456d911d72b6e9418681b28b
Introspection Engine : introspection-core adf5e8cba3daf12d456d911d72b6e9418681b28b
Format Binary : prisma-fmt adf5e8cba3daf12d456d911d72b6e9418681b28b
Default Engines Hash : adf5e8cba3daf12d456d911d72b6e9418681b28b
Studio : 0.393.0
```
---
parent https://github.com/prisma/prisma/issues/6830
|
1.0
|
2.23.0 can not recognize `binaryTargets: env("..")` inside `generator client` section - ### Bug description
`prisma [format|generate]` is not working with `env("BINARY_TARGETS")` in latest prisma (2.23.0) π’
**ERROR** π
```
Error: Schema Parsing P1012
Get config
error: Expected a String value, but received functional value "env".
--> schema.prisma:8
|
7 | provider = "prisma-client-js"
8 | binaryTargets = env("BINARY_TARGETS")
|
```
@janpio
### How to reproduce
just run `prisma format` or `prisma generate` with schema below
### Expected behavior
should works
### Prisma information
```prisma
datasource db {
provider = "mysql"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
binaryTargets = env("BINARY_TARGETS")
}
model User {
id Int @id
}
```
### Environment & setup
- OS: Mac OS
- Database: all
- Node.js version: v14.16.0
### Prisma Version
```
prisma : 2.23.0
@prisma/client : 2.23.0
Current platform : darwin
Query Engine : query-engine adf5e8cba3daf12d456d911d72b6e9418681b28b
Migration Engine : migration-engine-cli adf5e8cba3daf12d456d911d72b6e9418681b28b
Introspection Engine : introspection-core adf5e8cba3daf12d456d911d72b6e9418681b28b
Format Binary : prisma-fmt adf5e8cba3daf12d456d911d72b6e9418681b28b
Default Engines Hash : adf5e8cba3daf12d456d911d72b6e9418681b28b
Studio : 0.393.0
```
---
parent https://github.com/prisma/prisma/issues/6830
|
process
|
can not recognize binarytargets env inside generator client section bug description prisma is not working with env binary targets in latest prisma π’ error π error schema parsing get config error expected a string value but received functional value env schema prisma provider prisma client js binarytargets env binary targets janpio how to reproduce just run prisma format or prisma generate with schema below expected behavior should works prisma information prisma datasource db provider mysql url env database url generator client provider prisma client js binarytargets env binary targets model user id int id environment setup os mac os database all node js version prisma version prisma prisma client current platform darwin query engine query engine migration engine migration engine cli introspection engine introspection core format binary prisma fmt default engines hash studio parent
| 1
|
215,921
| 16,722,678,328
|
IssuesEvent
|
2021-06-10 09:12:34
|
nhost/hasura-backend-plus
|
https://api.github.com/repos/nhost/hasura-backend-plus
|
closed
|
memory leak in Jest tests
|
Priority: Low Scope: Testing Type: Bug
|
Jest tests are therefore ran with `--forceExit`
The way express servers are started and closed for every test file probably has to be reviewed
|
1.0
|
memory leak in Jest tests - Jest tests are therefore ran with `--forceExit`
The way express servers are started and closed for every test file probably has to be reviewed
|
non_process
|
memory leak in jest tests jest tests are therefore ran with forceexit the way express servers are started and closed for every test file probably has to be reviewed
| 0
|
188,874
| 14,477,633,016
|
IssuesEvent
|
2020-12-10 06:57:31
|
qrsforever/web_clipper_data
|
https://api.github.com/repos/qrsforever/web_clipper_data
|
opened
|
SIFT | How To Use SIFT For Image Matching In Python
|
test
|
## Overview
- A beginner-friendly introduction to the powerful SIFT (Scale Invariant Feature Transform) technique
- Learn how to perform Feature Matching using SIFT
- We also showcase SIFT in Python through hands-on coding
## Introduction
Take a look at the below collection of images and think of the common element between them:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-20-17-49-55.png)
The resplendent Eiffel Tower, of course! The keen-eyed among you will also have noticed that each image has a different background, is captured from different angles, and also has different objects in the foreground (in some cases).
Iβm sure all of this took you a fraction of a second to figure out. It doesnβt matter if the image is rotated at a weird angle or zoomed in to show only half of the Tower. This is primarily because you have seen the images of the Eiffel Tower multiple times and your memory easily recalls its features. We naturally understand that the scale or angle of the image may change but the object remains the same.
But machines have an almighty struggle with the same idea. Itβs a challenge for them to identify the object in an image if we change certain things (like the angle or the scale). Hereβs the good news β machines are super flexible and we can teach them to identify images at an almost human-level.
This is one of the most exciting aspects of working in [computer vision](https://courses.analyticsvidhya.com/courses/computer-vision-using-deep-learning-version2?utm_source=blog&utm_medium=detailed-guide-powerful-sift-technique-image-matching-python)!
So, in this article, we will talk about an image matching algorithm that identifies the key features from the images and is able to match these features to a new image of the same object. Letβs get rolling!
## Table of Contents
1. Introduction to SIFT
2. Constructing a Scale Space
1. Gaussian Blur
2. Difference of Gaussian
3. Keypoint Localization
1. Local Maxima/Minima
2. Keypoint Selection
4. Orientation Assignment
1. Calculate Magnitude & Orientation
2. Create Histogram of Magnitude & Orientation
5. Keypoint Descriptor
6. Feature Matching
## Introduction to SIFT
> SIFT, or Scale Invariant Feature Transform, is a feature detection algorithm in Computer Vision.
SIFT helps locate the local features in an image, commonly known as the β_keypoints_β of the image. These keypoints are scale & rotation invariant that can be used for various computer vision applications, like image matching, object detection, scene detection, etc.
We can also use the keypoints generated using SIFT as features for the image during model training. **The major advantage of SIFT features, over edge features or hog features, is that they are not affected by the size or orientation of the image.**
For example, here is another image of the Eiffel Tower along with its smaller version. The keypoints of the object in the first image are matched with the keypoints found in the second image. The same goes for two images when the object in the other image is slightly rotated. Amazing, right?
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-01-16-08-16.png)
Letβs understand how these keypoints are identified and what are the techniques used to ensure the scale and rotation invariance. Broadly speaking, the entire process can be divided into 4 parts:
- **Constructing a Scale Space:** To make sure that features are scale-independent
- **Keypoint Localisation:** Identifying the suitable features or keypoints
- **Orientation Assignment:** Ensure the keypoints are rotation invariant
- **Keypoint Descriptor:** Assign a unique fingerprint to each keypoint
Finally, we can use these keypoints for feature matching!
_This article is based on the original paper by David G. Lowe. Here is the link: [Distinctive Image Features from Scale-Invariant Keypoints](https://people.eecs.berkeley.edu/~malik/cs294/lowe-ijcv04.pdf).
_
## Constructing the Scale Space
We need to identify the most distinct features in a given image while ignoring any noise. Additionally, we need to ensure that the features are not scale-dependent. These are critical concepts so letβs talk about them one-by-one.
> We use the **Gaussian Blurring technique** to reduce the noise in an image.
So, for every pixel in an image, the Gaussian Blur calculates a value based on its neighboring pixels. Below is an example of image before and after applying the Gaussian Blur. As you can see, the texture and minor details are removed from the image and only the relevant information like the shape and edges remain:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/index_110.png)
Gaussian Blur successfully removed the noise from the images and we have highlighted the important features of the image. Now, _we need to ensure that these features must not be scale-dependent._ This means we will be searching for these features on multiple scales, by creating a βscale spaceβ.
> Scale space is a collection of images having different scales, generated from a single image.
Hence, these blur images are created for multiple scales. To create a new set of images of different scales, we will take the original image and reduce the scale by half. For each new image, we will create blur versions as we saw above.
Here is an example to understand it in a better manner. We have the original image of size (275, 183) and a scaled image of dimension (138, 92). For both the images, two blur images are created:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/index_21.png)
You might be thinking β how many times do we need to scale the image and how many subsequent blur images need to be created for each scaled image? **The ideal number of octaves should be four**, and for each octave, the number of blur images should be five.
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-24-18-27-46.png)
### Difference of Gaussian
So far we have created images of multiple scales (often represented by Ο) and used Gaussian blur for each of them to reduce the noise in the image. Next, we will try to enhance the features using a technique called Difference of Gaussians or DoG.
> Difference of Gaussian is a feature enhancement algorithm that involves the subtraction of one blurred version of an original image from another, less blurred version of the original.
DoG creates another set of images, for each octave, by subtracting every image from the previous image in the same scale. Here is a visual explanation of how DoG is implemented:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-25-12-48-03.png)
_Note: The image is taken from the original paper. The octaves are now represented in a vertical form for a clearer view.Β _
Let us create the DoG for the images in scale space. Take a look at the below diagram. On the left, we have 5 images, all from the first octave (thus having the same scale). Each subsequent image is created by applying the Gaussian blur over the previous image.
On the right, we have four images generated by subtracting the consecutive Gaussians. The results are jaw-dropping!
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-25-14-18-26.png)
We have enhanced features for each of these images. Note that here I am implementing it only for the first octave but the same process happens for all the octaves.
Now that we have a new set of images, we are going to use this to find the important keypoints.
## Keypoint Localization
Once the images have been created, the next step is to find the important keypoints from the image that can be used for feature matching. **The idea is to find the local maxima and minima for the images.** This part is divided into two steps:
1. Find the local maxima and minima
2. Remove low contrast keypoints (keypoint selection)
### Local Maxima and Local Minima
> To locate the local maxima and minima, we go through every pixel in the image and compare it with its neighboring pixels.
When I say βneighboringβ, this not only includes the surrounding pixels of that image (in which the pixel lies), but also the nine pixels for the previous and next image in the octave.
This means that every pixel value is compared with 26 other pixel values to find whether it is the local maxima/minima. For example, in the below diagram, we have three images from the first octave. The pixel marked _x_ is compared with the neighboring pixels (in green) and is selected as a keypoint if it is the highest or lowest among the neighbors:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-25-16-50-01.png)
We now have potential keypoints that represent the images and are scale-invariant. We will apply the last check over the selected keypoints to ensure that these are the most accurate keypoints to represent the image.
### Keypoint Selection
Kudos! So far we have successfully generated scale-invariant keypoints. But some of these keypoints may not be robust to noise. This is why we need to perform a final check to make sure that we have the most accurate keypoints to represent the image features.
**Hence, we will eliminate the keypoints that have low contrast, or lie very close to the edge.**
To deal with the low contrast keypoints, a second-order Taylor expansion is computed for each keypoint. If the resulting value is less than 0.03 (in magnitude), we reject the keypoint.
So what do we do about the remaining keypoints? Well, we perform a check to identify the poorly located keypoints. These are the keypoints that are close to the edge and have a high edge response but may not be robust to a small amount of noise. A second-order Hessian matrix is used to identify such keypoints. You can go through the math behind this here.
Now that we have performed both the contrast test and the edge test to reject the unstable keypoints, we will now assign an orientation value for each keypoint to make the rotation invariant.
## Orientation Assignment
At this stage, we have a set of stable keypoints for the images. We will now assign an orientation to each of these keypoints so that they are invariant to rotation. We can again divide this step into two smaller steps:
1. Calculate the magnitude and orientation
2. Create a histogram for magnitude and orientation
### Calculate Magnitude and Orientation
Consider the sample image shown below:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-25-19-22-24.png)
Letβs say we want to find the magnitude and orientation for the pixel value in red. For this, we will calculate the gradients in x and y directions by taking the difference between 55 & 46 and 56 & 42. This comes out to be Gx = 9 and Gy = 14 respectively.
Once we have the gradients, we can find the magnitude and orientation using the following formulas:
Magnitude =Β β\[(Gx)2+(Gy)2]Β =Β 16.64
Ξ¦ = atan(Gy / Gx) = atan(1.55) = 57.17
> The magnitude represents the intensity of the pixel and the orientation gives the direction for the same.
We can now create a histogram given that we have these magnitude and orientation values for the pixels.
### Creating a Histogram for Magnitude and Orientation
On the x-axis, we will have bins for angle values, like 0-9, 10 β 19, 20-29, up to 360. Since our angle value is 57, it will fall in the 6th bin. The 6th bin value will be in proportion to the magnitude of the pixel, i.e. 16.64.Β We will do this for all the pixels around the keypoint.
This is how we get the below histogram:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-26-18-53-12.png)
_You can refer to this article for a much detailed explanation for calculating the gradient, magnitude, orientation and plotting histogram β_ _[A Valuable Introduction to the Histogram of Oriented Gradients](https://www.analyticsvidhya.com/blog/2019/09/feature-engineering-images-introduction-hog-feature-descriptor/)._
This histogram would peak at some point. **The bin at which we see the peak will be the orientation for the keypoint.** Additionally, if there is another significant peak (seen between 80 β 100%), then another keypoint is generated with the magnitude and scale the same as the keypoint used to generate the histogram. And the angle or orientation will be equal to the new bin that has the peak.
Effectively at this point, we can say that there can be a small increase in the number of keypoints.
## Keypoint Descriptor
This is the final step for SIFT. So far, we have stable keypoints that are scale-invariant and rotation invariant. In this section, we will use the neighboring pixels, their orientations, and magnitude, to generate a unique fingerprint for this keypoint called a βdescriptorβ.
Additionally, since we use the surrounding pixels, the descriptors will be partially invariant to illumination or brightness of the images.
We will first take a 16Γ16 neighborhood around the keypoint. This 16Γ16 block is further divided into 4Γ4 sub-blocks and for each of these sub-blocks, we generate the histogram using magnitude and orientation.
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-26-20-10-52.png)
At this stage, the bin size is increased and we take only 8 bins (not 36). Each of these arrows represents the 8 bins and the length of the arrows define the magnitude. So, we will have a total of 128 bin values for every keypoint.
Here is an example:
| | import cv2 |
| | import matplotlib.pyplot as plt |
| | %matplotlib inline |
| | #reading image |
| | img1 = cv2.imread('eiffel_2.jpeg') |
| | gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) |
| | #keypoints |
| | sift = cv2.xfeatures2d.SIFT_create() |
| | keypoints_1, descriptors_1 = sift.detectAndCompute(img1,None) |
| | img_1 = cv2.drawKeypoints(gray1,keypoints_1,img1) |
| | plt.imshow(img_1) |
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/index_41.png)
## Feature Matching
We will now use the SIFT features for feature matching. For this purpose, I have downloaded two images of the Eiffel Tower, taken from different positions. You can try it with any two images that you want.
Here are the two images that I have used:
| | import cv2 |
| | import matplotlib.pyplot as plt |
| | %matplotlib inline |
\| \| # read images |
| | img1 = cv2.imread('eiffel_2.jpeg') |
| | img2 = cv2.imread('eiffel_1.jpg') |
| | img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) |
| | img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) |
| | figure, ax = plt.subplots(1, 2, figsize=(16, 8)) |
| | ax\[0].imshow(img1, cmap='gray') |
| | ax\[1].imshow(img2, cmap='gray') |
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/index_71.png)
Now, for both these images, we are going to generate the SIFT features. First, we have to construct a SIFT object and then use the function _detectAndCompute_ to get the keypoints. It will return two values β the keypoints and the descriptors.
Letβs determine the keypoints and print the total number of keypoints found in each image:
| | import cv2 |
| | import matplotlib.pyplot as plt |
| | %matplotlib inline |
\| \| # read images |
| | img1 = cv2.imread('eiffel_2.jpeg') |
| | img2 = cv2.imread('eiffel_1.jpg') |
| | img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) |
| | img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) |
| | #sift |
| | sift = cv2.xfeatures2d.SIFT_create() |
| | keypoints_1, descriptors_1 = sift.detectAndCompute(img1,None) |
| | keypoints_2, descriptors_2 = sift.detectAndCompute(img2,None) |
| | len(keypoints_1), len(keypoints_2) |
283, 540
Next, letβs try and match the features from image 1 with features from image 2. We will be using the function _match()_ from the _BFmatcher_ (brute force match) module. Also, we will draw lines between the features that match in both the images. This can be done using the _drawMatches_ function in OpenCV.
| | import cv2 |
| | import matplotlib.pyplot as plt |
| | %matplotlib inline |
\| \| # read images |
| | img1 = cv2.imread('eiffel_2.jpeg') |
| | img2 = cv2.imread('eiffel_1.jpg') |
| | img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) |
| | img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) |
| | #sift |
| | sift = cv2.xfeatures2d.SIFT_create() |
| | keypoints_1, descriptors_1 = sift.detectAndCompute(img1,None) |
| | keypoints_2, descriptors_2 = sift.detectAndCompute(img2,None) |
| | #feature matching |
| | bf = cv2.BFMatcher(cv2.NORM_L1, crossCheck=True) |
| | matches = bf.match(descriptors_1,descriptors_2) |
| | matches = sorted(matches, key = lambda x:x.distance) |
| | img3 = cv2.drawMatches(img1, keypoints_1, img2, keypoints_2, matches\[:50], img2, flags=2) |
| | plt.imshow(img3),plt.show() |
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/index_61.png)I have plotted only 50 matches here for clarityβs sake. You can increase the number according to what you prefer. To find out how many keypoints are matched, we can print the length of the variable _matches_. In this case, the answer would be 190.
## End Notes
In this article, we discussed the SIFT feature matching algorithm in detail. Here is a site that provides excellent visualization for each step of SIFT. You can add your own image and it will create the keypoints for that image as well. Check it out [here](http://weitz.de/sift/).
Another popular feature matching algorithm is SURF (Speeded Up Robust Feature), which is simply a faster version of SIFT. I would encourage you to go ahead and explore it as well.
And if youβre new to the world of computer vision and image data, I recommend checking out the below course:
- [Computer Vision using Deep Learning 2.0](https://courses.analyticsvidhya.com/courses/computer-vision-using-deep-learning-version2?utm_source=blog&utm_medium=detailed-guide-powerful-sift-technique-image-matching-python)
You can also read this article on our Mobile APP [
](https://play.google.com/store/apps/details?id=com.analyticsvidhya.android&utm_source=blog_article&utm_campaign=blog&pcampaignid=MKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1)[
](https://apps.apple.com/us/app/analytics-vidhya/id1470025572)
### Related Articles
[https://www.analyticsvidhya.com/blog/2019/10/detailed-guide-powerful-sift-technique-image-matching-python/](https://www.analyticsvidhya.com/blog/2019/10/detailed-guide-powerful-sift-technique-image-matching-python/)
|
1.0
|
SIFT | How To Use SIFT For Image Matching In Python - ## Overview
- A beginner-friendly introduction to the powerful SIFT (Scale Invariant Feature Transform) technique
- Learn how to perform Feature Matching using SIFT
- We also showcase SIFT in Python through hands-on coding
## Introduction
Take a look at the below collection of images and think of the common element between them:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-20-17-49-55.png)
The resplendent Eiffel Tower, of course! The keen-eyed among you will also have noticed that each image has a different background, is captured from different angles, and also has different objects in the foreground (in some cases).
Iβm sure all of this took you a fraction of a second to figure out. It doesnβt matter if the image is rotated at a weird angle or zoomed in to show only half of the Tower. This is primarily because you have seen the images of the Eiffel Tower multiple times and your memory easily recalls its features. We naturally understand that the scale or angle of the image may change but the object remains the same.
But machines have an almighty struggle with the same idea. Itβs a challenge for them to identify the object in an image if we change certain things (like the angle or the scale). Hereβs the good news β machines are super flexible and we can teach them to identify images at an almost human-level.
This is one of the most exciting aspects of working in [computer vision](https://courses.analyticsvidhya.com/courses/computer-vision-using-deep-learning-version2?utm_source=blog&utm_medium=detailed-guide-powerful-sift-technique-image-matching-python)!
So, in this article, we will talk about an image matching algorithm that identifies the key features from the images and is able to match these features to a new image of the same object. Letβs get rolling!
## Table of Contents
1. Introduction to SIFT
2. Constructing a Scale Space
1. Gaussian Blur
2. Difference of Gaussian
3. Keypoint Localization
1. Local Maxima/Minima
2. Keypoint Selection
4. Orientation Assignment
1. Calculate Magnitude & Orientation
2. Create Histogram of Magnitude & Orientation
5. Keypoint Descriptor
6. Feature Matching
## Introduction to SIFT
> SIFT, or Scale Invariant Feature Transform, is a feature detection algorithm in Computer Vision.
SIFT helps locate the local features in an image, commonly known as the β_keypoints_β of the image. These keypoints are scale & rotation invariant that can be used for various computer vision applications, like image matching, object detection, scene detection, etc.
We can also use the keypoints generated using SIFT as features for the image during model training. **The major advantage of SIFT features, over edge features or hog features, is that they are not affected by the size or orientation of the image.**
For example, here is another image of the Eiffel Tower along with its smaller version. The keypoints of the object in the first image are matched with the keypoints found in the second image. The same goes for two images when the object in the other image is slightly rotated. Amazing, right?
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-01-16-08-16.png)
Letβs understand how these keypoints are identified and what are the techniques used to ensure the scale and rotation invariance. Broadly speaking, the entire process can be divided into 4 parts:
- **Constructing a Scale Space:** To make sure that features are scale-independent
- **Keypoint Localisation:** Identifying the suitable features or keypoints
- **Orientation Assignment:** Ensure the keypoints are rotation invariant
- **Keypoint Descriptor:** Assign a unique fingerprint to each keypoint
Finally, we can use these keypoints for feature matching!
_This article is based on the original paper by David G. Lowe. Here is the link: [Distinctive Image Features from Scale-Invariant Keypoints](https://people.eecs.berkeley.edu/~malik/cs294/lowe-ijcv04.pdf).
_
## Constructing the Scale Space
We need to identify the most distinct features in a given image while ignoring any noise. Additionally, we need to ensure that the features are not scale-dependent. These are critical concepts so letβs talk about them one-by-one.
> We use the **Gaussian Blurring technique** to reduce the noise in an image.
So, for every pixel in an image, the Gaussian Blur calculates a value based on its neighboring pixels. Below is an example of image before and after applying the Gaussian Blur. As you can see, the texture and minor details are removed from the image and only the relevant information like the shape and edges remain:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/index_110.png)
Gaussian Blur successfully removed the noise from the images and we have highlighted the important features of the image. Now, _we need to ensure that these features must not be scale-dependent._ This means we will be searching for these features on multiple scales, by creating a βscale spaceβ.
> Scale space is a collection of images having different scales, generated from a single image.
Hence, these blur images are created for multiple scales. To create a new set of images of different scales, we will take the original image and reduce the scale by half. For each new image, we will create blur versions as we saw above.
Here is an example to understand it in a better manner. We have the original image of size (275, 183) and a scaled image of dimension (138, 92). For both the images, two blur images are created:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/index_21.png)
You might be thinking β how many times do we need to scale the image and how many subsequent blur images need to be created for each scaled image? **The ideal number of octaves should be four**, and for each octave, the number of blur images should be five.
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-24-18-27-46.png)
### Difference of Gaussian
So far we have created images of multiple scales (often represented by Ο) and used Gaussian blur for each of them to reduce the noise in the image. Next, we will try to enhance the features using a technique called Difference of Gaussians or DoG.
> Difference of Gaussian is a feature enhancement algorithm that involves the subtraction of one blurred version of an original image from another, less blurred version of the original.
DoG creates another set of images, for each octave, by subtracting every image from the previous image in the same scale. Here is a visual explanation of how DoG is implemented:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-25-12-48-03.png)
_Note: The image is taken from the original paper. The octaves are now represented in a vertical form for a clearer view.Β _
Let us create the DoG for the images in scale space. Take a look at the below diagram. On the left, we have 5 images, all from the first octave (thus having the same scale). Each subsequent image is created by applying the Gaussian blur over the previous image.
On the right, we have four images generated by subtracting the consecutive Gaussians. The results are jaw-dropping!
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-25-14-18-26.png)
We have enhanced features for each of these images. Note that here I am implementing it only for the first octave but the same process happens for all the octaves.
Now that we have a new set of images, we are going to use this to find the important keypoints.
## Keypoint Localization
Once the images have been created, the next step is to find the important keypoints from the image that can be used for feature matching. **The idea is to find the local maxima and minima for the images.** This part is divided into two steps:
1. Find the local maxima and minima
2. Remove low contrast keypoints (keypoint selection)
### Local Maxima and Local Minima
> To locate the local maxima and minima, we go through every pixel in the image and compare it with its neighboring pixels.
When I say βneighboringβ, this not only includes the surrounding pixels of that image (in which the pixel lies), but also the nine pixels for the previous and next image in the octave.
This means that every pixel value is compared with 26 other pixel values to find whether it is the local maxima/minima. For example, in the below diagram, we have three images from the first octave. The pixel marked _x_ is compared with the neighboring pixels (in green) and is selected as a keypoint if it is the highest or lowest among the neighbors:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-25-16-50-01.png)
We now have potential keypoints that represent the images and are scale-invariant. We will apply the last check over the selected keypoints to ensure that these are the most accurate keypoints to represent the image.
### Keypoint Selection
Kudos! So far we have successfully generated scale-invariant keypoints. But some of these keypoints may not be robust to noise. This is why we need to perform a final check to make sure that we have the most accurate keypoints to represent the image features.
**Hence, we will eliminate the keypoints that have low contrast, or lie very close to the edge.**
To deal with the low contrast keypoints, a second-order Taylor expansion is computed for each keypoint. If the resulting value is less than 0.03 (in magnitude), we reject the keypoint.
So what do we do about the remaining keypoints? Well, we perform a check to identify the poorly located keypoints. These are the keypoints that are close to the edge and have a high edge response but may not be robust to a small amount of noise. A second-order Hessian matrix is used to identify such keypoints. You can go through the math behind this here.
Now that we have performed both the contrast test and the edge test to reject the unstable keypoints, we will now assign an orientation value for each keypoint to make the rotation invariant.
## Orientation Assignment
At this stage, we have a set of stable keypoints for the images. We will now assign an orientation to each of these keypoints so that they are invariant to rotation. We can again divide this step into two smaller steps:
1. Calculate the magnitude and orientation
2. Create a histogram for magnitude and orientation
### Calculate Magnitude and Orientation
Consider the sample image shown below:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-25-19-22-24.png)
Letβs say we want to find the magnitude and orientation for the pixel value in red. For this, we will calculate the gradients in x and y directions by taking the difference between 55 & 46 and 56 & 42. This comes out to be Gx = 9 and Gy = 14 respectively.
Once we have the gradients, we can find the magnitude and orientation using the following formulas:
Magnitude =Β β\[(Gx)2+(Gy)2]Β =Β 16.64
Ξ¦ = atan(Gy / Gx) = atan(1.55) = 57.17
> The magnitude represents the intensity of the pixel and the orientation gives the direction for the same.
We can now create a histogram given that we have these magnitude and orientation values for the pixels.
### Creating a Histogram for Magnitude and Orientation
On the x-axis, we will have bins for angle values, like 0-9, 10 β 19, 20-29, up to 360. Since our angle value is 57, it will fall in the 6th bin. The 6th bin value will be in proportion to the magnitude of the pixel, i.e. 16.64.Β We will do this for all the pixels around the keypoint.
This is how we get the below histogram:
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-26-18-53-12.png)
_You can refer to this article for a much detailed explanation for calculating the gradient, magnitude, orientation and plotting histogram β_ _[A Valuable Introduction to the Histogram of Oriented Gradients](https://www.analyticsvidhya.com/blog/2019/09/feature-engineering-images-introduction-hog-feature-descriptor/)._
This histogram would peak at some point. **The bin at which we see the peak will be the orientation for the keypoint.** Additionally, if there is another significant peak (seen between 80 β 100%), then another keypoint is generated with the magnitude and scale the same as the keypoint used to generate the histogram. And the angle or orientation will be equal to the new bin that has the peak.
Effectively at this point, we can say that there can be a small increase in the number of keypoints.
## Keypoint Descriptor
This is the final step for SIFT. So far, we have stable keypoints that are scale-invariant and rotation invariant. In this section, we will use the neighboring pixels, their orientations, and magnitude, to generate a unique fingerprint for this keypoint called a βdescriptorβ.
Additionally, since we use the surrounding pixels, the descriptors will be partially invariant to illumination or brightness of the images.
We will first take a 16Γ16 neighborhood around the keypoint. This 16Γ16 block is further divided into 4Γ4 sub-blocks and for each of these sub-blocks, we generate the histogram using magnitude and orientation.
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/Screenshot-from-2019-09-26-20-10-52.png)
At this stage, the bin size is increased and we take only 8 bins (not 36). Each of these arrows represents the 8 bins and the length of the arrows define the magnitude. So, we will have a total of 128 bin values for every keypoint.
Here is an example:
| | import cv2 |
| | import matplotlib.pyplot as plt |
| | %matplotlib inline |
| | #reading image |
| | img1 = cv2.imread('eiffel_2.jpeg') |
| | gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) |
| | #keypoints |
| | sift = cv2.xfeatures2d.SIFT_create() |
| | keypoints_1, descriptors_1 = sift.detectAndCompute(img1,None) |
| | img_1 = cv2.drawKeypoints(gray1,keypoints_1,img1) |
| | plt.imshow(img_1) |
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/index_41.png)
## Feature Matching
We will now use the SIFT features for feature matching. For this purpose, I have downloaded two images of the Eiffel Tower, taken from different positions. You can try it with any two images that you want.
Here are the two images that I have used:
| | import cv2 |
| | import matplotlib.pyplot as plt |
| | %matplotlib inline |
\| \| # read images |
| | img1 = cv2.imread('eiffel_2.jpeg') |
| | img2 = cv2.imread('eiffel_1.jpg') |
| | img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) |
| | img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) |
| | figure, ax = plt.subplots(1, 2, figsize=(16, 8)) |
| | ax\[0].imshow(img1, cmap='gray') |
| | ax\[1].imshow(img2, cmap='gray') |
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/index_71.png)
Now, for both these images, we are going to generate the SIFT features. First, we have to construct a SIFT object and then use the function _detectAndCompute_ to get the keypoints. It will return two values β the keypoints and the descriptors.
Letβs determine the keypoints and print the total number of keypoints found in each image:
| | import cv2 |
| | import matplotlib.pyplot as plt |
| | %matplotlib inline |
\| \| # read images |
| | img1 = cv2.imread('eiffel_2.jpeg') |
| | img2 = cv2.imread('eiffel_1.jpg') |
| | img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) |
| | img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) |
| | #sift |
| | sift = cv2.xfeatures2d.SIFT_create() |
| | keypoints_1, descriptors_1 = sift.detectAndCompute(img1,None) |
| | keypoints_2, descriptors_2 = sift.detectAndCompute(img2,None) |
| | len(keypoints_1), len(keypoints_2) |
283, 540
Next, letβs try and match the features from image 1 with features from image 2. We will be using the function _match()_ from the _BFmatcher_ (brute force match) module. Also, we will draw lines between the features that match in both the images. This can be done using the _drawMatches_ function in OpenCV.
| | import cv2 |
| | import matplotlib.pyplot as plt |
| | %matplotlib inline |
\| \| # read images |
| | img1 = cv2.imread('eiffel_2.jpeg') |
| | img2 = cv2.imread('eiffel_1.jpg') |
| | img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY) |
| | img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) |
| | #sift |
| | sift = cv2.xfeatures2d.SIFT_create() |
| | keypoints_1, descriptors_1 = sift.detectAndCompute(img1,None) |
| | keypoints_2, descriptors_2 = sift.detectAndCompute(img2,None) |
| | #feature matching |
| | bf = cv2.BFMatcher(cv2.NORM_L1, crossCheck=True) |
| | matches = bf.match(descriptors_1,descriptors_2) |
| | matches = sorted(matches, key = lambda x:x.distance) |
| | img3 = cv2.drawMatches(img1, keypoints_1, img2, keypoints_2, matches\[:50], img2, flags=2) |
| | plt.imshow(img3),plt.show() |
[
](https://cdn.analyticsvidhya.com/wp-content/uploads/2019/09/index_61.png)I have plotted only 50 matches here for clarityβs sake. You can increase the number according to what you prefer. To find out how many keypoints are matched, we can print the length of the variable _matches_. In this case, the answer would be 190.
## End Notes
In this article, we discussed the SIFT feature matching algorithm in detail. Here is a site that provides excellent visualization for each step of SIFT. You can add your own image and it will create the keypoints for that image as well. Check it out [here](http://weitz.de/sift/).
Another popular feature matching algorithm is SURF (Speeded Up Robust Feature), which is simply a faster version of SIFT. I would encourage you to go ahead and explore it as well.
And if youβre new to the world of computer vision and image data, I recommend checking out the below course:
- [Computer Vision using Deep Learning 2.0](https://courses.analyticsvidhya.com/courses/computer-vision-using-deep-learning-version2?utm_source=blog&utm_medium=detailed-guide-powerful-sift-technique-image-matching-python)
You can also read this article on our Mobile APP [
](https://play.google.com/store/apps/details?id=com.analyticsvidhya.android&utm_source=blog_article&utm_campaign=blog&pcampaignid=MKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1)[
](https://apps.apple.com/us/app/analytics-vidhya/id1470025572)
### Related Articles
[https://www.analyticsvidhya.com/blog/2019/10/detailed-guide-powerful-sift-technique-image-matching-python/](https://www.analyticsvidhya.com/blog/2019/10/detailed-guide-powerful-sift-technique-image-matching-python/)
|
non_process
|
sift how to use sift for image matching in python overview a beginner friendly introduction to the powerful sift scale invariant feature transform technique learn how to perform feature matching using sift we also showcase sift in python through hands on coding introduction take a look at the below collection of images and think of the common element between them the resplendent eiffel tower of course the keen eyed among you will also have noticed that each image has a different background is captured from different angles and also has different objects in the foreground in some cases iβm sure all of this took you a fraction of a second to figure out it doesnβt matter if the image is rotated at a weird angle or zoomed in to show only half of the tower this is primarily because you have seen the images of the eiffel tower multiple times and your memory easily recalls its features we naturally understand that the scale or angle of the image may change but the object remains the same but machines have an almighty struggle with the same idea itβs a challenge for them to identify the object in an image if we change certain things like the angle or the scale hereβs the good news β machines are super flexible and we can teach them to identify images at an almost human level this is one of the most exciting aspects of working in so in this article we will talk about an image matching algorithm that identifies the key features from the images and is able to match these features to a new image of the same object letβs get rolling table of contents introduction to sift constructing a scale space gaussian blur difference of gaussian keypoint localization local maxima minima keypoint selection orientation assignment calculate magnitude orientation create histogram of magnitude orientation keypoint descriptor feature matching introduction to sift sift or scale invariant feature transform is a feature detection algorithm in computer vision sift helps locate the local features in an image commonly known as the β keypoints β of the image these keypoints are scale rotation invariant that can be used for various computer vision applications like image matching object detection scene detection etc we can also use the keypoints generated using sift as features for the image during model training the major advantage of sift features over edge features or hog features is that they are not affected by the size or orientation of the image for example here is another image of the eiffel tower along with its smaller version the keypoints of the object in the first image are matched with the keypoints found in the second image the same goes for two images when the object in the other image is slightly rotated amazing right letβs understand how these keypoints are identified and what are the techniques used to ensure the scale and rotation invariance broadly speaking the entire process can be divided into parts constructing a scale space to make sure that features are scale independent keypoint localisation identifying the suitable features or keypoints orientation assignment ensure the keypoints are rotation invariant keypoint descriptor assign a unique fingerprint to each keypoint finally we can use these keypoints for feature matching this article is based on the original paper by david g lowe here is the link constructing the scale space we need to identify the most distinct features in a given image while ignoring any noise additionally we need to ensure that the features are not scale dependent these are critical concepts so letβs talk about them one by one we use the gaussian blurring technique to reduce the noise in an image so for every pixel in an image the gaussian blur calculates a value based on its neighboring pixels below is an example of image before and after applying the gaussian blur as you can see the texture and minor details are removed from the image and only the relevant information like the shape and edges remain gaussian blur successfully removed the noise from the images and we have highlighted the important features of the image now we need to ensure that these features must not be scale dependent this means we will be searching for these features on multiple scales by creating a βscale spaceβ scale space is a collection of images having different scales generated from a single image hence these blur images are created for multiple scales to create a new set of images of different scales we will take the original image and reduce the scale by half for each new image we will create blur versions as we saw above here is an example to understand it in a better manner we have the original image of size and a scaled image of dimension for both the images two blur images are created you might be thinking β how many times do we need to scale the image and how many subsequent blur images need to be created for each scaled image the ideal number of octaves should be four and for each octave the number of blur images should be five difference of gaussian so far we have created images of multiple scales often represented by Ο and used gaussian blur for each of them to reduce the noise in the image next we will try to enhance the features using a technique called difference of gaussians or dog difference of gaussian is a feature enhancement algorithm that involves the subtraction of one blurred version of an original image from another less blurred version of the original dog creates another set of images for each octave by subtracting every image from the previous image in the same scale here is a visual explanation of how dog is implemented note the image is taken from the original paper the octaves are now represented in a vertical form for a clearer view Β let us create the dog for the images in scale space take a look at the below diagram on the left we have images all from the first octave thus having the same scale each subsequent image is created by applying the gaussian blur over the previous image on the right we have four images generated by subtracting the consecutive gaussians the results are jaw dropping we have enhanced features for each of these images note that here i am implementing it only for the first octave but the same process happens for all the octaves now that we have a new set of images we are going to use this to find the important keypoints keypoint localization once the images have been created the next step is to find the important keypoints from the image that can be used for feature matching the idea is to find the local maxima and minima for the images this part is divided into two steps find the local maxima and minima remove low contrast keypoints keypoint selection local maxima and local minima to locate the local maxima and minima we go through every pixel in the image and compare it with its neighboring pixels when i say βneighboringβ this not only includes the surrounding pixels of that image in which the pixel lies but also the nine pixels for the previous and next image in the octave this means that every pixel value is compared with other pixel values to find whether it is the local maxima minima for example in the below diagram we have three images from the first octave the pixel marked x is compared with the neighboring pixels in green and is selected as a keypoint if it is the highest or lowest among the neighbors we now have potential keypoints that represent the images and are scale invariant we will apply the last check over the selected keypoints to ensure that these are the most accurate keypoints to represent the image keypoint selection kudos so far we have successfully generated scale invariant keypoints but some of these keypoints may not be robust to noise this is why we need to perform a final check to make sure that we have the most accurate keypoints to represent the image features hence we will eliminate the keypoints that have low contrast or lie very close to the edge to deal with the low contrast keypoints a second order taylor expansion is computed for each keypoint if the resulting value is less than in magnitude we reject the keypoint so what do we do about the remaining keypoints well we perform a check to identify the poorly located keypoints these are the keypoints that are close to the edge and have a high edge response but may not be robust to a small amount of noise a second order hessian matrix is used to identify such keypoints you can go through the math behind this here now that we have performed both the contrast test and the edge test to reject the unstable keypoints we will now assign an orientation value for each keypoint to make the rotation invariant orientation assignment at this stage we have a set of stable keypoints for the images we will now assign an orientation to each of these keypoints so that they are invariant to rotation we can again divide this step into two smaller steps calculate the magnitude and orientation create a histogram for magnitude and orientation calculate magnitude and orientation consider the sample image shown below letβs say we want to find the magnitude and orientation for the pixel value in red for this we will calculate the gradients in x and y directions by taking the difference between and this comes out to be gx and gy respectively once we have the gradients we can find the magnitude and orientation using the following formulas magnitude Β β Β Β Ο atan gy gx atan the magnitude represents the intensity of the pixel and the orientation gives the direction for the same we can now create a histogram given that we have these magnitude and orientation values for the pixels creating a histogram for magnitude and orientation on the x axis we will have bins for angle values like β up to since our angle value is it will fall in the bin the bin value will be in proportion to the magnitude of the pixel i e Β we will do this for all the pixels around the keypoint this is how we get the below histogram you can refer to this article for a much detailed explanation for calculating the gradient magnitude orientation and plotting histogram β this histogram would peak at some point the bin at which we see the peak will be the orientation for the keypoint additionally if there is another significant peak seen between β then another keypoint is generated with the magnitude and scale the same as the keypoint used to generate the histogram and the angle or orientation will be equal to the new bin that has the peak effectively at this point we can say that there can be a small increase in the number of keypoints keypoint descriptor this is the final step for sift so far we have stable keypoints that are scale invariant and rotation invariant in this section we will use the neighboring pixels their orientations and magnitude to generate a unique fingerprint for this keypoint called a βdescriptorβ additionally since we use the surrounding pixels the descriptors will be partially invariant to illumination or brightness of the images we will first take a Γ neighborhood around the keypoint this Γ block is further divided into Γ sub blocks and for each of these sub blocks we generate the histogram using magnitude and orientation at this stage the bin size is increased and we take only bins not each of these arrows represents the bins and the length of the arrows define the magnitude so we will have a total of bin values for every keypoint here is an example import import matplotlib pyplot as plt matplotlib inline reading image imread eiffel jpeg cvtcolor color keypoints sift sift create keypoints descriptors sift detectandcompute none img drawkeypoints keypoints plt imshow img feature matching we will now use the sift features for feature matching for this purpose i have downloaded two images of the eiffel tower taken from different positions you can try it with any two images that you want here are the two images that i have used import import matplotlib pyplot as plt matplotlib inline read images imread eiffel jpeg imread eiffel jpg cvtcolor color cvtcolor color figure ax plt subplots figsize ax imshow cmap gray ax imshow cmap gray now for both these images we are going to generate the sift features first we have to construct a sift object and then use the function detectandcompute to get the keypoints it will return two values β the keypoints and the descriptors letβs determine the keypoints and print the total number of keypoints found in each image import import matplotlib pyplot as plt matplotlib inline read images imread eiffel jpeg imread eiffel jpg cvtcolor color cvtcolor color sift sift sift create keypoints descriptors sift detectandcompute none keypoints descriptors sift detectandcompute none len keypoints len keypoints next letβs try and match the features from image with features from image we will be using the function match from the bfmatcher brute force match module also we will draw lines between the features that match in both the images this can be done using the drawmatches function in opencv import import matplotlib pyplot as plt matplotlib inline read images imread eiffel jpeg imread eiffel jpg cvtcolor color cvtcolor color sift sift sift create keypoints descriptors sift detectandcompute none keypoints descriptors sift detectandcompute none feature matching bf bfmatcher norm crosscheck true matches bf match descriptors descriptors matches sorted matches key lambda x x distance drawmatches keypoints keypoints matches flags plt imshow plt show have plotted only matches here for clarityβs sake you can increase the number according to what you prefer to find out how many keypoints are matched we can print the length of the variable matches in this case the answer would be end notes in this article we discussed the sift feature matching algorithm in detail here is a site that provides excellent visualization for each step of sift you can add your own image and it will create the keypoints for that image as well check it out another popular feature matching algorithm is surf speeded up robust feature which is simply a faster version of sift i would encourage you to go ahead and explore it as well and if youβre new to the world of computer vision and image data i recommend checking out the below course you can also read this article on our mobile app related articles
| 0
|
820,537
| 30,777,067,130
|
IssuesEvent
|
2023-07-31 07:23:31
|
SkriptLang/Skript
|
https://api.github.com/repos/SkriptLang/Skript
|
opened
|
π‘Add `applyBoneMeal` effect
|
enhancement priority: lowest
|
### Suggestion
Would be nice to add [applyBoneMeal](https://hub.spigotmc.org/javadocs/bukkit/org/bukkit/block/Block.html#applyBoneMeal(org.bukkit.block.BlockFace)) effect.
### Why?
Doesn't exist.
### Other
_No response_
### Agreement
- [X] I have read the guidelines above and affirm I am following them with this suggestion.
|
1.0
|
π‘Add `applyBoneMeal` effect - ### Suggestion
Would be nice to add [applyBoneMeal](https://hub.spigotmc.org/javadocs/bukkit/org/bukkit/block/Block.html#applyBoneMeal(org.bukkit.block.BlockFace)) effect.
### Why?
Doesn't exist.
### Other
_No response_
### Agreement
- [X] I have read the guidelines above and affirm I am following them with this suggestion.
|
non_process
|
π‘add applybonemeal effect suggestion would be nice to add effect why doesn t exist other no response agreement i have read the guidelines above and affirm i am following them with this suggestion
| 0
|
61,113
| 6,725,688,908
|
IssuesEvent
|
2017-10-17 06:59:45
|
nuxsmin/sysPass
|
https://api.github.com/repos/nuxsmin/sysPass
|
closed
|
Force HTTPS: still on port 80
|
NeedTests
|
I have installed syspass via the official docker images. I then enabled 'Force HTTPS', but something is weird.
When I connect to it (http://my-server), it redirects to https://my-server:80. Notice the :80, and my browser gives me error messages that this is not correct SSL. So it seems like something is still running on :80 that is not SSL (I guess the redirector)
To reproduce:
1. Start with a default installation
2. Enable Force HTTPS => Save
3. Observe
* You should have been redirected to https://my-server
* You have been redirected to https://my-server:80
I think I may have found the issue, but I could be wrong. Basically, you're testing if :443 is not already the port, and want to set it. But you still set it to :80, which leads to this bug.
https://github.com/nuxsmin/sysPass/blob/2ff0fe000ad08c8edac833715dfdc2f81f6c438e/inc/SP/Util/HttpUtil.class.php#L42
I could very well be wrong, so then it's just a bug report.
|
1.0
|
Force HTTPS: still on port 80 - I have installed syspass via the official docker images. I then enabled 'Force HTTPS', but something is weird.
When I connect to it (http://my-server), it redirects to https://my-server:80. Notice the :80, and my browser gives me error messages that this is not correct SSL. So it seems like something is still running on :80 that is not SSL (I guess the redirector)
To reproduce:
1. Start with a default installation
2. Enable Force HTTPS => Save
3. Observe
* You should have been redirected to https://my-server
* You have been redirected to https://my-server:80
I think I may have found the issue, but I could be wrong. Basically, you're testing if :443 is not already the port, and want to set it. But you still set it to :80, which leads to this bug.
https://github.com/nuxsmin/sysPass/blob/2ff0fe000ad08c8edac833715dfdc2f81f6c438e/inc/SP/Util/HttpUtil.class.php#L42
I could very well be wrong, so then it's just a bug report.
|
non_process
|
force https still on port i have installed syspass via the official docker images i then enabled force https but something is weird when i connect to it it redirects to notice the and my browser gives me error messages that this is not correct ssl so it seems like something is still running on that is not ssl i guess the redirector to reproduce start with a default installation enable force https save observe you should have been redirected to you have been redirected to i think i may have found the issue but i could be wrong basically you re testing if is not already the port and want to set it but you still set it to which leads to this bug i could very well be wrong so then it s just a bug report
| 0
|
22,065
| 30,590,726,528
|
IssuesEvent
|
2023-07-21 16:48:11
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
cacholote 0.4.1 has 1 GuardDog issues
|
guarddog silent-process-execution
|
https://pypi.org/project/cacholote
https://inspector.pypi.io/project/cacholote
```{
"dependency": "cacholote",
"version": "0.4.1",
"result": {
"issues": 1,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "cacholote-0.4.1/tests/conftest.py:38",
"code": " proc = subprocess.Popen(\n shlex.split(f\"moto_server s3 -p {port}\"),\n stderr=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stdin=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpj5ttpx0f/cacholote"
}
}```
|
1.0
|
cacholote 0.4.1 has 1 GuardDog issues - https://pypi.org/project/cacholote
https://inspector.pypi.io/project/cacholote
```{
"dependency": "cacholote",
"version": "0.4.1",
"result": {
"issues": 1,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "cacholote-0.4.1/tests/conftest.py:38",
"code": " proc = subprocess.Popen(\n shlex.split(f\"moto_server s3 -p {port}\"),\n stderr=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stdin=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpj5ttpx0f/cacholote"
}
}```
|
process
|
cacholote has guarddog issues dependency cacholote version result issues errors results silent process execution location cacholote tests conftest py code proc subprocess popen n shlex split f moto server p port n stderr subprocess devnull n stdout subprocess devnull n stdin subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp cacholote
| 1
|
23,615
| 6,446,680,514
|
IssuesEvent
|
2017-08-14 00:00:30
|
langbakk/cntrl
|
https://api.github.com/repos/langbakk/cntrl
|
closed
|
BUG: on charts / Statistics page, chart-containers are rendered twice
|
bug codereview Priority 2
|
When chosing one chart to show, the container is rendered twice, creating a large margin below the container. Only the first iteration is populated with data.
When chosing more than one statistic, the extra canvas-iteration show between the different charts, and also below the last one.
|
1.0
|
BUG: on charts / Statistics page, chart-containers are rendered twice - When chosing one chart to show, the container is rendered twice, creating a large margin below the container. Only the first iteration is populated with data.
When chosing more than one statistic, the extra canvas-iteration show between the different charts, and also below the last one.
|
non_process
|
bug on charts statistics page chart containers are rendered twice when chosing one chart to show the container is rendered twice creating a large margin below the container only the first iteration is populated with data when chosing more than one statistic the extra canvas iteration show between the different charts and also below the last one
| 0
|
272,218
| 8,506,236,114
|
IssuesEvent
|
2018-10-30 16:06:19
|
mozilla/addons-frontend
|
https://api.github.com/repos/mozilla/addons-frontend
|
closed
|
Always render AddonsByAuthorsCard
|
component: user profile priority: p3 state: pull request ready
|
### Describe the problem and steps to reproduce it:
1. go to https://addons-dev.allizom.org/en-US/firefox/user/abine/
2. refresh
3. observe
### What happened?
The left card is rendered but the right side is empty. Then the right side starts to render some stuff, then it is fully loaded with the add-ons by the user.
Current state in -dev:

### What did you expect to happen?
The full layout should be rendered in a "loading" state.
Expected state locally (hence the no-css render at the beginning):

### Anything else we should know?
<!-- Please include a link to the page, screenshots and any relevant files. -->
|
1.0
|
Always render AddonsByAuthorsCard - ### Describe the problem and steps to reproduce it:
1. go to https://addons-dev.allizom.org/en-US/firefox/user/abine/
2. refresh
3. observe
### What happened?
The left card is rendered but the right side is empty. Then the right side starts to render some stuff, then it is fully loaded with the add-ons by the user.
Current state in -dev:

### What did you expect to happen?
The full layout should be rendered in a "loading" state.
Expected state locally (hence the no-css render at the beginning):

### Anything else we should know?
<!-- Please include a link to the page, screenshots and any relevant files. -->
|
non_process
|
always render addonsbyauthorscard describe the problem and steps to reproduce it go to refresh observe what happened the left card is rendered but the right side is empty then the right side starts to render some stuff then it is fully loaded with the add ons by the user current state in dev what did you expect to happen the full layout should be rendered in a loading state expected state locally hence the no css render at the beginning anything else we should know
| 0
|
17,586
| 23,399,164,555
|
IssuesEvent
|
2022-08-12 05:39:44
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] [Consent API] Gateway app > App is is crashing in the below scenario
|
Bug P0 iOS Process: Fixed Process: Tested dev Deferred
|
Steps:
1. Install the app
2. Create the account
3. Enter a valid verification code
4. Create a passcode
5. Click on Next
6. Select any value displayed on the notification settings pop up
7. Observe
AR: App is crashing
ER: Participant should be able to navigate to the studies list screen
|
2.0
|
[iOS] [Consent API] Gateway app > App is is crashing in the below scenario - Steps:
1. Install the app
2. Create the account
3. Enter a valid verification code
4. Create a passcode
5. Click on Next
6. Select any value displayed on the notification settings pop up
7. Observe
AR: App is crashing
ER: Participant should be able to navigate to the studies list screen
|
process
|
gateway app app is is crashing in the below scenario steps install the app create the account enter a valid verification code create a passcode click on next select any value displayed on the notification settings pop up observe ar app is crashing er participant should be able to navigate to the studies list screen
| 1
|
14,343
| 17,370,285,464
|
IssuesEvent
|
2021-07-30 13:08:57
|
2i2c-org/team-compass
|
https://api.github.com/repos/2i2c-org/team-compass
|
opened
|
Use a password manager to share infrastructure passwords between the team
|
:label: team-process type: enhancement
|
# Summary
There are a few places where we share accounts to access the same services or webpages. An example of this is the grafana of a hub, where we don't necessarily want to create a new admin username for every single hub engineer.
For these cases, we currently follow a practice of "ask a team member what the password is". This makes it hard to know who has access to which passwords, and is an extra step team members must follow to get access. It's also something that has to be done _each time_ a new password is needed.
# Proposal
We use a team password application like [1Password](https://1password.com/). This would allow us to store the passwords in an encrypted service, and we could purchase a team account that would provide each of us access to them (I believe their base team account is $20/mo for 10 people, which isn't bad).
We could then use this to store any team passwords that don't require 2FA in order to log-in, and then the only step for providing access to a new team member is to get them a 1Password account.
# Actions
- [ ] Answer questions below, and if we wish to proceed...
- [ ] Write up a proposed process for how we share passwords
- [ ] Set up password manager and accounts for team members
- [ ] Write it up in team compass
# Questions
- Is anybody opposed to this idea? Would it lead us to any obvious anti-patterns?
- Any strong preferences for a particular password manager? 1Password vs. LastPass, for example.
|
1.0
|
Use a password manager to share infrastructure passwords between the team - # Summary
There are a few places where we share accounts to access the same services or webpages. An example of this is the grafana of a hub, where we don't necessarily want to create a new admin username for every single hub engineer.
For these cases, we currently follow a practice of "ask a team member what the password is". This makes it hard to know who has access to which passwords, and is an extra step team members must follow to get access. It's also something that has to be done _each time_ a new password is needed.
# Proposal
We use a team password application like [1Password](https://1password.com/). This would allow us to store the passwords in an encrypted service, and we could purchase a team account that would provide each of us access to them (I believe their base team account is $20/mo for 10 people, which isn't bad).
We could then use this to store any team passwords that don't require 2FA in order to log-in, and then the only step for providing access to a new team member is to get them a 1Password account.
# Actions
- [ ] Answer questions below, and if we wish to proceed...
- [ ] Write up a proposed process for how we share passwords
- [ ] Set up password manager and accounts for team members
- [ ] Write it up in team compass
# Questions
- Is anybody opposed to this idea? Would it lead us to any obvious anti-patterns?
- Any strong preferences for a particular password manager? 1Password vs. LastPass, for example.
|
process
|
use a password manager to share infrastructure passwords between the team summary there are a few places where we share accounts to access the same services or webpages an example of this is the grafana of a hub where we don t necessarily want to create a new admin username for every single hub engineer for these cases we currently follow a practice of ask a team member what the password is this makes it hard to know who has access to which passwords and is an extra step team members must follow to get access it s also something that has to be done each time a new password is needed proposal we use a team password application like this would allow us to store the passwords in an encrypted service and we could purchase a team account that would provide each of us access to them i believe their base team account is mo for people which isn t bad we could then use this to store any team passwords that don t require in order to log in and then the only step for providing access to a new team member is to get them a account actions answer questions below and if we wish to proceed write up a proposed process for how we share passwords set up password manager and accounts for team members write it up in team compass questions is anybody opposed to this idea would it lead us to any obvious anti patterns any strong preferences for a particular password manager vs lastpass for example
| 1
|
6,125
| 8,996,599,400
|
IssuesEvent
|
2019-02-02 02:43:25
|
bow-simulation/virtualbow
|
https://api.github.com/repos/bow-simulation/virtualbow
|
closed
|
Create rpm and AppImage releases
|
area: linux area: software process prio: normal type: help wanted type: improvement
|
In GitLab by **spfeifer** on Jan 24, 2018, 12:11
`deb` + `rpm` should cover most Linux systems. For anything else there is the `snap`, but maybe replace that with `AppImage`. It should be easier to use than snaps.
Note: [Linuxdeployqt](https://github.com/probonopd/linuxdeployqt) can create AppImages. Have a look at [fpm](https://github.com/jordansissel/fpm) for various package formats.
|
1.0
|
Create rpm and AppImage releases - In GitLab by **spfeifer** on Jan 24, 2018, 12:11
`deb` + `rpm` should cover most Linux systems. For anything else there is the `snap`, but maybe replace that with `AppImage`. It should be easier to use than snaps.
Note: [Linuxdeployqt](https://github.com/probonopd/linuxdeployqt) can create AppImages. Have a look at [fpm](https://github.com/jordansissel/fpm) for various package formats.
|
process
|
create rpm and appimage releases in gitlab by spfeifer on jan deb rpm should cover most linux systems for anything else there is the snap but maybe replace that with appimage it should be easier to use than snaps note can create appimages have a look at for various package formats
| 1
|
287,839
| 31,856,424,252
|
IssuesEvent
|
2023-09-15 07:48:17
|
Trinadh465/linux-4.1.15_CVE-2023-26607
|
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-26607
|
opened
|
CVE-2018-10840 (Medium) detected in linux-stable-rtv4.1.33
|
Mend: dependency security vulnerability
|
## CVE-2018-10840 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-26607/commit/6fca0e3f2f14e1e851258fd815766531370084b0">6fca0e3f2f14e1e851258fd815766531370084b0</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Linux kernel is vulnerable to a heap-based buffer overflow in the fs/ext4/xattr.c:ext4_xattr_set_entry() function. An attacker could exploit this by operating on a mounted crafted ext4 image.
<p>Publish Date: 2018-07-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-10840>CVE-2018-10840</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-10840">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-10840</a></p>
<p>Release Date: 2018-07-16</p>
<p>Fix Resolution: v4.18</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-10840 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2018-10840 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-26607/commit/6fca0e3f2f14e1e851258fd815766531370084b0">6fca0e3f2f14e1e851258fd815766531370084b0</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Linux kernel is vulnerable to a heap-based buffer overflow in the fs/ext4/xattr.c:ext4_xattr_set_entry() function. An attacker could exploit this by operating on a mounted crafted ext4 image.
<p>Publish Date: 2018-07-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-10840>CVE-2018-10840</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-10840">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-10840</a></p>
<p>Release Date: 2018-07-16</p>
<p>Fix Resolution: v4.18</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch main vulnerable source files vulnerability details linux kernel is vulnerable to a heap based buffer overflow in the fs xattr c xattr set entry function an attacker could exploit this by operating on a mounted crafted image publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
114,178
| 17,195,250,558
|
IssuesEvent
|
2021-07-16 16:21:33
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
GetSelfSigned13ServerCertificate() failed with CryptographicException in test run
|
area-System.Security needs more info os-windows
|
This seems to be rare Windows failure but the outcome may deserve some investigation. Some of the tests switched somewhat recently to generating certificate and keys on the fly instead of relying on checked-in copy (as that has issues as well)
The failure I saw looks like
```
System.Net.Http.Functional.Tests.SocketsHttpHandlerTest_HttpClientHandlerTest_Headers_Http3_Mock.SendAsync_RequestHeaderInResponse_Success(name: "Accept-Encoding", value: "identity,gzip") [FAIL]
Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException : The parameter is incorrect.
Stack Trace:
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\Internal\Cryptography\Pal.Windows\CertificatePal.Import.cs(162,0): at Internal.Cryptography.Pal.CertificatePal.FilterPFXStore(ReadOnlySpan`1 rawData, SafePasswordHandle password, PfxCertStoreFlags pfxCertStoreFlags)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\Internal\Cryptography\Pal.Windows\CertificatePal.Import.cs(87,0): at Internal.Cryptography.Pal.CertificatePal.FromBlobOrFile(ReadOnlySpan`1 rawData, String fileName, SafePasswordHandle password, X509KeyStorageFlags keyStorageFlags)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\Internal\Cryptography\Pal.Windows\CertificatePal.Import.cs(20,0): at Internal.Cryptography.Pal.CertificatePal.FromBlob(ReadOnlySpan`1 rawData, SafePasswordHandle password, X509KeyStorageFlags keyStorageFlags)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\System\Security\Cryptography\X509Certificates\X509Certificate.cs(63,0): at System.Security.Cryptography.X509Certificates.X509Certificate..ctor(ReadOnlySpan`1 data)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\System\Security\Cryptography\X509Certificates\X509Certificate.cs(52,0): at System.Security.Cryptography.X509Certificates.X509Certificate..ctor(Byte[] data)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\System\Security\Cryptography\X509Certificates\X509Certificate2.cs(51,0): at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor(Byte[] rawData)
C:\Users\test\github\wfurt-runtime\src\libraries\Common\tests\System\Net\Configuration.Certificates.cs(94,0): at System.Net.Test.Common.Configuration.Certificates.GetSelfSigned13ServerCertificate()
C:\Users\test\github\wfurt-runtime\src\libraries\Common\tests\System\Net\Http\Http3LoopbackServer.cs(27,0): at System.Net.Test.Common.Http3LoopbackServer..ctor(QuicImplementationProvider quicImplementationProvider, GenericLoopbackOptions options)
C:\Users\test\github\wfurt-runtime\src\libraries\Common\tests\System\Net\Http\Http3LoopbackServer.cs(81,0): at System.Net.Test.Common.Http3LoopbackServerFactory.CreateServer(GenericLoopbackOptions options)
C:\Users\test\github\wfurt-runtime\src\libraries\Common\tests\System\Net\Http\Http3LoopbackServer.cs(86,0): at System.Net.Test.Common.Http3LoopbackServerFactory.CreateServerAsync(Func`3 funcAsync, Int32 millisecondsTimeout, GenericLoopbackOptions options)
C:\Users\test\github\wfurt-runtime\src\libraries\Common\tests\System\Threading\Tasks\TaskTimeoutExtensions.cs(37,0): at System.Threading.Tasks.TaskTimeoutExtensions.TimeoutAfter(Task task, TimeSpan timeout)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Net.Http\tests\FunctionalTests\HttpClientHandlerTest.Headers.cs(317,0): at System.Net.Http.Functional.Tests.HttpClientHandlerTest_Headers.SendAsync_RequestHeaderInResponse_Success(String name, String value)
--- End of stack trace from previous location ---
```
even if the failure is in HTTP test, I'll make it as Security issue for now as I don't see any obvious reason why this is failing. If I remember correctly, there was something similar on Linux if the crypto parameters comes out weak for whatever reason - but I'm not 100% sure.
cc: @bartonjs @scalablecory
|
True
|
GetSelfSigned13ServerCertificate() failed with CryptographicException in test run - This seems to be rare Windows failure but the outcome may deserve some investigation. Some of the tests switched somewhat recently to generating certificate and keys on the fly instead of relying on checked-in copy (as that has issues as well)
The failure I saw looks like
```
System.Net.Http.Functional.Tests.SocketsHttpHandlerTest_HttpClientHandlerTest_Headers_Http3_Mock.SendAsync_RequestHeaderInResponse_Success(name: "Accept-Encoding", value: "identity,gzip") [FAIL]
Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException : The parameter is incorrect.
Stack Trace:
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\Internal\Cryptography\Pal.Windows\CertificatePal.Import.cs(162,0): at Internal.Cryptography.Pal.CertificatePal.FilterPFXStore(ReadOnlySpan`1 rawData, SafePasswordHandle password, PfxCertStoreFlags pfxCertStoreFlags)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\Internal\Cryptography\Pal.Windows\CertificatePal.Import.cs(87,0): at Internal.Cryptography.Pal.CertificatePal.FromBlobOrFile(ReadOnlySpan`1 rawData, String fileName, SafePasswordHandle password, X509KeyStorageFlags keyStorageFlags)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\Internal\Cryptography\Pal.Windows\CertificatePal.Import.cs(20,0): at Internal.Cryptography.Pal.CertificatePal.FromBlob(ReadOnlySpan`1 rawData, SafePasswordHandle password, X509KeyStorageFlags keyStorageFlags)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\System\Security\Cryptography\X509Certificates\X509Certificate.cs(63,0): at System.Security.Cryptography.X509Certificates.X509Certificate..ctor(ReadOnlySpan`1 data)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\System\Security\Cryptography\X509Certificates\X509Certificate.cs(52,0): at System.Security.Cryptography.X509Certificates.X509Certificate..ctor(Byte[] data)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Security.Cryptography.X509Certificates\src\System\Security\Cryptography\X509Certificates\X509Certificate2.cs(51,0): at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor(Byte[] rawData)
C:\Users\test\github\wfurt-runtime\src\libraries\Common\tests\System\Net\Configuration.Certificates.cs(94,0): at System.Net.Test.Common.Configuration.Certificates.GetSelfSigned13ServerCertificate()
C:\Users\test\github\wfurt-runtime\src\libraries\Common\tests\System\Net\Http\Http3LoopbackServer.cs(27,0): at System.Net.Test.Common.Http3LoopbackServer..ctor(QuicImplementationProvider quicImplementationProvider, GenericLoopbackOptions options)
C:\Users\test\github\wfurt-runtime\src\libraries\Common\tests\System\Net\Http\Http3LoopbackServer.cs(81,0): at System.Net.Test.Common.Http3LoopbackServerFactory.CreateServer(GenericLoopbackOptions options)
C:\Users\test\github\wfurt-runtime\src\libraries\Common\tests\System\Net\Http\Http3LoopbackServer.cs(86,0): at System.Net.Test.Common.Http3LoopbackServerFactory.CreateServerAsync(Func`3 funcAsync, Int32 millisecondsTimeout, GenericLoopbackOptions options)
C:\Users\test\github\wfurt-runtime\src\libraries\Common\tests\System\Threading\Tasks\TaskTimeoutExtensions.cs(37,0): at System.Threading.Tasks.TaskTimeoutExtensions.TimeoutAfter(Task task, TimeSpan timeout)
C:\Users\test\github\wfurt-runtime\src\libraries\System.Net.Http\tests\FunctionalTests\HttpClientHandlerTest.Headers.cs(317,0): at System.Net.Http.Functional.Tests.HttpClientHandlerTest_Headers.SendAsync_RequestHeaderInResponse_Success(String name, String value)
--- End of stack trace from previous location ---
```
even if the failure is in HTTP test, I'll make it as Security issue for now as I don't see any obvious reason why this is failing. If I remember correctly, there was something similar on Linux if the crypto parameters comes out weak for whatever reason - but I'm not 100% sure.
cc: @bartonjs @scalablecory
|
non_process
|
failed with cryptographicexception in test run this seems to be rare windows failure but the outcome may deserve some investigation some of the tests switched somewhat recently to generating certificate and keys on the fly instead of relying on checked in copy as that has issues as well the failure i saw looks like system net http functional tests socketshttphandlertest httpclienthandlertest headers mock sendasync requestheaderinresponse success name accept encoding value identity gzip internal cryptography cryptothrowhelper windowscryptographicexception the parameter is incorrect stack trace c users test github wfurt runtime src libraries system security cryptography src internal cryptography pal windows certificatepal import cs at internal cryptography pal certificatepal filterpfxstore readonlyspan rawdata safepasswordhandle password pfxcertstoreflags pfxcertstoreflags c users test github wfurt runtime src libraries system security cryptography src internal cryptography pal windows certificatepal import cs at internal cryptography pal certificatepal frombloborfile readonlyspan rawdata string filename safepasswordhandle password keystorageflags c users test github wfurt runtime src libraries system security cryptography src internal cryptography pal windows certificatepal import cs at internal cryptography pal certificatepal fromblob readonlyspan rawdata safepasswordhandle password keystorageflags c users test github wfurt runtime src libraries system security cryptography src system security cryptography cs at system security cryptography ctor readonlyspan data c users test github wfurt runtime src libraries system security cryptography src system security cryptography cs at system security cryptography ctor byte data c users test github wfurt runtime src libraries system security cryptography src system security cryptography cs at system security cryptography ctor byte rawdata c users test github wfurt runtime src libraries common tests system net configuration certificates cs at system net test common configuration certificates c users test github wfurt runtime src libraries common tests system net http cs at system net test common ctor quicimplementationprovider quicimplementationprovider genericloopbackoptions options c users test github wfurt runtime src libraries common tests system net http cs at system net test common createserver genericloopbackoptions options c users test github wfurt runtime src libraries common tests system net http cs at system net test common createserverasync func funcasync millisecondstimeout genericloopbackoptions options c users test github wfurt runtime src libraries common tests system threading tasks tasktimeoutextensions cs at system threading tasks tasktimeoutextensions timeoutafter task task timespan timeout c users test github wfurt runtime src libraries system net http tests functionaltests httpclienthandlertest headers cs at system net http functional tests httpclienthandlertest headers sendasync requestheaderinresponse success string name string value end of stack trace from previous location even if the failure is in http test i ll make it as security issue for now as i don t see any obvious reason why this is failing if i remember correctly there was something similar on linux if the crypto parameters comes out weak for whatever reason but i m not sure cc bartonjs scalablecory
| 0
|
6,827
| 9,968,995,464
|
IssuesEvent
|
2019-07-08 16:53:36
|
knative/serving
|
https://api.github.com/repos/knative/serving
|
closed
|
Some of the config/*.yaml do not have license header
|
area/networking kind/bug kind/process kind/spec
|
## In what area(s)?
/kind process
/kind spec
## What version of Knative?
> HEAD
## Expected Behavior
- All config file (`config/*.yaml`) have license header like https://github.com/knative/serving/blob/master/config/200-clusterrole-metrics.yaml#L1-L13.
- presumit test and auto generate script support the license header check and generation (if possible).
## Actual Behavior
Some of the `config/*.yaml` do not have license header. For example:
https://github.com/knative/serving/blob/master/config/200-clusterrole-istio.yaml
https://github.com/knative/serving/blob/master/config/203-local-gateway.yaml
|
1.0
|
Some of the config/*.yaml do not have license header - ## In what area(s)?
/kind process
/kind spec
## What version of Knative?
> HEAD
## Expected Behavior
- All config file (`config/*.yaml`) have license header like https://github.com/knative/serving/blob/master/config/200-clusterrole-metrics.yaml#L1-L13.
- presumit test and auto generate script support the license header check and generation (if possible).
## Actual Behavior
Some of the `config/*.yaml` do not have license header. For example:
https://github.com/knative/serving/blob/master/config/200-clusterrole-istio.yaml
https://github.com/knative/serving/blob/master/config/203-local-gateway.yaml
|
process
|
some of the config yaml do not have license header in what area s kind process kind spec what version of knative head expected behavior all config file config yaml have license header like presumit test and auto generate script support the license header check and generation if possible actual behavior some of the config yaml do not have license header for example
| 1
|
14,358
| 17,380,584,932
|
IssuesEvent
|
2021-07-31 16:19:30
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Missing instructions to import required modules
|
Pri2 automation/svc cxp doc-bug process-automation/subsvc triaged
|
The instructions on this page are missing steps to import two modules: Az.Account and Az.Compute.
---
#### Document Details
β *Do not edit this section. It is required for docs.microsoft.com β GitHub issue linking.*
* ID: 3632c749-8963-f5ed-55ec-28af005780bd
* Version Independent ID: 3ec0f957-e320-7ea7-e5f5-07f543f3c31b
* Content: [Create a PowerShell Workflow runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/learn/automation-tutorial-runbook-textual)
* Content Source: [articles/automation/learn/automation-tutorial-runbook-textual.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/learn/automation-tutorial-runbook-textual.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Missing instructions to import required modules -
The instructions on this page are missing steps to import two modules: Az.Account and Az.Compute.
---
#### Document Details
β *Do not edit this section. It is required for docs.microsoft.com β GitHub issue linking.*
* ID: 3632c749-8963-f5ed-55ec-28af005780bd
* Version Independent ID: 3ec0f957-e320-7ea7-e5f5-07f543f3c31b
* Content: [Create a PowerShell Workflow runbook in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/learn/automation-tutorial-runbook-textual)
* Content Source: [articles/automation/learn/automation-tutorial-runbook-textual.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/learn/automation-tutorial-runbook-textual.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
missing instructions to import required modules the instructions on this page are missing steps to import two modules az account and az compute document details β do not edit this section it is required for docs microsoft com β github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
15,480
| 19,688,584,600
|
IssuesEvent
|
2022-01-12 02:35:48
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
opened
|
Provide a way to run large numbers of acceptance tests on a PR
|
type: process
|
Currently, the presubmit acceptance test script runs acceptance tests for any gems that were modified in the pull request, but limits the number to 4 (and actually runs no acceptance tests if the number of changed gems exceeds 4.) This was intentional, to prevent presubmit kokoro jobs from running for hours if a pull request happens to touch lots of gems. But sometimes we might actually want all acceptance tests to run. We can do this by manually triggering the kokoro job, but it would be nice if there was a way for the pull request itself to tell kokoro that it wants all acceptance tests run (perhaps by setting a particular label on the pull request).
|
1.0
|
Provide a way to run large numbers of acceptance tests on a PR - Currently, the presubmit acceptance test script runs acceptance tests for any gems that were modified in the pull request, but limits the number to 4 (and actually runs no acceptance tests if the number of changed gems exceeds 4.) This was intentional, to prevent presubmit kokoro jobs from running for hours if a pull request happens to touch lots of gems. But sometimes we might actually want all acceptance tests to run. We can do this by manually triggering the kokoro job, but it would be nice if there was a way for the pull request itself to tell kokoro that it wants all acceptance tests run (perhaps by setting a particular label on the pull request).
|
process
|
provide a way to run large numbers of acceptance tests on a pr currently the presubmit acceptance test script runs acceptance tests for any gems that were modified in the pull request but limits the number to and actually runs no acceptance tests if the number of changed gems exceeds this was intentional to prevent presubmit kokoro jobs from running for hours if a pull request happens to touch lots of gems but sometimes we might actually want all acceptance tests to run we can do this by manually triggering the kokoro job but it would be nice if there was a way for the pull request itself to tell kokoro that it wants all acceptance tests run perhaps by setting a particular label on the pull request
| 1
|
12,308
| 14,859,802,565
|
IssuesEvent
|
2021-01-18 19:12:44
|
neuropoly/ukbiobank-spinalcord-csa
|
https://api.github.com/repos/neuropoly/ukbiobank-spinalcord-csa
|
closed
|
Unaligned labeling of discs for T2w images after sct_label_vertebrae in process_data.sh
|
process_data
|
## Description
While running the pipeline on 30 subjects, 8 out of 30 subjects have disc labeling not aligned with the spinal cord for T2w images. In `process_data.sh`, disc labeling is generated with T1w image and then the template is use for T2w with `sct_register_multimodal`. This suggest that the patient moved between acquisitions.
The results were obtained by running the following line on Joplin:
~~~
sct_run_batch -jobs -1 -path-data ~/duke/temp/sebeda/test1_30_sub/data_BIDS -path-output ~/ukbiobank_results/ -script process_data.sh -script-args $PATH_GRADCORR_FILE
~~~
The results of the test are now in: `duke/temp/sebeda/test1_30_sub/ukbiobank_results`
The following subject present this problem:
* sub-1000710
* sub-1000537
* sub-1000710
* sub-1000918
* sub-1000985
* sub-1002169
* sub-1002191
* sub-1002721
### Example of the problem
For the subject sub-1000710,
QC for sct_label_vertabrae for T1w:

QC for sct_label_vertabrae for T2w:

While viewing images `sub-1000710_T1w.nii.gz` and `sub-1000710_T2w.nii.gz`, we can see that the patient moved between the two images:

Maybe generate a label template for discs with `sct_label_vertebrae` for T2w as done for T1w.
|
1.0
|
Unaligned labeling of discs for T2w images after sct_label_vertebrae in process_data.sh - ## Description
While running the pipeline on 30 subjects, 8 out of 30 subjects have disc labeling not aligned with the spinal cord for T2w images. In `process_data.sh`, disc labeling is generated with T1w image and then the template is use for T2w with `sct_register_multimodal`. This suggest that the patient moved between acquisitions.
The results were obtained by running the following line on Joplin:
~~~
sct_run_batch -jobs -1 -path-data ~/duke/temp/sebeda/test1_30_sub/data_BIDS -path-output ~/ukbiobank_results/ -script process_data.sh -script-args $PATH_GRADCORR_FILE
~~~
The results of the test are now in: `duke/temp/sebeda/test1_30_sub/ukbiobank_results`
The following subject present this problem:
* sub-1000710
* sub-1000537
* sub-1000710
* sub-1000918
* sub-1000985
* sub-1002169
* sub-1002191
* sub-1002721
### Example of the problem
For the subject sub-1000710,
QC for sct_label_vertabrae for T1w:

QC for sct_label_vertabrae for T2w:

While viewing images `sub-1000710_T1w.nii.gz` and `sub-1000710_T2w.nii.gz`, we can see that the patient moved between the two images:

Maybe generate a label template for discs with `sct_label_vertebrae` for T2w as done for T1w.
|
process
|
unaligned labeling of discs for images after sct label vertebrae in process data sh description while running the pipeline on subjects out of subjects have disc labeling not aligned with the spinal cord for images in process data sh disc labeling is generated with image and then the template is use for with sct register multimodal this suggest that the patient moved between acquisitions the results were obtained by running the following line on joplin sct run batch jobs path data duke temp sebeda sub data bids path output ukbiobank results script process data sh script args path gradcorr file the results of the test are now in duke temp sebeda sub ukbiobank results the following subject present this problem sub sub sub sub sub sub sub sub example of the problem for the subject sub qc for sct label vertabrae for qc for sct label vertabrae for while viewing images sub nii gz and sub nii gz we can see that the patient moved between the two images maybe generate a label template for discs with sct label vertebrae for as done for
| 1
|
9,301
| 12,311,133,470
|
IssuesEvent
|
2020-05-12 11:53:05
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
opened
|
chore: lint check fails on master
|
type: process
|
A recent change of `flake8` or its config causes the code style check to fail on the `master` branch - it complains about a variable name in a line in `job.py` that has last been changed almost 10 months ago:
```
google/cloud/bigquery/job.py:3119:39: E741 ambiguous variable name 'l'
```
|
1.0
|
chore: lint check fails on master - A recent change of `flake8` or its config causes the code style check to fail on the `master` branch - it complains about a variable name in a line in `job.py` that has last been changed almost 10 months ago:
```
google/cloud/bigquery/job.py:3119:39: E741 ambiguous variable name 'l'
```
|
process
|
chore lint check fails on master a recent change of or its config causes the code style check to fail on the master branch it complains about a variable name in a line in job py that has last been changed almost months ago google cloud bigquery job py ambiguous variable name l
| 1
|
149,337
| 11,890,415,148
|
IssuesEvent
|
2020-03-28 18:14:01
|
CBICA/CaPTk
|
https://api.github.com/repos/CBICA/CaPTk
|
closed
|
Writing dicom from nifti flips the image [Command Line]
|
Critical Testathon-Feb-2020
|
**Describe the bug**
Using -n2d in utilities flips the image
**To Reproduce**
Go to /cbica/home/sharmapa/comp_space/Testathon/nifti/ACSL
Command
>captk Utilities -i ACSL_2019.02.28_t1.nii.gz -o ../../N2D/ACSL/ -n2d ../../dicoms/ACSL/ACSL_2019.02.28_t1/
**Screenshots**
Original Dicom

Original Nifti

Niffti to Dicom conversion using -n2d

**CaPTk Version**
1.7.6
**Desktop (please complete the following information):**
Linux on cluster
|
1.0
|
Writing dicom from nifti flips the image [Command Line] - **Describe the bug**
Using -n2d in utilities flips the image
**To Reproduce**
Go to /cbica/home/sharmapa/comp_space/Testathon/nifti/ACSL
Command
>captk Utilities -i ACSL_2019.02.28_t1.nii.gz -o ../../N2D/ACSL/ -n2d ../../dicoms/ACSL/ACSL_2019.02.28_t1/
**Screenshots**
Original Dicom

Original Nifti

Niffti to Dicom conversion using -n2d

**CaPTk Version**
1.7.6
**Desktop (please complete the following information):**
Linux on cluster
|
non_process
|
writing dicom from nifti flips the image describe the bug using in utilities flips the image to reproduce go to cbica home sharmapa comp space testathon nifti acsl command captk utilities i acsl nii gz o acsl dicoms acsl acsl screenshots original dicom original nifti niffti to dicom conversion using captk version desktop please complete the following information linux on cluster
| 0
|
831
| 2,633,491,543
|
IssuesEvent
|
2015-03-09 03:48:02
|
piwik/piwik
|
https://api.github.com/repos/piwik/piwik
|
closed
|
Computation load after adding new segments on long existing Piwik instance
|
c: Performance c: Usability Enhancement RFC
|
Current archiving flow can bring certaing ammount of problems when archiving segments on instances which are 2-3-4 years old. During normal flow of cron archiving, there will always be only last 2 years processed. Adding new segment(s) can bring up two following problems for archive process:
- if at any time archiving will fall back to computing last3, last4 (anything bigger than last2) for year period - it can cause processing of days and months for that 3rd year. This will cause huge increase of archiving time. In addition, the more segments exist on instance, the more additional computing will have to be done to complete last3 archiving.
- On big traffic instances adding new segments can also be troublesome, because Piwik would try to process last 2 days, weeks, months and years. Given a batch of 50 segments, such 'catching up' will take significantly bigger ammount of time. Workaround for this is to add only couple segments at one time, but this can be troublesome when having many Piwik admins.
The goal of this ticket is to decide best approarch to this issue and hopefully plan implementation for improvement
|
True
|
Computation load after adding new segments on long existing Piwik instance - Current archiving flow can bring certaing ammount of problems when archiving segments on instances which are 2-3-4 years old. During normal flow of cron archiving, there will always be only last 2 years processed. Adding new segment(s) can bring up two following problems for archive process:
- if at any time archiving will fall back to computing last3, last4 (anything bigger than last2) for year period - it can cause processing of days and months for that 3rd year. This will cause huge increase of archiving time. In addition, the more segments exist on instance, the more additional computing will have to be done to complete last3 archiving.
- On big traffic instances adding new segments can also be troublesome, because Piwik would try to process last 2 days, weeks, months and years. Given a batch of 50 segments, such 'catching up' will take significantly bigger ammount of time. Workaround for this is to add only couple segments at one time, but this can be troublesome when having many Piwik admins.
The goal of this ticket is to decide best approarch to this issue and hopefully plan implementation for improvement
|
non_process
|
computation load after adding new segments on long existing piwik instance current archiving flow can bring certaing ammount of problems when archiving segments on instances which are years old during normal flow of cron archiving there will always be only last years processed adding new segment s can bring up two following problems for archive process if at any time archiving will fall back to computing anything bigger than for year period it can cause processing of days and months for that year this will cause huge increase of archiving time in addition the more segments exist on instance the more additional computing will have to be done to complete archiving on big traffic instances adding new segments can also be troublesome because piwik would try to process last days weeks months and years given a batch of segments such catching up will take significantly bigger ammount of time workaround for this is to add only couple segments at one time but this can be troublesome when having many piwik admins the goal of this ticket is to decide best approarch to this issue and hopefully plan implementation for improvement
| 0
|
346,526
| 24,886,957,742
|
IssuesEvent
|
2022-10-28 08:37:04
|
Tan-Jia-Rong/ped
|
https://api.github.com/repos/Tan-Jia-Rong/ped
|
opened
|
EditCommand InvalidCommandMessage is not updated
|
type.DocumentationBug severity.VeryLow
|
# In application

The following message is returned upon invalid command format.
```
Invalid command format!
edit: Edits the details of the person currently being viewed. Existing values will be overwritten by the input values.
Parameters: INDEX (must be a positive integer) [n/NAME] [p/PHONE] [lp/LESSON PLAN] [h/INDEX HOMEWORK][a/INDEX ATTENDANCE][s/INDEX SESSION][g/INDEX GRADE PROGRESS][t/TAG]...
Example: edit 1 p/91234567
```
Notice that edit no longer work as such, however the parameter fields are not updated.
<!--session: 1666943665038-5d62fbce-5564-42bc-8c1d-e89d15f9da5b-->
<!--Version: Web v3.4.4-->
|
1.0
|
EditCommand InvalidCommandMessage is not updated - # In application

The following message is returned upon invalid command format.
```
Invalid command format!
edit: Edits the details of the person currently being viewed. Existing values will be overwritten by the input values.
Parameters: INDEX (must be a positive integer) [n/NAME] [p/PHONE] [lp/LESSON PLAN] [h/INDEX HOMEWORK][a/INDEX ATTENDANCE][s/INDEX SESSION][g/INDEX GRADE PROGRESS][t/TAG]...
Example: edit 1 p/91234567
```
Notice that edit no longer work as such, however the parameter fields are not updated.
<!--session: 1666943665038-5d62fbce-5564-42bc-8c1d-e89d15f9da5b-->
<!--Version: Web v3.4.4-->
|
non_process
|
editcommand invalidcommandmessage is not updated in application the following message is returned upon invalid command format invalid command format edit edits the details of the person currently being viewed existing values will be overwritten by the input values parameters index must be a positive integer example edit p notice that edit no longer work as such however the parameter fields are not updated
| 0
|
8,688
| 11,826,863,930
|
IssuesEvent
|
2020-03-21 20:05:02
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
multiple fields in join attributes by location
|
Bug Processing
|
In "Join attributes by location (summary)" processing alg, when defining fields as model input you cannot use several of them in the fields to summarise (you have a combobox).

observed in QGIS 3.12
|
1.0
|
multiple fields in join attributes by location - In "Join attributes by location (summary)" processing alg, when defining fields as model input you cannot use several of them in the fields to summarise (you have a combobox).

observed in QGIS 3.12
|
process
|
multiple fields in join attributes by location in join attributes by location summary processing alg when defining fields as model input you cannot use several of them in the fields to summarise you have a combobox observed in qgis
| 1
|
5,213
| 7,065,250,463
|
IssuesEvent
|
2018-01-06 17:44:11
|
aws/aws-sdk-ruby
|
https://api.github.com/repos/aws/aws-sdk-ruby
|
closed
|
Question: How to search in specific folders when results are more than 1000?
|
service api usage-question
|
Please fill out the sections below to help us address your issue
### Issue description
I'm trying to gather a list of files inside of a prefix with a very large number of objects (+1000). For instance:
```
mydata/logs/2007-08-09/backup.gz
...
mydata/logs/2010-05-03/backup.gz
...
mydata/logs/2015-08-14/backup.gz
...
mydata/logs/2017-08-09/backup.gz
```
The ...'s are EVERY single date between each timestamp. I'm searching under mydata/logs and it has 1000's of objects in it. I only want to return those prefixes from 2017, i.e.
```
mydata/logs/2017-**-**/*
```
Is this possible? If not, would it be possible to search backwards from the latest date(i.e. 2018-01-05)?
### Gem name ('aws-sdk', 'aws-sdk-resources' or service gems like 'aws-sdk-s3') and its version
aws-sdk-s3 1.8.0
### Version of Ruby, OS environment
Ruby 2.4.1
### Code snippets / steps to reproduce
Output of search only returns the first 1000 results from a prefix.
|
1.0
|
Question: How to search in specific folders when results are more than 1000? - Please fill out the sections below to help us address your issue
### Issue description
I'm trying to gather a list of files inside of a prefix with a very large number of objects (+1000). For instance:
```
mydata/logs/2007-08-09/backup.gz
...
mydata/logs/2010-05-03/backup.gz
...
mydata/logs/2015-08-14/backup.gz
...
mydata/logs/2017-08-09/backup.gz
```
The ...'s are EVERY single date between each timestamp. I'm searching under mydata/logs and it has 1000's of objects in it. I only want to return those prefixes from 2017, i.e.
```
mydata/logs/2017-**-**/*
```
Is this possible? If not, would it be possible to search backwards from the latest date(i.e. 2018-01-05)?
### Gem name ('aws-sdk', 'aws-sdk-resources' or service gems like 'aws-sdk-s3') and its version
aws-sdk-s3 1.8.0
### Version of Ruby, OS environment
Ruby 2.4.1
### Code snippets / steps to reproduce
Output of search only returns the first 1000 results from a prefix.
|
non_process
|
question how to search in specific folders when results are more than please fill out the sections below to help us address your issue issue description i m trying to gather a list of files inside of a prefix with a very large number of objects for instance mydata logs backup gz mydata logs backup gz mydata logs backup gz mydata logs backup gz the s are every single date between each timestamp i m searching under mydata logs and it has s of objects in it i only want to return those prefixes from i e mydata logs is this possible if not would it be possible to search backwards from the latest date i e gem name aws sdk aws sdk resources or service gems like aws sdk and its version aws sdk version of ruby os environment ruby code snippets steps to reproduce output of search only returns the first results from a prefix
| 0
|
25,580
| 7,727,528,329
|
IssuesEvent
|
2018-05-25 03:13:14
|
JuliaLang/julia
|
https://api.github.com/repos/JuliaLang/julia
|
closed
|
Support larger machines with OpenBLAS
|
build linear algebra
|
Julia's `deps/Makefile` contains these lines:
```
# On linux, try to provision for the largest possible machine currently
OPENBLAS_BUILD_OPTS += NUM_THREADS=16
```
We have a system with 28 cores, and larger systems exist. Should this limit be increased?
|
1.0
|
Support larger machines with OpenBLAS - Julia's `deps/Makefile` contains these lines:
```
# On linux, try to provision for the largest possible machine currently
OPENBLAS_BUILD_OPTS += NUM_THREADS=16
```
We have a system with 28 cores, and larger systems exist. Should this limit be increased?
|
non_process
|
support larger machines with openblas julia s deps makefile contains these lines on linux try to provision for the largest possible machine currently openblas build opts num threads we have a system with cores and larger systems exist should this limit be increased
| 0
|
50,305
| 13,509,744,664
|
IssuesEvent
|
2020-09-14 09:39:23
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
Vulnerability: Non valid cookies crashes HttpHeaders (and therefore webservers)
|
P2 area-library closed-obsolete library-io type-security
|
_This issue was originally filed by nane.kr...@gmail.com_
---
HttpHeaders raises Exception when provided with non valid cookies (e.g. cookie values with whitespaces).
This causes webserver frameworks like start to crash when accessing cookies.
So an attacker can just provide a non valid cookie and webserver is crashing.
Proposal: non valid cookies should be handled by HttpHeaders without raising an exception.
I propose to do an urlCompenentEncode/Decode before setting/reading cookies via HttpHeaders.
Nevertheless I am not in the details of HttpHeaders implementation of dart:io. But the actual solution seems to be a vulnerability to me.
|
True
|
Vulnerability: Non valid cookies crashes HttpHeaders (and therefore webservers) - _This issue was originally filed by nane.kr...@gmail.com_
---
HttpHeaders raises Exception when provided with non valid cookies (e.g. cookie values with whitespaces).
This causes webserver frameworks like start to crash when accessing cookies.
So an attacker can just provide a non valid cookie and webserver is crashing.
Proposal: non valid cookies should be handled by HttpHeaders without raising an exception.
I propose to do an urlCompenentEncode/Decode before setting/reading cookies via HttpHeaders.
Nevertheless I am not in the details of HttpHeaders implementation of dart:io. But the actual solution seems to be a vulnerability to me.
|
non_process
|
vulnerability non valid cookies crashes httpheaders and therefore webservers this issue was originally filed by nane kr gmail com httpheaders raises exception when provided with non valid cookies e g cookie values with whitespaces this causes webserver frameworks like start to crash when accessing cookies so an attacker can just provide a non valid cookie and webserver is crashing proposal non valid cookies should be handled by httpheaders without raising an exception i propose to do an urlcompenentencode decode before setting reading cookies via httpheaders nevertheless i am not in the details of httpheaders implementation of dart io but the actual solution seems to be a vulnerability to me
| 0
|
5,868
| 8,686,694,970
|
IssuesEvent
|
2018-12-03 11:34:10
|
aiidateam/aiida_core
|
https://api.github.com/repos/aiidateam/aiida_core
|
closed
|
ORM redesign: rename `JobCalculation` to `CalcJobNode`
|
topic/JobCalculationAndProcess topic/NamingIssues topic/ORM
|
With the new hierarchy in place, it is probably best to already move the implementation for the old `JobCalculation` class there as well, even though we might want to keep an alias to `JobCalculation` for the time being, to not break plugins too heavily. However, doing this, will make the future migration of separating the `node` from the `process` a lot easier and it allows us to delete the old `aiida.orm.calculation` hierarchy.
|
1.0
|
ORM redesign: rename `JobCalculation` to `CalcJobNode` - With the new hierarchy in place, it is probably best to already move the implementation for the old `JobCalculation` class there as well, even though we might want to keep an alias to `JobCalculation` for the time being, to not break plugins too heavily. However, doing this, will make the future migration of separating the `node` from the `process` a lot easier and it allows us to delete the old `aiida.orm.calculation` hierarchy.
|
process
|
orm redesign rename jobcalculation to calcjobnode with the new hierarchy in place it is probably best to already move the implementation for the old jobcalculation class there as well even though we might want to keep an alias to jobcalculation for the time being to not break plugins too heavily however doing this will make the future migration of separating the node from the process a lot easier and it allows us to delete the old aiida orm calculation hierarchy
| 1
|
12,909
| 15,285,472,935
|
IssuesEvent
|
2021-02-23 13:36:16
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
Branding Centralisation for Participant Manager
|
Feature request P1 Process: Fixed Process: Release 2 Process: Tested dev
|
we require to centralize all branding elements to a single place in the code with additional properties file values as required.
Also update the readme documentation once the task is done
|
3.0
|
Branding Centralisation for Participant Manager - we require to centralize all branding elements to a single place in the code with additional properties file values as required.
Also update the readme documentation once the task is done
|
process
|
branding centralisation for participant manager we require to centralize all branding elements to a single place in the code with additional properties file values as required also update the readme documentation once the task is done
| 1
|
131,270
| 10,686,948,194
|
IssuesEvent
|
2019-10-22 15:15:54
|
golang/go
|
https://api.github.com/repos/golang/go
|
closed
|
net: TestTCPServer flaky on macOS 10.12 builders
|
NeedsInvestigation OS-Darwin Testing
|
From https://build.golang.org/log/19f0ac1d66a927076b256862638641e261489304 on the `darwin-amd64-race` builder:
```
--- FAIL: TestTCPServer (10.01s)
server_test.go:60: skipping tcp :0<-127.0.0.1 test
server_test.go:60: skipping tcp 0.0.0.0:0<-127.0.0.1 test
server_test.go:60: skipping tcp [::ffff:0.0.0.0]:0<-127.0.0.1 test
server_test.go:60: skipping tcp [::]:0<-::1 test
server_test.go:60: skipping tcp :0<-::1 test
server_test.go:60: skipping tcp 0.0.0.0:0<-::1 test
server_test.go:60: skipping tcp [::ffff:0.0.0.0]:0<-::1 test
server_test.go:60: skipping tcp [::]:0<-127.0.0.1 test
server_test.go:60: skipping tcp :0<-127.0.0.1 test
server_test.go:60: skipping tcp 0.0.0.0:0<-127.0.0.1 test
server_test.go:60: skipping tcp [::ffff:0.0.0.0]:0<-127.0.0.1 test
server_test.go:60: skipping tcp [::]:0<-::1 test
server_test.go:60: skipping tcp :0<-::1 test
server_test.go:60: skipping tcp 0.0.0.0:0<-::1 test
server_test.go:60: skipping tcp [::ffff:0.0.0.0]:0<-::1 test
server_test.go:60: skipping tcp [::]:0<-127.0.0.1 test
server_test.go:107: dial tcp [::1]:49737: i/o timeout
FAIL
FAIL net 23.757s
```
CC @mikioh @ianlancetaylor
|
1.0
|
net: TestTCPServer flaky on macOS 10.12 builders - From https://build.golang.org/log/19f0ac1d66a927076b256862638641e261489304 on the `darwin-amd64-race` builder:
```
--- FAIL: TestTCPServer (10.01s)
server_test.go:60: skipping tcp :0<-127.0.0.1 test
server_test.go:60: skipping tcp 0.0.0.0:0<-127.0.0.1 test
server_test.go:60: skipping tcp [::ffff:0.0.0.0]:0<-127.0.0.1 test
server_test.go:60: skipping tcp [::]:0<-::1 test
server_test.go:60: skipping tcp :0<-::1 test
server_test.go:60: skipping tcp 0.0.0.0:0<-::1 test
server_test.go:60: skipping tcp [::ffff:0.0.0.0]:0<-::1 test
server_test.go:60: skipping tcp [::]:0<-127.0.0.1 test
server_test.go:60: skipping tcp :0<-127.0.0.1 test
server_test.go:60: skipping tcp 0.0.0.0:0<-127.0.0.1 test
server_test.go:60: skipping tcp [::ffff:0.0.0.0]:0<-127.0.0.1 test
server_test.go:60: skipping tcp [::]:0<-::1 test
server_test.go:60: skipping tcp :0<-::1 test
server_test.go:60: skipping tcp 0.0.0.0:0<-::1 test
server_test.go:60: skipping tcp [::ffff:0.0.0.0]:0<-::1 test
server_test.go:60: skipping tcp [::]:0<-127.0.0.1 test
server_test.go:107: dial tcp [::1]:49737: i/o timeout
FAIL
FAIL net 23.757s
```
CC @mikioh @ianlancetaylor
|
non_process
|
net testtcpserver flaky on macos builders from on the darwin race builder fail testtcpserver server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go skipping tcp test server test go dial tcp i o timeout fail fail net cc mikioh ianlancetaylor
| 0
|
27,552
| 4,321,492,812
|
IssuesEvent
|
2016-07-25 10:24:23
|
node-influx/node-influx
|
https://api.github.com/repos/node-influx/node-influx
|
closed
|
Records not inserted if precision is not nanoseconds
|
bug pull-request welcome question tests-required
|
I'm using InfluxDB v0.9.5 and if the precision is not `ns` (nanoseconds), the records are not inserted. No error/warning whatsoever. Also see https://goo.gl/3NcNwM:
> The ability to query with different precisions was unhooked in the transition from 0.8.8 to 0.9.0. It will return, but meanwhile the workaround is to always specify timestamps in nanosecond precision as per RFC 3339.
Anyone else experiencing this issue? Should the default precision be changed to `ns`?
|
1.0
|
Records not inserted if precision is not nanoseconds - I'm using InfluxDB v0.9.5 and if the precision is not `ns` (nanoseconds), the records are not inserted. No error/warning whatsoever. Also see https://goo.gl/3NcNwM:
> The ability to query with different precisions was unhooked in the transition from 0.8.8 to 0.9.0. It will return, but meanwhile the workaround is to always specify timestamps in nanosecond precision as per RFC 3339.
Anyone else experiencing this issue? Should the default precision be changed to `ns`?
|
non_process
|
records not inserted if precision is not nanoseconds i m using influxdb and if the precision is not ns nanoseconds the records are not inserted no error warning whatsoever also see the ability to query with different precisions was unhooked in the transition from to it will return but meanwhile the workaround is to always specify timestamps in nanosecond precision as per rfc anyone else experiencing this issue should the default precision be changed to ns
| 0
|
7,870
| 11,044,626,648
|
IssuesEvent
|
2019-12-09 13:41:53
|
prisma/photonjs
|
https://api.github.com/repos/prisma/photonjs
|
opened
|
Photon facade with netlify
|
bug/2-confirmed kind/bug kind/regression process/candidate
|
Deploying a function with the following generator configuration on netlify fails with the following error
Config:
```
generator photon {
provider = "photonjs"
}
```
Error:
```
Error:
Invalid `photon.()` invocation in /var/task/hello-facade.js:8:37
Photon binary for current platform rhel-openssl-1.0.x could not be found.
Photon looked in null but couldn't find it.
Make sure to adjust the generator configuration in the schema.prisma file:
generator photon {
provider = "photonjs"
binaryTargets = ["native"]
}
Please run prisma2 generate for your changes to take effect.
Note, that by providing `native`, Photon automatically resolves `rhel-openssl-1.0.x`.
Read more about deploying Photon: https://github.com/prisma/prisma2/blob/master/docs/core/generators/photonjs.md
```
Unique deployment link with this error: https://5dee48f3f80a08893213d0b7--p2-netlify-facade.netlify.com/.netlify/functions/hello-facade
Note: The suggestion in the error looks incomplete, when seen in production.
---
I changed it to the following config, ran `prisma2 generate` and re-deployed, to match the binary being detected by Photon:
Config:
```
generator photon {
provider = "photonjs"
binaryTargets = ["native", "rhel-openssl-1.0.x"]
}
```
But it still fails as if the binary is not there and the error message is a bit weird too, as it lists a thing that is already there.
Error:
```
Error:
Invalid `photon.()` invocation in /var/task/hello-facade.js:8:37
Photon binary for current platform rhel-openssl-1.0.x could not be found.
Photon looked in null but couldn't find it.
Make sure to adjust the generator configuration in the schema.prisma file:
generator photon {
provider = "photonjs"
binaryTargets = ["native", "rhel-openssl-1.0.x", "rhel-openssl-1.0.x"]
}
Please run prisma2 generate for your changes to take effect.
Note, that by providing `native`, Photon automatically resolves `rhel-openssl-1.0.x`.
Read more about deploying Photon: https://github.com/prisma/prisma2/blob/master/docs/core/generators/photonjs.md
```
Unique deployment link with this error: https://5dee4a45377315f699e946df--p2-netlify-facade.netlify.com/.netlify/functions/hello-facade
**Note: All of this was with a global version of prisma2 CLI.**
---
Trying it with a local version of `prisma2` CLI now, same result.
Note that for both local, global CLI, it does list that it is downloading the listed `rhel` binary.
```
divyendusingh [p2-netlify]$ prisma2 generate
> Downloading Prisma engines for darwin and rhel-openssl-1.0.x [====================] 100%
Generating Photon.js to ./node_modules/@prisma/photon
Done in 1.78s
divyendusingh [p2-netlify]$ yarn prisma2 generate
yarn run v1.17.3
$ /Users/divyendusingh/Documents/prisma/p2-netlify/node_modules/.bin/prisma2 generate
> Downloading Prisma engines for darwin and rhel-openssl-1.0.x [====================] 100%
Generating Photon.js to ./node_modules/@prisma/photon
Done in 1.82s
β¨ Done in 3.42s.
```
|
1.0
|
Photon facade with netlify - Deploying a function with the following generator configuration on netlify fails with the following error
Config:
```
generator photon {
provider = "photonjs"
}
```
Error:
```
Error:
Invalid `photon.()` invocation in /var/task/hello-facade.js:8:37
Photon binary for current platform rhel-openssl-1.0.x could not be found.
Photon looked in null but couldn't find it.
Make sure to adjust the generator configuration in the schema.prisma file:
generator photon {
provider = "photonjs"
binaryTargets = ["native"]
}
Please run prisma2 generate for your changes to take effect.
Note, that by providing `native`, Photon automatically resolves `rhel-openssl-1.0.x`.
Read more about deploying Photon: https://github.com/prisma/prisma2/blob/master/docs/core/generators/photonjs.md
```
Unique deployment link with this error: https://5dee48f3f80a08893213d0b7--p2-netlify-facade.netlify.com/.netlify/functions/hello-facade
Note: The suggestion in the error looks incomplete, when seen in production.
---
I changed it to the following config, ran `prisma2 generate` and re-deployed, to match the binary being detected by Photon:
Config:
```
generator photon {
provider = "photonjs"
binaryTargets = ["native", "rhel-openssl-1.0.x"]
}
```
But it still fails as if the binary is not there and the error message is a bit weird too, as it lists a thing that is already there.
Error:
```
Error:
Invalid `photon.()` invocation in /var/task/hello-facade.js:8:37
Photon binary for current platform rhel-openssl-1.0.x could not be found.
Photon looked in null but couldn't find it.
Make sure to adjust the generator configuration in the schema.prisma file:
generator photon {
provider = "photonjs"
binaryTargets = ["native", "rhel-openssl-1.0.x", "rhel-openssl-1.0.x"]
}
Please run prisma2 generate for your changes to take effect.
Note, that by providing `native`, Photon automatically resolves `rhel-openssl-1.0.x`.
Read more about deploying Photon: https://github.com/prisma/prisma2/blob/master/docs/core/generators/photonjs.md
```
Unique deployment link with this error: https://5dee4a45377315f699e946df--p2-netlify-facade.netlify.com/.netlify/functions/hello-facade
**Note: All of this was with a global version of prisma2 CLI.**
---
Trying it with a local version of `prisma2` CLI now, same result.
Note that for both local, global CLI, it does list that it is downloading the listed `rhel` binary.
```
divyendusingh [p2-netlify]$ prisma2 generate
> Downloading Prisma engines for darwin and rhel-openssl-1.0.x [====================] 100%
Generating Photon.js to ./node_modules/@prisma/photon
Done in 1.78s
divyendusingh [p2-netlify]$ yarn prisma2 generate
yarn run v1.17.3
$ /Users/divyendusingh/Documents/prisma/p2-netlify/node_modules/.bin/prisma2 generate
> Downloading Prisma engines for darwin and rhel-openssl-1.0.x [====================] 100%
Generating Photon.js to ./node_modules/@prisma/photon
Done in 1.82s
β¨ Done in 3.42s.
```
|
process
|
photon facade with netlify deploying a function with the following generator configuration on netlify fails with the following error config generator photon provider photonjs error error invalid photon invocation in var task hello facade js photon binary for current platform rhel openssl x could not be found photon looked in null but couldn t find it make sure to adjust the generator configuration in the schema prisma file generator photon provider photonjs binarytargets please run generate for your changes to take effect note that by providing native photon automatically resolves rhel openssl x read more about deploying photon unique deployment link with this error note the suggestion in the error looks incomplete when seen in production i changed it to the following config ran generate and re deployed to match the binary being detected by photon config generator photon provider photonjs binarytargets but it still fails as if the binary is not there and the error message is a bit weird too as it lists a thing that is already there error error invalid photon invocation in var task hello facade js photon binary for current platform rhel openssl x could not be found photon looked in null but couldn t find it make sure to adjust the generator configuration in the schema prisma file generator photon provider photonjs binarytargets please run generate for your changes to take effect note that by providing native photon automatically resolves rhel openssl x read more about deploying photon unique deployment link with this error note all of this was with a global version of cli trying it with a local version of cli now same result note that for both local global cli it does list that it is downloading the listed rhel binary divyendusingh generate downloading prisma engines for darwin and rhel openssl x generating photon js to node modules prisma photon done in divyendusingh yarn generate yarn run users divyendusingh documents prisma netlify node modules bin generate downloading prisma engines for darwin and rhel openssl x generating photon js to node modules prisma photon done in β¨ done in
| 1
|
222,755
| 7,438,471,024
|
IssuesEvent
|
2018-03-27 00:34:41
|
cwrc/ontology
|
https://api.github.com/repos/cwrc/ontology
|
closed
|
cwrcFamily definition
|
priority:routine project:CWRC Ontology status:needs discussion
|
use: https://en.wikipedia.org/wiki/Family
_and append?_
Within the context of this ontology, cwrc:Family is not so much designed to address questions surrounding βblood ties,β but offers instead a focus on the social (as opposed to biological) aspects of family relations.
|
1.0
|
cwrcFamily definition - use: https://en.wikipedia.org/wiki/Family
_and append?_
Within the context of this ontology, cwrc:Family is not so much designed to address questions surrounding βblood ties,β but offers instead a focus on the social (as opposed to biological) aspects of family relations.
|
non_process
|
cwrcfamily definition use and append within the context of this ontology cwrc family is not so much designed to address questions surrounding βblood ties β but offers instead a focus on the social as opposed to biological aspects of family relations
| 0
|
5,869
| 8,687,795,295
|
IssuesEvent
|
2018-12-03 14:39:34
|
aiidateam/aiida_core
|
https://api.github.com/repos/aiidateam/aiida_core
|
opened
|
JobProcess task_retrieve_job is not idempotent which can cause failures
|
priority/important topic/JobCalculationAndProcess
|
The retrieval task can fail if it is executed for a second time. This happens when a the retrieved files node is attached to the job for a second time causing a uniqueness violation.
If we know that, by the time the link is being created, the retrieved files node is correctly set up then we could just reuse the one we find from last time, effectively ignoring the uniqueness violation and using the current node.
|
1.0
|
JobProcess task_retrieve_job is not idempotent which can cause failures - The retrieval task can fail if it is executed for a second time. This happens when a the retrieved files node is attached to the job for a second time causing a uniqueness violation.
If we know that, by the time the link is being created, the retrieved files node is correctly set up then we could just reuse the one we find from last time, effectively ignoring the uniqueness violation and using the current node.
|
process
|
jobprocess task retrieve job is not idempotent which can cause failures the retrieval task can fail if it is executed for a second time this happens when a the retrieved files node is attached to the job for a second time causing a uniqueness violation if we know that by the time the link is being created the retrieved files node is correctly set up then we could just reuse the one we find from last time effectively ignoring the uniqueness violation and using the current node
| 1
|
420,232
| 28,240,932,590
|
IssuesEvent
|
2023-04-06 07:05:28
|
CryptoBlades/cryptoblades
|
https://api.github.com/repos/CryptoBlades/cryptoblades
|
opened
|
[Feature] - Removing ex-staff from CBK
|
documentation enhancement
|
### Prerequisites
- [ ] I checked to make sure that this feature has not already been filed
- [ ] I'm reporting this information to the correct repository
- [X] I understand enough about this issue to complete a comprehensive document
### Describe the feature and its requirements
-The CBK website has Dan on it still and he should be removed (https://cryptobladeskingdoms.io/)
### Is your feature request related to an existing issue? Please describe.
N/A
### Is there anything stopping this feature being completed?
N/A
### Describe alternatives you've considered
N/A
### Additional context
_No response_
|
1.0
|
[Feature] - Removing ex-staff from CBK - ### Prerequisites
- [ ] I checked to make sure that this feature has not already been filed
- [ ] I'm reporting this information to the correct repository
- [X] I understand enough about this issue to complete a comprehensive document
### Describe the feature and its requirements
-The CBK website has Dan on it still and he should be removed (https://cryptobladeskingdoms.io/)
### Is your feature request related to an existing issue? Please describe.
N/A
### Is there anything stopping this feature being completed?
N/A
### Describe alternatives you've considered
N/A
### Additional context
_No response_
|
non_process
|
removing ex staff from cbk prerequisites i checked to make sure that this feature has not already been filed i m reporting this information to the correct repository i understand enough about this issue to complete a comprehensive document describe the feature and its requirements the cbk website has dan on it still and he should be removed is your feature request related to an existing issue please describe n a is there anything stopping this feature being completed n a describe alternatives you ve considered n a additional context no response
| 0
|
18,663
| 24,582,012,050
|
IssuesEvent
|
2022-10-13 16:19:39
|
MicrosoftDocs/windows-dev-docs
|
https://api.github.com/repos/MicrosoftDocs/windows-dev-docs
|
closed
|
Image size
|
uwp/prod processes-and-threading/tech Pri2
|
The image size is given twice but differs, 600 by 320 and 620 by 300
#### Document Details
β *Do not edit this section. It is required for learn.microsoft.com β GitHub issue linking.*
* ID: 1a0fe87b-78e1-64a6-a7ed-51d8a6373a5d
* Version Independent ID: 71898667-5572-11bb-f53c-0d1ebc77511d
* Content: [Display a splash screen for more time - UWP applications](https://learn.microsoft.com/en-us/windows/uwp/launch-resume/create-a-customized-splash-screen)
* Content Source: [windows-apps-src/launch-resume/create-a-customized-splash-screen.md](https://github.com/MicrosoftDocs/windows-dev-docs/blob/docs/windows-apps-src/launch-resume/create-a-customized-splash-screen.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft**
|
1.0
|
Image size - The image size is given twice but differs, 600 by 320 and 620 by 300
#### Document Details
β *Do not edit this section. It is required for learn.microsoft.com β GitHub issue linking.*
* ID: 1a0fe87b-78e1-64a6-a7ed-51d8a6373a5d
* Version Independent ID: 71898667-5572-11bb-f53c-0d1ebc77511d
* Content: [Display a splash screen for more time - UWP applications](https://learn.microsoft.com/en-us/windows/uwp/launch-resume/create-a-customized-splash-screen)
* Content Source: [windows-apps-src/launch-resume/create-a-customized-splash-screen.md](https://github.com/MicrosoftDocs/windows-dev-docs/blob/docs/windows-apps-src/launch-resume/create-a-customized-splash-screen.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft**
|
process
|
image size the image size is given twice but differs by and by document details β do not edit this section it is required for learn microsoft com β github issue linking id version independent id content content source product uwp technology processes and threading github login alvinashcraft microsoft alias aashcraft
| 1
|
11,388
| 14,223,843,273
|
IssuesEvent
|
2020-11-17 18:47:24
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Enhance testbench to fully support fields parameter
|
api: storage type: process
|
The `fields` query parameter can include complex filter expressions:
https://cloud.google.com/storage/docs/json_api/v1/how-tos/performance#partial-response
But we current only support simple field names separated by comma (e.g. `field1,field2`).
|
1.0
|
Enhance testbench to fully support fields parameter - The `fields` query parameter can include complex filter expressions:
https://cloud.google.com/storage/docs/json_api/v1/how-tos/performance#partial-response
But we current only support simple field names separated by comma (e.g. `field1,field2`).
|
process
|
enhance testbench to fully support fields parameter the fields query parameter can include complex filter expressions but we current only support simple field names separated by comma e g
| 1
|
13,070
| 8,788,429,637
|
IssuesEvent
|
2018-12-20 22:09:41
|
aspnet/AspNetCore
|
https://api.github.com/repos/aspnet/AspNetCore
|
closed
|
Role-based authorization [Authorize (Roles = "Admin")] does not work when upgrading to ASP.NET Core 2.2
|
area-security
|
[Authorize (Roles = "Admin")] does not work when upgrading to ASP.net core 2.2
this is controller:
<img width="269" alt="image" src="https://user-images.githubusercontent.com/6295602/50211678-2b7b2c80-03b4-11e9-9883-2d478edefc6d.png">
this is database: [AspNetUsers]
<img width="289" alt="image" src="https://user-images.githubusercontent.com/6295602/50211720-42218380-03b4-11e9-9359-6501841df804.png">
this is database: [AspNetRoles]
<img width="376" alt="image" src="https://user-images.githubusercontent.com/6295602/50211752-5796ad80-03b4-11e9-9a3b-686996a783ac.png">
this is database: [AspNetUserRoles]
<img width="284" alt="image" src="https://user-images.githubusercontent.com/6295602/50211783-6a10e700-03b4-11e9-844d-a28ddfa8302e.png">
This is the result of the run after I login use chris@neo.org
<img width="567" alt="image" src="https://user-images.githubusercontent.com/6295602/50211934-c8d66080-03b4-11e9-8f0b-ebc5697ebb26.png">
This works on asp.net core 2.1, but it doesn't work when I upgraded to asp.net core 2.2
|
True
|
Role-based authorization [Authorize (Roles = "Admin")] does not work when upgrading to ASP.NET Core 2.2 - [Authorize (Roles = "Admin")] does not work when upgrading to ASP.net core 2.2
this is controller:
<img width="269" alt="image" src="https://user-images.githubusercontent.com/6295602/50211678-2b7b2c80-03b4-11e9-9883-2d478edefc6d.png">
this is database: [AspNetUsers]
<img width="289" alt="image" src="https://user-images.githubusercontent.com/6295602/50211720-42218380-03b4-11e9-9359-6501841df804.png">
this is database: [AspNetRoles]
<img width="376" alt="image" src="https://user-images.githubusercontent.com/6295602/50211752-5796ad80-03b4-11e9-9a3b-686996a783ac.png">
this is database: [AspNetUserRoles]
<img width="284" alt="image" src="https://user-images.githubusercontent.com/6295602/50211783-6a10e700-03b4-11e9-844d-a28ddfa8302e.png">
This is the result of the run after I login use chris@neo.org
<img width="567" alt="image" src="https://user-images.githubusercontent.com/6295602/50211934-c8d66080-03b4-11e9-8f0b-ebc5697ebb26.png">
This works on asp.net core 2.1, but it doesn't work when I upgraded to asp.net core 2.2
|
non_process
|
role based authorization does not work when upgrading to asp net core does not work when upgrading to asp net core this is controller img width alt image src this is database img width alt image src this is database img width alt image src this is database img width alt image src this is the result of the run after i login use chris neo org img width alt image src this works on asp net core but it doesn t work when i upgraded to asp net core
| 0
|
16,562
| 21,575,276,600
|
IssuesEvent
|
2022-05-02 13:09:23
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
opened
|
NPE: Cannot invoke "String.getBytes()" because "key" is null
|
kind/bug area/reliability team/process-automation
|
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Evaluating a decision results in a NullPointerException. The NPE occurs during evaluation of a decision. It appears to be iterating over a map of variables and breaks when it encounters a variable with the key `null`.
**To Reproduce**
Not sure
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
No NPE should occur. If any null values are possible we should be able to deal with them.
**Log/Stacktrace**
https://console.cloud.google.com/errors/detail/CP_pnKWJn4Ta9AE;service=zeebe;time=P7D?project=camunda-cloud-240911
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
java.lang.NullPointerException: Cannot invoke "String.getBytes()" because "key" is null
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.$anonfun$writeValue$2(FeelToMessagePackTransformer.scala:50) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at scala.collection.immutable.Map$Map2.foreach(Map.scala:342) ~[scala-library-2.13.8.jar:?]
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.writeValue(FeelToMessagePackTransformer.scala:49) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.$anonfun$writeValue$1(FeelToMessagePackTransformer.scala:42) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.$anonfun$writeValue$1$adapted(FeelToMessagePackTransformer.scala:42) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at scala.collection.immutable.List.foreach(List.scala:333) ~[scala-library-2.13.8.jar:?]
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.writeValue(FeelToMessagePackTransformer.scala:42) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.toMessagePack(FeelToMessagePackTransformer.scala:27) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at io.camunda.zeebe.dmn.impl.DmnScalaDecisionEngine.toMessagePack(DmnScalaDecisionEngine.java:170) ~[zeebe-dmn-8.0.0.jar:8.0.0]
at io.camunda.zeebe.dmn.impl.EvaluatedDmnScalaDecision.of(EvaluatedDmnScalaDecision.java:56) ~[zeebe-dmn-8.0.0.jar:8.0.0]
at io.camunda.zeebe.dmn.impl.DmnScalaDecisionEngine.lambda$getEvaluatedDecisions$1(DmnScalaDecisionEngine.java:162) ~[zeebe-dmn-8.0.0.jar:8.0.0]
at scala.collection.immutable.List.foreach(List.scala:333) ~[scala-library-2.13.8.jar:?]
at io.camunda.zeebe.dmn.impl.DmnScalaDecisionEngine.getEvaluatedDecisions(DmnScalaDecisionEngine.java:159) ~[zeebe-dmn-8.0.0.jar:8.0.0]
at java.util.Optional.map(Unknown Source) ~[?:?]
at io.camunda.zeebe.dmn.impl.DmnScalaDecisionEngine.evaluateDecisionById(DmnScalaDecisionEngine.java:105) ~[zeebe-dmn-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.evaluateDecisionInDrg(BpmnDecisionBehavior.java:182) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.lambda$evaluateDecision$3(BpmnDecisionBehavior.java:111) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.flatMap(Either.java:366) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.evaluateDecision(BpmnDecisionBehavior.java:109) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor$CalledDecisionBehavior.lambda$onActivate$0(BusinessRuleTaskProcessor.java:89) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.flatMap(Either.java:366) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor$CalledDecisionBehavior.onActivate(BusinessRuleTaskProcessor.java:89) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor.onActivate(BusinessRuleTaskProcessor.java:40) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor.onActivate(BusinessRuleTaskProcessor.java:21) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.lambda$processEvent$2(BpmnStreamProcessor.java:128) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.ifRightOrLeft(Either.java:381) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processEvent(BpmnStreamProcessor.java:127) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.lambda$processRecord$0(BpmnStreamProcessor.java:110) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.ifRightOrLeft(Either.java:381) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processRecord(BpmnStreamProcessor.java:107) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.TypedRecordProcessor.processRecord(TypedRecordProcessor.java:54) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.lambda$processInTransaction$3(ProcessingStateMachine.java:300) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.db.impl.rocksdb.transaction.ZeebeTransaction.run(ZeebeTransaction.java:84) ~[zeebe-db-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processInTransaction(ProcessingStateMachine.java:290) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processCommand(ProcessingStateMachine.java:253) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.tryToReadNextRecord(ProcessingStateMachine.java:213) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.readNextRecord(ProcessingStateMachine.java:189) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.invoke(ActorJob.java:79) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.execute(ActorJob.java:44) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorTask.execute(ActorTask.java:122) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:97) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.doWork(ActorThread.java:80) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.run(ActorThread.java:189) ~[zeebe-util-8.0.0.jar:8.0.0]
```
</p>
</details>
**Environment:**
- OS: Camunda Cloud
- Zeebe Version: Seen in 8.0.0 and 8.0.1
|
1.0
|
NPE: Cannot invoke "String.getBytes()" because "key" is null - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Evaluating a decision results in a NullPointerException. The NPE occurs during evaluation of a decision. It appears to be iterating over a map of variables and breaks when it encounters a variable with the key `null`.
**To Reproduce**
Not sure
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
No NPE should occur. If any null values are possible we should be able to deal with them.
**Log/Stacktrace**
https://console.cloud.google.com/errors/detail/CP_pnKWJn4Ta9AE;service=zeebe;time=P7D?project=camunda-cloud-240911
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
java.lang.NullPointerException: Cannot invoke "String.getBytes()" because "key" is null
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.$anonfun$writeValue$2(FeelToMessagePackTransformer.scala:50) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at scala.collection.immutable.Map$Map2.foreach(Map.scala:342) ~[scala-library-2.13.8.jar:?]
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.writeValue(FeelToMessagePackTransformer.scala:49) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.$anonfun$writeValue$1(FeelToMessagePackTransformer.scala:42) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.$anonfun$writeValue$1$adapted(FeelToMessagePackTransformer.scala:42) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at scala.collection.immutable.List.foreach(List.scala:333) ~[scala-library-2.13.8.jar:?]
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.writeValue(FeelToMessagePackTransformer.scala:42) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at io.camunda.zeebe.feel.impl.FeelToMessagePackTransformer.toMessagePack(FeelToMessagePackTransformer.scala:27) ~[zeebe-feel-integration-8.0.0.jar:8.0.0]
at io.camunda.zeebe.dmn.impl.DmnScalaDecisionEngine.toMessagePack(DmnScalaDecisionEngine.java:170) ~[zeebe-dmn-8.0.0.jar:8.0.0]
at io.camunda.zeebe.dmn.impl.EvaluatedDmnScalaDecision.of(EvaluatedDmnScalaDecision.java:56) ~[zeebe-dmn-8.0.0.jar:8.0.0]
at io.camunda.zeebe.dmn.impl.DmnScalaDecisionEngine.lambda$getEvaluatedDecisions$1(DmnScalaDecisionEngine.java:162) ~[zeebe-dmn-8.0.0.jar:8.0.0]
at scala.collection.immutable.List.foreach(List.scala:333) ~[scala-library-2.13.8.jar:?]
at io.camunda.zeebe.dmn.impl.DmnScalaDecisionEngine.getEvaluatedDecisions(DmnScalaDecisionEngine.java:159) ~[zeebe-dmn-8.0.0.jar:8.0.0]
at java.util.Optional.map(Unknown Source) ~[?:?]
at io.camunda.zeebe.dmn.impl.DmnScalaDecisionEngine.evaluateDecisionById(DmnScalaDecisionEngine.java:105) ~[zeebe-dmn-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.evaluateDecisionInDrg(BpmnDecisionBehavior.java:182) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.lambda$evaluateDecision$3(BpmnDecisionBehavior.java:111) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.flatMap(Either.java:366) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.evaluateDecision(BpmnDecisionBehavior.java:109) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor$CalledDecisionBehavior.lambda$onActivate$0(BusinessRuleTaskProcessor.java:89) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.flatMap(Either.java:366) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor$CalledDecisionBehavior.onActivate(BusinessRuleTaskProcessor.java:89) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor.onActivate(BusinessRuleTaskProcessor.java:40) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor.onActivate(BusinessRuleTaskProcessor.java:21) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.lambda$processEvent$2(BpmnStreamProcessor.java:128) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.ifRightOrLeft(Either.java:381) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processEvent(BpmnStreamProcessor.java:127) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.lambda$processRecord$0(BpmnStreamProcessor.java:110) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.ifRightOrLeft(Either.java:381) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processRecord(BpmnStreamProcessor.java:107) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.TypedRecordProcessor.processRecord(TypedRecordProcessor.java:54) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.lambda$processInTransaction$3(ProcessingStateMachine.java:300) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.db.impl.rocksdb.transaction.ZeebeTransaction.run(ZeebeTransaction.java:84) ~[zeebe-db-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processInTransaction(ProcessingStateMachine.java:290) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processCommand(ProcessingStateMachine.java:253) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.tryToReadNextRecord(ProcessingStateMachine.java:213) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.readNextRecord(ProcessingStateMachine.java:189) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.invoke(ActorJob.java:79) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.execute(ActorJob.java:44) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorTask.execute(ActorTask.java:122) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:97) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.doWork(ActorThread.java:80) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.run(ActorThread.java:189) ~[zeebe-util-8.0.0.jar:8.0.0]
```
</p>
</details>
**Environment:**
- OS: Camunda Cloud
- Zeebe Version: Seen in 8.0.0 and 8.0.1
|
process
|
npe cannot invoke string getbytes because key is null describe the bug evaluating a decision results in a nullpointerexception the npe occurs during evaluation of a decision it appears to be iterating over a map of variables and breaks when it encounters a variable with the key null to reproduce not sure expected behavior no npe should occur if any null values are possible we should be able to deal with them log stacktrace full stacktrace java lang nullpointerexception cannot invoke string getbytes because key is null at io camunda zeebe feel impl feeltomessagepacktransformer anonfun writevalue feeltomessagepacktransformer scala at scala collection immutable map foreach map scala at io camunda zeebe feel impl feeltomessagepacktransformer writevalue feeltomessagepacktransformer scala at io camunda zeebe feel impl feeltomessagepacktransformer anonfun writevalue feeltomessagepacktransformer scala at io camunda zeebe feel impl feeltomessagepacktransformer anonfun writevalue adapted feeltomessagepacktransformer scala at scala collection immutable list foreach list scala at io camunda zeebe feel impl feeltomessagepacktransformer writevalue feeltomessagepacktransformer scala at io camunda zeebe feel impl feeltomessagepacktransformer tomessagepack feeltomessagepacktransformer scala at io camunda zeebe dmn impl dmnscaladecisionengine tomessagepack dmnscaladecisionengine java at io camunda zeebe dmn impl evaluateddmnscaladecision of evaluateddmnscaladecision java at io camunda zeebe dmn impl dmnscaladecisionengine lambda getevaluateddecisions dmnscaladecisionengine java at scala collection immutable list foreach list scala at io camunda zeebe dmn impl dmnscaladecisionengine getevaluateddecisions dmnscaladecisionengine java at java util optional map unknown source at io camunda zeebe dmn impl dmnscaladecisionengine evaluatedecisionbyid dmnscaladecisionengine java at io camunda zeebe engine processing bpmn behavior bpmndecisionbehavior evaluatedecisionindrg bpmndecisionbehavior java at io camunda zeebe engine processing bpmn behavior bpmndecisionbehavior lambda evaluatedecision bpmndecisionbehavior java at io camunda zeebe util either right flatmap either java at io camunda zeebe engine processing bpmn behavior bpmndecisionbehavior evaluatedecision bpmndecisionbehavior java at io camunda zeebe engine processing bpmn task businessruletaskprocessor calleddecisionbehavior lambda onactivate businessruletaskprocessor java at io camunda zeebe util either right flatmap either java at io camunda zeebe engine processing bpmn task businessruletaskprocessor calleddecisionbehavior onactivate businessruletaskprocessor java at io camunda zeebe engine processing bpmn task businessruletaskprocessor onactivate businessruletaskprocessor java at io camunda zeebe engine processing bpmn task businessruletaskprocessor onactivate businessruletaskprocessor java at io camunda zeebe engine processing bpmn bpmnstreamprocessor lambda processevent bpmnstreamprocessor java at io camunda zeebe util either right ifrightorleft either java at io camunda zeebe engine processing bpmn bpmnstreamprocessor processevent bpmnstreamprocessor java at io camunda zeebe engine processing bpmn bpmnstreamprocessor lambda processrecord bpmnstreamprocessor java at io camunda zeebe util either right ifrightorleft either java at io camunda zeebe engine processing bpmn bpmnstreamprocessor processrecord bpmnstreamprocessor java at io camunda zeebe engine processing streamprocessor typedrecordprocessor processrecord typedrecordprocessor java at io camunda zeebe engine processing streamprocessor processingstatemachine lambda processintransaction processingstatemachine java at io camunda zeebe db impl rocksdb transaction zeebetransaction run zeebetransaction java at io camunda zeebe engine processing streamprocessor processingstatemachine processintransaction processingstatemachine java at io camunda zeebe engine processing streamprocessor processingstatemachine processcommand processingstatemachine java at io camunda zeebe engine processing streamprocessor processingstatemachine trytoreadnextrecord processingstatemachine java at io camunda zeebe engine processing streamprocessor processingstatemachine readnextrecord processingstatemachine java at io camunda zeebe util sched actorjob invoke actorjob java at io camunda zeebe util sched actorjob execute actorjob java at io camunda zeebe util sched actortask execute actortask java at io camunda zeebe util sched actorthread executecurrenttask actorthread java at io camunda zeebe util sched actorthread dowork actorthread java at io camunda zeebe util sched actorthread run actorthread java environment os camunda cloud zeebe version seen in and
| 1
|
398,478
| 11,741,512,289
|
IssuesEvent
|
2020-03-11 21:55:58
|
thaliawww/concrexit
|
https://api.github.com/repos/thaliawww/concrexit
|
closed
|
Tweedehands boeken-verkoop
|
education feature priority: low
|
In GitLab by gerlings on Sep 7, 2016, 19:47
Adding a forum on which Thalia members can resell books.
|
1.0
|
Tweedehands boeken-verkoop - In GitLab by gerlings on Sep 7, 2016, 19:47
Adding a forum on which Thalia members can resell books.
|
non_process
|
tweedehands boeken verkoop in gitlab by gerlings on sep adding a forum on which thalia members can resell books
| 0
|
13,054
| 15,389,653,622
|
IssuesEvent
|
2021-03-03 12:23:52
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Obsolete precomposed term: GO:0021883 cell cycle arrest of committed forebrain neuronal progenitor cell
|
cell cycle and DNA processes obsoletion
|
This should be captured as a GO-CAM. No annotations, no mappings, not in any subsets.
|
1.0
|
Obsolete precomposed term: GO:0021883 cell cycle arrest of committed forebrain neuronal progenitor cell - This should be captured as a GO-CAM. No annotations, no mappings, not in any subsets.
|
process
|
obsolete precomposed term go cell cycle arrest of committed forebrain neuronal progenitor cell this should be captured as a go cam no annotations no mappings not in any subsets
| 1
|
222,014
| 17,032,099,751
|
IssuesEvent
|
2021-07-04 19:33:52
|
Learn-Write-Repeat/Learn-Write-Repeat.github.io
|
https://api.github.com/repos/Learn-Write-Repeat/Learn-Write-Repeat.github.io
|
closed
|
Understanding Template
|
documentation
|
Go through the template made by @akshayadme [here](https://github.com/Learn-Write-Repeat/Learn-Write-Repeat.github.io/issues/5#issuecomment-699019182)
Good things in the template:
1. Informative Navbar and footer.
2. Center aligned blog left and right spaces can be used for other information, like references or similar posts.
3. Header size is enough to give the topic and author details, and not that big which takes the whole page.
4. Interactive and formal design (Code + information)
Try implementing the same things in your template, or build a new one like this.
|
1.0
|
Understanding Template - Go through the template made by @akshayadme [here](https://github.com/Learn-Write-Repeat/Learn-Write-Repeat.github.io/issues/5#issuecomment-699019182)
Good things in the template:
1. Informative Navbar and footer.
2. Center aligned blog left and right spaces can be used for other information, like references or similar posts.
3. Header size is enough to give the topic and author details, and not that big which takes the whole page.
4. Interactive and formal design (Code + information)
Try implementing the same things in your template, or build a new one like this.
|
non_process
|
understanding template go through the template made by akshayadme good things in the template informative navbar and footer center aligned blog left and right spaces can be used for other information like references or similar posts header size is enough to give the topic and author details and not that big which takes the whole page interactive and formal design code information try implementing the same things in your template or build a new one like this
| 0
|
369,251
| 10,894,408,777
|
IssuesEvent
|
2019-11-19 08:37:05
|
projectacrn/acrn-hypervisor
|
https://api.github.com/repos/projectacrn/acrn-hypervisor
|
closed
|
Clean up the code on drm/i915/gvt.
|
priority: P3-Medium type: bug
|
1. A bit in GFX_MODE register could disable HW privilege check on commands from non-privilege batch buffers. Malicious guest can set this bit to allow to run privileged commands in non-privilege batch buffer.
2. Bits in CSFE_CHICKEN1 register could allow RCS/VCS/BCS access to other enginesβ non-privilege registers. Malicious guest can enable the access to get information of other context or break other context workload.
3. MI_FLUSH_DW, PIPE_CONTROLβs index mode writes host engine status page instead of per-context status page, guest may corrupt global status page.
4. intel_gvt_ggtt_validate_range() only check memoryβs start and end pointer in aperture OR in hidden range, a large buffer set to cover across this two ranges may be set.
|
1.0
|
Clean up the code on drm/i915/gvt. - 1. A bit in GFX_MODE register could disable HW privilege check on commands from non-privilege batch buffers. Malicious guest can set this bit to allow to run privileged commands in non-privilege batch buffer.
2. Bits in CSFE_CHICKEN1 register could allow RCS/VCS/BCS access to other enginesβ non-privilege registers. Malicious guest can enable the access to get information of other context or break other context workload.
3. MI_FLUSH_DW, PIPE_CONTROLβs index mode writes host engine status page instead of per-context status page, guest may corrupt global status page.
4. intel_gvt_ggtt_validate_range() only check memoryβs start and end pointer in aperture OR in hidden range, a large buffer set to cover across this two ranges may be set.
|
non_process
|
clean up the code on drm gvt a bit in gfx mode register could disable hw privilege check on commands from non privilege batch buffers malicious guest can set this bit to allow to run privileged commands in non privilege batch buffer bits in csfe register could allow rcs vcs bcs access to other enginesβ non privilege registers malicious guest can enable the access to get information of other context or break other context workload mi flush dw pipe controlβs index mode writes host engine status page instead of per context status page guest may corrupt global status page intel gvt ggtt validate range only check memoryβs start and end pointer in aperture or in hidden range a large buffer set to cover across this two ranges may be set
| 0
|
9,182
| 12,227,917,071
|
IssuesEvent
|
2020-05-03 17:11:25
|
emacs-ess/ESS
|
https://api.github.com/repos/emacs-ess/ESS
|
opened
|
Tagged prompt detection
|
process:eval
|
Prompt detection is needed for these tasks:
- Navigation and font-locking
- Set busy status of process
- Post processing output:
- Remove successive continuation prompts: `> + + + >`
- Add newline after intermediate prompts.
The _continuation_ prompts are displayed when incomplete expressions are sent to R. When long paragraphs are evaluated, this results in long lines of `+` which we would like to strip from the output.
The _intermediate_ prompts are displayed when multiple complete expressions are evaluated. They separate the outputs of different expressions separated by newline. Expressions separated by semi-colons do not get an intermediate prompt. These intermediate prompts are annoying because they cause output to be misaligned by 2 characters, which is especially problematic when the first line of output are column names.
With this input:
```r
0 +
+
1
3;4
5
```
R outputs two continuation prompts and two intermediate prompts:
```
+ + [1] 1
> [1] 3
[1] 4
> [1] 5
>
```
Prompt detection is important to ESS but it is tricky to get right. Prompt massaging is even trickier and we've gotten extra newlines in outputs lately. Also, the code is getting quite complex. So I'd like to propose a more robust way of detecting prompts, and simpler post-processing behaviour.
I think the way to solve prompt detection it to mark the R prompts with an unlikely sequence of ANSI escapes. We'll use that sequence to robustly detect the prompts created by the REPL. Using ANSI escapes to mark the prompts reduces the chance of leaking the markers where they shouldn't. In most cases, the escapes will be processed out of the output. TRAMP uses a similar strategy for detecting the echo of commands, see `tramp-echo-mark`.
In addition to tagged prompts, I think we should simplify the post-processing.
- Remove continuation prompts without replacement, unless they are trailing. The current replacement isn't very helpful, and instead of `+ . + output` I'd rather just see `output`. I'd prefer to keep things simple, but no strong feeling about keeping the current behaviour and making it customisable though, if you prefer the `.` thing.
- Even if we can detect intermediate prompts robustly, it's not possible to reliably detect when a new line should be inserted. It makes sense when a data frame is printed but not when a string or number is printed. Maybe we should only insert a newline when the output is multiline? However that won't solve the `source(echo = TRUE)` issue, so maybe we should stop trying to insert newlines?
It feels like intermediate prompts should be fixed in R itself since they are a general problem. Maybe there should be an option to echo the current expression after intermediate prompts, insert a newline, and only then the output. Command echo doesn't make sense for the first expression, but they are a good reminder for subsequent expressions, especially when output is long.
As a proof of concept I've implemented tagged prompts and simplified post-processing in https://github.com/lionel-/ESS/commit/dcd685fa. Preliminary tests shows the approach seems to work well. A nice benefit is that this unifies the `nowait` and `nil` code paths. The only difference between the two is echoing of commands.
|
1.0
|
Tagged prompt detection - Prompt detection is needed for these tasks:
- Navigation and font-locking
- Set busy status of process
- Post processing output:
- Remove successive continuation prompts: `> + + + >`
- Add newline after intermediate prompts.
The _continuation_ prompts are displayed when incomplete expressions are sent to R. When long paragraphs are evaluated, this results in long lines of `+` which we would like to strip from the output.
The _intermediate_ prompts are displayed when multiple complete expressions are evaluated. They separate the outputs of different expressions separated by newline. Expressions separated by semi-colons do not get an intermediate prompt. These intermediate prompts are annoying because they cause output to be misaligned by 2 characters, which is especially problematic when the first line of output are column names.
With this input:
```r
0 +
+
1
3;4
5
```
R outputs two continuation prompts and two intermediate prompts:
```
+ + [1] 1
> [1] 3
[1] 4
> [1] 5
>
```
Prompt detection is important to ESS but it is tricky to get right. Prompt massaging is even trickier and we've gotten extra newlines in outputs lately. Also, the code is getting quite complex. So I'd like to propose a more robust way of detecting prompts, and simpler post-processing behaviour.
I think the way to solve prompt detection it to mark the R prompts with an unlikely sequence of ANSI escapes. We'll use that sequence to robustly detect the prompts created by the REPL. Using ANSI escapes to mark the prompts reduces the chance of leaking the markers where they shouldn't. In most cases, the escapes will be processed out of the output. TRAMP uses a similar strategy for detecting the echo of commands, see `tramp-echo-mark`.
In addition to tagged prompts, I think we should simplify the post-processing.
- Remove continuation prompts without replacement, unless they are trailing. The current replacement isn't very helpful, and instead of `+ . + output` I'd rather just see `output`. I'd prefer to keep things simple, but no strong feeling about keeping the current behaviour and making it customisable though, if you prefer the `.` thing.
- Even if we can detect intermediate prompts robustly, it's not possible to reliably detect when a new line should be inserted. It makes sense when a data frame is printed but not when a string or number is printed. Maybe we should only insert a newline when the output is multiline? However that won't solve the `source(echo = TRUE)` issue, so maybe we should stop trying to insert newlines?
It feels like intermediate prompts should be fixed in R itself since they are a general problem. Maybe there should be an option to echo the current expression after intermediate prompts, insert a newline, and only then the output. Command echo doesn't make sense for the first expression, but they are a good reminder for subsequent expressions, especially when output is long.
As a proof of concept I've implemented tagged prompts and simplified post-processing in https://github.com/lionel-/ESS/commit/dcd685fa. Preliminary tests shows the approach seems to work well. A nice benefit is that this unifies the `nowait` and `nil` code paths. The only difference between the two is echoing of commands.
|
process
|
tagged prompt detection prompt detection is needed for these tasks navigation and font locking set busy status of process post processing output remove successive continuation prompts add newline after intermediate prompts the continuation prompts are displayed when incomplete expressions are sent to r when long paragraphs are evaluated this results in long lines of which we would like to strip from the output the intermediate prompts are displayed when multiple complete expressions are evaluated they separate the outputs of different expressions separated by newline expressions separated by semi colons do not get an intermediate prompt these intermediate prompts are annoying because they cause output to be misaligned by characters which is especially problematic when the first line of output are column names with this input r r outputs two continuation prompts and two intermediate prompts prompt detection is important to ess but it is tricky to get right prompt massaging is even trickier and we ve gotten extra newlines in outputs lately also the code is getting quite complex so i d like to propose a more robust way of detecting prompts and simpler post processing behaviour i think the way to solve prompt detection it to mark the r prompts with an unlikely sequence of ansi escapes we ll use that sequence to robustly detect the prompts created by the repl using ansi escapes to mark the prompts reduces the chance of leaking the markers where they shouldn t in most cases the escapes will be processed out of the output tramp uses a similar strategy for detecting the echo of commands see tramp echo mark in addition to tagged prompts i think we should simplify the post processing remove continuation prompts without replacement unless they are trailing the current replacement isn t very helpful and instead of output i d rather just see output i d prefer to keep things simple but no strong feeling about keeping the current behaviour and making it customisable though if you prefer the thing even if we can detect intermediate prompts robustly it s not possible to reliably detect when a new line should be inserted it makes sense when a data frame is printed but not when a string or number is printed maybe we should only insert a newline when the output is multiline however that won t solve the source echo true issue so maybe we should stop trying to insert newlines it feels like intermediate prompts should be fixed in r itself since they are a general problem maybe there should be an option to echo the current expression after intermediate prompts insert a newline and only then the output command echo doesn t make sense for the first expression but they are a good reminder for subsequent expressions especially when output is long as a proof of concept i ve implemented tagged prompts and simplified post processing in preliminary tests shows the approach seems to work well a nice benefit is that this unifies the nowait and nil code paths the only difference between the two is echoing of commands
| 1
|
41,151
| 6,892,976,547
|
IssuesEvent
|
2017-11-22 23:59:25
|
NREL/OpenStudio
|
https://api.github.com/repos/NREL/OpenStudio
|
closed
|
Install Instructions page needs to be updated for 2.x
|
Documentation Request
|
http://nrel.github.io/OpenStudio-user-documentation/getting_started/getting_started/#installation-steps
- [ ] SketchUP 2016 needs to change to 2017
- [ ] Ruby version needs to be upgraded.
Maybe be other changes needed as well, but wanted to document this while I was thinking about it.
|
1.0
|
Install Instructions page needs to be updated for 2.x - http://nrel.github.io/OpenStudio-user-documentation/getting_started/getting_started/#installation-steps
- [ ] SketchUP 2016 needs to change to 2017
- [ ] Ruby version needs to be upgraded.
Maybe be other changes needed as well, but wanted to document this while I was thinking about it.
|
non_process
|
install instructions page needs to be updated for x sketchup needs to change to ruby version needs to be upgraded maybe be other changes needed as well but wanted to document this while i was thinking about it
| 0
|
6,020
| 8,823,126,488
|
IssuesEvent
|
2019-01-02 12:20:57
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
opened
|
meetings: Can pick dates ending date that earlier than the starting date
|
2.0.7 Process bug bug
|
open new discussion.
fill the fields.
click on date.
pick starting date as today and ending date as yesterday.
it's show message but still saving the dates.

|
1.0
|
meetings: Can pick dates ending date that earlier than the starting date - open new discussion.
fill the fields.
click on date.
pick starting date as today and ending date as yesterday.
it's show message but still saving the dates.

|
process
|
meetings can pick dates ending date that earlier than the starting date open new discussion fill the fields click on date pick starting date as today and ending date as yesterday it s show message but still saving the dates
| 1
|
15,095
| 8,757,965,242
|
IssuesEvent
|
2018-12-14 23:35:23
|
dotnet/roslyn
|
https://api.github.com/repos/dotnet/roslyn
|
closed
|
Proposal: Replace string.Format with concatenation when the string uses only nameof
|
Area-Compilers Area-Performance Feature Request
|
#11259 and #22344 proposed additional rules under which `string.Format` (and implictly string interpolation) can be optimised away. Due to the [not superficially obvious issues with calling `ToString()`](https://github.com/dotnet/roslyn/pull/6738#issuecomment-156257037), these proposals have effectively languished.
This proposal (identical to the closed #17356) covers only the most simple of the cases discussed above: when a string being formatted **contains only constant strings and `nameof` expressions**. In that case, **and only that case**, it is completely safe to strip out the format call and replace it with with simple concatenation.
Example:
```csharp
class Foo
{
const string Bar = $"My method is named { nameof(Main) }";
void Main()
{
}
}
```
Here, `Bar` can be rewritten to:
```csharp
const string Bar = "My method is named " + nameof(Main);
```
which is compiled to:
```csharp
const string Bar = "My method is named Main";
```
This is currently prohibited by the compiler since this optimisation is not performed.
|
True
|
Proposal: Replace string.Format with concatenation when the string uses only nameof - #11259 and #22344 proposed additional rules under which `string.Format` (and implictly string interpolation) can be optimised away. Due to the [not superficially obvious issues with calling `ToString()`](https://github.com/dotnet/roslyn/pull/6738#issuecomment-156257037), these proposals have effectively languished.
This proposal (identical to the closed #17356) covers only the most simple of the cases discussed above: when a string being formatted **contains only constant strings and `nameof` expressions**. In that case, **and only that case**, it is completely safe to strip out the format call and replace it with with simple concatenation.
Example:
```csharp
class Foo
{
const string Bar = $"My method is named { nameof(Main) }";
void Main()
{
}
}
```
Here, `Bar` can be rewritten to:
```csharp
const string Bar = "My method is named " + nameof(Main);
```
which is compiled to:
```csharp
const string Bar = "My method is named Main";
```
This is currently prohibited by the compiler since this optimisation is not performed.
|
non_process
|
proposal replace string format with concatenation when the string uses only nameof and proposed additional rules under which string format and implictly string interpolation can be optimised away due to the these proposals have effectively languished this proposal identical to the closed covers only the most simple of the cases discussed above when a string being formatted contains only constant strings and nameof expressions in that case and only that case it is completely safe to strip out the format call and replace it with with simple concatenation example csharp class foo const string bar my method is named nameof main void main here bar can be rewritten to csharp const string bar my method is named nameof main which is compiled to csharp const string bar my method is named main this is currently prohibited by the compiler since this optimisation is not performed
| 0
|
22,596
| 31,818,782,141
|
IssuesEvent
|
2023-09-13 23:18:02
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
@truffle/environment 0.2.160 has 2 guarddog issues
|
npm-install-script npm-silent-process-execution
|
```{"npm-install-script":[{"code":" \"prepare\": \"exit 0\",","location":"package/package.json:19","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" return spawn(\"node\", [chainPath, ipcNetwork, base64OptionsString], {\n detached: true,\n stdio: \"ignore\"\n });","location":"package/develop.js:38","message":"This package is silently executing another executable"}]}```
|
1.0
|
@truffle/environment 0.2.160 has 2 guarddog issues - ```{"npm-install-script":[{"code":" \"prepare\": \"exit 0\",","location":"package/package.json:19","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" return spawn(\"node\", [chainPath, ipcNetwork, base64OptionsString], {\n detached: true,\n stdio: \"ignore\"\n });","location":"package/develop.js:38","message":"This package is silently executing another executable"}]}```
|
process
|
truffle environment has guarddog issues npm install script npm silent process execution n detached true n stdio ignore n location package develop js message this package is silently executing another executable
| 1
|
20,602
| 27,266,748,848
|
IssuesEvent
|
2023-02-22 18:40:03
|
USGS-WiM/StreamStats
|
https://api.github.com/repos/USGS-WiM/StreamStats
|
closed
|
BP: Add "Compute Flow Statistics" checklist
|
Batch Processor
|
Part of #1455
- [x] Create a checkbox that says "Compute Flow Statistics". When the checkbox is checked, the checklist (described below) should appear. When the checkbox is unchecked, the checklist should disappear.
- [x] Create a checklist that says "Select Flow Statistics:"
- [x] When a Region/State is selected, make a service call to https://streamstats.usgs.gov/nssservices/Regions/AZ,NA/Scenarios (example is for Arizona; you include the "code" for the Region and then add ",NA" as well to include nation-wide Statistic Groups) to get a full list of all the Statistics Groups
- [x] Display the "statisticGroupName" of each Statistic Group in the checklist
- [x] Include a way (buttons?) to "Check all" and "Uncheck all" the flow statistics in the checklist
|
1.0
|
BP: Add "Compute Flow Statistics" checklist - Part of #1455
- [x] Create a checkbox that says "Compute Flow Statistics". When the checkbox is checked, the checklist (described below) should appear. When the checkbox is unchecked, the checklist should disappear.
- [x] Create a checklist that says "Select Flow Statistics:"
- [x] When a Region/State is selected, make a service call to https://streamstats.usgs.gov/nssservices/Regions/AZ,NA/Scenarios (example is for Arizona; you include the "code" for the Region and then add ",NA" as well to include nation-wide Statistic Groups) to get a full list of all the Statistics Groups
- [x] Display the "statisticGroupName" of each Statistic Group in the checklist
- [x] Include a way (buttons?) to "Check all" and "Uncheck all" the flow statistics in the checklist
|
process
|
bp add compute flow statistics checklist part of create a checkbox that says compute flow statistics when the checkbox is checked the checklist described below should appear when the checkbox is unchecked the checklist should disappear create a checklist that says select flow statistics when a region state is selected make a service call to example is for arizona you include the code for the region and then add na as well to include nation wide statistic groups to get a full list of all the statistics groups display the statisticgroupname of each statistic group in the checklist include a way buttons to check all and uncheck all the flow statistics in the checklist
| 1
|
12,456
| 14,935,221,405
|
IssuesEvent
|
2021-01-25 11:37:55
|
smertatli/SWE-573
|
https://api.github.com/repos/smertatli/SWE-573
|
closed
|
Research SoBigData
|
in process
|
Do research about SoBigData API and document the followings:
- What is SoBigData
-For what purposes it can be used
-What features they provide for social media analysis, social network analysis, and text mining
|
1.0
|
Research SoBigData - Do research about SoBigData API and document the followings:
- What is SoBigData
-For what purposes it can be used
-What features they provide for social media analysis, social network analysis, and text mining
|
process
|
research sobigdata do research about sobigdata api and document the followings what is sobigdata for what purposes it can be used what features they provide for social media analysis social network analysis and text mining
| 1
|
64,863
| 26,887,846,931
|
IssuesEvent
|
2023-02-06 05:55:25
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
`az storage account blob-service-properties update` fails when privateEndpointConnections exist as of CLI version 2.44.1
|
Storage Service Attention question customer-reported Auto-Assign Azure CLI Team
|
### `az storage account blob-service-properties update` fails when privateEndpointConnections exists as of CLI version 2.44.1
**Related command**
az storage account blob-service-properties update `
--resource-group $ResourceGroupName `
--account-name $AccountName `
--enable-restore-policy true `
--restore-days 30
**Describe the bug**
ERROR: (ConflictFeatureEnabled) Conflicting feature 'privateEndpointConnections' is enabled. Please disable it and retry.
**To Reproduce**
- storage account create
- need a private endpoint on the storage account
- run the command
**Expected behavior**
The command works and those properties are set (was working in 2.43)
**Environment summary**
azure-cli 2.44.1
|
1.0
|
`az storage account blob-service-properties update` fails when privateEndpointConnections exist as of CLI version 2.44.1 - ### `az storage account blob-service-properties update` fails when privateEndpointConnections exists as of CLI version 2.44.1
**Related command**
az storage account blob-service-properties update `
--resource-group $ResourceGroupName `
--account-name $AccountName `
--enable-restore-policy true `
--restore-days 30
**Describe the bug**
ERROR: (ConflictFeatureEnabled) Conflicting feature 'privateEndpointConnections' is enabled. Please disable it and retry.
**To Reproduce**
- storage account create
- need a private endpoint on the storage account
- run the command
**Expected behavior**
The command works and those properties are set (was working in 2.43)
**Environment summary**
azure-cli 2.44.1
|
non_process
|
az storage account blob service properties update fails when privateendpointconnections exist as of cli version az storage account blob service properties update fails when privateendpointconnections exists as of cli version related command az storage account blob service properties update resource group resourcegroupname account name accountname enable restore policy true restore days describe the bug error conflictfeatureenabled conflicting feature privateendpointconnections is enabled please disable it and retry to reproduce storage account create need a private endpoint on the storage account run the command expected behavior the command works and those properties are set was working in environment summary azure cli
| 0
|
125,974
| 17,861,748,388
|
IssuesEvent
|
2021-09-06 02:20:10
|
Galaxy-Software-Service/WebGoat
|
https://api.github.com/repos/Galaxy-Software-Service/WebGoat
|
reopened
|
CVE-2020-7774 (High) detected in y18n-3.2.1.tgz, y18n-4.0.0.tgz
|
security vulnerability
|
## CVE-2020-7774 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>y18n-3.2.1.tgz</b>, <b>y18n-4.0.0.tgz</b></p></summary>
<p>
<details><summary><b>y18n-3.2.1.tgz</b></p></summary>
<p>the bare-bones internationalization library used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-3.2.1.tgz">https://registry.npmjs.org/y18n/-/y18n-3.2.1.tgz</a></p>
<p>Path to dependency file: WebGoat/docs/package.json</p>
<p>Path to vulnerable library: WebGoat/docs/node_modules/y18n/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.3.tgz (Root Library)
- yargs-6.4.0.tgz
- :x: **y18n-3.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>y18n-4.0.0.tgz</b></p></summary>
<p>the bare-bones internationalization library used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p>
<p>Path to dependency file: WebGoat/docs/package.json</p>
<p>Path to vulnerable library: WebGoat/docs/node_modules/node-sass/node_modules/y18n/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-4.0.2.tgz (Root Library)
- node-sass-4.14.1.tgz
- sass-graph-2.2.5.tgz
- yargs-13.3.2.tgz
- :x: **y18n-4.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package y18n before 5.0.5. PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true
<p>Publish Date: 2020-11-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774</a></p>
<p>Release Date: 2020-11-17</p>
<p>Fix Resolution: 5.0.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"y18n","packageVersion":"3.2.1","isTransitiveDependency":true,"dependencyTree":"browser-sync:2.26.3;yargs:6.4.0;y18n:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.0.5"},{"packageType":"javascript/Node.js","packageName":"y18n","packageVersion":"4.0.0","isTransitiveDependency":true,"dependencyTree":"gulp-sass:4.0.2;node-sass:4.14.1;sass-graph:2.2.5;yargs:13.3.2;y18n:4.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.0.5"}],"vulnerabilityIdentifier":"CVE-2020-7774","vulnerabilityDetails":"This affects the package y18n before 5.0.5. PoC by po6ix: const y18n \u003d require(\u0027y18n\u0027)(); y18n.setLocale(\u0027__proto__\u0027); y18n.updateLocale({polluted: true}); console.log(polluted); // true","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774","cvss3Severity":"high","cvss3Score":"7.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-7774 (High) detected in y18n-3.2.1.tgz, y18n-4.0.0.tgz - ## CVE-2020-7774 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>y18n-3.2.1.tgz</b>, <b>y18n-4.0.0.tgz</b></p></summary>
<p>
<details><summary><b>y18n-3.2.1.tgz</b></p></summary>
<p>the bare-bones internationalization library used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-3.2.1.tgz">https://registry.npmjs.org/y18n/-/y18n-3.2.1.tgz</a></p>
<p>Path to dependency file: WebGoat/docs/package.json</p>
<p>Path to vulnerable library: WebGoat/docs/node_modules/y18n/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.3.tgz (Root Library)
- yargs-6.4.0.tgz
- :x: **y18n-3.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>y18n-4.0.0.tgz</b></p></summary>
<p>the bare-bones internationalization library used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p>
<p>Path to dependency file: WebGoat/docs/package.json</p>
<p>Path to vulnerable library: WebGoat/docs/node_modules/node-sass/node_modules/y18n/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-4.0.2.tgz (Root Library)
- node-sass-4.14.1.tgz
- sass-graph-2.2.5.tgz
- yargs-13.3.2.tgz
- :x: **y18n-4.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package y18n before 5.0.5. PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true
<p>Publish Date: 2020-11-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774</a></p>
<p>Release Date: 2020-11-17</p>
<p>Fix Resolution: 5.0.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"y18n","packageVersion":"3.2.1","isTransitiveDependency":true,"dependencyTree":"browser-sync:2.26.3;yargs:6.4.0;y18n:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.0.5"},{"packageType":"javascript/Node.js","packageName":"y18n","packageVersion":"4.0.0","isTransitiveDependency":true,"dependencyTree":"gulp-sass:4.0.2;node-sass:4.14.1;sass-graph:2.2.5;yargs:13.3.2;y18n:4.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.0.5"}],"vulnerabilityIdentifier":"CVE-2020-7774","vulnerabilityDetails":"This affects the package y18n before 5.0.5. PoC by po6ix: const y18n \u003d require(\u0027y18n\u0027)(); y18n.setLocale(\u0027__proto__\u0027); y18n.updateLocale({polluted: true}); console.log(polluted); // true","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774","cvss3Severity":"high","cvss3Score":"7.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in tgz tgz cve high severity vulnerability vulnerable libraries tgz tgz tgz the bare bones internationalization library used by yargs library home page a href path to dependency file webgoat docs package json path to vulnerable library webgoat docs node modules package json dependency hierarchy browser sync tgz root library yargs tgz x tgz vulnerable library tgz the bare bones internationalization library used by yargs library home page a href path to dependency file webgoat docs package json path to vulnerable library webgoat docs node modules node sass node modules package json dependency hierarchy gulp sass tgz root library node sass tgz sass graph tgz yargs tgz x tgz vulnerable library found in base branch master vulnerability details this affects the package before poc by const require setlocale proto updatelocale polluted true console log polluted true publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails this affects the package before poc by const require setlocale proto updatelocale polluted true console log polluted true vulnerabilityurl
| 0
|
21,205
| 28,242,533,008
|
IssuesEvent
|
2023-04-06 08:19:37
|
deepset-ai/haystack
|
https://api.github.com/repos/deepset-ai/haystack
|
closed
|
Add a helper function to get datasets from HF and write them to a DocumentStore
|
type:feature Contributions wanted! topic:preprocessing topic:document_store P3
|
As suggested by @vblagoje on the Haystack slack, it would be nice to have a helper function, similar to `open_search_index_to_documentstore` or `convert_files_to_docs` that would allow users to provide a HF dataset name and that would write them in Document format to a DocumentStore
|
1.0
|
Add a helper function to get datasets from HF and write them to a DocumentStore - As suggested by @vblagoje on the Haystack slack, it would be nice to have a helper function, similar to `open_search_index_to_documentstore` or `convert_files_to_docs` that would allow users to provide a HF dataset name and that would write them in Document format to a DocumentStore
|
process
|
add a helper function to get datasets from hf and write them to a documentstore as suggested by vblagoje on the haystack slack it would be nice to have a helper function similar to open search index to documentstore or convert files to docs that would allow users to provide a hf dataset name and that would write them in document format to a documentstore
| 1
|
763,795
| 26,774,913,527
|
IssuesEvent
|
2023-01-31 16:28:25
|
gwt-plugins/gwt-eclipse-plugin
|
https://api.github.com/repos/gwt-plugins/gwt-eclipse-plugin
|
closed
|
Test on Eclipse Photon
|
enhancement High Priority
|
It's time to test the gwt-eclipse-plugin with Eclipse Photon because of the upcoming release in June. Currently M6 is available for download:
http://www.eclipse.org/downloads/packages/release/Photon/M6
|
1.0
|
Test on Eclipse Photon - It's time to test the gwt-eclipse-plugin with Eclipse Photon because of the upcoming release in June. Currently M6 is available for download:
http://www.eclipse.org/downloads/packages/release/Photon/M6
|
non_process
|
test on eclipse photon it s time to test the gwt eclipse plugin with eclipse photon because of the upcoming release in june currently is available for download
| 0
|
11,403
| 14,237,748,009
|
IssuesEvent
|
2020-11-18 17:38:02
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Icons for PH calcs
|
Calculator Process Heating
|
Let me know if they still have the white background and I'll send them in slack
Flue Gas

Wall

Charge Materials

Hot Air Leak

Fixture

Opening

Cooling

Atmosphere

|
1.0
|
Icons for PH calcs - Let me know if they still have the white background and I'll send them in slack
Flue Gas

Wall

Charge Materials

Hot Air Leak

Fixture

Opening

Cooling

Atmosphere

|
process
|
icons for ph calcs let me know if they still have the white background and i ll send them in slack flue gas wall charge materials hot air leak fixture opening cooling atmosphere
| 1
|
7,979
| 7,177,807,054
|
IssuesEvent
|
2018-01-31 14:47:25
|
kaitai-io/kaitai_struct
|
https://api.github.com/repos/kaitai-io/kaitai_struct
|
closed
|
construct related, pypi description update
|
infrastructure
|
As the developer of Construct (not Kaitai, although I am somewhat proud of joining your effort :)
I would like to request an update of Kaitai pypi page, this one:
https://pypi.org/project/kaitaistruct/
- update Construct link to this one
https://construct.readthedocs.io/en/latest/
- remove Construct3 because it was abandoned years ago and never released
it doesnt have one feature that Construct 2.8 doesnt have, beacause I imported C3 features into C2
the blogpost is a nice read but the implementation is nothing but useful
|
1.0
|
construct related, pypi description update - As the developer of Construct (not Kaitai, although I am somewhat proud of joining your effort :)
I would like to request an update of Kaitai pypi page, this one:
https://pypi.org/project/kaitaistruct/
- update Construct link to this one
https://construct.readthedocs.io/en/latest/
- remove Construct3 because it was abandoned years ago and never released
it doesnt have one feature that Construct 2.8 doesnt have, beacause I imported C3 features into C2
the blogpost is a nice read but the implementation is nothing but useful
|
non_process
|
construct related pypi description update as the developer of construct not kaitai although i am somewhat proud of joining your effort i would like to request an update of kaitai pypi page this one update construct link to this one remove because it was abandoned years ago and never released it doesnt have one feature that construct doesnt have beacause i imported features into the blogpost is a nice read but the implementation is nothing but useful
| 0
|
382,345
| 26,493,948,157
|
IssuesEvent
|
2023-01-18 02:39:39
|
owncast/owncast
|
https://api.github.com/repos/owncast/owncast
|
closed
|
v0.1.0 documentation: Updated custom emoji docs
|
documentation
|
### Share your bug report, feature request, or comment.
The documentation around custom emoji need to be updated for the changes around v0.1.0 custom emoji and how to manage them.
|
1.0
|
v0.1.0 documentation: Updated custom emoji docs - ### Share your bug report, feature request, or comment.
The documentation around custom emoji need to be updated for the changes around v0.1.0 custom emoji and how to manage them.
|
non_process
|
documentation updated custom emoji docs share your bug report feature request or comment the documentation around custom emoji need to be updated for the changes around custom emoji and how to manage them
| 0
|
195,188
| 14,706,482,499
|
IssuesEvent
|
2021-01-04 19:55:44
|
envoyproxy/envoy
|
https://api.github.com/repos/envoyproxy/envoy
|
closed
|
Deflake xds_integration_test for Windows release builds
|
area/test flakes area/windows bug
|
#13688 enabled xds_integration_test for Windows, but this appears to be flaking, see:
* (Presubmit) https://dev.azure.com/cncf/envoy/_build/results?buildId=59693&view=logs&j=4afecb4c-71c7-5b5c-ab99-a70ed4c927ad&t=4cd2fc51-3314-5d69-4df3-f765ae0c08dc
* (Postsubmit) https://dev.azure.com/cncf/envoy/_build/results?buildId=59350&view=logs&s=27eddb93-7805-576c-c80f-37b2176e40f7&j=b840a642-5ff3-5357-2e4b-e06e40b0cffd
The failure happens in roughly the same test cases. This may have to do with edge triggered behavior. I'm going to disable temporarily and leave this as a TODO for @envoyproxy/windows-dev.
|
1.0
|
Deflake xds_integration_test for Windows release builds - #13688 enabled xds_integration_test for Windows, but this appears to be flaking, see:
* (Presubmit) https://dev.azure.com/cncf/envoy/_build/results?buildId=59693&view=logs&j=4afecb4c-71c7-5b5c-ab99-a70ed4c927ad&t=4cd2fc51-3314-5d69-4df3-f765ae0c08dc
* (Postsubmit) https://dev.azure.com/cncf/envoy/_build/results?buildId=59350&view=logs&s=27eddb93-7805-576c-c80f-37b2176e40f7&j=b840a642-5ff3-5357-2e4b-e06e40b0cffd
The failure happens in roughly the same test cases. This may have to do with edge triggered behavior. I'm going to disable temporarily and leave this as a TODO for @envoyproxy/windows-dev.
|
non_process
|
deflake xds integration test for windows release builds enabled xds integration test for windows but this appears to be flaking see presubmit postsubmit the failure happens in roughly the same test cases this may have to do with edge triggered behavior i m going to disable temporarily and leave this as a todo for envoyproxy windows dev
| 0
|
18,549
| 24,555,333,894
|
IssuesEvent
|
2022-10-12 15:26:59
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Study resources screen > Source anchor date related resources are not getting displayed in the study resources screen
|
Bug P0 iOS Process: Fixed Process: Tested dev
|
Description:
Precondition :
1. Source question should be added by using the date response type
2. Source anchor date related resources should be added in the SB
Steps:
1. Sign up or sign in to the mobile app
3. Enroll to the study
4. Submit the source question
5. Go to the resources screen and Observe (all the anchor date resources are getting displayed)
6. Go to SB, add a new resource in the SB for a particular study
7. Go to the mobile app, click on a particular study
8. Navigate to the resources screen and observe
AR: Source anchor date related resources are not getting displayed in the study resources screen
ER: All the resources should get displayed to the participant
|
2.0
|
[iOS] Study resources screen > Source anchor date related resources are not getting displayed in the study resources screen - Description:
Precondition :
1. Source question should be added by using the date response type
2. Source anchor date related resources should be added in the SB
Steps:
1. Sign up or sign in to the mobile app
3. Enroll to the study
4. Submit the source question
5. Go to the resources screen and Observe (all the anchor date resources are getting displayed)
6. Go to SB, add a new resource in the SB for a particular study
7. Go to the mobile app, click on a particular study
8. Navigate to the resources screen and observe
AR: Source anchor date related resources are not getting displayed in the study resources screen
ER: All the resources should get displayed to the participant
|
process
|
study resources screen source anchor date related resources are not getting displayed in the study resources screen description precondition source question should be added by using the date response type source anchor date related resources should be added in the sb steps sign up or sign in to the mobile app enroll to the study submit the source question go to the resources screen and observe all the anchor date resources are getting displayed go to sb add a new resource in the sb for a particular study go to the mobile app click on a particular study navigate to the resources screen and observe ar source anchor date related resources are not getting displayed in the study resources screen er all the resources should get displayed to the participant
| 1
|
15,895
| 20,092,892,695
|
IssuesEvent
|
2022-02-06 03:02:47
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Example runbook script does not work
|
automation/svc triaged cxp product-question process-automation/subsvc Pri2
|
Running the example runbook script gives the an error
```
Disable-AzContextAutosave : The term 'Disable-AzContextAutosave' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:11 char:1 + Disable-AzContextAutosave -Scope Process | Out-Null + ~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (Disable-AzContextAutosave:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException
```
---
#### Document Details
β *Do not edit this section. It is required for docs.microsoft.com β GitHub issue linking.*
* ID: 8a8470c7-57d1-e2ec-cc70-a43c8dfc42d6
* Version Independent ID: 2da6432e-e642-10ae-199c-9ebb1e19a5d8
* Content: [Create PowerShell runbook using managed identity in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/learn/powershell-runbook-managed-identity)
* Content Source: [articles/automation/learn/powershell-runbook-managed-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/learn/powershell-runbook-managed-identity.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **v-ssudhir**
|
1.0
|
Example runbook script does not work - Running the example runbook script gives the an error
```
Disable-AzContextAutosave : The term 'Disable-AzContextAutosave' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:11 char:1 + Disable-AzContextAutosave -Scope Process | Out-Null + ~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (Disable-AzContextAutosave:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException
```
---
#### Document Details
β *Do not edit this section. It is required for docs.microsoft.com β GitHub issue linking.*
* ID: 8a8470c7-57d1-e2ec-cc70-a43c8dfc42d6
* Version Independent ID: 2da6432e-e642-10ae-199c-9ebb1e19a5d8
* Content: [Create PowerShell runbook using managed identity in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/learn/powershell-runbook-managed-identity)
* Content Source: [articles/automation/learn/powershell-runbook-managed-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/learn/powershell-runbook-managed-identity.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **v-ssudhir**
|
process
|
example runbook script does not work running the example runbook script gives the an error disable azcontextautosave the term disable azcontextautosave is not recognized as the name of a cmdlet function script file or operable program check the spelling of the name or if a path was included verify that the path is correct and try again at line char disable azcontextautosave scope process out null categoryinfo objectnotfound disable azcontextautosave string commandnotfoundexception fullyqualifiederrorid commandnotfoundexception document details β do not edit this section it is required for docs microsoft com β github issue linking id version independent id content content source service automation sub service process automation github login sgsneha microsoft alias v ssudhir
| 1
|
258,073
| 8,154,179,814
|
IssuesEvent
|
2018-08-23 01:49:07
|
radical-cybertools/radical.pilot
|
https://api.github.com/repos/radical-cybertools/radical.pilot
|
closed
|
Insufficient resources
|
layer:saga priority:high topic:resource type:bug
|
I'm submitting a 1600 core job, 32 cores per task * 50 tasks, 4 stages on BW and I verified in the debug logs and the `agent_0.cfg` that I am indeed submitting the correct pilot request. However, SAGA is throwing an error about insufficient resources. I submitted on the `debug queue` but my request is well below the resource limit.
```
radical.saga.cpi : pmgr.0000.launching.0 : Thread-1 : ERROR : Exception in job monitoring thread: Insufficient system resources: Insufficient system resources: read from process failed '[Errno 5] Input/output error' : (RUNNING:
29241.0:DONE:0
11700.0:RUNNING:
11700.0:DONE:0
59304.0:RUNNING:
59304.0:DONE:0
45551.0:RUNNING:
45551.0:DONE:0
4620.0:RUNNING:
4620.0:DONE:0
61371.0:RUNNING:
61371.0:DONE:0
Shared connection to bw.ncsa.illinois.edu closed.
```
RCT stack:
```
radical.entk : 0.6.1
radical.pilot : 0.47.8 (hot fix release)
radical.utils : 0.47.4
saga : 0.47.3
```
PBS script:
```
#!/bin/bash
#PBS -N pilot.0000
#PBS -v RADICAL_PILOT_PROFILE=TRUE
#PBS -o /scratch/sciteam/dakka/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000/bootstrap_1.out
#PBS -e /scratch/sciteam/dakka/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000/bootstrap_1.err
#PBS -l walltime=0:30:00
#PBS -q debug
#PBS -A bamm
#PBS -l nodes=50:ppn=32
export PBS_O_WORKDIR=/scratch/sciteam/dakka/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000
mkdir -p /scratch/sciteam/dakka/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000
cd /scratch/sciteam/dakka/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000
export SAGA_PPN=32
```
[client_logs.zip](https://github.com/radical-cybertools/radical.pilot/files/1920031/client_logs.zip)
Client logs are attached, agent logs are in `/u/sciteam/dakka/scratch/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000`
|
1.0
|
Insufficient resources - I'm submitting a 1600 core job, 32 cores per task * 50 tasks, 4 stages on BW and I verified in the debug logs and the `agent_0.cfg` that I am indeed submitting the correct pilot request. However, SAGA is throwing an error about insufficient resources. I submitted on the `debug queue` but my request is well below the resource limit.
```
radical.saga.cpi : pmgr.0000.launching.0 : Thread-1 : ERROR : Exception in job monitoring thread: Insufficient system resources: Insufficient system resources: read from process failed '[Errno 5] Input/output error' : (RUNNING:
29241.0:DONE:0
11700.0:RUNNING:
11700.0:DONE:0
59304.0:RUNNING:
59304.0:DONE:0
45551.0:RUNNING:
45551.0:DONE:0
4620.0:RUNNING:
4620.0:DONE:0
61371.0:RUNNING:
61371.0:DONE:0
Shared connection to bw.ncsa.illinois.edu closed.
```
RCT stack:
```
radical.entk : 0.6.1
radical.pilot : 0.47.8 (hot fix release)
radical.utils : 0.47.4
saga : 0.47.3
```
PBS script:
```
#!/bin/bash
#PBS -N pilot.0000
#PBS -v RADICAL_PILOT_PROFILE=TRUE
#PBS -o /scratch/sciteam/dakka/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000/bootstrap_1.out
#PBS -e /scratch/sciteam/dakka/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000/bootstrap_1.err
#PBS -l walltime=0:30:00
#PBS -q debug
#PBS -A bamm
#PBS -l nodes=50:ppn=32
export PBS_O_WORKDIR=/scratch/sciteam/dakka/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000
mkdir -p /scratch/sciteam/dakka/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000
cd /scratch/sciteam/dakka/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000
export SAGA_PPN=32
```
[client_logs.zip](https://github.com/radical-cybertools/radical.pilot/files/1920031/client_logs.zip)
Client logs are attached, agent logs are in `/u/sciteam/dakka/scratch/radical.pilot.sandbox/rp.session.two.jdakka.017638.0007/pilot.0000`
|
non_process
|
insufficient resources i m submitting a core job cores per task tasks stages on bw and i verified in the debug logs and the agent cfg that i am indeed submitting the correct pilot request however saga is throwing an error about insufficient resources i submitted on the debug queue but my request is well below the resource limit radical saga cpi pmgr launching thread error exception in job monitoring thread insufficient system resources insufficient system resources read from process failed input output error running done running done running done running done running done running done shared connection to bw ncsa illinois edu closed rct stack radical entk radical pilot hot fix release radical utils saga pbs script bin bash pbs n pilot pbs v radical pilot profile true pbs o scratch sciteam dakka radical pilot sandbox rp session two jdakka pilot bootstrap out pbs e scratch sciteam dakka radical pilot sandbox rp session two jdakka pilot bootstrap err pbs l walltime pbs q debug pbs a bamm pbs l nodes ppn export pbs o workdir scratch sciteam dakka radical pilot sandbox rp session two jdakka pilot mkdir p scratch sciteam dakka radical pilot sandbox rp session two jdakka pilot cd scratch sciteam dakka radical pilot sandbox rp session two jdakka pilot export saga ppn client logs are attached agent logs are in u sciteam dakka scratch radical pilot sandbox rp session two jdakka pilot
| 0
|
12,241
| 14,743,857,575
|
IssuesEvent
|
2021-01-07 14:30:52
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
AR Error Email
|
anc-process anp-2 ant-support has attachment
|
In GitLab by @kdjstudios on Dec 5, 2019, 10:43
Hey team,
I thought we already had a ticket opened for this, but since it seem to continue to happen I am going to open a new ticket.
If there are no errors, this email should not be sent out. We only want to receive emails for when there are errors.

|
1.0
|
AR Error Email - In GitLab by @kdjstudios on Dec 5, 2019, 10:43
Hey team,
I thought we already had a ticket opened for this, but since it seem to continue to happen I am going to open a new ticket.
If there are no errors, this email should not be sent out. We only want to receive emails for when there are errors.

|
process
|
ar error email in gitlab by kdjstudios on dec hey team i thought we already had a ticket opened for this but since it seem to continue to happen i am going to open a new ticket if there are no errors this email should not be sent out we only want to receive emails for when there are errors uploads image png
| 1
|
1,423
| 3,989,317,965
|
IssuesEvent
|
2016-05-09 13:38:33
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
opened
|
NTR: (positive/negative) regulation of connective tissue replacement involved in wound healing
|
BHF-UCL miRNA New term request RNA processes
|
Dear Editors,
I would like to request new terms (Re: PMID:25590961, Figure 4C):
- regulation of connective tissue replacement involved in wound healing;
- positive regulation of connective tissue replacement involved in wound healing;
- negative regulation of connective tissue replacement involved in wound healing.
These terms would become children of the 'regulation of wound healing' family of terms and parents to the 'regulation of connective tissue replacement involved in inflammatory response wound healing' family of terms (no inflammation was evaluated in the annotated paper, hence the NTR).
The intended annotation:
Gata4 - negative regulation of connective tissue replacement involved in wound healing - part_of response to ischemia
Thank you,
Barbara
GOC:BHF, GOC:BHF_miRNA and GOC:bc
@rachhuntley
@RLovering
|
1.0
|
NTR: (positive/negative) regulation of connective tissue replacement involved in wound healing - Dear Editors,
I would like to request new terms (Re: PMID:25590961, Figure 4C):
- regulation of connective tissue replacement involved in wound healing;
- positive regulation of connective tissue replacement involved in wound healing;
- negative regulation of connective tissue replacement involved in wound healing.
These terms would become children of the 'regulation of wound healing' family of terms and parents to the 'regulation of connective tissue replacement involved in inflammatory response wound healing' family of terms (no inflammation was evaluated in the annotated paper, hence the NTR).
The intended annotation:
Gata4 - negative regulation of connective tissue replacement involved in wound healing - part_of response to ischemia
Thank you,
Barbara
GOC:BHF, GOC:BHF_miRNA and GOC:bc
@rachhuntley
@RLovering
|
process
|
ntr positive negative regulation of connective tissue replacement involved in wound healing dear editors i would like to request new terms re pmid figure regulation of connective tissue replacement involved in wound healing positive regulation of connective tissue replacement involved in wound healing negative regulation of connective tissue replacement involved in wound healing these terms would become children of the regulation of wound healing family of terms and parents to the regulation of connective tissue replacement involved in inflammatory response wound healing family of terms no inflammation was evaluated in the annotated paper hence the ntr the intended annotation negative regulation of connective tissue replacement involved in wound healing part of response to ischemia thank you barbara goc bhf goc bhf mirna and goc bc rachhuntley rlovering
| 1
|
89,884
| 8,216,653,205
|
IssuesEvent
|
2018-09-05 09:50:24
|
humera987/HumTestData
|
https://api.github.com/repos/humera987/HumTestData
|
opened
|
project_test : api_v1_orgs_id_users_get_auth_invalid
|
project_test
|
Project : project_test
Job : UAT
Env : UAT
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 500
Headers : {}
Endpoint : http://13.56.210.25/api/v1/orgs/{id}/users
Request :
Response :
Not enough variable values available to expand 'id'
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [500]
--- FX Bot ---
|
1.0
|
project_test : api_v1_orgs_id_users_get_auth_invalid - Project : project_test
Job : UAT
Env : UAT
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 500
Headers : {}
Endpoint : http://13.56.210.25/api/v1/orgs/{id}/users
Request :
Response :
Not enough variable values available to expand 'id'
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [500]
--- FX Bot ---
|
non_process
|
project test api orgs id users get auth invalid project project test job uat env uat region fxlabs us west result fail status code headers endpoint request response not enough variable values available to expand id logs assertion failed expected value but found fx bot
| 0
|
17,024
| 22,392,358,952
|
IssuesEvent
|
2022-06-17 08:58:02
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
closed
|
Feature request: maxtasksperchild for ProcessPoolExecutor
|
type-feature stdlib 3.11 expert-multiprocessing
|
BPO | [44733](https://bugs.python.org/issue44733)
--- | :---
Nosy | @gpshead, @pitrou, @cool-RR, @loganasherjones
PRs | <li>python/cpython#27373</li><li>python/cpython#32187</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = 'https://github.com/gpshead'
closed_at = None
created_at = <Date 2021-07-24.12:14:41.532>
labels = ['type-feature', 'library', '3.11']
title = 'Feature request: maxtasksperchild for ProcessPoolExecutor'
updated_at = <Date 2022-03-31.00:07:58.887>
user = 'https://github.com/cool-RR'
```
bugs.python.org fields:
```python
activity = <Date 2022-03-31.00:07:58.887>
actor = 'loganasherjones'
assignee = 'gregory.p.smith'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2021-07-24.12:14:41.532>
creator = 'cool-RR'
dependencies = []
files = []
hgrepos = []
issue_num = 44733
keywords = ['patch']
message_count = 13.0
messages = ['398143', '398240', '398243', '406689', '411449', '411450', '411452', '411471', '411546', '411547', '416310', '416314', '416406']
nosy_count = 4.0
nosy_names = ['gregory.p.smith', 'pitrou', 'cool-RR', 'loganasherjones']
pr_nums = ['27373', '32187']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'enhancement'
url = 'https://bugs.python.org/issue44733'
versions = ['Python 3.11']
```
</p></details>
|
1.0
|
Feature request: maxtasksperchild for ProcessPoolExecutor - BPO | [44733](https://bugs.python.org/issue44733)
--- | :---
Nosy | @gpshead, @pitrou, @cool-RR, @loganasherjones
PRs | <li>python/cpython#27373</li><li>python/cpython#32187</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = 'https://github.com/gpshead'
closed_at = None
created_at = <Date 2021-07-24.12:14:41.532>
labels = ['type-feature', 'library', '3.11']
title = 'Feature request: maxtasksperchild for ProcessPoolExecutor'
updated_at = <Date 2022-03-31.00:07:58.887>
user = 'https://github.com/cool-RR'
```
bugs.python.org fields:
```python
activity = <Date 2022-03-31.00:07:58.887>
actor = 'loganasherjones'
assignee = 'gregory.p.smith'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2021-07-24.12:14:41.532>
creator = 'cool-RR'
dependencies = []
files = []
hgrepos = []
issue_num = 44733
keywords = ['patch']
message_count = 13.0
messages = ['398143', '398240', '398243', '406689', '411449', '411450', '411452', '411471', '411546', '411547', '416310', '416314', '416406']
nosy_count = 4.0
nosy_names = ['gregory.p.smith', 'pitrou', 'cool-RR', 'loganasherjones']
pr_nums = ['27373', '32187']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'enhancement'
url = 'https://bugs.python.org/issue44733'
versions = ['Python 3.11']
```
</p></details>
|
process
|
feature request maxtasksperchild for processpoolexecutor bpo nosy gpshead pitrou cool rr loganasherjones prs python cpython python cpython note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee closed at none created at labels title feature request maxtasksperchild for processpoolexecutor updated at user bugs python org fields python activity actor loganasherjones assignee gregory p smith closed false closed date none closer none components creation creator cool rr dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage patch review status open superseder none type enhancement url versions
| 1
|
107,267
| 16,751,744,351
|
IssuesEvent
|
2021-06-12 02:02:29
|
turkdevops/graphql-tools
|
https://api.github.com/repos/turkdevops/graphql-tools
|
opened
|
CVE-2021-26707 (Medium) detected in merge-deep-3.0.2.tgz
|
security vulnerability
|
## CVE-2021-26707 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>merge-deep-3.0.2.tgz</b></p></summary>
<p>Recursively merge values in a javascript object.</p>
<p>Library home page: <a href="https://registry.npmjs.org/merge-deep/-/merge-deep-3.0.2.tgz">https://registry.npmjs.org/merge-deep/-/merge-deep-3.0.2.tgz</a></p>
<p>Path to dependency file: graphql-tools/docs/package.json</p>
<p>Path to vulnerable library: graphql-tools/docs/node_modules/merge-deep/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-theme-apollo-docs-4.1.4.tgz (Root Library)
- gatsby-theme-apollo-core-3.0.11.tgz
- webpack-4.3.3.tgz
- plugin-svgo-4.3.1.tgz
- :x: **merge-deep-3.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-tools/commit/9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4">9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4</a></p>
<p>Found in base branch: <b>v14</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in merge-deep before 3.0.3. A prototype pollution issue of Object.prototype via a constructor payload may lead to denial of service and other consequences.
<p>Publish Date: 2021-02-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-26707>CVE-2021-26707</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1922259">https://bugzilla.redhat.com/show_bug.cgi?id=1922259</a></p>
<p>Release Date: 2021-02-05</p>
<p>Fix Resolution: 3.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-26707 (Medium) detected in merge-deep-3.0.2.tgz - ## CVE-2021-26707 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>merge-deep-3.0.2.tgz</b></p></summary>
<p>Recursively merge values in a javascript object.</p>
<p>Library home page: <a href="https://registry.npmjs.org/merge-deep/-/merge-deep-3.0.2.tgz">https://registry.npmjs.org/merge-deep/-/merge-deep-3.0.2.tgz</a></p>
<p>Path to dependency file: graphql-tools/docs/package.json</p>
<p>Path to vulnerable library: graphql-tools/docs/node_modules/merge-deep/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-theme-apollo-docs-4.1.4.tgz (Root Library)
- gatsby-theme-apollo-core-3.0.11.tgz
- webpack-4.3.3.tgz
- plugin-svgo-4.3.1.tgz
- :x: **merge-deep-3.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-tools/commit/9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4">9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4</a></p>
<p>Found in base branch: <b>v14</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in merge-deep before 3.0.3. A prototype pollution issue of Object.prototype via a constructor payload may lead to denial of service and other consequences.
<p>Publish Date: 2021-02-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-26707>CVE-2021-26707</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1922259">https://bugzilla.redhat.com/show_bug.cgi?id=1922259</a></p>
<p>Release Date: 2021-02-05</p>
<p>Fix Resolution: 3.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in merge deep tgz cve medium severity vulnerability vulnerable library merge deep tgz recursively merge values in a javascript object library home page a href path to dependency file graphql tools docs package json path to vulnerable library graphql tools docs node modules merge deep package json dependency hierarchy gatsby theme apollo docs tgz root library gatsby theme apollo core tgz webpack tgz plugin svgo tgz x merge deep tgz vulnerable library found in head commit a href found in base branch vulnerability details a flaw was found in merge deep before a prototype pollution issue of object prototype via a constructor payload may lead to denial of service and other consequences publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
69,198
| 3,296,066,195
|
IssuesEvent
|
2015-11-01 14:59:50
|
cs2103aug2015-w15-1j/main
|
https://api.github.com/repos/cs2103aug2015-w15-1j/main
|
closed
|
Parser to check if the day of the month is out of bound
|
priority.high type.bug
|
eg. 30 Feb is out of bound since Feb only contains 28 or 29 days
Currently entering 30 Feb will set it to 2 Mar instead
|
1.0
|
Parser to check if the day of the month is out of bound - eg. 30 Feb is out of bound since Feb only contains 28 or 29 days
Currently entering 30 Feb will set it to 2 Mar instead
|
non_process
|
parser to check if the day of the month is out of bound eg feb is out of bound since feb only contains or days currently entering feb will set it to mar instead
| 0
|
163,364
| 13,916,147,128
|
IssuesEvent
|
2020-10-21 02:34:11
|
TesseractCoding/NeoAlgo
|
https://api.github.com/repos/TesseractCoding/NeoAlgo
|
closed
|
[ALGO/DS] Ackerman Function
|
C-Sharp Go Java JavaScript Python documentation easy good first issue no-issue-activity
|
Ackerman Function

Math
C
C++ [@yogesh-kansal ]
C#
Python
Java
Golang
Javascript
|
1.0
|
[ALGO/DS] Ackerman Function - Ackerman Function

Math
C
C++ [@yogesh-kansal ]
C#
Python
Java
Golang
Javascript
|
non_process
|
ackerman function ackerman function math c c c python java golang javascript
| 0
|
615,246
| 19,251,094,986
|
IssuesEvent
|
2021-12-09 05:23:49
|
internetarchive/openlibrary
|
https://api.github.com/repos/internetarchive/openlibrary
|
closed
|
Books that are `Not in library` have bigger height than the rest of the books card.
|
Type: Bug Priority: 3 Good First Issue Affects: UI Lead: @jimchamp
|
<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
### Evidence / Screenshot (if possible)
<img width="1438" alt="Screenshot 2021-12-07 at 7 59 30 AM" src="https://user-images.githubusercontent.com/73935799/144955174-fdc1455e-266b-4676-b023-9804086bb1dc.png">
### Relevant url?
<!-- `https://openlibrary.org/...` -->
### Steps to Reproduce
Open https://openlibrary.org/ and check for books that are `Not in the library`.You will find that they have a bigger height in terms of card size than the rest of the books.
<!-- What steps caused you to find the bug? -->
1. Go to ...
2. Do ...
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual: `Not in library` book cards being longer than others
* Expected:All book card having the same height irrespective of their library status
### Details
- **Logged in (Y/N)?**Y
- **Browser type/version?**Chrome 96.0.4664.55
- **Operating system?** MacOSX
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
@jimchamp
|
1.0
|
Books that are `Not in library` have bigger height than the rest of the books card. - <!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
### Evidence / Screenshot (if possible)
<img width="1438" alt="Screenshot 2021-12-07 at 7 59 30 AM" src="https://user-images.githubusercontent.com/73935799/144955174-fdc1455e-266b-4676-b023-9804086bb1dc.png">
### Relevant url?
<!-- `https://openlibrary.org/...` -->
### Steps to Reproduce
Open https://openlibrary.org/ and check for books that are `Not in the library`.You will find that they have a bigger height in terms of card size than the rest of the books.
<!-- What steps caused you to find the bug? -->
1. Go to ...
2. Do ...
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual: `Not in library` book cards being longer than others
* Expected:All book card having the same height irrespective of their library status
### Details
- **Logged in (Y/N)?**Y
- **Browser type/version?**Chrome 96.0.4664.55
- **Operating system?** MacOSX
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
@jimchamp
|
non_process
|
books that are not in library have bigger height than the rest of the books card evidence screenshot if possible img width alt screenshot at am src relevant url steps to reproduce open and check for books that are not in the library you will find that they have a bigger height in terms of card size than the rest of the books go to do actual not in library book cards being longer than others expected all book card having the same height irrespective of their library status details logged in y n y browser type version chrome operating system macosx environment prod dev local prod proposal constraints related files stakeholders jimchamp
| 0
|
468,006
| 13,459,835,668
|
IssuesEvent
|
2020-09-09 12:50:29
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.marmiton.org - site is not usable
|
browser-fenix engine-gecko ml-needsdiagnosis-false priority-normal
|
<!-- @browser: Firefox Mobile 82.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:82.0) Gecko/82.0 Firefox/82.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/57979 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.marmiton.org/recettes/recette_pudding-aux-poires-et-au-chocolat_24851.aspx
**Browser / Version**: Firefox Mobile 82.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
uBO enabled is causing dysfunctional website
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200906094118</li><li>channel: nightly</li><li>hasTouchScreen: true</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with β€οΈ_
|
1.0
|
www.marmiton.org - site is not usable - <!-- @browser: Firefox Mobile 82.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:82.0) Gecko/82.0 Firefox/82.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/57979 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.marmiton.org/recettes/recette_pudding-aux-poires-et-au-chocolat_24851.aspx
**Browser / Version**: Firefox Mobile 82.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
uBO enabled is causing dysfunctional website
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200906094118</li><li>channel: nightly</li><li>hasTouchScreen: true</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with β€οΈ_
|
non_process
|
site is not usable url browser version firefox mobile operating system android tested another browser no problem type site is not usable description buttons or links not working steps to reproduce ubo enabled is causing dysfunctional website browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true from with β€οΈ
| 0
|
986
| 3,442,402,230
|
IssuesEvent
|
2015-12-14 22:26:36
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
opened
|
by default, set "size" to 0
|
enhancement sct_process_segmentation
|
also: in the help, please move the field "size" right after the field "-a"
(more intuitive)
|
1.0
|
by default, set "size" to 0 - also: in the help, please move the field "size" right after the field "-a"
(more intuitive)
|
process
|
by default set size to also in the help please move the field size right after the field a more intuitive
| 1
|
3,020
| 2,607,969,729
|
IssuesEvent
|
2015-02-26 00:44:02
|
chrsmithdemos/leveldb
|
https://api.github.com/repos/chrsmithdemos/leveldb
|
opened
|
LevelDB occassionally crash when Snappy enabled
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
Sorry, I will try to find a way to reproduce the problem later. But see the
core dump below, does anyone have any ideas?
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
leveldb-1.14.0, snappy-1.1.0(static library)
Please provide any additional information below.
οΌ1οΌοΌ
#0 0Γ000000000044c6bf in leveldb::ReadBlock(leveldb::RandomAccessFile*,
leveldb::ReadOptions const&, leveldb::BlockHandle const&,
leveldb::BlockContents*) ()
(gdb) bt
#0 0Γ000000000044c6bf in leveldb::ReadBlock(leveldb::RandomAccessFile*,
leveldb::ReadOptions const&, leveldb::BlockHandle const&,
leveldb::BlockContents*) ()
#1 0Γ00000000004450fc in leveldb::Table::BlockReader(void*,
leveldb::ReadOptions const&, leveldb::Slice const&) ()
#2 0Γ00000000004460d0 in leveldb::(anonymous
namespace)::TwoLevelIterator::InitDataBlock() ()
#3 0Γ00000000004462fb in leveldb::(anonymous
namespace)::TwoLevelIterator::SkipEmptyDataBlocksForward() ()
#4 0Γ000000000044635e in leveldb::(anonymous
namespace)::TwoLevelIterator::Next() ()
#5 0Γ0000000000443460 in leveldb::(anonymous
namespace)::MergingIterator::Next() ()
#6 0Γ000000000042f6c2 in
leveldb::DBImpl::DoCompactionWork(leveldb::DBImpl::CompactionState*) ()
#7 0Γ000000000042fe80 in leveldb::DBImpl::BackgroundCompaction() ()
#8 0Γ000000000043097b in leveldb::DBImpl::BackgroundCall() ()
#9 0Γ000000000044ccef in leveldb::(anonymous
namespace)::PosixEnv::BGThreadWrapper(void*) ()
#10 0Γ00007f8453145e9a in start_thread (arg=0Γ7f843a645700) at
pthread_create.c:308
#11 0Γ00007f8452e72ccd in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#12 0Γ0000000000000000 in ?? ()
οΌ2οΌοΌ
#0 __memcpy_ssse3_back () at
../sysdeps/x86_64/multiarch/memcpy-ssse3-back.S:2065
#1 0Γ00007f95e29e0bc8 in std::string::append(char const*, unsigned long) ()
from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#2 0Γ000000000044a0bb in leveldb::BlockBuilder::Add(leveldb::Slice const&,
leveldb::Slice const&) ()
#3 0Γ0000000000444579 in leveldb::TableBuilder::Add(leveldb::Slice const&,
leveldb::Slice const&) ()
#4 0Γ000000000042f6a2 in
leveldb::DBImpl::DoCompactionWork(leveldb::DBImpl::CompactionState*) ()
#5 0Γ000000000042fe80 in leveldb::DBImpl::BackgroundCompaction() ()
#6 0Γ000000000043097b in leveldb::DBImpl::BackgroundCall() ()
#7 0Γ000000000044ccef in leveldb::(anonymous
namespace)::PosixEnv::BGThreadWrapper(void*) ()
#8 0Γ00007f95e2219e9a in start_thread (arg=0Γ7f95c6ac8700) at
pthread_create.c:308
#9 0Γ00007f95e1f46ccd in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#10 0Γ0000000000000000 in ?? ()
οΌ3οΌοΌ
#0 0Γ000000000045157d in UnalignedCopy64 (dst=0Γ7f8ebced8304,
src=0Γ7f8e9c3efffd) at snappy-stubs-internal.h:195
#1 TryFastAppend (len=1, available=7147, ip=0Γ7f8e9c3efff5 <Address
0Γ7f8e9c3efff5 out of bounds>, this=<optimized out>) at snappy.cc:1000
#2 DecompressAllTags<snappy::SnappyArrayWriter> (writer=<synthetic pointer>,
this=0Γ7f8ed15ce630) at snappy.cc:730
#3 InternalUncompressAllTags<snappy::SnappyArrayWriter>
(decompressor=0Γ7f8ed15ce630, max_len=4294967295, uncompressed_len=<optimized
out>, writer=<synthetic pointer>) at snappy.cc:866
#4 InternalUncompress<snappy::SnappyArrayWriter> (writer=<synthetic pointer>,
r=<optimized out>, max_len=<optimized out>) at snappy.cc:850
#5 snappy::RawUncompress (compressed=<optimized out>,
uncompressed=0Γ7f8ebced6000 "") at snappy.cc:1042
#6 0Γ0000000000451702 in snappy::RawUncompress (compressed=<optimized out>,
n=<optimized out>, uncompressed=<optimized out>) at snappy.cc:1037
#7 0Γ000000000044c7ee in leveldb::ReadBlock(leveldb::RandomAccessFile*,
leveldb::ReadOptions const&, leveldb::BlockHandle const&,
leveldb::BlockContents*) ()
#8 0Γ00000000004450fc in leveldb::Table::BlockReader(void*,
leveldb::ReadOptions const&, leveldb::Slice const&) ()
#9 0Γ00000000004460d0 in leveldb::(anonymous
namespace)::TwoLevelIterator::InitDataBlock() ()
#10 0Γ00000000004462fb in leveldb::(anonymous
namespace)::TwoLevelIterator::SkipEmptyDataBlocksForward() ()
#11 0Γ000000000044635e in leveldb::(anonymous
namespace)::TwoLevelIterator::Next() ()
#12 0Γ0000000000443460 in leveldb::(anonymous
namespace)::MergingIterator::Next() ()
#13 0Γ000000000042f6c2 in
leveldb::DBImpl::DoCompactionWork(leveldb::DBImpl::CompactionState*) ()
#14 0Γ000000000042fe80 in leveldb::DBImpl::BackgroundCompaction() ()
#15 0Γ000000000043097b in leveldb::DBImpl::BackgroundCall() ()
#16 0Γ000000000044ccef in leveldb::(anonymous
namespace)::PosixEnv::BGThreadWrapper(void*) ()
#17 0Γ00007f8fb9c63e9a in start_thread (arg=0Γ7f8ed15cf700) at
pthread_create.c:308
#18 0Γ00007f8fb9990ccd in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#19 0Γ0000000000000000 in ?? ()
```
-----
Original issue reported on code.google.com by `wuzuy...@gmail.com` on 7 Nov 2013 at 11:23
|
1.0
|
LevelDB occassionally crash when Snappy enabled - ```
What steps will reproduce the problem?
Sorry, I will try to find a way to reproduce the problem later. But see the
core dump below, does anyone have any ideas?
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
leveldb-1.14.0, snappy-1.1.0(static library)
Please provide any additional information below.
οΌ1οΌοΌ
#0 0Γ000000000044c6bf in leveldb::ReadBlock(leveldb::RandomAccessFile*,
leveldb::ReadOptions const&, leveldb::BlockHandle const&,
leveldb::BlockContents*) ()
(gdb) bt
#0 0Γ000000000044c6bf in leveldb::ReadBlock(leveldb::RandomAccessFile*,
leveldb::ReadOptions const&, leveldb::BlockHandle const&,
leveldb::BlockContents*) ()
#1 0Γ00000000004450fc in leveldb::Table::BlockReader(void*,
leveldb::ReadOptions const&, leveldb::Slice const&) ()
#2 0Γ00000000004460d0 in leveldb::(anonymous
namespace)::TwoLevelIterator::InitDataBlock() ()
#3 0Γ00000000004462fb in leveldb::(anonymous
namespace)::TwoLevelIterator::SkipEmptyDataBlocksForward() ()
#4 0Γ000000000044635e in leveldb::(anonymous
namespace)::TwoLevelIterator::Next() ()
#5 0Γ0000000000443460 in leveldb::(anonymous
namespace)::MergingIterator::Next() ()
#6 0Γ000000000042f6c2 in
leveldb::DBImpl::DoCompactionWork(leveldb::DBImpl::CompactionState*) ()
#7 0Γ000000000042fe80 in leveldb::DBImpl::BackgroundCompaction() ()
#8 0Γ000000000043097b in leveldb::DBImpl::BackgroundCall() ()
#9 0Γ000000000044ccef in leveldb::(anonymous
namespace)::PosixEnv::BGThreadWrapper(void*) ()
#10 0Γ00007f8453145e9a in start_thread (arg=0Γ7f843a645700) at
pthread_create.c:308
#11 0Γ00007f8452e72ccd in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#12 0Γ0000000000000000 in ?? ()
οΌ2οΌοΌ
#0 __memcpy_ssse3_back () at
../sysdeps/x86_64/multiarch/memcpy-ssse3-back.S:2065
#1 0Γ00007f95e29e0bc8 in std::string::append(char const*, unsigned long) ()
from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#2 0Γ000000000044a0bb in leveldb::BlockBuilder::Add(leveldb::Slice const&,
leveldb::Slice const&) ()
#3 0Γ0000000000444579 in leveldb::TableBuilder::Add(leveldb::Slice const&,
leveldb::Slice const&) ()
#4 0Γ000000000042f6a2 in
leveldb::DBImpl::DoCompactionWork(leveldb::DBImpl::CompactionState*) ()
#5 0Γ000000000042fe80 in leveldb::DBImpl::BackgroundCompaction() ()
#6 0Γ000000000043097b in leveldb::DBImpl::BackgroundCall() ()
#7 0Γ000000000044ccef in leveldb::(anonymous
namespace)::PosixEnv::BGThreadWrapper(void*) ()
#8 0Γ00007f95e2219e9a in start_thread (arg=0Γ7f95c6ac8700) at
pthread_create.c:308
#9 0Γ00007f95e1f46ccd in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#10 0Γ0000000000000000 in ?? ()
οΌ3οΌοΌ
#0 0Γ000000000045157d in UnalignedCopy64 (dst=0Γ7f8ebced8304,
src=0Γ7f8e9c3efffd) at snappy-stubs-internal.h:195
#1 TryFastAppend (len=1, available=7147, ip=0Γ7f8e9c3efff5 <Address
0Γ7f8e9c3efff5 out of bounds>, this=<optimized out>) at snappy.cc:1000
#2 DecompressAllTags<snappy::SnappyArrayWriter> (writer=<synthetic pointer>,
this=0Γ7f8ed15ce630) at snappy.cc:730
#3 InternalUncompressAllTags<snappy::SnappyArrayWriter>
(decompressor=0Γ7f8ed15ce630, max_len=4294967295, uncompressed_len=<optimized
out>, writer=<synthetic pointer>) at snappy.cc:866
#4 InternalUncompress<snappy::SnappyArrayWriter> (writer=<synthetic pointer>,
r=<optimized out>, max_len=<optimized out>) at snappy.cc:850
#5 snappy::RawUncompress (compressed=<optimized out>,
uncompressed=0Γ7f8ebced6000 "") at snappy.cc:1042
#6 0Γ0000000000451702 in snappy::RawUncompress (compressed=<optimized out>,
n=<optimized out>, uncompressed=<optimized out>) at snappy.cc:1037
#7 0Γ000000000044c7ee in leveldb::ReadBlock(leveldb::RandomAccessFile*,
leveldb::ReadOptions const&, leveldb::BlockHandle const&,
leveldb::BlockContents*) ()
#8 0Γ00000000004450fc in leveldb::Table::BlockReader(void*,
leveldb::ReadOptions const&, leveldb::Slice const&) ()
#9 0Γ00000000004460d0 in leveldb::(anonymous
namespace)::TwoLevelIterator::InitDataBlock() ()
#10 0Γ00000000004462fb in leveldb::(anonymous
namespace)::TwoLevelIterator::SkipEmptyDataBlocksForward() ()
#11 0Γ000000000044635e in leveldb::(anonymous
namespace)::TwoLevelIterator::Next() ()
#12 0Γ0000000000443460 in leveldb::(anonymous
namespace)::MergingIterator::Next() ()
#13 0Γ000000000042f6c2 in
leveldb::DBImpl::DoCompactionWork(leveldb::DBImpl::CompactionState*) ()
#14 0Γ000000000042fe80 in leveldb::DBImpl::BackgroundCompaction() ()
#15 0Γ000000000043097b in leveldb::DBImpl::BackgroundCall() ()
#16 0Γ000000000044ccef in leveldb::(anonymous
namespace)::PosixEnv::BGThreadWrapper(void*) ()
#17 0Γ00007f8fb9c63e9a in start_thread (arg=0Γ7f8ed15cf700) at
pthread_create.c:308
#18 0Γ00007f8fb9990ccd in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#19 0Γ0000000000000000 in ?? ()
```
-----
Original issue reported on code.google.com by `wuzuy...@gmail.com` on 7 Nov 2013 at 11:23
|
non_process
|
leveldb occassionally crash when snappy enabled what steps will reproduce the problem sorry i will try to find a way to reproduce the problem later but see the core dump below does anyone have any ideas what is the expected output what do you see instead what version of the product are you using on what operating system leveldb snappy static library please provide any additional information below οΌ οΌοΌ Γ in leveldb readblock leveldb randomaccessfile leveldb readoptions const leveldb blockhandle const leveldb blockcontents gdb bt Γ in leveldb readblock leveldb randomaccessfile leveldb readoptions const leveldb blockhandle const leveldb blockcontents Γ in leveldb table blockreader void leveldb readoptions const leveldb slice const Γ in leveldb anonymous namespace twoleveliterator initdatablock Γ in leveldb anonymous namespace twoleveliterator skipemptydatablocksforward Γ in leveldb anonymous namespace twoleveliterator next Γ in leveldb anonymous namespace mergingiterator next Γ in leveldb dbimpl docompactionwork leveldb dbimpl compactionstate Γ in leveldb dbimpl backgroundcompaction Γ in leveldb dbimpl backgroundcall Γ in leveldb anonymous namespace posixenv bgthreadwrapper void Γ in start thread arg Γ at pthread create c Γ in clone at sysdeps unix sysv linux clone s Γ in οΌ οΌοΌ memcpy back at sysdeps multiarch memcpy back s Γ in std string append char const unsigned long from usr lib linux gnu libstdc so Γ in leveldb blockbuilder add leveldb slice const leveldb slice const Γ in leveldb tablebuilder add leveldb slice const leveldb slice const Γ in leveldb dbimpl docompactionwork leveldb dbimpl compactionstate Γ in leveldb dbimpl backgroundcompaction Γ in leveldb dbimpl backgroundcall Γ in leveldb anonymous namespace posixenv bgthreadwrapper void Γ in start thread arg Γ at pthread create c Γ in clone at sysdeps unix sysv linux clone s Γ in οΌ οΌοΌ Γ in dst Γ src Γ at snappy stubs internal h tryfastappend len available ip Γ address Γ out of bounds this at snappy cc decompressalltags writer this Γ at snappy cc internaluncompressalltags decompressor Γ max len uncompressed len optimized out writer at snappy cc internaluncompress writer r max len at snappy cc snappy rawuncompress compressed uncompressed Γ at snappy cc Γ in snappy rawuncompress compressed n uncompressed at snappy cc Γ in leveldb readblock leveldb randomaccessfile leveldb readoptions const leveldb blockhandle const leveldb blockcontents Γ in leveldb table blockreader void leveldb readoptions const leveldb slice const Γ in leveldb anonymous namespace twoleveliterator initdatablock Γ in leveldb anonymous namespace twoleveliterator skipemptydatablocksforward Γ in leveldb anonymous namespace twoleveliterator next Γ in leveldb anonymous namespace mergingiterator next Γ in leveldb dbimpl docompactionwork leveldb dbimpl compactionstate Γ in leveldb dbimpl backgroundcompaction Γ in leveldb dbimpl backgroundcall Γ in leveldb anonymous namespace posixenv bgthreadwrapper void Γ in start thread arg Γ at pthread create c Γ in clone at sysdeps unix sysv linux clone s Γ in original issue reported on code google com by wuzuy gmail com on nov at
| 0
|
137,959
| 30,782,699,971
|
IssuesEvent
|
2023-07-31 11:09:09
|
google/android-fhir
|
https://api.github.com/repos/google/android-fhir
|
closed
|
Store timeStamp as Instant instead of human readable string format in LocalChangeEntity.
|
P2 type:code health
|
Planning to do it in a separate PR.
_Originally posted by @aditya-07 in https://github.com/google/android-fhir/pull/2030#discussion_r1268159725_
|
1.0
|
Store timeStamp as Instant instead of human readable string format in LocalChangeEntity. - Planning to do it in a separate PR.
_Originally posted by @aditya-07 in https://github.com/google/android-fhir/pull/2030#discussion_r1268159725_
|
non_process
|
store timestamp as instant instead of human readable string format in localchangeentity planning to do it in a separate pr originally posted by aditya in
| 0
|
4,398
| 2,852,452,697
|
IssuesEvent
|
2015-06-01 13:43:49
|
apinf/api-umbrella-dashboard
|
https://api.github.com/repos/apinf/api-umbrella-dashboard
|
closed
|
Create brand book for Apinf
|
Documentation enhancement MVP
|
Modify branding text, colors, icons, logo, image(s), social media links, and screen captures of Admin UI.
Deliverable
-----
* [x] Design document - Outlook (typography, color scheme, logo, tone, etc)
|
1.0
|
Create brand book for Apinf - Modify branding text, colors, icons, logo, image(s), social media links, and screen captures of Admin UI.
Deliverable
-----
* [x] Design document - Outlook (typography, color scheme, logo, tone, etc)
|
non_process
|
create brand book for apinf modify branding text colors icons logo image s social media links and screen captures of admin ui deliverable design document outlook typography color scheme logo tone etc
| 0
|
737,457
| 25,517,537,658
|
IssuesEvent
|
2022-11-28 17:30:53
|
wso2/api-manager
|
https://api.github.com/repos/wso2/api-manager
|
opened
|
Swagger Validation issue
|
Type/Bug Priority/Normal
|
### Description
Some of the swagger files which have invalid definitions are allowed by the APIM server.
### Steps to Reproduce
- Start APIM 4.0.0 server.
- Go to Create an API with OpenAPI definition.
- Give the open API definition as follows
`{
"swagger": "2.0",
"info": {
"title": "TestAPI",
"version": "1.0",
"description": "TestAPI - ApplicationDetails"
},
"host": "http://sample.com",
"basePath": "/testapi/api",
"tags": [
{
"name": "TestTag01",
"description": "Test tag description 01"
},
{
"name": "TestTag02",
"description": "Test tag description 02"
}
],
"paths": {
"/test/all": {
"get": {
"tags": [
"TestTag01"
],
"summary": "getAllTests",
"operationId": "getAllTestsby_tag01",
"produces": [
"*/*"
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "object"
}
},
"401": {
"description": "Unauthorized"
},
"403": {
"description": "Forbidden"
},
"404": {
"description": "Not Found"
}
},
"deprecated": false
}
}
},
"schemes": [
"https",
"http"
],
"extraInfo": {
"bizOwner": "bizOwner01",
"bizOwnerMail": "bizOwner@test.com",
"endpointSecurityAuthType": "",
"endpointSecurityPassword": "",
"endpointSecurityScheme": "secured",
"endpointSecurityUsername": "",
"tags": "tag01",
"techOwner": "techOwner01",
"techOwnerMail": "techOwner01@test.com",
"tiersCollection": "Unlimited",
"visibility": "public",
"CORSAllowHeaders": "",
"CORSAllowOrigins": ""
}
}`
- APIM 4.0.0 will allow to create the API.
- Use the same API definition and validate against https://editor.swagger.io, this will identify the above API definition as erroneous definition.
### Affected Component
APIM
### Version
4.0.0
### Environment Details (with versions)
_No response_
### Relevant Log Output
_No response_
### Related Issues
_No response_
### Suggested Labels
_No response_
|
1.0
|
Swagger Validation issue - ### Description
Some of the swagger files which have invalid definitions are allowed by the APIM server.
### Steps to Reproduce
- Start APIM 4.0.0 server.
- Go to Create an API with OpenAPI definition.
- Give the open API definition as follows
`{
"swagger": "2.0",
"info": {
"title": "TestAPI",
"version": "1.0",
"description": "TestAPI - ApplicationDetails"
},
"host": "http://sample.com",
"basePath": "/testapi/api",
"tags": [
{
"name": "TestTag01",
"description": "Test tag description 01"
},
{
"name": "TestTag02",
"description": "Test tag description 02"
}
],
"paths": {
"/test/all": {
"get": {
"tags": [
"TestTag01"
],
"summary": "getAllTests",
"operationId": "getAllTestsby_tag01",
"produces": [
"*/*"
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "object"
}
},
"401": {
"description": "Unauthorized"
},
"403": {
"description": "Forbidden"
},
"404": {
"description": "Not Found"
}
},
"deprecated": false
}
}
},
"schemes": [
"https",
"http"
],
"extraInfo": {
"bizOwner": "bizOwner01",
"bizOwnerMail": "bizOwner@test.com",
"endpointSecurityAuthType": "",
"endpointSecurityPassword": "",
"endpointSecurityScheme": "secured",
"endpointSecurityUsername": "",
"tags": "tag01",
"techOwner": "techOwner01",
"techOwnerMail": "techOwner01@test.com",
"tiersCollection": "Unlimited",
"visibility": "public",
"CORSAllowHeaders": "",
"CORSAllowOrigins": ""
}
}`
- APIM 4.0.0 will allow to create the API.
- Use the same API definition and validate against https://editor.swagger.io, this will identify the above API definition as erroneous definition.
### Affected Component
APIM
### Version
4.0.0
### Environment Details (with versions)
_No response_
### Relevant Log Output
_No response_
### Related Issues
_No response_
### Suggested Labels
_No response_
|
non_process
|
swagger validation issue description some of the swagger files which have invalid definitions are allowed by the apim server steps to reproduce start apim server go to create an api with openapi definition give the open api definition as follows swagger info title testapi version description testapi applicationdetails host basepath testapi api tags name description test tag description name description test tag description paths test all get tags summary getalltests operationid getalltestsby produces responses description ok schema type object description unauthorized description forbidden description not found deprecated false schemes https http extrainfo bizowner bizownermail bizowner test com endpointsecurityauthtype endpointsecuritypassword endpointsecurityscheme secured endpointsecurityusername tags techowner techownermail test com tierscollection unlimited visibility public corsallowheaders corsalloworigins apim will allow to create the api use the same api definition and validate against this will identify the above api definition as erroneous definition affected component apim version environment details with versions no response relevant log output no response related issues no response suggested labels no response
| 0
|
30,125
| 11,800,343,657
|
IssuesEvent
|
2020-03-18 17:22:47
|
jgeraigery/kraft-heinz-merger
|
https://api.github.com/repos/jgeraigery/kraft-heinz-merger
|
opened
|
CVE-2018-11694 (High) detected in node-sass-4.13.1.tgz, node-sass-v4.13.1
|
security vulnerability
|
## CVE-2018-11694 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.13.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.13.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/kraft-heinz-merger/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/kraft-heinz-merger/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-3.2.1.tgz (Root Library)
- :x: **node-sass-4.13.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/kraft-heinz-merger/commit/72632c58d3cc93458a56d547f4fc315bfa457ab5">72632c58d3cc93458a56d547f4fc315bfa457ab5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: 3.6.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.13.1","isTransitiveDependency":true,"dependencyTree":"gulp-sass:3.2.1;node-sass:4.13.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.6.0"}],"vulnerabilityIdentifier":"CVE-2018-11694","vulnerabilityDetails":"An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2018-11694 (High) detected in node-sass-4.13.1.tgz, node-sass-v4.13.1 - ## CVE-2018-11694 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.13.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.13.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/kraft-heinz-merger/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/kraft-heinz-merger/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-3.2.1.tgz (Root Library)
- :x: **node-sass-4.13.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/kraft-heinz-merger/commit/72632c58d3cc93458a56d547f4fc315bfa457ab5">72632c58d3cc93458a56d547f4fc315bfa457ab5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: 3.6.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.13.1","isTransitiveDependency":true,"dependencyTree":"gulp-sass:3.2.1;node-sass:4.13.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.6.0"}],"vulnerabilityIdentifier":"CVE-2018-11694","vulnerabilityDetails":"An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in node sass tgz node sass cve high severity vulnerability vulnerable libraries node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm kraft heinz merger package json path to vulnerable library tmp ws scm kraft heinz merger node modules node sass package json dependency hierarchy gulp sass tgz root library x node sass tgz vulnerable library found in head commit a href vulnerability details an issue was discovered in libsass through a null pointer dereference was found in the function sass functions selector append which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails an issue was discovered in libsass through a null pointer dereference was found in the function sass functions selector append which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact vulnerabilityurl
| 0
|
11,642
| 14,497,967,250
|
IssuesEvent
|
2020-12-11 14:55:16
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
opened
|
BE: System Status API
|
p1 story team:data processing
|
### Description
Backend component that allows retrieve the current status of the system.
### Related Services
Which backend services must change for this story to be completed?
### Designs
Paste the link to your designs here
### Acceptance Criteria
A concise list of specific user stories that qualify this story as done.
This acts as a checklist and high-level context for anyone reading this issue to verify your implementation.
For example:
- We can collect anonymized frontend crash logs from user browsers
- Users can opt in to send these logs to panther
- The crash logs will contain the following fields : browser version
- Users can opt-out from collection at any time
- ...
|
1.0
|
BE: System Status API - ### Description
Backend component that allows retrieve the current status of the system.
### Related Services
Which backend services must change for this story to be completed?
### Designs
Paste the link to your designs here
### Acceptance Criteria
A concise list of specific user stories that qualify this story as done.
This acts as a checklist and high-level context for anyone reading this issue to verify your implementation.
For example:
- We can collect anonymized frontend crash logs from user browsers
- Users can opt in to send these logs to panther
- The crash logs will contain the following fields : browser version
- Users can opt-out from collection at any time
- ...
|
process
|
be system status api description backend component that allows retrieve the current status of the system related services which backend services must change for this story to be completed designs paste the link to your designs here acceptance criteria a concise list of specific user stories that qualify this story as done this acts as a checklist and high level context for anyone reading this issue to verify your implementation for example we can collect anonymized frontend crash logs from user browsers users can opt in to send these logs to panther the crash logs will contain the following fields browser version users can opt out from collection at any time
| 1
|
11,835
| 14,655,477,014
|
IssuesEvent
|
2020-12-28 11:06:21
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Improve error message `Error: Database error: Error querying the database: db error: ERROR: must be owner of schema public 0: migration_core::api::Reset at migration-engine/core/src/api.rs:232`
|
engines/migration engine kind/improvement process/candidate team/migrations
|
## Problem
This is the current error message
```
Error: Database error: Error querying the database: db error: ERROR: must be owner of schema public
0: migration_core::api::Reset
at migration-engine/core/src/api.rs:232
```
## Suggested solution
- We could make it a "known error"
- Improve the text, suggestion: `The database's user privileges are insufficient, the user must be owner of the "public" schema`
## Additional context
The current error `must be owner of schema public` is not super helpful if you donβt know itβs about the user privileges and that the schema name is `public`.
So we could improve the message to make it clear to the users what the problem is.
In this case the solution is to grant the "owner" privileges to the db user.
|
1.0
|
Improve error message `Error: Database error: Error querying the database: db error: ERROR: must be owner of schema public 0: migration_core::api::Reset at migration-engine/core/src/api.rs:232` - ## Problem
This is the current error message
```
Error: Database error: Error querying the database: db error: ERROR: must be owner of schema public
0: migration_core::api::Reset
at migration-engine/core/src/api.rs:232
```
## Suggested solution
- We could make it a "known error"
- Improve the text, suggestion: `The database's user privileges are insufficient, the user must be owner of the "public" schema`
## Additional context
The current error `must be owner of schema public` is not super helpful if you donβt know itβs about the user privileges and that the schema name is `public`.
So we could improve the message to make it clear to the users what the problem is.
In this case the solution is to grant the "owner" privileges to the db user.
|
process
|
improve error message error database error error querying the database db error error must be owner of schema public migration core api reset at migration engine core src api rs problem this is the current error message error database error error querying the database db error error must be owner of schema public migration core api reset at migration engine core src api rs suggested solution we could make it a known error improve the text suggestion the database s user privileges are insufficient the user must be owner of the public schema additional context the current error must be owner of schema public is not super helpful if you donβt know itβs about the user privileges and that the schema name is public so we could improve the message to make it clear to the users what the problem is in this case the solution is to grant the owner privileges to the db user
| 1
|
469,672
| 13,523,077,714
|
IssuesEvent
|
2020-09-15 09:26:48
|
fossasia/open-event-frontend
|
https://api.github.com/repos/fossasia/open-event-frontend
|
closed
|
Wizard: Make Event Saving, Draft Mode, Publishing Clearer
|
Priority: High enhancement
|
**1. When event is created and still unpublished (Draft Mode)**
* _Requirement_ for saving a draft is only
1. to have an event name
* show buttons:
* Discard (only when event was never saved - when creating the event: It does not save the event and discards any data that was entered. After that direct user back to dashboard of all events. If user has saved an event previously, there is no "Discard" possible anymore, then show "Cancel" and send user back to specific event dashboard.)
* Cancel (cancels changes of the wizard step and exists to event dashboard)
* Save/Previous (except for first step)
* Save/Next (saves changes and goes to the next step, except for last step)
* Save (saves draft and exits wizard to dashboard)
* below the above option show another section with two buttons
* Show a horizontal line with 100% width of the page
* Show the text "This event is currently not published. It is in draft mode and it is not visible publicly."
* Preview (open in new tab)
* Publish (opens a pop up - same as "Publish" on the dashboard / afterwards exits to event dashboard). Please show this button as "gray" as long as the event does not have the minimum data listed necessary for publishing an event.

**2. After an event is published already (Live Event)**
* _Requirement_ for an event to be publishable is, that it has
1. A name
2. A location or online event link
3. A ticket
* below the above savings/next/previous options show another section with two buttons
* Show the text "This event is published. Any changes you make will appear on your live event."
* View (open in new tab)
* Unpublish (opens a pop up - same as "Unpublish" on the dashboard / afterwards exits to event dashboard)

**3. Top menu "Wizard Steps"**
When a user clicks a top wizard menu item the following action should take place:
* Cancel changes of the current tab and load the menu tab

|
1.0
|
Wizard: Make Event Saving, Draft Mode, Publishing Clearer - **1. When event is created and still unpublished (Draft Mode)**
* _Requirement_ for saving a draft is only
1. to have an event name
* show buttons:
* Discard (only when event was never saved - when creating the event: It does not save the event and discards any data that was entered. After that direct user back to dashboard of all events. If user has saved an event previously, there is no "Discard" possible anymore, then show "Cancel" and send user back to specific event dashboard.)
* Cancel (cancels changes of the wizard step and exists to event dashboard)
* Save/Previous (except for first step)
* Save/Next (saves changes and goes to the next step, except for last step)
* Save (saves draft and exits wizard to dashboard)
* below the above option show another section with two buttons
* Show a horizontal line with 100% width of the page
* Show the text "This event is currently not published. It is in draft mode and it is not visible publicly."
* Preview (open in new tab)
* Publish (opens a pop up - same as "Publish" on the dashboard / afterwards exits to event dashboard). Please show this button as "gray" as long as the event does not have the minimum data listed necessary for publishing an event.

**2. After an event is published already (Live Event)**
* _Requirement_ for an event to be publishable is, that it has
1. A name
2. A location or online event link
3. A ticket
* below the above savings/next/previous options show another section with two buttons
* Show the text "This event is published. Any changes you make will appear on your live event."
* View (open in new tab)
* Unpublish (opens a pop up - same as "Unpublish" on the dashboard / afterwards exits to event dashboard)

**3. Top menu "Wizard Steps"**
When a user clicks a top wizard menu item the following action should take place:
* Cancel changes of the current tab and load the menu tab

|
non_process
|
wizard make event saving draft mode publishing clearer when event is created and still unpublished draft mode requirement for saving a draft is only to have an event name show buttons discard only when event was never saved when creating the event it does not save the event and discards any data that was entered after that direct user back to dashboard of all events if user has saved an event previously there is no discard possible anymore then show cancel and send user back to specific event dashboard cancel cancels changes of the wizard step and exists to event dashboard save previous except for first step save next saves changes and goes to the next step except for last step save saves draft and exits wizard to dashboard below the above option show another section with two buttons show a horizontal line with width of the page show the text this event is currently not published it is in draft mode and it is not visible publicly preview open in new tab publish opens a pop up same as publish on the dashboard afterwards exits to event dashboard please show this button as gray as long as the event does not have the minimum data listed necessary for publishing an event after an event is published already live event requirement for an event to be publishable is that it has a name a location or online event link a ticket below the above savings next previous options show another section with two buttons show the text this event is published any changes you make will appear on your live event view open in new tab unpublish opens a pop up same as unpublish on the dashboard afterwards exits to event dashboard top menu wizard steps when a user clicks a top wizard menu item the following action should take place cancel changes of the current tab and load the menu tab
| 0
|
156,114
| 12,296,246,475
|
IssuesEvent
|
2020-05-11 06:30:12
|
microsoft/vscode-python
|
https://api.github.com/repos/microsoft/vscode-python
|
opened
|
Pytest discovery breaks when code to test prints to stdout during discovery
|
area-testing needs spec type-bug
|
I had another look and realized something. My project imports libraries from a host application. This application has a plugin, named Redshift, that I guess outputs to stdout.
So, when I run the command
```
c:\Users\WORKSTATIONL\Python\Tests\NymusClient\.venvHoudini18\Scripts\python.exe c:\Users\WORKSTATIONL\.vscode\extensions\ms-python.python-2020.4.76186\pythonFiles\testing_tools\run_adapter.py discover pytest -- --rootdir c:\Users\WORKSTATIONL\Python\Tests\NymusClient -s
```
The entire output is
```
[Redshift] Redshift for Houdini plugin version 3.0.16 (Feb 3 2020 16:30:46)
[Redshift] Plugin compile time HDK version: 18.0.348
[Redshift] Houdini host version: 18.0.348
[Redshift] Plugin dso/dll and config path: T:/__SharedAssets/Assets_Houdini/Redshift_SERVER/Redshift_v3.0.16/Plugins/Houdini/18.0.348/dso
[Redshift] Core data path: T:\__SharedAssets\Assets_Houdini\Redshift_SERVER\Redshift_v3.0.16
[Redshift] Local data path: C:\ProgramData\Redshift
[Redshift] Procedurals path: T:\__SharedAssets\Assets_Houdini\Redshift_SERVER\Redshift_v3.0.16\Procedurals
[Redshift] Preferences file path: C:\ProgramData\Redshift\preferences.xml
[Redshift] License path: C:\ProgramData\Redshift
[{"rootid": ".", "tests": [{"source": ".\\src\\tests\\houdini_tests\\io_tests\\geometry_test.py:12", "parentid": "./src/tests/houdini_tests/io_tests/geometry_test.py", "id": "./src/tests/houdini_tests/io_tests/geometry_test.py::test_writeCache", "markers": [], "name": "test_writeCache"}, {"source": ".\\src\\tests\\houdini_tests\\io_tests\\geometry_test.py:29", "parentid": "./src/tests/houdini_tests/io_tests/geometry_test.py", "id": "./src/tests/houdini_tests/io_tests/geometry_test.py::test_remove_writeCache_result", "markers": [], "name": "test_remove_writeCache_result"}, {"source": ".\\src\\tests\\houdini_tests\\io_tests\\render_test.py:11", "parentid": "./src/tests/houdini_tests/io_tests/render_test.py", "id": "./src/tests/houdini_tests/io_tests/render_test.py::test_writeCache", "markers": [], "name": "test_writeCache"}, {"source": ".\\src\\tests\\houdini_tests\\io_tests\\render_test.py:28", "parentid": "./src/tests/houdini_tests/io_tests/render_test.py", "id": "./src/tests/houdini_tests/io_tests/render_test.py::test_remove_writeCache_result", "markers": [], "name": "test_remove_writeCache_result"}, {"source": ".\\src\\tests\\houdini_tests\\io_tests\\utils_test.py:8", "parentid": "./src/tests/houdini_tests/io_tests/utils_test.py", "id": "./src/tests/houdini_tests/io_tests/utils_test.py::test_formatPath", "markers": [], "name": "test_formatPath"}, {"source": ".\\src\\tests\\houdini_tests\\io_tests\\utils_test.py:21", "parentid": "./src/tests/houdini_tests/io_tests/utils_test.py", "id": "./src/tests/houdini_tests/io_tests/utils_test.py::test_formatPathExpression", "markers": [], "name": "test_formatPathExpression"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:26", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setup_variables", "markers": [], "name": "test_setup_variables"}, {"source":
".\\src\\tests\\renderpal_tests\\server_events_test.py:82", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_GeometryOutputVersionComplete", "markers": [], "name": "test_GeometryOutputVersionComplete"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:93", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setGeometryOutputVersionComplete", "markers": [], "name": "test_setGeometryOutputVersionComplete"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:101", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setGeometryOutputVersionStatus", "markers": [], "name": "test_setGeometryOutputVersionStatus"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:110", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setGeometryOutputVersionCurrent", "markers": [], "name": "test_setGeometryOutputVersionCurrent"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:118", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setRenderOutputVersionComplete", "markers": [], "name": "test_setRenderOutputVersionComplete"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:124", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setRenderOutputVersionCurrent", "markers": [], "name": "test_setRenderOutputVersionCurrent"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:130", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setRenderOutputVersionStatus", "markers": [], "name": "test_setRenderOutputVersionStatus"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:138", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setShotOutputVersionComplete", "markers": [], "name": "test_setShotOutputVersionComplete"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:144", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setShotOutputVersionCurrent", "markers": [], "name": "test_setShotOutputVersionCurrent"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:150", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setShotOutputVersionStatus", "markers": [], "name": "test_setShotOutputVersionStatus"}], "root": "C:\\Users\\WORKSTATIONL\\Python\\Tests\\NymusClient", "parents": [{"relpath": ".\\src", "kind": "folder", "parentid": ".", "id": "./src", "name": "src"}, {"relpath": ".\\src\\tests", "kind": "folder", "parentid": "./src", "id": "./src/tests", "name": "tests"}, {"relpath": ".\\src\\tests\\houdini_tests", "kind": "folder", "parentid": "./src/tests", "id": "./src/tests/houdini_tests", "name": "houdini_tests"}, {"relpath": ".\\src\\tests\\houdini_tests\\io_tests", "kind": "folder", "parentid": "./src/tests/houdini_tests", "id": "./src/tests/houdini_tests/io_tests", "name": "io_tests"}, {"relpath": ".\\src\\tests\\houdini_tests\\io_tests\\geometry_test.py", "kind": "file", "parentid": "./src/tests/houdini_tests/io_tests", "id": "./src/tests/houdini_tests/io_tests/geometry_test.py", "name": "geometry_test.py"}, {"relpath": ".\\src\\tests\\houdini_tests\\io_tests\\render_test.py", "kind": "file",
"parentid": "./src/tests/houdini_tests/io_tests", "id": "./src/tests/houdini_tests/io_tests/render_test.py", "name": "render_test.py"}, {"relpath": ".\\src\\tests\\houdini_tests\\io_tests\\utils_test.py", "kind": "file", "parentid": "./src/tests/houdini_tests/io_tests", "id": "./src/tests/houdini_tests/io_tests/utils_test.py", "name": "utils_test.py"}, {"relpath": ".\\src\\tests\\renderpal_tests", "kind": "folder", "parentid": "./src/tests", "id": "./src/tests/renderpal_tests", "name": "renderpal_tests"}, {"relpath": ".\\src\\tests\\renderpal_tests\\server_events_test.py", "kind": "file", "parentid": "./src/tests/renderpal_tests", "id": "./src/tests/renderpal_tests/server_events_test.py", "name": "server_events_test.py"}]}]
[Redshift] Closing the RS instance. End of the plugin log system.
```
I'm guessing that `[Redshift]` is being interpreted as JSON.
This looks like it may be a separate issue. Let me know if you want me to post a new issue.
_Originally posted by @Anti-Distinctlyminty in https://github.com/microsoft/vscode-python/issues/10108#issuecomment-624544644_
|
1.0
|
Pytest discovery breaks when code to test prints to stdout during discovery - I had another look and realized something. My project imports libraries from a host application. This application has a plugin, named Redshift, that I guess outputs to stdout.
So, when I run the command
```
c:\Users\WORKSTATIONL\Python\Tests\NymusClient\.venvHoudini18\Scripts\python.exe c:\Users\WORKSTATIONL\.vscode\extensions\ms-python.python-2020.4.76186\pythonFiles\testing_tools\run_adapter.py discover pytest -- --rootdir c:\Users\WORKSTATIONL\Python\Tests\NymusClient -s
```
The entire output is
```
[Redshift] Redshift for Houdini plugin version 3.0.16 (Feb 3 2020 16:30:46)
[Redshift] Plugin compile time HDK version: 18.0.348
[Redshift] Houdini host version: 18.0.348
[Redshift] Plugin dso/dll and config path: T:/__SharedAssets/Assets_Houdini/Redshift_SERVER/Redshift_v3.0.16/Plugins/Houdini/18.0.348/dso
[Redshift] Core data path: T:\__SharedAssets\Assets_Houdini\Redshift_SERVER\Redshift_v3.0.16
[Redshift] Local data path: C:\ProgramData\Redshift
[Redshift] Procedurals path: T:\__SharedAssets\Assets_Houdini\Redshift_SERVER\Redshift_v3.0.16\Procedurals
[Redshift] Preferences file path: C:\ProgramData\Redshift\preferences.xml
[Redshift] License path: C:\ProgramData\Redshift
[{"rootid": ".", "tests": [{"source": ".\\src\\tests\\houdini_tests\\io_tests\\geometry_test.py:12", "parentid": "./src/tests/houdini_tests/io_tests/geometry_test.py", "id": "./src/tests/houdini_tests/io_tests/geometry_test.py::test_writeCache", "markers": [], "name": "test_writeCache"}, {"source": ".\\src\\tests\\houdini_tests\\io_tests\\geometry_test.py:29", "parentid": "./src/tests/houdini_tests/io_tests/geometry_test.py", "id": "./src/tests/houdini_tests/io_tests/geometry_test.py::test_remove_writeCache_result", "markers": [], "name": "test_remove_writeCache_result"}, {"source": ".\\src\\tests\\houdini_tests\\io_tests\\render_test.py:11", "parentid": "./src/tests/houdini_tests/io_tests/render_test.py", "id": "./src/tests/houdini_tests/io_tests/render_test.py::test_writeCache", "markers": [], "name": "test_writeCache"}, {"source": ".\\src\\tests\\houdini_tests\\io_tests\\render_test.py:28", "parentid": "./src/tests/houdini_tests/io_tests/render_test.py", "id": "./src/tests/houdini_tests/io_tests/render_test.py::test_remove_writeCache_result", "markers": [], "name": "test_remove_writeCache_result"}, {"source": ".\\src\\tests\\houdini_tests\\io_tests\\utils_test.py:8", "parentid": "./src/tests/houdini_tests/io_tests/utils_test.py", "id": "./src/tests/houdini_tests/io_tests/utils_test.py::test_formatPath", "markers": [], "name": "test_formatPath"}, {"source": ".\\src\\tests\\houdini_tests\\io_tests\\utils_test.py:21", "parentid": "./src/tests/houdini_tests/io_tests/utils_test.py", "id": "./src/tests/houdini_tests/io_tests/utils_test.py::test_formatPathExpression", "markers": [], "name": "test_formatPathExpression"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:26", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setup_variables", "markers": [], "name": "test_setup_variables"}, {"source":
".\\src\\tests\\renderpal_tests\\server_events_test.py:82", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_GeometryOutputVersionComplete", "markers": [], "name": "test_GeometryOutputVersionComplete"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:93", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setGeometryOutputVersionComplete", "markers": [], "name": "test_setGeometryOutputVersionComplete"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:101", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setGeometryOutputVersionStatus", "markers": [], "name": "test_setGeometryOutputVersionStatus"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:110", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setGeometryOutputVersionCurrent", "markers": [], "name": "test_setGeometryOutputVersionCurrent"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:118", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setRenderOutputVersionComplete", "markers": [], "name": "test_setRenderOutputVersionComplete"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:124", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setRenderOutputVersionCurrent", "markers": [], "name": "test_setRenderOutputVersionCurrent"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:130", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setRenderOutputVersionStatus", "markers": [], "name": "test_setRenderOutputVersionStatus"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:138", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setShotOutputVersionComplete", "markers": [], "name": "test_setShotOutputVersionComplete"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:144", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setShotOutputVersionCurrent", "markers": [], "name": "test_setShotOutputVersionCurrent"}, {"source": ".\\src\\tests\\renderpal_tests\\server_events_test.py:150", "parentid": "./src/tests/renderpal_tests/server_events_test.py", "id": "./src/tests/renderpal_tests/server_events_test.py::test_setShotOutputVersionStatus", "markers": [], "name": "test_setShotOutputVersionStatus"}], "root": "C:\\Users\\WORKSTATIONL\\Python\\Tests\\NymusClient", "parents": [{"relpath": ".\\src", "kind": "folder", "parentid": ".", "id": "./src", "name": "src"}, {"relpath": ".\\src\\tests", "kind": "folder", "parentid": "./src", "id": "./src/tests", "name": "tests"}, {"relpath": ".\\src\\tests\\houdini_tests", "kind": "folder", "parentid": "./src/tests", "id": "./src/tests/houdini_tests", "name": "houdini_tests"}, {"relpath": ".\\src\\tests\\houdini_tests\\io_tests", "kind": "folder", "parentid": "./src/tests/houdini_tests", "id": "./src/tests/houdini_tests/io_tests", "name": "io_tests"}, {"relpath": ".\\src\\tests\\houdini_tests\\io_tests\\geometry_test.py", "kind": "file", "parentid": "./src/tests/houdini_tests/io_tests", "id": "./src/tests/houdini_tests/io_tests/geometry_test.py", "name": "geometry_test.py"}, {"relpath": ".\\src\\tests\\houdini_tests\\io_tests\\render_test.py", "kind": "file",
"parentid": "./src/tests/houdini_tests/io_tests", "id": "./src/tests/houdini_tests/io_tests/render_test.py", "name": "render_test.py"}, {"relpath": ".\\src\\tests\\houdini_tests\\io_tests\\utils_test.py", "kind": "file", "parentid": "./src/tests/houdini_tests/io_tests", "id": "./src/tests/houdini_tests/io_tests/utils_test.py", "name": "utils_test.py"}, {"relpath": ".\\src\\tests\\renderpal_tests", "kind": "folder", "parentid": "./src/tests", "id": "./src/tests/renderpal_tests", "name": "renderpal_tests"}, {"relpath": ".\\src\\tests\\renderpal_tests\\server_events_test.py", "kind": "file", "parentid": "./src/tests/renderpal_tests", "id": "./src/tests/renderpal_tests/server_events_test.py", "name": "server_events_test.py"}]}]
[Redshift] Closing the RS instance. End of the plugin log system.
```
I'm guessing that `[Redshift]` is being interpreted as JSON.
This looks like it may be a separate issue. Let me know if you want me to post a new issue.
_Originally posted by @Anti-Distinctlyminty in https://github.com/microsoft/vscode-python/issues/10108#issuecomment-624544644_
|
non_process
|
pytest discovery breaks when code to test prints to stdout during discovery i had another look and realized something my project imports libraries from a host application this application has a plugin named redshift that i guess outputs to stdout so when i run the command c users workstationl python tests nymusclient scripts python exe c users workstationl vscode extensions ms python python pythonfiles testing tools run adapter py discover pytest rootdir c users workstationl python tests nymusclient s the entire output is redshift for houdini plugin version feb plugin compile time hdk version houdini host version plugin dso dll and config path t sharedassets assets houdini redshift server redshift plugins houdini dso core data path t sharedassets assets houdini redshift server redshift local data path c programdata redshift procedurals path t sharedassets assets houdini redshift server redshift procedurals preferences file path c programdata redshift preferences xml license path c programdata redshift name test writecache source src tests houdini tests io tests geometry test py parentid src tests houdini tests io tests geometry test py id src tests houdini tests io tests geometry test py test remove writecache result markers name test remove writecache result source src tests houdini tests io tests render test py parentid src tests houdini tests io tests render test py id src tests houdini tests io tests render test py test writecache markers name test writecache source src tests houdini tests io tests render test py parentid src tests houdini tests io tests render test py id src tests houdini tests io tests render test py test remove writecache result markers name test remove writecache result source src tests houdini tests io tests utils test py parentid src tests houdini tests io tests utils test py id src tests houdini tests io tests utils test py test formatpath markers name test formatpath source src tests houdini tests io tests utils test py parentid src tests houdini tests io tests utils test py id src tests houdini tests io tests utils test py test formatpathexpression markers name test formatpathexpression source src tests renderpal tests server events test py parentid src tests renderpal tests server events test py id src tests renderpal tests server events test py test setup variables markers name test setup variables source src tests renderpal tests server events test py parentid src tests renderpal tests server events test py id src tests renderpal tests server events test py test geometryoutputversioncomplete markers name test geometryoutputversioncomplete source src tests renderpal tests server events test py parentid src tests renderpal tests server events test py id src tests renderpal tests server events test py test setgeometryoutputversioncomplete markers name test setgeometryoutputversioncomplete source src tests renderpal tests server events test py parentid src tests renderpal tests server events test py id src tests renderpal tests server events test py test setgeometryoutputversionstatus markers name test setgeometryoutputversionstatus source src tests renderpal tests server events test py parentid src tests renderpal tests server events test py id src tests renderpal tests server events test py test setgeometryoutputversioncurrent markers name test setgeometryoutputversioncurrent source src tests renderpal tests server events test py parentid src tests renderpal tests server events test py id src tests renderpal tests server events test py test setrenderoutputversioncomplete markers name test setrenderoutputversioncomplete source src tests renderpal tests server events test py parentid src tests renderpal tests server events test py id src tests renderpal tests server events test py test setrenderoutputversioncurrent markers name test setrenderoutputversioncurrent source src tests renderpal tests server events test py parentid src tests renderpal tests server events test py id src tests renderpal tests server events test py test setrenderoutputversionstatus markers name test setrenderoutputversionstatus source src tests renderpal tests server events test py parentid src tests renderpal tests server events test py id src tests renderpal tests server events test py test setshotoutputversioncomplete markers name test setshotoutputversioncomplete source src tests renderpal tests server events test py parentid src tests renderpal tests server events test py id src tests renderpal tests server events test py test setshotoutputversioncurrent markers name test setshotoutputversioncurrent source src tests renderpal tests server events test py parentid src tests renderpal tests server events test py id src tests renderpal tests server events test py test setshotoutputversionstatus markers name test setshotoutputversionstatus root c users workstationl python tests nymusclient parents relpath src kind folder parentid id src name src relpath src tests kind folder parentid src id src tests name tests relpath src tests houdini tests kind folder parentid src tests id src tests houdini tests name houdini tests relpath src tests houdini tests io tests kind folder parentid src tests houdini tests id src tests houdini tests io tests name io tests relpath src tests houdini tests io tests geometry test py kind file parentid src tests houdini tests io tests id src tests houdini tests io tests geometry test py name geometry test py relpath src tests houdini tests io tests render test py kind file parentid src tests houdini tests io tests id src tests houdini tests io tests render test py name render test py relpath src tests houdini tests io tests utils test py kind file parentid src tests houdini tests io tests id src tests houdini tests io tests utils test py name utils test py relpath src tests renderpal tests kind folder parentid src tests id src tests renderpal tests name renderpal tests relpath src tests renderpal tests server events test py kind file parentid src tests renderpal tests id src tests renderpal tests server events test py name server events test py closing the rs instance end of the plugin log system i m guessing that is being interpreted as json this looks like it may be a separate issue let me know if you want me to post a new issue originally posted by anti distinctlyminty in
| 0
|
115,734
| 11,886,390,569
|
IssuesEvent
|
2020-03-27 21:48:38
|
utPLSQL/utPLSQL
|
https://api.github.com/repos/utPLSQL/utPLSQL
|
closed
|
utPLSQL not found after doing installation with power shell
|
documentation enhancement
|
**Describe the bug**
UT not installed correctly?
**Provide version info**
18.0.0.0.0
18.0.0
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production 0
Version 18.4.0.0.0
NLS_LANGUAGE ENGLISH
NLS_TERRITORY GERMANY
NLS_CURRENCY β¬
NLS_ISO_CURRENCY GERMANY
NLS_NUMERIC_CHARACTERS ,.
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD.MM.RR
NLS_DATE_LANGUAGE ENGLISH
NLS_SORT BINARY
NLS_TIME_FORMAT HH24:MI:SSXFF
NLS_TIMESTAMP_FORMAT DD.MM.RR HH24:MI:SSXFF
NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
NLS_TIMESTAMP_TZ_FORMAT DD.MM.RR HH24:MI:SSXFF TZR
NLS_DUAL_CURRENCY β¬
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
IBMPC/WIN_NT64-9.1.0
**Information about client software**
SQLDeveloper
**To Reproduce**
1. Installation steps with PowerShell on Windows --> http://utplsql.org/utPLSQL/latest/userguide/install.html
2. when going to SQL Developer and running
select substr(ut.version(),1,60) as ut_version from dual;
3. the following error occurs
ORA-00904: "UT"."VERSION": invalid identifier
00904. 00000 - "%s: invalid identifier"
*Cause:
*Action:
Error at Line: 1 Column: 15
**Expected behavior**
I would expect the version number here.
|
1.0
|
utPLSQL not found after doing installation with power shell - **Describe the bug**
UT not installed correctly?
**Provide version info**
18.0.0.0.0
18.0.0
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production 0
Version 18.4.0.0.0
NLS_LANGUAGE ENGLISH
NLS_TERRITORY GERMANY
NLS_CURRENCY β¬
NLS_ISO_CURRENCY GERMANY
NLS_NUMERIC_CHARACTERS ,.
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD.MM.RR
NLS_DATE_LANGUAGE ENGLISH
NLS_SORT BINARY
NLS_TIME_FORMAT HH24:MI:SSXFF
NLS_TIMESTAMP_FORMAT DD.MM.RR HH24:MI:SSXFF
NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
NLS_TIMESTAMP_TZ_FORMAT DD.MM.RR HH24:MI:SSXFF TZR
NLS_DUAL_CURRENCY β¬
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
IBMPC/WIN_NT64-9.1.0
**Information about client software**
SQLDeveloper
**To Reproduce**
1. Installation steps with PowerShell on Windows --> http://utplsql.org/utPLSQL/latest/userguide/install.html
2. when going to SQL Developer and running
select substr(ut.version(),1,60) as ut_version from dual;
3. the following error occurs
ORA-00904: "UT"."VERSION": invalid identifier
00904. 00000 - "%s: invalid identifier"
*Cause:
*Action:
Error at Line: 1 Column: 15
**Expected behavior**
I would expect the version number here.
|
non_process
|
utplsql not found after doing installation with power shell describe the bug ut not installed correctly provide version info oracle database express edition release production oracle database express edition release production oracle database express edition release production version nls language english nls territory germany nls currency β¬ nls iso currency germany nls numeric characters nls calendar gregorian nls date format dd mm rr nls date language english nls sort binary nls time format mi ssxff nls timestamp format dd mm rr mi ssxff nls time tz format mi ssxff tzr nls timestamp tz format dd mm rr mi ssxff tzr nls dual currency β¬ nls comp binary nls length semantics byte nls nchar conv excp false ibmpc win information about client software sqldeveloper to reproduce installation steps with powershell on windows when going to sql developer and running select substr ut version as ut version from dual the following error occurs ora ut version invalid identifier s invalid identifier cause action error at line column expected behavior i would expect the version number here
| 0
|
384,617
| 26,595,780,221
|
IssuesEvent
|
2023-01-23 12:21:38
|
interactions-py/interactions.py
|
https://api.github.com/repos/interactions-py/interactions.py
|
closed
|
[REQUEST] Improve docs for getting guild members
|
documentation enhancement
|
### Describe the bug.
Going by the docs, right now there would seem to be three ways of getting all members of a guild. From what I can tell, two of these currently don't work as expected.
Assume that there's a bot which is a member of one guild with six members, including itself. Given that the bot's Server Members Intent is activated via the Discord Developer portal, I should be able to implement a slash command that outputs the guild's members.
The basic code would look like this:
```python
import interactions
bot = interactions.Client(
token=...,
intents=interactions.Intents.DEFAULT | interactions.Intents.GUILD_MEMBERS,
)
@bot.command()
async def members_info(ctx: interactions.CommandContext):
...
bot.start()
```
### Guild.members
As per the docs, Guild.members contains "the members in the guild". However, the following code
```python
@bot.command()
async def members_info(ctx: interactions.CommandContext):
await ctx.send("processing...")
str = ""
for guild in bot.guilds:
members = guild.members
str += (f"Server: {guild.name} \r\n")
str += (f"Members count: {len(members)} - ")
str += ",".join(str(member) for member in members)
str += "\r\n\r\n"
await ctx.edit(str)
```
will have the bot output:
```Server: Test
Members count: 1 - Bot
```
As far as I can tell, Guild.members will **always** only contain the bot itself, against the documentation's claim.
### Guild.get_all_members()
This method is the one that currently works most intuitively. The following code
```python
@bot.command()
async def members_info(ctx: interactions.CommandContext):
await ctx.send("processing...")
str = ""
for guild in bot.guilds:
members = await guild.get_all_members()
str += (f"Server: {guild.name} \r\n")
str += (f"Members count: {len(members)} - ")
str += ",".join(str(member) for member in members)
str += "\r\n\r\n"
await ctx.edit(str)
```
will have the bot output
```Server: Test
Members count: 6 - Bot,UserOne,UserTwo,UserThree,UserFour,UserFive
```
**However, as per the [source's warning](https://github.com/interactions-py/interactions.py/blob/2c902a2649fde05f4fe58672fc60ce8c84435cb7/interactions/api/models/guild.py#L2452), this method is deprecated.**
### Guild.get_members()
This method seems to simply not work at all.
```python
@bot.command()
async def members_info(ctx: interactions.CommandContext):
await ctx.send("processing...")
str = ""
for guild in bot.guilds:
members = guild.get_members() # the method isn't marked as async
str += (f"Server: {guild.name} \r\n")
str += (f"Members count: {members.object_count()} - ") # this will return 0
str += ",".join(str(member) for member in members)
str += "\r\n\r\n"
await ctx.edit(str)
```
This will fail with
```
Task exception was never retrieved
future: <Task finished name='Task-21' coro=<server_info() done, defined at .../venv/lib/python3.10/site-packages/interactions/client/models/command.py:930> exception=TypeError("'AsyncMembersIterator' object is not iterable")>
Traceback (most recent call last):
File ".../venv/lib/python3.10/site-packages/interactions/client/models/command.py", line 970, in wrapper
raise e
File ".../venv/lib/python3.10/site-packages/interactions/client/models/command.py", line 939, in wrapper
return await coro(ctx, *args, **kwargs)
File ".../main.py", line 135, in server_info
st += ",".join(str(member) for member in members)
TypeError: 'AsyncMembersIterator' object is not iterable
```
This error message seems a little weird, because AsyncMembersIterator does implement DiscordPaginationIterator which implements BaseIterator which provides `__iter__` and `__next__`, so I would assume that the object would be iterable. Interestingly enough, `flat = await members.flatten()` **does** work, giving all members as a list. So why doesn't it work via its iterator?
### List the steps.
See writeup.
### What you expected.
See writeup.
### What you saw.
See writeup.
### What version of the library did you use?
release
### Version specification
4.3.4
### Code of Conduct
- [X] I agree to follow the contribution requirements.
|
1.0
|
[REQUEST] Improve docs for getting guild members - ### Describe the bug.
Going by the docs, right now there would seem to be three ways of getting all members of a guild. From what I can tell, two of these currently don't work as expected.
Assume that there's a bot which is a member of one guild with six members, including itself. Given that the bot's Server Members Intent is activated via the Discord Developer portal, I should be able to implement a slash command that outputs the guild's members.
The basic code would look like this:
```python
import interactions
bot = interactions.Client(
token=...,
intents=interactions.Intents.DEFAULT | interactions.Intents.GUILD_MEMBERS,
)
@bot.command()
async def members_info(ctx: interactions.CommandContext):
...
bot.start()
```
### Guild.members
As per the docs, Guild.members contains "the members in the guild". However, the following code
```python
@bot.command()
async def members_info(ctx: interactions.CommandContext):
await ctx.send("processing...")
str = ""
for guild in bot.guilds:
members = guild.members
str += (f"Server: {guild.name} \r\n")
str += (f"Members count: {len(members)} - ")
str += ",".join(str(member) for member in members)
str += "\r\n\r\n"
await ctx.edit(str)
```
will have the bot output:
```Server: Test
Members count: 1 - Bot
```
As far as I can tell, Guild.members will **always** only contain the bot itself, against the documentation's claim.
### Guild.get_all_members()
This method is the one that currently works most intuitively. The following code
```python
@bot.command()
async def members_info(ctx: interactions.CommandContext):
await ctx.send("processing...")
str = ""
for guild in bot.guilds:
members = await guild.get_all_members()
str += (f"Server: {guild.name} \r\n")
str += (f"Members count: {len(members)} - ")
str += ",".join(str(member) for member in members)
str += "\r\n\r\n"
await ctx.edit(str)
```
will have the bot output
```Server: Test
Members count: 6 - Bot,UserOne,UserTwo,UserThree,UserFour,UserFive
```
**However, as per the [source's warning](https://github.com/interactions-py/interactions.py/blob/2c902a2649fde05f4fe58672fc60ce8c84435cb7/interactions/api/models/guild.py#L2452), this method is deprecated.**
### Guild.get_members()
This method seems to simply not work at all.
```python
@bot.command()
async def members_info(ctx: interactions.CommandContext):
await ctx.send("processing...")
str = ""
for guild in bot.guilds:
members = guild.get_members() # the method isn't marked as async
str += (f"Server: {guild.name} \r\n")
str += (f"Members count: {members.object_count()} - ") # this will return 0
str += ",".join(str(member) for member in members)
str += "\r\n\r\n"
await ctx.edit(str)
```
This will fail with
```
Task exception was never retrieved
future: <Task finished name='Task-21' coro=<server_info() done, defined at .../venv/lib/python3.10/site-packages/interactions/client/models/command.py:930> exception=TypeError("'AsyncMembersIterator' object is not iterable")>
Traceback (most recent call last):
File ".../venv/lib/python3.10/site-packages/interactions/client/models/command.py", line 970, in wrapper
raise e
File ".../venv/lib/python3.10/site-packages/interactions/client/models/command.py", line 939, in wrapper
return await coro(ctx, *args, **kwargs)
File ".../main.py", line 135, in server_info
st += ",".join(str(member) for member in members)
TypeError: 'AsyncMembersIterator' object is not iterable
```
This error message seems a little weird, because AsyncMembersIterator does implement DiscordPaginationIterator which implements BaseIterator which provides `__iter__` and `__next__`, so I would assume that the object would be iterable. Interestingly enough, `flat = await members.flatten()` **does** work, giving all members as a list. So why doesn't it work via its iterator?
### List the steps.
See writeup.
### What you expected.
See writeup.
### What you saw.
See writeup.
### What version of the library did you use?
release
### Version specification
4.3.4
### Code of Conduct
- [X] I agree to follow the contribution requirements.
|
non_process
|
improve docs for getting guild members describe the bug going by the docs right now there would seem to be three ways of getting all members of a guild from what i can tell two of these currently don t work as expected assume that there s a bot which is a member of one guild with six members including itself given that the bot s server members intent is activated via the discord developer portal i should be able to implement a slash command that outputs the guild s members the basic code would look like this python import interactions bot interactions client token intents interactions intents default interactions intents guild members bot command async def members info ctx interactions commandcontext bot start guild members as per the docs guild members contains the members in the guild however the following code python bot command async def members info ctx interactions commandcontext await ctx send processing str for guild in bot guilds members guild members str f server guild name r n str f members count len members str join str member for member in members str r n r n await ctx edit str will have the bot output server test members count bot as far as i can tell guild members will always only contain the bot itself against the documentation s claim guild get all members this method is the one that currently works most intuitively the following code python bot command async def members info ctx interactions commandcontext await ctx send processing str for guild in bot guilds members await guild get all members str f server guild name r n str f members count len members str join str member for member in members str r n r n await ctx edit str will have the bot output server test members count bot userone usertwo userthree userfour userfive however as per the this method is deprecated guild get members this method seems to simply not work at all python bot command async def members info ctx interactions commandcontext await ctx send processing str for guild in bot guilds members guild get members the method isn t marked as async str f server guild name r n str f members count members object count this will return str join str member for member in members str r n r n await ctx edit str this will fail with task exception was never retrieved future exception typeerror asyncmembersiterator object is not iterable traceback most recent call last file venv lib site packages interactions client models command py line in wrapper raise e file venv lib site packages interactions client models command py line in wrapper return await coro ctx args kwargs file main py line in server info st join str member for member in members typeerror asyncmembersiterator object is not iterable this error message seems a little weird because asyncmembersiterator does implement discordpaginationiterator which implements baseiterator which provides iter and next so i would assume that the object would be iterable interestingly enough flat await members flatten does work giving all members as a list so why doesn t it work via its iterator list the steps see writeup what you expected see writeup what you saw see writeup what version of the library did you use release version specification code of conduct i agree to follow the contribution requirements
| 0
|
447,358
| 12,887,789,149
|
IssuesEvent
|
2020-07-13 11:53:16
|
crestic-urca/remotelabz
|
https://api.github.com/repos/crestic-urca/remotelabz
|
closed
|
Same device in multiple lab
|
bug high priority
|
In GitLab by @fnolot on Sep 18, 2019, 23:04
When we create 2 laboratories and choose in these 2 laboratories the same device, as soon as one lab is running, when we start the second lab, the device is already started!
|
1.0
|
Same device in multiple lab - In GitLab by @fnolot on Sep 18, 2019, 23:04
When we create 2 laboratories and choose in these 2 laboratories the same device, as soon as one lab is running, when we start the second lab, the device is already started!
|
non_process
|
same device in multiple lab in gitlab by fnolot on sep when we create laboratories and choose in these laboratories the same device as soon as one lab is running when we start the second lab the device is already started
| 0
|
102,330
| 21,947,460,965
|
IssuesEvent
|
2022-05-24 03:15:08
|
learnpack/learnpack
|
https://api.github.com/repos/learnpack/learnpack
|
closed
|
When opening one exercise with several files, only the last one gets opened
|
bug vscode plugin π½
|
Hrs: <hrs>1.5</hrs>
All this behavior is happening in `grading: isolated`
The way the plugin works, if the exercise has 3 files to open, for example: index.html, index.js and style.css.
It will open index.html, but then when it opens index.js it will replace the same TextEditor with the content of index.js (removing the index.html that was there in the first place), and then it will do the same with style.css.
This behavior was ideal for one file exercises but long term is better to make sure that a `new` editor is opened instead of reusing the old one.
Note: when another exercise is opened (with all of its files) we need to make sure the previous files are closed to avoid overwhelming the user with too many files.
## How to replicate
1. Open the Layout exercises: https://github.com/4GeeksAcademy/css-layouts-tutorial-exercises
2. Wait for learnpack to run and show the instructions
3. Click next until you hit exercise `03-Position-relative-vs-absolute`
4. You will see how bot files are open on the same editor on top of each other.
|
1.0
|
When opening one exercise with several files, only the last one gets opened - Hrs: <hrs>1.5</hrs>
All this behavior is happening in `grading: isolated`
The way the plugin works, if the exercise has 3 files to open, for example: index.html, index.js and style.css.
It will open index.html, but then when it opens index.js it will replace the same TextEditor with the content of index.js (removing the index.html that was there in the first place), and then it will do the same with style.css.
This behavior was ideal for one file exercises but long term is better to make sure that a `new` editor is opened instead of reusing the old one.
Note: when another exercise is opened (with all of its files) we need to make sure the previous files are closed to avoid overwhelming the user with too many files.
## How to replicate
1. Open the Layout exercises: https://github.com/4GeeksAcademy/css-layouts-tutorial-exercises
2. Wait for learnpack to run and show the instructions
3. Click next until you hit exercise `03-Position-relative-vs-absolute`
4. You will see how bot files are open on the same editor on top of each other.
|
non_process
|
when opening one exercise with several files only the last one gets opened hrs all this behavior is happening in grading isolated the way the plugin works if the exercise has files to open for example index html index js and style css it will open index html but then when it opens index js it will replace the same texteditor with the content of index js removing the index html that was there in the first place and then it will do the same with style css this behavior was ideal for one file exercises but long term is better to make sure that a new editor is opened instead of reusing the old one note when another exercise is opened with all of its files we need to make sure the previous files are closed to avoid overwhelming the user with too many files how to replicate open the layout exercises wait for learnpack to run and show the instructions click next until you hit exercise position relative vs absolute you will see how bot files are open on the same editor on top of each other
| 0
|
167,089
| 14,101,245,233
|
IssuesEvent
|
2020-11-06 06:24:31
|
tronghieu60s/project-winform
|
https://api.github.com/repos/tronghieu60s/project-winform
|
closed
|
Assigned tasks for the 1st time
|
documentation enhancement
|
@ Kim Ngan: Validating 2 form (data required) in frmLogin, check btnLogin when click.
@ Tran Tri: Validating form add new user frmMain (textbox, combobox, datetimepicker)
|
1.0
|
Assigned tasks for the 1st time - @ Kim Ngan: Validating 2 form (data required) in frmLogin, check btnLogin when click.
@ Tran Tri: Validating form add new user frmMain (textbox, combobox, datetimepicker)
|
non_process
|
assigned tasks for the time kim ngan validating form data required in frmlogin check btnlogin when click tran tri validating form add new user frmmain textbox combobox datetimepicker
| 0
|
15,581
| 19,704,457,967
|
IssuesEvent
|
2022-01-12 20:13:08
|
googleapis/php-grafeas
|
https://api.github.com/repos/googleapis/php-grafeas
|
closed
|
Your .repo-metadata.json file has a problem π€
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan π:
* client_documentation must match pattern "^https://.*" in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname field missing from .repo-metadata.json
βοΈ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem π€ - You have a problem with your .repo-metadata.json file:
Result of scan π:
* client_documentation must match pattern "^https://.*" in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname field missing from .repo-metadata.json
βοΈ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem π€ you have a problem with your repo metadata json file result of scan π client documentation must match pattern in repo metadata json release level must be equal to one of the allowed values in repo metadata json api shortname field missing from repo metadata json βοΈ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
7,816
| 10,980,458,216
|
IssuesEvent
|
2019-11-30 14:31:23
|
codeuniversity/smag-mvp
|
https://api.github.com/repos/codeuniversity/smag-mvp
|
closed
|
Insert image encoding in elasticsearch for k-nearest-neighbour search of similar images
|
Image Processing
|
To do the nearest neighbor search of faces efficiently, we get index the encoding of faces with https://github.com/ageitgey/face_recognition and search the n-nearest-neighbors with this elasticsearch addon: https://github.com/lior-k/fast-elasticsearch-vector-scoring
|
1.0
|
Insert image encoding in elasticsearch for k-nearest-neighbour search of similar images - To do the nearest neighbor search of faces efficiently, we get index the encoding of faces with https://github.com/ageitgey/face_recognition and search the n-nearest-neighbors with this elasticsearch addon: https://github.com/lior-k/fast-elasticsearch-vector-scoring
|
process
|
insert image encoding in elasticsearch for k nearest neighbour search of similar images to do the nearest neighbor search of faces efficiently we get index the encoding of faces with and search the n nearest neighbors with this elasticsearch addon
| 1
|
330,477
| 10,041,161,858
|
IssuesEvent
|
2019-07-18 21:54:27
|
milnel2/blocks4alliOS
|
https://api.github.com/repos/milnel2/blocks4alliOS
|
closed
|
If - Else Block
|
UI add hard medium priority
|
Add an else statement block. Have 'If' on it's own and add an 'If, Else' block.
|
1.0
|
If - Else Block - Add an else statement block. Have 'If' on it's own and add an 'If, Else' block.
|
non_process
|
if else block add an else statement block have if on it s own and add an if else block
| 0
|
41,172
| 5,342,717,036
|
IssuesEvent
|
2017-02-17 09:08:22
|
mautic/mautic
|
https://api.github.com/repos/mautic/mautic
|
closed
|
Bug - DWC permission inefficient & themes
|
Bug Ready To Test
|
| Q | A
| ---| ---
| Bug report? | Y
| Feature request? | -
| Enhancement? | -
## Description:
When you change permissions on role for `Dynamic Web Content`, it is not saved. Same issue on `themes`.
## If a bug:
| Q | A
| --- | ---
| Mautic version | 2.4.0 (and previous)
| PHP version | 5.6.24
### Steps to reproduce:
1. Create role with limited access
2. Click **Full** on `Themes` and `DWC`
3. Save and close
4. Edit and see that **Full** is not checked anymore.
|
1.0
|
Bug - DWC permission inefficient & themes - | Q | A
| ---| ---
| Bug report? | Y
| Feature request? | -
| Enhancement? | -
## Description:
When you change permissions on role for `Dynamic Web Content`, it is not saved. Same issue on `themes`.
## If a bug:
| Q | A
| --- | ---
| Mautic version | 2.4.0 (and previous)
| PHP version | 5.6.24
### Steps to reproduce:
1. Create role with limited access
2. Click **Full** on `Themes` and `DWC`
3. Save and close
4. Edit and see that **Full** is not checked anymore.
|
non_process
|
bug dwc permission inefficient themes q a bug report y feature request enhancement description when you change permissions on role for dynamic web content it is not saved same issue on themes if a bug q a mautic version and previous php version steps to reproduce create role with limited access click full on themes and dwc save and close edit and see that full is not checked anymore
| 0
|
16,840
| 9,537,528,422
|
IssuesEvent
|
2019-04-30 12:46:14
|
doitsujin/dxvk
|
https://api.github.com/repos/doitsujin/dxvk
|
closed
|
Discards are always emitted at the ends of shaders.
|
enhancement performance
|
This is a significant performance problems on some apps. On Megadimension Neptunia VIIR, for instance, moving discards early appears to give about a 20x perf improvement over keeping them late. The fundamental problem here is that D3D defines discards in such a way that helper invocations (required for derivatives) are well-defined after a D3D discard but not after an OpenGL or Vulkan discard.
I've experimented with trying to fix this inside the driver by writing a pass that attempts to move discards as far up the shader as possible. The pass just checks that it's not moving the discard past any derivatives or texture operations with implicit LOD. However, I'm not sure if driver fixing is really what we want because we can likely do better even if there are derivatives or texture operations with implicit LOD. For example, if you can make the assumption that derivatives (both explicit and those used for texturing) use a 2x2 quad which is always groups of 4 consecutive subgroup invocations, one could emit something like this:
void d3d_kill()
{
do_discard = true;
if (subgroupClusteredAnd(do_discard, 4))
discard;
}
In this case, you can jump for some of the invocations even if not all invocations discard and there are derivatives so long as an entire 2x2 quad is killed. This is basically how we would implement a D3D-style discard in our driver.
Another option would be an extension which allows a new SPIR-V execution mode which says "discards don't affect derivatives" and then just let the driver do what it wants to do. This would likely be easier for DXVK but maybe harder to get the Vulkan working group to swallow as an extension. I'm pretty sure providing the subgroup invocation -> quad mapping is something I could sell as an extension fairly easily.
|
True
|
Discards are always emitted at the ends of shaders. - This is a significant performance problems on some apps. On Megadimension Neptunia VIIR, for instance, moving discards early appears to give about a 20x perf improvement over keeping them late. The fundamental problem here is that D3D defines discards in such a way that helper invocations (required for derivatives) are well-defined after a D3D discard but not after an OpenGL or Vulkan discard.
I've experimented with trying to fix this inside the driver by writing a pass that attempts to move discards as far up the shader as possible. The pass just checks that it's not moving the discard past any derivatives or texture operations with implicit LOD. However, I'm not sure if driver fixing is really what we want because we can likely do better even if there are derivatives or texture operations with implicit LOD. For example, if you can make the assumption that derivatives (both explicit and those used for texturing) use a 2x2 quad which is always groups of 4 consecutive subgroup invocations, one could emit something like this:
void d3d_kill()
{
do_discard = true;
if (subgroupClusteredAnd(do_discard, 4))
discard;
}
In this case, you can jump for some of the invocations even if not all invocations discard and there are derivatives so long as an entire 2x2 quad is killed. This is basically how we would implement a D3D-style discard in our driver.
Another option would be an extension which allows a new SPIR-V execution mode which says "discards don't affect derivatives" and then just let the driver do what it wants to do. This would likely be easier for DXVK but maybe harder to get the Vulkan working group to swallow as an extension. I'm pretty sure providing the subgroup invocation -> quad mapping is something I could sell as an extension fairly easily.
|
non_process
|
discards are always emitted at the ends of shaders this is a significant performance problems on some apps on megadimension neptunia viir for instance moving discards early appears to give about a perf improvement over keeping them late the fundamental problem here is that defines discards in such a way that helper invocations required for derivatives are well defined after a discard but not after an opengl or vulkan discard i ve experimented with trying to fix this inside the driver by writing a pass that attempts to move discards as far up the shader as possible the pass just checks that it s not moving the discard past any derivatives or texture operations with implicit lod however i m not sure if driver fixing is really what we want because we can likely do better even if there are derivatives or texture operations with implicit lod for example if you can make the assumption that derivatives both explicit and those used for texturing use a quad which is always groups of consecutive subgroup invocations one could emit something like this void kill do discard true if subgroupclusteredand do discard discard in this case you can jump for some of the invocations even if not all invocations discard and there are derivatives so long as an entire quad is killed this is basically how we would implement a style discard in our driver another option would be an extension which allows a new spir v execution mode which says discards don t affect derivatives and then just let the driver do what it wants to do this would likely be easier for dxvk but maybe harder to get the vulkan working group to swallow as an extension i m pretty sure providing the subgroup invocation quad mapping is something i could sell as an extension fairly easily
| 0
|
331,532
| 24,311,863,963
|
IssuesEvent
|
2022-09-29 23:44:39
|
TKRHinton/Martyrs_Bleed_Neon
|
https://api.github.com/repos/TKRHinton/Martyrs_Bleed_Neon
|
closed
|
Plug Ins
|
documentation
|
Install the necessary plug ins for project and test them (including dialogue system for unity)
|
1.0
|
Plug Ins - Install the necessary plug ins for project and test them (including dialogue system for unity)
|
non_process
|
plug ins install the necessary plug ins for project and test them including dialogue system for unity
| 0
|
14,237
| 17,154,979,336
|
IssuesEvent
|
2021-07-14 05:07:01
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
opened
|
Update scripts to handle app-specific updates to the Participant Datastore
|
Android P1 Participant datastore Process: Enhancement iOS
|
Scripts for the Participant Datastore need to be modified to make app-specific updates for the following (based on App ID):
/* Android configuration */
SET @android_bundle_id := ''; /* This is the value of `applicationId` that configured in Android/app/build.gradle during Android configuration */
SET @android_server_key := ''; /* This is the Firebase Cloud Messaging server key obtained during Android configuration */
/* iOS configuration */
SET @ios_bundle_id := ''; /* Obtain this value using Xcode: Project target > General tab > Identity section > Bundle identifier */
SET @ios_certificate := ''; /* This is the Base64 converted p12 file that obtained during iOS configuration */
SET @ios_certificate_password := ''; /* This is the password for the p12 certificate (necessary if the certificate is encrypted - otherwise leave empty) */
Reference link to currently existing script: https://github.com/GoogleCloudPlatform/fda-mystudies/blob/master/participant-datastore/sqlscript/mystudies_app_info_update_db_script.sql
|
1.0
|
Update scripts to handle app-specific updates to the Participant Datastore - Scripts for the Participant Datastore need to be modified to make app-specific updates for the following (based on App ID):
/* Android configuration */
SET @android_bundle_id := ''; /* This is the value of `applicationId` that configured in Android/app/build.gradle during Android configuration */
SET @android_server_key := ''; /* This is the Firebase Cloud Messaging server key obtained during Android configuration */
/* iOS configuration */
SET @ios_bundle_id := ''; /* Obtain this value using Xcode: Project target > General tab > Identity section > Bundle identifier */
SET @ios_certificate := ''; /* This is the Base64 converted p12 file that obtained during iOS configuration */
SET @ios_certificate_password := ''; /* This is the password for the p12 certificate (necessary if the certificate is encrypted - otherwise leave empty) */
Reference link to currently existing script: https://github.com/GoogleCloudPlatform/fda-mystudies/blob/master/participant-datastore/sqlscript/mystudies_app_info_update_db_script.sql
|
process
|
update scripts to handle app specific updates to the participant datastore scripts for the participant datastore need to be modified to make app specific updates for the following based on app id android configuration set android bundle id this is the value of applicationid that configured in android app build gradle during android configuration set android server key this is the firebase cloud messaging server key obtained during android configuration ios configuration set ios bundle id obtain this value using xcode project target general tab identity section bundle identifier set ios certificate this is the converted file that obtained during ios configuration set ios certificate password this is the password for the certificate necessary if the certificate is encrypted otherwise leave empty reference link to currently existing script
| 1
|
329,262
| 28,210,622,704
|
IssuesEvent
|
2023-04-05 03:47:34
|
osrf/ros2_test_cases
|
https://api.github.com/repos/osrf/ros2_test_cases
|
opened
|
CLI
|
tutorials fastdds debian jammy amd64 generation-1 docs iron intermediate testing
|
Check the documentation for the 'CLI' page
## Setup
- DDS vendor: FastDDS
- BuildType: Debian
- Os: Ubuntu Jammy
- Chip: Amd64
## Links
- [CLI page](https://docs.ros.org/en/rolling/Tutorials/Intermediate/Testing/CLI.html)
## Checks
- [ ] **I was able to follow the documentation.**
- [ ] **The documentation seemed clear to me.**
- [ ] **The documentation didn't have any obvious errors.**
---
*You can find the code used to generate this test case [here](https://github.com/audrow/yatm)*
|
1.0
|
CLI - Check the documentation for the 'CLI' page
## Setup
- DDS vendor: FastDDS
- BuildType: Debian
- Os: Ubuntu Jammy
- Chip: Amd64
## Links
- [CLI page](https://docs.ros.org/en/rolling/Tutorials/Intermediate/Testing/CLI.html)
## Checks
- [ ] **I was able to follow the documentation.**
- [ ] **The documentation seemed clear to me.**
- [ ] **The documentation didn't have any obvious errors.**
---
*You can find the code used to generate this test case [here](https://github.com/audrow/yatm)*
|
non_process
|
cli check the documentation for the cli page setup dds vendor fastdds buildtype debian os ubuntu jammy chip links checks i was able to follow the documentation the documentation seemed clear to me the documentation didn t have any obvious errors you can find the code used to generate this test case
| 0
|
16,194
| 20,674,371,236
|
IssuesEvent
|
2022-03-10 07:39:41
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
CockroachDB: use idiomatic native types
|
process/candidate topic: schema engines/data model parser team/migrations topic: cockroachdb team/psl-wg
|
The question is whether we want to stay close to postgres or use the idiomatic cockroachdb names for the types. Example: string types https://www.cockroachlabs.com/docs/v21.2/string#related-types
My suggestion is to do a complete overhaul to adhere to the crdb recommendations.
Related issues:
https://github.com/prisma/prisma/issues/12234
https://github.com/prisma/prisma/issues/12236
|
1.0
|
CockroachDB: use idiomatic native types - The question is whether we want to stay close to postgres or use the idiomatic cockroachdb names for the types. Example: string types https://www.cockroachlabs.com/docs/v21.2/string#related-types
My suggestion is to do a complete overhaul to adhere to the crdb recommendations.
Related issues:
https://github.com/prisma/prisma/issues/12234
https://github.com/prisma/prisma/issues/12236
|
process
|
cockroachdb use idiomatic native types the question is whether we want to stay close to postgres or use the idiomatic cockroachdb names for the types example string types my suggestion is to do a complete overhaul to adhere to the crdb recommendations related issues
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.