Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5,570
| 8,407,409,989
|
IssuesEvent
|
2018-10-11 20:51:46
|
SynBioDex/SEPs
|
https://api.github.com/repos/SynBioDex/SEPs
|
reopened
|
SEP 005 -- SBOL Voting Procedure
|
Accepted Active Type: Process
|
# SEP 005 -- SBOL Voting Procedure
| SEP | 005 |
| --- | --- |
| **Title** | SBOL voting procedure |
| **Authors** | Raik Gruenberg |
| **Editor** | Raik Gruenberg |
| **Type** | Process |
| **Status** | Draft |
| **Created** | 24-Jan-2016 |
| **Last modified** | 02-Feb-2016 |
## Abstract
This proposal describes the voting procedure used to accept or reject changes to the SBOL data model or to SBOL community rules.
## 1. Rationale
With the introduction of SEPs, SBOL voting rules need to be updated. Previously, voting could be initiated by any two members of the sbol-dev mailing list on any issue of choice. According to the new proposal, voting can be initiated only on issues that first have been documented as an SBOL Enhancement Proposals (SEPs).
## 2. Specification
### 2.1 Changes to SBOL governance document
This SEP replaces two sections within the governance document (http://sbolstandard.org/development/gov/): "Voting process" and "Voting form". Note: Election rules remain unchanged.
### 2.2 Voting Process
1. Any member of the SBOL Developers Group can submit an SBOL Enhancement Proposal (SEP) to the editors and/or to the community at large.
2. The SBOL editors are expected to move a given SEP draft to a vote once they agree that there has been sufficient debate. However, any member of the SBOL Developers Group can initiate the vote as long as one other member of the group seconds the motion.
3. SBOL Editors post a voting form (see below), for a final discussion period of 2 working days.
4. Voting runs for 5 working days, starting at the end of the discussion period. All members of the SBOL Development Group are eligible to vote.
5. The SBOL Editors may extend the voting period by up to an additional 5 working days when they feel that an insufficient number of votes have been obtained.
6. SBOL Editors tally and call the vote. First vote will be judged by a 67% majority to indicate "rough consensus".
1. If rough consensus is not reached, discussion of 3 working days is to follow. SEP authors can modify or withdraw their proposal during that time.
2. The reasons for decisions must be recorded with the results of the vote.
3. Any second followup vote will be ruled by 50% majority and will be treated as the decision
### 2.3 Voting form
The voting form must:
1. State clearly the SEP number and title being voted on and provide a link to this SEP.
2. State the eligibility criteria for voting, “All members of the SBOL Developers Group are eligible to vote.”
3. Provide the following options for the vote:
- accept -- vote to accept the SEP
- reject -- vote to reject the SEP
- abstain -- no opinion, abstain votes will not be counted when determining majorities
- defer / table for further discussion -- keep the SEP in draft stage (in contrast to "abstain" this vote will be counted when determining majorities).
4. include a field for entering the e-mail address of the voter
5. include a field for further comments.
## 3. Discussion
The voting procedure is still very similar to the previous rules. New is the idea of SEPs and that it should, preferably, be the editors who move proposals for a vote. However, the proposal also still retains the previous practice -- anyone on the list can initiate the vote, as long as it is seconded by any other developer. Editors can therefore not block the vote on any issue.
Voting for election purposes remains unchanged and is described in SEP #3.
## Copyright
<p xmlns:dct="http://purl.org/dc/terms/" xmlns:vcard="http://www.w3.org/2001/vcard-rdf/3.0#">
<a rel="license"
href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" />
</a>
<br />
To the extent possible under law,
<a rel="dct:publisher"
href="sbolstandard.org">
<span property="dct:title">SBOL developers</span></a>
has waived all copyright and related or neighboring rights to
<span property="dct:title">SEP 005</span>.
This work is published from:
<span property="vcard:Country" datatype="dct:ISO3166"
content="US" about="sbolstandard.org">
United States</span>.
</p>
|
1.0
|
SEP 005 -- SBOL Voting Procedure - # SEP 005 -- SBOL Voting Procedure
| SEP | 005 |
| --- | --- |
| **Title** | SBOL voting procedure |
| **Authors** | Raik Gruenberg |
| **Editor** | Raik Gruenberg |
| **Type** | Process |
| **Status** | Draft |
| **Created** | 24-Jan-2016 |
| **Last modified** | 02-Feb-2016 |
## Abstract
This proposal describes the voting procedure used to accept or reject changes to the SBOL data model or to SBOL community rules.
## 1. Rationale
With the introduction of SEPs, SBOL voting rules need to be updated. Previously, voting could be initiated by any two members of the sbol-dev mailing list on any issue of choice. According to the new proposal, voting can be initiated only on issues that first have been documented as an SBOL Enhancement Proposals (SEPs).
## 2. Specification
### 2.1 Changes to SBOL governance document
This SEP replaces two sections within the governance document (http://sbolstandard.org/development/gov/): "Voting process" and "Voting form". Note: Election rules remain unchanged.
### 2.2 Voting Process
1. Any member of the SBOL Developers Group can submit an SBOL Enhancement Proposal (SEP) to the editors and/or to the community at large.
2. The SBOL editors are expected to move a given SEP draft to a vote once they agree that there has been sufficient debate. However, any member of the SBOL Developers Group can initiate the vote as long as one other member of the group seconds the motion.
3. SBOL Editors post a voting form (see below), for a final discussion period of 2 working days.
4. Voting runs for 5 working days, starting at the end of the discussion period. All members of the SBOL Development Group are eligible to vote.
5. The SBOL Editors may extend the voting period by up to an additional 5 working days when they feel that an insufficient number of votes have been obtained.
6. SBOL Editors tally and call the vote. First vote will be judged by a 67% majority to indicate "rough consensus".
1. If rough consensus is not reached, discussion of 3 working days is to follow. SEP authors can modify or withdraw their proposal during that time.
2. The reasons for decisions must be recorded with the results of the vote.
3. Any second followup vote will be ruled by 50% majority and will be treated as the decision
### 2.3 Voting form
The voting form must:
1. State clearly the SEP number and title being voted on and provide a link to this SEP.
2. State the eligibility criteria for voting, “All members of the SBOL Developers Group are eligible to vote.”
3. Provide the following options for the vote:
- accept -- vote to accept the SEP
- reject -- vote to reject the SEP
- abstain -- no opinion, abstain votes will not be counted when determining majorities
- defer / table for further discussion -- keep the SEP in draft stage (in contrast to "abstain" this vote will be counted when determining majorities).
4. include a field for entering the e-mail address of the voter
5. include a field for further comments.
## 3. Discussion
The voting procedure is still very similar to the previous rules. New is the idea of SEPs and that it should, preferably, be the editors who move proposals for a vote. However, the proposal also still retains the previous practice -- anyone on the list can initiate the vote, as long as it is seconded by any other developer. Editors can therefore not block the vote on any issue.
Voting for election purposes remains unchanged and is described in SEP #3.
## Copyright
<p xmlns:dct="http://purl.org/dc/terms/" xmlns:vcard="http://www.w3.org/2001/vcard-rdf/3.0#">
<a rel="license"
href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" />
</a>
<br />
To the extent possible under law,
<a rel="dct:publisher"
href="sbolstandard.org">
<span property="dct:title">SBOL developers</span></a>
has waived all copyright and related or neighboring rights to
<span property="dct:title">SEP 005</span>.
This work is published from:
<span property="vcard:Country" datatype="dct:ISO3166"
content="US" about="sbolstandard.org">
United States</span>.
</p>
|
process
|
sep sbol voting procedure sep sbol voting procedure sep title sbol voting procedure authors raik gruenberg editor raik gruenberg type process status draft created jan last modified feb abstract this proposal describes the voting procedure used to accept or reject changes to the sbol data model or to sbol community rules rationale with the introduction of seps sbol voting rules need to be updated previously voting could be initiated by any two members of the sbol dev mailing list on any issue of choice according to the new proposal voting can be initiated only on issues that first have been documented as an sbol enhancement proposals seps specification changes to sbol governance document this sep replaces two sections within the governance document voting process and voting form note election rules remain unchanged voting process any member of the sbol developers group can submit an sbol enhancement proposal sep to the editors and or to the community at large the sbol editors are expected to move a given sep draft to a vote once they agree that there has been sufficient debate however any member of the sbol developers group can initiate the vote as long as one other member of the group seconds the motion sbol editors post a voting form see below for a final discussion period of working days voting runs for working days starting at the end of the discussion period all members of the sbol development group are eligible to vote the sbol editors may extend the voting period by up to an additional working days when they feel that an insufficient number of votes have been obtained sbol editors tally and call the vote first vote will be judged by a majority to indicate rough consensus if rough consensus is not reached discussion of working days is to follow sep authors can modify or withdraw their proposal during that time the reasons for decisions must be recorded with the results of the vote any second followup vote will be ruled by majority and will be treated as the decision voting form the voting form must state clearly the sep number and title being voted on and provide a link to this sep state the eligibility criteria for voting “all members of the sbol developers group are eligible to vote ” provide the following options for the vote accept vote to accept the sep reject vote to reject the sep abstain no opinion abstain votes will not be counted when determining majorities defer table for further discussion keep the sep in draft stage in contrast to abstain this vote will be counted when determining majorities include a field for entering the e mail address of the voter include a field for further comments discussion the voting procedure is still very similar to the previous rules new is the idea of seps and that it should preferably be the editors who move proposals for a vote however the proposal also still retains the previous practice anyone on the list can initiate the vote as long as it is seconded by any other developer editors can therefore not block the vote on any issue voting for election purposes remains unchanged and is described in sep copyright p xmlns dct xmlns vcard a rel license href to the extent possible under law a rel dct publisher href sbolstandard org sbol developers has waived all copyright and related or neighboring rights to sep this work is published from span property vcard country datatype dct content us about sbolstandard org united states
| 1
|
11,434
| 14,248,489,782
|
IssuesEvent
|
2020-11-19 13:01:00
|
googleapis/repo-automation-bots
|
https://api.github.com/repos/googleapis/repo-automation-bots
|
opened
|
GitHub Actions CI flakiness
|
type: process
|
See https://github.com/googleapis/repo-automation-bots/runs/1424133746?check_suite_focus=true.
```
Error: Command failed: git diff --name-only origin/
fatal: ambiguous argument 'origin/': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
```
Seems to be related to the new setup from #626.
cc @JustinBeckwith @bcoe @tmatsuo
|
1.0
|
GitHub Actions CI flakiness - See https://github.com/googleapis/repo-automation-bots/runs/1424133746?check_suite_focus=true.
```
Error: Command failed: git diff --name-only origin/
fatal: ambiguous argument 'origin/': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
```
Seems to be related to the new setup from #626.
cc @JustinBeckwith @bcoe @tmatsuo
|
process
|
github actions ci flakiness see error command failed git diff name only origin fatal ambiguous argument origin unknown revision or path not in the working tree use to separate paths from revisions like this git seems to be related to the new setup from cc justinbeckwith bcoe tmatsuo
| 1
|
14,371
| 17,395,583,706
|
IssuesEvent
|
2021-08-02 13:06:37
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Actions can't be performed in iframe on fiddle.net
|
AREA: client SYSTEM: URL processing SYSTEM: iframe processing TYPE: bug
|
### What is your Test Scenario?
I'm trying to record actions in a result iframe on fiddle.net after click on a `Run` button
https://github.com/DevExpress/testcafe-studio/issues/2052
### What is the Current behavior?
Error raises `TestCafeDriver doesn't exist in the iframe`
### What is the Expected behavior?
I expect TestCafeDriver exists in the iframe
### Steps to Reproduce:
The easiest way to reproduce is to run a following test:
```js
import { Selector } from 'testcafe';
fixture `f`
.page `https://jsfiddle.net/`;
test('t', async t => {
await t
.debug() //resume after loading is complete
.click(Selector('#run'))
.debug() // resume after result iframe has a white background
.switchToIframe(Selector('[name="result"]'))
.click(Selector('body')); // error raises 'Content of the iframe in which the test is currently operating did not load.'
});
```
On the second `debug` action you can check that TestCafeDriver instance doesn't exist in iframe

### Your Environment details:
* testcafe version: 0.23.3
|
2.0
|
Actions can't be performed in iframe on fiddle.net - ### What is your Test Scenario?
I'm trying to record actions in a result iframe on fiddle.net after click on a `Run` button
https://github.com/DevExpress/testcafe-studio/issues/2052
### What is the Current behavior?
Error raises `TestCafeDriver doesn't exist in the iframe`
### What is the Expected behavior?
I expect TestCafeDriver exists in the iframe
### Steps to Reproduce:
The easiest way to reproduce is to run a following test:
```js
import { Selector } from 'testcafe';
fixture `f`
.page `https://jsfiddle.net/`;
test('t', async t => {
await t
.debug() //resume after loading is complete
.click(Selector('#run'))
.debug() // resume after result iframe has a white background
.switchToIframe(Selector('[name="result"]'))
.click(Selector('body')); // error raises 'Content of the iframe in which the test is currently operating did not load.'
});
```
On the second `debug` action you can check that TestCafeDriver instance doesn't exist in iframe

### Your Environment details:
* testcafe version: 0.23.3
|
process
|
actions can t be performed in iframe on fiddle net what is your test scenario i m trying to record actions in a result iframe on fiddle net after click on a run button what is the current behavior error raises testcafedriver doesn t exist in the iframe what is the expected behavior i expect testcafedriver exists in the iframe steps to reproduce the easiest way to reproduce is to run a following test js import selector from testcafe fixture f page test t async t await t debug resume after loading is complete click selector run debug resume after result iframe has a white background switchtoiframe selector click selector body error raises content of the iframe in which the test is currently operating did not load on the second debug action you can check that testcafedriver instance doesn t exist in iframe your environment details testcafe version
| 1
|
838
| 3,305,436,722
|
IssuesEvent
|
2015-11-04 04:45:34
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
abbreviated-form and term keyref links are not resolved when chunk="to-content" [DOT 1.8 and 2.0]]
|
bug preprocess/chunking
|
Let's say I have a DITA Map like:
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "http://docs.oasis-open.org/dita/v1.1/OS/dtd/map.dtd">
<map title="Growing Flowers">
<topicref href="topics/introduction.dita"/>
<topicref href="glossary/glossary_overview.dita" chunk="to-content">
<topicref href="glossary/ot.dita" keys="opentoolkit"/>
</topicref>
</map>
with "introduction.dita" having the content:
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "http://docs.oasis-open.org/dita/v1.1/OS/dtd/topic.dtd">
<topic id="introduction">
<title>Introduction</title>
<body>
<p>Look out for <term keyref="opentoolkit"/> and for <abbreviated-form keyref="opentoolkit"/> but not for <xref keyref="opentoolkit"/>.</p>
</body>
</topic>
and "glossary_overview.dita" having the content:
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="topic_dfl_sn4_3q">
<title>TEST</title>
<body>
<p></p>
</body>
</topic>
and "ot.dita" having the content:
<!DOCTYPE glossentry PUBLIC "-//OASIS//DTD DITA Glossary//EN" "glossary.dtd">
<glossentry id="wmd">
<glossterm>Weapons of Mass
Destruction</glossterm>
<glossBody>
<glossSurfaceForm>Weapons of Mass
Destruction (WMD)</glossSurfaceForm>
<glossAlt>
<glossAcronym>WMD</glossAcronym>
</glossAlt>
</glossBody>
</glossentry>
Publishing the DITA Map to XHTML, both the abbreviated-form and term keyrefs are not properly solved. The xref with the same keyref is properly solved.
The console output contains:
[xslt] C:\Users\radu_coravu\Desktop\DOT2.0 final\DITA-OT2.0\plugins\org.dita.xhtml\xsl\xslhtml\dita2htmlImpl.xsl:1372: Error! I/O error reported by XML parser processing file:/C:/Users/radu_coravu/Desktop/tomjohnson/flowers/temp/xhtml/oxygen_dita_temp/topics/../glossary/ot.dita: C:\Users\radu_coravu\Desktop\tomjohnson\flowers\temp\xhtml\oxygen_dita_temp\topics\..\glossary\ot.dita (The system cannot find the file specified) Cause: java.io.FileNotFoundException: C:\Users\radu_coravu\Desktop\tomjohnson\flowers\temp\xhtml\oxygen_dita_temp\topics\..\glossary\ot.dita (The system cannot find the file specified)
[xslt] C:\Users\radu_coravu\Desktop\DOT2.0 final\DITA-OT2.0\plugins\org.dita.xhtml\xsl\xslhtml\dita2htmlImpl.xsl:4136: Error! Document has been marked not available: file:/C:/Users/radu_coravu/Desktop/tomjohnson/flowers/temp/xhtml/oxygen_dita_temp/topics/../glossary/ot.dita
|
1.0
|
abbreviated-form and term keyref links are not resolved when chunk="to-content" [DOT 1.8 and 2.0]] - Let's say I have a DITA Map like:
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "http://docs.oasis-open.org/dita/v1.1/OS/dtd/map.dtd">
<map title="Growing Flowers">
<topicref href="topics/introduction.dita"/>
<topicref href="glossary/glossary_overview.dita" chunk="to-content">
<topicref href="glossary/ot.dita" keys="opentoolkit"/>
</topicref>
</map>
with "introduction.dita" having the content:
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "http://docs.oasis-open.org/dita/v1.1/OS/dtd/topic.dtd">
<topic id="introduction">
<title>Introduction</title>
<body>
<p>Look out for <term keyref="opentoolkit"/> and for <abbreviated-form keyref="opentoolkit"/> but not for <xref keyref="opentoolkit"/>.</p>
</body>
</topic>
and "glossary_overview.dita" having the content:
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="topic_dfl_sn4_3q">
<title>TEST</title>
<body>
<p></p>
</body>
</topic>
and "ot.dita" having the content:
<!DOCTYPE glossentry PUBLIC "-//OASIS//DTD DITA Glossary//EN" "glossary.dtd">
<glossentry id="wmd">
<glossterm>Weapons of Mass
Destruction</glossterm>
<glossBody>
<glossSurfaceForm>Weapons of Mass
Destruction (WMD)</glossSurfaceForm>
<glossAlt>
<glossAcronym>WMD</glossAcronym>
</glossAlt>
</glossBody>
</glossentry>
Publishing the DITA Map to XHTML, both the abbreviated-form and term keyrefs are not properly solved. The xref with the same keyref is properly solved.
The console output contains:
[xslt] C:\Users\radu_coravu\Desktop\DOT2.0 final\DITA-OT2.0\plugins\org.dita.xhtml\xsl\xslhtml\dita2htmlImpl.xsl:1372: Error! I/O error reported by XML parser processing file:/C:/Users/radu_coravu/Desktop/tomjohnson/flowers/temp/xhtml/oxygen_dita_temp/topics/../glossary/ot.dita: C:\Users\radu_coravu\Desktop\tomjohnson\flowers\temp\xhtml\oxygen_dita_temp\topics\..\glossary\ot.dita (The system cannot find the file specified) Cause: java.io.FileNotFoundException: C:\Users\radu_coravu\Desktop\tomjohnson\flowers\temp\xhtml\oxygen_dita_temp\topics\..\glossary\ot.dita (The system cannot find the file specified)
[xslt] C:\Users\radu_coravu\Desktop\DOT2.0 final\DITA-OT2.0\plugins\org.dita.xhtml\xsl\xslhtml\dita2htmlImpl.xsl:4136: Error! Document has been marked not available: file:/C:/Users/radu_coravu/Desktop/tomjohnson/flowers/temp/xhtml/oxygen_dita_temp/topics/../glossary/ot.dita
|
process
|
abbreviated form and term keyref links are not resolved when chunk to content let s say i have a dita map like doctype map public oasis dtd dita map en with introduction dita having the content doctype topic public oasis dtd dita topic en introduction look out for and for but not for and glossary overview dita having the content test and ot dita having the content weapons of mass destruction weapons of mass destruction wmd wmd publishing the dita map to xhtml both the abbreviated form and term keyrefs are not properly solved the xref with the same keyref is properly solved the console output contains c users radu coravu desktop final dita plugins org dita xhtml xsl xslhtml xsl error i o error reported by xml parser processing file c users radu coravu desktop tomjohnson flowers temp xhtml oxygen dita temp topics glossary ot dita c users radu coravu desktop tomjohnson flowers temp xhtml oxygen dita temp topics glossary ot dita the system cannot find the file specified cause java io filenotfoundexception c users radu coravu desktop tomjohnson flowers temp xhtml oxygen dita temp topics glossary ot dita the system cannot find the file specified c users radu coravu desktop final dita plugins org dita xhtml xsl xslhtml xsl error document has been marked not available file c users radu coravu desktop tomjohnson flowers temp xhtml oxygen dita temp topics glossary ot dita
| 1
|
203,360
| 23,146,955,644
|
IssuesEvent
|
2022-07-29 02:39:01
|
istio/istio
|
https://api.github.com/repos/istio/istio
|
closed
|
Istio JWT validation happens even if RequestAuthentication is not applied to the workload
|
area/security
|
### Bug Description
### Context:
I have two `httpbin` deployments under `foo` namespace:
1. `httpbin` – deployed with the sidecar proxy
2. `httpbin-no-auth` – deployed without sidecar proxy
I also configured `RequestAuthentication` to be applied to the `httpbin` workload:
```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
name: jwt-example
namespace: foo
spec:
selector:
matchLabels:
app: httpbin
jwtRules:
- issuer: https://accounts.google.com
```
Now, I want to create a `VirtualService`, which will route requests to `httpbin` if the `x-use-auth: true` header is present, or otherwise, route requests to `httpbin-no-auth`:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: httpbin
namespace: foo
spec:
gateways:
- httpbin
- mesh
hosts:
- httpbin.foo.svc.cluster.local
http:
- match:
- headers:
x-use-auth:
exact: 'true'
route:
- destination:
host: httpbin.foo.svc.cluster.local
port:
number: 8000
- route:
- destination:
host: httpbin-no-auth.foo.svc.cluster.local
port:
number: 8000
```
I also have a `Gateway` to pass the ingress traffic to the mesh:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: httpbin
namespace: foo
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*'
```
### Expected Behaviour:
```shell
~ curl -H 'Host: httpbin.foo.svc.cluster.local' -H "x-use-auth: true" -H "Authorization: Bearer invalid.token" http://${INGRESS_IP}/ip -v
<redacted>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< www-authenticate: Bearer realm="http://httpbin.foo.svc.cluster.local/ip", error="invalid_token"
<
Jwt is not in the form of Header.Payload.Signature with two dots and 3 sections%
```
```shell
~ curl -H 'Host: httpbin.foo.svc.cluster.local' -H "Authorization: Bearer invalid.token" http://${INGRESS_IP}/ip -v
<redacted>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
<
{
"origin": "<ip_address>"
}
```
### Actual Behaviour:
```shell
~ curl -H 'Host: httpbin.foo.svc.cluster.local' -H "Authorization: Bearer invalid.token" http://${INGRESS_IP}/ip -v
<redacted>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< www-authenticate: Bearer realm="http://httpbin.foo.svc.cluster.local/ip", error="invalid_token"
<
Jwt is not in the form of Header.Payload.Signature with two dots and 3 sections%
```
### Details:
The reason I'm trying to achieve described behaviour is because I have a legacy version of an API service, which is using OAuth2 `access_token` for authentication. Now, I want to migrate this service to OIDC with JWT, without changing hostname/path under which this API is accessible.
### Version
```prose
➜ ~ istioctl version
client version: 1.14.1
control plane version: 1.13.3
data plane version: 1.13.3 (2 proxies)
➜ ~ k version --short
Client Version: v1.23.3
Server Version: v1.22.8-gke.202
```
### Additional Information
_No response_
|
True
|
Istio JWT validation happens even if RequestAuthentication is not applied to the workload - ### Bug Description
### Context:
I have two `httpbin` deployments under `foo` namespace:
1. `httpbin` – deployed with the sidecar proxy
2. `httpbin-no-auth` – deployed without sidecar proxy
I also configured `RequestAuthentication` to be applied to the `httpbin` workload:
```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
name: jwt-example
namespace: foo
spec:
selector:
matchLabels:
app: httpbin
jwtRules:
- issuer: https://accounts.google.com
```
Now, I want to create a `VirtualService`, which will route requests to `httpbin` if the `x-use-auth: true` header is present, or otherwise, route requests to `httpbin-no-auth`:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: httpbin
namespace: foo
spec:
gateways:
- httpbin
- mesh
hosts:
- httpbin.foo.svc.cluster.local
http:
- match:
- headers:
x-use-auth:
exact: 'true'
route:
- destination:
host: httpbin.foo.svc.cluster.local
port:
number: 8000
- route:
- destination:
host: httpbin-no-auth.foo.svc.cluster.local
port:
number: 8000
```
I also have a `Gateway` to pass the ingress traffic to the mesh:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: httpbin
namespace: foo
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*'
```
### Expected Behaviour:
```shell
~ curl -H 'Host: httpbin.foo.svc.cluster.local' -H "x-use-auth: true" -H "Authorization: Bearer invalid.token" http://${INGRESS_IP}/ip -v
<redacted>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< www-authenticate: Bearer realm="http://httpbin.foo.svc.cluster.local/ip", error="invalid_token"
<
Jwt is not in the form of Header.Payload.Signature with two dots and 3 sections%
```
```shell
~ curl -H 'Host: httpbin.foo.svc.cluster.local' -H "Authorization: Bearer invalid.token" http://${INGRESS_IP}/ip -v
<redacted>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
<
{
"origin": "<ip_address>"
}
```
### Actual Behaviour:
```shell
~ curl -H 'Host: httpbin.foo.svc.cluster.local' -H "Authorization: Bearer invalid.token" http://${INGRESS_IP}/ip -v
<redacted>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< www-authenticate: Bearer realm="http://httpbin.foo.svc.cluster.local/ip", error="invalid_token"
<
Jwt is not in the form of Header.Payload.Signature with two dots and 3 sections%
```
### Details:
The reason I'm trying to achieve described behaviour is because I have a legacy version of an API service, which is using OAuth2 `access_token` for authentication. Now, I want to migrate this service to OIDC with JWT, without changing hostname/path under which this API is accessible.
### Version
```prose
➜ ~ istioctl version
client version: 1.14.1
control plane version: 1.13.3
data plane version: 1.13.3 (2 proxies)
➜ ~ k version --short
Client Version: v1.23.3
Server Version: v1.22.8-gke.202
```
### Additional Information
_No response_
|
non_process
|
istio jwt validation happens even if requestauthentication is not applied to the workload bug description context i have two httpbin deployments under foo namespace httpbin – deployed with the sidecar proxy httpbin no auth – deployed without sidecar proxy i also configured requestauthentication to be applied to the httpbin workload yaml apiversion security istio io kind requestauthentication metadata name jwt example namespace foo spec selector matchlabels app httpbin jwtrules issuer now i want to create a virtualservice which will route requests to httpbin if the x use auth true header is present or otherwise route requests to httpbin no auth yaml apiversion networking istio io kind virtualservice metadata name httpbin namespace foo spec gateways httpbin mesh hosts httpbin foo svc cluster local http match headers x use auth exact true route destination host httpbin foo svc cluster local port number route destination host httpbin no auth foo svc cluster local port number i also have a gateway to pass the ingress traffic to the mesh yaml apiversion networking istio io kind gateway metadata name httpbin namespace foo spec selector istio ingressgateway servers port number name http protocol http hosts expected behaviour shell curl h host httpbin foo svc cluster local h x use auth true h authorization bearer invalid token v mark bundle as not supporting multiuse http unauthorized www authenticate bearer realm error invalid token jwt is not in the form of header payload signature with two dots and sections shell curl h host httpbin foo svc cluster local h authorization bearer invalid token v mark bundle as not supporting multiuse http ok origin actual behaviour shell curl h host httpbin foo svc cluster local h authorization bearer invalid token v mark bundle as not supporting multiuse http unauthorized www authenticate bearer realm error invalid token jwt is not in the form of header payload signature with two dots and sections details the reason i m trying to achieve described behaviour is because i have a legacy version of an api service which is using access token for authentication now i want to migrate this service to oidc with jwt without changing hostname path under which this api is accessible version prose ➜ istioctl version client version control plane version data plane version proxies ➜ k version short client version server version gke additional information no response
| 0
|
8,116
| 11,302,415,055
|
IssuesEvent
|
2020-01-17 17:36:43
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
2 small fixes in pathogen host branch
|
PomBase missing parentage multi-species process parent relationship query term merge
|

add
- [x]
GO:0044068 modulation by symbiont of host cellular process
as a parent of
GO:0052552 modulation by organism of immune response of other organism involved in symbiotic interaction
- [x] and rename to
GO:0052552 modulation by symbiont of host immune response
i.e now it will be named like the descendant term
GO:0075528 modulation by virus of host immune response
|
1.0
|
2 small fixes in pathogen host branch - 
add
- [x]
GO:0044068 modulation by symbiont of host cellular process
as a parent of
GO:0052552 modulation by organism of immune response of other organism involved in symbiotic interaction
- [x] and rename to
GO:0052552 modulation by symbiont of host immune response
i.e now it will be named like the descendant term
GO:0075528 modulation by virus of host immune response
|
process
|
small fixes in pathogen host branch add go modulation by symbiont of host cellular process as a parent of go modulation by organism of immune response of other organism involved in symbiotic interaction and rename to go modulation by symbiont of host immune response i e now it will be named like the descendant term go modulation by virus of host immune response
| 1
|
7,033
| 10,192,805,100
|
IssuesEvent
|
2019-08-12 12:11:16
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
partner names should be seen to every user in the entity
|
2.0.9 Process bug
|
after adding or being added to an entity the name of the other partners should be visible on tooltip after clicking on their avatar ( watcher and commenter cant see)
|
1.0
|
partner names should be seen to every user in the entity - after adding or being added to an entity the name of the other partners should be visible on tooltip after clicking on their avatar ( watcher and commenter cant see)
|
process
|
partner names should be seen to every user in the entity after adding or being added to an entity the name of the other partners should be visible on tooltip after clicking on their avatar watcher and commenter cant see
| 1
|
11,287
| 14,079,905,109
|
IssuesEvent
|
2020-11-04 15:27:32
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
bigquery: CI samples task consistently fails
|
api: bigquery samples type: process
|
google-cloud-bigquery samples task consistently fails. This is probably due to an expectation that the added datasets will appear in the first page of results, and in reality there are many old datasets.
```
1) Failure:
List datasets#test_0001_lists datasets in a project [/tmpfs/src/github/google-cloud-ruby/google-cloud-bigquery/samples/snippets/acceptance/list_datasets_test.rb:27]:
Expected /test_dataset1_1603835162_1753c908/ to match "Datasets in project :\n\tgcloud_ruby_acceptance_2020_08_28t07_36_28z_523e3659_dataset_view\n\tgcloud_ruby_acceptance_2020_08_28t07_36_28z_523e3659_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_05t07_48_15z_93d1dd10_dataset\n\tgcloud_ruby_acceptance_2020_10_05t07_48_15z_93d1dd10_dataset_2\n\tgcloud_ruby_acceptance_2020_10_05t07_48_15z_93d1dd10_dataset_location\n\tgcloud_ruby_acceptance_2020_10_05t07_48_15z_93d1dd10_dataset_view\n\tgcloud_ruby_acceptance_2020_10_05t07_48_15z_93d1dd10_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_05t09_20_52z_b4f07038_dataset\n\tgcloud_ruby_acceptance_2020_10_05t09_20_52z_b4f07038_dataset_2\n\tgcloud_ruby_acceptance_2020_10_05t09_20_52z_b4f07038_dataset_location\n\tgcloud_ruby_acceptance_2020_10_05t09_20_52z_b4f07038_dataset_view\n\tgcloud_ruby_acceptance_2020_10_05t09_20_52z_b4f07038_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_06t08_43_28z_ab38893e_dataset\n\tgcloud_ruby_acceptance_2020_10_06t08_43_28z_ab38893e_dataset_2\n\tgcloud_ruby_acceptance_2020_10_06t08_43_28z_ab38893e_dataset_location\n\tgcloud_ruby_acceptance_2020_10_06t08_43_28z_ab38893e_dataset_view\n\tgcloud_ruby_acceptance_2020_10_06t08_43_28z_ab38893e_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_10t10_49_19z_bf7e3447_dataset\n\tgcloud_ruby_acceptance_2020_10_10t10_49_19z_bf7e3447_dataset_2\n\tgcloud_ruby_acceptance_2020_10_10t10_49_19z_bf7e3447_dataset_location\n\tgcloud_ruby_acceptance_2020_10_10t10_49_19z_bf7e3447_dataset_view\n\tgcloud_ruby_acceptance_2020_10_10t10_49_19z_bf7e3447_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_15t07_39_24z_919a6b6e_dataset\n\tgcloud_ruby_acceptance_2020_10_15t07_39_24z_919a6b6e_dataset_2\n\tgcloud_ruby_acceptance_2020_10_15t07_39_24z_919a6b6e_dataset_location\n\tgcloud_ruby_acceptance_2020_10_15t07_39_24z_919a6b6e_dataset_view\n\tgcloud_ruby_acceptance_2020_10_15t07_39_24z_919a6b6e_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_20t08_47_54z_b1a68b92_dataset\n\tgcloud_ruby_acceptance_2020_10_20t08_47_54z_b1a68b92_dataset_2\n\tgcloud_ruby_acceptance_2020_10_20t08_47_54z_b1a68b92_dataset_location\n\tgcloud_ruby_acceptance_2020_10_20t08_47_54z_b1a68b92_dataset_view\n\tgcloud_ruby_acceptance_2020_10_20t08_47_54z_b1a68b92_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_24t07_36_36z_9e954fb9_dataset\n\tgcloud_ruby_acceptance_2020_10_24t07_36_36z_9e954fb9_dataset_2\n\tgcloud_ruby_acceptance_2020_10_24t07_36_36z_9e954fb9_dataset_location\n\tgcloud_ruby_acceptance_2020_10_24t07_36_36z_9e954fb9_dataset_view\n\tgcloud_ruby_acceptance_2020_10_24t07_36_36z_9e954fb9_dataset_with_access\n\truby_asset_sample_898c3ad7d3cd5c3e2ea132420da283ca\n\ttest_dataset1_1592391068\n\ttest_dataset1_1592391079\n\ttest_dataset1_1592391081\n\ttest_dataset1_1592391087\n\ttest_dataset1_1592391111\n\ttest_dataset1_1592469664\n\ttest_dataset1_1592469668\n\ttest_dataset1_1592469689\n\ttest_dataset1_1592522636\n\ttest_dataset1_1592522814\n\ttest_dataset1_1592525505\n\ttest_dataset1_1592525551\n".
18 runs, 47 assertions, 1 failures, 0 errors, 0 skips
```
|
1.0
|
bigquery: CI samples task consistently fails - google-cloud-bigquery samples task consistently fails. This is probably due to an expectation that the added datasets will appear in the first page of results, and in reality there are many old datasets.
```
1) Failure:
List datasets#test_0001_lists datasets in a project [/tmpfs/src/github/google-cloud-ruby/google-cloud-bigquery/samples/snippets/acceptance/list_datasets_test.rb:27]:
Expected /test_dataset1_1603835162_1753c908/ to match "Datasets in project :\n\tgcloud_ruby_acceptance_2020_08_28t07_36_28z_523e3659_dataset_view\n\tgcloud_ruby_acceptance_2020_08_28t07_36_28z_523e3659_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_05t07_48_15z_93d1dd10_dataset\n\tgcloud_ruby_acceptance_2020_10_05t07_48_15z_93d1dd10_dataset_2\n\tgcloud_ruby_acceptance_2020_10_05t07_48_15z_93d1dd10_dataset_location\n\tgcloud_ruby_acceptance_2020_10_05t07_48_15z_93d1dd10_dataset_view\n\tgcloud_ruby_acceptance_2020_10_05t07_48_15z_93d1dd10_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_05t09_20_52z_b4f07038_dataset\n\tgcloud_ruby_acceptance_2020_10_05t09_20_52z_b4f07038_dataset_2\n\tgcloud_ruby_acceptance_2020_10_05t09_20_52z_b4f07038_dataset_location\n\tgcloud_ruby_acceptance_2020_10_05t09_20_52z_b4f07038_dataset_view\n\tgcloud_ruby_acceptance_2020_10_05t09_20_52z_b4f07038_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_06t08_43_28z_ab38893e_dataset\n\tgcloud_ruby_acceptance_2020_10_06t08_43_28z_ab38893e_dataset_2\n\tgcloud_ruby_acceptance_2020_10_06t08_43_28z_ab38893e_dataset_location\n\tgcloud_ruby_acceptance_2020_10_06t08_43_28z_ab38893e_dataset_view\n\tgcloud_ruby_acceptance_2020_10_06t08_43_28z_ab38893e_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_10t10_49_19z_bf7e3447_dataset\n\tgcloud_ruby_acceptance_2020_10_10t10_49_19z_bf7e3447_dataset_2\n\tgcloud_ruby_acceptance_2020_10_10t10_49_19z_bf7e3447_dataset_location\n\tgcloud_ruby_acceptance_2020_10_10t10_49_19z_bf7e3447_dataset_view\n\tgcloud_ruby_acceptance_2020_10_10t10_49_19z_bf7e3447_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_15t07_39_24z_919a6b6e_dataset\n\tgcloud_ruby_acceptance_2020_10_15t07_39_24z_919a6b6e_dataset_2\n\tgcloud_ruby_acceptance_2020_10_15t07_39_24z_919a6b6e_dataset_location\n\tgcloud_ruby_acceptance_2020_10_15t07_39_24z_919a6b6e_dataset_view\n\tgcloud_ruby_acceptance_2020_10_15t07_39_24z_919a6b6e_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_20t08_47_54z_b1a68b92_dataset\n\tgcloud_ruby_acceptance_2020_10_20t08_47_54z_b1a68b92_dataset_2\n\tgcloud_ruby_acceptance_2020_10_20t08_47_54z_b1a68b92_dataset_location\n\tgcloud_ruby_acceptance_2020_10_20t08_47_54z_b1a68b92_dataset_view\n\tgcloud_ruby_acceptance_2020_10_20t08_47_54z_b1a68b92_dataset_with_access\n\tgcloud_ruby_acceptance_2020_10_24t07_36_36z_9e954fb9_dataset\n\tgcloud_ruby_acceptance_2020_10_24t07_36_36z_9e954fb9_dataset_2\n\tgcloud_ruby_acceptance_2020_10_24t07_36_36z_9e954fb9_dataset_location\n\tgcloud_ruby_acceptance_2020_10_24t07_36_36z_9e954fb9_dataset_view\n\tgcloud_ruby_acceptance_2020_10_24t07_36_36z_9e954fb9_dataset_with_access\n\truby_asset_sample_898c3ad7d3cd5c3e2ea132420da283ca\n\ttest_dataset1_1592391068\n\ttest_dataset1_1592391079\n\ttest_dataset1_1592391081\n\ttest_dataset1_1592391087\n\ttest_dataset1_1592391111\n\ttest_dataset1_1592469664\n\ttest_dataset1_1592469668\n\ttest_dataset1_1592469689\n\ttest_dataset1_1592522636\n\ttest_dataset1_1592522814\n\ttest_dataset1_1592525505\n\ttest_dataset1_1592525551\n".
18 runs, 47 assertions, 1 failures, 0 errors, 0 skips
```
|
process
|
bigquery ci samples task consistently fails google cloud bigquery samples task consistently fails this is probably due to an expectation that the added datasets will appear in the first page of results and in reality there are many old datasets failure list datasets test lists datasets in a project expected test to match datasets in project n tgcloud ruby acceptance dataset view n tgcloud ruby acceptance dataset with access n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset location n tgcloud ruby acceptance dataset view n tgcloud ruby acceptance dataset with access n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset location n tgcloud ruby acceptance dataset view n tgcloud ruby acceptance dataset with access n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset location n tgcloud ruby acceptance dataset view n tgcloud ruby acceptance dataset with access n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset location n tgcloud ruby acceptance dataset view n tgcloud ruby acceptance dataset with access n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset location n tgcloud ruby acceptance dataset view n tgcloud ruby acceptance dataset with access n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset location n tgcloud ruby acceptance dataset view n tgcloud ruby acceptance dataset with access n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset n tgcloud ruby acceptance dataset location n tgcloud ruby acceptance dataset view n tgcloud ruby acceptance dataset with access n truby asset sample n ttest n ttest n ttest n ttest n ttest n ttest n ttest n ttest n ttest n ttest n ttest n ttest n runs assertions failures errors skips
| 1
|
10,471
| 13,246,254,586
|
IssuesEvent
|
2020-08-19 15:27:07
|
pystatgen/sgkit
|
https://api.github.com/repos/pystatgen/sgkit
|
closed
|
Investigate mergify
|
process + tools
|
I will close this issue, and create a separate one for `mergify`.
_Originally posted by @ravwojdyla in https://github.com/pystatgen/sgkit/issues/26#issuecomment-673381620_
Related: https://github.com/pystatgen/sgkit/issues/26#issuecomment-655656832
|
1.0
|
Investigate mergify - I will close this issue, and create a separate one for `mergify`.
_Originally posted by @ravwojdyla in https://github.com/pystatgen/sgkit/issues/26#issuecomment-673381620_
Related: https://github.com/pystatgen/sgkit/issues/26#issuecomment-655656832
|
process
|
investigate mergify i will close this issue and create a separate one for mergify originally posted by ravwojdyla in related
| 1
|
18,728
| 24,622,700,670
|
IssuesEvent
|
2022-10-16 05:20:22
|
Open-Data-Product-Initiative/open-data-product-spec-1.1dev
|
https://api.github.com/repos/Open-Data-Product-Initiative/open-data-product-spec-1.1dev
|
opened
|
Check Fair Data Economy "Dataset Terms of Use [Template]" requirements for data products
|
enhancement Unprocessed
|
**Idea Description**
The rulebook for a fair data economy is a guide for creators of fair data economy networks. Agreement templates and other tools make it easier to build and join new data networks which highlight transparency in data sharing.
Review the rule book for the dataset terms of use and find out which parts are already covered by the open data product spec and which are not. Document all the findings. Based on this work we can design needed changes to the Open Data Product Specification. See attached Rule book.
[rulebook-for-a-fair-data-economy-part-2.pdf](https://github.com/Open-Data-Product-Initiative/open-data-product-spec-1.1dev/files/9793651/rulebook-for-a-fair-data-economy-part-2.pdf)
More information about the rule book from here https://www.sitra.fi/en/publications/rulebook-for-a-fair-data-economy/#preface-and-templates
|
1.0
|
Check Fair Data Economy "Dataset Terms of Use [Template]" requirements for data products - **Idea Description**
The rulebook for a fair data economy is a guide for creators of fair data economy networks. Agreement templates and other tools make it easier to build and join new data networks which highlight transparency in data sharing.
Review the rule book for the dataset terms of use and find out which parts are already covered by the open data product spec and which are not. Document all the findings. Based on this work we can design needed changes to the Open Data Product Specification. See attached Rule book.
[rulebook-for-a-fair-data-economy-part-2.pdf](https://github.com/Open-Data-Product-Initiative/open-data-product-spec-1.1dev/files/9793651/rulebook-for-a-fair-data-economy-part-2.pdf)
More information about the rule book from here https://www.sitra.fi/en/publications/rulebook-for-a-fair-data-economy/#preface-and-templates
|
process
|
check fair data economy dataset terms of use requirements for data products idea description the rulebook for a fair data economy is a guide for creators of fair data economy networks agreement templates and other tools make it easier to build and join new data networks which highlight transparency in data sharing review the rule book for the dataset terms of use and find out which parts are already covered by the open data product spec and which are not document all the findings based on this work we can design needed changes to the open data product specification see attached rule book more information about the rule book from here
| 1
|
7,692
| 2,920,215,036
|
IssuesEvent
|
2015-06-24 17:51:02
|
Dolu1990/ElectricalAge
|
https://api.github.com/repos/Dolu1990/ElectricalAge
|
closed
|
Disapearing blocks
|
bug in the pipeline under test
|
I've had a problem that sometimes when I get back on my world all the blocks(but not items) are gone. This issue may be caused by the other mods in the pack but I don't know. The pack by the way is on the AT launcher and is called Buildpak (the code is "buildpak" if you want to try it yourself.) Thanks
|
1.0
|
Disapearing blocks - I've had a problem that sometimes when I get back on my world all the blocks(but not items) are gone. This issue may be caused by the other mods in the pack but I don't know. The pack by the way is on the AT launcher and is called Buildpak (the code is "buildpak" if you want to try it yourself.) Thanks
|
non_process
|
disapearing blocks i ve had a problem that sometimes when i get back on my world all the blocks but not items are gone this issue may be caused by the other mods in the pack but i don t know the pack by the way is on the at launcher and is called buildpak the code is buildpak if you want to try it yourself thanks
| 0
|
3,410
| 6,523,899,111
|
IssuesEvent
|
2017-08-29 10:28:04
|
w3c/w3process
|
https://api.github.com/repos/w3c/w3process
|
closed
|
Director can dismis a AB or TAG participant without giving a cause?
|
Active Process2018Candidate
|
Transferred from https://www.w3.org/community/w3process/track/issues/160
State: Raised
|
1.0
|
Director can dismis a AB or TAG participant without giving a cause? - Transferred from https://www.w3.org/community/w3process/track/issues/160
State: Raised
|
process
|
director can dismis a ab or tag participant without giving a cause transferred from state raised
| 1
|
5,920
| 8,742,309,477
|
IssuesEvent
|
2018-12-12 16:08:16
|
prusa3d/Slic3r
|
https://api.github.com/repos/prusa3d/Slic3r
|
closed
|
[Request] Estimated print time in gcode file name
|
background processing enhancement
|
I love the print time estimation in the new alpha. What would make it even better is if the print time could be incorporated in the filename.
(What would make it downright amazing, is if the print time could be somehow communicated to the printer, and shown on the display during printing.)
|
1.0
|
[Request] Estimated print time in gcode file name - I love the print time estimation in the new alpha. What would make it even better is if the print time could be incorporated in the filename.
(What would make it downright amazing, is if the print time could be somehow communicated to the printer, and shown on the display during printing.)
|
process
|
estimated print time in gcode file name i love the print time estimation in the new alpha what would make it even better is if the print time could be incorporated in the filename what would make it downright amazing is if the print time could be somehow communicated to the printer and shown on the display during printing
| 1
|
258,502
| 8,176,610,359
|
IssuesEvent
|
2018-08-28 08:08:29
|
mozilla/addons-frontend
|
https://api.github.com/repos/mozilla/addons-frontend
|
closed
|
Show a warning when viewing a non-public add-on
|
component: ux contrib: welcome priority: p4 size: S triaged type: papercut
|
### Describe the problem and steps to reproduce it:
<!-- Please include as many details as possible. -->
As an admin, log in to AMO and view the detail page for a disabled add-on
*or*
As a developer, upload an add-on, make it non-public, log in to AMO and view its detail page
### What happened?
You see a complete detail page as if nothing was different
### What did you expect to happen?
You should see a warning saying that the add-on is not public but you're seeing it anyway because you have elevated privileges.
### Anything else we should know?
<!-- Please include a link to the page, screenshots and any relevant files. -->
This has created confusion a couple of times from admins or developers who are not expecting to see a public listing.
It should be pretty easy to fix:
```diff
diff --git a/src/amo/components/Addon/index.js b/src/amo/components/Addon/index.js
index a2d4eec21..52e2b4d4e 100644
--- a/src/amo/components/Addon/index.js
+++ b/src/amo/components/Addon/index.js
@@ -53,6 +53,7 @@ import Card from 'ui/components/Card';
import Icon from 'ui/components/Icon';
import LoadingText from 'ui/components/LoadingText';
import ShowMoreCard from 'ui/components/ShowMoreCard';
+import Notice from 'ui/components/Notice';
import './styles.scss';
@@ -537,6 +538,11 @@ export class AddonBase extends React.Component {
reason={compatibility.reason}
/>
) : null}
+ {addon && addon.status !== 'public' ? (
+ <Notice type="error">
+ {i18n.gettext('This is not a public listing. You are only seeing it because of elevated permissions.')}
+ </Notice>
+ ) : null}
<header className="Addon-header">
{this.headerImage({ compatible: isCompatible })}
```
|
1.0
|
Show a warning when viewing a non-public add-on - ### Describe the problem and steps to reproduce it:
<!-- Please include as many details as possible. -->
As an admin, log in to AMO and view the detail page for a disabled add-on
*or*
As a developer, upload an add-on, make it non-public, log in to AMO and view its detail page
### What happened?
You see a complete detail page as if nothing was different
### What did you expect to happen?
You should see a warning saying that the add-on is not public but you're seeing it anyway because you have elevated privileges.
### Anything else we should know?
<!-- Please include a link to the page, screenshots and any relevant files. -->
This has created confusion a couple of times from admins or developers who are not expecting to see a public listing.
It should be pretty easy to fix:
```diff
diff --git a/src/amo/components/Addon/index.js b/src/amo/components/Addon/index.js
index a2d4eec21..52e2b4d4e 100644
--- a/src/amo/components/Addon/index.js
+++ b/src/amo/components/Addon/index.js
@@ -53,6 +53,7 @@ import Card from 'ui/components/Card';
import Icon from 'ui/components/Icon';
import LoadingText from 'ui/components/LoadingText';
import ShowMoreCard from 'ui/components/ShowMoreCard';
+import Notice from 'ui/components/Notice';
import './styles.scss';
@@ -537,6 +538,11 @@ export class AddonBase extends React.Component {
reason={compatibility.reason}
/>
) : null}
+ {addon && addon.status !== 'public' ? (
+ <Notice type="error">
+ {i18n.gettext('This is not a public listing. You are only seeing it because of elevated permissions.')}
+ </Notice>
+ ) : null}
<header className="Addon-header">
{this.headerImage({ compatible: isCompatible })}
```
|
non_process
|
show a warning when viewing a non public add on describe the problem and steps to reproduce it as an admin log in to amo and view the detail page for a disabled add on or as a developer upload an add on make it non public log in to amo and view its detail page what happened you see a complete detail page as if nothing was different what did you expect to happen you should see a warning saying that the add on is not public but you re seeing it anyway because you have elevated privileges anything else we should know this has created confusion a couple of times from admins or developers who are not expecting to see a public listing it should be pretty easy to fix diff diff git a src amo components addon index js b src amo components addon index js index a src amo components addon index js b src amo components addon index js import card from ui components card import icon from ui components icon import loadingtext from ui components loadingtext import showmorecard from ui components showmorecard import notice from ui components notice import styles scss export class addonbase extends react component reason compatibility reason null addon addon status public gettext this is not a public listing you are only seeing it because of elevated permissions null this headerimage compatible iscompatible
| 0
|
41,349
| 12,831,922,434
|
IssuesEvent
|
2020-07-07 06:37:48
|
rvvergara/fazebuk-api
|
https://api.github.com/repos/rvvergara/fazebuk-api
|
closed
|
CVE-2020-10663 (High) detected in json-2.2.0.gem
|
security vulnerability
|
## CVE-2020-10663 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-2.2.0.gem</b></p></summary>
<p>This is a JSON implementation as a Ruby extension in C.</p>
<p>Library home page: <a href="https://rubygems.org/gems/json-2.2.0.gem">https://rubygems.org/gems/json-2.2.0.gem</a></p>
<p>
Dependency Hierarchy:
- koala-3.0.0.gem (Root Library)
- :x: **json-2.2.0.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/fazebuk-api/commit/87f552cd8f4c8dbd427d01637ba54b85f1c43af1">87f552cd8f4c8dbd427d01637ba54b85f1c43af1</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The JSON gem through 2.2.0 for Ruby, as used in Ruby 2.4 through 2.4.9, 2.5 through 2.5.7, and 2.6 through 2.6.5, has an Unsafe Object Creation Vulnerability. This is quite similar to CVE-2013-0269, but does not rely on poor garbage-collection behavior within Ruby. Specifically, use of JSON parsing methods can lead to creation of a malicious object within the interpreter, with adverse effects that are application-dependent.
<p>Publish Date: 2020-04-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10663>CVE-2020-10663</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/">https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/</a></p>
<p>Release Date: 2020-03-28</p>
<p>Fix Resolution: 2.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-10663 (High) detected in json-2.2.0.gem - ## CVE-2020-10663 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-2.2.0.gem</b></p></summary>
<p>This is a JSON implementation as a Ruby extension in C.</p>
<p>Library home page: <a href="https://rubygems.org/gems/json-2.2.0.gem">https://rubygems.org/gems/json-2.2.0.gem</a></p>
<p>
Dependency Hierarchy:
- koala-3.0.0.gem (Root Library)
- :x: **json-2.2.0.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/fazebuk-api/commit/87f552cd8f4c8dbd427d01637ba54b85f1c43af1">87f552cd8f4c8dbd427d01637ba54b85f1c43af1</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The JSON gem through 2.2.0 for Ruby, as used in Ruby 2.4 through 2.4.9, 2.5 through 2.5.7, and 2.6 through 2.6.5, has an Unsafe Object Creation Vulnerability. This is quite similar to CVE-2013-0269, but does not rely on poor garbage-collection behavior within Ruby. Specifically, use of JSON parsing methods can lead to creation of a malicious object within the interpreter, with adverse effects that are application-dependent.
<p>Publish Date: 2020-04-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10663>CVE-2020-10663</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/">https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/</a></p>
<p>Release Date: 2020-03-28</p>
<p>Fix Resolution: 2.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in json gem cve high severity vulnerability vulnerable library json gem this is a json implementation as a ruby extension in c library home page a href dependency hierarchy koala gem root library x json gem vulnerable library found in head commit a href vulnerability details the json gem through for ruby as used in ruby through through and through has an unsafe object creation vulnerability this is quite similar to cve but does not rely on poor garbage collection behavior within ruby specifically use of json parsing methods can lead to creation of a malicious object within the interpreter with adverse effects that are application dependent publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
171,874
| 27,192,443,553
|
IssuesEvent
|
2023-02-19 23:31:46
|
hackforla/expunge-assist
|
https://api.github.com/repos/hackforla/expunge-assist
|
closed
|
[Audit Colors] in Onboarding flow in LG
|
role: design priority: high size: 1pt feature: figma design system feature: figma wireframes
|
### Overview
Extension of #823 . Audit mobile and desktop wireframes under "Design System Audit" on Figma.
### Action Items
- [x] Select each individual "atom" on a wireframe
- [x] Draw an arrow to each "atom" and type out the HEX code or Color style being used
- [x] If "atom" is not using a color style, use a yellow (HEX # #F6841B) arrow and note the HEX code being used
- [x] Note any inconsistencies to share with the team
- [x] Close issue
### Resources/Instructions
- Includes "Welcome", "Before you begin", "Advice" pages
- Ask @anitadesigns or @RenaNicole if you have any questions
|
2.0
|
[Audit Colors] in Onboarding flow in LG - ### Overview
Extension of #823 . Audit mobile and desktop wireframes under "Design System Audit" on Figma.
### Action Items
- [x] Select each individual "atom" on a wireframe
- [x] Draw an arrow to each "atom" and type out the HEX code or Color style being used
- [x] If "atom" is not using a color style, use a yellow (HEX # #F6841B) arrow and note the HEX code being used
- [x] Note any inconsistencies to share with the team
- [x] Close issue
### Resources/Instructions
- Includes "Welcome", "Before you begin", "Advice" pages
- Ask @anitadesigns or @RenaNicole if you have any questions
|
non_process
|
in onboarding flow in lg overview extension of audit mobile and desktop wireframes under design system audit on figma action items select each individual atom on a wireframe draw an arrow to each atom and type out the hex code or color style being used if atom is not using a color style use a yellow hex arrow and note the hex code being used note any inconsistencies to share with the team close issue resources instructions includes welcome before you begin advice pages ask anitadesigns or renanicole if you have any questions
| 0
|
67,034
| 8,070,547,671
|
IssuesEvent
|
2018-08-06 10:05:56
|
JohnSegerstedt/Game1
|
https://api.github.com/repos/JohnSegerstedt/Game1
|
closed
|
Redesign StartMenu animation to match that of the game
|
redesign shelved
|
**ACCEPTANCE CRITERIA:**
* The player shapes share the same Material as the real player models.
* The ground shares the same Material as the real ground model.
* The edges of the ground has been correctly added.
|
1.0
|
Redesign StartMenu animation to match that of the game - **ACCEPTANCE CRITERIA:**
* The player shapes share the same Material as the real player models.
* The ground shares the same Material as the real ground model.
* The edges of the ground has been correctly added.
|
non_process
|
redesign startmenu animation to match that of the game acceptance criteria the player shapes share the same material as the real player models the ground shares the same material as the real ground model the edges of the ground has been correctly added
| 0
|
3,360
| 6,487,982,652
|
IssuesEvent
|
2017-08-20 13:19:17
|
gaocegege/Processing.R
|
https://api.github.com/repos/gaocegege/Processing.R
|
closed
|
Cast more functions from double to int
|
community/processing difficulty/low priority/p1 size/small status/to-be-claimed type/enhancement
|
- Anything named "mode":
- blendMode()
- colorMode()
- ellipseMode()
- imageMode()
- rectMode()
- shapeMode()
- textureMode()
- textMode()
- Other configuration-setting functions:
- pixelDensity()
- strokeCap()
- strokeJoin()
- textureWrap()
|
1.0
|
Cast more functions from double to int - - Anything named "mode":
- blendMode()
- colorMode()
- ellipseMode()
- imageMode()
- rectMode()
- shapeMode()
- textureMode()
- textMode()
- Other configuration-setting functions:
- pixelDensity()
- strokeCap()
- strokeJoin()
- textureWrap()
|
process
|
cast more functions from double to int anything named mode blendmode colormode ellipsemode imagemode rectmode shapemode texturemode textmode other configuration setting functions pixeldensity strokecap strokejoin texturewrap
| 1
|
18,195
| 24,248,270,342
|
IssuesEvent
|
2022-09-27 12:24:42
|
quark-engine/quark-engine
|
https://api.github.com/repos/quark-engine/quark-engine
|
closed
|
Core Member Page Update
|
issue-processing-state-06
|
Hey guys,
I think we should update the [core member page](https://quark-engine.readthedocs.io/en/latest/contribution.html#core-members).
Suggested Categories are:
* Core Member
* Alumni
* Consultant
|
1.0
|
Core Member Page Update - Hey guys,
I think we should update the [core member page](https://quark-engine.readthedocs.io/en/latest/contribution.html#core-members).
Suggested Categories are:
* Core Member
* Alumni
* Consultant
|
process
|
core member page update hey guys i think we should update the suggested categories are core member alumni consultant
| 1
|
14,658
| 17,783,604,584
|
IssuesEvent
|
2021-08-31 08:26:07
|
googleapis/google-cloud-dotnet
|
https://api.github.com/repos/googleapis/google-cloud-dotnet
|
closed
|
Dependency Dashboard
|
priority: p2 type: process
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/microsoft.netframework.referenceassemblies-1.x -->[chore(deps): update dependency microsoft.netframework.referenceassemblies to v1.0.2](../pull/6811)
- [ ] <!-- recreate-branch=renovate/microsoft.aspnetcore.mvc.core-2.x -->[chore(deps): update dependency microsoft.aspnetcore.mvc.core to v2.2.5](../pull/6895)
- [ ] <!-- recreate-branch=renovate/xunit.combinatorial-1.x -->[chore(deps): update dependency xunit.combinatorial to v1.4.1](../pull/6857)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/microsoft.netframework.referenceassemblies-1.x -->[chore(deps): update dependency microsoft.netframework.referenceassemblies to v1.0.2](../pull/6811)
- [ ] <!-- recreate-branch=renovate/microsoft.aspnetcore.mvc.core-2.x -->[chore(deps): update dependency microsoft.aspnetcore.mvc.core to v2.2.5](../pull/6895)
- [ ] <!-- recreate-branch=renovate/xunit.combinatorial-1.x -->[chore(deps): update dependency xunit.combinatorial to v1.4.1](../pull/6857)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue provides visibility into renovate updates and their statuses ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull pull check this box to trigger a request for renovate to run again on this repository
| 1
|
3,152
| 6,204,369,976
|
IssuesEvent
|
2017-07-06 14:05:23
|
SpongePowered/Mixin
|
https://api.github.com/repos/SpongePowered/Mixin
|
closed
|
Equivalent of @Override/@Overwrite for non-obfuscated methods
|
accepted annotation processor core enhancement
|
Currently there is no way to do any compiletime checks to make sure the method I'm overwriting exists when it's non obfuscated one/doesn't have mappings.
|
1.0
|
Equivalent of @Override/@Overwrite for non-obfuscated methods - Currently there is no way to do any compiletime checks to make sure the method I'm overwriting exists when it's non obfuscated one/doesn't have mappings.
|
process
|
equivalent of override overwrite for non obfuscated methods currently there is no way to do any compiletime checks to make sure the method i m overwriting exists when it s non obfuscated one doesn t have mappings
| 1
|
235,279
| 7,736,098,237
|
IssuesEvent
|
2018-05-27 22:26:05
|
WohlSoft/PGE-Project
|
https://api.github.com/repos/WohlSoft/PGE-Project
|
opened
|
[Editor] Plugins system
|
Priority - major enhancement
|
Editor even has the complex and wide functionality, will be more handy with having the plugins system. The current base of the Editor is not so good for implementation of the deep plugins system. Deep plugins system means the giving a complex binding into JavaScript which will allow plugins to manipulate editor's internals. That will be possible after the comming internal architecture restructurization.
The only is possible with current base to implement next things:
# Extra properties of every element
- [ ] Implement the access to the custom properties set stored in the special meta field inside of every element that will be saved into LVLX files.
- [ ] Inside of every element INI file, provide the ability to give another INI (or JSON) file that will declare the list of extra settings elements are will be shown in item properties toolbox or in the separated toolbox which will be openable from the main properties toolbox or from out of the context menu. Custom properties library taken by Kevsoft for some experiments can be used.
- [ ] Give some JavaScript events and bindings to allow pre-process data and fields of modifying elements.
|
1.0
|
[Editor] Plugins system - Editor even has the complex and wide functionality, will be more handy with having the plugins system. The current base of the Editor is not so good for implementation of the deep plugins system. Deep plugins system means the giving a complex binding into JavaScript which will allow plugins to manipulate editor's internals. That will be possible after the comming internal architecture restructurization.
The only is possible with current base to implement next things:
# Extra properties of every element
- [ ] Implement the access to the custom properties set stored in the special meta field inside of every element that will be saved into LVLX files.
- [ ] Inside of every element INI file, provide the ability to give another INI (or JSON) file that will declare the list of extra settings elements are will be shown in item properties toolbox or in the separated toolbox which will be openable from the main properties toolbox or from out of the context menu. Custom properties library taken by Kevsoft for some experiments can be used.
- [ ] Give some JavaScript events and bindings to allow pre-process data and fields of modifying elements.
|
non_process
|
plugins system editor even has the complex and wide functionality will be more handy with having the plugins system the current base of the editor is not so good for implementation of the deep plugins system deep plugins system means the giving a complex binding into javascript which will allow plugins to manipulate editor s internals that will be possible after the comming internal architecture restructurization the only is possible with current base to implement next things extra properties of every element implement the access to the custom properties set stored in the special meta field inside of every element that will be saved into lvlx files inside of every element ini file provide the ability to give another ini or json file that will declare the list of extra settings elements are will be shown in item properties toolbox or in the separated toolbox which will be openable from the main properties toolbox or from out of the context menu custom properties library taken by kevsoft for some experiments can be used give some javascript events and bindings to allow pre process data and fields of modifying elements
| 0
|
143,986
| 5,533,658,362
|
IssuesEvent
|
2017-03-21 13:52:14
|
mozilla/addons-server
|
https://api.github.com/repos/mozilla/addons-server
|
closed
|
When confirming the delete of an add-on, the entered text should escape space in front and at the back
|
priority: enhancement
|
*Environment*
Firefox: Nightly 54.0a1
Server: -dev
OS: Windows 10
*Steps to reproduce*
1. Go to AMO page.
2. Go to Tools > Manage My Submissions and click on an add-on.
3. Go to Manage & Status versions.
4. Click Delete Add-on button.
5. On the confirmation prompt, enter the required text with spaces in front and/or at back and confirm the operation. Observe the behavior.
*Expected results*
The spaces should be escaped and the operation should be completed successfully.
*Actual results*
The spaces are treated as characters so the string won’t match, thus an error is thrown for entering the correct text.
*Notes*
1. Reproducibility: 5/5.
2. The issue is reproducible also on production on release build (51.0.1).
|
1.0
|
When confirming the delete of an add-on, the entered text should escape space in front and at the back - *Environment*
Firefox: Nightly 54.0a1
Server: -dev
OS: Windows 10
*Steps to reproduce*
1. Go to AMO page.
2. Go to Tools > Manage My Submissions and click on an add-on.
3. Go to Manage & Status versions.
4. Click Delete Add-on button.
5. On the confirmation prompt, enter the required text with spaces in front and/or at back and confirm the operation. Observe the behavior.
*Expected results*
The spaces should be escaped and the operation should be completed successfully.
*Actual results*
The spaces are treated as characters so the string won’t match, thus an error is thrown for entering the correct text.
*Notes*
1. Reproducibility: 5/5.
2. The issue is reproducible also on production on release build (51.0.1).
|
non_process
|
when confirming the delete of an add on the entered text should escape space in front and at the back environment firefox nightly server dev os windows steps to reproduce go to amo page go to tools manage my submissions and click on an add on go to manage status versions click delete add on button on the confirmation prompt enter the required text with spaces in front and or at back and confirm the operation observe the behavior expected results the spaces should be escaped and the operation should be completed successfully actual results the spaces are treated as characters so the string won’t match thus an error is thrown for entering the correct text notes reproducibility the issue is reproducible also on production on release build
| 0
|
136,171
| 11,042,473,373
|
IssuesEvent
|
2019-12-09 09:14:31
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
[Failing Test] gce-master-scale-correctness on 09/07
|
kind/failing-test lifecycle/stale priority/important-soon sig/scalability
|
<!-- Please only use this template for submitting reports about failing tests in Kubernetes CI jobs -->
**Which jobs are failing**:
UP - cluster creation
**Which test(s) are failing**:
gce-master-scale-correctness
**Since when has it been failing**:
on 09/07 only, next run has passed (hence this is not urgent)
**Testgrid link**:
https://k8s-testgrid.appspot.com/sig-scalability-gce#gce-master-scale-correctness
**Reason for failure**:
control plane didn't become healthy
**Anything else we need to know**:
-
/priority important-soon
/sig scalability
|
1.0
|
[Failing Test] gce-master-scale-correctness on 09/07 - <!-- Please only use this template for submitting reports about failing tests in Kubernetes CI jobs -->
**Which jobs are failing**:
UP - cluster creation
**Which test(s) are failing**:
gce-master-scale-correctness
**Since when has it been failing**:
on 09/07 only, next run has passed (hence this is not urgent)
**Testgrid link**:
https://k8s-testgrid.appspot.com/sig-scalability-gce#gce-master-scale-correctness
**Reason for failure**:
control plane didn't become healthy
**Anything else we need to know**:
-
/priority important-soon
/sig scalability
|
non_process
|
gce master scale correctness on which jobs are failing up cluster creation which test s are failing gce master scale correctness since when has it been failing on only next run has passed hence this is not urgent testgrid link reason for failure control plane didn t become healthy anything else we need to know priority important soon sig scalability
| 0
|
190,841
| 14,580,335,504
|
IssuesEvent
|
2020-12-18 09:00:02
|
kalexmills/github-vet-tests-dec2020
|
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
|
closed
|
outscale-dev/terraform-provider-outscale: outscale/resource_outscale_security_group_rule_test.go; 54 LoC
|
fresh medium test
|
Found a possible issue in [outscale-dev/terraform-provider-outscale](https://www.github.com/outscale-dev/terraform-provider-outscale) at [outscale/resource_outscale_security_group_rule_test.go](https://github.com/outscale-dev/terraform-provider-outscale/blob/564eaff8e3afe2729fcbc4db1793dfbd5c282c0d/outscale/resource_outscale_security_group_rule_test.go#L105-L158)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to r is reassigned at line 157
[Click here to see the code in its original context.](https://github.com/outscale-dev/terraform-provider-outscale/blob/564eaff8e3afe2729fcbc4db1793dfbd5c282c0d/outscale/resource_outscale_security_group_rule_test.go#L105-L158)
<details>
<summary>Click here to show the 54 line(s) of Go which triggered the analyzer.</summary>
```go
for _, r := range rules {
if p.GetToPortRange() != r.GetToPortRange() {
continue
}
if p.GetFromPortRange() != r.GetFromPortRange() {
continue
}
if p.GetIpProtocol() != r.GetIpProtocol() {
continue
}
remaining := len(p.GetIpRanges())
for _, ip := range p.GetIpRanges() {
for _, rip := range r.GetIpRanges() {
if ip == rip {
remaining--
}
}
}
if remaining > 0 {
continue
}
remaining = len(p.GetSecurityGroupsMembers())
for _, ip := range p.GetSecurityGroupsMembers() {
for _, rip := range r.GetSecurityGroupsMembers() {
if ip.GetSecurityGroupId() == rip.GetSecurityGroupId() {
remaining--
}
}
}
if remaining > 0 {
continue
}
remaining = len(p.GetServiceIds())
for _, pip := range p.GetServiceIds() {
for _, rpip := range r.GetServiceIds() {
if pip == rpip {
remaining--
}
}
}
if remaining > 0 {
continue
}
matchingRule = &r
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 564eaff8e3afe2729fcbc4db1793dfbd5c282c0d
|
1.0
|
outscale-dev/terraform-provider-outscale: outscale/resource_outscale_security_group_rule_test.go; 54 LoC -
Found a possible issue in [outscale-dev/terraform-provider-outscale](https://www.github.com/outscale-dev/terraform-provider-outscale) at [outscale/resource_outscale_security_group_rule_test.go](https://github.com/outscale-dev/terraform-provider-outscale/blob/564eaff8e3afe2729fcbc4db1793dfbd5c282c0d/outscale/resource_outscale_security_group_rule_test.go#L105-L158)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to r is reassigned at line 157
[Click here to see the code in its original context.](https://github.com/outscale-dev/terraform-provider-outscale/blob/564eaff8e3afe2729fcbc4db1793dfbd5c282c0d/outscale/resource_outscale_security_group_rule_test.go#L105-L158)
<details>
<summary>Click here to show the 54 line(s) of Go which triggered the analyzer.</summary>
```go
for _, r := range rules {
if p.GetToPortRange() != r.GetToPortRange() {
continue
}
if p.GetFromPortRange() != r.GetFromPortRange() {
continue
}
if p.GetIpProtocol() != r.GetIpProtocol() {
continue
}
remaining := len(p.GetIpRanges())
for _, ip := range p.GetIpRanges() {
for _, rip := range r.GetIpRanges() {
if ip == rip {
remaining--
}
}
}
if remaining > 0 {
continue
}
remaining = len(p.GetSecurityGroupsMembers())
for _, ip := range p.GetSecurityGroupsMembers() {
for _, rip := range r.GetSecurityGroupsMembers() {
if ip.GetSecurityGroupId() == rip.GetSecurityGroupId() {
remaining--
}
}
}
if remaining > 0 {
continue
}
remaining = len(p.GetServiceIds())
for _, pip := range p.GetServiceIds() {
for _, rpip := range r.GetServiceIds() {
if pip == rpip {
remaining--
}
}
}
if remaining > 0 {
continue
}
matchingRule = &r
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 564eaff8e3afe2729fcbc4db1793dfbd5c282c0d
|
non_process
|
outscale dev terraform provider outscale outscale resource outscale security group rule test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to r is reassigned at line click here to show the line s of go which triggered the analyzer go for r range rules if p gettoportrange r gettoportrange continue if p getfromportrange r getfromportrange continue if p getipprotocol r getipprotocol continue remaining len p getipranges for ip range p getipranges for rip range r getipranges if ip rip remaining if remaining continue remaining len p getsecuritygroupsmembers for ip range p getsecuritygroupsmembers for rip range r getsecuritygroupsmembers if ip getsecuritygroupid rip getsecuritygroupid remaining if remaining continue remaining len p getserviceids for pip range p getserviceids for rpip range r getserviceids if pip rpip remaining if remaining continue matchingrule r leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 0
|
149,135
| 19,566,059,704
|
IssuesEvent
|
2022-01-04 00:36:29
|
opensearch-project/OpenSearch-Dashboards
|
https://api.github.com/repos/opensearch-project/OpenSearch-Dashboards
|
opened
|
CVE-2021-23490 (Medium) detected in parse-link-header-1.0.1.tgz
|
security vulnerability
|
## CVE-2021-23490 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-link-header-1.0.1.tgz</b></p></summary>
<p>Parses a link header and returns paging information for each contained link.</p>
<p>Library home page: <a href="https://registry.npmjs.org/parse-link-header/-/parse-link-header-1.0.1.tgz">https://registry.npmjs.org/parse-link-header/-/parse-link-header-1.0.1.tgz</a></p>
<p>
Dependency Hierarchy:
- @osd/test-1.0.0.tgz (Root Library)
- :x: **parse-link-header-1.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/OpenSearch-Dashboards/commit/4fd064970b66ce555f48c22dfab6ed965d0e260a">4fd064970b66ce555f48c22dfab6ed965d0e260a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package parse-link-header before 2.0.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the checkHeader function.
<p>Publish Date: 2021-12-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23490>CVE-2021-23490</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23490">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23490</a></p>
<p>Release Date: 2021-12-24</p>
<p>Fix Resolution: parse-link-header - 2.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"parse-link-header","packageVersion":"1.0.1","packageFilePaths":[null],"isTransitiveDependency":true,"dependencyTree":"@osd/test:1.0.0;parse-link-header:1.0.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"parse-link-header - 2.0.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-23490","vulnerabilityDetails":"The package parse-link-header before 2.0.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the checkHeader function.\r\n\r\n","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23490","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23490 (Medium) detected in parse-link-header-1.0.1.tgz - ## CVE-2021-23490 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-link-header-1.0.1.tgz</b></p></summary>
<p>Parses a link header and returns paging information for each contained link.</p>
<p>Library home page: <a href="https://registry.npmjs.org/parse-link-header/-/parse-link-header-1.0.1.tgz">https://registry.npmjs.org/parse-link-header/-/parse-link-header-1.0.1.tgz</a></p>
<p>
Dependency Hierarchy:
- @osd/test-1.0.0.tgz (Root Library)
- :x: **parse-link-header-1.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/OpenSearch-Dashboards/commit/4fd064970b66ce555f48c22dfab6ed965d0e260a">4fd064970b66ce555f48c22dfab6ed965d0e260a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package parse-link-header before 2.0.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the checkHeader function.
<p>Publish Date: 2021-12-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23490>CVE-2021-23490</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23490">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23490</a></p>
<p>Release Date: 2021-12-24</p>
<p>Fix Resolution: parse-link-header - 2.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"parse-link-header","packageVersion":"1.0.1","packageFilePaths":[null],"isTransitiveDependency":true,"dependencyTree":"@osd/test:1.0.0;parse-link-header:1.0.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"parse-link-header - 2.0.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-23490","vulnerabilityDetails":"The package parse-link-header before 2.0.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the checkHeader function.\r\n\r\n","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23490","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in parse link header tgz cve medium severity vulnerability vulnerable library parse link header tgz parses a link header and returns paging information for each contained link library home page a href dependency hierarchy osd test tgz root library x parse link header tgz vulnerable library found in head commit a href found in base branch main vulnerability details the package parse link header before are vulnerable to regular expression denial of service redos via the checkheader function publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution parse link header isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree osd test parse link header isminimumfixversionavailable true minimumfixversion parse link header isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the package parse link header before are vulnerable to regular expression denial of service redos via the checkheader function r n r n vulnerabilityurl
| 0
|
62,701
| 17,154,944,166
|
IssuesEvent
|
2021-07-14 05:02:41
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
FVFluxBCs dont allow for 0 dofs, which happens for neighbor facetypes
|
C: MOOSE P: normal T: defect
|
## Bug Description
Hitting an assert in FVFluxBC before computing the Jacobians which wants the number of dofs to be 1. But the variable has 0 dofs on the neighbor when it's not defined there
## Steps to Reproduce
Block restricted finite volume calculation, with a flux BC such as free slip conditions
## Impact
Cant debug application without disabling or modifying the assert. The latter is in a PR coming soon
|
1.0
|
FVFluxBCs dont allow for 0 dofs, which happens for neighbor facetypes - ## Bug Description
Hitting an assert in FVFluxBC before computing the Jacobians which wants the number of dofs to be 1. But the variable has 0 dofs on the neighbor when it's not defined there
## Steps to Reproduce
Block restricted finite volume calculation, with a flux BC such as free slip conditions
## Impact
Cant debug application without disabling or modifying the assert. The latter is in a PR coming soon
|
non_process
|
fvfluxbcs dont allow for dofs which happens for neighbor facetypes bug description hitting an assert in fvfluxbc before computing the jacobians which wants the number of dofs to be but the variable has dofs on the neighbor when it s not defined there steps to reproduce block restricted finite volume calculation with a flux bc such as free slip conditions impact cant debug application without disabling or modifying the assert the latter is in a pr coming soon
| 0
|
35,123
| 6,415,490,344
|
IssuesEvent
|
2017-08-08 12:55:55
|
zalando-incubator/zally
|
https://api.github.com/repos/zalando-incubator/zally
|
closed
|
Prepare Release 1.1
|
documentation technical-task
|
In order to prepare Release 1.1 we need to do the following:
- [x] Prepare release notes
- [x] Compile builds for Golang CLI tool for Windows, Linux and OSX
- [x] Check that basic workflows are working
- [x] Test Linux and OSX clients (sorry, we don't use Windows)
- [x] Update README / gh-pages
- [x] Publish release on GH (and push v1.1.0 tag)
- [x] Check what can be already done from #418
- [x] Send an email to Zalando mailing lists (@maxim-tschumak)
**Target date:** 1.08.2017
|
1.0
|
Prepare Release 1.1 - In order to prepare Release 1.1 we need to do the following:
- [x] Prepare release notes
- [x] Compile builds for Golang CLI tool for Windows, Linux and OSX
- [x] Check that basic workflows are working
- [x] Test Linux and OSX clients (sorry, we don't use Windows)
- [x] Update README / gh-pages
- [x] Publish release on GH (and push v1.1.0 tag)
- [x] Check what can be already done from #418
- [x] Send an email to Zalando mailing lists (@maxim-tschumak)
**Target date:** 1.08.2017
|
non_process
|
prepare release in order to prepare release we need to do the following prepare release notes compile builds for golang cli tool for windows linux and osx check that basic workflows are working test linux and osx clients sorry we don t use windows update readme gh pages publish release on gh and push tag check what can be already done from send an email to zalando mailing lists maxim tschumak target date
| 0
|
143,415
| 13,063,340,736
|
IssuesEvent
|
2020-07-30 16:23:29
|
muesli/markscribe
|
https://api.github.com/repos/muesli/markscribe
|
closed
|
No build and deploy instructions
|
documentation
|
Hey,
So as a non-go developer, I had to figure out on my own that:
1. Requirements: You need Go > 1.14 (1.13 fails due to some dependency).
2. You need to get a private github token for your profile from ``top right profile icon>settings>developer options>Personal access tokens``
3. Run using (after cloning and cd in to the repo):
```
GITHUB_TOKEN=xxxxx go run . templates/github-profile.tpl
```
Things I didn't figure out:
1.
There is no info on how to automate this without exposing the github token - any info on how you did that?
2. There is no information on what permissions to give the Personal access token, that would be helpful as input here to.
Would be happy to PR more documentation once I understand if I am using this correctly - and once I have answers to what I didn't figure out.
|
1.0
|
No build and deploy instructions - Hey,
So as a non-go developer, I had to figure out on my own that:
1. Requirements: You need Go > 1.14 (1.13 fails due to some dependency).
2. You need to get a private github token for your profile from ``top right profile icon>settings>developer options>Personal access tokens``
3. Run using (after cloning and cd in to the repo):
```
GITHUB_TOKEN=xxxxx go run . templates/github-profile.tpl
```
Things I didn't figure out:
1.
There is no info on how to automate this without exposing the github token - any info on how you did that?
2. There is no information on what permissions to give the Personal access token, that would be helpful as input here to.
Would be happy to PR more documentation once I understand if I am using this correctly - and once I have answers to what I didn't figure out.
|
non_process
|
no build and deploy instructions hey so as a non go developer i had to figure out on my own that requirements you need go fails due to some dependency you need to get a private github token for your profile from top right profile icon settings developer options personal access tokens run using after cloning and cd in to the repo github token xxxxx go run templates github profile tpl things i didn t figure out there is no info on how to automate this without exposing the github token any info on how you did that there is no information on what permissions to give the personal access token that would be helpful as input here to would be happy to pr more documentation once i understand if i am using this correctly and once i have answers to what i didn t figure out
| 0
|
7,299
| 10,443,004,040
|
IssuesEvent
|
2019-09-18 14:07:07
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
opened
|
PH Calc: Control Air:Fuel Ratio
|
Calculator Process Heating
|
Develop calculator from ESC calculator.
Excel file found in Dropbox > AMO Tools > Other Tools > Energy Solutions Center Tools escenter.org > No 27 Air-fuel ratio
|
1.0
|
PH Calc: Control Air:Fuel Ratio - Develop calculator from ESC calculator.
Excel file found in Dropbox > AMO Tools > Other Tools > Energy Solutions Center Tools escenter.org > No 27 Air-fuel ratio
|
process
|
ph calc control air fuel ratio develop calculator from esc calculator excel file found in dropbox amo tools other tools energy solutions center tools escenter org no air fuel ratio
| 1
|
17,587
| 23,402,529,416
|
IssuesEvent
|
2022-08-12 09:26:29
|
benthosdev/benthos
|
https://api.github.com/repos/benthosdev/benthos
|
closed
|
How to correctly get the bytes in the protobuf file
|
processors needs more info
|
I try to get data from kafka. The messages are protobuf encoded. I am trying to use the Benthos protobuf processor. The type of data in protobuf is bytes, but the output is string, like "data":"CoKgAQp4EAEYAiDk7M2XBii".
## yaml file
```yaml
input:
kafka:
pipeline:
processors:
- protobuf:
operator: to_json
message: subscribe.Envelope
import_paths: [ subscribe/subscribe.proto ]
output:
stdout: {}
```
## Envelop
```protobuf
message Envelope {
int32 version = 1;
uint32 total = 2;
uint32 index = 3;
bytes data = 4;
}
```
How can I get bytes in protobuf?
|
1.0
|
How to correctly get the bytes in the protobuf file - I try to get data from kafka. The messages are protobuf encoded. I am trying to use the Benthos protobuf processor. The type of data in protobuf is bytes, but the output is string, like "data":"CoKgAQp4EAEYAiDk7M2XBii".
## yaml file
```yaml
input:
kafka:
pipeline:
processors:
- protobuf:
operator: to_json
message: subscribe.Envelope
import_paths: [ subscribe/subscribe.proto ]
output:
stdout: {}
```
## Envelop
```protobuf
message Envelope {
int32 version = 1;
uint32 total = 2;
uint32 index = 3;
bytes data = 4;
}
```
How can I get bytes in protobuf?
|
process
|
how to correctly get the bytes in the protobuf file i try to get data from kafka the messages are protobuf encoded i am trying to use the benthos protobuf processor the type of data in protobuf is bytes but the output is string like data yaml file yaml input kafka pipeline processors protobuf operator to json message subscribe envelope import paths output stdout envelop protobuf message envelope version total index bytes data how can i get bytes in protobuf
| 1
|
821,927
| 30,843,422,998
|
IssuesEvent
|
2023-08-02 12:17:19
|
AlexandreSajus/TalkToTaipy
|
https://api.github.com/repos/AlexandreSajus/TalkToTaipy
|
opened
|
Dataset preprocessing
|
bug Priority
|
Uploading your dataset and using TTT often won't work because of the following issues:
- there is no way to specify separator and encoding within the app
- datasets require some preprocessing before being used: columns with values like "33,000" or "12%" must be converted to integers 33000 and 12 before PandasAI can work on them. Dates need to be converted to pd.datetime.
I am still trying to figure out an idea for a fix because adding features within the app to solve these issues would heavily complexify the app.
|
1.0
|
Dataset preprocessing - Uploading your dataset and using TTT often won't work because of the following issues:
- there is no way to specify separator and encoding within the app
- datasets require some preprocessing before being used: columns with values like "33,000" or "12%" must be converted to integers 33000 and 12 before PandasAI can work on them. Dates need to be converted to pd.datetime.
I am still trying to figure out an idea for a fix because adding features within the app to solve these issues would heavily complexify the app.
|
non_process
|
dataset preprocessing uploading your dataset and using ttt often won t work because of the following issues there is no way to specify separator and encoding within the app datasets require some preprocessing before being used columns with values like or must be converted to integers and before pandasai can work on them dates need to be converted to pd datetime i am still trying to figure out an idea for a fix because adding features within the app to solve these issues would heavily complexify the app
| 0
|
59,235
| 3,104,343,743
|
IssuesEvent
|
2015-08-31 15:23:25
|
openshift/origin
|
https://api.github.com/repos/openshift/origin
|
closed
|
Pre-build console templates into $templateCache to reduce ajax requests
|
area/performance component/web kind/enhancement priority/P3
|
There is an npm package called [grunt-angular-templates](https://www.npmjs.com/package/grunt-angular-templates) that will pre-compile templates into JavaScript functions that can be registered into Angular's $templateCache & reduce ajax requests while the app is running. This could be a useful performance optimization.
|
1.0
|
Pre-build console templates into $templateCache to reduce ajax requests - There is an npm package called [grunt-angular-templates](https://www.npmjs.com/package/grunt-angular-templates) that will pre-compile templates into JavaScript functions that can be registered into Angular's $templateCache & reduce ajax requests while the app is running. This could be a useful performance optimization.
|
non_process
|
pre build console templates into templatecache to reduce ajax requests there is an npm package called that will pre compile templates into javascript functions that can be registered into angular s templatecache reduce ajax requests while the app is running this could be a useful performance optimization
| 0
|
182,619
| 6,671,718,693
|
IssuesEvent
|
2017-10-04 08:37:39
|
Rsl1122/Plan-PlayerAnalytics
|
https://api.github.com/repos/Rsl1122/Plan-PlayerAnalytics
|
closed
|
Duplicate sessions
|
Bug Priority: HIGH status: Done
|
Spigot 1.12.2
PLAN 4.0.1
Duplicate sessions (and bad analysis):
```
Player | Started | Length | World - Time
rambeau | Oct 02, 14:29 | 1h 39m 6s | world_blackdog (100%)
rambeau | Oct 02, 10:09 | 10s | world_blackdog (100%)
rambeau | Oct 02, 09:26 | 37m 45s | - (�%)
rambeau | Oct 02, 09:26 | 37m 45s | world_blackdog (100%)
rambeau | Oct 02, 09:26 | 37m 45s | - (�%)
rambeau | Oct 02, 09:25 | 28s | world_blackdog (100%)
```
|
1.0
|
Duplicate sessions - Spigot 1.12.2
PLAN 4.0.1
Duplicate sessions (and bad analysis):
```
Player | Started | Length | World - Time
rambeau | Oct 02, 14:29 | 1h 39m 6s | world_blackdog (100%)
rambeau | Oct 02, 10:09 | 10s | world_blackdog (100%)
rambeau | Oct 02, 09:26 | 37m 45s | - (�%)
rambeau | Oct 02, 09:26 | 37m 45s | world_blackdog (100%)
rambeau | Oct 02, 09:26 | 37m 45s | - (�%)
rambeau | Oct 02, 09:25 | 28s | world_blackdog (100%)
```
|
non_process
|
duplicate sessions spigot plan duplicate sessions and bad analysis player started length world time rambeau oct world blackdog rambeau oct world blackdog rambeau oct � rambeau oct world blackdog rambeau oct � rambeau oct world blackdog
| 0
|
196,117
| 14,813,570,058
|
IssuesEvent
|
2021-01-14 02:24:32
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
sql/logictest: TestLogic/fakedist-disk/unique/Insert failed
|
C-test-failure O-robot branch-master
|
[(sql/logictest).TestLogic failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2581156&tab=buildLog) on [master@0ddbd7642e3de27442a0942e12868e5e6a518b11](https://github.com/cockroachdb/cockroach/commits/0ddbd7642e3de27442a0942e12868e5e6a518b11):
```
=== RUN TestLogic
test_log_scope.go:72: test logs captured to: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestLogic005641966
test_log_scope.go:73: use -show-logs to present logs inline
=== CONT TestLogic
logic.go:3294: -- test log scope end --
test logs left over in: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestLogic005641966
--- FAIL: TestLogic (547.13s)
=== RUN TestLogic/fakedist-disk/unique/Insert
=== CONT TestLogic/fakedist-disk/unique/Insert
logic.go:1970:
testdata/logic_test/unique:195:
expected:
pq: insert on table "uniq_fk_child" violates foreign key constraint "fk_b_ref_uniq_fk_parent"\nDETAIL: Key \(b, c\)=\(3, 3\) is not present in table "uniq_fk_parent"\.
got:
pq: insert on table "uniq_fk_child" violates foreign key constraint "fk_b_ref_uniq_fk_parent"
DETAIL: Key (b, c)=(4, 4) is not present in table "uniq_fk_parent".
E210113 21:08:33.185886 6609071 sql/logictest/logic.go:3501
testdata/logic_test/unique:198: too many errors encountered, skipping the rest of the input
logic.go:1802:
testdata/logic_test/unique:198: too many errors encountered, skipping the rest of the input
--- done: testdata/logic_test/unique with config fakedist-disk: 33 tests, 2 failures
--- FAIL: TestLogic/fakedist-disk/unique/Insert (0.17s)
=== RUN TestLogic/fakedist-disk/unique
=== PAUSE TestLogic/fakedist-disk/unique
=== CONT TestLogic/fakedist-disk/unique
--- FAIL: TestLogic/fakedist-disk/unique (1.42s)
=== RUN TestLogic/fakedist-disk
--- FAIL: TestLogic/fakedist-disk (0.47s)
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestLogic PKG=./pkg/sql/logictest TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
Related:
- #58363 sql/logictest: TestLogic failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.2)
- #58269 sql/logictest: TestLogic failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestLogic.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
1.0
|
sql/logictest: TestLogic/fakedist-disk/unique/Insert failed - [(sql/logictest).TestLogic failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2581156&tab=buildLog) on [master@0ddbd7642e3de27442a0942e12868e5e6a518b11](https://github.com/cockroachdb/cockroach/commits/0ddbd7642e3de27442a0942e12868e5e6a518b11):
```
=== RUN TestLogic
test_log_scope.go:72: test logs captured to: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestLogic005641966
test_log_scope.go:73: use -show-logs to present logs inline
=== CONT TestLogic
logic.go:3294: -- test log scope end --
test logs left over in: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestLogic005641966
--- FAIL: TestLogic (547.13s)
=== RUN TestLogic/fakedist-disk/unique/Insert
=== CONT TestLogic/fakedist-disk/unique/Insert
logic.go:1970:
testdata/logic_test/unique:195:
expected:
pq: insert on table "uniq_fk_child" violates foreign key constraint "fk_b_ref_uniq_fk_parent"\nDETAIL: Key \(b, c\)=\(3, 3\) is not present in table "uniq_fk_parent"\.
got:
pq: insert on table "uniq_fk_child" violates foreign key constraint "fk_b_ref_uniq_fk_parent"
DETAIL: Key (b, c)=(4, 4) is not present in table "uniq_fk_parent".
E210113 21:08:33.185886 6609071 sql/logictest/logic.go:3501
testdata/logic_test/unique:198: too many errors encountered, skipping the rest of the input
logic.go:1802:
testdata/logic_test/unique:198: too many errors encountered, skipping the rest of the input
--- done: testdata/logic_test/unique with config fakedist-disk: 33 tests, 2 failures
--- FAIL: TestLogic/fakedist-disk/unique/Insert (0.17s)
=== RUN TestLogic/fakedist-disk/unique
=== PAUSE TestLogic/fakedist-disk/unique
=== CONT TestLogic/fakedist-disk/unique
--- FAIL: TestLogic/fakedist-disk/unique (1.42s)
=== RUN TestLogic/fakedist-disk
--- FAIL: TestLogic/fakedist-disk (0.47s)
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestLogic PKG=./pkg/sql/logictest TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
Related:
- #58363 sql/logictest: TestLogic failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.2)
- #58269 sql/logictest: TestLogic failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestLogic.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_process
|
sql logictest testlogic fakedist disk unique insert failed on run testlogic test log scope go test logs captured to go src github com cockroachdb cockroach artifacts test log scope go use show logs to present logs inline cont testlogic logic go test log scope end test logs left over in go src github com cockroachdb cockroach artifacts fail testlogic run testlogic fakedist disk unique insert cont testlogic fakedist disk unique insert logic go testdata logic test unique expected pq insert on table uniq fk child violates foreign key constraint fk b ref uniq fk parent ndetail key b c is not present in table uniq fk parent got pq insert on table uniq fk child violates foreign key constraint fk b ref uniq fk parent detail key b c is not present in table uniq fk parent sql logictest logic go testdata logic test unique too many errors encountered skipping the rest of the input logic go testdata logic test unique too many errors encountered skipping the rest of the input done testdata logic test unique with config fakedist disk tests failures fail testlogic fakedist disk unique insert run testlogic fakedist disk unique pause testlogic fakedist disk unique cont testlogic fakedist disk unique fail testlogic fakedist disk unique run testlogic fakedist disk fail testlogic fakedist disk more parameters goflags json make stressrace tests testlogic pkg pkg sql logictest testtimeout stressflags timeout related sql logictest testlogic failed sql logictest testlogic failed powered by
| 0
|
6,688
| 9,808,937,926
|
IssuesEvent
|
2019-06-12 16:43:08
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Processing "iterate" does not work anymore
|
Bug High Priority Processing Regression
|
Author Name: **Giovanni Manghi** (@gioman)
Original Redmine Issue: [21524](https://issues.qgis.org/issues/21524)
Affected QGIS version: 3.6.0
Redmine category:processing/core
Assignee: Victor Olaya
---
This on 3.6 (3.4.5 is ok)
error is like
Loading resulting layers
The following layers were not correctly generated.<ul><li>TEMPORARY_OUTPUT_0TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_10TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_11TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_1TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_2TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_3TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_4TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_5TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_6TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_7TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_8TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_9TEMPORARY_OUTPUT.gpkg</li></ul>You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
Execution completed in 1.35 seconds
Algorithm 'Buffer' finished
|
1.0
|
Processing "iterate" does not work anymore - Author Name: **Giovanni Manghi** (@gioman)
Original Redmine Issue: [21524](https://issues.qgis.org/issues/21524)
Affected QGIS version: 3.6.0
Redmine category:processing/core
Assignee: Victor Olaya
---
This on 3.6 (3.4.5 is ok)
error is like
Loading resulting layers
The following layers were not correctly generated.<ul><li>TEMPORARY_OUTPUT_0TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_10TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_11TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_1TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_2TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_3TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_4TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_5TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_6TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_7TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_8TEMPORARY_OUTPUT.gpkg</li><li>TEMPORARY_OUTPUT_9TEMPORARY_OUTPUT.gpkg</li></ul>You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
Execution completed in 1.35 seconds
Algorithm 'Buffer' finished
|
process
|
processing iterate does not work anymore author name giovanni manghi gioman original redmine issue affected qgis version redmine category processing core assignee victor olaya this on is ok error is like loading resulting layers the following layers were not correctly generated temporary output output gpkg temporary output output gpkg temporary output output gpkg temporary output output gpkg temporary output output gpkg temporary output output gpkg temporary output output gpkg temporary output output gpkg temporary output output gpkg temporary output output gpkg temporary output output gpkg temporary output output gpkg you can check the log messages panel in qgis main window to find more information about the execution of the algorithm execution completed in seconds algorithm buffer finished
| 1
|
10,730
| 13,531,437,119
|
IssuesEvent
|
2020-09-15 21:38:49
|
googleapis/python-automl
|
https://api.github.com/repos/googleapis/python-automl
|
opened
|
Speed up samples job
|
type: process
|
The samples tests take ~20 minutes to complete.
Investigate to see if any modifications can be made to speed them up.
Logs note how much time each test takes:
https://source.cloud.google.com/results/invocations/b0462faf-1be8-41c6-83e4-e0b8ba27bc24/targets
|
1.0
|
Speed up samples job - The samples tests take ~20 minutes to complete.
Investigate to see if any modifications can be made to speed them up.
Logs note how much time each test takes:
https://source.cloud.google.com/results/invocations/b0462faf-1be8-41c6-83e4-e0b8ba27bc24/targets
|
process
|
speed up samples job the samples tests take minutes to complete investigate to see if any modifications can be made to speed them up logs note how much time each test takes
| 1
|
336,809
| 10,197,256,110
|
IssuesEvent
|
2019-08-12 23:32:05
|
mozilla/addons-server
|
https://api.github.com/repos/mozilla/addons-server
|
closed
|
Getting "Error decoding signature" with `web-ext sign`
|
component: api priority: p4
|
### Describe the problem and steps to reproduce it:
Our extension has been being published via `web-ext-submit` (a plain shell [script](https://github.com/bfred-it/web-ext-submit/blob/master/extender.sh) that only filters the output of `$ web-ext sign`) and for the past couple of weeks we've been having issues.
First it was https://travis-ci.org/sindresorhus/refined-github/builds/540130635#L691
```
Building web extension from /home/travis/build/sindresorhus/refined-github/distribution
Error: Received bad response from the server while requesting https://addons.mozilla.org/api/v3/addons/%7Ba4c4eda4-fb84-4a84-b4a1-f7c1cbf2a1ad%7D/versions/19.5.30.1925/
status: 403
response: {"detail":"Before starting, please read and accept our Firefox Add-on Distribution Agreement as well as our Review Policies and Rules. The Firefox Add-on Distribution Agreement also links to our Privacy Notice which explains how we handle your information."}
```
I have no idea of what that referred to; I found the agreement on Google but there was nothing to accept, even in the [AMO Developer Hub](https://addons.mozilla.org/en-US/developers/addons).
---
Then I published the extension locally with `$ web-ext sign` using my keys — it worked! So I changed the keys on Travis (which had had @sindresorhus's keys for 1+ years) and now we've been getting this message: https://travis-ci.org/sindresorhus/refined-github/builds/540188019#L692
```
Building web extension from /home/travis/build/sindresorhus/refined-github/distribution
Error: Received bad response from the server while requesting https://addons.mozilla.org/api/v3/addons/%7Ba4c4eda4-fb84-4a84-b4a1-f7c1cbf2a1ad%7D/versions/19.6.3/
status: 401
response: {"detail":"Error decoding signature."}
```
I don't know what could cause this either but... **replacing `web-ext-submit` with a plain `web-ext sign` fixed it.** See deployment in https://travis-ci.org/sindresorhus/refined-github/jobs/542204643#L675
Now I wonder why that would happen. `web-ext-submit` is a plain shell script that runs `web-ext sign` and pipes its content out; **why would `web-ext` be affected by that?** And what does these two errors even mean? Can you make them more clear?
Thanks for reading :)
|
1.0
|
Getting "Error decoding signature" with `web-ext sign` - ### Describe the problem and steps to reproduce it:
Our extension has been being published via `web-ext-submit` (a plain shell [script](https://github.com/bfred-it/web-ext-submit/blob/master/extender.sh) that only filters the output of `$ web-ext sign`) and for the past couple of weeks we've been having issues.
First it was https://travis-ci.org/sindresorhus/refined-github/builds/540130635#L691
```
Building web extension from /home/travis/build/sindresorhus/refined-github/distribution
Error: Received bad response from the server while requesting https://addons.mozilla.org/api/v3/addons/%7Ba4c4eda4-fb84-4a84-b4a1-f7c1cbf2a1ad%7D/versions/19.5.30.1925/
status: 403
response: {"detail":"Before starting, please read and accept our Firefox Add-on Distribution Agreement as well as our Review Policies and Rules. The Firefox Add-on Distribution Agreement also links to our Privacy Notice which explains how we handle your information."}
```
I have no idea of what that referred to; I found the agreement on Google but there was nothing to accept, even in the [AMO Developer Hub](https://addons.mozilla.org/en-US/developers/addons).
---
Then I published the extension locally with `$ web-ext sign` using my keys — it worked! So I changed the keys on Travis (which had had @sindresorhus's keys for 1+ years) and now we've been getting this message: https://travis-ci.org/sindresorhus/refined-github/builds/540188019#L692
```
Building web extension from /home/travis/build/sindresorhus/refined-github/distribution
Error: Received bad response from the server while requesting https://addons.mozilla.org/api/v3/addons/%7Ba4c4eda4-fb84-4a84-b4a1-f7c1cbf2a1ad%7D/versions/19.6.3/
status: 401
response: {"detail":"Error decoding signature."}
```
I don't know what could cause this either but... **replacing `web-ext-submit` with a plain `web-ext sign` fixed it.** See deployment in https://travis-ci.org/sindresorhus/refined-github/jobs/542204643#L675
Now I wonder why that would happen. `web-ext-submit` is a plain shell script that runs `web-ext sign` and pipes its content out; **why would `web-ext` be affected by that?** And what does these two errors even mean? Can you make them more clear?
Thanks for reading :)
|
non_process
|
getting error decoding signature with web ext sign describe the problem and steps to reproduce it our extension has been being published via web ext submit a plain shell that only filters the output of web ext sign and for the past couple of weeks we ve been having issues first it was building web extension from home travis build sindresorhus refined github distribution error received bad response from the server while requesting status response detail before starting please read and accept our firefox add on distribution agreement as well as our review policies and rules the firefox add on distribution agreement also links to our privacy notice which explains how we handle your information i have no idea of what that referred to i found the agreement on google but there was nothing to accept even in the then i published the extension locally with web ext sign using my keys — it worked so i changed the keys on travis which had had sindresorhus s keys for years and now we ve been getting this message building web extension from home travis build sindresorhus refined github distribution error received bad response from the server while requesting status response detail error decoding signature i don t know what could cause this either but replacing web ext submit with a plain web ext sign fixed it see deployment in now i wonder why that would happen web ext submit is a plain shell script that runs web ext sign and pipes its content out why would web ext be affected by that and what does these two errors even mean can you make them more clear thanks for reading
| 0
|
3,558
| 6,589,097,219
|
IssuesEvent
|
2017-09-14 07:32:11
|
zero-os/0-core
|
https://api.github.com/repos/zero-os/0-core
|
closed
|
A fake job_id subscription can stream another job
|
process_wontfix type_bug
|
## Scenario
1- Start a job for a long time that prints "Hi"
2- Subscribe to this job using job_id and subscriber_tag (ex: my-job-subscriber21)
3- Make another subscription with fake job_id but using same subscriber_tag that has been used in step 2
## Actual Result
- Step 3 subscription can stream step 2 subscription
## Expected Result
- Step 3 should throw an error that the job_id doesn't exist

## Version
master @Revision: 8369d1a4417cefb4fc010e5d2a7a2c11b111a8c8
|
1.0
|
A fake job_id subscription can stream another job - ## Scenario
1- Start a job for a long time that prints "Hi"
2- Subscribe to this job using job_id and subscriber_tag (ex: my-job-subscriber21)
3- Make another subscription with fake job_id but using same subscriber_tag that has been used in step 2
## Actual Result
- Step 3 subscription can stream step 2 subscription
## Expected Result
- Step 3 should throw an error that the job_id doesn't exist

## Version
master @Revision: 8369d1a4417cefb4fc010e5d2a7a2c11b111a8c8
|
process
|
a fake job id subscription can stream another job scenario start a job for a long time that prints hi subscribe to this job using job id and subscriber tag ex my job make another subscription with fake job id but using same subscriber tag that has been used in step actual result step subscription can stream step subscription expected result step should throw an error that the job id doesn t exist version master revision
| 1
|
22,600
| 31,820,798,532
|
IssuesEvent
|
2023-09-14 02:01:35
|
googleapis/google-cloud-go
|
https://api.github.com/repos/googleapis/google-cloud-go
|
closed
|
internal/postprocessor: containeranalysis repo metadata doc link switched to non-submodule docs
|
type: process
|
Our OwlBot postprocessor switch the documentation link for `containeranlaysis/v1beta1` to the mono-module documentation page in https://github.com/googleapis/google-cloud-go/pull/8538/commits/6ccbf06b8515dfb3cadbb4278368bfaad5e3d320.
Since `containeranalysis` is a submodule, we should always point to the submodule docs, not the mono-module docs as these don't get updated when the submodule is released, only when the mono-module is released, which is not often.
|
1.0
|
internal/postprocessor: containeranalysis repo metadata doc link switched to non-submodule docs - Our OwlBot postprocessor switch the documentation link for `containeranlaysis/v1beta1` to the mono-module documentation page in https://github.com/googleapis/google-cloud-go/pull/8538/commits/6ccbf06b8515dfb3cadbb4278368bfaad5e3d320.
Since `containeranalysis` is a submodule, we should always point to the submodule docs, not the mono-module docs as these don't get updated when the submodule is released, only when the mono-module is released, which is not often.
|
process
|
internal postprocessor containeranalysis repo metadata doc link switched to non submodule docs our owlbot postprocessor switch the documentation link for containeranlaysis to the mono module documentation page in since containeranalysis is a submodule we should always point to the submodule docs not the mono module docs as these don t get updated when the submodule is released only when the mono module is released which is not often
| 1
|
8,225
| 11,411,081,193
|
IssuesEvent
|
2020-02-01 02:47:26
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
Optimize relative datetime filter clauses
|
.Backend .Enhancement Priority:P1 Querying/Processor Type:Performance
|
Related: #4043
In #10980 I added QP middleware to rewrite filters against temporal fields to generate `BETWEEN` clauses or the like and avoid temporal truncation/extraction operations that break index usages. However, this improvement only applied to filters against absolute temporal moments, e.g. "show me toucan sightings where the month is January 2020". Filters against *relative* temporal moments, e.g. "show me toucan sightings in the last month" are not currently being optimized. For example this MBQL query
```clj
;; attempted-murders is our crow sighting test dataset
(mt/dataset attempted-murders
(mt/run-mbql-query attempts
{:aggregation [[:count]]
:filter [:time-interval $datetime :last :month]}))
```
generates this SQL (BigQuery):
```sql
SELECT count(*) AS `count`
FROM `attempted_murders.attempts`
WHERE datetime_trunc(`attempted_murders.attempts`.`datetime`, month)
= datetime_trunc(datetime_add(current_datetime(), INTERVAL -1 month), month)
```
It seems possible to instead generate something along the lines of
```sql
SELECT count(*) AS `count`
FROM `attempted_murders.attempts`
WHERE `attempted_murders.attempts`.`datetime`
BETWEEN datetime_trunc(datetime_add(current_datetime(), INTERVAL -2 month), month)
AND datetime_trunc(datetime_add(current_datetime(), INTERVAL -1 month), month)
```
I could probably get this working in a day. It would be a big performance win.
Since #4043 was originally marked as P1 and #10980 didn't quite solve the entire issue I'm going to mark this as P1 and address it as soon as I get a chance.
|
1.0
|
Optimize relative datetime filter clauses - Related: #4043
In #10980 I added QP middleware to rewrite filters against temporal fields to generate `BETWEEN` clauses or the like and avoid temporal truncation/extraction operations that break index usages. However, this improvement only applied to filters against absolute temporal moments, e.g. "show me toucan sightings where the month is January 2020". Filters against *relative* temporal moments, e.g. "show me toucan sightings in the last month" are not currently being optimized. For example this MBQL query
```clj
;; attempted-murders is our crow sighting test dataset
(mt/dataset attempted-murders
(mt/run-mbql-query attempts
{:aggregation [[:count]]
:filter [:time-interval $datetime :last :month]}))
```
generates this SQL (BigQuery):
```sql
SELECT count(*) AS `count`
FROM `attempted_murders.attempts`
WHERE datetime_trunc(`attempted_murders.attempts`.`datetime`, month)
= datetime_trunc(datetime_add(current_datetime(), INTERVAL -1 month), month)
```
It seems possible to instead generate something along the lines of
```sql
SELECT count(*) AS `count`
FROM `attempted_murders.attempts`
WHERE `attempted_murders.attempts`.`datetime`
BETWEEN datetime_trunc(datetime_add(current_datetime(), INTERVAL -2 month), month)
AND datetime_trunc(datetime_add(current_datetime(), INTERVAL -1 month), month)
```
I could probably get this working in a day. It would be a big performance win.
Since #4043 was originally marked as P1 and #10980 didn't quite solve the entire issue I'm going to mark this as P1 and address it as soon as I get a chance.
|
process
|
optimize relative datetime filter clauses related in i added qp middleware to rewrite filters against temporal fields to generate between clauses or the like and avoid temporal truncation extraction operations that break index usages however this improvement only applied to filters against absolute temporal moments e g show me toucan sightings where the month is january filters against relative temporal moments e g show me toucan sightings in the last month are not currently being optimized for example this mbql query clj attempted murders is our crow sighting test dataset mt dataset attempted murders mt run mbql query attempts aggregation filter generates this sql bigquery sql select count as count from attempted murders attempts where datetime trunc attempted murders attempts datetime month datetime trunc datetime add current datetime interval month month it seems possible to instead generate something along the lines of sql select count as count from attempted murders attempts where attempted murders attempts datetime between datetime trunc datetime add current datetime interval month month and datetime trunc datetime add current datetime interval month month i could probably get this working in a day it would be a big performance win since was originally marked as and didn t quite solve the entire issue i m going to mark this as and address it as soon as i get a chance
| 1
|
6,689
| 9,809,281,037
|
IssuesEvent
|
2019-06-12 17:34:41
|
emacs-ess/ESS
|
https://api.github.com/repos/emacs-ess/ESS
|
closed
|
ess-string-command returns '+' chars along with output
|
process
|
When the `COM` arg to `ess-string-command` contains newlines, '+' chars are returned prefixed to the exptected string output.
This makes it difficult to write automations that recieve text from R and inject it into Emacs buffers. Here's a reprex:
```lisp
(defun ess-reprex ()
(let ((r-process (ess-get-process))
(r-output-buffer (get-buffer-create "*R-output*")))
(ess-string-command
"glue::glue(\"a\",\n,\"b\",\n\"c\")\n"
r-output-buffer nil)
(save-mark-and-excursion
(goto-char (point-max))
(newline)
(insert-buffer r-output-buffer))
))
```
Run it and it will append `+ + abc` to the end of the current buffer.
If there are better ways to do this with ESS please let me know.
|
1.0
|
ess-string-command returns '+' chars along with output - When the `COM` arg to `ess-string-command` contains newlines, '+' chars are returned prefixed to the exptected string output.
This makes it difficult to write automations that recieve text from R and inject it into Emacs buffers. Here's a reprex:
```lisp
(defun ess-reprex ()
(let ((r-process (ess-get-process))
(r-output-buffer (get-buffer-create "*R-output*")))
(ess-string-command
"glue::glue(\"a\",\n,\"b\",\n\"c\")\n"
r-output-buffer nil)
(save-mark-and-excursion
(goto-char (point-max))
(newline)
(insert-buffer r-output-buffer))
))
```
Run it and it will append `+ + abc` to the end of the current buffer.
If there are better ways to do this with ESS please let me know.
|
process
|
ess string command returns chars along with output when the com arg to ess string command contains newlines chars are returned prefixed to the exptected string output this makes it difficult to write automations that recieve text from r and inject it into emacs buffers here s a reprex lisp defun ess reprex let r process ess get process r output buffer get buffer create r output ess string command glue glue a n b n c n r output buffer nil save mark and excursion goto char point max newline insert buffer r output buffer run it and it will append abc to the end of the current buffer if there are better ways to do this with ess please let me know
| 1
|
591,233
| 17,798,202,958
|
IssuesEvent
|
2021-09-01 02:33:46
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
named flatten segfaults when dims is ()
|
high priority module: crash triaged module: named tensor
|
```
gurkenglas@Gurkenglas-PC ~/mutual-information (main) [SIGSEGV]> python3.9
Python 3.9.5 (default, May 19 2021, 11:32:47)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.empty(2,3).flatten((),"asd")
fish: 'python3.9' terminated by signal SIGSEGV (Address boundary error)
```
## Environment
gurkenglas@Gurkenglas-PC ~/mutual-information (main) [2]> python3.9 collect_env.py
Collecting environment information...
PyTorch version: 1.10.0.dev20210630+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.5 (default, May 19 2021, 11:32:47) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.21.0
[pip3] torch==1.10.0.dev20210630+cpu
[pip3] torch-utils==0.0.1
[pip3] torchaudio==0.10.0.dev20210630
[pip3] torchvision==0.11.0.dev20210630+cpu
[conda] Could not collect
## Imagine the API working like this.
```
from lenses import bind
def nflatten(self, **kwargs):
for name,olds in kwargs.items():
olds = tuple(bind(olds).Recur(str).collect())
self = self.align_to(..., *olds).flatten(olds, name) if olds else self.rename(None).unsqueeze(-1).rename(*self.names, name)
return self
def nunflatten(self, **kwargs):
for name,news in kwargs.items():
news = tuple(bind(news).Each().collect())
self = self.unflatten(name, news) if news else self.squeeze(name)
return self
torch.Tensor.nflatten = nflatten
torch.Tensor.nunflatten = nunflatten
```
Also fixes #61117, and why would flattened dimensions need to be consecutive?
There should be a higher-order function that easily expands operator coverage to a given function. Perhaps an automatically generated module that applies it to all the functions.
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411
|
1.0
|
named flatten segfaults when dims is () - ```
gurkenglas@Gurkenglas-PC ~/mutual-information (main) [SIGSEGV]> python3.9
Python 3.9.5 (default, May 19 2021, 11:32:47)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.empty(2,3).flatten((),"asd")
fish: 'python3.9' terminated by signal SIGSEGV (Address boundary error)
```
## Environment
gurkenglas@Gurkenglas-PC ~/mutual-information (main) [2]> python3.9 collect_env.py
Collecting environment information...
PyTorch version: 1.10.0.dev20210630+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.5 (default, May 19 2021, 11:32:47) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.21.0
[pip3] torch==1.10.0.dev20210630+cpu
[pip3] torch-utils==0.0.1
[pip3] torchaudio==0.10.0.dev20210630
[pip3] torchvision==0.11.0.dev20210630+cpu
[conda] Could not collect
## Imagine the API working like this.
```
from lenses import bind
def nflatten(self, **kwargs):
for name,olds in kwargs.items():
olds = tuple(bind(olds).Recur(str).collect())
self = self.align_to(..., *olds).flatten(olds, name) if olds else self.rename(None).unsqueeze(-1).rename(*self.names, name)
return self
def nunflatten(self, **kwargs):
for name,news in kwargs.items():
news = tuple(bind(news).Each().collect())
self = self.unflatten(name, news) if news else self.squeeze(name)
return self
torch.Tensor.nflatten = nflatten
torch.Tensor.nunflatten = nunflatten
```
Also fixes #61117, and why would flattened dimensions need to be consecutive?
There should be a higher-order function that easily expands operator coverage to a given function. Perhaps an automatically generated module that applies it to all the functions.
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411
|
non_process
|
named flatten segfaults when dims is gurkenglas gurkenglas pc mutual information main python default may on linux type help copyright credits or license for more information import torch torch empty flatten asd fish terminated by signal sigsegv address boundary error environment gurkenglas gurkenglas pc mutual information main collect env py collecting environment information pytorch version cpu is debug build false cuda used to build pytorch none rocm used to build pytorch n a os ubuntu lts gcc version ubuntu clang version could not collect cmake version could not collect libc version glibc python version default may bit runtime python platform linux microsoft standard with is cuda available false cuda runtime version no cuda gpu models and configuration no cuda nvidia driver version no cuda cudnn version no cuda hip runtime version n a miopen runtime version n a versions of relevant libraries numpy torch cpu torch utils torchaudio torchvision cpu could not collect imagine the api working like this from lenses import bind def nflatten self kwargs for name olds in kwargs items olds tuple bind olds recur str collect self self align to olds flatten olds name if olds else self rename none unsqueeze rename self names name return self def nunflatten self kwargs for name news in kwargs items news tuple bind news each collect self self unflatten name news if news else self squeeze name return self torch tensor nflatten nflatten torch tensor nunflatten nunflatten also fixes and why would flattened dimensions need to be consecutive there should be a higher order function that easily expands operator coverage to a given function perhaps an automatically generated module that applies it to all the functions cc ezyang gchanan bdhirsh jbschlosser
| 0
|
2,860
| 5,824,373,265
|
IssuesEvent
|
2017-05-07 12:26:12
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][processing] New algorithm for geometry boundary
|
needs-backport Processing Text User Manual
|
Original commit: https://github.com/qgis/QGIS/commit/66b0119ca81db43969b12c50c52a7e2d6df1c738 by nyalldawson
(cherry-picked from ab022451a58196f2c96f6a0482e2155b5171bad5)
|
1.0
|
[FEATURE][processing] New algorithm for geometry boundary - Original commit: https://github.com/qgis/QGIS/commit/66b0119ca81db43969b12c50c52a7e2d6df1c738 by nyalldawson
(cherry-picked from ab022451a58196f2c96f6a0482e2155b5171bad5)
|
process
|
new algorithm for geometry boundary original commit by nyalldawson cherry picked from
| 1
|
6,300
| 9,306,233,844
|
IssuesEvent
|
2019-03-25 09:11:21
|
kmycode/sangokukmy
|
https://api.github.com/repos/kmycode/sangokukmy
|
opened
|
GameUpdaterにおいて、CharacterのSaveChangesAsyncでSQL例外が出ることがある
|
bug process-pending
|
## ケース1:政務官(集合官)を解雇したとき
~~~
2019-03-25 11:00:19.7330||ERROR|SangokuKmy.Startup|更新処理中にエラーが発生しま>した Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while updating the entries. See the inner exception for details. ---> MySql.Data.MySqlClient.MySqlException: Duplicate entry '152' for key 'PRIMARY' ---> MySql.Data.MySqlClient.MySqlException: Duplicate entry '152' for key 'PRIMARY'
at MySqlConnector.Core.ServerSession.TryAsyncContinuation(Task`1 task) in C:\projects\mysqlconnector\src\MySqlConnector\Core\ServerSession.cs:line 1245
at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location where exception was thrown ---
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot)
--- End of stack trace from previous location where exception was thrown ---
at MySqlConnector.Core.ResultSet.ReadResultSetHeaderAsync(IOBehavior ioBehavior) in C:\projects\mysqlconnector\src\MySqlConnector\Core\ResultSet.cs:line 42
--- End of inner exception stack trace ---
at MySql.Data.MySqlClient.MySqlDataReader.ActivateResultSet(ResultSet resultSet) in C:\projects\mysqlconnector\src\MySqlConnector\MySql.Data.MySqlClient\MySqlDataReader.cs:line 74
at MySql.Data.MySqlClient.MySqlDataReader.NextResultAsync(IOBehavior ioBehavior, CancellationToken cancellationToken) in C:\projects\mysqlconnector\src\MySqlConnector\MySql.Data.MySqlClient\MySqlDataReader.cs:line 59
at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.ConsumeAsync(RelationalDataReader reader, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.ConsumeAsync(RelationalDataReader reader, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Update.ReaderModificationCommandBatch.ExecuteAsync(IRelationalConnection connection, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Update.Internal.BatchExecutor.ExecuteAsync(DbContext _, ValueTuple`2 parameters, CancellationToken cancellationToken)
at Pomelo.EntityFrameworkCore.MySql.Storage.Internal.MySqlExecutionStrategy.ExecuteAsync[TState,TResult](TState state, Func`4 operation, Func`4 verifySucceeded, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChangesAsync(IReadOnlyList`1 entriesToSave, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChangesAsync(Boolean acceptAllChangesOnSuccess, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.DbContext.SaveChangesAsync(Boolean acceptAllChangesOnSuccess, CancellationToken cancellationToken)
at SangokuKmy.Models.Data.MainRepository.SaveChangesAsync() in /codebuild/output/src328578182/src/SangokuKmy/Models/Data/MainRepository.cs:line 202
at SangokuKmy.Models.Updates.GameUpdater.UpdateCharacterAsync(MainRepository repo, Character character) in /codebuild/output/src328578182/src/SangokuKmy/Models/Updates/GameUpdater.cs:line 868
at SangokuKmy.Models.Updates.GameUpdater.<>c__DisplayClass4_0.<<UpdateCharactersAsync>b__0>d.MoveNext() in /codebuild/output/src328578182/src/SangokuKmy/Models/Updates/GameUpdater.cs:line 687
~~~
|
1.0
|
GameUpdaterにおいて、CharacterのSaveChangesAsyncでSQL例外が出ることがある - ## ケース1:政務官(集合官)を解雇したとき
~~~
2019-03-25 11:00:19.7330||ERROR|SangokuKmy.Startup|更新処理中にエラーが発生しま>した Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while updating the entries. See the inner exception for details. ---> MySql.Data.MySqlClient.MySqlException: Duplicate entry '152' for key 'PRIMARY' ---> MySql.Data.MySqlClient.MySqlException: Duplicate entry '152' for key 'PRIMARY'
at MySqlConnector.Core.ServerSession.TryAsyncContinuation(Task`1 task) in C:\projects\mysqlconnector\src\MySqlConnector\Core\ServerSession.cs:line 1245
at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location where exception was thrown ---
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot)
--- End of stack trace from previous location where exception was thrown ---
at MySqlConnector.Core.ResultSet.ReadResultSetHeaderAsync(IOBehavior ioBehavior) in C:\projects\mysqlconnector\src\MySqlConnector\Core\ResultSet.cs:line 42
--- End of inner exception stack trace ---
at MySql.Data.MySqlClient.MySqlDataReader.ActivateResultSet(ResultSet resultSet) in C:\projects\mysqlconnector\src\MySqlConnector\MySql.Data.MySqlClient\MySqlDataReader.cs:line 74
at MySql.Data.MySqlClient.MySqlDataReader.NextResultAsync(IOBehavior ioBehavior, CancellationToken cancellationToken) in C:\projects\mysqlconnector\src\MySqlConnector\MySql.Data.MySqlClient\MySqlDataReader.cs:line 59
at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.ConsumeAsync(RelationalDataReader reader, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.ConsumeAsync(RelationalDataReader reader, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Update.ReaderModificationCommandBatch.ExecuteAsync(IRelationalConnection connection, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Update.Internal.BatchExecutor.ExecuteAsync(DbContext _, ValueTuple`2 parameters, CancellationToken cancellationToken)
at Pomelo.EntityFrameworkCore.MySql.Storage.Internal.MySqlExecutionStrategy.ExecuteAsync[TState,TResult](TState state, Func`4 operation, Func`4 verifySucceeded, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChangesAsync(IReadOnlyList`1 entriesToSave, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChangesAsync(Boolean acceptAllChangesOnSuccess, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.DbContext.SaveChangesAsync(Boolean acceptAllChangesOnSuccess, CancellationToken cancellationToken)
at SangokuKmy.Models.Data.MainRepository.SaveChangesAsync() in /codebuild/output/src328578182/src/SangokuKmy/Models/Data/MainRepository.cs:line 202
at SangokuKmy.Models.Updates.GameUpdater.UpdateCharacterAsync(MainRepository repo, Character character) in /codebuild/output/src328578182/src/SangokuKmy/Models/Updates/GameUpdater.cs:line 868
at SangokuKmy.Models.Updates.GameUpdater.<>c__DisplayClass4_0.<<UpdateCharactersAsync>b__0>d.MoveNext() in /codebuild/output/src328578182/src/SangokuKmy/Models/Updates/GameUpdater.cs:line 687
~~~
|
process
|
gameupdaterにおいて、characterのsavechangesasyncでsql例外が出ることがある :政務官(集合官)を解雇したとき error sangokukmy startup 更新処理中にエラーが発生しま した microsoft entityframeworkcore dbupdateexception an error occurred while updating the entries see the inner exception for details mysql data mysqlclient mysqlexception duplicate entry for key primary mysql data mysqlclient mysqlexception duplicate entry for key primary at mysqlconnector core serversession tryasynccontinuation task task in c projects mysqlconnector src mysqlconnector core serversession cs line at system threading tasks continuationresulttaskfromresulttask innerinvoke at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state end of stack trace from previous location where exception was thrown at system threading tasks task executewiththreadlocal task currenttaskslot end of stack trace from previous location where exception was thrown at mysqlconnector core resultset readresultsetheaderasync iobehavior iobehavior in c projects mysqlconnector src mysqlconnector core resultset cs line end of inner exception stack trace at mysql data mysqlclient mysqldatareader activateresultset resultset resultset in c projects mysqlconnector src mysqlconnector mysql data mysqlclient mysqldatareader cs line at mysql data mysqlclient mysqldatareader nextresultasync iobehavior iobehavior cancellationtoken cancellationtoken in c projects mysqlconnector src mysqlconnector mysql data mysqlclient mysqldatareader cs line at microsoft entityframeworkcore update affectedcountmodificationcommandbatch consumeasync relationaldatareader reader cancellationtoken cancellationtoken end of inner exception stack trace at microsoft entityframeworkcore update affectedcountmodificationcommandbatch consumeasync relationaldatareader reader cancellationtoken cancellationtoken at microsoft entityframeworkcore update readermodificationcommandbatch executeasync irelationalconnection connection cancellationtoken cancellationtoken at microsoft entityframeworkcore update internal batchexecutor executeasync dbcontext valuetuple parameters cancellationtoken cancellationtoken at pomelo entityframeworkcore mysql storage internal mysqlexecutionstrategy executeasync tstate state func operation func verifysucceeded cancellationtoken cancellationtoken at microsoft entityframeworkcore changetracking internal statemanager savechangesasync ireadonlylist entriestosave cancellationtoken cancellationtoken at microsoft entityframeworkcore changetracking internal statemanager savechangesasync boolean acceptallchangesonsuccess cancellationtoken cancellationtoken at microsoft entityframeworkcore dbcontext savechangesasync boolean acceptallchangesonsuccess cancellationtoken cancellationtoken at sangokukmy models data mainrepository savechangesasync in codebuild output src sangokukmy models data mainrepository cs line at sangokukmy models updates gameupdater updatecharacterasync mainrepository repo character character in codebuild output src sangokukmy models updates gameupdater cs line at sangokukmy models updates gameupdater c b d movenext in codebuild output src sangokukmy models updates gameupdater cs line
| 1
|
15,029
| 18,751,357,712
|
IssuesEvent
|
2021-11-05 02:40:31
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Processing toolbox does not handle array fields
|
Feedback stale Processing Bug
|
Author Name: **Cory Albrecht** (@CoryAlbrecht)
Original Redmine Issue: [21786](https://issues.qgis.org/issues/21786)
Affected QGIS version: 3.6.1
Redmine category:processing/core
---
So I just did "fix geometries" on my PostGiS layer that has array fields, when I deleted the shapes from it and copied the new features back. When I went to save that layer, nothing would save. All I got was simple message log error line saying that arrays had to start with {, and no PostGIS errors.
My PostGIS layer has several fields that are arrays, so I did not know which one it was referring to. Eventually I realised that the fix geometries tool had converted all array fields to simple string fields, so a PostGIS array like '{en,"Kingdom of Bavaria"}' became the literal string "{en, \"Kingdom of Bavaria\"}". When the shapes were copied back to the PostGIS layer, QGIS was still seeing them as strings, not as arrays, and apparently converting them improperly when trying to create the SQL to update the PostGIS layer. I assume the problem was with creating SQL because there were no PostGIS errors, just the ambiguous error about needing the { for an array.
Based on other bugs I have filed having to do with how QGIS handles array fields, it seems to me that whatever redesign of the internals of how a layer and it's attributes are stored in memory, that redesign seems to be faulty at the core as it communicates wrong information to the higher level functions that do things like run the processing tools, like displaying the feature attributes form (arrays are converted to strings, but truncated, #28691), like when trying to update PostGIS layers (nulls getting passed as empty strings, #29277), and now this.
---
- [Screenshot from 2019-04-07 23-17-34.png](https://issues.qgis.org/attachments/download/14755/Screenshot%20from%202019-04-07%2023-17-34.png) (Cory Albrecht)
- [Screenshot from 2019-04-07 23-16-21.png](https://issues.qgis.org/attachments/download/14756/Screenshot%20from%202019-04-07%2023-16-21.png) (Cory Albrecht)
|
1.0
|
Processing toolbox does not handle array fields - Author Name: **Cory Albrecht** (@CoryAlbrecht)
Original Redmine Issue: [21786](https://issues.qgis.org/issues/21786)
Affected QGIS version: 3.6.1
Redmine category:processing/core
---
So I just did "fix geometries" on my PostGiS layer that has array fields, when I deleted the shapes from it and copied the new features back. When I went to save that layer, nothing would save. All I got was simple message log error line saying that arrays had to start with {, and no PostGIS errors.
My PostGIS layer has several fields that are arrays, so I did not know which one it was referring to. Eventually I realised that the fix geometries tool had converted all array fields to simple string fields, so a PostGIS array like '{en,"Kingdom of Bavaria"}' became the literal string "{en, \"Kingdom of Bavaria\"}". When the shapes were copied back to the PostGIS layer, QGIS was still seeing them as strings, not as arrays, and apparently converting them improperly when trying to create the SQL to update the PostGIS layer. I assume the problem was with creating SQL because there were no PostGIS errors, just the ambiguous error about needing the { for an array.
Based on other bugs I have filed having to do with how QGIS handles array fields, it seems to me that whatever redesign of the internals of how a layer and it's attributes are stored in memory, that redesign seems to be faulty at the core as it communicates wrong information to the higher level functions that do things like run the processing tools, like displaying the feature attributes form (arrays are converted to strings, but truncated, #28691), like when trying to update PostGIS layers (nulls getting passed as empty strings, #29277), and now this.
---
- [Screenshot from 2019-04-07 23-17-34.png](https://issues.qgis.org/attachments/download/14755/Screenshot%20from%202019-04-07%2023-17-34.png) (Cory Albrecht)
- [Screenshot from 2019-04-07 23-16-21.png](https://issues.qgis.org/attachments/download/14756/Screenshot%20from%202019-04-07%2023-16-21.png) (Cory Albrecht)
|
process
|
processing toolbox does not handle array fields author name cory albrecht coryalbrecht original redmine issue affected qgis version redmine category processing core so i just did fix geometries on my postgis layer that has array fields when i deleted the shapes from it and copied the new features back when i went to save that layer nothing would save all i got was simple message log error line saying that arrays had to start with and no postgis errors my postgis layer has several fields that are arrays so i did not know which one it was referring to eventually i realised that the fix geometries tool had converted all array fields to simple string fields so a postgis array like en kingdom of bavaria became the literal string en kingdom of bavaria when the shapes were copied back to the postgis layer qgis was still seeing them as strings not as arrays and apparently converting them improperly when trying to create the sql to update the postgis layer i assume the problem was with creating sql because there were no postgis errors just the ambiguous error about needing the for an array based on other bugs i have filed having to do with how qgis handles array fields it seems to me that whatever redesign of the internals of how a layer and it s attributes are stored in memory that redesign seems to be faulty at the core as it communicates wrong information to the higher level functions that do things like run the processing tools like displaying the feature attributes form arrays are converted to strings but truncated like when trying to update postgis layers nulls getting passed as empty strings and now this cory albrecht cory albrecht
| 1
|
3,842
| 6,808,534,096
|
IssuesEvent
|
2017-11-04 04:12:18
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
reopened
|
Parity pull references "pagination for tracing" api. Can we use it?
|
libs-etherlib status-inprocess type-info
|
h t t p s : / / g i t h u b .com/paritytech/parity/pull/6751/files
|
1.0
|
Parity pull references "pagination for tracing" api. Can we use it? - h t t p s : / / g i t h u b .com/paritytech/parity/pull/6751/files
|
process
|
parity pull references pagination for tracing api can we use it h t t p s g i t h u b com paritytech parity pull files
| 1
|
13,028
| 15,380,504,129
|
IssuesEvent
|
2021-03-02 21:12:10
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
identify all minikube tech debt for Q4
|
kind/process priority/important-longterm
|
- [ ] iso push after deploy/ISO PR merges (test against a pushed merged ISO on head)
- [ ] kic base image push after delpoy/kic-base/Dockerfile PR merge to image repositories (gcr, docker hub, ...)
- [ ] storage provisioner and gvisor image push on merge
- [ ] health check Azure Windows VMs (including provision, keep alive, install tools)
- [ ] identify debian 9 vs debian 10 integration test differences. (weird results on debian 10)
- [ ] PR Bot to auto Comment on the PR the flake rate for each test similar to https://github.com/kubernetes/minikube/issues/9073#issuecomment-684060703 (using Thomas' script on jenkins)
- [ ] bump preload version when we change aux example image https://github.com/kubernetes/minikube/pull/9159/files
- [ ] Ensure Gh Action tests get triggered by go.mod changes https://github.com/kubernetes/minikube/pull/9160
- [ ] Make do-not-sleep run on windows machine as a service
- [ ] Add Stress Test to Jenkins
- [x] HTML report for integration test in Makefile https://github.com/kubernetes/minikube/issues/9212
- [ ] update kuberntes docs minikube installation https://kubernetes.io/docs/tasks/tools/install-minikube/
- [ ] Figure out the docker desktop File Sharing Notification.
- [ ] Brew PR opener in jenkins run in docker https://github.com/kubernetes/minikube/issues/7036
- [] ok to test be required for gh action
|
1.0
|
identify all minikube tech debt for Q4 - - [ ] iso push after deploy/ISO PR merges (test against a pushed merged ISO on head)
- [ ] kic base image push after delpoy/kic-base/Dockerfile PR merge to image repositories (gcr, docker hub, ...)
- [ ] storage provisioner and gvisor image push on merge
- [ ] health check Azure Windows VMs (including provision, keep alive, install tools)
- [ ] identify debian 9 vs debian 10 integration test differences. (weird results on debian 10)
- [ ] PR Bot to auto Comment on the PR the flake rate for each test similar to https://github.com/kubernetes/minikube/issues/9073#issuecomment-684060703 (using Thomas' script on jenkins)
- [ ] bump preload version when we change aux example image https://github.com/kubernetes/minikube/pull/9159/files
- [ ] Ensure Gh Action tests get triggered by go.mod changes https://github.com/kubernetes/minikube/pull/9160
- [ ] Make do-not-sleep run on windows machine as a service
- [ ] Add Stress Test to Jenkins
- [x] HTML report for integration test in Makefile https://github.com/kubernetes/minikube/issues/9212
- [ ] update kuberntes docs minikube installation https://kubernetes.io/docs/tasks/tools/install-minikube/
- [ ] Figure out the docker desktop File Sharing Notification.
- [ ] Brew PR opener in jenkins run in docker https://github.com/kubernetes/minikube/issues/7036
- [] ok to test be required for gh action
|
process
|
identify all minikube tech debt for iso push after deploy iso pr merges test against a pushed merged iso on head kic base image push after delpoy kic base dockerfile pr merge to image repositories gcr docker hub storage provisioner and gvisor image push on merge health check azure windows vms including provision keep alive install tools identify debian vs debian integration test differences weird results on debian pr bot to auto comment on the pr the flake rate for each test similar to using thomas script on jenkins bump preload version when we change aux example image ensure gh action tests get triggered by go mod changes make do not sleep run on windows machine as a service add stress test to jenkins html report for integration test in makefile update kuberntes docs minikube installation figure out the docker desktop file sharing notification brew pr opener in jenkins run in docker ok to test be required for gh action
| 1
|
12,510
| 14,962,203,810
|
IssuesEvent
|
2021-01-27 08:57:14
|
UserOfficeProject/stfc-user-office-project
|
https://api.github.com/repos/UserOfficeProject/stfc-user-office-project
|
opened
|
Capture ISIS FAP requirements for proposal submission software
|
origin: project size: 5 type: process
|
I need to capture ISIS panel requirements for proposal submission software.
- [ ] ISIS FAP secretaries and representatives
- [ ] Panel members
This is to focus on:
- [ ] the ISIS PDF documents
- [ ] the ISIS questionnaire
|
1.0
|
Capture ISIS FAP requirements for proposal submission software - I need to capture ISIS panel requirements for proposal submission software.
- [ ] ISIS FAP secretaries and representatives
- [ ] Panel members
This is to focus on:
- [ ] the ISIS PDF documents
- [ ] the ISIS questionnaire
|
process
|
capture isis fap requirements for proposal submission software i need to capture isis panel requirements for proposal submission software isis fap secretaries and representatives panel members this is to focus on the isis pdf documents the isis questionnaire
| 1
|
22,549
| 31,724,888,954
|
IssuesEvent
|
2023-09-10 20:47:10
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
metaflow-netflixext 1.0.1 has 1 GuardDog issues
|
guarddog silent-process-execution
|
https://pypi.org/project/metaflow-netflixext
https://inspector.pypi.io/project/metaflow-netflixext
```{
"dependency": "metaflow-netflixext",
"version": "1.0.1",
"result": {
"issues": 1,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "metaflow-netflixext-1.0.1/metaflow_extensions/netflix_ext/plugins/conda/conda.py:2522",
"code": " p = subprocess.Popen(\n [\n self._bins[\"micromamba\"],\n \"-r\",\n os.path.dirname(self._package_dirs[0]),\n \"server\",\n \"-p\",\n... )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpm22q_vo4/metaflow-netflixext"
}
}```
|
1.0
|
metaflow-netflixext 1.0.1 has 1 GuardDog issues - https://pypi.org/project/metaflow-netflixext
https://inspector.pypi.io/project/metaflow-netflixext
```{
"dependency": "metaflow-netflixext",
"version": "1.0.1",
"result": {
"issues": 1,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "metaflow-netflixext-1.0.1/metaflow_extensions/netflix_ext/plugins/conda/conda.py:2522",
"code": " p = subprocess.Popen(\n [\n self._bins[\"micromamba\"],\n \"-r\",\n os.path.dirname(self._package_dirs[0]),\n \"server\",\n \"-p\",\n... )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpm22q_vo4/metaflow-netflixext"
}
}```
|
process
|
metaflow netflixext has guarddog issues dependency metaflow netflixext version result issues errors results silent process execution location metaflow netflixext metaflow extensions netflix ext plugins conda conda py code p subprocess popen n n r n os path dirname self package dirs n server n p n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp metaflow netflixext
| 1
|
3,295
| 6,395,286,100
|
IssuesEvent
|
2017-08-04 12:52:06
|
pelias/pelias
|
https://api.github.com/repos/pelias/pelias
|
closed
|
[Documentation] Writing an importer for a custom dataset
|
processed question
|
It would be nice to have documentation around writing a new importer or how to re-use some of our existing importers to import custom dataset.
It would be worthy to note the required changes to other repos including schema and API to get the custom dataset/type flow through pelias. For example, if we add a new custom dataset called 'test_xyz' we need to add 'test_xyz' to [api/helper/layers.js](https://github.com/pelias/api/blob/master/helper/layers.js) and [api/queries/indeces.js](https://github.com/pelias/api/blob/master/query/indeces.js) to get the API to pick up the new layer. We need to add it to schema mapping as well.
Lets put this in a document `write_a_new_importer.md` perhaps?
@flotpk was trying to extend the open-addresses importer to work with a custom dataset and ran into issues. This proposed document might be helpful. does this sound reasonable?
|
1.0
|
[Documentation] Writing an importer for a custom dataset - It would be nice to have documentation around writing a new importer or how to re-use some of our existing importers to import custom dataset.
It would be worthy to note the required changes to other repos including schema and API to get the custom dataset/type flow through pelias. For example, if we add a new custom dataset called 'test_xyz' we need to add 'test_xyz' to [api/helper/layers.js](https://github.com/pelias/api/blob/master/helper/layers.js) and [api/queries/indeces.js](https://github.com/pelias/api/blob/master/query/indeces.js) to get the API to pick up the new layer. We need to add it to schema mapping as well.
Lets put this in a document `write_a_new_importer.md` perhaps?
@flotpk was trying to extend the open-addresses importer to work with a custom dataset and ran into issues. This proposed document might be helpful. does this sound reasonable?
|
process
|
writing an importer for a custom dataset it would be nice to have documentation around writing a new importer or how to re use some of our existing importers to import custom dataset it would be worthy to note the required changes to other repos including schema and api to get the custom dataset type flow through pelias for example if we add a new custom dataset called test xyz we need to add test xyz to and to get the api to pick up the new layer we need to add it to schema mapping as well lets put this in a document write a new importer md perhaps flotpk was trying to extend the open addresses importer to work with a custom dataset and ran into issues this proposed document might be helpful does this sound reasonable
| 1
|
20
| 2,496,261,619
|
IssuesEvent
|
2015-01-06 18:14:03
|
vivo-isf/vivo-isf-ontology
|
https://api.github.com/repos/vivo-isf/vivo-isf-ontology
|
closed
|
Erythropoiesis
|
biological_process imported
|
_From [fcold...@eagle-i.org](https://code.google.com/u/113677139039624182507/) on March 21, 2013 11:30:57_
\<b>**** Use the form below to request a new term ****</b>
\<b>**** Scroll down to see a term request example ****</b>
\<b>Please indicate the label for the proposed term:</b>
Erythropoiesis
\<b>Please provide a textual definition (with source):</b>
Erythropoiesis is the process by which red blood cells (erythrocytes) are produced.
\<a href="http://en.wikipedia.org/wiki/Erythropoiesis" rel="nofollow">http://en.wikipedia.org/wiki/Erythropoiesis</a>
\<b>Please add an example of usage for proposed term:</b>
As a biological process related to a database (<a href="http://www.cbil.upenn.edu/ErythronDB/resources.jsp;jsessionid=285321AC99B7608810523451BC3191B2" rel="nofollow">http://www.cbil.upenn.edu/ErythronDB/resources.jsp;jsessionid=285321AC99B7608810523451BC3191B2</a>)
\<b>Please provide any additional optional information below. (e.g. desired</b>
\<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b>
\<b>[ ] Instrument</b>
[X] Biological process
\<b>[ ] Disease</b>
\<b>[ ] Human studies</b>
\<b>[ ] Instrument</b>
\<b>[ ] Organism</b>
\<b>[ ] Reagent</b>
\<b>[ ] Software</b>
\<b>[ ] Technique</b>
\<b>[ ] Organization</b>
\<b>Additional info:</b>
\<b>*** Term request example ****</b>
\<b>Please indicate the label for the proposed term: four-terminal resistance</b>
\<b>sensor</b>
Please provide a textual definition (with source): "Four-terminal
\<b>resistance sensors are electrical impedance measuring instruments that use</b>
\<b>separate pairs of current-carrying and voltage-sensing electrodes to make</b>
\<b>accurate measurements that can be used to compute a material's electrical</b>
resistance." \<a href="http://en.wikipedia.org/wiki/Four-terminal_sensing" rel="nofollow">http://en.wikipedia.org/wiki/Four-terminal_sensing</a>
\<b>Please add an example of usage for proposed term: Measuring the inherent</b>
\<b>(per square) resistance of doped silicon.</b>
\<b>Please provide any additional optional information below. (e.g. desired</b>
\<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b>
\<b>[X] Instrument</b>
\<b>[ ] Biological process</b>
\<b>[ ] Disease</b>
\<b>[ ] Human studies</b>
\<b>[ ] Instrument</b>
\<b>[ ] Organism</b>
\<b>[ ] Reagent</b>
\<b>[ ] Software</b>
\<b>[ ] Technique</b>
\<b>[ ] Organization</b>
\<b>Additional info: AKA - 4T sensors, 4-wire sensor, or 4-point probe</b>
_Original issue: http://code.google.com/p/eagle-i/issues/detail?id=200_
|
1.0
|
Erythropoiesis - _From [fcold...@eagle-i.org](https://code.google.com/u/113677139039624182507/) on March 21, 2013 11:30:57_
\<b>**** Use the form below to request a new term ****</b>
\<b>**** Scroll down to see a term request example ****</b>
\<b>Please indicate the label for the proposed term:</b>
Erythropoiesis
\<b>Please provide a textual definition (with source):</b>
Erythropoiesis is the process by which red blood cells (erythrocytes) are produced.
\<a href="http://en.wikipedia.org/wiki/Erythropoiesis" rel="nofollow">http://en.wikipedia.org/wiki/Erythropoiesis</a>
\<b>Please add an example of usage for proposed term:</b>
As a biological process related to a database (<a href="http://www.cbil.upenn.edu/ErythronDB/resources.jsp;jsessionid=285321AC99B7608810523451BC3191B2" rel="nofollow">http://www.cbil.upenn.edu/ErythronDB/resources.jsp;jsessionid=285321AC99B7608810523451BC3191B2</a>)
\<b>Please provide any additional optional information below. (e.g. desired</b>
\<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b>
\<b>[ ] Instrument</b>
[X] Biological process
\<b>[ ] Disease</b>
\<b>[ ] Human studies</b>
\<b>[ ] Instrument</b>
\<b>[ ] Organism</b>
\<b>[ ] Reagent</b>
\<b>[ ] Software</b>
\<b>[ ] Technique</b>
\<b>[ ] Organization</b>
\<b>Additional info:</b>
\<b>*** Term request example ****</b>
\<b>Please indicate the label for the proposed term: four-terminal resistance</b>
\<b>sensor</b>
Please provide a textual definition (with source): "Four-terminal
\<b>resistance sensors are electrical impedance measuring instruments that use</b>
\<b>separate pairs of current-carrying and voltage-sensing electrodes to make</b>
\<b>accurate measurements that can be used to compute a material's electrical</b>
resistance." \<a href="http://en.wikipedia.org/wiki/Four-terminal_sensing" rel="nofollow">http://en.wikipedia.org/wiki/Four-terminal_sensing</a>
\<b>Please add an example of usage for proposed term: Measuring the inherent</b>
\<b>(per square) resistance of doped silicon.</b>
\<b>Please provide any additional optional information below. (e.g. desired</b>
\<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b>
\<b>[X] Instrument</b>
\<b>[ ] Biological process</b>
\<b>[ ] Disease</b>
\<b>[ ] Human studies</b>
\<b>[ ] Instrument</b>
\<b>[ ] Organism</b>
\<b>[ ] Reagent</b>
\<b>[ ] Software</b>
\<b>[ ] Technique</b>
\<b>[ ] Organization</b>
\<b>Additional info: AKA - 4T sensors, 4-wire sensor, or 4-point probe</b>
_Original issue: http://code.google.com/p/eagle-i/issues/detail?id=200_
|
process
|
erythropoiesis from on march use the form below to request a new term scroll down to see a term request example please indicate the label for the proposed term erythropoiesis please provide a textual definition with source erythropoiesis is the process by which red blood cells erythrocytes are produced please add an example of usage for proposed term as a biological process related to a database please provide any additional optional information below e g desired asserted superclass in ero hierarchy or reference branch instrument biological process disease human studies instrument organism reagent software technique organization additional info term request example please indicate the label for the proposed term four terminal resistance sensor please provide a textual definition with source four terminal resistance sensors are electrical impedance measuring instruments that use separate pairs of current carrying and voltage sensing electrodes to make accurate measurements that can be used to compute a material s electrical resistance please add an example of usage for proposed term measuring the inherent per square resistance of doped silicon please provide any additional optional information below e g desired asserted superclass in ero hierarchy or reference branch instrument biological process disease human studies instrument organism reagent software technique organization additional info aka sensors wire sensor or point probe original issue
| 1
|
14,932
| 18,359,530,779
|
IssuesEvent
|
2021-10-09 01:46:11
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
DomProcessor._isOpenLinkInIframe work wrong with named top window
|
TYPE: bug AREA: client SYSTEM: URL processing AREA: server FREQUENCY: level 1 STATE: Stale
|
Markup for reproduce:
``` html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<script>
window.name = 'main_window_name';
</script>
<form id='form' target="main_window_name" action="/">
<input value="Text">
<input type="submit">
</form>
</body>
</html>
```
After proxing the `action` will have `http://<proxy-host-name>/<sessionId>!if/<siteUrl>` value.
It means that form is placed inside iframe. It is wrong.
Url should contains only `f` resource type letter.
|
1.0
|
DomProcessor._isOpenLinkInIframe work wrong with named top window - Markup for reproduce:
``` html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<script>
window.name = 'main_window_name';
</script>
<form id='form' target="main_window_name" action="/">
<input value="Text">
<input type="submit">
</form>
</body>
</html>
```
After proxing the `action` will have `http://<proxy-host-name>/<sessionId>!if/<siteUrl>` value.
It means that form is placed inside iframe. It is wrong.
Url should contains only `f` resource type letter.
|
process
|
domprocessor isopenlinkiniframe work wrong with named top window markup for reproduce html title window name main window name after proxing the action will have value it means that form is placed inside iframe it is wrong url should contains only f resource type letter
| 1
|
21,519
| 29,804,060,485
|
IssuesEvent
|
2023-06-16 10:15:15
|
Altinn/app-frontend-react
|
https://api.github.com/repos/Altinn/app-frontend-react
|
closed
|
End user navigation between process tasks
|
org/brg org/ssb Epic kind/analysis area/process feature-complete
|
## Description
When we give the app developers more possible tasks in a process, including the ability to define what tasks are possible to move between, we need to show the end user using the app in altinn.no how to navigate between the tasks.
### Ideas
- Create classes to represent different parts of a process. Class for the process itself, classes for tasks, sequences, conditions etc.
- Create a system for "running" generic tasks.
## In scope
- To what degree do we need / want to show the user how far in a process they have gotten?
- How do we enable navigating to the next / previous task?
- When/how do we allow navigating to/from the inbox (or other parts of SBL)?
## Out of scope
This analysis should not define new possible tasks in a process - use tasks already identified at the start of the analysis as foundation.
## Consideration
### Return back to previous task
We need to use BPMN to design the process that supports a return to a previous task. In BPMN this is the current template with only "one way"

```xml
<bpmn2:definitions id="Altinn_Data_Confirmation_Process_Definition">
<bpmn2:process id="Altinn_Data_Confirmation_Process" isExecutable="false">
<bpmn2:startEvent id="StartEvent_1">
<bpmn2:outgoing>SequenceFlow_1</bpmn2:outgoing>
</bpmn2:startEvent>
<bpmn2:task id="Task_1" name="Fyll ut skjema" altinn:tasktype="data">
<bpmn2:incoming>SequenceFlow_1</bpmn2:incoming>
<bpmn2:outgoing>SequenceFlow_2</bpmn2:outgoing>
</bpmn2:task>
<bpmn2:task id="Task_2" name="Bekreft skjemadata" altinn:tasktype="confirmation">
<bpmn2:incoming>SequenceFlow_2</bpmn2:incoming>
<bpmn2:outgoing>SequenceFlow_3</bpmn2:outgoing>
</bpmn2:task>
<bpmn2:endEvent id="EndEvent_1">
<bpmn2:incoming>SequenceFlow_3</bpmn2:incoming>
</bpmn2:endEvent>
<bpmn2:sequenceFlow id="SequenceFlow_1" sourceRef="StartEvent_1" targetRef="Task_1" />
<bpmn2:sequenceFlow id="SequenceFlow_2" sourceRef="Task_1" targetRef="Task_2" />
<bpmn2:sequenceFlow id="SequenceFlow_3" sourceRef="Task_2" targetRef="EndEvent_1" />
</bpmn2:process>
</bpmn2:definitions>
```
To support return the BPMN process need to add explicit flows for the return. The below diagram and BPMN file describe the above process with added return flow from Task 2 to Task 1. To implement

```xml
<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions id="Definitions_10dklqb">
<bpmn:process id="Process_1so39gt" isExecutable="false">
<bpmn:startEvent id="StartEvent_1s67p7b" />
<bpmn:startEvent id="Event_Start">
<bpmn:outgoing>SequenceFlow_1</bpmn:outgoing>
</bpmn:startEvent>
<bpmn:task id="Task_1" name="Fyll ut skjema" altinn:tasktype="data">
<bpmn:incoming>SequenceFlow_1</bpmn:incoming>
<bpmn:incoming>SequenceFlow_3</bpmn:incoming>
<bpmn:outgoing>SequenceFlow_2</bpmn:outgoing>
</bpmn:task>
<bpmn:sequenceFlow id="SequenceFlow_1" sourceRef="Task_1" targetRef="Task_2" />
<bpmn:task id="Task_2" name="Bekreft skjemadata" altinn:tasktype="confirmation">
<bpmn:incoming>SequenceFlow_2</bpmn:incoming>
<bpmn:outgoing>SequenceFlow_3</bpmn:outgoing>
</bpmn:task>
<bpmn:sequenceFlow id="SequenceFlow_2" sourceRef="Task_1" targetRef="Task_2" />
<bpmn:exclusiveGateway id="Gateway_1">
<bpmn:incoming>SequenceFlow_3</bpmn:incoming>
<bpmn:outgoing>SequenceFlow_4</bpmn:outgoing>
<bpmn:outgoing>SequenceFlow_5</bpmn:outgoing>
</bpmn:exclusiveGateway>
<bpmn:sequenceFlow id="SequenceFlow_3" sourceRef="Task_2" targetRef="Gateway_1" />
<bpmn:sequenceFlow id="SequenceFlow_5" sourceRef="Gateway_1" targetRef="Task_1" altinn:taskaction:leave="true" />
<bpmn:endEvent id="Event_End">
<bpmn:incoming>SequenceFlow_4</bpmn:incoming>
</bpmn:endEvent>
<bpmn:sequenceFlow id="SequenceFlow_4" sourceRef="Gateway_1" targetRef="Event_End" />
</bpmn:process>
</bpmn:definitions>
```
### Optional / conditional tasks
A common scenario is to skip tasks based on business rules. The following diagram shows a flow with a optional signing step. This is a common scenario where the business rule could be based on data in form. (example minimum sales amount)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:dc="http://www.omg.org/spec/DD/20100524/DC" xmlns:di="http://www.omg.org/spec/DD/20100524/DI" id="Definitions_1y7mtx0" targetNamespace="http://bpmn.io/schema/bpmn" exporter="bpmn-js (https://demo.bpmn.io)" exporterVersion="8.7.2">
<bpmn:process id="Process_04vnahs" isExecutable="false">
<bpmn:startEvent id="StartEvent">
<bpmn:outgoing>Flow1</bpmn:outgoing>
</bpmn:startEvent>
<bpmn:task id="Task1">
<bpmn:incoming>Flow1</bpmn:incoming>
<bpmn:incoming>Flow5</bpmn:incoming>
<bpmn:incoming>Flow7</bpmn:incoming>
<bpmn:outgoing>Flow2</bpmn:outgoing>
</bpmn:task>
<bpmn:task id="Task2">
<bpmn:incoming>Flow2</bpmn:incoming>
<bpmn:outgoing>Flow3</bpmn:outgoing>
</bpmn:task>
<bpmn:exclusiveGateway id="GateWay1">
<bpmn:incoming>Flow3</bpmn:incoming>
<bpmn:outgoing>Flow5</bpmn:outgoing>
<bpmn:outgoing>Flow4</bpmn:outgoing>
<bpmn:outgoing>Flow8</bpmn:outgoing>
</bpmn:exclusiveGateway>
<bpmn:task id="Task3">
<bpmn:incoming>Flow4</bpmn:incoming>
<bpmn:outgoing>Flow6</bpmn:outgoing>
</bpmn:task>
<bpmn:exclusiveGateway id="Gateway2">
<bpmn:incoming>Flow6</bpmn:incoming>
<bpmn:outgoing>Flow7</bpmn:outgoing>
<bpmn:outgoing>Flow9</bpmn:outgoing>
</bpmn:exclusiveGateway>
<bpmn:sequenceFlow id="Flow1" sourceRef="StartEvent" targetRef="Task1" />
<bpmn:sequenceFlow id="Flow2" sourceRef="Task1" targetRef="Task2" />
<bpmn:sequenceFlow id="Flow3" sourceRef="Task2" targetRef="GateWay1" />
<bpmn:sequenceFlow id="Flow5" sourceRef="GateWay1" targetRef="Task1" />
<bpmn:sequenceFlow id="Flow4" sourceRef="GateWay1" targetRef="Task3" />
<bpmn:sequenceFlow id="Flow6" sourceRef="Task3" targetRef="Gateway2" />
<bpmn:sequenceFlow id="Flow7" sourceRef="Gateway2" targetRef="Task1" />
<bpmn:endEvent id="EndEvent">
<bpmn:incoming>Flow9</bpmn:incoming>
<bpmn:incoming>Flow8</bpmn:incoming>
</bpmn:endEvent>
<bpmn:sequenceFlow id="Flow8" sourceRef="GateWay1" targetRef="EndEvent" />
<bpmn:sequenceFlow id="Flow9" sourceRef="Gateway2" targetRef="EndEvent" />
</bpmn:process>
</bpmn:definitions>
```
If we use an exclusive gateway the next method could identify that there are multiple flows and trigger a new App interface method that the app developer controls. That method could then be populated with logic that decided what the next task is.
### Task validation
The current behavior is to validate if a task can be ended. That includes validating if the required attachment is attached and all data models connected to the task are valid. But this is not relevant if you are going back in the process.
We need to update the code so when the process is going backward this validation is not required. The BPMN needs to define which flows that does not require validation. See suggestion in XML
### Valid scenarios
We need to define what kind of scenarios Altinn Apps really want to support.
- [ ] Go back from ConformationTask to previous data task? Suggestion Yes
- [ ] Go back from SigningTask to previous data task? Suggestion Yes

The following flows would be invalid to set up in an Altinn Apps application
- Go back to a signing task when signing has been completed
- Go back from a data task
- Go back passing a data task
- Go back passing a signing task (reason: Signatures might be already downloaded and processed)
- Go back passing a complete confirmation task (reason: Data might be already downloaded and processed)
### Technical flows
#### Return to previous task
Technical flow when the instance is in Task 2 and user want to go back to Task1

#### Optional
In this process, the user does not dictate the next task, but the business logic decides based on the instance, possible the data and other aspects which task is next.

To support this we need a new method in App interface that let app developer build custom code to define which route from a gateway. Input to this needs to be instance and the process/gateway info.
In this way the logic can rely on the instanceValues or the data from the instance (using storage API)
### Process navigation in frontend
The current views just expect that the only way to move in the process is forward. When there are multiple ways to go from a current task frontend needs to present the options to the users that they can go back to the defined

Frontend needs a way to get the Process options from current task. For that we need a new API that presented the current options, the title on that option and some indication which direction it is.
### Task Authorization
We need to support the authorization of the user that chooses to move the process backward.
It is defined for each task type what is required to perform forward movement of process but it is not described what is required to backward.
Suggestion: User is authorized for Read at current task.
### Default flows
We need to consider defining default flows if there are multiple flows out from a gateway. This to simplyfi API calls and to make it possible to trigger CompleteProcess.
See suggested example.
### Coded Gateways.
## Specification task
- [ ] Define how a process flow can define that it is backward and does not need a valid current task to leave
## Development task
- [x] Update [BPMN reader next elements](https://github.com/Altinn/altinn-studio/blob/master/src/Altinn.Apps/AppTemplates/AspNet/Altinn.App.Common/Process/BpmnReader.cs#L97) to support gateways to identify possible
- [x] Update [CanTaskBeEnded](https://github.com/Altinn/altinn-studio/blob/master/src/Altinn.Apps/AppTemplates/AspNet/Altinn.App.Api/Controllers/ProcessController.cs#L346) to support taking in next element and based on that identify that task does not need to be validated since it is a backward direction
- [ ] Update
- [ ] Create OnProcessTaskRevisit method that unlock data
- [ ] Update RegisterEventWithEventsComponent to support publishing of revisitTask
- [x] Update [AuthorizeAction](https://github.com/Altinn/altinn-studio/blob/master/src/Altinn.Apps/AppTemplates/AspNet/Altinn.App.Api/Controllers/ProcessController.cs#L555) and calling code so it nows it is a "backward" movement
## Alternative technical approach
In the current implementation, lots of business logic is located inside the process controller and the BMPN reader just for information about the process. The process is not responsible for doing any work.
Another approach is to introduce and ProcessEngine that is responsible for performing all business logic and trigger all other components that need to do work related to BPMN process.
See Altinn/altinn-studio#7067 for details
TBA
## Analysis
Points discussed during a meeting Thursday 9. September:
1. Being able to navigate backwards in the process should require an explicit sequence flow going out of the current task and back to a earlier task in the process. It should not be possible to go back in a process if there are no such sequence flow declared in the BPMN.
It should already be possible to perform the "back" sequence by providing the id of the task the user want to go to as input to the "go next" API. The process "engine" will validate and ensure that it is valid by checking for a sequence flow.
2. One major hurdle is to unlock locked data elements. If the current task is a confirm "step", data elements have been locked, preventing any updates even if the process goes back to an earlier task. The data elements must bee opened again at some point.
Possible solutions:
1. Logic attached to a sequence flow performs the unlock. The logic could be triggered based on a naming convention of a sequence flow or a custom, Altinn specific, attribute on the sequence flow declaration.
2. Declaring a new type of task that can be automatically executed by the App. This is almost identical to the first suggestion. Instead of having the sequence flow point directly back to the earlier task it points to the automated "unlock task" which in turn points to the earlier process task. This is more complex, but we avoid attaching business logic to a sequence flow.
The issue here is the visual representation and added complexity for the app developer when working with the process. "Why do we need this weird task?" A tiny remedy could be to also have a separate task for locking, to pair up with unlocking.
3. We no longer lock the data elements when moving into a confirm task. Instead we prevent changes from being performed through authorization rules.
3. Instead of letting external systems (like the frontend) "navigate" the process by calling next, the decision making is moved to the process itself. The idea is that tasks declare themselves complete with some sort of end state and give control to the process that provides a new task (or end event) based on the situation.
1. The tricky bit here would be to provide the process the necessary information to determine what to do. Gateways can be coded and allowed access to instances that could be expanded with additional properties if needed. The logic could also get access to the data elements and even the actual data if needed.
## Conclusion
TBD
## Tasks
- [ ] Is this issue labeled with a correct area label?
- [ ] QA has been done
|
1.0
|
End user navigation between process tasks - ## Description
When we give the app developers more possible tasks in a process, including the ability to define what tasks are possible to move between, we need to show the end user using the app in altinn.no how to navigate between the tasks.
### Ideas
- Create classes to represent different parts of a process. Class for the process itself, classes for tasks, sequences, conditions etc.
- Create a system for "running" generic tasks.
## In scope
- To what degree do we need / want to show the user how far in a process they have gotten?
- How do we enable navigating to the next / previous task?
- When/how do we allow navigating to/from the inbox (or other parts of SBL)?
## Out of scope
This analysis should not define new possible tasks in a process - use tasks already identified at the start of the analysis as foundation.
## Consideration
### Return back to previous task
We need to use BPMN to design the process that supports a return to a previous task. In BPMN this is the current template with only "one way"

```xml
<bpmn2:definitions id="Altinn_Data_Confirmation_Process_Definition">
<bpmn2:process id="Altinn_Data_Confirmation_Process" isExecutable="false">
<bpmn2:startEvent id="StartEvent_1">
<bpmn2:outgoing>SequenceFlow_1</bpmn2:outgoing>
</bpmn2:startEvent>
<bpmn2:task id="Task_1" name="Fyll ut skjema" altinn:tasktype="data">
<bpmn2:incoming>SequenceFlow_1</bpmn2:incoming>
<bpmn2:outgoing>SequenceFlow_2</bpmn2:outgoing>
</bpmn2:task>
<bpmn2:task id="Task_2" name="Bekreft skjemadata" altinn:tasktype="confirmation">
<bpmn2:incoming>SequenceFlow_2</bpmn2:incoming>
<bpmn2:outgoing>SequenceFlow_3</bpmn2:outgoing>
</bpmn2:task>
<bpmn2:endEvent id="EndEvent_1">
<bpmn2:incoming>SequenceFlow_3</bpmn2:incoming>
</bpmn2:endEvent>
<bpmn2:sequenceFlow id="SequenceFlow_1" sourceRef="StartEvent_1" targetRef="Task_1" />
<bpmn2:sequenceFlow id="SequenceFlow_2" sourceRef="Task_1" targetRef="Task_2" />
<bpmn2:sequenceFlow id="SequenceFlow_3" sourceRef="Task_2" targetRef="EndEvent_1" />
</bpmn2:process>
</bpmn2:definitions>
```
To support return the BPMN process need to add explicit flows for the return. The below diagram and BPMN file describe the above process with added return flow from Task 2 to Task 1. To implement

```xml
<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions id="Definitions_10dklqb">
<bpmn:process id="Process_1so39gt" isExecutable="false">
<bpmn:startEvent id="StartEvent_1s67p7b" />
<bpmn:startEvent id="Event_Start">
<bpmn:outgoing>SequenceFlow_1</bpmn:outgoing>
</bpmn:startEvent>
<bpmn:task id="Task_1" name="Fyll ut skjema" altinn:tasktype="data">
<bpmn:incoming>SequenceFlow_1</bpmn:incoming>
<bpmn:incoming>SequenceFlow_3</bpmn:incoming>
<bpmn:outgoing>SequenceFlow_2</bpmn:outgoing>
</bpmn:task>
<bpmn:sequenceFlow id="SequenceFlow_1" sourceRef="Task_1" targetRef="Task_2" />
<bpmn:task id="Task_2" name="Bekreft skjemadata" altinn:tasktype="confirmation">
<bpmn:incoming>SequenceFlow_2</bpmn:incoming>
<bpmn:outgoing>SequenceFlow_3</bpmn:outgoing>
</bpmn:task>
<bpmn:sequenceFlow id="SequenceFlow_2" sourceRef="Task_1" targetRef="Task_2" />
<bpmn:exclusiveGateway id="Gateway_1">
<bpmn:incoming>SequenceFlow_3</bpmn:incoming>
<bpmn:outgoing>SequenceFlow_4</bpmn:outgoing>
<bpmn:outgoing>SequenceFlow_5</bpmn:outgoing>
</bpmn:exclusiveGateway>
<bpmn:sequenceFlow id="SequenceFlow_3" sourceRef="Task_2" targetRef="Gateway_1" />
<bpmn:sequenceFlow id="SequenceFlow_5" sourceRef="Gateway_1" targetRef="Task_1" altinn:taskaction:leave="true" />
<bpmn:endEvent id="Event_End">
<bpmn:incoming>SequenceFlow_4</bpmn:incoming>
</bpmn:endEvent>
<bpmn:sequenceFlow id="SequenceFlow_4" sourceRef="Gateway_1" targetRef="Event_End" />
</bpmn:process>
</bpmn:definitions>
```
### Optional / conditional tasks
A common scenario is to skip tasks based on business rules. The following diagram shows a flow with a optional signing step. This is a common scenario where the business rule could be based on data in form. (example minimum sales amount)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:dc="http://www.omg.org/spec/DD/20100524/DC" xmlns:di="http://www.omg.org/spec/DD/20100524/DI" id="Definitions_1y7mtx0" targetNamespace="http://bpmn.io/schema/bpmn" exporter="bpmn-js (https://demo.bpmn.io)" exporterVersion="8.7.2">
<bpmn:process id="Process_04vnahs" isExecutable="false">
<bpmn:startEvent id="StartEvent">
<bpmn:outgoing>Flow1</bpmn:outgoing>
</bpmn:startEvent>
<bpmn:task id="Task1">
<bpmn:incoming>Flow1</bpmn:incoming>
<bpmn:incoming>Flow5</bpmn:incoming>
<bpmn:incoming>Flow7</bpmn:incoming>
<bpmn:outgoing>Flow2</bpmn:outgoing>
</bpmn:task>
<bpmn:task id="Task2">
<bpmn:incoming>Flow2</bpmn:incoming>
<bpmn:outgoing>Flow3</bpmn:outgoing>
</bpmn:task>
<bpmn:exclusiveGateway id="GateWay1">
<bpmn:incoming>Flow3</bpmn:incoming>
<bpmn:outgoing>Flow5</bpmn:outgoing>
<bpmn:outgoing>Flow4</bpmn:outgoing>
<bpmn:outgoing>Flow8</bpmn:outgoing>
</bpmn:exclusiveGateway>
<bpmn:task id="Task3">
<bpmn:incoming>Flow4</bpmn:incoming>
<bpmn:outgoing>Flow6</bpmn:outgoing>
</bpmn:task>
<bpmn:exclusiveGateway id="Gateway2">
<bpmn:incoming>Flow6</bpmn:incoming>
<bpmn:outgoing>Flow7</bpmn:outgoing>
<bpmn:outgoing>Flow9</bpmn:outgoing>
</bpmn:exclusiveGateway>
<bpmn:sequenceFlow id="Flow1" sourceRef="StartEvent" targetRef="Task1" />
<bpmn:sequenceFlow id="Flow2" sourceRef="Task1" targetRef="Task2" />
<bpmn:sequenceFlow id="Flow3" sourceRef="Task2" targetRef="GateWay1" />
<bpmn:sequenceFlow id="Flow5" sourceRef="GateWay1" targetRef="Task1" />
<bpmn:sequenceFlow id="Flow4" sourceRef="GateWay1" targetRef="Task3" />
<bpmn:sequenceFlow id="Flow6" sourceRef="Task3" targetRef="Gateway2" />
<bpmn:sequenceFlow id="Flow7" sourceRef="Gateway2" targetRef="Task1" />
<bpmn:endEvent id="EndEvent">
<bpmn:incoming>Flow9</bpmn:incoming>
<bpmn:incoming>Flow8</bpmn:incoming>
</bpmn:endEvent>
<bpmn:sequenceFlow id="Flow8" sourceRef="GateWay1" targetRef="EndEvent" />
<bpmn:sequenceFlow id="Flow9" sourceRef="Gateway2" targetRef="EndEvent" />
</bpmn:process>
</bpmn:definitions>
```
If we use an exclusive gateway the next method could identify that there are multiple flows and trigger a new App interface method that the app developer controls. That method could then be populated with logic that decided what the next task is.
### Task validation
The current behavior is to validate if a task can be ended. That includes validating if the required attachment is attached and all data models connected to the task are valid. But this is not relevant if you are going back in the process.
We need to update the code so when the process is going backward this validation is not required. The BPMN needs to define which flows that does not require validation. See suggestion in XML
### Valid scenarios
We need to define what kind of scenarios Altinn Apps really want to support.
- [ ] Go back from ConformationTask to previous data task? Suggestion Yes
- [ ] Go back from SigningTask to previous data task? Suggestion Yes

The following flows would be invalid to set up in an Altinn Apps application
- Go back to a signing task when signing has been completed
- Go back from a data task
- Go back passing a data task
- Go back passing a signing task (reason: Signatures might be already downloaded and processed)
- Go back passing a complete confirmation task (reason: Data might be already downloaded and processed)
### Technical flows
#### Return to previous task
Technical flow when the instance is in Task 2 and user want to go back to Task1

#### Optional
In this process, the user does not dictate the next task, but the business logic decides based on the instance, possible the data and other aspects which task is next.

To support this we need a new method in App interface that let app developer build custom code to define which route from a gateway. Input to this needs to be instance and the process/gateway info.
In this way the logic can rely on the instanceValues or the data from the instance (using storage API)
### Process navigation in frontend
The current views just expect that the only way to move in the process is forward. When there are multiple ways to go from a current task frontend needs to present the options to the users that they can go back to the defined

Frontend needs a way to get the Process options from current task. For that we need a new API that presented the current options, the title on that option and some indication which direction it is.
### Task Authorization
We need to support the authorization of the user that chooses to move the process backward.
It is defined for each task type what is required to perform forward movement of process but it is not described what is required to backward.
Suggestion: User is authorized for Read at current task.
### Default flows
We need to consider defining default flows if there are multiple flows out from a gateway. This to simplyfi API calls and to make it possible to trigger CompleteProcess.
See suggested example.
### Coded Gateways.
## Specification task
- [ ] Define how a process flow can define that it is backward and does not need a valid current task to leave
## Development task
- [x] Update [BPMN reader next elements](https://github.com/Altinn/altinn-studio/blob/master/src/Altinn.Apps/AppTemplates/AspNet/Altinn.App.Common/Process/BpmnReader.cs#L97) to support gateways to identify possible
- [x] Update [CanTaskBeEnded](https://github.com/Altinn/altinn-studio/blob/master/src/Altinn.Apps/AppTemplates/AspNet/Altinn.App.Api/Controllers/ProcessController.cs#L346) to support taking in next element and based on that identify that task does not need to be validated since it is a backward direction
- [ ] Update
- [ ] Create OnProcessTaskRevisit method that unlock data
- [ ] Update RegisterEventWithEventsComponent to support publishing of revisitTask
- [x] Update [AuthorizeAction](https://github.com/Altinn/altinn-studio/blob/master/src/Altinn.Apps/AppTemplates/AspNet/Altinn.App.Api/Controllers/ProcessController.cs#L555) and calling code so it nows it is a "backward" movement
## Alternative technical approach
In the current implementation, lots of business logic is located inside the process controller and the BMPN reader just for information about the process. The process is not responsible for doing any work.
Another approach is to introduce and ProcessEngine that is responsible for performing all business logic and trigger all other components that need to do work related to BPMN process.
See Altinn/altinn-studio#7067 for details
TBA
## Analysis
Points discussed during a meeting Thursday 9. September:
1. Being able to navigate backwards in the process should require an explicit sequence flow going out of the current task and back to a earlier task in the process. It should not be possible to go back in a process if there are no such sequence flow declared in the BPMN.
It should already be possible to perform the "back" sequence by providing the id of the task the user want to go to as input to the "go next" API. The process "engine" will validate and ensure that it is valid by checking for a sequence flow.
2. One major hurdle is to unlock locked data elements. If the current task is a confirm "step", data elements have been locked, preventing any updates even if the process goes back to an earlier task. The data elements must bee opened again at some point.
Possible solutions:
1. Logic attached to a sequence flow performs the unlock. The logic could be triggered based on a naming convention of a sequence flow or a custom, Altinn specific, attribute on the sequence flow declaration.
2. Declaring a new type of task that can be automatically executed by the App. This is almost identical to the first suggestion. Instead of having the sequence flow point directly back to the earlier task it points to the automated "unlock task" which in turn points to the earlier process task. This is more complex, but we avoid attaching business logic to a sequence flow.
The issue here is the visual representation and added complexity for the app developer when working with the process. "Why do we need this weird task?" A tiny remedy could be to also have a separate task for locking, to pair up with unlocking.
3. We no longer lock the data elements when moving into a confirm task. Instead we prevent changes from being performed through authorization rules.
3. Instead of letting external systems (like the frontend) "navigate" the process by calling next, the decision making is moved to the process itself. The idea is that tasks declare themselves complete with some sort of end state and give control to the process that provides a new task (or end event) based on the situation.
1. The tricky bit here would be to provide the process the necessary information to determine what to do. Gateways can be coded and allowed access to instances that could be expanded with additional properties if needed. The logic could also get access to the data elements and even the actual data if needed.
## Conclusion
TBD
## Tasks
- [ ] Is this issue labeled with a correct area label?
- [ ] QA has been done
|
process
|
end user navigation between process tasks description when we give the app developers more possible tasks in a process including the ability to define what tasks are possible to move between we need to show the end user using the app in altinn no how to navigate between the tasks ideas create classes to represent different parts of a process class for the process itself classes for tasks sequences conditions etc create a system for running generic tasks in scope to what degree do we need want to show the user how far in a process they have gotten how do we enable navigating to the next previous task when how do we allow navigating to from the inbox or other parts of sbl out of scope this analysis should not define new possible tasks in a process use tasks already identified at the start of the analysis as foundation consideration return back to previous task we need to use bpmn to design the process that supports a return to a previous task in bpmn this is the current template with only one way xml sequenceflow sequenceflow sequenceflow sequenceflow sequenceflow sequenceflow to support return the bpmn process need to add explicit flows for the return the below diagram and bpmn file describe the above process with added return flow from task to task to implement xml sequenceflow sequenceflow sequenceflow sequenceflow sequenceflow sequenceflow sequenceflow sequenceflow sequenceflow sequenceflow optional conditional tasks a common scenario is to skip tasks based on business rules the following diagram shows a flow with a optional signing step this is a common scenario where the business rule could be based on data in form example minimum sales amount xml if we use an exclusive gateway the next method could identify that there are multiple flows and trigger a new app interface method that the app developer controls that method could then be populated with logic that decided what the next task is task validation the current behavior is to validate if a task can be ended that includes validating if the required attachment is attached and all data models connected to the task are valid but this is not relevant if you are going back in the process we need to update the code so when the process is going backward this validation is not required the bpmn needs to define which flows that does not require validation see suggestion in xml valid scenarios we need to define what kind of scenarios altinn apps really want to support go back from conformationtask to previous data task suggestion yes go back from signingtask to previous data task suggestion yes the following flows would be invalid to set up in an altinn apps application go back to a signing task when signing has been completed go back from a data task go back passing a data task go back passing a signing task reason signatures might be already downloaded and processed go back passing a complete confirmation task reason data might be already downloaded and processed technical flows return to previous task technical flow when the instance is in task and user want to go back to optional in this process the user does not dictate the next task but the business logic decides based on the instance possible the data and other aspects which task is next to support this we need a new method in app interface that let app developer build custom code to define which route from a gateway input to this needs to be instance and the process gateway info in this way the logic can rely on the instancevalues or the data from the instance using storage api process navigation in frontend the current views just expect that the only way to move in the process is forward when there are multiple ways to go from a current task frontend needs to present the options to the users that they can go back to the defined frontend needs a way to get the process options from current task for that we need a new api that presented the current options the title on that option and some indication which direction it is task authorization we need to support the authorization of the user that chooses to move the process backward it is defined for each task type what is required to perform forward movement of process but it is not described what is required to backward suggestion user is authorized for read at current task default flows we need to consider defining default flows if there are multiple flows out from a gateway this to simplyfi api calls and to make it possible to trigger completeprocess see suggested example coded gateways specification task define how a process flow can define that it is backward and does not need a valid current task to leave development task update to support gateways to identify possible update to support taking in next element and based on that identify that task does not need to be validated since it is a backward direction update create onprocesstaskrevisit method that unlock data update registereventwitheventscomponent to support publishing of revisittask update and calling code so it nows it is a backward movement alternative technical approach in the current implementation lots of business logic is located inside the process controller and the bmpn reader just for information about the process the process is not responsible for doing any work another approach is to introduce and processengine that is responsible for performing all business logic and trigger all other components that need to do work related to bpmn process see altinn altinn studio for details tba analysis points discussed during a meeting thursday september being able to navigate backwards in the process should require an explicit sequence flow going out of the current task and back to a earlier task in the process it should not be possible to go back in a process if there are no such sequence flow declared in the bpmn it should already be possible to perform the back sequence by providing the id of the task the user want to go to as input to the go next api the process engine will validate and ensure that it is valid by checking for a sequence flow one major hurdle is to unlock locked data elements if the current task is a confirm step data elements have been locked preventing any updates even if the process goes back to an earlier task the data elements must bee opened again at some point possible solutions logic attached to a sequence flow performs the unlock the logic could be triggered based on a naming convention of a sequence flow or a custom altinn specific attribute on the sequence flow declaration declaring a new type of task that can be automatically executed by the app this is almost identical to the first suggestion instead of having the sequence flow point directly back to the earlier task it points to the automated unlock task which in turn points to the earlier process task this is more complex but we avoid attaching business logic to a sequence flow the issue here is the visual representation and added complexity for the app developer when working with the process why do we need this weird task a tiny remedy could be to also have a separate task for locking to pair up with unlocking we no longer lock the data elements when moving into a confirm task instead we prevent changes from being performed through authorization rules instead of letting external systems like the frontend navigate the process by calling next the decision making is moved to the process itself the idea is that tasks declare themselves complete with some sort of end state and give control to the process that provides a new task or end event based on the situation the tricky bit here would be to provide the process the necessary information to determine what to do gateways can be coded and allowed access to instances that could be expanded with additional properties if needed the logic could also get access to the data elements and even the actual data if needed conclusion tbd tasks is this issue labeled with a correct area label qa has been done
| 1
|
16,212
| 20,737,379,940
|
IssuesEvent
|
2022-03-14 14:48:20
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
Have `handle_unknown="ignore"` by default in OneHotEncoder
|
Enhancement module:preprocessing
|
I would propose to make `handle_unknown="ignore"` the default in OneHotEncoder.
That's what one would want in most cases in practice, I believe. Real datasets often have infrequent categories and so depending on the train/test split the test set is likely to have some infrequent categories. Also in production systems, better to have unknown categories ignored than have the system crashing because of it.
This might be blocked due to a suboptimal interaction with the `drop` option https://github.com/scikit-learn/scikit-learn/issues/18072, and I'm not sure how this would interact with a few other proposed improvements to OHE lately.
|
1.0
|
Have `handle_unknown="ignore"` by default in OneHotEncoder - I would propose to make `handle_unknown="ignore"` the default in OneHotEncoder.
That's what one would want in most cases in practice, I believe. Real datasets often have infrequent categories and so depending on the train/test split the test set is likely to have some infrequent categories. Also in production systems, better to have unknown categories ignored than have the system crashing because of it.
This might be blocked due to a suboptimal interaction with the `drop` option https://github.com/scikit-learn/scikit-learn/issues/18072, and I'm not sure how this would interact with a few other proposed improvements to OHE lately.
|
process
|
have handle unknown ignore by default in onehotencoder i would propose to make handle unknown ignore the default in onehotencoder that s what one would want in most cases in practice i believe real datasets often have infrequent categories and so depending on the train test split the test set is likely to have some infrequent categories also in production systems better to have unknown categories ignored than have the system crashing because of it this might be blocked due to a suboptimal interaction with the drop option and i m not sure how this would interact with a few other proposed improvements to ohe lately
| 1
|
480,476
| 13,852,798,738
|
IssuesEvent
|
2020-10-15 07:07:52
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.messenger.com - site is not usable
|
browser-firefox engine-gecko priority-important
|
<!-- @browser: Firefox 81.0.1 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/59897 -->
**URL**: https://www.messenger.com/groupcall/create?
**Browser / Version**: Firefox 81.0.1
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
A simple user agent override makes the site usable
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.messenger.com - site is not usable - <!-- @browser: Firefox 81.0.1 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/59897 -->
**URL**: https://www.messenger.com/groupcall/create?
**Browser / Version**: Firefox 81.0.1
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
A simple user agent override makes the site usable
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
site is not usable url browser version firefox operating system windows tested another browser yes chrome problem type site is not usable description browser unsupported steps to reproduce a simple user agent override makes the site usable browser configuration none from with ❤️
| 0
|
74,607
| 7,434,077,237
|
IssuesEvent
|
2018-03-26 09:46:59
|
YusufIpek/CompilerConstruction
|
https://api.github.com/repos/YusufIpek/CompilerConstruction
|
reopened
|
Ast extension: statements
|
task 1 totest
|
Extend the existing ast-structure with statements:
- if
- while
- return
- compound
|
1.0
|
Ast extension: statements - Extend the existing ast-structure with statements:
- if
- while
- return
- compound
|
non_process
|
ast extension statements extend the existing ast structure with statements if while return compound
| 0
|
15,050
| 18,762,894,038
|
IssuesEvent
|
2021-11-05 18:46:13
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Change instances of moistureInAirComposition
|
Process Heating
|
Change to moistureInAirCombustion. These two refer to the same thing.
|
1.0
|
Change instances of moistureInAirComposition - Change to moistureInAirCombustion. These two refer to the same thing.
|
process
|
change instances of moistureinaircomposition change to moistureinaircombustion these two refer to the same thing
| 1
|
32,383
| 12,112,822,236
|
IssuesEvent
|
2020-04-21 14:20:59
|
wrbejar/VulnerableJavaWebApplicationEUA
|
https://api.github.com/repos/wrbejar/VulnerableJavaWebApplicationEUA
|
opened
|
CVE-2019-8331 (Medium) detected in bootstrap-3.3.7.min.js
|
security vulnerability
|
## CVE-2019-8331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to vulnerable library: /VulnerableJavaWebApplicationEUA/src/main/webapp/resources/js/bootstrap.min.js,/VulnerableJavaWebApplicationEUA/src/main/webapp/resources/js/bootstrap.min.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/wrbejar/VulnerableJavaWebApplicationEUA/commit/7bfbee08927b5431ce3ec79acd1a17e44f997efe">7bfbee08927b5431ce3ec79acd1a17e44f997efe</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/28236">https://github.com/twbs/bootstrap/pull/28236</a></p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"3.3.7","isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:3.3.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1"}],"vulnerabilityIdentifier":"CVE-2019-8331","vulnerabilityDetails":"In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-8331 (Medium) detected in bootstrap-3.3.7.min.js - ## CVE-2019-8331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to vulnerable library: /VulnerableJavaWebApplicationEUA/src/main/webapp/resources/js/bootstrap.min.js,/VulnerableJavaWebApplicationEUA/src/main/webapp/resources/js/bootstrap.min.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/wrbejar/VulnerableJavaWebApplicationEUA/commit/7bfbee08927b5431ce3ec79acd1a17e44f997efe">7bfbee08927b5431ce3ec79acd1a17e44f997efe</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/28236">https://github.com/twbs/bootstrap/pull/28236</a></p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"3.3.7","isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:3.3.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1"}],"vulnerabilityIdentifier":"CVE-2019-8331","vulnerabilityDetails":"In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library vulnerablejavawebapplicationeua src main webapp resources js bootstrap min js vulnerablejavawebapplicationeua src main webapp resources js bootstrap min js dependency hierarchy x bootstrap min js vulnerable library found in head commit a href vulnerability details in bootstrap before and x before xss is possible in the tooltip or popover data template attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap bootstrap sass isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in bootstrap before and x before xss is possible in the tooltip or popover data template attribute vulnerabilityurl
| 0
|
16,586
| 21,634,214,766
|
IssuesEvent
|
2022-05-05 12:53:12
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Database isolation in the new test setup doesn't work properly
|
process/candidate topic: internal topic: tests tech/typescript team/client
|
The regular expression in `setupTestSuiteDbURI()` doesn't seem to work (at least for the PostgreSQL connection string), so tests are using the same database, and, if run concurrently, cause constraint violations, "database does not exist" errors (when the `afterAll` hook of one test suite runs in parallel with the tests from another test suite) etc.
|
1.0
|
Database isolation in the new test setup doesn't work properly - The regular expression in `setupTestSuiteDbURI()` doesn't seem to work (at least for the PostgreSQL connection string), so tests are using the same database, and, if run concurrently, cause constraint violations, "database does not exist" errors (when the `afterAll` hook of one test suite runs in parallel with the tests from another test suite) etc.
|
process
|
database isolation in the new test setup doesn t work properly the regular expression in setuptestsuitedburi doesn t seem to work at least for the postgresql connection string so tests are using the same database and if run concurrently cause constraint violations database does not exist errors when the afterall hook of one test suite runs in parallel with the tests from another test suite etc
| 1
|
576,167
| 17,080,639,017
|
IssuesEvent
|
2021-07-08 04:18:56
|
feast-dev/feast
|
https://api.github.com/repos/feast-dev/feast
|
closed
|
Provide option to strip feature view names from feature names
|
kind/feature priority/p1
|
**Is your feature request related to a problem? Please describe.**
Users often have hardcode logic in model training pipelines based on specific feature names that come from their data sources. When Feast is introduced during productionization, we expect users to rename their feature names to the Feast convention `feature_view__feature`. This creates an unnecessary overhead for teams if they have many features.
**Describe the solution you'd like**
Provide a way to strip the `feature_view__` prefix from feature names during retrieval and reuse feature names directly from sources.
before
```
training_df = store.get_historical_features(
entity_df=entity_df,
feature_refs = [
'driver_performance:conv_rate',
'driver_performance:acc_rate',
'driver_hourly_stats:avg_daily_trips'
],
)
for feature_name in training_df.columns:
print(feature_name)
```
```
driver_performance__conv_rate
driver_performance__acc_rate
driver_hourly_stats__avg_daily_trips
```
after
```
training_df = store.get_historical_features(
entity_df=entity_df,
feature_refs = [
'driver_performance:conv_rate',
'driver_performance:acc_rate',
'driver_hourly_stats:avg_daily_trips'
],
feature_names_only=True
)
for feature_name in training_df.columns:
print(feature_name)
```
```
conv_rate
acc_rate
avg_daily_trips
```
Any collisions in feature names would result in an exception.
|
1.0
|
Provide option to strip feature view names from feature names - **Is your feature request related to a problem? Please describe.**
Users often have hardcode logic in model training pipelines based on specific feature names that come from their data sources. When Feast is introduced during productionization, we expect users to rename their feature names to the Feast convention `feature_view__feature`. This creates an unnecessary overhead for teams if they have many features.
**Describe the solution you'd like**
Provide a way to strip the `feature_view__` prefix from feature names during retrieval and reuse feature names directly from sources.
before
```
training_df = store.get_historical_features(
entity_df=entity_df,
feature_refs = [
'driver_performance:conv_rate',
'driver_performance:acc_rate',
'driver_hourly_stats:avg_daily_trips'
],
)
for feature_name in training_df.columns:
print(feature_name)
```
```
driver_performance__conv_rate
driver_performance__acc_rate
driver_hourly_stats__avg_daily_trips
```
after
```
training_df = store.get_historical_features(
entity_df=entity_df,
feature_refs = [
'driver_performance:conv_rate',
'driver_performance:acc_rate',
'driver_hourly_stats:avg_daily_trips'
],
feature_names_only=True
)
for feature_name in training_df.columns:
print(feature_name)
```
```
conv_rate
acc_rate
avg_daily_trips
```
Any collisions in feature names would result in an exception.
|
non_process
|
provide option to strip feature view names from feature names is your feature request related to a problem please describe users often have hardcode logic in model training pipelines based on specific feature names that come from their data sources when feast is introduced during productionization we expect users to rename their feature names to the feast convention feature view feature this creates an unnecessary overhead for teams if they have many features describe the solution you d like provide a way to strip the feature view prefix from feature names during retrieval and reuse feature names directly from sources before training df store get historical features entity df entity df feature refs driver performance conv rate driver performance acc rate driver hourly stats avg daily trips for feature name in training df columns print feature name driver performance conv rate driver performance acc rate driver hourly stats avg daily trips after training df store get historical features entity df entity df feature refs driver performance conv rate driver performance acc rate driver hourly stats avg daily trips feature names only true for feature name in training df columns print feature name conv rate acc rate avg daily trips any collisions in feature names would result in an exception
| 0
|
19,320
| 4,381,424,426
|
IssuesEvent
|
2016-08-06 07:15:25
|
openwhisk/openwhisk
|
https://api.github.com/repos/openwhisk/openwhisk
|
closed
|
deploy failed
|
documentation tooling
|
Hi,
I succcessfully build the openwhisk docker images according to READme, but when I try ant deploy, I got the following error.
```
he ' characters around the executable and arguments are
not part of the command.
[exec] The invoker hosts are [(0, '172.17.0.1', False)]
[exec] Removing invoker0 at 172.17.0.1:4243 0.0 sec did nothing
[exec] Skipping pull
[exec] Starting invoker0 at 172.17.0.1:4243 0.4 sec SUCCEEDED
[exec] Container 6918e6898fab8349 at 172.17.0.1:4243 0.1 sec SUCCEEDED
[exec] has imageId fb184548770d4b3bbe6647f26d1e7a7f84b76fd11a562a802978fe801368fcdf
[exec] Checking invoker0 63.3 sec FAILED
[exec] return code = 7
[exec] 172.17.0.1:12001
[exec] invoker0 is not responding
[exec]
[exec] Tried to deploy 1 invokers. Failed to process 1 invokers in 63.8 sec.
[antcall] Exiting /home/vagrant/openwhisk/core/build.xml.
[ant] Exiting /home/vagrant/openwhisk/core/build.xml.
BUILD FAILED
/home/vagrant/openwhisk/build.xml:51: The following error occurred while executing this line:
/home/vagrant/openwhisk/core/build.xml:14: The following error occurred while executing this line:
/home/vagrant/openwhisk/core/build.xml:26: exec returned: 1
at org.apache.tools.ant.taskdefs.ExecTask.runExecute(ExecTask.java:643)
at org.apache.tools.ant.taskdefs.ExecTask.runExec(ExecTask.java:669)
at org.apache.tools.ant.taskdefs.ExecTask.execute(ExecTask.java:495)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393)
at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1248)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:440)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.taskdefs.Parallel$TaskRunnable.run(Parallel.java:453)
at java.lang.Thread.run(Thread.java:745)
```
what might be the issue, is any anyone else encountered the same issue?
Thanks
dliu
|
1.0
|
deploy failed - Hi,
I succcessfully build the openwhisk docker images according to READme, but when I try ant deploy, I got the following error.
```
he ' characters around the executable and arguments are
not part of the command.
[exec] The invoker hosts are [(0, '172.17.0.1', False)]
[exec] Removing invoker0 at 172.17.0.1:4243 0.0 sec did nothing
[exec] Skipping pull
[exec] Starting invoker0 at 172.17.0.1:4243 0.4 sec SUCCEEDED
[exec] Container 6918e6898fab8349 at 172.17.0.1:4243 0.1 sec SUCCEEDED
[exec] has imageId fb184548770d4b3bbe6647f26d1e7a7f84b76fd11a562a802978fe801368fcdf
[exec] Checking invoker0 63.3 sec FAILED
[exec] return code = 7
[exec] 172.17.0.1:12001
[exec] invoker0 is not responding
[exec]
[exec] Tried to deploy 1 invokers. Failed to process 1 invokers in 63.8 sec.
[antcall] Exiting /home/vagrant/openwhisk/core/build.xml.
[ant] Exiting /home/vagrant/openwhisk/core/build.xml.
BUILD FAILED
/home/vagrant/openwhisk/build.xml:51: The following error occurred while executing this line:
/home/vagrant/openwhisk/core/build.xml:14: The following error occurred while executing this line:
/home/vagrant/openwhisk/core/build.xml:26: exec returned: 1
at org.apache.tools.ant.taskdefs.ExecTask.runExecute(ExecTask.java:643)
at org.apache.tools.ant.taskdefs.ExecTask.runExec(ExecTask.java:669)
at org.apache.tools.ant.taskdefs.ExecTask.execute(ExecTask.java:495)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393)
at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1248)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:440)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.taskdefs.Parallel$TaskRunnable.run(Parallel.java:453)
at java.lang.Thread.run(Thread.java:745)
```
what might be the issue, is any anyone else encountered the same issue?
Thanks
dliu
|
non_process
|
deploy failed hi i succcessfully build the openwhisk docker images according to readme but when i try ant deploy i got the following error he characters around the executable and arguments are not part of the command the invoker hosts are removing at sec did nothing skipping pull starting at sec succeeded container at sec succeeded has imageid checking sec failed return code is not responding tried to deploy invokers failed to process invokers in sec exiting home vagrant openwhisk core build xml exiting home vagrant openwhisk core build xml build failed home vagrant openwhisk build xml the following error occurred while executing this line home vagrant openwhisk core build xml the following error occurred while executing this line home vagrant openwhisk core build xml exec returned at org apache tools ant taskdefs exectask runexecute exectask java at org apache tools ant taskdefs exectask runexec exectask java at org apache tools ant taskdefs exectask execute exectask java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant target execute target java at org apache tools ant target performtasks target java at org apache tools ant project executesortedtargets project java at org apache tools ant helper singlecheckexecutor executetargets singlecheckexecutor java at org apache tools ant project executetargets project java at org apache tools ant taskdefs ant execute ant java at org apache tools ant taskdefs calltarget execute calltarget java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant taskdefs parallel taskrunnable run parallel java at java lang thread run thread java what might be the issue is any anyone else encountered the same issue thanks dliu
| 0
|
14,181
| 8,894,918,734
|
IssuesEvent
|
2019-01-16 06:41:36
|
mochajs/mocha
|
https://api.github.com/repos/mochajs/mocha
|
closed
|
Decouple the "root" property on Suite from title length.
|
feature usability
|
Right now, Mocha deems a `Suite` to be a `root` if the title for it is empty, as can be seen [here](https://github.com/mochajs/mocha/blob/2bb2b9fa35818db7a02e5068364b0c417436b1af/lib/suite.js#L60):
```
function Suite (title, parentContext) {
if (!utils.isString(title)) {
throw new Error('Suite `title` should be a "string" but "' + typeof title + '" was given instead.');
}
this.title = title;
// skipping non-pertinent stuff...
this.root = !title;
```
Mocha creates a first `Suite` with `''` for title, which marks that suite as the root suite. This is a problem because, it seems to me when the user uses an empty string to name a suite, this should not result in Mocha building a structurally deficient tree of tests. (A tree with two "root" suites.)
Being able to name suites with the empty string (without at the same time marking them as "root") is useful. Sometimes I have suites where testing what is logically a single piece of functionality is best done by dividing the tests into two groups: simple tests that can be performed with super fast setup code, more complex tests that need a costlier setup. Oh, I could use the setup code for the 2nd group for all tests, but that would increase the total run time. I end up with something like:
```
// Testing the method foo on class Bar.
describe("#foo", () => {
describe("", () => {
beforeEach(() => {
// create a trivial data structure that is quick to create and sufficient for the tests in this group.
});
// tests...
});
describe("", () => {
beforeEach(() => {
// create a complex structure that is costlier to create but needed for these tests.
});
// tests...
});
});
```
The fact that the suite uses different setups for the two groups of tests is an internal detail that is not useful to know in test reports. The need for different test setups does not always correlate with divisions that are meaningful from the point of view of reporting successes or failures. It *often does*, but not always.
So I would suggest that the code be changed to determine the value of `root` through some other test than the length of the suite's title. There could be a unique object that serves as a marker to indicate "this suite I'm building is a root suite". It could be "statically" added to `Suite` (e.g. `Suite.Root` so doing `new Suite(Suite.Root, ...)` would result in a root suite).
Ultimately, though, the `root` property seems redundant to me: a `Suite` which has no `parent` is a root suite, no? So "root"-ness should correlate with the absence of a set `parent`. But maybe there's some scenario I'm missing? At any rate, removing `root` cannot be done without breakage. The `karma-mocha` plugin, for instance, relies on it to produce reports (which is how I discovered the problem).
|
True
|
Decouple the "root" property on Suite from title length. - Right now, Mocha deems a `Suite` to be a `root` if the title for it is empty, as can be seen [here](https://github.com/mochajs/mocha/blob/2bb2b9fa35818db7a02e5068364b0c417436b1af/lib/suite.js#L60):
```
function Suite (title, parentContext) {
if (!utils.isString(title)) {
throw new Error('Suite `title` should be a "string" but "' + typeof title + '" was given instead.');
}
this.title = title;
// skipping non-pertinent stuff...
this.root = !title;
```
Mocha creates a first `Suite` with `''` for title, which marks that suite as the root suite. This is a problem because, it seems to me when the user uses an empty string to name a suite, this should not result in Mocha building a structurally deficient tree of tests. (A tree with two "root" suites.)
Being able to name suites with the empty string (without at the same time marking them as "root") is useful. Sometimes I have suites where testing what is logically a single piece of functionality is best done by dividing the tests into two groups: simple tests that can be performed with super fast setup code, more complex tests that need a costlier setup. Oh, I could use the setup code for the 2nd group for all tests, but that would increase the total run time. I end up with something like:
```
// Testing the method foo on class Bar.
describe("#foo", () => {
describe("", () => {
beforeEach(() => {
// create a trivial data structure that is quick to create and sufficient for the tests in this group.
});
// tests...
});
describe("", () => {
beforeEach(() => {
// create a complex structure that is costlier to create but needed for these tests.
});
// tests...
});
});
```
The fact that the suite uses different setups for the two groups of tests is an internal detail that is not useful to know in test reports. The need for different test setups does not always correlate with divisions that are meaningful from the point of view of reporting successes or failures. It *often does*, but not always.
So I would suggest that the code be changed to determine the value of `root` through some other test than the length of the suite's title. There could be a unique object that serves as a marker to indicate "this suite I'm building is a root suite". It could be "statically" added to `Suite` (e.g. `Suite.Root` so doing `new Suite(Suite.Root, ...)` would result in a root suite).
Ultimately, though, the `root` property seems redundant to me: a `Suite` which has no `parent` is a root suite, no? So "root"-ness should correlate with the absence of a set `parent`. But maybe there's some scenario I'm missing? At any rate, removing `root` cannot be done without breakage. The `karma-mocha` plugin, for instance, relies on it to produce reports (which is how I discovered the problem).
|
non_process
|
decouple the root property on suite from title length right now mocha deems a suite to be a root if the title for it is empty as can be seen function suite title parentcontext if utils isstring title throw new error suite title should be a string but typeof title was given instead this title title skipping non pertinent stuff this root title mocha creates a first suite with for title which marks that suite as the root suite this is a problem because it seems to me when the user uses an empty string to name a suite this should not result in mocha building a structurally deficient tree of tests a tree with two root suites being able to name suites with the empty string without at the same time marking them as root is useful sometimes i have suites where testing what is logically a single piece of functionality is best done by dividing the tests into two groups simple tests that can be performed with super fast setup code more complex tests that need a costlier setup oh i could use the setup code for the group for all tests but that would increase the total run time i end up with something like testing the method foo on class bar describe foo describe beforeeach create a trivial data structure that is quick to create and sufficient for the tests in this group tests describe beforeeach create a complex structure that is costlier to create but needed for these tests tests the fact that the suite uses different setups for the two groups of tests is an internal detail that is not useful to know in test reports the need for different test setups does not always correlate with divisions that are meaningful from the point of view of reporting successes or failures it often does but not always so i would suggest that the code be changed to determine the value of root through some other test than the length of the suite s title there could be a unique object that serves as a marker to indicate this suite i m building is a root suite it could be statically added to suite e g suite root so doing new suite suite root would result in a root suite ultimately though the root property seems redundant to me a suite which has no parent is a root suite no so root ness should correlate with the absence of a set parent but maybe there s some scenario i m missing at any rate removing root cannot be done without breakage the karma mocha plugin for instance relies on it to produce reports which is how i discovered the problem
| 0
|
52,962
| 10,964,736,763
|
IssuesEvent
|
2019-11-27 23:46:04
|
evanplaice/evanplaice
|
https://api.github.com/repos/evanplaice/evanplaice
|
opened
|
Boredom List
|
Code
|
- Simple Calculator
- Weather App
- To Do List
- Image Slider
- Form Validation
- Hamburger Menu/Animations
- Quiz App
- Shopping Cart
- Countdown Timer
|
1.0
|
Boredom List - - Simple Calculator
- Weather App
- To Do List
- Image Slider
- Form Validation
- Hamburger Menu/Animations
- Quiz App
- Shopping Cart
- Countdown Timer
|
non_process
|
boredom list simple calculator weather app to do list image slider form validation hamburger menu animations quiz app shopping cart countdown timer
| 0
|
20,621
| 27,293,045,146
|
IssuesEvent
|
2023-02-23 17:59:46
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Flaky test: TestTraceE2E/statefulset
|
bug priority:p2 processor/k8sattributes flaky test
|
### Component(s)
processor/k8sattributes, testbed
### What happened?
See https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/4181235503/jobs/7242982697
```
=== RUN TestTraceE2E/statefulset
e2e_test.go:179:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/k8sattributesprocessor/e2e_test.go:179
/home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/k8sattributesprocessor/e2e_test.go:[14](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/4181235503/jobs/7242982697#step:24:15)6
Error: Received unexpected error:
the following errors occurred:
- k8s.annotations.workload attribute not found
- k8s.statefulset.uid attribute not found
- container.id attribute not found
- k8s.pod.name attribute not found
- k8s.node.name attribute not found
- container.image.tag attribute not found
- k8s.labels.app attribute not found
- k8s.pod.uid attribute not found
- k8s.pod.start_time attribute not found
- k8s.statefulset.name attribute not found
- k8s.namespace.name attribute not found
- container.image.name attribute not found
Test: TestTraceE2E/statefulset
```
### Collector version
0.71.0
### Environment information
_No response_
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
_No response_
|
1.0
|
Flaky test: TestTraceE2E/statefulset - ### Component(s)
processor/k8sattributes, testbed
### What happened?
See https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/4181235503/jobs/7242982697
```
=== RUN TestTraceE2E/statefulset
e2e_test.go:179:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/k8sattributesprocessor/e2e_test.go:179
/home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/k8sattributesprocessor/e2e_test.go:[14](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/4181235503/jobs/7242982697#step:24:15)6
Error: Received unexpected error:
the following errors occurred:
- k8s.annotations.workload attribute not found
- k8s.statefulset.uid attribute not found
- container.id attribute not found
- k8s.pod.name attribute not found
- k8s.node.name attribute not found
- container.image.tag attribute not found
- k8s.labels.app attribute not found
- k8s.pod.uid attribute not found
- k8s.pod.start_time attribute not found
- k8s.statefulset.name attribute not found
- k8s.namespace.name attribute not found
- container.image.name attribute not found
Test: TestTraceE2E/statefulset
```
### Collector version
0.71.0
### Environment information
_No response_
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
_No response_
|
process
|
flaky test statefulset component s processor testbed what happened see run statefulset test go error trace home runner work opentelemetry collector contrib opentelemetry collector contrib processor test go home runner work opentelemetry collector contrib opentelemetry collector contrib processor test go error received unexpected error the following errors occurred annotations workload attribute not found statefulset uid attribute not found container id attribute not found pod name attribute not found node name attribute not found container image tag attribute not found labels app attribute not found pod uid attribute not found pod start time attribute not found statefulset name attribute not found namespace name attribute not found container image name attribute not found test statefulset collector version environment information no response opentelemetry collector configuration no response log output no response additional context no response
| 1
|
5,997
| 8,805,990,112
|
IssuesEvent
|
2018-12-27 00:00:40
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Putting chunked topicrefs to topics in composite in topicgroup results in unchunked result
|
bug preprocess/chunking stale
|
Using 2.3 develop branch:
Given a `<dita>` document with two topics, if I create a map that references the two topics in the composite using only chunk="select-branch to-content" topicrefs and and the topicrefs are **not** in a topicgroup, then only the requested chunks are created and nothing is generated for the composite as a whole.
However, if I change the map _only_ by moving the topicrefs to the composite topics into a topicgroup element, then, in addition to the requested chunked topics, the processor creates two unchunked files for the composite (temporary .dita files and then corresponding HTML files using the XHTML transform).
In this case the unchunked files should not be created since they were not requested and are not wanted (and even if it was appropriate to generate the unchunked result there should not be two of them).
This suggests a logic bug in the chunk processing.
Note that having the chunked topicrefs be children of regular topicrefs does not appear to be a factor, only being in or not in a topicgroup.
|
1.0
|
Putting chunked topicrefs to topics in composite in topicgroup results in unchunked result - Using 2.3 develop branch:
Given a `<dita>` document with two topics, if I create a map that references the two topics in the composite using only chunk="select-branch to-content" topicrefs and and the topicrefs are **not** in a topicgroup, then only the requested chunks are created and nothing is generated for the composite as a whole.
However, if I change the map _only_ by moving the topicrefs to the composite topics into a topicgroup element, then, in addition to the requested chunked topics, the processor creates two unchunked files for the composite (temporary .dita files and then corresponding HTML files using the XHTML transform).
In this case the unchunked files should not be created since they were not requested and are not wanted (and even if it was appropriate to generate the unchunked result there should not be two of them).
This suggests a logic bug in the chunk processing.
Note that having the chunked topicrefs be children of regular topicrefs does not appear to be a factor, only being in or not in a topicgroup.
|
process
|
putting chunked topicrefs to topics in composite in topicgroup results in unchunked result using develop branch given a document with two topics if i create a map that references the two topics in the composite using only chunk select branch to content topicrefs and and the topicrefs are not in a topicgroup then only the requested chunks are created and nothing is generated for the composite as a whole however if i change the map only by moving the topicrefs to the composite topics into a topicgroup element then in addition to the requested chunked topics the processor creates two unchunked files for the composite temporary dita files and then corresponding html files using the xhtml transform in this case the unchunked files should not be created since they were not requested and are not wanted and even if it was appropriate to generate the unchunked result there should not be two of them this suggests a logic bug in the chunk processing note that having the chunked topicrefs be children of regular topicrefs does not appear to be a factor only being in or not in a topicgroup
| 1
|
89,490
| 17,933,522,586
|
IssuesEvent
|
2021-09-10 12:37:23
|
boltsparts/BOLTS
|
https://api.github.com/repos/boltsparts/BOLTS
|
closed
|
F811 redefinition of unused 'safe_join' from line 1
|
CodeFormating
|
save_join is imported twice in one module
https://github.com/boltsparts/BOLTS/blob/87a015198dbe999dd1281af27e28e8e749303f78/backends/website/docs/__init__.py#L1
https://github.com/boltsparts/BOLTS/blob/87a015198dbe999dd1281af27e28e8e749303f78/backends/website/docs/__init__.py#L4
|
1.0
|
F811 redefinition of unused 'safe_join' from line 1 - save_join is imported twice in one module
https://github.com/boltsparts/BOLTS/blob/87a015198dbe999dd1281af27e28e8e749303f78/backends/website/docs/__init__.py#L1
https://github.com/boltsparts/BOLTS/blob/87a015198dbe999dd1281af27e28e8e749303f78/backends/website/docs/__init__.py#L4
|
non_process
|
redefinition of unused safe join from line save join is imported twice in one module
| 0
|
20,529
| 27,189,544,711
|
IssuesEvent
|
2023-02-19 16:30:01
|
pcg-platinus/feedback
|
https://api.github.com/repos/pcg-platinus/feedback
|
opened
|
Ergänzung von Rollen-Unterweisungen: Ich bin Prozessmitarbeiter und möchte ...
|
BPM-BusinessProcessManagement
|
Im Zuge der Überarbeitung der BPM-KB wurde u.a. die Sektion **Ich bin** überarbeitet.
Die Style wurde überarbeitet und sieht wie folgt aus:
[Ich bin Prozessmitarbeiter Business Process Management Knowledge-Base.pdf](https://github.com/pcg-platinus/feedback/files/10777372/Ich.bin.Prozessmitarbeiter.Business.Process.Management.Knowledge-Base.pdf)
Nun werden weitere Fragen aus Sicht von Prozessmitarbeitern gesammelt. Dieser Input dient für Ergänzungen den BPM-KB.
Quelle = https://bpm-kb.platinus.at/docs/Einstieg/Ich-bin/
|
1.0
|
Ergänzung von Rollen-Unterweisungen: Ich bin Prozessmitarbeiter und möchte ... - Im Zuge der Überarbeitung der BPM-KB wurde u.a. die Sektion **Ich bin** überarbeitet.
Die Style wurde überarbeitet und sieht wie folgt aus:
[Ich bin Prozessmitarbeiter Business Process Management Knowledge-Base.pdf](https://github.com/pcg-platinus/feedback/files/10777372/Ich.bin.Prozessmitarbeiter.Business.Process.Management.Knowledge-Base.pdf)
Nun werden weitere Fragen aus Sicht von Prozessmitarbeitern gesammelt. Dieser Input dient für Ergänzungen den BPM-KB.
Quelle = https://bpm-kb.platinus.at/docs/Einstieg/Ich-bin/
|
process
|
ergänzung von rollen unterweisungen ich bin prozessmitarbeiter und möchte im zuge der überarbeitung der bpm kb wurde u a die sektion ich bin überarbeitet die style wurde überarbeitet und sieht wie folgt aus nun werden weitere fragen aus sicht von prozessmitarbeitern gesammelt dieser input dient für ergänzungen den bpm kb quelle
| 1
|
404,144
| 11,852,706,406
|
IssuesEvent
|
2020-03-24 20:28:24
|
grafana/grafana
|
https://api.github.com/repos/grafana/grafana
|
closed
|
[Bug] pressing ctrl+z too many times will set the time range to *null*, disabling the dashboard
|
area/dashboard/timerange priority/nice-to-have type/bug
|
Please include this information:
* What Grafana version are you using? Grafana v3.1.1 (commit: a4d2708)
* What datasource are you using? Graphite
* What OS are you running grafana on? CentOS 7.0
* What did you do? Zoomed out too much and saved. Dashboard is permanently stuck disabled, but I'll just reimport the correct json export file.
* What was the expected result? Tried to undo an action I just did in Grafana.
* What happened instead? Accidentally zoomed out an infinite amount of time.
Here is a snippet from the exported json dashboard:
"time": {
"from": null,
"to": null
},
|
1.0
|
[Bug] pressing ctrl+z too many times will set the time range to *null*, disabling the dashboard - Please include this information:
* What Grafana version are you using? Grafana v3.1.1 (commit: a4d2708)
* What datasource are you using? Graphite
* What OS are you running grafana on? CentOS 7.0
* What did you do? Zoomed out too much and saved. Dashboard is permanently stuck disabled, but I'll just reimport the correct json export file.
* What was the expected result? Tried to undo an action I just did in Grafana.
* What happened instead? Accidentally zoomed out an infinite amount of time.
Here is a snippet from the exported json dashboard:
"time": {
"from": null,
"to": null
},
|
non_process
|
pressing ctrl z too many times will set the time range to null disabling the dashboard please include this information what grafana version are you using grafana commit what datasource are you using graphite what os are you running grafana on centos what did you do zoomed out too much and saved dashboard is permanently stuck disabled but i ll just reimport the correct json export file what was the expected result tried to undo an action i just did in grafana what happened instead accidentally zoomed out an infinite amount of time here is a snippet from the exported json dashboard time from null to null
| 0
|
20,657
| 27,329,869,336
|
IssuesEvent
|
2023-02-25 13:40:05
|
cse442-at-ub/project_s23-iweatherify
|
https://api.github.com/repos/cse442-at-ub/project_s23-iweatherify
|
closed
|
Create and Host a Static Homepage for the Website
|
Processing Task Sprint 1
|
**Task Tests*
*Test 1*
1. Visit the Static_Homepage Branch of the CSE 442 iWeatherify Repository
https://github.com/cse442-at-ub/project_s23-iweatherify/tree/Static_Homepage_BugFix
2. Click on Code and Download Zip
3. Unzip what was downloaded and open up the folder
4. Rename 'index.php' to 'index.html'
5. Open 'index.html' with any browser
6. Successfully view the website’s Homepage on localhost!

*Test 2*
(Proving that it can run on UB Servers)
1. Open up a Terminal Window
2. ssh into UBIT@cheshire.cse.buffalo.edu (UBIT being the abc@buffalo.edu where ABC is the UBIT)
3. cd into /web/CSE442-542/2023-Spring/cse-442a
4. Switch Branches from the current one to Static_Homepage
5. Visit the website
https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442a/
6. Check the Figma Homepage Design (User Page Not Logged In)
https://www.figma.com/file/MezZn3xcEGiUnMHzpY7ton/iWeatherfy?node-id=441%3A1271&t=rwDVE3BzLOMW0kZN-0

7. Compare the Figma Homepage Design to what's on the website

8. Switch Branches back from Static_Homepage to Main
|
1.0
|
Create and Host a Static Homepage for the Website - **Task Tests*
*Test 1*
1. Visit the Static_Homepage Branch of the CSE 442 iWeatherify Repository
https://github.com/cse442-at-ub/project_s23-iweatherify/tree/Static_Homepage_BugFix
2. Click on Code and Download Zip
3. Unzip what was downloaded and open up the folder
4. Rename 'index.php' to 'index.html'
5. Open 'index.html' with any browser
6. Successfully view the website’s Homepage on localhost!

*Test 2*
(Proving that it can run on UB Servers)
1. Open up a Terminal Window
2. ssh into UBIT@cheshire.cse.buffalo.edu (UBIT being the abc@buffalo.edu where ABC is the UBIT)
3. cd into /web/CSE442-542/2023-Spring/cse-442a
4. Switch Branches from the current one to Static_Homepage
5. Visit the website
https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442a/
6. Check the Figma Homepage Design (User Page Not Logged In)
https://www.figma.com/file/MezZn3xcEGiUnMHzpY7ton/iWeatherfy?node-id=441%3A1271&t=rwDVE3BzLOMW0kZN-0

7. Compare the Figma Homepage Design to what's on the website

8. Switch Branches back from Static_Homepage to Main
|
process
|
create and host a static homepage for the website task tests test visit the static homepage branch of the cse iweatherify repository click on code and download zip unzip what was downloaded and open up the folder rename index php to index html open index html with any browser successfully view the website’s homepage on localhost test proving that it can run on ub servers open up a terminal window ssh into ubit cheshire cse buffalo edu ubit being the abc buffalo edu where abc is the ubit cd into web spring cse switch branches from the current one to static homepage visit the website check the figma homepage design user page not logged in compare the figma homepage design to what s on the website switch branches back from static homepage to main
| 1
|
15,948
| 20,168,584,868
|
IssuesEvent
|
2022-02-10 08:14:35
|
energy-modelling-toolkit/Dispa-SET
|
https://api.github.com/repos/energy-modelling-toolkit/Dispa-SET
|
closed
|
Line congestion and net position plots
|
enhancement postprocessing
|
Simple GIS plots that indicate net positions of individual zones and congestion in the interconnections.
Some parts of the functions are hard-coded and not well documented. Explanations on how each of these variables impacts the plot:
- [x] boundaries
- [x] margin_type
- [x] margin
- [x] geomap
- [x] color_geomap
- [x] terrain
- [x] bublesize
Speed is also an issue for really big plots (49 African countries took quite while to generate) I'm not sure if there is a way to improve the plot generation time. (If not a warning should indicate that it might take a while before the plots are generated.
|
1.0
|
Line congestion and net position plots - Simple GIS plots that indicate net positions of individual zones and congestion in the interconnections.
Some parts of the functions are hard-coded and not well documented. Explanations on how each of these variables impacts the plot:
- [x] boundaries
- [x] margin_type
- [x] margin
- [x] geomap
- [x] color_geomap
- [x] terrain
- [x] bublesize
Speed is also an issue for really big plots (49 African countries took quite while to generate) I'm not sure if there is a way to improve the plot generation time. (If not a warning should indicate that it might take a while before the plots are generated.
|
process
|
line congestion and net position plots simple gis plots that indicate net positions of individual zones and congestion in the interconnections some parts of the functions are hard coded and not well documented explanations on how each of these variables impacts the plot boundaries margin type margin geomap color geomap terrain bublesize speed is also an issue for really big plots african countries took quite while to generate i m not sure if there is a way to improve the plot generation time if not a warning should indicate that it might take a while before the plots are generated
| 1
|
164,204
| 20,364,360,055
|
IssuesEvent
|
2022-02-21 02:38:05
|
Hackagroup/BirdsEye
|
https://api.github.com/repos/Hackagroup/BirdsEye
|
closed
|
CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz, glob-parent-5.1.1.tgz - autoclosed
|
security vulnerability
|
## CVE-2020-28469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-5.1.1.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/webpack-dev-server/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.0.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- chokidar-2.1.8.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-5.1.1.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json,/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- nodemon-2.0.6.tgz (Root Library)
- chokidar-3.4.3.tgz
- :x: **glob-parent-5.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz, glob-parent-5.1.1.tgz - autoclosed - ## CVE-2020-28469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-5.1.1.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/webpack-dev-server/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.0.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- chokidar-2.1.8.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-5.1.1.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json,/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- nodemon-2.0.6.tgz (Root Library)
- chokidar-3.4.3.tgz
- :x: **glob-parent-5.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in glob parent tgz glob parent tgz autoclosed cve high severity vulnerability vulnerable libraries glob parent tgz glob parent tgz glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href path to dependency file client package json path to vulnerable library client node modules webpack dev server node modules glob parent package json dependency hierarchy react scripts tgz root library webpack dev server tgz chokidar tgz x glob parent tgz vulnerable library glob parent tgz extract the non magic parent path from a glob string library home page a href path to dependency file package json path to vulnerable library node modules glob parent package json node modules glob parent package json dependency hierarchy nodemon tgz root library chokidar tgz x glob parent tgz vulnerable library found in base branch main vulnerability details this affects the package glob parent before the enclosure regex used to check for strings ending in enclosure containing path separator publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent step up your open source security game with whitesource
| 0
|
214,293
| 24,061,512,530
|
IssuesEvent
|
2022-09-16 23:45:56
|
Agoric/agoric-sdk
|
https://api.github.com/repos/Agoric/agoric-sdk
|
closed
|
Await safety uncertain in cosmic-swingset/src/launch-chain.js
|
bug security
|
https://github.com/Agoric/agoric-sdk/blob/a1dedeae72908fda45afcb6038d76f8359adc8de/packages/cosmic-swingset/src/launch-chain.js#L97
The triage at https://github.com/Agoric/agoric-sdk/pull/6219 currently classifies this as
// TODO FIXME This code should be refactored to make this
// await checkably safe, or to remove it, or to record here
// why it is actually safe.
//
// `initializeSwingset` is stateful and is called synchronously
// or asynchronously, depending on which branch of the conditional
// is taken. If it were verified to be insensitive to this,
// then this await would be safe because "terminal-control-flow"
Git blame shows @michaelfig as the one who should probably investigate this, so I'm assigning to them. Feel free to reassign as appropriate of course.
|
True
|
Await safety uncertain in cosmic-swingset/src/launch-chain.js - https://github.com/Agoric/agoric-sdk/blob/a1dedeae72908fda45afcb6038d76f8359adc8de/packages/cosmic-swingset/src/launch-chain.js#L97
The triage at https://github.com/Agoric/agoric-sdk/pull/6219 currently classifies this as
// TODO FIXME This code should be refactored to make this
// await checkably safe, or to remove it, or to record here
// why it is actually safe.
//
// `initializeSwingset` is stateful and is called synchronously
// or asynchronously, depending on which branch of the conditional
// is taken. If it were verified to be insensitive to this,
// then this await would be safe because "terminal-control-flow"
Git blame shows @michaelfig as the one who should probably investigate this, so I'm assigning to them. Feel free to reassign as appropriate of course.
|
non_process
|
await safety uncertain in cosmic swingset src launch chain js the triage at currently classifies this as todo fixme this code should be refactored to make this await checkably safe or to remove it or to record here why it is actually safe initializeswingset is stateful and is called synchronously or asynchronously depending on which branch of the conditional is taken if it were verified to be insensitive to this then this await would be safe because terminal control flow git blame shows michaelfig as the one who should probably investigate this so i m assigning to them feel free to reassign as appropriate of course
| 0
|
10,766
| 13,562,029,590
|
IssuesEvent
|
2020-09-18 06:02:07
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Processing: GRASS r.drain fails on macOS: "No module named site", "Raster not found"
|
Bug Feedback MacOS Processing
|
This issue seems to be back. I'm running mac OS 10.13.6 and QGIS 3.12.2. Another report of something very similar is [here](https://gis.stackexchange.com/questions/305635/unable-to-make-use-r-drain).
GRASS tools (on mac at least) seem to have often had path problems that occasionally work and then in updates, break again (for example, see [this bug report](https://github.com/qgis/QGIS/issues/26903)). In 3.12, r.walk works fine, but r.drain seems to create temporary rasters but then can't find them again. For the start point, I have tried a shapefile and a coordinate, which was an issue in earlier versions, but the result is the same error. It looks like the tool runs in GRASS, but this error happens twice:
```
ImportError: No module named site
ERROR: Raster map <output52fba189f3314041a68e695b68431f67> not found
ERROR: Raster map or group <output52fba189f3314041a68e695b68431f67> not found
ERROR: Vector map <drain52fba189f3314041a68e695b68431f67> not found
```
Below is the full log.
```
QGIS version: 3.12.2-București
QGIS code revision: 8a1fb33634
Qt version: 5.12.3
GDAL version: 2.4.1
GEOS version: 3.7.2-CAPI-1.11.2 b55d2125
PROJ version: Rel. 5.2.0, September 15th, 2018
Processing algorithm…
Algorithm 'r.drain' starting…
Input parameters:
{ '-a' : False, '-c' : False, '-d' : True, '-n' : False, 'GRASS_MIN_AREA_PARAMETER' : 0.0001, 'GRASS_OUTPUT_TYPE_PARAMETER' : 0, 'GRASS_RASTER_FORMAT_META' : '', 'GRASS_RASTER_FORMAT_OPT' : '', 'GRASS_REGION_CELLSIZE_PARAMETER' : 0, 'GRASS_REGION_PARAMETER' : None, 'GRASS_SNAP_TOLERANCE_PARAMETER' : -1, 'GRASS_VECTOR_DSCO' : '', 'GRASS_VECTOR_EXPORT_NOCAT' : False, 'GRASS_VECTOR_LCO' : '', 'direction' : '/Users/erik/Documents/QGIS/Diamante/Move Directions from LD-4.tif', 'drain' : 'TEMPORARY_OUTPUT', 'input' : '/Users/erik/Documents/QGIS/Diamante/Cost surface from LD-4 hours.tif', 'output' : 'TEMPORARY_OUTPUT', 'start_coordinates' : None, 'start_points' : '/Users/erik/Documents/QGIS/Diamante/LD-4.shp' }
g.proj -c proj4="+proj=longlat +datum=WGS84 +no_defs"
r.in.gdal input="/Users/erik/Documents/QGIS/Diamante/Cost surface from LD-4 hours.tif" band=1 output="rast_5eac3c8ebd1f35" --overwrite -o
r.in.gdal input="/Users/erik/Documents/QGIS/Diamante/Move Directions from LD-4.tif" band=1 output="rast_5eac3c8ebd3976" --overwrite -o
v.in.ogr min_area=0.0001 snap=-1.0 input="/Users/erik/Documents/QGIS/Diamante/LD-4.shp" output="vector_5eac3c8ebd5887" --overwrite -o
g.region n=-33.818067018 s=-34.798218824 e=-69.038583857 w=-71.236042704 res=0.0002694945851116008
r.drain input=rast_5eac3c8ebd1f35 direction=rast_5eac3c8ebd3976 start_points=vector_5eac3c8ebd5887 -d output=output52fba189f3314041a68e695b68431f67 drain=drain52fba189f3314041a68e695b68431f67 --overwrite
g.region raster=output52fba189f3314041a68e695b68431f67
r.out.gdal -t -m input="output52fba189f3314041a68e695b68431f67" output="/private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/fc5c20acce2649449811cc7401de1a46/output.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
v.out.ogr type="auto" input="drain52fba189f3314041a68e695b68431f67" output="/private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/32493e9d44754ade84017f6a53dc2586/drain.gpkg" format="GPKG" --overwrite
Starting GRASS GIS...
Executing </private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/grass_batch_job.sh> ...
Default region was updated to the new projection, but if you have multiple mapsets `g.region -d` should be run in each to update the region from the default
Projection information updated
Over-riding projection check
Importing raster map <rast_5eac3c8ebd1f35>...
0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
Over-riding projection check
Importing raster map <rast_5eac3c8ebd3976>...
0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
Over-riding projection check
Check if OGR layer <LD-4> contains polygons...
0..100
Creating attribute table for layer <LD-4>...
Importing 1 features (OGR layer <LD-4>)...
0..100
-----------------------------------------------------
Building topology for vector map <vector_5eac3c8ebd5887@PERMANENT>...
Registering primitives...
ImportError: No module named site
ERROR: Raster map <output52fba189f3314041a68e695b68431f67> not found
ERROR: Raster map or group <output52fba189f3314041a68e695b68431f67> not found
ERROR: Vector map <drain52fba189f3314041a68e695b68431f67> not found
Execution of </private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/grass_batch_job.sh> finished.
Traceback (most recent call last):
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 2207, in <module>
main()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 2155, in main
clean_all()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 1789, in clean_all
gsetup.clean_default_db()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/etc/python/grass/script/setup.py", line 220, in clean_default_db
conn = gdb.db_connection()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/etc/python/grass/script/db.py", line 108, in db_connection
nuldev = file(os.devnull, 'w')
NameError: name 'file' is not defined
/Applications/QGIS3.12.app/Contents/Resources/grass7/grass76.sh: line 20: /private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/temp_location/PERMANENT: is a directory
Starting GRASS GIS...
Executing </private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/grass_batch_job.sh> ...
ERROR: Raster map <output52fba189f3314041a68e695b68431f67> not found
ERROR: Raster map or group <output52fba189f3314041a68e695b68431f67> not found
ERROR: Vector map <drain52fba189f3314041a68e695b68431f67> not found
Execution of </private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/grass_batch_job.sh> finished.
Traceback (most recent call last):
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 2207, in <module>
main()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 2155, in main
clean_all()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 1789, in clean_all
gsetup.clean_default_db()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/etc/python/grass/script/setup.py", line 220, in clean_default_db
conn = gdb.db_connection()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/etc/python/grass/script/db.py", line 108, in db_connection
nuldev = file(os.devnull, 'w')
NameError: name 'file' is not defined
/Applications/QGIS3.12.app/Contents/Resources/grass7/grass76.sh: line 20: /private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/temp_location/PERMANENT: is a directory
Execution completed in 17.13 seconds
Results:
{'drain': <QgsProcessingOutputLayerDefinition {'sink':TEMPORARY_OUTPUT, 'createOptions': {'fileEncoding': 'System'}}>,
'output': <QgsProcessingOutputLayerDefinition {'sink':TEMPORARY_OUTPUT, 'createOptions': {'fileEncoding': 'System'}}>}
Loading resulting layers
The following layers were not correctly generated.<ul><li>/private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/32493e9d44754ade84017f6a53dc2586/drain.gpkg</li><li>/private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/fc5c20acce2649449811cc7401de1a46/output.tif</li></ul>You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
```
|
1.0
|
Processing: GRASS r.drain fails on macOS: "No module named site", "Raster not found" - This issue seems to be back. I'm running mac OS 10.13.6 and QGIS 3.12.2. Another report of something very similar is [here](https://gis.stackexchange.com/questions/305635/unable-to-make-use-r-drain).
GRASS tools (on mac at least) seem to have often had path problems that occasionally work and then in updates, break again (for example, see [this bug report](https://github.com/qgis/QGIS/issues/26903)). In 3.12, r.walk works fine, but r.drain seems to create temporary rasters but then can't find them again. For the start point, I have tried a shapefile and a coordinate, which was an issue in earlier versions, but the result is the same error. It looks like the tool runs in GRASS, but this error happens twice:
```
ImportError: No module named site
ERROR: Raster map <output52fba189f3314041a68e695b68431f67> not found
ERROR: Raster map or group <output52fba189f3314041a68e695b68431f67> not found
ERROR: Vector map <drain52fba189f3314041a68e695b68431f67> not found
```
Below is the full log.
```
QGIS version: 3.12.2-București
QGIS code revision: 8a1fb33634
Qt version: 5.12.3
GDAL version: 2.4.1
GEOS version: 3.7.2-CAPI-1.11.2 b55d2125
PROJ version: Rel. 5.2.0, September 15th, 2018
Processing algorithm…
Algorithm 'r.drain' starting…
Input parameters:
{ '-a' : False, '-c' : False, '-d' : True, '-n' : False, 'GRASS_MIN_AREA_PARAMETER' : 0.0001, 'GRASS_OUTPUT_TYPE_PARAMETER' : 0, 'GRASS_RASTER_FORMAT_META' : '', 'GRASS_RASTER_FORMAT_OPT' : '', 'GRASS_REGION_CELLSIZE_PARAMETER' : 0, 'GRASS_REGION_PARAMETER' : None, 'GRASS_SNAP_TOLERANCE_PARAMETER' : -1, 'GRASS_VECTOR_DSCO' : '', 'GRASS_VECTOR_EXPORT_NOCAT' : False, 'GRASS_VECTOR_LCO' : '', 'direction' : '/Users/erik/Documents/QGIS/Diamante/Move Directions from LD-4.tif', 'drain' : 'TEMPORARY_OUTPUT', 'input' : '/Users/erik/Documents/QGIS/Diamante/Cost surface from LD-4 hours.tif', 'output' : 'TEMPORARY_OUTPUT', 'start_coordinates' : None, 'start_points' : '/Users/erik/Documents/QGIS/Diamante/LD-4.shp' }
g.proj -c proj4="+proj=longlat +datum=WGS84 +no_defs"
r.in.gdal input="/Users/erik/Documents/QGIS/Diamante/Cost surface from LD-4 hours.tif" band=1 output="rast_5eac3c8ebd1f35" --overwrite -o
r.in.gdal input="/Users/erik/Documents/QGIS/Diamante/Move Directions from LD-4.tif" band=1 output="rast_5eac3c8ebd3976" --overwrite -o
v.in.ogr min_area=0.0001 snap=-1.0 input="/Users/erik/Documents/QGIS/Diamante/LD-4.shp" output="vector_5eac3c8ebd5887" --overwrite -o
g.region n=-33.818067018 s=-34.798218824 e=-69.038583857 w=-71.236042704 res=0.0002694945851116008
r.drain input=rast_5eac3c8ebd1f35 direction=rast_5eac3c8ebd3976 start_points=vector_5eac3c8ebd5887 -d output=output52fba189f3314041a68e695b68431f67 drain=drain52fba189f3314041a68e695b68431f67 --overwrite
g.region raster=output52fba189f3314041a68e695b68431f67
r.out.gdal -t -m input="output52fba189f3314041a68e695b68431f67" output="/private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/fc5c20acce2649449811cc7401de1a46/output.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
v.out.ogr type="auto" input="drain52fba189f3314041a68e695b68431f67" output="/private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/32493e9d44754ade84017f6a53dc2586/drain.gpkg" format="GPKG" --overwrite
Starting GRASS GIS...
Executing </private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/grass_batch_job.sh> ...
Default region was updated to the new projection, but if you have multiple mapsets `g.region -d` should be run in each to update the region from the default
Projection information updated
Over-riding projection check
Importing raster map <rast_5eac3c8ebd1f35>...
0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
Over-riding projection check
Importing raster map <rast_5eac3c8ebd3976>...
0..3..6..9..12..15..18..21..24..27..30..33..36..39..42..45..48..51..54..57..60..63..66..69..72..75..78..81..84..87..90..93..96..99..100
Over-riding projection check
Check if OGR layer <LD-4> contains polygons...
0..100
Creating attribute table for layer <LD-4>...
Importing 1 features (OGR layer <LD-4>)...
0..100
-----------------------------------------------------
Building topology for vector map <vector_5eac3c8ebd5887@PERMANENT>...
Registering primitives...
ImportError: No module named site
ERROR: Raster map <output52fba189f3314041a68e695b68431f67> not found
ERROR: Raster map or group <output52fba189f3314041a68e695b68431f67> not found
ERROR: Vector map <drain52fba189f3314041a68e695b68431f67> not found
Execution of </private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/grass_batch_job.sh> finished.
Traceback (most recent call last):
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 2207, in <module>
main()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 2155, in main
clean_all()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 1789, in clean_all
gsetup.clean_default_db()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/etc/python/grass/script/setup.py", line 220, in clean_default_db
conn = gdb.db_connection()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/etc/python/grass/script/db.py", line 108, in db_connection
nuldev = file(os.devnull, 'w')
NameError: name 'file' is not defined
/Applications/QGIS3.12.app/Contents/Resources/grass7/grass76.sh: line 20: /private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/temp_location/PERMANENT: is a directory
Starting GRASS GIS...
Executing </private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/grass_batch_job.sh> ...
ERROR: Raster map <output52fba189f3314041a68e695b68431f67> not found
ERROR: Raster map or group <output52fba189f3314041a68e695b68431f67> not found
ERROR: Vector map <drain52fba189f3314041a68e695b68431f67> not found
Execution of </private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/grass_batch_job.sh> finished.
Traceback (most recent call last):
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 2207, in <module>
main()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 2155, in main
clean_all()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/bin/_grass76", line 1789, in clean_all
gsetup.clean_default_db()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/etc/python/grass/script/setup.py", line 220, in clean_default_db
conn = gdb.db_connection()
File "/Applications/QGIS3.12.app/Contents/Resources/grass7/etc/python/grass/script/db.py", line 108, in db_connection
nuldev = file(os.devnull, 'w')
NameError: name 'file' is not defined
/Applications/QGIS3.12.app/Contents/Resources/grass7/grass76.sh: line 20: /private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/grassdata/temp_location/PERMANENT: is a directory
Execution completed in 17.13 seconds
Results:
{'drain': <QgsProcessingOutputLayerDefinition {'sink':TEMPORARY_OUTPUT, 'createOptions': {'fileEncoding': 'System'}}>,
'output': <QgsProcessingOutputLayerDefinition {'sink':TEMPORARY_OUTPUT, 'createOptions': {'fileEncoding': 'System'}}>}
Loading resulting layers
The following layers were not correctly generated.<ul><li>/private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/32493e9d44754ade84017f6a53dc2586/drain.gpkg</li><li>/private/var/folders/nj/gkbf_4l131x1x3yzwtbq96y80000gn/T/processing_BOxdqI/fc5c20acce2649449811cc7401de1a46/output.tif</li></ul>You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
```
|
process
|
processing grass r drain fails on macos no module named site raster not found this issue seems to be back i m running mac os and qgis another report of something very similar is grass tools on mac at least seem to have often had path problems that occasionally work and then in updates break again for example see in r walk works fine but r drain seems to create temporary rasters but then can t find them again for the start point i have tried a shapefile and a coordinate which was an issue in earlier versions but the result is the same error it looks like the tool runs in grass but this error happens twice importerror no module named site error raster map not found error raster map or group not found error vector map not found below is the full log qgis version bucurești qgis code revision qt version gdal version geos version capi proj version rel september processing algorithm… algorithm r drain starting… input parameters a false c false d true n false grass min area parameter grass output type parameter grass raster format meta grass raster format opt grass region cellsize parameter grass region parameter none grass snap tolerance parameter grass vector dsco grass vector export nocat false grass vector lco direction users erik documents qgis diamante move directions from ld tif drain temporary output input users erik documents qgis diamante cost surface from ld hours tif output temporary output start coordinates none start points users erik documents qgis diamante ld shp g proj c proj longlat datum no defs r in gdal input users erik documents qgis diamante cost surface from ld hours tif band output rast overwrite o r in gdal input users erik documents qgis diamante move directions from ld tif band output rast overwrite o v in ogr min area snap input users erik documents qgis diamante ld shp output vector overwrite o g region n s e w res r drain input rast direction rast start points vector d output drain overwrite g region raster r out gdal t m input output private var folders nj gkbf t processing boxdqi output tif format gtiff createopt tfw yes compress lzw overwrite v out ogr type auto input output private var folders nj gkbf t processing boxdqi drain gpkg format gpkg overwrite starting grass gis executing default region was updated to the new projection but if you have multiple mapsets g region d should be run in each to update the region from the default projection information updated over riding projection check importing raster map over riding projection check importing raster map over riding projection check check if ogr layer contains polygons creating attribute table for layer importing features ogr layer building topology for vector map registering primitives importerror no module named site error raster map not found error raster map or group not found error vector map not found execution of finished traceback most recent call last file applications app contents resources bin line in main file applications app contents resources bin line in main clean all file applications app contents resources bin line in clean all gsetup clean default db file applications app contents resources etc python grass script setup py line in clean default db conn gdb db connection file applications app contents resources etc python grass script db py line in db connection nuldev file os devnull w nameerror name file is not defined applications app contents resources sh line private var folders nj gkbf t processing boxdqi grassdata temp location permanent is a directory starting grass gis executing error raster map not found error raster map or group not found error vector map not found execution of finished traceback most recent call last file applications app contents resources bin line in main file applications app contents resources bin line in main clean all file applications app contents resources bin line in clean all gsetup clean default db file applications app contents resources etc python grass script setup py line in clean default db conn gdb db connection file applications app contents resources etc python grass script db py line in db connection nuldev file os devnull w nameerror name file is not defined applications app contents resources sh line private var folders nj gkbf t processing boxdqi grassdata temp location permanent is a directory execution completed in seconds results drain output loading resulting layers the following layers were not correctly generated private var folders nj gkbf t processing boxdqi drain gpkg private var folders nj gkbf t processing boxdqi output tif you can check the log messages panel in qgis main window to find more information about the execution of the algorithm
| 1
|
371
| 2,814,109,095
|
IssuesEvent
|
2015-05-18 18:11:09
|
joyent/node
|
https://api.github.com/repos/joyent/node
|
closed
|
Signals aren't received when debugging
|
child_process debugger maybe-close
|
When using child_process.spawn() to run node in debug mode, signals sent to the child aren't received. You can see an example in this gist: https://gist.github.com/2474314/
If you remove the debug argument on line 14 of signals.js, the SIGINT handler in example.js fires. I'm pretty sure the issue is that `node debug` spawns a child process that runs example.js, but it doesn't forward signals along to the child.
I'm using node 0.6.15 on OS X 10.7.3.
If you're curious, the reason I'm doing this craziness is because I'm trying to fix the debugger in [whiskey](https://github.com/cloudkick/whiskey/).
|
1.0
|
Signals aren't received when debugging - When using child_process.spawn() to run node in debug mode, signals sent to the child aren't received. You can see an example in this gist: https://gist.github.com/2474314/
If you remove the debug argument on line 14 of signals.js, the SIGINT handler in example.js fires. I'm pretty sure the issue is that `node debug` spawns a child process that runs example.js, but it doesn't forward signals along to the child.
I'm using node 0.6.15 on OS X 10.7.3.
If you're curious, the reason I'm doing this craziness is because I'm trying to fix the debugger in [whiskey](https://github.com/cloudkick/whiskey/).
|
process
|
signals aren t received when debugging when using child process spawn to run node in debug mode signals sent to the child aren t received you can see an example in this gist if you remove the debug argument on line of signals js the sigint handler in example js fires i m pretty sure the issue is that node debug spawns a child process that runs example js but it doesn t forward signals along to the child i m using node on os x if you re curious the reason i m doing this craziness is because i m trying to fix the debugger in
| 1
|
643,330
| 20,948,536,916
|
IssuesEvent
|
2022-03-26 08:16:00
|
thesaurus-linguae-aegyptiae/tla-web
|
https://api.github.com/repos/thesaurus-linguae-aegyptiae/tla-web
|
opened
|
"Suche modifizieren"-Logik falsch
|
bug thinkabout/question low priority
|
Bei Suche Modifizieren werden (mind.) Auswahl der Transkriptionskodierung und Einstellung der Wortartensuche eingestellt auf die letzte Anfrage überhaupt/in allen Fenstern (d.h. auf Basis der Cookies). Logisch wäre aber, dass die Einstellungen die der Anfrage sind, die im betreffenden Fenster gemacht wurde (so funktionieren z.B. das Feld Transkription und Übersetzung). Das eigentlich gewünschte/intuitive Verhalten würde aber wieder die Cookies betreffen.
Beispiel:
- Suche in einem Fenster nach gm + Verb => Suche mod. => Voreinstellung gm, Verb
- Suche in einem Fenster nach gm + Verb => Suche in anderem Fenster nach pr + Nomen=> Suche mod. im gm/Verb-Fenster => Voreinstellung pr, Nomen
|
1.0
|
"Suche modifizieren"-Logik falsch - Bei Suche Modifizieren werden (mind.) Auswahl der Transkriptionskodierung und Einstellung der Wortartensuche eingestellt auf die letzte Anfrage überhaupt/in allen Fenstern (d.h. auf Basis der Cookies). Logisch wäre aber, dass die Einstellungen die der Anfrage sind, die im betreffenden Fenster gemacht wurde (so funktionieren z.B. das Feld Transkription und Übersetzung). Das eigentlich gewünschte/intuitive Verhalten würde aber wieder die Cookies betreffen.
Beispiel:
- Suche in einem Fenster nach gm + Verb => Suche mod. => Voreinstellung gm, Verb
- Suche in einem Fenster nach gm + Verb => Suche in anderem Fenster nach pr + Nomen=> Suche mod. im gm/Verb-Fenster => Voreinstellung pr, Nomen
|
non_process
|
suche modifizieren logik falsch bei suche modifizieren werden mind auswahl der transkriptionskodierung und einstellung der wortartensuche eingestellt auf die letzte anfrage überhaupt in allen fenstern d h auf basis der cookies logisch wäre aber dass die einstellungen die der anfrage sind die im betreffenden fenster gemacht wurde so funktionieren z b das feld transkription und übersetzung das eigentlich gewünschte intuitive verhalten würde aber wieder die cookies betreffen beispiel suche in einem fenster nach gm verb suche mod voreinstellung gm verb suche in einem fenster nach gm verb suche in anderem fenster nach pr nomen suche mod im gm verb fenster voreinstellung pr nomen
| 0
|
3,245
| 6,312,501,949
|
IssuesEvent
|
2017-07-24 03:51:49
|
PHPSocialNetwork/phpfastcache
|
https://api.github.com/repos/PHPSocialNetwork/phpfastcache
|
closed
|
How can I set the Temp directory with Psr16Adapter ?
|
6.0 [-_-] In Process
|
### Configuration:
PhpFastCache version: `6.0.2`
PHP version: `7.0`
Operating system: `Ubuntu 16.04`
#### Question:
> How can I set the Temp directory using Psr16Adapter ? I see the Read Me file has the example available but it is using the CacheManager class.
I have tried using something like this.. but it did not worked.
`$this->Psr16Adapter = new Psr16Adapter('Files', array("path" => '/var/www/html/site/tmp')); `
Can you tell me if this possible ?
The reason I need it is, I have multiple sites hosted on same server and at moment all sites are getting same data from cache. and so I want to make it different for each site.
Thanks
|
1.0
|
How can I set the Temp directory with Psr16Adapter ? - ### Configuration:
PhpFastCache version: `6.0.2`
PHP version: `7.0`
Operating system: `Ubuntu 16.04`
#### Question:
> How can I set the Temp directory using Psr16Adapter ? I see the Read Me file has the example available but it is using the CacheManager class.
I have tried using something like this.. but it did not worked.
`$this->Psr16Adapter = new Psr16Adapter('Files', array("path" => '/var/www/html/site/tmp')); `
Can you tell me if this possible ?
The reason I need it is, I have multiple sites hosted on same server and at moment all sites are getting same data from cache. and so I want to make it different for each site.
Thanks
|
process
|
how can i set the temp directory with configuration phpfastcache version php version operating system ubuntu question how can i set the temp directory using i see the read me file has the example available but it is using the cachemanager class i have tried using something like this but it did not worked this new files array path var www html site tmp can you tell me if this possible the reason i need it is i have multiple sites hosted on same server and at moment all sites are getting same data from cache and so i want to make it different for each site thanks
| 1
|
312
| 2,751,274,406
|
IssuesEvent
|
2015-04-24 07:50:42
|
arduino/Arduino
|
https://api.github.com/repos/arduino/Arduino
|
closed
|
Arduino throws StackOverflow exception for long strings
|
Component: IDE Component: Preprocessor
|
Dup of [issue #1401](https://github.com/arduino/Arduino/issues/1401).
There is apparently a maximum length for strings on Arduino 1.6.3 (32-bit linux) before java throws an exception, as demonstrated by the following sketch:
/**
* @file string_length_regex_exception.ino
* @brief Sketch to demonstrate java regex exception caused by long strings.
*
* Arduino 1.6.3 (32-bit) throws StackOverflow & java.util.regex exceptions
* during compile if the sketch contains strings above a certain length (exact
* length varies depending on type of string).
*
* I have found that the Arduino IDE needs to be quit & restarted after one of
* these exceptions are thrown, otherwise behavior is unpredictable (exceptions
* will continue to be thrown regardless of whether or not the string has been
* shortened, etc.)
*
* I tested this on Arduino 1.6.3 (64-bit) and it compiles fine.
*
* @author Jeremy Ruhland <jeremy ( a t ) goopypanther.org>
*/
#include <avr/pgmspace.h>
/* This progmem string with 799 chars + 1 trailing null will cause a
* java.lang.StackOverflow exception.
*/
const uint8_t PROGMEM string1[] = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
/* This progmem string with 798 chars + 1 trailing null compiles fine. */
const uint8_t PROGMEM string2[] = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
/* This const string with 779 chars will cause a java.lang.StackOverflow
* exception.
*/
//const uint8_t string3[] = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
/* This const string with 778 chars compiles fine. */
const uint8_t string4[] = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
void setup() {
// put your setup code here, to run once:
}
void loop() {
// put your main code here, to run repeatedly:
}
This causes a stack overflow with the following error:
Exception in thread "Thread-12" java.lang.StackOverflowError
at java.lang.AbstractStringBuilder.charAt(AbstractStringBuilder.java:202)
at java.lang.StringBuilder.charAt(StringBuilder.java:72)
at java.lang.Character.codePointAt(Character.java:4668)
at java.util.regex.Pattern$CharProperty.match(Pattern.java:3693)
at java.util.regex.Pattern$Branch.match(Pattern.java:4502)
at java.util.regex.Pattern$GroupHead.match(Pattern.java:4556)
at java.util.regex.Pattern$Loop.match(Pattern.java:4683)
at java.util.regex.Pattern$GroupTail.match(Pattern.java:4615)
...which continues on for many hundreds of lines.
This does not occur on the 64-bit linux version. I have been unable to test Mac or Windows versions at this time.
|
1.0
|
Arduino throws StackOverflow exception for long strings - Dup of [issue #1401](https://github.com/arduino/Arduino/issues/1401).
There is apparently a maximum length for strings on Arduino 1.6.3 (32-bit linux) before java throws an exception, as demonstrated by the following sketch:
/**
* @file string_length_regex_exception.ino
* @brief Sketch to demonstrate java regex exception caused by long strings.
*
* Arduino 1.6.3 (32-bit) throws StackOverflow & java.util.regex exceptions
* during compile if the sketch contains strings above a certain length (exact
* length varies depending on type of string).
*
* I have found that the Arduino IDE needs to be quit & restarted after one of
* these exceptions are thrown, otherwise behavior is unpredictable (exceptions
* will continue to be thrown regardless of whether or not the string has been
* shortened, etc.)
*
* I tested this on Arduino 1.6.3 (64-bit) and it compiles fine.
*
* @author Jeremy Ruhland <jeremy ( a t ) goopypanther.org>
*/
#include <avr/pgmspace.h>
/* This progmem string with 799 chars + 1 trailing null will cause a
* java.lang.StackOverflow exception.
*/
const uint8_t PROGMEM string1[] = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
/* This progmem string with 798 chars + 1 trailing null compiles fine. */
const uint8_t PROGMEM string2[] = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
/* This const string with 779 chars will cause a java.lang.StackOverflow
* exception.
*/
//const uint8_t string3[] = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
/* This const string with 778 chars compiles fine. */
const uint8_t string4[] = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
void setup() {
// put your setup code here, to run once:
}
void loop() {
// put your main code here, to run repeatedly:
}
This causes a stack overflow with the following error:
Exception in thread "Thread-12" java.lang.StackOverflowError
at java.lang.AbstractStringBuilder.charAt(AbstractStringBuilder.java:202)
at java.lang.StringBuilder.charAt(StringBuilder.java:72)
at java.lang.Character.codePointAt(Character.java:4668)
at java.util.regex.Pattern$CharProperty.match(Pattern.java:3693)
at java.util.regex.Pattern$Branch.match(Pattern.java:4502)
at java.util.regex.Pattern$GroupHead.match(Pattern.java:4556)
at java.util.regex.Pattern$Loop.match(Pattern.java:4683)
at java.util.regex.Pattern$GroupTail.match(Pattern.java:4615)
...which continues on for many hundreds of lines.
This does not occur on the 64-bit linux version. I have been unable to test Mac or Windows versions at this time.
|
process
|
arduino throws stackoverflow exception for long strings dup of there is apparently a maximum length for strings on arduino bit linux before java throws an exception as demonstrated by the following sketch file string length regex exception ino brief sketch to demonstrate java regex exception caused by long strings arduino bit throws stackoverflow java util regex exceptions during compile if the sketch contains strings above a certain length exact length varies depending on type of string i have found that the arduino ide needs to be quit restarted after one of these exceptions are thrown otherwise behavior is unpredictable exceptions will continue to be thrown regardless of whether or not the string has been shortened etc i tested this on arduino bit and it compiles fine author jeremy ruhland include this progmem string with chars trailing null will cause a java lang stackoverflow exception const t progmem aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa this progmem string with chars trailing null compiles fine const t progmem aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa this const string with chars will cause a java lang stackoverflow exception const t aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa this const string with chars compiles fine const t aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa void setup put your setup code here to run once void loop put your main code here to run repeatedly this causes a stack overflow with the following error exception in thread thread java lang stackoverflowerror at java lang abstractstringbuilder charat abstractstringbuilder java at java lang stringbuilder charat stringbuilder java at java lang character codepointat character java at java util regex pattern charproperty match pattern java at java util regex pattern branch match pattern java at java util regex pattern grouphead match pattern java at java util regex pattern loop match pattern java at java util regex pattern grouptail match pattern java which continues on for many hundreds of lines this does not occur on the bit linux version i have been unable to test mac or windows versions at this time
| 1
|
68,593
| 8,309,149,071
|
IssuesEvent
|
2018-09-24 04:14:37
|
horizontalsystems/bank-wallet-android-app
|
https://api.github.com/repos/horizontalsystems/bank-wallet-android-app
|
closed
|
Transactions tab changes
|
design
|
- New cells
- Add 16dp paddings left of tabs --(all)
<img width="509" alt="screen shot 2018-09-21 at 11 40 40 am" src="https://user-images.githubusercontent.com/42367908/45862370-36152e00-bd93-11e8-88c1-5a1c9ace49be.png">
https://zpl.io/bAER3Oq
https://zpl.io/agNYn5D
|
1.0
|
Transactions tab changes - - New cells
- Add 16dp paddings left of tabs --(all)
<img width="509" alt="screen shot 2018-09-21 at 11 40 40 am" src="https://user-images.githubusercontent.com/42367908/45862370-36152e00-bd93-11e8-88c1-5a1c9ace49be.png">
https://zpl.io/bAER3Oq
https://zpl.io/agNYn5D
|
non_process
|
transactions tab changes new cells add paddings left of tabs all img width alt screen shot at am src
| 0
|
4,098
| 7,046,827,017
|
IssuesEvent
|
2018-01-02 10:12:59
|
our-city-app/oca-backend
|
https://api.github.com/repos/our-city-app/oca-backend
|
closed
|
Catch FriendNotFoundException when accepting loyalty user data
|
process_duplicate type_bug
|
```
Permanent failure attempting to execute task (/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/add_1_monkey_patches.py:127)
Traceback (most recent call last):
File "/base/alloc/tmpfs/dynamic_runtimes/python27/54c5883f70296ec8_unzipped/python27_lib/versions/1/google/appengine/ext/deferred/deferred.py", line 318, in post
self.run_from_request()
File "/base/alloc/tmpfs/dynamic_runtimes/python27/54c5883f70296ec8_unzipped/python27_lib/versions/1/google/appengine/ext/deferred/deferred.py", line 313, in run_from_request
run(self.request.body)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/add_1_monkey_patches.py", line 327, in _new_deferred_run
return func(*args, **kwds)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/solutions/common/bizz/loyalty.py", line 249, in update_user_data_user_loyalty
system.put_user_data(email, json.dumps(user_data), service_identity, app_id)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/rpc/service.py", line 366, in wrapped
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/service/api/system.py", line 322, in put_user_data
set_user_data(service_identity_user, create_app_user(users.User(email), app_id), user_data)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/bizz/service/__init__.py", line 2610, in set_user_data
set_user_data_object(service_identity_user, friend_user, data_dict, replace, must_be_friends)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/bizz/service/__init__.py", line 2715, in set_user_data_object
run_in_xg_transaction(trans, data_dict)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/utils/transactions.py", line 103, in run_in_xg_transaction
return db.run_in_transaction_options(xg_on, function, *args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/add_1_monkey_patches.py", line 217, in wrapped
r = run(transaction_guid)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/add_1_monkey_patches.py", line 197, in run
return _orig_run_in_transaction_options(options, *args, **kwargs)
File "/base/alloc/tmpfs/dynamic_runtimes/python27/54c5883f70296ec8_unzipped/python27_lib/versions/1/google/appengine/api/datastore.py", line 2647, in RunInTransactionOptions
ok, result = _DoOneTry(function, args, kwargs)
File "/base/alloc/tmpfs/dynamic_runtimes/python27/54c5883f70296ec8_unzipped/python27_lib/versions/1/google/appengine/api/datastore.py", line 2667, in _DoOneTry
result = function(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/bizz/service/__init__.py", line 2631, in trans
raise FriendNotFoundException()
FriendNotFoundException: 60011 - User not in friends list - {}
```
|
1.0
|
Catch FriendNotFoundException when accepting loyalty user data - ```
Permanent failure attempting to execute task (/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/add_1_monkey_patches.py:127)
Traceback (most recent call last):
File "/base/alloc/tmpfs/dynamic_runtimes/python27/54c5883f70296ec8_unzipped/python27_lib/versions/1/google/appengine/ext/deferred/deferred.py", line 318, in post
self.run_from_request()
File "/base/alloc/tmpfs/dynamic_runtimes/python27/54c5883f70296ec8_unzipped/python27_lib/versions/1/google/appengine/ext/deferred/deferred.py", line 313, in run_from_request
run(self.request.body)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/add_1_monkey_patches.py", line 327, in _new_deferred_run
return func(*args, **kwds)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/solutions/common/bizz/loyalty.py", line 249, in update_user_data_user_loyalty
system.put_user_data(email, json.dumps(user_data), service_identity, app_id)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/rpc/service.py", line 366, in wrapped
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/service/api/system.py", line 322, in put_user_data
set_user_data(service_identity_user, create_app_user(users.User(email), app_id), user_data)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/bizz/service/__init__.py", line 2610, in set_user_data
set_user_data_object(service_identity_user, friend_user, data_dict, replace, must_be_friends)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/bizz/service/__init__.py", line 2715, in set_user_data_object
run_in_xg_transaction(trans, data_dict)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/utils/transactions.py", line 103, in run_in_xg_transaction
return db.run_in_transaction_options(xg_on, function, *args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/add_1_monkey_patches.py", line 217, in wrapped
r = run(transaction_guid)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/add_1_monkey_patches.py", line 197, in run
return _orig_run_in_transaction_options(options, *args, **kwargs)
File "/base/alloc/tmpfs/dynamic_runtimes/python27/54c5883f70296ec8_unzipped/python27_lib/versions/1/google/appengine/api/datastore.py", line 2647, in RunInTransactionOptions
ok, result = _DoOneTry(function, args, kwargs)
File "/base/alloc/tmpfs/dynamic_runtimes/python27/54c5883f70296ec8_unzipped/python27_lib/versions/1/google/appengine/api/datastore.py", line 2667, in _DoOneTry
result = function(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20171228t160836.406538483996163887/rogerthat/bizz/service/__init__.py", line 2631, in trans
raise FriendNotFoundException()
FriendNotFoundException: 60011 - User not in friends list - {}
```
|
process
|
catch friendnotfoundexception when accepting loyalty user data permanent failure attempting to execute task base data home apps e rogerthat server add monkey patches py traceback most recent call last file base alloc tmpfs dynamic runtimes unzipped lib versions google appengine ext deferred deferred py line in post self run from request file base alloc tmpfs dynamic runtimes unzipped lib versions google appengine ext deferred deferred py line in run from request run self request body file base data home apps e rogerthat server add monkey patches py line in new deferred run return func args kwds file base data home apps e rogerthat server mcfw rpc py line in typechecked return result f args kwargs file base data home apps e rogerthat server mcfw rpc py line in typechecked f return f kwargs file base data home apps e rogerthat server solutions common bizz loyalty py line in update user data user loyalty system put user data email json dumps user data service identity app id file base data home apps e rogerthat server rogerthat rpc service py line in wrapped result f args kwargs file base data home apps e rogerthat server mcfw rpc py line in typechecked return result f args kwargs file base data home apps e rogerthat server mcfw rpc py line in typechecked f return f kwargs file base data home apps e rogerthat server rogerthat service api system py line in put user data set user data service identity user create app user users user email app id user data file base data home apps e rogerthat server mcfw rpc py line in typechecked return result f args kwargs file base data home apps e rogerthat server mcfw rpc py line in typechecked f return f kwargs file base data home apps e rogerthat server rogerthat bizz service init py line in set user data set user data object service identity user friend user data dict replace must be friends file base data home apps e rogerthat server mcfw rpc py line in typechecked return result f args kwargs file base data home apps e rogerthat server mcfw rpc py line in typechecked f return f kwargs file base data home apps e rogerthat server rogerthat bizz service init py line in set user data object run in xg transaction trans data dict file base data home apps e rogerthat server rogerthat utils transactions py line in run in xg transaction return db run in transaction options xg on function args kwargs file base data home apps e rogerthat server add monkey patches py line in wrapped r run transaction guid file base data home apps e rogerthat server add monkey patches py line in run return orig run in transaction options options args kwargs file base alloc tmpfs dynamic runtimes unzipped lib versions google appengine api datastore py line in runintransactionoptions ok result doonetry function args kwargs file base alloc tmpfs dynamic runtimes unzipped lib versions google appengine api datastore py line in doonetry result function args kwargs file base data home apps e rogerthat server rogerthat bizz service init py line in trans raise friendnotfoundexception friendnotfoundexception user not in friends list
| 1
|
8,255
| 10,322,692,150
|
IssuesEvent
|
2019-08-31 14:40:22
|
Johni0702/BetterPortals
|
https://api.github.com/repos/Johni0702/BetterPortals
|
closed
|
[0.1.6] Serene Seasons glitch
|
compatibility
|
betterportals-0.1.6
SereneSeasons-1.12.2-1.2.16-universal
if the season changes the color of foliage and grass, then the opened betterportal provokes a constant reload of chunks between normal and seasoned
if i load my modpack 200+ - its blinking constantly. if i load clear test-mc with only that mods - blinking not so fast, but still happening
sometimes leaves patches of noncolored grass/leaves blocks
|
True
|
[0.1.6] Serene Seasons glitch - betterportals-0.1.6
SereneSeasons-1.12.2-1.2.16-universal
if the season changes the color of foliage and grass, then the opened betterportal provokes a constant reload of chunks between normal and seasoned
if i load my modpack 200+ - its blinking constantly. if i load clear test-mc with only that mods - blinking not so fast, but still happening
sometimes leaves patches of noncolored grass/leaves blocks
|
non_process
|
serene seasons glitch betterportals sereneseasons universal if the season changes the color of foliage and grass then the opened betterportal provokes a constant reload of chunks between normal and seasoned if i load my modpack its blinking constantly if i load clear test mc with only that mods blinking not so fast but still happening sometimes leaves patches of noncolored grass leaves blocks
| 0
|
18,647
| 24,581,005,646
|
IssuesEvent
|
2022-10-13 15:36:25
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[FHIR store] [Questionnaire response] Questionnaires > Question step > text choice > multiple select > All the answer options are not getting stored in the FHIR store in the following scenario
|
Bug P0 Response datastore Process: Fixed Process: Tested QA Process: Tested dev
|
Steps:
1. SB > Create/edit study > create a text choice question with multiple select options ('Other ' option should not be marked as exclusive)
2. launch or publish the updates
3. Sign up or sign in to the mobile app
4. Enroll to the study
5. Click on a created activity in step 1
6. Select all the possible options with 'other' options and submit the response
7. Go to the FHIR store and observe
AR: Only the 'Other' answer option is getting stored in the fhir store
ER: All the selected answer options should get displed (if the participant selects option1, option2, and 'Other' option then all three answer options should get stored in the FHIR store)

|
3.0
|
[FHIR store] [Questionnaire response] Questionnaires > Question step > text choice > multiple select > All the answer options are not getting stored in the FHIR store in the following scenario - Steps:
1. SB > Create/edit study > create a text choice question with multiple select options ('Other ' option should not be marked as exclusive)
2. launch or publish the updates
3. Sign up or sign in to the mobile app
4. Enroll to the study
5. Click on a created activity in step 1
6. Select all the possible options with 'other' options and submit the response
7. Go to the FHIR store and observe
AR: Only the 'Other' answer option is getting stored in the fhir store
ER: All the selected answer options should get displed (if the participant selects option1, option2, and 'Other' option then all three answer options should get stored in the FHIR store)

|
process
|
questionnaires question step text choice multiple select all the answer options are not getting stored in the fhir store in the following scenario steps sb create edit study create a text choice question with multiple select options other option should not be marked as exclusive launch or publish the updates sign up or sign in to the mobile app enroll to the study click on a created activity in step select all the possible options with other options and submit the response go to the fhir store and observe ar only the other answer option is getting stored in the fhir store er all the selected answer options should get displed if the participant selects and other option then all three answer options should get stored in the fhir store
| 1
|
233,590
| 25,765,633,726
|
IssuesEvent
|
2022-12-09 01:25:42
|
jonhilgart22/go-trader
|
https://api.github.com/repos/jonhilgart22/go-trader
|
opened
|
CVE-2022-23491 (Medium) detected in certifi-2021.10.8-py2.py3-none-any.whl
|
security vulnerability
|
## CVE-2022-23491 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>certifi-2021.10.8-py2.py3-none-any.whl</b></p></summary>
<p>Python package for providing Mozilla's CA Bundle.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/37/45/946c02767aabb873146011e665728b680884cd8fe70dde973c640e45b775/certifi-2021.10.8-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/37/45/946c02767aabb873146011e665728b680884cd8fe70dde973c640e45b775/certifi-2021.10.8-py2.py3-none-any.whl</a></p>
<p>
Dependency Hierarchy:
- darts-0.13.1-py3-none-any.whl (Root Library)
- requests-2.27.1-py2.py3-none-any.whl
- :x: **certifi-2021.10.8-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Certifi is a curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts. Certifi 2022.12.07 removes root certificates from "TrustCor" from the root store. These are in the process of being removed from Mozilla's trust store. TrustCor's root certificates are being removed pursuant to an investigation prompted by media reporting that TrustCor's ownership also operated a business that produced spyware. Conclusions of Mozilla's investigation can be found in the linked google group discussion.
<p>Publish Date: 2022-12-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23491>CVE-2022-23491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-23491">https://www.cve.org/CVERecord?id=CVE-2022-23491</a></p>
<p>Release Date: 2022-12-07</p>
<p>Fix Resolution: certifi - 2022.12.07</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-23491 (Medium) detected in certifi-2021.10.8-py2.py3-none-any.whl - ## CVE-2022-23491 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>certifi-2021.10.8-py2.py3-none-any.whl</b></p></summary>
<p>Python package for providing Mozilla's CA Bundle.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/37/45/946c02767aabb873146011e665728b680884cd8fe70dde973c640e45b775/certifi-2021.10.8-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/37/45/946c02767aabb873146011e665728b680884cd8fe70dde973c640e45b775/certifi-2021.10.8-py2.py3-none-any.whl</a></p>
<p>
Dependency Hierarchy:
- darts-0.13.1-py3-none-any.whl (Root Library)
- requests-2.27.1-py2.py3-none-any.whl
- :x: **certifi-2021.10.8-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Certifi is a curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts. Certifi 2022.12.07 removes root certificates from "TrustCor" from the root store. These are in the process of being removed from Mozilla's trust store. TrustCor's root certificates are being removed pursuant to an investigation prompted by media reporting that TrustCor's ownership also operated a business that produced spyware. Conclusions of Mozilla's investigation can be found in the linked google group discussion.
<p>Publish Date: 2022-12-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23491>CVE-2022-23491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-23491">https://www.cve.org/CVERecord?id=CVE-2022-23491</a></p>
<p>Release Date: 2022-12-07</p>
<p>Fix Resolution: certifi - 2022.12.07</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in certifi none any whl cve medium severity vulnerability vulnerable library certifi none any whl python package for providing mozilla s ca bundle library home page a href dependency hierarchy darts none any whl root library requests none any whl x certifi none any whl vulnerable library found in base branch main vulnerability details certifi is a curated collection of root certificates for validating the trustworthiness of ssl certificates while verifying the identity of tls hosts certifi removes root certificates from trustcor from the root store these are in the process of being removed from mozilla s trust store trustcor s root certificates are being removed pursuant to an investigation prompted by media reporting that trustcor s ownership also operated a business that produced spyware conclusions of mozilla s investigation can be found in the linked google group discussion publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope changed impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution certifi step up your open source security game with mend
| 0
|
1,165
| 3,655,467,202
|
IssuesEvent
|
2016-02-17 16:23:28
|
cfpb/hmda-platform-ui
|
https://api.github.com/repos/cfpb/hmda-platform-ui
|
closed
|
Ability to download the full report
|
Data Validation Event Processing File Submission question
|
In the Pilot "summery and findings" doc there is talk about the ability to see and download a full report at the end of the process, the full audit trail listing all of the errors that occurred, the fixes, etc.
|
1.0
|
Ability to download the full report - In the Pilot "summery and findings" doc there is talk about the ability to see and download a full report at the end of the process, the full audit trail listing all of the errors that occurred, the fixes, etc.
|
process
|
ability to download the full report in the pilot summery and findings doc there is talk about the ability to see and download a full report at the end of the process the full audit trail listing all of the errors that occurred the fixes etc
| 1
|
341,841
| 10,308,622,211
|
IssuesEvent
|
2019-08-29 11:25:14
|
TerryCavanagh/diceydungeons.com
|
https://api.github.com/repos/TerryCavanagh/diceydungeons.com
|
opened
|
Commonly reported issues
|
High Priority
|
The following (fairly minor) issues get reported a lot, so would be nice to fix in an upcoming patch:
- [ ] In some cases, when you make a trade with Val, you can keep the original item.
- [ ] Vanished cursed items with countdowns still accept, and consume, dice
- [ ] Winning a fight with 0 hp counts as a win, and lets you walk around the map with negative health
- [ ] Backing out of an upgrade that you've bought or chosen as a level up reward results in you losing that upgrade completely.
- [ ] Dice slots are a little buggy when trying to break Silence as Inventor
- [ ] Jester enemy move previewing is buggy
|
1.0
|
Commonly reported issues - The following (fairly minor) issues get reported a lot, so would be nice to fix in an upcoming patch:
- [ ] In some cases, when you make a trade with Val, you can keep the original item.
- [ ] Vanished cursed items with countdowns still accept, and consume, dice
- [ ] Winning a fight with 0 hp counts as a win, and lets you walk around the map with negative health
- [ ] Backing out of an upgrade that you've bought or chosen as a level up reward results in you losing that upgrade completely.
- [ ] Dice slots are a little buggy when trying to break Silence as Inventor
- [ ] Jester enemy move previewing is buggy
|
non_process
|
commonly reported issues the following fairly minor issues get reported a lot so would be nice to fix in an upcoming patch in some cases when you make a trade with val you can keep the original item vanished cursed items with countdowns still accept and consume dice winning a fight with hp counts as a win and lets you walk around the map with negative health backing out of an upgrade that you ve bought or chosen as a level up reward results in you losing that upgrade completely dice slots are a little buggy when trying to break silence as inventor jester enemy move previewing is buggy
| 0
|
261,190
| 27,798,397,728
|
IssuesEvent
|
2023-03-17 14:11:26
|
kube-tarian/kad
|
https://api.github.com/repos/kube-tarian/kad
|
closed
|
k8s.io/apimachinery-v0.24.2: 1 vulnerabilities (highest severity is: 5.1) - autoclosed
|
Mend: dependency security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>k8s.io/apimachinery-v0.24.2</b></p></summary>
<p>null</p>
<p>Library home page: <a href="https://proxy.golang.org/k8s.io/apimachinery/@v/v0.24.2.zip">https://proxy.golang.org/k8s.io/apimachinery/@v/v0.24.2.zip</a></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/kube-tarian/kad/commit/ef6d2d14ba20ad161dda103d1ac6ed92c19687bb">ef6d2d14ba20ad161dda103d1ac6ed92c19687bb</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (k8s.io/apimachinery-v0.24.2 version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-3172](https://www.mend.io/vulnerability-database/CVE-2022-3172) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.1 | k8s.io/apimachinery-v0.24.2 | Direct | v0.25.1,kubernetes-1.22.14,kubernetes-1.23.11,kubernetes-1.24.5,kubernetes-1.25.1 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-3172</summary>
### Vulnerable Library - <b>k8s.io/apimachinery-v0.24.2</b></p>
<p>null</p>
<p>Library home page: <a href="https://proxy.golang.org/k8s.io/apimachinery/@v/v0.24.2.zip">https://proxy.golang.org/k8s.io/apimachinery/@v/v0.24.2.zip</a></p>
<p>
Dependency Hierarchy:
- :x: **k8s.io/apimachinery-v0.24.2** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kube-tarian/kad/commit/ef6d2d14ba20ad161dda103d1ac6ed92c19687bb">ef6d2d14ba20ad161dda103d1ac6ed92c19687bb</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A security issue was discovered in kube-apiserver that allows an aggregated API server to redirect client traffic to any URL. This issue leads to the client performing unexpected actions and forwarding the client's API server credentials to third parties
<p>Publish Date: 2022-09-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3172>CVE-2022-3172</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-09-10</p>
<p>Fix Resolution: v0.25.1,kubernetes-1.22.14,kubernetes-1.23.11,kubernetes-1.24.5,kubernetes-1.25.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
True
|
k8s.io/apimachinery-v0.24.2: 1 vulnerabilities (highest severity is: 5.1) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>k8s.io/apimachinery-v0.24.2</b></p></summary>
<p>null</p>
<p>Library home page: <a href="https://proxy.golang.org/k8s.io/apimachinery/@v/v0.24.2.zip">https://proxy.golang.org/k8s.io/apimachinery/@v/v0.24.2.zip</a></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/kube-tarian/kad/commit/ef6d2d14ba20ad161dda103d1ac6ed92c19687bb">ef6d2d14ba20ad161dda103d1ac6ed92c19687bb</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (k8s.io/apimachinery-v0.24.2 version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-3172](https://www.mend.io/vulnerability-database/CVE-2022-3172) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.1 | k8s.io/apimachinery-v0.24.2 | Direct | v0.25.1,kubernetes-1.22.14,kubernetes-1.23.11,kubernetes-1.24.5,kubernetes-1.25.1 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-3172</summary>
### Vulnerable Library - <b>k8s.io/apimachinery-v0.24.2</b></p>
<p>null</p>
<p>Library home page: <a href="https://proxy.golang.org/k8s.io/apimachinery/@v/v0.24.2.zip">https://proxy.golang.org/k8s.io/apimachinery/@v/v0.24.2.zip</a></p>
<p>
Dependency Hierarchy:
- :x: **k8s.io/apimachinery-v0.24.2** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kube-tarian/kad/commit/ef6d2d14ba20ad161dda103d1ac6ed92c19687bb">ef6d2d14ba20ad161dda103d1ac6ed92c19687bb</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A security issue was discovered in kube-apiserver that allows an aggregated API server to redirect client traffic to any URL. This issue leads to the client performing unexpected actions and forwarding the client's API server credentials to third parties
<p>Publish Date: 2022-09-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3172>CVE-2022-3172</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-09-10</p>
<p>Fix Resolution: v0.25.1,kubernetes-1.22.14,kubernetes-1.23.11,kubernetes-1.24.5,kubernetes-1.25.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
non_process
|
io apimachinery vulnerabilities highest severity is autoclosed vulnerable library io apimachinery null library home page a href found in head commit a href vulnerabilities cve severity cvss dependency type fixed in io apimachinery version remediation available medium io apimachinery direct kubernetes kubernetes kubernetes kubernetes details cve vulnerable library io apimachinery null library home page a href dependency hierarchy x io apimachinery vulnerable library found in head commit a href found in base branch main vulnerability details a security issue was discovered in kube apiserver that allows an aggregated api server to redirect client traffic to any url this issue leads to the client performing unexpected actions and forwarding the client s api server credentials to third parties publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution kubernetes kubernetes kubernetes kubernetes step up your open source security game with mend
| 0
|
14,931
| 18,359,530,474
|
IssuesEvent
|
2021-10-09 01:46:07
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Wrong `history` navigation in Chrome
|
TYPE: bug AREA: client BROWSER: Chrome FREQUENCY: level 1 SYSTEM: client side processing STATE: Stale
|
Test works in Firefox and fails in Chrome:
```js
import { ClientFunction } from 'testcafe';
fixture `Test fixture`
.page `http://example.com/`;
const getLocation = ClientFunction(() => document.location.href);
test('test', async t => {
await t
.navigateTo('http://google.com')
.expect(getLocation()).contains('google')
.navigateTo('https://habrahabr.ru/top/')
.expect(getLocation()).contains('habrahabr')
.navigateTo('javascript:history.go(-1)')
// NOTE: in Chrome location is http://example.com/
.expect(getLocation()).contains('google');
});
```
|
1.0
|
Wrong `history` navigation in Chrome - Test works in Firefox and fails in Chrome:
```js
import { ClientFunction } from 'testcafe';
fixture `Test fixture`
.page `http://example.com/`;
const getLocation = ClientFunction(() => document.location.href);
test('test', async t => {
await t
.navigateTo('http://google.com')
.expect(getLocation()).contains('google')
.navigateTo('https://habrahabr.ru/top/')
.expect(getLocation()).contains('habrahabr')
.navigateTo('javascript:history.go(-1)')
// NOTE: in Chrome location is http://example.com/
.expect(getLocation()).contains('google');
});
```
|
process
|
wrong history navigation in chrome test works in firefox and fails in chrome js import clientfunction from testcafe fixture test fixture page const getlocation clientfunction document location href test test async t await t navigateto expect getlocation contains google navigateto expect getlocation contains habrahabr navigateto javascript history go note in chrome location is expect getlocation contains google
| 1
|
98,632
| 16,387,781,969
|
IssuesEvent
|
2021-05-17 12:47:09
|
fitzinbox/Exomiser
|
https://api.github.com/repos/fitzinbox/Exomiser
|
opened
|
CVE-2020-36187 (High) detected in jackson-databind-2.9.8.jar
|
security vulnerability
|
## CVE-2020-36187 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Exomiser/exomiser-data-genome/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,Exomiser/exomiser-cli/target/lib/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fitzinbox/Exomiser/commit/3a0ae5a0b72ae7a7e59a638af862c28aa80dcdf6">3a0ae5a0b72ae7a7e59a638af862c28aa80dcdf6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36187>CVE-2020-36187</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2997">https://github.com/FasterXML/jackson-databind/issues/2997</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-36187 (High) detected in jackson-databind-2.9.8.jar - ## CVE-2020-36187 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Exomiser/exomiser-data-genome/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,Exomiser/exomiser-cli/target/lib/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fitzinbox/Exomiser/commit/3a0ae5a0b72ae7a7e59a638af862c28aa80dcdf6">3a0ae5a0b72ae7a7e59a638af862c28aa80dcdf6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36187>CVE-2020-36187</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2997">https://github.com/FasterXML/jackson-databind/issues/2997</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file exomiser exomiser data genome pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar exomiser exomiser cli target lib jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp datasources sharedpooldatasource publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
4,125
| 7,085,169,253
|
IssuesEvent
|
2018-01-11 10:00:42
|
bisq-network/exchange
|
https://api.github.com/repos/bisq-network/exchange
|
closed
|
Investigate why there are timeouts leading to failed trades
|
Trade process a: bug
|
We get frequent timeouts which lead sometimes to lost trade fees. Find out the reason for timeouts (Tor network issues, too low timeouts currently used, bug/errors on the peers side so he stops responding, network disconnections to peer,...).
|
1.0
|
Investigate why there are timeouts leading to failed trades - We get frequent timeouts which lead sometimes to lost trade fees. Find out the reason for timeouts (Tor network issues, too low timeouts currently used, bug/errors on the peers side so he stops responding, network disconnections to peer,...).
|
process
|
investigate why there are timeouts leading to failed trades we get frequent timeouts which lead sometimes to lost trade fees find out the reason for timeouts tor network issues too low timeouts currently used bug errors on the peers side so he stops responding network disconnections to peer
| 1
|
16,264
| 20,862,602,838
|
IssuesEvent
|
2022-03-22 01:26:35
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
[Feature Request]希望mpx-fetch支持微信enableHttpDNS等属性
|
processing
|
**问题描述**
微信小程序 wx.request新增了enableCache、enableHttpDNS等新的属性,mpx-fetch是否考虑支持?
|
1.0
|
[Feature Request]希望mpx-fetch支持微信enableHttpDNS等属性 - **问题描述**
微信小程序 wx.request新增了enableCache、enableHttpDNS等新的属性,mpx-fetch是否考虑支持?
|
process
|
希望mpx fetch支持微信enablehttpdns等属性 问题描述 微信小程序 wx request新增了enablecache、enablehttpdns等新的属性,mpx fetch是否考虑支持?
| 1
|
15,289
| 19,294,129,216
|
IssuesEvent
|
2021-12-12 09:45:56
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
Incorrect interpretation of 4 byte attributes to assembly commands
|
Type: Bug Feature: Processor/x86
|
**Describe the bug**
When disassembling 16-bit Windows application, some command use 32-bit immediate values in the assembly, overriding the 16-bit version with the prefix `0x66` and using extended registers (like `EAX`). I have found that values are being incorrectly interpretated in that process. See the example below:
```
1028:09dc 66 3d 00 CMP EAX,0x400
04 00 00
CF = INT_LESS EAX, 0x400:4
OF = INT_SBORROW EAX, 0x400:4
$U1db80:4 = INT_SUB EAX, 0x400:4
SF = INT_SLESS $U1db80:4, 0:4
ZF = INT_EQUAL $U1db80:4, 0:4
$Ud900:4 = INT_AND $U1db80:4, 0xff:4
$Ud980:1 = POPCOUNT $Ud900:4
$Uda00:1 = INT_AND $Ud980:1, 1:1
PF = INT_EQUAL $Uda00:1, 0:1
```
this should be:
```
1028:09dc 66 3d 00 CMP EAX,0x40000
04 00 00
```
(I'm not sure what the command breakdown would be!)
**To Reproduce**
Steps to reproduce the behavior:
1. Import the included xml from the `Debug Function Decompilation` option
[FUN_1028_094f.zip](https://github.com/NationalSecurityAgency/ghidra/files/7121905/FUN_1028_094f.zip)
2. Look at the instruction at `1028:09dc`
**Expected behavior**
The correct interpretation of the value.
**Screenshots**

This also highlights the failure in the `AND` command at `1028:09d6` as well.
**Environment (please complete the following information):**
- OS: Windows 10
- Java Version: 11.0.2
- Ghidra Version: 10
- Ghidra Origin: locally built
**Additional context**
I know that these values correspond to the window styles listed below:
```
#define WS_THICKFRAME 0x00040000L
#define WS_MINIMIZEBOX 0x00020000L
#define WS_MAXIMIZEBOX 0x00010000L
```
but what I am unsure about is how this has occurred, how to fix it, and whether it affects anything else!
|
1.0
|
Incorrect interpretation of 4 byte attributes to assembly commands - **Describe the bug**
When disassembling 16-bit Windows application, some command use 32-bit immediate values in the assembly, overriding the 16-bit version with the prefix `0x66` and using extended registers (like `EAX`). I have found that values are being incorrectly interpretated in that process. See the example below:
```
1028:09dc 66 3d 00 CMP EAX,0x400
04 00 00
CF = INT_LESS EAX, 0x400:4
OF = INT_SBORROW EAX, 0x400:4
$U1db80:4 = INT_SUB EAX, 0x400:4
SF = INT_SLESS $U1db80:4, 0:4
ZF = INT_EQUAL $U1db80:4, 0:4
$Ud900:4 = INT_AND $U1db80:4, 0xff:4
$Ud980:1 = POPCOUNT $Ud900:4
$Uda00:1 = INT_AND $Ud980:1, 1:1
PF = INT_EQUAL $Uda00:1, 0:1
```
this should be:
```
1028:09dc 66 3d 00 CMP EAX,0x40000
04 00 00
```
(I'm not sure what the command breakdown would be!)
**To Reproduce**
Steps to reproduce the behavior:
1. Import the included xml from the `Debug Function Decompilation` option
[FUN_1028_094f.zip](https://github.com/NationalSecurityAgency/ghidra/files/7121905/FUN_1028_094f.zip)
2. Look at the instruction at `1028:09dc`
**Expected behavior**
The correct interpretation of the value.
**Screenshots**

This also highlights the failure in the `AND` command at `1028:09d6` as well.
**Environment (please complete the following information):**
- OS: Windows 10
- Java Version: 11.0.2
- Ghidra Version: 10
- Ghidra Origin: locally built
**Additional context**
I know that these values correspond to the window styles listed below:
```
#define WS_THICKFRAME 0x00040000L
#define WS_MINIMIZEBOX 0x00020000L
#define WS_MAXIMIZEBOX 0x00010000L
```
but what I am unsure about is how this has occurred, how to fix it, and whether it affects anything else!
|
process
|
incorrect interpretation of byte attributes to assembly commands describe the bug when disassembling bit windows application some command use bit immediate values in the assembly overriding the bit version with the prefix and using extended registers like eax i have found that values are being incorrectly interpretated in that process see the example below cmp eax cf int less eax of int sborrow eax int sub eax sf int sless zf int equal int and popcount int and pf int equal this should be cmp eax i m not sure what the command breakdown would be to reproduce steps to reproduce the behavior import the included xml from the debug function decompilation option look at the instruction at expected behavior the correct interpretation of the value screenshots this also highlights the failure in the and command at as well environment please complete the following information os windows java version ghidra version ghidra origin locally built additional context i know that these values correspond to the window styles listed below define ws thickframe define ws minimizebox define ws maximizebox but what i am unsure about is how this has occurred how to fix it and whether it affects anything else
| 1
|
411,609
| 12,026,548,126
|
IssuesEvent
|
2020-04-12 14:38:31
|
kiwicom/schemathesis
|
https://api.github.com/repos/kiwicom/schemathesis
|
opened
|
OpenAPI -> JSON Schema conversion is not applied recursively
|
Priority: High Type: Bug
|
At the moment only top-level values are converted. It might cause errors when there are nested schemas that contain Draft7 incompatible keywords. Which basically leads to an invalid schema + it might lead to problems in response schema check
Related: #468
|
1.0
|
OpenAPI -> JSON Schema conversion is not applied recursively - At the moment only top-level values are converted. It might cause errors when there are nested schemas that contain Draft7 incompatible keywords. Which basically leads to an invalid schema + it might lead to problems in response schema check
Related: #468
|
non_process
|
openapi json schema conversion is not applied recursively at the moment only top level values are converted it might cause errors when there are nested schemas that contain incompatible keywords which basically leads to an invalid schema it might lead to problems in response schema check related
| 0
|
39,851
| 2,860,554,726
|
IssuesEvent
|
2015-06-03 16:21:46
|
HellscreamWoW/Tracker
|
https://api.github.com/repos/HellscreamWoW/Tracker
|
closed
|
[class] warrior - Shield Wall
|
Priority-High Type-Spell
|
How it should work: Reduces all damage taken by 40% for 8 sec.
How it works: Doesnt reduce any damage!
http://www.wowhead.com/spell=871/shield-wall
|
1.0
|
[class] warrior - Shield Wall - How it should work: Reduces all damage taken by 40% for 8 sec.
How it works: Doesnt reduce any damage!
http://www.wowhead.com/spell=871/shield-wall
|
non_process
|
warrior shield wall how it should work reduces all damage taken by for sec how it works doesnt reduce any damage
| 0
|
13,942
| 16,718,871,270
|
IssuesEvent
|
2021-06-10 03:20:33
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
Wrong P-Code for "67 E8" on x86-64
|
Feature: Processor/x86
|
The P-Code generated for call instructions with the byte sequence `67 E8 xx xx xx xx` (a call with a addr32 prefix) seems to be wrong for x86-64 binaries: Ghidra generates something like
```
RSP = INT_SUB RSP, 4:8
STORE ram(RSP), 0x10433f:4
CALL *[ram]0x105390:8
```
but the size of the return address pushed onto the stack is still 8 bytes and not 4. In other words, Ghidra seems to misinterpret the effect of the `67` prefix on the call instruction, since it actually has no effect in this situation (as far as I can tell). I confirmed the different actual behaviour of the instruction with gdb on my computer.
Tested with Ghidra version 9.2.2 DEV.
|
1.0
|
Wrong P-Code for "67 E8" on x86-64 - The P-Code generated for call instructions with the byte sequence `67 E8 xx xx xx xx` (a call with a addr32 prefix) seems to be wrong for x86-64 binaries: Ghidra generates something like
```
RSP = INT_SUB RSP, 4:8
STORE ram(RSP), 0x10433f:4
CALL *[ram]0x105390:8
```
but the size of the return address pushed onto the stack is still 8 bytes and not 4. In other words, Ghidra seems to misinterpret the effect of the `67` prefix on the call instruction, since it actually has no effect in this situation (as far as I can tell). I confirmed the different actual behaviour of the instruction with gdb on my computer.
Tested with Ghidra version 9.2.2 DEV.
|
process
|
wrong p code for on the p code generated for call instructions with the byte sequence xx xx xx xx a call with a prefix seems to be wrong for binaries ghidra generates something like rsp int sub rsp store ram rsp call but the size of the return address pushed onto the stack is still bytes and not in other words ghidra seems to misinterpret the effect of the prefix on the call instruction since it actually has no effect in this situation as far as i can tell i confirmed the different actual behaviour of the instruction with gdb on my computer tested with ghidra version dev
| 1
|
48,600
| 10,264,109,643
|
IssuesEvent
|
2019-08-22 15:38:59
|
codered-co/classifica-me-app
|
https://api.github.com/repos/codered-co/classifica-me-app
|
closed
|
Recuperar empresas em Classificar e Ranking
|
Core code
|
Recuperação das empresas no ClassificarFragment e RankingFragment
Recuperação das informações da empresa selecionada no DescricaoEmpresaFragment
|
1.0
|
Recuperar empresas em Classificar e Ranking - Recuperação das empresas no ClassificarFragment e RankingFragment
Recuperação das informações da empresa selecionada no DescricaoEmpresaFragment
|
non_process
|
recuperar empresas em classificar e ranking recuperação das empresas no classificarfragment e rankingfragment recuperação das informações da empresa selecionada no descricaoempresafragment
| 0
|
2,179
| 5,028,582,262
|
IssuesEvent
|
2016-12-15 18:39:25
|
Sage-Bionetworks/Genie
|
https://api.github.com/repos/Sage-Bionetworks/Genie
|
opened
|
remove/modify some fields from clinical release file, but not internal database
|
clinical data processing
|
Remove the following fields:
1. BIRTH_YEAR
2. SECONDARY_RACE
3. TERTIARY_RACE
4. ONCOTREE_PRIMARY_NODE
5. ONCOTREE_SECONDARY_NODE
Modification of AGE_AT_SEQ_REPORT from days to: FLOOR ([AGE_AT_SEQ_REPORT]/365.25)
|
1.0
|
remove/modify some fields from clinical release file, but not internal database - Remove the following fields:
1. BIRTH_YEAR
2. SECONDARY_RACE
3. TERTIARY_RACE
4. ONCOTREE_PRIMARY_NODE
5. ONCOTREE_SECONDARY_NODE
Modification of AGE_AT_SEQ_REPORT from days to: FLOOR ([AGE_AT_SEQ_REPORT]/365.25)
|
process
|
remove modify some fields from clinical release file but not internal database remove the following fields birth year secondary race tertiary race oncotree primary node oncotree secondary node modification of age at seq report from days to floor
| 1
|
22,017
| 30,523,488,272
|
IssuesEvent
|
2023-07-19 09:38:08
|
prometheus-community/windows_exporter
|
https://api.github.com/repos/prometheus-community/windows_exporter
|
closed
|
Msi install EXTRA_FLAGS Syntax Process.include not working
|
collector/process
|
Hello,
I am trying to monitor a process on multiple windows machines.
I try to use the process collector for that but i am unable to find the correct syntax to pass the EXTRA_FLAGS.
It either just fails and says the syntax is incorrect or it gets stuck during install.
I tried multiple variants already and also ' quotes to no avail.
My best guess was:
msiexec /i windows_exporter-0.22.0-amd64.msi ENABLED_COLLECTORS="ad,adfs,cache,cpu,cpu_info,cs,container,dfsr,dhcp,dns,fsrmquota,iis,logical_disk,logon,memory,msmq,mssql,netframework_clrexceptions,netframework_clrinterop,netframework_clrjit,netframework_clrloading,netframework_clrlocksandthreads,netframework_clrmemory,netframework_clrremoting,netframework_clrsecurity,net,os,process,remote_fx,service,system,tcp,time,vmware" EXTRA_FLAGS="--collector.process.include="firefox.*"" TEXTFILE_DIR="C:\custom_metrics" LISTEN_PORT="9182"
I found --collector.process.include="firefox.*" on the windows_exporter/docs/collector.process.md page.
I also tried to move the "EXTRA_FLAGS" to the end.
I already searched the issues here.
I am sorry if I didnt find the correct answer by someone else or am missing something.
Thanks in advance.
|
1.0
|
Msi install EXTRA_FLAGS Syntax Process.include not working - Hello,
I am trying to monitor a process on multiple windows machines.
I try to use the process collector for that but i am unable to find the correct syntax to pass the EXTRA_FLAGS.
It either just fails and says the syntax is incorrect or it gets stuck during install.
I tried multiple variants already and also ' quotes to no avail.
My best guess was:
msiexec /i windows_exporter-0.22.0-amd64.msi ENABLED_COLLECTORS="ad,adfs,cache,cpu,cpu_info,cs,container,dfsr,dhcp,dns,fsrmquota,iis,logical_disk,logon,memory,msmq,mssql,netframework_clrexceptions,netframework_clrinterop,netframework_clrjit,netframework_clrloading,netframework_clrlocksandthreads,netframework_clrmemory,netframework_clrremoting,netframework_clrsecurity,net,os,process,remote_fx,service,system,tcp,time,vmware" EXTRA_FLAGS="--collector.process.include="firefox.*"" TEXTFILE_DIR="C:\custom_metrics" LISTEN_PORT="9182"
I found --collector.process.include="firefox.*" on the windows_exporter/docs/collector.process.md page.
I also tried to move the "EXTRA_FLAGS" to the end.
I already searched the issues here.
I am sorry if I didnt find the correct answer by someone else or am missing something.
Thanks in advance.
|
process
|
msi install extra flags syntax process include not working hello i am trying to monitor a process on multiple windows machines i try to use the process collector for that but i am unable to find the correct syntax to pass the extra flags it either just fails and says the syntax is incorrect or it gets stuck during install i tried multiple variants already and also quotes to no avail my best guess was msiexec i windows exporter msi enabled collectors ad adfs cache cpu cpu info cs container dfsr dhcp dns fsrmquota iis logical disk logon memory msmq mssql netframework clrexceptions netframework clrinterop netframework clrjit netframework clrloading netframework clrlocksandthreads netframework clrmemory netframework clrremoting netframework clrsecurity net os process remote fx service system tcp time vmware extra flags collector process include firefox textfile dir c custom metrics listen port i found collector process include firefox on the windows exporter docs collector process md page i also tried to move the extra flags to the end i already searched the issues here i am sorry if i didnt find the correct answer by someone else or am missing something thanks in advance
| 1
|
16,799
| 22,044,378,442
|
IssuesEvent
|
2022-05-29 21:02:11
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add Meg
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Meg
Type (film/tv show): Film
Film or show in which it appears: Hollywood (miniseries)
Is the parent film/show streaming anywhere? Netflix
About when in the parent film/show does it appear? Episodes 6 and 7
Actual footage of the film/show can be seen (yes/no)? Yes
|
1.0
|
Add Meg - Please add as much of the following info as you can:
Title: Meg
Type (film/tv show): Film
Film or show in which it appears: Hollywood (miniseries)
Is the parent film/show streaming anywhere? Netflix
About when in the parent film/show does it appear? Episodes 6 and 7
Actual footage of the film/show can be seen (yes/no)? Yes
|
process
|
add meg please add as much of the following info as you can title meg type film tv show film film or show in which it appears hollywood miniseries is the parent film show streaming anywhere netflix about when in the parent film show does it appear episodes and actual footage of the film show can be seen yes no yes
| 1
|
15,898
| 6,050,655,407
|
IssuesEvent
|
2017-06-12 21:33:22
|
mapbox/mapbox-gl-native
|
https://api.github.com/repos/mapbox/mapbox-gl-native
|
closed
|
extend iOS tests to device
|
build iOS tests
|
We can add a `make itest-device` or something like that.
https://github.com/phonegap/ios-deploy is possibly useful.
/cc @kkaefer
|
1.0
|
extend iOS tests to device - We can add a `make itest-device` or something like that.
https://github.com/phonegap/ios-deploy is possibly useful.
/cc @kkaefer
|
non_process
|
extend ios tests to device we can add a make itest device or something like that is possibly useful cc kkaefer
| 0
|
13,505
| 16,044,856,582
|
IssuesEvent
|
2021-04-22 12:32:54
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Improve the error when the shape of the data doesn't match the schema
|
kind/improvement process/candidate team/client topic: mongodb
|
Follow-up to
## Problem
The problem. You have a schema
```prisma
model Post {
slug String
}
```
Then you:
- You generate a client
- You insert a record
- You run prisma.post.findMany() all good.
Now you're ready to expand your schema. You add a title:
```prisma
model Post {
slug String
title String
}
```
Then you:
- You generate a client
- You run prisma.post.findMany() .
You'll run into:
```
Attempted to serialize scalar 'null' with incompatible type 'String'.
```
Your previous insert doesn't have the required title. This is technically correct because your schema is describing an invalid shape of your actual data, but it's super easy to run into.
A good start is improving the error. Ideally:
```
Data returned included a title with a null value which is not compatible with your current schema. To fix this, either make the title field optional or backfill your existing data to have a title.
```
@dpetrick mentioned that since we don't have access to this kind of metadata at this layer, but we may have access to the field name:
```
The `title` field contains a `null` value. This is incompatible with the defined 'String' type.
```
|
1.0
|
Improve the error when the shape of the data doesn't match the schema - Follow-up to
## Problem
The problem. You have a schema
```prisma
model Post {
slug String
}
```
Then you:
- You generate a client
- You insert a record
- You run prisma.post.findMany() all good.
Now you're ready to expand your schema. You add a title:
```prisma
model Post {
slug String
title String
}
```
Then you:
- You generate a client
- You run prisma.post.findMany() .
You'll run into:
```
Attempted to serialize scalar 'null' with incompatible type 'String'.
```
Your previous insert doesn't have the required title. This is technically correct because your schema is describing an invalid shape of your actual data, but it's super easy to run into.
A good start is improving the error. Ideally:
```
Data returned included a title with a null value which is not compatible with your current schema. To fix this, either make the title field optional or backfill your existing data to have a title.
```
@dpetrick mentioned that since we don't have access to this kind of metadata at this layer, but we may have access to the field name:
```
The `title` field contains a `null` value. This is incompatible with the defined 'String' type.
```
|
process
|
improve the error when the shape of the data doesn t match the schema follow up to problem the problem you have a schema prisma model post slug string then you you generate a client you insert a record you run prisma post findmany all good now you re ready to expand your schema you add a title prisma model post slug string title string then you you generate a client you run prisma post findmany you ll run into attempted to serialize scalar null with incompatible type string your previous insert doesn t have the required title this is technically correct because your schema is describing an invalid shape of your actual data but it s super easy to run into a good start is improving the error ideally data returned included a title with a null value which is not compatible with your current schema to fix this either make the title field optional or backfill your existing data to have a title dpetrick mentioned that since we don t have access to this kind of metadata at this layer but we may have access to the field name the title field contains a null value this is incompatible with the defined string type
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.