| column | dtype | values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |
| 5,547 | 8,393,188,464 | IssuesEvent | 2018-10-09 19:48:31 | fossas/fossa-cli | https://api.github.com/repos/fossas/fossa-cli | opened | Backlog | type: process |
This is a collection of issues that were out of scope at the time and/or closed to reduce noise, but may be added back to our roadmap at some point.
- [ ] [Automatic releases within CI](https://github.com/fossas/fossa-cli/issues/220)
| 1.0 | process | 1
| 513,686 | 14,924,929,041 | IssuesEvent | 2021-01-24 02:58:10 | weaveworks/eksctl | https://api.github.com/repos/weaveworks/eksctl | closed | --ssh-access on a Windows node group should add port 3389 for RDP | kind/feature priority/backlog stale |
**What happened?**
Remote access for Windows node groups isn't opened by --ssh-access. This option, or a similar one ("--rdp-access"), should open RDP/3389 on Windows nodes.
**What you expected to happen?**
When you run a Windows node group and add --ssh-access, RDP access should be opened in the security group so you can access it remotely.
**How to reproduce it?**
eksctl create cluster --name=windows-test
eksctl utils install-vpc-controllers --cluster windows-test --approve
eksctl create nodegroup --cluster windows-test --node-ami-family WindowsServer2019CoreContainer --ssh-access
**Anything else we need to know?**
What OS are you using, are you using a downloaded binary or did you compile eksctl, what type of AWS credentials are you using (i.e. default/named profile, MFA) - please don't include actual credentials though!
**Versions**
Please paste in the output of these commands:
```
$ eksctl version
0.22.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
```
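The request above amounts to adding one more ingress rule when the node group is Windows. A minimal sketch of what that rule looks like; the helper name and CIDR default are illustrative, not eksctl's actual code, and the boto3 call in the comment is one assumed way to apply it:

```python
# Hypothetical sketch: the permission entry that an eksctl "--rdp-access"
# style flag would need to add to the node security group.
def rdp_ingress_rule(cidr="0.0.0.0/0"):
    """Build an EC2 IpPermissions entry opening TCP/3389 (RDP)."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 3389,
        "ToPort": 3389,
        "IpRanges": [{"CidrIp": cidr, "Description": "RDP access"}],
    }

# With boto3 this would be applied roughly as:
#   ec2.authorize_security_group_ingress(
#       GroupId=node_sg_id, IpPermissions=[rdp_ingress_rule()])
print(rdp_ingress_rule("10.0.0.0/16"))
```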
| 1.0 | non_process | 0
| 10,991 | 13,785,994,639 | IssuesEvent | 2020-10-09 00:27:17 | cbrennanpoole/Qualitative-Self | https://api.github.com/repos/cbrennanpoole/Qualitative-Self | closed | Duns & Bradstreet | Creative Strategy Git Gud Leadership and Development Machines Learning conscience hacking help wanted institutional stigmatization process implementation unconscious bias |

## On Chasing the Wind, LLC
`
Categorical Associative Algorithms – Altruism
`<br>
> Dangerous games being played by
big tech
big corporate best interest...
and what do we have here?
big DUN DUN DUN -d'uh- S
SMB credit rates– who's that?
'with?'
'with...are you with me?'
'with!'
Oh yeah... what's up... Wind was late swinging in.
Hey.. that's 'Chasing!
What's Chasing doing here?
'In the damn midst'?
I told the whole earth - even back-end;
pandemic 'forced states'
now DBA
with Wind
Y'all leave 'Chasing' alone and
out of the midst!
End of discussion.
music fades.
lights above begin to flicker as
-whoosh- big gust -blows-
screen black
> 'and... Cut! Great scene team.
> Sorry with'.
`
{mumblings and lots of posturing as the h.sapien takes to a cowardice defeat; with reels it back in}.
`<br>
##### Production Assistant Biden :
"ok.. ok... I am sorry. {uncomfortable okay awkward silence now} no more."
{ENTER GALE}
Yes 'chasing is safe. said 'Gale' came and they were going back to Unit B
if you need them; do I even want to ask wh- ah neverming.
It's hard enough processing pandemic, now we've had things like - what 4 walking, talking, living, beings
manifest from - poof - literally nothing.
'sorry... you're right.... I'm sorry I mis-spoke. From dust.
As I am a witness to that damn literal 'dust in the wind'
don't know if i ever be wound all the way in after seeing
```
...
..
.
```<br>
and
```
...
..
.
```<br>
ACTION
- poof -
'shhh! chasing is crazy, let's leave it at that before with catches
-whoosh-
// ohh.. fuuc....
- poof -
fade to black
## with (the) Wind
---
`
hasn't been seen since that ominous last scene y'all.
`<br>
**POSSIBILITIES PONTIFICATIONS**
9114 Central Avenue SW *Unit B* 30014
italicized and pointed to while not yet able
to point you all the way to the whole picture
story ... maybe not.. maybe never.
just remember ... it is in italics...
Best,
x.__________
with Wind
> algorithm, keys got 'tensor' shh... did you -poof- just happen to hear with Wind whisperings? ohh....
( to be continued )
---
---
**Source URL**:
[https://www.dnb.com/business-directory/company-profiles.chasing_the_wind_llc.aa4bbefb9e75c98e05b0a99fbf48a51b.html](https://www.dnb.com/business-directory/company-profiles.chasing_the_wind_llc.aa4bbefb9e75c98e05b0a99fbf48a51b.html)
<table><tr><td><strong>Browser</strong></td><td>Chrome 84.0.4147.68</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>1920x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>1920x888</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@1x</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr></table>
| 1.0 | process | 1
| 20,481 | 27,139,707,864 | IssuesEvent | 2023-02-16 15:32:29 | USGS-WiM/StreamStats | https://api.github.com/repos/USGS-WiM/StreamStats | closed | BP: Add "Select State/Region" dropdown | Batch Processor |
Part of #1455
- [x] Create a dropdown that says "Select State / Region:"
- [x] Make a service call to https://streamstats.usgs.gov/nssservices/regions to retrieve the list of all the Regions
- [x] Populate the dropdown with the "name" of each Region
- [x] Also create a checkbox that says "Basin Delineation", which gets forcibly selected when a State/Region is selected
Note: we may need to refine this, as there are some regions like "Undefined" that we may not want to include.
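The population-and-filtering step in the checklist above can be sketched as follows. The payload shape (a JSON array of objects with a "name" field) and the helper name are assumptions, not the StreamStats implementation:

```python
# Sketch of the dropdown-population step: given the parsed response from
# the regions service, keep the region names and skip entries like
# "Undefined" that we may not want to include.
def region_names(regions, exclude=("Undefined",)):
    return [r["name"] for r in regions if r.get("name") not in exclude]

sample = [{"name": "Alabama"}, {"name": "Undefined"}, {"name": "Alaska"}]
print(region_names(sample))  # -> ['Alabama', 'Alaska']
```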
| 1.0 | process | 1
| 219,217 | 7,334,089,005 | IssuesEvent | 2018-03-05 21:33:58 | w3c/web-platform-tests | https://api.github.com/repos/w3c/web-platform-tests | closed | "wpt run firefox --install-browser --yes" fails on OSX | good first issue infra priority:backlog wptrunner |
Running on OSX, this is the stacktrace:
```
File "./wpt", line 5, in <module>
wpt.main()
File "/Users/kereliuk/web-platform-tests/tools/wpt/wpt.py", line 132, in main
rv = script(*args, **kwargs)
File "/Users/kereliuk/web-platform-tests/tools/wpt/run.py", line 373, in run
**kwargs)
File "/Users/kereliuk/web-platform-tests/tools/wpt/run.py", line 351, in setup_wptrunner
kwargs["binary"] = setup_cls.install(venv)
File "/Users/kereliuk/web-platform-tests/tools/wpt/run.py", line 147, in install
return self.browser.install(venv.path)
File "/Users/kereliuk/web-platform-tests/tools/wpt/browser.py", line 116, in install
resp = self.get_from_nightly("<a[^>]*>(firefox-\d+\.\d(?:\w\d)?.en-US.%s\.tar\.bz2)" % self.platform_string())
File "/Users/kereliuk/web-platform-tests/tools/wpt/browser.py", line 107, in get_from_nightly
filename = re.compile(pattern).search(index.text).group(1)
AttributeError: 'NoneType' object has no attribute 'group'
```
We are matching with the wrong regex pattern in browser.py
```
self.get_from_nightly("<a[^>]*>(firefox-\d+\.\d(?:\w\d)?.en-US.%s\.tar\.bz2)" % self.platform_string())
```
So it will search for a nightly download matching "firefox-\d+\.\d(?:\w\d)?.en-US.%s\.tar\.bz2".
This will find nothing; I think instead we need to search for "firefox-\d+\.\d(?:\w\d)?.en-US.%s.web-platform.tests\.tar\.bz2".
In fact, should we be using these web-platform.tests downloads from firefox nightly for every platform? @jgraham do you know?
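The mismatch is easy to check directly. A quick sketch with the platform string hard-coded to "mac" and an illustrative nightly filename (not taken from the issue):

```python
import re

# An illustrative web-platform.tests archive name from the nightly index.
name = "firefox-60.0a1.en-US.mac.web-platform.tests.tar.bz2"

old = r"firefox-\d+\.\d(?:\w\d)?.en-US.%s\.tar\.bz2" % "mac"
new = r"firefox-\d+\.\d(?:\w\d)?.en-US.%s.web-platform.tests\.tar\.bz2" % "mac"

# The old pattern requires ".tar.bz2" right after the platform string,
# so it never matches the web-platform.tests archives; the amended one does.
print(re.search(old, name))          # -> None
print(re.search(new, name).group())  # -> the full filename
```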
| 1.0 | non_process | 0
| 183,875 | 21,784,740,199 | IssuesEvent | 2022-05-14 01:09:29 | RG4421/terra-dev-site | https://api.github.com/repos/RG4421/terra-dev-site | closed | CVE-2019-11358 (Medium) detected in jquery-2.1.4.min.js, jquery-1.9.1.js - autoclosed | security vulnerability |
## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.1.4.min.js</b>, <b>jquery-1.9.1.js</b></p></summary>
<p>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: terra-dev-site/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>Path to vulnerable library: terra-dev-site/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: terra-dev-site/node_modules/tinycolor2/index.html</p>
<p>Path to vulnerable library: terra-dev-site/node_modules/tinycolor2/demo/jquery-1.9.1.js,terra-dev-site/node_modules/tinycolor2/test/../demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/RG4421/terra-dev-site/commit/69dc7bb19397d2ec93f0039c0e390d4b4c29f1ee">69dc7bb19397d2ec93f0039c0e390d4b4c29f1ee</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.1.4","packageFilePaths":["/node_modules/js-base64/.attic/test-moment/index.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:2.1.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.9.1","packageFilePaths":["/node_modules/tinycolor2/index.html","/node_modules/tinycolor2/test/index.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:1.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2019-11358","vulnerabilityDetails":"jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
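CVE-2019-11358 is specific to jQuery.extend and JavaScript's prototype chain, but the underlying bug class, recursively merging untrusted nested keys into a shared object, can be sketched in any language. This Python analogue is an illustration of that class, not of the jQuery mechanism itself; here "shared_defaults" plays the role of the polluted Object.prototype:

```python
# Unsafe deep merge: attacker-controlled nested keys overwrite values in
# a shared, trusted object that other code reads from.
def unsafe_deep_merge(target, source):
    for key, value in source.items():
        if isinstance(value, dict):
            unsafe_deep_merge(target.setdefault(key, {}), value)
        else:
            target[key] = value
    return target

shared_defaults = {"settings": {"admin": False}}
untrusted = {"settings": {"admin": True}}  # attacker-controlled input
unsafe_deep_merge(shared_defaults, untrusted)
print(shared_defaults["settings"]["admin"])  # -> True: shared state clobbered
```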
| True | non_process | 0
| 351,305 | 10,515,469,261 | IssuesEvent | 2019-09-28 10:06:36 | code-ready/crc | https://api.github.com/repos/code-ready/crc | closed | Add preflight check for availability of HyperV and dependent cmdlets on Powershell Core (v6) | kind/enhancement os/windows priority/minor state/need more information |
### General information
* OS: Windows
* Hypervisor: Hyper-V
* Did you run `crc setup` before starting it (Yes/No)? Yes
## CRC version
```Powershell
PS C:\Users\Erik> crc version
version: 1.0.0-beta.5+f2aa58c
OpenShift version: 4.1.14 (embedded in binary)
```
### Steps to reproduce
1. Run crc start in Windows Terminal (Preview) / Powershell 6.1.1
### Expected
I then closed the Windows Terminal and tried again in regular PowerShell version 5.1 and had no problems. I expect it to work like that. I'm unsure whether it's the new terminal or the newer version of PowerShell 6.1.1 that's causing the issue.
### Actual
### Logs
You can start crc with `crc start --log-level debug` to collect logs.
Please consider posting this on http://gist.github.com/ and post the link in the issue.
```Powershell
PowerShell 6.1.1
Copyright (c) Microsoft Corporation. All rights reserved.
https://aka.ms/pscore6-docs
Type 'help' to get help.
PS C:\Users\Erik> crc setup
INFO Checking if running as normal user
INFO Caching oc binary
INFO Unpacking bundle from the CRC binary
INFO Check Windows 10 release
INFO Hyper-V installed
INFO Is user a member of the Hyper-V Administrators group
INFO Does the Hyper-V virtual switch exist
Setup is complete, you can now run 'crc start' to start a CodeReady Containers instance
```
```Powershell
PS C:\Users\Erik> crc start --log-level debug
INFO Checking if running as normal user
INFO Checking if oc binary is cached
DEBU oc binary already cached
INFO Check Windows 10 release
INFO Hyper-V installed and operational
INFO Is user a member of the Hyper-V Administrators group
INFO Does the Hyper-V virtual switch exist
Checking file: C:\Users\Erik\.crc\machines\crc\.crc-exist
? Image pull secret [? for help] ***********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
***********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
INFO Loading bundle: crc_hyperv_4.1.14.crcbundle ...
INFO Creating VM ...
Found binary path at C:\Program Files\crc-windows-1.0.0-beta.5-amd64\crc.exe
Launching plugin server for driver hyperv
Plugin server listening at address 127.0.0.1:50954
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(crc) Calling .GetMachineName
(crc) Calling .DriverName
Running pre-create checks...
(crc) Calling .PreCreateCheck
(crc) DBG | [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique (crc) DBG | [stdout =====>] :
(crc) DBG | [stderr =====>] : Get-Unique : The 'Get-Unique' command was found in the module 'Microsoft.PowerShell.Utility', but the module could not be loaded. For more information, run 'Import-Module Microsoft.PowerShell.Utility'. (crc) DBG | At line:1 char:45
(crc) DBG | + @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
(crc) DBG | + ~~~~~~~~~~
(crc) DBG | + CategoryInfo : ObjectNotFound: (Get-Unique:String) [], CommandNotFoundException
(crc) DBG | + FullyQualifiedErrorId : CouldNotAutoloadMatchingModule
(crc) DBG |
(crc) DBG |
ERRO Error occurred: Error creating host: Error creating the VM. Error with pre-create check: "exit status 1"
Making call to close driver server
(crc) Calling .Close
Successfully made call to close driver server
(crc) DBG | Closing plugin on server side
Making call to close connection to plugin binary
```
|
1.0
|
Add preflight check for availability of HyperV and dependent cmdlets on Powershell Core (v6) - ### General information
* OS: Windows
* Hypervisor: Hyper-V
* Did you run `crc setup` before starting it (Yes/No)? Yes
## CRC version
```Powershell
PS C:\Users\Erik> crc version
version: 1.0.0-beta.5+f2aa58c
OpenShift version: 4.1.14 (embedded in binary)
```
## CRC status
```bash
# Put `crc status` output here
```
## CRC config
```bash
# Put `crc config view` output here
```
## Host Operating System
```bash
# Put the output of `cat /etc/os-release` in case of Linux
# put the output of `sw_vers` in case of Mac
# Put the output of `systeminfo` in case of Windows
```
### Steps to reproduce
1. Run crc start in Windows Terminal (Preview) / Powershell 6.1.1
2.
3.
4.
### Expected
I then closed the Windows Terminal and tried again in regular PowerShell version 5.1 and had no problems. I expect it to work like that. I'm unsure whether it's the new terminal or the newer version of PowerShell 6.1.1 that's causing the issue.
### Actual
### Logs
You can start crc with `crc start --log-level debug` to collect logs.
Please consider posting this on http://gist.github.com/ and post the link in the issue.
```Powershell
PowerShell 6.1.1
Copyright (c) Microsoft Corporation. All rights reserved.
https://aka.ms/pscore6-docs
Type 'help' to get help.
PS C:\Users\Erik> crc setup
INFO Checking if running as normal user
INFO Caching oc binary
INFO Unpacking bundle from the CRC binary
INFO Check Windows 10 release
INFO Hyper-V installed
INFO Is user a member of the Hyper-V Administrators group
INFO Does the Hyper-V virtual switch exist
Setup is complete, you can now run 'crc start' to start a CodeReady Containers instance
```
```Powershell
PS C:\Users\Erik> crc start --log-level debug
INFO Checking if running as normal user
INFO Checking if oc binary is cached
DEBU oc binary already cached
INFO Check Windows 10 release
INFO Hyper-V installed and operational
INFO Is user a member of the Hyper-V Administrators group
INFO Does the Hyper-V virtual switch exist
Checking file: C:\Users\Erik\.crc\machines\crc\.crc-exist
? Image pull secret [? for help] ***********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
***********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
INFO Loading bundle: crc_hyperv_4.1.14.crcbundle ...
INFO Creating VM ...
Found binary path at C:\Program Files\crc-windows-1.0.0-beta.5-amd64\crc.exe
Launching plugin server for driver hyperv
Plugin server listening at address 127.0.0.1:50954
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(crc) Calling .GetMachineName
(crc) Calling .DriverName
Running pre-create checks...
(crc) Calling .PreCreateCheck
(crc) DBG | [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique (crc) DBG | [stdout =====>] :
(crc) DBG | [stderr =====>] : Get-Unique : The 'Get-Unique' command was found in the module 'Microsoft.PowerShell.Utility', but the module could not be loaded. For more information, run 'Import-Module Microsoft.PowerShell.Utility'. (crc) DBG | At line:1 char:45
(crc) DBG | + @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
(crc) DBG | + ~~~~~~~~~~
(crc) DBG | + CategoryInfo : ObjectNotFound: (Get-Unique:String) [], CommandNotFoundException
(crc) DBG | + FullyQualifiedErrorId : CouldNotAutoloadMatchingModule
(crc) DBG |
(crc) DBG |
ERRO Error occurred: Error creating host: Error creating the VM. Error with pre-create check: "exit status 1"
Making call to close driver server
(crc) Calling .Close
Successfully made call to close driver server
(crc) DBG | Closing plugin on server side
Making call to close connection to plugin binary
```
|
non_process
|
add preflight check for availability of hyperv and dependent cmdlets on powershell core general information os windows hypervisor hyper v did you run crc setup before starting it yes no yes crc version powershell ps c users erik crc version version beta openshift version embedded in binary crc status bash put crc status output here crc config bash put crc config view output here host operating system bash put the output of cat etc os release in case of linux put the output of sw vers in case of mac put the output of systeminfo in case of windows steps to reproduce run crc start in windows terminal preview powershell expected i then closed the windows terminal and tried again in regular powershell version and had no problems i expect it to work like that i m unsure whether it s the new terminal or the newer version of powershell that s causing the issue actual logs you can start crc with crc start log level debug to collect logs please consider posting this on and post the link in the issue powershell powershell copyright c microsoft corporation all rights reserved type help to get help ps c users erik crc setup info checking if running as normal user info caching oc binary info unpacking bundle from the crc binary info check windows release info hyper v installed info is user a member of the hyper v administrators group info does the hyper v virtual switch exist setup is complete you can now run crc start to start a codeready containers instance powershell ps c users erik crc start log level debug info checking if running as normal user info checking if oc binary is cached debu oc binary already cached info check windows release info hyper v installed and operational info is user a member of the hyper v administrators group info does the hyper v virtual switch exist checking file c users erik crc machines crc crc exist image pull secret info loading bundle crc hyperv crcbundle info creating vm found binary path at c program files crc windows beta crc exe launching 
plugin server for driver hyperv plugin server listening at address calling getversion using api version calling setconfigraw calling getmachinename crc calling getmachinename crc calling drivername running pre create checks crc calling precreatecheck crc dbg c windows windowspowershell powershell exe noprofile noninteractive get module listavailable hyper v name get unique crc dbg crc dbg get unique the get unique command was found in the module microsoft powershell utility but the module could not be loaded for more information run import module microsoft powershell utility crc dbg at line char crc dbg get module listavailable hyper v name get unique crc dbg crc dbg categoryinfo objectnotfound get unique string commandnotfoundexception crc dbg fullyqualifiederrorid couldnotautoloadmatchingmodule crc dbg crc dbg erro error occurred error creating host error creating the vm error with pre create check exit status making call to close driver server crc calling close successfully made call to close driver server crc dbg closing plugin on server side making call to close connection to plugin binary
| 0
|
15,103
| 18,842,758,882
|
IssuesEvent
|
2021-11-11 11:32:29
|
pystatgen/sgkit
|
https://api.github.com/repos/pystatgen/sgkit
|
closed
|
Fix different mypy errors for different builds
|
process + tools
|
We are getting different mypy errors between builds. Compare
https://github.com/pystatgen/sgkit/runs/4149816911?check_suite_focus=true
with
https://github.com/pystatgen/sgkit/runs/4149991714?check_suite_focus=true
It's not clear if this is due to different Python versions, or GH actions caching different environments (or something else).
|
1.0
|
Fix different mypy errors for different builds - We are getting different mypy errors between builds. Compare
https://github.com/pystatgen/sgkit/runs/4149816911?check_suite_focus=true
with
https://github.com/pystatgen/sgkit/runs/4149991714?check_suite_focus=true
It's not clear if this is due to different Python versions, or GH actions caching different environments (or something else).
|
process
|
fix different mypy errors for different builds we are getting different mypy errors between builds compare with it s not clear if this is due to different python versions or gh actions caching different environments or something else
| 1
|
6,267
| 9,221,763,958
|
IssuesEvent
|
2019-03-11 20:50:02
|
w3c/webauthn
|
https://api.github.com/repos/w3c/webauthn
|
closed
|
update -webauthn-registries Internet-Draft to cause IANA registration of WebAuthn attestation and extension identifiers
|
spec:webauthn-registries type:editorial type:process
|
update the [draft-hodges-webauthn-registries](https://datatracker.ietf.org/doc/draft-hodges-webauthn-registries/) Internet-Draft to cause [IANA](https://www.iana.org/) registration of [WebAuthn Attestation Statement Format Identifiers](https://w3c.github.io/webauthn/#sctn-att-fmt-reg) and [WebAuthn Extension Identifiers](https://w3c.github.io/webauthn/#sctn-extensions-reg), when said registries are created.
|
1.0
|
update -webauthn-registries Internet-Draft to cause IANA registration of WebAuthn attestation and extension identifiers - update the [draft-hodges-webauthn-registries](https://datatracker.ietf.org/doc/draft-hodges-webauthn-registries/) Internet-Draft to cause [IANA](https://www.iana.org/) registration of [WebAuthn Attestation Statement Format Identifiers](https://w3c.github.io/webauthn/#sctn-att-fmt-reg) and [WebAuthn Extension Identifiers](https://w3c.github.io/webauthn/#sctn-extensions-reg), when said registries are created.
|
process
|
update webauthn registries internet draft to cause iana registration of webauthn attestation and extension identifiers update the internet draft to cause registration of and when said registries are created
| 1
|
35,362
| 14,675,927,150
|
IssuesEvent
|
2020-12-30 18:50:44
|
microsoft/BotFramework-Composer
|
https://api.github.com/repos/microsoft/BotFramework-Composer
|
closed
|
Nightly Build e678bd7 - cannot choose Profiles to Publish to
|
Area: Publish Bot Services Team: Runtime & Publishing Type: Bug customer-replied-to customer-reported
|
Hi @tonyanziano
I've installed the latest nightly build and got further than I have before with trying to Publish to PVA from the Bot Framework Composer. However, it is not allowing me to choose a profile to publish to.
I had a publish profile of "Publish Bot to Azure" and added one to "Publish to PVA" - this time it found all 4 bots in my test tenant environment. However, when I go to the screen "Publish your bots" I cannot actually choose "Publish to PVA" as so:

When I click the "Publish to PVA" profile, it defaults to the "Publish to Azure" Profile so I can't actually publish it.
Thanks!
|
1.0
|
Nightly Build e678bd7 - cannot choose Profiles to Publish to - Hi @tonyanziano
I've installed the latest nightly build and got further than I have before with trying to Publish to PVA from the Bot Framework Composer. However, it is not allowing me to choose a profile to publish to.
I had a publish profile of "Publish Bot to Azure" and added one to "Publish to PVA" - this time it found all 4 bots in my test tenant environment. However, when I go to the screen "Publish your bots" I cannot actually choose "Publish to PVA" as so:

When I click the "Publish to PVA" profile, it defaults to the "Publish to Azure" Profile so I can't actually publish it.
Thanks!
|
non_process
|
nightly build cannot choose profiles to publish to hi tonyanziano i ve installed the latest nightly build and got further than i have before with trying to publish to pva from the bot framework composer however it is not allowing me to choose a profile to publish to i had a publish profile of publish bot to azure and added one to publish to pva this time it found all bots in my test tenant environment however when i go to the screen publish your bots i cannot actually choose publish to pva as so when i click the publish to pva profile it defaults to the publish to azure profile so i can t actually publish it thanks
| 0
|
19,015
| 13,536,102,198
|
IssuesEvent
|
2020-09-16 08:33:59
|
topcoder-platform/qa-fun
|
https://api.github.com/repos/topcoder-platform/qa-fun
|
closed
|
Banner text is partially hidden
|
UX/Usability
|
Steps to Reproduce:
1. Go to https://www.topcoder.com/
2. Observe text in the banner at the top of the page.
Expected Result:
Text of the banner should be fully displayed.
Actual Result:
Text of the banner isn't fully displayed
Screenshots or screencast:

Device: Lenovo ideapad 700
OS: Windows 10
Browser: Google Chrome
Version 81.0.4044.138 (Official Build) (64-bit)
|
True
|
Banner text is partially hidden - Steps to Reproduce:
1. Go to https://www.topcoder.com/
2. Observe text in the banner at the top of the page.
Expected Result:
Text of the banner should be fully displayed.
Actual Result:
Text of the banner isn't fully displayed
Screenshots or screencast:

Device: Lenovo ideapad 700
OS: Windows 10
Browser: Google Chrome
Version 81.0.4044.138 (Official Build) (64-bit)
|
non_process
|
banner text is partially hidden steps to reproduce go to observe text in the banner at the top of the page expected result text of the banner should be fully displayed actual result text of the banner isn t fully displayed screenshots or screencast device lenovo ideapad os windows browser google chrome version official build bit
| 0
|
47,113
| 6,044,392,324
|
IssuesEvent
|
2017-06-12 05:24:33
|
fossasia/susi_android
|
https://api.github.com/repos/fossasia/susi_android
|
closed
|
Issue with alignment of layout in display of maps.
|
design UI Fix
|
**Actual Behaviour**
The layout is not uniform.
**Expected Behaviour**
The layout must be uniform.
**Screenshots of the issue**

**Would you like to work on the issue?**
Yes
|
1.0
|
Issue with alignment of layout in display of maps. - **Actual Behaviour**
The layout is not uniform.
**Expected Behaviour**
The layout must be uniform.
**Screenshots of the issue**

**Would you like to work on the issue?**
Yes
|
non_process
|
issue with alignment of layout in display of maps actual behaviour the layout is not uniform expected behaviour the layout must be uniform screenshots of the issue would you like to work on the issue yes
| 0
|
14,782
| 18,055,073,011
|
IssuesEvent
|
2021-09-20 07:03:05
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Remove is_a link between 'defense response to other organism' and 'immune response'
|
multi-species process
|
Read all comments on this topic in #16298 before commenting. I think this is the immediate course of action agreed upon (in 2018!) on that ticket.
Immediate course of action:
1. Remove is_a link between 'defense response to other organism' and 'immune response'
2. Examine all descendants of DRtOO and determine if they should be placed under 'immune response' (or ideally a more precise term)
- E.g. CRISPR-CAS - **No**
There is a lot more to talk about here, but I suggest limiting this issue to be the immediate action above. We have an issue about CRISPR-CAS #16423 -- the defense response hierarchy seems pretty problematic to me on multiple levels but let's talk about that elsewhere
|
1.0
|
Remove is_a link between 'defense response to other organism' and 'immune response' - Read all comments on this topic in #16298 before commenting. I think this is the immediate course of action agreed upon (in 2018!) on that ticket.
Immediate course of action:
1. Remove is_a link between 'defense response to other organism' and 'immune response'
2. Examine all descendants of DRtOO and determine if they should be placed under 'immune response' (or ideally a more precise term)
- E.g. CRISPR-CAS - **No**
There is a lot more to talk about here, but I suggest limiting this issue to be the immediate action above. We have an issue about CRISPR-CAS #16423 -- the defense response hierarchy seems pretty problematic to me on multiple levels but let's talk about that elsewhere
|
process
|
remove is a link between defense response to other organism and immune response read all comments on this topic in before commenting i think this is the immediate course of action agreed upon in on that ticket immediate course of action remove is a link between defense response to other organism and immune response examine all descendants of drtoo and determine if they should be placed under immune response or ideally a more precise term e g crispr cas no there is a lot more to talk about here but i suggest limiting this issue to be the immediate action above we have an issue about crispr cas the defense response hierarchy seems pretty problematic to me on multiple levels but let s talk about that elsewhere
| 1
|
19,481
| 25,792,864,011
|
IssuesEvent
|
2022-12-10 08:29:11
|
COPIM/open-book-collective
|
https://api.github.com/repos/COPIM/open-book-collective
|
closed
|
Provide information on how initiatives will integrate their outputs with existing library infrastructure
|
question userstory membership management (pillar 4) organisational process
|
As a librarian or institutional decision-maker visiting an open access initiative’s profile page ...
...I want, for OABPs, to be able to integrate open access monographs into our library systems ...
... so that our community can utilize the books (including metadata and content) within our existing systems.
|
1.0
|
Provide information on how initiatives will integrate their outputs with existing library infrastructure - As a librarian or institutional decision-maker visiting an open access initiative’s profile page ...
...I want, for OABPs, to be able to integrate open access monographs into our library systems ...
... so that our community can utilize the books (including metadata and content) within our existing systems.
|
process
|
provide information on how initiatives will integrate their outputs with existing library infrastructure as a librarian or institutional decision maker visiting an open access initiative’s profile page i want for oabps to be able to integrate open access monographs into our library systems so that our community can utilize the books including metadata and content within our existing systems
| 1
|
6,927
| 10,084,677,468
|
IssuesEvent
|
2019-07-25 16:11:42
|
google/nodejs-container-image-builder
|
https://api.github.com/repos/google/nodejs-container-image-builder
|
closed
|
chore(release): proposal for next release
|
release-candidate type: process
|
_:robot: Here's what the next release of **container-image-builder** would look like._
---
## [2.0.0](https://www.github.com/google/nodejs-container-image-builder/compare/v1.1.1...v2.0.0) (2019-05-16)
### Bug Fixes
* fix image.WorkingDir not saving ([#4](https://www.github.com/google/nodejs-container-image-builder/issues/4)) ([61397cb](https://www.github.com/google/nodejs-container-image-builder/commit/61397cb))
* link to oci spec in readme ([f45a318](https://www.github.com/google/nodejs-container-image-builder/commit/f45a318))
* package.json updates ([8a771c2](https://www.github.com/google/nodejs-container-image-builder/commit/8a771c2))
* readme ([315a602](https://www.github.com/google/nodejs-container-image-builder/commit/315a602))
* upgrading walkdir to 0.4.0 to fix error ([3b7d13f](https://www.github.com/google/nodejs-container-image-builder/commit/3b7d13f))
### Features
* adding files. ([4066347](https://www.github.com/google/nodejs-container-image-builder/commit/4066347))
* new pack/addFiles api and lots and lots of docs. ([533e599](https://www.github.com/google/nodejs-container-image-builder/commit/533e599))
* support adding in memory files to layers ([#15](https://www.github.com/google/nodejs-container-image-builder/issues/15)) ([9b716ee](https://www.github.com/google/nodejs-container-image-builder/commit/9b716ee))
### BREAKING CHANGES
* image.addFiles obj keys and values flipped
image.addFiles(obj)
when paths are specified as an object the keys are now `targetDirectory`.
this enables copying the same files into a container to different paths and CustomFiles
* feat: custom files can be added to packs
----------------
* [ ] **Should I create this release for you :robot:?**
|
1.0
|
chore(release): proposal for next release - _:robot: Here's what the next release of **container-image-builder** would look like._
---
## [2.0.0](https://www.github.com/google/nodejs-container-image-builder/compare/v1.1.1...v2.0.0) (2019-05-16)
### Bug Fixes
* fix image.WorkingDir not saving ([#4](https://www.github.com/google/nodejs-container-image-builder/issues/4)) ([61397cb](https://www.github.com/google/nodejs-container-image-builder/commit/61397cb))
* link to oci spec in readme ([f45a318](https://www.github.com/google/nodejs-container-image-builder/commit/f45a318))
* package.json updates ([8a771c2](https://www.github.com/google/nodejs-container-image-builder/commit/8a771c2))
* readme ([315a602](https://www.github.com/google/nodejs-container-image-builder/commit/315a602))
* upgrading walkdir to 0.4.0 to fix error ([3b7d13f](https://www.github.com/google/nodejs-container-image-builder/commit/3b7d13f))
### Features
* adding files. ([4066347](https://www.github.com/google/nodejs-container-image-builder/commit/4066347))
* new pack/addFiles api and lots and lots of docs. ([533e599](https://www.github.com/google/nodejs-container-image-builder/commit/533e599))
* support adding in memory files to layers ([#15](https://www.github.com/google/nodejs-container-image-builder/issues/15)) ([9b716ee](https://www.github.com/google/nodejs-container-image-builder/commit/9b716ee))
### BREAKING CHANGES
* image.addFiles obj keys and values flipped
image.addFiles(obj)
when paths are specified as an object the keys are now `targetDirectory`.
this enables copying the same files into a container to different paths and CustomFiles
* feat: custom files can be added to packs
----------------
* [ ] **Should I create this release for you :robot:?**
|
process
|
chore release proposal for next release robot here s what the next release of container image builder would look like bug fixes fix image workingdir not saving link to oci spec in readme package json updates readme upgrading walkdir to to fix error features adding files new pack addfiles api and lots and lots of docs support adding in memory files to layers breaking changes image addfiles obj keys and values flipped image addfiles obj when paths are specified as an object the keys are now targetdirectory this enables copying the same files into a container to different paths and customfiles feat custom files can be added to packs should i create this release for you robot
| 1
|
3,143
| 6,198,679,551
|
IssuesEvent
|
2017-07-05 19:45:07
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
DNA strand resection involved in replication fork processing
|
cell cycle and DNA processes community curation New term request PomBase
|
In brief:
id: GO:new1
name: DNA strand resection involved in replication fork processing
def: The 5' to 3' exonucleolytic resection of DNA at the site of a stalled replication fork that contributes to replication fork processing.
is_a: GO:0006259 ! DNA metabolic process
relationship: has_part GO:0035312 ! 5'-3' exodeoxyribonuclease activity
relationship: part_of GO:0031297 ! replication fork processing
id: GO:new2
name: negative regulation of DNA strand resection involved in replication fork processing
def: [standard wording]
intersection_of: GO:0008150 ! biological_process
intersection_of: negatively_regulates GO:new1 ! DNA strand resection involved in replication fork processing
Details:
We've had a request from a community curator for a GO BP term for a role S. pombe Rad51 and Rad52 play in replication fork processing. As received, the request is:
term: fork protection from resection
def: A process that prevents extensive resection of newly replicated strands at arrested replication forks.
What appears to be going on, however, is that Rad51 and Rad52 negatively regulate resection - in either deletion mutant, extensive resection takes place, probably mediated by Exo1. I'm not sure the forks collapse, so I think the new term should be part of replication fork processing, instead of a type of replication fork protection.
There isn't a process term for strand resection involved in replication fork processing, but perhaps there should be (Figure 3 suggests that ~100 bp of ssDNA normally forms). If so, we could also add a negative regulation term to use for Rad51 and Rad52.
I'm happy to consider alternative suggestions for these annotations.
The paper is PMID:28475874, and you can see the Canto curation session at https://curation.pombase.org/pombe/curs/92141eb3ba63c234/ro/
|
1.0
|
DNA strand resection involved in replication fork processing - In brief:
id: GO:new1
name: DNA strand resection involved in replication fork processing
def: The 5' to 3' exonucleolytic resection of DNA at the site of a stalled replication fork that contributes to replication fork processing.
is_a: GO:0006259 ! DNA metabolic process
relationship: has_part GO:0035312 ! 5'-3' exodeoxyribonuclease activity
relationship: part_of GO:0031297 ! replication fork processing
id: GO:new2
name: negative regulation of DNA strand resection involved in replication fork processing
def: [standard wording]
intersection_of: GO:0008150 ! biological_process
intersection_of: negatively_regulates GO:new1 ! DNA strand resection involved in replication fork processing
Details:
We've had a request from a community curator for a GO BP term for a role S. pombe Rad51 and Rad52 play in replication fork processing. As received, the request is:
term: fork protection from resection
def: A process that prevents extensive resection of newly replicated strands at arrested replication forks.
What appears to be going on, however, is that Rad51 and Rad52 negatively regulate resection - in either deletion mutant, extensive resection takes place, probably mediated by Exo1. I'm not sure the forks collapse, so I think the new term should be part of replication fork processing, instead of a type of replication fork protection.
There isn't a process term for strand resection involved in replication fork processing, but perhaps there should be (Figure 3 suggests that ~100 bp of ssDNA normally forms). If so, we could also add a negative regulation term to use for Rad51 and Rad52.
I'm happy to consider alternative suggestions for these annotations.
The paper is PMID:28475874, and you can see the Canto curation session at https://curation.pombase.org/pombe/curs/92141eb3ba63c234/ro/
|
process
|
dna strand resection involved in replication fork processing in brief id go name dna strand resection involved in replication fork processing def the to exonucleolytic resection of dna at the site of a stalled replication fork that contributes to replication fork processing is a go dna metabolic process relationship has part go exodeoxyribonuclease activity relationship part of go replication fork processing id go name negative regulation of dna strand resection involved in replication fork processing def intersection of go biological process intersection of negatively regulates go dna strand resection involved in replication fork processing details we ve had a request from a community curator for a go bp term for a role s pombe and play in replication fork processing as received the request is term fork protection from resection def a process that prevents extensive resection of newly replicated strands at arrested replication forks what appears to be going on however is that and negatively regulate resection in either deletion mutant extensive resection takes place probably mediated by i m not sure the forks collapse so i think the new term should be part of replication fork processing instead of a type of replication fork protection there isn t a process term for strand resection involved in replication fork processing but perhaps there should be figure suggests that bp of ssdna normally forms if so we could also add a negative regulation term to use for and i m happy to consider alternative suggestions for these annotations the paper is pmid and you can see the canto curation session at
| 1
|
2,290
| 5,112,618,903
|
IssuesEvent
|
2017-01-06 12:01:51
|
PHPSocialNetwork/phpfastcache
|
https://api.github.com/repos/PHPSocialNetwork/phpfastcache
|
closed
|
Predis no password is set
|
5.0 6.0 [-_-] In Process ~_~ Issue confirmed
|
### Configuration:
PhpFastCache version: ` dev-final `
PHP version: ` 7.0.14 `
Operating system: ` centos `
#### Issue description:
Fatal error: Uncaught Predis\Connection\ConnectionException: `AUTH` failed: ERR Client sent AUTH, but no password is set [tcp://127.0.0.1:6379] in /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/AbstractConnection.php:155 Stack trace: /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/StreamConnection.php(263): Predis\Connection\AbstractConnection->onConnectionError('`AUTH` failed: ...', 0) /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/AbstractConnection.php(180): Predis\Connection\StreamConnection->connect() /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/StreamConnection.php(288): Predis\Connection\AbstractConnection->getResource() /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/StreamConnection.php(394): Predis\Connection\StreamConnection->write('*2\r\n$3\r\nGET\r\n$3...') /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/AbstractConnection.php(110): Predis\Connection\StreamConnection->wri in /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/AbstractConnection.php on line 155
|
1.0
|
Predis no password is set - ### Configuration:
PhpFastCache version: ` dev-final `
PHP version: ` 7.0.14 `
Operating system: ` centos `
#### Issue description:
Fatal error: Uncaught Predis\Connection\ConnectionException: `AUTH` failed: ERR Client sent AUTH, but no password is set [tcp://127.0.0.1:6379] in /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/AbstractConnection.php:155 Stack trace: /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/StreamConnection.php(263): Predis\Connection\AbstractConnection->onConnectionError('`AUTH` failed: ...', 0) /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/AbstractConnection.php(180): Predis\Connection\StreamConnection->connect() /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/StreamConnection.php(288): Predis\Connection\AbstractConnection->getResource() /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/StreamConnection.php(394): Predis\Connection\StreamConnection->write('*2\r\n$3\r\nGET\r\n$3...') /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/AbstractConnection.php(110): Predis\Connection\StreamConnection->wri in /var/www/vhosts/**/httpdocs/vendor/predis/predis/src/Connection/AbstractConnection.php on line 155
|
process
|
predis no password is set configuration phpfastcache version dev final php version operating system centos issue description fatal error uncaught predis connection connectionexception auth failed err client sent auth but no password is set in var www vhosts httpdocs vendor predis predis src connection abstractconnection php stack trace var www vhosts httpdocs vendor predis predis src connection streamconnection php predis connection abstractconnection onconnectionerror auth failed var www vhosts httpdocs vendor predis predis src connection abstractconnection php predis connection streamconnection connect var www vhosts httpdocs vendor predis predis src connection streamconnection php predis connection abstractconnection getresource var www vhosts httpdocs vendor predis predis src connection streamconnection php predis connection streamconnection write r n r nget r n var www vhosts httpdocs vendor predis predis src connection abstractconnection php predis connection streamconnection wri in var www vhosts httpdocs vendor predis predis src connection abstractconnection php on line
| 1
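The `ERR Client sent AUTH, but no password is set` error in the record above occurs when a client sends the AUTH command to a Redis server that has no `requirepass` configured. The usual fix is to omit the credential entirely when no password is set rather than sending an empty one. A minimal sketch of that logic (in Python rather than PHP, with hypothetical parameter names, not PhpFastCache's actual config API):

```python
def build_redis_params(host="127.0.0.1", port=6379, password=""):
    """Build connection parameters, sending AUTH only when a password is set.

    A Redis server without `requirepass` rejects AUTH with
    "ERR Client sent AUTH, but no password is set", so an empty
    password must be left out entirely rather than sent as "".
    """
    params = {"host": host, "port": port}
    if password:  # empty string or None -> do not send AUTH at all
        params["password"] = password
    return params
```

For example, `build_redis_params()` yields only host and port, while `build_redis_params(password="s3cret")` includes the credential.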
|
14,089
| 16,979,997,644
|
IssuesEvent
|
2021-06-30 07:36:17
|
CLIxIndia-Dev/clixoer
|
https://api.github.com/repos/CLIxIndia-Dev/clixoer
|
opened
|
Footer Links with clixoer's template - Contact, Terms of Service, and Privacy Policy, etc
|
backend enhancement frontend process improvement
|
**Is your feature request related to a problem? Please describe.**
The footer links on the CLIxOER website redirect to the CLIxplatform; they need to be updated with a new template in clixoer.
**Describe the solution you'd like**
- The template for each footer link according to the data provided
- The Templates are - Contact, Terms of Service, and Privacy Policy, etc. ( check footer link )
**Describe alternatives you've considered**
- No alternative; we need to update the links with the template.
|
1.0
|
Footer Links with clixoer's template - Contact, Terms of Service, and Privacy Policy, etc - **Is your feature request related to a problem? Please describe.**
The footer links on the CLIxOER website redirect to the CLIxplatform; they need to be updated with a new template in clixoer.
**Describe the solution you'd like**
- The template for each footer link according to the data provided
- The Templates are - Contact, Terms of Service, and Privacy Policy, etc. ( check footer link )
**Describe alternatives you've considered**
- No alternative; we need to update the links with the template.
|
process
|
footer links with clixoer s template contact terms of service and privacy policy etc is your feature request related to a problem please describe the footer links in clixoer website redirect to the clixplatform needs to update it with a new template in clixoer describe the solution you d like the template for each footer link according to the data provided the templates are contact terms of service and privacy policy etc check footer link describe alternatives you ve considered no alternative we need to update the links with the template
| 1
|
15,193
| 18,976,433,732
|
IssuesEvent
|
2021-11-20 03:41:15
|
googleapis/gax-java
|
https://api.github.com/repos/googleapis/gax-java
|
opened
|
Migrate from Gradle to Maven
|
type: process priority: p3
|
The goal of switching to Maven is to align with the rest of the Java client libraries repositories that use Maven. This should enable us to take advantage of more automation tools that have been designed and tested against Maven-based repositories in the googleapis organization. This should also simplify integration with Bazel and eliminate the need for `/dependencies.properties`.
- [ ] Add Maven-based build without impacting Gradle and Bazel builds (#1562)
- [ ] Verify that changing artifact parents is non-breaking
- [ ] Verify that `mvn install` works and then using the newly installed version (x.x.x-SNAPSHOT) directly in any of the existing gapic clients (like java-language) still works.
- [ ] Switch Bazel integration to the new Maven build instead of the existing Gradle one
- [ ] Discontinue reliance on `/dependencies.properties`
- [ ] Ensure that the Maven setup is aligned with other repos such as https://github.com/googleapis/java-core
- [ ] Verify that all functions of the existing Gradle build configuration are covered
- [ ] Switch CI from Gradle to Maven
- [ ] Release using Maven
- [ ] Remove Gradle build configs
|
1.0
|
Migrate from Gradle to Maven - The goal of switching to Maven is to align with the rest of the Java client libraries repositories that use Maven. This should enable us to take advantage of more automation tools that have been designed and tested against Maven-based repositories in the googleapis organization. This should also simplify integration with Bazel and eliminate the need for `/dependencies.properties`.
- [ ] Add Maven-based build without impacting Gradle and Bazel builds (#1562)
- [ ] Verify that changing artifact parents is non-breaking
- [ ] Verify that `mvn install` works and then using the newly installed version (x.x.x-SNAPSHOT) directly in any of the existing gapic clients (like java-language) still works.
- [ ] Switch Bazel integration to the new Maven build instead of the existing Gradle one
- [ ] Discontinue reliance on `/dependencies.properties`
- [ ] Ensure that the Maven setup is aligned with other repos such as https://github.com/googleapis/java-core
- [ ] Verify that all functions of the existing Gradle build configuration are covered
- [ ] Switch CI from Gradle to Maven
- [ ] Release using Maven
- [ ] Remove Gradle build configs
|
process
|
migrate from gradle to maven the goal of switching to maven is to align with the rest of the java client libraries repositories that use maven this should enable us to take advantage of more automation tools that have been designed and tested against maven based repositories in the googleapis organization this should also simplify integration with bazel and eliminate the need for dependencies properties add maven based build without impacting gradle and bazel builds verify that changing artifact parents is non breaking verify that mvn install works and then using the newly installed version x x x snapshot directly in any of the existing gapic clients like java language still works switch bazel integration to the new maven build instead of the existing gradle one discontinue reliance on dependencies properties ensure that the maven setup is aligned with other repos such as verify that all functions of the existing gradle build configuration are covered switch ci from gradle to maven release using maven remove gradle build configs
| 1
|
428,289
| 12,405,963,373
|
IssuesEvent
|
2020-05-21 18:14:53
|
woocommerce/woocommerce-admin
|
https://api.github.com/repos/woocommerce/woocommerce-admin
|
opened
|
Fatal error due to product variation removal
|
[Many] Small [Much] Small [Priority] Low [Type] Bug
|
**Affected customers**
2968918-zen
**Describe the bug**
Customer is getting this error:
> 2020-05-13T00:35:04+00:00 CRITICAL Uncaught Error: Call to a member function get_file_download_path() on boolean in .../wp-content/plugins/woocommerce-admin/src/API/Reports/Downloads/Controller.php:112
> Stack trace:
> #0 .../wp-content/plugins/woocommerce-admin/src/API/Reports/Downloads/Controller.php(59): Automattic\WooCommerce\Admin\API\Reports\Downloads\Controller->prepare_item_for_response(Array, Object(WP_REST_Request))
> #1 .../wp-includes/rest-api/class-wp-rest-server.php(1015): Automattic\WooCommerce\Admin\API\Reports\Downloads\Controller->get_items(Object(WP_REST_Request))
> #2 .../wp-includes/rest-api/class-wp-rest-server.php(342): WP_REST_Server->dispatch(Object(WP_REST_Request))
> #3 .../wp-includes/rest-api.php(306): WP_REST_Server->serve_request('/wc-analytics/r...')
> #4 .../wp-includes/class-wp-hook.php(287): rest_api_loaded(Object(WP))
> #5 .../wp-includes/class-wp-hook.php( in .../wp-content/plugins/woocommerce-admin/src/API/Reports/Downloads/Controller.php on line 112
This is happening here:
https://github.com/woocommerce/woocommerce-admin/blob/v1.1.3/src/API/Reports/Downloads/Controller.php#L112
This is because they had a Simple product, switched it to Variable (with downloads), had sales, customers downloaded the files, and then they switched the product back to Simple again. They are now trying to run download reports, and because the orders contain the removed variation they are getting a fatal error.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a variable product with 1 variation that has a download attached.
1. Add variation to cart, purchase, and then download file.
1. Go back to product, switch to simple.
1. This may not be needed, but I did it... Add download to simple product, add to cart, purchase, and download that file.
1. Go to Analytics > Downloads, and the report will not load correctly and fatal error will be logged.
**Expected behavior**
I am not sure if we can obtain proper data for the reports here, especially since this shouldn't happen, but we could check to make sure we have a `$_product` object before trying to use its methods.
**Screenshots**

Image Link: https://d.pr/i/v5Itcn
|
1.0
|
Fatal error due to product variation removal - **Affected customers**
2968918-zen
**Describe the bug**
Customer is getting this error:
> 2020-05-13T00:35:04+00:00 CRITICAL Uncaught Error: Call to a member function get_file_download_path() on boolean in .../wp-content/plugins/woocommerce-admin/src/API/Reports/Downloads/Controller.php:112
> Stack trace:
> #0 .../wp-content/plugins/woocommerce-admin/src/API/Reports/Downloads/Controller.php(59): Automattic\WooCommerce\Admin\API\Reports\Downloads\Controller->prepare_item_for_response(Array, Object(WP_REST_Request))
> #1 .../wp-includes/rest-api/class-wp-rest-server.php(1015): Automattic\WooCommerce\Admin\API\Reports\Downloads\Controller->get_items(Object(WP_REST_Request))
> #2 .../wp-includes/rest-api/class-wp-rest-server.php(342): WP_REST_Server->dispatch(Object(WP_REST_Request))
> #3 .../wp-includes/rest-api.php(306): WP_REST_Server->serve_request('/wc-analytics/r...')
> #4 .../wp-includes/class-wp-hook.php(287): rest_api_loaded(Object(WP))
> #5 .../wp-includes/class-wp-hook.php( in .../wp-content/plugins/woocommerce-admin/src/API/Reports/Downloads/Controller.php on line 112
This is happening here:
https://github.com/woocommerce/woocommerce-admin/blob/v1.1.3/src/API/Reports/Downloads/Controller.php#L112
This is because they had a Simple product, switched it to Variable (with downloads), had sales, customers downloaded the files, and then they switched the product back to Simple again. They are now trying to run download reports, and because the orders contain the removed variation they are getting a fatal error.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a variable product with 1 variation that has a download attached.
1. Add variation to cart, purchase, and then download file.
1. Go back to product, switch to simple.
1. This may not be needed, but I did it... Add download to simple product, add to cart, purchase, and download that file.
1. Go to Analytics > Downloads, and the report will not load correctly and fatal error will be logged.
**Expected behavior**
I am not sure if we can obtain proper data for the reports here, especially since this shouldn't happen, but we could check to make sure we have a `$_product` object before trying to use its methods.
**Screenshots**

Image Link: https://d.pr/i/v5Itcn
|
non_process
|
fatal error due to product variation removal affected customers zen describe the bug customer is getting this error critical uncaught error call to a member function get file download path on boolean in wp content plugins woocommerce admin src api reports downloads controller php stack trace wp content plugins woocommerce admin src api reports downloads controller php automattic woocommerce admin api reports downloads controller prepare item for response array object wp rest request wp includes rest api class wp rest server php automattic woocommerce admin api reports downloads controller get items object wp rest request wp includes rest api class wp rest server php wp rest server dispatch object wp rest request wp includes rest api php wp rest server serve request wc analytics r wp includes class wp hook php rest api loaded object wp wp includes class wp hook php in wp content plugins woocommerce admin src api reports downloads controller php on line this is happening here this is due to they had a simple product switched it to variable with downloads had sales customers downloaded and then they switched the product back to simple again they are now trying to run download reports and due to the orders have the variation in them they are getting a fatal error to reproduce steps to reproduce the behavior create a variable product with variation that has a download attached add variation to cart purchase and then download file go back to product switch to simple this may not be needed but i did it add download to simple product add to cart purchase and download that file go to analytics downloads and the report will not load correctly and fatal error will be logged expected behavior i am not sure if we can obtain proper data for the reports here especially since this shouldn t happen but we could check to make sure we have a product object before trying to use its methods screenshots image link
| 0
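The fix suggested in the report above — checking that a product object exists before calling its methods — is a plain null guard. A sketch of that guard (illustrative Python standing in for the PHP controller; the class and function names here are hypothetical, not WooCommerce's actual API):

```python
class Product:
    """Minimal stand-in for a WooCommerce product object."""

    def __init__(self, base_path):
        self._base_path = base_path

    def get_file_download_path(self, download_id):
        return f"{self._base_path}/{download_id}"


def safe_download_path(product, download_id):
    """Guard against missing products before calling their methods.

    In the report, the product lookup returns false when a variation
    was deleted (modelled here as None), which previously caused the
    fatal "Call to a member function ... on boolean" error.
    """
    if product is None:
        return None
    return product.get_file_download_path(download_id)
```

With the guard in place, a removed variation simply produces a missing value the report code can skip instead of a fatal error.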
|
420
| 2,852,814,328
|
IssuesEvent
|
2015-06-01 15:25:30
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
[Process] Start method returns null
|
Process
|
In the documentation, [start](http://api.symfony.com/2.7/Symfony/Component/Process/Process.html#method_start) returns the [Process](http://api.symfony.com/2.7/Symfony/Component/Process/Process.html).
However, it returns null instead.
Unit test
````php
public function testStartMethodReturnsProcessInstance()
{
$process = $this->getProcess('echo foo');
$process = $process->start();
$this->assertInstanceOf('Symfony\Component\Process\Process', $process);
}
````
|
1.0
|
[Process] Start method returns null - In the documentation, [start](http://api.symfony.com/2.7/Symfony/Component/Process/Process.html#method_start) returns the [Process](http://api.symfony.com/2.7/Symfony/Component/Process/Process.html).
However, it returns null instead.
Unit test
````php
public function testStartMethodReturnsProcessInstance()
{
$process = $this->getProcess('echo foo');
$process = $process->start();
$this->assertInstanceOf('Symfony\Component\Process\Process', $process);
}
````
|
process
|
start method returns null in the documentation returns the however it returns null instead unit test php public function teststartmethodreturnsprocessinstance process this getprocess echo foo process process start this assertinstanceof symfony component process process process
| 1
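The behaviour the unit test above expects — `start()` returning the Process instance — is the fluent-interface pattern: methods return `$this`/`self` so calls can be chained and reassigned. A minimal sketch of the pattern in Python (a hypothetical class, not Symfony's actual implementation):

```python
class Process:
    """Tiny fluent-interface sketch: start() returns the instance
    itself, so `Process(cmd).start()` still yields a Process."""

    def __init__(self, command):
        self.command = command
        self.running = False

    def start(self):
        self.running = True
        return self  # the documented behaviour: return the instance
```

Returning the instance keeps `$process = $process->start();` working, which is exactly what the failing assertion checks.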
|
102,624
| 8,851,025,953
|
IssuesEvent
|
2019-01-08 14:48:06
|
NativeScript/nativescript-angular
|
https://api.github.com/repos/NativeScript/nativescript-angular
|
closed
|
Rendering issue when using *ngFor inside a TabView, NS 4.0
|
backlog bug os: ios ready for test tab-view
|
**Platform**
iOS
**Description**
Given a template that uses *ngFor inside a tab content:
The first rendering works fine, but if the underlying datasource is replaced rendering the items of the new source doesn't seem to happen.
**Steps to reproduce**
1. Download the playground app linked below
2. tns run ios
3. click the refresh button and the list is empty
Worked fine in NS 3.4
https://play.nativescript.org/?template=play-ng&id=u8JfoL&v=2
|
1.0
|
Rendering issue when using *ngFor inside a TabView, NS 4.0 - **Platform**
iOS
**Description**
Given a template that uses *ngFor inside a tab content:
The first rendering works fine, but if the underlying datasource is replaced rendering the items of the new source doesn't seem to happen.
**Steps to reproduce**
1. Download the playground app linked below
2. tns run ios
3. click the refresh button and the list is empty
Worked fine in NS 3.4
https://play.nativescript.org/?template=play-ng&id=u8JfoL&v=2
|
non_process
|
rendering issue when using ngfor inside a tabview ns platform ios description given a template that uses ngfor inside a tab content the first rendering works fine but if the underlying datasource is replaced rendering the items of the new source doesn t seem to happen steps to reproduce download the playground app linked below tns run ios click the refresh button and the list is empty worked fine in ns
| 0
|
1,779
| 4,511,904,424
|
IssuesEvent
|
2016-09-03 09:30:44
|
sysown/proxysql
|
https://api.github.com/repos/sysown/proxysql
|
closed
|
Specify unit time in stats_mysql_query_digest
|
ADMIN QUERY PROCESSOR STATISTICS
|
```
Create Table: CREATE TABLE stats_mysql_query_digest (
hostgroup INT,
schemaname VARCHAR NOT NULL,
username VARCHAR NOT NULL,
digest VARCHAR NOT NULL,
digest_text VARCHAR NOT NULL,
count_star INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
sum_time INTEGER NOT NULL,
min_time INTEGER NOT NULL,
max_time INTEGER NOT NULL,
PRIMARY KEY(hostgroup, schemaname, username, digest))
```
`sum_time`, `min_time`, and `max_time` need to be descriptive enough to make clear that their values are in microseconds
|
1.0
|
Specify unit time in stats_mysql_query_digest - ```
Create Table: CREATE TABLE stats_mysql_query_digest (
hostgroup INT,
schemaname VARCHAR NOT NULL,
username VARCHAR NOT NULL,
digest VARCHAR NOT NULL,
digest_text VARCHAR NOT NULL,
count_star INTEGER NOT NULL,
first_seen INTEGER NOT NULL,
last_seen INTEGER NOT NULL,
sum_time INTEGER NOT NULL,
min_time INTEGER NOT NULL,
max_time INTEGER NOT NULL,
PRIMARY KEY(hostgroup, schemaname, username, digest))
```
`sum_time`, `min_time`, and `max_time` need to be descriptive enough to make clear that their values are in microseconds
|
process
|
specify unit time in stats mysql query digest create table create table stats mysql query digest hostgroup int schemaname varchar not null username varchar not null digest varchar not null digest text varchar not null count star integer not null first seen integer not null last seen integer not null sum time integer not null min time integer not null max time integer not null primary key hostgroup schemaname username digest sum time min time and max time needs to be descriptive enough to understand it is microseconds
| 1
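Since the timing columns in the table above are stored in microseconds, consumers of `stats_mysql_query_digest` typically convert them for readability. A small helper sketch (assuming a row fetched as a dict with the column names from the schema above):

```python
def digest_times_ms(row):
    """Convert the microsecond timing columns of a
    stats_mysql_query_digest row to milliseconds."""
    return {
        key: row[key] / 1000.0  # microseconds -> milliseconds
        for key in ("sum_time", "min_time", "max_time")
    }
```

For example, a `sum_time` of 5000 (microseconds) converts to 5.0 ms.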
|
9,298
| 27,947,690,457
|
IssuesEvent
|
2023-03-24 05:34:25
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Error in Gov but not Commercial cloud for: "'this.Client.SubscriptionId' cannot be null".
|
automation/svc triaged cxp product-feedback /subsvc Pri2
|
It seems that the: 'Set-AzContext -Subscription <subscription-id>' Cmdlet is causing an error for US Gov subscriptions only, but not in the commercial cloud.
Here is the PowerShell runbook code that was executed both in Azure Commercial and in Azure US Government clouds.

I've tested this runbook in both clouds **(commercial & us gov)** and it works in **commercial** but not **us gov**, even with the Az.Accounts module upgraded to the latest version, which is 2.12.1.
I've also verified that the managed identity for both the **us gov** and **commercial** automation accounts have been granted the _contributor_ role at the subscription level.
This is the result in a **us gov** subscription:

And this is the successful result in a **commercial** subscription:

When testing in the cloud shell, in either **us gov** or **commercial** , it works fine, but the cloud shell PowerShell runtime version is 7.3 whereas the runbook will only use either the 7.1 (preview) or 5.1 runtime. I've also tested this using PowerShell version 5.1 in both clouds and still get the same results anyway.
Thank you.
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 12da5699-0a6a-1ddf-8981-d27bdd743d74
* Version Independent ID: ce0178e4-a9a2-9a1f-1c73-0dd9691d81b3
* Content: [Troubleshoot Azure Automation managed identity issues](https://learn.microsoft.com/en-us/azure/automation/troubleshoot/managed-identity#scenario-runbook-fails-with-thisclientsubscriptionid-cannot-be-null-error-message)
* Content Source: [articles/automation/troubleshoot/managed-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/troubleshoot/managed-identity.md)
* Service: **automation**
* Sub-service: ****
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
1.0
|
Error in Gov but not Commercial cloud for: "'this.Client.SubscriptionId' cannot be null". -
It seems that the: 'Set-AzContext -Subscription <subscription-id>' Cmdlet is causing an error for US Gov subscriptions only, but not in the commercial cloud.
Here is the PowerShell runbook code that was executed both in Azure Commercial and in Azure US Government clouds.

I've tested this runbook in both clouds **(commercial & us gov)** and it works in **commercial** but not **us gov**, even with the Az.Accounts module upgraded to the latest version, which is 2.12.1.
I've also verified that the managed identity for both the **us gov** and **commercial** automation accounts have been granted the _contributor_ role at the subscription level.
This is the result in a **us gov** subscription:

And this is the successful result in a **commercial** subscription:

When testing in the cloud shell, in either **us gov** or **commercial** , it works fine, but the cloud shell PowerShell runtime version is 7.3 whereas the runbook will only use either the 7.1 (preview) or 5.1 runtime. I've also tested this using PowerShell version 5.1 in both clouds and still get the same results anyway.
Thank you.
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 12da5699-0a6a-1ddf-8981-d27bdd743d74
* Version Independent ID: ce0178e4-a9a2-9a1f-1c73-0dd9691d81b3
* Content: [Troubleshoot Azure Automation managed identity issues](https://learn.microsoft.com/en-us/azure/automation/troubleshoot/managed-identity#scenario-runbook-fails-with-thisclientsubscriptionid-cannot-be-null-error-message)
* Content Source: [articles/automation/troubleshoot/managed-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/troubleshoot/managed-identity.md)
* Service: **automation**
* Sub-service: ****
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
non_process
|
error in gov but not commercial cloud for this client subscriptionid cannot be null it seems that the set azcontext subscription cmdlet is causing an error for us gov subscriptions only but not in the commercial cloud here is the powershell runbook code that was executed both in azure commercial and in azure us government clouds i ve tested this runbook in both clouds commercial us gov and it works in commercial but not us gov even with the az accounts module upgraded to the latest version which is i ve also verified that the managed identity for both the us gov and commercial automation accounts have been granted the contributor role at the subscription level this is the result in a us gov subscription and this is the successful result in a commercial subscription when testing in the cloud shell in either us gov or commercial it works fine but the cloud shell powershell runtime version is whereas the runbook will only use either the preview or runtime i ve also tested this using powershell version in both clouds and still get the same results anyway thank you document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service automation sub service github login snehasudhirg microsoft alias sudhirsneha
| 0
|
615,971
| 19,287,125,647
|
IssuesEvent
|
2021-12-11 05:47:22
|
yukiHaga/regex-hunting
|
https://api.github.com/repos/yukiHaga/regex-hunting
|
opened
|
Implement the login feature for the login modal
|
Priority: high Type: new feature
|
## 概要
ログインモーダルを使ってログインできるようにする。
## やること
- [ ] `mkdir src/reducers`を実行して、reducer関数を管理するディレクトリを作成する。
- [ ] `$ touch src/reducers/login.js`を実行して、loginロジックに関するreducer関数を定義するファイルを作成する。
- [ ] `src/reducers/login.js`内にloginロジックに関するinitialState, loginActionTyps, loginReducer関数を定義する。
- [ ] `src/components/LoginDialog.jsx`で、initialState, loginActionTyps, loginReducer関数をインポートしてくる。
- [ ] `src/components/LoginDialog.jsx` コンポーネント内に`const [state, dispatch] = useReducer(loginReducer, initialState);`を書いて、reducer関数を使うstateを定義する。
- [ ] LoginDialogコンポーネント内で、dispatch関数を呼び出す。以下のコードではuseEffect内で呼び出しているが、onSubmit関数内でdispatchを実行する。dispatch → postUserSession → 取得したデータでdispatchを呼び出す。通信に含まれるデータのことを「ペイロードデータ」という。
```
const [state, dispatch] = useReducer(restaurantsReducer, initialState);
useEffect(() => {
dispatch({ type: restaurantsActionTyps.FETCHING });
fetchRestaurants()
.then((data) =>
dispatch({
type: restaurantsActionTyps.FETCH_SUCCESS,
payload: {
restaurants: data.restaurants
}
})
)
}, [])
return (
```
- [ ] Confirm that data is returned and the user is taken to My Page.
- [ ] Add a header to My Page.
- [ ] Confirm that the My Page header shows "Ranking, Logout (tentative)".
- [ ] Pressing the logout button returns to the LP page and discards the session, so the header reverts to its original state.
- [ ] CSRF protection was added on the server side, but it is unclear whether it actually works.
Verify that it does.
- [ ] Verify that the password is properly protected when it is passed from the front end to the server.
## Acceptance criteria
- [ ] Pressing the login button in the login modal reaches the server.
- [ ] The server runs the authentication process.
- [ ] The server returns its result to the front end correctly.
- [ ] The front end navigates based on the server-side result.
- [ ] On successful login, the user is taken to My Page.
- [ ] On failed login, the user is taken to the LP page.
- [ ] Session information is stored in a cookie.
## Concerns
- CSRF protection was added on the server side, but it is unclear whether it actually works.
Verify that it does.
- Verify that the password is properly protected when it is passed from the front end to the server.
## References
None.
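The reducer described in the tasks above could be sketched roughly as follows. This is an illustrative assumption about the eventual shape of `src/reducers/login.js` (the state fields and action names here are invented for the sketch; the project keeps the identifier `loginActionTyps` as written):

```javascript
// Hypothetical sketch of src/reducers/login.js as described in the checklist.
// Field and action names are illustrative assumptions, not the project's code.
const initialState = {
  fetchState: 'INITIAL', // 'INITIAL' | 'FETCHING' | 'OK' | 'ERROR'
  user: null,
  errorMessage: null,
};

const loginActionTyps = {
  FETCHING: 'FETCHING',
  FETCH_SUCCESS: 'FETCH_SUCCESS',
  FETCH_FAILURE: 'FETCH_FAILURE',
};

function loginReducer(state, action) {
  switch (action.type) {
    case loginActionTyps.FETCHING:
      return { ...state, fetchState: 'FETCHING' };
    case loginActionTyps.FETCH_SUCCESS:
      // payload carries the authenticated user returned by the server
      return { ...state, fetchState: 'OK', user: action.payload.user };
    case loginActionTyps.FETCH_FAILURE:
      return { ...state, fetchState: 'ERROR', errorMessage: action.payload.message };
    default:
      throw new Error(`Unknown action: ${action.type}`);
  }
}
```

Inside onSubmit, the flow would dispatch FETCHING, call postUserSession, then dispatch FETCH_SUCCESS or FETCH_FAILURE with the response as the payload.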
|
1.0
|
Implement the login feature of the login modal - ## Overview
Make it possible to log in through the login modal.
## Tasks
- [ ] Run `mkdir src/reducers` to create a directory for managing reducer functions.
- [ ] Run `$ touch src/reducers/login.js` to create the file that defines the reducer function for the login logic.
- [ ] In `src/reducers/login.js`, define the login-related initialState, loginActionTyps, and loginReducer function.
- [ ] In `src/components/LoginDialog.jsx`, import initialState, loginActionTyps, and the loginReducer function.
- [ ] Inside the `src/components/LoginDialog.jsx` component, write `const [state, dispatch] = useReducer(loginReducer, initialState);` to define the state managed by the reducer function.
- [ ] Call the dispatch function inside the LoginDialog component. The code below calls it inside useEffect, but here dispatch should run inside the onSubmit handler: dispatch → postUserSession → dispatch again with the fetched data. The data carried in the request is called the "payload".
```
const [state, dispatch] = useReducer(restaurantsReducer, initialState);
useEffect(() => {
dispatch({ type: restaurantsActionTyps.FETCHING });
fetchRestaurants()
.then((data) =>
dispatch({
type: restaurantsActionTyps.FETCH_SUCCESS,
payload: {
restaurants: data.restaurants
}
})
)
}, [])
return (
```
- [ ] Confirm that data is returned and the user is taken to My Page.
- [ ] Add a header to My Page.
- [ ] Confirm that the My Page header shows "Ranking, Logout (tentative)".
- [ ] Pressing the logout button returns to the LP page and discards the session, so the header reverts to its original state.
- [ ] CSRF protection was added on the server side, but it is unclear whether it actually works.
Verify that it does.
- [ ] Verify that the password is properly protected when it is passed from the front end to the server.
## Acceptance criteria
- [ ] Pressing the login button in the login modal reaches the server.
- [ ] The server runs the authentication process.
- [ ] The server returns its result to the front end correctly.
- [ ] The front end navigates based on the server-side result.
- [ ] On successful login, the user is taken to My Page.
- [ ] On failed login, the user is taken to the LP page.
- [ ] Session information is stored in a cookie.
## Concerns
- CSRF protection was added on the server side, but it is unclear whether it actually works.
Verify that it does.
- Verify that the password is properly protected when it is passed from the front end to the server.
## References
None.
|
non_process
|
ログインモーダルのログイン機能を実装 概要 ログインモーダルを使ってログインできるようにする。 やること mkdir src reducers を実行して、reducer関数を管理するディレクトリを作成する。 touch src reducers login js を実行して、loginロジックに関するreducer関数を定義するファイルを作成する。 src reducers login js 内にloginロジックに関するinitialstate loginactiontyps loginreducer関数を定義する。 src components logindialog jsx で、initialstate loginactiontyps loginreducer関数をインポートしてくる。 src components logindialog jsx コンポーネント内に const usereducer loginreducer initialstate を書いて、reducer関数を使うstateを定義する。 logindialogコンポーネント内で、dispatch関数を呼び出す。以下のコードではuseeffect内で呼び出しているが、onsubmit関数内でdispatchを実行する。dispatch → postusersession → 取得したデータでdispatchを呼び出す。通信に含まれるデータのことを「ペイロードデータ」という。 const usereducer restaurantsreducer initialstate useeffect dispatch type restaurantsactiontyps fetching fetchrestaurants then data dispatch type restaurantsactiontyps fetch success payload restaurants data restaurants return データが返却されてマイページへ行くことを確認する。 マイページにヘッダーを設置する。 マイページのヘッダーが「ランキング ログアウト 暫定 」になっていることを確認する。 ログアウトボタンを押すと、lpページに戻り、セッション情報が破棄される。そのため、ヘッダーが元に戻る。 csrf対策をサーバー側でしたが、ちゃんと機能しているかが分からない。 そこをちゃんと確認する。 フロント側からサーバー側へパスワードを渡すとき、ちゃんとパスワードが隠蔽されているか確認する。 受け入れ条件 ログインモーダルのログインボタンを押すと、サーバーへアクセスできる。 サーバー側で認証処理を実行してくれる。 サーバー側の処理結果を、フロントに正常に返してくれる。 サーバー側の処理結果を元に、フロントエンド側で画面遷移する。 もしログインが成功したなら、マイページへ遷移する。 もしログインが失敗したら、lpページへ遷移する。 cookieにセッション情報が格納されている。 懸念点 csrf対策をサーバー側でしたが、ちゃんと機能しているかが分からない。 そこをちゃんと確認する。 フロント側からサーバー側へパスワードを渡すとき、ちゃんとパスワードが隠蔽されているか確認する。 参考記事 特になし。
| 0
|
124,404
| 4,913,612,200
|
IssuesEvent
|
2016-11-23 13:09:36
|
BinPar/eBooks
|
https://api.github.com/repos/BinPar/eBooks
|
closed
|
Hospital Sant Joan de Deu without remote access to Eureka
|
Eureka Priority: High
|
For the institution http://gestor.medicapanamericana.com/tables/institution.aspx?id=346
The contact at Greendata (the platform this institution uses is theirs) tells us: _
The Panamericana e-books are not displayed through the proxy. Something similar happens to what used to happen with the College of Physiotherapists: when you try to view an e-book through the proxy, the following screen appears (attachment 1)

That is, for proxy access to work, we prepend the following URL:
http://ezproxy-hsjd.greendata.es/login?url=http://www.medicapanamericana.com/visorebookv2/ebook/9788498359015
which, once validated by the proxy, should become
http://www.medicapanamericana.com.ezproxy-hsjd.greendata.es/VisorEbookV2/authentication/Register/9788498359015?demoMode=False
The redirect works, but what does not work is displaying the e-book. This happens with all of the hospital's e-books. The proxy IP is 212.92.58.162
As I said above, I seem to recall that something similar also happened with the physiotherapy colleges and that it was eventually fixed._
They can get into Eureka and open e-books from there, but not title by title directly. Could the
problem be on their side?
Thanks
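The URL transformation the report describes can be sketched as a simple hostname rewrite. This is a simplified illustration of EZproxy-style "proxy by hostname" behavior, not Greendata's actual implementation:

```javascript
// Simplified illustration of EZproxy-style hostname rewriting: the proxy
// suffix is appended to the target host, so the request is routed through
// the proxy domain while the original path and query are preserved.
function proxyRewrite(targetUrl, proxySuffix) {
  const u = new URL(targetUrl);
  u.host = `${u.host}.${proxySuffix}`;
  return u.toString();
}
```

When the rewritten host resolves and authenticates correctly but the e-book viewer still fails, the problem usually lies past the rewrite step (e.g. in how the viewer loads its assets), which matches the symptom described above.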
|
1.0
|
Hospital Sant Joan de Deu without remote access to Eureka - For the institution http://gestor.medicapanamericana.com/tables/institution.aspx?id=346
The contact at Greendata (the platform this institution uses is theirs) tells us: _
The Panamericana e-books are not displayed through the proxy. Something similar happens to what used to happen with the College of Physiotherapists: when you try to view an e-book through the proxy, the following screen appears (attachment 1)

That is, for proxy access to work, we prepend the following URL:
http://ezproxy-hsjd.greendata.es/login?url=http://www.medicapanamericana.com/visorebookv2/ebook/9788498359015
which, once validated by the proxy, should become
http://www.medicapanamericana.com.ezproxy-hsjd.greendata.es/VisorEbookV2/authentication/Register/9788498359015?demoMode=False
The redirect works, but what does not work is displaying the e-book. This happens with all of the hospital's e-books. The proxy IP is 212.92.58.162
As I said above, I seem to recall that something similar also happened with the physiotherapy colleges and that it was eventually fixed._
They can get into Eureka and open e-books from there, but not title by title directly. Could the
problem be on their side?
Thanks
|
non_process
|
hospital sant joan de deu sin acceso remoto a eureka para la institución el responsable de greendata la plataforma que usa esta institución es de ellos nos dice los e books de panamericana no se visualizan a través del proxy pasa algo similar a lo que pasaba con el colegio de fisioterapeutas que es que cuando se quiere consultar el e book a través del proxy aparece la siguiente pantalla es decir para que funcione el acceso al proxy nosotros le añadimos la siguiente url al inicio la cual una vez validada por el proxy se debería convertir en la redirección la hace bien pero lo que no hace bien es mostrar el e book pasa con todos los e books del hospital la ip del proxy es la siguiente creo recordar como decía antes que también pasó algo parecido con los colegios de fisioterapeutas y que al final se pudo arreglar a eureka si pueden entrar y después abrir ebook también pero título a título directamente no podría ser problema de ellos gracias
| 0
|
20,458
| 15,531,381,908
|
IssuesEvent
|
2021-03-13 23:20:39
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
Changing Editor Settings Asset library URLS doesn't update editor
|
bug topic:assetlib topic:editor usability
|
**Godot version:**
3.2.4-rc5
**OS/device including version:**
N/A
**Issue description:**
<!-- What happened, and what was expected. -->
I change asset library urls in editor settings, but the editor dropdown is not updated. This is because of missing functionality in #45202. Restarting the editor updates with new settings, but this should not require a restart.
**Steps to reproduce:**
Initial:


Update settings, for example add new entry or modify current entries:

Editor doesn't update with new ones:

After editor restart, we see our changes:

**Minimal reproduction project:**
New/empty project is all you need.
|
True
|
Changing Editor Settings Asset library URLS doesn't update editor - **Godot version:**
3.2.4-rc5
**OS/device including version:**
N/A
**Issue description:**
<!-- What happened, and what was expected. -->
I change asset library urls in editor settings, but the editor dropdown is not updated. This is because of missing functionality in #45202. Restarting the editor updates with new settings, but this should not require a restart.
**Steps to reproduce:**
Initial:


Update settings, for example add new entry or modify current entries:

Editor doesn't update with new ones:

After editor restart, we see our changes:

**Minimal reproduction project:**
New/empty project is all you need.
|
non_process
|
changing editor settings asset library urls doesn t update editor godot version os device including version n a issue description i change asset library urls in editor settings but the editor dropdown is not updated this is because of missing functionality in restarting the editor updates with new settings but this should not require a restart steps to reproduce initial update settings for example add new entry or modify current entries editor doesn t update with new ones after editor restart we see our changes minimal reproduction project new empty project is all you need
| 0
|
154,641
| 13,562,823,203
|
IssuesEvent
|
2020-09-18 07:33:23
|
blocktobody/vary
|
https://api.github.com/repos/blocktobody/vary
|
opened
|
Things learned while building the guide component
|
documentation
|
- Notion does not yet provide an official API and forbids embedding its public web pages in an iframe
- Decided to use react-notion, a library that renders public Notion pages
- Why SSR is needed
- The fact that an iframe manipulates browser history
|
1.0
|
Things learned while building the guide component - - Notion does not yet provide an official API and forbids embedding its public web pages in an iframe
- Decided to use react-notion, a library that renders public Notion pages
- Why SSR is needed
- The fact that an iframe manipulates browser history
|
non_process
|
가이드 컴포넌트를 만들면서 알게 된 것들 notion은 공식 api를 아직 제공하지 않으며 iframe에서 공개된 웹페이지를 embed하는 것을 금지하고 있음 공개된 노션 웹페이지를 렌더링해주는 react notion이라는 라이브러리를 사용하기로 결정 ssr이 필요한 이유 iframe이 히스토리를 조작한다는 사실
| 0
|
8,870
| 11,964,881,470
|
IssuesEvent
|
2020-04-05 21:14:49
|
arcum-omni/Coquo
|
https://api.github.com/repos/arcum-omni/Coquo
|
closed
|
Setup Actions
|
dev process
|
Set up a .NET Core workflow to ensure the project builds, to prevent avoidable bugs.
|
1.0
|
Setup Actions - Set up a .NET Core workflow to ensure the project builds, to prevent avoidable bugs.
|
process
|
setup actions setup a net core workflow to ensure project builds to prevents avoidable bugs
| 1
|
5,364
| 8,196,130,662
|
IssuesEvent
|
2018-08-31 08:49:50
|
emacs-ess/ESS
|
https://api.github.com/repos/emacs-ess/ESS
|
closed
|
remote ess
|
process:remote
|
On the remote host there are several R installations, so in the `~/.profile` on the remote host I specified an `alias` that points the command `R` to the current version.
If I open a remote file with tramp `/ssh:remote:/my_path/file.R` and then want to run a remote R-session I type `C-RET RET` to open an R session on the remote host. Now an old version of R pops up, which is very annoying.
Everything works as intended if I use `M-x shell` `ssh remote` `R` `ess-remote`, which brings up the current version of R.
The same happens with the remote Julia.
It would be great if the first way respected stuff in the remote `~/.profile`.
|
1.0
|
remote ess - On the remote host there are several R installations, so in the `~/.profile` on the remote host I specified an `alias` that points the command `R` to the current version.
If I open a remote file with tramp `/ssh:remote:/my_path/file.R` and then want to run a remote R-session I type `C-RET RET` to open an R session on the remote host. Now an old version of R pops up, which is very annoying.
Everything works as intended if I use `M-x shell` `ssh remote` `R` `ess-remote`, which brings up the current version of R.
The same happens with the remote Julia.
It would be great if the first way respected stuff in the remote `~/.profile`.
|
process
|
remote ess on the remote host there are several r installations so in the profile on the remote host i specified an alias that points the command r to the current version if i open a remote file with tramp ssh remote my path file r and then want to run a remote r session i type c ret ret to open an r session on the remote host now an old version of r pops up which is very annoying everything works as intended if i use m x shell ssh remote r ess remote which brings up the current version of r the same happens with the remote julia it would be great if the first way respected stuff in the remote profile
| 1
|
398,384
| 27,192,810,525
|
IssuesEvent
|
2023-02-20 00:25:58
|
typescript-eslint/typescript-eslint
|
https://api.github.com/repos/typescript-eslint/typescript-eslint
|
opened
|
Docs: Reword rule formatting docs to allow for partially overlapping rules
|
triage documentation
|
### Before You File a Documentation Request Please Confirm You Have Done The Following...
- [X] I have looked for existing [open or closed documentation requests](https://github.com/typescript-eslint/typescript-eslint/issues?q=is%3Aissue+label%3Adocumentation) that match my proposal.
- [X] I have [read the FAQ](https://typescript-eslint.io/linting/troubleshooting) and my problem is not listed.
### Suggested Changes
We **strongly** recommend users do not use a linter (e.g. ESLint) to do the job of a formatter (e.g. Prettier). Right now, this notice is injected on top of any rule (example: [`@typescript-eslint/padding-line-between-statements`](https://typescript-eslint.io/rules/padding-line-between-statements/)):
> We strongly recommend you do not use this rule or any other formatting linter rules. Use a separate dedicated formatter instead. See [What About Formatting?](https://typescript-eslint.io/linting/troubleshooting/formatting) for more information.
However! Not all ESLint formatting rules are _completely_ in conflict with formatters. For example, `@typescript-eslint/padding-line-between-statements` can be used to enforce blank lines after `}`s, which Prettier doesn't cover (https://github.com/JoshuaKGoldberg/template-typescript-node-package/issues/231 -> https://github.com/JoshuaKGoldberg/template-typescript-node-package/pull/247).
I propose we change the wording slightly to account for this nuance. Maybe...
> We strongly recommend you use a separate dedicated formatter for formatting files. Lint rules are buggier, less comprehensive, and slower for formatting concerns. See [What About Formatting?](https://typescript-eslint.io/linting/troubleshooting/formatting) for more information.
### Affected URL(s)
https://typescript-eslint.io/rules/padding-line-between-statements & co.
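The "partially overlapping" use case could be made concrete with a config fragment along these lines (an illustrative sketch, not a recommended default — it enables the formatting rule only for the one concern Prettier does not cover, blank lines after block-like statements):

```javascript
// Illustrative fragment of the object an .eslintrc.cjs would export:
// the formatting rule is scoped to a single non-Prettier concern.
const eslintConfigFragment = {
  rules: {
    '@typescript-eslint/padding-line-between-statements': [
      'error',
      { blankLine: 'always', prev: 'block-like', next: '*' },
    ],
  },
};
```

Framing the docs around "use a dedicated formatter for formatting" rather than "never enable these rules" leaves room for narrowly scoped configs like this one.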
|
1.0
|
Docs: Reword rule formatting docs to allow for partially overlapping rules - ### Before You File a Documentation Request Please Confirm You Have Done The Following...
- [X] I have looked for existing [open or closed documentation requests](https://github.com/typescript-eslint/typescript-eslint/issues?q=is%3Aissue+label%3Adocumentation) that match my proposal.
- [X] I have [read the FAQ](https://typescript-eslint.io/linting/troubleshooting) and my problem is not listed.
### Suggested Changes
We **strongly** recommend users do not use a linter (e.g. ESLint) to do the job of a formatter (e.g. Prettier). Right now, this notice is injected on top of any rule (example: [`@typescript-eslint/padding-line-between-statements`](https://typescript-eslint.io/rules/padding-line-between-statements/)):
> We strongly recommend you do not use this rule or any other formatting linter rules. Use a separate dedicated formatter instead. See [What About Formatting?](https://typescript-eslint.io/linting/troubleshooting/formatting) for more information.
However! Not all ESLint formatting rules are _completely_ in conflict with formatters. For example, `@typescript-eslint/padding-line-between-statements` can be used to enforce blank lines after `}`s, which Prettier doesn't cover (https://github.com/JoshuaKGoldberg/template-typescript-node-package/issues/231 -> https://github.com/JoshuaKGoldberg/template-typescript-node-package/pull/247).
I propose we change the wording slightly to account for this nuance. Maybe...
> We strongly recommend you use a separate dedicated formatter for formatting files. Lint rules are buggier, less comprehensive, and slower for formatting concerns. See [What About Formatting?](https://typescript-eslint.io/linting/troubleshooting/formatting) for more information.
### Affected URL(s)
https://typescript-eslint.io/rules/padding-line-between-statements & co.
|
non_process
|
docs reword rule formatting docs to allow for partially overlapping rules before you file a documentation request please confirm you have done the following i have looked for existing that match my proposal i have and my problem is not listed suggested changes we strongly recommend users do not use a linter e g eslint to do the job of a formatter e g prettier right now this notice is injected on top of any rule example we strongly recommend you do not use this rule or any other formatting linter rules use a separate dedicated formatter instead see for more information however not all eslint formatting rules are completely in conflict with formatters for example typescript eslint padding line between statements can be used to enforce blank lines after s which prettier doesn t cover i propose we change the wording slightly to account for this nuance maybe we strongly recommend you use a separate dedicated formatter for formatting files lint rules are buggier less comprehensive and slower for formatting concerns see for more information affected url s co
| 0
|
394,592
| 27,033,988,867
|
IssuesEvent
|
2023-02-12 14:58:16
|
involveMINT/iMPublic
|
https://api.github.com/repos/involveMINT/iMPublic
|
opened
|
As a Product Owner, I want to have a visual example of what the developers plan on building, so that I can provide quick feedback before starting development
|
documentation L T-Shirt Must Have
|
*Conversation:* Developers are responsible for creating wireframes to demonstrate the idea of what will be created during the next few sprints. Can do both low-fidelity (paper) and high-fidelity (Figma) wireframes. The wireframes should also be focused on the activity feed and how it will look. Daniel may have some existing wireframes for reference in the Figma folder.
*Confirmation:*
- Develop a series of wireframes
- Ask Daniel for feedback and approval during Wednesday meetings
|
1.0
|
As a Product Owner, I want to have a visual example of what the developers plan on building, so that I can provide quick feedback before starting development - *Conversation:* Developers are responsible for creating wireframes to demonstrate the idea of what will be created during the next few sprints. Can do both low-fidelity (paper) and high-fidelity (Figma) wireframes. The wireframes should also be focused on the activity feed and how it will look. Daniel may have some existing wireframes for reference in the Figma folder.
*Confirmation:*
- Develop a series of wireframes
- Ask Daniel for feedback and approval during Wednesday meetings
|
non_process
|
as a product owner i want to have a visual example of what the developers plan on building so that i can provide quick feedback before starting development conversation developers are responsible for creating wireframes to demonstrate the idea of what will be created during the next few sprints can do both low fidelity paper and high fidelity figma wireframes the wireframes should also be focused on the activity feed and how it will look daniel may have some existing wireframes for reference in the figma folder confirmation develop a series of wireframes ask daniel for feedback and approval during wednesday meetings
| 0
|
31,994
| 26,338,103,916
|
IssuesEvent
|
2023-01-10 15:42:55
|
OpenLiberty/openliberty.io
|
https://api.github.com/repos/OpenLiberty/openliberty.io
|
closed
|
Upgrade Liberty version monthly instead of quarterly
|
infrastructure
|
By default, openliberty.io is running on the Liberty `quarterly` release from the buildpack. Consider enabling the usage of the Liberty `monthly` release.
To start using the `monthly` release...
```
To use the V19.0.01 Liberty monthly release, you must set the following environment variables:
JBP_CONFIG_LIBERTY = 'version: +'
IBM_LIBERTY_MONTHLY = true
```
19001 buildpack announcement
https://console.bluemix.net/status/notification/d5ebe0fb74647bcf134512e304e75588
|
1.0
|
Upgrade Liberty version monthly instead of quarterly - By default, openliberty.io is running on the Liberty `quarterly` release from the buildpack. Consider enabling the usage of the Liberty `monthly` release.
To start using the `monthly` release...
```
To use the V19.0.01 Liberty monthly release, you must set the following environment variables:
JBP_CONFIG_LIBERTY = 'version: +'
IBM_LIBERTY_MONTHLY = true
```
19001 buildpack announcement
https://console.bluemix.net/status/notification/d5ebe0fb74647bcf134512e304e75588
|
non_process
|
upgrade liberty version monthly instead of quarterly by default openliberty io is running on the liberty quarterly release from the buildpack consider enabling the usage of the liberty monthly release to start using the monthly release to use the liberty monthly release you must set the following environment variables jbp config liberty version ibm liberty monthly true buildpack announcement
| 0
|
124,220
| 4,893,629,441
|
IssuesEvent
|
2016-11-19 00:06:11
|
elementary/houston
|
https://api.github.com/repos/elementary/houston
|
opened
|
Move flightcheck process around to build the package first
|
Priority Low Subject Flightcheck
|
Right now we expect a very strict repository tree, as outlined in the elementary developer docs. This leads to some problems with being able to build other languages. For instance, Node has no official tree for how to form debian packages, so searching for icons would be impossible. To accommodate this, we should build the debian package first, then search for the files. This will help make implementing other build types besides the standard elementary tree easier as all the files should be in the same (or similar) spots in the debian package.
|
1.0
|
Move flightcheck process around to build the package first - Right now we expect a very strict repository tree, as outlined in the elementary developer docs. This leads to some problems with being able to build other languages. For instance, Node has no official tree for how to form debian packages, so searching for icons would be impossible. To accommodate this, we should build the debian package first, then search for the files. This will help make implementing other build types besides the standard elementary tree easier as all the files should be in the same (or similar) spots in the debian package.
|
non_process
|
move flightcheck process around to build the package first right now we expect a very strict repository tree as outlined in the elementary developer docs this leads to some problems with being able to build other languages for instance node has no official tree for how to form debian packages so searching for icons would be impossible to accommodate this we should build the debian package first then search for the files this will help make implimenting other build types besides the standard elementary tree easier as all the files should be in the same or similar spots in the debian package
| 0
|
13,388
| 8,894,456,034
|
IssuesEvent
|
2019-01-16 04:13:26
|
kyb3r/modmail
|
https://api.github.com/repos/kyb3r/modmail
|
opened
|
Python 3.7.1 Heroku Runtime
|
security
|
[Heroku supports](https://devcenter.heroku.com/articles/python-support#supported-runtimes) the Python 3.7.1 runtime environment. As most changes from 3.7.0 -> 3.7.1 are [security-related](https://www.python.org/downloads/release/python-371/), there shouldn't be a problem upgrading to the Python 3.7.1 Heroku runtime.
|
True
|
Python 3.7.1 Heroku Runtime - [Heroku supports](https://devcenter.heroku.com/articles/python-support#supported-runtimes) the Python 3.7.1 runtime environment. As most changes from 3.7.0 -> 3.7.1 are [security-related](https://www.python.org/downloads/release/python-371/), there shouldn't be a problem upgrading to the Python 3.7.1 Heroku runtime.
|
non_process
|
python heroku runtime the python runtime environment as most changes from are there shouldn t be a problem upgrading to the python heroku runtime
| 0
|
11,958
| 14,726,071,070
|
IssuesEvent
|
2021-01-06 06:07:21
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Last Rate Change Date Tracking
|
anc-core anp-1.5 ant-child/secondary ant-feature grt-ui processes
|
In GitLab by @kdjstudios on Oct 26, 2016, 15:38
We need to log the date of the last change and make it visible.
Source:
Crown asked about a report that shows the last date the rates were raised/changed for clients.
Currently we do not have anything for this, but it can be added to the reports queue.
|
1.0
|
Last Rate Change Date Tracking - In GitLab by @kdjstudios on Oct 26, 2016, 15:38
We need to log the date of the last change and make it visible.
Source:
Crown asked about a report that shows the last date the rates were raised/changed for clients.
Currently we do not have anything for this, but it can be added to the reports queue.
|
process
|
last rate change date tracking in gitlab by kdjstudios on oct we need to log the date of the last change and make it visible source crown asked about a report that shows the last date the rates were raised changed for clients currently we do not have anything for this but it can be added to the reports queue
| 1
|
147,768
| 13,214,229,162
|
IssuesEvent
|
2020-08-16 16:43:02
|
chrispliakos-gr/Hyperskill_academy_projects
|
https://api.github.com/repos/chrispliakos-gr/Hyperskill_academy_projects
|
closed
|
Rock paper scissors
|
documentation enhancement
|
- Update readme file
- add the script
- explain the features of playing more than the classic game
|
1.0
|
Rock paper scissors - - Update readme file
- add the script
- explain the features of playing more than the classic game
|
non_process
|
rock paper scissors update readme file add the script explain the features of playing more than the classic game
| 0
|
21,828
| 11,660,521,193
|
IssuesEvent
|
2020-03-03 03:37:37
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
opened
|
VZE: Create user interface for improving geocode addresses and reattempt geocode
|
Product: Vision Zero Crash Data System Project: Vision Zero Crash Data System Service: Dev Type: Enhancement Workgroup: VZ migrated
|
Need to provide a way for the user to attempt to improve the address information and geocode the crash location again.
This could include:
- Modal in Crash view
- Text fields that populate current address information and allow the user to update what info is sent to geocoder
- Show match quality of Here result (convert match quality to percentage)
- Incorporate the canonical street names as suggestions or as reference
*Migrated from [atd-vz-data #479](https://github.com/cityofaustin/atd-vz-data/issues/479)*
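The "convert match quality to percentage" item above could be as simple as the sketch below. The input shape is an assumption about the Here geocoder response (a fractional score in [0, 1]), not the actual API contract:

```javascript
// Convert a Here-style match quality fraction (assumed to be in [0, 1])
// to a rounded percentage string for display in the crash-view modal.
function matchQualityToPercent(matchQuality) {
  const clamped = Math.min(1, Math.max(0, matchQuality));
  return `${Math.round(clamped * 100)}%`;
}
```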
|
1.0
|
VZE: Create user interface for improving geocode addresses and reattempt geocode - Need to provide a way for the user to attempt to improve the address information and geocode the crash location again.
This could include:
- Modal in Crash view
- Text fields that populate current address information and allow the user to update what info is sent to geocoder
- Show match quality of Here result (convert match quality to percentage)
- Incorporate the canonical street names as suggestions or as reference
*Migrated from [atd-vz-data #479](https://github.com/cityofaustin/atd-vz-data/issues/479)*
|
non_process
|
vze create user interface for improving geocode addresses and reattempt geocode need to provide a way for the user to attempt to improve the address information and geocode the crash location again this could include modal in crash view text fields that populate current address information and allow the user to update what info is sent to geocoder show match quality of here result convert match quality to percentage incorporate the canonical street names as suggestions or as reference migrated from
| 0
|
72,060
| 15,210,704,116
|
IssuesEvent
|
2021-02-17 07:56:48
|
YJSoft/generator-xpressengine1
|
https://api.github.com/repos/YJSoft/generator-xpressengine1
|
opened
|
CVE-2018-3721 (Medium) detected in lodash-2.4.2.tgz
|
security vulnerability
|
## CVE-2018-3721 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: generator-xpressengine1/package.json</p>
<p>Path to vulnerable library: generator-xpressengine1/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- yeoman-generator-0.17.7.tgz (Root Library)
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/YJSoft/generator-xpressengine1/commit/f1011c2c99085a741d1c0ddbb5dd8b10272ca9b1">f1011c2c99085a741d1c0ddbb5dd8b10272ca9b1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721>CVE-2018-3721</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-3721">https://nvd.nist.gov/vuln/detail/CVE-2018-3721</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: 4.17.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
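The class of bug this CVE describes can be reproduced with a naive recursive merge. The sketch below is a deliberately vulnerable illustration of the Modification of Assumed-Immutable Data pattern, not lodash's actual implementation:

```javascript
// Deliberately vulnerable deep merge illustrating prototype pollution:
// a "__proto__" key in attacker-controlled input walks up to
// Object.prototype, adding a property visible on ALL objects.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === 'object' && source[key] !== null) {
      // Accessing target["__proto__"] here yields Object.prototype,
      // so the recursion writes directly into the shared prototype.
      if (typeof target[key] !== 'object' || target[key] === null) target[key] = {};
      naiveMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}
```

Fixed versions of lodash (4.17.5 and later, per the advisory above) guard against merging such keys; a safe merge should skip `__proto__`, `constructor`, and `prototype` keys entirely.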
|
True
|
CVE-2018-3721 (Medium) detected in lodash-2.4.2.tgz - ## CVE-2018-3721 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: generator-xpressengine1/package.json</p>
<p>Path to vulnerable library: generator-xpressengine1/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- yeoman-generator-0.17.7.tgz (Root Library)
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/YJSoft/generator-xpressengine1/commit/f1011c2c99085a741d1c0ddbb5dd8b10272ca9b1">f1011c2c99085a741d1c0ddbb5dd8b10272ca9b1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721>CVE-2018-3721</a></p>
</p>
</details>
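The MAID behavior described above can be sketched without lodash at all. The `naiveMerge` below is a hypothetical stand-in for the kind of recursive merge fixed in 4.17.5, not lodash's actual source:

```javascript
// Deliberately naive deep merge mirroring the flaw class described above:
// it copies every own key of `source`, including "__proto__", with no filtering.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value !== null && typeof value === "object") {
      if (target[key] === null || typeof target[key] !== "object") {
        target[key] = {};
      }
      // For key "__proto__", target[key] resolves to Object.prototype,
      // so the recursion writes attacker keys onto the global prototype.
      naiveMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse keeps "__proto__" as an ordinary own property of the payload...
const payload = JSON.parse('{"__proto__": {"polluted": true}}');

// ...so merging it modifies Object.prototype rather than the empty target.
naiveMerge({}, payload);

// Every plain object in the process now appears to carry the attacker's property.
console.log({}.polluted); // true
```

A merge that skips or own-defines `__proto__` keys (as the fixed versions do) leaves `Object.prototype` untouched.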
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-3721">https://nvd.nist.gov/vuln/detail/CVE-2018-3721</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: 4.17.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz a utility library delivering consistency customization performance extras library home page a href path to dependency file generator package json path to vulnerable library generator node modules lodash package json dependency hierarchy yeoman generator tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash node module before suffers from a modification of assumed immutable data maid vulnerability via defaultsdeep merge and mergewith functions which allows a malicious user to modify the prototype of object via proto causing the addition or modification of an existing property that will exist on all objects publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
319,361
| 9,742,783,544
|
IssuesEvent
|
2019-06-02 20:03:17
|
CodeForFoco/volunteercore
|
https://api.github.com/repos/CodeForFoco/volunteercore
|
closed
|
Revert back to JWT auth with refresh token and http-only cookie
|
api priority
|
The API token auth is currently vulnerable to CSRF attacks. I attempted to implement this with Flask-JWT-Extended but have not been able to get it working. See more details in:
- https://flask-jwt-extended.readthedocs.io/en/latest/tokens_in_cookies.html
- http://www.redotheweb.com/2015/11/09/api-security.html
|
1.0
|
Revert back to JWT auth with refresh token and http-only cookie - The API token auth is currently vulnerable to CSRF attacks. I attempted to implement this with Flask-JWT-Extended but have not been able to get it working. See more details in:
- https://flask-jwt-extended.readthedocs.io/en/latest/tokens_in_cookies.html
- http://www.redotheweb.com/2015/11/09/api-security.html
|
non_process
|
revert back to jwt auth with refresh token and http only cookie the api token auth is currently vulnerable to csrf attacks i attempted to implement this with flask jwt extended but have not been able to get it working see more details in
| 0
|
16,489
| 21,446,435,361
|
IssuesEvent
|
2022-04-25 06:57:48
|
masashi-hatano/trajectories_prediction
|
https://api.github.com/repos/masashi-hatano/trajectories_prediction
|
closed
|
Creating input data with the terrain information
|
preprocessing
|
The frequency of applying semantic segmentation needs to be decided.
|
1.0
|
Creating input data with the terrain information - The frequency of applying semantic segmentation needs to be decided.
|
process
|
creating input data with the terrain information the frequency of applying semantic segmentation needs to be decided
| 1
|
21,907
| 30,389,055,998
|
IssuesEvent
|
2023-07-13 05:03:31
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
"Load layer into project" algorithm crashes QGIS 3.32 and master if layer already loaded
|
Processing Bug Crash/Data Corruption
|
### What is the bug or the crash?
"Load layer into project" algorithm crashes QGIS 3.32 and master if the layer is already loaded
I found this while trying to debug something more complicated.
It is OK in 3.30
[testload.zip](https://github.com/qgis/QGIS/files/11845961/testload.zip)
### Steps to reproduce the issue
Try running the attached model to load a layer, and then load it again
### Versions
```
QGIS version
3.32.0-Lima
QGIS code revision
311a8cb8a65
Qt version
5.15.3
Python version
3.9.5
GDAL/OGR version
3.7.0
PROJ version
9.2.1
EPSG Registry database version
v10.088 (2023-05-13)
GEOS version
3.11.2-CAPI-1.17.2
SQLite version
3.41.1
PDAL version
2.5.3
PostgreSQL client version
unknown
SpatiaLite version
5.0.1
QWT version
6.1.6
QScintilla2 version
2.13.1
OS version
Windows 10 Version 2009
Active Python plugins
annotationManager
0.5
annotation_labels
1.0.1
AnotherDXF2Shape
1.2.7
changeDataSource
3.1
civilplan
1.0
coveragebuilder
version 0.5.0
Equal_area_slope_QGIS_Plugin
0.1
file_management
0.1
FlowEstimator
0.21
GeoCoding
2.19
geometry_paster
0.2
geoprocAlgos
3.30
group_transparency
0.2
joinmultiplelines
Version 0.4.1
LayerBoard
1.0.1
layout_panel-main
0.3
MemoryLayerSaver
5.0.0
merge_selected_features
0.1
nominatim
1.4.5
pathfinder
version 0.4.2
plugin_reloader
0.9.3
precisioncursor4qgis-main
0.2E
QCopycanvas
0.7
qgis_resource_sharing
1.0.0
QlrBrowser
3.0.0
quick_map_services
0.19.33
reveal_address_plugin
1.2
segment_reshape_plugin
0.1.4
StyleLoadSave
1.0
switch_active_layer
0.1
valuetool
3.0.15
workbench
0.0.4
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.6
processing
2.12.99
```
```
QGIS version
3.33.0-Master
QGIS code revision
7df22c4d1e
Qt version
5.15.3
Python version
3.9.5
Compiled against GDAL/OGR
3.8.0dev-ecefae5921
Running against GDAL/OGR
3.8.0dev-360a9aea02
PROJ version
9.2.1
EPSG Registry database version
v10.088 (2023-05-13)
GEOS version
3.11.2-CAPI-1.17.2
SQLite version
3.41.1
PDAL version
2.5.3
PostgreSQL client version
15.2
SpatiaLite version
5.0.1
QWT version
6.1.6
QScintilla2 version
2.13.1
OS version
Windows 10 Version 2009
This copy of QGIS writes debugging output.
Active Python plugins
annotationManager
0.5
annotation_labels
1.0.1
AnotherDXF2Shape
1.2.7
changeDataSource
3.1
civilplan
1.0
coveragebuilder
version 0.5.0
Equal_area_slope_QGIS_Plugin
0.1
file_management
0.1
FlowEstimator
0.21
GeoCoding
2.19
geometry_paster
0.2
geoprocAlgos
3.30
group_transparency
0.2
joinmultiplelines
Version 0.4.1
LayerBoard
1.0.1
layout_panel-main
0.3
MemoryLayerSaver
5.0.0
merge_selected_features
0.1
nominatim
1.4.5
pathfinder
version 0.4.2
plugin_reloader
0.9.3
precisioncursor4qgis-main
0.2E
QCopycanvas
0.7
qgis_resource_sharing
1.0.0
QlrBrowser
3.0.0
quick_map_services
0.19.33
reveal_address_plugin
1.2
segment_reshape_plugin
0.1.4
StyleLoadSave
1.0
switch_active_layer
0.1
valuetool
3.0.15
workbench
0.0.4
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.6
processing
2.12.99
```
### Supported QGIS version
- [X] I'm running a supported QGIS version according to [the roadmap](https://www.qgis.org/en/site/getinvolved/development/roadmap.html#release-schedule).
### New profile
- [X] I tried with a new [QGIS profile](https://docs.qgis.org/latest/en/docs/user_manual/introduction/qgis_configuration.html#working-with-user-profiles)
### Additional context
I was expecting a more informative stack trace, but this is what I get:
```
Python Stack Trace
Windows fatal exception: access violation
Current thread 0x00005e10 (most recent call first):
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\gui\Postprocessing.py", line 132 in create_layer_tree_layer
layer_tree_layer = QgsLayerTreeLayer(layer)
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\gui\Postprocessing.py", line 236 in handleAlgorithmResults
layer_tree_layer = create_layer_tree_layer(map_layer, details)
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\gui\AlgorithmDialog.py", line 355 in finish
if not handleAlgorithmResults(
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\gui\AlgorithmDialog.py", line 308 in on_complete
self.finish(ok, results, self.context, self.feedback, in_place=self.in_place)
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\ProcessingPlugin.py", line 432 in executeAlgorithm
dlg.exec_()
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\gui\ProcessingToolbox.py", line 232 in executeAlgorithm
self.executeWithGui.emit(alg.id(), self, self.in_place_mode, False)
Stack Trace
QObject::thread :
QgsMapLayer::name qgsmaplayer.cpp:204
QgsLayerTreeLayer::QgsLayerTreeLayer qgslayertreelayer.cpp:27
sipQgsLayerTreeLayer::sipQgsLayerTreeLayer sip_corepart12.cpp:210850
init_type_QgsLayerTreeLayer sip_corepart12.cpp:212829
PyEval_EvalFrameDefault :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyFunction_Vectorcall :
PyVectorcall_Call :
PyObject_Call :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
QObject::qt_static_metacall :
QgsProcessingAlgRunnerTask::executed moc_qgsprocessingalgrunnertask.cpp:135
QgsProcessingAlgRunnerTask::finished qgsprocessingalgrunnertask.cpp:89
sipQgsProcessingAlgRunnerTask::finished sip_corepart9.cpp:36717
QgsTaskManager::taskStatusChanged qgstaskmanager.cpp:732
QtPrivate::FunctorCall,QtPrivate::List,void,void (__cdecl QgsTaskManager::*)(int)>::call qobjectdefs_impl.h:152
QtPrivate::FunctionPointer::call,void> qobjectdefs_impl.h:186
QtPrivate::QSlotObject,void>::impl qobjectdefs_impl.h:419
QObject::qt_static_metacall :
QgsTask::statusChanged moc_qgstaskmanager.cpp:228
QgsTask::processSubTasksForCompletion qgstaskmanager.cpp:290
QgsTask::qt_static_metacall moc_qgstaskmanager.cpp:132
QObject::event :
sipQgsProcessingAlgRunnerTask::event sip_corepart9.cpp:36667
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify qgsapplication.cpp:580
QCoreApplication::notifyInternal2 :
QCoreApplicationPrivate::sendPostedEvents :
qt_plugin_query_metadata :
QEventDispatcherWin32::processEvents :
qt_plugin_query_metadata :
QEventLoop::exec :
QDialog::exec :
PyInit_QtWidgets :
PyArg_ParseTuple_SizeT :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyFunction_Vectorcall :
PyFloat_FromDouble :
PyVectorcall_Call :
PyObject_Call :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
QObject::qt_static_metacall :
QMetaObject::activate :
PyInit_QtCore :
PyInit_QtCore :
PyType_GenericNew :
PyEval_EvalFrameDefault :
PyFunction_Vectorcall :
PyFloat_FromDouble :
PyVectorcall_Call :
PyObject_Call :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
QObject::qt_static_metacall :
QAbstractItemView::doubleClicked :
QTreeView::mouseDoubleClickEvent :
sipQgsProcessingToolboxTreeView::mouseDoubleClickEvent sip_guipart5.cpp:30088
QWidget::event :
QFrame::event :
QAbstractItemView::viewportEvent :
sipQgsProcessingToolboxTreeView::viewportEvent sip_guipart5.cpp:30626
QCoreApplicationPrivate::sendThroughObjectEventFilters :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify qgsapplication.cpp:580
QCoreApplication::notifyInternal2 :
QApplicationPrivate::sendMouseEvent :
QSizePolicy::QSizePolicy :
QSizePolicy::QSizePolicy :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify qgsapplication.cpp:580
QCoreApplication::notifyInternal2 :
QGuiApplicationPrivate::processMouseEvent :
QWindowSystemInterface::sendWindowSystemEvents :
QEventDispatcherWin32::processEvents :
qt_plugin_query_metadata :
QEventLoop::exec :
QCoreApplication::exec :
main main.cpp:1817
WinMain mainwin.cpp:214
__scrt_common_main_seh exe_common.inl:288
BaseThreadInitThunk :
RtlUserThreadStart :
QGIS Info
QGIS Version: 3.33.0-Master
QGIS code revision: 7df22c4d1e
Compiled against Qt: 5.15.3
Running against Qt: 5.15.3
Compiled against GDAL: 3.8.0dev-ecefae5921
Running against GDAL: 3.8.0dev-360a9aea02
System Info
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.19045
|
1.0
|
"Load layer into project" algorithm crashes QGIS 3.32 and master if layer already loaded - ### What is the bug or the crash?
"Load layer into project" algorithm crashes QGIS 3.32 and master if the layer is already loaded
I found this while trying to debug something more complicated.
It is OK in 3.30
[testload.zip](https://github.com/qgis/QGIS/files/11845961/testload.zip)
### Steps to reproduce the issue
Try running the attached model to load a layer, and then load it again
### Versions
```
QGIS version
3.32.0-Lima
QGIS code revision
311a8cb8a65
Qt version
5.15.3
Python version
3.9.5
GDAL/OGR version
3.7.0
PROJ version
9.2.1
EPSG Registry database version
v10.088 (2023-05-13)
GEOS version
3.11.2-CAPI-1.17.2
SQLite version
3.41.1
PDAL version
2.5.3
PostgreSQL client version
unknown
SpatiaLite version
5.0.1
QWT version
6.1.6
QScintilla2 version
2.13.1
OS version
Windows 10 Version 2009
Active Python plugins
annotationManager
0.5
annotation_labels
1.0.1
AnotherDXF2Shape
1.2.7
changeDataSource
3.1
civilplan
1.0
coveragebuilder
version 0.5.0
Equal_area_slope_QGIS_Plugin
0.1
file_management
0.1
FlowEstimator
0.21
GeoCoding
2.19
geometry_paster
0.2
geoprocAlgos
3.30
group_transparency
0.2
joinmultiplelines
Version 0.4.1
LayerBoard
1.0.1
layout_panel-main
0.3
MemoryLayerSaver
5.0.0
merge_selected_features
0.1
nominatim
1.4.5
pathfinder
version 0.4.2
plugin_reloader
0.9.3
precisioncursor4qgis-main
0.2E
QCopycanvas
0.7
qgis_resource_sharing
1.0.0
QlrBrowser
3.0.0
quick_map_services
0.19.33
reveal_address_plugin
1.2
segment_reshape_plugin
0.1.4
StyleLoadSave
1.0
switch_active_layer
0.1
valuetool
3.0.15
workbench
0.0.4
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.6
processing
2.12.99
```
```
QGIS version
3.33.0-Master
QGIS code revision
7df22c4d1e
Qt version
5.15.3
Python version
3.9.5
Compiled against GDAL/OGR
3.8.0dev-ecefae5921
Running against GDAL/OGR
3.8.0dev-360a9aea02
PROJ version
9.2.1
EPSG Registry database version
v10.088 (2023-05-13)
GEOS version
3.11.2-CAPI-1.17.2
SQLite version
3.41.1
PDAL version
2.5.3
PostgreSQL client version
15.2
SpatiaLite version
5.0.1
QWT version
6.1.6
QScintilla2 version
2.13.1
OS version
Windows 10 Version 2009
This copy of QGIS writes debugging output.
Active Python plugins
annotationManager
0.5
annotation_labels
1.0.1
AnotherDXF2Shape
1.2.7
changeDataSource
3.1
civilplan
1.0
coveragebuilder
version 0.5.0
Equal_area_slope_QGIS_Plugin
0.1
file_management
0.1
FlowEstimator
0.21
GeoCoding
2.19
geometry_paster
0.2
geoprocAlgos
3.30
group_transparency
0.2
joinmultiplelines
Version 0.4.1
LayerBoard
1.0.1
layout_panel-main
0.3
MemoryLayerSaver
5.0.0
merge_selected_features
0.1
nominatim
1.4.5
pathfinder
version 0.4.2
plugin_reloader
0.9.3
precisioncursor4qgis-main
0.2E
QCopycanvas
0.7
qgis_resource_sharing
1.0.0
QlrBrowser
3.0.0
quick_map_services
0.19.33
reveal_address_plugin
1.2
segment_reshape_plugin
0.1.4
StyleLoadSave
1.0
switch_active_layer
0.1
valuetool
3.0.15
workbench
0.0.4
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.6
processing
2.12.99
```
### Supported QGIS version
- [X] I'm running a supported QGIS version according to [the roadmap](https://www.qgis.org/en/site/getinvolved/development/roadmap.html#release-schedule).
### New profile
- [X] I tried with a new [QGIS profile](https://docs.qgis.org/latest/en/docs/user_manual/introduction/qgis_configuration.html#working-with-user-profiles)
### Additional context
I was expecting a more informative stack trace, but this is what I get:
```
Python Stack Trace
Windows fatal exception: access violation
Current thread 0x00005e10 (most recent call first):
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\gui\Postprocessing.py", line 132 in create_layer_tree_layer
layer_tree_layer = QgsLayerTreeLayer(layer)
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\gui\Postprocessing.py", line 236 in handleAlgorithmResults
layer_tree_layer = create_layer_tree_layer(map_layer, details)
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\gui\AlgorithmDialog.py", line 355 in finish
if not handleAlgorithmResults(
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\gui\AlgorithmDialog.py", line 308 in on_complete
self.finish(ok, results, self.context, self.feedback, in_place=self.in_place)
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\ProcessingPlugin.py", line 432 in executeAlgorithm
dlg.exec_()
File "D:\OSGeo4W/apps/qgis-dev/./python/plugins\processing\gui\ProcessingToolbox.py", line 232 in executeAlgorithm
self.executeWithGui.emit(alg.id(), self, self.in_place_mode, False)
Stack Trace
QObject::thread :
QgsMapLayer::name qgsmaplayer.cpp:204
QgsLayerTreeLayer::QgsLayerTreeLayer qgslayertreelayer.cpp:27
sipQgsLayerTreeLayer::sipQgsLayerTreeLayer sip_corepart12.cpp:210850
init_type_QgsLayerTreeLayer sip_corepart12.cpp:212829
PyEval_EvalFrameDefault :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyFunction_Vectorcall :
PyVectorcall_Call :
PyObject_Call :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
QObject::qt_static_metacall :
QgsProcessingAlgRunnerTask::executed moc_qgsprocessingalgrunnertask.cpp:135
QgsProcessingAlgRunnerTask::finished qgsprocessingalgrunnertask.cpp:89
sipQgsProcessingAlgRunnerTask::finished sip_corepart9.cpp:36717
QgsTaskManager::taskStatusChanged qgstaskmanager.cpp:732
QtPrivate::FunctorCall,QtPrivate::List,void,void (__cdecl QgsTaskManager::*)(int)>::call qobjectdefs_impl.h:152
QtPrivate::FunctionPointer::call,void> qobjectdefs_impl.h:186
QtPrivate::QSlotObject,void>::impl qobjectdefs_impl.h:419
QObject::qt_static_metacall :
QgsTask::statusChanged moc_qgstaskmanager.cpp:228
QgsTask::processSubTasksForCompletion qgstaskmanager.cpp:290
QgsTask::qt_static_metacall moc_qgstaskmanager.cpp:132
QObject::event :
sipQgsProcessingAlgRunnerTask::event sip_corepart9.cpp:36667
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify qgsapplication.cpp:580
QCoreApplication::notifyInternal2 :
QCoreApplicationPrivate::sendPostedEvents :
qt_plugin_query_metadata :
QEventDispatcherWin32::processEvents :
qt_plugin_query_metadata :
QEventLoop::exec :
QDialog::exec :
PyInit_QtWidgets :
PyArg_ParseTuple_SizeT :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyFunction_Vectorcall :
PyFloat_FromDouble :
PyVectorcall_Call :
PyObject_Call :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
QObject::qt_static_metacall :
QMetaObject::activate :
PyInit_QtCore :
PyInit_QtCore :
PyType_GenericNew :
PyEval_EvalFrameDefault :
PyFunction_Vectorcall :
PyFloat_FromDouble :
PyVectorcall_Call :
PyObject_Call :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
PyInit_QtCore :
QObject::qt_static_metacall :
QAbstractItemView::doubleClicked :
QTreeView::mouseDoubleClickEvent :
sipQgsProcessingToolboxTreeView::mouseDoubleClickEvent sip_guipart5.cpp:30088
QWidget::event :
QFrame::event :
QAbstractItemView::viewportEvent :
sipQgsProcessingToolboxTreeView::viewportEvent sip_guipart5.cpp:30626
QCoreApplicationPrivate::sendThroughObjectEventFilters :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify qgsapplication.cpp:580
QCoreApplication::notifyInternal2 :
QApplicationPrivate::sendMouseEvent :
QSizePolicy::QSizePolicy :
QSizePolicy::QSizePolicy :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify qgsapplication.cpp:580
QCoreApplication::notifyInternal2 :
QGuiApplicationPrivate::processMouseEvent :
QWindowSystemInterface::sendWindowSystemEvents :
QEventDispatcherWin32::processEvents :
qt_plugin_query_metadata :
QEventLoop::exec :
QCoreApplication::exec :
main main.cpp:1817
WinMain mainwin.cpp:214
__scrt_common_main_seh exe_common.inl:288
BaseThreadInitThunk :
RtlUserThreadStart :
QGIS Info
QGIS Version: 3.33.0-Master
QGIS code revision: 7df22c4d1e
Compiled against Qt: 5.15.3
Running against Qt: 5.15.3
Compiled against GDAL: 3.8.0dev-ecefae5921
Running against GDAL: 3.8.0dev-360a9aea02
System Info
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.19045
|
process
|
load layer into project algorithm crashes qgis and master if layer already loaded what is the bug or the crash load layer into project algorithm crashes qgis and master if the layer is already loaded i found this while trying to debug something more complicated it is ok in steps to reproduce the issue try running the attached model to load a layer and then load it again versions qgis version lima qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version unknown spatialite version qwt version version os version windows version active python plugins annotationmanager annotation labels changedatasource civilplan coveragebuilder version equal area slope qgis plugin file management flowestimator geocoding geometry paster geoprocalgos group transparency joinmultiplelines version layerboard layout panel main memorylayersaver merge selected features nominatim pathfinder version plugin reloader main qcopycanvas qgis resource sharing qlrbrowser quick map services reveal address plugin segment reshape plugin styleloadsave switch active layer valuetool workbench db manager grassprovider metasearch processing qgis version master qgis code revision qt version python version compiled against gdal ogr running against gdal ogr proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version spatialite version qwt version version os version windows version this copy of qgis writes debugging output active python plugins annotationmanager annotation labels changedatasource civilplan coveragebuilder version equal area slope qgis plugin file management flowestimator geocoding geometry paster geoprocalgos group transparency joinmultiplelines version layerboard layout panel main memorylayersaver merge selected features nominatim pathfinder version plugin reloader main qcopycanvas qgis resource sharing qlrbrowser quick 
map services reveal address plugin segment reshape plugin styleloadsave switch active layer valuetool workbench db manager grassprovider metasearch processing supported qgis version i m running a supported qgis version according to new profile i tried with a new additional context i was expecting a more informative stack trace but this is what i get python stack trace windows fatal exception access violation current thread most recent call first file d apps qgis dev python plugins processing gui postprocessing py line in create layer tree layer layer tree layer qgslayertreelayer layer file d apps qgis dev python plugins processing gui postprocessing py line in handlealgorithmresults layer tree layer create layer tree layer map layer details file d apps qgis dev python plugins processing gui algorithmdialog py line in finish if not handlealgorithmresults file d apps qgis dev python plugins processing gui algorithmdialog py line in on complete self finish ok results self context self feedback in place self in place file d apps qgis dev python plugins processing processingplugin py line in executealgorithm dlg exec file d apps qgis dev python plugins processing gui processingtoolbox py line in executealgorithm self executewithgui emit alg id self self in place mode false stack trace qobject thread qgsmaplayer name qgsmaplayer cpp qgslayertreelayer qgslayertreelayer qgslayertreelayer cpp sipqgslayertreelayer sipqgslayertreelayer sip cpp init type qgslayertreelayer sip cpp pyeval evalframedefault pyeval evalframedefault pyobject gc del pyeval evalframedefault pyobject gc del pyeval evalframedefault pyobject gc del pyfunction vectorcall pyvectorcall call pyobject call pyinit qtcore pyinit qtcore pyinit qtcore pyinit qtcore qobject qt static metacall qgsprocessingalgrunnertask executed moc qgsprocessingalgrunnertask cpp qgsprocessingalgrunnertask finished qgsprocessingalgrunnertask cpp sipqgsprocessingalgrunnertask finished sip cpp qgstaskmanager taskstatuschanged 
qgstaskmanager cpp qtprivate functorcall qtprivate list void void cdecl qgstaskmanager int call qobjectdefs impl h qtprivate functionpointer call void qobjectdefs impl h qtprivate qslotobject void impl qobjectdefs impl h qobject qt static metacall qgstask statuschanged moc qgstaskmanager cpp qgstask processsubtasksforcompletion qgstaskmanager cpp qgstask qt static metacall moc qgstaskmanager cpp qobject event sipqgsprocessingalgrunnertask event sip cpp qapplicationprivate notify helper qapplication notify qgsapplication notify qgsapplication cpp qcoreapplication qcoreapplicationprivate sendpostedevents qt plugin query metadata processevents qt plugin query metadata qeventloop exec qdialog exec pyinit qtwidgets pyarg parsetuple sizet pyeval evalframedefault pyobject gc del pyfunction vectorcall pyfloat fromdouble pyvectorcall call pyobject call pyinit qtcore pyinit qtcore pyinit qtcore pyinit qtcore pyinit qtcore pyinit qtcore qobject qt static metacall qmetaobject activate pyinit qtcore pyinit qtcore pytype genericnew pyeval evalframedefault pyfunction vectorcall pyfloat fromdouble pyvectorcall call pyobject call pyinit qtcore pyinit qtcore pyinit qtcore pyinit qtcore qobject qt static metacall qabstractitemview doubleclicked qtreeview mousedoubleclickevent sipqgsprocessingtoolboxtreeview mousedoubleclickevent sip cpp qwidget event qframe event qabstractitemview viewportevent sipqgsprocessingtoolboxtreeview viewportevent sip cpp qcoreapplicationprivate sendthroughobjecteventfilters qapplicationprivate notify helper qapplication notify qgsapplication notify qgsapplication cpp qcoreapplication qapplicationprivate sendmouseevent qsizepolicy qsizepolicy qsizepolicy qsizepolicy qapplicationprivate notify helper qapplication notify qgsapplication notify qgsapplication cpp qcoreapplication qguiapplicationprivate processmouseevent qwindowsysteminterface sendwindowsystemevents processevents qt plugin query metadata qeventloop exec qcoreapplication exec main main cpp winmain 
mainwin cpp scrt common main seh exe common inl basethreadinitthunk rtluserthreadstart qgis info qgis version master qgis code revision compiled against qt running against qt compiled against gdal running against gdal system info cpu type kernel type winnt kernel version
| 1
|
11,334
| 14,147,203,512
|
IssuesEvent
|
2020-11-10 20:25:17
|
Jeffail/benthos
|
https://api.github.com/repos/Jeffail/benthos
|
closed
|
Switch processor permits empty cases
|
annoying bughancement processors
|
The switch processor allows both cases with no `processors`, and cases with no `check`. This can allow configs containing typos to pass linting, despite not being valid:
```
switch:
- check: foo == "bar"
- processors:
- resource: bar
```
This lints, despite `resource: bar` being impossible to execute, because there is no condition associated with it to ever run.
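For contrast, a well-formed version of the snippet above pairs each case's condition with the work it gates — sketched here using only the names from the report:

```yaml
switch:
  - check: foo == "bar"   # condition and its processors live on the same case
    processors:
      - resource: bar
```

With both fields present on each case, a typo in either key no longer produces a silently unreachable case.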
|
1.0
|
Switch processor permits empty cases - The switch processor allows both cases with no `processors`, and cases with no `check`. This can allow configs containing typos to pass linting, despite not being valid:
```
switch:
- check: foo == "bar"
- processors:
- resource: bar
```
This lints, despite `resource: bar` being impossible to execute, because there is no condition associated with it to ever run.
|
process
|
switch processor permits empty cases the switch processor allows both cases with no processors and cases with no check this can cause typos to lint despite not being valid switch check foo bar processors resource bar this lints despite resource bar being impossible to execute because there is no condition associated with it to ever run
| 1
|
74
| 2,524,850,382
|
IssuesEvent
|
2015-01-20 20:31:15
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
closed
|
multi-property filtering on value
|
enhancement process
|
I want to get the user where ts is 300, but I can't see a way of doing this at least with M6.
```
v1 = {
ts:[
100 @user:"u1",
200 @user:"u2",
300 @user:"u1" <-- I want this.
]
}
```
The best I could come up with was:
```
v1.properties("ts").filter(p->300 == (long)((Property)p.get()).value()).properties("user").value().next()
```
But having to use a filter isn't great especially with all the casting. Be good if I overlooked something though.
What I would like is:
```
v1.properties("ts").hasValue(300).properties("user").value().next()
```
I guess most of the `has*` methods should also have the equivalent `hasValue*` method on property traversals otherwise multi-properties will be difficult to query.
|
1.0
|
multi-property filtering on value - I want to get the user where ts is 300, but I can't see a way of doing this at least with M6.
```
v1 = {
ts:[
100 @user:"u1",
200 @user:"u2",
300 @user:"u1" <-- I want this.
]
}
```
The best I could come up with was:
```
v1.properties("ts").filter(p->300 == (long)((Property)p.get()).value()).properties("user").value().next()
```
But having to use a filter isn't great especially with all the casting. Be good if I overlooked something though.
What I would like is:
```
v1.properties("ts").hasValue(300).properties("user").value().next()
```
I guess most of the `has*` methods should also have the equivalent `hasValue*` method on property traversals otherwise multi-properties will be difficult to query.
|
process
|
multi property filtering on value i want to get the user where ts is but i can t see a way of doing this at least with ts user user user i want this the best i could come up with was properties ts filter p long property p get value properties user value next but having to use a filter isn t great especially with all the casting be good if i overlooked something though what i would like is properties ts hasvalue properties user value next i guess most of the has methods should also have the equivalent hasvalue method on property traversals otherwise multi properties will be difficult to query
| 1
|
111,194
| 24,085,209,055
|
IssuesEvent
|
2022-09-19 10:17:15
|
arduino/arduino-ide
|
https://api.github.com/repos/arduino/arduino-ide
|
closed
|
UI unresponsive when sketch has a very long line
|
topic: code type: imperfection
|
### Describe the problem
Arduino sketches may contain large machine generated arrays for data such as images. These may span many columns, using a block that follows the dimensions of the source data (e.g., an array with 320 elements per line might be generated for a 320x240 pixel image), or even be all on a single long line.
🐛 The Arduino IDE UI becomes noticeably laggy or even completely unresponsive when the sketch contains a long line
### To reproduce
1. Download the following demonstration sketch, which contains a line 18432 characters long:
[LongLine.zip](https://github.com/arduino/arduino-ide/files/9389700/LongLine.zip)
1. Unzip the downloaded file.
1. Open the "**LongLine**" sketch in the Arduino IDE.
🐛 The IDE UI is completely unresponsive.
1. Force close the Arduino IDE.
1. Start the Arduino IDE (making sure it loads an innocuous sketch on startup).
1. Open the "**Command Palette**" via the <kbd>**Ctrl**</kbd>+<kbd>**Shift**</kbd>+<kbd>**P**</kbd> shortcut (<kbd>**Command**</kbd>+<kbd>**Shift**</kbd>+<kbd>**P**</kbd> for macOS users).
1. Select the "**Preferences: Open Settings (UI)**" command.
1. In the "**Search Settings**" field, type `editor.maxTokenizationLineLength`
1. Change the value of the "**Editor: Max Tokenization Line Length**" setting from the default `20000` to `500`
1. Open the "**LongLine**" sketch in the Arduino IDE.
🙂 The IDE remains perfectly responsive.
### Expected behavior
IDE is usable when the sketch contains long lines.
### Arduino IDE version
2.0.0-rc9.2.snapshot-de32bdd
### Operating system
Windows
### Operating system version
10
### Additional context
I am able reproduce the issue in [**Theia Blueprint**](https://theia-ide.org/docs/blueprint_download) (but not in [**VS Code**](https://code.visualstudio.com/)), so the inability to handle such content is not a bug in the Arduino IDE codebase.
I see that this was reported in the Theia project and fixed by reducing the default value of the `editor.maxTokenizationLineLength` to 400: https://github.com/eclipse-theia/theia/issues/8021
So a similar change should be made in Arduino IDE as well.
---
I used a ridiculously long line in the demo sketch (though it was generated from an image of only 32x32 px, using [an established tool](https://notisrac.github.io/FileToCArray/)). More reasonable line lengths result in less dramatic impact, but still make the IDE unpleasant to use. A real world file was provided here:
https://forum.arduino.cc/t/2-0-slows-down-if-very-long-lines-but-ok-with-crs-line-feeds-inserted/1021335/5
---
It seems that changes to the `editor.maxTokenizationLineLength` setting are not applied to sketches which have already been "tokenized", so make sure to reload the sketch if you are experimenting with the setting.
---
The issue is not related to the Arduino Language Server because it occurs even when the language server is not running due to not having a board open.
---
Originally reported at https://forum.arduino.cc/t/2-0-slows-down-if-very-long-lines-but-ok-with-crs-line-feeds-inserted/1021335
### Issue checklist
- [X] I searched for previous reports in [the issue tracker](https://github.com/arduino/arduino-ide/issues?q=)
- [X] I verified the problem still occurs when using the latest [nightly build](https://github.com/arduino/arduino-ide#nightly-builds)
- [X] My report contains all necessary details
|
1.0
|
UI unresponsive when sketch has a very long line - ### Describe the problem
Arduino sketches may contain large machine generated arrays for data such as images. These may span many columns, using a block that follows the dimensions of the source data (e.g., an array with 320 elements per line might be generated for a 320x240 pixel image), or even be all on a single long line.
🐛 The Arduino IDE UI becomes noticeably laggy or even completely unresponsive when the sketch contains a long line
### To reproduce
1. Download the following demonstration sketch, which contains a line 18432 characters long:
[LongLine.zip](https://github.com/arduino/arduino-ide/files/9389700/LongLine.zip)
1. Unzip the downloaded file.
1. Open the "**LongLine**" sketch in the Arduino IDE.
🐛 The IDE UI is completely unresponsive.
1. Force close the Arduino IDE.
1. Start the Arduino IDE (making sure it loads an innocuous sketch on startup).
1. Open the "**Command Palette**" via the <kbd>**Ctrl**</kbd>+<kbd>**Shift**</kbd>+<kbd>**P**</kbd> shortcut (<kbd>**Command**</kbd>+<kbd>**Shift**</kbd>+<kbd>**P**</kbd> for macOS users).
1. Select the "**Preferences: Open Settings (UI)**" command.
1. In the "**Search Settings**" field, type `editor.maxTokenizationLineLength`
1. Change the value of the "**Editor: Max Tokenization Line Length**" setting from the default `20000` to `500`
1. Open the "**LongLine**" sketch in the Arduino IDE.
🙂 The IDE remains perfectly responsive.
### Expected behavior
IDE is usable when the sketch contains long lines.
### Arduino IDE version
2.0.0-rc9.2.snapshot-de32bdd
### Operating system
Windows
### Operating system version
10
### Additional context
I am able reproduce the issue in [**Theia Blueprint**](https://theia-ide.org/docs/blueprint_download) (but not in [**VS Code**](https://code.visualstudio.com/)), so the inability to handle such content is not a bug in the Arduino IDE codebase.
I see that this was reported in the Theia project and fixed by reducing the default value of the `editor.maxTokenizationLineLength` to 400: https://github.com/eclipse-theia/theia/issues/8021
So a similar change should be made in Arduino IDE as well.
---
I used a ridiculously long line in the demo sketch (though it was generated from an image of only 32x32 px, using [an established tool](https://notisrac.github.io/FileToCArray/)). More reasonable line lengths result in less dramatic impact, but still make the IDE unpleasant to use. A real world file was provided here:
https://forum.arduino.cc/t/2-0-slows-down-if-very-long-lines-but-ok-with-crs-line-feeds-inserted/1021335/5
---
It seems that changes to the `editor.maxTokenizationLineLength` setting are not applied to sketches which have already been "tokenized", so make sure to reload the sketch if you are experimenting with the setting.
---
The issue is not related to the Arduino Language Server because it occurs even when the language server is not running due to not having a board open.
---
Originally reported at https://forum.arduino.cc/t/2-0-slows-down-if-very-long-lines-but-ok-with-crs-line-feeds-inserted/1021335
### Issue checklist
- [X] I searched for previous reports in [the issue tracker](https://github.com/arduino/arduino-ide/issues?q=)
- [X] I verified the problem still occurs when using the latest [nightly build](https://github.com/arduino/arduino-ide#nightly-builds)
- [X] My report contains all necessary details
|
non_process
|
ui unresponsive when sketch has a very long line describe the problem arduino sketches may contain large machine generated arrays for data such as images these may span many columns using a block that follows the dimensions of the source data e g an array with elements per line might be generated for a pixel image or even be all on a single long line 🐛 the arduino ide ui becomes noticeably laggy or even completely unresponsive when the sketch contains a long line to reproduce download the following demonstration sketch which contains a line characters long unzip the downloaded file open the longline sketch in the arduino ide 🐛 the ide ui is completely unresponsive force close the arduino ide start the arduino ide making sure it loads an innocuous sketch on startup open the command palette via the ctrl shift p shortcut command shift p for macos users select the preferences open settings ui command in the search settings field type editor maxtokenizationlinelength change the value of the editor max tokenization line length setting from the default to open the longline sketch in the arduino ide 🙂 the ide remains perfectly responsive expected behavior ide is usable when the sketch contains long lines arduino ide version snapshot operating system windows operating system version additional context i am able reproduce the issue in but not in so the inability to handle such content is not a bug in the arduino ide codebase i see that this was reported in the theia project and fixed by reducing the default value of the editor maxtokenizationlinelength to so a similar change should be made in arduino ide as well i used a ridiculously long line in the demo sketch though it was generated from an image of only px using more reasonable line lengths result in less dramatic impact but still make the ide unpleasant to use a real world file was provided here it seems that changes to the editor maxtokenizationlinelength setting are not applied to sketches which have already been 
tokenized so make sure to reload the sketch if you are experimenting with the setting the issue is not related to the arduino language server because it occurs even when the language server is not running due to not having a board open originally reported at issue checklist i searched for previous reports in i verified the problem still occurs when using the latest my report contains all necessary details
| 0
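The row above traces the UI hang to machine-generated arrays emitted on extremely long lines. A practical workaround on the generator side is to wrap the data at a fixed element count per line, so no line approaches editor tokenization limits such as Monaco's `editor.maxTokenizationLineLength`. A hedged Python sketch (the function name and output format are illustrative, not from any of the tools mentioned):

```python
# Sketch: emit a machine-generated C array with a bounded number of
# elements per line, keeping every line far below editor tokenization
# limits. The byte values here are dummy data standing in for pixels.

def format_c_array(name, values, per_line=16):
    lines = [f"const unsigned char {name}[] = {{"]
    for i in range(0, len(values), per_line):
        chunk = ", ".join(f"0x{v:02x}" for v in values[i:i + per_line])
        lines.append("  " + chunk + ",")
    lines.append("};")
    return "\n".join(lines)

pixels = list(range(256)) * 4          # 1024 dummy bytes
src = format_c_array("image_data", pixels)
print(max(len(line) for line in src.splitlines()))  # every line stays short
```

With `per_line=16` each data line stays under 100 characters, well inside even the reduced 400–500 character tokenization limits discussed in the report.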
|
13,325
| 8,188,368,119
|
IssuesEvent
|
2018-08-30 01:28:56
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
tf.nn.conv2d() inconsistent dilation rate at runtime
|
type:bug/performance
|
### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: yes
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux Ubuntu 16.04 LTS
- **TensorFlow installed from (source or binary)**: binary (pip install)
- **TensorFlow version (use command below)**: ('v1.7.0-3-g024aecf414', '1.7.0')
- **Python version**: 2.7.12
- **Bazel version (if compiling from source)**: N/A
- **GCC/Compiler version (if compiling from source)**: N/A
- **CUDA/cuDNN version**: 9.0/7.0.5
- **GPU model and memory**: GTX1080, 8GB
- **Exact command to reproduce**: shown below
### Describe the problem
Dilated convolution via tf.nn.conv2d() with data_format='NHWC' gets corrupted to 'NCHW' during sess.run(). Since the data_format alone is corrupted and the dilation rate is unchanged, the code fails with an error message indicating that it does not support dilation along the depth dimension (dilation rate of [1, 2, 2, 1] is valid for 'NHWC' format but not for 'NCHW' format).
It seems that this is a CUDA problem, since if I disable the GPU using os.environ['CUDA_VISIBLE_DEVICES'] = '' line, the code does not error out.
Weirdly enough, if I don't do anything to the output of tf.nn.conv2d(), the code does not error out either (corresponds to setting use_reduce_mean=False in the below example).
Also, if the dilation rate is set as [1, 1, 2, 2], the code does not error out, although this goes against the [documentation](https://www.tensorflow.org/versions/r1.7/api_docs/python/tf/nn/conv2d), which says that `The dimension order is determined by the value of data_format`
### Source code / logs
source code to reproduce the bug
```
import os
import numpy as np
import tensorflow as tf
# os.environ['CUDA_VISIBLE_DEVICES'] = ''
def bug():
use_reduce_mean = True
dilation_rate = 2
# bug # 1: conv2d changes from NHWC to NCHW
input_shape = [1, 32, 32, 1]
in_place = tf.placeholder(dtype=tf.float32, shape=input_shape)
filter_tensor = tf.Variable(tf.random_normal(
[3, 3, 1, 1], dtype=tf.float32, stddev=0.1), trainable=True)
out_tensor = tf.nn.conv2d(
in_place, filter=filter_tensor, strides=(1, 1, 1, 1),
padding='SAME', dilations=(1, dilation_rate, dilation_rate, 1),
data_format='NHWC')
if use_reduce_mean:
out_tensor = tf.reduce_mean(out_tensor)
with tf.Session() as sess:
init_op = tf.global_variables_initializer()
init_op.run()
f_dict = {in_place: np.zeros(input_shape)}
sess_out = sess.run(out_tensor, feed_dict=f_dict)
if __name__ == "__main__":
bug()
```
error message:
```
Executor failed to create kernel. Invalid argument: Current implementation does not yet support dilations in the batch and depth dimensions.
[[Node: Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 2, 2, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, Variable/read)]]
Traceback (most recent call last):
File "tmp.py", line 48, in <module>
bug()
File "tmp.py", line 44, in bug
sess_out = sess.run(out_tensor, feed_dict=f_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 905, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1140, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1321, in _do_run
run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Current implementation does not yet support dilations in the batch and depth dimensions.
[[Node: Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 2, 2, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, Variable/read)]]
Caused by op u'Conv2D', defined at:
File "tmp.py", line 48, in <module>
bug()
File "tmp.py", line 27, in bug
data_format='NHWC')
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 953, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3290, in create_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1654, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): Current implementation does not yet support dilations in the batch and depth dimensions.
[[Node: Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 2, 2, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, Variable/read)]]
```
|
True
|
tf.nn.conv2d() inconsistent dilation rate at runtime - ### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: yes
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux Ubuntu 16.04 LTS
- **TensorFlow installed from (source or binary)**: binary (pip install)
- **TensorFlow version (use command below)**: ('v1.7.0-3-g024aecf414', '1.7.0')
- **Python version**: 2.7.12
- **Bazel version (if compiling from source)**: N/A
- **GCC/Compiler version (if compiling from source)**: N/A
- **CUDA/cuDNN version**: 9.0/7.0.5
- **GPU model and memory**: GTX1080, 8GB
- **Exact command to reproduce**: shown below
### Describe the problem
Dilated convolution via tf.nn.conv2d() with data_format='NHWC' gets corrupted to 'NCHW' during sess.run(). Since the data_format alone is corrupted and the dilation rate is unchanged, the code fails with an error message indicating that it does not support dilation along the depth dimension (dilation rate of [1, 2, 2, 1] is valid for 'NHWC' format but not for 'NCHW' format).
It seems that this is a CUDA problem, since if I disable the GPU using os.environ['CUDA_VISIBLE_DEVICES'] = '' line, the code does not error out.
Weirdly enough, if I don't do anything to the output of tf.nn.conv2d(), the code does not error out either (corresponds to setting use_reduce_mean=False in the below example).
Also, if the dilation rate is set as [1, 1, 2, 2], the code does not error out, although this goes against the [documentation](https://www.tensorflow.org/versions/r1.7/api_docs/python/tf/nn/conv2d), which says that `The dimension order is determined by the value of data_format`
### Source code / logs
source code to reproduce the bug
```
import os
import numpy as np
import tensorflow as tf
# os.environ['CUDA_VISIBLE_DEVICES'] = ''
def bug():
use_reduce_mean = True
dilation_rate = 2
# bug # 1: conv2d changes from NHWC to NCHW
input_shape = [1, 32, 32, 1]
in_place = tf.placeholder(dtype=tf.float32, shape=input_shape)
filter_tensor = tf.Variable(tf.random_normal(
[3, 3, 1, 1], dtype=tf.float32, stddev=0.1), trainable=True)
out_tensor = tf.nn.conv2d(
in_place, filter=filter_tensor, strides=(1, 1, 1, 1),
padding='SAME', dilations=(1, dilation_rate, dilation_rate, 1),
data_format='NHWC')
if use_reduce_mean:
out_tensor = tf.reduce_mean(out_tensor)
with tf.Session() as sess:
init_op = tf.global_variables_initializer()
init_op.run()
f_dict = {in_place: np.zeros(input_shape)}
sess_out = sess.run(out_tensor, feed_dict=f_dict)
if __name__ == "__main__":
bug()
```
error message:
```
Executor failed to create kernel. Invalid argument: Current implementation does not yet support dilations in the batch and depth dimensions.
[[Node: Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 2, 2, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, Variable/read)]]
Traceback (most recent call last):
File "tmp.py", line 48, in <module>
bug()
File "tmp.py", line 44, in bug
sess_out = sess.run(out_tensor, feed_dict=f_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 905, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1140, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1321, in _do_run
run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Current implementation does not yet support dilations in the batch and depth dimensions.
[[Node: Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 2, 2, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, Variable/read)]]
Caused by op u'Conv2D', defined at:
File "tmp.py", line 48, in <module>
bug()
File "tmp.py", line 27, in bug
data_format='NHWC')
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 953, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3290, in create_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1654, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): Current implementation does not yet support dilations in the batch and depth dimensions.
[[Node: Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 2, 2, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, Variable/read)]]
```
|
non_process
|
tf nn inconsistent dilation rate at runtime system information have i written custom code as opposed to using a stock example script provided in tensorflow yes os platform and distribution e g linux ubuntu linux ubuntu lts tensorflow installed from source or binary binary pip install tensorflow version use command below python version bazel version if compiling from source n a gcc compiler version if compiling from source n a cuda cudnn version gpu model and memory exact command to reproduce shown below describe the problem dilated convolution via tf nn with data format nhwc gets corrupted to nchw during sess run since the data format alone is corrupted and the dilation rate is unchanged the code fails with an error message indicating that it does not support dilation along the depth dimension dilation rate of is valid for nhwc format but not for nchw format it seems that this is a cuda problem since if i disable the gpu using os environ line the code does not error out weirdly enough if i don t do anything to the output of tf nn the code does not error out either corresponds to setting use reduce mean false in the below example also if the dilation rate is set as the code does not error out although this goes against the which says that the dimension order is determined by the value of data format source code logs source code to reproduce the bug import os import numpy as np import tensorflow as tf os environ def bug use reduce mean true dilation rate bug changes from nhwc to nchw input shape in place tf placeholder dtype tf shape input shape filter tensor tf variable tf random normal dtype tf stddev trainable true out tensor tf nn in place filter filter tensor strides padding same dilations dilation rate dilation rate data format nhwc if use reduce mean out tensor tf reduce mean out tensor with tf session as sess init op tf global variables initializer init op run f dict in place np zeros input shape sess out sess run out tensor feed dict f dict if name main bug 
error message executor failed to create kernel invalid argument current implementation does not yet support dilations in the batch and depth dimensions padding same strides use cudnn on gpu true device job localhost replica task device gpu transposenhwctonchw layoutoptimizer variable read traceback most recent call last file tmp py line in bug file tmp py line in bug sess out sess run out tensor feed dict f dict file usr local lib dist packages tensorflow python client session py line in run run metadata ptr file usr local lib dist packages tensorflow python client session py line in run feed dict tensor options run metadata file usr local lib dist packages tensorflow python client session py line in do run run metadata file usr local lib dist packages tensorflow python client session py line in do call raise type e node def op message tensorflow python framework errors impl invalidargumenterror current implementation does not yet support dilations in the batch and depth dimensions padding same strides use cudnn on gpu true device job localhost replica task device gpu transposenhwctonchw layoutoptimizer variable read caused by op u defined at file tmp py line in bug file tmp py line in bug data format nhwc file usr local lib dist packages tensorflow python ops gen nn ops py line in data format data format dilations dilations name name file usr local lib dist packages tensorflow python framework op def library py line in apply op helper op def op def file usr local lib dist packages tensorflow python framework ops py line in create op op def op def file usr local lib dist packages tensorflow python framework ops py line in init self traceback self graph extract stack pylint disable protected access invalidargumenterror see above for traceback current implementation does not yet support dilations in the batch and depth dimensions padding same strides use cudnn on gpu true device job localhost replica task device gpu transposenhwctonchw layoutoptimizer variable read
| 0
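To make the NHWC dilation semantics in the row above concrete: a dilation rate of `(1, 2, 2, 1)` applies only to the two spatial dimensions, which is why the layout optimizer's silent switch to NCHW (where those positions mean channel and height) triggers the "dilations in the batch and depth dimensions" error. A small NumPy sketch of what spatial dilation does to a kernel (illustrative, not TensorFlow's implementation):

```python
import numpy as np

# Sketch: spatial dilation as used by tf.nn.conv2d with NHWC dilations
# (1, r, r, 1). A 3x3 kernel dilated with rate 2 has an effective
# receptive field of 5x5: zeros are inserted between the original taps.

def dilate_kernel(k, rate):
    """Insert (rate - 1) zeros between kernel taps along both spatial axes."""
    kh, kw = k.shape
    out = np.zeros(((kh - 1) * rate + 1, (kw - 1) * rate + 1), dtype=k.dtype)
    out[::rate, ::rate] = k
    return out

k = np.arange(9, dtype=float).reshape(3, 3)
dk = dilate_kernel(k, rate=2)
print(dk.shape)  # (5, 5)
```

The same tuple interpreted under NCHW would ask for dilation along the channel axis, which the kernel above has no notion of — matching the runtime error in the report.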
|
18,477
| 24,550,705,585
|
IssuesEvent
|
2022-10-12 12:23:53
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] [Angular Upgrade] 'Admins profile' icon is not getting displayed beside 'My account' tab
|
Bug P1 Participant manager Process: Fixed Process: Tested dev
|
**AR:** 'Admins profile' icon is not getting displayed beside the 'My account' tab
**ER:** 'Admins profile' icon should get displayed beside the 'My account' tab, i.e., the first letter of the admin's first name should get displayed.
**Actual :**

**Expected:**

|
2.0
|
[PM] [Angular Upgrade] 'Admins profile' icon is not getting displayed beside 'My account' tab - **AR:** 'Admins profile' icon is not getting displayed beside the 'My account' tab
**ER:** 'Admins profile' icon should get displayed beside the 'My account' tab, i.e., the first letter of the admin's first name should get displayed.
**Actual :**

**Expected:**

|
process
|
admins profile icon is not getting displayed beside my account tab ar admins profile icon is not getting displayed beside my account tab er admins profile icon should get displayed beside my account tab i e admins first letter of first name should get displayed actual expected
| 1
|
316,386
| 23,628,974,309
|
IssuesEvent
|
2022-08-25 07:40:41
|
AtlasOfLivingAustralia/extended-data-model
|
https://api.github.com/repos/AtlasOfLivingAustralia/extended-data-model
|
opened
|
Document basic navigation
|
documentation feedback
|
Create a user support article to explain the basic navigation starting from a dataset
|
1.0
|
Document basic navigation - Create a user support article to explain the basic navigation starting from a dataset
|
non_process
|
document basic navigation create a user support article to explain the basic navigation starting from a dataset
| 0
|
2,025
| 4,846,823,679
|
IssuesEvent
|
2016-11-10 13:09:03
|
raphym/Simulation-of-routing-problem-with-intelligent-agents
|
https://api.github.com/repos/raphym/Simulation-of-routing-problem-with-intelligent-agents
|
opened
|
Create the elements of the Map
|
being processed
|
I have to create the objects of the map, like:
- traffic-lights
- Lamps
- Providers
|
1.0
|
Create the elements of the Map - I have to create the objects of the map, like:
- traffic-lights
- Lamps
- Providers
|
process
|
create the elements of the map i have to create the objects of the map like traffic lights lamps providers
| 1
|
139,873
| 5,392,548,889
|
IssuesEvent
|
2017-02-26 12:15:44
|
cdnjs/cdnjs
|
https://api.github.com/repos/cdnjs/cdnjs
|
closed
|
[Request] Update to jqplot 1.0.9
|
High Priority Library - Request to Add/Update wait for response
|
**Library name:** jqplot
**Git repository url:** https://github.com/jqPlot/jqPlot
**npm package url(optional):**
**License(s):** MIT and GPL version 2.0 licenses
**Official homepage:** http://www.jqplot.com/
|
1.0
|
[Request] Update to jqplot 1.0.9 - **Library name:** jqplot
**Git repository url:** https://github.com/jqPlot/jqPlot
**npm package url(optional):**
**License(s):** MIT and GPL version 2.0 licenses
**Official homepage:** http://www.jqplot.com/
|
non_process
|
update to jqplot library name jqplot git repository url npm package url optional license s mit and gpl version licenses official homepage
| 0
|
593
| 3,067,297,534
|
IssuesEvent
|
2015-08-18 09:32:44
|
maraujop/django-crispy-forms
|
https://api.github.com/repos/maraujop/django-crispy-forms
|
closed
|
Repeated tests execution
|
Cleanup Testing/Process
|
Each test call for a different template pack runs more tests than is required, because of class inheritance.
E.g. bootstrap runs 134 tests now and before. But:
```TestFormHelper``` - has 22 tests
```TestBootstrapFormHelper``` - has 4 own tests and 22 from ```TestFormHelper```
So, these 22 tests runs twice. Same with all inherited test classes and other template packs.
Real tests count:
* uni_form - 78 (139 before)
* bootstrap - 91 (134 before)
* bootstrap3 - 94 (175 before)
|
1.0
|
Repeated tests execution - Each test call for a different template pack runs more tests than is required, because of class inheritance.
E.g. bootstrap runs 134 tests now and before. But:
```TestFormHelper``` - has 22 tests
```TestBootstrapFormHelper``` - has 4 own tests and 22 from ```TestFormHelper```
So, these 22 tests runs twice. Same with all inherited test classes and other template packs.
Real tests count:
* uni_form - 78 (139 before)
* bootstrap - 91 (134 before)
* bootstrap3 - 94 (175 before)
|
process
|
repeated tests execution each test call for different template pack runs more tests that it is required because of class inheritance e g bootstrap runs tests now and before but testformhelper has tests testbootstrapformhelper has own tests and from testformhelper so these tests runs twice same with all inherited test classes and other template packs real tests count uni form before bootstrap before before
| 1
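The double-counting described in the row above is easy to reproduce with `unittest`: a subclass of a `TestCase` inherits every test method, so collecting both classes runs the base tests twice. A minimal sketch (class and method names mirror the report but the bodies are illustrative):

```python
import unittest

# Sketch of the inheritance-driven test duplication described above: the
# child class inherits both base test methods, so a suite containing both
# classes executes the base tests twice.

class TestFormHelper(unittest.TestCase):
    def test_one(self):
        self.assertTrue(True)

    def test_two(self):
        self.assertTrue(True)

class TestBootstrapFormHelper(TestFormHelper):  # inherits test_one, test_two
    def test_bootstrap_only(self):
        self.assertTrue(True)

loader = unittest.TestLoader()
base = loader.loadTestsFromTestCase(TestFormHelper).countTestCases()
child = loader.loadTestsFromTestCase(TestBootstrapFormHelper).countTestCases()
print(base, child)  # 2 3 -> the two base tests are collected twice in total
```

Running only the leaf classes (or mixing the shared tests in via a non-`TestCase` base) avoids collecting the inherited tests twice, which is exactly the reduction in test counts the issue reports.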
|
493,992
| 14,243,059,031
|
IssuesEvent
|
2020-11-19 03:23:49
|
genkimaps/gis-hub
|
https://api.github.com/repos/genkimaps/gis-hub
|
closed
|
reorder resources button does not save/commit changes
|
bug low priority
|
@genkimaps
from a dataset's edit page, a user should be able to reorder the resources. on the front end, this looks like it works (users can move/reorder resources). however, after saving and refreshing, this does not actually have an effect.
https://docs.ckan.org/en/ckan-2.7.3/api/#ckan.logic.action.update.package_resource_reorder
package_resource_reorder function in `/home/tk/ckan/ckan/logic/action/update.py`
We could use the patch-resources function instead, but since reordering is offered on the front end, it should be functional.
|
1.0
|
reorder resources button does not save/commit changes - @genkimaps
from a dataset's edit page, a user should be able to reorder the resources. on the front end, this looks like it works (users can move/reorder resources). however, after saving and refreshing, this does not actually have an effect.
https://docs.ckan.org/en/ckan-2.7.3/api/#ckan.logic.action.update.package_resource_reorder
package_resource_reorder function in `/home/tk/ckan/ckan/logic/action/update.py`
We could use the patch-resources function instead, but since reordering is offered on the front end, it should be functional.
|
non_process
|
reorder resources button does not save commit changes genkimaps from a dataset s edit page a user should be able to reorder the resources on the front end this looks like it works users can move reorder resources however after saving and refreshing this does not actually have an effect package resource reorder function in home tk ckan ckan logic action update py could use the patch resources function but if it s an option on the front end should be functional
| 0
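The CKAN action referenced in the row above, `package_resource_reorder`, documents its semantics as: resource ids supplied in the order list are moved to the front in that order, and any remaining resources keep their original relative order. A standalone sketch of that reordering logic (not CKAN's implementation):

```python
# Sketch of the reorder semantics behind CKAN's package_resource_reorder
# action: ids listed in `order` move to the front, the rest keep their
# original relative order. Standalone illustration, not CKAN code.

def reorder_resources(resource_ids, order):
    missing = [r for r in order if r not in resource_ids]
    if missing:
        raise ValueError(f"unknown resource ids: {missing}")
    rest = [r for r in resource_ids if r not in order]
    return list(order) + rest

print(reorder_resources(["a", "b", "c", "d"], ["c", "a"]))  # ['c', 'a', 'b', 'd']
```

If the front-end reorder widget silently fails, comparing the payload it submits against this expected shape (a partial or full list of resource ids) is a reasonable first debugging step.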
|
15,025
| 18,739,938,033
|
IssuesEvent
|
2021-11-04 12:28:24
|
opensafely-core/job-server
|
https://api.github.com/repos/opensafely-core/job-server
|
opened
|
Notify Applicants of Feedback
|
application-process
|
When a staff member leaves feedback on an application, we should notify the application author via email that this has been done, and link them to their application.
|
1.0
|
Notify Applicants of Feedback - When a staff member leaves feedback on an application, we should notify the application author via email that this has been done, and link them to their application.
|
process
|
notify applicants of feedback when a staff member leaves feedback on an application we should notify the application author via email this has been done and link them to their application
| 1
|
19,429
| 25,597,667,626
|
IssuesEvent
|
2022-12-01 17:23:42
|
GIScience/sketch-map-tool
|
https://api.github.com/repos/GIScience/sketch-map-tool
|
closed
|
Add refactored upload processing code
|
component:upload-processing priority:high
|
The basic code for the handling of uploaded photos of sketch maps should be added to this repository.
|
1.0
|
Add refactored upload processing code - The basic code for the handling of uploaded photos of sketch maps should be added to this repository.
|
process
|
add refactored upload processing code the basic code for the handling of uploaded photos of sketch maps should be added to this repository
| 1
|
170,380
| 13,185,437,210
|
IssuesEvent
|
2020-08-12 21:24:24
|
hannakim91/overlook-hotel
|
https://api.github.com/repos/hannakim91/overlook-hotel
|
closed
|
Customer: select a date for booking
|
priority: high priority: medium type: enhancement type: test
|
- hook up the select-date button to an event handler and use the date input to determine which date to search for --- `getAvailableRooms` method in Hotel
|
1.0
|
Customer: select a date for booking - - hook up the select-date button to an event handler and use the date input to determine which date to search for --- `getAvailableRooms` method in Hotel
|
non_process
|
customer select a date for booking hook up select date button to event handler and use date input to see what date to search for getavailablerooms method in hotel
| 0
|
2,302
| 5,116,796,503
|
IssuesEvent
|
2017-01-07 08:40:16
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
custom data attribute to table headers or table cells
|
help wanted inprocess
|
I want to add a data-text="My own value" attribute to each cell or header for my reference to add some functionality.
How can I save my temporary data in table cell or headers, which can be used later
Thanks
|
1.0
|
custom data attribute to table headers or table cells - I want to add a data-text="My own value" attribute to each cell or header for my reference to add some functionality.
How can I save my temporary data in table cell or headers, which can be used later
Thanks
|
process
|
custom data attribute to table headers or table cells i want add a data text my own value attribute to each cell or header for my reference to add some functionality how can i save my temporary data in table cell or headers which can be used later thanks
| 1
|
11,131
| 13,957,690,658
|
IssuesEvent
|
2020-10-24 08:10:12
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
DE: Question related to harvest console & results
|
DE - Germany Geoportal Harvesting process
|
Dear Angelo,
We (Sara Biesel and I) tried out the INSPIRE Geoportal Harvest Console and have some questions:
Schedule a new harvest:
The number at “current jobs running” and/or “current jobs queued” is related to all member states, right?
How could we update our INSPIRE Service Register entry, e.g. change the CSW-URL (page is not available: http://inspire-geoportal.ec.europa.eu/INSPIRERegistry/)?
Which OGC Filter is used by the INSPIRE Geoportal Harvest Console for Germany's Discovery Service?
Harvesting job
Is it possible to have an overview
about the process actual started,
about the process actual ended,
“how far” the harvest process is (status during “running”)?
time for the whole harvest process?
Is it possible to “create” a cron job, that we can decide, when the harvest process should start (e.g. 01.30 AM)? We think, that our whole harvest process runs at least 20h. So we should start the process right after midnight, to have one harvest circle on one day (not over midnight).
Check and publish (Harvesting report)
After several harvest process there are still duplicates in our report. Could we delete manual several records?
Thanks in advance.
Best regards.
Anja Litka & Sara Biesel
|
1.0
|
DE: Question related to harvest console & results - Dear Angelo,
We (Sara Biesel and I) tried out the INSPIRE Geoportal Harvest Console and have some questions:
Schedule a new harvest:
The number at “current jobs running” and/or “current jobs queued” is related to all member states, right?
How could we update our INSPIRE Service Register entry, e.g. change the CSW-URL (page is not available: http://inspire-geoportal.ec.europa.eu/INSPIRERegistry/)?
Which OGC Filter is used by the INSPIRE Geoportal Harvest Console for Germany's Discovery Service?
Harvesting job
Is it possible to have an overview
about the process actual started,
about the process actual ended,
“how far” the harvest process is (status during “running”)?
time for the whole harvest process?
Is it possible to “create” a cron job, that we can decide, when the harvest process should start (e.g. 01.30 AM)? We think, that our whole harvest process runs at least 20h. So we should start the process right after midnight, to have one harvest circle on one day (not over midnight).
Check and publish (Harvesting report)
After several harvest process there are still duplicates in our report. Could we delete manual several records?
Thanks in advance.
Best regards.
Anja Litka & Sara Biesel
|
process
|
de question related to harvest console results dear angelo we sara biesel and i tried out the inspire geoportal harvest console and have some questions schedule a new harvest the number at ldquo current jobs running rdquo and or ldquo current jobs queued rdquo is related to all member states right how could we update our inspire service register entry e g change the csw url page is not available which ogc filter is used by the inspire geoportal harvest console for germany s discovery service harvesting job is it possible to have an overview about the process actual started about the process actual ended ldquo how far rdquo the harvest process is status during ldquo running rdquo time for the whole harvest process is it possible to ldquo create rdquo a cron job that we can decide when the harvest process should start e g am we think that our whole harvest process runs at least so we should start the process right after midnight to have one harvest circle on one day not over midnight check and publish harvesting report after several harvest process there are still duplicates in our report could we delete manual several records thanks in advance best regards anja litka amp sara biesel
| 1
|
831,884
| 32,064,252,649
|
IssuesEvent
|
2023-09-25 00:36:52
|
RbAvci/My-Coursework-Planner
|
https://api.github.com/repos/RbAvci/My-Coursework-Planner
|
opened
|
[PD] Feeling, behaving and acting like a professional in the software industry
|
🔑 Priority Key 🐂 Size Medium 📅 HTML-CSS 📅 Week 2
|
From Module-HTML-CSS created by [kfklein15](https://github.com/kfklein15): CodeYourFuture/Module-HTML-CSS#44
### Coursework content
You are back to your Plan your Life as a Developer.
This plan is not something that you can finalise in a short period. You'll need to go back to it a few more times if you'd like to find an **honest description** of your current week and identify the necessary changes to it.
As a week will have passed since you did it, you can **compare** what you wrote with the reality of the week that passed.
**Reflections on your current plan.**
- How much energy did you have when you sat down to study and work on CYF projects?
- How tired or distracted were you?
- How many interruptions did you get?
**Other areas** to reflect:
- On your work (or other studies), did you work longer hours than what you planned? What happened?
- Were there any activities that you dedicated more time to it than what you expected?
- How is your sleep?
- Do you manage to feel rested in the morning?
- How do you start your day?
Reflecting on this, think about these **two topics**:
1. What changes you might need to bring to your life.
2. Define their short/medium/long-term goals.
Then:
- Add these two items to your existing Google Doc. _(Reminder: minimum 50 words each and reviewed with an automated grammar tool)_
- Share them with your pair.
- Discuss with them, so you can identify anything that is missing, if what you are planning is realistic, or if it is just right.
### Estimated time in hours
1.5
### What is the purpose of this assignment?
You are getting a deeper understanding of what blockers and distractions that hold you up. But now, you also have to start thinking about what can you do to change this situation and what goals can you start putting in place.
### How to submit
- Create a document with the following titles and add your reflections to it:
- Summary of my current situation
- My current plan
- What distractions do I have / My energy levels during the study
- Original plans I had after I finished the training
- Share your document with 1-2 people with similar situations or experiences
- Discuss your document with them to get some input
- Add the link to this document as a comment on this issue. Make sure it can be commented on by anyone.
|
1.0
|
[PD] Feeling, behaving and acting like a professional in the software industry - From Module-HTML-CSS created by [kfklein15](https://github.com/kfklein15): CodeYourFuture/Module-HTML-CSS#44
### Coursework content
You are back to your Plan your Life as a Developer.
This plan is not something that you can finalise in a short period. You'll need to go back to it a few more times if you'd like to find an **honest description** of your current week and identify the necessary changes to it.
As a week will have passed since you did it, you can **compare** what you wrote with the reality of the week that passed.
**Reflections on your current plan.**
- How much energy did you have when you sat down to study and work on CYF projects?
- How tired or distracted were you?
- How many interruptions did you get?
**Other areas** to reflect:
- On your work (or other studies), did you work longer hours than what you planned? What happened?
- Were there any activities that you dedicated more time to it than what you expected?
- How is your sleep?
- Do you manage to feel rested in the morning?
- How do you start your day?
Reflecting on this, think about these **two topics**:
1. What changes you might need to bring to your life.
2. Define their short/medium/long-term goals.
Then:
- Add these two items to your existing Google Doc. _(Reminder: minimum 50 words each and reviewed with an automated grammar tool)_
- Share them with your pair.
- Discuss with them, so you can identify anything that is missing, if what you are planning is realistic, or if it is just right.
### Estimated time in hours
1.5
### What is the purpose of this assignment?
You are getting a deeper understanding of what blockers and distractions that hold you up. But now, you also have to start thinking about what can you do to change this situation and what goals can you start putting in place.
### How to submit
- Create a document with the following titles and add your reflections to it:
- Summary of my current situation
- My current plan
- What distractions do I have / My energy levels during the study
- Original plans I had after I finished the training
- Share your document with 1-2 people with similar situations or experiences
- Discuss your document with them to get some input
- Add the link to this document as a comment on this issue. Make sure it can be commented on by anyone.
|
non_process
|
feeling behaving and acting like a professional in the software industry from module html css created by codeyourfuture module html css coursework content you are back to your plan your life as a developer this plan is not something that you can finalise in a short period you ll need to go back to it a few more times if you d like to find an honest description of your current week and identify the necessary changes to it as a week will have passed since you did it you can compare what you wrote with the reality of the week that passed reflections on your current plan how much energy did you have when you sat down to study and work on cyf projects how tired or distracted were you how many interruptions did you get other areas to reflect on your work or other studies did you work longer hours than what you planned what happened were there any activities that you dedicated more time to it than what you expected how is your sleep do you manage to feel rested in the morning how do you start your day reflecting on this think about these two topics what changes you might need to bring to your life define their short medium long term goals then add these two items to your existing google doc reminder minimum words each and reviewed with an automated grammar tool share them with your pair discuss with them so you can identify anything that is missing if what you are planning is realistic or if it is just right estimated time in hours what is the purpose of this assignment you are getting a deeper understanding of what blockers and distractions that hold you up but now you also have to start thinking about what can you do to change this situation and what goals can you start putting in place how to submit create a document with the following titles and add your reflections to it summary of my current situation my current plan what distractions do i have my energy levels during the study original plans i had after i finished the training share your document with people with similar situations or experiences discuss your document with them to get some input add the link to this document as a comment on this issue make sure it can be commented on by anyone
| 0
|
161,841
| 6,137,115,450
|
IssuesEvent
|
2017-06-26 11:20:14
|
ProgrammingLife2017/Desoxyribonucleinezuur
|
https://api.github.com/repos/ProgrammingLife2017/Desoxyribonucleinezuur
|
closed
|
Debug mode
|
enhancement priority: C time:2
|
Create a debug mode to ease development
features for the debug mode:
- [x] console
- [ ] show node id on nodes (text overlay might also be useful for end users, like overlaying sequences / annotation names)
|
1.0
|
Debug mode - Create a debug mode to ease development
features for the debug mode:
- [x] console
- [ ] show node id on nodes (text overlay might also be useful for end users, like overlaying sequences / annotation names)
|
non_process
|
debug mode create a debug mode to ease development features for the debug mode console show node id on nodes text overlay might also be useful for end users like overlaying sequences annotation names
| 0
|
20,550
| 3,822,450,168
|
IssuesEvent
|
2016-03-30 01:13:00
|
dataproofer/Dataproofer
|
https://api.github.com/repos/dataproofer/Dataproofer
|
opened
|
Test: check numbers against Benford's Law to look for made up data
|
medium suite: stats test
|
*Please read [how to create a new test](https://github.com/dataproofer/Dataproofer#creating-a-new-test) if you're interested in writing this test.*
>[Benford's Law](https://en.wikipedia.org/wiki/Benford%27s_law) is a theory which states that small digits (1, 2, 3) appear at the beginning of numbers much more frequently than large digits (7, 8, 9). In theory Benford's Law can be used to detect anomalies in accounting practices or election results, though in practice it can easily be misapplied. If you suspect a dataset has been created or modified to deceive, Benford's Law is an excellent first test, but you should always verify your results with an expert before concluding your data have been manipulated.
- [Quartz's Bad Data Guide](https://github.com/Quartz/bad-data-guide#benfords-law-fails)

## More info
* Karroubi's Unlucky 7's?, [FiveThirtyEight](http://fivethirtyeight.com/features/karroubis-unlucky-7s/)
* Radiolab, [Numbers](http://www.radiolab.org/story/91697-numbers/)
|
1.0
|
Test: check numbers against Benford's Law to look for made up data - *Please read [how to create a new test](https://github.com/dataproofer/Dataproofer#creating-a-new-test) if you're interested in writing this test.*
>[Benford's Law](https://en.wikipedia.org/wiki/Benford%27s_law) is a theory which states that small digits (1, 2, 3) appear at the beginning of numbers much more frequently than large digits (7, 8, 9). In theory Benford's Law can be used to detect anomalies in accounting practices or election results, though in practice it can easily be misapplied. If you suspect a dataset has been created or modified to deceive, Benford's Law is an excellent first test, but you should always verify your results with an expert before concluding your data have been manipulated.
- [Quartz's Bad Data Guide](https://github.com/Quartz/bad-data-guide#benfords-law-fails)

## More info
* Karroubi's Unlucky 7's?, [FiveThirtyEight](http://fivethirtyeight.com/features/karroubis-unlucky-7s/)
* Radiolab, [Numbers](http://www.radiolab.org/story/91697-numbers/)
|
non_process
|
test check numbers against benford s law to look for made up data please read if you re interested in writing this test is a theory which states that small digits appear at the beginning of numbers much more frequently than large digits in theory benford s law can be used to detect anomalies in accounting practices or election results though in practice it can easily be misapplied if you suspect a dataset has been created or modified to deceive benford s law is an excellent first test but you should always verify your results with an expert before concluding your data have been manipulated more info karroubi s unlucky s radiolab
| 0
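The Benford check described in the Dataproofer record above can be sketched in Python. This is a minimal illustration of the technique, not the project's actual test code; the function names and the deviation metric are my own choices:

```python
import math
from collections import Counter

# Expected Benford frequencies: P(d) = log10(1 + 1/d) for d = 1..9
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(value):
    """Return the first significant digit of a number, or None for zero."""
    digits = str(abs(value)).lstrip("0.")
    return int(digits[0]) if digits and digits[0].isdigit() else None

def benford_deviation(values):
    """Total absolute deviation between observed and expected first-digit
    frequencies. A large value flags the data for further review."""
    firsts = [d for d in (leading_digit(v) for v in values) if d]
    counts = Counter(firsts)
    n = len(firsts)
    return sum(abs(counts.get(d, 0) / n - p) for d, p in BENFORD.items())
```

As the record itself cautions, a high deviation is only a prompt to verify the data with an expert, never proof of manipulation on its own.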
|
6,328
| 9,368,298,775
|
IssuesEvent
|
2019-04-03 08:22:52
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
NTR: question PAMP binding
|
PomBase multi-species process quick fix
|
Is it possible to have a term for
NTR PAMP binding
--NTR chitin PAMP binding (broad chitin fragment binding)
Fungal pathogens have cell surface chitin that is recognised by the host, and broken down by host chitinases.
These fragments if can elicit a "PAMP triggered immunity".
Therefore to evade this response the fungi produces a "chitinase-like protein" which binds to and masks these fragments from the host.
PAMP binding
Interacting selectively and non-covalently with a pathogen-associated molecular pattern (PAMPs), structures conserved among microbial species.
chitin PAMP binding
Interacting selectively and non-covalently with a chitin fragment which function as a PAMP, structures conserved among microbial species.
|
1.0
|
NTR: question PAMP binding - Is it possible to have a term for
NTR PAMP binding
--NTR chitin PAMP binding (broad chitin fragment binding)
Fungal pathogens have cell surface chitin that is recognised by the host, and broken down by host chitinases.
These fragments if can elicit a "PAMP triggered immunity".
Therefore to evade this response the fungi produces a "chitinase-like protein" which binds to and masks these fragments from the host.
PAMP binding
Interacting selectively and non-covalently with a pathogen-associated molecular pattern (PAMPs), structures conserved among microbial species.
chitin PAMP binding
Interacting selectively and non-covalently with a chitin fragment which function as a PAMP, structures conserved among microbial species.
|
process
|
ntr question pamp binding is it possible to have a term for ntr pamp binding ntr chitin pamp binding broad chitin fragment binding fungal pathogens have cell surface chitin that is recognised by the host and broken down by host chitinases these fragments if can elicit a pamp triggered immunity therefore to evade this response the fungi produces a chitinase like protein which binds to and masks these fragments from the host pamp binding interacting selectively and non covalently with a pathogen associated molecular pattern pamps structures conserved among microbial species chitin pamp binding interacting selectively and non covalently with a chitin fragment which function as a pamp structures conserved among microbial species
| 1
|
3,554
| 6,587,646,924
|
IssuesEvent
|
2017-09-13 21:58:50
|
cptechinc/soft-6-ecomm
|
https://api.github.com/repos/cptechinc/soft-6-ecomm
|
opened
|
_foot.php changes
|
PHP PHP Backend Processwire
|
https://github.com/cptechinc/soft-6-ecomm/blob/c500c082d4146bbbc5c7a95dfc9e94d52ef73ee1/site/templates/_foot.php#L12-L30
Make that configurable from processwire
https://github.com/cptechinc/soft-6-ecomm/blob/c500c082d4146bbbc5c7a95dfc9e94d52ef73ee1/site/templates/_foot.php#L32-L52
Let's find a way to create a sitemap for this
|
1.0
|
_foot.php changes - https://github.com/cptechinc/soft-6-ecomm/blob/c500c082d4146bbbc5c7a95dfc9e94d52ef73ee1/site/templates/_foot.php#L12-L30
Make that configurable from processwire
https://github.com/cptechinc/soft-6-ecomm/blob/c500c082d4146bbbc5c7a95dfc9e94d52ef73ee1/site/templates/_foot.php#L32-L52
Let's find a way to create a sitemap for this
|
process
|
foot php changes make that configurable from processwire let s find a way to create a sitemap for this
| 1
|
111,167
| 9,516,006,301
|
IssuesEvent
|
2019-04-26 07:41:06
|
owncloud/client
|
https://api.github.com/repos/owncloud/client
|
closed
|
Selective sync lists aren't guaranteed to be sorted
|
ReadyToTest bug
|
I somehow ended up with a selective sync blacklist that runs into `Q_ASSERT(std::is_sorted(list.begin(), list.end()));` in discoveryphase.cpp. The lists should be actively sorted at the start of the discovery phase.
|
1.0
|
Selective sync lists aren't guaranteed to be sorted - I somehow ended up with a selective sync blacklist that runs into `Q_ASSERT(std::is_sorted(list.begin(), list.end()));` in discoveryphase.cpp. The lists should be actively sorted at the start of the discovery phase.
|
non_process
|
selective sync lists aren t guaranteed to be sorted i somehow ended up with a selective sync blacklist that runs into q assert std is sorted list begin list end in discoveryphase cpp the lists should be actively sorted at the start of the discovery phase
| 0
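The owncloud record above describes a lookup invariant (`Q_ASSERT(std::is_sorted(...))`) that a persisted selective-sync blacklist could violate, with the fix being to sort the lists actively at the start of the discovery phase. A minimal Python sketch of that defensive pattern, assuming a binary-search prefix lookup (names are illustrative, not the client's actual code):

```python
import bisect

def start_discovery(blacklist):
    # The fix from the issue: sort the persisted list up front instead of
    # asserting that it is already sorted when it arrives from storage.
    return sorted(blacklist)

def is_blacklisted(sorted_paths, path):
    # Binary search is only valid on sorted input: find the closest entry
    # <= path and check whether it is a prefix of path.
    i = bisect.bisect_right(sorted_paths, path)
    return i > 0 and path.startswith(sorted_paths[i - 1])
```

Sorting once at phase start is cheap relative to discovery itself and keeps every later lookup correct regardless of how the list was persisted.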
|
186,833
| 15,085,376,755
|
IssuesEvent
|
2021-02-05 18:34:27
|
hashicorp/terraform-provider-google
|
https://api.github.com/repos/hashicorp/terraform-provider-google
|
closed
|
allow oauth_scopes = default for google_container_cluster
|
documentation size/XS
|
Just like in the UI... thanks!

|
1.0
|
allow oauth_scopes = default for google_container_cluster - Just like in the UI... thanks!

|
non_process
|
allow oauth scopes default for google container cluster just like in the ui thanks
| 0
|
241,179
| 20,105,778,912
|
IssuesEvent
|
2022-02-07 10:20:31
|
CakeWP/block-options
|
https://api.github.com/repos/CakeWP/block-options
|
closed
|
Inline insertions does not work in FSE
|
wordpress-support needs-testing
|
## Support
Plugin works with 5.9 and Twenty Twenty-Two in regular pages/posts.
But in Template Editor (Full Site Editing) all Inline Insertions like Special Characters or Nonbreaking Space are not there.
## Details
- **Support Author**: burnuser
- **Support Link**: https://wordpress.org/support/topic/inline-insertions-does-not-work-in-fse/
- **Latest Activity**: 1 minute ago
- **Spinup Sandbox Site**: https://tastewp.com/new/?pre-installed-plugin-slug=block-options
**Note:** This support issue is created automatically via GitHub action.
|
1.0
|
Inline insertions does not work in FSE -
## Support
Plugin works with 5.9 and Twenty Twenty-Two in regular pages/posts.
But in Template Editor (Full Site Editing) all Inline Insertions like Special Characters or Nonbreaking Space are not there.
## Details
- **Support Author**: burnuser
- **Support Link**: https://wordpress.org/support/topic/inline-insertions-does-not-work-in-fse/
- **Latest Activity**: 1 minute ago
- **Spinup Sandbox Site**: https://tastewp.com/new/?pre-installed-plugin-slug=block-options
**Note:** This support issue is created automatically via GitHub action.
|
non_process
|
inline insertions does not work in fse support plugin works with and twenty twenty two in regular pages posts but in template editor full site editing all inline insertions like special characters or nonbreaking space are not there details support author burnuser support link latest activity minute ago spinup sandbox site note this support issue is created automatically via github action
| 0
|
13,701
| 16,457,229,297
|
IssuesEvent
|
2021-05-21 14:06:14
|
cncf/tag-security
|
https://api.github.com/repos/cncf/tag-security
|
closed
|
less strict requirements, if needed, for security reviewers
|
assessment-process inactive
|
At the moment, we can consider ourselves in "bootstrap" mode. The following qualifications are a bit more representative of the current working group and could serve to facilitate a reasonable process if needed.
# Qualifications
WG will strive to establish that the two mentors have diverse experience, covering some of the ideal qualifications below. Exemptions may be granted by the WG chairs, expected to bootstrap the process but only in extreme cases later on.
To aid in this process, WG members are encouraged to provide a profile with a synopsis of their background with respect to their relevant experience.
### Requirements
* Participation in a security audit
* Participated in prior SAFE Assessment
### Ideal
* performed security audits for diverse organizations
* the recipient of security audits for a software project they manage
* experience using and contributing to open source
Note that it is encouraged to have participation (shadowing) from participants that are not
yet qualified to help them gain the necessary skills to be a SAFE mentor in the future.
|
1.0
|
less strict requirements, if needed, for security reviewers - At the moment, we can consider ourselves in "bootstrap" mode. The following qualifications are a bit more representative of the current working group and could serve to facilitate a reasonable process if needed.
# Qualifications
WG will strive to establish that the two mentors have diverse experience, covering some of the ideal qualifications below. Exemptions may be granted by the WG chairs, expected to bootstrap the process but only in extreme cases later on.
To aid in this process, WG members are encouraged to provide a profile with a synopsis of their background with respect to their relevant experience.
### Requirements
* Participation in a security audit
* Participated in prior SAFE Assessment
### Ideal
* performed security audits for diverse organizations
* the recipient of security audits for a software project they manage
* experience using and contributing to open source
Note that it is encouraged to have participation (shadowing) from participants that are not
yet qualified to help them gain the necessary skills to be a SAFE mentor in the future.
|
process
|
less strict requirements if needed for security reviewers at the moment we can consider ourselves in bootstrap mode the following qualification are a bit more representative of the current working group and could serve to facilitate a reasonable process if needed qualifications wg will strive to establish that the two mentors have diverse experience covering some of the ideal qualifications below exemptions may be granted by the wg chairs expected to bootstrap the process but only in extreme cases later on to aid in this process wg members are encouraged to provide a profile with a synopsis of their background with respect to their relevant experience requirements participation in a security audit participated in prior safe assessment ideal performed security audits for diverse organizations the recipient of security audits for a software project they manage experience using and contributing to open source note that it is encouraged to have participation shadowing from participants that are not yet qualified to help them gain the necessary skills to be a safe mentor in the future
| 1
|
668,240
| 22,575,027,144
|
IssuesEvent
|
2022-06-28 06:25:26
|
unep-grid/map-x-mgl
|
https://api.github.com/repos/unep-grid/map-x-mgl
|
opened
|
The attribute table no longer filters the map
|
bug priority 1
|
Error in web console:
```js
Error: layers.MX-NUMTV-X97MG-7POZ1@CQ3hb.filter[3][1]: string expected, array found
```
|
1.0
|
The attribute table no longer filters the map - Error in web console:
```js
Error: layers.MX-NUMTV-X97MG-7POZ1@CQ3hb.filter[3][1]: string expected, array found
```
|
non_process
|
the attribute table no longer filters the map error in web console js error layers mx numtv filter string expected array found
| 0
|
12,947
| 3,670,310,914
|
IssuesEvent
|
2016-02-21 19:50:58
|
llaumgui/CheckToolsFramework
|
https://api.github.com/repos/llaumgui/CheckToolsFramework
|
closed
|
Add couscous documentation
|
documentation enhancement
|
Build all documentation with couscous.io and store it on github.io.
|
1.0
|
Add couscous documentation - Build all documentation with couscous.io and store it on github.io.
|
non_process
|
add couscous documentation build all documentation with couscous io and store it on github io
| 0
|
8,706
| 11,847,652,269
|
IssuesEvent
|
2020-03-24 12:26:17
|
opendevshop/devshop
|
https://api.github.com/repos/opendevshop/devshop
|
closed
|
New Component: Power Process
|
component component | PowerProcess
|
Migrate Provision Ops Power Process to a DevShop component.
- [x] Follow instructions (improving them along the way, if needed) to create a new DevShop Component called "Power Process": https://github.com/opendevshop/devshop/blob/develop/docs/DEVELOPING.md
- [x] Scope out what else will be needed to complete this.
- [ ] Create 1.0.0 release on the sub-repo only, devshop main is ready and all packages hit 2.x.
|
1.0
|
New Component: Power Process - Migrate Provision Ops Power Process to a DevShop component.
- [x] Follow instructions (improving them along the way, if needed) to create a new DevShop Component called "Power Process": https://github.com/opendevshop/devshop/blob/develop/docs/DEVELOPING.md
- [x] Scope out what else will be needed to complete this.
- [ ] Create 1.0.0 release on the sub-repo only, devshop main is ready and all packages hit 2.x.
|
process
|
new component power process migrate provision ops power process to a devshop component follow instructions improving them along the way if needed to create a new devshop component called power process scope out what else will be needed to complete this create release on the sub repo only devshop main is ready and all packages hit x
| 1
|
216,291
| 16,749,514,401
|
IssuesEvent
|
2021-06-11 20:29:24
|
Deltric/Generations
|
https://api.github.com/repos/Deltric/Generations
|
closed
|
Judgment type information does not reflect Arceus' type
|
beta-testing bug verified
|
**What is the bug?**
The Judgment move does not update its type in either the Moves menu of the stats screen, nor in the battle screen, to reflect the Arceus in play. This is also causing a knock-on issue with Dynamaxing, where if you Dynamax an Arceus after using Judgment on the first turn, the move says "Max Strike", but the actual move will be appropriate for whatever type of Arceus you are using. If you do not use Judgment on the first turn, it will sometimes use "Max Strike", but even this doesn't seem to be consistent - it may be that it only does that if you've just put a plate on Arceus but have not yet used Judgment, or it may be a freak thing, I've not been able to 100% pin it down.
**What are the steps to reproduce the bug?**
1: Use any Arceus knowing Judgment with a plate attached.
2: Try and Dynamax the Arceus and observe the behaviour from the Max Strike move
**What version of Pixelmon and Forge are you on?**
Generations 8.4.2 and Forge 2851
**Please provide any screenshots or crash reports if needed.**
https://www.youtube.com/watch?v=zDYb5JialJ4
|
1.0
|
Judgment type information does not reflect Arceus' type - **What is the bug?**
The Judgment move does not update its type in either the Moves menu of the stats screen, nor in the battle screen, to reflect the Arceus in play. This is also causing a knock-on issue with Dynamaxing, where if you Dynamax an Arceus after using Judgment on the first turn, the move says "Max Strike", but the actual move will be appropriate for whatever type of Arceus you are using. If you do not use Judgment on the first turn, it will sometimes use "Max Strike", but even this doesn't seem to be consistent - it may be that it only does that if you've just put a plate on Arceus but have not yet used Judgment, or it may be a freak thing, I've not been able to 100% pin it down.
**What are the steps to reproduce the bug?**
1: Use any Arceus knowing Judgment with a plate attached.
2: Try and Dynamax the Arceus and observe the behaviour from the Max Strike move
**What version of Pixelmon and Forge are you on?**
Generations 8.4.2 and Forge 2851
**Please provide any screenshots or crash reports if needed.**
https://www.youtube.com/watch?v=zDYb5JialJ4
|
non_process
|
judgment type information does not reflect arceus type what is the bug the judgment move does not update its type in either the moves menu of the stats screen nor in the battle screen to reflect the arceus in play this is also causing a knock on issue with dynamaxing where if you dynamax an arceus after using judgment on the first turn the move says max strike but the actual move will be appropriate for whatever type of arceus you are using if you do not use judgment on the first turn it will sometimes use max strike but even this doesn t seem to be consistent it may be that it only does that if you ve just put a plate on arceus but have not yet used judgment or it may be a freak thing i ve not been able to pin it down what are the steps to reproduce the bug use any arceus knowing judgment with a plate attached try and dynamax the arceus and observe the behaviour from the max strike move what version of pixelmon and forge are you on generations and forge please provide any screenshots or crash reports if needed
| 0
|
16,160
| 20,599,202,092
|
IssuesEvent
|
2022-03-06 01:16:47
|
B2o5T/graphql-eslint
|
https://api.github.com/repos/B2o5T/graphql-eslint
|
opened
|
`GraphQL-ESLint` v4 Roadmap (after Node 12 end of life)
|
process/candidate
|
- [ ] Node 12 drop support
- [ ] rename `parserOptions.operations` to `parserOptions.documents` https://github.com/B2o5T/graphql-eslint/issues/770#issuecomment-967561242
- [ ] Remove `GraphQLRuleTester` from bundle and publish as `@graphql-eslint/rule-tester` https://github.com/B2o5T/graphql-eslint/issues/946#issuecomment-1030802927
- [ ] `alphabetize` changes
  - add `definitions: true` option for `all` config
  - rename `values: ['EnumTypeDefinition']` and `variables: ['OperationDefinition']` options to `values: true` and `variables: true`
- [ ] bring back `possible-type-extension` to `recommended` config
- [ ] Remove `unique-enum-value-names` rule, rename `no-case-insensitive-enum-values-duplicates` to `unique-enum-value-names` with new option `caseSensitive` https://github.com/B2o5T/graphql-eslint/discussions/793
|
1.0
|
`GraphQL-ESLint` v4 Roadmap (after Node 12 end of life) - - [ ] Node 12 drop support
- [ ] rename `parserOptions.operations` to `parserOptions.documents` https://github.com/B2o5T/graphql-eslint/issues/770#issuecomment-967561242
- [ ] Remove `GraphQLRuleTester` from bundle and publish as `@graphql-eslint/rule-tester` https://github.com/B2o5T/graphql-eslint/issues/946#issuecomment-1030802927
- [ ] `alphabetize` changes
  - add `definitions: true` option for `all` config
  - rename `values: ['EnumTypeDefinition']` and `variables: ['OperationDefinition']` options to `values: true` and `variables: true`
- [ ] bring back `possible-type-extension` to `recommended` config
- [ ] Remove `unique-enum-value-names` rule, rename `no-case-insensitive-enum-values-duplicates` to `unique-enum-value-names` with new option `caseSensitive` https://github.com/B2o5T/graphql-eslint/discussions/793
|
process
|
graphql eslint roadmap after node end of life node drop support rename parseroptions operations to parseroptions documents remove graphqlruletester from bundle and publish as graphql eslint rule tester alphabetize changes add definitions true option for all config rename values and variables options to values true and variables true bring back possible type extension to recommended config remove unique enum value names rule rename no case insensitive enum values duplicates to unique enum value names with new option casesensitive
| 1
|
612,197
| 19,006,771,060
|
IssuesEvent
|
2021-11-23 01:38:35
|
WiIIiam278/HuskHomes2
|
https://api.github.com/repos/WiIIiam278/HuskHomes2
|
closed
|
Option to display countdown numbers as titles instead of action bar
|
type: feature request priority: low
|
> If possible, nice how could you add titles to the screen while teleporting?
Original Issue #33 - ReferTV - May 4th, 2021
|
1.0
|
Option to display countdown numbers as titles instead of action bar - > If possible, nice how could you add titles to the screen while teleporting?
Original Issue #33 - ReferTV - May 4th, 2021
|
non_process
|
option to display countdown numbers as titles instead of action bar if possible nice how could you add titles to the screen while teleporting original issue refertv may
| 0
|
14,368
| 17,391,325,456
|
IssuesEvent
|
2021-08-02 07:47:48
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
[dt 3.7.0] adding raster mask crashes darktable
|
bug: wip reproduce: confirmed scope: UI scope: image processing
|
**Describe the bug/issue**
Adding raster mask to module crashes darktable
**To Reproduce**
1. make some parametric mask on one module
2. open another module and add raster mask
3. darktable crashes
See [darktable_bt_H3GA70.txt](https://github.com/darktable-org/darktable/files/6909975/darktable_bt_H3GA70.txt)
* darktable version : 3.7.0~git579.5c3444e43e-1
* Linux - Distro : Kubuntu 20.04
* Graphics card : GeForce GTX 1070
* Graphics driver : 460.91.03
* OpenCL installed : yes
* OpenCL activated : yes
* Desktop : KDE
|
1.0
|
[dt 3.7.0] adding raster mask crashes darktable - **Describe the bug/issue**
Adding raster mask to module crashes darktable
**To Reproduce**
1. make some parametric mask on one module
2. open another module and add raster mask
3. darktable crashes
See [darktable_bt_H3GA70.txt](https://github.com/darktable-org/darktable/files/6909975/darktable_bt_H3GA70.txt)
* darktable version : 3.7.0~git579.5c3444e43e-1
* Linux - Distro : Kubuntu 20.04
* Graphics card : GeForce GTX 1070
* Graphics driver : 460.91.03
* OpenCL installed : yes
* OpenCL activated : yes
* Desktop : KDE
|
process
|
adding raster mask crashes darktable describe the bug issue adding raster mask to module crashes darktable to reproduce make some parametric mask on one module open another module and add raster mask darktable crashes see darktable version linux distro kubuntu graphics card geforce gtx graphics driver opencl installed yes opencl activated yes desktop kde
| 1
|
193,983
| 14,666,992,615
|
IssuesEvent
|
2020-12-29 17:32:37
|
ermiry/cerver
|
https://api.github.com/repos/ermiry/cerver
|
opened
|
Add web examples automated tests
|
examples http tests
|
Add dedicated tests for each web example to correctly test the different methods. We should be able to run each example and an automated test that can perform requests to all the routes. This will prevent us from testing the examples ourselves each time, and will set the basis to perform automated tests directly using GitHub actions.
|
1.0
|
Add web examples automated tests - Add dedicated tests for each web example to correctly test the different methods. We should be able to run each example and an automated test that can perform requests to all the routes. This will prevent us from testing the examples ourselves each time, and will set the basis to perform automated tests directly using GitHub actions.
|
non_process
|
add web examples automated tests add dedicated tests for each web example to correctly test the different methods we should be able to run each example and an automated test that can perform requests to all the routes this will prevent us from testing the examples ourselves each time and will set the basis to perform automated tests directly using github actions
| 0
|
635,327
| 20,384,645,711
|
IssuesEvent
|
2022-02-22 04:45:22
|
phetsims/perennial
|
https://api.github.com/repos/phetsims/perennial
|
opened
|
protect-branches-for-repo.js fails with "Did not find repository: geometric-optics-basics"
|
priority:2-high
|
I'm trying to follow the instructions in https://github.com/phetsims/phet-info/blob/master/checklists/new_repo_checklist.md for creating a new sim repo. The repo is geometric-optics-basics. I'm having problems with this step:
> - [ ] Apply branch protection rules. Use [this script to do so](https://github.com/phetsims/perennial/blob/master/js/scripts/protect-branches-for-repo.js).
Following the instructions at the top of protect-branches-for-repo.js, I'm getting this failure:
```
% node perennial/js/scripts/protect-branches-for-repo.js geometric-optics-basics
(node:46912) UnhandledPromiseRejectionWarning: Error: Did not find repository: geometric-optics-basics
at handleJSONResponse (/Users/cmalley/PhET/GitHub/perennial/js/common/githubProtectBranches.js:136:13)
at IncomingMessage.<anonymous> (/Users/cmalley/PhET/GitHub/perennial/js/common/githubProtectBranches.js:247:32)
at IncomingMessage.emit (events.js:327:22)
at endReadableNT (internal/streams/readable.js:1327:12)
at processTicksAndRejections (internal/process/task_queues.js:80:21)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:46912) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:46912) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
|
1.0
|
protect-branches-for-repo.js fails with "Did not find repository: geometric-optics-basics" - I'm trying to follow the instructions in https://github.com/phetsims/phet-info/blob/master/checklists/new_repo_checklist.md for creating a new sim repo. The repo is geometric-optics-basics. I'm having problems with this step:
> - [ ] Apply branch protection rules. Use [this script to do so](https://github.com/phetsims/perennial/blob/master/js/scripts/protect-branches-for-repo.js).
Following the instructions at the top of protect-branches-for-repo.js, I'm getting this failure:
```
% node perennial/js/scripts/protect-branches-for-repo.js geometric-optics-basics
(node:46912) UnhandledPromiseRejectionWarning: Error: Did not find repository: geometric-optics-basics
at handleJSONResponse (/Users/cmalley/PhET/GitHub/perennial/js/common/githubProtectBranches.js:136:13)
at IncomingMessage.<anonymous> (/Users/cmalley/PhET/GitHub/perennial/js/common/githubProtectBranches.js:247:32)
at IncomingMessage.emit (events.js:327:22)
at endReadableNT (internal/streams/readable.js:1327:12)
at processTicksAndRejections (internal/process/task_queues.js:80:21)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:46912) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:46912) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
|
non_process
|
protect branches for repo js fails with did not find repository geometric optics basics i m trying to follow the instructions in for creating a new sim repo the repo is geometric optics basics i m having problems with this step apply branch protection rules use following the instructions at the top of protect branches for repo js i m getting this failure node perennial js scripts protect branches for repo js geometric optics basics node unhandledpromiserejectionwarning error did not find repository geometric optics basics at handlejsonresponse users cmalley phet github perennial js common githubprotectbranches js at incomingmessage users cmalley phet github perennial js common githubprotectbranches js at incomingmessage emit events js at endreadablent internal streams readable js at processticksandrejections internal process task queues js use node trace warnings to show where the warning was created node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch to terminate the node process on unhandled promise rejection use the cli flag unhandled rejections strict see rejection id node deprecationwarning unhandled promise rejections are deprecated in the future promise rejections that are not handled will terminate the node js process with a non zero exit code
| 0
|
240,323
| 7,800,974,615
|
IssuesEvent
|
2018-06-09 15:40:12
|
qutebrowser/qutebrowser
|
https://api.github.com/repos/qutebrowser/qutebrowser
|
opened
|
Refactor download filename handling
|
component: downloads component: style / refactoring priority: 2 - low
|
The way filenames/paths are handled for downloads grew into quite a mess - we should check if that can be refactored to be more maintainable.
|
1.0
|
Refactor download filename handling - The way filenames/paths are handled for downloads grew into quite a mess - we should check if that can be refactored to be more maintainable.
|
non_process
|
refactor download filename handling they way filenames paths are handled for downloads grew into quite a mess we should check if that can be refactored to be more maintainable
| 0
|
180,444
| 6,649,907,183
|
IssuesEvent
|
2017-09-28 14:41:56
|
SparkDevNetwork/Rock
|
https://api.github.com/repos/SparkDevNetwork/Rock
|
closed
|
Adding a page route to a page with children breaks PageListAsBlocks.lava
|
Priority: Low Status: Confirmed Topic: Lava
|
<!--
If you have found a security bug in Rock and want to report it to us, DO NOT file an issue. Email info@sparkdevnetwork.org and we'll be in touch shortly.
Do you want to ask a question? Are you looking for support? The Ask Rock is the best place for getting support: https://www.rockrms.com/Ask
-->
### Prerequisites
* [x] Put an X between the brackets on this line if you have done all of the following:
* Can you reproduce the problem on a fresh install or the [demo site](http://rock.rocksolidchurchdemo.com/)?
* Did you include your Rock version number and [client culture](https://github.com/SparkDevNetwork/Rock/wiki/Environment-and-Diagnostics-Information) setting?
* Did you [perform a cursory search](https://github.com/issues?q=is%3Aissue+user%3ASparkDevNetwork+-repo%3ASparkDevNetwork%2FSlack) to see if your bug or enhancement is already reported?
### Description
Adding a page route to a page that has children which would be displayed in a Page Menu, which uses PageListAsBlocks.lava, breaks the links generated by PageListAsBlocks.lava. Links are generated ~/330 rather than ~/page/330
### Steps to Reproduce
1. Add a route to a page which uses PageListAsBlocks.lava in a PageMenu block.
2. Reload the page and attempt to follow a link in the Page Menu block to, e.g. reports, which is ~/page/330
3. Get the Rock 'page not found error' because the actual link generated is ~/330, as though 330 is a route to the page
**Expected behavior:**
Adding a route to a page doesn't affect PageMenuAsBlocks.lava, following the normal naming convention of ~/page/###
PageSubNav.lava menus are not affected.
**Actual behavior:**
Adding a route to a page causes PageMenuAsBlocks.lava (in the Rock and Stark themes at least) to omit the /page/ from the address, leading to links using the naming convention of ~/###
### Versions
* **Rock Version:** [6.2 (demo site) 6.9]
* **Client Culture Setting:** [en-US]
|
1.0
|
Adding a page route to a page with children breaks PageListAsBlocks.lava - <!--
If you have found a security bug in Rock and want to report it to us, DO NOT file an issue. Email info@sparkdevnetwork.org and we'll be in touch shortly.
Do you want to ask a question? Are you looking for support? The Ask Rock is the best place for getting support: https://www.rockrms.com/Ask
-->
### Prerequisites
* [x] Put an X between the brackets on this line if you have done all of the following:
* Can you reproduce the problem on a fresh install or the [demo site](http://rock.rocksolidchurchdemo.com/)?
* Did you include your Rock version number and [client culture](https://github.com/SparkDevNetwork/Rock/wiki/Environment-and-Diagnostics-Information) setting?
* Did you [perform a cursory search](https://github.com/issues?q=is%3Aissue+user%3ASparkDevNetwork+-repo%3ASparkDevNetwork%2FSlack) to see if your bug or enhancement is already reported?
### Description
Adding a page route to a page that has children which would be displayed in a Page Menu, which uses PageListAsBlocks.lava, breaks the links generated by PageListAsBlocks.lava. Links are generated ~/330 rather than ~/page/330
### Steps to Reproduce
1. Add a route to a page which uses PageListAsBlocks.lava in a PageMenu block.
2. Reload the page and attempt to follow a link in the Page Menu block to, e.g. reports, which is ~/page/330
3. Get the Rock 'page not found error' because the actual link generated is ~/330, as though 330 is a route to the page
**Expected behavior:**
Adding a route to a page doesn't affect PageMenuAsBlocks.lava, following the normal naming convention of ~/page/###
PageSubNav.lava menus are not affected.
**Actual behavior:**
Adding a route to a page causes PageMenuAsBlocks.lava (in the Rock and Stark themes at least) to omit the /page/ from the address, leading to links using the naming convention of ~/###
### Versions
* **Rock Version:** [6.2 (demo site) 6.9]
* **Client Culture Setting:** [en-US]
|
non_process
|
adding a page route to a page with children breaks pagelistasblocks lava if you have found a security bug in rock and want to report it to us do not file an issue email info sparkdevnetwork org and we ll be in touch shortly do you want to ask a question are you looking for support the ask rock is the best place for getting support prerequisites put an x between the brackets on this line if you have done all of the following can you reproduce the problem on a fresh install or the did you include your rock version number and setting did you to see if your bug or enhancement is already reported description adding a page route to a page that has children which would be displayed in a page menu which uses pagelistasblocks lava breaks the links generated by pagelistasblocks lava links are generated rather than page steps to reproduce add a route to a page which uses pagelistasblocks lava in a pagemenu block reload the page and attempt to follow a link in the page menu block to e g reports which is page get the rock page not found error because the actual link generated is as though is a route to the page expected behavior adding a route to a page doesn t affect pagemenuasblocks lava following the normal naming convention of page pagesubnav lava menus are not affected actual behavior adding a route to a page causes pagemenuasblocks lava in the rock and stark themes at least to omit the page from the address leading to links using the naming convention of versions rock version client culture setting
| 0
|
738,788
| 25,574,404,975
|
IssuesEvent
|
2022-11-30 20:39:13
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
healthcare.api-client.v1.dicom.dicom_stores_test: test_CRUD_dicom_store failed
|
priority: p1 type: bug api: healthcare samples flakybot: issue flakybot: flaky
|
Note: #6822 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 07e8b8145670fc1f9f9ed99e348d8fe28ca2ca7e
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/be880b63-6128-4dd8-b998-bc6eebd17b8c), [Sponge](http://sponge2/be880b63-6128-4dd8-b998-bc6eebd17b8c)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/healthcare/api-client/v1/dicom/dicom_stores_test.py", line 175, in test_CRUD_dicom_store
create()
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/backoff/_sync.py", line 110, in retry
ret = target(*args, **kwargs)
File "/workspace/healthcare/api-client/v1/dicom/dicom_stores_test.py", line 171, in create
dicom_stores.create_dicom_store(
File "/workspace/healthcare/api-client/v1/dicom/dicom_stores.py", line 51, in create_dicom_store
response = request.execute()
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/googleapiclient/_helpers.py", line 131, in positional_wrapper
return wrapped(*args, **kwargs)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/googleapiclient/http.py", line 922, in execute
resp, content = _retry_request(
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/googleapiclient/http.py", line 190, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google_auth_httplib2.py", line 209, in request
self.credentials.before_request(self._request, method, uri, request_headers)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google/auth/credentials.py", line 133, in before_request
self.refresh(request)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/service_account.py", line 410, in refresh
access_token, expiry, _ = _client.jwt_grant(
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py", line 193, in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py", line 165, in _token_endpoint_request
_handle_error_response(response_data)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py", line 60, in _handle_error_response
raise exceptions.RefreshError(error_details, response_data)
google.auth.exceptions.RefreshError: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})</pre></details>
|
1.0
|
healthcare.api-client.v1.dicom.dicom_stores_test: test_CRUD_dicom_store failed - Note: #6822 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 07e8b8145670fc1f9f9ed99e348d8fe28ca2ca7e
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/be880b63-6128-4dd8-b998-bc6eebd17b8c), [Sponge](http://sponge2/be880b63-6128-4dd8-b998-bc6eebd17b8c)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/healthcare/api-client/v1/dicom/dicom_stores_test.py", line 175, in test_CRUD_dicom_store
create()
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/backoff/_sync.py", line 110, in retry
ret = target(*args, **kwargs)
File "/workspace/healthcare/api-client/v1/dicom/dicom_stores_test.py", line 171, in create
dicom_stores.create_dicom_store(
File "/workspace/healthcare/api-client/v1/dicom/dicom_stores.py", line 51, in create_dicom_store
response = request.execute()
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/googleapiclient/_helpers.py", line 131, in positional_wrapper
return wrapped(*args, **kwargs)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/googleapiclient/http.py", line 922, in execute
resp, content = _retry_request(
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/googleapiclient/http.py", line 190, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google_auth_httplib2.py", line 209, in request
self.credentials.before_request(self._request, method, uri, request_headers)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google/auth/credentials.py", line 133, in before_request
self.refresh(request)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/service_account.py", line 410, in refresh
access_token, expiry, _ = _client.jwt_grant(
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py", line 193, in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py", line 165, in _token_endpoint_request
_handle_error_response(response_data)
File "/workspace/healthcare/api-client/v1/dicom/.nox/py-3-9/lib/python3.9/site-packages/google/oauth2/_client.py", line 60, in _handle_error_response
raise exceptions.RefreshError(error_details, response_data)
google.auth.exceptions.RefreshError: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})</pre></details>
|
non_process
|
healthcare api client dicom dicom stores test test crud dicom store failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output traceback most recent call last file workspace healthcare api client dicom dicom stores test py line in test crud dicom store create file workspace healthcare api client dicom nox py lib site packages backoff sync py line in retry ret target args kwargs file workspace healthcare api client dicom dicom stores test py line in create dicom stores create dicom store file workspace healthcare api client dicom dicom stores py line in create dicom store response request execute file workspace healthcare api client dicom nox py lib site packages googleapiclient helpers py line in positional wrapper return wrapped args kwargs file workspace healthcare api client dicom nox py lib site packages googleapiclient http py line in execute resp content retry request file workspace healthcare api client dicom nox py lib site packages googleapiclient http py line in retry request resp content http request uri method args kwargs file workspace healthcare api client dicom nox py lib site packages google auth py line in request self credentials before request self request method uri request headers file workspace healthcare api client dicom nox py lib site packages google auth credentials py line in before request self refresh request file workspace healthcare api client dicom nox py lib site packages google service account py line in refresh access token expiry client jwt grant file workspace healthcare api client dicom nox py lib site packages google client py line in jwt grant response data token endpoint request request token uri body file workspace healthcare api client dicom nox py lib site packages google client py line in token endpoint request handle error response response data file workspace healthcare api client dicom nox py lib site packages google client py line in handle error response raise exceptions refresherror error details response data google auth exceptions refresherror invalid grant invalid jwt signature error invalid grant error description invalid jwt signature
| 0
|
184,856
| 32,060,476,850
|
IssuesEvent
|
2023-09-24 15:50:11
|
Braekpo1nt/MCTManager
|
https://api.github.com/repos/Braekpo1nt/MCTManager
|
closed
|
Make BoundingBoxDTO return a new BoundingBox
|
redesign
|
## Description
Instead of `BoundingBoxDTO` returning a reference to the same `BoundingBox` object with the `getBoundingBox()` method, it should instead follow the pattern of `Vector` and have a `toBoundingBox()` method that returns a new `BoundingBox` object every time. This way, there will be no confusion as to whether or not editing the return value of the `getBoundingBox()` method will also change the `BoundingBoxDTO` it came from, and changing the `BoundingBoxDTO` will always result in `toBoundingBox()` returning the `BoundingBox` that reflects the current state of the `BoundingBoxDTO`.
|
1.0
|
Make BoundingBoxDTO return a new BoundingBox - ## Description
Instead of `BoundingBoxDTO` returning a reference to the same `BoundingBox` object with the `getBoundingBox()` method, it should instead follow the pattern of `Vector` and have a `toBoundingBox()` method that returns a new `BoundingBox` object every time. This way, there will be no confusion as to whether or not editing the return value of the `getBoundingBox()` method will also change the `BoundingBoxDTO` it came from, and changing the `BoundingBoxDTO` will always result in `toBoundingBox()` returning the `BoundingBox` that reflects the current state of the `BoundingBoxDTO`.
|
non_process
|
make boundingboxdto return a new boundingbox description instead of boundingboxdto returning a reference to the same boundingbox object with the getboundingbox method it should instead follow the pattern of vector and have a toboundingbox method that returns a new boundingbox object every time this way there will be no confusion as to whether or not editing the return value of the getboundingbox method will also change the boundingboxdto it came from and changing the boundingboxdto will always result in toboundingbox returning the boundingbox that reflects the current state of the boundingboxdto
| 0
|
184,217
| 31,841,216,738
|
IssuesEvent
|
2023-09-14 16:26:49
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
reopened
|
Copy of [Design] Create Wireframe for Upcoming Appointments Progress Loading Indicator
|
design ux HCE-Checkin
|
## User Story
The call to retrieve the upcoming appointments will take place after the new landing page loads. Therefore, we need a progress indicator for the fact that the upcoming appointments are loading.
## Tasks
- [ ] Create final mockups with callouts
- [ ] Decide: Should a content and accessibility review be part of this ticket or separate tickets due to scope?
## Acceptance Criteria
- [ ] UI review meeting with product/UX team for feature capabilities
- [ ] UI review meeting with engineering for layout and callouts (can be the same meeting as above)
- [ ] Wireframe available on Sketch Cloud
- [ ] If the wireframes applies to an error state or text message, then update the [GitHub source of truth](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/products/health-care/checkin/design/text-and-error-messages.md) documentation.
|
1.0
|
Copy of [Design] Create Wireframe for Upcoming Appointments Progress Loading Indicator - ## User Story
The call to retrieve the upcoming appointments will take place after the new landing page loads. Therefore, we need a progress indicator for the fact that the upcoming appointments are loading.
## Tasks
- [ ] Create final mockups with callouts
- [ ] Decide: Should a content and accessibility review be part of this ticket or separate tickets due to scope?
## Acceptance Criteria
- [ ] UI review meeting with product/UX team for feature capabilities
- [ ] UI review meeting with engineering for layout and callouts (can be the same meeting as above)
- [ ] Wireframe available on Sketch Cloud
- [ ] If the wireframes applies to an error state or text message, then update the [GitHub source of truth](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/products/health-care/checkin/design/text-and-error-messages.md) documentation.
|
non_process
|
copy of create wireframe for upcoming appointments progress loading indicator user story the call the retrieve the upcoming appointments will take place after the new landing page loads therefore we need a progress indicator for the fact that the upcoming appointments are loading tasks create final mockups with callouts decide should a content and accessibility review be part of this ticket or separate tickets due to scope acceptance criteria ui review meeting with product ux team for feature capabilities ui review meeting with engineering for layout and callouts can be the same meeting as above wireframe available on sketch cloud if the wireframes applies to an error state or text message then update the documentation
| 0
|
213,089
| 7,245,790,785
|
IssuesEvent
|
2018-02-14 19:19:12
|
aspnet/EntityFrameworkCore
|
https://api.github.com/repos/aspnet/EntityFrameworkCore
|
closed
|
Query: Allow use of AsQueryable method
|
priority-stretch type-bug
|
## Update
Modifying the description to track support for `AsQueryable` in linq queries.
`AsQueryable` method is required when user wants to pass Expression<Func> to linq method which is hanging off a navigation property since navigation is of type IEnumerable. With the relinq fix now we can get parsed query with AsQueryable but we are client eval-ing afterwards. This issue is to track work needed on EF Core side to translate query to server.
Query
`var query3 = db.Products.Select(p => p.ProductCategories.AsQueryable().Select(pc => pc.Category).Where(Category.IsGenre)).ToList();`
QueryExecution:
```
dbug: Microsoft.EntityFrameworkCore.Query[10104]
Optimized query model:
'from Product p in DbSet<Product>
select
from ProductCategory pc in
(from ProductCategory <generated>_1 in DbSet<ProductCategory>
where Property([p], "ProductId") ?= Property([<generated>_1], "ProductId")
select [<generated>_1]).AsQueryable()
join Category pc.Category in DbSet<Category>
on Property([pc], "CategoryId") equals Property([pc.Category], "CategoryId")
where [pc.Category].ParentId == (Nullable<Guid>)__Genre_0
select [pc.Category]'
warn: Microsoft.EntityFrameworkCore.Query[20500]
The LINQ expression 'join Category pc.Category in value(Microsoft.EntityFrameworkCore.Query.Internal.EntityQueryable`1[EFSampleApp.Category]) on Property([pc], "CategoryId") equals Property([pc.Category], "CategoryId")' could not be translated and will be evaluated locally.
warn: Microsoft.EntityFrameworkCore.Query[20500]
The LINQ expression 'where ([pc.Category].ParentId == Convert(__Genre_0, Nullable`1))' could not be translated and will be evaluated locally.
dbug: Microsoft.EntityFrameworkCore.Query[10107]
(QueryContext queryContext) => IEnumerable<IOrderedQueryable<Category>> _InterceptExceptions(
source: IEnumerable<IOrderedQueryable<Category>> _ShapedQuery(
queryContext: queryContext,
shaperCommandContext: SelectExpression:
SELECT [p].[ProductId]
FROM [Commerce].[Product] AS [p],
shaper: TypedProjectionShaper<ValueBufferShaper, ValueBuffer, IOrderedQueryable<Category>>),
contextType: EFSampleApp.MyContext,
logger: DiagnosticsLogger<Query>,
queryContext: queryContext)
```
## Original Issue
Hi!
I'm trying to map an existing database to EF Core. Our project has a terrible model where we have to compare the `ParentId` to a specific `Guid` to find out the type of a row. E.g. we have the table `Categories` and each Guid identifies `Genre`, `Mood`, etc. (we have a music app).
So I'm trying to write this property in the Category class, but if I do, I'm unable to use `Include` because it can't be translated:
``` csharp
public virtual bool IsGenre => ParentId == Genre;
private static Guid Genre = Guid.Parse("3CA9FA61-EB62-4480-B476-867F78A9ADB3")
```
If I instead do `ctx.Categories.Where(c => c.ParentId == Guid.Parse("3CA9FA61-EB62-4480-B476-867F78A9ADB3")` it works perfectly.
I'm wondering if there's any way to move this to the Category class so I can avoid copy and pasting for each query I need to write.
I tried to manually create the `Expression`:
``` csharp
public static Expression<Func<Category, bool>> IsGenre = c => c.ParentId == Genre;
```
but I get the following error on `p.ProductCategories.Select(pc => pc.Category).Where(Category.IsGenre)`:
```
Error CS1929 'IEnumerable<Category>' does not contain a definition for 'Where' and the best extension method overload 'Queryable.Where<Category>(IQueryable<Category>, Expression<Func<Category, bool>>)' requires a receiver of type 'IQueryable<Category>'
```
With `.AsQueryable()` I'm able to compile the program but I get `This overload of the method 'System.Linq.Queryable.AsQueryable' is currently not supported`.
Thanks!
|
1.0
|
Query: Allow use of AsQueryable method - ## Update
Modifying the description to track support for `AsQueryable` in linq queries.
`AsQueryable` method is required when user wants to pass Expression<Func> to linq method which is hanging of an navigation property since navigation is of type IEnumerable. With the relinq fix now we can get parsed query with AsQueryable but we are client eval-ing afterwards. This issue is to track work needed on EF Core side to translate query to server.
Query
`var query3 = db.Products.Select(p => p.ProductCategories.AsQueryable().Select(pc => pc.Category).Where(Category.IsGenre)).ToList();`
QueryExecution:
```
dbug: Microsoft.EntityFrameworkCore.Query[10104]
Optimized query model:
'from Product p in DbSet<Product>
select
from ProductCategory pc in
(from ProductCategory <generated>_1 in DbSet<ProductCategory>
where Property([p], "ProductId") ?= Property([<generated>_1], "ProductId")
select [<generated>_1]).AsQueryable()
join Category pc.Category in DbSet<Category>
on Property([pc], "CategoryId") equals Property([pc.Category], "CategoryId")
where [pc.Category].ParentId == (Nullable<Guid>)__Genre_0
select [pc.Category]'
warn: Microsoft.EntityFrameworkCore.Query[20500]
The LINQ expression 'join Category pc.Category in value(Microsoft.EntityFrameworkCore.Query.Internal.EntityQueryable`1[EFSampleApp.Category]) on Property([pc], "CategoryId") equals Property([pc.Category], "CategoryId")' could not be translated and will be evaluated locally.
warn: Microsoft.EntityFrameworkCore.Query[20500]
The LINQ expression 'where ([pc.Category].ParentId == Convert(__Genre_0, Nullable`1))' could not be translated and will be evaluated locally.
dbug: Microsoft.EntityFrameworkCore.Query[10107]
(QueryContext queryContext) => IEnumerable<IOrderedQueryable<Category>> _InterceptExceptions(
source: IEnumerable<IOrderedQueryable<Category>> _ShapedQuery(
queryContext: queryContext,
shaperCommandContext: SelectExpression:
SELECT [p].[ProductId]
FROM [Commerce].[Product] AS [p],
shaper: TypedProjectionShaper<ValueBufferShaper, ValueBuffer, IOrderedQueryable<Category>>),
contextType: EFSampleApp.MyContext,
logger: DiagnosticsLogger<Query>,
queryContext: queryContext)
```
## Original Issue
Hi!
I'm trying to map an existing database to EF Core. Our project has a terrible model where we have to compare the `ParentId` to a specific `Guid` to find out the type of a row. E.g. we have the table `Categories` and each Guid identifies `Genre`, `Mood`, etc. (we have a music app).
So I'm trying to write this property in the Category class, but if I do, I'm unable to use `Include` because it can't be translated:
``` csharp
public virtual bool IsGenre => ParentId == Genre;
private static Guid Genre = Guid.Parse("3CA9FA61-EB62-4480-B476-867F78A9ADB3")
```
If I instead do `ctx.Categories.Where(c => c.ParentId == Guid.Parse("3CA9FA61-EB62-4480-B476-867F78A9ADB3")` it works perfectly.
I'm wondering if there's any way to move this to the Category class so I can avoid copy and pasting for each query I need to write.
I tried to manually create the `Expression`:
``` csharp
public static Expression<Func<Category, bool>> IsGenre = c => c.ParentId == Genre;
```
but I get the following error on `p.ProductCategories.Select(pc => pc.Category).Where(Category.IsGenre)`:
```
Error CS1929 'IEnumerable<Category>' does not contain a definition for 'Where' and the best extension method overload 'Queryable.Where<Category>(IQueryable<Category>, Expression<Func<Category, bool>>)' requires a receiver of type 'IQueryable<Category>'
```
With `.AsQueryable()` I'm able to compile the program but I get `This overload of the method 'System.Linq.Queryable.AsQueryable' is currently not supported`.
Thanks!
|
non_process
|
query allow use of asqueryable method update modifying the description to track support for asqueryable in linq queries asqueryable method is required when user wants to pass expression to linq method which is hanging of an navigation property since navigation is of type ienumerable with the relinq fix now we can get parsed query with asqueryable but we are client eval ing afterwards this issue is to track work needed on ef core side to translate query to server query var db products select p p productcategories asqueryable select pc pc category where category isgenre tolist queryexecution dbug microsoft entityframeworkcore query optimized query model from product p in dbset select from productcategory pc in from productcategory in dbset where property productid property productid select asqueryable join category pc category in dbset on property categoryid equals property categoryid where parentid nullable genre select warn microsoft entityframeworkcore query the linq expression join category pc category in value microsoft entityframeworkcore query internal entityqueryable on property categoryid equals property categoryid could not be translated and will be evaluated locally warn microsoft entityframeworkcore query the linq expression where parentid convert genre nullable could not be translated and will be evaluated locally dbug microsoft entityframeworkcore query querycontext querycontext ienumerable interceptexceptions source ienumerable shapedquery querycontext querycontext shapercommandcontext selectexpression select from as shaper typedprojectionshaper contexttype efsampleapp mycontext logger diagnosticslogger querycontext querycontext original issue hi i m trying to map an existing database to ef core our project has a terrible model where we have to compare the parentid to a specific guid to find out the type of a row e g we have the table categories and each guid identifies genre mood etc we have a music app so i m trying to write this property in the 
category class but if i do i m unable to use include because it can t be translated csharp public virtual bool isgenre parentid genre private static guid genre guid parse if i instead do ctx categories where c c parentid guid parse it works perfectly i m wondering if there s any way to move this to the category class so i can avoid copy and pasting for each query i need to write i tried to manually create the expression csharp public static expression isgenre c c parentid genre but i get the following error on p productcategories select pc pc category where category isgenre error ienumerable does not contain a definition for where and the best extension method overload queryable where iqueryable expression requires a receiver of type iqueryable with asqueryable i m able to compile the program but i get this overload of the method system linq queryable asqueryable is currently not supported thanks
| 0
|
122,366
| 16,107,637,879
|
IssuesEvent
|
2021-04-27 16:45:55
|
phetsims/natural-selection
|
https://api.github.com/repos/phetsims/natural-selection
|
opened
|
Review PhET-iO API changes between 1.2 and 1.3
|
design:phet-io
|
When https://github.com/phetsims/natural-selection/issues/271 and https://github.com/phetsims/phet-io-wrappers/issues/406 have been addressed, a lurking question is whether the Diff wrapper will identify unintended changes between 1.2 and 1.3.
Review those changes, and verify.
|
1.0
|
Review PhET-iO API changes between 1.2 and 1.3 - When https://github.com/phetsims/natural-selection/issues/271 and https://github.com/phetsims/phet-io-wrappers/issues/406 have been addressed, a lurking question is whether the Diff wrapper will identify unintended changes between 1.2 and 1.3.
Review those changes, and verify.
|
non_process
|
review phet io api changes between and when and have been addressed a lurking question is whether the diff wrapper will identify unintended changes between and review those changes and verify
| 0
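Each record above pairs a `label` string (`process`/`non_process`) with a `binary_label` of 1/0, and the trailing `text` field reads like a lowercased, de-punctuated rendering of `text_combine` (URLs and digits dropped, punctuation turned into spaces). The actual preprocessing pipeline is not shown in this dump, so the regexes below are assumptions; as a minimal sketch, the normalization could be reproduced like this:

```python
import re

# Assumed label mapping, inferred from the label / binary_label columns above.
label_to_binary = {"process": 1, "non_process": 0}

def normalize_issue_text(text: str) -> str:
    """Approximate the `text` column: lowercase, drop URLs,
    replace every non-letter with a space, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # URLs do not appear in `text`
    text = re.sub(r"[^a-z\s]", " ", text)      # digits and punctuation removed
    return re.sub(r"\s+", " ", text).strip()

# Checking against a title visible in the records above:
print(normalize_issue_text("Review PhET-iO API changes between 1.2 and 1.3"))
# → review phet io api changes between and
```

On the sample title this reproduces the prefix of the stored `text` value for that record, which is why the regex choices above seem plausible, but they remain a guess at the dataset's real preprocessing.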
|
14,492
| 17,604,249,723
|
IssuesEvent
|
2021-08-17 15:10:55
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[feature][processing] A brand new spatiotemporal ST-DBSCAN clustering algorithm (Request in QGIS)
|
Processing Alg 3.22
|
### Request for documentation
From pull request QGIS/qgis#44041
Author: @nirvn
QGIS version: 3.22
**[feature][processing] A brand new spatiotemporal ST-DBSCAN clustering algorithm**
### PR Description:
## Description
This PR implements a brand new spatiotemporal ST-DBSCAN clustering algorithm in QGIS' processing toolbox.
It's a pretty straightforward copy/paste of @nyalldawson 's DBSCAN clustering algorithm, with a temporal component added.
Obligatory screenshot:

### Commits tagged with [need-docs] or [FEATURE]
"[feature][processing] A brand new spatiotemporal ST-DBSCAN clustering algorithm"
|
1.0
|
[feature][processing] A brand new spatiotemporal ST-DBSCAN clustering algorithm (Request in QGIS) - ### Request for documentation
From pull request QGIS/qgis#44041
Author: @nirvn
QGIS version: 3.22
**[feature][processing] A brand new spatiotemporal ST-DBSCAN clustering algorithm**
### PR Description:
## Description
This PR implements a brand new spatiotemporal ST-DBSCAN clustering algorithm in QGIS' processing toolbox.
It's a pretty straightforward copy/paste of @nyalldawson 's DBSCAN clustering algorithm, with a temporal component added.
Obligatory screenshot:

### Commits tagged with [need-docs] or [FEATURE]
"[feature][processing] A brand new spatiotemporal ST-DBSCAN clustering algorithm"
|
process
|
a brand new spatiotemporal st dbscan clustering algorithm request in qgis request for documentation from pull request qgis qgis author nirvn qgis version a brand new spatiotemporal st dbscan clustering algorithm pr description description this pr implements a brand new spatiotemporal st dbscan clustering algorithm in qgis processing toolbox it s a pretty straightforward copy paste of nyalldawson s dbscan clustering algorithm with a temporal component added obligatory screenshot commits tagged with or a brand new spatiotemporal st dbscan clustering algorithm
| 1
|
442,087
| 12,737,394,989
|
IssuesEvent
|
2020-06-25 18:41:20
|
LLK/scratch-www
|
https://api.github.com/repos/LLK/scratch-www
|
closed
|
Update credits page
|
enhancement good first issue help wanted priority 2
|
https://scratch.mit.edu/credits should have the following updates:
**Add to MIT Scratch Team section:**
Amielle (originalwow)
Craig (noncanonical)
Joshua (Class12321)
Ellen (SunnyDay4aBlueJay)
**Remove from MIT Scratch Team section:**
Elizabeth (rmiel)
**Add to Past Contributors:**
Elizabeth Foster
**Remove from Past Contributors:**
Ellen Daoust
## files to change
* src/views/credits/people.json
* src/views/credits/credits.jsx
|
1.0
|
Update credits page - https://scratch.mit.edu/credits should have the following updates:
**Add to MIT Scratch Team section:**
Amielle (originalwow)
Craig (noncanonical)
Joshua (Class12321)
Ellen (SunnyDay4aBlueJay)
**Remove from MIT Scratch Team section:**
Elizabeth (rmiel)
**Add to Past Contributors:**
Elizabeth Foster
**Remove from Past Contributors:**
Ellen Daoust
## files to change
* src/views/credits/people.json
* src/views/credits/credits.jsx
|
non_process
|
update credits page should have the following updates add to mit scratch team section amielle originalwow craig noncanonical joshua ellen remove from mit scratch team section elizabeth rmiel add to past contributors elizabeth foster remove from past contributors ellen daoust files to change src views credits people json src views credits credits jsx
| 0
|
10,791
| 13,609,018,479
|
IssuesEvent
|
2020-09-23 04:02:07
|
googleapis/java-recaptchaenterprise
|
https://api.github.com/repos/googleapis/java-recaptchaenterprise
|
closed
|
Dependency Dashboard
|
api: recaptchaenterprise type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.truth-truth-1.x -->deps: update dependency com.google.truth:truth to v1.0.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-recaptchaenterprise-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-recaptchaenterprise to v1
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.truth-truth-1.x -->deps: update dependency com.google.truth:truth to v1.0.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-recaptchaenterprise-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-recaptchaenterprise to v1
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any build deps update dependency org apache maven plugins maven project info reports plugin to deps update dependency com google truth truth to chore deps update dependency com google cloud google cloud recaptchaenterprise to check this option to rebase all the above open prs at once check this box to trigger a request for renovate to run again on this repository
| 1
|
33,318
| 7,089,066,060
|
IssuesEvent
|
2018-01-12 00:23:03
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
ClassRegistry::init() does not use plugin information for internal model mapping.
|
Defect
|
This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 2.6.12 (but seems to be in branch 2.x as well)
* Platform and Target: not relevant.
### What you did
```
$model1 = ClassRegistry::init('MyClass');
$model2 = ClassRegistry::init('MyPlugin.MyClass');
```
### What happened
`$model2` is the same as `$model1`
### What you expected to happen
`$model2` should be the class loaded from MyPlugin
### Reason
`ClassRegistry::init()` only uses the `$alias` (without plugin information) to duplicate and map the model:
```
$model = $_this->_duplicate($alias, $class);
if ($model) {
$_this->map($alias, $class);
return $model;
}
```
That needs to be enhanced to use plugin information for the key as well.
|
1.0
|
ClassRegistry::init() does not use plugin information for internal model mapping. - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 2.6.12 (but seems to be in branch 2.x as well)
* Platform and Target: not relevant.
### What you did
```
$model1 = ClassRegistry::init('MyClass');
$model2 = ClassRegistry::init('MyPlugin.MyClass');
```
### What happened
`$model2` is the same as `$model1`
### What you expected to happen
`$model2` should be the class loaded from MyPlugin
### Reason
`ClassRegistry::init()` only uses the `$alias` (without plugin information) to duplicate and map the model:
```
$model = $_this->_duplicate($alias, $class);
if ($model) {
$_this->map($alias, $class);
return $model;
}
```
That needs to be enhanced to use plugin information for the key as well.
|
non_process
|
classregistry init does not use plugin information for internal model mapping this is a multiple allowed bug enhancement feature discussion rfc cakephp version but seems to be in branch x as well platform and target not relevant what you did classregistry init myclass classregistry init myplugin myclass what happened is the same as what you expected to happen should be the class loaded from myplugin reason classregistry init only uses the alias without plugin information to duplicate and map the model model this duplicate alias class if model this map alias class return model that needs to be enhanced to use plugin information for the key as well
| 0
|
5,580
| 8,432,701,251
|
IssuesEvent
|
2018-10-17 03:30:02
|
ppy/osu-web
|
https://api.github.com/repos/ppy/osu-web
|
closed
|
Reprocessing of thumbnails of older maps or manual editing/reprocessing of thumbnails/headers on old submitted maps
|
beatmap beatmap processor
|
Going through my catalog of maps i have a lot of older (and some newer ones in the last year) which dont have a thumbnail or header (all of them were uploaded with a background).
I have had a couple of people who were wondering if i deleted the BG of my now 2+year old maps without uploading only to have to tell them that that is not the case.
I am not the only one affected by this as a lot of older classic maps suffer from the same thumbnailless fate or different visual bugs which make their header glitch-y
In particular this one https://osu.ppy.sh/beatmapsets/656040#osu/1390114 doesnt have a thumbnail but shows a header, which is especially wierd https://puu.sh/BCthM/53604e92d4.png.
even if i wanted to revive and resubmit them so a thumbnail/header is reprocessed i would need to do that 22+ times myself, for which i dont have the slots for as i am actively mapping (And i wouldnt like reviving the dead for this).
On the old site all of these maps show an image so this must be a problem on the new site.
My profile for reference https://osu.ppy.sh/users/2140676 everything including https://puu.sh/BCtlB/9c6ea6ef9d.png and further below.
|
1.0
|
Reprocessing of thumbnails of older maps or manual editing/reprocessing of thumbnails/headers on old submitted maps - Going through my catalog of maps i have a lot of older (and some newer ones in the last year) which dont have a thumbnail or header (all of them were uploaded with a background).
I have had a couple of people who were wondering if i deleted the BG of my now 2+year old maps without uploading only to have to tell them that that is not the case.
I am not the only one affected by this as a lot of older classic maps suffer from the same thumbnailless fate or different visual bugs which make their header glitch-y
In particular this one https://osu.ppy.sh/beatmapsets/656040#osu/1390114 doesnt have a thumbnail but shows a header, which is especially wierd https://puu.sh/BCthM/53604e92d4.png.
even if i wanted to revive and resubmit them so a thumbnail/header is reprocessed i would need to do that 22+ times myself, for which i dont have the slots for as i am actively mapping (And i wouldnt like reviving the dead for this).
On the old site all of these maps show an image so this must be a problem on the new site.
My profile for reference https://osu.ppy.sh/users/2140676 everything including https://puu.sh/BCtlB/9c6ea6ef9d.png and further below.
|
process
|
reprocessing of thumbnails of older maps or manual editing reprocessing of thumbnails headers on old submitted maps going through my catalog of maps i have a lot of older and some newer ones in the last year which dont have a thumbnail or header all of them were uploaded with a background i have had a couple of people who were wondering if i deleted the bg of my now year old maps without uploading only to have to tell them that that is not the case i am not the only one affected by this as a lot of older classic maps suffer from the same thumbnailless fate or different visual bugs which make their header glitch y in particular this one doesnt have a thumbnail but shows a header which is especially wierd even if i wanted to revive and resubmit them so a thumbnail header is reprocessed i would need to do that times myself for which i dont have the slots for as i am actively mapping and i wouldnt like reviving the dead for this on the old site all of these maps show an image so this must be a problem on the new site my profile for reference including and further below
| 1
|
309,433
| 23,295,397,396
|
IssuesEvent
|
2022-08-06 13:36:56
|
lizliz/teaspoon
|
https://api.github.com/repos/lizliz/teaspoon
|
closed
|
python version issues
|
bug documentation
|
In fighting with the other issue (#16 ) I've had to change my numpy version several times. We should specify on the documentation page if there are certain versions of packages that are needed (or certain versions that dont work). For example, the ordinal partition network test gives me a segfault with the most recent numpy version, with numpy==1.15.0 it runs but the test fails, but I assume it works for @lizliz since she setup the unittests stuff.
Also, the `sortedcontainers` package is needed in one of the tests, but it isn't on the list in the documentation of things that are needed to pip install.
|
1.0
|
python version issues - In fighting with the other issue (#16 ) I've had to change my numpy version several times. We should specify on the documentation page if there are certain versions of packages that are needed (or certain versions that dont work). For example, the ordinal partition network test gives me a segfault with the most recent numpy version, with numpy==1.15.0 it runs but the test fails, but I assume it works for @lizliz since she setup the unittests stuff.
Also, the `sortedcontainers` package is needed in one of the tests, but it isn't on the list in the documentation of things that are needed to pip install.
|
non_process
|
python version issues in fighting with the other issue i ve had to change my numpy version several times we should specify on the documentation page if there are certain versions of packages that are needed or certain versions that dont work for example the ordinal partition network test gives me a segfault with the most recent numpy version with numpy it runs but the test fails but i assume it works for lizliz since she setup the unittests stuff also the sortedcontainers package is needed in one of the tests but it isn t on the list in the documentation of things that are needed to pip install
| 0
|
20,757
| 27,488,962,132
|
IssuesEvent
|
2023-03-04 11:32:26
|
hsmusic/hsmusic-wiki
|
https://api.github.com/repos/hsmusic/hsmusic-wiki
|
opened
|
Detect & report content tag errors before building the site
|
scope: data processing type: dev friendliness
|
Right now content errors are only reported when a page is actually built (reached in static-build or loaded in browser). That's a bit annoying and totally unnecessary, since page builds don't mutate data or necessarily expose much of anything needed to transform content, besides language strings.
|
1.0
|
Detect & report content tag errors before building the site - Right now content errors are only reported when a page is actually built (reached in static-build or loaded in browser). That's a bit annoying and totally unnecessary, since page builds don't mutate data or necessarily expose much of anything needed to transform content, besides language strings.
|
process
|
detect report content tag errors before building the site right now content errors are only reported when a page is actually built reached in static build or loaded in browser that s a bit annoying and totally unnecessary since page builds don t mutate data or necessarily expose much of anything needed to transform content besides language strings
| 1
|
19,161
| 25,258,317,561
|
IssuesEvent
|
2022-11-15 20:11:46
|
microsoft/react-native-windows
|
https://api.github.com/repos/microsoft/react-native-windows
|
closed
|
RNW stable releases needs to update to latest RN core versions
|
enhancement Area: Release Process
|
### Problem Description
We should update our stable branches of RNW to require the latest matching version of RN.
TLDR: In publishing RN 0.71-RC0, all older versions of RN were broken for Android builds. See https://github.com/facebook/react-native/issues/35210 for details.
RN Core has pushed patches back to RN 0.63.
### Steps To Reproduce
See https://github.com/facebook/react-native/issues/35210
### Expected Results
_No response_
### CLI version
npx react-native --version
### Environment
```markdown
npx react-native info
```
### Target Platform Version
_No response_
### Target Device(s)
_No response_
### Visual Studio Version
_No response_
### Build Configuration
_No response_
### Snack, code example, screenshot, or link to a repository
_No response_
|
1.0
|
RNW stable releases needs to update to latest RN core versions - ### Problem Description
We should update our stable branches of RNW to require the latest matching version of RN.
TLDR: In publishing RN 0.71-RC0, all older versions of RN were broken for Android builds. See https://github.com/facebook/react-native/issues/35210 for details.
RN Core has pushed patches back to RN 0.63.
### Steps To Reproduce
See https://github.com/facebook/react-native/issues/35210
### Expected Results
_No response_
### CLI version
npx react-native --version
### Environment
```markdown
npx react-native info
```
### Target Platform Version
_No response_
### Target Device(s)
_No response_
### Visual Studio Version
_No response_
### Build Configuration
_No response_
### Snack, code example, screenshot, or link to a repository
_No response_
|
process
|
rnw stable releases needs to update to latest rn core versions problem description we should update our stable branches of rnw to require the latest matching version of rn tldr in publishing rn all older versions of rn were broken for android builds see for details rn core has pushed patches back to rn steps to reproduce see expected results no response cli version npx react native version environment markdown npx react native info target platform version no response target device s no response visual studio version no response build configuration no response snack code example screenshot or link to a repository no response
| 1
|
20,427
| 27,089,660,272
|
IssuesEvent
|
2023-02-14 19:53:06
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
opened
|
[processor/filter] Standardize configuration on OTTL
|
enhancement priority:p2 processor/filter
|
### Component(s)
processor/filter
### Is your feature request related to a problem? Please describe.
With https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/16369, OTTL was added to the filterprocessor. This allows us to standardize the way the filterprocessor configures conditions for when data should be dropped.
For spans, the existing non-ottl configuration options are:
- matching by service name, span name, attributes, scopes, resource attributes, and span kind.
- Both strict and regex is supported.
- When checking attributes, converting attribute values from bool, double, and int to string is supported
For metric, the existing non-ottl configuration options are:
- matching by metric name, resource attributes
- Both strict and regex is supported.
- matching by datapoint attributes via the [expr](https://github.com/antonmedv/expr) expression engine.
- When checking attributes, converting attribute values from bool, double, and int to string is supported.
For logs, the existing non-ottl configuration options are:
- matching by resource attributes, attributes, severity text, body, and severity number.
- Both strict and regex is supported.
- When checking attributes, converting attribute values from bool, double, and int to string is supported.
- When matching severity number, a minimum log severity can be used and any values equal to or greater than the severity match. You can also specify if an undefined severity number can be match.
With the completion of https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/16413, OTTL can now handle each of these use cases, and more, in a uniform library.
In the past we only had what was in `internal/filter` and as different components needed different condition features those packages grew. As users wanted access to different fields in the telemetry payload the signal-specific filters would change to reflect those needs. The filterprocessor grew with it, providing important value but with increased maintenance burden and configuration complexity. Reading through the filterprocessor's readme highlights the difficulty in maintaining and understanding the different features and fields available.
By unifying on OTTL we'll have a solution that has access to all fields on all signals. Users no longer need to worry about whether or not a field for their signal is available to use and maintainers no longer need to worry about adding more fields to filter on in the future (OTLP changes excluded). Due to OTTL's functions, adding more features to enable complex conditions is simpler as the functions encapsulate the logic and can be added without modifying the underlying libraries or configuration. On top of its field access and functions, OTTL's grammar also provides more robust conditions, allowing users to use inequalities, `nil`, and arithmetic.
Acknowledging that Domain Specific Languages can be scary, there is an [open issue that proposes a declarative syntax solution that works with OTTL](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/11852)
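As a rough illustration only, a unified OTTL-based filterprocessor configuration might look like the snippet below. The keys and the `error_mode` option follow the contrib readme at the time of writing and may change as the OTTL work lands:

```yaml
processors:
  filter/ottl:
    error_mode: ignore
    traces:
      span:
        - 'attributes["container.name"] == "app_container_1"'
    metrics:
      metric:
        - 'name == "my.metric" and resource.attributes["my_label"] == "abc123"'
    logs:
      log_record:
        - 'severity_number < SEVERITY_NUMBER_WARN'
```

Each list entry is an OTTL condition; when any condition for a signal evaluates to true, the matching telemetry is dropped.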
### Describe the solution you'd like
temp
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
1.0
|
[processor/filter] Standardize configuration on OTTL - ### Component(s)
processor/filter
### Is your feature request related to a problem? Please describe.
With https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/16369, OTTL was added to the filterprocessor. This allows us to standardize the way the filterprocessor configures conditions for when data should be dropped.
For spans, the existing non-ottl configuration options are:
- matching by service name, span name, attributes, scopes, resource attributes, and span kind.
- Both strict and regex are supported.
- When checking attributes, converting attribute values from bool, double, and int to string is supported.
For metric, the existing non-ottl configuration options are:
- matching by metric name, resource attributes
- Both strict and regex are supported.
- matching by datapoint attributes via the [expr](https://github.com/antonmedv/expr) expression engine.
- When checking attributes, converting attribute values from bool, double, and int to string is supported.
For logs, the existing non-ottl configuration options are:
- matching by resource attributes, attributes, severity text, body, and severity number.
- Both strict and regex are supported.
- When checking attributes, converting attribute values from bool, double, and int to string is supported.
- When matching severity number, a minimum log severity can be used and any values equal to or greater than the severity match. You can also specify whether an undefined severity number can be matched.
With the completion of https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/16413, OTTL can now handle each of these use cases, and more, in a uniform library.
In the past we only had what was in `internal/filter` and as different components needed different condition features those packages grew. As users wanted access to different fields in the telemetry payload the signal-specific filters would change to reflect those needs. The filterprocessor grew with it, providing important value but with increased maintenance burden and configuration complexity. Reading through the filterprocessor's readme highlights the difficulty in maintaining and understanding the different features and fields available.
By unifying on OTTL we'll have a solution that has access to all fields on all signals. Users no longer need to worry about whether or not a field for their signal is available to use and maintainers no longer need to worry about adding more fields to filter on in the future (OTLP changes excluded). Due to OTTL's functions, adding more features to enable complex conditions is simpler as the functions encapsulate the logic and can be added without modifying the underlying libraries or configuration. On top of its field access and functions, OTTL's grammar also provides more robust conditions, allowing users to use inequalities, `nil`, and arithmetic.
Acknowledging that Domain Specific Languages can be scary, there is an [open issue that proposes a declarative syntax solution that works with OTTL](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/11852)
### Describe the solution you'd like
temp
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
process
|
standardize configuration on ottl component s processor filter is your feature request related to a problem please describe with ottl was added to the filterprocessor this allows us to standardize the way the filterprocessor configures conditions for when data should be dropped for spans the existing non ottl configuration options are matching by service name span name attributes scopes resource attributes and span kind both strict and regex is supported when checking attributes converting attribute values from bool double and int to string is supported for metric the existing non ottl configuration options are matching by metric name resource attributes both strict and regex is supported matching by datapoint attributes via the expression engine when checking attributes converting attribute values from bool double and int to string is supported for logs the existing non ottl configuration options are matching by resource attributes attributes severity text body and severity number both strict and regex is supported when checking attributes converting attribute values from bool double and int to string is supported when matching severity number a minimum log severity can be used and any values equal to or greater than the severity match you can also specify if an undefined severity number can be match with the completion of ottl can now handle each of these use cases and more in a uniform library in the past we only had what was in internal filter and as different components needed different condition features those packages grew as users wanted access to different fields in the telemetry payload the signal specific filters would change to reflect those needs the filterprocessor grew with it providing important value but with increased maintenance burden and configuration complexity reading through the filterprocessor s readme highlights the difficulty in maintaining and understanding the different features and fields available by unifying on ottl we ll have a 
solution that has access to all fields on all signals users no longer need to worry about whether or not a field for their signal is available to use and maintainers no longer need to worry about adding more fields to filter on in the future otlp changes excluded due to ottl s functions adding more features to enable complex conditions is simpler as the functions encapsulate the logic and can be added without modifying the underlying libraries or configuration on top of its field access and functions ottl s grammar also provides more robust conditions allowing users to use inequalities nil and arithmetic acknowledging that domain specific languages can be scary there is an describe the solution you d like temp describe alternatives you ve considered no response additional context no response
| 1
|
2,147
| 4,997,012,114
|
IssuesEvent
|
2016-12-09 15:36:06
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Custom Search Toolbar Not Rendering my Custom Element, Uses Default.
|
inprocess
|
Hello!
After trying many react datatables, I have decided that react-bootstrap-table is by far the best design, great work! I'm experiencing some difficulties however. I want to completely modify the table header so it renders a custom themed search bar and tool bar that matches my table styling. Here is a screenshot of the expected end result:

I've been able to theme the table just fine but the code I have written for the header toolbar doesn't render but no exceptions are thrown by webpack or browser console so I'm not sure what I'm doing wrong or if this is perhaps a bug. Help is much appreciated!
My Table Search class which returns the custom toolbar HTML
```
class CyhyTableSearch extends React.Component {
getValue() {
return ReactDOM.findDOMNode(this).value;
}
setValue(value) {
ReactDOM.findDOMNode(this).value = value;
}
render() {
return (
<div className="flakes-search">
<input
ref="search-input"
className="flakes-search"
placeholder={ this.props.placeholder }
defaultValue={ this.props.defaultValue }
onKeyUp={ this.props.search } />
<div className="flakes-actions-bar">
<button
className="action button-gray smaller right"
defaultValue="Add Value" />
<button
className="action button-gray smaller right"
defaultValue="Export CSV" />
</div>
</div>
);
}
}
```
And my Table Class:
```
class CyhyTable extends React.Component {
csvFormatter(cell, row) {
return `${row.id}: ${cell} USD`;
}
render() {
const selectRowProp = {
mode: 'checkbox'
};
const options = {
clearSearch: false,
searchPanel: (props) => (<CyhyTableSearch { ...props }/>),
page: 1, // which page you want to show as default
sizePerPage: 25, // which size per page you want to locate as default
pageStartIndex: 0, // where to start counting the pages
paginationSize: 3, // the pagination bar size.
prePage: 'Prev', // Previous page button text
nextPage: 'Next', // Next page button text
firstPage: 'First', // First page button text
lastPage: 'Last', // Last page button text
sizePerPageList: [ {
text: '5', value: 5
}, {
text: '10', value: 10
}, {
text: '15', value: 15
}, {
text: '25', value: 25
}, {
text: '50', value: 50
}, {
text: '100', value: 100
}, {
text: 'All', value: cyhyData.length
} ],
};
return (
<BootstrapTable
data={ cyhyData }
options={ options }
selectRow={ selectRowProp }
exportCSV={ true }
pagination={ true }
tableHeaderClass='flakes-table'
tableBodyClass='flakes-table'
containerClass='flakes-table'
tableContainerClass='flakes-table'
headerContainerClass='flakes-table'
bodyContainerClass='flakes-table'
search >
<TableHeaderColumn dataField='Facility'>Facility</TableHeaderColumn>
<TableHeaderColumn dataField='Severity'>Severity</TableHeaderColumn>
<TableHeaderColumn dataField='DNS'>DNS</TableHeaderColumn>
<TableHeaderColumn isKey={true} dataField='IP'>IP</TableHeaderColumn>
<TableHeaderColumn dataField='Port'>Port</TableHeaderColumn>
<TableHeaderColumn dataField='vulnName'>Vulnerability</TableHeaderColumn>
</BootstrapTable>
);
}
}
```
And my app.js render method:
```
ReactDOM.render(
<CyhyTable />,
document.getElementById('table')
);
```
And this is the rendered result:
<img width="863" alt="screen shot 2016-12-02 at 11 08 05 am" src="https://cloud.githubusercontent.com/assets/3852362/20846677/a3ab244e-b87f-11e6-9b2a-be4240b6e8fa.png">
Everything is rendered default so it appears.
|
1.0
|
Custom Search Toolbar Not Rendering my Custom Element, Uses Default. - Hello!
After trying many react datatables, I have decided that react-bootstrap-table is by far the best design, great work! I'm experiencing some difficulties however. I want to completely modify the table header so it renders a custom themed search bar and tool bar that matches my table styling. Here is a screenshot of the expected end result:

I've been able to theme the table just fine but the code I have written for the header toolbar doesn't render but no exceptions are thrown by webpack or browser console so I'm not sure what I'm doing wrong or if this is perhaps a bug. Help is much appreciated!
My Table Search class which returns the custom toolbar HTML
```
class CyhyTableSearch extends React.Component {
getValue() {
return ReactDOM.findDOMNode(this).value;
}
setValue(value) {
ReactDOM.findDOMNode(this).value = value;
}
render() {
return (
<div className="flakes-search">
<input
ref="search-input"
className="flakes-search"
placeholder={ this.props.placeholder }
defaultValue={ this.props.defaultValue }
onKeyUp={ this.props.search } />
<div className="flakes-actions-bar">
<button
className="action button-gray smaller right"
defaultValue="Add Value" />
<button
className="action button-gray smaller right"
defaultValue="Export CSV" />
</div>
</div>
);
}
}
```
And my Table Class:
```
class CyhyTable extends React.Component {
csvFormatter(cell, row) {
return `${row.id}: ${cell} USD`;
}
render() {
const selectRowProp = {
mode: 'checkbox'
};
const options = {
clearSearch: false,
searchPanel: (props) => (<CyhyTableSearch { ...props }/>),
page: 1, // which page you want to show as default
sizePerPage: 25, // which size per page you want to locate as default
pageStartIndex: 0, // where to start counting the pages
paginationSize: 3, // the pagination bar size.
prePage: 'Prev', // Previous page button text
nextPage: 'Next', // Next page button text
firstPage: 'First', // First page button text
lastPage: 'Last', // Last page button text
sizePerPageList: [ {
text: '5', value: 5
}, {
text: '10', value: 10
}, {
text: '15', value: 15
}, {
text: '25', value: 25
}, {
text: '50', value: 50
}, {
text: '100', value: 100
}, {
text: 'All', value: cyhyData.length
} ],
};
return (
<BootstrapTable
data={ cyhyData }
options={ options }
selectRow={ selectRowProp }
exportCSV={ true }
pagination={ true }
tableHeaderClass='flakes-table'
tableBodyClass='flakes-table'
containerClass='flakes-table'
tableContainerClass='flakes-table'
headerContainerClass='flakes-table'
bodyContainerClass='flakes-table'
search >
<TableHeaderColumn dataField='Facility'>Facility</TableHeaderColumn>
<TableHeaderColumn dataField='Severity'>Severity</TableHeaderColumn>
<TableHeaderColumn dataField='DNS'>DNS</TableHeaderColumn>
<TableHeaderColumn isKey={true} dataField='IP'>IP</TableHeaderColumn>
<TableHeaderColumn dataField='Port'>Port</TableHeaderColumn>
<TableHeaderColumn dataField='vulnName'>Vulnerability</TableHeaderColumn>
</BootstrapTable>
);
}
}
```
And my app.js render method:
```
ReactDOM.render(
<CyhyTable />,
document.getElementById('table')
);
```
And this is the rendered result:
<img width="863" alt="screen shot 2016-12-02 at 11 08 05 am" src="https://cloud.githubusercontent.com/assets/3852362/20846677/a3ab244e-b87f-11e6-9b2a-be4240b6e8fa.png">
Everything is rendered default so it appears.
|
process
|
custom search toolbar not rendering my custom element uses default hello after trying many react datatables i have decided that react bootstrap table is by far the best design great work i m experiencing some difficulties however i want to completely modify the table header so it renders a custom themed search bar and tool bar that matches my table styling here is a screenshot of the expected end result i ve been able to theme the table just fine but the code i have written for the header toolbar doesn t render but no exceptions are thrown by webpack or browser console so i m not sure what i m doing wrong or if this is perhaps a bug help is much appreciated my table search class which returns the custom toolbar html class cyhytablesearch extends react component getvalue return reactdom finddomnode this value setvalue value reactdom finddomnode this value value render return input ref search input classname flakes search placeholder this props placeholder defaultvalue this props defaultvalue onkeyup this props search button classname action button gray smaller right defaultvalue add value button classname action button gray smaller right defaultvalue export csv and my table class class cyhytable extends react component csvformatter cell row return row id cell usd render const selectrowprop mode checkbox const options clearsearch false searchpanel props page which page you want to show as default sizeperpage which size per page you want to locate as default pagestartindex where to start counting the pages paginationsize the pagination bar size prepage prev previous page button text nextpage next next page button text firstpage first first page button text lastpage last last page button text sizeperpagelist text value text value text value text value text value text value text all value cyhydata length return bootstraptable data cyhydata options options selectrow selectrowprop exportcsv true pagination true tableheaderclass flakes table tablebodyclass flakes table 
containerclass flakes table tablecontainerclass flakes table headercontainerclass flakes table bodycontainerclass flakes table search facility severity dns ip port vulnerability and my app js render method reactdom render document getelementbyid table and this is the rendered result img width alt screen shot at am src everything is rendered default so it appears
| 1
|
15,892
| 20,075,038,396
|
IssuesEvent
|
2022-02-04 11:43:42
|
climatepolicyradar/navigator
|
https://api.github.com/repos/climatepolicyradar/navigator
|
opened
|
Extract structured passages when adding a new document
|
Document processing
|
Text should be extracted at “passage-level”. For the purpose of this work, a “passage” means a contiguous text span that contains a single sentence or sequence of text that can be interpreted on its own.
Navigator should extract sequences of text tokens as a single passage in the following cases:
- Sentence
- Numbered list item
- Bulleted list item
- Indented item
Certain passages should be ignored. These include:
- Headers (e.g. title, page number)
- Footers (e.g. title, page number)
- Figures and text contained in figures (e.g. figure labels)
- Figure captions and titles
- Table text (tables will be identified, but text contained within them will not be extracted)
<br/>
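As a sketch of how the extraction rules above could be prototyped, the following line classifier distinguishes the four passage types. The function and regex names are illustrative only and are not part of Navigator; a real implementation would use document layout, not line-level regexes:

```python
import re

# Illustrative patterns; real layout analysis would rely on PDF structure.
NUMBERED = re.compile(r"^\s*\d+[.)]\s+")     # e.g. "1. item" or "2) item"
BULLETED = re.compile(r"^\s*[-*\u2022]\s+")  # e.g. "- item" or "• item"
INDENTED = re.compile(r"^\s{4,}\S")          # item set off by indentation

def classify_passage(line: str) -> str:
    """Return the passage type for one line of extracted text."""
    if NUMBERED.match(line):
        return "numbered"
    if BULLETED.match(line):
        return "bulleted"
    if INDENTED.match(line):
        return "indented"
    return "sentence"
```

Headers, footers, figure text, and table text would be filtered out before this step, as described above.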
|
1.0
|
Extract structured passages when adding a new document -
Text should be extracted at “passage-level”. For the purpose of this work, a “passage” means a contiguous text span that contains a single sentence or sequence of text that can be interpreted on its own.
Navigator should extract sequences of text tokens as a single passage in the following cases:
- Sentence
- Numbered list item
- Bulleted list item
- Indented item
Certain passages should be ignored. These include:
- Headers (e.g. title, page number)
- Footers (e.g. title, page number)
- Figures and text contained in figures (e.g. figure labels)
- Figure captions and titles
- Table text (tables will be identified, but text contained within them will not be extracted)
<br/>
|
process
|
extract structured passages when adding a new document text should be extracted at “passage level” for the purpose of this work a “passage” means a contiguous text span that contains a single sentence or sequence of text that can be interpreted on its own navigator should extract sequences of text tokens as a single passage in the following cases sentence numbered list item bulleted list item indented item certain passages should be ignored these include headers e g title page number footers e g title page number figures and text contained in figures e g figure labels figure captions and titles table text tables will be identified but text contained within them will not be extracted
| 1
|
11,335
| 2,610,112,584
|
IssuesEvent
|
2015-02-26 18:34:54
|
chrsmith/scribefire-chrome
|
https://api.github.com/repos/chrsmith/scribefire-chrome
|
closed
|
Allow resizing of content box.
|
auto-migrated Milestone-1.1 Priority-Low Type-Enhancement
|
```
Please allow the post content box to be expanded to fill the screen. It is
too small to be usable.
```
-----
Original issue reported on code.google.com by `robjhyndman` on 20 May 2010 at 12:07
|
1.0
|
Allow resizing of content box. - ```
Please allow the post content box to be expanded to fill the screen. It is
too small to be usable.
```
-----
Original issue reported on code.google.com by `robjhyndman` on 20 May 2010 at 12:07
|
non_process
|
allow resizing of content box please allow the post content box to be expanded to fill the screen it is too small to be usable original issue reported on code google com by robjhyndman on may at
| 0
|
17,666
| 12,238,856,864
|
IssuesEvent
|
2020-05-04 20:34:00
|
ClickHouse/ClickHouse
|
https://api.github.com/repos/ClickHouse/ClickHouse
|
closed
|
Add settings `allow_suspicious_codecs`.
|
usability
|
If this setting is turned off (default), don't allow to create a table with nonsense codec declarations:
- a codec that does some transformations but does not compress data without compressing codec later in the list (example: `CODEC(Delta)`);
- more than one generic compression codec (example: `CODEC(LZ4, ZSTD)`);
- type-dependent transformation after generic compression codec (example: `CODEC(LZ4, Delta)`).
Always allow to attach a table.
These combinations of codecs are still available for tests (under this setting).
|
True
|
Add settings `allow_suspicious_codecs`. - If this setting is turned off (default), don't allow to create a table with nonsense codec declarations:
- a codec that does some transformations but does not compress data without compressing codec later in the list (example: `CODEC(Delta)`);
- more than one generic compression codec (example: `CODEC(LZ4, ZSTD)`);
- type-dependent transformation after generic compression codec (example: `CODEC(LZ4, Delta)`).
Always allow to attach a table.
These combinations of codecs are still available for tests (under this setting).
|
non_process
|
add settings allow suspicious codecs if this setting is turned off default don t allow to create a table with nonsense codec declarations a codec that does some transformations but does not compress data without compressing codec later in the list example codec delta more than one generic compression codec example codec zstd type dependent transformation after generic compression codec example codec delta always allow to attach a table these combinations of codecs are still available for tests under this setting
| 0
|
267,770
| 8,392,633,936
|
IssuesEvent
|
2018-10-09 18:11:39
|
turtl/tracker
|
https://api.github.com/repos/turtl/tracker
|
opened
|
Migration keychain decryption failing for some users
|
priority:high project:core type:bug
|
```
2018-10-09T11:32:55 - [INFO][migrate] migrate::get_profile() -- got profile, processing
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [INFO][migrate] migrate::get_profile() -- profile processed (got 10 items, 0 files)
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b6b32959fab258f7e62a4d2083dc9906238a774726c012c35f403dc7fa5bc7257cb29a5f034c
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b6c37490fab258f7e62a4d2083dc9906238a774726c012c35f403dc7fa5bc7257cb29a5f04a0
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b6ab1034fab258f7e62a4d2083dc9906238a774726c012c35f403dc7fa5bc7257cb29a5f016b
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b74fd5246bd3c6d7d08eb8148c5f40d371b89cc2ff826e0c8f4bd414c4d27c5afe03b69d0071
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b6abde1ffab258f7e62a4d2083dc9906238a774726c012c35f403dc7fa5bc7257cb29a5f01c1
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b7502e4e6bd3c6d7d08eb8148c5f40d371b89cc2ff826e0c8f4bd414c4d27c5afe03b69d00b5
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b750035e6bd3c6d7d08eb8148c5f40d371b89cc2ff826e0c8f4bd414c4d27c5afe03b69d008b
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b6efce1ffab258f7e62a4d2083dc9906238a774726c012c35f403dc7fa5bc7257cb29a5f00cf
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165c037ffaf6bd3c6d7d08eb8148c5f40d371b89cc2ff826e0c8f4bd414c4d27c5afe03b69d00bf
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165c03629966bd3c6d7d08eb8148c5f40d371b89cc2ff826e0c8f4bd414c4d27c5afe03b69d0097
```
Looks like a master key issue? Could this possibly be related to #190?
|
1.0
|
Migration keychain decryption failing for some users - ```
2018-10-09T11:32:55 - [INFO][migrate] migrate::get_profile() -- got profile, processing
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [INFO][migrate] migrate::get_profile() -- profile processed (got 10 items, 0 files)
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b6b32959fab258f7e62a4d2083dc9906238a774726c012c35f403dc7fa5bc7257cb29a5f034c
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b6c37490fab258f7e62a4d2083dc9906238a774726c012c35f403dc7fa5bc7257cb29a5f04a0
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b6ab1034fab258f7e62a4d2083dc9906238a774726c012c35f403dc7fa5bc7257cb29a5f016b
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b74fd5246bd3c6d7d08eb8148c5f40d371b89cc2ff826e0c8f4bd414c4d27c5afe03b69d0071
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b6abde1ffab258f7e62a4d2083dc9906238a774726c012c35f403dc7fa5bc7257cb29a5f01c1
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b7502e4e6bd3c6d7d08eb8148c5f40d371b89cc2ff826e0c8f4bd414c4d27c5afe03b69d00b5
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b750035e6bd3c6d7d08eb8148c5f40d371b89cc2ff826e0c8f4bd414c4d27c5afe03b69d008b
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165b6efce1ffab258f7e62a4d2083dc9906238a774726c012c35f403dc7fa5bc7257cb29a5f00cf
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165c037ffaf6bd3c6d7d08eb8148c5f40d371b89cc2ff826e0c8f4bd414c4d27c5afe03b69d00bf
2018-10-09T11:32:55 - [INFO][turtl_core::messaging] messaging::ui_event() -- migration-event
2018-10-09T11:32:55 - [WARN][migrate] migrate::decrypt_profile() -- error decrypting keychain entry: 0165c03629966bd3c6d7d08eb8148c5f40d371b89cc2ff826e0c8f4bd414c4d27c5afe03b69d0097
```
Looks like a master key issue? Could this possibly be related to #190?
|
non_process
|
migration keychain decryption failing for some users migrate get profile got profile processing messaging ui event migration event messaging ui event migration event messaging ui event migration event messaging ui event migration event migrate get profile profile processed got items files messaging ui event migration event migrate decrypt profile error decrypting keychain entry messaging ui event migration event migrate decrypt profile error decrypting keychain entry messaging ui event migration event migrate decrypt profile error decrypting keychain entry messaging ui event migration event migrate decrypt profile error decrypting keychain entry messaging ui event migration event migrate decrypt profile error decrypting keychain entry messaging ui event migration event migrate decrypt profile error decrypting keychain entry messaging ui event migration event migrate decrypt profile error decrypting keychain entry messaging ui event migration event migrate decrypt profile error decrypting keychain entry messaging ui event migration event migrate decrypt profile error decrypting keychain entry messaging ui event migration event migrate decrypt profile error decrypting keychain entry looks like a master key issue could this possibly be related to
| 0
|
8,811
| 2,605,271,356
|
IssuesEvent
|
2015-02-25 05:32:53
|
calblueprint/PHC
|
https://api.github.com/repos/calblueprint/PHC
|
closed
|
Add helper text to UserSelectionFragment
|
medium priority UI/UX
|
Kate just requested that we add directions to the User Selection Fragment so the volunteers know what to do. The directions should be along the lines of:
"please ask: 'Is this your first time at a PHC event?' If yes, choose create new user. If no, search for returning user. If participant is unsure, search for returning user using the participant's name."
ask me more about this. i think it would look kinda janky if we just dump that text into the fragment, so we might have to implement like a help icon or something in the future.
|
1.0
|
Add helper text to UserSelectionFragment - Kate just requested that we add directions to the User Selection Fragment so the volunteers know what to do. The directions should be along the lines of:
"please ask: 'Is this your first time at a PHC event?' If yes, choose create new user. If no, search for returning user. If participant is unsure, search for returning user using the participant's name."
ask me more about this. i think it would look kinda janky if we just dump that text into the fragment, so we might have to implement like a help icon or something in the future.
|
non_process
|
add helper text to userselectionfragment kate just requested that we add directions to the user selection fragment so the volunteers know what to do the directions should be along the lines of please ask is this your first time as a phc event if yes choose create new user if no search for returning user if participant is unsure search for returning user using the participants name ask me more about this i think it would look kinda janky if we just dump that text into the fragment so we might have to implement like a help icon or something in the future
| 0
|
526,793
| 15,301,470,546
|
IssuesEvent
|
2021-02-24 13:39:37
|
PyTorchLightning/pytorch-lightning
|
https://api.github.com/repos/PyTorchLightning/pytorch-lightning
|
closed
|
Error in Logger on epoch end when using Multiple GPUs
|
DP Priority P0 bug / fix help wanted
|
## 🐛 Bug
When using multiple GPUs with 'dp', the error `RuntimeError: All input tensors must be on the same device. Received cuda:1 and cuda:0` occurs. It means the collections on epoch end would be from different devices.
### Expected behavior
Either they need to be on the same device, or the aggregating function should be able to handle items from different devices.
### Environment
- PyTorch Version (e.g., 1.0): 1.7.0
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): pip
- Python version: 3.8.0
- CUDA/cuDNN version: 10.2
- Any other relevant information: pytorch-lightning==1.0.8
### A quick but not safe solution
- modify `collate_tensors` function in `pytorch_lightning/core/step_result.py`:
```Python
def collate_tensors(items: Union[List, Tuple]) -> Union[Tensor, List, Tuple]:
if not items or not isinstance(items, (list, tuple)) or any(not isinstance(item, Tensor) for item in items):
# items is not a sequence, empty, or contains non-tensors
return items
# add the following line of code
items = [item.type_as(items[0]) for item in items]
if all(item.ndim == 0 for item in items):
# all tensors are scalars, we need to stack
return torch.stack(items)
if all(item.ndim >= 1 and item.shape[1:] == items[0].shape[1:] for item in items):
# we can concatenate along the first dimension
return torch.cat(items)
return items
```
|
1.0
|
Error in Logger on epoch end when using Multiple GPUs - ## 🐛 Bug
When using multiple GPUs with 'dp', the error `RuntimeError: All input tensors must be on the same device. Received cuda:1 and cuda:0` occurs. It means the collections on epoch end would be from different device.
### Expected behavior
While they might need to be on the same device, or maybe the aggregating function should be able to handle items from different device.
### Environment
- PyTorch Version (e.g., 1.0): 1.7.0
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): pip
- Python version: 3.8.0
- CUDA/cuDNN version: 10.2
- Any other relevant information: pytorch-lightning==1.0.8
### A quick but not safe solution
- modify `collate_tensors` function in `pytorch_lightning/core/step_result.py`:
```Python
def collate_tensors(items: Union[List, Tuple]) -> Union[Tensor, List, Tuple]:
if not items or not isinstance(items, (list, tuple)) or any(not isinstance(item, Tensor) for item in items):
# items is not a sequence, empty, or contains non-tensors
return items
# add the following line of code
items = [item.type_as(items[0]) for item in items]
if all(item.ndim == 0 for item in items):
# all tensors are scalars, we need to stack
return torch.stack(items)
if all(item.ndim >= 1 and item.shape[1:] == items[0].shape[1:] for item in items):
# we can concatenate along the first dimension
return torch.cat(items)
return items
```
|
non_process
|
error in logger on epoch end when using multiple gpus 🐛 bug when using multiple gpus with dp the error runtimeerror all input tensors must be on the same device received cuda and cuda occurs it means the collections on epoch end would be from different device expected behavior while they might need to be on the same device or maybe the aggregating function should be able to handle items from different device environment pytorch version e g os e g linux linux how you installed pytorch conda pip source pip python version cuda cudnn version any other relevant information pytorch lightning a quick but not safe solution modify collate tensors function in pytorch lightning core step result py python def collate tensors items union union if not items or not isinstance items list tuple or any not isinstance item tensor for item in items items is not a sequence empty or contains non tensors return items add the following line of code items for item in items if all item ndim for item in items all tensors are scalars we need to stack return torch stack items if all item ndim and item shape items shape for item in items we can concatenate along the first dimension return torch cat items return items
| 0
|
527,533
| 15,344,094,347
|
IssuesEvent
|
2021-02-27 23:21:23
|
monarch-initiative/mondo
|
https://api.github.com/repos/monarch-initiative/mondo
|
closed
|
split term: <enter name>MONDO:0009989 and Goldmann-Favre syndrome
|
high priority lumping and splitting medical input needed question split
|
**Term to split:**
enhanced S-cone syndrome
**Name for new terms:**
enhanced S-cone syndrome and Goldmann-Favre syndrome
**List properties that should be moved to new term:**
If the clinical community agrees that these are distinct entities, then the munge of XRefs and labels under this MondoID should be cleaned up
* OMIM and Snomed make a disctinction between these concepts.
The definition of Goldmann-Favre syndrome (from Orphanet) seems to be displayed for enhanced S-cone syndrome
We are holding off a merge at NCBI until this is resolved with Mondo.
**Comment**
If you don't have an ORCID, you can sign up for one [here](https://orcid.org/)
|
1.0
|
split term: <enter name>MONDO:0009989 and Goldmann-Favre syndrome - **Term to split:**
enhanced S-cone syndrome
**Name for new terms:**
enhanced S-cone syndrome and Goldmann-Favre syndrome
**List properties that should be moved to new term:**
If the clinical community agrees that these are distinct entities, then the munge of XRefs and labels under this MondoID should be cleaned up
* OMIM and Snomed make a disctinction between these concepts.
The definition of Goldmann-Favre syndrome (from Orphanet) seems to be displayed for enhanced S-cone syndrome
We are holding off a merge at NCBI until this is resolved with Mondo.
**Comment**
If you don't have an ORCID, you can sign up for one [here](https://orcid.org/)
|
non_process
|
split term mondo and goldmann favre syndrome term to split enhanced s cone syndrome name for new terms enhanced s cone syndrome and goldmann favre syndrome list properties that should be moved to new term if the clinical community agrees that these are distinct entities then the munge of xrefs and labels under this mondoid should be cleaned up omim and snomed make a disctinction between these concepts the definition of goldmann favre syndrome from orphanet seems to be displayed for enhanced s cone syndrome we are holding off a merge at ncbi until this is resolved with mondo comment if you don t have an orcid you can sign up for one
| 0
|
20,125
| 26,659,983,949
|
IssuesEvent
|
2023-01-25 20:10:12
|
keras-team/keras-cv
|
https://api.github.com/repos/keras-team/keras-cv
|
reopened
|
Reorganize [ops] and [custom_ops]
|
high-priority process cleanup api-polish
|
Currently, we have some tech debt here: ops contains some custom layers as well as some ops. We should really fix this and put the components that are like layers under layers instead of ops. Then we can merge custom_ops into the ops directory.
Its confusing to have both.
|
1.0
|
Reorganize [ops] and [custom_ops] - Currently, we have some tech debt here: ops contains some custom layers as well as some ops. We should really fix this and put the components that are like layers under layers instead of ops. Then we can merge custom_ops into the ops directory.
Its confusing to have both.
|
process
|
reorganize and currently we have some tech debt here ops contains some custom layers as well as some ops we should really fix this and put the components that are like layers under layers instead of ops then we can merge custom ops into the ops directory its confusing to have both
| 1
|