Column schema (from the dataset viewer; "N distinct values" counts classes, "length a to b" gives the string-length range):

Column        Type     Stats
Unnamed: 0    int64    0 to 832k
id            float64  2.49B to 32.1B
type          string   1 distinct value
created_at    string   length 19 (fixed)
repo          string   length 7 to 112
repo_url      string   length 36 to 141
action        string   3 distinct values
title         string   length 1 to 744
labels        string   length 4 to 574
body          string   length 9 to 211k
index         string   10 distinct values
text_combine  string   length 96 to 211k
label         string   2 distinct values
text          string   length 96 to 188k
binary_label  int64    0 to 1
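A dump with this schema can be loaded and inspected directly. A minimal sketch in pandas, assuming the data lives in a CSV named issues.csv (the file name is hypothetical; substitute the real path):

```python
# Minimal sketch for loading a dump with the schema above.
import pandas as pd

df = pd.read_csv("issues.csv")

print(df.dtypes)                           # column types, as in the schema
print(df["label"].value_counts())          # 2 classes: process / non_process
print(df["action"].value_counts())         # 3 classes: opened / closed / reopened
print(df["title"].str.len().agg(["min", "max"]))  # length range, e.g. 1 to 744
```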
Record 1
Unnamed: 0: 83,081
id: 10,320,328,698
type: IssuesEvent
created_at: 2019-08-30 20:10:42
repo: mosdef-hub/foyer
repo_url: https://api.github.com/repos/mosdef-hub/foyer
action: opened
title: Developer-facing flowchart
labels: documentation
body:
**Describe the behavior you would like added to Foyer** We have this flowchart for the atom-typing process (from the paper): ![image](https://user-images.githubusercontent.com/7935382/64048582-b6154d80-cb37-11e9-8d31-b45f0cf10e7d.png) since there are a lot of strung-together modular/internal functions that actually implement this algorithm, it would be useful to have this annotated with the functions that implement each step **Describe the solution you'd like** Line numbers would be useful, particularly if tagged to the commit (hitting the y key in GitHub) **Describe alternatives you've considered** Reading internal functions (which is not fun) **Additional context** Add any other context or screenshots about the feature request here.
index: 1.0
text_combine:
Developer-facing flowchart - **Describe the behavior you would like added to Foyer** We have this flowchart for the atom-typing process (from the paper): ![image](https://user-images.githubusercontent.com/7935382/64048582-b6154d80-cb37-11e9-8d31-b45f0cf10e7d.png) since there are a lot of strung-together modular/internal functions that actually implement this algorithm, it would be useful to have this annotated with the functions that implement each step **Describe the solution you'd like** Line numbers would be useful, particularly if tagged to the commit (hitting the y key in GitHub) **Describe alternatives you've considered** Reading internal functions (which is not fun) **Additional context** Add any other context or screenshots about the feature request here.
label: non_process
text:
developer facing flowchart describe the behavior you would like added to foyer we have this flowchart for the atom typing process from the paper since there are a lot of strung together modular internal functions that actually implement this algorithm it would be useful to have this annotated with the functions that implement each step describe the solution you d like line numbers would be useful particularly if tagged to the commit hitting the y key in github describe alternatives you ve considered reading internal functions which is not fun additional context add any other context or screenshots about the feature request here
binary_label: 0
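Two of the derived columns follow a pattern that is visible directly in this record: text_combine is just title and body joined with " - ", and binary_label encodes label as an integer (process maps to 1, non_process to 0, consistently across the records below). A minimal sketch of that derivation, assuming the pandas DataFrame from the earlier snippet:

```python
import pandas as pd

def add_derived_columns(df: pd.DataFrame) -> pd.DataFrame:
    # text_combine concatenates title and body with " - ",
    # exactly as seen in the records in this dump.
    df["text_combine"] = df["title"] + " - " + df["body"]
    # binary_label maps the two label classes to integers:
    # process -> 1, non_process -> 0.
    df["binary_label"] = (df["label"] == "process").astype(int)
    return df
```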
Record 2
Unnamed: 0: 17,354
id: 23,175,977,087
type: IssuesEvent
created_at: 2022-07-31 12:28:11
repo: ppy/osu-web
repo_url: https://api.github.com/repos/ppy/osu-web
action: closed
title: Incorrect link in Featured Artist label
labels: area:beatmap-processing
body:
https://osu.ppy.sh/beatmapsets/1703527#taiko/3483886 Featured Artist label links to https://osu.ppy.sh/beatmaps/artists/tracks/4404 - which is on katagiri's Featured Artist listing, but it should direct the user to tokiwa's listing instead. I believe this should instead be tracks/5085? This might have happened because https://osu.ppy.sh/beatmapsets/1605822#osu/3394505 this remix *is* on katagiri's FA listing (and is tagged correctly). I searched through tokiwa's other Ranked maps and they all seemed to be tagged correctly, so hopefully just an isolated case.
index: 1.0
text_combine:
Incorrect link in Featured Artist label - https://osu.ppy.sh/beatmapsets/1703527#taiko/3483886 Featured Artist label links to https://osu.ppy.sh/beatmaps/artists/tracks/4404 - which is on katagiri's Featured Artist listing, but it should direct the user to tokiwa's listing instead. I believe this should instead be tracks/5085? This might have happened because https://osu.ppy.sh/beatmapsets/1605822#osu/3394505 this remix *is* on katagiri's FA listing (and is tagged correctly). I searched through tokiwa's other Ranked maps and they all seemed to be tagged correctly, so hopefully just an isolated case.
label: process
text:
incorrect link in featured artist label featured artist label links to which is on katagiri s featured artist listing but it should direct the user to tokiwa s listing instead i believe this should instead be tracks this might have happened because this remix is on katagiri s fa listing and is tagged correctly i searched through tokiwa s other ranked maps and they all seemed to be tagged correctly so hopefully just an isolated case
binary_label: 1
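Comparing text_combine with text across these records suggests the cleaning that produced the text column: markdown image syntax and URLs dropped, everything lowercased, punctuation and digits replaced by spaces, whitespace collapsed. The exact pipeline is not given in this dump, and its output is not perfectly consistent (some non-ASCII characters and curly apostrophes survive), so the sketch below is only an approximation:

```python
import re

def normalize(text_combine: str) -> str:
    # Approximate reconstruction of the "text" column.
    s = re.sub(r"!\[[^\]]*\]\([^)]*\)", " ", text_combine)  # markdown images
    s = re.sub(r"https?://\S+", " ", s)                     # bare URLs
    s = s.lower()
    s = re.sub(r"[^a-z\s]", " ", s)                         # punctuation and digits
    return re.sub(r"\s+", " ", s).strip()                   # collapse whitespace
```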
Record 3
Unnamed: 0: 187,894
id: 22,046,004,341
type: IssuesEvent
created_at: 2022-05-30 01:49:36
repo: artsking/linux-4.1.15
repo_url: https://api.github.com/repos/artsking/linux-4.1.15
action: closed
title: CVE-2019-16089 (Medium) detected in linux-stable-rtv4.1.33 - autoclosed
labels: security vulnerability
body:
## CVE-2019-16089 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/artsking/linux-4.1.15/commit/b1c15f7dc4cfe553aeed8332e46f285ee92b5756">b1c15f7dc4cfe553aeed8332e46f285ee92b5756</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/nbd.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/nbd.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel through 5.2.13. nbd_genl_status in drivers/block/nbd.c does not check the nla_nest_start_noflag return value. <p>Publish Date: 2019-09-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16089>CVE-2019-16089</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-16089">https://nvd.nist.gov/vuln/detail/CVE-2019-16089</a></p> <p>Release Date: 2020-08-04</p> <p>Fix Resolution: linux-yocto - 4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68,5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2019-16089 (Medium) detected in linux-stable-rtv4.1.33 - autoclosed - ## CVE-2019-16089 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/artsking/linux-4.1.15/commit/b1c15f7dc4cfe553aeed8332e46f285ee92b5756">b1c15f7dc4cfe553aeed8332e46f285ee92b5756</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/nbd.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/nbd.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel through 5.2.13. nbd_genl_status in drivers/block/nbd.c does not check the nla_nest_start_noflag return value. <p>Publish Date: 2019-09-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16089>CVE-2019-16089</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-16089">https://nvd.nist.gov/vuln/detail/CVE-2019-16089</a></p> <p>Release Date: 2020-08-04</p> <p>Fix Resolution: linux-yocto - 4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68,5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers block nbd c drivers block nbd c vulnerability details an issue was discovered in the linux kernel through nbd genl status in drivers block nbd c does not check the nla nest start noflag return value publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux yocto gitautoinc gitautoinc step up your open source security game with whitesource
binary_label: 0
Record 4
Unnamed: 0: 21,023
id: 27,969,909,863
type: IssuesEvent
created_at: 2023-03-25 00:17:13
repo: darktable-org/darktable
repo_url: https://api.github.com/repos/darktable-org/darktable
action: closed
title: duplicate manager - inconsistent processing
labels: feature: enhancement difficulty: hard scope: UI scope: image processing bug: pending no-issue-activity
body:
Within the darkroom the duplicate manager allows you to click on any of the duplicate images to compare with the current edit. However, the image you get when you click on a duplicate is often not consistent with the image you'd get if you double-clicked to change image and view directly in the darkroom. For example, I import an image and perform some edits on it. I then duplicate that image in the lighttable and do not perform any further edits (so both duplicate images - in this case v0 and v2 - are identical). I expect that if I view version 0 in the darkroom and click on version 1 in the duplicate manager then, after some processing, the image should remain unchanged (they are identical edits). The actual result, when the entire image is viewed, is that version 2 seems to be more 'fuzzy' than version 0: v0 (the image being processed currently) ![full_v0](https://user-images.githubusercontent.com/9555491/78693081-a8f0e600-78f2-11ea-8410-d8f30f4eaf14.png) v2 (viewed by single-clicking in the duplicate manager) ![full_v2](https://user-images.githubusercontent.com/9555491/78693146-c0c86a00-78f2-11ea-85a2-95d16d057469.png) If I zoom to 100% the results are better, but the image position seems to be off (the image moves up and to the left by a few pixels when I click on version 2) and sometimes the brightness can be different (though I didn't reproduce that here). v0 (the current image) ![zoom_v0](https://user-images.githubusercontent.com/9555491/78693260-ebb2be00-78f2-11ea-8879-ee7452ab771b.png) v2 (from duplicate manager) ![zoom_v2](https://user-images.githubusercontent.com/9555491/78693284-f4a38f80-78f2-11ea-9361-8e518dce1f65.png) I would like to be able to perform different edits on the same picture and use this module to perform an accurate comparison but this does not currently seem to be possible as I can't rely on the processing to be the same on both. The only reliable and consistent way to do this at the moment is to load v2 in the darkroom, take a snapshot, and then go back and enable the snapshot against v0 These issues seem fairly consistent (I've reproduced on a number of versions including current master and on a number of images) so I haven't uploaded the raw or xmp but I can if needed. I'm running on ArchLinux and the issue occurs both with and without OpenCL. Presumably because it's getting the images from the cache, if I switch to version 2 in the darkroom and view v0 from the duplicate manager, both images look the same as they did in my original example above (i.e. v2 is fuzzy). If I clear the cache before re-entering darktable and start off with v2 in the darkroom the above results are reproduced (i.e. v0 is fuzzy)
index: 1.0
text_combine:
duplicate manager - inconsistent processing - Within the darkroom the duplicate manager allows you to click on any of the duplicate images to compare with the current edit. However, the image you get when you click on a duplicate is often not consistent with the image you'd get if you double-clicked to change image and view directly in the darkroom. For example, I import an image and perform some edits on it. I then duplicate that image in the lighttable and do not perform any further edits (so both duplicate images - in this case v0 and v2 - are identical). I expect that if I view version 0 in the darkroom and click on version 1 in the duplicate manager then, after some processing, the image should remain unchanged (they are identical edits). The actual result, when the entire image is viewed, is that version 2 seems to be more 'fuzzy' than version 0: v0 (the image being processed currently) ![full_v0](https://user-images.githubusercontent.com/9555491/78693081-a8f0e600-78f2-11ea-8410-d8f30f4eaf14.png) v2 (viewed by single-clicking in the duplicate manager) ![full_v2](https://user-images.githubusercontent.com/9555491/78693146-c0c86a00-78f2-11ea-85a2-95d16d057469.png) If I zoom to 100% the results are better, but the image position seems to be off (the image moves up and to the left by a few pixels when I click on version 2) and sometimes the brightness can be different (though I didn't reproduce that here). v0 (the current image) ![zoom_v0](https://user-images.githubusercontent.com/9555491/78693260-ebb2be00-78f2-11ea-8879-ee7452ab771b.png) v2 (from duplicate manager) ![zoom_v2](https://user-images.githubusercontent.com/9555491/78693284-f4a38f80-78f2-11ea-9361-8e518dce1f65.png) I would like to be able to perform different edits on the same picture and use this module to perform an accurate comparison but this does not currently seem to be possible as I can't rely on the processing to be the same on both. The only reliable and consistent way to do this at the moment is to load v2 in the darkroom, take a snapshot, and then go back and enable the snapshot against v0 These issues seem fairly consistent (I've reproduced on a number of versions including current master and on a number of images) so I haven't uploaded the raw or xmp but I can if needed. I'm running on ArchLinux and the issue occurs both with and without OpenCL. Presumably because it's getting the images from the cache, if I switch to version 2 in the darkroom and view v0 from the duplicate manager, both images look the same as they did in my original example above (i.e. v2 is fuzzy). If I clear the cache before re-entering darktable and start off with v2 in the darkroom the above results are reproduced (i.e. v0 is fuzzy)
label: process
text:
duplicate manager inconsistent processing within the darkroom the duplicate manager allows you to click on any of the duplicate images to compare with the current edit however the image you get when you click on a duplicate is often not consistent with the image you d get if you double clicked to change image and view directly in the darkroom for example i import an image and perform some edits on it i then duplicate that image in the lighttable and do not perform any further edits so both duplicate images in this case and are identical i expect that if i view version in the darkroom and click on version in the duplicate manager then after some processing the image should remain unchanged they are identical edits the actual result when the entire image is viewed is that version seems to be more fuzzy than version the image being processed currently viewed by single clicking in the duplicate manager if i zoom to the results are better but the image position seems to be off the image moves up and to the left by a few pixels when i click on version and sometimes the brightness can be different though i didn t reproduce that here the current image from duplicate manager i would like to be able to perform different edits on the same picture and use this module to perform an accurate comparison but this does not currently seem to be possible as i can t rely on the processing to be the same on both the only reliable and consistent way to do this at the moment is to load in the darkroom take a snapshot and then go back and enable the snapshot against these issues seem fairly consistent i ve reproduced on a number of versions including current master and on a number of images so i haven t uploaded the raw or xmp but i can if needed i m running on archlinux and the issue occurs both with and without opencl presumably because it s getting the images from the cache if i switch to version in the darkroom and view from the duplicate manager both images look the same as they did in my original example above i e is fuzzy if i clear the cache before re entering darktable and start off with in the darkroom the above results are reproduced i e is fuzzy
binary_label: 1
Record 5
Unnamed: 0: 2,094
id: 4,931,386,429
type: IssuesEvent
created_at: 2016-11-28 10:01:30
repo: tomlutzenberger/frontal-coding-guideline
repo_url: https://api.github.com/repos/tomlutzenberger/frontal-coding-guideline
action: opened
title: GIT Hooks
labels: enhancement process quality suggestion
body:
Konfiguration, die es bei mangelnder Qualität/nicht bestandenen Tests unmöglich macht zu pushen/mergen.
index: 1.0
text_combine:
GIT Hooks - Konfiguration, die es bei mangelnder Qualität/nicht bestandenen Tests unmöglich macht zu pushen/mergen.
label: process
text:
git hooks konfiguration die es bei mangelnder qualität nicht bestandenen tests unmöglich macht zu pushen mergen
binary_label: 1
Record 6
Unnamed: 0: 498,046
id: 14,399,332,079
type: IssuesEvent
created_at: 2020-12-03 10:46:49
repo: gnosis/conditional-tokens-explorer
repo_url: https://api.github.com/repos/gnosis/conditional-tokens-explorer
action: closed
title: Condition Id field is cleared out when select a position to merge with for a position with 2 conditions
labels: Medium priority QA Passed bug verify in production
body:
Related to #650, #668, #571, #598, #512 **Steps:** 1. create positions with 2 or more conditions 2. Open Merge positions page 3. Select a position with 2 or more conditions in the Select position section 4. Select a positoin in the Merge with section **AR:** Condition Id field is cleared out. However, user is able to select a condition when expand the Condition ID dropdown (see the [video](https://drive.google.com/file/d/1_Xmnvo6TQmZRxmlkGGf4jDYnq_cO5wvd/view)) 'Merge' button is enabled in this case. When I click on it (with 'empty' conditionID field), I get an [error ](https://rinkeby.etherscan.io/tx/0x6f967b37eb769a1dc098ade1185a9e2c9f7ef745d2c68803c20ee88cf45e15a4) Also, **"strange' price is displayed for a couple of seconds when change a position** (it is also shown on the video) **ER:** Condition Id field displays the condition ID
index: 1.0
text_combine:
Condition Id field is cleared out when select a position to merge with for a position with 2 conditions - Related to #650, #668, #571, #598, #512 **Steps:** 1. create positions with 2 or more conditions 2. Open Merge positions page 3. Select a position with 2 or more conditions in the Select position section 4. Select a positoin in the Merge with section **AR:** Condition Id field is cleared out. However, user is able to select a condition when expand the Condition ID dropdown (see the [video](https://drive.google.com/file/d/1_Xmnvo6TQmZRxmlkGGf4jDYnq_cO5wvd/view)) 'Merge' button is enabled in this case. When I click on it (with 'empty' conditionID field), I get an [error ](https://rinkeby.etherscan.io/tx/0x6f967b37eb769a1dc098ade1185a9e2c9f7ef745d2c68803c20ee88cf45e15a4) Also, **"strange' price is displayed for a couple of seconds when change a position** (it is also shown on the video) **ER:** Condition Id field displays the condition ID
label: non_process
text:
condition id field is cleared out when select a position to merge with for a position with conditions related to steps create positions with or more conditions open merge positions page select a position with or more conditions in the select position section select a positoin in the merge with section ar condition id field is cleared out however user is able to select a condition when expand the condition id dropdown see the merge button is enabled in this case when i click on it with empty conditionid field i get an also strange price is displayed for a couple of seconds when change a position it is also shown on the video er condition id field displays the condition id
binary_label: 0
Record 7
Unnamed: 0: 157,043
id: 12,344,134,608
type: IssuesEvent
created_at: 2020-05-15 06:14:00
repo: celery/celery
repo_url: https://api.github.com/repos/celery/celery
action: closed
title: kombu.exceptions.OperationalError: Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists
labels: Status: Needs Testcase ✘
body:
<!-- Please fill this template entirely and do not erase parts of it. We reserve the right to close without a response bug reports which are incomplete. --> # Checklist <!-- To check an item on the list replace [ ] with [x]. --> - [ ] I have verified that the issue exists against the `master` branch of Celery. - [x] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first. - [x] I have read the relevant section in the [contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs) on reporting bugs. - [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22) for similar or identical bug reports. - [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22) for existing proposed fixes. - [ ] I have checked the [commit log](https://github.com/celery/celery/commits/master) to find out if the bug was already fixed in the master branch. - [ ] I have included all related issues and possible duplicate issues in this issue (If there are none, check this box anyway). ## Mandatory Debugging Information - [ ] I have included the output of ``celery -A proj report`` in the issue. (if you are not able to do this, then at least specify the Celery version affected). - [ ] I have verified that the issue exists against the `master` branch of Celery. - [ ] I have included the contents of ``pip freeze`` in the issue. - [ ] I have included all the versions of all the external dependencies required to reproduce this bug. ## Optional Debugging Information <!-- Try some of the below if you think they are relevant. It will help us figure out the scope of the bug and how many users it affects. --> - [ ] I have tried reproducing the issue on more than one Python version and/or implementation. - [ ] I have tried reproducing the issue on more than one message broker and/or result backend. - [ ] I have tried reproducing the issue on more than one version of the message broker and/or result backend. - [ ] I have tried reproducing the issue on more than one operating system. - [ ] I have tried reproducing the issue on more than one workers pool. - [ ] I have tried reproducing the issue with autoscaling, retries, ETA/Countdown & rate limits disabled. - [ ] I have tried reproducing the issue after downgrading and/or upgrading Celery and its dependencies. ## Related Issues and Possible Duplicates <!-- Please make sure to search and mention any related issues or possible duplicates to this issue as requested by the checklist above. This may or may not include issues in other repositories that the Celery project maintains or other repositories that are dependencies of Celery. If you don't know how to mention issues, please refer to Github's documentation on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests --> #### Related Issues Has recently been used celery for multi-process concurrent tasks, met a very difficult problem, I tried to solve, but to collect a variety of methods, failed to solve my problem. Problem: Celery multitasking program, the task using inherited classes encapsulate Celery. 
Task type of task to complete my task, task, of course, include my success and failure of rewriting method.My program is not important, of course, just a program will appear, **kombu. Exceptions. OperationalError** mistake, **Cannot route message for exchange 'reply.celery.pidbox': Table emply or key no longer exists**, I'll find related to explain the key as redis reply. Celery. Pidbox ousted, lead to the routing problem, which I doubt is redis configuration problem, I tried to existing have been fighting for using redis cluster, will quote us pidbox ousted, same problem.Then I was celery there may be some problems, please the great god still hope to give directions, thank you very much #### Possible Duplicates - None ## Environment & Settings <!-- Include the contents of celery --version below --> **Celery version**: python 3.6.5 celery 4.3.0 redis 3.2.1 (Using the cluster) kombu 4.6.2 (4.6.4 4.6.5 Try to release) <!-- Include the output of celery -A proj report below --> <details> <summary><b><code>celery report</code> Output:</b></summary> <p> ``` kombu.exceptions.OperationalError: Cannot route message for exchange 'reply.celery.pidbox': Table emply or key no longer exists RuntimeError: pubsub connection not set: did you forget to call subscribe() or psubscribe()? ``` </p> </details> # Steps to Reproduce ## Required Dependencies <!-- Please fill the required dependencies to reproduce this issue --> * **Minimal Python Version**: N/A or Unknown * **Minimal Celery Version**: N/A or Unknown * **Minimal Kombu Version**: N/A or Unknown * **Minimal Broker Version**: N/A or Unknown * **Minimal Result Backend Version**: N/A or Unknown * **Minimal OS and/or Kernel Version**: N/A or Unknown * **Minimal Broker Client Version**: N/A or Unknown * **Minimal Result Backend Client Version**: N/A or Unknown ### Python Packages <!-- Please fill the contents of pip freeze below --> <details> <summary><b><code>pip freeze</code> Output:</b></summary> <p> ``` ``` </p> </details> ### Other Dependencies <!-- Please provide system dependencies, configuration files and other dependency information if applicable --> <details> <p> N/A </p> </details> ## Minimally Reproducible Test Case <!-- Please provide a reproducible test case. Refer to the Reporting Bugs section in our contribution guide. We prefer submitting test cases in the form of a PR to our integration test suite. If you can provide one, please mention the PR number below. If not, please attach the most minimal code example required to reproduce the issue below. If the test case is too large, please include a link to a gist or a repository below. --> <details> <p> ```python ``` </p> </details> # Expected Behavior <!-- Describe in detail what you expect to happen --> # Actual Behavior <!-- Describe in detail what actually happened. Please include a backtrace and surround it with triple backticks (```). In addition, include the Celery daemon logs, the broker logs, the result backend logs and system logs below if they will help us debug the issue. -->
index: 1.0
text_combine:
kombu.exceptions.OperationalError: Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists - <!-- Please fill this template entirely and do not erase parts of it. We reserve the right to close without a response bug reports which are incomplete. --> # Checklist <!-- To check an item on the list replace [ ] with [x]. --> - [ ] I have verified that the issue exists against the `master` branch of Celery. - [x] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first. - [x] I have read the relevant section in the [contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs) on reporting bugs. - [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22) for similar or identical bug reports. - [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22) for existing proposed fixes. - [ ] I have checked the [commit log](https://github.com/celery/celery/commits/master) to find out if the bug was already fixed in the master branch. - [ ] I have included all related issues and possible duplicate issues in this issue (If there are none, check this box anyway). ## Mandatory Debugging Information - [ ] I have included the output of ``celery -A proj report`` in the issue. (if you are not able to do this, then at least specify the Celery version affected). - [ ] I have verified that the issue exists against the `master` branch of Celery. - [ ] I have included the contents of ``pip freeze`` in the issue. - [ ] I have included all the versions of all the external dependencies required to reproduce this bug. ## Optional Debugging Information <!-- Try some of the below if you think they are relevant. It will help us figure out the scope of the bug and how many users it affects. --> - [ ] I have tried reproducing the issue on more than one Python version and/or implementation. - [ ] I have tried reproducing the issue on more than one message broker and/or result backend. - [ ] I have tried reproducing the issue on more than one version of the message broker and/or result backend. - [ ] I have tried reproducing the issue on more than one operating system. - [ ] I have tried reproducing the issue on more than one workers pool. - [ ] I have tried reproducing the issue with autoscaling, retries, ETA/Countdown & rate limits disabled. - [ ] I have tried reproducing the issue after downgrading and/or upgrading Celery and its dependencies. ## Related Issues and Possible Duplicates <!-- Please make sure to search and mention any related issues or possible duplicates to this issue as requested by the checklist above. This may or may not include issues in other repositories that the Celery project maintains or other repositories that are dependencies of Celery. If you don't know how to mention issues, please refer to Github's documentation on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests --> #### Related Issues Has recently been used celery for multi-process concurrent tasks, met a very difficult problem, I tried to solve, but to collect a variety of methods, failed to solve my problem. Problem: Celery multitasking program, the task using inherited classes encapsulate Celery. 
Task type of task to complete my task, task, of course, include my success and failure of rewriting method.My program is not important, of course, just a program will appear, **kombu. Exceptions. OperationalError** mistake, **Cannot route message for exchange 'reply.celery.pidbox': Table emply or key no longer exists**, I'll find related to explain the key as redis reply. Celery. Pidbox ousted, lead to the routing problem, which I doubt is redis configuration problem, I tried to existing have been fighting for using redis cluster, will quote us pidbox ousted, same problem.Then I was celery there may be some problems, please the great god still hope to give directions, thank you very much #### Possible Duplicates - None ## Environment & Settings <!-- Include the contents of celery --version below --> **Celery version**: python 3.6.5 celery 4.3.0 redis 3.2.1 (Using the cluster) kombu 4.6.2 (4.6.4 4.6.5 Try to release) <!-- Include the output of celery -A proj report below --> <details> <summary><b><code>celery report</code> Output:</b></summary> <p> ``` kombu.exceptions.OperationalError: Cannot route message for exchange 'reply.celery.pidbox': Table emply or key no longer exists RuntimeError: pubsub connection not set: did you forget to call subscribe() or psubscribe()? ``` </p> </details> # Steps to Reproduce ## Required Dependencies <!-- Please fill the required dependencies to reproduce this issue --> * **Minimal Python Version**: N/A or Unknown * **Minimal Celery Version**: N/A or Unknown * **Minimal Kombu Version**: N/A or Unknown * **Minimal Broker Version**: N/A or Unknown * **Minimal Result Backend Version**: N/A or Unknown * **Minimal OS and/or Kernel Version**: N/A or Unknown * **Minimal Broker Client Version**: N/A or Unknown * **Minimal Result Backend Client Version**: N/A or Unknown ### Python Packages <!-- Please fill the contents of pip freeze below --> <details> <summary><b><code>pip freeze</code> Output:</b></summary> <p> ``` ``` </p> </details> ### Other Dependencies <!-- Please provide system dependencies, configuration files and other dependency information if applicable --> <details> <p> N/A </p> </details> ## Minimally Reproducible Test Case <!-- Please provide a reproducible test case. Refer to the Reporting Bugs section in our contribution guide. We prefer submitting test cases in the form of a PR to our integration test suite. If you can provide one, please mention the PR number below. If not, please attach the most minimal code example required to reproduce the issue below. If the test case is too large, please include a link to a gist or a repository below. --> <details> <p> ```python ``` </p> </details> # Expected Behavior <!-- Describe in detail what you expect to happen --> # Actual Behavior <!-- Describe in detail what actually happened. Please include a backtrace and surround it with triple backticks (```). In addition, include the Celery daemon logs, the broker logs, the result backend logs and system logs below if they will help us debug the issue. -->
label: non_process
text:
kombu exceptions operationalerror cannot route message for exchange reply celery pidbox table empty or key no longer exists please fill this template entirely and do not erase parts of it we reserve the right to close without a response bug reports which are incomplete checklist to check an item on the list replace with i have verified that the issue exists against the master branch of celery this has already been asked to the first i have read the relevant section in the on reporting bugs i have checked the for similar or identical bug reports i have checked the for existing proposed fixes i have checked the to find out if the bug was already fixed in the master branch i have included all related issues and possible duplicate issues in this issue if there are none check this box anyway mandatory debugging information i have included the output of celery a proj report in the issue if you are not able to do this then at least specify the celery version affected i have verified that the issue exists against the master branch of celery i have included the contents of pip freeze in the issue i have included all the versions of all the external dependencies required to reproduce this bug optional debugging information try some of the below if you think they are relevant it will help us figure out the scope of the bug and how many users it affects i have tried reproducing the issue on more than one python version and or implementation i have tried reproducing the issue on more than one message broker and or result backend i have tried reproducing the issue on more than one version of the message broker and or result backend i have tried reproducing the issue on more than one operating system i have tried reproducing the issue on more than one workers pool i have tried reproducing the issue with autoscaling retries eta countdown rate limits disabled i have tried reproducing the issue after downgrading and or upgrading celery and its dependencies related issues and possible duplicates please make sure to search and mention any related issues or possible duplicates to this issue as requested by the checklist above this may or may not include issues in other repositories that the celery project maintains or other repositories that are dependencies of celery if you don t know how to mention issues please refer to github s documentation on the subject related issues has recently been used celery for multi process concurrent tasks met a very difficult problem i tried to solve but to collect a variety of methods failed to solve my problem problem celery multitasking program the task using inherited classes encapsulate celery task type of task to complete my task task of course include my success and failure of rewriting method my program is not important of course just a program will appear kombu exceptions operationalerror mistake cannot route message for exchange reply celery pidbox table emply or key no longer exists i ll find related to explain the key as redis reply celery pidbox ousted lead to the routing problem which i doubt is redis configuration problem i tried to existing have been fighting for using redis cluster will quote us pidbox ousted same problem then i was celery there may be some problems please the great god still hope to give directions thank you very much possible duplicates none environment settings celery version python celery redis (using the cluster) kombu try to release celery report output kombu exceptions operationalerror: cannot route message for exchange reply celery 
pidbox table emply or key no longer exists runtimeerror pubsub connection not set did you forget to call subscribe or psubscribe steps to reproduce required dependencies minimal python version n a or unknown minimal celery version n a or unknown minimal kombu version n a or unknown minimal broker version n a or unknown minimal result backend version n a or unknown minimal os and or kernel version n a or unknown minimal broker client version n a or unknown minimal result backend client version n a or unknown python packages pip freeze output other dependencies please provide system dependencies configuration files and other dependency information if applicable n a minimally reproducible test case please provide a reproducible test case refer to the reporting bugs section in our contribution guide we prefer submitting test cases in the form of a pr to our integration test suite if you can provide one please mention the pr number below if not please attach the most minimal code example required to reproduce the issue below if the test case is too large please include a link to a gist or a repository below python expected behavior actual behavior describe in detail what actually happened please include a backtrace and surround it with triple backticks in addition include the celery daemon logs the broker logs the result backend logs and system logs below if they will help us debug the issue
binary_label: 0
Record 8
Unnamed: 0: 2,009
id: 4,832,728,401
type: IssuesEvent
created_at: 2016-11-08 08:34:47
repo: woesterduolf/Mission-reisbureau
repo_url: https://api.github.com/repos/woesterduolf/Mission-reisbureau
action: closed
title: Hotel kiezen pagina
labels: Boekingsprocess priority: highest Type:Feature
body:
**See mockup file (page 3)** Here we have the screen for the selection of the hotel preferences. Again, we have the top banner present with the images of the cities. This time however, we do not have the advertisement banner on the left. On the left there are all the options a customer could want to choose from regarding hotel preferences. In this example I have added the amount of stars, the rating and the distance to the nearest land mark. There are however a lot more and they are accessible by scrolling down by using the vertical scroll bar. The user can select all the options he wants, they’re not mutually exclusive. All the hotels that fit the preferences that the customer has selected, show up on the right side of the screen. Because you can’t see all the hotels at once, there is a vertical scroll bar on the right of the screen. On the background of the main screen there now is a faded image that characterizes the city. Opacity is set to 30% so the image doesn’t disturb too much. Now we see the hotels themselves. Concise information about every hotel is visible in a rectangular shaped bar. On the left side of the bar is a photo of the hotel. Above that the consumer can see the hotel’s name. To the right of that there are 4 lines of that that give the general information about the hotel including the amount of stars it has and the rating of the hotel. Next to that is a map indicating the position of the hotel. This map is clickable and will lead the consumer to Google Maps where he can look at more precise information about the area. And last, there is a button that, if clicked, will take the user to the hotel and room selection page.
index: 1.0
text_combine:
Hotel kiezen pagina - **See mockup file (page 3)** Here we have the screen for the selection of the hotel preferences. Again, we have the top banner present with the images of the cities. This time however, we do not have the advertisement banner on the left. On the left there are all the options a customer could want to choose from regarding hotel preferences. In this example I have added the amount of stars, the rating and the distance to the nearest land mark. There are however a lot more and they are accessible by scrolling down by using the vertical scroll bar. The user can select all the options he wants, they’re not mutually exclusive. All the hotels that fit the preferences that the customer has selected, show up on the right side of the screen. Because you can’t see all the hotels at once, there is a vertical scroll bar on the right of the screen. On the background of the main screen there now is a faded image that characterizes the city. Opacity is set to 30% so the image doesn’t disturb too much. Now we see the hotels themselves. Concise information about every hotel is visible in a rectangular shaped bar. On the left side of the bar is a photo of the hotel. Above that the consumer can see the hotel’s name. To the right of that there are 4 lines of that that give the general information about the hotel including the amount of stars it has and the rating of the hotel. Next to that is a map indicating the position of the hotel. This map is clickable and will lead the consumer to Google Maps where he can look at more precise information about the area. And last, there is a button that, if clicked, will take the user to the hotel and room selection page.
label: process
text:
hotel kiezen pagina see mockup file page here we have the screen for the selection of the hotel preferences again we have the top banner present with the images of the cities this time however we do not have the advertisement banner on the left on the left there are all the options a customer could want to choose from regarding hotel preferences in this example i have added the amount of stars the rating and the distance to the nearest land mark there are however a lot more and they are accessible by scrolling down by using the vertical scroll bar the user can select all the options he wants they’re not mutually exclusive all the hotels that fit the preferences that the customer has selected show up on the right side of the screen because you can’t see all the hotels at once there is a vertical scroll bar on the right of the screen on the background of the main screen there now is a faded image that characterizes the city opacity is set to so the image doesn’t disturb too much now we see the hotels themselves concise information about every hotel is visible in a rectangular shaped bar on the left side of the bar is a photo of the hotel above that the consumer can see the hotel’s name to the right of that there are lines of that that give the general information about the hotel including the amount of stars it has and the rating of the hotel next to that is a map indicating the position of the hotel this map is clickable and will lead the consumer to google maps where he can look at more precise information about the area and last there is a button that if clicked will take the user to the hotel and room selection page
binary_label: 1
Record 9
Unnamed: 0: 12,508
id: 14,962,121,304
type: IssuesEvent
created_at: 2021-01-27 08:50:10
repo: laurent-daniel-utt/MeshIneBits
repo_url: https://api.github.com/repos/laurent-daniel-utt/MeshIneBits
action: reopened
title: add a reserved space for each bit in order to be able to hold during cutting
labels: enhancement preprocessor
body:
At bit population stage, each bit should be checked for having a reserved space in order to be able to be hold by prehensor during cutting process. The size of reserved space should be a parameter. At the moment, it will only be used when preprocessing in order to add preprocessing information like a potential cut line along the reserved space. Later this would be use in a different way, the patern population algorithms will be able to check if they can use a bit without having to cur that part or not therefore driving to a two lengh bit possibility, one with that reserved space cut, another with that space kept. Implication on machine process being different, in a first implementation every bit will see that reserved space cut.
index: 1.0
text_combine:
add a reserved space for each bit in order to be able to hold during cutting - At bit population stage, each bit should be checked for having a reserved space in order to be able to be hold by prehensor during cutting process. The size of reserved space should be a parameter. At the moment, it will only be used when preprocessing in order to add preprocessing information like a potential cut line along the reserved space. Later this would be use in a different way, the patern population algorithms will be able to check if they can use a bit without having to cur that part or not therefore driving to a two lengh bit possibility, one with that reserved space cut, another with that space kept. Implication on machine process being different, in a first implementation every bit will see that reserved space cut.
label: process
text:
add a reserved space for each bit in order to be able to hold during cutting at bit population stage each bit should be checked for having a reserved space in order to be able to be hold by prehensor during cutting process the size of reserved space should be a parameter at the moment it will only be used when preprocessing in order to add preprocessing information like a potential cut line along the reserved space later this would be use in a different way the patern population algorithms will be able to check if they can use a bit without having to cur that part or not therefore driving to a two lengh bit possibility one with that reserved space cut another with that space kept implication on machine process being different in a first implementation every bit will see that reserved space cut
binary_label: 1
Record 10
Unnamed: 0: 528,569
id: 15,369,902,359
type: IssuesEvent
created_at: 2021-03-02 08:04:29
repo: ballerina-platform/ballerina-lang
repo_url: https://api.github.com/repos/ballerina-platform/ballerina-lang
action: closed
title: [Debugger] Byte array element values are shown as 'unknown'
labels: Area/Debugger Priority/High Team/DevTools Type/Bug
body:
**Description:** Please refer the below sample code. Ballerina byte array values are shown as `unkown` in here. ![Screenshot from 2021-03-01 10-40-46](https://user-images.githubusercontent.com/29032600/109454719-d7336300-7a7a-11eb-9f20-cc5afe001fe3.png) **Steps to reproduce:** **Affected Versions:** **OS, DB, other environment details and versions:** **Related Issues (optional):** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> **Suggested Labels (optional):** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees (optional):** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
index: 1.0
text_combine:
[Debugger] Byte array element values are shown as 'unknown' - **Description:** Please refer the below sample code. Ballerina byte array values are shown as `unkown` in here. ![Screenshot from 2021-03-01 10-40-46](https://user-images.githubusercontent.com/29032600/109454719-d7336300-7a7a-11eb-9f20-cc5afe001fe3.png) **Steps to reproduce:** **Affected Versions:** **OS, DB, other environment details and versions:** **Related Issues (optional):** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. --> **Suggested Labels (optional):** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees (optional):** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
label: non_process
text:
byte array element values are shown as unknown description please refer the below sample code ballerina byte array values are shown as unkown in here steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional
binary_label: 0
Record 11
Unnamed: 0: 650,584
id: 21,409,865,991
type: IssuesEvent
created_at: 2022-04-22 03:55:41
repo: wso2/product-apim
repo_url: https://api.github.com/repos/wso2/product-apim
action: reopened
title: Removing x-wso2-request-interceptor in API Definition
labels: Type/Bug Priority/Normal
body:
### Description: Automatically removing the x-wso2-request-interceptor (in the API level) property from the generated API resource when creating an API by using a swagger file with the **x-wso2-request-interceptor** in the API level. ### Steps to reproduce: - Create an API by using a small swagger file with the **x-wso2-request-interceptor** property at the API level. - After creating the API, Go to the publisher portal home page and click on the **API Definitions** section - Check the generated API definition and able to see the provided **x-wso2-request-interceptor** through the swagger file is removed automatically. ### Affected Product Version: 3.2.0
index: 1.0
text_combine:
Removing x-wso2-request-interceptor in API Definition - ### Description: Automatically removing the x-wso2-request-interceptor (in the API level) property from the generated API resource when creating an API by using a swagger file with the **x-wso2-request-interceptor** in the API level. ### Steps to reproduce: - Create an API by using a small swagger file with the **x-wso2-request-interceptor** property at the API level. - After creating the API, Go to the publisher portal home page and click on the **API Definitions** section - Check the generated API definition and able to see the provided **x-wso2-request-interceptor** through the swagger file is removed automatically. ### Affected Product Version: 3.2.0
label: non_process
text:
removing x request interceptor in api definition description automatically removing the x request interceptor in the api level property from the generated api resource when creating an api by using a swagger file with the x request interceptor in the api level steps to reproduce create an api by using a small swagger file with the x request interceptor property at the api level after creating the api go to the publisher portal home page and click on the api definitions section check the generated api definition and able to see the provided x request interceptor through the swagger file is removed automatically affected product version
binary_label: 0
Record 12
Unnamed: 0: 69,083
id: 22,142,689,219
type: IssuesEvent
created_at: 2022-06-03 08:37:15
repo: MarcusWolschon/osmeditor4android
repo_url: https://api.github.com/repos/MarcusWolschon/osmeditor4android
action: opened
title: Object search that doesn't return any results creates an error notification
labels: Defect Minor
body:
An object search that doesn't find any objects shows an error toast that then leads to an error notification. An empty result should naturally just be a warning toast without a notification.
index: 1.0
text_combine:
Object search that doesn't return any results creates an error notification - An object search that doesn't find any objects shows an error toast that then leads to an error notification. An empty result should naturally just be a warning toast without a notification.
label: non_process
text:
object search that doesn t return any results creates an error notification an object search that doesn t find any objects shows an error toast that then leads to an error notification an empty result should naturally just be a warning toast without a notification
binary_label: 0
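With binary_label in place, the natural downstream use of records like these is binary text classification. A stratified train/test split keeps the process / non_process ratio intact in both partitions; a sketch assuming scikit-learn and the df from the loading snippet above:

```python
from sklearn.model_selection import train_test_split

# Stratify on binary_label so both splits preserve the class ratio.
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["binary_label"], random_state=42
)
print(train_df["binary_label"].value_counts(normalize=True))
```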
Record 13
Unnamed: 0: 15,773
id: 19,915,937,393
type: IssuesEvent
created_at: 2022-01-25 22:40:17
repo: medic/cht-core
repo_url: https://api.github.com/repos/medic/cht-core
action: opened
title: Release 3.15.0
labels: Type: Internal process
body:
# Planning - Product Manager - [ ] Create a GH Milestone for the release. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (egs: CouchDB, node, minimum browser versions), broken backwards compatibility in an api, or a major visual update that requires user retraining. - [ ] Add all the issues to be worked on to the Milestone. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bug fixes. - [ ] Identify any features and improvements in the release that need end-user documentation (beyond eng team documentation improvements) and create corresponding issues in the cht-docs repo - [ ] Assign an engineer as Release Engineer for this release. # Development - Release Engineer When development is ready to begin one of the engineers should be nominated as a Release Engineer. They will be responsible for making sure the following tasks are completed though not necessarily completing them. - [ ] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`. - [ ] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://docs.communityhealthtoolkit.org/core/guides/update-dependencies/). This should be done early in the release cycle so find a volunteer to take this on and assign it to them. - [ ] Write an update in the weekly Product Team call agenda summarising development and acceptance testing progress and identifying any blockers. The release Engineer is to update this every week until the version is released. # Releasing - Release Engineer Once all issues have passed acceptance testing and have been merged into `master` release testing can begin. - [ ] Create a new release branch from `master` named `<major>.<minor>.x` in `cht-core`. Post a message to #development using this template: ``` @core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks! ``` - [ ] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing. - [ ] Create a new document in the [release-notes folder](https://github.com/medic/cht-core/tree/master/release-notes) in `master`. Ensure all issues are in the GH Milestone, that they're correctly labelled (in particular: they have the right Type, "UI/UX" if they change the UI, and "Breaking change" if appropriate), and have human readable descriptions. Use [this script](https://github.com/medic/cht-core/blob/master/scripts/release-notes) to export the issues into our release note format. Manually document any known migration steps and known issues. Provide description, screenshots, videos, and anything else to help communicate particularly important changes. Document any required or recommended upgrades to our other products (eg: medic-conf, medic-gateway, medic-android). Assign the PR to a) the Director of Technology, and b) an SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient. 
- [ ] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta. - [ ] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/cht-core/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release. - [ ] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>` - [ ] Review the scalability results on S3 at medic-e2e/scalability/$TAG_NAME. Add the release `.jtl` file to `cht-core/tests/scalability/previous_results`. Compare the trend using the [scalability documentation](https://github.com/medic/cht-core/blob/master/tests/scalability/README.md). - [ ] Upgrade the `demo-cht.dev` instance to this version. - [ ] Add the release to the [Supported versions](https://docs.communityhealthtoolkit.org/core/overview/supported-software/) and update the EOL date and status of previous releases. - [ ] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category using this template: ``` @channel *We're excited to announce the release of {{version}} of {{product}}* New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs. Read the release notes for full details: {{url}} Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our software support documentation: https://docs.communityhealthtoolkit.org/core/overview/supported-software/ See what's scheduled for the next releases: https://github.com/medic/cht-core/milestones ``` - [ ] Mark this issue "done" and close the Milestone.
1.0
Release 3.15.0 - # Planning - Product Manager - [ ] Create a GH Milestone for the release. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (egs: CouchDB, node, minimum browser versions), broken backwards compatibility in an api, or a major visual update that requires user retraining. - [ ] Add all the issues to be worked on to the Milestone. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bug fixes. - [ ] Identify any features and improvements in the release that need end-user documentation (beyond eng team documentation improvements) and create corresponding issues in the cht-docs repo - [ ] Assign an engineer as Release Engineer for this release. # Development - Release Engineer When development is ready to begin one of the engineers should be nominated as a Release Engineer. They will be responsible for making sure the following tasks are completed though not necessarily completing them. - [ ] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`. - [ ] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://docs.communityhealthtoolkit.org/core/guides/update-dependencies/). This should be done early in the release cycle so find a volunteer to take this on and assign it to them. - [ ] Write an update in the weekly Product Team call agenda summarising development and acceptance testing progress and identifying any blockers. The release Engineer is to update this every week until the version is released. # Releasing - Release Engineer Once all issues have passed acceptance testing and have been merged into `master` release testing can begin. - [ ] Create a new release branch from `master` named `<major>.<minor>.x` in `cht-core`. Post a message to #development using this template: ``` @core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks! ``` - [ ] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing. - [ ] Create a new document in the [release-notes folder](https://github.com/medic/cht-core/tree/master/release-notes) in `master`. Ensure all issues are in the GH Milestone, that they're correctly labelled (in particular: they have the right Type, "UI/UX" if they change the UI, and "Breaking change" if appropriate), and have human readable descriptions. Use [this script](https://github.com/medic/cht-core/blob/master/scripts/release-notes) to export the issues into our release note format. Manually document any known migration steps and known issues. Provide description, screenshots, videos, and anything else to help communicate particularly important changes. Document any required or recommended upgrades to our other products (eg: medic-conf, medic-gateway, medic-android). Assign the PR to a) the Director of Technology, and b) an SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient. 
- [ ] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta. - [ ] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/cht-core/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release. - [ ] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>` - [ ] Review the scalability results on S3 at medic-e2e/scalability/$TAG_NAME. Add the release `.jtl` file to `cht-core/tests/scalability/previous_results`. Compare the trend using the [scalability documentation](https://github.com/medic/cht-core/blob/master/tests/scalability/README.md). - [ ] Upgrade the `demo-cht.dev` instance to this version. - [ ] Add the release to the [Supported versions](https://docs.communityhealthtoolkit.org/core/overview/supported-software/) and update the EOL date and status of previous releases. - [ ] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category using this template: ``` @channel *We're excited to announce the release of {{version}} of {{product}}* New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs. Read the release notes for full details: {{url}} Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our software support documentation: https://docs.communityhealthtoolkit.org/core/overview/supported-software/ See what's scheduled for the next releases: https://github.com/medic/cht-core/milestones ``` - [ ] Mark this issue "done" and close the Milestone.
process
release planning product manager create a gh milestone for the release we use so if there are breaking changes increment the major otherwise if there are new features increment the minor otherwise increment the service pack breaking changes in our case relate to updated software requirements egs couchdb node minimum browser versions broken backwards compatibility in an api or a major visual update that requires user retraining add all the issues to be worked on to the milestone ideally each minor release will have one or two features a handful of improvements and plenty of bug fixes identify any features and improvements in the release that need end user documentation beyond eng team documentation improvements and create corresponding issues in the cht docs repo assign an engineer as release engineer for this release development release engineer when development is ready to begin one of the engineers should be nominated as a release engineer they will be responsible for making sure the following tasks are completed though not necessarily completing them set the version number in package json and package lock json and submit a pr the easiest way to do this is to use npm no git tag version version raise a new issue called update dependencies for with a description that links to this should be done early in the release cycle so find a volunteer to take this on and assign it to them write an update in the weekly product team call agenda summarising development and acceptance testing progress and identifying any blockers the release engineer is to update this every week until the version is released releasing release engineer once all issues have passed acceptance testing and have been merged into master release testing can begin create a new release branch from master named x in cht core post a message to development using this template core devs i ve just created the x release branch please be aware that any further changes intended for this release will have to be merged to master then backported thanks build a beta named beta by pushing a git tag and when ci completes successfully notify the qa team that it s ready for release testing create a new document in the in master ensure all issues are in the gh milestone that they re correctly labelled in particular they have the right type ui ux if they change the ui and breaking change if appropriate and have human readable descriptions use to export the issues into our release note format manually document any known migration steps and known issues provide description screenshots videos and anything else to help communicate particularly important changes document any required or recommended upgrades to our other products eg medic conf medic gateway medic android assign the pr to a the director of technology and b an sre to review and confirm the documentation on upgrade instructions and breaking changes is sufficient until release testing passes make sure regressions are fixed in master cherry pick them into the release branch and release another beta create a release in github from the release branch so it shows up under the with the naming convention this will create the git tag automatically link to the release notes in the description of the release confirm the release build completes successfully and the new release is available on the make sure that the document has new entry with id medic medic review the scalability results on at medic scalability tag name add the release jtl file to cht core tests scalability previous results compare 
the trend using the upgrade the demo cht dev instance to this version add the release to the and update the eol date and status of previous releases announce the release on the under the product releases category using this template channel we re excited to announce the release of version of product new features include key features we ve also implemented loads of other improvements and fixed a heap of bugs read the release notes for full details url following our support policy versions versions are no longer supported projects running these versions should start planning to upgrade in the near future for more details read our software support documentation see what s scheduled for the next releases mark this issue done and close the milestone
1
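For the semver rule the planning checklist above relies on, a minimal sketch of the increment logic may help. This is an illustration only, not part of the cht-core release tooling, and the `bump` helper name is invented here.

```python
# Sketch of the semver rule from the planning checklist: a breaking change
# bumps the major, a new feature bumps the minor, otherwise bump the patch.
def bump(version: str, change: str) -> str:
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "breaking":
        return f"{major + 1}.0.0"
    if change == "feature":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

# The 3.15.0 release above is a minor bump from the 3.14.x line.
assert bump("3.14.2", "feature") == "3.15.0"
assert bump("3.15.0", "breaking") == "4.0.0"
```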
5,390
8,213,358,996
IssuesEvent
2018-09-04 19:17:21
GoogleCloudPlatform/google-cloud-python
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
opened
Bigquery: 'test_extract_table' snippet, bucket creation flakes with 500
api: bigquery flaky testing type: process
Similar to #5746, #5747, #5748, but with a 500 error instead of a 429. See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7904 (first error in `snippets-2-7` run). ```python ______________________________ test_extract_table ______________________________ client = <google.cloud.bigquery.client.Client object at 0x7f2f23608f50> to_delete = [] def test_extract_table(client, to_delete): from google.cloud import storage bucket_name = 'extract_shakespeare_{}'.format(_millis()) storage_client = storage.Client() > bucket = retry_429(storage_client.create_bucket)(bucket_name) ../docs/bigquery/snippets.py:1986: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../test_utils/test_utils/retry.py:95: in wrapped_function return to_wrap(*args, **kwargs) ../storage/google/cloud/storage/client.py:285: in create_bucket bucket.create(client=self, project=project) ../storage/google/cloud/storage/bucket.py:309: in create data=properties, _target_object=self) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.storage._http.Connection object at 0x7f2f23596050> method = 'POST', path = '/b', query_params = {'project': 'precise-truck-742'} data = '{"name": "extract_shakespeare_1536076519795"}' content_type = 'application/json', headers = None, api_base_url = None api_version = None, expect_json = True _target_object = <Bucket: extract_shakespeare_1536076519795> def api_request(self, method, path, query_params=None, data=None, content_type=None, headers=None, api_base_url=None, api_version=None, expect_json=True, _target_object=None): """Make a request over the HTTP transport to the API. You shouldn't need to use this method, but if you plan to interact with the API using these primitives, this is the correct one to use. :type method: str :param method: The HTTP method name (ie, ``GET``, ``POST``, etc). Required. :type path: str :param path: The path to the resource (ie, ``'/b/bucket-name'``). Required. :type query_params: dict or list :param query_params: A dictionary of keys and values (or list of key-value pairs) to insert into the query string of the URL. :type data: str :param data: The data to send as the body of the request. Default is the empty string. :type content_type: str :param content_type: The proper MIME type of the data provided. Default is None. :type headers: dict :param headers: extra HTTP headers to be sent with the request. :type api_base_url: str :param api_base_url: The base URL for the API endpoint. Typically you won't have to provide this. Default is the standard API base URL. :type api_version: str :param api_version: The version of the API to call. Typically you shouldn't provide this and instead use the default for the library. Default is the latest API version supported by google-cloud-python. :type expect_json: bool :param expect_json: If True, this method will try to parse the response as JSON and raise an exception if that cannot be done. Default is True. :type _target_object: :class:`object` :param _target_object: (Optional) Protected argument to be used by library callers. This can allow custom behavior, for example, to defer an HTTP request and complete initialization of the object at a later time. :raises ~google.cloud.exceptions.GoogleCloudError: if the response code is not 200 OK. :raises ValueError: if the response content type is not JSON. :rtype: dict or str :returns: The API response payload, either as a raw string or a dictionary if the response is valid JSON. 
""" url = self.build_api_url(path=path, query_params=query_params, api_base_url=api_base_url, api_version=api_version) # Making the executive decision that any dictionary # data will be sent properly as JSON. if data and isinstance(data, dict): data = json.dumps(data) content_type = 'application/json' response = self._make_request( method=method, url=url, data=data, content_type=content_type, headers=headers, target_object=_target_object) if not 200 <= response.status_code < 300: > raise exceptions.from_http_response(response) E InternalServerError: 500 POST https://www.googleapis.com/storage/v1/b?project=precise-truck-742: Backend Error ```
1.0
Bigquery: 'test_extract_table' snippet, bucket creation flakes with 500 - Similar to #5746, #5747, #5748, but with a 500 error instead of a 429. See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7904 (first error in `snippets-2-7` run). ```python ______________________________ test_extract_table ______________________________ client = <google.cloud.bigquery.client.Client object at 0x7f2f23608f50> to_delete = [] def test_extract_table(client, to_delete): from google.cloud import storage bucket_name = 'extract_shakespeare_{}'.format(_millis()) storage_client = storage.Client() > bucket = retry_429(storage_client.create_bucket)(bucket_name) ../docs/bigquery/snippets.py:1986: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../test_utils/test_utils/retry.py:95: in wrapped_function return to_wrap(*args, **kwargs) ../storage/google/cloud/storage/client.py:285: in create_bucket bucket.create(client=self, project=project) ../storage/google/cloud/storage/bucket.py:309: in create data=properties, _target_object=self) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.storage._http.Connection object at 0x7f2f23596050> method = 'POST', path = '/b', query_params = {'project': 'precise-truck-742'} data = '{"name": "extract_shakespeare_1536076519795"}' content_type = 'application/json', headers = None, api_base_url = None api_version = None, expect_json = True _target_object = <Bucket: extract_shakespeare_1536076519795> def api_request(self, method, path, query_params=None, data=None, content_type=None, headers=None, api_base_url=None, api_version=None, expect_json=True, _target_object=None): """Make a request over the HTTP transport to the API. You shouldn't need to use this method, but if you plan to interact with the API using these primitives, this is the correct one to use. :type method: str :param method: The HTTP method name (ie, ``GET``, ``POST``, etc). Required. :type path: str :param path: The path to the resource (ie, ``'/b/bucket-name'``). Required. :type query_params: dict or list :param query_params: A dictionary of keys and values (or list of key-value pairs) to insert into the query string of the URL. :type data: str :param data: The data to send as the body of the request. Default is the empty string. :type content_type: str :param content_type: The proper MIME type of the data provided. Default is None. :type headers: dict :param headers: extra HTTP headers to be sent with the request. :type api_base_url: str :param api_base_url: The base URL for the API endpoint. Typically you won't have to provide this. Default is the standard API base URL. :type api_version: str :param api_version: The version of the API to call. Typically you shouldn't provide this and instead use the default for the library. Default is the latest API version supported by google-cloud-python. :type expect_json: bool :param expect_json: If True, this method will try to parse the response as JSON and raise an exception if that cannot be done. Default is True. :type _target_object: :class:`object` :param _target_object: (Optional) Protected argument to be used by library callers. This can allow custom behavior, for example, to defer an HTTP request and complete initialization of the object at a later time. :raises ~google.cloud.exceptions.GoogleCloudError: if the response code is not 200 OK. :raises ValueError: if the response content type is not JSON. 
:rtype: dict or str :returns: The API response payload, either as a raw string or a dictionary if the response is valid JSON. """ url = self.build_api_url(path=path, query_params=query_params, api_base_url=api_base_url, api_version=api_version) # Making the executive decision that any dictionary # data will be sent properly as JSON. if data and isinstance(data, dict): data = json.dumps(data) content_type = 'application/json' response = self._make_request( method=method, url=url, data=data, content_type=content_type, headers=headers, target_object=_target_object) if not 200 <= response.status_code < 300: > raise exceptions.from_http_response(response) E InternalServerError: 500 POST https://www.googleapis.com/storage/v1/b?project=precise-truck-742: Backend Error ```
process
bigquery test extract table snippet bucket creation flakes with similar to but with a error instead of a see first error in snippets run python test extract table client to delete def test extract table client to delete from google cloud import storage bucket name extract shakespeare format millis storage client storage client bucket retry storage client create bucket bucket name docs bigquery snippets py test utils test utils retry py in wrapped function return to wrap args kwargs storage google cloud storage client py in create bucket bucket create client self project project storage google cloud storage bucket py in create data properties target object self self method post path b query params project precise truck data name extract shakespeare content type application json headers none api base url none api version none expect json true target object def api request self method path query params none data none content type none headers none api base url none api version none expect json true target object none make a request over the http transport to the api you shouldn t need to use this method but if you plan to interact with the api using these primitives this is the correct one to use type method str param method the http method name ie get post etc required type path str param path the path to the resource ie b bucket name required type query params dict or list param query params a dictionary of keys and values or list of key value pairs to insert into the query string of the url type data str param data the data to send as the body of the request default is the empty string type content type str param content type the proper mime type of the data provided default is none type headers dict param headers extra http headers to be sent with the request type api base url str param api base url the base url for the api endpoint typically you won t have to provide this default is the standard api base url type api version str param api version the version of the api to call typically you shouldn t provide this and instead use the default for the library default is the latest api version supported by google cloud python type expect json bool param expect json if true this method will try to parse the response as json and raise an exception if that cannot be done default is true type target object class object param target object optional protected argument to be used by library callers this can allow custom behavior for example to defer an http request and complete initialization of the object at a later time raises google cloud exceptions googleclouderror if the response code is not ok raises valueerror if the response content type is not json rtype dict or str returns the api response payload either as a raw string or a dictionary if the response is valid json url self build api url path path query params query params api base url api base url api version api version making the executive decision that any dictionary data will be sent properly as json if data and isinstance data dict data json dumps data content type application json response self make request method method url url data data content type content type headers headers target object target object if not response status code raise exceptions from http response response e internalservererror post backend error
1
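Since the failure above is the same flake as #5746-#5748 but with a 500, one plausible fix is to widen the snippet's `retry_429` wrapper so transient server errors are retried as well. Below is a sketch using `google.api_core`, whose `Retry` and `if_exception_type` helpers do exist; treating a 500 as retryable is a judgment call here, not an official recommendation, and the backoff numbers are arbitrary.

```python
# Widen the retry policy to cover both 429s and transient 500s.
from google.api_core import exceptions, retry

retry_transient = retry.Retry(
    predicate=retry.if_exception_type(
        exceptions.TooManyRequests,      # 429, what retry_429 already handles
        exceptions.InternalServerError,  # 500 "Backend Error" seen above
    ),
    initial=1.0,    # first backoff in seconds
    maximum=60.0,   # cap between attempts
    deadline=120.0, # give up after two minutes overall
)

def create_bucket_with_retry(storage_client, bucket_name):
    # Wrap the flaky call so both error classes are retried with backoff.
    return retry_transient(storage_client.create_bucket)(bucket_name)
```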
342,645
10,320,047,033
IssuesEvent
2019-08-30 19:17:36
thirtybees/thirtybees
https://api.github.com/repos/thirtybees/thirtybees
closed
PHP translation BUG (maybe?)
Bug Estimate: L Priority: low
Again, I am not sure whether to consider it a bug. But when you have a space after the opening and before the closing parenthesis, the string won't be recognized as a translatable string: $this->l('Cart block') will work, but this won't: $this->l( 'Cart block' )
1.0
PHP translation BUG (maybe?) - Again, I am not sure whether to consider it a bug. But when you have a space after the opening and before the closing parenthesis, the string won't be recognized as a translatable string: $this->l('Cart block') will work, but this won't: $this->l( 'Cart block' )
non_process
php translation bug maybe again i am not sure whether to consider it a bug but when you have a space after the opening and before the closing parenthesis the string won t be recognized as a translatable string this l cart block will work but this won t this l cart block
0
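The behaviour described above is typical of a regex-based string extractor whose pattern does not allow whitespace inside the call parentheses. The sketch below reproduces the symptom and shows a whitespace-tolerant pattern; it is a standalone Python illustration, and the real thirty bees extractor (which is PHP) may use a different pattern entirely.

```python
# Reproduce the extraction gap: a strict pattern misses padded parentheses,
# a whitespace-tolerant one does not.
import re

strict = re.compile(r"->l\('((?:[^'\\]|\\.)*)'\)")
tolerant = re.compile(r"->l\(\s*'((?:[^'\\]|\\.)*)'\s*\)")

src = "$this->l( 'Cart block' )"
print(strict.findall(src))    # [] -- the space defeats the strict pattern
print(tolerant.findall(src))  # ['Cart block']
```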
17,164
22,742,639,453
IssuesEvent
2022-07-07 06:06:17
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
Improve GO:0033638 modulation by symbiont of host response to heat
multi-species process
GO:0033638 modulation by symbiont of host response to heat has a single annotation, from a paper that describes how a symbiont changes the temperature range under which its host can live (PMID:17425405, PMID:35350854) The @geneontology/multiorganism-working-group would like to improve the term label, definition and placement in the ontology. Thanks, Pascale
1.0
Improve GO:0033638 modulation by symbiont of host response to heat - GO:0033638 modulation by symbiont of host response to heat has a single annotation, from a paper that describes how a symbiont changes the temperature range under which its host can live (PMID:17425405, PMID:35350854) The @geneontology/multiorganism-working-group would like to improve the term label, definition and placement in the ontology. Thanks, Pascale
process
improve go modulation by symbiont of host response to heat go modulation by symbiont of host response to heat has a single annotation from a paper that describes how a symbiont changes the temperature range under which its host can live pmid pmid the geneontology multiorganism working group would like to improve the term label definition and placement in the ontology thanks pascale
1
2,407
5,193,205,819
IssuesEvent
2017-01-22 17:02:13
raphym/Simulation_of_message_routing_by_intelligent_agents
https://api.github.com/repos/raphym/Simulation_of_message_routing_by_intelligent_agents
opened
traceroute between quorums with DFS
being processed
I have to find: - the routes between the backbones - the routes within the quorums I will use the DFS algorithm
1.0
traceroute between quorums with DFS - I have to find: - the routes between the backbones - the routes within the quorums I will use the DFS algorithm
process
traceroute between quorums with dfs i have to find the routes between the backbones the routes within the quorums i will use the dfs algorithm
1
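A minimal sketch of the DFS route enumeration the issue plans to use: it yields every simple path between two nodes, which covers both the backbone routes and the routes within a quorum. The graph shape and node names are illustrative assumptions, not taken from the project.

```python
# Enumerate all simple paths between two nodes with depth-first search.
def dfs_routes(graph, start, goal, path=None):
    path = (path or []) + [start]
    if start == goal:
        yield path
        return
    for neighbor in graph.get(start, ()):
        if neighbor not in path:  # avoid cycles: simple paths only
            yield from dfs_routes(graph, neighbor, goal, path)

backbone = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(list(dfs_routes(backbone, "A", "D")))
# [['A', 'B', 'D'], ['A', 'C', 'D']]
```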
15,180
18,953,037,277
IssuesEvent
2021-11-18 16:59:52
Bodmer/TFT_eSPI
https://api.github.com/repos/Bodmer/TFT_eSPI
closed
TFT_eSPI - ESP32-S2 - ST7789
To do: enhancement Compatibility update New processor variant
Hi, As Arduino ESP32 Core 2.0.0 is out, I would like to use TFT_eSPI for my ESP32-S2 Saola and my [adafruit 240x320 ST7789](https://www.adafruit.com/product/4311). With the same hardware, I can use the Adafruit_ST7789 driver using ```Adafruit_ST7789 tft = Adafruit_ST7789(TFT_CS, TFT_DC, TFT_MOSI, TFT_SCLK, TFT_RST);``` But with TFT_eSPI nothing is displayed; I only get a black screen. Here is the result of Read_user_setup.ino ``` 10:35:09.139 -> [code] 10:35:09.139 -> TFT_eSPI ver = 2.3.70 10:35:09.139 -> Processor = ESP32 10:35:09.139 -> Frequency = 240MHz 10:35:09.176 -> Transactions = Yes 10:35:09.176 -> Interface = SPI 10:35:09.176 -> Display driver = 7789 10:35:09.176 -> Display width = 240 10:35:09.176 -> Display height = 320 10:35:09.176 -> 10:35:09.176 -> MOSI = GPIO 35 10:35:09.176 -> MISO = GPIO 36 10:35:09.176 -> SCK = GPIO 37 10:35:09.176 -> TFT_CS = GPIO 34 10:35:09.176 -> TFT_DC = GPIO 33 10:35:09.176 -> 10:35:09.176 -> Font GLCD loaded 10:35:09.176 -> Font 2 loaded 10:35:09.176 -> Font 4 loaded 10:35:09.176 -> Font 6 loaded 10:35:09.176 -> Font 7 loaded 10:35:09.176 -> Font 8 loaded 10:35:09.176 -> Smooth font enabled 10:35:09.176 -> 10:35:09.176 -> Display SPI frequency = 36.00 10:35:09.176 -> [/code] ``` My User_Setup_ESP32-S2.h is ```` #define ST7789_DRIVER // Full configuration option, define additional parameters below for this display #define TFT_RGB_ORDER TFT_BGR // Colour order Blue-Green-Red #define TFT_WIDTH 240 // ST7789 240 x 240 and 240 x 320 #define TFT_HEIGHT 320 // ST7789 240 x 320 #define TFT_MOSI 35 #define TFT_MISO 36 #define TFT_SCLK 37 #define TFT_CS 34 // Chip select control pin #define TFT_DC 33 // Data Command control pin #define TFT_RST -1 // Set TFT_RST to -1 if display RESET is connected to ESP32 board RST #define LOAD_GLCD // Font 1. Original Adafruit 8 pixel font needs ~1820 bytes in FLASH #define LOAD_FONT2 // Font 2. Small 16 pixel high font, needs ~3534 bytes in FLASH, 96 characters #define LOAD_FONT4 // Font 4. Medium 26 pixel high font, needs ~5848 bytes in FLASH, 96 characters #define LOAD_FONT6 // Font 6. Large 48 pixel font, needs ~2666 bytes in FLASH, only characters 1234567890:-.apm #define LOAD_FONT7 // Font 7. 7 segment 48 pixel font, needs ~2438 bytes in FLASH, only characters 1234567890:-. #define LOAD_FONT8 // Font 8. Large 75 pixel font needs ~3256 bytes in FLASH, only characters 1234567890:-. #define LOAD_GFXFF // FreeFonts. Include access to the 48 Adafruit_GFX free fonts FF1 to FF48 and custom fonts #define SMOOTH_FONT #define SPI_FREQUENCY 8000000L #define USE_HSPI_PORT ```` With Test_readWrite it seems I can write to the screen but not read back, so I am not even sure the write happened! ```` 10:44:27.478 -> Pixel value written = 80 10:44:27.478 -> Pixel value read = 0 10:44:27.478 -> ERROR ^^^^ 10:44:27.973 -> Pixel value written = 100 10:44:27.973 -> Pixel value read = 0 10:44:27.973 -> ERROR ^^^^ ```` I tried changing the frequency, without results. Pin definitions seem OK to me, as they are the same ones I use with Adafruit_ST7789. I also tried the ST7789_2_DRIVER. I don't get any compilation errors. I use ```#define USE_HSPI_PORT``` as proposed in #807. Maybe there is something more. Do you have any ideas? Thank you very much
1.0
TFT_eSPI - ESP32-S2 - ST7789 - Hi, As Arduino ESP32 Core 2.0.0 is out, I would like to use TFT_eSPI for my ESP32-2 Saola and my [adafruit 240x320 ST7789](https://www.adafruit.com/product/4311). With the same hardware, I can use Adafruit_ST7789 driver using ```Adafruit_ST7789 tft = Adafruit_ST7789(TFT_CS, TFT_DC, TFT_MOSI, TFT_SCLK, TFT_RST);``` But with TFT_eSPI nothing is display. I only have a black screen. Here is the result of Read_user_setup.ino ``` 10:35:09.139 -> [code] 10:35:09.139 -> TFT_eSPI ver = 2.3.70 10:35:09.139 -> Processor = ESP32 10:35:09.139 -> Frequency = 240MHz 10:35:09.176 -> Transactions = Yes 10:35:09.176 -> Interface = SPI 10:35:09.176 -> Display driver = 7789 10:35:09.176 -> Display width = 240 10:35:09.176 -> Display height = 320 10:35:09.176 -> 10:35:09.176 -> MOSI = GPIO 35 10:35:09.176 -> MISO = GPIO 36 10:35:09.176 -> SCK = GPIO 37 10:35:09.176 -> TFT_CS = GPIO 34 10:35:09.176 -> TFT_DC = GPIO 33 10:35:09.176 -> 10:35:09.176 -> Font GLCD loaded 10:35:09.176 -> Font 2 loaded 10:35:09.176 -> Font 4 loaded 10:35:09.176 -> Font 6 loaded 10:35:09.176 -> Font 7 loaded 10:35:09.176 -> Font 8 loaded 10:35:09.176 -> Smooth font enabled 10:35:09.176 -> 10:35:09.176 -> Display SPI frequency = 36.00 10:35:09.176 -> [/code] ``` My User_Setup_ESP32-S2.h is ```` #define ST7789_DRIVER // Full configuration option, define additional parameters below for this display #define TFT_RGB_ORDER TFT_BGR // Colour order Blue-Green-Red #define TFT_WIDTH 240 // ST7789 240 x 240 and 240 x 320 #define TFT_HEIGHT 320 // ST7789 240 x 320 #define TFT_MOSI 35 #define TFT_MISO 36 #define TFT_SCLK 37 #define TFT_CS 34 // Chip select control pin #define TFT_DC 33 // Data Command control pin #define TFT_RST -1 // Set TFT_RST to -1 if display RESET is connected to ESP32 board RST #define LOAD_GLCD // Font 1. Original Adafruit 8 pixel font needs ~1820 bytes in FLASH #define LOAD_FONT2 // Font 2. Small 16 pixel high font, needs ~3534 bytes in FLASH, 96 characters #define LOAD_FONT4 // Font 4. Medium 26 pixel high font, needs ~5848 bytes in FLASH, 96 characters #define LOAD_FONT6 // Font 6. Large 48 pixel font, needs ~2666 bytes in FLASH, only characters 1234567890:-.apm #define LOAD_FONT7 // Font 7. 7 segment 48 pixel font, needs ~2438 bytes in FLASH, only characters 1234567890:-. #define LOAD_FONT8 // Font 8. Large 75 pixel font needs ~3256 bytes in FLASH, only characters 1234567890:-. #define LOAD_GFXFF // FreeFonts. Include access to the 48 Adafruit_GFX free fonts FF1 to FF48 and custom fonts #define SMOOTH_FONT #define SPI_FREQUENCY 8000000L #define USE_HSPI_PORT ```` With Test_readWrite I assume, I can write to the screen but not read. So I am not very sure, I wrote ! ```` 10:44:27.478 -> Pixel value written = 80 10:44:27.478 -> Pixel value read = 0 10:44:27.478 -> ERROR ^^^^ 10:44:27.973 -> Pixel value written = 100 10:44:27.973 -> Pixel value read = 0 10:44:27.973 -> ERROR ^^^^ ```` I try to change frequency without results. Pins definitions seem ok for me as they are the same I use with Adafruit_ST7789. I also try the ST7789_2_DRIVER. I don't have any compilation error I use ```#define USE_HSPI_PORT``` as proposed in #807. Maybe there is something more Do you have any ideas ? Thank you very much
process
tft espi hi as arduino core is out i would like to use tft espi for my saola and my with the same hardware i can use adafruit driver using adafruit tft adafruit tft cs tft dc tft mosi tft sclk tft rst but with tft espi nothing is display i only have a black screen here is the result of read user setup ino tft espi ver processor frequency transactions yes interface spi display driver display width display height mosi gpio miso gpio sck gpio tft cs gpio tft dc gpio font glcd loaded font loaded font loaded font loaded font loaded font loaded smooth font enabled display spi frequency my user setup h is define driver full configuration option define additional parameters below for this display define tft rgb order tft bgr colour order blue green red define tft width x and x define tft height x define tft mosi define tft miso define tft sclk define tft cs chip select control pin define tft dc data command control pin define tft rst set tft rst to if display reset is connected to board rst define load glcd font original adafruit pixel font needs bytes in flash define load font small pixel high font needs bytes in flash characters define load font medium pixel high font needs bytes in flash characters define load font large pixel font needs bytes in flash only characters apm define load font segment pixel font needs bytes in flash only characters define load font large pixel font needs bytes in flash only characters define load gfxff freefonts include access to the adafruit gfx free fonts to and custom fonts define smooth font define spi frequency define use hspi port with test readwrite i assume i can write to the screen but not read so i am not very sure i wrote pixel value written pixel value read error pixel value written pixel value read error i try to change frequency without results pins definitions seem ok for me as they are the same i use with adafruit i also try the driver i don t have any compilation error i use define use hspi port as proposed in maybe there is something more do you have any ideas thank you very much
1
15,364
19,536,052,360
IssuesEvent
2021-12-31 07:09:45
apache/iotdb
https://api.github.com/repos/apache/iotdb
closed
Hope null-value filling in down-sampled aggregation queries supports the avg function
Module - Query Processing Priority - Middle
When filling null values in down-sampled (group-by) aggregation queries, only the last_value aggregation function is currently supported; the avg function raises an error: Msg: 411: Meet error in query process: Group By Fill only support last_value function. Hope the avg function can be supported too.
1.0
Hope null-value filling in down-sampled aggregation queries supports the avg function - When filling null values in down-sampled (group-by) aggregation queries, only the last_value aggregation function is currently supported; the avg function raises an error: Msg: 411: Meet error in query process: Group By Fill only support last_value function. Hope the avg function can be supported too.
process
hope null value filling in down sampled aggregation queries supports the avg function when filling null values in down sampled group by aggregation queries only the last value aggregation function is currently supported the avg function raises an error msg meet error in query process group by fill only support last value function hope the avg function can be supported too
1
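Until the server supports avg as a group-by-fill function, a client-side workaround sketch is shown below: fetch the raw points and down-sample with pandas, filling the empty buckets by interpolation instead of last_value. The data here is synthetic, and fetching rows from a real IoTDB session is out of scope for this sketch.

```python
import pandas as pd

# Synthetic stand-in for points fetched from IoTDB (names assumed).
idx = pd.to_datetime(["2021-12-01 00:00", "2021-12-01 00:03"])
series = pd.Series([10.0, 16.0], index=idx)

# Down-sample to 1-minute buckets; empty buckets become NaN, then fill
# them with interpolated averages instead of carrying last_value forward.
filled = series.resample("1min").mean().interpolate()
print(filled)  # 10.0, 12.0, 14.0, 16.0
```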
9,874
12,886,267,575
IssuesEvent
2020-07-13 09:13:29
deepset-ai/haystack
https://api.github.com/repos/deepset-ai/haystack
closed
File upload with Asian languages triggers warning in language detection
preprocessing question
To the author: I have uploaded a text file containing Chinese and Thai, but the system showed "The language for file-uploads/ec87658d3f7c4756a99c986b2f9ab558_Duterte.txt is not one of ['']. The file may not have been decoded in the correct text format." Is that normal? I just want some suggestions. Thank you! ![3](https://user-images.githubusercontent.com/8537280/86444224-0904f600-bd43-11ea-918d-7657bb3c97b7.PNG)
1.0
File upload with Asian languages triggers warning in language detection - To Author: I have uploaded the text file with Chinese and Thai, but the system showed "The language for file-uploads/ec87658d3f7c4756a99c986b2f9ab558_Duterte.txt is not one of ['']. The file may not have been decoded in the correct text format." Is that normal ? I just want some suggestions. Thank you! ![3](https://user-images.githubusercontent.com/8537280/86444224-0904f600-bd43-11ea-918d-7657bb3c97b7.PNG)
process
file upload with asian languages triggers warning in language detection to author i have uploaded the text file with chinese and thai but the system showed the language for file uploads duterte txt is not one of the file may not have been decoded in the correct text format is that normal i just want some suggestions thank you
1
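The `['']` in the warning suggests the accepted-language list was effectively configured as a single empty string. Below is a sketch of the kind of pre-check that produces such a warning, using langdetect; whether Haystack uses exactly this library internally is an assumption here, and `file_language_ok` is an invented helper name.

```python
# pip install langdetect
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def file_language_ok(path, valid_languages=("en",)):
    """Return True if the file's detected language is in the accepted list."""
    with open(path, encoding="utf-8") as handle:
        text = handle.read()
    try:
        return detect(text) in valid_languages  # e.g. 'zh-cn', 'th', 'en'
    except LangDetectException:  # empty or undecodable content
        return False
```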
65,656
7,892,661,273
IssuesEvent
2018-06-28 15:35:41
ian-james/IFS
https://api.github.com/repos/ian-james/IFS
opened
Display Survey results and subcategories to stakeholders [EPIC]
design-discussion feature-request
Can be split as needed for individual tasks.
1.0
Display Survey results and subcategories to stakeholders [EPIC] - Can be split as needed for individual tasks.
non_process
display survey results and subcategories to stakeholders can be split as needed for individual tasks
0
8,873
11,965,292,879
IssuesEvent
2020-04-05 22:47:17
Arch666Angel/mods
https://api.github.com/repos/Arch666Angel/mods
closed
[BUG] Puffer Refugium Graphical Glitch
Angels Bio Processing Wont Fix
The Puffer Refugium icon has a magic pipe connection that appears to work across an air gap (for puffer atmosphere input) <img width="401" alt="Screen Shot 2020-04-05 at 10 51 23 AM" src="https://user-images.githubusercontent.com/12788951/78503319-8660a080-772b-11ea-8e2f-23c380074048.png">
1.0
[BUG] Puffer Refugium Graphical Glitch - The Puffer Refugium icon has a magic pipe connection that appears to work across an air gap (for puffer atmosphere input) <img width="401" alt="Screen Shot 2020-04-05 at 10 51 23 AM" src="https://user-images.githubusercontent.com/12788951/78503319-8660a080-772b-11ea-8e2f-23c380074048.png">
process
puffer refugium graphical glitch the puffer refugium icon has a magic pipe connection that appears to work across an air gap for puffer atmosphere input img width alt screen shot at am src
1
238,755
7,782,786,226
IssuesEvent
2018-06-06 07:51:58
javaee/servlet-spec
https://api.github.com/repos/javaee/servlet-spec
closed
Need way to track progress of requests; proposal included
Component: Listeners Priority: Major Type: Improvement multipart progressbar upload
Servlet 3.0 added multipart request processing to, in part, make handling file uploads easier (and easier it did make it). Before 3.0, many users used Commons FileUpload to accomplish this task. However, 3.0's multipart processing did not, unfortunately, completely eliminate the need for FileUpload. One of the major features lacking is the ability to track the progress of a large request. This feature is sometimes called "file upload progress," but that name is misleading. It's actually "request progress," and it's the ability to measure and periodically report the number of bytes actually received versus the number of bytes indicated in the "Content-Length" header. As I propose below, I believe this should be relatively easy to add to the servlet spec, relatively easy to implement, and quite easy to use. As proposed, this is independent of protocol (not strictly tied to HTTP/multipart/Content-Length). That could be changed, but I think this makes sense. First, create a new interface: ``` **ServletRequestProgressListener** package javax.servlet; public interface ServletRequestProgressListener { /** * Called whenever the number of bytes read changes, at least every 64 kilobytes. * * @param bytesRead The number of bytes that have been read so far, at least 0 * @param bytesExpected The number of bytes expected to be read, -1 if unknown * @param itemsRead The number of items (parts in an HTTP multipart) processed so far */ void update(long bytesRead, long bytesExpected, int itemsRead); /** * Called whenever the request has ended, either by being canceled or completed, * normally or abnormally. */ void destroy(); } ``` Next, add a method to <tt>ServletRequest</tt>: ``` **ServletRequest**... /** * Attaches a progress listener to this request. Progress listeners must be attached in * a filter, before the request gets to the Servlet, in order to be effective. * * @param progressListener The progress listener to update when the bytes read increases * @throws UnsupportedOperationException if the protocol does not support progress listeners */ void setProgressListener(ServletRequestProgressListener progressListener); ... ``` Because the listener can be a source of performance problems, containers would only be required to call <tt>update</tt> (1) when first attached, and (2) every 64 kilobytes. Containers may call it more often, but do not have to. As proposed, I estimate 30 minutes to create the proposed interfaces and 1.5 hours to update the servlet specification documentation. Should only take 2-3 hours to add to the Tomcat implementation based on my examination of the code. Can't speak for the other implementations. Using multipart as the primary example, since multipart processing is completed before the Servlet gets the request, the listener would have to be attached in a filter. A typical use case would be to create a listener and add it to a session so that it can later be queried by some Ajax call: ``` **Pseudo-Code**... public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) { if(request-is-large) { MyProgressListener listener = new MyProgressListener(request); request.getSession().setAttribute("progressListener", listener); // HttpSession uses setAttribute request.setProgressListener(listener); } } ... ``` #### Environment n/a
1.0
Need way to track progress of requests; proposal included - Servlet 3.0 added multipart request processing to, in part, make handling file uploads easier (and easier it did make it). Before 3.0, many users used Commons FileUpload to accomplish this task. However, 3.0's multipart processing did not, unfortunately, completely eliminate the need for FileUpload. One of the major features lacking is the ability to track the progress of a large request. This feature is sometimes called "file upload progress," but that name is misleading. It's actually "request progress," and it's the ability to measure and periodically report the number of bytes actually received versus the number of bytes indicated in the "Content-Length" header. As I propose below, I believe this should be relatively easy to add to the servlet spec, relatively easy to implement, and quite easy to use. As proposed, this is independent of protocol (not strictly tied to HTTP/multipart/Content-Length). That could be changed, but I think this makes sense. First, create a new interface: ``` **ServletRequestProgressListener**package javax.servlet; public interface ServletRequestProgressListener { /** * Called whenever the number of bytes read changes, at least every 64 kilobytes. * * @param bytesRead The number of bytes that have been read so far, at least 0 * @param bytesExpected The number of bytes expected to be read, -1 if unknown * @param itemsRead The number of items (parts in an HTTP multipart) processed so far */ void update(long bytesRead, long bytesExpected, int itemsRead); /** * Called whenever the request has ended, either by being canceled or completed, * normally or abnormally. */ void destroy(); } ``` Next, add a method to <tt>ServletRequest</tt>: ``` **ServletRequest**... /** * Attaches a progress listener to this request. Progress listeners must be attached in * a filter, before the request gets to the Servlet, in order to be effective. * * @param progressListener The progress listener to update when the bytes read increases * @throws UnsupportedOperationException if the protocol does not support progress listeners */ void setProgressListener(ServletRequestProgressListener progressListener); ... ``` Because the listener can be a source of performance problems, containers would only be required to call <tt>update</tt> (1) when first attached, and (2) every 64 kilobytes. Containers may call it more often, but do not have to. As proposed, I estimate 30 minutes to create the proposed interfaces and 1.5 hours to update the servlet specification documentation. Should only take 2-3 hours to add to the Tomcat implementation based on my examination of the code. Can't speak for the other implementations. Using multipart as the primary example, since multipart processing is completed before the Servlet gets the request, the listener would have to be attached in a filter. A typical use case would be to create a listener and add it to a session so that it can later be queried by some Ajax call: ``` **Psuedo-Code**... public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) { if(request-is-large) { MyProgressListener listener = new MyProgressListener(request); request.getSession().addAttribute("progressListener", listener); request.setProgressListener(listener); } } ... ``` #### Environment n/a
non_process
need way to track progress of requests proposal included servlet added multipart request processing to in part make handling file uploads easier and easier it did make it before many users used commons fileupload to accomplish this task however s multipart processing did not unfortunately completely eliminate the need for fileupload one of the major features lacking is the ability to track the progress of a large request this feature is sometimes called file upload progress but that name is misleading it s actually request progress and it s the ability to measure and periodically report the number of bytes actually received versus the number of bytes indicated in the content length header as i propose below i believe this should be relatively easy to add to the servlet spec relatively easy to implement and quite easy to use as proposed this is independent of protocol not strictly tied to http multipart content length that could be changed but i think this makes sense first create a new interface servletrequestprogresslistener package javax servlet public interface servletrequestprogresslistener called whenever the number of bytes read changes at least every kilobytes param bytesread the number of bytes that have been read so far at least param bytesexpected the number of bytes expected to be read if unknown param itemsread the number of items parts in an http multipart processed so far void update long bytesread long bytesexpected int itemsread called whenever the request has ended either by being canceled or completed normally or abnormally void destroy next add a method to servletrequest servletrequest attaches a progress listener to this request progress listeners must be attached in a filter before the request gets to the servlet in order to be effective param progresslistener the progress listener to update when the bytes read increases throws unsupportedoperationexception if the protocol does not support progress listeners void setprogresslistener servletrequestprogresslistener progresslistener because the listener can be a source of performance problems containers would only be required to call update when first attached and every kilobytes containers may call it more often but do not have to as proposed i estimate minutes to create the proposed interfaces and hours to update the servlet specification documentation should only take hours to add to the tomcat implementation based on my examination of the code can t speak for the other implementations using multipart as the primary example since multipart processing is completed before the servlet gets the request the listener would have to be attached in a filter a typical use case would be to create a listener and add it to a session so that it can later be queried by some ajax call psuedo code public void dofilter servletrequest request servletresponse response filterchain chain if request is large myprogresslistener listener new myprogresslistener request request getsession addattribute progresslistener listener request setprogresslistener listener environment n a
0
16,971
22,333,695,390
IssuesEvent
2022-06-14 16:30:52
GoogleCloudPlatform/cloud-ops-sandbox
https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
closed
fix make-release pipeline
type: process priority: p3
When running make release for v0.7.1, the [push-tags action](https://github.com/GoogleCloudPlatform/cloud-ops-sandbox/blob/main/.github/workflows/push-tags.yml) failed to run. I had to trigger it manually. We should fix this so that the release process is fully automated and reliable
1.0
fix make-release pipeline - When running make release for v0.7.1, the [push-tags action](https://github.com/GoogleCloudPlatform/cloud-ops-sandbox/blob/main/.github/workflows/push-tags.yml) failed to run. I had to trigger it manually. We should fix this so that the release process is fully automated and reliable
process
fix make release pipeline when running make release for the failed to run i had to trigger it manually we should fix this so that the release process is fully automated and reliable
1
19,226
25,368,890,113
IssuesEvent
2022-11-21 09:05:20
NEARWEEK/CORE
https://api.github.com/repos/NEARWEEK/CORE
opened
Create staking battleplan & start execution
Process
## 🎉 Subtasks - [ ] Create a strategy for the staking node - [ ] Map all top-tier partners we need to reach out to - [ ] Start executing the plan ## 🤼‍♂️ Reviewer @Kisgus
1.0
Create staking battleplan & start execution - ## 🎉 Subtasks - [ ] Create a strategy for the staking node - [ ] Map all top-tier partners we need to reach out to - [ ] Start executing the plan ## 🤼‍♂️ Reviewer @Kisgus
process
create staking battleplan start execution 🎉 subtasks create a strategy for the staking node map all top tier partners we need to reach out to start executing the plan 🤼‍♂️ reviewer kisgus
1
155,816
12,278,514,415
IssuesEvent
2020-05-08 10:07:32
inf112-v20/legless-crane
https://api.github.com/repos/inf112-v20/legless-crane
closed
Thorough tests of Board.java
Need tests
As we might replace Board.java, don't prioritize this currently - [x] A test for each type of tile (belt vs cog etc.)
1.0
Thorough tests of Board.java - As we might replace Board.java, don't prioritize this currently - [x] A test for each type of tile (belt vs cog etc.)
non_process
thorough tests of board java as we might replace board java don t prioritize this currently a test for each type of tile belt vs cog etc
0
108,089
9,259,587,386
IssuesEvent
2019-03-18 00:41:57
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
Etcd snapshots have .part file along with the actual snapshot
kind/bug-qa status/blocker status/resolved status/to-test team/ca version/2.0
Version: Master build from March 14th Steps: 1. Create a single node cluster with local backup config of 1hr creation time 2. Wait for the first automatic snapshot to be taken The snapshots have a .part in /opt/rke/etcd-snapshots directory ``` root@soumyasingagn1:/opt/rke/etcd-snapshots# ls -ltr total 1940 -rw-r--r-- 1 root root 1982496 Mar 14 22:06 c-smfrt-rl-bxr9h -rw-r--r-- 1 root root 0 Mar 14 22:06 c-smfrt-rl-bxr9h.part ``` 3. Take a manual backup, the .part file is present here for the manual backup too ``` root@soumyasingagn1:/opt/rke/etcd-snapshots# ls -ltr total 5936 -rw-r--r-- 1 root root 1982496 Mar 14 22:06 c-smfrt-rl-bxr9h -rw-r--r-- 1 root root 0 Mar 14 22:06 c-smfrt-rl-bxr9h.part -rw-r--r-- 1 root root 3170336 Mar 14 22:33 c-smfrt-ml-tfg64 -rw-r--r-- 1 root root 917504 Mar 14 22:33 c-smfrt-ml-tfg64.part ```
1.0
Etcd snapshots have .part file along with the actual snapshot - Version: Master build from March 14th Steps: 1. Create a single node cluster with local backup config of 1hr creation time 2. Wait for the first automatic snapshot to be taken The snapshots have a .part in /opt/rke/etcd-snapshots directory ``` root@soumyasingagn1:/opt/rke/etcd-snapshots# ls -ltr total 1940 -rw-r--r-- 1 root root 1982496 Mar 14 22:06 c-smfrt-rl-bxr9h -rw-r--r-- 1 root root 0 Mar 14 22:06 c-smfrt-rl-bxr9h.part ``` 3. Take a manual backup, the .part file is present here for the manual backup too ``` root@soumyasingagn1:/opt/rke/etcd-snapshots# ls -ltr total 5936 -rw-r--r-- 1 root root 1982496 Mar 14 22:06 c-smfrt-rl-bxr9h -rw-r--r-- 1 root root 0 Mar 14 22:06 c-smfrt-rl-bxr9h.part -rw-r--r-- 1 root root 3170336 Mar 14 22:33 c-smfrt-ml-tfg64 -rw-r--r-- 1 root root 917504 Mar 14 22:33 c-smfrt-ml-tfg64.part ```
non_process
etcd snapshots have part file along with the actual snapshot version master build from march steps create a single node cluster with local backup config of creation time wait for the first automatic snapshot to be taken the snapshots have a part in opt rke etcd snapshots directory root opt rke etcd snapshots ls ltr total rw r r root root mar c smfrt rl rw r r root root mar c smfrt rl part take a manual backup the part file is present here for the manual backup too root opt rke etcd snapshots ls ltr total rw r r root root mar c smfrt rl rw r r root root mar c smfrt rl part rw r r root root mar c smfrt ml rw r r root root mar c smfrt ml part
0
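A small diagnostic sketch for the report above: list leftover `.part` files next to their finished snapshots so stale partials are easy to spot. Paths follow the issue's layout; this is an inspection helper written for illustration, not part of Rancher or RKE.

```python
from pathlib import Path

# Inspect /opt/rke/etcd-snapshots for leftover partial files.
snapshot_dir = Path("/opt/rke/etcd-snapshots")
for part in sorted(snapshot_dir.glob("*.part")):
    final = part.with_suffix("")  # e.g. c-smfrt-rl-bxr9h.part -> c-smfrt-rl-bxr9h
    status = "finished snapshot exists" if final.exists() else "orphaned"
    print(f"{part.name}: {part.stat().st_size} bytes, {status}")
```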
14,886
18,288,329,750
IssuesEvent
2021-10-05 12:50:45
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Android] User is able to skip the force upgrade pop-up and continue using the app by clicking on 'Forgot passcode?Sign in again'
Bug P2 Android Process: Fixed Process: Tested QA Process: Tested dev
**Steps:** 1. Signup/login to Android app and create a passcode 2. Configure force upgrade from SB and publish app 3. Kill and relaunch the app/minimize the app 4. Click on 'Forgot passcode?Sign in again' link 5. User is navigated to signin. 6. Again signin/signup with new user, user is able to enroll/take activities or perform any actions. **Actual:** User is able to skip the force upgrade pop-up and continue using the app by clicking on 'Forgot passcode?Sign in again' **Expected:** Force upgrade should always be displayed and restrict user to navigate inside any other screens Refer video https://user-images.githubusercontent.com/60386291/134026653-60334695-7940-49aa-a54a-78654b069ecc.mp4
3.0
[Android] User is able to skip the force upgrade pop-up and continue using the app by clicking on 'Forgot passcode?Sign in again' - **Steps:** 1. Signup/login to Android app and create a passcode 2. Configure force upgrade from SB and publish app 3. Kill and relaunch the app/minimize the app 4. Click on 'Forgot passcode?Sign in again' link 5. User is navigated to signin. 6. Again signin/signup with new user, user is able to enroll/take activities or perform any actions. **Actual:** User is able to skip the force upgrade pop-up and continue using the app by clicking on 'Forgot passcode?Sign in again' **Expected:** Force upgrade should always be displayed and restrict user to navigate inside any other screens Refer video https://user-images.githubusercontent.com/60386291/134026653-60334695-7940-49aa-a54a-78654b069ecc.mp4
process
user is able to skip the force upgrade pop up and continue using the app by clicking on forgot passcode sign in again steps signup login to android app and create a passcode configure force upgrade from sb and publish app kill and relaunch the app minimize the app click on forgot passcode sign in again link user is navigated to signin again signin signup with new user user is able to enroll take activities or perform any actions actual user is able to skip the force upgrade pop up and continue using the app by clicking on forgot passcode sign in again expected force upgrade should always be displayed and restrict user to navigate inside any other screens refer video
1
5,670
8,556,164,795
IssuesEvent
2018-11-08 12:18:18
kiwicom/orbit-components
https://api.github.com/repos/kiwicom/orbit-components
closed
[Orbit UI v0.29.0] New & renamed illustrations
Illustrations Processing
Almost all illustrations were changed (small tweaks with positioning mostly); a few were renamed and a few were added. Hopefully, this is the full list of changes: New illustrations - Nomad - Success - Error - BusinessTravel - MobileApp - PlaceholderTours - DesktopSearch Renaming - no-bookings => NoResults - ~~timeline-transport-taxi => transport-taxi~~ - timeline-boarding => Boarding - timeline-drop-baggage => BaggageDrop - AirportTransport => TransportBus - AirportTransportTaxi => TransportTaxi [illustrations.zip](https://github.com/kiwicom/orbit-components/files/2502073/illustrations.zip) All illustrations should already be optimized by tinypng.com :)
1.0
[Orbit UI v0.29.0] New & renamed illustrations - Almost all illustrations were changed (small tweaks with positioning mostly); a few were renamed and a few were added. Hopefully, this is the full list of changes: New illustrations - Nomad - Success - Error - BusinessTravel - MobileApp - PlaceholderTours - DesktopSearch Renaming - no-bookings => NoResults - ~~timeline-transport-taxi => transport-taxi~~ - timeline-boarding => Boarding - timeline-drop-baggage => BaggageDrop - AirportTransport => TransportBus - AirportTransportTaxi => TransportTaxi [illustrations.zip](https://github.com/kiwicom/orbit-components/files/2502073/illustrations.zip) All illustrations should already be optimized by tinypng.com :)
process
new renamed illustrations almost all illustrations were changed small tweaks with positioning mostly a few were renamed and a few were added hopefully this is the full list of changes new illustrations nomad success error businesstravel mobileapp placeholdertours desktopsearch renaming no bookings noresults timeline transport taxi transport taxi timeline boarding boarding timeline drop baggage baggagedrop airporttransport transportbus airporttransporttaxi transporttaxi all illustrations should already be optimized by tinypng com
1
27,016
7,888,123,904
IssuesEvent
2018-06-27 20:53:29
Polymer/tools
https://api.github.com/repos/Polymer/tools
closed
[project-config] Uncompiled presets
Package: build Priority: Medium Status: Available Type: Bug
Hi, Is there a reason why the `uncompiled-bundled` and the `uncompiled-unbundled` presets have **es2018** in browserCapabilities instead of **es2016** in the [builds.ts](https://github.com/Polymer/tools/blob/dd1c8bbb44f37f67974fbabf878b7a495ffeb6f6/packages/project-config/src/builds.ts#L224-L249) file of project-config package? Also, as these presets are uncompiled, should they have `module` as browser capabilities? For example, this [version](https://benjaminrancourt-int.firebaseapp.com/en) of my personal website with Polymer 3 and prpl-node-server throws an error when serving the uncompiled-bundled in Firefox 60: ``` SyntaxError: expected expression, got keyword 'import' ``` as ES Modules are not yet completely supported in Firefox ([browser-capabilities](https://github.com/Polymer/tools/blob/dd1c8bbb44f37f67974fbabf878b7a495ffeb6f6/packages/browser-capabilities/src/browser-capabilities.ts#L118-L127)). Finally, in the [Polymer documentation](https://github.com/Polymer/docs/blob/master/app/3.0/toolbox/build-for-production.md), the Polymer team recommends serving the ES6 build to the browsers that support it for better performance and `uncompiled-unbundled` > is suitable for serving an implementation of the PRPL pattern to browsers that use the latest JavaScript features Are there other gains to serving the `uncompiled` builds aside from using the latest features? Let me know if you want me to make a pull request to update project-config. Thanks!
1.0
[project-config] Uncompiled presets - Hi, Is there a reason why the `uncompiled-bundled` and the `uncompiled-unbundled` presets have **es2018** in browserCapabilities instead of **es2016** in the [builds.ts](https://github.com/Polymer/tools/blob/dd1c8bbb44f37f67974fbabf878b7a495ffeb6f6/packages/project-config/src/builds.ts#L224-L249) file of project-config package? Also, as these presets are uncompiled, should they have `module` as browser capabilities? For example, this [version](https://benjaminrancourt-int.firebaseapp.com/en) of my personal website with Polymer 3 and prpl-node-server throws an error when serving the uncompiled-bundled in Firefox 60: ``` SyntaxError: expected expression, got keyword 'import' ``` as ES Modules are not yet completely supported in Firefox ([browser-capabilities](https://github.com/Polymer/tools/blob/dd1c8bbb44f37f67974fbabf878b7a495ffeb6f6/packages/browser-capabilities/src/browser-capabilities.ts#L118-L127)). Finally, in the [Polymer documentation](https://github.com/Polymer/docs/blob/master/app/3.0/toolbox/build-for-production.md), the Polymer team recommends serving the ES6 build to the browsers that support it for better performance and `uncompiled-unbundled` > is suitable for serving an implementation of the PRPL pattern to browsers that use the latest JavaScript features Are there other gains to serving the `uncompiled` builds aside from using the latest features? Let me know if you want me to make a pull request to update project-config. Thanks!
non_process
uncompiled presets hi is there a reason why the uncompiled bundled and the uncompiled unbundled presets have in browsercapabilities instead of in the file of project config package also as these presets are uncompiled should they have module as browser capabilities for example this of my personal website with polymer and prpl node server throws an error when serving the uncompiled bundled in firefox syntaxerror expected expression got keyword import as es modules are not yet completely supported in firefox finally in the the polymer team recommends serving the build to the browsers that support it for better performance and uncompiled unbundled is suitable for serving an implementation of the prpl pattern to browsers that use the latest javascript features are there other gains to serving the uncompiled builds aside from using the latest features let me know if you want me to make a pull request to update project config thanks
0
17,651
23,471,174,697
IssuesEvent
2022-08-16 22:04:22
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
reopened
Three memory copies of every dataloader cpu tensor
module: multiprocessing module: dataloader module: cuda triaged enhancement
### 🐛 Describe the bug (from slack discussion with @albanD) Is it possible to both share_memory and pin_memory for a tensor, for dataloading across shared memory with zero copies? ``` >>> x.share_memory_().pin_memory().is_shared() False >>> x.pin_memory().share_memory_().is_pinned() False ``` It seems like the answer is no, since cudaHostAlloc doesn't seem to be compatible with shared memory, which is a shame because there's implicitly three full copies of every tensor generated by a dataset: - dataset generates an example - dataloader collates it - dataloader copies it into shared memory to share with main process - memory pinning thread copies into pinned memory before transfer to device How might we reduce these extra copies? The H100 generation is going to start being impossible to feed. (sort of related to https://pytorch.slack.com/archives/C3PDTEV8E/p1652536474661579 and https://pytorch.slack.com/archives/C3PDTEV8E/p1652844740547679) ### Versions Not relevant, but I'm on PyTorch 1.11.0 and on CUDA 11.4, with 8x V100s or A100s. cc @VitalyFedyunin @SsnL @ejguan @NivekT @ngimel
1.0
Three memory copies of every dataloader cpu tensor - ### 🐛 Describe the bug (from slack discussion with @albanD) Is it possible to both share_memory and pin_memory for a tensor, for dataloading across shared memory with zero copies? ``` >>> x.share_memory_().pin_memory().is_shared() False >>> x.pin_memory().share_memory_().is_pinned() False ``` It seems like the answer is no, since cudaHostAlloc doesn't seem to be compatible with shared memory, which is a shame because there's implicitly three full copies of every tensor generated by a dataset: - dataset generates an example - dataloader collates it - dataloader copies it into shared memory to share with main process - memory pinning thread copies into pinned memory before transfer to device How might we reduce these extra copies? The H100 generation is going to start being impossible to feed. (sort of related to https://pytorch.slack.com/archives/C3PDTEV8E/p1652536474661579 and https://pytorch.slack.com/archives/C3PDTEV8E/p1652844740547679) ### Versions Not relevant, but I'm on PyTorch 1.11.0 and on CUDA 11.4, with 8x V100s or A100s. cc @VitalyFedyunin @SsnL @ejguan @NivekT @ngimel
process
three memory copies of every dataloader cpu tensor 🐛 describe the bug from slack discussion with alband is it possible to both share memory and pin memory for a tensor for dataloading across shared memory with zero copies x share memory pin memory is shared false x pin memory share memory is pinned false it seems like the answer is no since cudahostalloc doesn t seem to be compatible with shared memory which is a shame because there s implicitly three full copies of every tensor generated by a dataset dataset generates an example dataloader collates it dataloader copies it into shared memory to share with main process memory pinning thread copies into pinned memory before transfer to device how might we reduce these extra copies the generation is going to start being impossible to feed sort of related to and versions not relevant but i m on pytorch and on cuda with or cc vitalyfedyunin ssnl ejguan nivekt ngimel
1
14,506
17,604,363,042
IssuesEvent
2021-08-17 15:17:50
flancast90/Speech-To-Text-in-TW5
https://api.github.com/repos/flancast90/Speech-To-Text-in-TW5
closed
Text-editor toolbar button brainstorming
process-description
I open this issue as a reminder that we want to create a text-editor toolbar button that allows inserting the recorded text into the current tiddlers' text field. For that we need some brainstorming, and we need to collect information about the TiddlyWiki internals so that we know how we can realize this.
1.0
Text-editor toolbar button brainstorming - I open this issue as a reminder that we want to create a text-editor toolbar button that allows inserting the recorded text into the current tiddlers' text field. For that we need some brainstorming, and we need to collect information about the TiddlyWiki internals so that we know how we can realize this.
process
text editor toolbar button brainstorming i open this issue as a reminder that we want to create a text editor toolbar button that allows inserting the recorded text into the current tiddlers text field for that we need some brainstorming and we need to collect information about the tiddlywiki internals so that we know how we can realize this
1
168,364
20,754,724,701
IssuesEvent
2022-03-15 11:04:07
arngrimur/computersaysno
https://api.github.com/repos/arngrimur/computersaysno
closed
CVE-2020-8565 (Medium) detected in github.com/docker/cli-v20.10.11 - autoclosed
security vulnerability
## CVE-2020-8565 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/docker/cli-v20.10.11</b></p></summary> <p>The Docker CLI</p> <p> Dependency Hierarchy: - github.com/ory/dockertest/v3-v3.8.1 (Root Library) - :x: **github.com/docker/cli-v20.10.11** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/arngrimur/computersaysno/commit/c8980a5bef352bb4b9477331dcc940aca400e10b">c8980a5bef352bb4b9477331dcc940aca400e10b</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Kubernetes, if the logging level is set to at least 9, authorization and bearer tokens will be written to log files. This can occur both in API server logs and client tool output like kubectl. This affects <= v1.19.3, <= v1.18.10, <= v1.17.13, < v1.20.0-alpha2. <p>Publish Date: 2020-12-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8565>CVE-2020-8565</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0064">https://osv.dev/vulnerability/GO-2020-0064</a></p> <p>Release Date: 2020-12-07</p> <p>Fix Resolution: v1.20.0-alpha.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-8565 (Medium) detected in github.com/docker/cli-v20.10.11 - autoclosed - ## CVE-2020-8565 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/docker/cli-v20.10.11</b></p></summary> <p>The Docker CLI</p> <p> Dependency Hierarchy: - github.com/ory/dockertest/v3-v3.8.1 (Root Library) - :x: **github.com/docker/cli-v20.10.11** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/arngrimur/computersaysno/commit/c8980a5bef352bb4b9477331dcc940aca400e10b">c8980a5bef352bb4b9477331dcc940aca400e10b</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Kubernetes, if the logging level is set to at least 9, authorization and bearer tokens will be written to log files. This can occur both in API server logs and client tool output like kubectl. This affects <= v1.19.3, <= v1.18.10, <= v1.17.13, < v1.20.0-alpha2. <p>Publish Date: 2020-12-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8565>CVE-2020-8565</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0064">https://osv.dev/vulnerability/GO-2020-0064</a></p> <p>Release Date: 2020-12-07</p> <p>Fix Resolution: v1.20.0-alpha.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in github com docker cli autoclosed cve medium severity vulnerability vulnerable library github com docker cli the docker cli dependency hierarchy github com ory dockertest root library x github com docker cli vulnerable library found in head commit a href found in base branch main vulnerability details in kubernetes if the logging level is set to at least authorization and bearer tokens will be written to log files this can occur both in api server logs and client tool output like kubectl this affects publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution alpha step up your open source security game with whitesource
0
29,901
14,334,080,586
IssuesEvent
2020-11-27 07:24:04
pandas-dev/pandas
https://api.github.com/repos/pandas-dev/pandas
closed
PERF: regression in Series.asof with single date
Performance Regression
From https://pandas.pydata.org/speed/pandas/#timeseries.AsOf.time_asof_single?p-constructor='Series'&commits=52a17259-24e881d4&x-axis=commit&Cython=0.29.21&python=3.8 Snippet extracted from ASV: ```python N = 10000 rng = pd.date_range(start="1/1/1990", periods=N, freq="53s") s = pd.Series(np.random.randn(N), index=rng) dates = pd.date_range(start="1/1/1990", periods=N * 10, freq="5s") date = dates[0] %timeit s.asof(date) ``` On pandas 1.1 this takes around 20µs, on master I get around 110µs Commit range indicated by ASV is https://github.com/pandas-dev/pandas/compare/52a17259...24e881d4
True
PERF: regression in Series.asof with single date - From https://pandas.pydata.org/speed/pandas/#timeseries.AsOf.time_asof_single?p-constructor='Series'&commits=52a17259-24e881d4&x-axis=commit&Cython=0.29.21&python=3.8 Snippet extracted from ASV: ```python N = 10000 rng = pd.date_range(start="1/1/1990", periods=N, freq="53s") s = pd.Series(np.random.randn(N), index=rng) dates = pd.date_range(start="1/1/1990", periods=N * 10, freq="5s") date = dates[0] %timeit s.asof(date) ``` On pandas 1.1 this takes around 20µs, on master I get around 110µs Commit range indicated by ASV is https://github.com/pandas-dev/pandas/compare/52a17259...24e881d4
non_process
perf regression in series asof with single date from snippet extracted from asv python n rng pd date range start periods n freq s pd series np random randn n index rng dates pd date range start periods n freq date dates timeit s asof date on pandas this takes around on master i get around commit range indicated by asv is
0
120,133
17,644,020,786
IssuesEvent
2021-08-20 01:28:54
mariano72/node
https://api.github.com/repos/mariano72/node
opened
CVE-2020-36048 (High) detected in engine.io-3.4.0.tgz
security vulnerability
## CVE-2020-36048 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>engine.io-3.4.0.tgz</b></p></summary> <p>The realtime engine behind Socket.IO. Provides the foundation of a bidirectional connection between client and server</p> <p>Library home page: <a href="https://registry.npmjs.org/engine.io/-/engine.io-3.4.0.tgz">https://registry.npmjs.org/engine.io/-/engine.io-3.4.0.tgz</a></p> <p>Path to dependency file: node/package.json</p> <p>Path to vulnerable library: node/node_modules/engine.io/package.json</p> <p> Dependency Hierarchy: - socket.io-2.3.0.tgz (Root Library) - :x: **engine.io-3.4.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Engine.IO before 4.0.0 allows attackers to cause a denial of service (resource consumption) via a POST request to the long polling transport. <p>Publish Date: 2021-01-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36048>CVE-2020-36048</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048</a></p> <p>Release Date: 2021-01-08</p> <p>Fix Resolution: engine.io - 4.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-36048 (High) detected in engine.io-3.4.0.tgz - ## CVE-2020-36048 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>engine.io-3.4.0.tgz</b></p></summary> <p>The realtime engine behind Socket.IO. Provides the foundation of a bidirectional connection between client and server</p> <p>Library home page: <a href="https://registry.npmjs.org/engine.io/-/engine.io-3.4.0.tgz">https://registry.npmjs.org/engine.io/-/engine.io-3.4.0.tgz</a></p> <p>Path to dependency file: node/package.json</p> <p>Path to vulnerable library: node/node_modules/engine.io/package.json</p> <p> Dependency Hierarchy: - socket.io-2.3.0.tgz (Root Library) - :x: **engine.io-3.4.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Engine.IO before 4.0.0 allows attackers to cause a denial of service (resource consumption) via a POST request to the long polling transport. <p>Publish Date: 2021-01-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36048>CVE-2020-36048</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048</a></p> <p>Release Date: 2021-01-08</p> <p>Fix Resolution: engine.io - 4.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in engine io tgz cve high severity vulnerability vulnerable library engine io tgz the realtime engine behind socket io provides the foundation of a bidirectional connection between client and server library home page a href path to dependency file node package json path to vulnerable library node node modules engine io package json dependency hierarchy socket io tgz root library x engine io tgz vulnerable library vulnerability details engine io before allows attackers to cause a denial of service resource consumption via a post request to the long polling transport publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution engine io step up your open source security game with whitesource
0
4,727
7,571,408,759
IssuesEvent
2018-04-23 12:09:50
dzhw/zofar
https://api.github.com/repos/dzhw/zofar
closed
Bug in Carousel
category: technical.processes et: 3 prio: ? status: testing type: backlog.task type: bug
If two Carousels are on one page, both change the header title simultaneously when the slide in one of the carousels is changed
1.0
Bug in Carousel - If two Carousels are on one page, both change the header title simultaneously when the slide in one of the carousels is changed
process
bug in carousel if two carousels are on one page both change the header title simultaneously when the slide in one of the carousels is changed
1
2,734
5,622,580,736
IssuesEvent
2017-04-04 13:11:07
AllenFang/react-bootstrap-table
https://api.github.com/repos/AllenFang/react-bootstrap-table
reopened
Handling duplicate rows in custom modal body
inprocess
Thanks for excellent plugin! I have a custom modal body for the 'insert row' and validator for the key field. How can I check if the entered key already exists and update the validateState? Thanks for your help.
1.0
Handling duplicate rows in custom modal body - Thanks for excellent plugin! I have a custom modal body for the 'insert row' and validator for the key field. How can I check if the entered key already exists and update the validateState? Thanks for your help.
process
handling duplicate rows in custom modal body thanks for excellent plugin i have a custom modal body for the insert row and validator for the key field how can i check if the entered key already exists and update the validatestate thanks for your help
1
17,699
23,549,054,045
IssuesEvent
2022-08-21 14:59:46
divertingPan/divertingPan.github.io
https://api.github.com/repos/divertingPan/divertingPan.github.io
opened
Popular science | What do the images on your computer actually look like | 老潘家的潘老师
Gitalk /post/basic_digital_image_processing/
https://divertingpan.github.io/post/basic_digital_image_processing/ Using image-editing software inevitably means using a computer (whether a phone or a PC, the principle is the same). Today Teacher Pan dissects images from a scientific angle to understand some of the mechanisms behind them, so that using Photoshop afterwards will make more sense. Today's article focuses on popular science and doesn't contain much hands-on material. Even if the reader has no computer...
1.0
Popular science | What do the images on your computer actually look like | 老潘家的潘老师 - https://divertingpan.github.io/post/basic_digital_image_processing/ Using image-editing software inevitably means using a computer (whether a phone or a PC, the principle is the same). Today Teacher Pan dissects images from a scientific angle to understand some of the mechanisms behind them, so that using Photoshop afterwards will make more sense. Today's article focuses on popular science and doesn't contain much hands-on material. Even if the reader has no computer...
process
popular science what do the images on your computer actually look like 老潘家的潘老师 using image editing software inevitably means using a computer whether a phone or a pc the principle is the same today teacher pan dissects images from a scientific angle to understand some of the mechanisms behind them so that using photoshop afterwards will make more sense today s article focuses on popular science and doesn t contain much hands on material even if the reader has no computer
1
698,076
23,964,747,583
IssuesEvent
2022-09-12 23:03:24
apache/hudi
https://api.github.com/repos/apache/hudi
closed
[SUPPORT]Hudi Inserts and Upserts for MoR and CoW tables are taking very long time.
performance priority:major
**_Tips before filing an issue_** - Have you gone through our [FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)? - Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apache.org. - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly. Hi Team, I was testing Hudi for doing inserts/updates/deletes on data in S3. Below are benchmark metrics captured so far on varied data sizes: Run 1 - Fresh Insert ----------------------- Total Data size = 7 GB COW = 22 mins MOR = 25 mins Run 2 - Upsert -------------------- Total Data Size=6.7 GB COW = 61 mins MOR = 64 mins Run 3 - Upsert ------------------- Total Data size: 2.5 GB COW = 39 mins MOR = 53 mins Below are cluster configurations used: EMR Version : 5.33.0 Hudi: 0.7.0 Spark: 2.4.7 Scala: 2.11.12 Static cluster with 1 Master (m5.xlarge) , 4 * (m5.2xlarge) core and 4 * (m5.2xlarge) task nodes **To Reproduce** Steps to reproduce the behavior: 1. Execute Hudi insert/usert on text data stored in S3 2. The spark-submit is issued on EMR 5.33.0 3. Hudi 0.7.0 and Scala 2.11.12 is used 4. **Expected behavior** Not expecting that Hudi will take so much time to write to Hudi Store. Expectation was it should take 15-20 mins time at max for data of size (7-8 GB) both inserts/upserts. Also for even writes CoW write strategy was performing better compared to MoR which I thought would have been vice versa. **Environment Description** * Hudi version : 0.7.0 * Spark version : 2.4.7 * Hive version : 2.3.7 * Hadoop version : * Storage (HDFS/S3/GCS..) : S3 * Running on Docker? (yes/no) : No **Additional context** This is a complete batch job, we receive daily loads and upserts are supposed to be performed over existing Hudi Tables. 
Static EMR cluster: 1 Master (m5.xlarge) node , 4 * (m5.2xlarge) core nodes and 4 * (m5.2xlarge) task nodes Spark submit command :: spark-submit --master yarn --num-executors 8 --driver-memory 4G --executor-memory 20G \ --conf spark.yarn.executor.memoryOverhead=4096 \ --conf spark.yarn.maxAppAttempts=3 \ --conf spark.executor.cores=5 \ --conf spark.segment.etl.numexecutors=8 \ --conf spark.network.timeout=800 \ --conf spark.shuffle.minNumPartitionsToHighlyCompress=32 \ --conf spark.segment.processor.partition.count=500 \ --conf spark.segment.processor.output-shard.count=60 \ --conf spark.segment.processor.binseg.partition.threshold.bytes=500000000000 \ --conf spark.driver.maxResultSize=0 \ --conf spark.hadoop.fs.s3.maxRetries=20 \ --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \ --conf spark.sql.shuffle.partitions=500 \ --conf spark.kryo.registrationRequired=false \ --class <class-name> \ --jars /usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/spark/external/lib/spark-avro.jar \ s3://<jar-name> HUDI insert and upsert parameters: userSegDf.write .format("hudi") .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, if(hudiWriteStrg=="MOR") DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL else DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL) .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, keyGenClass) .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, key) .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, partitionKey) .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, combineKey) .option(HoodieWriteConfig.TABLE_NAME, tableName) .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.INSERT_OPERATION_OPT_VAL) .option("hoodie.upsert.shuffle.parallelism", "2") .mode(SaveMode.Overwrite) .save(s"$basePath/$tableName/") userSegDf.write .format("hudi") .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, if(hudiWriteStrg=="MOR") DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL else DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL) .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, keyGenClass) .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, key) .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, partitionKey) .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, combineKey) .option(HoodieWriteConfig.TABLE_NAME, tableName) .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL) .mode(SaveMode.Append) .save(s"$basePath/$tableName/") I have tried to run a full production load on 53 GB of data size on production cluster with the below cluster configuration and spark submit command for Hudi insert using COW write strategy ...I observed that it is taking more than 2 hrs just for insert and it is quite evident from the earlier runs that I will take even more time for upsert operation. 
Total Data size: 53 GB Cluster Size:1 Master (m5.xlarge) node , 2* (r5a.24xlarge) core nodes and 6 * (r5a.24xlarge) task nodes Spark submit command :: spark-submit --master yarn --num-executors 192 --driver-memory 4G --executor-memory 20G \ --conf spark.yarn.executor.memoryOverhead=4096 \ --conf spark.yarn.maxAppAttempts=3 \ --conf spark.executor.cores=4 \ --conf spark.segment.etl.numexecutors=192 \ --conf spark.network.timeout=800 \ --conf spark.shuffle.minNumPartitionsToHighlyCompress=32 \ --conf spark.segment.processor.partition.count=1536 \ --conf spark.segment.processor.output-shard.count=60 \ --conf spark.segment.processor.binseg.partition.threshold.bytes=500000000000 \ --conf spark.driver.maxResultSize=0 \ --conf spark.hadoop.fs.s3.maxRetries=20 \ --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \ --conf spark.sql.shuffle.partitions=1536 \ --conf spark.kryo.registrationRequired=false \ --class <class-name> \ --jars /usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/spark/external/lib/spark-avro.jar \ s3://<jar-name> Hudi insert and Upsert parameters being same as above.
1.0
[SUPPORT]Hudi Inserts and Upserts for MoR and CoW tables are taking very long time. - **_Tips before filing an issue_** - Have you gone through our [FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)? - Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apache.org. - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly. Hi Team, I was testing Hudi for doing inserts/updates/deletes on data in S3. Below are benchmark metrics captured so far on varied data sizes: Run 1 - Fresh Insert ----------------------- Total Data size = 7 GB COW = 22 mins MOR = 25 mins Run 2 - Upsert -------------------- Total Data Size=6.7 GB COW = 61 mins MOR = 64 mins Run 3 - Upsert ------------------- Total Data size: 2.5 GB COW = 39 mins MOR = 53 mins Below are cluster configurations used: EMR Version : 5.33.0 Hudi: 0.7.0 Spark: 2.4.7 Scala: 2.11.12 Static cluster with 1 Master (m5.xlarge) , 4 * (m5.2xlarge) core and 4 * (m5.2xlarge) task nodes **To Reproduce** Steps to reproduce the behavior: 1. Execute Hudi insert/usert on text data stored in S3 2. The spark-submit is issued on EMR 5.33.0 3. Hudi 0.7.0 and Scala 2.11.12 is used 4. **Expected behavior** Not expecting that Hudi will take so much time to write to Hudi Store. Expectation was it should take 15-20 mins time at max for data of size (7-8 GB) both inserts/upserts. Also for even writes CoW write strategy was performing better compared to MoR which I thought would have been vice versa. **Environment Description** * Hudi version : 0.7.0 * Spark version : 2.4.7 * Hive version : 2.3.7 * Hadoop version : * Storage (HDFS/S3/GCS..) : S3 * Running on Docker? (yes/no) : No **Additional context** This is a complete batch job, we receive daily loads and upserts are supposed to be performed over existing Hudi Tables. 
Static EMR cluster: 1 Master (m5.xlarge) node , 4 * (m5.2xlarge) core nodes and 4 * (m5.2xlarge) task nodes Spark submit command :: spark-submit --master yarn --num-executors 8 --driver-memory 4G --executor-memory 20G \ --conf spark.yarn.executor.memoryOverhead=4096 \ --conf spark.yarn.maxAppAttempts=3 \ --conf spark.executor.cores=5 \ --conf spark.segment.etl.numexecutors=8 \ --conf spark.network.timeout=800 \ --conf spark.shuffle.minNumPartitionsToHighlyCompress=32 \ --conf spark.segment.processor.partition.count=500 \ --conf spark.segment.processor.output-shard.count=60 \ --conf spark.segment.processor.binseg.partition.threshold.bytes=500000000000 \ --conf spark.driver.maxResultSize=0 \ --conf spark.hadoop.fs.s3.maxRetries=20 \ --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \ --conf spark.sql.shuffle.partitions=500 \ --conf spark.kryo.registrationRequired=false \ --class <class-name> \ --jars /usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/spark/external/lib/spark-avro.jar \ s3://<jar-name> HUDI insert and upsert parameters: userSegDf.write .format("hudi") .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, if(hudiWriteStrg=="MOR") DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL else DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL) .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, keyGenClass) .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, key) .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, partitionKey) .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, combineKey) .option(HoodieWriteConfig.TABLE_NAME, tableName) .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.INSERT_OPERATION_OPT_VAL) .option("hoodie.upsert.shuffle.parallelism", "2") .mode(SaveMode.Overwrite) .save(s"$basePath/$tableName/") userSegDf.write .format("hudi") .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, if(hudiWriteStrg=="MOR") DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL else DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL) .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, keyGenClass) .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, key) .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, partitionKey) .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, combineKey) .option(HoodieWriteConfig.TABLE_NAME, tableName) .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL) .mode(SaveMode.Append) .save(s"$basePath/$tableName/") I have tried to run a full production load on 53 GB of data size on production cluster with the below cluster configuration and spark submit command for Hudi insert using COW write strategy ...I observed that it is taking more than 2 hrs just for insert and it is quite evident from the earlier runs that I will take even more time for upsert operation. 
Total Data size: 53 GB Cluster Size:1 Master (m5.xlarge) node , 2* (r5a.24xlarge) core nodes and 6 * (r5a.24xlarge) task nodes Spark submit command :: spark-submit --master yarn --num-executors 192 --driver-memory 4G --executor-memory 20G \ --conf spark.yarn.executor.memoryOverhead=4096 \ --conf spark.yarn.maxAppAttempts=3 \ --conf spark.executor.cores=4 \ --conf spark.segment.etl.numexecutors=192 \ --conf spark.network.timeout=800 \ --conf spark.shuffle.minNumPartitionsToHighlyCompress=32 \ --conf spark.segment.processor.partition.count=1536 \ --conf spark.segment.processor.output-shard.count=60 \ --conf spark.segment.processor.binseg.partition.threshold.bytes=500000000000 \ --conf spark.driver.maxResultSize=0 \ --conf spark.hadoop.fs.s3.maxRetries=20 \ --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \ --conf spark.sql.shuffle.partitions=1536 \ --conf spark.kryo.registrationRequired=false \ --class <class-name> \ --jars /usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/spark/external/lib/spark-avro.jar \ s3://<jar-name> Hudi insert and Upsert parameters being same as above.
non_process
hudi inserts and upserts for mor and cow tables are taking very long time tips before filing an issue have you gone through our join the mailing list to engage in conversations and get faster support at dev subscribe hudi apache org if you have triaged this as a bug then file an directly hi team i was testing hudi for doing inserts updates deletes on data in below are benchmark metrics captured so far on varied data sizes run fresh insert total data size gb cow mins mor mins run upsert total data size gb cow mins mor mins run upsert total data size gb cow mins mor mins below are cluster configurations used emr version hudi spark scala static cluster with master xlarge core and task nodes to reproduce steps to reproduce the behavior execute hudi insert usert on text data stored in the spark submit is issued on emr hudi and scala is used expected behavior not expecting that hudi will take so much time to write to hudi store expectation was it should take mins time at max for data of size gb both inserts upserts also for even writes cow write strategy was performing better compared to mor which i thought would have been vice versa environment description hudi version spark version hive version hadoop version storage hdfs gcs running on docker yes no no additional context this is a complete batch job we receive daily loads and upserts are supposed to be performed over existing hudi tables static emr cluster master xlarge node core nodes and task nodes spark submit command spark submit master yarn num executors driver memory executor memory conf spark yarn executor memoryoverhead conf spark yarn maxappattempts conf spark executor cores conf spark segment etl numexecutors conf spark network timeout conf spark shuffle minnumpartitionstohighlycompress conf spark segment processor partition count conf spark segment processor output shard count conf spark segment processor binseg partition threshold bytes conf spark driver maxresultsize conf spark hadoop fs maxretries conf spark serializer org apache spark serializer kryoserializer conf spark sql shuffle partitions conf spark kryo registrationrequired false class jars usr lib hudi hudi spark bundle jar usr lib spark external lib spark avro jar hudi insert and upsert parameters usersegdf write format hudi option datasourcewriteoptions table type opt key if hudiwritestrg mor datasourcewriteoptions mor table type opt val else datasourcewriteoptions cow table type opt val option datasourcewriteoptions keygenerator class opt key keygenclass option datasourcewriteoptions recordkey field opt key key option datasourcewriteoptions partitionpath field opt key partitionkey option datasourcewriteoptions precombine field opt key combinekey option hoodiewriteconfig table name tablename option datasourcewriteoptions operation opt key datasourcewriteoptions insert operation opt val option hoodie upsert shuffle parallelism mode savemode overwrite save s basepath tablename usersegdf write format hudi option datasourcewriteoptions table type opt key if hudiwritestrg mor datasourcewriteoptions mor table type opt val else datasourcewriteoptions cow table type opt val option datasourcewriteoptions keygenerator class opt key keygenclass option datasourcewriteoptions recordkey field opt key key option datasourcewriteoptions partitionpath field opt key partitionkey option datasourcewriteoptions precombine field opt key combinekey option hoodiewriteconfig table name tablename option datasourcewriteoptions operation opt key datasourcewriteoptions upsert operation opt val mode 
savemode append save s basepath tablename i have tried to run a full production load on gb of data size on production cluster with the below cluster configuration and spark submit command for hudi insert using cow write strategy i observed that it is taking more than hrs just for insert and it is quite evident from the earlier runs that i will take even more time for upsert operation total data size gb cluster size master xlarge node core nodes and task nodes spark submit command spark submit master yarn num executors driver memory executor memory conf spark yarn executor memoryoverhead conf spark yarn maxappattempts conf spark executor cores conf spark segment etl numexecutors conf spark network timeout conf spark shuffle minnumpartitionstohighlycompress conf spark segment processor partition count conf spark segment processor output shard count conf spark segment processor binseg partition threshold bytes conf spark driver maxresultsize conf spark hadoop fs maxretries conf spark serializer org apache spark serializer kryoserializer conf spark sql shuffle partitions conf spark kryo registrationrequired false class jars usr lib hudi hudi spark bundle jar usr lib spark external lib spark avro jar hudi insert and upsert parameters being same as above
0
13,345
15,801,910,589
IssuesEvent
2021-04-03 07:07:01
PyCQA/flake8
https://api.github.com/repos/PyCQA/flake8
closed
Simplify and speed up multiprocessing - [merged]
component:multiprocessing component:performance gitlab merge request
In GitLab by @asottile on Nov 22, 2016, 15:45 _Merges faster -> master_ This is a bit of a WIP, I moved away from Queue (since it seems to be the bottleneck) From #265 the same test finishes (still slower) but in reasonable time: ``` $ time flake8 -j8 bar real 0m17.583s user 0m26.312s sys 0m2.288s ```
1.0
Simplify and speed up multiprocessing - [merged] - In GitLab by @asottile on Nov 22, 2016, 15:45 _Merges faster -> master_ This is a bit of a WIP, I moved away from Queue (since it seems to be the bottleneck) From #265 the same test finishes (still slower) but in reasonable time: ``` $ time flake8 -j8 bar real 0m17.583s user 0m26.312s sys 0m2.288s ```
process
simplify and speed up multiprocessing in gitlab by asottile on nov merges faster master this is a bit of a wip i moved away from queue since it seems to be the bottleneck from the same test finishes still slower but in reasonable time time bar real user sys
1
9,789
12,805,470,203
IssuesEvent
2020-07-03 07:35:05
ClickHouse/ClickHouse
https://api.github.com/repos/ClickHouse/ClickHouse
closed
View of MV slower than querying directly?
comp-processors performance v20.3-affected v20.4-affected
```sql CREATE MATERIALIZED VIEW player_champ_counts_mv ENGINE = AggregatingMergeTree() ORDER BY (patch_num, my_account_id, champ_id) as select patch_num, my_account_id, champ_id, countState() as c_state from full_info group by patch_num, my_account_id, champ_id; create view player_champ_counts as select patch_num, my_account_id, champ_id, countMerge(c_state) as c from player_champ_counts_mv group by patch_num, my_account_id, champ_id; ``` 2.4s: `select * from player_champ_counts` 900ms: ``` select patch_num, my_account_id, champ_id, countMerge(c_state) as c from player_champ_counts_mv group by patch_num, my_account_id, champ_id; ``` In other words, just inlining the definition of the view is doing something different than querying the view itself.
1.0
View of MV slower than querying directly? - ```sql CREATE MATERIALIZED VIEW player_champ_counts_mv ENGINE = AggregatingMergeTree() ORDER BY (patch_num, my_account_id, champ_id) as select patch_num, my_account_id, champ_id, countState() as c_state from full_info group by patch_num, my_account_id, champ_id; create view player_champ_counts as select patch_num, my_account_id, champ_id, countMerge(c_state) as c from player_champ_counts_mv group by patch_num, my_account_id, champ_id; ``` 2.4s: `select * from player_champ_counts` 900ms: ``` select patch_num, my_account_id, champ_id, countMerge(c_state) as c from player_champ_counts_mv group by patch_num, my_account_id, champ_id; ``` In other words, just inlining the definition of the view is doing something different than querying the view itself.
process
view of mv slower than querying directly sql create materialized view player champ counts mv engine aggregatingmergetree order by patch num my account id champ id as select patch num my account id champ id countstate as c state from full info group by patch num my account id champ id create view player champ counts as select patch num my account id champ id countmerge c state as c from player champ counts mv group by patch num my account id champ id select from player champ counts select patch num my account id champ id countmerge c state as c from player champ counts mv group by patch num my account id champ id in other words just inlining the definition of the view is doing something different than querying the view itself
1
282,453
21,315,490,446
IssuesEvent
2022-04-16 07:39:12
Rye-Catcher/pe
https://api.github.com/repos/Rye-Catcher/pe
opened
Wrong command format of `remove` command provided in Command Summary of UG
type.DocumentationBug severity.VeryLow
This is its format in the `remove` command section ![image.png](https://raw.githubusercontent.com/Rye-Catcher/pe/master/files/bc8ebd30-c66f-4a7b-8162-77c29fd7b307.png) This is its format in the Command Summary section ![image.png](https://raw.githubusercontent.com/Rye-Catcher/pe/master/files/3c1f8fea-61ad-4704-9392-5ba535cd3ef6.png)
1.0
Wrong command format of `remove` command provided in Command Summary of UG - This is its format in the `remove` command section ![image.png](https://raw.githubusercontent.com/Rye-Catcher/pe/master/files/bc8ebd30-c66f-4a7b-8162-77c29fd7b307.png) This is its format in the Command Summary section ![image.png](https://raw.githubusercontent.com/Rye-Catcher/pe/master/files/3c1f8fea-61ad-4704-9392-5ba535cd3ef6.png)
non_process
wrong command format of remove command provided in command summary of ug this is its format in the remove command section this is its format in the command summary section
0
4,771
7,635,189,387
IssuesEvent
2018-05-07 01:44:30
googlegenomics/gcp-variant-transforms
https://api.github.com/repos/googlegenomics/gcp-variant-transforms
opened
Public release of VEP docker image.
P2 process
It turned out that we are the first who need to publish VEP docker images on [gcr.io](http://gcr.io). We need to follow our internal processes/policies to make this happen; this is a process tracking issue. The [license](https://useast.ensembl.org/info/about/legal/code_licence.html) is [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0).
1.0
Public release of VEP docker image. - It turned out that we are the first who need to publish VEP docker images on [gcr.io](http://gcr.io). We need to follow our internal processes/policies to make this happen; this is a process tracking issue. The [license](https://useast.ensembl.org/info/about/legal/code_licence.html) is [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0).
process
public release of vep docker image it turned out that we are the first who need to publish vep docker images on we need to follow our internal processes policies to make this happen this is a process tracking issue the is
1
15,908
20,113,115,023
IssuesEvent
2022-02-07 16:47:07
pelias/api
https://api.github.com/repos/pelias/api
closed
parser: fails to parse apartment numbers
input parsing processed Q1-2017 libpostal
the address parser is making a mistake parsing `1917/2 Pike Drive`, it ends up thinking that `2` is the housenumber. ``` javascript "query": { "text": "1917/2 Pike Drive", "parsed_text": { "number": 2, "street": "Pike Drive", "regions": [] } } ```
1.0
parser: fails to parse apartment numbers - the address parser is making a mistake parsing `1917/2 Pike Drive`, it ends up thinking that `2` is the housenumber. ``` javascript "query": { "text": "1917/2 Pike Drive", "parsed_text": { "number": 2, "street": "Pike Drive", "regions": [] } } ```
process
parser fails to parse apartment numbers the address parser is making a mistake parsing pike drive it ends up thinking that is the housenumber javascript query text pike drive parsed text number street pike drive regions
1
27,684
22,154,710,494
IssuesEvent
2022-06-03 20:59:55
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
closed
Redirect COVID chatbot link to the main coronavirus FAQs page
backend operations tools-be-review infrastructure platform-sre console-services team-platform-infrastructure
On May 15, the coronavirus chatbot will be retired. By that date, we will want to redirect users who visit that link over to the main coronavirus FAQs page. See below: https://github.com/department-of-veterans-affairs/va-virtual-agent/issues/456 https://app.zenhub.com/workspaces/vft-59c95ae5fda7577a9b3184f8/issues/department-of-veterans-affairs/va.gov-team/39785 The Virtual Agent Chatbot team is looking for guidance from sitewide teams to understand how we can put this redirection in place. Thank you!
2.0
Redirect COVID chatbot link to the main coronavirus FAQs page - On May 15, the coronavirus chatbot will be retired. By that date, we will want to redirect users who visit that link over to the main coronavirus FAQs page. See below: https://github.com/department-of-veterans-affairs/va-virtual-agent/issues/456 https://app.zenhub.com/workspaces/vft-59c95ae5fda7577a9b3184f8/issues/department-of-veterans-affairs/va.gov-team/39785 The Virtual Agent Chatbot team is looking for guidance from sitewide teams to understand how we can put this redirection in place. Thank you!
non_process
redirect covid chatbot link to the main coronavirus faqs page on may the coronavirus chatbot will be retired by that date we will want to redirect users who visit that link over to the main coronavirus faqs page see below the virtual agent chatbot team is looking for guidance from sitewide teams to understand how we can put this redirection in place thank you
0
6,931
10,095,831,885
IssuesEvent
2019-07-27 12:45:37
shirou/gopsutil
https://api.github.com/repos/shirou/gopsutil
closed
On CentOS, the value returned by process.Percent() is bigger than 100%
os:linux package:cpu package:process
How I get cpu usage: ``` c.CPUUsage, err = p.Percent(time.Second * 1) ``` Centos Version: 6.4
1.0
On CentOS, the value returned by process.Percent() is bigger than 100% - How I get cpu usage: ``` c.CPUUsage, err = p.Percent(time.Second * 1) ``` Centos Version: 6.4
process
on centos the value returned by process percent is bigger than how i get cpu usage c cpuusage err p percent time second centos version
1
113,725
17,150,887,827
IssuesEvent
2021-07-13 20:26:22
snowdensb/braindump
https://api.github.com/repos/snowdensb/braindump
opened
CVE-2018-16492 (High) detected in extend-3.0.0.tgz
security vulnerability
## CVE-2018-16492 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>extend-3.0.0.tgz</b></p></summary> <p>Port of jQuery.extend for node.js and the browser</p> <p>Library home page: <a href="https://registry.npmjs.org/extend/-/extend-3.0.0.tgz">https://registry.npmjs.org/extend/-/extend-3.0.0.tgz</a></p> <p>Path to dependency file: braindump/package.json</p> <p>Path to vulnerable library: braindump/node_modules/extend</p> <p> Dependency Hierarchy: - gulp-3.9.1.tgz (Root Library) - liftoff-2.3.0.tgz - :x: **extend-3.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snowdensb/braindump/commit/815ae0afebcf867f02143f3ab9cf88b1d4dacdec">815ae0afebcf867f02143f3ab9cf88b1d4dacdec</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A prototype pollution vulnerability was found in module extend <2.0.2, ~<3.0.2 that allows an attacker to inject arbitrary properties onto Object.prototype. <p>Publish Date: 2019-02-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16492>CVE-2018-16492</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://hackerone.com/reports/381185">https://hackerone.com/reports/381185</a></p> <p>Release Date: 2019-02-01</p> <p>Fix Resolution: extend - v3.0.2,v2.0.2</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"extend","packageVersion":"3.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"gulp:3.9.1;liftoff:2.3.0;extend:3.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"extend - v3.0.2,v2.0.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-16492","vulnerabilityDetails":"A prototype pollution vulnerability was found in module extend \u003c2.0.2, ~\u003c3.0.2 that allows an attacker to inject arbitrary properties onto Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16492","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2018-16492 (High) detected in extend-3.0.0.tgz - ## CVE-2018-16492 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>extend-3.0.0.tgz</b></p></summary> <p>Port of jQuery.extend for node.js and the browser</p> <p>Library home page: <a href="https://registry.npmjs.org/extend/-/extend-3.0.0.tgz">https://registry.npmjs.org/extend/-/extend-3.0.0.tgz</a></p> <p>Path to dependency file: braindump/package.json</p> <p>Path to vulnerable library: braindump/node_modules/extend</p> <p> Dependency Hierarchy: - gulp-3.9.1.tgz (Root Library) - liftoff-2.3.0.tgz - :x: **extend-3.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snowdensb/braindump/commit/815ae0afebcf867f02143f3ab9cf88b1d4dacdec">815ae0afebcf867f02143f3ab9cf88b1d4dacdec</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A prototype pollution vulnerability was found in module extend <2.0.2, ~<3.0.2 that allows an attacker to inject arbitrary properties onto Object.prototype. <p>Publish Date: 2019-02-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16492>CVE-2018-16492</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://hackerone.com/reports/381185">https://hackerone.com/reports/381185</a></p> <p>Release Date: 2019-02-01</p> <p>Fix Resolution: extend - v3.0.2,v2.0.2</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"extend","packageVersion":"3.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"gulp:3.9.1;liftoff:2.3.0;extend:3.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"extend - v3.0.2,v2.0.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-16492","vulnerabilityDetails":"A prototype pollution vulnerability was found in module extend \u003c2.0.2, ~\u003c3.0.2 that allows an attacker to inject arbitrary properties onto Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16492","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in extend tgz cve high severity vulnerability vulnerable library extend tgz port of jquery extend for node js and the browser library home page a href path to dependency file braindump package json path to vulnerable library braindump node modules extend dependency hierarchy gulp tgz root library liftoff tgz x extend tgz vulnerable library found in head commit a href found in base branch master vulnerability details a prototype pollution vulnerability was found in module extend that allows an attacker to inject arbitrary properties onto object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution extend isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree gulp liftoff extend isminimumfixversionavailable true minimumfixversion extend basebranches vulnerabilityidentifier cve vulnerabilitydetails a prototype pollution vulnerability was found in module extend that allows an attacker to inject arbitrary properties onto object prototype vulnerabilityurl
0
20,836
27,608,098,833
IssuesEvent
2023-03-09 14:18:27
rusefi/rusefi_documentation
https://api.github.com/repos/rusefi/rusefi_documentation
closed
EPIC: errors in markdown need fixing before merging a change (CI for documentation)
bug IMPORTANT wiki location & process change
Validate Markdown Files With MarkdownLint according to [this ](https://matthewsetter.com/tools-that-make-technical-writing-easier-markdown-linter/)blog post revealed over 7K errors. To avoid follow-on errors in other tools I'd suggest to fix them. Luckily most errors can be fixed automatically. As a follow-up we should activate this check on each commit. ``` PS git\rusefi_documentation> markdownlint-cli2 "**/*.md" "#node_modules" 2>.\wiki-tools\mdl-20221224.log markdownlint-cli2 v0.5.1 (markdownlint v0.26.2) Finding: **/*.md !node_modules Linting: 425 file(s) Summary: 7557 error(s) ``` [mdl-20221224.log](https://github.com/rusefi/rusefi_documentation/files/10299257/mdl-20221224.log)
1.0
EPIC: errors in markdown need fixing before merging a change (CI for documentation) - Validate Markdown Files With MarkdownLint according to [this ](https://matthewsetter.com/tools-that-make-technical-writing-easier-markdown-linter/)blog post revealed over 7K errors. To avoid follow-on errors in other tools I'd suggest to fix them. Luckily most errors can be fixed automatically. As a follow-up we should activate this check on each commit. ``` PS git\rusefi_documentation> markdownlint-cli2 "**/*.md" "#node_modules" 2>.\wiki-tools\mdl-20221224.log markdownlint-cli2 v0.5.1 (markdownlint v0.26.2) Finding: **/*.md !node_modules Linting: 425 file(s) Summary: 7557 error(s) ``` [mdl-20221224.log](https://github.com/rusefi/rusefi_documentation/files/10299257/mdl-20221224.log)
process
epic errors in markdown need fixing before merging a change ci for documentation validate markdown files with markdownlint according to post revealed over errors to avoid follow on errors in other tools i d suggest to fix them luckily most errors can be fixed automatically as a follow up we should activate this check on each commit ps git rusefi documentation markdownlint md node modules wiki tools mdl log markdownlint markdownlint finding md node modules linting file s summary error s
1
14,098
16,987,917,780
IssuesEvent
2021-06-30 16:24:07
CesiumGS/cesium
https://api.github.com/repos/CesiumGS/cesium
closed
Post Processing sandcastle issue
category - post-processing type - bug
Run http://localhost:8080/Apps/Sandcastle/index.html?src=Post%20Processing.html&label=All 1. There is an odd flash full screen flash when you enable each of the effects the first time. 2. I got a random crash the first time I enabled black and white about undefined `v` in a UniformSample or something similar (not sure of the exact details, I couldn't reproduce)
1.0
Post Processing sandcastle issue - Run http://localhost:8080/Apps/Sandcastle/index.html?src=Post%20Processing.html&label=All 1. There is an odd flash full screen flash when you enable each of the effects the first time. 2. I got a random crash the first time I enabled black and white about undefined `v` in a UniformSample or something similar (not sure of the exact details, I couldn't reproduce)
process
post processing sandcastle issue run there is an odd flash full screen flash when you enable each of the effects the first time i got a random crash the first time i enabled black and white about undefined v in a uniformsample or something similar not sure of the exact details i couldn t reproduce
1
5,488
8,359,342,428
IssuesEvent
2018-10-03 07:54:51
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
insulin secretion - always from pancreatic cells?
cellular processes
``` [Term] id: GO:0030073 name: insulin secretion def: "The regulated release of proinsulin from ***secretory granules (B granules) in the B cells of the pancreas***; accompanied by cleavage of proinsulin to form mature insulin." [GOC:mah, ISBN:0198506732] is_a: GO:0030072 ! peptide hormone secretion ``` We have a two annotations to regulation of insulin secretion in worm and fly. the pancreas and proto-pancreas structures are vertebrate-specific PMID:16417468, but insulin is a billion years old: http://en.wikipedia.org/wiki/Insulin Looked at the drosophila annotation abstract, insulin is indeed being secreted: http://www.ncbi.nlm.nih.gov/pubmed/23874700 There are two possible courses of action here: 1. Weak: alter text definition to be: "The regulated release of insulin from a cell. In vertebrates, this is always from beta cells of the pancreas, and is accompanied by cleavage of the secreted proinsulin to form mature insulin" 2. Strong: change label to be "insulin secretion from B cell" and axiomatize accordingly. This will invalidate the 4 invert annotations. To accommodate, create a new grouping class "insulin secretion" Note that 1 could be accompanied by a taxon GCI as in uberon, but we don't use these in GO yet, and probably shouldn't until Protege 5. I would therefore vote for 2, even though this contributes to the drunk-christmas-tree effect. Use case driving decision: a user is looking for animal models of diabetes, or disorders involving pancreatic B cells. They query GO using the CL class, they get the specific GO class, which does not yield the FB gene. They have to go one level up (note this isn't necessary with the taxon-GCI route, but the query machinery is harder), e.g. via semantic similarity query, then they get the fly genes. Reported by: cmungall Original Ticket: [geneontology/ontology-requests/11073](https://sourceforge.net/p/geneontology/ontology-requests/11073)
1.0
insulin secretion - always from pancreatic cells? - ``` [Term] id: GO:0030073 name: insulin secretion def: "The regulated release of proinsulin from ***secretory granules (B granules) in the B cells of the pancreas***; accompanied by cleavage of proinsulin to form mature insulin." [GOC:mah, ISBN:0198506732] is_a: GO:0030072 ! peptide hormone secretion ``` We have a two annotations to regulation of insulin secretion in worm and fly. the pancreas and proto-pancreas structures are vertebrate-specific PMID:16417468, but insulin is a billion years old: http://en.wikipedia.org/wiki/Insulin Looked at the drosophila annotation abstract, insulin is indeed being secreted: http://www.ncbi.nlm.nih.gov/pubmed/23874700 There are two possible courses of action here: 1. Weak: alter text definition to be: "The regulated release of insulin from a cell. In vertebrates, this is always from beta cells of the pancreas, and is accompanied by cleavage of the secreted proinsulin to form mature insulin" 2. Strong: change label to be "insulin secretion from B cell" and axiomatize accordingly. This will invalidate the 4 invert annotations. To accommodate, create a new grouping class "insulin secretion" Note that 1 could be accompanied by a taxon GCI as in uberon, but we don't use these in GO yet, and probably shouldn't until Protege 5. I would therefore vote for 2, even though this contributes to the drunk-christmas-tree effect. Use case driving decision: a user is looking for animal models of diabetes, or disorders involving pancreatic B cells. They query GO using the CL class, they get the specific GO class, which does not yield the FB gene. They have to go one level up (note this isn't necessary with the taxon-GCI route, but the query machinery is harder), e.g. via semantic similarity query, then they get the fly genes. Reported by: cmungall Original Ticket: [geneontology/ontology-requests/11073](https://sourceforge.net/p/geneontology/ontology-requests/11073)
process
insulin secretion always from pancreatic cells id go name insulin secretion def the regulated release of proinsulin from secretory granules b granules in the b cells of the pancreas accompanied by cleavage of proinsulin to form mature insulin is a go peptide hormone secretion we have a two annotations to regulation of insulin secretion in worm and fly the pancreas and proto pancreas structures are vertebrate specific pmid but insulin is a billion years old looked at the drosophila annotation abstract insulin is indeed being secreted there are two possible courses of action here weak alter text definition to be the regulated release of insulin from a cell in vertebrates this is always from beta cells of the pancreas and is accompanied by cleavage of the secreted proinsulin to form mature insulin strong change label to be insulin secretion from b cell and axiomatize accordingly this will invalidate the invert annotations to accommodate create a new grouping class insulin secretion note that could be accompanied by a taxon gci as in uberon but we don t use these in go yet and probably shouldn t until protege i would therefore vote for even though this contributes to the drunk christmas tree effect use case driving decision a user is looking for animal models of diabetes or disorders involving pancreatic b cells they query go using the cl class they get the specific go class which does not yield the fb gene they have to go one level up note this isn t necessary with the taxon gci route but the query machinery is harder e g via semantic similarity query then they get the fly genes reported by cmungall original ticket
1
819,383
30,732,456,609
IssuesEvent
2023-07-28 03:43:07
TEAM-cafe-in/cafe-in-be
https://api.github.com/repos/TEAM-cafe-in/cafe-in-be
closed
feat: API 명세를 수정한다
🔥High priority ❗️refactoring
### As-is --- - 프론트의 요구사항에 맞추어 API 수정이 필요합니다 ### To-be --- - [x] 회원의 커피콩을 반환하는 API를 구현한다 - [x] `POST` 요청으로 조회한 카페를 다시 조회할 때의 처리 - [x] 순환 참조 문제를 해결한다 - [x] 구현한 API의 테스트를 작성한다
1.0
feat: API 명세를 수정한다 - ### As-is --- - 프론트의 요구사항에 맞추어 API 수정이 필요합니다 ### To-be --- - [x] 회원의 커피콩을 반환하는 API를 구현한다 - [x] `POST` 요청으로 조회한 카페를 다시 조회할 때의 처리 - [x] 순환 참조 문제를 해결한다 - [x] 구현한 API의 테스트를 작성한다
non_process
feat api 명세를 수정한다 as is 프론트의 요구사항에 맞추어 api 수정이 필요합니다 to be 회원의 커피콩을 반환하는 api를 구현한다 post 요청으로 조회한 카페를 다시 조회할 때의 처리 순환 참조 문제를 해결한다 구현한 api의 테스트를 작성한다
0
19,756
6,760,341,633
IssuesEvent
2017-10-24 20:17:13
mapbox/mapbox-gl-native
https://api.github.com/repos/mapbox/mapbox-gl-native
closed
Bitrise should upload releases to GitHub too
build iOS Node.js
Currently Travis uploads prebuilt releases to an s3 directory with no public file listing, [complicating the manual setup story](https://github.com/mapbox/mapbox-gl-native/wiki/Installing-Mapbox-GL-for-iOS/_compare/cbc4f2345a332b69d9f9a01130efffaec4ab87d4...df38e6f10a123c656cdd5d6a7b662fe04f23fb25) for folks who can’t use CocoaPods. We should configure Travis to [automatically deploy a copy of the prebuilt release to GitHub](http://docs.travis-ci.com/user/deployment/releases/) to be listed [here](https://github.com/mapbox/mapbox-gl-native/releases). /cc @incanus @bsudekum
1.0
Bitrise should upload releases to GitHub too - Currently Travis uploads prebuilt releases to an s3 directory with no public file listing, [complicating the manual setup story](https://github.com/mapbox/mapbox-gl-native/wiki/Installing-Mapbox-GL-for-iOS/_compare/cbc4f2345a332b69d9f9a01130efffaec4ab87d4...df38e6f10a123c656cdd5d6a7b662fe04f23fb25) for folks who can’t use CocoaPods. We should configure Travis to [automatically deploy a copy of the prebuilt release to GitHub](http://docs.travis-ci.com/user/deployment/releases/) to be listed [here](https://github.com/mapbox/mapbox-gl-native/releases). /cc @incanus @bsudekum
non_process
bitrise should upload releases to github too currently travis uploads prebuilt releases to an directory with no public file listing for folks who can’t use cocoapods we should configure travis to to be listed cc incanus bsudekum
0
385,911
26,658,989,583
IssuesEvent
2023-01-25 19:17:39
SKY-ALIN/regta
https://api.github.com/repos/SKY-ALIN/regta
closed
Documentation for v0.3.0
documentation
- [x] Python 3.11 support - [x] `regta-period` page - [x] Scheduling alternatives and comparison with regta (separated page) - [x] Add a link to the page from README - [x] Mark benchmarks as not ready - [x] Explain verbose flag more detailed - [x] Add `Period` to API Reference
1.0
Documentation for v0.3.0 - - [x] Python 3.11 support - [x] `regta-period` page - [x] Scheduling alternatives and comparison with regta (separated page) - [x] Add a link to the page from README - [x] Mark benchmarks as not ready - [x] Explain verbose flag more detailed - [x] Add `Period` to API Reference
non_process
documentation for python support regta period page scheduling alternatives and comparison with regta separated page add a link to the page from readme mark benchmarks as not ready explain verbose flag more detailed add period to api reference
0
147,537
19,522,837,715
IssuesEvent
2021-12-29 22:29:25
swagger-api/swagger-codegen
https://api.github.com/repos/swagger-api/swagger-codegen
opened
CVE-2017-16042 (High) detected in growl-1.9.2.tgz, growl-1.8.1.tgz
security vulnerability
## CVE-2017-16042 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>growl-1.9.2.tgz</b>, <b>growl-1.8.1.tgz</b></p></summary> <p> <details><summary><b>growl-1.9.2.tgz</b></p></summary> <p>Growl unobtrusive notifications</p> <p>Library home page: <a href="https://registry.npmjs.org/growl/-/growl-1.9.2.tgz">https://registry.npmjs.org/growl/-/growl-1.9.2.tgz</a></p> <p>Path to dependency file: /samples/client/petstore/typescript-fetch/tests/default/package.json</p> <p>Path to vulnerable library: /samples/client/petstore/typescript-fetch/tests/default/node_modules/growl/package.json</p> <p> Dependency Hierarchy: - mocha-3.5.0.tgz (Root Library) - :x: **growl-1.9.2.tgz** (Vulnerable Library) </details> <details><summary><b>growl-1.8.1.tgz</b></p></summary> <p>Growl unobtrusive notifications</p> <p>Library home page: <a href="https://registry.npmjs.org/growl/-/growl-1.8.1.tgz">https://registry.npmjs.org/growl/-/growl-1.8.1.tgz</a></p> <p>Path to dependency file: /samples/client/petstore-security-test/javascript/package.json</p> <p>Path to vulnerable library: /samples/client/petstore-security-test/javascript/node_modules/growl/package.json,/samples/client/petstore/javascript-override-default-config/node_modules/growl/package.json,/samples/client/petstore/javascript-promise-es6/node_modules/growl/package.json,/samples/client/petstore/javascript/node_modules/growl/package.json,/samples/client/petstore/javascript-es6/node_modules/growl/package.json,/samples/client/petstore/javascript-promise/node_modules/growl/package.json</p> <p> Dependency Hierarchy: - mocha-2.3.4.tgz (Root Library) - :x: **growl-1.8.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/swagger-api/swagger-codegen/commit/4b7a8d7d7384aa6a27d6309c35ade0916edae7ed">4b7a8d7d7384aa6a27d6309c35ade0916edae7ed</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Growl adds growl notification support to nodejs. Growl before 1.10.2 does not properly sanitize input before passing it to exec, allowing for arbitrary command execution. <p>Publish Date: 2018-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16042>CVE-2017-16042</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-16042">https://nvd.nist.gov/vuln/detail/CVE-2017-16042</a></p> <p>Release Date: 2018-06-04</p> <p>Fix Resolution (growl): 1.10.2</p> <p>Direct dependency fix Resolution (mocha): 4.0.0</p><p>Fix Resolution (growl): 1.10.2</p> <p>Direct dependency fix Resolution (mocha): 4.0.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"mocha","packageVersion":"3.5.0","packageFilePaths":["/samples/client/petstore/typescript-fetch/tests/default/package.json"],"isTransitiveDependency":false,"dependencyTree":"mocha:3.5.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.0.0","isBinary":false},{"packageType":"javascript/Node.js","packageName":"mocha","packageVersion":"2.3.4","packageFilePaths":["/samples/client/petstore-security-test/javascript/package.json"],"isTransitiveDependency":false,"dependencyTree":"mocha:2.3.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.0.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-16042","vulnerabilityDetails":"Growl adds growl notification support to nodejs. Growl before 1.10.2 does not properly sanitize input before passing it to exec, allowing for arbitrary command execution.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16042","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2017-16042 (High) detected in growl-1.9.2.tgz, growl-1.8.1.tgz - ## CVE-2017-16042 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>growl-1.9.2.tgz</b>, <b>growl-1.8.1.tgz</b></p></summary> <p> <details><summary><b>growl-1.9.2.tgz</b></p></summary> <p>Growl unobtrusive notifications</p> <p>Library home page: <a href="https://registry.npmjs.org/growl/-/growl-1.9.2.tgz">https://registry.npmjs.org/growl/-/growl-1.9.2.tgz</a></p> <p>Path to dependency file: /samples/client/petstore/typescript-fetch/tests/default/package.json</p> <p>Path to vulnerable library: /samples/client/petstore/typescript-fetch/tests/default/node_modules/growl/package.json</p> <p> Dependency Hierarchy: - mocha-3.5.0.tgz (Root Library) - :x: **growl-1.9.2.tgz** (Vulnerable Library) </details> <details><summary><b>growl-1.8.1.tgz</b></p></summary> <p>Growl unobtrusive notifications</p> <p>Library home page: <a href="https://registry.npmjs.org/growl/-/growl-1.8.1.tgz">https://registry.npmjs.org/growl/-/growl-1.8.1.tgz</a></p> <p>Path to dependency file: /samples/client/petstore-security-test/javascript/package.json</p> <p>Path to vulnerable library: /samples/client/petstore-security-test/javascript/node_modules/growl/package.json,/samples/client/petstore/javascript-override-default-config/node_modules/growl/package.json,/samples/client/petstore/javascript-promise-es6/node_modules/growl/package.json,/samples/client/petstore/javascript/node_modules/growl/package.json,/samples/client/petstore/javascript-es6/node_modules/growl/package.json,/samples/client/petstore/javascript-promise/node_modules/growl/package.json</p> <p> Dependency Hierarchy: - mocha-2.3.4.tgz (Root Library) - :x: **growl-1.8.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/swagger-api/swagger-codegen/commit/4b7a8d7d7384aa6a27d6309c35ade0916edae7ed">4b7a8d7d7384aa6a27d6309c35ade0916edae7ed</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Growl adds growl notification support to nodejs. Growl before 1.10.2 does not properly sanitize input before passing it to exec, allowing for arbitrary command execution. <p>Publish Date: 2018-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16042>CVE-2017-16042</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-16042">https://nvd.nist.gov/vuln/detail/CVE-2017-16042</a></p> <p>Release Date: 2018-06-04</p> <p>Fix Resolution (growl): 1.10.2</p> <p>Direct dependency fix Resolution (mocha): 4.0.0</p><p>Fix Resolution (growl): 1.10.2</p> <p>Direct dependency fix Resolution (mocha): 4.0.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"mocha","packageVersion":"3.5.0","packageFilePaths":["/samples/client/petstore/typescript-fetch/tests/default/package.json"],"isTransitiveDependency":false,"dependencyTree":"mocha:3.5.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.0.0","isBinary":false},{"packageType":"javascript/Node.js","packageName":"mocha","packageVersion":"2.3.4","packageFilePaths":["/samples/client/petstore-security-test/javascript/package.json"],"isTransitiveDependency":false,"dependencyTree":"mocha:2.3.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.0.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-16042","vulnerabilityDetails":"Growl adds growl notification support to nodejs. Growl before 1.10.2 does not properly sanitize input before passing it to exec, allowing for arbitrary command execution.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16042","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in growl tgz growl tgz cve high severity vulnerability vulnerable libraries growl tgz growl tgz growl tgz growl unobtrusive notifications library home page a href path to dependency file samples client petstore typescript fetch tests default package json path to vulnerable library samples client petstore typescript fetch tests default node modules growl package json dependency hierarchy mocha tgz root library x growl tgz vulnerable library growl tgz growl unobtrusive notifications library home page a href path to dependency file samples client petstore security test javascript package json path to vulnerable library samples client petstore security test javascript node modules growl package json samples client petstore javascript override default config node modules growl package json samples client petstore javascript promise node modules growl package json samples client petstore javascript node modules growl package json samples client petstore javascript node modules growl package json samples client petstore javascript promise node modules growl package json dependency hierarchy mocha tgz root library x growl tgz vulnerable library found in head commit a href found in base branch master vulnerability details growl adds growl notification support to nodejs growl before does not properly sanitize input before passing it to exec allowing for arbitrary command execution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution growl direct dependency fix resolution mocha fix resolution growl direct dependency fix resolution mocha isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree mocha isminimumfixversionavailable true minimumfixversion isbinary false packagetype javascript node js packagename mocha packageversion packagefilepaths istransitivedependency false dependencytree mocha isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails growl adds growl notification support to nodejs growl before does not properly sanitize input before passing it to exec allowing for arbitrary command execution vulnerabilityurl
0
779
3,258,517,299
IssuesEvent
2015-10-20 22:49:28
HAW-BAI4-SE2/UniKit
https://api.github.com/repos/HAW-BAI4-SE2/UniKit
closed
Anmeldephase implementieren
in process
# Phase 1: Veranstaltungsauswahl ## Komponenten: * Hibernatekomponente * Studierendenkomponente --- ## ToDo: * __Pre__: Dummy-Studentenobjekt, das seine wählbaren Veranstaltungen kennt. * System zeigt für einen aktuellen Benutzer alle wählbaren Veranstaltungen an. * Benutzer kann Veranstaltungen wählen und Einschreibung bestätigen. * System persistiert Auswahl per Hibernate. ---
1.0
Anmeldephase implementieren - # Phase 1: Veranstaltungsauswahl ## Komponenten: * Hibernatekomponente * Studierendenkomponente --- ## ToDo: * __Pre__: Dummy-Studentenobjekt, das seine wählbaren Veranstaltungen kennt. * System zeigt für einen aktuellen Benutzer alle wählbaren Veranstaltungen an. * Benutzer kann Veranstaltungen wählen und Einschreibung bestätigen. * System persistiert Auswahl per Hibernate. ---
process
anmeldephase implementieren phase veranstaltungsauswahl komponenten hibernatekomponente studierendenkomponente todo pre dummy studentenobjekt das seine wählbaren veranstaltungen kennt system zeigt für einen aktuellen benutzer alle wählbaren veranstaltungen an benutzer kann veranstaltungen wählen und einschreibung bestätigen system persistiert auswahl per hibernate
1
11,172
13,957,694,833
IssuesEvent
2020-10-24 08:11:21
alexanderkotsev/geoportal
https://api.github.com/repos/alexanderkotsev/geoportal
opened
MT: Harvest
Geoportal Harvesting process MT - Malta
Dear Angelo, Can you kindly perform a harvest on the Maltese CSW as we need to check some changes. Thanks in advance for your help. Regards, Rene
1.0
MT: Harvest - Dear Angelo, Can you kindly perform a harvest on the Maltese CSW as we need to check some changes. Thanks in advance for your help. Regards, Rene
process
mt harvest dear angelo can you kindly perform a harvest on the maltese csw as we need to check some changes thanks in advance for your help regards rene
1
279,421
21,159,951,569
IssuesEvent
2022-04-07 08:28:05
xtensor-stack/xtensor
https://api.github.com/repos/xtensor-stack/xtensor
closed
Documentation improvement suggestions
Enhancement Documentation
## A - Replace ``` ``xt::something`` ``` with ``:cpp:func:`xt::something` `` I suggest replacing inline code that mention a xtensor function or class with the associated sphinx target. This is [supported by Breathe for classes and functions](https://breathe.readthedocs.io/en/latest/domains.html) and has the advantage that the resulting html links to the reference sections, which I find particularly useful. For more complex uses, such as `xt::xarray<double>({{3, 4}, {5, 6}})` in [From numpy to xtensor](https://xtensor.readthedocs.io/en/latest/numpy.html), we can use the generalized synthax ```rst :cpp:class:`xt::xarray\<double\>({{3, 4}, {5, 6}}) <xt::xarray>` ``` ✅ Small poc shows this is working as expected with the current setup. ## B - Cross reference numpy Using [intershpinx](https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html), and replacing the inline code ``np.something`` to `` :py:func:`numpy.something` ``, we can also automatically link to https://numpy.org/doc/stable/. This requires a small addition to `conf.py` ```python extensions += ["sphinx.ext.intersphinx"] intersphinx_mapping = { "numpy": ("https://numpy.org/doc/stable/", None), } ``` Together with the previous suggestion, it makes the cheat sheet [From numpy to xtensor](https://xtensor.readthedocs.io/en/latest/numpy.html) very powerful. ✅ Does _not_ require to maintain a numpy cross reference file/database (it is downloaded by sphinx); ❓ Does require an internet connection when generating the doc; ✅ Small poc shows this is working as expected with the current setup. ## C - Harmonize `#include` statements Some code blocks use https://github.com/xtensor-stack/xtensor/blob/31acec1e90bbea6d4bc17af0710a123bd5da6689/docs/source/indices.rst#L17 others https://github.com/xtensor-stack/xtensor/blob/31acec1e90bbea6d4bc17af0710a123bd5da6689/docs/source/getting_started.rst#L19 and https://github.com/xtensor-stack/xtensor/blob/31acec1e90bbea6d4bc17af0710a123bd5da6689/docs/source/quickref/manipulation.rst#L15 I suggest sticking to the first one.
1.0
Documentation improvement suggestions - ## A - Replace ``` ``xt::something`` ``` with ``:cpp:func:`xt::something` `` I suggest replacing inline code that mention a xtensor function or class with the associated sphinx target. This is [supported by Breathe for classes and functions](https://breathe.readthedocs.io/en/latest/domains.html) and has the advantage that the resulting html links to the reference sections, which I find particularly useful. For more complex uses, such as `xt::xarray<double>({{3, 4}, {5, 6}})` in [From numpy to xtensor](https://xtensor.readthedocs.io/en/latest/numpy.html), we can use the generalized synthax ```rst :cpp:class:`xt::xarray\<double\>({{3, 4}, {5, 6}}) <xt::xarray>` ``` ✅ Small poc shows this is working as expected with the current setup. ## B - Cross reference numpy Using [intershpinx](https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html), and replacing the inline code ``np.something`` to `` :py:func:`numpy.something` ``, we can also automatically link to https://numpy.org/doc/stable/. This requires a small addition to `conf.py` ```python extensions += ["sphinx.ext.intersphinx"] intersphinx_mapping = { "numpy": ("https://numpy.org/doc/stable/", None), } ``` Together with the previous suggestion, it makes the cheat sheet [From numpy to xtensor](https://xtensor.readthedocs.io/en/latest/numpy.html) very powerful. ✅ Does _not_ require to maintain a numpy cross reference file/database (it is downloaded by sphinx); ❓ Does require an internet connection when generating the doc; ✅ Small poc shows this is working as expected with the current setup. ## C - Harmonize `#include` statements Some code blocks use https://github.com/xtensor-stack/xtensor/blob/31acec1e90bbea6d4bc17af0710a123bd5da6689/docs/source/indices.rst#L17 others https://github.com/xtensor-stack/xtensor/blob/31acec1e90bbea6d4bc17af0710a123bd5da6689/docs/source/getting_started.rst#L19 and https://github.com/xtensor-stack/xtensor/blob/31acec1e90bbea6d4bc17af0710a123bd5da6689/docs/source/quickref/manipulation.rst#L15 I suggest sticking to the first one.
non_process
documentation improvement suggestions a replace xt something with cpp func xt something i suggest replacing inline code that mention a xtensor function or class with the associated sphinx target this is and has the advantage that the resulting html links to the reference sections which i find particularly useful for more complex uses such as xt xarray in we can use the generalized synthax rst cpp class xt xarray ✅ small poc shows this is working as expected with the current setup b cross reference numpy using and replacing the inline code np something to py func numpy something we can also automatically link to this requires a small addition to conf py python extensions intersphinx mapping numpy none together with the previous suggestion it makes the cheat sheet very powerful ✅ does not require to maintain a numpy cross reference file database it is downloaded by sphinx ❓ does require an internet connection when generating the doc ✅ small poc shows this is working as expected with the current setup c harmonize include statements some code blocks use others and i suggest sticking to the first one
0
18,203
10,217,961,045
IssuesEvent
2019-08-15 14:51:39
vutuv/vutuv
https://api.github.com/repos/vutuv/vutuv
closed
Prevent user enumeration - wording of sign up warnings
Feature Request Security
![image](https://user-images.githubusercontent.com/11748187/53373471-1f998100-3988-11e9-9995-81082676e589.png) Protect users email from hacker, by changing the word email "has already been taken", to something like: "Please check your email to complete the registration". The email sent to the mailbox is something like: "You tried to sign up, but you already have an account! Please use the following link to sign in..." For failed sign in, because the email is not exist: ![image](https://user-images.githubusercontent.com/11748187/53374285-7e5ffa00-398a-11e9-8388-d899009add0d.png) We can change it to "We have sent an email!" At the mailbox, we send something like: "Someone, maybe you, tried to use this email to sign in to vutuv.com, but this email address doesn't have an account! You might have signed up using a different email address or please sign up on vutuv.com." So the hacker have no idea which emails are registered or not. Because users email are the core component in vutuv, we have to protect them. Maybe we could also disable user with lower privilege (or event for all users) to view other users email. We can change the way of the communication by using internal messaging service. This is just an idea.
True
Prevent user enumeration - wording of sign up warnings - ![image](https://user-images.githubusercontent.com/11748187/53373471-1f998100-3988-11e9-9995-81082676e589.png) Protect users email from hacker, by changing the word email "has already been taken", to something like: "Please check your email to complete the registration". The email sent to the mailbox is something like: "You tried to sign up, but you already have an account! Please use the following link to sign in..." For failed sign in, because the email is not exist: ![image](https://user-images.githubusercontent.com/11748187/53374285-7e5ffa00-398a-11e9-8388-d899009add0d.png) We can change it to "We have sent an email!" At the mailbox, we send something like: "Someone, maybe you, tried to use this email to sign in to vutuv.com, but this email address doesn't have an account! You might have signed up using a different email address or please sign up on vutuv.com." So the hacker have no idea which emails are registered or not. Because users email are the core component in vutuv, we have to protect them. Maybe we could also disable user with lower privilege (or event for all users) to view other users email. We can change the way of the communication by using internal messaging service. This is just an idea.
non_process
prevent user enumeration wording of sign up warnings protect users email from hacker by changing the word email has already been taken to something like please check your email to complete the registration the email sent to the mailbox is something like you tried to sign up but you already have an account please use the following link to sign in for failed sign in because the email is not exist we can change it to we have sent an email at the mailbox we send something like someone maybe you tried to use this email to sign in to vutuv com but this email address doesn t have an account you might have signed up using a different email address or please sign up on vutuv com so the hacker have no idea which emails are registered or not because users email are the core component in vutuv we have to protect them maybe we could also disable user with lower privilege or event for all users to view other users email we can change the way of the communication by using internal messaging service this is just an idea
0
120,534
25,813,144,885
IssuesEvent
2022-12-12 01:25:18
sec-edgar/sec-edgar
https://api.github.com/repos/sec-edgar/sec-edgar
closed
Add compatibility with EDGAR APIs
enhancement help-wanted code-structure
**Is your feature request related to a problem? Please describe.** The SEC is currently offering a RESTful API in beta! See more [here](https://www.sec.gov/edgar/sec-api-documentation). Not sure exactly what the best way to integrate this into the package would be, but want to put it out there so that it can be discussed. **Describe the solution you'd like** Not sure yet. Add an api package that provides utilities to easily access the API? Should be able to reuse `NetworkClient` code.
1.0
Add compatibility with EDGAR APIs - **Is your feature request related to a problem? Please describe.** The SEC is currently offering a RESTful API in beta! See more [here](https://www.sec.gov/edgar/sec-api-documentation). Not sure exactly what the best way to integrate this into the package would be, but want to put it out there so that it can be discussed. **Describe the solution you'd like** Not sure yet. Add an api package that provides utilities to easily access the API? Should be able to reuse `NetworkClient` code.
non_process
add compatibility with edgar apis is your feature request related to a problem please describe the sec is currently offering a restful api in beta see more not sure exactly what the best way to integrate this into the package would be but want to put it out there so that it can be discussed describe the solution you d like not sure yet add an api package that provides utilities to easily access the api should be able to reuse networkclient code
0
300,563
9,211,457,730
IssuesEvent
2019-03-09 15:34:19
qgisissuebot/QGIS
https://api.github.com/repos/qgisissuebot/QGIS
closed
Raster Symbology, Paletted/Unique values not respecting selected band
Bug Priority: normal
--- Author Name: **Andrew Harvey** (Andrew Harvey) Original Redmine Issue: 21505, https://issues.qgis.org/issues/21505 Original Date: 2019-03-07T00:51:46.888Z Affected QGIS version: 3.6.0 --- I have a 4 band raster loaded into QGIS, under the layer Symbology I've chosen "Paletted/Unique values" and selected Band 3 to be the source of those values. Selecting classify with random colors, all the vales are represented correctly, however the actual raster is rendered based on Band 1, not based on the selected Band 3. I would expect selecting band 3 could cause it to use band 3 in the rendering. --- - [Screenshot from 2019-03-07 11-44-14.png](https://issues.qgis.org/attachments/download/14543/Screenshot%20from%202019-03-07%2011-44-14.png) (Andrew Harvey) - [PortHacking201304-LID1-AHD_3306226_56_0002_0002_1m.trgb.deflate.tiff](https://issues.qgis.org/attachments/download/14544/PortHacking201304-LID1-AHD_3306226_56_0002_0002_1m.trgb.deflate.tiff) (Andrew Harvey)
1.0
Raster Symbology, Paletted/Unique values not respecting selected band - --- Author Name: **Andrew Harvey** (Andrew Harvey) Original Redmine Issue: 21505, https://issues.qgis.org/issues/21505 Original Date: 2019-03-07T00:51:46.888Z Affected QGIS version: 3.6.0 --- I have a 4 band raster loaded into QGIS, under the layer Symbology I've chosen "Paletted/Unique values" and selected Band 3 to be the source of those values. Selecting classify with random colors, all the vales are represented correctly, however the actual raster is rendered based on Band 1, not based on the selected Band 3. I would expect selecting band 3 could cause it to use band 3 in the rendering. --- - [Screenshot from 2019-03-07 11-44-14.png](https://issues.qgis.org/attachments/download/14543/Screenshot%20from%202019-03-07%2011-44-14.png) (Andrew Harvey) - [PortHacking201304-LID1-AHD_3306226_56_0002_0002_1m.trgb.deflate.tiff](https://issues.qgis.org/attachments/download/14544/PortHacking201304-LID1-AHD_3306226_56_0002_0002_1m.trgb.deflate.tiff) (Andrew Harvey)
non_process
raster symbology paletted unique values not respecting selected band author name andrew harvey andrew harvey original redmine issue original date affected qgis version i have a band raster loaded into qgis under the layer symbology i ve chosen paletted unique values and selected band to be the source of those values selecting classify with random colors all the vales are represented correctly however the actual raster is rendered based on band not based on the selected band i would expect selecting band could cause it to use band in the rendering andrew harvey andrew harvey
0
5,904
8,722,791,970
IssuesEvent
2018-12-09 15:56:15
P0cL4bs/WiFi-Pumpkin
https://api.github.com/repos/P0cL4bs/WiFi-Pumpkin
closed
Update new Version 0.8.7
Feature request in process new version
I'm work in a new version more moduled with @yudevan. @yudevan list the features bellow:
1.0
Update new Version 0.8.7 - I'm work in a new version more moduled with @yudevan. @yudevan list the features bellow:
process
update new version i m work in a new version more moduled with yudevan yudevan list the features bellow
1
1,568
4,165,429,090
IssuesEvent
2016-06-19 13:51:54
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
opened
Disable/enable multiplexing from mysql_query_rules
ADMIN CONNECTION POOL MYSQL PROTOCOL QUERY PROCESSOR
In `mysql_query_rules` we need to add a new variable that defines if multiplexing needs to be disabled or re-enabled. This can be useful if: * we want that a specific query disables multiplexing * we want that multiplexing is re-enabled for example after ProxySQL things that is not safe to use multiplexing (ex, when `@` is used)
1.0
Disable/enable multiplexing from mysql_query_rules - In `mysql_query_rules` we need to add a new variable that defines if multiplexing needs to be disabled or re-enabled. This can be useful if: * we want that a specific query disables multiplexing * we want that multiplexing is re-enabled for example after ProxySQL things that is not safe to use multiplexing (ex, when `@` is used)
process
disable enable multiplexing from mysql query rules in mysql query rules we need to add a new variable that defines if multiplexing needs to be disabled or re enabled this can be useful if we want that a specific query disables multiplexing we want that multiplexing is re enabled for example after proxysql things that is not safe to use multiplexing ex when is used
1
56
2,516,123,386
IssuesEvent
2015-01-15 23:29:19
GsDevKit/gsApplicationTools
https://api.github.com/repos/GsDevKit/gsApplicationTools
opened
GemServerLauncher needs to isolate processes and semaphores ... not commitable
in process
Need to use TransientStackVlue to isolate....discovered while working through interactive debugging of Seaside using gem server.
1.0
GemServerLauncher needs to isolate processes and semaphores ... not commitable - Need to use TransientStackVlue to isolate....discovered while working through interactive debugging of Seaside using gem server.
process
gemserverlauncher needs to isolate processes and semaphores not commitable need to use transientstackvlue to isolate discovered while working through interactive debugging of seaside using gem server
1
141,148
11,395,812,527
IssuesEvent
2020-01-30 12:16:21
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
opened
Test Case and Test Execution for GIDS: Allow user to update crosswalk values for WEAMS data that has an IPED or OPE match #4751
bah-gids bah-sprint-39 testing
As a member of the BAH development team, I need to make sure that we have traceability between our stories and test case execution so that we can easily see what test cases were executed against our functional work and when those cases were executed. Assumptions: 1. All test cases pass 2. All defects resulting from test case execution have been resolved before this story and #4751 are closed. 3. Any defects that remain open after this issue and #4751 are closed do not result in failed acceptance criteria. Acceptance Criteria: 1. All test cases have been documented that are required to ensure that acceptance criteria for #4751 are met. 1a. All test cases have been successfully executed Tasks: - [x] Create Test case for story #4751 - Test case can be found here: https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/products/education-careers/school-comparison-tool/Test%20Cases/Test%20Scripts/Sprint%2039 - [x] Execute Test Cases - [x] Write Defects if any are found - [x] Check fixes and close functional story - [x] Move this story to closed
1.0
Test Case and Test Execution for GIDS: Allow user to update crosswalk values for WEAMS data that has an IPED or OPE match #4751 - As a member of the BAH development team, I need to make sure that we have traceability between our stories and test case execution so that we can easily see what test cases were executed against our functional work and when those cases were executed. Assumptions: 1. All test cases pass 2. All defects resulting from test case execution have been resolved before this story and #4751 are closed. 3. Any defects that remain open after this issue and #4751 are closed do not result in failed acceptance criteria. Acceptance Criteria: 1. All test cases have been documented that are required to ensure that acceptance criteria for #4751 are met. 1a. All test cases have been successfully executed Tasks: - [x] Create Test case for story #4751 - Test case can be found here: https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/products/education-careers/school-comparison-tool/Test%20Cases/Test%20Scripts/Sprint%2039 - [x] Execute Test Cases - [x] Write Defects if any are found - [x] Check fixes and close functional story - [x] Move this story to closed
non_process
test case and test execution for gids allow user to update crosswalk values for weams data that has an iped or ope match as a member of the bah development team i need to make sure that we have traceability between our stories and test case execution so that we can easily see what test cases were executed against our functional work and when those cases were executed assumptions all test cases pass all defects resulting from test case execution have been resolved before this story and are closed any defects that remain open after this issue and are closed do not result in failed acceptance criteria acceptance criteria all test cases have been documented that are required to ensure that acceptance criteria for are met all test cases have been successfully executed tasks create test case for story test case can be found here execute test cases write defects if any are found check fixes and close functional story move this story to closed
0
21,126
28,092,826,695
IssuesEvent
2023-03-30 14:04:13
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
cumulativetodelta: initial data points are included for monotonic counters
bug help wanted Stale priority:p2 processor/cumulativetodelta
### Component(s) processor/cumulativetodelta ### What happened? The cumulative-to-delta processor includes the initial delta from zero to the current value as a data point: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/34b31da629f65c195125e8db07480574b181477f/processor/cumulativetodeltaprocessor/internal/tracking/tracker.go#L92-L104 Even for monotonic counters, this is counterintuitive and noisy. For example, here is what that looks like in real production data when the collector is restarted: <img width="868" alt="Screen Shot 2023-01-26 at 1 03 13 PM" src="https://user-images.githubusercontent.com/102976597/214830848-8bb75e25-c4e0-454c-9e04-2abe302db114.png"> The processor should drop the first data point regardless of whether the dataset is monotonic or not, because the first data point can not be guaranteed to be a delta. ### Collector version 0.70.0 ### Environment information ## Environment OS: Ubuntu 22.04 ### OpenTelemetry Collector configuration ```yaml <snip> processors: cumulativetodelta: include: metrics: - "bytes_total\\z" match_type: regexp metricstransform: transforms: - include: "(.*)_bytes_total\\z" action: insert new_name: "$${1}_bitrate" match_type: regexp operations: - action: experimental_scale_value # The starting unit is bytes per 5s. 0.2 * 8 = 1.6 experimental_scale: 1.6 ``` ### Log output _No response_ ### Additional context _No response_
1.0
cumulativetodelta: initial data points are included for monotonic counters - ### Component(s) processor/cumulativetodelta ### What happened? The cumulative-to-delta processor includes the initial delta from zero to the current value as a data point: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/34b31da629f65c195125e8db07480574b181477f/processor/cumulativetodeltaprocessor/internal/tracking/tracker.go#L92-L104 Even for monotonic counters, this is counterintuitive and noisy. For example, here is what that looks like in real production data when the collector is restarted: <img width="868" alt="Screen Shot 2023-01-26 at 1 03 13 PM" src="https://user-images.githubusercontent.com/102976597/214830848-8bb75e25-c4e0-454c-9e04-2abe302db114.png"> The processor should drop the first data point regardless of whether the dataset is monotonic or not, because the first data point can not be guaranteed to be a delta. ### Collector version 0.70.0 ### Environment information ## Environment OS: Ubuntu 22.04 ### OpenTelemetry Collector configuration ```yaml <snip> processors: cumulativetodelta: include: metrics: - "bytes_total\\z" match_type: regexp metricstransform: transforms: - include: "(.*)_bytes_total\\z" action: insert new_name: "$${1}_bitrate" match_type: regexp operations: - action: experimental_scale_value # The starting unit is bytes per 5s. 0.2 * 8 = 1.6 experimental_scale: 1.6 ``` ### Log output _No response_ ### Additional context _No response_
process
cumulativetodelta initial data points are included for monotonic counters component s processor cumulativetodelta what happened the cumulative to delta processor includes the initial delta from zero to the current value as a data point even for monotonic counters this is counterintuitive and noisy for example here is what that looks like in real production data when the collector is restarted img width alt screen shot at pm src the processor should drop the first data point regardless of whether the dataset is monotonic or not because the first data point can not be guaranteed to be a delta collector version environment information environment os ubuntu opentelemetry collector configuration yaml processors cumulativetodelta include metrics bytes total z match type regexp metricstransform transforms include bytes total z action insert new name bitrate match type regexp operations action experimental scale value the starting unit is bytes per experimental scale log output no response additional context no response
1
616,564
19,306,194,032
IssuesEvent
2021-12-13 11:47:11
kubernetes/release
https://api.github.com/repos/kubernetes/release
closed
General promote-images tool for k8s image maintainers
kind/feature priority/important-longterm sig/release area/release-eng
<!-- Please only use this template for submitting feature requests --> #### What would you like to be added: Currently, the krel promote-images tool only works for kubernetes images. It would be great to extend it to other projects in k8s and k8s-sigs that have staging images hosted in k8s.gcr.io. Slack thread for context: https://kubernetes.slack.com/archives/CCK68P2Q2/p1629334254082700
1.0
General promote-images tool for k8s image maintainers - <!-- Please only use this template for submitting feature requests --> #### What would you like to be added: Currently, the krel promote-images tool only works for kubernetes images. It would be great to extend it to other projects in k8s and k8s-sigs that have staging images hosted in k8s.gcr.io. Slack thread for context: https://kubernetes.slack.com/archives/CCK68P2Q2/p1629334254082700
non_process
general promote images tool for image maintainers what would you like to be added currently the krel promote images tool only works for kubernetes images it would be great to extend it to other projects in and sigs that have staging images hosted in gcr io slack thread for context
0
17,082
22,587,153,422
IssuesEvent
2022-06-28 16:10:12
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
[processor/transform] Add 'like' capability to WHERE condition
priority:p2 comp: transformprocessor
**Is your feature request related to a problem? Please describe.** The transform processor's where condition cannot handle checking if 2 string match only in specific positions. It must check the whole string. **Describe the solution you'd like** For strings, a condition allows you to use the keyword `like` which will use glob matching to compare the strings. There should also be a way to negate the match, such as `not like`.
1.0
[processor/transform] Add 'like' capability to WHERE condition - **Is your feature request related to a problem? Please describe.** The transform processor's where condition cannot handle checking if 2 string match only in specific positions. It must check the whole string. **Describe the solution you'd like** For strings, a condition allows you to use the keyword `like` which will use glob matching to compare the strings. There should also be a way to negate the match, such as `not like`.
process
add like capability to where condition is your feature request related to a problem please describe the transform processor s where condition cannot handle checking if string match only in specific positions it must check the whole string describe the solution you d like for strings a condition allows you to use the keyword like which will use glob matching to compare the strings there should also be a way to negate the match such as not like
1
8,663
11,798,104,183
IssuesEvent
2020-03-18 13:52:56
MHRA/products
https://api.github.com/repos/MHRA/products
opened
Observability | Azure monitor
EPIC - Auto Batch Process :oncoming_automobile:
## User want As a technical user I want to visualize doc index updater using Azure monitor So I can monitor and alert ## Technical acceptance criteria Azure Monitor should have the tooling to visualise the [golden signals](https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/#xref_monitoring_golden-signals): - [ ] Latency - [ ] Traffic - [ ] Errors - [ ] Saturation **Size** **Value** **Effort** ### Exit Criteria met - [ ] Backlog - [ ] Discovery - [ ] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
1.0
Observability | Azure monitor - ## User want As a technical user I want to visualize doc index updater using Azure monitor So I can monitor and alert ## Technical acceptance criteria Azure Monitor should have the tooling to visualise the [golden signals](https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/#xref_monitoring_golden-signals): - [ ] Latency - [ ] Traffic - [ ] Errors - [ ] Saturation **Size** **Value** **Effort** ### Exit Criteria met - [ ] Backlog - [ ] Discovery - [ ] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
process
observability azure monitor user want as a technical user i want to visualize doc index updater using azure monitor so i can monitor and alert technical acceptance criteria azure monitor should have the tooling to visualise the latency traffic errors saturation size value effort exit criteria met backlog discovery duxd development quality assurance release and validate
1
187,331
22,045,644,279
IssuesEvent
2022-05-30 01:09:49
CodeChung/bobert-ai
https://api.github.com/repos/CodeChung/bobert-ai
opened
CVE-2022-25878 (High) detected in protobufjs-6.8.8.tgz
security vulnerability
## CVE-2022-25878 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>protobufjs-6.8.8.tgz</b></p></summary> <p>Protocol Buffers for JavaScript (& TypeScript).</p> <p>Library home page: <a href="https://registry.npmjs.org/protobufjs/-/protobufjs-6.8.8.tgz">https://registry.npmjs.org/protobufjs/-/protobufjs-6.8.8.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/protobufjs/package.json</p> <p> Dependency Hierarchy: - dialogflow-1.1.1.tgz (Root Library) - :x: **protobufjs-6.8.8.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/CodeChung/bobert-ai/commit/46ca746b77b158e66df42fabccff3ee33ecadc8f">46ca746b77b158e66df42fabccff3ee33ecadc8f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package protobufjs before 6.11.3 are vulnerable to Prototype Pollution which can allow an attacker to add/modify properties of the Object.prototype. This vulnerability can occur in multiple ways: 1. by providing untrusted user input to util.setProperty or to ReflectionObject.setParsedOption functions 2. by parsing/loading .proto files <p>Publish Date: 2022-05-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25878>CVE-2022-25878</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25878">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25878</a></p> <p>Release Date: 2022-05-27</p> <p>Fix Resolution (protobufjs): 6.11.3</p> <p>Direct dependency fix Resolution (dialogflow): 1.1.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-25878 (High) detected in protobufjs-6.8.8.tgz - ## CVE-2022-25878 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>protobufjs-6.8.8.tgz</b></p></summary> <p>Protocol Buffers for JavaScript (& TypeScript).</p> <p>Library home page: <a href="https://registry.npmjs.org/protobufjs/-/protobufjs-6.8.8.tgz">https://registry.npmjs.org/protobufjs/-/protobufjs-6.8.8.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/protobufjs/package.json</p> <p> Dependency Hierarchy: - dialogflow-1.1.1.tgz (Root Library) - :x: **protobufjs-6.8.8.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/CodeChung/bobert-ai/commit/46ca746b77b158e66df42fabccff3ee33ecadc8f">46ca746b77b158e66df42fabccff3ee33ecadc8f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package protobufjs before 6.11.3 are vulnerable to Prototype Pollution which can allow an attacker to add/modify properties of the Object.prototype. This vulnerability can occur in multiple ways: 1. by providing untrusted user input to util.setProperty or to ReflectionObject.setParsedOption functions 2. by parsing/loading .proto files <p>Publish Date: 2022-05-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25878>CVE-2022-25878</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25878">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25878</a></p> <p>Release Date: 2022-05-27</p> <p>Fix Resolution (protobufjs): 6.11.3</p> <p>Direct dependency fix Resolution (dialogflow): 1.1.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in protobufjs tgz cve high severity vulnerability vulnerable library protobufjs tgz protocol buffers for javascript typescript library home page a href path to dependency file package json path to vulnerable library node modules protobufjs package json dependency hierarchy dialogflow tgz root library x protobufjs tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package protobufjs before are vulnerable to prototype pollution which can allow an attacker to add modify properties of the object prototype this vulnerability can occur in multiple ways by providing untrusted user input to util setproperty or to reflectionobject setparsedoption functions by parsing loading proto files publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution protobufjs direct dependency fix resolution dialogflow step up your open source security game with mend
0
3,899
6,821,849,734
IssuesEvent
2017-11-07 18:02:54
rubberduck-vba/Rubberduck
https://api.github.com/repos/rubberduck-vba/Rubberduck
closed
Q: What is Resolver Error?
parse-tree-processing support
In Excel 2010 , or Excel 2013, I open a file with c17500 LOC (excluding comments) VBA. I click "Pending". after a couple of minutes the caption changes to "Resolver Error". Is there a way of finding out why?
1.0
Q: What is Resolver Error? - In Excel 2010 , or Excel 2013, I open a file with c17500 LOC (excluding comments) VBA. I click "Pending". after a couple of minutes the caption changes to "Resolver Error". Is there a way of finding out why?
process
q what is resolver error in excel or excel i open a file with loc excluding comments vba i click pending after a couple of minutes the caption changes to resolver error is there a way of finding out why
1
551,761
16,188,669,497
IssuesEvent
2021-05-04 03:44:59
TerriaJS/neii-viewer
https://api.github.com/repos/TerriaJS/neii-viewer
closed
NEII - Apr 2021 release
Priority high
- [x] update to latest terriajs next - [x] Global Horizontal Exposure not loading (WMS): https://github.com/TerriaJS/neii-viewer/issues/172 - [x] Wrong error message for NEMSR catalogue item: https://github.com/TerriaJS/neii-viewer/issues/171 - [x] Geofabric datasets errors: https://github.com/TerriaJS/neii-viewer/issues/170 - See https://github.com/TerriaJS/neii-viewer/issues/170#issuecomment-823009029 - [x] Update Hydrology and Marine points url from GA: https://github.com/TerriaJS/neii-viewer/issues/169 - see https://github.com/TerriaJS/neii-viewer/issues/169#issuecomment-823015442
1.0
NEII - Apr 2021 release - - [x] update to latest terriajs next - [x] Global Horizontal Exposure not loading (WMS): https://github.com/TerriaJS/neii-viewer/issues/172 - [x] Wrong error message for NEMSR catalogue item: https://github.com/TerriaJS/neii-viewer/issues/171 - [x] Geofabric datasets errors: https://github.com/TerriaJS/neii-viewer/issues/170 - See https://github.com/TerriaJS/neii-viewer/issues/170#issuecomment-823009029 - [x] Update Hydrology and Marine points url from GA: https://github.com/TerriaJS/neii-viewer/issues/169 - see https://github.com/TerriaJS/neii-viewer/issues/169#issuecomment-823015442
non_process
neii apr release update to latest terriajs next global horizontal exposure not loading wms wrong error message for nemsr catalogue item geofabric datasets errors see update hydrology and marine points url from ga see
0
140,519
11,349,427,874
IssuesEvent
2020-01-24 04:53:22
elastic/kibana
https://api.github.com/repos/elastic/kibana
opened
[test-failed]: Chrome UI Functional Tests.test/functional/apps/home/_sample_data·ts - homepage app sample data dashboard should launch sample logs data set dashboard
failed-test test-cloud
**Version: 7.6** **Class: Chrome UI Functional Tests.test/functional/apps/home/_sample_data·ts** **Stack Trace:** Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="launchSampleDataSetlogs"]) Wait timed out after 10017ms at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/ossGrp1/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/node_modules/selenium-webdriver/lib/webdriver.js:841:17 at process._tickCallback (internal/process/next_tick.js:68:7) at onFailure (test/common/services/retry/retry_for_success.ts:28:9) at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:13) _Platform: cloud_ _Build Num: 42_
2.0
[test-failed]: Chrome UI Functional Tests.test/functional/apps/home/_sample_data·ts - homepage app sample data dashboard should launch sample logs data set dashboard - **Version: 7.6** **Class: Chrome UI Functional Tests.test/functional/apps/home/_sample_data·ts** **Stack Trace:** Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="launchSampleDataSetlogs"]) Wait timed out after 10017ms at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/ossGrp1/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/node_modules/selenium-webdriver/lib/webdriver.js:841:17 at process._tickCallback (internal/process/next_tick.js:68:7) at onFailure (test/common/services/retry/retry_for_success.ts:28:9) at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:13) _Platform: cloud_ _Build Num: 42_
non_process
chrome ui functional tests test functional apps home sample data·ts homepage app sample data dashboard should launch sample logs data set dashboard version class chrome ui functional tests test functional apps home sample data·ts stack trace error retry try timeout timeouterror waiting for element to be located by css selector wait timed out after at var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node linux immutable ci cloud common build kibana node modules selenium webdriver lib webdriver js at process tickcallback internal process next tick js at onfailure test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts platform cloud build num
0
12,097
14,740,105,100
IssuesEvent
2021-01-07 08:31:29
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Santa Rosa - SA Billing - Late Fee Account List
anc-process anp-important ant-bug has attachment
In GitLab by @kdjstudios on Oct 4, 2018, 10:43 [Santa_Rosa.xlsx](/uploads/ccbfbeb17d9d8c6f37eacd159efb96b6/Santa_Rosa.xlsx) HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-82891/conversation
1.0
Santa Rosa - SA Billing - Late Fee Account List - In GitLab by @kdjstudios on Oct 4, 2018, 10:43 [Santa_Rosa.xlsx](/uploads/ccbfbeb17d9d8c6f37eacd159efb96b6/Santa_Rosa.xlsx) HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-82891/conversation
process
santa rosa sa billing late fee account list in gitlab by kdjstudios on oct uploads santa rosa xlsx hd
1
14,946
18,426,704,083
IssuesEvent
2021-10-13 23:28:03
yandali-damian/LIM015-social-network
https://api.github.com/repos/yandali-damian/LIM015-social-network
closed
USER HISTORY - 004
pending Process
As a logged-in user I must be able to publish, view, edit, and delete a post - tasks - [x] HTML structure for home > - [x] profile > > - [x] Add button to edit photo and name > >> - [x] Modal to change name and photo > > - [x] show personal posts > - [x] home post > > - [x] Add edit and delete buttons (personal posts only) > > - [x] Alert modal to delete a post > > - [x] Add button to upload an image to a post > > - [x] Show the date on the post > >> - [x] Create a function to capture the date > >> - [x] Save the date when creating the post > > - [x] Add icon for likes > >> - [x] Function to count the likes > >> - [x] Show the number of likes on the post > - [x] Tests
1.0
USER HISTORY - 004 - As a logged-in user I must be able to publish, view, edit, and delete a post - tasks - [x] HTML structure for home > - [x] profile > > - [x] Add button to edit photo and name > >> - [x] Modal to change name and photo > > - [x] show personal posts > - [x] home post > > - [x] Add edit and delete buttons (personal posts only) > > - [x] Alert modal to delete a post > > - [x] Add button to upload an image to a post > > - [x] Show the date on the post > >> - [x] Create a function to capture the date > >> - [x] Save the date when creating the post > > - [x] Add icon for likes > >> - [x] Function to count the likes > >> - [x] Show the number of likes on the post > - [x] Tests
process
user history as a logged in user i must be able to publish view edit and delete a post tasks html structure for home profile add button to edit photo and name modal to change name and photo show personal posts home post add edit and delete buttons personal posts only alert modal to delete a post add button to upload an image to a post show the date on the post create a function to capture the date save the date when creating the post add icon for likes function to count the likes show the number of likes on the post tests
1
24,157
7,460,052,925
IssuesEvent
2018-03-30 17:57:07
dart-lang/build
https://api.github.com/repos/dart-lang/build
closed
Incremental Builds are taking an excessive amount of time
package: build_runner state: needs info type: perf
Hi, I have an Angular Application that takes 6 minutes for its initial build. Trivial changes are taking over a minute to rebuild. Are there any tools I can use to figure out what is going on? Something to help me untangle the dependencies and determine the why behind the long rebuilds? Dart SDK - Dev 2 43 build_runner: 0.8.1 Linux Lubuntu 17.10 running in Virtual Box with 10Gb Ram and 2 CPUs. Initial build: ``` [INFO] Generating build script completed, took 1.1s [INFO] Setting up file watchers completed, took 65ms [INFO] Waiting for all file watchers to be ready completed, took 696ms [WARNING] Throwing away cached asset graph due to Dart SDK update. [INFO] Cleaning up outputs from previous builds. completed, took 1.1s [INFO] Reading cached asset graph completed, took 9.1s [INFO] Building new asset graph completed, took 4.4s [INFO] Checking for unexpected pre-existing outputs. completed, took 7ms [INFO] Running build completed, took 6m 28s [INFO] Caching finalized dependency graph completed, took 1.9s [INFO] Succeeded after 6m 31s with 6318 outputs Serving `web` on port 8080 Serving `test` on port 8081 ``` After trivial change, added a print statement in a file: ``` [INFO] Starting Build [INFO] Updating asset graph completed, took 13ms [INFO] Running build completed, took 55.6s [INFO] Caching finalized dependency graph completed, took 3.0s [INFO] Succeeded after 58.7s with 48 outputs ```
1.0
Incremental Builds are taking an excessive amount of time - Hi, I have an Angular Application that takes 6 minutes for its initial build. Trivial changes are taking over a minute to rebuild. Are there any tools I can use to figure out what is going on? Something to help me untangle the dependencies and determine the why behind the long rebuilds? Dart SDK - Dev 2 43 build_runner: 0.8.1 Linux Lubuntu 17.10 running in Virtual Box with 10Gb Ram and 2 CPUs. Initial build: ``` [INFO] Generating build script completed, took 1.1s [INFO] Setting up file watchers completed, took 65ms [INFO] Waiting for all file watchers to be ready completed, took 696ms [WARNING] Throwing away cached asset graph due to Dart SDK update. [INFO] Cleaning up outputs from previous builds. completed, took 1.1s [INFO] Reading cached asset graph completed, took 9.1s [INFO] Building new asset graph completed, took 4.4s [INFO] Checking for unexpected pre-existing outputs. completed, took 7ms [INFO] Running build completed, took 6m 28s [INFO] Caching finalized dependency graph completed, took 1.9s [INFO] Succeeded after 6m 31s with 6318 outputs Serving `web` on port 8080 Serving `test` on port 8081 ``` After trivial change, added a print statement in a file: ``` [INFO] Starting Build [INFO] Updating asset graph completed, took 13ms [INFO] Running build completed, took 55.6s [INFO] Caching finalized dependency graph completed, took 3.0s [INFO] Succeeded after 58.7s with 48 outputs ```
non_process
incremental builds are taking an excessive amount of time hi i have an angular application that takes minutes for its initial build trivial changes are taking over a minute to rebuild are there any tools i can use to figure out what is going on something to help me untangle the dependencies and determine the why behind the long rebuilds dart sdk dev build runner linux lubuntu running in virtual box with ram and cpus initial build generating build script completed took setting up file watchers completed took waiting for all file watchers to be ready completed took throwing away cached asset graph due to dart sdk update cleaning up outputs from previous builds completed took reading cached asset graph completed took building new asset graph completed took checking for unexpected pre existing outputs completed took running build completed took caching finalized dependency graph completed took succeeded after with outputs serving web on port serving test on port after trivial change added a print statement in a file starting build updating asset graph completed took running build completed took caching finalized dependency graph completed took succeeded after with outputs
0
5,908
8,725,613,444
IssuesEvent
2018-12-10 09:54:31
linnovate/root
https://api.github.com/repos/linnovate/root
opened
every entity, when you change the status to one that makes the entity "inactive" (done, archive, sent, etc.) it doesnt update the entity's list automatically
2.0.6 Process bug
every entity, when you change the status to one that makes the entity "inactive" (done, archive, sent, etc.) it doesnt update the entity's list automatically, but when in tasks, when you press the "waiting for confirmation" button, it does update the list automatically
1.0
every entity, when you change the status to one that makes the entity "inactive" (done, archive, sent, etc.) it doesnt update the entity's list automatically - every entity, when you change the status to one that makes the entity "inactive" (done, archive, sent, etc.) it doesnt update the entity's list automatically, but when in tasks, when you press the "waiting for confirmation" button, it does update the list automatically
process
every entity when you change the status to one that makes the entity inactive done archive sent etc it doesnt update the entity s list automatically every entity when you change the status to one that makes the entity inactive done archive sent etc it doesnt update the entity s list automatically but when in tasks when you press the waiting for confirmation button it does update the list automatically
1
11,874
14,674,883,152
IssuesEvent
2020-12-30 16:17:52
amor71/LiuAlgoTrader
https://api.github.com/repos/amor71/LiuAlgoTrader
closed
portfolio builder
enhancement in-process
**Is your feature request related to a problem? Please describe.** miner for building off-market hours momentum portfolio **Describe the solution you'd like** miner for building off-market hours momentum portfolio based on Clenow's `stocks on the move`
1.0
portfolio builder - **Is your feature request related to a problem? Please describe.** miner for building off-market hours momentum portfolio **Describe the solution you'd like** miner for building off-market hours momentum portfolio based on Clenow's `stocks on the move`
process
portfolio builder is your feature request related to a problem please describe miner for building off market hours momentum portfolio describe the solution you d like miner for building off market hours momentum portfolio based on clenow s stocks on the move
1
17,539
23,350,000,250
IssuesEvent
2022-08-09 22:16:29
nextflow-io/nextflow
https://api.github.com/repos/nextflow-io/nextflow
closed
Add syntax support for Input and output block
lang/processes
## New feature @delagoya **hi, everyone. i am an newbie for nextflow, just learn it three month. i like use it construct my analysis pipeline Because of his powerful.but now i found a problem when i construct same pipeline. same process just different input block(maybe share same channel). if input block support same syntax. it will reduce code redundancy.** ## Usage scenario a demo for introduce the feature ``` flag = true process Test { input: if(flag){ val(x) from Channel.of(1..10) }else{ val(x) from Channel.of(11..20) } val(y) from Channel.of(21..30) """ echo $x $y """ } ``` ## Suggest implementation (Highlight the main building blocks of a possible implementation and/or related components) **hi, so sorry for it. i am a user of python. don't know groovy and java. so i don't hava any ideal for implementation. looking forward your reply. thanks**
1.0
Add syntax support for Input and output block - ## New feature @delagoya **hi, everyone. i am an newbie for nextflow, just learn it three month. i like use it construct my analysis pipeline Because of his powerful.but now i found a problem when i construct same pipeline. same process just different input block(maybe share same channel). if input block support same syntax. it will reduce code redundancy.** ## Usage scenario a demo for introduce the feature ``` flag = true process Test { input: if(flag){ val(x) from Channel.of(1..10) }else{ val(x) from Channel.of(11..20) } val(y) from Channel.of(21..30) """ echo $x $y """ } ``` ## Suggest implementation (Highlight the main building blocks of a possible implementation and/or related components) **hi, so sorry for it. i am a user of python. don't know groovy and java. so i don't hava any ideal for implementation. looking forward your reply. thanks**
process
add syntax support for input and output block new feature delagoya hi everyone i am an newbie for nextflow just learn it three month i like use it construct my analysis pipeline because of his powerful but now i found a problem when i construct same pipeline same process just different input block maybe share same channel if input block support same syntax it will reduce code redundancy usage scenario a demo for introduce the feature flag true process test input if flag val x from channel of else val x from channel of val y from channel of echo x y suggest implementation highlight the main building blocks of a possible implementation and or related components hi so sorry for it i am a user of python don t know groovy and java so i don t hava any ideal for implementation looking forward your reply thanks
1
4,250
7,187,641,794
IssuesEvent
2018-02-02 06:35:45
GoogleCloudPlatform/google-cloud-python
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
closed
Trace: Release
api: trace priority: p1 release blocking type: process
# Thank you for reporting an issue to google-cloud-python! If you are reporting an issue or requesting a feature, please search the existing open and closed issues to see if there is already work being done. - https://github.com/GoogleCloudPlatform/google-cloud-python/issues - http://stackoverflow.com/questions/tagged/google-cloud-python If you can provide us with as much of the following information as possible it will help us identify the cause of your issue more quickly. 1. Specify the API at the beginning of the title (for example, "BigQuery: ...") General, Core, and Other are also allowed as types 2. OS type and version 3. Python version and virtual environment information `python --version` 4. google-cloud-python version `pip show google-cloud`, `pip show google-<service>` or `pip freeze` 5. Stacktrace if available 6. Steps to reproduce 7. Code example Using GitHub flavored markdown can help make your request clearer. See: https://guides.github.com/features/mastering-markdown/
1.0
Trace: Release - # Thank you for reporting an issue to google-cloud-python! If you are reporting an issue or requesting a feature, please search the existing open and closed issues to see if there is already work being done. - https://github.com/GoogleCloudPlatform/google-cloud-python/issues - http://stackoverflow.com/questions/tagged/google-cloud-python If you can provide us with as much of the following information as possible it will help us identify the cause of your issue more quickly. 1. Specify the API at the beginning of the title (for example, "BigQuery: ...") General, Core, and Other are also allowed as types 2. OS type and version 3. Python version and virtual environment information `python --version` 4. google-cloud-python version `pip show google-cloud`, `pip show google-<service>` or `pip freeze` 5. Stacktrace if available 6. Steps to reproduce 7. Code example Using GitHub flavored markdown can help make your request clearer. See: https://guides.github.com/features/mastering-markdown/
process
trace release thank you for reporting an issue to google cloud python if you are reporting an issue or requesting a feature please search the existing open and closed issues to see if there is already work being done if you can provide us with as much of the following information as possible it will help us identify the cause of your issue more quickly specify the api at the beginning of the title for example bigquery general core and other are also allowed as types os type and version python version and virtual environment information python version google cloud python version pip show google cloud pip show google or pip freeze stacktrace if available steps to reproduce code example using github flavored markdown can help make your request clearer see
1
2,567
5,316,416,436
IssuesEvent
2017-02-13 19:49:30
MobileOrg/mobileorg
https://api.github.com/repos/MobileOrg/mobileorg
closed
Xcode compiled app crashes when switching to Dropbox if AppKey.plist isn't populated
development process
If the AppKey.plist isn't populated and the user switches to Dropbox in settings the app crashes and continuously crashes when attempting to start. Would be nice if this was handled more gracefully by warning the user or allowing them to select another option. Otherwise updating the documentation may help. Xcode catches the crash if you compile while having Dropbox selected, but if you are compiling a fresh version of MobileOrg then the settings may not be populated and so the crash will not occur until you select Dropbox.
1.0
Xcode compiled app crashes when switching to Dropbox if AppKey.plist isn't populated - If the AppKey.plist isn't populated and the user switches to Dropbox in settings the app crashes and continuously crashes when attempting to start. Would be nice if this was handled more gracefully by warning the user or allowing them to select another option. Otherwise updating the documentation may help. Xcode catches the crash if you compile while having Dropbox selected, but if you are compiling a fresh version of MobileOrg then the settings may not be populated and so the crash will not occur until you select Dropbox.
process
xcode compiled app crashes when switching to dropbox if appkey plist isn t populated if the appkey plist isn t populated and the user switches to dropbox in settings the app crashes and continuously crashes when attempting to start would be nice if this was handled more gracefully by warning the user or allowing them to select another option otherwise updating the documentation may help xcode catches the crash if you compile while having dropbox selected but if you are compiling a fresh version of mobileorg then the settings may not be populated and so the crash will not occur until you select dropbox
1
213,927
7,261,557,352
IssuesEvent
2018-02-18 21:54:53
staltz/mmmmm-mobile
https://api.github.com/repos/staltz/mmmmm-mobile
closed
Unresponsive components (in the JS thread)
priority 4 (must) scope: gui type: bug (ux)
<!-- Found a bug? Please fill out the sections below. Be kind and objective when writing in text. Thanks for informing us! :) If you have a feature request, just write it however you wish. --> **Steps to reproduce the bug:** install v0.0.9 alpha over an existing v0.0.9 alpha install **Expected behavior:** UI responds: can like, post message, select profile **Actual behavior:** The only UI events that seem to be handled are scrolling up and down the feed, and swiping left/right to see the other columns. Pressing for a long time in the new post area eventually shows a "paste" option, which does paste the clipboard. But it is impossible to edit the paste. Not even a keyboard shows up. **Technical details** - MMMMM app version: v0.0.9-alpha - Device/phone model: OnePlus5 - Android OS version: 7.1.1, OxygenOs 4.5.14
1.0
Unresponsive components (in the JS thread) - <!-- Found a bug? Please fill out the sections below. Be kind and objective when writing in text. Thanks for informing us! :) If you have a feature request, just write it however you wish. --> **Steps to reproduce the bug:** install v0.0.9 alpha over an existing v0.0.9 alpha install **Expected behavior:** UI responds: can like, post message, select profile **Actual behavior:** The only UI events that seem to be handled are scrolling up and down the feed, and swiping left/right to see the other columns. Pressing for a long time in the new post area eventually shows a "paste" option, which does paste the clipboard. But it is impossible to edit the paste. Not even a keyboard shows up. **Technical details** - MMMMM app version: v0.0.9-alpha - Device/phone model: OnePlus5 - Android OS version: 7.1.1, OxygenOs 4.5.14
non_process
unresponsive components in the js thread found a bug please fill out the sections below be kind and objective when writing in text thanks for informing us if you have a feature request just write it however you wish steps to reproduce the bug install alpha over an existing alpha install expected behavior ui responds can like post message select profile actual behavior the only ui events that seem to be handled are scrolling up and down the feed and swiping left right to see the other columns pressing for a long time in the new post area eventually shows a paste option which does paste the clipboard but it is impossible to edit the paste not even a keyboard shows up technical details mmmmm app version alpha device phone model android os version oxygenos
0
51,871
21,898,735,516
IssuesEvent
2022-05-20 11:16:57
kubeshop/testkube
https://api.github.com/repos/kubeshop/testkube
closed
Details-panel for scripts
enhancement service:dashboard 🎡 🚨 needs-ux
Similar to the details panel I can see for a test-execution, I would like to see a details panel for a script, containing; - basic metadata; name, description, create-date - total number of executions (total, failed, success) - script execution time metrics; min, max, avg (per total, failed, success) - a graph showing number of executions over time - separating total, failed, success - a graph showing script execution time distribution (per total, failed, success) - the content of the script (downloadable) if available - update history of the script (if available?) - a link to "see executions"
1.0
Details-panel for scripts - Similar to the details panel I can see for a test-execution, I would like to see a details panel for a script, containing; - basic metadata; name, description, create-date - total number of executions (total, failed, success) - script execution time metrics; min, max, avg (per total, failed, success) - a graph showing number of executions over time - separating total, failed, success - a graph showing script execution time distribution (per total, failed, success) - the content of the script (downloadable) if available - update history of the script (if available?) - a link to "see executions"
non_process
details panel for scripts similar to the details panel i can see for a test execution i would like to see a details panel for a script containing basic metadata name description create date total number of executions total failed success script execution time metrics min max avg per total failed success a graph showing number of executions over time separating total failed success a graph showing script execution time distribution per total failed success the content of the script downloadable if available update history of the script if available a link to see executions
0
773,713
27,167,870,023
IssuesEvent
2023-02-17 16:43:45
planetary-social/scuttlego
https://api.github.com/repos/planetary-social/scuttlego
closed
Can't handle malformed blobs.get
bug priority/high
``` time="2023-02-17 16:20:01.8618950 (UTC)" level=trace msg="received a message" body="{\"name\":[\"blobs\",\"get\"],\"args\":[{\"key\":\"&eb3zi3R00MZ6X+9jXgZMCS6/N1W1PGM2leOEKvpKQjA=.sha256\",\"max\":5242880}],\"type\":\"source\"}" ctx.connection_id=12 ctx.peer_id="7jJ7oou5pKKuyKvIlI5tl3ncjEXmZcbm3TvKqQetJIo=" header.bodyLength=126 header.flags="<stream=true endOrError=false bodyType={json}>" header.number=7150 name=scuttlego.raw source=golang time="2023-02-17 16:20:01.8622100 (UTC)" level=trace msg="sending a message" body="{\"error\":\"invalid arguments: 2 errors occurred:\\n\\t* json unmarshal failed: json: cannot unmarshal object into Go value of type string\\n\\t* could not create a blob ref: invalid prefix\\n\\n\"}" ctx.connection_id=12 ctx.peer_id="7jJ7oou5pKKuyKvIlI5tl3ncjEXmZcbm3TvKqQetJIo=" header.bodyLength=189 header.flags="<stream=true endOrError=true bodyType={json}>" header.number=-7150 name=scuttlego.raw source=golang ```
1.0
Can't handle malformed blobs.get - ``` time="2023-02-17 16:20:01.8618950 (UTC)" level=trace msg="received a message" body="{\"name\":[\"blobs\",\"get\"],\"args\":[{\"key\":\"&eb3zi3R00MZ6X+9jXgZMCS6/N1W1PGM2leOEKvpKQjA=.sha256\",\"max\":5242880}],\"type\":\"source\"}" ctx.connection_id=12 ctx.peer_id="7jJ7oou5pKKuyKvIlI5tl3ncjEXmZcbm3TvKqQetJIo=" header.bodyLength=126 header.flags="<stream=true endOrError=false bodyType={json}>" header.number=7150 name=scuttlego.raw source=golang time="2023-02-17 16:20:01.8622100 (UTC)" level=trace msg="sending a message" body="{\"error\":\"invalid arguments: 2 errors occurred:\\n\\t* json unmarshal failed: json: cannot unmarshal object into Go value of type string\\n\\t* could not create a blob ref: invalid prefix\\n\\n\"}" ctx.connection_id=12 ctx.peer_id="7jJ7oou5pKKuyKvIlI5tl3ncjEXmZcbm3TvKqQetJIo=" header.bodyLength=189 header.flags="<stream=true endOrError=true bodyType={json}>" header.number=-7150 name=scuttlego.raw source=golang ```
non_process
can t handle malformed blobs get time utc level trace msg received a message body name args type source ctx connection id ctx peer id header bodylength header flags header number name scuttlego raw source golang time utc level trace msg sending a message body error invalid arguments errors occurred n t json unmarshal failed json cannot unmarshal object into go value of type string n t could not create a blob ref invalid prefix n n ctx connection id ctx peer id header bodylength header flags header number name scuttlego raw source golang
0
15,587
19,708,726,108
IssuesEvent
2022-01-13 01:54:32
fluent/fluent-bit
https://api.github.com/repos/fluent/fluent-bit
closed
[windows] Kubernetes filter on windows not working
work-in-process Stale
Fluent-bit is not loading Kubernetes FILTER **Config used** ``` apiVersion: v1 kind: ConfigMap metadata: name: fluent-bit-config namespace: logging labels: k8s-app: fluent-bit data: # Configuration files: server, input, filters and output # ====================================================== fluent-bit.conf: |- [SERVICE] Flush 1 Log_Level debug Daemon off Parsers_File parsers.conf HTTP_Server On HTTP_Listen 0.0.0.0 HTTP_Port 2020 [INPUT] Name tail Tag kube.* Path C:\\ProgramData\\Docker\\containers\\*\\*.log Parser docker DB C:\\flb_kube.db Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 10 [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc.cluster.local:443 Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token Kube_Tag_Prefix kube.ProgramData.Docker.containers Merge_Log On Merge_Log_Key log_processed K8S-Logging.Parser On K8S-Logging.Exclude Off [OUTPUT] Name es Match * Host ${FLUENT_ELASTICSEARCH_HOST} Port ${FLUENT_ELASTICSEARCH_PORT} Logstash_Format On Replace_Dots On Retry_Limit False parsers.conf: |- [PARSER] Name apache Format regex Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache2 Format regex Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache_error Format regex Regex ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$ [PARSER] Name nginx Format regex Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name json Format json Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name docker Format json Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L Time_Keep On [PARSER] Name syslog Format regex Regex ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? 
*(?<message>.*)$ Time_Key time Time_Format %b %d %H:%M:%S ``` **Guide Used** [https://github.com/fluent/fluent-bit-kubernetes-logging](url) **Expected behavior** Should filter the logs properly **Logs for reference** ``` Fluent Bit v1.4.2 * Copyright (C) 2019-2020 The Fluent Bit Authors * Copyright (C) 2015-2018 Treasure Data * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd * https://fluentbit.io [2020/04/07 15:00:57] [ info] Configuration: [2020/04/07 15:00:57] [ info] flush time | 1.000000 seconds [2020/04/07 15:00:57] [ info] grace | 5 seconds [2020/04/07 15:00:57] [ info] daemon | 0 [2020/04/07 15:00:57] [ info] ___________ [2020/04/07 15:00:57] [ info] inputs: [2020/04/07 15:00:57] [ info] tail [2020/04/07 15:00:57] [ info] ___________ [2020/04/07 15:00:57] [ info] filters: [2020/04/07 15:00:57] [ info] kubernetes.0 [2020/04/07 15:00:57] [ info] ___________ [2020/04/07 15:00:57] [ info] outputs: [2020/04/07 15:00:57] [ info] es.0 [2020/04/07 15:00:57] [ info] ___________ [2020/04/07 15:00:57] [ info] collectors: [2020/04/07 15:00:57] [debug] [storage] [cio stream] new stream registered: tail.0 [2020/04/07 15:00:57] [ info] [storage] version=1.0.3, initializing... [2020/04/07 15:00:57] [ info] [storage] in-memory [2020/04/07 15:00:57] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128 [2020/04/07 15:00:57] [ info] [engine] started (pid=10576) [2020/04/07 15:00:57] [debug] [engine] coroutine stack size: 98302 bytes (96.0K) [2020/04/07 15:00:58] [debug] [input:tail:tail.0] scanning path C:\\ProgramData\\Docker\\containers\\*\\*.log [2020/04/07 15:00:58] [error] [sqldb] error=unrecognized token: "237001930390711��" [2020/04/07 15:00:58] [debug] [input:tail:tail.0] add to scan queue C:\ProgramData\Docker\containers\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6-json.log, offset=0 [2020/04/07 15:00:58] [error] [sqldb] error=unrecognized token: "365917469762065��" [2020/04/07 15:00:58] [debug] [input:tail:tail.0] add to scan queue C:\ProgramData\Docker\containers\7fbc003699e6b2568f388dcd7963812f7d1cae210619d2c8e7f7b9aa8927e8b0\7fbc003699e6b2568f388dcd7963812f7d1cae210619d2c8e7f7b9aa8927e8b0-json.log, offset=0 [2020/04/07 15:00:58] [error] [sqldb] error=unrecognized token: "675539944143564��" [2020/04/07 15:00:58] [debug] [input:tail:tail.0] add to scan queue C:\ProgramData\Docker\containers\d390fe73943c10afab770d3935cca8aeaae41d303574612909a63f88787c4752\d390fe73943c10afab770d3935cca8aeaae41d303574612909a63f88787c4752-json.log, offset=0 [2020/04/07 15:00:58] [error] [sqldb] error=unrecognized token: "788129934827827��" [2020/04/07 15:00:58] [debug] [input:tail:tail.0] add to scan queue C:\ProgramData\Docker\containers\fc03b27983c8b55dcb4609286bba9a068e18e5659ca90d8e2bb95d2265efd7d9\fc03b27983c8b55dcb4609286bba9a068e18e5659ca90d8e2bb95d2265efd7d9-json.log, offset=0 [2020/04/07 15:00:58] [debug] [input:tail:tail.0] 4 files found for 'C:\\ProgramData\\Docker\\containers\\*\\*.log' [2020/04/07 15:00:58] [ info] [filter:kubernetes:kubernetes.0] https=1 host=kubernetes.default.svc.cluster.local port=443 [2020/04/07 15:00:58] [ info] [filter:kubernetes:kubernetes.0] local POD info OK [2020/04/07 15:00:58] [ info] [filter:kubernetes:kubernetes.0] testing connectivity with API server... [2020/04/07 15:00:58] [ warn] net_tcp_fd_connect: getaddrinfo(host='kubernetes.default.svc.cluster.local'): No such host is known. 
[2020/04/07 15:00:58] [error] [filter:kubernetes:kubernetes.0] upstream connection error [2020/04/07 15:00:58] [ warn] [filter:kubernetes:kubernetes.0] could not get meta for POD fluent-bit-gpkcs [2020/04/07 15:00:58] [debug] [output:es:es.0] host=elasticsearch port=9200 uri=/_bulk index=fluent-bit type=flb_type [2020/04/07 15:00:58] [debug] [router] match rule tail.0:es.0 [2020/04/07 15:00:58] [ info] [sp] stream processor started [2020/04/07 15:00:58] [ warn] [filter:kubernetes:kubernetes.0] invalid pattern for given tag kube.C:\ProgramData\Docker\containers\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6-json.log 2020/04/07 15:00:59] [ warn] [filter:kubernetes:kubernetes.0] invalid pattern for given tag kube.C:\ProgramData\Docker\containers\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6-json.log ``` **Your Environment** k8s cluster v1.15.7 with windows node **Additional context** let me know if more information is needed cc @fujimotos
1.0
[windows] Kubernetes filter on windows not working - Fluent-bit is not loading Kubernetes FILTER **Config used** ``` apiVersion: v1 kind: ConfigMap metadata: name: fluent-bit-config namespace: logging labels: k8s-app: fluent-bit data: # Configuration files: server, input, filters and output # ====================================================== fluent-bit.conf: |- [SERVICE] Flush 1 Log_Level debug Daemon off Parsers_File parsers.conf HTTP_Server On HTTP_Listen 0.0.0.0 HTTP_Port 2020 [INPUT] Name tail Tag kube.* Path C:\\ProgramData\\Docker\\containers\\*\\*.log Parser docker DB C:\\flb_kube.db Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 10 [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc.cluster.local:443 Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token Kube_Tag_Prefix kube.ProgramData.Docker.containers Merge_Log On Merge_Log_Key log_processed K8S-Logging.Parser On K8S-Logging.Exclude Off [OUTPUT] Name es Match * Host ${FLUENT_ELASTICSEARCH_HOST} Port ${FLUENT_ELASTICSEARCH_PORT} Logstash_Format On Replace_Dots On Retry_Limit False parsers.conf: |- [PARSER] Name apache Format regex Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache2 Format regex Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache_error Format regex Regex ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$ [PARSER] Name nginx Format regex Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name json Format json Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name docker Format json Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L Time_Keep On [PARSER] Name syslog Format regex Regex ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? 
*(?<message>.*)$ Time_Key time Time_Format %b %d %H:%M:%S ``` **Guide Used** [https://github.com/fluent/fluent-bit-kubernetes-logging](url) **Expected behavior** Should filter the logs properly **Logs for reference** ``` Fluent Bit v1.4.2 * Copyright (C) 2019-2020 The Fluent Bit Authors * Copyright (C) 2015-2018 Treasure Data * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd * https://fluentbit.io [2020/04/07 15:00:57] [ info] Configuration: [2020/04/07 15:00:57] [ info] flush time | 1.000000 seconds [2020/04/07 15:00:57] [ info] grace | 5 seconds [2020/04/07 15:00:57] [ info] daemon | 0 [2020/04/07 15:00:57] [ info] ___________ [2020/04/07 15:00:57] [ info] inputs: [2020/04/07 15:00:57] [ info] tail [2020/04/07 15:00:57] [ info] ___________ [2020/04/07 15:00:57] [ info] filters: [2020/04/07 15:00:57] [ info] kubernetes.0 [2020/04/07 15:00:57] [ info] ___________ [2020/04/07 15:00:57] [ info] outputs: [2020/04/07 15:00:57] [ info] es.0 [2020/04/07 15:00:57] [ info] ___________ [2020/04/07 15:00:57] [ info] collectors: [2020/04/07 15:00:57] [debug] [storage] [cio stream] new stream registered: tail.0 [2020/04/07 15:00:57] [ info] [storage] version=1.0.3, initializing... [2020/04/07 15:00:57] [ info] [storage] in-memory [2020/04/07 15:00:57] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128 [2020/04/07 15:00:57] [ info] [engine] started (pid=10576) [2020/04/07 15:00:57] [debug] [engine] coroutine stack size: 98302 bytes (96.0K) [2020/04/07 15:00:58] [debug] [input:tail:tail.0] scanning path C:\\ProgramData\\Docker\\containers\\*\\*.log [2020/04/07 15:00:58] [error] [sqldb] error=unrecognized token: "237001930390711��" [2020/04/07 15:00:58] [debug] [input:tail:tail.0] add to scan queue C:\ProgramData\Docker\containers\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6-json.log, offset=0 [2020/04/07 15:00:58] [error] [sqldb] error=unrecognized token: "365917469762065��" [2020/04/07 15:00:58] [debug] [input:tail:tail.0] add to scan queue C:\ProgramData\Docker\containers\7fbc003699e6b2568f388dcd7963812f7d1cae210619d2c8e7f7b9aa8927e8b0\7fbc003699e6b2568f388dcd7963812f7d1cae210619d2c8e7f7b9aa8927e8b0-json.log, offset=0 [2020/04/07 15:00:58] [error] [sqldb] error=unrecognized token: "675539944143564��" [2020/04/07 15:00:58] [debug] [input:tail:tail.0] add to scan queue C:\ProgramData\Docker\containers\d390fe73943c10afab770d3935cca8aeaae41d303574612909a63f88787c4752\d390fe73943c10afab770d3935cca8aeaae41d303574612909a63f88787c4752-json.log, offset=0 [2020/04/07 15:00:58] [error] [sqldb] error=unrecognized token: "788129934827827��" [2020/04/07 15:00:58] [debug] [input:tail:tail.0] add to scan queue C:\ProgramData\Docker\containers\fc03b27983c8b55dcb4609286bba9a068e18e5659ca90d8e2bb95d2265efd7d9\fc03b27983c8b55dcb4609286bba9a068e18e5659ca90d8e2bb95d2265efd7d9-json.log, offset=0 [2020/04/07 15:00:58] [debug] [input:tail:tail.0] 4 files found for 'C:\\ProgramData\\Docker\\containers\\*\\*.log' [2020/04/07 15:00:58] [ info] [filter:kubernetes:kubernetes.0] https=1 host=kubernetes.default.svc.cluster.local port=443 [2020/04/07 15:00:58] [ info] [filter:kubernetes:kubernetes.0] local POD info OK [2020/04/07 15:00:58] [ info] [filter:kubernetes:kubernetes.0] testing connectivity with API server... [2020/04/07 15:00:58] [ warn] net_tcp_fd_connect: getaddrinfo(host='kubernetes.default.svc.cluster.local'): No such host is known. 
[2020/04/07 15:00:58] [error] [filter:kubernetes:kubernetes.0] upstream connection error [2020/04/07 15:00:58] [ warn] [filter:kubernetes:kubernetes.0] could not get meta for POD fluent-bit-gpkcs [2020/04/07 15:00:58] [debug] [output:es:es.0] host=elasticsearch port=9200 uri=/_bulk index=fluent-bit type=flb_type [2020/04/07 15:00:58] [debug] [router] match rule tail.0:es.0 [2020/04/07 15:00:58] [ info] [sp] stream processor started [2020/04/07 15:00:58] [ warn] [filter:kubernetes:kubernetes.0] invalid pattern for given tag kube.C:\ProgramData\Docker\containers\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6-json.log 2020/04/07 15:00:59] [ warn] [filter:kubernetes:kubernetes.0] invalid pattern for given tag kube.C:\ProgramData\Docker\containers\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6\710d649b86322548cbeee7e1d09d787b280a86ea530390ed9f1b3c42055fcbd6-json.log ``` **Your Environment** k8s cluster v1.15.7 with windows node **Additional context** let me know if more information is needed cc @fujimotos
process
kubernetes filter on windows not working fluent bit is not loading kubernetes filter config used apiversion kind configmap metadata name fluent bit config namespace logging labels app fluent bit data configuration files server input filters and output fluent bit conf flush log level debug daemon off parsers file parsers conf http server on http listen http port name tail tag kube path c programdata docker containers log parser docker db c flb kube db mem buf limit skip long lines on refresh interval name kubernetes match kube kube url kube ca file var run secrets kubernetes io serviceaccount ca crt kube token file var run secrets kubernetes io serviceaccount token kube tag prefix kube programdata docker containers merge log on merge log key log processed logging parser on logging exclude off name es match host fluent elasticsearch host port fluent elasticsearch port logstash format on replace dots on retry limit false parsers conf name apache format regex regex s s time key time time format d b y h m s z name format regex regex s s time key time time format d b y h m s z name apache error format regex regex name nginx format regex regex s s time key time time format d b y h m s z name json format json time key time time format d b y h m s z name docker format json time key time time format y m dt h m s l time keep on name syslog format regex regex time key time time format b d h m s guide used url expected behavior should filter the logs properly logs for reference fluent bit copyright c the fluent bit authors copyright c treasure data fluent bit is a cncf sub project under the umbrella of fluentd configuration flush time seconds grace seconds daemon inputs tail filters kubernetes outputs es collectors new stream registered tail version initializing in memory normal synchronization mode checksum disabled max chunks up started pid coroutine stack size bytes scanning path c programdata docker containers log error unrecognized token �� add to scan queue c programdata docker containers json log offset error unrecognized token �� add to scan queue c programdata docker containers json log offset error unrecognized token �� add to scan queue c programdata docker containers json log offset error unrecognized token �� add to scan queue c programdata docker containers json log offset files found for c programdata docker containers log https host kubernetes default svc cluster local port local pod info ok testing connectivity with api server net tcp fd connect getaddrinfo host kubernetes default svc cluster local no such host is known upstream connection error could not get meta for pod fluent bit gpkcs host elasticsearch port uri bulk index fluent bit type flb type match rule tail es stream processor started invalid pattern for given tag kube c programdata docker containers json log invalid pattern for given tag kube c programdata docker containers json log your environment cluster with windows node additional context let me know if more information is needed cc fujimotos
1
21,952
30,452,393,651
IssuesEvent
2023-07-16 13:11:13
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
etm-dgraham 5.1.13 has 4 GuardDog issues
guarddog exec-base64 silent-process-execution
https://pypi.org/project/etm-dgraham https://inspector.pypi.io/project/etm-dgraham ```{ "dependency": "etm-dgraham", "version": "5.1.13", "result": { "issues": 4, "errors": {}, "results": { "silent-process-execution": [ { "location": "etm-dgraham-5.1.13/etm/view.py:1558", "code": " pid = subprocess.Popen(parts, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).pid", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ], "exec-base64": [ { "location": "etm-dgraham-5.1.13/bump.py:121", "code": " check_output(f\"git commit -a -m '{tmsg}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" }, { "location": "etm-dgraham-5.1.13/bump.py:123", "code": " check_output(f\"git tag -a -f '{new_version}' -m '{version_info}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" }, { "location": "etm-dgraham-5.1.13/bump.py:128", "code": " check_output(f\"git commit -a --amend -m '{tmsg}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" } ] }, "path": "/tmp/tmp8mpj2ho9/etm-dgraham" } }```
1.0
etm-dgraham 5.1.13 has 4 GuardDog issues - https://pypi.org/project/etm-dgraham https://inspector.pypi.io/project/etm-dgraham ```{ "dependency": "etm-dgraham", "version": "5.1.13", "result": { "issues": 4, "errors": {}, "results": { "silent-process-execution": [ { "location": "etm-dgraham-5.1.13/etm/view.py:1558", "code": " pid = subprocess.Popen(parts, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).pid", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ], "exec-base64": [ { "location": "etm-dgraham-5.1.13/bump.py:121", "code": " check_output(f\"git commit -a -m '{tmsg}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" }, { "location": "etm-dgraham-5.1.13/bump.py:123", "code": " check_output(f\"git tag -a -f '{new_version}' -m '{version_info}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" }, { "location": "etm-dgraham-5.1.13/bump.py:128", "code": " check_output(f\"git commit -a --amend -m '{tmsg}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" } ] }, "path": "/tmp/tmp8mpj2ho9/etm-dgraham" } }```
process
etm dgraham has guarddog issues dependency etm dgraham version result issues errors results silent process execution location etm dgraham etm view py code pid subprocess popen parts stdin subprocess devnull stdout subprocess devnull stderr subprocess devnull pid message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null exec location etm dgraham bump py code check output f git commit a m tmsg message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location etm dgraham bump py code check output f git tag a f new version m version info message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location etm dgraham bump py code check output f git commit a amend m tmsg message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n path tmp etm dgraham
1
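For context on the hits above: GuardDog's `silent-process-execution` rule fires when a child process has all three standard streams redirected to `DEVNULL`. A minimal Python sketch of the flagged shape — a Python one-liner stands in for the external program `etm/view.py` actually launches:

```python
import subprocess
import sys

# All three streams are discarded: the child runs with no visible input or
# output, which is exactly the pattern the silent-process-execution rule flags.
pid = subprocess.Popen(
    [sys.executable, "-c", "print('hi')"],  # placeholder for the external binary
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
).pid
print(f"started background process {pid}")
```

The three `exec-base64` hits, by contrast, point at plain f-string `git` invocations with no `eval` or base64 anywhere in the quoted code, so they appear to be false positives of that heuristic.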
22,414
31,147,121,467
IssuesEvent
2023-08-16 07:22:53
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Ubuntu Image 18.04 no longer valid
doc-bug Pri1 azure-devops-pipelines/svc azure-devops-pipelines-process/subsvc
Hello, the code samples for output variables still use `vmImage: 'ubuntu-18.04'`. As this image is no longer available, this led to a job simply not starting without any error message for me. Please update the samples to use current, or ideally the latest, VM images. Thank you! Best regards, Sven --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/variables.md) * Service: **azure-devops-pipelines** * Sub-service: **azure-devops-pipelines-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Ubuntu Image 18.04 no longer valid - Hello, the code samples for output variables still use `vmImage: 'ubuntu-18.04'`. As this image is no longer available, this led to a job simply not starting without any error message for me. Please update the samples to use current, or ideally the latest, VM images. Thank you! Best regards, Sven --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/variables.md) * Service: **azure-devops-pipelines** * Sub-service: **azure-devops-pipelines-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
ubuntu image no longer valid hello the code samples for output variables still use vmimage ubuntu as this image is no longer available this led to a job simply not starting without any error message for me please update the samples to use current or maybe latest vm images thank you best regards sven document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id bcdb content content source service azure devops pipelines sub service azure devops pipelines process github login juliakm microsoft alias jukullam
1
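One hedged way to catch such stale samples before they ship is a small audit script over the YAML files; the set of retired image names below is an assumption for illustration, not an official list:

```python
import re
import sys
from pathlib import Path

# Assumed set of retired hosted images; extend as Microsoft removes more.
DEPRECATED = {"ubuntu-16.04", "ubuntu-18.04", "macOS-10.15", "vs2017-win2016"}
PATTERN = re.compile(r"""vmImage:\s*['"]?([\w.-]+)['"]?""")

def find_deprecated_images(root):
    """Yield (path, line_no, image) for every deprecated vmImage reference."""
    for path in Path(root).rglob("*.yml"):
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            match = PATTERN.search(line)
            if match and match.group(1) in DEPRECATED:
                yield path, line_no, match.group(1)

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, line_no, image in find_deprecated_images(root):
        print(f"{path}:{line_no}: replace '{image}' with 'ubuntu-latest'")
```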
14,768
18,045,970,132
IssuesEvent
2021-09-18 22:41:05
SirSertile/CNCS-Capstone
https://api.github.com/repos/SirSertile/CNCS-Capstone
closed
Write up parts list for "mock doorway"
Custom Hardware In Process
This is a test-bed for attacks on doors. It should have all the major features of a door, including a lever handle, strike plate, and hinge. This focuses on attacks NOT primarily aimed at the locking component. This allows me to purchase cheaper options for the handle.
1.0
Write up parts list for "mock doorway" - This is a test-bed for attacks on doors. It should have all the major features of a door, including a lever handle, strike plate, and hinge. This focuses on attacks NOT primarily aimed at the locking component. This allows me to purchase cheaper options for the handle.
process
write up parts list for mock doorway this is a test bed for attacks on doors should have all the major features of a door including a lever handle strike plate and hinge this focuses on attacks not primarily focused around the locking component this allows me to purchase cheaper options for the handle
1
8,790
11,908,164,460
IssuesEvent
2020-03-31 00:09:48
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Layer Sorting in Toolbox Multiple Selection
Feature Request Processing
When you go into a tool in the toolbox, say Merge Vector Layers for example, it is difficult to find which input layers to select. It would be great if the input layers could be sorted alphabetically or given the same structure as the layers list in the main QGIS window. Usually I have to rename the files I want to merge so they have lots of xxxxx in front of them, to make them easier to see in the list. I have over 200 layers within a project. The image shows how poorly sorted the list is. ![image](https://user-images.githubusercontent.com/56584917/69215180-ea7f2f80-0ba3-11ea-9919-5f78d66f3da0.png)
1.0
Layer Sorting in Toolbox Multiple Selection - When you go into a tool in the toolbox, say Merge Vector Layers for example, it is difficult to find which input layers to select. It would be great if the input layers could be sorted alphabetically or given the same structure as the layers list in the main QGIS window. Usually I have to rename the files I want to merge so they have lots of xxxxx in front of them, to make them easier to see in the list. I have over 200 layers within a project. The image shows how poorly sorted the list is. ![image](https://user-images.githubusercontent.com/56584917/69215180-ea7f2f80-0ba3-11ea-9919-5f78d66f3da0.png)
process
layer sorting in toolbox multiple selection when you go into a tool in the tool box say merge vector layers for example it is difficult to find which input layers to select it would be great if the input layers could sort by alphabetical order or give the same file structure as the layers list in the main qgis window usually i have to rename files i want to merge so it has lots of xxxxx in front of it to be able to see it easier in the list i have over layers within a project image shows how poorly sorted the list is
1
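Until the Processing dialogs sort their layer lists, a small PyQGIS snippet can at least print the loaded layer names alphabetically so an input is easier to locate. This is a sketch meant for the QGIS Python console, where `qgis.core` is importable:

```python
from qgis.core import QgsProject

# Collect every loaded layer's display name and sort case-insensitively,
# approximating the ordering the tool dialogs currently lack.
names = sorted(
    (layer.name() for layer in QgsProject.instance().mapLayers().values()),
    key=str.lower,
)
for name in names:
    print(name)
```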
442,128
30,818,403,532
IssuesEvent
2023-08-01 14:47:53
iodepo/odis-arch
https://api.github.com/repos/iodepo/odis-arch
opened
Documentation focus
documentation
Creating a generic issue for holding some goals on the reworking of the documentation - [ ] Need a simple single page that defines the requirements for partners to join; ODIS Cat entry, sitemap.xml, etc - [ ] Perhaps look at the initial work (very basic) for partner onboarding and the decision tree in https://github.com/iodepo/odis-arch/tree/master/docs/decisiontree - [ ] update the SHACL validation section to focus on its use for alignment to the search We should also review all the current issues tagged with documentation: https://github.com/iodepo/odis-arch/labels/documentation
1.0
Documentation focus - Creating a generic issue for holding some goals on the reworking of the documentation - [ ] Need a simple single page that defines the requirements for partners to join; ODIS Cat entry, sitemap.xml, etc - [ ] Perhaps look at the initial work (very basic) for partner onboarding and the decision tree in https://github.com/iodepo/odis-arch/tree/master/docs/decisiontree - [ ] update the SHACL validation section to focus on its use for alignment to the search We should also review all the current issues tagged with documentation: https://github.com/iodepo/odis-arch/labels/documentation
non_process
documentation focus creating a generic issue for holding some goals on the reworking of the documentation need a simple single page that defines the requirements for partners to join odis cat entry sitemap xml etc perhaps look at the initial work very basic for partner onboarding and the decision tree in update the shacl validation section to focus on the use for alignment to the search we should also review all the current issues tagged with documentation
0
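On the SHACL item in the checklist above, a hedged sketch of what an alignment check could look like with the `pyshacl` library — the file names are placeholders, not actual ODIS artifacts:

```python
from pyshacl import validate
from rdflib import Graph

# Placeholder inputs: a partner's JSON-LD record and an ODIS SHACL shape file.
data = Graph().parse("record.jsonld", format="json-ld")
shapes = Graph().parse("odis-shape.ttl", format="turtle")

conforms, _report_graph, report_text = validate(data_graph=data, shacl_graph=shapes)
print("conforms:", conforms)
print(report_text)
```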
45,742
5,730,462,106
IssuesEvent
2017-04-21 09:29:13
FlightControl-Master/MOOSE
https://api.github.com/repos/FlightControl-Master/MOOSE
closed
CONTROLLABLE.TaskReturnToBase
enhancement ready for testing
Documentation of how to implement is missing, or this feature is not functional.
1.0
CONTROLLABLE.TaskReturnToBase - Documentation of how to implement is missing, or this feature is not functional.
non_process
controllable taskreturntobase documentation of how to implement is missing or this feature is not functional
0
198,065
14,959,895,672
IssuesEvent
2021-01-27 04:24:34
EnterpriseDB/docs
https://api.github.com/repos/EnterpriseDB/docs
closed
Feedback on /bart/2.6.1/bart_inst/03_configuring_bart.mdx
hyperlinks_latest_version
Hyperlinks Configuring the Database Server section - a. "Authorize SSH/SCP access without a password prompt <authorizing\_ssh/scp\_access>" hyperlink is not working. b. The hyperlink for "Update the BART configuration file (server section)" <adding\_a\_database\_server> is not working.
1.0
Feedback on /bart/2.6.1/bart_inst/03_configuring_bart.mdx - Hyperlinks Configuring the Database Server section - a. "Authorize SSH/SCP access without a password prompt <authorizing\_ssh/scp\_access>" hyperlink is not working. b. The hyperlink for "Update the BART configuration file (server section)" <adding\_a\_database\_server> is not working.
non_process
feedback on bart bart inst configuring bart mdx hyperlinks configuring the database server section a authorize ssh scp access without a password prompt hyperlink is not working b the hyperlink for update the bart configuration file server section is not working
0
387,560
11,463,355,102
IssuesEvent
2020-02-07 15:51:21
canonical-web-and-design/tutorials.ubuntu.com
https://api.github.com/repos/canonical-web-and-design/tutorials.ubuntu.com
closed
Search, filter, and sort choices aren’t linkable
Priority: Medium
1. Go to https://tutorials.ubuntu.com/ 2. Enter search text, and/or filter by topic, and/or sort the results. 3. Copy the URL. 4. Open a new window and load the URL. What you see: The default list of tutorials. What you should see: The search text, filter, and sort order that you specified. This is inconsistent with the search functions on most Web sites. And it makes [snapcraft.io-static-pages#393](https://github.com/canonical-websites/snapcraft.io-static-pages/issues/393) impractical to fix.
1.0
Search, filter, and sort choices aren’t linkable - 1. Go to https://tutorials.ubuntu.com/ 2. Enter search text, and/or filter by topic, and/or sort the results. 3. Copy the URL. 4. Open a new window and load the URL. What you see: The default list of tutorials. What you should see: The search text, filter, and sort order that you specified. This is inconsistent with the search functions on most Web sites. And it makes [snapcraft.io-static-pages#393](https://github.com/canonical-websites/snapcraft.io-static-pages/issues/393) impractical to fix.
non_process
search filter and sort choices aren’t linkable go to enter search text and or filter by topic and or sort the results copy the url open a new window and load the url what you see the default list of tutorials what you should see the search text filter and sort order that you specified this is inconsistent with the search functions on most web sites and it makes impractical to fix
0
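The fix amounts to serializing the search, filter, and sort state into the query string so the URL round-trips. A sketch of the idea in Python; the parameter names are illustrative, not the site's actual API:

```python
from urllib.parse import urlencode

BASE = "https://tutorials.ubuntu.com/"

def tutorials_url(q="", topic="", sort=""):
    """Build a shareable URL carrying the current search/filter/sort state."""
    params = {k: v for k, v in (("q", q), ("topic", topic), ("sort", sort)) if v}
    return f"{BASE}?{urlencode(params)}" if params else BASE

# Loading this URL should restore the same results the user was looking at.
print(tutorials_url(q="snap", topic="packaging", sort="newest"))
```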
428,069
12,402,348,499
IssuesEvent
2020-05-21 11:47:50
LorittaBot/Loritta
https://api.github.com/repos/LorittaBot/Loritta
closed
Add user's discord badges in +profile
Module: Loritta (Discord) 🎀 Priority: Low Status: On Hold 😴 Type: Enhancement ✨
Waiting for JDA's user flags branch to be merged. We can also drop the user flags field in the user settings table.
1.0
Add user's discord badges in +profile - Waiting for JDA's user flags branch to be merged. We can also drop the user flags field in the user settings table.
non_process
add user s discord badges in profile waiting for jda s user flags branch to be merged we can also drop the user flags field in the user settings table
0
311,503
23,390,295,234
IssuesEvent
2022-08-11 17:09:02
gpbl/react-day-picker
https://api.github.com/repos/gpbl/react-day-picker
closed
website: use shadow DOM to render DayPicker in the examples
Priority: High Type: Documentation
In the website we use Sandpack to render DayPicker because Docusaurus [overrides](https://github.com/facebook/docusaurus/issues/6032) the default style from DayPicker. A workaround for this solution is to use shadow DOM (e.g. via https://www.npmjs.com/package/react-shadow) for a proper style encapsulation. This would provide an optimal testing environment when developing DayPicker.
1.0
website: use shadow DOM to render DayPicker in the examples - In the website we use Sandpack to render DayPicker because Docusaurus [overrides](https://github.com/facebook/docusaurus/issues/6032) the default style from DayPicker. A workaround for this solution is to use shadow DOM (e.g. via https://www.npmjs.com/package/react-shadow) for a proper style encapsulation. This would provide an optimal testing environment when developing DayPicker.
non_process
website use shadow dom to render daypicker in the examples in the website we use sandpack to render daypicker because docusaurus the default style from daypicker a workaround for this solution is to use shadow dom e g via for a proper style encapsulation this would provide an optimal testing environment when developing daypicker
0
96,868
20,120,788,212
IssuesEvent
2022-02-08 01:55:48
bngarren/icu-rounder
https://api.github.com/repos/bngarren/icu-rounder
closed
Migration
feature - major refactoring/code quality
Working on "migration" **branch**. Updating many things, but most importantly MUI to v5 and an entirely new styling system: - [X] NPM to v8.1.2 and node to v16.13.2 - ea48f4d1 - [X] React Router (react-router-dom) to v6.2.1 - 12a42781 - [X] react-movable to v3.0.2- 0dee881c - ~~@react-pdf/renderer~~ Tried to update to v2.1.0, but kept getting an "unsupported number: -Infinity" error which I think was related to the CSS formatting of the grid; could try again in the future, but not needed for now. `https://github.com/diegomura/react-pdf/issues/1629` - ~~react-scripts~~ Tried to update to v5.0.0 but there is a big issue with using node back-end packages in the front-end browser setting, something to do with polyfills. Essentially, this is a breaking change for now and will keep 4.0.3, until `https://github.com/facebook/create-react-app/issues/11756` is resolved - [x] Material UI v5 ***This is a BIG refactor** 64b8419e - [X] Added the new @mui and @emotion packages. Ran the "preset-safe" codemod to transform a lot of code needed for the v4->v5 migration. This affected ~38 files or so (most components). I then fixed some quick errors in several files. fe415396, 75420adc - There is still a non-fatal error in ContentInput when switching from nestedContent to simpleContent that I can't figure out, #36 - [x] Prior to digging into the styling overhaul for each component, let's first create a **Theme**, and use theme variables throughout our components. This will set up us for quick theme switches, color corrections, etc. later on - [x] Gradually convert each "old" way of styling using `makeStyles` (currently being covered by the @mui/styles package) to the new @emotion based way. See https://mui.com/guides/migration-v4/#migrate-from-jss - ~~Upgrade `firebase` package to v9 which allows for tree-shaking and smaller bundle sizes. Follow the guide: https://firebase.google.com/docs/web/modular-upgrade~~ Converting to #39 - [x] **FINALLY**: Merge migration branch into develop
1.0
Migration - Working on "migration" **branch**. Updating many things, but most importantly MUI to v5 and an entirely new styling system: - [X] NPM to v8.1.2 and node to v16.13.2 - ea48f4d1 - [X] React Router (react-router-dom) to v6.2.1 - 12a42781 - [X] react-movable to v3.0.2- 0dee881c - ~~@react-pdf/renderer~~ Tried to update to v2.1.0, but kept getting an "unsupported number: -Infinity" error which I think was related to the CSS formatting of the grid; could try again in the future, but not needed for now. `https://github.com/diegomura/react-pdf/issues/1629` - ~~react-scripts~~ Tried to update to v5.0.0 but there is a big issue with using node back-end packages in the front-end browser setting, something to do with polyfills. Essentially, this is a breaking change for now and will keep 4.0.3, until `https://github.com/facebook/create-react-app/issues/11756` is resolved - [x] Material UI v5 **This is a BIG refactor** 64b8419e - [X] Added the new @mui and @emotion packages. Ran the "preset-safe" codemod to transform a lot of code needed for the v4->v5 migration. This affected ~38 files or so (most components). I then fixed some quick errors in several files. fe415396, 75420adc - There is still a non-fatal error in ContentInput when switching from nestedContent to simpleContent that I can't figure out, #36 - [x] Prior to digging into the styling overhaul for each component, let's first create a **Theme**, and use theme variables throughout our components. This will set us up for quick theme switches, color corrections, etc. later on - [x] Gradually convert each "old" way of styling using `makeStyles` (currently being covered by the @mui/styles package) to the new @emotion based way. See https://mui.com/guides/migration-v4/#migrate-from-jss - ~~Upgrade `firebase` package to v9 which allows for tree-shaking and smaller bundle sizes. Follow the guide: https://firebase.google.com/docs/web/modular-upgrade~~ Converting to #39 - [x] **FINALLY**: Merge migration branch into develop
non_process
migration working on migration branch updating many things but most importantly mui to and an entirely new styling system npm to and node to react router react router dom to react movable to react pdf renderer tried to update to but kept getting an unsupported number infinity error which i think was related to the css formatting of the grid could try again in the future but not needed for now react scripts tried to update to but there is a big issue with using node back end packages in the front end browser setting something to do with polyfills essentially this is a breaking change for now and will keep until is resolved material ui this is a big refactor added the new mui and emotion packages ran the preset safe codemod to transform a lot of code needed for the migration this affected files or so most components i then fixed some quick errors in several files there is still a non fatal error in contentinput when switching from nestedcontent to simplecontent that i can t figure out prior to digging into the styling overhaul for each component let s first create a theme and use theme variables throughout our components this will set up us for quick theme switches color corrections etc later on gradually convert each old way of styling using makestyles currently being covered by the mui styles package to the new emotion based way see upgrade firebase package to which allows for tree shaking and smaller bundle sizes follow the guide converting to finally merge migration branch into develop
0
98,847
20,812,497,077
IssuesEvent
2022-03-18 05:40:40
Ale-Torres/BrowserQuest
https://api.github.com/repos/Ale-Torres/BrowserQuest
closed
Assignment made within subexpression
code smell
In the function trimDots in the bundled require/jQuery code, there is an assignment made within a for-loop subexpression, which is a code smell.
1.0
Assignment made within subexpression - In the function trimDots in the bundled require/jQuery code, there is an assignment made within a for-loop subexpression, which is a code smell.
non_process
assignment made within subexpression in the function trimdots in require jquery there is an assignment made with a for loop which is a code smell
0
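The flagged code is JavaScript (require.js's `trimDots`), but the smell is language-agnostic: an assignment buried inside an expression forces the reader to spot a side effect mid-condition. A Python analogue, first smelly, then refactored:

```python
import io

stream = io.StringIO("a\nb\nc\n")

# Smelly: the assignment to `line` happens inside the loop condition.
while (line := stream.readline()):
    print(line.strip())

# Clearer: iterate directly, so no assignment hides in a subexpression.
stream = io.StringIO("a\nb\nc\n")
for line in stream:
    print(line.strip())
```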
16,250
3,349,047,718
IssuesEvent
2015-11-17 07:14:49
Menon24/B6T3WX3I4U4VCWIYBNAYQJLH
https://api.github.com/repos/Menon24/B6T3WX3I4U4VCWIYBNAYQJLH
closed
xMU6GQI+anLuPL2acDbAWs+mcSGjIhMnIiXBXbAvQI7CpriwgxprKvVLmuZ54EnBC7AfYzSfI/KWPHOrMSC7GmYB9SridPZdkOsGm0Ifn8yeJzZRErTQGIPAeKa0mW0mAnxxDf/stj9GKIT87GyfA7Dxkxk1IkqbSoBKieD92gM=
design
PPv6NHfg/0A6JTUR8dJyeW7I1zbfiVPp50QzSvCb5mUVngYgZiOjMRL8RmMAWT84RI9EfUyEovHvY1LN2sb+I+PXnEFD8myBCvdzFoHz9GX7zrPRjL00phdUPBYoFdIrvurBJU+aY8PdORBW6FVULe3pVhhEAHS5/xSHMNfVEITo6x+Cyy7Vvbe0U+ZVRBEns/g/sh1nDBVAumJ1CTgHbqh+OLSD5EENazoM6u0XkxUW+oMJI4E9ZbBINUaq31Dx+ikpe+pQAi0PQ7mk2baEYUJ0WsA1njpx24+q2z7azi+ndlyJH3auIxg5JJRK6Slsxh82zkp4+Ks8qfJY3Tf3w7KhDE5rHWETewG1oCG3MlZyFnfR68heeIBh5mNKUtOFiOydHU6oISNRgShwmYuFvV85hCgHEm+Yp2hoGlIyPdsUfTHv5vMb+olV65IzZ5ruaBrit2Km909ypOxTlrYcQm/bGu6tNG/1+ploZVyUU+np19oS5oTP871B/Zh0N0nB0DnzlMd7y3RE9j3NE/EDeWoC6Cyqdedzl4f2l40sf+W5kNCHEnsoKBfKRrjMrfZZ7q6GOCm+m1L7L68h++XJXURiI1KtApgy5MbuxIT4+CdGcC4MjLHvsbtFgj2ePKhgyPp/fKg//7T4pPziuO9aBPqqhgeR89DtKPEJEz2ua/BdleeBPL/eGK+W/fVyvW+Y
1.0
xMU6GQI+anLuPL2acDbAWs+mcSGjIhMnIiXBXbAvQI7CpriwgxprKvVLmuZ54EnBC7AfYzSfI/KWPHOrMSC7GmYB9SridPZdkOsGm0Ifn8yeJzZRErTQGIPAeKa0mW0mAnxxDf/stj9GKIT87GyfA7Dxkxk1IkqbSoBKieD92gM= - PPv6NHfg/0A6JTUR8dJyeW7I1zbfiVPp50QzSvCb5mUVngYgZiOjMRL8RmMAWT84RI9EfUyEovHvY1LN2sb+I+PXnEFD8myBCvdzFoHz9GX7zrPRjL00phdUPBYoFdIrvurBJU+aY8PdORBW6FVULe3pVhhEAHS5/xSHMNfVEITo6x+Cyy7Vvbe0U+ZVRBEns/g/sh1nDBVAumJ1CTgHbqh+OLSD5EENazoM6u0XkxUW+oMJI4E9ZbBINUaq31Dx+ikpe+pQAi0PQ7mk2baEYUJ0WsA1njpx24+q2z7azi+ndlyJH3auIxg5JJRK6Slsxh82zkp4+Ks8qfJY3Tf3w7KhDE5rHWETewG1oCG3MlZyFnfR68heeIBh5mNKUtOFiOydHU6oISNRgShwmYuFvV85hCgHEm+Yp2hoGlIyPdsUfTHv5vMb+olV65IzZ5ruaBrit2Km909ypOxTlrYcQm/bGu6tNG/1+ploZVyUU+np19oS5oTP871B/Zh0N0nB0DnzlMd7y3RE9j3NE/EDeWoC6Cyqdedzl4f2l40sf+W5kNCHEnsoKBfKRrjMrfZZ7q6GOCm+m1L7L68h++XJXURiI1KtApgy5MbuxIT4+CdGcC4MjLHvsbtFgj2ePKhgyPp/fKg//7T4pPziuO9aBPqqhgeR89DtKPEJEz2ua/BdleeBPL/eGK+W/fVyvW+Y
non_process
i zvrbens g ikpe plozvyuu fkg bdleebpl egk w fvyvw y
0
18,282
24,372,887,291
IssuesEvent
2022-10-03 20:59:00
apache/arrow-rs
https://api.github.com/repos/apache/arrow-rs
closed
Release Arrow `24.0.0` (next release after `23.0.0`)
development-process
Follow on from https://github.com/apache/arrow-rs/issues/2665 * Planned Release Candidate: 2022-09-30 * Planned Release and Publish to crates.io: 2022-10-03 Items (from [dev/release/README.md](https://github.com/apache/arrow-rs/blob/master/dev/release/README.md)): - [x] PR to update version and CHANGELOG: https://github.com/apache/arrow-rs/pull/2808 - [x] Release candidate created: https://lists.apache.org/thread/62dg461z716mddtm7vj2vdysjxwz3jcl - [x] Release candidate approved: https://lists.apache.org/thread/cmmz42tkqdlo4n29ds3nbh68t8kpdjl2 - [x] Release to crates.io: https://lists.apache.org/thread/cmmz42tkqdlo4n29ds3nbh68t8kpdjl2 - [x] Draft update to DataFusion: https://github.com/apache/arrow-datafusion/pull/3691 See full list here: https://github.com/apache/arrow-rs/compare/23.0.0...master cc @iajoiner @tustvold @viirya
1.0
Release Arrow `24.0.0` (next release after `23.0.0`) - Follow on from https://github.com/apache/arrow-rs/issues/2665 * Planned Release Candidate: 2022-09-30 * Planned Release and Publish to crates.io: 2022-10-03 Items (from [dev/release/README.md](https://github.com/apache/arrow-rs/blob/master/dev/release/README.md)): - [x] PR to update version and CHANGELOG: https://github.com/apache/arrow-rs/pull/2808 - [x] Release candidate created: https://lists.apache.org/thread/62dg461z716mddtm7vj2vdysjxwz3jcl - [x] Release candidate approved: https://lists.apache.org/thread/cmmz42tkqdlo4n29ds3nbh68t8kpdjl2 - [x] Release to crates.io: https://lists.apache.org/thread/cmmz42tkqdlo4n29ds3nbh68t8kpdjl2 - [x] Draft update to DataFusion: https://github.com/apache/arrow-datafusion/pull/3691 See full list here: https://github.com/apache/arrow-rs/compare/23.0.0...master cc @iajoiner @tustvold @viirya
process
release arrow next release after follow on from planned release candidate planned release and publish to crates io items from pr to update version and changelog release candidate created release candidate approved release to crates io draft update to datafusion see full list here cc iajoiner tustvold viirya
1
172,364
13,303,116,587
IssuesEvent
2020-08-25 15:06:08
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: backupTPCC failed
C-test-failure O-roachtest O-robot branch-release-19.1 release-blocker
[(roachtest).backupTPCC failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2210032&tab=buildLog) on [release-19.1@b4804a05fe7a2f7b0c6cbad9128a811c5d29ba0a](https://github.com/cockroachdb/cockroach/commits/b4804a05fe7a2f7b0c6cbad9128a811c5d29ba0a): ``` The test failed on branch=release-19.1, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/backupTPCC/run_1 cluster.go:2167,backup.go:229,test_runner.go:754: output in run_054323.657_n1_workload_init_tpcc: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2210032-1598074895-16-n3cpu4:1 -- ./workload init tpcc --warehouses=10 {pgurl:1-3} returned: exit status 20 (1) attached stack trace | main.(*cluster).RunE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2245 | main.(*cluster).Run | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2165 | main.registerBackup.func4 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/backup.go:229 | main.(*testRunner).runTest.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:754 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1357 Wraps: (2) 2 safe details enclosed Wraps: (3) output in run_054323.657_n1_workload_init_tpcc Wraps: (4) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2210032-1598074895-16-n3cpu4:1 -- ./workload init tpcc --warehouses=10 {pgurl:1-3} returned | stderr: | d.go:140 imported customer (16s, 300000 rows) | I200822 05:43:47.204948 1 workload/workloadsql/dataload.go:140 imported history (6s, 300000 rows) | I200822 05:43:53.734094 1 workload/workloadsql/dataload.go:140 imported order (7s, 300000 rows) | I200822 05:43:54.526333 1 workload/workloadsql/dataload.go:140 imported new_order (1s, 90000 rows) | I200822 05:43:56.029161 1 workload/workloadsql/dataload.go:140 imported item (2s, 100000 rows) | I200822 05:44:50.156559 1 workload/workloadsql/dataload.go:140 imported stock (54s, 1000000 rows) | I200822 05:45:33.330498 1 workload/workloadsql/dataload.go:140 imported order_line (43s, 3001222 rows) | Error: Could not postload: pq: foreign key requires an existing index on columns ("h_c_w_id", "h_c_d_id", "h_c_id") | Error: COMMAND_PROBLEM: exit status 1 | (1) COMMAND_PROBLEM | Wraps: (2) Node 1. Command with error: | | ``` | | ./workload init tpcc --warehouses=10 {pgurl:1-3} | | ``` | Wraps: (3) exit status 1 | Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError | | stdout: Wraps: (5) exit status 20 Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError ``` <details><summary>More</summary><p> Artifacts: [/backupTPCC](https://teamcity.cockroachdb.com/viewLog.html?buildId=2210032&tab=artifacts#/backupTPCC) Related: - #53182 roachtest: backupTPCC failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) - #53119 roachtest: backupTPCC failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) - #53082 roachtest: backupTPCC failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202008191705_v19.2.10](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202008191705_v19.2.10) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2AbackupTPCC.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
2.0
roachtest: backupTPCC failed - [(roachtest).backupTPCC failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2210032&tab=buildLog) on [release-19.1@b4804a05fe7a2f7b0c6cbad9128a811c5d29ba0a](https://github.com/cockroachdb/cockroach/commits/b4804a05fe7a2f7b0c6cbad9128a811c5d29ba0a): ``` The test failed on branch=release-19.1, cloud=gce: test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/backupTPCC/run_1 cluster.go:2167,backup.go:229,test_runner.go:754: output in run_054323.657_n1_workload_init_tpcc: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2210032-1598074895-16-n3cpu4:1 -- ./workload init tpcc --warehouses=10 {pgurl:1-3} returned: exit status 20 (1) attached stack trace | main.(*cluster).RunE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2245 | main.(*cluster).Run | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2165 | main.registerBackup.func4 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/backup.go:229 | main.(*testRunner).runTest.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:754 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1357 Wraps: (2) 2 safe details enclosed Wraps: (3) output in run_054323.657_n1_workload_init_tpcc Wraps: (4) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2210032-1598074895-16-n3cpu4:1 -- ./workload init tpcc --warehouses=10 {pgurl:1-3} returned | stderr: | d.go:140 imported customer (16s, 300000 rows) | I200822 05:43:47.204948 1 workload/workloadsql/dataload.go:140 imported history (6s, 300000 rows) | I200822 05:43:53.734094 1 workload/workloadsql/dataload.go:140 imported order (7s, 300000 rows) | I200822 05:43:54.526333 1 workload/workloadsql/dataload.go:140 imported new_order (1s, 90000 rows) | I200822 05:43:56.029161 1 workload/workloadsql/dataload.go:140 imported item (2s, 100000 rows) | I200822 05:44:50.156559 1 workload/workloadsql/dataload.go:140 imported stock (54s, 1000000 rows) | I200822 05:45:33.330498 1 workload/workloadsql/dataload.go:140 imported order_line (43s, 3001222 rows) | Error: Could not postload: pq: foreign key requires an existing index on columns ("h_c_w_id", "h_c_d_id", "h_c_id") | Error: COMMAND_PROBLEM: exit status 1 | (1) COMMAND_PROBLEM | Wraps: (2) Node 1. Command with error: | | ``` | | ./workload init tpcc --warehouses=10 {pgurl:1-3} | | ``` | Wraps: (3) exit status 1 | Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError | | stdout: Wraps: (5) exit status 20 Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError ``` <details><summary>More</summary><p> Artifacts: [/backupTPCC](https://teamcity.cockroachdb.com/viewLog.html?buildId=2210032&tab=artifacts#/backupTPCC) Related: - #53182 roachtest: backupTPCC failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) - #53119 roachtest: backupTPCC failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) - #53082 roachtest: backupTPCC failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202008191705_v19.2.10](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202008191705_v19.2.10) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2AbackupTPCC.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
non_process
roachtest backuptpcc failed on the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts backuptpcc run cluster go backup go test runner go output in run workload init tpcc home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload init tpcc warehouses pgurl returned exit status attached stack trace main cluster rune home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main cluster run home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main registerbackup home agent work go src github com cockroachdb cockroach pkg cmd roachtest backup go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go runtime goexit usr local go src runtime asm s wraps safe details enclosed wraps output in run workload init tpcc wraps home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload init tpcc warehouses pgurl returned stderr d go imported customer rows workload workloadsql dataload go imported history rows workload workloadsql dataload go imported order rows workload workloadsql dataload go imported new order rows workload workloadsql dataload go imported item rows workload workloadsql dataload go imported stock rows workload workloadsql dataload go imported order line rows error could not postload pq foreign key requires an existing index on columns h c w id h c d id h c id error command problem exit status command problem wraps node command with error workload init tpcc warehouses pgurl wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout wraps exit status error types withstack withstack safedetails withsafedetails errutil withmessage main withcommanddetails exec exiterror more artifacts related roachtest backuptpcc failed roachtest backuptpcc failed roachtest backuptpcc failed powered by
0
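The actual failure in the record above is the postload step: CockroachDB 19.x refused to add a foreign key unless the referencing columns were already indexed, which is what `pq: foreign key requires an existing index` says. A hedged sketch of the fixed DDL order using standard TPC-C column names — the connection string, index name, and use of `psycopg2` are all illustrative assumptions:

```python
import psycopg2  # assumes a local CockroachDB listening on 26257

conn = psycopg2.connect("postgresql://root@localhost:26257/tpcc?sslmode=disable")
conn.autocommit = True
cur = conn.cursor()

# On 19.x the ALTER below fails unless an index on the referencing columns
# exists first, so create it before adding the foreign key.
cur.execute(
    "CREATE INDEX IF NOT EXISTS history_customer_idx "
    "ON history (h_c_w_id, h_c_d_id, h_c_id)"
)
cur.execute(
    "ALTER TABLE history ADD FOREIGN KEY (h_c_w_id, h_c_d_id, h_c_id) "
    "REFERENCES customer (c_w_id, c_d_id, c_id)"
)
```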
534
3,000,111,028
IssuesEvent
2015-07-23 22:43:48
zhengj2007/BFO-test
https://api.github.com/repos/zhengj2007/BFO-test
opened
Listing Clear Competency Questions for BFO2-FOL
imported Type-BFO2-Process
_From [jacu...@gmail.com](https://code.google.com/u/112908894172563557652/) on February 05, 2013 20:26:55_ As we think about how to get started and proceed in creating BFO2-FOL, I wonder what the purpose of this effort is and what we intend to achieve. We need a discussion of competency questions for BFO2-FOL. What counts as success for BFO2-FOL? I put this here since it seems to apply to all of the efforts. _Original issue: http://code.google.com/p/bfo/issues/detail?id=151_
1.0
Listing Clear Competency Questions for BFO2-FOL - _From [jacu...@gmail.com](https://code.google.com/u/112908894172563557652/) on February 05, 2013 20:26:55_ As we think about how to get started and proceed in creating BFO2-FOL, I wonder what the purpose of this effort is and what we intend to achieve. We need a discussion of competency questions for BFO2-FOL. What counts as success for BFO2-FOL? I put this here since it seems to apply to all of the efforts. _Original issue: http://code.google.com/p/bfo/issues/detail?id=151_
process
listing clear competency questions for fol from on february as we think about how to get started and proceed in creating fol i wonder what purpose of this effort is and what we intend to achieve we need a discussion of competency questions for fol what counts as success for fol i put this here since it seems to apply to all of the efforts original issue
1