column        dtype          stats
Unnamed: 0    int64          0 to 832k
id            float64        2.49B to 32.1B
type          stringclasses  1 value
created_at    stringlengths  19 to 19
repo          stringlengths  7 to 112
repo_url      stringlengths  36 to 141
action        stringclasses  3 values
title         stringlengths  1 to 744
labels        stringlengths  4 to 574
body          stringlengths  9 to 211k
index         stringclasses  10 values
text_combine  stringlengths  96 to 211k
label         stringclasses  2 values
text          stringlengths  96 to 188k
binary_label  int64          0 to 1

Unnamed: 0: 8,213
id: 11,405,306,143
type: IssuesEvent
created_at: 2020-01-31 11:46:33
repo: ESMValGroup/ESMValCore
repo_url: https://api.github.com/repos/ESMValGroup/ESMValCore
action: closed
title: Several unit tests involving masked arrays are not implemented correctly ?
labels: bug preprocessor
body: **Describe the bug** While working on #392 I found out that the numpy testing function `assert_array_equal` operates on `masked arrays` differently than what one might expect. Mwe: ``` import numpy as np from numpy.testing import assert_array_equal a = np.ma.masked_array(np.array([1,2]),mask=[True, True]) b = np.ma.masked_array(np.array([1,2]),mask=[True, False]) assert_array_equal(a,b) # doesn't raise ! ``` Correct testing should check the mask and the data separately to assert equality. ``` assert_array_equal(a.data, b.data) assert_array_equal(a.mask, b.mask) ``` Several functions in `test_mask.py` are impacted, e.g.: https://github.com/ESMValGroup/ESMValCore/blob/7d682894d872bd00909c040aee435857bbb9768f/tests/unit/preprocessor/_mask/test_mask.py#L94-L98
index: 1.0
text_combine: Several unit tests involving masked arrays are not implemented correctly ? - **Describe the bug** While working on #392 I found out that the numpy testing function `assert_array_equal` operates on `masked arrays` differently than what one might expect. Mwe: ``` import numpy as np from numpy.testing import assert_array_equal a = np.ma.masked_array(np.array([1,2]),mask=[True, True]) b = np.ma.masked_array(np.array([1,2]),mask=[True, False]) assert_array_equal(a,b) # doesn't raise ! ``` Correct testing should check the mask and the data separately to assert equality. ``` assert_array_equal(a.data, b.data) assert_array_equal(a.mask, b.mask) ``` Several functions in `test_mask.py` are impacted, e.g.: https://github.com/ESMValGroup/ESMValCore/blob/7d682894d872bd00909c040aee435857bbb9768f/tests/unit/preprocessor/_mask/test_mask.py#L94-L98
label: process
text: several unit tests involving masked arrays are not implemented correctly describe the bug while working on i found out that the numpy testing function assert array equal operates on masked arrays differently than what one might expect mwe import numpy as np from numpy testing import assert array equal a np ma masked array np array mask b np ma masked array np array mask assert array equal a b doesn t raise correct testing should check the mask and the data separately to assert equality assert array equal a data b data assert array equal a mask b mask several functions in test mask py are impacted e g
binary_label: 1
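The record above describes a numpy pitfall that can be reproduced directly. As a minimal runnable sketch of the separate data/mask comparison that the issue body recommends (the helper name `masked_arrays_equal` is ours for illustration, not from the issue):

```python
import numpy as np
from numpy.testing import assert_array_equal


def masked_arrays_equal(x, y):
    """Compare two masked arrays by checking data and mask separately,
    as the issue body recommends, instead of passing them to
    assert_array_equal directly."""
    try:
        assert_array_equal(x.data, y.data)
        assert_array_equal(x.mask, y.mask)
        return True
    except AssertionError:
        return False


a = np.ma.masked_array(np.array([1, 2]), mask=[True, True])
b = np.ma.masked_array(np.array([1, 2]), mask=[True, False])

print(masked_arrays_equal(a, a))  # True: identical data and mask
print(masked_arrays_equal(a, b))  # False: same data, different mask
```

Here `a` and `b` share the same underlying data but differ in their masks, which is exactly the case the issue says a direct `assert_array_equal(a, b)` fails to catch.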

Unnamed: 0: 11,673
id: 3,214,309,334
type: IssuesEvent
created_at: 2015-10-07 00:42:23
repo: broadinstitute/hellbender
repo_url: https://api.github.com/repos/broadinstitute/hellbender
action: closed
title: use SmallBamWriter in dataflow tests
labels: Dataflow tests
body: dataflow tests should use SmallBamWriter and compare the resulting bam files rather than outputting a text file with reads encoded as json
index: 1.0
text_combine: use SmallBamWriter in dataflow tests - dataflow tests should use SmallBamWriter and compare the resulting bam files rather than outputting a text file with reads encoded as json
label: non_process
text: use smallbamwriter in dataflow tests dataflow tests should use smallbamwriter and compare the resulting bam files rather than outputting a text file with reads encoded as json
binary_label: 0

Unnamed: 0: 17,962
id: 23,973,740,276
type: IssuesEvent
created_at: 2022-09-13 09:49:56
repo: Open-Data-Product-Initiative/open-data-product-spec
repo_url: https://api.github.com/repos/Open-Data-Product-Initiative/open-data-product-spec
action: opened
title: Prepayment option to Data Pricing
labels: enhancement unprocessed
body: In the [API world prepayment option](https://thenewstack.io/how-developers-monetize-apis-prepay-emerges-as-new-option/) is emerging as one of the options in pricing. It might make sense to add such an option to ODPS https://opendataproducts.org/#data-pricing as an option in _unit._ **option suggestion name:** _prepayment_ **option description:** A prepayment of credits into the platform in such a way that should it go low enough, automatically, we can refill that account. Once it goes to zero, no more service until the account is pre-paid again — just like the Starbucks app.
index: 1.0
text_combine: Prepayment option to Data Pricing - In the [API world prepayment option](https://thenewstack.io/how-developers-monetize-apis-prepay-emerges-as-new-option/) is emerging as one of the options in pricing. It might make sense to add such an option to ODPS https://opendataproducts.org/#data-pricing as an option in _unit._ **option suggestion name:** _prepayment_ **option description:** A prepayment of credits into the platform in such a way that should it go low enough, automatically, we can refill that account. Once it goes to zero, no more service until the account is pre-paid again — just like the Starbucks app.
label: process
text: prepayment option to data pricing in the is emerging as one of the options in pricing it might make sense to add such an option to odps as an option in unit option suggestion name prepayment option description a prepayment of credits into the platform in such a way that should it go low enough automatically we can refill that account once it goes to zero no more service until the account is pre paid again — just like the starbucks app
binary_label: 1

Unnamed: 0: 70,228
id: 8,513,738,192
type: IssuesEvent
created_at: 2018-10-31 16:45:45
repo: cockpit-project/cockpit
repo_url: https://api.github.com/repos/cockpit-project/cockpit
action: closed
title: How much RAM do I have?
labels: needsdesign
body: So one thing that I've been asking myself while looking at the memory performance graph on my home server is "Sure, 1.5 GB of Memory used, but 1.5 out of what now again?" If we had this number somewhere in the UI, it would help an admin tremendously in deciding if it's time to buy more RAM for the server or not.
index: 1.0
text_combine: How much RAM do I have? - So one thing that I've been asking myself while looking at the memory performance graph on my home server is "Sure, 1.5 GB of Memory used, but 1.5 out of what now again?" If we had this number somewhere in the UI, it would help an admin tremendously in deciding if it's time to buy more RAM for the server or not.
label: non_process
text: how much ram do i have so one thing that i ve been asking myself while looking at the memory performance graph on my home server is sure gb of memory used but out of what now again if we had this number somewhere in the ui it would help an admin tremendously in deciding if it s time to buy more ram for the server or not
binary_label: 0

Unnamed: 0: 274,401
id: 23,837,310,420
type: IssuesEvent
created_at: 2022-09-06 07:20:08
repo: wazuh/wazuh-qa
repo_url: https://api.github.com/repos/wazuh/wazuh-qa
action: closed
title: E2E tests: Research Emotet test failures
labels: team/qa subteam/qa-hurricane test/e2e
body: ## Description After the debugging and testing achieved in https://github.com/wazuh/wazuh-qa/issues/3166, we could see that test_emotet was failing, so we must find the reason for the failure and a solution for it. ### Executions - https://github.com/wazuh/wazuh-qa/issues/3166#issuecomment-1228250975 - https://github.com/wazuh/wazuh-qa/issues/3166#issuecomment-1228386460
index: 1.0
text_combine: E2E tests: Research Emotet test failures - ## Description After the debugging and testing achieved in https://github.com/wazuh/wazuh-qa/issues/3166, we could see that test_emotet was failing, so we must find the reason for the failure and a solution for it. ### Executions - https://github.com/wazuh/wazuh-qa/issues/3166#issuecomment-1228250975 - https://github.com/wazuh/wazuh-qa/issues/3166#issuecomment-1228386460
label: non_process
text: tests research emotet test failures description after the debugging and testing achieved in we could see that test emotet was failing so we must find the reason for the failure and a solution for it executions
binary_label: 0

Unnamed: 0: 204,603
id: 23,259,548,186
type: IssuesEvent
created_at: 2022-08-04 12:23:54
repo: elastic/kibana
repo_url: https://api.github.com/repos/elastic/kibana
action: opened
title: [Security Solution] Only Full screen icon is showing for empty message under Execution events
labels: bug triage_needed impact:low Team: SecuritySolution v8.4.0
body: **Describe the bug** Only Full screen icon is showing for empty message under Execution events **Build info** ``` VERSION : 8.4.0 BC1 Build: 54999 COMMIT: 58f7eaf0f8dc3c43cbfcd393e587f155e97b3d0d ``` **Preconditions** 1. Kibana should be running 2. Execution events tab should be enabled 3. Rule should be created **Steps to Reproduce** 1. Navigate to security > Rules page 2. Click on above created rule 3. Click on Execution events tab under rule details page 4. Expand the log that doesnot have any message 5. Observe that Full screen icon is showing for message 6. Click on full screen icon 7. Observe that blank screen is displaying **Actual Result** Full screen icon is showing of no data in message under Execution events **Expected Result** - Full screen icon should not be displayed for empty message under Execution events - Hyphen should be displayed for empty message **Screen-cast** https://user-images.githubusercontent.com/61860752/182845772-8db72ffc-7672-4043-9fa4-159c187bcae5.mp4
index: True
text_combine: [Security Solution] Only Full screen icon is showing for empty message under Execution events - **Describe the bug** Only Full screen icon is showing for empty message under Execution events **Build info** ``` VERSION : 8.4.0 BC1 Build: 54999 COMMIT: 58f7eaf0f8dc3c43cbfcd393e587f155e97b3d0d ``` **Preconditions** 1. Kibana should be running 2. Execution events tab should be enabled 3. Rule should be created **Steps to Reproduce** 1. Navigate to security > Rules page 2. Click on above created rule 3. Click on Execution events tab under rule details page 4. Expand the log that doesnot have any message 5. Observe that Full screen icon is showing for message 6. Click on full screen icon 7. Observe that blank screen is displaying **Actual Result** Full screen icon is showing of no data in message under Execution events **Expected Result** - Full screen icon should not be displayed for empty message under Execution events - Hyphen should be displayed for empty message **Screen-cast** https://user-images.githubusercontent.com/61860752/182845772-8db72ffc-7672-4043-9fa4-159c187bcae5.mp4
label: non_process
text: only full screen icon is showing for empty message under execution events describe the bug only full screen icon is showing for empty message under execution events build info version build commit preconditions kibana should be running execution events tab should be enabled rule should be created steps to reproduce navigate to security rules page click on above created rule click on execution events tab under rule details page expand the log that doesnot have any message observe that full screen icon is showing for message click on full screen icon observe that blank screen is displaying actual result full screen icon is showing of no data in message under execution events expected result full screen icon should not be displayed for empty message under execution events hyphen should be displayed for empty message screen cast
binary_label: 0

Unnamed: 0: 16,248
id: 20,798,555,620
type: IssuesEvent
created_at: 2022-03-17 11:43:56
repo: ltechkorea/mlperf-inference
repo_url: https://api.github.com/repos/ltechkorea/mlperf-inference
action: closed
title: Run Benchmark
labels: speech to text medical imaging Recommendation natural language processing object detection image classification pre-submit
body: ### Run Benchmark - [ ] Image Classification - [ ] Object Detection - [ ] Netural Language Processing - [ ] Recommendation - [ ] Medical Imaging - [ ] Speech to Text
index: 1.0
text_combine: Run Benchmark - ### Run Benchmark - [ ] Image Classification - [ ] Object Detection - [ ] Netural Language Processing - [ ] Recommendation - [ ] Medical Imaging - [ ] Speech to Text
label: process
text: run benchmark run benchmark image classification object detection netural language processing recommendation medical imaging speech to text
binary_label: 1

Unnamed: 0: 15,304
id: 19,343,736,284
type: IssuesEvent
created_at: 2021-12-15 08:38:45
repo: qgis/QGIS-Documentation
repo_url: https://api.github.com/repos/qgis/QGIS-Documentation
action: closed
title: [feature][processing] Allow saving outputs direct to other database destinations
labels: Automatic new feature Processing 3.14
body: Original commit: https://github.com/qgis/QGIS/commit/c2161638d16c954186f0ecc4769bc7645636f01f by nyalldawson Previously outputs could only be written direct to postgres databases. With this change, this functionality has been made more flexible and now supports direct writing to any database provider which implements the connections API (currently postgres, geopackage, spatialite and sql server) Ultimately this exposes the new ability to directly save outputs to SQL Server or Spatialite databases (alongside the previous GPKG+Postgres options which already existed) (As soon as oracle, db2, ... have the connections API implemented we'll instantly gain direct write support for those too!)
index: 1.0
text_combine: [feature][processing] Allow saving outputs direct to other database destinations - Original commit: https://github.com/qgis/QGIS/commit/c2161638d16c954186f0ecc4769bc7645636f01f by nyalldawson Previously outputs could only be written direct to postgres databases. With this change, this functionality has been made more flexible and now supports direct writing to any database provider which implements the connections API (currently postgres, geopackage, spatialite and sql server) Ultimately this exposes the new ability to directly save outputs to SQL Server or Spatialite databases (alongside the previous GPKG+Postgres options which already existed) (As soon as oracle, db2, ... have the connections API implemented we'll instantly gain direct write support for those too!)
label: process
text: allow saving outputs direct to other database destinations original commit by nyalldawson previously outputs could only be written direct to postgres databases with this change this functionality has been made more flexible and now supports direct writing to any database provider which implements the connections api currently postgres geopackage spatialite and sql server ultimately this exposes the new ability to directly save outputs to sql server or spatialite databases alongside the previous gpkg postgres options which already existed as soon as oracle have the connections api implemented we ll instantly gain direct write support for those too
binary_label: 1

Unnamed: 0: 5,661
id: 8,531,480,731
type: IssuesEvent
created_at: 2018-11-04 12:49:31
repo: magit/magit
repo_url: https://api.github.com/repos/magit/magit
action: closed
title: Show progress messages in magit-process buffer during fetch
labels: feature request process
body: When doing a fetch using magit I'd like to be able to see how far it's got, but all I see in the ```*magit-process*``` buffer is the following: ``` run git … fetch origin remote: Counting objects: 160, done. remote: Compressing objects: 100% (78/78), done. ``` Then nothing else until the fetch completes. I'm currently waiting for >700MiB to download, but have no idea how far it's got. When I run ```git fetch``` from the command line I can see output like this: ``` Receiving objects: 20% (28/136), 1.34 MiB | 195.00 KiB/s ``` But magit hides this output. Please could it not hide this output? Thanks, Mark
index: 1.0
text_combine: Show progress messages in magit-process buffer during fetch - When doing a fetch using magit I'd like to be able to see how far it's got, but all I see in the ```*magit-process*``` buffer is the following: ``` run git … fetch origin remote: Counting objects: 160, done. remote: Compressing objects: 100% (78/78), done. ``` Then nothing else until the fetch completes. I'm currently waiting for >700MiB to download, but have no idea how far it's got. When I run ```git fetch``` from the command line I can see output like this: ``` Receiving objects: 20% (28/136), 1.34 MiB | 195.00 KiB/s ``` But magit hides this output. Please could it not hide this output? Thanks, Mark
label: process
text: show progress messages in magit process buffer during fetch when doing a fetch using magit i d like to be able to see how far it s got but all i see in the magit process buffer is the following run git … fetch origin remote counting objects done remote compressing objects done then nothing else until the fetch completes i m currently waiting for to download but have no idea how far it s got when i run git fetch from the command line i can see output like this receiving objects mib kib s but magit hides this output please could it not hide this output thanks mark
binary_label: 1

Unnamed: 0: 109,148
id: 4,381,283,109
type: IssuesEvent
created_at: 2016-08-06 04:46:41
repo: WalkBikeCupertino/v2.0
repo_url: https://api.github.com/repos/WalkBikeCupertino/v2.0
action: opened
title: Need walking section
labels: enhancement P1 - Medium Priority
body: [Larry 7/30/2016] Walking Section: The city will conduct an effort this year to generate a Cupertino Pedestrian Plan. As this develops in the community this year, I’d like to see a section focused on that…. Your thoughts and suggestions are welcomed. [Jennifer 8/3/2016] This might be good idea for a box at the bottom (replacing the other content in time), but I would think that the overall content (long-term) would fit into one of our existing categories. We don’t want to come across as a Bike site, with a sprinkling of pedestrian content. It should probably be integrated.
index: 1.0
text_combine: Need walking section - [Larry 7/30/2016] Walking Section: The city will conduct an effort this year to generate a Cupertino Pedestrian Plan. As this develops in the community this year, I’d like to see a section focused on that…. Your thoughts and suggestions are welcomed. [Jennifer 8/3/2016] This might be good idea for a box at the bottom (replacing the other content in time), but I would think that the overall content (long-term) would fit into one of our existing categories. We don’t want to come across as a Bike site, with a sprinkling of pedestrian content. It should probably be integrated.
label: non_process
text: need walking section walking section the city will conduct an effort this year to generate a cupertino pedestrian plan as this develops in the community this year i’d like to see a section focused on that… your thoughts and suggestions are welcomed this might be good idea for a box at the bottom replacing the other content in time but i would think that the overall content long term would fit into one of our existing categories we don’t want to come across as a bike site with a sprinkling of pedestrian content it should probably be integrated
binary_label: 0

Unnamed: 0: 9,975
id: 13,019,092,256
type: IssuesEvent
created_at: 2020-07-26 20:39:09
repo: GeorgesOatesLarsen/Physics-GRE-Testgen
repo_url: https://api.github.com/repos/GeorgesOatesLarsen/Physics-GRE-Testgen
action: opened
title: ETS2017 Problem 5
labels: PROBLEM: PLEASE PROCESS Trivia
body: > By definition the electric displacement current through a surface S is proportional to the > magnetic flux through S > rate of change of the magnetic flux through S > time integral of the magnetic flux through S > electric flux through S > rate of change of the electric flux through S
index: 1.0
text_combine: ETS2017 Problem 5 - > By definition the electric displacement current through a surface S is proportional to the > magnetic flux through S > rate of change of the magnetic flux through S > time integral of the magnetic flux through S > electric flux through S > rate of change of the electric flux through S
label: process
text: problem by definition the electric displacement current through a surface s is proportional to the magnetic flux through s rate of change of the magnetic flux through s time integral of the magnetic flux through s electric flux through s rate of change of the electric flux through s
binary_label: 1

Unnamed: 0: 62,035
id: 12,197,360,329
type: IssuesEvent
created_at: 2020-04-29 20:38:20
repo: kwk/test-llvm-bz-import-5
repo_url: https://api.github.com/repos/kwk/test-llvm-bz-import-5
action: closed
title: clang++ with -fno-elide-constructors generates incorrect code
labels: BZ-BUG-STATUS: RESOLVED BZ-RESOLUTION: FIXED clang/LLVM Codegen dummy import from bugzilla
body: This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=12208.
index: 1.0
text_combine: clang++ with -fno-elide-constructors generates incorrect code - This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=12208.
label: non_process
text: clang with fno elide constructors generates incorrect code this issue was imported from bugzilla
binary_label: 0

Unnamed: 0: 285,359
id: 8,757,814,851
type: IssuesEvent
created_at: 2018-12-14 22:47:43
repo: pravega/pravega
repo_url: https://api.github.com/repos/pravega/pravega
action: closed
title: Metric Naming Problems
labels: area/metrics kind/enhancement priority/P2 status/needs-attention
body: **Problem description** We have a set of filters to break up the stats coming from pravega and create tags for them. There are new metrics in Pravega that no longer match those filters. Here are our current telegraf filters: ``` "*.*.*.pravega.*.* ....source.measurement pravega=1", "*.*.*.pravega.*.*.* ....source.measurement.field pravega=1", "*.*.*.pravega.*.*.*.*.* ....source.measurement.scope.stream.field pravega=1", "*.*.*.pravega.*.*.*.*.*.* ....source.measurement.scope.stream.segment.field pravega=1", ``` We can add a new filter to catch the case where there are 4 items after pravega: ``` "*.*.*.pravega.*.* ....source.measurement pravega=1", "*.*.*.pravega.*.*.* ....source.measurement.field pravega=1", "*.*.*.pravega.*.*.*.* ....source.measurement.container.field pravega=1", "*.*.*.pravega.*.*.*.*.* ....source.measurement.scope.stream.field pravega=1", "*.*.*.pravega.*.*.*.*.*.* ....source.measurement.scope.stream.segment.field pravega=1", ``` Even with this change, we'll have a couple of problems: 1. These items do not have a source: ``` active_segments bookkeeper_leger_count ``` 2. There are some container ids that have the format of: ``` 0-fail 1-fail 2-fail ``` **Problem location** Metrics **Suggestions for an improvement** Add a source for the active_segments and bookkeeper_leger_count and move the "-fail" suffix to the measurement name and not the container id.
index: 1.0
text_combine: Metric Naming Problems - **Problem description** We have a set of filters to break up the stats coming from pravega and create tags for them. There are new metrics in Pravega that no longer match those filters. Here are our current telegraf filters: ``` "*.*.*.pravega.*.* ....source.measurement pravega=1", "*.*.*.pravega.*.*.* ....source.measurement.field pravega=1", "*.*.*.pravega.*.*.*.*.* ....source.measurement.scope.stream.field pravega=1", "*.*.*.pravega.*.*.*.*.*.* ....source.measurement.scope.stream.segment.field pravega=1", ``` We can add a new filter to catch the case where there are 4 items after pravega: ``` "*.*.*.pravega.*.* ....source.measurement pravega=1", "*.*.*.pravega.*.*.* ....source.measurement.field pravega=1", "*.*.*.pravega.*.*.*.* ....source.measurement.container.field pravega=1", "*.*.*.pravega.*.*.*.*.* ....source.measurement.scope.stream.field pravega=1", "*.*.*.pravega.*.*.*.*.*.* ....source.measurement.scope.stream.segment.field pravega=1", ``` Even with this change, we'll have a couple of problems: 1. These items do not have a source: ``` active_segments bookkeeper_leger_count ``` 2. There are some container ids that have the format of: ``` 0-fail 1-fail 2-fail ``` **Problem location** Metrics **Suggestions for an improvement** Add a source for the active_segments and bookkeeper_leger_count and move the "-fail" suffix to the measurement name and not the container id.
label: non_process
text: metric naming problems problem description we have a set of filters to break up the stats coming from pravega and create tags for them there are new metrics in pravega that no longer match those filters here are our current telegraf filters pravega source measurement pravega pravega source measurement field pravega pravega source measurement scope stream field pravega pravega source measurement scope stream segment field pravega we can add a new filter to catch the case where there are items after pravega pravega source measurement pravega pravega source measurement field pravega pravega source measurement container field pravega pravega source measurement scope stream field pravega pravega source measurement scope stream segment field pravega even with this change we ll have a couple of problems these items do not have a source active segments bookkeeper leger count there are some container ids that have the format of fail fail fail problem location metrics suggestions for an improvement add a source for the active segments and bookkeeper leger count and move the fail suffix to the measurement name and not the container id
binary_label: 0

Unnamed: 0: 619,454
id: 19,526,320,844
type: IssuesEvent
created_at: 2021-12-30 08:32:17
repo: ita-social-projects/TeachUA
repo_url: https://api.github.com/repos/ita-social-projects/TeachUA
action: closed
title: Check Put method for User Component
labels: bug Backend Priority: High Task API
body: Oleksandr Hi everyone, I am currently working with the service for editing data in the users component for the admin panel and ran into a problem: when the put method is executed, the phone number field, which is non-editable, has the digits 38 added to it on the backend, and the field becomes invalid because it does not meet the requirement that it must be 10 digits; as a result it becomes impossible to edit this user, precisely because the phone number field is invalid. Could this be fixed somehow?
index: 1.0
text_combine: Check Put method for User Component - Oleksandr Hi everyone, I am currently working with the service for editing data in the users component for the admin panel and ran into a problem: when the put method is executed, the phone number field, which is non-editable, has the digits 38 added to it on the backend, and the field becomes invalid because it does not meet the requirement that it must be 10 digits; as a result it becomes impossible to edit this user, precisely because the phone number field is invalid. Could this be fixed somehow?
label: non_process
text: check put method for user component oleksandr hi everyone i am currently working with the service for editing data in the users component for the admin panel and ran into a problem that when the put method is executed the phone number field which is non editable has the digits added to it on the backend and the field becomes invalid because it does not meet the requirement that it must be digits and it becomes impossible to edit this user precisely because the phone number field is invalid could this be fixed somehow
binary_label: 0

Unnamed: 0: 12,058
id: 14,739,543,827
type: IssuesEvent
created_at: 2021-01-07 07:25:30
repo: kdjstudios/SABillingGitlab
repo_url: https://api.github.com/repos/kdjstudios/SABillingGitlab
action: closed
title: Create new SAB Code for VCC Switch
labels: anc-process anp-1 ant-enhancement
body: In GitLab by @kdjstudios on Sep 6, 2018, 10:02 **Submitted by:** "Cori Bartlett" <cori.bartlett@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-09-06-68787/conversation **Server:** Internal **Client/Site:** NA **Account:** NA **Issue:** Would you please create new VCC Activity code: VCC Total Agent Time 55099 = In_calls_time_agent_talk (7006) + out_calls_time_talk (7010) + patch_time_withagent (7014) + check_in_calls_time (7017) + agent_work_time (7030) Please make available to all AnswerNet VCC Sites. Thanks! Cori
index: 1.0
text_combine: Create new SAB Code for VCC Switch - In GitLab by @kdjstudios on Sep 6, 2018, 10:02 **Submitted by:** "Cori Bartlett" <cori.bartlett@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-09-06-68787/conversation **Server:** Internal **Client/Site:** NA **Account:** NA **Issue:** Would you please create new VCC Activity code: VCC Total Agent Time 55099 = In_calls_time_agent_talk (7006) + out_calls_time_talk (7010) + patch_time_withagent (7014) + check_in_calls_time (7017) + agent_work_time (7030) Please make available to all AnswerNet VCC Sites. Thanks! Cori
label: process
text: create new sab code for vcc switch in gitlab by kdjstudios on sep submitted by cori bartlett helpdesk server internal client site na account na issue would you please create new vcc activity code vcc total agent time in calls time agent talk out calls time talk patch time withagent check in calls time agent work time please make available to all answernet vcc sites thanks cori
binary_label: 1

Unnamed: 0: 5,409
id: 8,235,669,783
type: IssuesEvent
created_at: 2018-09-09 07:47:53
repo: pwittchen/neurosky-android-sdk
repo_url: https://api.github.com/repos/pwittchen/neurosky-android-sdk
action: closed
title: Release 0.0.2
labels: release process
body: **Release notes**: - updating value of the type of the mid gamma brain wave signal -> https://github.com/pwittchen/neurosky-android-sdk/commit/45b8e292faeea0b2291d1a95f9a89b8c769fbf8f
index: 1.0
text_combine: Release 0.0.2 - **Release notes**: - updating value of the type of the mid gamma brain wave signal -> https://github.com/pwittchen/neurosky-android-sdk/commit/45b8e292faeea0b2291d1a95f9a89b8c769fbf8f
label: process
text: release release notes updating value of the type of the mid gamma brain wave signal
binary_label: 1

Unnamed: 0: 43,958
id: 9,526,389,151
type: IssuesEvent
created_at: 2019-04-28 19:37:21
repo: thirtybees/thirtybees
repo_url: https://api.github.com/repos/thirtybees/thirtybees
action: opened
title: Get rid of the SemVer dependency
labels: Code Quality Enhancement
body: After SemVer (introduced with #149) turned out to be incompatible with 4 part version numbers (see #915), it's only remaining usage is comparing a module's thirty bees version requirement in `Module::checkCompliancy()`. Allowing slightly more fancy version ranges in one place doesn't justify a whole dependency, IMHO. Suggested replacement: replace `$this->tb_versions_compliancy` in the module's main file with `$this->tb_min_version`, a simple version string. No _tb_max_version_, because a module developer can't know when a module becomes incompatible before it actually happens. Looking at the usage of _tb_versions_compliancy_ in all the thirty bees modules, none of them defines an upper version number. The replacement needs a proper deprecation period, of course.
index: 1.0
text_combine: Get rid of the SemVer dependency - After SemVer (introduced with #149) turned out to be incompatible with 4 part version numbers (see #915), it's only remaining usage is comparing a module's thirty bees version requirement in `Module::checkCompliancy()`. Allowing slightly more fancy version ranges in one place doesn't justify a whole dependency, IMHO. Suggested replacement: replace `$this->tb_versions_compliancy` in the module's main file with `$this->tb_min_version`, a simple version string. No _tb_max_version_, because a module developer can't know when a module becomes incompatible before it actually happens. Looking at the usage of _tb_versions_compliancy_ in all the thirty bees modules, none of them defines an upper version number. The replacement needs a proper deprecation period, of course.
label: non_process
text: get rid of the semver dependency after semver introduced with turned out to be incompatible with part version numbers see it s only remaining usage is comparing a module s thirty bees version requirement in module checkcompliancy allowing slightly more fancy version ranges in one place doesn t justify a whole dependency imho suggested replacement replace this tb versions compliancy in the module s main file with this tb min version a simple version string no tb max version because a module developer can t know when a module becomes incompatible before it actually happens looking at the usage of tb versions compliancy in all the thirty bees modules none of them defines an upper version number the replacement needs a proper deprecation period of course
binary_label: 0
91,595
26,431,931,441
IssuesEvent
2023-01-14 23:00:18
Leafwing-Studios/Emergence
https://api.github.com/repos/Leafwing-Studios/Emergence
opened
Fix `Cargo.lock` creation in CI sometimes failing
bug build-system
Sometimes, the `cargo update` command in the CI workflows to create a `Cargo.lock` file for proper caching, seems to fail: <https://github.com/Leafwing-Studios/Emergence/actions/runs/3918987400/jobs/6699708119#step:4:10> ``` error: failed to get `bevy-trait-query` as a dependency of package `emergence_lib v0.1.0 (/home/runner/work/Emergence/Emergence/emergence_lib)` Caused by: failed to load source for dependency `bevy-trait-query` Caused by: Unable to update https://github.com/Leafwing-Studios/bevy-trait-query?rev=65533bf8680753a3f998056e1719b826652f3b69 Caused by: revspec '65533bf8680753a3f998056e1719b826652f3b69' not found; class=Reference (4); code=NotFound (-3) ``` It seems like it can't find the specified commit hash. If we search for the commit on GitHub [it finds it](https://github.com/Leafwing-Studios/bevy-trait-query/commit/65533bf8680753a3f998056e1719b826652f3b69), but gives the following warning: > This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository. We can try to change the commit rev to attempt to fix this.
1.0
Fix `Cargo.lock` creation in CI sometimes failing - Sometimes, the `cargo update` command in the CI workflows to create a `Cargo.lock` file for proper caching seems to fail: <https://github.com/Leafwing-Studios/Emergence/actions/runs/3918987400/jobs/6699708119#step:4:10> ``` error: failed to get `bevy-trait-query` as a dependency of package `emergence_lib v0.1.0 (/home/runner/work/Emergence/Emergence/emergence_lib)` Caused by: failed to load source for dependency `bevy-trait-query` Caused by: Unable to update https://github.com/Leafwing-Studios/bevy-trait-query?rev=65533bf8680753a3f998056e1719b826652f3b69 Caused by: revspec '65533bf8680753a3f998056e1719b826652f3b69' not found; class=Reference (4); code=NotFound (-3) ``` It seems like it can't find the specified commit hash. If we search for the commit on GitHub [it finds it](https://github.com/Leafwing-Studios/bevy-trait-query/commit/65533bf8680753a3f998056e1719b826652f3b69), but gives the following warning: > This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository. We can try to change the commit rev to attempt to fix this.
non_process
fix cargo lock creation in ci sometimes failing sometimes the cargo update command in the ci workflows to create a cargo lock file for proper caching seems to fail error failed to get bevy trait query as a dependency of package emergence lib home runner work emergence emergence emergence lib caused by failed to load source for dependency bevy trait query caused by unable to update caused by revspec not found class reference code notfound it seems like it can t find the specified commit hash if we search for the commit on github but gives the following warning this commit does not belong to any branch on this repository and may belong to a fork outside of the repository we can try to change the commit rev to attempt to fix this
0
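GitHub's warning in the record above — "this commit does not belong to any branch" — can be checked locally with `git branch -a --contains <sha>`. A hypothetical helper for interpreting that command's output (the function name and parsing rules are assumptions for illustration, not part of the repository's tooling):

```python
def branches_containing(git_output: str) -> list:
    # Parse the output of `git branch -a --contains <sha>` into branch names.
    # An empty result matches GitHub's "does not belong to any branch" warning,
    # which is why cargo cannot fetch the pinned rev from the default refs.
    branches = []
    for line in git_output.splitlines():
        # Drop the "* " marker git puts in front of the current branch.
        name = line.strip().lstrip("*").strip()
        if name:
            branches.append(name)
    return branches
```

An empty list from this helper is the failure mode described in the record: the pinned rev is only reachable via a fork or a detached ref, so repointing the `rev` to a commit on a real branch is the fix attempted.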
8,259
11,425,424,769
IssuesEvent
2020-02-03 19:48:32
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
regulation of poly(A)-specific ribonuclease activity
New term request PomBase RNA processes community curation regulation
id: GO:new1 name: regulation of poly(A)-specific ribonuclease activity namespace: biological_process def: "Any process that modulates the rate, frequency or extent of the catalysis of the exonucleolytic cleavage of poly(A) to 5'-AMP." [GOC:mah,PMID:29932902] intersection_of: GO:0008150 ! biological_process intersection_of: regulates GO:0004535 ! poly(A)-specific ribonuclease activity if you also need asserted links, they would be: is_a: GO:1901917 ! regulation of exoribonuclease activity relationship: regulates GO:0004535 ! poly(A)-specific ribonuclease activity and + and - subtypes, following same pattern id: GO:new2 name: negative regulation of poly(A)-specific ribonuclease activity id: GO:new3 name: positive regulation of poly(A)-specific ribonuclease activity to use to annotate effect of S. pombe Pabp RNA binding on Ccr4 and Caf1 nuclease activities in the context of removing poly(A) tails
1.0
regulation of poly(A)-specific ribonuclease activity - id: GO:new1 name: regulation of poly(A)-specific ribonuclease activity namespace: biological_process def: "Any process that modulates the rate, frequency or extent of the catalysis of the exonucleolytic cleavage of poly(A) to 5'-AMP." [GOC:mah,PMID:29932902] intersection_of: GO:0008150 ! biological_process intersection_of: regulates GO:0004535 ! poly(A)-specific ribonuclease activity if you also need asserted links, they would be: is_a: GO:1901917 ! regulation of exoribonuclease activity relationship: regulates GO:0004535 ! poly(A)-specific ribonuclease activity and + and - subtypes, following same pattern id: GO:new2 name: negative regulation of poly(A)-specific ribonuclease activity id: GO:new3 name: positive regulation of poly(A)-specific ribonuclease activity to use to annotate effect of S. pombe Pabp RNA binding on Ccr4 and Caf1 nuclease activities in the context of removing poly(A) tails
process
regulation of poly a specific ribonuclease activity id go name regulation of poly a specific ribonuclease activity namespace biological process def any process that modulates the rate frequency or extent of the catalysis of the exonucleolytic cleavage of poly a to amp intersection of go biological process intersection of regulates go poly a specific ribonuclease activity if you also need asserted links they would be is a go regulation of exoribonuclease activity relationship regulates go poly a specific ribonuclease activity and and subtypes following same pattern id go name negative regulation of poly a specific ribonuclease activity id go name positive regulation of poly a specific ribonuclease activity to use to annotate effect of s pombe pabp rna binding on and nuclease activities in the context of removing poly a tails
1
629,017
20,021,208,142
IssuesEvent
2022-02-01 16:33:06
dbbs-lab/bsb
https://api.github.com/repos/dbbs-lab/bsb
closed
Release 4.0
priority.urgent
Since some of the changes such as #137 only make sense in the context of v4.0 we should aim to have v4 released before the publication of the paper. This means that all the design issues will need to be solved and all breaking changes need to be implemented; All non-breaking changes can then be postponed to minor releases (4.1, 4.2, ...) and we can just try to pack as many features as possible into 4.0 but release whenever the publication comes around. All features of v4.0 should however be: * Completely tested * Completely documented * Sensibly commented ## Module rework progress ### Configuration ![Configuration](https://img.shields.io/badge/configuration-incredible-brightgreen) * Completely reworked * Amazing dynamic attribute based system * Multi-document JSON parser ### Core ![Core](https://img.shields.io/badge/core-cleanup-orange) * `Scaffold` object cleanup, most functions will need to be moved into one of the storage interfaces. ### CLI ![CLI](https://img.shields.io/badge/cli-barebones-orange) * [x] Reworked to allow the following: * [x] Make pluggable * [ ] Projects * [ ] Audits * [x] Include/Exclude/partials ### Placement ![Placement](https://img.shields.io/badge/placement-ok-yellow) * [x] Parallel reconstruction (#139) (parallelism established with #223) * [x] Assign morphologies during placement, not per connection type * [x] Placement independent of cell types * [x] Fix ParticlePlacement (overlap between cell types in same layer; strategy grouping) * [ ] Add tests for topology * [ ] Add docs for topology * [ ] Make more regions and partitions (atlas, mesh, ...) * [ ] Add tests for `after` and ordering of strats * [ ] Fix the multi-partition placement for ParticlePlacement & Satellite. 
* [x] Update PlacementSet properties to `load_*` functions ### Morphologies ![Morphologies](https://img.shields.io/badge/morphologies-good-yellowgreen) * We have the new structure but there's a lot of legacy code that needs to be audited (#125) * Compartments have been stripped from the codebase, so all of that code should now visibly error out when used. ### Connectivity ![Connectivity](https://img.shields.io/badge/connectivity-reworked-yellow) * [x] Adapted to v4 structure * [x] Parallel connectivity * ~~Labels & Filters API not completed yet~~ Filters replaced by `RegionOfInterest` * Labels will be dealt with by `PlacementSet` like #310 did for v3 * [ ] Testing and documentation * [ ] Rewrite the out of the box connectivity strategies (will overlap with #125) ### Plugins ![Plugins](https://img.shields.io/badge/plugins-good-yellowgreen) Seem to be in a good place since the slight rework and documentation they just underwent. * [ ] Make plugin overview command (https://github.com/dbbs-lab/bsb/issues/268) ### Simulation ![Simulation](https://img.shields.io/badge/simulation-okay-yellow) I don't expect a lot of changes here except for decoupling the `Simulation` from the new `SimulatorAdapter` (#97) (no longer a `SimulationAdapter`), and capturing our output in the Neo format (#93) (which will be able to be exported to SONATA for example), we'll have to see for specific issues on a per-adapter basis #### NEST ![NEST](https://img.shields.io/badge/NEST-okay-yellow) We need to upgrade to NEST 3 and maintain our extension modules. The output also needs to be captured a bit better. * Entire adapter still needs to be documented, and how we pass the config params into NEST and our relationship with their models should be perfectly clear. #### NEURON ![NEURON](https://img.shields.io/badge/NEURON-weak-orange) Lots of features are missing like weights, overwriting variables on the models before instantiating them, having cells target relays (only devices atm). 
* Entire adapter still needs to be documented, specifically all the targeting mechanisms (cell & section) and devices. * Solve the NEURON & MPI conflict once and for all (will require changes to Patch as well) * Straighten out the responsibilities of Patch/Arborize and scaffold (I think the BSB shouldn't have any responsibilities except for initialising the models and asking the model to `create_synapses/receivers/transmitters`) #### Arbor ![Arbor](https://img.shields.io/badge/Arbor-good-green) Pretty much the best adapter now. Still also needs a way to distribute properties on individual cells. ### Storage ![Storage](https://img.shields.io/badge/storage-excellent-green) We're on a very good track here, we just need to expand the `Storage` interfaces as we go along reworking the other modules and can pin down exactly what data needs to be saved. I think attempting to define all that beforehand will lead to changing specifications and premature optimization. ### Reporting ![NEST](https://img.shields.io/badge/reporting-weak-orange) Still a bit unclear. I'd like to work with the listener-pattern and register printing as the default listener.
I'd also like to be able to specify `modes` or even richer metadata with a message object so that each listener can decide how to handle for example a progress bar, a progress map or plain report messages (this can differ greatly between for example a log and a terminal) ### Postprocessing ![processing](https://img.shields.io/badge/processing-done-green) Nothing needs to happen here, perhaps we could recycle the reconstruction parallelization here as well ### Plotting ![plotting](https://img.shields.io/badge/plotting-horrid-darkred) Plotting has been given zero love, it has always been programmed "just-in-time" to produce figures so it's a Frankensteinian atrocity with at least 3 different paradigms (class based, decorated & functional elements) used and 0 coherence with kwargs thrown on everywhere and a terrible coupling between HDF5 result files and the plots. Its state probably won't improve until we adopt something like Neo and we might just use whatever visualisation tools that are available for Neo rather than maintaining our own plotting library.
1.0
Release 4.0 - Since some of the changes such as #137 only make sense in the context of v4.0 we should aim to have v4 released before the publication of the paper. This means that all the design issues will need to be solved and all breaking changes need to be implemented; All non-breaking changes can then be postponed to minor releases (4.1, 4.2, ...) and we can just try to pack as many features as possible into 4.0 but release whenever the publication comes around. All features of v4.0 should however be: * Completely tested * Completely documented * Sensibly commented ## Module rework progress ### Configuration ![Configuration](https://img.shields.io/badge/configuration-incredible-brightgreen) * Completely reworked * Amazing dynamic attribute based system * Multi-document JSON parser ### Core ![Core](https://img.shields.io/badge/core-cleanup-orange) * `Scaffold` object cleanup, most functions will need to be moved into one of the storage interfaces. ### CLI ![CLI](https://img.shields.io/badge/cli-barebones-orange) * [x] Reworked to allow the following: * [x] Make pluggable * [ ] Projects * [ ] Audits * [x] Include/Exclude/partials ### Placement ![Placement](https://img.shields.io/badge/placement-ok-yellow) * [x] Parallel reconstruction (#139) (parallelism established with #223) * [x] Assign morphologies during placement, not per connection type * [x] Placement independent of cell types * [x] Fix ParticlePlacement (overlap between cell types in same layer; strategy grouping) * [ ] Add tests for topology * [ ] Add docs for topology * [ ] Make more regions and partitions (atlas, mesh, ...) * [ ] Add tests for `after` and ordering of strats * [ ] Fix the multi-partition placement for ParticlePlacement & Satellite. 
* [x] Update PlacementSet properties to `load_*` functions ### Morphologies ![Morphologies](https://img.shields.io/badge/morphologies-good-yellowgreen) * We have the new structure but there's a lot of legacy code that needs to be audited (#125) * Compartments have been stripped from the codebase, so all of that code should now visibly error out when used. ### Connectivity ![Connectivity](https://img.shields.io/badge/connectivity-reworked-yellow) * [x] Adapted to v4 structure * [x] Parallel connectivity * ~~Labels & Filters API not completed yet~~ Filters replaced by `RegionOfInterest` * Labels will be dealt with by `PlacementSet` like #310 did for v3 * [ ] Testing and documentation * [ ] Rewrite the out of the box connectivity strategies (will overlap with #125) ### Plugins ![Plugins](https://img.shields.io/badge/plugins-good-yellowgreen) Seem to be in a good place since the slight rework and documentation they just underwent. * [ ] Make plugin overview command (https://github.com/dbbs-lab/bsb/issues/268) ### Simulation ![Simulation](https://img.shields.io/badge/simulation-okay-yellow) I don't expect a lot of changes here except for decoupling the `Simulation` from the new `SimulatorAdapter` (#97) (no longer a `SimulationAdapter`), and capturing our output in the Neo format (#93) (which will be able to be exported to SONATA for example), we'll have to see for specific issues on a per-adapter basis #### NEST ![NEST](https://img.shields.io/badge/NEST-okay-yellow) We need to upgrade to NEST 3 and maintain our extension modules. The output also needs to be captured a bit better. * Entire adapter still needs to be documented, and how we pass the config params into NEST and our relationship with their models should be perfectly clear. #### NEURON ![NEURON](https://img.shields.io/badge/NEURON-weak-orange) Lots of features are missing like weights, overwriting variables on the models before instantiating them, having cells target relays (only devices atm). 
* Entire adapter still needs to be documented, specifically all the targeting mechanisms (cell & section) and devices. * Solve the NEURON & MPI conflict once and for all (will require changes to Patch as well) * Straighten out the responsibilities of Patch/Arborize and scaffold (I think the BSB shouldn't have any responsibilities except for initialising the models and asking the model to `create_synapses/receivers/transmitters`) #### Arbor ![Arbor](https://img.shields.io/badge/Arbor-good-green) Pretty much the best adapter now. Still also needs a way to distribute properties on individual cells. ### Storage ![Storage](https://img.shields.io/badge/storage-excellent-green) We're on a very good track here, we just need to expand the `Storage` interfaces as we go along reworking the other modules and can pin down exactly what data needs to be saved. I think attempting to define all that beforehand will lead to changing specifications and premature optimization. ### Reporting ![NEST](https://img.shields.io/badge/reporting-weak-orange) Still a bit unclear. I'd like to work with the listener-pattern and register printing as the default listener.
I'd also like to be able to specify `modes` or even richer metadata with a message object so that each listener can decide how to handle for example a progress bar, a progress map or plain report messages (this can differ greatly between for example a log and a terminal) ### Postprocessing ![processing](https://img.shields.io/badge/processing-done-green) Nothing needs to happen here, perhaps we could recycle the reconstruction parallelization here as well ### Plotting ![plotting](https://img.shields.io/badge/plotting-horrid-darkred) Plotting has been given zero love, it has always been programmed "just-in-time" to produce figures so it's a Frankensteinian atrocity with at least 3 different paradigms (class based, decorated & functional elements) used and 0 coherence with kwargs thrown on everywhere and a terrible coupling between HDF5 result files and the plots. Its state probably won't improve until we adopt something like Neo and we might just use whatever visualisation tools that are available for Neo rather than maintaining our own plotting library.
non_process
release since some of the changes such as only make sense in the context of we should aim to have released before the publication of the paper this means that all the design issues will need to be solved and all breaking changes need to be implemented all non breaking changes can then be postponed to minor releases and we can just try to pack as many features as possible into but release whenever the publication comes around all features of should however be completely tested completely documented sensibly commented module rework progress configuration completely reworked amazing dynamic attribute based system multi document json parser core scaffold object cleanup most functions will need to be moved into one of the storage interfaces cli reworked to allow the following make pluggable projects audits include exclude partials placement parallel reconstruction parallelism established with assign morphologies during placement not per connection type placement independent of cell types fix particleplacement overlap between cell types in same layer strategy grouping add tests for topology add docs for topology make more regions and partitions atlas mesh add tests for after and ordering of strats fix the multi partition placement for particleplacement satellite update placementset properties to load functions morphologies we have the new structure but there s a lot of legacy code that needs to be audited compartments have been stripped from the codebase so all of that code should now visibly error out when used connectivity adapted to structure parallel connectivity labels filters api not completed yet filters replaced by regionofinterest labels will be dealt with by placementset like did for testing and documentation rewrite the out of the box connectivity strategies will overlap with plugins seem to be in a good place since the slight rework and documentation they just underwent make plugin overview command simulation i don t expect a lot of changes here except for 
decoupling the simulation from the new simulatoradapter no longer a simulationadapter and capturing our output in the neo format which will be able to be exported to sonata for example we ll have to see for specific issues on a per adapter basis nest we need to upgrade to nest and maintain our extension modules the output also needs to be captured a bit better entire adapter still needs to be documented and how we pass the config params into nest and our relationship with their models should be perfectly clear neuron lots of features are missing like weights overwriting variables on the models before instantiating them having cells target relays only devices atm entire adapter still needs to be documented specifically all the targetting mechanisms cell section and devices solve the neuron mpi conflict once and for all will require changes to patch aswell straighten out the responsibilities of patch arborize and scaffold i think the bsb shouldn t have any responsibilities except for initialising the models and asking the model to create synapses receivers transmitters arbor pretty much the best adapter now still also needs a way to distribute properties on individual cells storage we re on a very good track here we just need to expand the storage interfaces as we go along reworking the other modules and can pin down exactly what data needs to be saved i think attempting to define all that beforehand will lead to changing specifications and premature optimization reporting still a bit unclear i d like to work with the listener pattern and register printing as the default listener i d also like to be able to specify modes or even richer metadata with a message object so that each listener can decide how to handle for example a progress bar a progress map or plain report messages this can differ greatly between for example a log and a terminal postprocessing nothing needs to happen here perhaps we could recycle the reconstruction parallelization here aswell plotting 
plotting has been given zero love it has always been programmed just in time to produce figures so it s a frankensteinian atrocity with at least different paradigms class based decorated functional elements used and coherence with kwargs thrown on everywhere and a terrible coupling between result files and the plots it s state probably won t improve until we adopt something like neo and we might just use whatever visualisation tools that are available for neo rather than maintaining our own plotting library
0
246,130
20,824,486,788
IssuesEvent
2022-03-18 19:00:22
ChainSafe/ui-monorepo
https://api.github.com/repos/ChainSafe/ui-monorepo
opened
Add ui test coverage for file copying to share folder
Testing
As described, automate the below scenarios: - User can share a file to a new shared folder and delete original file - User can share a file to a new shared folder and keep original file - User can share a file to existing shared folder and delete original file - User can share a file to existing shared folder and keep original file
1.0
Add ui test coverage for file copying to share folder - As described, automate the below scenarios: - User can share a file to a new shared folder and delete original file - User can share a file to a new shared folder and keep original file - User can share a file to existing shared folder and delete original file - User can share a file to existing shared folder and keep original file
non_process
add ui test coverage for file copying to share folder as described automate the below scenarios user can share a file to a new shared folder and delete original file user can share a file to a new shared folder and keep original file user can share a file to existing shared folder and delete original file user can share a file to existing shared folder and keep original file
0
8,998
12,109,124,567
IssuesEvent
2020-04-21 08:14:40
AmboVent-1690-108/AmboVent
https://api.github.com/repos/AmboVent-1690-108/AmboVent
closed
Add a pull request template in .github/pull_request_template.md
process-improvement
Add a pull request template in .github/pull_request_template.md. See here: https://help.github.com/en/github/building-a-strong-community/creating-a-pull-request-template-for-your-repository Tell the user to run `./run_clang-format.sh` before submitting a PR, and have them check the box: [x] I have run the auto-formatter: `./run_clang-format.sh` Note that markdown comments in the template can be added with: <!---comment---> <!--- this is a comment --->
1.0
Add a pull request template in .github/pull_request_template.md - Add a pull request template in .github/pull_request_template.md. See here: https://help.github.com/en/github/building-a-strong-community/creating-a-pull-request-template-for-your-repository Tell the user to run `./run_clang-format.sh` before submitting a PR, and have them check the box: [x] I have run the auto-formatter: `./run_clang-format.sh` Note that markdown comments in the template can be added with: <!---comment---> <!--- this is a comment --->
process
add a pull request template in github pull request template md add a pull request template in github pull request template md see here tell the user to run run clang format sh before submitting a pr and have them check the box i have run the auto formatter run clang format sh note that markdown comments in the template can be added with
1
2,638
5,413,716,308
IssuesEvent
2017-03-01 17:20:30
jlm2017/jlm-video-subtitles
https://api.github.com/repos/jlm2017/jlm-video-subtitles
closed
Pas vu à la télé (4) Panama papers et Société Générale
Language: French Process: [6] Approved
# Video title Pas vu à la télé (4) Panama papers et Société Générale # URL https://www.youtube.com/watch?v=5exxOVSOBps # Youtube subtitles language French # Duration 56:36 # Subtitles URL https://www.youtube.com/timedtext_editor?action_mde_edit_form=1&ref=player&lang=fr&v=5exxOVSOBps&tab=captions&bl=vmp
1.0
Pas vu à la télé (4) Panama papers et Société Générale - # Video title Pas vu à la télé (4) Panama papers et Société Générale # URL https://www.youtube.com/watch?v=5exxOVSOBps # Youtube subtitles language French # Duration 56:36 # Subtitles URL https://www.youtube.com/timedtext_editor?action_mde_edit_form=1&ref=player&lang=fr&v=5exxOVSOBps&tab=captions&bl=vmp
process
pas vu à la télé panama papers et société générale video title pas vu à la télé panama papers et société générale url youtube subtitles language french duration subtitles url
1
8,237
11,417,546,182
IssuesEvent
2020-02-03 00:03:09
parcel-bundler/parcel
https://api.github.com/repos/parcel-bundler/parcel
closed
Parcel sometimes fails to start due to LESS sourcemaps error
:bug: Bug CSS Preprocessing Stale
# 🐛 bug report Parcel **sometimes** fails to start due to [a LESS sourcemaps error](https://i.imgur.com/bjnGMr1.png) (no files changed between those `npm start` instances). It seems to happen randomly, sometimes I have to re-run the script 3 times before it happens. Editing any .LESS file (so that parcel rebuilds the changes) or restarting Parcel seems to solve it. My hunch is that it is due to the plugins that the LESS files (bootstrap 4 LESS adaptation) use, in this case. ## 🎛 Configuration (.babelrc, package.json, cli command) A demo project to reproduce the issue plus more configuration info [can be found here](https://github.com/mirkea/parcel-less-bug) ## 🤔 Expected Behavior Parcel should finish the build all the time, or fail all the time. ## 😯 Current Behavior Parcel fails to build, sometimes, > /mnt/e/Work/CO/Parcel LESS bug demo/src/components/core/Grid/Grid.less: Cannot read property 'substring' of undefined >at SourceMapOutput.add (/mnt/e/Work/CO/Parcel LESS bug demo/node_modules/less/lib/less/source-map-output.js:72:39) ## 💁 Possible Solution Running with no sourcemaps works (as a workaround) ## 🔦 Context I had to remove sourcemaps entirely (not just the css ones which only affect me but the .JS ones as well which the other devs working on the project might need) ## 💻 Code Sample [Demo project](https://github.com/mirkea/parcel-less-bug) ## 🌍 Your Environment | Software | Version(s) | | ---------------- | ---------- | | Parcel | 1.12.3 | Node | v8.10.0 | npm/Yarn | 6.9.0 | Operating System | Ubuntu (linux subsystem) on Windows 10
1.0
Parcel sometimes fails to start due to LESS sourcemaps error - # 🐛 bug report Parcel **sometimes** fails to start due to [a LESS sourcemaps error](https://i.imgur.com/bjnGMr1.png) (no files changed between those `npm start` instances). It seems to happen randomly, sometimes I have to re-run the script 3 times before it happens. Editing any .LESS file (so that parcel rebuilds the changes) or restarting Parcel seems to solve it. My hunch is that it is due to the plugins that the LESS files (bootstrap 4 LESS adaptation) use, in this case. ## 🎛 Configuration (.babelrc, package.json, cli command) A demo project to reproduce the issue plus more configuration info [can be found here](https://github.com/mirkea/parcel-less-bug) ## 🤔 Expected Behavior Parcel should finish the build all the time, or fail all the time. ## 😯 Current Behavior Parcel fails to build, sometimes, > /mnt/e/Work/CO/Parcel LESS bug demo/src/components/core/Grid/Grid.less: Cannot read property 'substring' of undefined >at SourceMapOutput.add (/mnt/e/Work/CO/Parcel LESS bug demo/node_modules/less/lib/less/source-map-output.js:72:39) ## 💁 Possible Solution Running with no sourcemaps works (as a workaround) ## 🔦 Context I had to remove sourcemaps entirely (not just the css ones which only affect me but the .JS ones as well which the other devs working on the project might need) ## 💻 Code Sample [Demo project](https://github.com/mirkea/parcel-less-bug) ## 🌍 Your Environment | Software | Version(s) | | ---------------- | ---------- | | Parcel | 1.12.3 | Node | v8.10.0 | npm/Yarn | 6.9.0 | Operating System | Ubuntu (linux subsystem) on Windows 10
process
parcel sometimes fails to start due to less sourcemaps error 🐛 bug report parcel sometimes fails to start due to no files changed between those npm start instances it seems to happen randomly sometimes i have to re run the script times before it happens editing any less file so that parcel rebuilds the changes or restarting parcel seems to solve it my hunch is that is due to the plugins that the less files boostrap less adaptation use in this case 🎛 configuration babelrc package json cli command a demo project to reproduce the issue plus more configuration info 🤔 expected behavior parcel should finish the build all the time or fail all the time 😯 current behavior parcel fails to build sometimes mnt e work co parcel less bug demo src components core grid grid less cannot read property substring of undefined at sourcemapoutput add mnt e work co parcel less bug demo node modules less lib less source map output js 💁 possible solution running with no sourcemaps works as a workaround 🔦 context i had to remove sourcemaps entirely not just the css ones with only affect me but the js ones as well which the other devs working on the project might need 💻 code sample 🌍 your environment software version s parcel node npm yarn operating system ubuntu linux subsystem on windows
1
50,429
26,638,104,481
IssuesEvent
2023-01-25 00:24:22
aesim-tech/simba-project
https://api.github.com/repos/aesim-tech/simba-project
closed
NDETE V2 & Multi time Steps
enhancement epic performance
**Motivations** *Next Discontinuity Event Time Estimator V2* Today, we always surround discontinuities by two points separated by the "min time step". This is to guarantee the accuracy of the discontinuity event. However, the discontinuity event time of some events (like the PWM) is known accurately. If we discriminate the type of discontinuity event (accurate vs estimate), we can **improve the performance** (fewer simulated points) **and the accuracy** (discontinuity events exactly when they are supposed to happen). *Multi time step* Today, the control solver always iterates with the power solver. This is important because we have a variable time step and some power devices (like the controlled source) need to be synchronized with the control solver. However, in many cases, the user wants to build a control loop that behaves like a real digital controller: fixed sampled time & no iteration. Also, we can't model digital devices like a delay or a THD calculation block, and this change will fix that. **To-Do** *Next Discontinuity Event Time Estimator V2* - [x] Modify the discontinuous devices to discriminate the different types of discontinuity events (accurate / estimate) - [x] Update the NDETE algorithm to calculate a time point exactly at the accurate discontinuity event time - [x] Fix the issues related to the numerical precision and double number comparison - [x] Tests *Multi time step* - [X] New sampling time setting on all control models - [X] When the sampling time is set to "auto", the model behaves as it does today: Calculated at each time step and iterates with the power solver. The default value is "auto". - [X] When the sampling time is set to a value, the model will be calculated at each sampling time and will not iterate with the power solver. It will be calculated at the end of a time step. - [X] All the devices with an "auto" sampling time that are connected to the output of a model with a defined sampling time will share the same sampling time.
- [x] All models with the same sampling time step will be attached to the same control solver. - [x] The predictive (master) time step solver calculates a point at each sample of the control solvers. - [x] Update the scopes to manage data with different time steps - [x] Check if this is possible to parallelize the power and control solvers - [x] Tests
True
NDETE V2 & Multi time Steps - **Motivations** *Next Discontinuity Event Time Estimator V2* Today, we always surround discontinuities by two points separated by the "min time step". This is to guarantee the accuracy of the discontinuity event. However the discontinuity event time of some events (like the PWM) is known accurately. If we discriminate the type of discontinuity event (accurate vs estimate), we can **improve the performance** (less simulated points) **and the accuracy** (discontinuity events exactly when they are supposed to happen). *Multi time step* Today, the control solver always iterates with the power solver. This is important because we have a variable time step and some power devices (like the controlled source) needs to be synchronized with the control solver. However, in many cases, the user wants to build a control loop that behaves like a real digital controller: fixed sampled time & no iteration. Also, we can't model digital devices like a delay, or a THD calculation block and this will fix this. **To-Do** *Next Discontinuity Event Time Estimator V2* - [x] Modify the discontinuous devices to discriminate the different types of discontinuity events (accurate / estimate) - [x] Update the NDETE algorithm to calculate a time point exactly at the accurate discontinuity event time - [x] Fix the issues related to the numerical precision and double number comparison - [x] Tests *Multi time step* - [X] New sampling time setting on all control models - [X] When the sampling time is set to "auto", the model behaves as it does today: Calculated at each time step and iterates with the power solver. The default value is "auto". - [X] When the sampling time is set to a value, the model will be calculated at each sampling time and will not iterate with the power solver. It will be calculated at the end of a time step. 
- [X] All the devices, with an "auto" sampling time that are connected to the output of a model with a defined sampling time will share the same sampling time. - [x] All models with the same sampling time step will be attached to the same control solver. - [x] The predictive (master) time step solver calculates a point at each sample of the control solvers. - [x] Update the scopes to manage data with different time steps - [x] Check if this is possible to parallelize the power and control solvers - [x] Tests
non_process
ndete multi time steps motivations next discontinuity event time estimator today we always surround discontinuities by two points separated by the min time step this is to guarantee the accuracy of the discontinuity event however the discontinuity event time of some events like the pwm is known accurately if we discriminate the type of discontinuity event accurate vs estimate we can improve the performance less simulated points and the accuracy discontinuity events exactly when they are supposed to happen multi time step today the control solver always iterates with the power solver this is important because we have a variable time step and some power devices like the controlled source needs to be synchronized with the control solver however in many cases the user wants to build a control loop that behaves like a real digital controller fixed sampled time no iteration also we can t model digital devices like a delay or a thd calculation block and this will fix this to do next discontinuity event time estimator modify the discontinuous devices to discriminate the different types of discontinuity events accurate estimate update the ndete algorithm to calculate a time point exactly at the accurate discontinuity event time fix the issues related to the numerical precision and double number comparison tests multi time step new sampling time setting on all control models when the sampling time is set to auto the model behaves as it does today calculated at each time step and iterates with the power solver the default value is auto when the sampling time is set to a value the model will be calculated at each sampling time and will not iterate with the power solver it will be calculated at the end of a time step all the devices with an auto sampling time that are connected to the output of a model with a defined sampling time will share the same sampling time all models with the same sampling time step will be attached to the same control solver the predictive master time 
step solver calculates a point at each sample of the control solvers update the scopes to manage data with different time steps check if this is possible to parallelize the power and control solvers tests
0
20,844
27,615,193,464
IssuesEvent
2023-03-09 18:46:53
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
Review representation of alternative mRNA splicing, via spliceosome
RNA processes mini-project
We need to review representation of alternative mRNA splicing, via spliceosome to clarify its relationship to mRNA splicing, via spliceosome and regulation of mRNA splicing, via spliceosome. As part of this, we will need to review the associated sub-processes and molecular functions to ensure that we don't have redundant concepts here. For reference: ![image](https://user-images.githubusercontent.com/1730534/37786000-f2c1ecca-2dd1-11e8-8381-6d2a4b156103.png) ![image](https://user-images.githubusercontent.com/1730534/37785920-ba47f038-2dd1-11e8-8c9c-a0586d976fc7.png) ![image](https://user-images.githubusercontent.com/1730534/37785851-91dc2c54-2dd1-11e8-8923-a3c9ac9e8f3a.png)
1.0
Review representation of alternative mRNA splicing, via spliceosome - We need to review representation of alternative mRNA splicing, via spliceosome to clarify its relationship to mRNA splicing, via spliceosome and regulation of mRNA splicing, via spliceosome. As part of this, we will need to review the associated sub-processes and molecular functions to ensure that we don't have redundant concepts here. For reference: ![image](https://user-images.githubusercontent.com/1730534/37786000-f2c1ecca-2dd1-11e8-8381-6d2a4b156103.png) ![image](https://user-images.githubusercontent.com/1730534/37785920-ba47f038-2dd1-11e8-8c9c-a0586d976fc7.png) ![image](https://user-images.githubusercontent.com/1730534/37785851-91dc2c54-2dd1-11e8-8923-a3c9ac9e8f3a.png)
process
review representation of alternative mrna splicing via spliceosome we need to review representation of alternative mrna splicing via spliceosome to clarify its relationship to mrna splicing via spliceosome and regulation of mrna splicing via spliceosome as part of this we will need to review the associated sub processes and molecular functions to ensure that we don t have redundant concepts here for reference
1
12,900
9,810,836,655
IssuesEvent
2019-06-12 21:31:48
Azure/azure-sdk-for-node
https://api.github.com/repos/Azure/azure-sdk-for-node
closed
SDK for CRUD operations on recovery services vault backup policies
Recovery Services Backup Service Attention customer-reported
Hi, I can see there are list operations for listing backup policies: https://docs.microsoft.com/en-gb/javascript/api/azure-arm-recoveryservicesbackup/backuppolicies?view=azure-node-2.2.0 However I can't find any way to get, create, update or delete a backup policy - do these exist? In the past you have suggested calling APIs directly with `sendRequest`, so I'm wondering if that's an option here? Also, are there SDK functions to attach backup policies to a VM? Thanks, Mike.
2.0
SDK for CRUD operations on recovery services vault backup policies - Hi, I can see there are list operations for listing backup policies: https://docs.microsoft.com/en-gb/javascript/api/azure-arm-recoveryservicesbackup/backuppolicies?view=azure-node-2.2.0 However I can't find any way to get, create, update or delete a backup policy - do these exist? In the past you have suggested calling APIs directly with `sendRequest`, so I'm wondering if that's an option here? Also, are there SDK functions to attach backup policies to a VM? Thanks, Mike.
non_process
sdk for crud operations on recovery services vault backup policies hi i can see there are list operations for listing backup policies however i can t find any way to get create update or delete a backup policy do these exist in the past you have suggested calling apis directly with sendrequest so i m wondering if that s an option here also are there sdk functions to attach backup policies to a vm thanks mike
0
70,290
30,607,846,543
IssuesEvent
2023-07-23 08:25:28
apache/incubator-opendal
https://api.github.com/repos/apache/incubator-opendal
closed
feat: Add write without content length support for obs
good first issue help wanted services/obs
OBS supports multipart upload too, it's better for us to implement write withou content length for them.
1.0
feat: Add write without content length support for obs - OBS supports multipart upload too, it's better for us to implement write withou content length for them.
non_process
feat add write without content length support for obs obs supports multipart upload too it s better for us to implement write withou content length for them
0
160,027
13,778,980,159
IssuesEvent
2020-10-08 13:12:28
psyplot/psy-view
https://api.github.com/repos/psyplot/psy-view
closed
Make GUI documentation
documentation
Document every feature of the psy-view GUI with screenshots and explanation
1.0
Make GUI documentation - Document every feature of the psy-view GUI with screenshots and explanation
non_process
make gui documentation document every feature of the psy view gui with screenshots and explanation
0
850
3,315,860,473
IssuesEvent
2015-11-06 14:28:03
beesmart-it/trend-hrm
https://api.github.com/repos/beesmart-it/trend-hrm
closed
Generate Selection Processes Applicants by Step report
enhancement reports requirement selection process
Generate Selection Processes Applicants by Step report
1.0
Generate Selection Processes Applicants by Step report - Generate Selection Processes Applicants by Step report
process
generate selection processes applicants by step report generate selection processes applicants by step report
1
20,668
27,334,852,688
IssuesEvent
2023-02-26 03:51:03
cse442-at-ub/project_s23-team-infinity
https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity
closed
Create a Hello World page by using React.js and PHP
Processing Task Sprint 1
**Task Test** *Test 1* 1) run 'npm start' command in the directory 2) Verify that a webpage is running on localhost:3000 by default 3) Verify that there is a field to enter your name and a submit button 4) Type your name in the field 5) Verify that it holds the functionality of typing in the field 6) Hit submit button 7) Verify that your name shows on the screen properly with the welcome message 8) Type another name and verity that the message shows correctly with the name you type in the second time.
1.0
Create a Hello World page by using React.js and PHP - **Task Test** *Test 1* 1) run 'npm start' command in the directory 2) Verify that a webpage is running on localhost:3000 by default 3) Verify that there is a field to enter your name and a submit button 4) Type your name in the field 5) Verify that it holds the functionality of typing in the field 6) Hit submit button 7) Verify that your name shows on the screen properly with the welcome message 8) Type another name and verity that the message shows correctly with the name you type in the second time.
process
create a hello world page by using react js and php task test test run npm start command in the directory verify that a webpage is running on localhost by default verify that there is a field to enter your name and a submit button type your name in the field verify that it holds the functionality of typing in the field hit submit button verify that your name shows on the screen properly with the welcome message type another name and verity that the message shows correctly with the name you type in the second time
1
5,738
8,580,602,618
IssuesEvent
2018-11-13 12:29:33
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Process does not die when child process to which stdin is piped dies
child_process
<!-- Thank you for reporting a possible bug in Node.js. Please fill in as much of the template below as you can. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify the affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you can. --> * **Version**: v10.3.0 * **Platform**: Linux plutobox 4.18.16-arch1-1-ARCH #1 SMP PREEMPT Sat Oct 20 22:06:45 UTC 2018 x86_64 GNU/Linux * **Subsystem**: `child_process`/`stream`/`process` <!-- Please provide more details below this comment. --> Basically a duplicate of: https://github.com/nodejs/node/issues/2276, which was either closed without having been resolved or has regressed. Stick the following in a file: ``` #!/usr/bin/env node const cp = require("child_process"); const { stdin, stdout, stderr } = process; const child = cp.execFile("ps", [], { stdio: ["pipe", "pipe", "pipe"] }); process.stdin.pipe(child.stdin); // this is the problematic one child.stdout.pipe(process.stdout); child.stderr.pipe(process.stderr); child.on("exit", (code, signal) => console.log({ msg: "Exited", code, signal }), ); ``` `chmod +x` and run it in some shell, and you'll see it'll output the stuff from `ps`, then just hang. Assuming your shell's stdin is being forwarded to the node process, hitting enter will unstick it. If you comment out the `process.stdin...` line (since `ps` doesn't actually need any stdin), you'll see it end as expected. The workaround I'm using for this is to add the event handler `child.on("exit", (code, signal) => code !== null ? process.exit(code) : signal !== null ? process.kill(process.pid, signal) : fail("Impossible situation, process exited but neither exited nor was killed"))`. 
I can't really use this in the general case, because I may need to do more stuff while the child process is running and exiting, and can't always assume the parent needs to die precisely when this one specific child process dies. At the same time the parent should be free to exit when all children are dead, and shouldn't hang like this.
1.0
Process does not die when child process to which stdin is piped dies - <!-- Thank you for reporting a possible bug in Node.js. Please fill in as much of the template below as you can. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify the affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you can. --> * **Version**: v10.3.0 * **Platform**: Linux plutobox 4.18.16-arch1-1-ARCH #1 SMP PREEMPT Sat Oct 20 22:06:45 UTC 2018 x86_64 GNU/Linux * **Subsystem**: `child_process`/`stream`/`process` <!-- Please provide more details below this comment. --> Basically a duplicate of: https://github.com/nodejs/node/issues/2276, which was either closed without having been resolved or has regressed. Stick the following in a file: ``` #!/usr/bin/env node const cp = require("child_process"); const { stdin, stdout, stderr } = process; const child = cp.execFile("ps", [], { stdio: ["pipe", "pipe", "pipe"] }); process.stdin.pipe(child.stdin); // this is the problematic one child.stdout.pipe(process.stdout); child.stderr.pipe(process.stderr); child.on("exit", (code, signal) => console.log({ msg: "Exited", code, signal }), ); ``` `chmod +x` and run it in some shell, and you'll see it'll output the stuff from `ps`, then just hang. Assuming your shell's stdin is being forwarded to the node process, hitting enter will unstick it. If you comment out the `process.stdin...` line (since `ps` doesn't actually need any stdin), you'll see it end as expected. The workaround I'm using for this is to add the event handler `child.on("exit", (code, signal) => code !== null ? process.exit(code) : signal !== null ? process.kill(process.pid, signal) : fail("Impossible situation, process exited but neither exited nor was killed"))`. 
I can't really use this in the general case, because I may need to do more stuff while the child process is running and exiting, and can't always assume the parent needs to die precisely when this one specific child process dies. At the same time the parent should be free to exit when all children are dead, and shouldn't hang like this.
process
process does not die when child process to which stdin is piped dies thank you for reporting a possible bug in node js please fill in as much of the template below as you can version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify the affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you can version platform linux plutobox arch smp preempt sat oct utc gnu linux subsystem child process stream process basically a duplicate of which was either closed without having been resolved or has regressed stick the following in a file usr bin env node const cp require child process const stdin stdout stderr process const child cp execfile ps stdio process stdin pipe child stdin this is the problematic one child stdout pipe process stdout child stderr pipe process stderr child on exit code signal console log msg exited code signal chmod x and run it in some shell and you ll see it ll output the stuff from ps then just hang assuming your shell s stdin is being forwarded to the node process hitting enter will unstick it if you comment out the process stdin line since ps doesn t actually need any stdin you ll see it end as expected the workaround i m using for this is to add the event handler child on exit code signal code null process exit code signal null process kill process pid signal fail impossible situation process exited but neither exited nor was killed i can t really use this in the general case because i may need to do more stuff while the child process is running and exiting and can t always assume the parent needs to die precisely when this one specific child process dies at the same time the parent should be free to exit when all children are dead and shouldn t hang like this
1
133,309
18,290,501,898
IssuesEvent
2021-10-05 14:47:36
mittell/vue-tailwind
https://api.github.com/repos/mittell/vue-tailwind
closed
CVE-2021-3757 (High) detected in immer-8.0.1.tgz - autoclosed
security vulnerability
## CVE-2021-3757 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>immer-8.0.1.tgz</b></p></summary> <p>Create your next immutable state by mutating the current one</p> <p>Library home page: <a href="https://registry.npmjs.org/immer/-/immer-8.0.1.tgz">https://registry.npmjs.org/immer/-/immer-8.0.1.tgz</a></p> <p> Dependency Hierarchy: - addon-essentials-6.3.0-alpha.9.tgz (Root Library) - addon-docs-6.3.0-alpha.9.tgz - builder-webpack4-6.3.0-alpha.9.tgz - react-dev-utils-11.0.4.tgz - :x: **immer-8.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/mittell/vue-tailwind/commit/bac43b00e10f25251d75661389cb8214248478b7">bac43b00e10f25251d75661389cb8214248478b7</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> immer is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution') <p>Publish Date: 2021-09-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3757>CVE-2021-3757</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/23d38099-71cd-42ed-a77a-71e68094adfa/">https://huntr.dev/bounties/23d38099-71cd-42ed-a77a-71e68094adfa/</a></p> <p>Release Date: 2021-09-02</p> <p>Fix Resolution: immer - 9.0.6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3757 (High) detected in immer-8.0.1.tgz - autoclosed - ## CVE-2021-3757 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>immer-8.0.1.tgz</b></p></summary> <p>Create your next immutable state by mutating the current one</p> <p>Library home page: <a href="https://registry.npmjs.org/immer/-/immer-8.0.1.tgz">https://registry.npmjs.org/immer/-/immer-8.0.1.tgz</a></p> <p> Dependency Hierarchy: - addon-essentials-6.3.0-alpha.9.tgz (Root Library) - addon-docs-6.3.0-alpha.9.tgz - builder-webpack4-6.3.0-alpha.9.tgz - react-dev-utils-11.0.4.tgz - :x: **immer-8.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/mittell/vue-tailwind/commit/bac43b00e10f25251d75661389cb8214248478b7">bac43b00e10f25251d75661389cb8214248478b7</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> immer is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution') <p>Publish Date: 2021-09-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3757>CVE-2021-3757</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/23d38099-71cd-42ed-a77a-71e68094adfa/">https://huntr.dev/bounties/23d38099-71cd-42ed-a77a-71e68094adfa/</a></p> <p>Release Date: 2021-09-02</p> <p>Fix Resolution: immer - 9.0.6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in immer tgz autoclosed cve high severity vulnerability vulnerable library immer tgz create your next immutable state by mutating the current one library home page a href dependency hierarchy addon essentials alpha tgz root library addon docs alpha tgz builder alpha tgz react dev utils tgz x immer tgz vulnerable library found in head commit a href found in base branch master vulnerability details immer is vulnerable to improperly controlled modification of object prototype attributes prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution immer step up your open source security game with whitesource
0
25,757
2,683,968,401
IssuesEvent
2015-03-28 14:36:37
ConEmu/old-issues
https://api.github.com/repos/ConEmu/old-issues
closed
ConEmu.exe 2010.11.08 Постоянное сообщение о не возможности сохранить данные в ConEmu.xml
1 star bug imported Priority-Medium
_From [Zero...@gmail.com](https://code.google.com/u/103642962356045697092/) on November 14, 2010 23:16:57_ ситуация: FAR v1.75 (build 2634) x86 [1.75.0.2634] OS: Win7 x86sp1beta ConEmu .exe 2010.11.08 Conemu запущен в основном сеансе доменного админа. Запускаю Conemu от юзера Test (User right) При запуске сообщается: --------------------------- ConEmu --------------------------- Can't register Minimize/Restore hotkey, ErrCode=0x00000581 \--------------------------- OK \--------------------------- а при выходе из фара идут постоянные messagebox's --------------------------- ConEmu --------------------------- Can't open file for writing! C:\Program Files\Far2\ ConEmu .xml ErrCode=0x00000005 \--------------------------- OK \--------------------------- _Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=316_
1.0
ConEmu.exe 2010.11.08 Постоянное сообщение о не возможности сохранить данные в ConEmu.xml - _From [Zero...@gmail.com](https://code.google.com/u/103642962356045697092/) on November 14, 2010 23:16:57_ ситуация: FAR v1.75 (build 2634) x86 [1.75.0.2634] OS: Win7 x86sp1beta ConEmu .exe 2010.11.08 Conemu запущен в основном сеансе доменного админа. Запускаю Conemu от юзера Test (User right) При запуске сообщается: --------------------------- ConEmu --------------------------- Can't register Minimize/Restore hotkey, ErrCode=0x00000581 \--------------------------- OK \--------------------------- а при выходе из фара идут постоянные messagebox's --------------------------- ConEmu --------------------------- Can't open file for writing! C:\Program Files\Far2\ ConEmu .xml ErrCode=0x00000005 \--------------------------- OK \--------------------------- _Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=316_
non_process
conemu exe постоянное сообщение о не возможности сохранить данные в conemu xml from on november ситуация far build os conemu exe conemu запущен в основном сеансе доменного админа запускаю conemu от юзера test user right при запуске сообщается conemu can t register minimize restore hotkey errcode ok а при выходе из фара идут постоянные messagebox s conemu can t open file for writing c program files conemu xml errcode ok original issue
0
18,866
24,794,708,751
IssuesEvent
2022-10-24 16:14:53
dishmint/ArcanePropagation
https://api.github.com/repos/dishmint/ArcanePropagation
closed
Shader not centered when using display density of 2
bug processing4
Setting pixel density to 2 in `p4_ArcanePropagation_Shader` makes the shader uncentered. I know it's because more pixels are being drawn, but changing img.width/height to img.pixelWidth/pixelHeight didn't seem to fix the issue.
1.0
Shader not centered when using display density of 2 - Setting pixel density to 2 in `p4_ArcanePropagation_Shader` makes the shader uncentered. I know it's because more pixels are being drawn, but changing img.width/height to img.pixelWidth/pixelHeight didn't seem to fix the issue.
process
shader not centered when using display density of setting pixel density to in arcanepropagation shader makes the shader uncentered i know it s because more pixels are being drawn but changing img width height to img pixelwidth pixelheight didn t seem to fix the issue
1
34,046
7,330,651,001
IssuesEvent
2018-03-05 10:39:09
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
Prevent fileUpload messages to display services messages
defect
I'm using the MessageService with the global messages setup. Any time I publish a message with the service - the notification shows up inside the file upload content area. How can I specify which 'messages' it displays ? (ie. I still want notification on things like wrong file type, or file size, etc...)
1.0
Prevent fileUpload messages to display services messages - I'm using the MessageService with the global messages setup. Any time I publish a message with the service - the notification shows up inside the file upload content area. How can I specify which 'messages' it displays ? (ie. I still want notification on things like wrong file type, or file size, etc...)
non_process
prevent fileupload messages to display services messages i m using the messageservice with the global messages setup any time i publish a message with the service the notification shows up inside the file upload content area how can i specify which messages it displays ie i still want notification on things like wrong file type or file size etc
0
11,638
13,688,541,316
IssuesEvent
2020-09-30 11:53:32
kami-blue/client
https://api.github.com/repos/kami-blue/client
closed
baritone doesn't work
-incompatible -invalid
if u write ;b goto 5 5 5 it would count as a commend but it wont do any thing... and if i write # its a chat message which is ok cus its ;b so please make ;b work.
True
baritone doesn't work - if u write ;b goto 5 5 5 it would count as a commend but it wont do any thing... and if i write # its a chat message which is ok cus its ;b so please make ;b work.
non_process
baritone doesn t work if u write b goto it would count as a commend but it wont do any thing and if i write its a chat message which is ok cus its b so please make b work
0
5,473
8,343,398,781
IssuesEvent
2018-09-30 03:54:07
JustBru00/RenamePlugin
https://api.github.com/repos/JustBru00/RenamePlugin
closed
[BUG] /removeloreline, /rename, and blacklisted words not working as intended
Processing
Broken: Players are able to /removeloreline, /rename, /lore, & /setloreline from an item with a blacklisted word, thus removing any lore or name from an item they shouldn't be able to. However, players _are not able_ to /setloreline, /lore [blacklisted word] or /rename [blacklisted word]
1.0
[BUG] /removeloreline, /rename, and blacklisted words not working as intended - Broken: Players are able to /removeloreline, /rename, /lore, & /setloreline from an item with a blacklisted word, thus removing any lore or name from an item they shouldn't be able to. However, players _are not able_ to /setloreline, /lore [blacklisted word] or /rename [blacklisted word]
process
removeloreline rename and blacklisted words not working as intended broken players are able to removeloreline rename lore setloreline from an item with a blacklisted word thus removing any lore or name from an item they shouldn t be able to however players are not able to setloreline lore or rename
1
105,833
16,660,997,582
IssuesEvent
2021-06-06 10:18:31
AlexRogalskiy/proverbs
https://api.github.com/repos/AlexRogalskiy/proverbs
opened
CVE-2012-6708 (Medium) detected in jquery-1.8.1.min.js
security vulnerability
## CVE-2012-6708 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p> <p>Path to dependency file: proverbs/node_modules/redeyed/examples/browser/index.html</p> <p>Path to vulnerable library: proverbs/node_modules/redeyed/examples/browser/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.8.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/proverbs/commit/03f188eadf72040f458099c95899cd912240a3ac">03f188eadf72040f458099c95899cd912240a3ac</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common. 
<p>Publish Date: 2018-01-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v1.9.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2012-6708 (Medium) detected in jquery-1.8.1.min.js - ## CVE-2012-6708 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p> <p>Path to dependency file: proverbs/node_modules/redeyed/examples/browser/index.html</p> <p>Path to vulnerable library: proverbs/node_modules/redeyed/examples/browser/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.8.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/proverbs/commit/03f188eadf72040f458099c95899cd912240a3ac">03f188eadf72040f458099c95899cd912240a3ac</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common. 
<p>Publish Date: 2018-01-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v1.9.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file proverbs node modules redeyed examples browser index html path to vulnerable library proverbs node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
0
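The selector-vs-HTML heuristic described in the CVE record above can be modeled with a short sketch. This is a simplified illustration, not jQuery's actual source; both function names are made up, and the two behaviors are paraphrased from the advisory text.

```python
# Simplified model of how jQuery decides whether jQuery(strInput) received
# a selector or an HTML string (hypothetical functions, not jQuery source).

def looks_like_html_pre_1_9(s: str) -> bool:
    # Vulnerable versions: treat input as HTML if '<' appears ANYWHERE.
    return "<" in s

def looks_like_html_fixed(s: str) -> bool:
    # jQuery >= 1.9: treat input as HTML only if it explicitly STARTS with '<'.
    return s.startswith("<")

# Attacker-controlled text appended after a legitimate selector:
payload = "#results <img src=x onerror=alert(1)>"

print(looks_like_html_pre_1_9(payload))  # True  -> parsed as HTML (XSS risk)
print(looks_like_html_fixed(payload))    # False -> treated as a plain selector
```

This shows why the fix limits exploitability: the attacker must control the very beginning of the string, not just any part of it.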
915
3,373,939,319
IssuesEvent
2015-11-24 10:28:56
walnutzhang/notes
https://api.github.com/repos/walnutzhang/notes
opened
Processing Study Notes 01
Processing
Started learning Processing. The materials I am using: [The Nature of Code](http://natureofcode.com/book/introduction) This is not a beginner's book; it mainly covers how to use Processing to simulate the laws of the real world, using (pseudo-)random numbers and probability to design algorithms. Since it does not explain Processing syntax or the API, it may not suit people with no programming experience as an introduction. [Processing.org tutorials](https://processing.org/tutorials/) These are the official Processing tutorials; the Text Tutorials section is well organized, and the basic API and syntax can be learned there. Today I mainly read https://processing.org/tutorials/overview/ (the overview) and http://natureofcode.com/book/introduction/ (the Introduction chapter of NoC). ## Processing Overview and Environment Setup Processing is really several things: - a simplified language - a set of Java-based drawing APIs - an IDE that compiles and runs code There is nothing special to say about the language; it is basically a simplified Java. You can also skip the Processing language and write directly in Java against the Processing API, which may suit more complex projects better. For now I will use the Processing language to get familiar with the API it provides (which is also what the official tutorials recommend). As for the IDE, I can't even be bothered to complain... it is really bad. I abandoned it immediately after trying it: first install the processing-java command-line tool, then install the Processing plugin and syntax highlighting in Atom. Apart from the lack of real-time error hints, everything is OK. ## Processing Projects Each Processing project is called a sketch. Code files use the .pde extension. The more basic Processing functions include size(), background(), stroke(), line(), and so on. Using only these statements produces a static image. If you provide void setup(){} and void draw(){}, it becomes an interactive program: setup runs only once, while draw runs repeatedly. So setup is used to initialize the environment (for example, setting the sketch size), and draw is used to run the animation. There are also event handlers, such as void mousePressed(){}, which runs on every mouse click. Each sketch also has some global variables: for example, mouseX and mouseY give the mouse coordinates within the canvas, while width and height are the canvas dimensions (though these two are only set after size() has run). The saveFrame() function exports the sketch as an image. Note that if this call is placed inside draw, every execution of draw outputs a new image. In size(400, 400, P2D), P2D means using OpenGL to render 2D graphics quickly; P3D uses OpenGL to render 3D. It can also be PDF, followed by an output path. loadStrings() and loadImage() read data from the data folder (created manually) under the sketch's project folder. ## NoC Intro
1.0
Processing Study Notes 01 - Started learning Processing. The materials I am using: [The Nature of Code](http://natureofcode.com/book/introduction) This is not a beginner's book; it mainly covers how to use Processing to simulate the laws of the real world, using (pseudo-)random numbers and probability to design algorithms. Since it does not explain Processing syntax or the API, it may not suit people with no programming experience as an introduction. [Processing.org tutorials](https://processing.org/tutorials/) These are the official Processing tutorials; the Text Tutorials section is well organized, and the basic API and syntax can be learned there. Today I mainly read https://processing.org/tutorials/overview/ (the overview) and http://natureofcode.com/book/introduction/ (the Introduction chapter of NoC). ## Processing Overview and Environment Setup Processing is really several things: - a simplified language - a set of Java-based drawing APIs - an IDE that compiles and runs code There is nothing special to say about the language; it is basically a simplified Java. You can also skip the Processing language and write directly in Java against the Processing API, which may suit more complex projects better. For now I will use the Processing language to get familiar with the API it provides (which is also what the official tutorials recommend). As for the IDE, I can't even be bothered to complain... it is really bad. I abandoned it immediately after trying it: first install the processing-java command-line tool, then install the Processing plugin and syntax highlighting in Atom. Apart from the lack of real-time error hints, everything is OK. ## Processing Projects Each Processing project is called a sketch. Code files use the .pde extension. The more basic Processing functions include size(), background(), stroke(), line(), and so on. Using only these statements produces a static image. If you provide void setup(){} and void draw(){}, it becomes an interactive program: setup runs only once, while draw runs repeatedly. So setup is used to initialize the environment (for example, setting the sketch size), and draw is used to run the animation. There are also event handlers, such as void mousePressed(){}, which runs on every mouse click. Each sketch also has some global variables: for example, mouseX and mouseY give the mouse coordinates within the canvas, while width and height are the canvas dimensions (though these two are only set after size() has run). The saveFrame() function exports the sketch as an image. Note that if this call is placed inside draw, every execution of draw outputs a new image. In size(400, 400, P2D), P2D means using OpenGL to render 2D graphics quickly; P3D uses OpenGL to render 3D. It can also be PDF, followed by an output path. loadStrings() and loadImage() read data from the data folder (created manually) under the sketch's project folder. ## NoC Intro
process
processing study notes started learning processing the materials i am using this is not a beginner s book it mainly covers how to use processing to simulate the laws of the real world using pseudo random numbers and probability to design algorithms since it does not explain processing syntax or the api it may not suit people with no programming experience as an introduction these are the official processing tutorials the text tutorials section is well organized and the basic api and syntax can be learned there today i mainly read the overview and the introduction chapter of noc processing overview and environment setup processing is really several things a simplified language a set of java based drawing apis an ide that compiles and runs code there is nothing special to say about the language it is basically a simplified java you can also skip the processing language and write directly in java against the processing api which may suit more complex projects better for now i will use the processing language to get familiar with the api it provides which is also what the official tutorials recommend as for the ide i can t even be bothered to complain it is really bad i abandoned it immediately after trying it first install the processing java command line tool then install the processing plugin and syntax highlighting in atom apart from the lack of real time error hints everything is ok processing projects each processing project is called a sketch code files use the pde extension the more basic processing functions include size background stroke line and so on using only these statements produces a static image if you provide void setup and void draw it becomes an interactive program setup runs only once while draw runs repeatedly so setup is used to initialize the environment for example setting the sketch size and draw is used to run the animation there are also event handlers such as void mousepressed which runs on every mouse click each sketch also has some global variables for example mousex and mousey give the mouse coordinates within the canvas while width and height are the canvas dimensions though these two are only set after size has run the saveframe function exports the sketch as an image note that if this call is placed inside draw every execution of draw outputs a new image in size means using opengl to render graphics quickly uses opengl to render it can also be pdf followed by an output path loadstrings and loadimage read data from the data folder created manually under the sketch s project folder noc intro
1
6,805
9,954,325,768
IssuesEvent
2019-07-05 08:05:55
aiidateam/aiida-core
https://api.github.com/repos/aiidateam/aiida-core
closed
Calculation functions are executed even when cached
priority/important topic/caching topic/engine topic/processes type/bug
Reproducible by the following: ``` #!/usr/bin/env runaiida from aiida import orm from aiida.engine import calcfunction from aiida.manage.caching import enable_caching @calcfunction def add(x, y): return x + y x = orm.Int(1) y = orm.Int(2) result, node = add.run_get_node(x, y) with enable_caching(): result_cached, node_cached = add.run_get_node(x, y) assert result_cached.is_stored is True ``` which raises `AssertionError` because the result node is not actually stored
1.0
Calculation functions are executed even when cached - Reproducible by the following: ``` #!/usr/bin/env runaiida from aiida import orm from aiida.engine import calcfunction from aiida.manage.caching import enable_caching @calcfunction def add(x, y): return x + y x = orm.Int(1) y = orm.Int(2) result, node = add.run_get_node(x, y) with enable_caching(): result_cached, node_cached = add.run_get_node(x, y) assert result_cached.is_stored is True ``` which raises `AssertionError` because the result node is not actually stored
process
calculation functions are executed even when cached reproducible by the following usr bin env runaiida from aiida import orm from aiida engine import calcfunction from aiida manage caching import enable caching calcfunction def add x y return x y x orm int y orm int result node add run get node x y with enable caching result cached node cached add run get node x y assert result cached is stored is true which raises assertionerror because the result node is not actually stored
1
16,256
20,816,758,052
IssuesEvent
2022-03-18 11:09:51
maticnetwork/miden
https://api.github.com/repos/maticnetwork/miden
closed
Range checks for u32 operations
processor
As described in [u32 operations](https://hackmd.io/NC-yRmmtRQSvToTHb96e8Q) note, these operations need to rely on a set of helper registers. This is not yet implemented because the helper registers will actually be located in the decoder which is currently not implemented. Some of the helper registers are needed to perform 16-bit range checks. And while we can't yet populate these registers with needed values, we can make lookups into the `RangeChecker` for these values (this can be done via the `RangeChecker::add_value()` method). At the high level, there are two things we should do: 1. For u32 operations which require range checks, we should make sure we add the required values to the `RangeChecker`. It might make sense to have some helper functions to handle this rather than duplicating this lookup code in every operation handler. 2. Once the above is done, executing u32 operations would result in the `RangeChecker` trace being populated. We should integrate this trace into the overall execution trace similar to how we did this for co-processor execution traces.
1.0
Range checks for u32 operations - As described in [u32 operations](https://hackmd.io/NC-yRmmtRQSvToTHb96e8Q) note, these operations need to rely on a set of helper registers. This is not yet implemented because the helper registers will actually be located in the decoder which is currently not implemented. Some of the helper registers are needed to perform 16-bit range checks. And while we can't yet populate these registers with needed values, we can make lookups into the `RangeChecker` for these values (this can be done via the `RangeChecker::add_value()` method). At the high level, there are two things we should do: 1. For u32 operations which require range checks, we should make sure we add the required values to the `RangeChecker`. It might make sense to have some helper functions to handle this rather than duplicating this lookup code in every operation handler. 2. Once the above is done, executing u32 operations would result in the `RangeChecker` trace being populated. We should integrate this trace into the overall execution trace similar to how we did this for co-processor execution traces.
process
range checks for operations as described in note these operations need to rely on a set of helper registers this is not yet implemented because the helper registers will actually be located in the decoder which is currently not implemented some of the helper registers are needed to perform bit range checks and while we can t yet populate these registers with needed values we can make lookups into the rangechecker for these values this can be done via the rangechecker add value method at the high level there are two things we should do for operations which require range checks we should make sure we add the required values to the rangechecker it might make sense to have some helper functions to handle this rather than duplicating this lookup code in every operation handler once the above is done executing operations would result in the rangechecker trace being populated we should integrate this trace into the overall execution trace similar to how we did this for co processor execution traces
1
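The 16-bit range checks mentioned in the record above can be illustrated with a small sketch of the general limb-decomposition technique. This is a hypothetical illustration, not Miden's actual implementation; `is_u16` and `passes_u32_range_check` are made-up names.

```python
# Hypothetical sketch: a value is shown to fit in 32 bits by splitting it
# into two 16-bit limbs and range-checking each limb, which is the role
# a 16-bit RangeChecker plays (not Miden's actual code).

def is_u16(limb: int) -> bool:
    # A single 16-bit range check.
    return 0 <= limb < 2 ** 16

def passes_u32_range_check(value: int) -> bool:
    # Decompose into low/high 16-bit limbs.
    lo = value & 0xFFFF
    hi = (value >> 16) & 0xFFFF
    # Both limbs must pass a 16-bit range check, and they must actually
    # reconstruct the original value (which fails for values >= 2**32).
    return is_u16(lo) and is_u16(hi) and (hi << 16) + lo == value

print(passes_u32_range_check(0xDEADBEEF))  # True: fits in 32 bits
print(passes_u32_range_check(2 ** 32))     # False: two limbs cannot cover it
```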
209,460
16,021,633,263
IssuesEvent
2021-04-21 00:53:55
peake100/rogerRabbit-go
https://api.github.com/repos/peake100/rogerRabbit-go
opened
Testing/Test all Official Tutorials
good first issue help wanted testing
We should add tests that match all of the [official tutorials](https://www.rabbitmq.com/getstarted.html). In addition to being a good baseline to ensure correctness, it will also make sure that we avoid the embarrassment of someone trying out our lib by following along with the tutorials running into a bug. Likely we should use a testing suite, with each method running one of the tutorials and resetting the test connection between. We should also put this in it's own sub-package as it will likely involve quite a bit of code.
1.0
Testing/Test all Official Tutorials - We should add tests that match all of the [official tutorials](https://www.rabbitmq.com/getstarted.html). In addition to being a good baseline to ensure correctness, it will also make sure that we avoid the embarrassment of someone trying out our lib by following along with the tutorials running into a bug. Likely we should use a testing suite, with each method running one of the tutorials and resetting the test connection between. We should also put this in it's own sub-package as it will likely involve quite a bit of code.
non_process
testing test all official tutorials we should add tests that match all of the in addition to being a good baseline to ensure correctness it will also make sure that we avoid the embarrassment of someone trying out our lib by following along with the tutorials running into a bug likely we should use a testing suite with each method running one of the tutorials and resetting the test connection between we should also put this in it s own sub package as it will likely involve quite a bit of code
0
4,102
7,050,205,344
IssuesEvent
2018-01-03 03:59:29
triplea-game/triplea
https://api.github.com/repos/triplea-game/triplea
opened
Java 9 build targets Java 8 but does not link against Java 8 runtime
category: dev & admin process type: process
This issue was originally raised in #2801. Our Java 9 build targets Java 8, but is incorrectly linking against the Java 9 runtime rather than the Java 8 runtime. At a minimum, the Java 9 build must specify the `-bootclasspath` option to `javac` (or the Gradle Java plugin equivalent) to point to a Java 8 runtime. We should also investigate using the new `--release` option to `javac` added in Java 9. It basically takes the place of `-source`, `-target`, and `-bootclasspath`. Gradle support for this option is unknown at this time. It would be nice if we can accomplish this in the Travis build without having to download JRE 8 with each build. JDK 8 should already be installed on the Travis build image even when JDK 9 is selected (I believe all that does is change the _default_ JDK used by the build, i.e. what's on the PATH). The solution also needs to account for dev environments, as devs may choose to use Java 9 to build locally. Because the location of the Java 8 runtime may differ between Travis and different dev's environments, the solution may require the specification of an environment variable or a Gradle project property to point to the Java 8 runtime. For devs using Eclipse, I believe there is nothing to be done. IIRC, Eclipse always uses the ECJ compiler it ships with (which is Java 9-compliant in Oxygen). It effectively sets the `-bootclasspath` option based on the project's selected JRE (which should be a Java 8 JRE or JDK). Eclipse devs probably already have their workspaces configured in this way. I'm not sure what, if anything, would need to be done for IntelliJ.
2.0
Java 9 build targets Java 8 but does not link against Java 8 runtime - This issue was originally raised in #2801. Our Java 9 build targets Java 8, but is incorrectly linking against the Java 9 runtime rather than the Java 8 runtime. At a minimum, the Java 9 build must specify the `-bootclasspath` option to `javac` (or the Gradle Java plugin equivalent) to point to a Java 8 runtime. We should also investigate using the new `--release` option to `javac` added in Java 9. It basically takes the place of `-source`, `-target`, and `-bootclasspath`. Gradle support for this option is unknown at this time. It would be nice if we can accomplish this in the Travis build without having to download JRE 8 with each build. JDK 8 should already be installed on the Travis build image even when JDK 9 is selected (I believe all that does is change the _default_ JDK used by the build, i.e. what's on the PATH). The solution also needs to account for dev environments, as devs may choose to use Java 9 to build locally. Because the location of the Java 8 runtime may differ between Travis and different dev's environments, the solution may require the specification of an environment variable or a Gradle project property to point to the Java 8 runtime. For devs using Eclipse, I believe there is nothing to be done. IIRC, Eclipse always uses the ECJ compiler it ships with (which is Java 9-compliant in Oxygen). It effectively sets the `-bootclasspath` option based on the project's selected JRE (which should be a Java 8 JRE or JDK). Eclipse devs probably already have their workspaces configured in this way. I'm not sure what, if anything, would need to be done for IntelliJ.
process
java build targets java but does not link against java runtime this issue was originally raised in our java build targets java but is incorrectly linking against the java runtime rather than the java runtime at a minimum the java build must specify the bootclasspath option to javac or the gradle java plugin equivalent to point to a java runtime we should also investigate using the new release option to javac added in java it basically takes the place of source target and bootclasspath gradle support for this option is unknown at this time it would be nice if we can accomplish this in the travis build without having to download jre with each build jdk should already be installed on the travis build image even when jdk is selected i believe all that does is change the default jdk used by the build i e what s on the path the solution also needs to account for dev environments as devs may choose to use java to build locally because the location of the java runtime may differ between travis and different dev s environments the solution may require the specification of an environment variable or a gradle project property to point to the java runtime for devs using eclipse i believe there is nothing to be done iirc eclipse always uses the ecj compiler it ships with which is java compliant in oxygen it effectively sets the bootclasspath option based on the project s selected jre which should be a java jre or jdk eclipse devs probably already have their workspaces configured in this way i m not sure what if anything would need to be done for intellij
1
88,905
3,787,265,679
IssuesEvent
2016-03-21 09:50:38
Arquisoft/VoteCounting_i2
https://api.github.com/repos/Arquisoft/VoteCounting_i2
opened
DB connection
priority
Create the model and connect to the database using JPA with hibernate. It must follow the model of the database defined with [VotingSystemGroup](https://github.com/Arquisoft/VotingSystem_I2).
1.0
DB connection - Create the model and connect to the database using JPA with hibernate. It must follow the model of the database defined with [VotingSystemGroup](https://github.com/Arquisoft/VotingSystem_I2).
non_process
db connection create the model and connect to the database using jpa with hibernate it must follow the model of the database defined with
0
18,960
3,735,575,509
IssuesEvent
2016-03-08 12:47:50
phracek/rebase-helper
https://api.github.com/repos/phracek/rebase-helper
closed
tox usage for rebase-helper test
enhancement unit tests
On the basis of discussion with @thozza we could switch our tests to tox. Is there any volunteer who can handle it?
1.0
tox usage for rebase-helper test - On the basis of discussion with @thozza we could switch our tests to tox. Is there any volunteer who can handle it?
non_process
tox usage for rebase helper test on the basis of discussion with thozza we could switch our tests to tox is there any volunteer who can handle it
0
15,837
20,027,004,601
IssuesEvent
2022-02-01 22:36:47
googleapis/java-translate
https://api.github.com/repos/googleapis/java-translate
closed
com.example.translatev3beta1.BatchTranslateDocumentTests: testBatchTranslateDocument failed
priority: p2 type: process api: translate flakybot: issue flakybot: flaky
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 319552c6c29ae1c5033d9b3afeb9b8bfa65c6b54 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f0cceb9c-ced2-4555-94f6-0b66c95884ac), [Sponge](http://sponge2/f0cceb9c-ced2-4555-94f6-0b66c95884ac) status: failed <details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable. at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:588) at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:463) at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:100) at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:73) at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:133) at com.example.translatev3beta1.BatchTranslateDocument.batchTranslateDocument(BatchTranslateDocument.java:117) at com.example.translatev3beta1.BatchTranslateDocumentTests.testBatchTranslateDocument(BatchTranslateDocumentTests.java:111) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548) Caused by: com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable. 
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69) at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72) at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60) at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97) at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68) at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1133) at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31) at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1277) at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038) at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808) at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563) at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533) at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463) at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427) at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460) at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557) at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717) at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at 
java.lang.Thread.run(Thread.java:748) Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable. at io.grpc.Status.asRuntimeException(Status.java:535) ... 13 more </pre></details>
1.0
com.example.translatev3beta1.BatchTranslateDocumentTests: testBatchTranslateDocument failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 319552c6c29ae1c5033d9b3afeb9b8bfa65c6b54 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f0cceb9c-ced2-4555-94f6-0b66c95884ac), [Sponge](http://sponge2/f0cceb9c-ced2-4555-94f6-0b66c95884ac) status: failed <details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable. at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:588) at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:463) at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:100) at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:73) at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:133) at com.example.translatev3beta1.BatchTranslateDocument.batchTranslateDocument(BatchTranslateDocument.java:117) at com.example.translatev3beta1.BatchTranslateDocumentTests.testBatchTranslateDocument(BatchTranslateDocumentTests.java:111) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548) Caused by: com.google.api.gax.rpc.UnavailableException: 
io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable. at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69) at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72) at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60) at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97) at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68) at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1133) at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31) at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1277) at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038) at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808) at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563) at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533) at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463) at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427) at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460) at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557) at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717) at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable. at io.grpc.Status.asRuntimeException(Status.java:535) ... 13 more </pre></details>
process
com example batchtranslatedocumenttests testbatchtranslatedocument failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output java util concurrent executionexception com google api gax rpc unavailableexception io grpc statusruntimeexception unavailable the service is currently unavailable at com google common util concurrent abstractfuture getdonevalue abstractfuture java at com google common util concurrent abstractfuture get abstractfuture java at com google common util concurrent fluentfuture trustedfuture get fluentfuture java at com google common util concurrent forwardingfuture get forwardingfuture java at com google api gax longrunning operationfutureimpl get operationfutureimpl java at com example batchtranslatedocument batchtranslatedocument batchtranslatedocument java at com example batchtranslatedocumenttests testbatchtranslatedocument batchtranslatedocumenttests java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at 
org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire execute java at org apache maven surefire executewithrerun java at org apache maven surefire executetestset java at org apache maven surefire invoke java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by com google api gax rpc unavailableexception io grpc statusruntimeexception unavailable the service is currently unavailable at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onfailure grpcexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at io grpc stub clientcalls grpcfuture 
setexception clientcalls java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc internal delayedclientcall delayedlistener run delayedclientcall java at io grpc internal delayedclientcall delayedlistener delayorexecute delayedclientcall java at io grpc internal delayedclientcall delayedlistener onclose delayedclientcall java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by io grpc statusruntimeexception unavailable the service is currently unavailable at io grpc status asruntimeexception status java more
1
303,992
26,245,854,054
IssuesEvent
2023-01-05 15:12:10
celo-org/celo-monorepo
https://api.github.com/repos/celo-org/celo-monorepo
closed
Check Brazil phone number format (9 digits) is handled correctly
Component: Attestations Component: Identity PM identity: needs review
Mila pointed out that one possible reason for our lower success rate in Brazil could be due to misrecognised phone numbers. According to Mila (wikipedia below), the number of digits in a phone number increased from 8 to 9 to expand the set of phone numbers. From [wikipedia](https://en.wikipedia.org/wiki/Telephone_numbers_in_Brazil#Ninth_digit_for_mobile_numbers): > Brazilian mobile phone numbers always have nine digits now, but seven digits were usual in the first years, then eight digits became the standard for several years - [ ] Check whether this could be a reason for lower success rates (relevant [Slack thread](https://celo-org.slack.com/archives/CJXJ1EV0D/p1640209027390800?thread_ts=1640156533.390200&cid=CJXJ1EV0D))
1.0
Check Brazil phone number format (9 digits) is handled correctly - Mila pointed out that one possible reason for our lower success rate in Brazil could be due to misrecognised phone numbers. According to Mila (wikipedia below), the number of digits in a phone number increased from 8 to 9 to expand the set of phone numbers. From [wikipedia](https://en.wikipedia.org/wiki/Telephone_numbers_in_Brazil#Ninth_digit_for_mobile_numbers): > Brazilian mobile phone numbers always have nine digits now, but seven digits were usual in the first years, then eight digits became the standard for several years - [ ] Check whether this could be a reason for lower success rates (relevant [Slack thread](https://celo-org.slack.com/archives/CJXJ1EV0D/p1640209027390800?thread_ts=1640156533.390200&cid=CJXJ1EV0D))
non_process
check brazil phone number format digits is handled correctly mila pointed out that one possible reason for our lower success rate in brazil could be due to misrecognised phone numbers according to mila wikipedia below the number of digits in a phone number increased from to to expand the set of phone numbers from brazilian mobile phone numbers always have nine digits now but seven digits were usual in the first years then eight digits became the standard for several years check whether this could be a reason for lower success rates relevant
0
9,070
12,140,104,506
IssuesEvent
2020-04-23 19:58:13
GoogleCloudPlatform/python-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
closed
remove gcp-devrel-py-tools from monitoring/api/v3/api-client/requirements-test.txt
priority: p2 remove-gcp-devrel-py-tools type: process
remove gcp-devrel-py-tools from monitoring/api/v3/api-client/requirements-test.txt
1.0
remove gcp-devrel-py-tools from monitoring/api/v3/api-client/requirements-test.txt - remove gcp-devrel-py-tools from monitoring/api/v3/api-client/requirements-test.txt
process
remove gcp devrel py tools from monitoring api api client requirements test txt remove gcp devrel py tools from monitoring api api client requirements test txt
1
109,798
13,837,185,268
IssuesEvent
2020-10-14 03:03:27
rubyforgood/casa
https://api.github.com/repos/rubyforgood/casa
closed
Remove non-link "Edit" on uneditable case contacts
:paintbrush: Design Hacktoberfest Priority: Medium
<img width="1603" alt="Screen Shot 2020-10-12 at 11 11 23 PM" src="https://user-images.githubusercontent.com/578159/95822218-53055c80-0ce0-11eb-84ac-15b706e63be7.png">
1.0
Remove non-link "Edit" on uneditable case contacts - <img width="1603" alt="Screen Shot 2020-10-12 at 11 11 23 PM" src="https://user-images.githubusercontent.com/578159/95822218-53055c80-0ce0-11eb-84ac-15b706e63be7.png">
non_process
remove non link edit on uneditable case contacts img width alt screen shot at pm src
0
253,359
27,300,559,414
IssuesEvent
2023-02-24 01:18:53
panasalap/linux-4.19.72_1
https://api.github.com/repos/panasalap/linux-4.19.72_1
opened
CVE-2020-0466 (High) detected in linux-yoctov5.4.51
security vulnerability
## CVE-2020-0466 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/eventpoll.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/eventpoll.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In do_epoll_ctl and ep_loop_check_proc of eventpoll.c, there is a possible use after free due to a logic error. This could lead to local escalation of privilege with no additional execution privileges needed. 
User interaction is not needed for exploitation.Product: AndroidVersions: Android kernelAndroid ID: A-147802478References: Upstream kernel <p>Publish Date: 2020-12-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-0466>CVE-2020-0466</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2020-12-14</p> <p>Fix Resolution: v4.14.195,v4.19.142,v4.4.234,v4.9.234,v5.4.61,v5.7.18,v5.8.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-0466 (High) detected in linux-yoctov5.4.51 - ## CVE-2020-0466 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/eventpoll.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/eventpoll.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In do_epoll_ctl and ep_loop_check_proc of eventpoll.c, there is a possible use after free due to a logic error. This could lead to local escalation of privilege with no additional execution privileges needed. 
User interaction is not needed for exploitation.Product: AndroidVersions: Android kernelAndroid ID: A-147802478References: Upstream kernel <p>Publish Date: 2020-12-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-0466>CVE-2020-0466</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2020-12-14</p> <p>Fix Resolution: v4.14.195,v4.19.142,v4.4.234,v4.9.234,v5.4.61,v5.7.18,v5.8.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in linux cve high severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files fs eventpoll c fs eventpoll c vulnerability details in do epoll ctl and ep loop check proc of eventpoll c there is a possible use after free due to a logic error this could lead to local escalation of privilege with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android kernelandroid id a upstream kernel publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend
0
15,958
20,174,827,573
IssuesEvent
2022-02-10 13:40:39
ooi-data/CE09OSSM-RID27-01-OPTAAD000-recovered_host-optaa_dj_dcl_instrument_recovered
https://api.github.com/repos/ooi-data/CE09OSSM-RID27-01-OPTAAD000-recovered_host-optaa_dj_dcl_instrument_recovered
opened
🛑 Processing failed: ValueError
process
## Overview `ValueError` found in `processing_task` task during run ended on 2022-02-10T13:40:39.176878. ## Details Flow name: `CE09OSSM-RID27-01-OPTAAD000-recovered_host-optaa_dj_dcl_instrument_recovered` Task name: `processing_task` Error type: `ValueError` Error message: cannot reindex or align along dimension 'wavelength' because the index has duplicate values <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 157, in processing process_dataset( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 147, in process_dataset append_to_zarr(mod_ds, store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 355, in append_to_zarr mod_ds = mod_ds.reindex(dim_indexer) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 2947, in reindex return self._reindex( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 2976, in _reindex variables, indexes = alignment.reindex_variables( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/alignment.py", line 589, in reindex_variables raise ValueError( ValueError: cannot reindex or align along dimension 'wavelength' because the index has duplicate values ``` </details>
1.0
🛑 Processing failed: ValueError - ## Overview `ValueError` found in `processing_task` task during run ended on 2022-02-10T13:40:39.176878. ## Details Flow name: `CE09OSSM-RID27-01-OPTAAD000-recovered_host-optaa_dj_dcl_instrument_recovered` Task name: `processing_task` Error type: `ValueError` Error message: cannot reindex or align along dimension 'wavelength' because the index has duplicate values <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 157, in processing process_dataset( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 147, in process_dataset append_to_zarr(mod_ds, store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 355, in append_to_zarr mod_ds = mod_ds.reindex(dim_indexer) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 2947, in reindex return self._reindex( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 2976, in _reindex variables, indexes = alignment.reindex_variables( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/alignment.py", line 589, in reindex_variables raise ValueError( ValueError: cannot reindex or align along dimension 'wavelength' because the index has duplicate values ``` </details>
process
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered host optaa dj dcl instrument recovered task name processing task error type valueerror error message cannot reindex or align along dimension wavelength because the index has duplicate values traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing process dataset file srv conda envs notebook lib site packages ooi harvester processor init py line in process dataset append to zarr mod ds store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr mod ds mod ds reindex dim indexer file srv conda envs notebook lib site packages xarray core dataset py line in reindex return self reindex file srv conda envs notebook lib site packages xarray core dataset py line in reindex variables indexes alignment reindex variables file srv conda envs notebook lib site packages xarray core alignment py line in reindex variables raise valueerror valueerror cannot reindex or align along dimension wavelength because the index has duplicate values
1
8,004
11,200,222,789
IssuesEvent
2020-01-03 21:04:45
knative/serving
https://api.github.com/repos/knative/serving
closed
Deflake IngressTLS tests
area/networking area/test-and-release kind/bug kind/process
<!-- If you need to report a security issue with Knative, send an email to knative-security@googlegroups.com. --> <!-- ## In what area(s)? Remove the '> ' to select /area networking /area test-and-release Other classifications: /kind process --> ## What version of Knative? <!-- Delete all but your choice --> HEAD ## Expected Behavior IngressTLS conformance tests are not flaky. ## Actual Behavior They are flaky. /assign @ZhiminXiang
1.0
Deflake IngressTLS tests - <!-- If you need to report a security issue with Knative, send an email to knative-security@googlegroups.com. --> <!-- ## In what area(s)? Remove the '> ' to select /area networking /area test-and-release Other classifications: /kind process --> ## What version of Knative? <!-- Delete all but your choice --> HEAD ## Expected Behavior IngressTLS conformance tests are not flaky. ## Actual Behavior They are flaky. /assign @ZhiminXiang
process
deflake ingresstls tests in what area s remove the to select area networking area test and release other classifications kind process what version of knative head expected behavior ingresstls conformance tests are not flaky actual behavior they are flaky assign zhiminxiang
1
799,316
28,304,430,870
IssuesEvent
2023-04-10 09:32:37
AY2223S2-CS2103T-W15-2/tp
https://api.github.com/repos/AY2223S2-CS2103T-W15-2/tp
closed
As a familiar user, I can sort orders by date
type.Story priority.Medium user.Orders
... so that I can keep track of orders due the earliest
1.0
As a familiar user, I can sort orders by date - ... so that I can keep track of orders due the earliest
non_process
as a familiar user i can sort orders by date so that i can keep track of orders due the earliest
0
17,360
23,185,427,478
IssuesEvent
2022-08-01 07:58:38
streamnative/flink
https://api.github.com/repos/streamnative/flink
closed
[SQL Connector] discuss whether use a different docker repository name.
compute/data-processing
Use a different docker repository name.
1.0
[SQL Connector] discuss whether use a different docker repository name. - Use a different docker repository name.
process
discuss whether use a different docker repository name use a different docker repository name
1
269,086
8,425,604,059
IssuesEvent
2018-10-16 03:28:25
antonwilc0x/NSO
https://api.github.com/repos/antonwilc0x/NSO
opened
Basic Server Database
complexity: unknown priority: high server
In order to add or extend certain features, a database needs to be established. LightDB, a noSQL database, will serve as the foundation. For better flexibility, each database will be separate and operate independently of each other. This _should_ cut down on the number of writes at this stage. ## Databases - [ ] Users (Username, UUID, Password Hash, Assigned Avatar UUIDs) - [ ] Avatars (Avatar Name, UUID, Assigned User UUID, City, Money) - [ ] Lots (Lot Name, Assigned Avatar UUID, Category) The city database is not included in this since it's the most complicated part.
1.0
Basic Server Database - In order to add or extend certain features, a database needs to be established. LightDB, a noSQL database, will serve as the foundation. For better flexibility, each database will be separate and operate independently of each other. This _should_ cut down on the number of writes at this stage. ## Databases - [ ] Users (Username, UUID, Password Hash, Assigned Avatar UUIDs) - [ ] Avatars (Avatar Name, UUID, Assigned User UUID, City, Money) - [ ] Lots (Lot Name, Assigned Avatar UUID, Category) The city database is not included in this since it's the most complicated part.
non_process
basic server database in order to add or extend certain features a database needs to be established lightdb a nosql database will serve as the foundation for better flexibility each database will be separate and operate independently of each other this should cut down on the number of writes at this stage databases users username uuid password hash assigned avatar uuids avatars avatar name uuid assigned user uuid city money lots lot name assigned avatar uuid category the city database is not included in this since it s the most complicated part
0
3,076
6,086,230,951
IssuesEvent
2017-06-17 22:25:56
w3c/html
https://api.github.com/repos/w3c/html
closed
HTML5.2 wide review tracker
process
* A11y: [Complete, no issues](https://www.w3.org/WAI/APA/wiki/HTML_5.2) * I18n: [Complete, no issues](https://lists.w3.org/Archives/Public/public-html/2017May/0024.html) * Privacy IG: [Complete, no issues](https://www.w3.org/2017/04/20-privacy-minutes.html) * Security IG: * TAG: [Complete, no substantive issues](https://lists.w3.org/Archives/Public/public-html/2017Jun/0009.html) no substantive issues](https://github.com/w3ctag/design-reviews/issues/119#issuecomment-297909765)
1.0
HTML5.2 wide review tracker - * A11y: [Complete, no issues](https://www.w3.org/WAI/APA/wiki/HTML_5.2) * I18n: [Complete, no issues](https://lists.w3.org/Archives/Public/public-html/2017May/0024.html) * Privacy IG: [Complete, no issues](https://www.w3.org/2017/04/20-privacy-minutes.html) * Security IG: * TAG: [Complete, no substantive issues](https://lists.w3.org/Archives/Public/public-html/2017Jun/0009.html) no substantive issues](https://github.com/w3ctag/design-reviews/issues/119#issuecomment-297909765)
process
wide review tracker privacy ig security ig tag no substantive issues
1
15,312
19,405,299,629
IssuesEvent
2021-12-19 22:13:44
emily-writes-poems/emily-writes-poems-processing
https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing
closed
migrate (new): get all collections
script migration processing
build Javascript/Electron functionality (does not exist in Python script) get all collections, return a list of collection ids for a dropdown menu
1.0
migrate (new): get all collections - build Javascript/Electron functionality (does not exist in Python script) get all collections, return a list of collection ids for a dropdown menu
process
migrate new get all collections build javascript electron functionality does not exist in python script get all collections return a list of collection ids for a dropdown menu
1
201,977
15,241,819,955
IssuesEvent
2021-02-19 09:01:00
microsoft/AzureStorageExplorer
https://api.github.com/repos/microsoft/AzureStorageExplorer
closed
An error dialog pops up when executing 'Propagate Access Control Lists…' for one Azure AD attached ADLS Gen2 folder
:beetle: regression :gear: adls gen2 🧪 testing
**Storage Explorer Version**: 1.17.0 **Build Number**: 20210204.2 **Branch**: main **Platform/OS:** Windows 10/ CentOS 7.6.1810 (Core)/ MacOS Catalina **Architecture**: ia32/x64 **Regression From**: Previous build(20210129.3) ## Steps to Reproduce ## 1. Expand one ADLS Gen2 account -> Blob Containers. 2. Create a blob container -> Upload one folder. 3. Give RBAC access to the blob container 'Blob Data Owner Role'. 4. Attach the folder via Azure AD -> Right click the attached folder under the 'Attached Containers -> Blob Containers' node. 5. Click 'Propagate Access Control Lists...' -> Click 'OK'. 6. Check there is no error dialog. ## Expected Experience ## No error dialog pops up. ## Actual Experience ## An error dialog pops up. ![image](https://user-images.githubusercontent.com/54055206/106865262-e33f3580-6705-11eb-8bf2-f50b264674f7.png) ## Additional Context ## This issue also reproduces for one Azure AD attached ADLS Gen2 folder via ACLs.
1.0
An error dialog pops up when executing 'Propagate Access Control Lists…' for one Azure AD attached ADLS Gen2 folder - **Storage Explorer Version**: 1.17.0 **Build Number**: 20210204.2 **Branch**: main **Platform/OS:** Windows 10/ CentOS 7.6.1810 (Core)/ MacOS Catalina **Architecture**: ia32/x64 **Regression From**: Previous build(20210129.3) ## Steps to Reproduce ## 1. Expand one ADLS Gen2 account -> Blob Containers. 2. Create a blob container -> Upload one folder. 3. Give RBAC access to the blob container 'Blob Data Owner Role'. 4. Attach the folder via Azure AD -> Right click the attached folder under the 'Attached Containers -> Blob Containers' node. 5. Click 'Propagate Access Control Lists...' -> Click 'OK'. 6. Check there is no error dialog. ## Expected Experience ## No error dialog pops up. ## Actual Experience ## An error dialog pops up. ![image](https://user-images.githubusercontent.com/54055206/106865262-e33f3580-6705-11eb-8bf2-f50b264674f7.png) ## Additional Context ## This issue also reproduces for one Azure AD attached ADLS Gen2 folder via ACLs.
non_process
an error dialog pops up when executing propagate access control lists… for one azure ad attached adls folder storage explorer version build number branch main platform os windows centos core macos catalina architecture regression from previous build steps to reproduce expand one adls account blob containers create a blob container upload one folder give rbac access to the blob container blob data owner role attach the folder via azure ad right click the attached folder under the attached containers blob containers node click propagate access control lists click ok check there is no error dialog expected experience no error dialog pops up actual experience an error dialog pops up additional context this issue also reproduces for one azure ad attached adls folder via acls
0
4,615
7,460,650,157
IssuesEvent
2018-03-30 20:42:49
attackle/rembot
https://api.github.com/repos/attackle/rembot
closed
Write algorithm to scale image to landscape a4 paper
processing
- [ ] Scale image img-width to paper-width: if img-width > img-height - [ ] Scale image height to paper-height : if img-width< img-height ## Resources [Image scaling](https://docs.opencv.org/3.4.0/da/d6e/tutorial_py_geometric_transformations.html)
1.0
Write algorithm to scale image to landscape a4 paper - - [ ] Scale image img-width to paper-width: if img-width > img-height - [ ] Scale image height to paper-height : if img-width< img-height ## Resources [Image scaling](https://docs.opencv.org/3.4.0/da/d6e/tutorial_py_geometric_transformations.html)
process
write algorithm to scale image to landscape paper scale image img width to paper width if img width img height scale image height to paper height if img width img height resources
1
34,419
4,539,918,771
IssuesEvent
2016-09-09 13:00:51
pythonapis/ZIF55P252J5SJRRTRWMMP4HC
https://api.github.com/repos/pythonapis/ZIF55P252J5SJRRTRWMMP4HC
closed
tsup2aPMGiw2rqedV1NMe50kj/Mow+TyKkT/jokeYVLI7jsuIKnMqXmrVJlafWMRmEvP/xwJMOyWgUIfF2A8yuYHR0IKs3nT+YOSOq1lC4BipV6QdylhfebLDYZryL2go6bL5WG20xF6GyI9cN/wRXWEi4qFIGLFRGbln2wrXWs=
design
9HoLrAUCqWon6q/TFLSHrcOWo7dgdgOeBB8zDl3s9GmMX16APOK49WS7lGkQwqHJpua1gerWIyOtAV8S1Hd2Vu7u1RmkobdqsBDvUTgUHe256xlA11LtmdDVQsYGXb7/3hqENgelZgEEdkn8nh/8EWRgxyYUuRtSfeEFgMkgfxLtg/+L8nJKBV7EOiHPhPhY9V7ueGIBfl/6jEDQk/eM8JF9IfHFpXy7HtU7QXpohSUdsYlnZ6iCGZhJhjma9grZUSsq4ccViCIgcy0N2MxZ4A6h7ANIGzpIQdmS4we+g/Ypkc/pLGHJH79w+V/Qsyzhr7sWJIs75/iEHVhcwSAJcsEmF/7RACWspyrRasXVaoPG+zLalzGWby6+F3DP/Jh7QDPnsWHvGV/N59erTyJioHYPMaILRyKzI+XiPa4FC9X6ceGoUpm9Jq/oFQoshiMjt0f1CxJPMrbCAc9x7qd3YLhDMj7Gmd2rqZZxdbHAl3gXx710W8K7Mi/qweQj+FfWS2PvZEmInQOHFya7IH87FXakseFMStqE1JZh84N5DmuqDxW5Q2oz/Kxvc0w2hbFKAov+L9/iw4L8CHozpQy7FU27jaqJyVdHO6ZQgvA3MSPiiU5Az+crrEMiTA9myWSETDhB62j2IfmQ040hCdy7WxFrZWw2VxCmwaACW5oI0B9++NCvaDVW4xNXTYs6OvxnzXkZ+6IRzSlQduH7BVwrJ3akseFMStqE1JZh84N5DmuqDxW5Q2oz/Kxvc0w2hbFK1l0gmWnSq8GHf9un9AI0WS3nPC8CMl2+hWWKC5iYRhsw7uIgcfaFzOTVuG+tyGsLNXiAYs1FFaWCVqLm31WB/YZ8umb3iXw1S3Sw3RpqpHwCVQOzUW4h8krClKgBwH6jOFiAkBHqeZX+W54Zp0m7RvtCJqle8CljCoFS2ujOC4Xes8/NDbWSlYEzIUjZDFptsQmdcF+PUBrMhkiCl5mH57GsOIVQAJ6PanP4KUEZesv4Vak2GMu+8vHFKRA3cCmcMQhb5K+idLRvbN2SALyQRcf0cgIzmYgJvj0efz1zwWo=
1.0
tsup2aPMGiw2rqedV1NMe50kj/Mow+TyKkT/jokeYVLI7jsuIKnMqXmrVJlafWMRmEvP/xwJMOyWgUIfF2A8yuYHR0IKs3nT+YOSOq1lC4BipV6QdylhfebLDYZryL2go6bL5WG20xF6GyI9cN/wRXWEi4qFIGLFRGbln2wrXWs= - 9HoLrAUCqWon6q/TFLSHrcOWo7dgdgOeBB8zDl3s9GmMX16APOK49WS7lGkQwqHJpua1gerWIyOtAV8S1Hd2Vu7u1RmkobdqsBDvUTgUHe256xlA11LtmdDVQsYGXb7/3hqENgelZgEEdkn8nh/8EWRgxyYUuRtSfeEFgMkgfxLtg/+L8nJKBV7EOiHPhPhY9V7ueGIBfl/6jEDQk/eM8JF9IfHFpXy7HtU7QXpohSUdsYlnZ6iCGZhJhjma9grZUSsq4ccViCIgcy0N2MxZ4A6h7ANIGzpIQdmS4we+g/Ypkc/pLGHJH79w+V/Qsyzhr7sWJIs75/iEHVhcwSAJcsEmF/7RACWspyrRasXVaoPG+zLalzGWby6+F3DP/Jh7QDPnsWHvGV/N59erTyJioHYPMaILRyKzI+XiPa4FC9X6ceGoUpm9Jq/oFQoshiMjt0f1CxJPMrbCAc9x7qd3YLhDMj7Gmd2rqZZxdbHAl3gXx710W8K7Mi/qweQj+FfWS2PvZEmInQOHFya7IH87FXakseFMStqE1JZh84N5DmuqDxW5Q2oz/Kxvc0w2hbFKAov+L9/iw4L8CHozpQy7FU27jaqJyVdHO6ZQgvA3MSPiiU5Az+crrEMiTA9myWSETDhB62j2IfmQ040hCdy7WxFrZWw2VxCmwaACW5oI0B9++NCvaDVW4xNXTYs6OvxnzXkZ+6IRzSlQduH7BVwrJ3akseFMStqE1JZh84N5DmuqDxW5Q2oz/Kxvc0w2hbFK1l0gmWnSq8GHf9un9AI0WS3nPC8CMl2+hWWKC5iYRhsw7uIgcfaFzOTVuG+tyGsLNXiAYs1FFaWCVqLm31WB/YZ8umb3iXw1S3Sw3RpqpHwCVQOzUW4h8krClKgBwH6jOFiAkBHqeZX+W54Zp0m7RvtCJqle8CljCoFS2ujOC4Xes8/NDbWSlYEzIUjZDFptsQmdcF+PUBrMhkiCl5mH57GsOIVQAJ6PanP4KUEZesv4Vak2GMu+8vHFKRA3cCmcMQhb5K+idLRvbN2SALyQRcf0cgIzmYgJvj0efz1zwWo=
non_process
mow tykkt g ypkc v iehvhcwsajcsemf qweqj ndbwslyeziujzdfptsqmdcf
0
320,486
27,437,599,431
IssuesEvent
2023-03-02 08:44:53
alttester/AltTester-Unity-SDK
https://api.github.com/repos/alttester/AltTester-Unity-SDK
opened
[AltTester Editor] [Tests list] When right-clicking a test deselect all the other selected tests
feature alttester editor
<!-- Short summary --> ## Description <!-- Describe shortly: What does this feature do? Why is it needed? --> An improvement on the workflow with tests in Unity would be to have in the AltTester Editor window in the `Tests list` the possibility to select only a test by right-clicking on it and that should deselect all the other already selected tests. ## Mocks <!-- Ex.: add mocks related to how the UX should look like with this feature. --> ## Additional info <!-- Add any additional info that could be of help like links to docs, resources etc. -->
1.0
[AltTester Editor] [Tests list] When right-clicking a test deselect all the other selected tests - <!-- Short summary --> ## Description <!-- Describe shortly: What does this feature do? Why is it needed? --> An improvement on the workflow with tests in Unity would be to have in the AltTester Editor window in the `Tests list` the possibility to select only a test by right-clicking on it and that should deselect all the other already selected tests. ## Mocks <!-- Ex.: add mocks related to how the UX should look like with this feature. --> ## Additional info <!-- Add any additional info that could be of help like links to docs, resources etc. -->
non_process
when right clicking a test deselect all the other selected tests description an improvement on the workflow with tests in unity would be to have in the alttester editor window in the tests list the possibility to select only a test by right clicking on it and that should deselect all the other already selected tests mocks additional info
0
20,369
27,026,717,200
IssuesEvent
2023-02-11 17:50:08
gobuffalo/middleware
https://api.github.com/repos/gobuffalo/middleware
opened
process: consolidate the default middlewares into a single module
process
As of today, we have 7 official middleware modules and 4 of them are used in a new app by default. Now, these middleware are stable and have no much update recently except dependency updates. To reduce the maintenance effort on these modules, and also to simplify the module dependency structure, I would like to consolidate the default middleware (which are `mw-csrf`, `mw-forcessl`, `mw-i18n`, and `mw-paramlogger`) into a single module. Each of them will be a separate package in the module but will share the same module version and other stuff. The process will be: 1. prepare the new module by merging them all together into this repository 2. update and clean up all the old modules before deprecating them, and releasing them as the last versions. 3. improve the CLI to use the new module for a new app 4. improve the CLI fixer to update existing apps, but this is optional Hey @paganotoni, as we discussed before, I am going to deprecate the old default middleware modules (4 modules) and merge them into this module. Please take a look at the rough process above and let me know if you have any concern or comment on it. ETA for this work is the end of this month roughly. Related tasks: * [ ] gobuffalo/mw-paramlogger/pull/12
1.0
process: consolidate the default middlewares into a single module - As of today, we have 7 official middleware modules and 4 of them are used in a new app by default. Now, these middleware are stable and have no much update recently except dependency updates. To reduce the maintenance effort on these modules, and also to simplify the module dependency structure, I would like to consolidate the default middleware (which are `mw-csrf`, `mw-forcessl`, `mw-i18n`, and `mw-paramlogger`) into a single module. Each of them will be a separate package in the module but will share the same module version and other stuff. The process will be: 1. prepare the new module by merging them all together into this repository 2. update and clean up all the old modules before deprecating them, and releasing them as the last versions. 3. improve the CLI to use the new module for a new app 4. improve the CLI fixer to update existing apps, but this is optional Hey @paganotoni, as we discussed before, I am going to deprecate the old default middleware modules (4 modules) and merge them into this module. Please take a look at the rough process above and let me know if you have any concern or comment on it. ETA for this work is the end of this month roughly. Related tasks: * [ ] gobuffalo/mw-paramlogger/pull/12
process
process consolidate the default middlewares into a single module as of today we have official middleware modules and of them are used in a new app by default now these middleware are stable and have no much update recently except dependency updates to reduce the maintenance effort on these modules and also to simplify the module dependency structure i would like to consolidate the default middleware which are mw csrf mw forcessl mw and mw paramlogger into a single module each of them will be a separate package in the module but will share the same module version and other stuff the process will be prepare the new module by merging them all together into this repository update and clean up all the old modules before deprecating them and releasing them as the last versions improve the cli to use the new module for a new app improve the cli fixer to update existing apps but this is optional hey paganotoni as we discussed before i am going to deprecate the old default middleware modules modules and merge them into this module please take a look at the rough process above and let me know if you have any concern or comment on it eta for this work is the end of this month roughly related tasks gobuffalo mw paramlogger pull
1
7,442
10,554,710,710
IssuesEvent
2019-10-03 20:06:13
pelias/pelias
https://api.github.com/repos/pelias/pelias
closed
Searching for places with city returns no results
processed
Hey team! I was using your awesome geocoding engine when I noticed something interesting. Let me tell you more about it. --- #### Here's what I did :innocent: <!-- try adding a link here to either [pelias.io/compare](http://pelias.io/compare) or [mapzen.com/search](https://mapzen.com/search) --> ## Searched for "Starbucks Seattle", "Starbucks Bellevue" #### Here's what I got :scream_cat: <!-- we :heart: screenshots or copy-pasted json results --> https://search.mapzen.com/v1/autocomplete?layers=address&text=starbucks%20seattle ## (no results) #### Here's what I was expecting :sparkles: <!-- try being as explicit as possible here so we know how to fix this issue --> ## It looks like the city isn't being used in search results, was expecting to see locations with that city #### Here's what I think could be improved :trophy:
1.0
Searching for places with city returns no results - Hey team! I was using your awesome geocoding engine when I noticed something interesting. Let me tell you more about it. --- #### Here's what I did :innocent: <!-- try adding a link here to either [pelias.io/compare](http://pelias.io/compare) or [mapzen.com/search](https://mapzen.com/search) --> ## Searched for "Starbucks Seattle", "Starbucks Bellevue" #### Here's what I got :scream_cat: <!-- we :heart: screenshots or copy-pasted json results --> https://search.mapzen.com/v1/autocomplete?layers=address&text=starbucks%20seattle ## (no results) #### Here's what I was expecting :sparkles: <!-- try being as explicit as possible here so we know how to fix this issue --> ## It looks like the city isn't being used in search results, was expecting to see locations with that city #### Here's what I think could be improved :trophy:
process
searching for places with city returns no results hey team i was using your awesome geocoding engine when i noticed something interesting let me tell you more about it here s what i did innocent searched for starbucks seattle starbucks bellevue here s what i got scream cat no results here s what i was expecting sparkles it looks like the city isn t being used in search results was expecting to see locations with that city here s what i think could be improved trophy
1
145,780
13,160,778,832
IssuesEvent
2020-08-10 18:15:07
chef/automate
https://api.github.com/repos/chef/automate
opened
Feature flags entry
documentation
The beta/legacy/features flag entry has changed subtly. They're also hard to find. We need to: * decide if/how to express this nuance * add a flags entry somewhere in reference to make the instructions discoverable Something along the lines of (Drafty): "Legacy, Beta, and Feature Flags: Use the `bets`, `lega`, or `feat` flags to turn on or off beta, legacy, or in-development features by turning the services. Select somewhere Automate screen--but not in an input box--and enter "feat", "lega", or "beta", to open the option box." https://github.com/chef/automate/pull/4202
1.0
Feature flags entry - The beta/legacy/features flag entry has changed subtly. They're also hard to find. We need to: * decide if/how to express this nuance * add a flags entry somewhere in reference to make the instructions discoverable Something along the lines of (Drafty): "Legacy, Beta, and Feature Flags: Use the `bets`, `lega`, or `feat` flags to turn on or off beta, legacy, or in-development features by turning the services. Select somewhere Automate screen--but not in an input box--and enter "feat", "lega", or "beta", to open the option box." https://github.com/chef/automate/pull/4202
non_process
feature flags entry the beta legacy features flag entry has changed subtly they re also hard to find we need to decide if how to express this nuance add a flags entry somewhere in reference to make the instructions discoverable something along the lines of drafty legacy beta and feature flags use the bets lega or feat flags to turn on or off beta legacy or in development features by turning the services select somewhere automate screen but not in an input box and enter feat lega or beta to open the option box
0
8,423
11,590,200,040
IssuesEvent
2020-02-24 05:49:18
scikit-learn/scikit-learn
https://api.github.com/repos/scikit-learn/scikit-learn
closed
Kernel Ridge result is inconsistent to Gaussian Process Regression
Bug module:gaussian_process
<!-- Before submitting a bug, please make sure the issue hasn't been already addressed by searching through the past issues. --> #### Describe the bug <!-- A clear and concise description of what the bug is. --> `KernelRidge` and `GaussianProcessRegressor` don't give the same result even with the same hyper-parametesrs. In theory they should. #### Steps/Code to Reproduce <!-- Please add a minimal example that we can reproduce the error by running the code. Be as succinct as possible, do not depend on external data. In short, we are going to copy-paste your code and we expect to get the same result as you. If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com --> Here is the gist. https://gist.github.com/lucidfrontier45/2aeb965dd03dc5b82837eceaf194460c I compared `KernelRidge` and `GaussianProcessRegressor` as well as my own Kernel Ridge implementation that use `sklearn.gaussian_process.kernels.RBF` and GPy's implementation. #### Expected Results <!-- Example: No error is thrown. Please paste or describe the expected results.--> The results of the four models should be identical. #### Actual Results <!-- Please paste or specifically describe the actual output or traceback. --> Only the result of sklearn's `KernelRidge` is different from the others. #### Versions <!-- Please run the following snippet and paste the output below. For scikit-learn >= 0.20: import sklearn; sklearn.show_versions() For scikit-learn < 0.20: import platform; print(platform.platform()) import sys; print("Python", sys.version) import numpy; print("NumPy", numpy.__version__) import scipy; print("SciPy", scipy.__version__) import sklearn; print("Scikit-Learn", sklearn.__version__) import imblearn; print("Imbalanced-Learn", imblearn.__version__) --> Linux-5.3.0-40-generic-x86_64-with-debian-buster-sid Python 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] NumPy 1.17.4 SciPy 1.3.2 Scikit-Learn 0.21.2 <!-- Thanks for contributing! -->
1.0
Kernel Ridge result is inconsistent to Gaussian Process Regression - <!-- Before submitting a bug, please make sure the issue hasn't been already addressed by searching through the past issues. --> #### Describe the bug <!-- A clear and concise description of what the bug is. --> `KernelRidge` and `GaussianProcessRegressor` don't give the same result even with the same hyper-parametesrs. In theory they should. #### Steps/Code to Reproduce <!-- Please add a minimal example that we can reproduce the error by running the code. Be as succinct as possible, do not depend on external data. In short, we are going to copy-paste your code and we expect to get the same result as you. If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com --> Here is the gist. https://gist.github.com/lucidfrontier45/2aeb965dd03dc5b82837eceaf194460c I compared `KernelRidge` and `GaussianProcessRegressor` as well as my own Kernel Ridge implementation that use `sklearn.gaussian_process.kernels.RBF` and GPy's implementation. #### Expected Results <!-- Example: No error is thrown. Please paste or describe the expected results.--> The results of the four models should be identical. #### Actual Results <!-- Please paste or specifically describe the actual output or traceback. --> Only the result of sklearn's `KernelRidge` is different from the others. #### Versions <!-- Please run the following snippet and paste the output below. For scikit-learn >= 0.20: import sklearn; sklearn.show_versions() For scikit-learn < 0.20: import platform; print(platform.platform()) import sys; print("Python", sys.version) import numpy; print("NumPy", numpy.__version__) import scipy; print("SciPy", scipy.__version__) import sklearn; print("Scikit-Learn", sklearn.__version__) import imblearn; print("Imbalanced-Learn", imblearn.__version__) --> Linux-5.3.0-40-generic-x86_64-with-debian-buster-sid Python 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] NumPy 1.17.4 SciPy 1.3.2 Scikit-Learn 0.21.2 <!-- Thanks for contributing! -->
process
kernel ridge result is inconsistent to gaussian process regression before submitting a bug please make sure the issue hasn t been already addressed by searching through the past issues describe the bug a clear and concise description of what the bug is kernelridge and gaussianprocessregressor don t give the same result even with the same hyper parametesrs in theory they should steps code to reproduce please add a minimal example that we can reproduce the error by running the code be as succinct as possible do not depend on external data in short we are going to copy paste your code and we expect to get the same result as you if the code is too long feel free to put it in a public gist and link it in the issue here is the gist i compared kernelridge and gaussianprocessregressor as well as my own kernel ridge implementation that use sklearn gaussian process kernels rbf and gpy s implementation expected results the results of the four models should be identical actual results only the result of sklearn s kernelridge is different from the others versions please run the following snippet and paste the output below for scikit learn import sklearn sklearn show versions for scikit learn import platform print platform platform import sys print python sys version import numpy print numpy numpy version import scipy print scipy scipy version import sklearn print scikit learn sklearn version import imblearn print imbalanced learn imblearn version linux generic with debian buster sid python default mar numpy scipy scikit learn
1
18,625
24,579,708,941
IssuesEvent
2022-10-13 14:46:56
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Consent API] [PM] Participant details > Consent history > There is no records in the consent history table
Bug P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
**AR:** There is no records in the consent history table after withdrawn from the study in the mobile apps. **ER:** Records should be displayed in the consent history table even after withdrawn from the study in the mobile apps. ![1q](https://user-images.githubusercontent.com/86007179/150734886-a7370273-0de5-4b28-8b68-320066458af0.png)
3.0
[Consent API] [PM] Participant details > Consent history > There is no records in the consent history table - **AR:** There is no records in the consent history table after withdrawn from the study in the mobile apps. **ER:** Records should be displayed in the consent history table even after withdrawn from the study in the mobile apps. ![1q](https://user-images.githubusercontent.com/86007179/150734886-a7370273-0de5-4b28-8b68-320066458af0.png)
process
participant details consent history there is no records in the consent history table ar there is no records in the consent history table after withdrawn from the study in the mobile apps er records should be displayed in the consent history table even after withdrawn from the study in the mobile apps
1
13,350
15,812,186,072
IssuesEvent
2021-04-05 04:58:30
e4exp/paper_manager_abstract
https://api.github.com/repos/e4exp/paper_manager_abstract
opened
Data Augmentation using Pre-trained Transformer Models
2020 Data Augmentation Natural Language Processing Pretraining Transformer
- https://arxiv.org/abs/2003.02245 - 2020 Language-model-based pre-trained models such as BERT have yielded large gains on a variety of NLP tasks. In this paper, we study different types of transformer-based pre-trained models for conditional data augmentation, including an auto-regressive model (GPT-2), an auto-encoder model (BERT), and a seq2seq model (BART). We show that prepending class labels to text sequences provides a simple yet effective way to condition a pre-trained model for data augmentation. On three classification benchmarks, the pre-trained seq2seq model furthermore outperforms other data augmentation methods in low-resource settings. We also examine how data augmentation based on different pre-trained models differs in terms of data diversity, and to what extent such methods preserve class-label information.
1.0
Data Augmentation using Pre-trained Transformer Models - - https://arxiv.org/abs/2003.02245 - 2020 Language-model-based pre-trained models such as BERT have yielded large gains on a variety of NLP tasks. In this paper, we study different types of transformer-based pre-trained models for conditional data augmentation, including an auto-regressive model (GPT-2), an auto-encoder model (BERT), and a seq2seq model (BART). We show that prepending class labels to text sequences provides a simple yet effective way to condition a pre-trained model for data augmentation. On three classification benchmarks, the pre-trained seq2seq model furthermore outperforms other data augmentation methods in low-resource settings. We also examine how data augmentation based on different pre-trained models differs in terms of data diversity, and to what extent such methods preserve class-label information.
process
data augmentation using pre trained transformer models language model based pre trained models such as bert have yielded large gains on a variety of nlp tasks in this paper we study different types of transformer based pre trained models for conditional data augmentation including an auto regressive model gpt an auto encoder model bert and a model bart we show that prepending class labels to text sequences provides a simple yet effective way to condition a pre trained model for data augmentation on three classification benchmarks the pre trained model furthermore outperforms other data augmentation methods in low resource settings we also examine how data augmentation based on different pre trained models differs in terms of data diversity and to what extent such methods preserve class label information
1
567
3,024,114,644
IssuesEvent
2015-08-02 08:18:04
HazyResearch/dd-genomics
https://api.github.com/repos/HazyResearch/dd-genomics
closed
Move coreNLP wrapper from bazaar -> dd-genomics/parser code
Low priority Preprocessing
This honestly shouldn't be that hard at all... that said I am chickening out from doing it immediately before a big run of PMC which is time sensitive. But this should be a fairly trivial thing to do at some point soon, piping the code around to a wrapper of a wrapper just seems silly...
1.0
Move coreNLP wrapper from bazaar -> dd-genomics/parser code - This honestly shouldn't be that hard at all... that said I am chickening out from doing it immediately before a big run of PMC which is time sensitive. But this should be a fairly trivial thing to do at some point soon, piping the code around to a wrapper of a wrapper just seems silly...
process
move corenlp wrapper from bazaar dd genomics parser code this honestly shouldn t be that hard at all that said i am chickening out from doing it immediately before a big run of pmc which is time sensitive but this should be a fairly trivial thing to do at some point soon piping the code around to a wrapper of a wrapper just seems silly
1
2,559
5,315,466,497
IssuesEvent
2017-02-13 17:23:04
jlm2017/jlm-video-subtitles
https://api.github.com/repos/jlm2017/jlm-video-subtitles
opened
[subtitles] [fr] Attentats : bobards médiatiques autour des propos de Jean-Luc Mélenchon
Language: French Process: Someone is working on this issue Process: [1] Writing in progress
# Video title Attentats : bobards médiatiques autour des propos de Jean-Luc Mélenchon # URL https://www.youtube.com/watch?v=UjP4RCur4Hk # Youtube subtitles language French # Duration 2:25 # Subtitles URL https://www.youtube.com/timedtext_editor?tab=captions&ref=player&lang=fr&v=UjP4RCur4Hk&action_mde_edit_form=1&ui=hd&bl=vmp
2.0
[subtitles] [fr] Attentats : bobards médiatiques autour des propos de Jean-Luc Mélenchon - # Video title Attentats : bobards médiatiques autour des propos de Jean-Luc Mélenchon # URL https://www.youtube.com/watch?v=UjP4RCur4Hk # Youtube subtitles language French # Duration 2:25 # Subtitles URL https://www.youtube.com/timedtext_editor?tab=captions&ref=player&lang=fr&v=UjP4RCur4Hk&action_mde_edit_form=1&ui=hd&bl=vmp
process
attentats bobards médiatiques autour des propos de jean luc mélenchon video title attentats bobards médiatiques autour des propos de jean luc mélenchon url youtube subtitles language french duration subtitles url
1
84,133
10,476,637,541
IssuesEvent
2019-09-23 19:03:03
samvera/hyrax
https://api.github.com/repos/samvera/hyrax
closed
Display analytics info on show page
Analytics design needed epic
### Descriptive summary Display download counts and page views on the show page for a work. Example from UVa shown here: <img width="770" alt="show page analytics" src="https://user-images.githubusercontent.com/1766876/35356203-b8d1d844-011d-11e8-87cc-f593395275e1.png">
1.0
Display analytics info on show page - ### Descriptive summary Display download counts and page views on the show page for a work. Example from UVa shown here: <img width="770" alt="show page analytics" src="https://user-images.githubusercontent.com/1766876/35356203-b8d1d844-011d-11e8-87cc-f593395275e1.png">
non_process
display analytics info on show page descriptive summary display download counts and page views on the show page for a work example from uva shown here img width alt show page analytics src
0
299,571
22,613,721,781
IssuesEvent
2022-06-29 19:40:43
GMLC-TDC/HELICS
https://api.github.com/repos/GMLC-TDC/HELICS
closed
Add federation construction guidelines to documentation
enhancement documentation ux
**Describe the solution you'd like** They're relatively well established customs and conventions for federation construction that new users could benefit from. These rules-of-thumb should be added as part of the user guide documentation.
1.0
Add federation construction guidelines to documentation - **Describe the solution you'd like** They're relatively well established customs and conventions for federation construction that new users could benefit from. These rules-of-thumb should be added as part of the user guide documentation.
non_process
add federation construction guidelines to documentation describe the solution you d like they re relatively well established customs and conventions for federation construction that new users could benefit from these rules of thumb should be added as part of the user guide documentation
0
3,056
6,046,779,528
IssuesEvent
2017-06-12 13:01:56
itsyouonline/identityserver
https://api.github.com/repos/itsyouonline/identityserver
closed
Can't recover existing account in case password has been forgotten.
process_wontfix
# Scenario 1- Go to login page and click "FORGOTYOUR PASSWORD?" button 2- Provide a valid username or emailaddress of the account ## Actual Result: ![user_rec](https://cloud.githubusercontent.com/assets/15011431/19727170/f66bde7c-9b96-11e6-80c8-3f403f09f8af.png) ![email_rec](https://cloud.githubusercontent.com/assets/15011431/19727172/f829f5fa-9b96-11e6-9af4-033a136fc817.png) ## Expected Result: A valid username or email address should be accepted and a recovery email should be sent.
1.0
Can't recover existing account in case password has been forgotten. - # Scenario 1- Go to login page and click "FORGOTYOUR PASSWORD?" button 2- Provide a valid username or emailaddress of the account ## Actual Result: ![user_rec](https://cloud.githubusercontent.com/assets/15011431/19727170/f66bde7c-9b96-11e6-80c8-3f403f09f8af.png) ![email_rec](https://cloud.githubusercontent.com/assets/15011431/19727172/f829f5fa-9b96-11e6-9af4-033a136fc817.png) ## Expected Result: A valid username or email address should be accepted and a recovery email should be sent.
process
can t recover existing account in case password has been forgotten scenario go to login page and click forgotyour password button provide a valid username or emailaddress of the account actual result expected result a valid username or email address should be accepted and a recovery email should be sent
1
4,571
7,396,356,805
IssuesEvent
2018-03-18 11:28:28
ODiogoSilva/assemblerflow
https://api.github.com/repos/ODiogoSilva/assemblerflow
opened
Add skesa check for empty file at the end of assembly
bug process
Even when skesa crashes (out of RAM for instance) it creates an empty assembly file, which passes through nextflow output channels. A check should be added to confirm that only not empty files can pass, otherwise the process fails.
1.0
Add skesa check for empty file at the end of assembly - Even when skesa crashes (out of RAM for instance) it creates an empty assembly file, which passes through nextflow output channels. A check should be added to confirm that only not empty files can pass, otherwise the process fails.
process
add skesa check for empty file at the end of assembly even when skesa crashes out of ram for instance it creates an empty assembly file which passes through nextflow output channels a check should be added to confirm that only not empty files can pass otherwise the process fails
1
8,633
11,784,975,440
IssuesEvent
2020-03-17 09:27:08
prisma/prisma-client-js
https://api.github.com/repos/prisma/prisma-client-js
closed
Allow access to raw value used as Foreign Key in a relation
kind/feature process/product
The following is valid to Photon, but not to Lift: ``` model Foo { bar Bar @map("bar_uuid") barId String @map("bar_uuid") } ``` If would be fantastic if either the above was allowed, or if `barId` could be autogenerated on the type `Foo`. Making `barId` accessible significantly cleans the syntax for callers (and I think without `barId` the query engine may even implement joins or something similar under the covers). For context because we are struggling with some Photon performance issues, we're using some manual workarounds. This backs out to us having many cases like `barId` in our schema but commenting and uncommenting the `barId` line depending on whether we're using `Photon`, or `Lift`.
1.0
Allow access to raw value used as Foreign Key in a relation - The following is valid to Photon, but not to Lift: ``` model Foo { bar Bar @map("bar_uuid") barId String @map("bar_uuid") } ``` If would be fantastic if either the above was allowed, or if `barId` could be autogenerated on the type `Foo`. Making `barId` accessible significantly cleans the syntax for callers (and I think without `barId` the query engine may even implement joins or something similar under the covers). For context because we are struggling with some Photon performance issues, we're using some manual workarounds. This backs out to us having many cases like `barId` in our schema but commenting and uncommenting the `barId` line depending on whether we're using `Photon`, or `Lift`.
process
allow access to raw value used as foreign key in a relation the following is valid to photon but not to lift model foo bar bar map bar uuid barid string map bar uuid if would be fantastic if either the above was allowed or if barid could be autogenerated on the type foo making barid accessible significantly cleans the syntax for callers and i think without barid the query engine may even implement joins or something similar under the covers for context because we are struggling with some photon performance issues we re using some manual workarounds this backs out to us having many cases like barid in our schema but commenting and uncommenting the barid line depending on whether we re using photon or lift
1
20,931
27,776,887,347
IssuesEvent
2023-03-16 17:51:42
GoogleCloudPlatform/vertex-ai-samples
https://api.github.com/repos/GoogleCloudPlatform/vertex-ai-samples
reopened
Dependency Dashboard
type: process
This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more. ## Open These updates have all been created already. Click a checkbox below to force a retry/rebase of any. - [ ] <!-- rebase-branch=renovate/nbqa-1.x -->[chore(deps): update dependency nbqa to v1.6.3](../pull/1396) - [ ] <!-- rebase-branch=renovate/python-3.x -->[chore(deps): update python docker tag to v3.11](../pull/1345) - [ ] <!-- rebase-branch=renovate/black-23.x -->[chore(deps): update dependency black to v23](../pull/1498) - [ ] <!-- rebase-branch=renovate/pyupgrade-3.x -->[chore(deps): update dependency pyupgrade to v3](../pull/1346) - [ ] <!-- rebase-all-open-prs -->**Click on this checkbox to rebase all open PRs at once** ## Detected dependencies <details><summary>dockerfile</summary> <blockquote> <details><summary>.github/workflows/linter/Dockerfile</summary> - `python 3.10` </details> </blockquote> </details> <details><summary>github-actions</summary> <blockquote> <details><summary>.github/workflows/ci.yaml</summary> - `actions/setup-python v4` - `actions/checkout v3` </details> </blockquote> </details> <details><summary>pip_requirements</summary> <blockquote> <details><summary>.cloud-build/cleanup/cleanup-requirements.txt</summary> </details> <details><summary>.cloud-build/requirements.txt</summary> </details> <details><summary>.github/workflows/linter/requirements.txt</summary> - `black ==22.12.0` - `pyupgrade ==2.38.4` - `isort ==5.12.0` - `flake8 ==6.0.0` - `nbqa ==1.5.3` </details> </blockquote> </details> --- - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
1.0
Dependency Dashboard - This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more. ## Open These updates have all been created already. Click a checkbox below to force a retry/rebase of any. - [ ] <!-- rebase-branch=renovate/nbqa-1.x -->[chore(deps): update dependency nbqa to v1.6.3](../pull/1396) - [ ] <!-- rebase-branch=renovate/python-3.x -->[chore(deps): update python docker tag to v3.11](../pull/1345) - [ ] <!-- rebase-branch=renovate/black-23.x -->[chore(deps): update dependency black to v23](../pull/1498) - [ ] <!-- rebase-branch=renovate/pyupgrade-3.x -->[chore(deps): update dependency pyupgrade to v3](../pull/1346) - [ ] <!-- rebase-all-open-prs -->**Click on this checkbox to rebase all open PRs at once** ## Detected dependencies <details><summary>dockerfile</summary> <blockquote> <details><summary>.github/workflows/linter/Dockerfile</summary> - `python 3.10` </details> </blockquote> </details> <details><summary>github-actions</summary> <blockquote> <details><summary>.github/workflows/ci.yaml</summary> - `actions/setup-python v4` - `actions/checkout v3` </details> </blockquote> </details> <details><summary>pip_requirements</summary> <blockquote> <details><summary>.cloud-build/cleanup/cleanup-requirements.txt</summary> </details> <details><summary>.cloud-build/requirements.txt</summary> </details> <details><summary>.github/workflows/linter/requirements.txt</summary> - `black ==22.12.0` - `pyupgrade ==2.38.4` - `isort ==5.12.0` - `flake8 ==6.0.0` - `nbqa ==1.5.3` </details> </blockquote> </details> --- - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
process
dependency dashboard this issue lists renovate updates and detected dependencies read the docs to learn more open these updates have all been created already click a checkbox below to force a retry rebase of any pull pull pull pull click on this checkbox to rebase all open prs at once detected dependencies dockerfile github workflows linter dockerfile python github actions github workflows ci yaml actions setup python actions checkout pip requirements cloud build cleanup cleanup requirements txt cloud build requirements txt github workflows linter requirements txt black pyupgrade isort nbqa check this box to trigger a request for renovate to run again on this repository
1
762,499
26,720,532,210
IssuesEvent
2023-01-29 03:45:46
Pictogrammers/pictogrammers.com
https://api.github.com/repos/Pictogrammers/pictogrammers.com
closed
Advanced Icon Export
Feature Request High Priority
As a user of Pictogrammers.com, I would like to export icon glyphs in sizes and colors that fit my use case, so that I do not have to rely on external tools to modify an icon glyph I'd like to use. AC: - A new Advanced Export feature is added to the current Icon View. - The Advanced Export tool will only export raster-based PNG images. [(Why?)](#why-only-pngs) - A user can change the foreground and background color (rgba) to their preference. - A user can change the exported size of the icon. - The current download options for PNG of 24, 36, and 48, will be condensed into a simple "Export as PNG" which will give a user a 256x256 version in black. Anything else will be considered an advanced export use case. ### Why only PNGs? We provide icons for graphic design and development. For design, PNGs are generally higher-quality than other formats (such as JPG) and support transparency. Additionally, they are supported pretty much anywhere they would need to be used. If other formats (such as JPG, WebP, etc.) are required, converting them will be the responsibility of the user. For development, SVGs should be the primary format. They are vector-based and can be resized and colored easily and dynamically.
1.0
Advanced Icon Export - As a user of Pictogrammers.com, I would like to export icon glyphs in sizes and colors that fit my use case, so that I do not have to rely on external tools to modify an icon glyph I'd like to use. AC: - A new Advanced Export feature is added to the current Icon View. - The Advanced Export tool will only export raster-based PNG images. [(Why?)](#why-only-pngs) - A user can change the foreground and background color (rgba) to their preference. - A user can change the exported size of the icon. - The current download options for PNG of 24, 36, and 48, will be condensed into a simple "Export as PNG" which will give a user a 256x256 version in black. Anything else will be considered an advanced export use case. ### Why only PNGs? We provide icons for graphic design and development. For design, PNGs are generally higher-quality than other formats (such as JPG) and support transparency. Additionally, they are supported pretty much anywhere they would need to be used. If other formats (such as JPG, WebP, etc.) are required, converting them will be the responsibility of the user. For development, SVGs should be the primary format. They are vector-based and can be resized and colored easily and dynamically.
non_process
advanced icon export as a user of pictogrammers com i would like to export icon glyphs in sizes and colors that fit my use case so that i do not have to rely on external tools to modify an icon glyph i d like to use ac a new advanced export feature is added to the current icon view the advanced export tool will only export raster based png images why only pngs a user can change the foreground and background color rgba to their preference a user can change the exported size of the icon the current download options for png of and will be condensed into a simple export as png which will give a user a version in black anything else will be considered an advanced export use case why only pngs we provide icons for graphic design and development for design pngs are generally higher quality than other formats such as jpg and support transparency additionally they are supported pretty much anywhere they would need to be used if other formats such as jpg webp etc are required converting them will be the responsibility of the user for development svgs should be the primary format they are vector based and can be resized and colored easily and dynamically
0
95,899
27,659,070,947
IssuesEvent
2023-03-12 10:09:09
Sendouc/sendou.ink
https://api.github.com/repos/Sendouc/sendou.ink
closed
Build Analyzer crash in ConsumptionTable
build analyzer
Page crashes ("Error happened") ![image](https://user-images.githubusercontent.com/38327916/222538046-505841fe-8fd4-4a9a-a78c-31ab5ed7e306.png) http://localhost:5800/analyzer?weapon=10&build2=LDE%2CRES%2CSRU%2CQSJ%2CISS%2CISS%2CISS%2CISS%2CISS%2CBRU%2CBRU%2CSSU&lde=6&effect=LDE&focused=1 ``` Mar 2 09:53:16 PM GET /analyzer?weapon=10&build2=LDE%2CRES%2CSRU%2CQSJ%2CISS%2CISS%2CISS%2CISS%2CISS%2CBRU%2CBRU%2CSSU&lde=6&effect=LDE&focused=1 500 - - 39.087 ms Mar 2 09:53:16 PM GET /analyzer?weapon=1011&build=SSU,SCU,SS,SCU,NS,SSU,SSU,SSU,SJ,RES,SRU,QSJ&build2=U,U,U,U,U,U,U,U,U,U,U,U&lde=0&focused=1 200 - - 38.268 ms Mar 2 09:53:16 PM at Array.map (<anonymous>) Mar 2 09:53:16 PM at ConsumptionTable (/opt/render/project/src/build/index.js:19199:114) Mar 2 09:53:16 PM at Uc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:70:44) Mar 2 09:53:16 PM at Xc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:72:253) Mar 2 09:53:16 PM at Z (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:78:89) Mar 2 09:53:16 PM at Yc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:81:98) Mar 2 09:53:16 PM at $c (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:80:140) Mar 2 09:53:16 PM at Z (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:78:345) Mar 2 09:53:16 PM at Yc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:81:98) Mar 2 09:53:16 PM TypeError: Cannot read properties of undefined (reading 'value') Mar 2 09:53:16 PM at /opt/render/project/src/build/index.js:19209:213 Mar 2 09:53:16 PM at Array.map (<anonymous>) Mar 2 09:53:16 PM at ConsumptionTable (/opt/render/project/src/build/index.js:19199:114) Mar 2 09:53:16 PM at Uc 
(/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:70:44) Mar 2 09:53:16 PM at Xc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:72:253) Mar 2 09:53:16 PM at Z (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:78:89) Mar 2 09:53:16 PM at Yc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:81:98) Mar 2 09:53:16 PM at $c (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:80:140) Mar 2 09:53:16 PM at Z (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:78:345) Mar 2 09:53:16 PM at Yc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:81:98) ```
1.0
Build Analyzer crash in ConsumptionTable - Page crashes ("Error happened") ![image](https://user-images.githubusercontent.com/38327916/222538046-505841fe-8fd4-4a9a-a78c-31ab5ed7e306.png) http://localhost:5800/analyzer?weapon=10&build2=LDE%2CRES%2CSRU%2CQSJ%2CISS%2CISS%2CISS%2CISS%2CISS%2CBRU%2CBRU%2CSSU&lde=6&effect=LDE&focused=1 ``` Mar 2 09:53:16 PM GET /analyzer?weapon=10&build2=LDE%2CRES%2CSRU%2CQSJ%2CISS%2CISS%2CISS%2CISS%2CISS%2CBRU%2CBRU%2CSSU&lde=6&effect=LDE&focused=1 500 - - 39.087 ms Mar 2 09:53:16 PM GET /analyzer?weapon=1011&build=SSU,SCU,SS,SCU,NS,SSU,SSU,SSU,SJ,RES,SRU,QSJ&build2=U,U,U,U,U,U,U,U,U,U,U,U&lde=0&focused=1 200 - - 38.268 ms Mar 2 09:53:16 PM at Array.map (<anonymous>) Mar 2 09:53:16 PM at ConsumptionTable (/opt/render/project/src/build/index.js:19199:114) Mar 2 09:53:16 PM at Uc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:70:44) Mar 2 09:53:16 PM at Xc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:72:253) Mar 2 09:53:16 PM at Z (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:78:89) Mar 2 09:53:16 PM at Yc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:81:98) Mar 2 09:53:16 PM at $c (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:80:140) Mar 2 09:53:16 PM at Z (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:78:345) Mar 2 09:53:16 PM at Yc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:81:98) Mar 2 09:53:16 PM TypeError: Cannot read properties of undefined (reading 'value') Mar 2 09:53:16 PM at /opt/render/project/src/build/index.js:19209:213 Mar 2 09:53:16 PM at Array.map (<anonymous>) Mar 2 09:53:16 PM at ConsumptionTable (/opt/render/project/src/build/index.js:19199:114) Mar 2 09:53:16 PM at Uc 
(/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:70:44) Mar 2 09:53:16 PM at Xc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:72:253) Mar 2 09:53:16 PM at Z (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:78:89) Mar 2 09:53:16 PM at Yc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:81:98) Mar 2 09:53:16 PM at $c (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:80:140) Mar 2 09:53:16 PM at Z (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:78:345) Mar 2 09:53:16 PM at Yc (/opt/render/project/src/node_modules/react-dom/cjs/react-dom-server.node.production.min.js:81:98) ```
non_process
build analyzer crash in consumptiontable page crashes error happened mar pm get analyzer weapon lde lde effect lde focused ms mar pm get analyzer weapon build ssu scu ss scu ns ssu ssu ssu sj res sru qsj u u u u u u u u u u u u lde focused ms mar pm at array map mar pm at consumptiontable opt render project src build index js mar pm at uc opt render project src node modules react dom cjs react dom server node production min js mar pm at xc opt render project src node modules react dom cjs react dom server node production min js mar pm at z opt render project src node modules react dom cjs react dom server node production min js mar pm at yc opt render project src node modules react dom cjs react dom server node production min js mar pm at c opt render project src node modules react dom cjs react dom server node production min js mar pm at z opt render project src node modules react dom cjs react dom server node production min js mar pm at yc opt render project src node modules react dom cjs react dom server node production min js mar pm typeerror cannot read properties of undefined reading value mar pm at opt render project src build index js mar pm at array map mar pm at consumptiontable opt render project src build index js mar pm at uc opt render project src node modules react dom cjs react dom server node production min js mar pm at xc opt render project src node modules react dom cjs react dom server node production min js mar pm at z opt render project src node modules react dom cjs react dom server node production min js mar pm at yc opt render project src node modules react dom cjs react dom server node production min js mar pm at c opt render project src node modules react dom cjs react dom server node production min js mar pm at z opt render project src node modules react dom cjs react dom server node production min js mar pm at yc opt render project src node modules react dom cjs react dom server node production min js
0
10,532
13,310,607,361
IssuesEvent
2020-08-26 06:52:23
e4exp/paper_manager_abstract
https://api.github.com/repos/e4exp/paper_manager_abstract
opened
Tree-structured decoding with doubly-recurrent neural networks
2016 Natural Language Processing Recurrent Neural Network Tree Structure _read_later
* https://openreview.net/forum?id=HkYhZDqxg * ICLR 2017 conference submission 符号化された表現から木構造のオブジェクトを生成するためのニューラルネットワークアーキテクチャを提案する。 この手法の中核となるのは二重回帰ニューラルネットワークであり、木全体の幅と深さの再帰を別々にモデル化し、各セル内でそれらを組み合わせて出力を生成する。 木のトポロジーは明示的にモデル化されており、ネットワークはデコード時に木の内容とトポロジーの両方を予測することができます。 つまり、エンコードされたベクトル表現だけが与えられると、ネットワークはそれから木を生成し、ノードのラベルを予測することができます。 ここでは、文をベクトルとしてエンコードし、そこから木構造を生成するためにネットワークを訓練します。 実験の結果、このアーキテクチャがシーケンスの潜在木構造を回復し、文を単純な関数プログラムにマッピングするのに有効であることを示した。
1.0
Tree-structured decoding with doubly-recurrent neural networks - * https://openreview.net/forum?id=HkYhZDqxg * ICLR 2017 conference submission 符号化された表現から木構造のオブジェクトを生成するためのニューラルネットワークアーキテクチャを提案する。 この手法の中核となるのは二重回帰ニューラルネットワークであり、木全体の幅と深さの再帰を別々にモデル化し、各セル内でそれらを組み合わせて出力を生成する。 木のトポロジーは明示的にモデル化されており、ネットワークはデコード時に木の内容とトポロジーの両方を予測することができます。 つまり、エンコードされたベクトル表現だけが与えられると、ネットワークはそれから木を生成し、ノードのラベルを予測することができます。 ここでは、文をベクトルとしてエンコードし、そこから木構造を生成するためにネットワークを訓練します。 実験の結果、このアーキテクチャがシーケンスの潜在木構造を回復し、文を単純な関数プログラムにマッピングするのに有効であることを示した。
process
tree structured decoding with doubly recurrent neural networks iclr conference submission 符号化された表現から木構造のオブジェクトを生成するためのニューラルネットワークアーキテクチャを提案する。 この手法の中核となるのは二重回帰ニューラルネットワークであり、木全体の幅と深さの再帰を別々にモデル化し、各セル内でそれらを組み合わせて出力を生成する。 木のトポロジーは明示的にモデル化されており、ネットワークはデコード時に木の内容とトポロジーの両方を予測することができます。 つまり、エンコードされたベクトル表現だけが与えられると、ネットワークはそれから木を生成し、ノードのラベルを予測することができます。 ここでは、文をベクトルとしてエンコードし、そこから木構造を生成するためにネットワークを訓練します。 実験の結果、このアーキテクチャがシーケンスの潜在木構造を回復し、文を単純な関数プログラムにマッピングするのに有効であることを示した。
1
1,773
4,487,880,867
IssuesEvent
2016-08-30 03:49:22
MSUCSC430/Inference
https://api.github.com/repos/MSUCSC430/Inference
closed
Create Project Skeleton
Complete core devprocess
Create the skeleton for the project. Configure maven, travis, git ignore
1.0
Create Project Skeleton - Create the skeleton for the project. Configure maven, travis, git ignore
process
create project skeleton create the skeleton for the project configure maven travis git ignore
1
712,606
24,500,708,472
IssuesEvent
2022-10-10 12:35:03
simonvbrae/flordi-web
https://api.github.com/repos/simonvbrae/flordi-web
opened
Internationalisation
enhancement medium priority
The website should be available in english and dutch - Implement internationalisation - Add translations of context
1.0
Internationalisation - The website should be available in english and dutch - Implement internationalisation - Add translations of context
non_process
internationalisation the website should be available in english and dutch implement internationalisation add translations of context
0
109,721
9,411,889,796
IssuesEvent
2019-04-10 01:33:35
MVPStudio/mvp-studio-web-cms
https://api.github.com/repos/MVPStudio/mvp-studio-web-cms
opened
Create mock version of service
testing
add an environment variable such that if `MOCK=true` when running the server.js the formHandler.js always returns a 200 and success message
1.0
Create mock version of service - add an environment variable such that if `MOCK=true` when running the server.js the formHandler.js always returns a 200 and success message
non_process
create mock version of service add an environment variable such that if mock true when running the server js the formhandler js always returns a and success message
0
10,177
13,044,162,789
IssuesEvent
2020-07-29 03:47:36
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `RoundWithFracInt` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `RoundWithFracInt` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `RoundWithFracInt` from TiDB - ## Description Port the scalar function `RoundWithFracInt` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function roundwithfracint from tidb description port the scalar function roundwithfracint from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
1
163,919
12,750,291,595
IssuesEvent
2020-06-27 03:25:27
pantsbuild/pants
https://api.github.com/repos/pantsbuild/pants
closed
Rsc Compile integration test flake
flaky-test test-timeout
This test is flaky, in that it nearly reaches, and sometimes crosses the 10 minute timeout in travis between getting any output from the job. ``` Only executing 1 of 4 total tests in shard 3 of 20 . [100%] - generated xml file: /home/travis/build/pantsbuild/pants/.pants.d/test/pytest/tests.python.pants_test.backend.jvm.tasks.jvm_compile.rsc.rsc_compile_integration/junitxml/TEST-tests.python.pants_test.backend.jvm.tasks.jvm_compile.rsc.rsc_compile_integration.xml - 1 passed in 596.28 seconds ``` This one passed with 4 seconds to spare.
2.0
Rsc Compile integration test flake - This test is flaky, in that it nearly reaches, and sometimes crosses the 10 minute timeout in travis between getting any output from the job. ``` Only executing 1 of 4 total tests in shard 3 of 20 . [100%] - generated xml file: /home/travis/build/pantsbuild/pants/.pants.d/test/pytest/tests.python.pants_test.backend.jvm.tasks.jvm_compile.rsc.rsc_compile_integration/junitxml/TEST-tests.python.pants_test.backend.jvm.tasks.jvm_compile.rsc.rsc_compile_integration.xml - 1 passed in 596.28 seconds ``` This one passed with 4 seconds to spare.
non_process
rsc compile integration test flake this test is flaky in that it nearly reaches and sometimes crosses the minute timeout in travis between getting any output from the job only executing of total tests in shard of generated xml file home travis build pantsbuild pants pants d test pytest tests python pants test backend jvm tasks jvm compile rsc rsc compile integration junitxml test tests python pants test backend jvm tasks jvm compile rsc rsc compile integration xml passed in seconds this one passed with seconds to spare
0
16,235
20,782,301,501
IssuesEvent
2022-03-16 15:43:42
gaosiyan/gaosiyan.github.io
https://api.github.com/repos/gaosiyan/gaosiyan.github.io
opened
Signal Processing for Communications(通信信号处理)原书翻译 | gaosiyan's blog
Gitalk /post/signal-processing-for-communicationstong-xi
https://gaosiyan.github.io/post/signal-processing-for-communicationstong-xin-xin-hao-chu-li-yuan-shu-fan-yi/ 创建时间:2021-09-11. D:\资料库\数字信号处理\Signal Processing for Communications - 通信信号处理[2008].pdf 书籍信息 书名:Signal Processing for Com...
1.0
Signal Processing for Communications(通信信号处理)原书翻译 | gaosiyan's blog - https://gaosiyan.github.io/post/signal-processing-for-communicationstong-xin-xin-hao-chu-li-yuan-shu-fan-yi/ 创建时间:2021-09-11. D:\资料库\数字信号处理\Signal Processing for Communications - 通信信号处理[2008].pdf 书籍信息 书名:Signal Processing for Com...
process
signal processing for communications 通信信号处理 原书翻译 gaosiyan s blog 创建时间 d 资料库 数字信号处理 signal processing for communications 通信信号处理 pdf 书籍信息 书名 signal processing for com
1
22,197
30,754,312,463
IssuesEvent
2023-07-28 23:20:41
bitfocus/companion-module-requests
https://api.github.com/repos/bitfocus/companion-module-requests
opened
Allen and Heath Qu Connection not working in Companion 3.0
NOT YET PROCESSED
This is not a new request. I couldn't figure out how to add an issue that wasn't a new request. The Allen and Heath Qu-XX connection gives an error in Companion 3.0 when trying to connect. - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: What you would like to be able to make it do from Companion: Direct links or attachments to the ethernet control protocol or API:
1.0
Allen and Heath Qu Connection not working in Companion 3.0 - This is not a new request. I couldn't figure out how to add an issue that wasn't a new request. The Allen and Heath Qu-XX connection gives an error in Companion 3.0 when trying to connect. - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: What you would like to be able to make it do from Companion: Direct links or attachments to the ethernet control protocol or API:
process
allen and heath qu connection not working in companion this is not a new request i couldn t figure out how to add an issue that wasn t a new request the allen and heath qu xx connection gives an error in companion when trying to connect i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control what you would like to be able to make it do from companion direct links or attachments to the ethernet control protocol or api
1
4,347
7,247,676,416
IssuesEvent
2018-02-15 04:52:22
bojanrajkovic/Volley
https://api.github.com/repos/bojanrajkovic/Volley
closed
Start using .NET Core 2.1 sooner rather than later
CI Release Process
1. Test out perf improvements/build time improvements. 2. https://docs.microsoft.com/en-us/aspnet/core/mvc/views/view-compilation?tabs=aspnetcore2x -- Razor precompilation works for SCDs in 2.1, and Volley uses an SCD for its Docker/Kubernetes deployment.
1.0
Start using .NET Core 2.1 sooner rather than later - 1. Test out perf improvements/build time improvements. 2. https://docs.microsoft.com/en-us/aspnet/core/mvc/views/view-compilation?tabs=aspnetcore2x -- Razor precompilation works for SCDs in 2.1, and Volley uses an SCD for its Docker/Kubernetes deployment.
process
start using net core sooner rather than later test out perf improvements build time improvements razor precompilation works for scds in and volley uses an scd for its docker kubernetes deployment
1
20,432
27,098,505,109
IssuesEvent
2023-02-15 06:20:04
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Provided links contradict documentation
automation/svc triaged cxp doc-enhancement process-automation/subsvc Pri2
The articles seem to be outdated. The "Next Step" article for Azure automation and source control provides information on how the process works in an outdated format and states that you should be able to "Check-In" your changes from the Azure portal into the GitHub repository. I cannot see any "Check-In" option in the runbooks anymore and this documentation references this as a "GitHub" > Azure Automation only now. NOT Azure Automation to GitHub. Please take down the article reference or explain how to push Azure portal (automation) changes to GitHub now. --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: 83c90e64-b615-711f-a53d-fc76606e2ecd * Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea * Content: [Use source control integration in Azure Automation](https://learn.microsoft.com/en-us/azure/automation/source-control-integration) * Content Source: [articles/automation/source-control-integration.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/source-control-integration.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SnehaSudhirG * Microsoft Alias: **sudhirsneha**
1.0
Provided links contradict documentation - The articles seem to be outdated. The "Next Step" article for Azure automation and source control provides information on how the process works in an outdated format and states that you should be able to "Check-In" your changes from the Azure portal into the GitHub repository. I cannot see any "Check-In" option in the runbooks anymore and this documentation references this as a "GitHub" > Azure Automation only now. NOT Azure Automation to GitHub. Please take down the article reference or explain how to push Azure portal (automation) changes to GitHub now. --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: 83c90e64-b615-711f-a53d-fc76606e2ecd * Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea * Content: [Use source control integration in Azure Automation](https://learn.microsoft.com/en-us/azure/automation/source-control-integration) * Content Source: [articles/automation/source-control-integration.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/source-control-integration.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SnehaSudhirG * Microsoft Alias: **sudhirsneha**
process
provided links contradict documentation the articles seem to be outdated the next step article for azure automation and source control provides information on how the process works in an outdated format and states that you should be able to check in your changes from the azure portal into the github repository i cannot see any check in option in the runbooks anymore and this documentation references this as a github azure automation only now not azure automation to github please take down the article reference or explain how to push azure portal automation changes to github now document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehasudhirg microsoft alias sudhirsneha
1
2,846
5,808,940,173
IssuesEvent
2017-05-04 12:10:28
pwittchen/swipe
https://api.github.com/repos/pwittchen/swipe
opened
Release 0.1.0
release process
**Initial release notes**: - updated project dependencies - updated Gradle configuration - updated sample apps - renamed `com.github.pwittchen.swipe.library.Swipe#addListener` to `com.github.pwittchen.swipe.library.Swipe#setListener` - ... **Things to do**: TBD.
1.0
Release 0.1.0 - **Initial release notes**: - updated project dependencies - updated Gradle configuration - updated sample apps - renamed `com.github.pwittchen.swipe.library.Swipe#addListener` to `com.github.pwittchen.swipe.library.Swipe#setListener` - ... **Things to do**: TBD.
process
release initial release notes updated project dependencies updated gradle configuration updated sample apps renamed com github pwittchen swipe library swipe addlistener to com github pwittchen swipe library swipe setlistener things to do tbd
1
22,587
31,811,967,607
IssuesEvent
2023-09-13 17:29:02
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Non-annotated classes bundled inside generated class jar
more data needed type: support / not a bug (process) team-Rules-Java
I have an annotation processor that targets a specific annotation `A`, I register this processor using `java_plugin` rule and build the target using `kt_jvm_library`. **_What I expect_**: The generated class jar file should contain only classes annotated with `A`. **_What I get_**: There are a lot of non-annotated classes bundled inside my generated class jar. I attempted to convert one of the affected classes to Kotlin code, and the class disappeared from the bundle when rebuilding. Is this the expected behaviour with Java annotation processing? And if possible, how can I unbundle the affected classes?
1.0
Non-annotated classes bundled inside generated class jar - I have an annotation processor that targets a specific annotation `A`, I register this processor using `java_plugin` rule and build the target using `kt_jvm_library`. **_What I expect_**: The generated class jar file should contain only classes annotated with `A`. **_What I get_**: There are a lot of non-annotated classes bundled inside my generated class jar. I attempted to convert one of the affected classes to Kotlin code, and the class disappeared from the bundle when rebuilding. Is this the expected behaviour with Java annotation processing? And if possible, how can I unbundle the affected classes?
process
non annotated classes bundled inside generated class jar i have an annotation processor that targets a specific annotation a i register this processor using java plugin rule and build the target using kt jvm library what i expect the generated class jar file should contain only classes annotated with a what i get there are a lot of non annotated classes bundled inside my generated class jar i attempted to convert one of the affected classes to kotlin code and the class disappeared from the bundle when rebuilding is this the expected behaviour with java annotation processing and if possible how can i unbundle the affected classes
1
13,036
15,383,807,579
IssuesEvent
2021-03-03 03:26:50
aodn/imos-toolbox
https://api.github.com/repos/aodn/imos-toolbox
opened
better handling and error reporting for external calls
Type:enhancement Unit:Processing
Some external calls sometimes hangs the toolbox because of errors. Typical example is geomag: #727, #634 Requirements: - [ ] improve error handling for geomag - [ ] improve error handling for all external commands and interfaces.
1.0
better handling and error reporting for external calls - Some external calls sometimes hangs the toolbox because of errors. Typical example is geomag: #727, #634 Requirements: - [ ] improve error handling for geomag - [ ] improve error handling for all external commands and interfaces.
process
better handling and error reporting for external calls some external calls sometimes hangs the toolbox because of errors typical example is geomag requirements improve error handling for geomag improve error handling for all external commands and interfaces
1
305,683
9,375,426,669
IssuesEvent
2019-04-04 03:59:00
wso2/product-ei
https://api.github.com/repos/wso2/product-ei
closed
[Bug] Error when trying to retrieve data in json format in dataservice
Priority/Highest Severity/Critical
In EI 6.1.1, when there is a dataservice with two operations (and also two resources), the second resource fails with the error [1] in the console when we try to call the rest endpoint of for the second resource with the header set as "Accept: application/json". This passes when the same is set for the first resource. IMO this should be a L1 bug since there is no workaround other than creating separate dataservice for the second resource. [1] - > [2017-09-14 13:58:38,778] [EI-Core] INFO - DeploymentInterceptor Removing Axis2 Service: dataservice2 {super-tenant} > [2017-09-14 13:58:38,798] [EI-Core] INFO - DeploymentInterceptor Deploying Axis2 service: dataservice2 {super-tenant} > [2017-09-14 13:58:38,798] [EI-Core] INFO - DeploymentEngine Deploying Web service: dataservice2.dbs - file:/Users/nira/wso2/qsp/inc-research/products/wso2ei-6.1.1/repository/deployment/server/dataservices/dataservice2.dbs > [2017-09-14 13:58:46,452] [EI-Core] ERROR - JsonStreamFormatter Error occurred while writing to application/json > java.lang.reflect.InvocationTargetException > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at org.wso2.carbon.integrator.core.json.JsonStreamFormatter.writeTo(JsonStreamFormatter.java:107) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.submitResponse(PassThroughHttpSender.java:573) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.invoke(PassThroughHttpSender.java:264) > at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442) > at org.apache.axis2.receivers.AbstractInOutSyncMessageReceiver.invokeBusinessLogic(AbstractInOutSyncMessageReceiver.java:45) > at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110) > at 
org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180) > at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:326) > at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158) > at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.axis2.AxisFault: Error while writing to the output stream using JsonWriter > at org.apache.axis2.json.gson.JsonFormatter.writeTo(JsonFormatter.java:104) > ... 17 more > Caused by: javax.xml.stream.XMLStreamException: DS Fault Message: Error in 'SQLQuery.processPostNormalQuery': DS Fault Message: Error in XML generation at StaticOutputElement.execute > DS Code: UNKNOWN_ERROR > Nested Exception:- > javax.xml.stream.XMLStreamException: Invalid Staring element > > DS Code: DATABASE_ERROR > Source Data Service:- > Name: dataservice2 > Location: /dataservice2.dbs > Description: N/A > Default Namespace: http://ws.wso2.org/dataservice > Current Request Name: _getall > Current Params: {} > Nested Exception:- > DS Fault Message: Error in XML generation at StaticOutputElement.execute > DS Code: UNKNOWN_ERROR > Nested Exception:- > javax.xml.stream.XMLStreamException: Invalid Staring element > > > at org.wso2.carbon.dataservices.core.engine.DSOMDataSource.execute(DSOMDataSource.java:102) > at org.wso2.carbon.dataservices.core.engine.DSOMDataSource.serialize(DSOMDataSource.java:107) > at org.apache.axiom.om.impl.llom.OMSourcedElementImpl.internalSerialize(OMSourcedElementImpl.java:691) > at org.apache.axiom.om.impl.llom.OMSourcedElementImpl.serializeAndConsume(OMSourcedElementImpl.java:754) > at org.apache.axis2.json.gson.JsonFormatter.writeTo(JsonFormatter.java:100) > ... 
17 more > [2017-09-14 13:58:46,457] [EI-Core] ERROR - PassThroughHttpSender Failed to submit the response > org.apache.axis2.AxisFault > at org.wso2.carbon.integrator.core.json.JsonStreamFormatter.writeTo(JsonStreamFormatter.java:113) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.submitResponse(PassThroughHttpSender.java:573) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.invoke(PassThroughHttpSender.java:264) > at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442) > at org.apache.axis2.receivers.AbstractInOutSyncMessageReceiver.invokeBusinessLogic(AbstractInOutSyncMessageReceiver.java:45) > at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110) > at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180) > at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:326) > at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158) > at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > [2017-09-14 13:58:46,458] [EI-Core] ERROR - ServerWorker Error processing GET request for : /services/dataservice2/all > org.apache.axis2.AxisFault: Failed to submit the response > at org.apache.synapse.transport.passthru.PassThroughHttpSender.handleException(PassThroughHttpSender.java:632) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.invoke(PassThroughHttpSender.java:266) > at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442) > at org.apache.axis2.receivers.AbstractInOutSyncMessageReceiver.invokeBusinessLogic(AbstractInOutSyncMessageReceiver.java:45) > at 
org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110) > at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180) > at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:326) > at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158) > at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.axis2.AxisFault > at org.wso2.carbon.integrator.core.json.JsonStreamFormatter.writeTo(JsonStreamFormatter.java:113) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.submitResponse(PassThroughHttpSender.java:573) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.invoke(PassThroughHttpSender.java:264) > ... 10 more
1.0
[Bug] Error when trying retrieve data in json format in dataservice - In EI 6.1.1, when there is a dataservice with two operations (and also two resources), the second resource fails with the error [1] in the console when we try to call the rest endpoint of for the second resource with the header set as "Accept: application/json". This passes when the same is set for the first resource. IMO this should be a L1 bug since there is no workaround other than creating separate dataservice for the second resource. [1] - > [2017-09-14 13:58:38,778] [EI-Core] INFO - DeploymentInterceptor Removing Axis2 Service: dataservice2 {super-tenant} > [2017-09-14 13:58:38,798] [EI-Core] INFO - DeploymentInterceptor Deploying Axis2 service: dataservice2 {super-tenant} > [2017-09-14 13:58:38,798] [EI-Core] INFO - DeploymentEngine Deploying Web service: dataservice2.dbs - file:/Users/nira/wso2/qsp/inc-research/products/wso2ei-6.1.1/repository/deployment/server/dataservices/dataservice2.dbs > [2017-09-14 13:58:46,452] [EI-Core] ERROR - JsonStreamFormatter Error occurred while writing to application/json > java.lang.reflect.InvocationTargetException > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at org.wso2.carbon.integrator.core.json.JsonStreamFormatter.writeTo(JsonStreamFormatter.java:107) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.submitResponse(PassThroughHttpSender.java:573) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.invoke(PassThroughHttpSender.java:264) > at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442) > at org.apache.axis2.receivers.AbstractInOutSyncMessageReceiver.invokeBusinessLogic(AbstractInOutSyncMessageReceiver.java:45) > at 
org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110) > at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180) > at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:326) > at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158) > at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.axis2.AxisFault: Error while writing to the output stream using JsonWriter > at org.apache.axis2.json.gson.JsonFormatter.writeTo(JsonFormatter.java:104) > ... 17 more > Caused by: javax.xml.stream.XMLStreamException: DS Fault Message: Error in 'SQLQuery.processPostNormalQuery': DS Fault Message: Error in XML generation at StaticOutputElement.execute > DS Code: UNKNOWN_ERROR > Nested Exception:- > javax.xml.stream.XMLStreamException: Invalid Staring element > > DS Code: DATABASE_ERROR > Source Data Service:- > Name: dataservice2 > Location: /dataservice2.dbs > Description: N/A > Default Namespace: http://ws.wso2.org/dataservice > Current Request Name: _getall > Current Params: {} > Nested Exception:- > DS Fault Message: Error in XML generation at StaticOutputElement.execute > DS Code: UNKNOWN_ERROR > Nested Exception:- > javax.xml.stream.XMLStreamException: Invalid Staring element > > > at org.wso2.carbon.dataservices.core.engine.DSOMDataSource.execute(DSOMDataSource.java:102) > at org.wso2.carbon.dataservices.core.engine.DSOMDataSource.serialize(DSOMDataSource.java:107) > at org.apache.axiom.om.impl.llom.OMSourcedElementImpl.internalSerialize(OMSourcedElementImpl.java:691) > at org.apache.axiom.om.impl.llom.OMSourcedElementImpl.serializeAndConsume(OMSourcedElementImpl.java:754) > at 
org.apache.axis2.json.gson.JsonFormatter.writeTo(JsonFormatter.java:100) > ... 17 more > [2017-09-14 13:58:46,457] [EI-Core] ERROR - PassThroughHttpSender Failed to submit the response > org.apache.axis2.AxisFault > at org.wso2.carbon.integrator.core.json.JsonStreamFormatter.writeTo(JsonStreamFormatter.java:113) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.submitResponse(PassThroughHttpSender.java:573) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.invoke(PassThroughHttpSender.java:264) > at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442) > at org.apache.axis2.receivers.AbstractInOutSyncMessageReceiver.invokeBusinessLogic(AbstractInOutSyncMessageReceiver.java:45) > at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110) > at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180) > at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:326) > at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158) > at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > [2017-09-14 13:58:46,458] [EI-Core] ERROR - ServerWorker Error processing GET request for : /services/dataservice2/all > org.apache.axis2.AxisFault: Failed to submit the response > at org.apache.synapse.transport.passthru.PassThroughHttpSender.handleException(PassThroughHttpSender.java:632) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.invoke(PassThroughHttpSender.java:266) > at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442) > at org.apache.axis2.receivers.AbstractInOutSyncMessageReceiver.invokeBusinessLogic(AbstractInOutSyncMessageReceiver.java:45) > at 
org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110) > at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180) > at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:326) > at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158) > at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.axis2.AxisFault > at org.wso2.carbon.integrator.core.json.JsonStreamFormatter.writeTo(JsonStreamFormatter.java:113) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.submitResponse(PassThroughHttpSender.java:573) > at org.apache.synapse.transport.passthru.PassThroughHttpSender.invoke(PassThroughHttpSender.java:264) > ... 10 more
non_process
error when trying retrieve data in json format in dataservice in ei when there is a dataservice with two operations and also two resources the second resource fails with the error in the console when we try to call the rest endpoint of for the second resource with the header set as accept application json this passes when the same is set for the first resource imo this should be a bug since there is no workaround other than creating separate dataservice for the second resource info deploymentinterceptor removing service super tenant info deploymentinterceptor deploying service super tenant info deploymentengine deploying web service dbs file users nira qsp inc research products repository deployment server dataservices dbs error jsonstreamformatter error occurred while writing to application json java lang reflect invocationtargetexception at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org carbon integrator core json jsonstreamformatter writeto jsonstreamformatter java at org apache synapse transport passthru passthroughhttpsender submitresponse passthroughhttpsender java at org apache synapse transport passthru passthroughhttpsender invoke passthroughhttpsender java at org apache engine axisengine send axisengine java at org apache receivers abstractinoutsyncmessagereceiver invokebusinesslogic abstractinoutsyncmessagereceiver java at org apache receivers abstractmessagereceiver receive abstractmessagereceiver java at org apache engine axisengine receive axisengine java at org apache synapse transport passthru serverworker processnonentityenclosingresthandler serverworker java at org apache synapse transport passthru serverworker run serverworker java at org apache transport base threads nativeworkerpool run nativeworkerpool java at java util 
concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by org apache axisfault error while writing to the output stream using jsonwriter at org apache json gson jsonformatter writeto jsonformatter java more caused by javax xml stream xmlstreamexception ds fault message error in sqlquery processpostnormalquery ds fault message error in xml generation at staticoutputelement execute ds code unknown error nested exception javax xml stream xmlstreamexception invalid staring element ds code database error source data service name location dbs description n a default namespace current request name getall current params nested exception ds fault message error in xml generation at staticoutputelement execute ds code unknown error nested exception javax xml stream xmlstreamexception invalid staring element at org carbon dataservices core engine dsomdatasource execute dsomdatasource java at org carbon dataservices core engine dsomdatasource serialize dsomdatasource java at org apache axiom om impl llom omsourcedelementimpl internalserialize omsourcedelementimpl java at org apache axiom om impl llom omsourcedelementimpl serializeandconsume omsourcedelementimpl java at org apache json gson jsonformatter writeto jsonformatter java more error passthroughhttpsender failed to submit the response org apache axisfault at org carbon integrator core json jsonstreamformatter writeto jsonstreamformatter java at org apache synapse transport passthru passthroughhttpsender submitresponse passthroughhttpsender java at org apache synapse transport passthru passthroughhttpsender invoke passthroughhttpsender java at org apache engine axisengine send axisengine java at org apache receivers abstractinoutsyncmessagereceiver invokebusinesslogic abstractinoutsyncmessagereceiver java at org apache receivers abstractmessagereceiver receive abstractmessagereceiver java at 
org apache engine axisengine receive axisengine java at org apache synapse transport passthru serverworker processnonentityenclosingresthandler serverworker java at org apache synapse transport passthru serverworker run serverworker java at org apache transport base threads nativeworkerpool run nativeworkerpool java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java error serverworker error processing get request for services all org apache axisfault failed to submit the response at org apache synapse transport passthru passthroughhttpsender handleexception passthroughhttpsender java at org apache synapse transport passthru passthroughhttpsender invoke passthroughhttpsender java at org apache engine axisengine send axisengine java at org apache receivers abstractinoutsyncmessagereceiver invokebusinesslogic abstractinoutsyncmessagereceiver java at org apache receivers abstractmessagereceiver receive abstractmessagereceiver java at org apache engine axisengine receive axisengine java at org apache synapse transport passthru serverworker processnonentityenclosingresthandler serverworker java at org apache synapse transport passthru serverworker run serverworker java at org apache transport base threads nativeworkerpool run nativeworkerpool java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by org apache axisfault at org carbon integrator core json jsonstreamformatter writeto jsonstreamformatter java at org apache synapse transport passthru passthroughhttpsender submitresponse passthroughhttpsender java at org apache synapse transport passthru passthroughhttpsender invoke passthroughhttpsender java more
0
61,382
7,463,624,640
IssuesEvent
2018-04-01 08:14:31
nawissor/stomer
https://api.github.com/repos/nawissor/stomer
closed
prevent ad click from jumping
Design Passed
prevent ad click from jumping to the top when a user clicks on it
1.0
prevent ad click from jumping - prevent ad click from jumping to the top when a user clicks on it
non_process
prevent ad click from jumping prevent ad click from jumping to the top when a user clicks on it
0
278,310
24,144,711,551
IssuesEvent
2022-09-21 17:39:57
redpanda-data/redpanda
https://api.github.com/repos/redpanda-data/redpanda
closed
Failure in `kuttl/harness/centralized-configuration-tls`
area/k8s area/tests ci-failure ci-disabled-test
This test was added in https://github.com/redpanda-data/redpanda/pull/3978 Seen failing here: https://buildkite.com/redpanda/redpanda/builds/8462#72476e76-0765-4438-a307-78735d028fc5 ``` case.go:363: resource Pod:kuttl-test-pleasant-pigeon/: .status.containerStatuses.state.terminated.message: value mismatch, expected: 1003 != actual: 1000 ```
2.0
Failure in `kuttl/harness/centralized-configuration-tls` - This test was added in https://github.com/redpanda-data/redpanda/pull/3978 Seen failing here: https://buildkite.com/redpanda/redpanda/builds/8462#72476e76-0765-4438-a307-78735d028fc5 ``` case.go:363: resource Pod:kuttl-test-pleasant-pigeon/: .status.containerStatuses.state.terminated.message: value mismatch, expected: 1003 != actual: 1000 ```
non_process
failure in kuttl harness centralized configuration tls this test was added in seen failing here case go resource pod kuttl test pleasant pigeon status containerstatuses state terminated message value mismatch expected actual
0
11,022
13,806,965,668
IssuesEvent
2020-10-11 19:55:07
km4ack/pi-build
https://api.github.com/repos/km4ack/pi-build
closed
Update README.md
enhancement in process
Take out the part that says it should only be installed on a clean copy of Buster.
1.0
Update README.md - Take out the part that says it should only be installed on a clean copy of Buster.
process
update readme md take out the part that says it should only be installed on a clean copy of buster
1
6,486
9,555,402,767
IssuesEvent
2019-05-03 03:04:50
intel-ros/realsense
https://api.github.com/repos/intel-ros/realsense
closed
D415 pointcloud is noisy in rviz
calibration post-processing
Hi, I'm using d415. ``` firmware:5_10_3_0 librealsense: Intel® RealSense™ SDK 2.0 (build 2.16.0) ``` Below warning shows, ``` $ roslaunch realsense2_camera rs_rgbd.launch ... logging to /home/seed/.ros/log/cc11cb64-c0a1-11e8-84cb-a434d993f1c0/roslaunch-seed-ThinkPad-T460s-31679.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://seed-ThinkPad-T460s:41421/ SUMMARY ======== PARAMETERS * /camera/realsense2_camera/accel_fps: 1000 * /camera/realsense2_camera/accel_optical_frame_id: camera_accel_opti... * /camera/realsense2_camera/align_depth: True * /camera/realsense2_camera/aligned_depth_to_color_frame_id: camera_aligned_de... * /camera/realsense2_camera/aligned_depth_to_fisheye_frame_id: camera_aligned_de... * /camera/realsense2_camera/aligned_depth_to_infra1_frame_id: camera_aligned_de... * /camera/realsense2_camera/aligned_depth_to_infra2_frame_id: camera_aligned_de... * /camera/realsense2_camera/color_fps: 30 * /camera/realsense2_camera/color_height: 480 * /camera/realsense2_camera/color_optical_frame_id: camera_color_opti... * /camera/realsense2_camera/color_width: 640 * /camera/realsense2_camera/depth_fps: 30 * /camera/realsense2_camera/depth_height: 480 * /camera/realsense2_camera/depth_optical_frame_id: camera_depth_opti... * /camera/realsense2_camera/depth_width: 640 * /camera/realsense2_camera/enable_color: True * /camera/realsense2_camera/enable_depth: True * /camera/realsense2_camera/enable_fisheye: True * /camera/realsense2_camera/enable_imu: True * /camera/realsense2_camera/enable_infra1: True * /camera/realsense2_camera/enable_infra2: True * /camera/realsense2_camera/enable_pointcloud: False * /camera/realsense2_camera/enable_sync: True * /camera/realsense2_camera/fisheye_fps: 30 * /camera/realsense2_camera/fisheye_height: 480 * /camera/realsense2_camera/fisheye_optical_frame_id: camera_fisheye_op... 
* /camera/realsense2_camera/fisheye_width: 640 * /camera/realsense2_camera/gyro_fps: 1000 * /camera/realsense2_camera/gyro_optical_frame_id: camera_gyro_optic... * /camera/realsense2_camera/infra1_fps: 30 * /camera/realsense2_camera/infra1_height: 480 * /camera/realsense2_camera/infra1_optical_frame_id: camera_infra1_opt... * /camera/realsense2_camera/infra1_width: 640 * /camera/realsense2_camera/infra2_fps: 30 * /camera/realsense2_camera/infra2_height: 480 * /camera/realsense2_camera/infra2_optical_frame_id: camera_infra2_opt... * /camera/realsense2_camera/infra2_width: 640 * /camera/realsense2_camera/json_file_path: * /camera/realsense2_camera/rosbag_filename: * /camera/realsense2_camera/serial_no: * /rosdistro: kinetic * /rosversion: 1.12.14 NODES /camera/ color_rectify_color (nodelet/nodelet) points_xyzrgb_hw_registered (nodelet/nodelet) realsense2_camera (nodelet/nodelet) realsense2_camera_manager (nodelet/nodelet) auto-starting new master process[master]: started with pid [31689] ROS_MASTER_URI=http://localhost:11311 setting /run_id to cc11cb64-c0a1-11e8-84cb-a434d993f1c0 process[rosout-1]: started with pid [31703] started core service [/rosout] process[camera/realsense2_camera_manager-2]: started with pid [31720] process[camera/realsense2_camera-3]: started with pid [31721] process[camera/color_rectify_color-4]: started with pid [31722] process[camera/points_xyzrgb_hw_registered-5]: started with pid [31730] [ INFO] [1537866178.895956981]: Initializing nodelet with 4 worker threads. [ INFO] [1537866178.994602165]: RealSense ROS v2.0.3 [ INFO] [1537866178.994634329]: Running with LibRealSense v2.16.0 [ INFO] [1537866179.297975607]: getParameters... [ INFO] [1537866179.358371948]: setupDevice... 
[ INFO] [1537866179.358442328]: JSON file is not provided [ INFO] [1537866179.358474861]: ROS Node Namespace: camera [ INFO] [1537866179.358513235]: Device Name: Intel RealSense D415 [ INFO] [1537866179.358534864]: Device Serial No: 816312060493 [ INFO] [1537866179.358559419]: Device FW version: 05.10.03.00 [ INFO] [1537866179.358582056]: Device Product ID: 0x0AD3 [ INFO] [1537866179.358609188]: Enable PointCloud: Off [ INFO] [1537866179.358637472]: Align Depth: On [ INFO] [1537866179.358658196]: Sync Mode: On [ INFO] [1537866179.358970824]: Device Sensors: [ INFO] [1537866179.359174501]: Stereo Module was found. [ INFO] [1537866179.359223837]: RGB Camera was found. [ INFO] [1537866179.359450585]: (Fisheye, 0) sensor isn't supported by current device! -- Skipping... [ INFO] [1537866179.359481659]: (Gyro, 0) sensor isn't supported by current device! -- Skipping... [ INFO] [1537866179.359509306]: (Accel, 0) sensor isn't supported by current device! -- Skipping... [ INFO] [1537866179.359564560]: setupPublishers... [ INFO] [1537866179.363307788]: Expected frequency for depth = 30.00000 [ INFO] [1537866179.395862362]: Expected frequency for infra1 = 30.00000 [ INFO] [1537866179.421994418]: Expected frequency for aligned_depth_to_infra1 = 30.00000 [ INFO] [1537866179.446018089]: Expected frequency for infra2 = 30.00000 [ INFO] [1537866179.471695361]: Expected frequency for aligned_depth_to_infra2 = 30.00000 [ INFO] [1537866179.496306551]: Expected frequency for color = 30.00000 [ INFO] [1537866179.522022056]: Expected frequency for aligned_depth_to_color = 30.00000 [ INFO] [1537866179.551641932]: setupStreams... 
[ INFO] [1537866179.561624840]: depth stream is enabled - width: 640, height: 480, fps: 30 [ INFO] [1537866179.562019535]: infra1 stream is enabled - width: 640, height: 480, fps: 30 [ INFO] [1537866179.562349605]: infra2 stream is enabled - width: 640, height: 480, fps: 30 25/09 18:02:59,564 WARNING [140294086170496] (backend-v4l2.cpp:1248) Pixel format 36315752-1a66-a242-9065-d01814a likely requires patch for fourcc code RW16! 25/09 18:02:59,565 WARNING [140294086170496] (sensor.cpp:338) Unregistered Media formats : [ RW16 ]; Supported: [ ] [ INFO] [1537866179.573798384]: color stream is enabled - width: 640, height: 480, fps: 30 25/09 18:02:59,589 ERROR [140293388445440] (backend-v4l2.cpp:1002) Video frame arrived in idle mode. 25/09 18:02:59,589 ERROR [140293388445440] (backend-v4l2.cpp:1002) Video frame arrived in idle mode. [ INFO] [1537866179.597572517]: publishStaticTransforms... [ INFO] [1537866179.598323409]: RealSense Node Is Up! 25/09 18:02:59,786 WARNING [140293158708992] (ds5-timestamp.cpp:64) UVC metadata payloads not available. Please refer to installation chapter for details. 25/09 18:02:59,807 WARNING [140293371086592] (ds5-timestamp.cpp:64) UVC metadata payloads not available. Please refer to installation chapter for details. [ WARN] [1537866179.817030484]: Frame metadata isn't available! (frame_timestamp_domain = RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME) 25/09 18:03:04,674 WARNING [140294086170496] (types.cpp:57) get_xu(id=11) failed! Last Error: Input/output error [ WARN] [1537866184.675410338]: Reconfigure callback failed with exception get_xu(id=11) failed! Last Error: Input/output error: [ WARN] [1537866370.065060462]: Messages of type 2 arrived out of order (will print only once) [ WARN] [1537866370.065684758]: Messages of type 1 arrived out of order (will print only once) ``` and in rviz, pointcloud is noisy as below. 
![screenshot from 2018-09-25 18-06-45](https://user-images.githubusercontent.com/25475343/46004949-1a8d8880-c0ef-11e8-8234-4f49890b4775.png) I have searched many similar issues and tried various measures to solve the problem, but have not solved it yet. Could you tell me how to solve this problem?
1.0
D415 pointcloud is noisy in rviz - Hi, I'm using d415. ``` firmware:5_10_3_0 librealsense: Intel® RealSense™ SDK 2.0 (build 2.16.0) ``` Below warning shows, ``` $ roslaunch realsense2_camera rs_rgbd.launch ... logging to /home/seed/.ros/log/cc11cb64-c0a1-11e8-84cb-a434d993f1c0/roslaunch-seed-ThinkPad-T460s-31679.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://seed-ThinkPad-T460s:41421/ SUMMARY ======== PARAMETERS * /camera/realsense2_camera/accel_fps: 1000 * /camera/realsense2_camera/accel_optical_frame_id: camera_accel_opti... * /camera/realsense2_camera/align_depth: True * /camera/realsense2_camera/aligned_depth_to_color_frame_id: camera_aligned_de... * /camera/realsense2_camera/aligned_depth_to_fisheye_frame_id: camera_aligned_de... * /camera/realsense2_camera/aligned_depth_to_infra1_frame_id: camera_aligned_de... * /camera/realsense2_camera/aligned_depth_to_infra2_frame_id: camera_aligned_de... * /camera/realsense2_camera/color_fps: 30 * /camera/realsense2_camera/color_height: 480 * /camera/realsense2_camera/color_optical_frame_id: camera_color_opti... * /camera/realsense2_camera/color_width: 640 * /camera/realsense2_camera/depth_fps: 30 * /camera/realsense2_camera/depth_height: 480 * /camera/realsense2_camera/depth_optical_frame_id: camera_depth_opti... * /camera/realsense2_camera/depth_width: 640 * /camera/realsense2_camera/enable_color: True * /camera/realsense2_camera/enable_depth: True * /camera/realsense2_camera/enable_fisheye: True * /camera/realsense2_camera/enable_imu: True * /camera/realsense2_camera/enable_infra1: True * /camera/realsense2_camera/enable_infra2: True * /camera/realsense2_camera/enable_pointcloud: False * /camera/realsense2_camera/enable_sync: True * /camera/realsense2_camera/fisheye_fps: 30 * /camera/realsense2_camera/fisheye_height: 480 * /camera/realsense2_camera/fisheye_optical_frame_id: camera_fisheye_op... 
* /camera/realsense2_camera/fisheye_width: 640 * /camera/realsense2_camera/gyro_fps: 1000 * /camera/realsense2_camera/gyro_optical_frame_id: camera_gyro_optic... * /camera/realsense2_camera/infra1_fps: 30 * /camera/realsense2_camera/infra1_height: 480 * /camera/realsense2_camera/infra1_optical_frame_id: camera_infra1_opt... * /camera/realsense2_camera/infra1_width: 640 * /camera/realsense2_camera/infra2_fps: 30 * /camera/realsense2_camera/infra2_height: 480 * /camera/realsense2_camera/infra2_optical_frame_id: camera_infra2_opt... * /camera/realsense2_camera/infra2_width: 640 * /camera/realsense2_camera/json_file_path: * /camera/realsense2_camera/rosbag_filename: * /camera/realsense2_camera/serial_no: * /rosdistro: kinetic * /rosversion: 1.12.14 NODES /camera/ color_rectify_color (nodelet/nodelet) points_xyzrgb_hw_registered (nodelet/nodelet) realsense2_camera (nodelet/nodelet) realsense2_camera_manager (nodelet/nodelet) auto-starting new master process[master]: started with pid [31689] ROS_MASTER_URI=http://localhost:11311 setting /run_id to cc11cb64-c0a1-11e8-84cb-a434d993f1c0 process[rosout-1]: started with pid [31703] started core service [/rosout] process[camera/realsense2_camera_manager-2]: started with pid [31720] process[camera/realsense2_camera-3]: started with pid [31721] process[camera/color_rectify_color-4]: started with pid [31722] process[camera/points_xyzrgb_hw_registered-5]: started with pid [31730] [ INFO] [1537866178.895956981]: Initializing nodelet with 4 worker threads. [ INFO] [1537866178.994602165]: RealSense ROS v2.0.3 [ INFO] [1537866178.994634329]: Running with LibRealSense v2.16.0 [ INFO] [1537866179.297975607]: getParameters... [ INFO] [1537866179.358371948]: setupDevice... 
[ INFO] [1537866179.358442328]: JSON file is not provided [ INFO] [1537866179.358474861]: ROS Node Namespace: camera [ INFO] [1537866179.358513235]: Device Name: Intel RealSense D415 [ INFO] [1537866179.358534864]: Device Serial No: 816312060493 [ INFO] [1537866179.358559419]: Device FW version: 05.10.03.00 [ INFO] [1537866179.358582056]: Device Product ID: 0x0AD3 [ INFO] [1537866179.358609188]: Enable PointCloud: Off [ INFO] [1537866179.358637472]: Align Depth: On [ INFO] [1537866179.358658196]: Sync Mode: On [ INFO] [1537866179.358970824]: Device Sensors: [ INFO] [1537866179.359174501]: Stereo Module was found. [ INFO] [1537866179.359223837]: RGB Camera was found. [ INFO] [1537866179.359450585]: (Fisheye, 0) sensor isn't supported by current device! -- Skipping... [ INFO] [1537866179.359481659]: (Gyro, 0) sensor isn't supported by current device! -- Skipping... [ INFO] [1537866179.359509306]: (Accel, 0) sensor isn't supported by current device! -- Skipping... [ INFO] [1537866179.359564560]: setupPublishers... [ INFO] [1537866179.363307788]: Expected frequency for depth = 30.00000 [ INFO] [1537866179.395862362]: Expected frequency for infra1 = 30.00000 [ INFO] [1537866179.421994418]: Expected frequency for aligned_depth_to_infra1 = 30.00000 [ INFO] [1537866179.446018089]: Expected frequency for infra2 = 30.00000 [ INFO] [1537866179.471695361]: Expected frequency for aligned_depth_to_infra2 = 30.00000 [ INFO] [1537866179.496306551]: Expected frequency for color = 30.00000 [ INFO] [1537866179.522022056]: Expected frequency for aligned_depth_to_color = 30.00000 [ INFO] [1537866179.551641932]: setupStreams... 
[ INFO] [1537866179.561624840]: depth stream is enabled - width: 640, height: 480, fps: 30 [ INFO] [1537866179.562019535]: infra1 stream is enabled - width: 640, height: 480, fps: 30 [ INFO] [1537866179.562349605]: infra2 stream is enabled - width: 640, height: 480, fps: 30 25/09 18:02:59,564 WARNING [140294086170496] (backend-v4l2.cpp:1248) Pixel format 36315752-1a66-a242-9065-d01814a likely requires patch for fourcc code RW16! 25/09 18:02:59,565 WARNING [140294086170496] (sensor.cpp:338) Unregistered Media formats : [ RW16 ]; Supported: [ ] [ INFO] [1537866179.573798384]: color stream is enabled - width: 640, height: 480, fps: 30 25/09 18:02:59,589 ERROR [140293388445440] (backend-v4l2.cpp:1002) Video frame arrived in idle mode. 25/09 18:02:59,589 ERROR [140293388445440] (backend-v4l2.cpp:1002) Video frame arrived in idle mode. [ INFO] [1537866179.597572517]: publishStaticTransforms... [ INFO] [1537866179.598323409]: RealSense Node Is Up! 25/09 18:02:59,786 WARNING [140293158708992] (ds5-timestamp.cpp:64) UVC metadata payloads not available. Please refer to installation chapter for details. 25/09 18:02:59,807 WARNING [140293371086592] (ds5-timestamp.cpp:64) UVC metadata payloads not available. Please refer to installation chapter for details. [ WARN] [1537866179.817030484]: Frame metadata isn't available! (frame_timestamp_domain = RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME) 25/09 18:03:04,674 WARNING [140294086170496] (types.cpp:57) get_xu(id=11) failed! Last Error: Input/output error [ WARN] [1537866184.675410338]: Reconfigure callback failed with exception get_xu(id=11) failed! Last Error: Input/output error: [ WARN] [1537866370.065060462]: Messages of type 2 arrived out of order (will print only once) [ WARN] [1537866370.065684758]: Messages of type 1 arrived out of order (will print only once) ``` and in rviz, pointcloud is noisy as below. 
![screenshot from 2018-09-25 18-06-45](https://user-images.githubusercontent.com/25475343/46004949-1a8d8880-c0ef-11e8-8234-4f49890b4775.png) I have searched many similar issues and tried measurement to solve the problem. But have not solved yet. Could you tell me how to solve this problem?
process
pointcloud is noisy in rviz hi i m using firmware librealsense intel® realsense™ sdk build below warning shows roslaunch camera rs rgbd launch logging to home seed ros log roslaunch seed thinkpad log checking log directory for disk usage this may take awhile press ctrl c to interrupt done checking log file disk usage usage is started roslaunch server summary parameters camera camera accel fps camera camera accel optical frame id camera accel opti camera camera align depth true camera camera aligned depth to color frame id camera aligned de camera camera aligned depth to fisheye frame id camera aligned de camera camera aligned depth to frame id camera aligned de camera camera aligned depth to frame id camera aligned de camera camera color fps camera camera color height camera camera color optical frame id camera color opti camera camera color width camera camera depth fps camera camera depth height camera camera depth optical frame id camera depth opti camera camera depth width camera camera enable color true camera camera enable depth true camera camera enable fisheye true camera camera enable imu true camera camera enable true camera camera enable true camera camera enable pointcloud false camera camera enable sync true camera camera fisheye fps camera camera fisheye height camera camera fisheye optical frame id camera fisheye op camera camera fisheye width camera camera gyro fps camera camera gyro optical frame id camera gyro optic camera camera fps camera camera height camera camera optical frame id camera opt camera camera width camera camera fps camera camera height camera camera optical frame id camera opt camera camera width camera camera json file path camera camera rosbag filename camera camera serial no rosdistro kinetic rosversion nodes camera color rectify color nodelet nodelet points xyzrgb hw registered nodelet nodelet camera nodelet nodelet camera manager nodelet nodelet auto starting new master process started with pid ros master uri setting run id 
to process started with pid started core service process started with pid process started with pid process started with pid process started with pid initializing nodelet with worker threads realsense ros running with librealsense getparameters setupdevice json file is not provided ros node namespace camera device name intel realsense device serial no device fw version device product id enable pointcloud off align depth on sync mode on device sensors stereo module was found rgb camera was found fisheye sensor isn t supported by current device skipping gyro sensor isn t supported by current device skipping accel sensor isn t supported by current device skipping setuppublishers expected frequency for depth expected frequency for expected frequency for aligned depth to expected frequency for expected frequency for aligned depth to expected frequency for color expected frequency for aligned depth to color setupstreams depth stream is enabled width height fps stream is enabled width height fps stream is enabled width height fps warning backend cpp pixel format likely requires patch for fourcc code warning sensor cpp unregistered media formats supported color stream is enabled width height fps error backend cpp video frame arrived in idle mode error backend cpp video frame arrived in idle mode publishstatictransforms realsense node is up warning timestamp cpp uvc metadata payloads not available please refer to installation chapter for details warning timestamp cpp uvc metadata payloads not available please refer to installation chapter for details frame metadata isn t available frame timestamp domain timestamp domain system time warning types cpp get xu id failed last error input output error reconfigure callback failed with exception get xu id failed last error input output error messages of type arrived out of order will print only once messages of type arrived out of order will print only once and in rviz pointcloud is noisy as below i have searched many similar issues 
and tried measurement to solve the problem but have not solved yet could you tell me how to solve this problem
1
8,687
2,611,535,966
IssuesEvent
2015-02-27 06:05:51
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
Keyboard Layout
auto-migrated Priority-Medium Type-Defect
``` IF i have russian keyboard layout turned on my OSX 10.7.5 some buttons in game e.g. P,T etc doesnt work. ``` Original issue reported on code.google.com by `maxis...@gmail.com` on 18 Jan 2014 at 10:08 * Merged into: #192
1.0
Keyboard Layout - ``` IF i have russian keyboard layout turned on my OSX 10.7.5 some buttons in game e.g. P,T etc doesnt work. ``` Original issue reported on code.google.com by `maxis...@gmail.com` on 18 Jan 2014 at 10:08 * Merged into: #192
non_process
keyboard layout if i have russian keyboard layout turned on my osx some buttons in game e g p t etc doesnt work original issue reported on code google com by maxis gmail com on jan at merged into
0
32,401
6,772,685,431
IssuesEvent
2017-10-27 00:29:21
CocoaPods/CocoaPods
https://api.github.com/repos/CocoaPods/CocoaPods
opened
Previously installed pods for a smaller subset of platforms can be incomplete on incremental installation
s2:confirmed t2:defect
# Report ## What did you do? This is not at all specific to the involved pod, but this allows to reproduce this issue. It must be a podspec which is on multiple platforms on which it uses different proper subsets of source files. 1. Assume initially a Podfile as: ```ruby target "TargetA" do platform :ios, '10.0' pod "Fabric" end ``` 2. Run `pod install` 3. Add a target on a different platform using the same pod: ```diff target "TargetA" do platform :ios, '10.0' pod "Fabric" end +target "TargetB" do + platform :macos, '10.13' + pod "Fabric" +end ``` ## What did you expect to happen? Install all pod dependencies correctly. ## What happened instead? Pod Fabric is missing the platform-specific sources for macOS. ## Workaround Nuke the Pods directory and run `pod install` again. ## Why does that go wrong? This happens because the `PodDirCleaner` has cleaned the pod previously based on just including source files belonging to the selected subset of platforms, here iOS. When we add macOS as a platform, the `SandboxAnalyzer` doesn't know for what platforms this pod was installed before as this information is not present in the sandbox manifest, so that it cannot know that the pod has to be flagged as changed. So the pod will be categorized as unchanged and the installer will skip any further actions assuming there is nothing further to do. The cache which runs the `PodDirCleaner` as well is not affected by this. ## Proposed Solutions ### A: Clean more forgiving Don't clean files which are specified for platforms which are unused. That would require that the `PodDirCleaner` is initialized with the root spec for all platforms. ❎ This can blow up the size of the installation for certain cases. ### B: Add the platform to the sandbox manifest Store together with each pod in the sandbox manifest for what platform a pod was used when it was installed. That would allow the `SandboxAnalyzer` to flag the pod as changed and the installer would put the files correctly in place again. 
❎ This requires changes to Core, migration to a new version etc.
1.0
Previously installed pods for a smaller subset of platforms can be incomplete on incremental installation - # Report ## What did you do? This is not at all specific to the involved pod, but this allows to reproduce this issue. It must be a podspec which is on multiple platforms on which it uses different proper subsets of source files. 1. Assume initially a Podfile as: ```ruby target "TargetA" do platform :ios, '10.0' pod "Fabric" end ``` 2. Run `pod install` 3. Add a target on a different platform using the same pod: ```diff target "TargetA" do platform :ios, '10.0' pod "Fabric" end +target "TargetB" do + platform :macos, '10.13' + pod "Fabric" +end ``` ## What did you expect to happen? Install all pod dependencies correctly. ## What happened instead? Pod Fabric is missing the platform-specific sources for macOS. ## Workaround Nuke the Pods directory and run `pod install` again. ## Why does that go wrong? This happens because the `PodDirCleaner` has cleaned the pod previously based on just including source files belonging to the selected subset of platforms, here iOS. When we add macOS as a platform, the `SandboxAnalyzer` doesn't know for what platforms this pod was installed before as this information is not present in the sandbox manifest, so that it cannot know that the pod has to be flagged as changed. So the pod will be categorized as unchanged and the installer will skip any further actions assuming there is nothing further to do. The cache which runs the `PodDirCleaner` as well is not affected by this. ## Proposed Solutions ### A: Clean more forgiving Don't clean files which are specified for platforms which are unused. That would require that the `PodDirCleaner` is initialized with the root spec for all platforms. ❎ This can blow up the size of the installation for certain cases. ### B: Add the platform to the sandbox manifest Store together with each pod in the sandbox manifest for what platform a pod was used when it was installed. 
That would allow the `SandboxAnalyzer` to flag the pod as changed and the installer would put the files correctly in place again. ❎ This requires changes to Core, migration to a new version etc.
non_process
previously installed pods for a smaller subset of platforms can be incomplete on incremental installation report what did you do this is not at all specific to the involved pod but this allows to reproduce this issue it must be a podspec which is on multiple platforms on which it uses different proper subsets of source files assume initially a podfile as ruby target targeta do platform ios pod fabric end run pod install add a target on a different platform using the same pod diff target targeta do platform ios pod fabric end target targetb do platform macos pod fabric end what did you expect to happen install all pod dependencies correctly what happened instead pod fabric is missing the platform specific sources for macos workaround nuke the pods directory and run pod install again why does that go wrong this happens because the poddircleaner has cleaned the pod previously based on just including source files belonging to the selected subset of platforms here ios when we add macos as a platform the sandboxanalyzer doesn t know for what platforms this pod was installed before as this information is not present in the sandbox manifest so that it cannot know that the pod has to be flagged as changed so the pod will be categorized as unchanged and the installer will skip any further actions assuming there is nothing further to do the cache which runs the poddircleaner as well is not affected by this proposed solutions a clean more forgiving don t clean files which are specified for platforms which are unused that would require that the poddircleaner is initialized with the root spec for all platforms ❎ this can blow up the size of the installation for certain cases b add the platform to the sandbox manifest store together with each pod in the sandbox manifest for what platform a pod was used when it was installed that would allow the sandboxanalyzer to flag the pod as changed and the installer would put the files correctly in place again ❎ this requires changes to core 
migration to a new version etc
0
6,733
23,812,105,332
IssuesEvent
2022-09-04 22:40:50
smcnab1/op-question-mark
https://api.github.com/repos/smcnab1/op-question-mark
closed
[BUG] Node-Red Rebuild
Status: Confirmed Type: Bug Priority: Critical For: Automations
Node-Red Flows missing. Find and re-build https://github.com/dortamur/ha-node-red-flows **Flows to Create** - [x] Hall Lighting - [x] Bathroom Lighting - [x] Living Room Lighting - [x] Bedroom Lighting - [x] Bed Sensor Activation - [x] Office Lighting (XBOX, Meeting) - [x] Actionable Notifications
1.0
[BUG] Node-Red Rebuild - Node-Red Flows missing. Find and re-build https://github.com/dortamur/ha-node-red-flows **Flows to Create** - [x] Hall Lighting - [x] Bathroom Lighting - [x] Living Room Lighting - [x] Bedroom Lighting - [x] Bed Sensor Activation - [x] Office Lighting (XBOX, Meeting) - [x] Actionable Notifications
non_process
node red rebuild node red flows missing find and re build flows to create hall lighting bathroom lighting living room lighting bedroom lighting bed sensor activation office lighting xbox meeting actionable notifications
0
13,685
16,442,798,426
IssuesEvent
2021-05-20 16:02:02
googleapis/repo-automation-bots
https://api.github.com/repos/googleapis/repo-automation-bots
opened
infra: bot template/config for running on Cloud Run
type: process
In the future, we will want extra dependencies available to our bot. We should still be able to run our probot-based bots easily with our credentials and tasks shims in `gcf-utils` within a Docker container on Cloud Run.
1.0
infra: bot template/config for running on Cloud Run - In the future, we will want extra dependencies available to our bot. We should still be able to run our probot-based bots easily with our credentials and tasks shims in `gcf-utils` within a Docker container on Cloud Run.
process
infra bot template config for running on cloud run in the future we will want extra dependencies available to our bot we should still be able to run our probot based bots easily with our credentials and tasks shims in gcf utils within a docker container on cloud run
1