| Unnamed: 0<br>int64<br>0–832k | id<br>float64<br>2.49B–32.1B | type<br>stringclasses<br>1 value | created_at<br>stringlengths<br>19 | repo<br>stringlengths<br>7–112 | repo_url<br>stringlengths<br>36–141 | action<br>stringclasses<br>3 values | title<br>stringlengths<br>1–744 | labels<br>stringlengths<br>4–574 | body<br>stringlengths<br>9–211k | index<br>stringclasses<br>10 values | text_combine<br>stringlengths<br>96–211k | label<br>stringclasses<br>2 values | text<br>stringlengths<br>96–188k | binary_label<br>int64<br>0–1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
58,499
| 14,405,057,309
|
IssuesEvent
|
2020-12-03 18:08:25
|
pravega/pravega
|
https://api.github.com/repos/pravega/pravega
|
opened
|
Fix resource leaks (in tests)
|
area/build area/testing kind/bug kind/enhancement version/0.9.0
|
**Problem description**
1. Executors that are not shut down
2. Metrics providers that are started but never shut down.
3. Others.
**Problem location**
Codebase.
**Suggestions for an improvement**
1. Ensure that all executors are created using `ExecutorServiceHelpers`. Define a leak-detection level in this class. If enabled, record where each executor is created and use the `finalize` method to track leaks.
2. Search & destroy.
3. TBD
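The Pravega codebase is Java, but suggestion 1 can be sketched in Python as a hypothetical analog (this is not Pravega's actual `ExecutorServiceHelpers`): a factory records the creation site of every executor and registers a finalizer that flags any executor garbage-collected without having been shut down.

```python
import gc
import traceback
import weakref
from concurrent.futures import ThreadPoolExecutor

leaked = []  # creation tracebacks of executors collected without shutdown()

def create_executor(max_workers=2):
    """Create a ThreadPoolExecutor and remember where it was created."""
    created_at = "".join(traceback.format_stack(limit=3))
    ex = ThreadPoolExecutor(max_workers=max_workers)
    state = {"shut_down": False}

    orig_shutdown = ex.shutdown
    def shutdown(wait=True, **kw):
        state["shut_down"] = True
        return orig_shutdown(wait=wait, **kw)
    ex.shutdown = shutdown  # instance attribute shadows the method

    # Runs when the executor is garbage-collected; records a leak
    # if shutdown() was never called on it.
    weakref.finalize(
        ex, lambda: state["shut_down"] or leaked.append(created_at)
    )
    return ex

good = create_executor()
good.shutdown()

bad = create_executor()
del bad          # dropped without shutdown
gc.collect()     # collect the reference cycle so the finalizer fires
print(len(leaked))  # 1
```

The same idea maps onto Java's `finalize` (or, on modern JVMs, a `Cleaner`): the factory is the single choke point, so enabling leak detection there covers every executor in the codebase.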
|
1.0
|
Fix resource leaks (in tests) - **Problem description**
1. Executors that are not shut down
2. Metrics providers that are started but never shut down.
3. Others.
**Problem location**
Codebase.
**Suggestions for an improvement**
1. Ensure that all executors are created using `ExecutorServiceHelpers`. Define a leak-detection level in this class. If enabled, record where each executor is created and use the `finalize` method to track leaks.
2. Search & destroy.
3. TBD
|
non_process
|
fix resource leaks in tests problem description executors that are not shut down metrics providers that are started but never shut down others problem location codebase suggestions for an improvement ensure that all executors are created using executorservicehelpers define a leak detection level in this class if enabled record where executor is created from and use the finalize method to track leaks search destroy tbd
| 0
|
90,659
| 26,162,176,037
|
IssuesEvent
|
2022-12-31 18:33:45
|
sandboxie-plus/Sandboxie
|
https://api.github.com/repos/sandboxie-plus/Sandboxie
|
closed
|
[Plus] Opt in switch for feature #2531 to prevent data loss
|
Feature request added in next build
|
### Is your feature request related to a problem or use case?
The feature at #2531 can result in data loss if there is no switch to control whether the user wants it or not.
There are use cases, enabled by AutoDelete's postpone/cancel option, for keeping the files as they are until the next time it is triggered. Since AutoDelete only triggers after a box session ends, the new feature should be opt-in, as it can easily cause data loss for users who are experienced with Sandboxie but unaware of the added feature.
Even configuring a non-empty box for AutoDelete without yet deleting or recovering its contents would result in losing them, given how AutoDelete triggers (not starting a box session poses no risk to its contents).
### Describe the solution you'd like
>title
### Describe alternatives you've considered
switch to "manual" box cleaning
|
1.0
|
[Plus] Opt in switch for feature #2531 to prevent data loss - ### Is your feature request related to a problem or use case?
The feature at #2531 can result in data loss if there is no switch to control whether the user wants it or not.
There are use cases, enabled by AutoDelete's postpone/cancel option, for keeping the files as they are until the next time it is triggered. Since AutoDelete only triggers after a box session ends, the new feature should be opt-in, as it can easily cause data loss for users who are experienced with Sandboxie but unaware of the added feature.
Even configuring a non-empty box for AutoDelete without yet deleting or recovering its contents would result in losing them, given how AutoDelete triggers (not starting a box session poses no risk to its contents).
### Describe the solution you'd like
>title
### Describe alternatives you've considered
switch to "manual" box cleaning
|
non_process
|
opt in switch for feature to prevent data loss is your feature request related to a problem or use case the feature at can result in data loss if there is no switch to control whenever the users desires it or not there are use cases allowed by the autodelete to postpone cancel it having the option to keep the files as is untill the next time it is triggered and as the autodelete only triggers only after a box session ends the new feature should be opt in as it can easily cause data loss for users that are experienced with sandboxie but unaware of the added feature even configuring a not empty box to autodelete but not yet deleting the content or recover it would result in losing it since how autodelete triggers not starting box session posses no risk for its contents describe the solution you d like title describe alternatives you ve considered switch to manual box cleaning
| 0
|
142,264
| 19,085,149,962
|
IssuesEvent
|
2021-11-29 04:20:46
|
artsking/platform_external_wpa_supplicant_8
|
https://api.github.com/repos/artsking/platform_external_wpa_supplicant_8
|
opened
|
CVE-2021-30004 (Medium) detected in https://source.codeaurora.org/external/imx/aosp/platform/external/wpa_supplicant_8/android-10.0.0_2.6.0, https://source.codeaurora.org/external/imx/aosp/platform/external/wpa_supplicant_8/android-10.0.0_2.6.0
|
security vulnerability
|
## CVE-2021-30004 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>https://source.codeaurora.org/external/imx/aosp/platform/external/wpa_supplicant_8/android-10.0.0_2.6.0</b>, <b>https://source.codeaurora.org/external/imx/aosp/platform/external/wpa_supplicant_8/android-10.0.0_2.6.0</b></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In wpa_supplicant and hostapd 2.9, forging attacks may occur because AlgorithmIdentifier parameters are mishandled in tls/pkcs1.c and tls/x509v3.c.
<p>Publish Date: 2021-04-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-30004>CVE-2021-30004</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
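The 5.3 base score above follows directly from the listed metrics under the CVSS 3.0 base-score equations (vector AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N). A minimal sketch of the computation, with the metric weights taken from the CVSS 3.0 specification:

```python
import math

# CVSS 3.0 metric weights for this vector (Scope: Unchanged)
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
C, I, A = 0.0, 0.22, 0.0                  # None / Low / None

def roundup(x):
    """CVSS 'round up to one decimal place' helper."""
    return math.ceil(x * 10) / 10

iss = 1 - (1 - C) * (1 - I) * (1 - A)     # Impact Sub-Score = 0.22
impact = 6.42 * iss                        # Scope Unchanged form
exploitability = 8.22 * AV * AC * PR * UI
base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
print(base)  # 5.3
```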
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-30004 (Medium) detected in https://source.codeaurora.org/external/imx/aosp/platform/external/wpa_supplicant_8/android-10.0.0_2.6.0, https://source.codeaurora.org/external/imx/aosp/platform/external/wpa_supplicant_8/android-10.0.0_2.6.0 - ## CVE-2021-30004 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>https://source.codeaurora.org/external/imx/aosp/platform/external/wpa_supplicant_8/android-10.0.0_2.6.0</b>, <b>https://source.codeaurora.org/external/imx/aosp/platform/external/wpa_supplicant_8/android-10.0.0_2.6.0</b></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In wpa_supplicant and hostapd 2.9, forging attacks may occur because AlgorithmIdentifier parameters are mishandled in tls/pkcs1.c and tls/x509v3.c.
<p>Publish Date: 2021-04-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-30004>CVE-2021-30004</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in cve medium severity vulnerability vulnerable libraries vulnerability details in wpa supplicant and hostapd forging attacks may occur because algorithmidentifier parameters are mishandled in tls c and tls c publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href step up your open source security game with whitesource
| 0
|
9,524
| 12,500,258,285
|
IssuesEvent
|
2020-06-01 21:50:58
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Multiple remapped columns to the same table/field don't return correct column info
|
.Backend Priority:P3 Querying/Processor Type:Bug
|
This is the classic example we've been using of a table for "Posts" with two foreign keys to the same "People" table, "Author" and "Reviewer":

Column 1 is `author` (`remapped_to` is incorrect)
Column 4 is the remapped author name (`remapped_from` and `display_name` are incorrect)
Column 2 is `reviewer` (correct)
Column 5 is the remapped reviewer name (correct)
This results in only the Reviewer remapping being shown:

|
1.0
|
Multiple remapped columns to the same table/field don't return correct column info - This is the classic example we've been using of a table for "Posts" with two foreign keys to the same "People" table, "Author" and "Reviewer":

Column 1 is `author` (`remapped_to` is incorrect)
Column 4 is the remapped author name (`remapped_from` and `display_name` are incorrect)
Column 2 is `reviewer` (correct)
Column 5 is the remapped reviewer name (correct)
This results in only the Reviewer remapping being shown:

|
process
|
multiple remapped columns to the same table field don t return correct column info this is the classic example we ve been using of a table for posts with two foreign keys to the same people table author and reviewer column is author remapped to is incorrect column is the remapped author name remapped from and display name are incorrect column is reviewer correct column is the remapped reviewer name correct this results in only the reviewer remapping being shown
| 1
|
510
| 2,974,438,268
|
IssuesEvent
|
2015-07-15 00:35:20
|
mitchellh/packer
|
https://api.github.com/repos/mitchellh/packer
|
closed
|
EOF on Atlas post-processor
|
crash post-processor/atlas
|
Getting some weird errors trying to run a virtualbox build into vagrant and then into atlas.
```
==> virtualbox-iso: Running post-processor: atlas
Build 'virtualbox-iso' errored: 1 error(s) occurred:
* Post-processor failed: unexpected EOF
==> Some builds didn't complete successfully and had errors:
--> virtualbox-iso: 1 error(s) occurred:
* Post-processor failed: unexpected EOF
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
eu-west-1: ami-baf6b4cd
--> amazon-ebs: jedineeper/aem-author/amazon.ami (v2)
panic: runtime error: index out of range
```
std/crash output; https://gist.github.com/jedineeper/952a9a57f7041c87f3e6
|
1.0
|
EOF on Atlas post-processor - Getting some weird errors trying to run a virtualbox build into vagrant and then into atlas.
```
==> virtualbox-iso: Running post-processor: atlas
Build 'virtualbox-iso' errored: 1 error(s) occurred:
* Post-processor failed: unexpected EOF
==> Some builds didn't complete successfully and had errors:
--> virtualbox-iso: 1 error(s) occurred:
* Post-processor failed: unexpected EOF
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
eu-west-1: ami-baf6b4cd
--> amazon-ebs: jedineeper/aem-author/amazon.ami (v2)
panic: runtime error: index out of range
```
std/crash output; https://gist.github.com/jedineeper/952a9a57f7041c87f3e6
|
process
|
eof on atlas post processor getting some weird errors trying to run a virtualbox build into vagrant and then into atlas virtualbox iso running post processor atlas build virtualbox iso errored error s occurred post processor failed unexpected eof some builds didn t complete successfully and had errors virtualbox iso error s occurred post processor failed unexpected eof builds finished the artifacts of successful builds are amazon ebs amis were created eu west ami amazon ebs jedineeper aem author amazon ami panic runtime error index out of range std crash output
| 1
|
14,080
| 16,961,414,751
|
IssuesEvent
|
2021-06-29 04:47:54
|
googleapis/python-spanner-django
|
https://api.github.com/repos/googleapis/python-spanner-django
|
closed
|
package publishing: use pypi
|
api: spanner priority: p3 type: process
|
We'll need to publish django-spanner on pypi.
Kindly paging @busunkim96 @skuruppu @vmanghnani
|
1.0
|
package publishing: use pypi - We'll need to publish django-spanner on pypi.
Kindly paging @busunkim96 @skuruppu @vmanghnani
|
process
|
package publishing use pypi we ll need to publish django spanner on pypi kindly paging skuruppu vmanghnani
| 1
|
222,576
| 17,083,400,961
|
IssuesEvent
|
2021-07-08 08:44:38
|
cytopia/devilbox
|
https://api.github.com/repos/cytopia/devilbox
|
closed
|
DNS setup in Windows using Docker and WSL 2
|
documentation issue:stale
|
<!---
1. Verify first that your question is not already reported on GitHub.
2. Verify that your question is not covered in the docs: https://devilbox.readthedocs.io
3. PLEASE FILL OUT ALL REQUIRED INFORMATION BELOW! Otherwise it might take more time to properly handle this question.
-->
#### ISSUE TYPE
<!-- DO NOT CHANGE THIS -->
- Documentation
<!-- DO NOT CHANGE THIS -->
#### SUMMARY
The documentation steps for making DNS updates in Windows don't take into consideration whether you are using WSL 2. The steps are different when using that Docker solution.
#### Goal
Update documentation to state the following:
When using Docker with WSL2 support, the DNS entries that are required in Windows must be placed in the Hyper-V virtual adapter related to the WSL 2 instance. Otherwise automatic DNS detection will not work.
|
1.0
|
DNS setup in Windows using Docker and WSL 2 - <!---
1. Verify first that your question is not already reported on GitHub.
2. Verify that your question is not covered in the docs: https://devilbox.readthedocs.io
3. PLEASE FILL OUT ALL REQUIRED INFORMATION BELOW! Otherwise it might take more time to properly handle this question.
-->
#### ISSUE TYPE
<!-- DO NOT CHANGE THIS -->
- Documentation
<!-- DO NOT CHANGE THIS -->
#### SUMMARY
The documentation steps for making DNS updates in Windows don't take into consideration whether you are using WSL 2. The steps are different when using that Docker solution.
#### Goal
Update documentation to state the following:
When using Docker with WSL2 support, the DNS entries that are required in Windows must be placed in the Hyper-V virtual adapter related to the WSL 2 instance. Otherwise automatic DNS detection will not work.
|
non_process
|
dns setup in windows using docker and wsl verify first that your question is not already reported on github verify that your question is not covered in the docs please fill out all required information below otherwise it might take more time to properly handle this question issue type documentation summary documentation steps to make dns updates in windows doesn t take into consideration whether you are using wsl steps are different if using that docker solution goal update documentation to state the following when using docker with support the dns entries that are required in windows must be placed in the hyper v virtual adapter related to the wsl instance otherwise automatic dns detection will not work
| 0
|
10,186
| 4,716,969,967
|
IssuesEvent
|
2016-10-16 10:55:27
|
curl/curl
|
https://api.github.com/repos/curl/curl
|
closed
|
consider utilizing `Requires.private` directives in `libcurl.pc` instead of `Libs.private`
|
build KNOWN_BUGS material
|
the curl build system goes through a good amount of effort to try and figure out when some external libs need other external libs, often for static linking purposes. for example, it probes openssl and then tries to see if openssl also needs zlib and libdl. this doesn't always work (new libs get added and curl doesn't probe those, or the libs get rebuilt which means curl needs to be rebuilt simply to reprobe the library).
ideally the `libcurl.pc` file, instead of looking like:
```
Libs.private: -lnghttp2 -lssl -lcrypto -lssl -lcrypto -lz
```
it would look like:
```
Libs.private:
Requires.private: libnghttp2 openssl zlib
```
now (1) the data will always be up-to-date regardless of how those other libs were configure/rebuilt/updated, and (2) curl wouldn't have to do all that probing.
NB: (1) can be done now and generally w/out pain, but (2) would mean that curl's build system itself would now rely on pkg-config being available/sane at build time. not sure if non-pkg-config builds is something curl cares about.
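The difference can be sketched with a toy resolver over hypothetical `.pc` data (not curl's real build code): with `Requires.private`, the private link flags are assembled at link time from each dependency's own current `.pc` file, instead of being frozen into `libcurl.pc` when curl was configured.

```python
# Toy .pc database: name -> (Libs line, Requires.private entries).
# Hypothetical contents for illustration only.
PC = {
    "libcurl":    ("-lcurl", ["libnghttp2", "openssl", "zlib"]),
    "libnghttp2": ("-lnghttp2", []),
    "openssl":    ("-lssl -lcrypto", ["zlib"]),
    "zlib":       ("-lz", []),
}

def static_libs(name, seen=None):
    """Resolve Libs plus Requires.private transitively,
    roughly what `pkg-config --static --libs` does."""
    seen = set() if seen is None else seen
    if name in seen:
        return []
    seen.add(name)
    libs, requires = PC[name]
    flags = libs.split()
    for dep in requires:
        flags += static_libs(dep, seen)
    return flags

print(" ".join(static_libs("libcurl")))
# -lcurl -lnghttp2 -lssl -lcrypto -lz
```

If openssl is later rebuilt against a different set of private libs, only openssl's `.pc` file changes; `libcurl.pc` stays correct without reprobing.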
|
1.0
|
consider utilizing `Requires.private` directives in `libcurl.pc` instead of `Libs.private` - the curl build system goes through a good amount of effort to try and figure out when some external libs need other external libs, often for static linking purposes. for example, it probes openssl and then tries to see if openssl also needs zlib and libdl. this doesn't always work (new libs get added and curl doesn't probe those, or the libs get rebuilt which means curl needs to be rebuilt simply to reprobe the library).
ideally the `libcurl.pc` file, instead of looking like:
```
Libs.private: -lnghttp2 -lssl -lcrypto -lssl -lcrypto -lz
```
it would look like:
```
Libs.private:
Requires.private: libnghttp2 openssl zlib
```
now (1) the data will always be up-to-date regardless of how those other libs were configure/rebuilt/updated, and (2) curl wouldn't have to do all that probing.
NB: (1) can be done now and generally w/out pain, but (2) would mean that curl's build system itself would now rely on pkg-config being available/sane at build time. not sure if non-pkg-config builds is something curl cares about.
|
non_process
|
consider utilizing requires private directives in libcurl pc instead of libs private the curl build system goes through a good amount of effort to try and figure out when some external libs need other external libs often for static linking purposes for example it probes openssl and then tries to see if openssl also needs zlib and libdl this doesn t always work new libs get added and curl doesn t probe those or the libs get rebuilt which means curl needs to be rebuilt simply to reprobe the library ideally the libcurl pc file instead of looking like libs private lssl lcrypto lssl lcrypto lz it would look like libs private requires private openssl zlib now the data will always be up to date regardless of how those other libs were configure rebuilt updated and curl wouldn t have to do all that probing nb can be done now and generally w out pain but would mean that curl s build system itself would now rely on pkg config being available sane at build time not sure if non pkg config builds is something curl cares about
| 0
|
20,962
| 27,818,212,262
|
IssuesEvent
|
2023-03-18 23:14:46
|
cse442-at-ub/project_s23-team-infinity
|
https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity
|
closed
|
Create SignIn and SignUp front-end functionality
|
Processing Task Sprint 2
|
**Task test**
*Test 1*
1) Go to the Login webpage.
2) Verify the Login button at left bottom.
3) Verify there are 2 inputs: Username/email and password.
4) Verify the remember me checkbox works.
5) Click the login button to verify the alert error when the php server is off.
6) Click the login button to verify login successfully and turn to home page if php server is on.
7) Verify the alert error if any of inputs is wrong.
*Test 2*
1) Go to SignUp page
2) Verify the SignUp button is at bottom.
3) Verify there are 4 input boxes: "Username", "email address", "password", and "confirm password".
4) Check that the SignUp button returns 'invalid email address' if the email address entered is not well-formed.
5) Check that the SignUp button returns an error when the php server is off.
6) Check that the SignUp button returns the login page on successful signup.
7) Check that the SignUp button returns an error when any of the inputs is malformed.
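The well-formedness checks in steps 4 and 7 can be sketched as a simple validator (a generic sketch with hypothetical names, not the project's actual front-end code):

```python
import re

# Loose well-formedness check: local@domain.tld (not full RFC 5322)
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def signup_error(username, email, password, confirm):
    """Return an error string, or None if the sign-up inputs look valid."""
    if not all([username, email, password, confirm]):
        return "all fields are required"
    if not EMAIL_RE.match(email):
        return "invalid email address"
    if password != confirm:
        return "passwords do not match"
    return None

print(signup_error("alice", "alice@example", "pw", "pw"))      # invalid email address
print(signup_error("alice", "alice@example.com", "pw", "pw"))  # None
```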
|
1.0
|
Create SignIn and SignUp front-end functionality - **Task test**
*Test 1*
1) Go to the Login webpage.
2) Verify the Login button at left bottom.
3) Verify there are 2 inputs: Username/email and password.
4) Verify the remember me checkbox works.
5) Click the login button to verify the alert error when the php server is off.
6) Click the login button to verify login successfully and turn to home page if php server is on.
7) Verify the alert error if any of inputs is wrong.
*Test 2*
1) Go to SignUp page
2) Verify the SignUp button is at bottom.
3) Verify there are 4 input boxes: "Username", "email address", "password", and "confirm password".
4) Check that the SignUp button returns 'invalid email address' if the email address entered is not well-formed.
5) Check that the SignUp button returns an error when the php server is off.
6) Check that the SignUp button returns the login page on successful signup.
7) Check that the SignUp button returns an error when any of the inputs is malformed.
|
process
|
create signin and signup front end functionality task test test go to the login webpage verify the login button at left bottom verify there are inputs username email and password verify the remember me checkbox works click the login button to verify the alert error when the php server is off click the login button to verify login successfully and turn to home page if php server is on verify the alert error if any of inputs is wrong test go to signup page verify the signup button is at bottom verify there are input boxes username email address password and confirm password check signup button returns invaid email address if the email address entered is not in right formed check signup button returns error when php server is off check signup button returns login page when successfully signup check signup button returns error when any of inputs are not in formed
| 1
|
4,316
| 7,203,452,699
|
IssuesEvent
|
2018-02-06 09:16:10
|
itsyouonline/identityserver
|
https://api.github.com/repos/itsyouonline/identityserver
|
closed
|
v1/oauth/access_token api returns wrong scopes
|
process_wontfix state_verification
|
i sent the api request with scope ```user:memberof:<org>```
but in the response the scope field was ```user:admin```

|
1.0
|
v1/oauth/access_token api returns wrong scopes - i sent the api request with scope ```user:memberof:<org>```
but in the response the scope field was ```user:admin```

|
process
|
oauth access token api returns wrong scopes i sent the api request with scope user memberof but in the response the scope field was user admin
| 1
|
11,746
| 5,078,739,889
|
IssuesEvent
|
2016-12-28 16:37:20
|
opendatakit/opendatakit
|
https://api.github.com/repos/opendatakit/opendatakit
|
closed
|
clean up terminology in bottom bar
|
Build Priority-Low Type-Enhancement
|
*Migrated to opendatakit/build#70 by [spacetelescope/github-issues-import](https://github.com/spacetelescope/github-issues-import)*
Originally reported on Google Code with ID 439
```
these are pretty nitpicky, but a recent training showed some stumbling blocks. i don't
feel strongly about any of these.
either make it "choose one" and "choose multiple" or "select one" and "select multiple".
might also be clearer to say "radio button" or "checkboxes"
numeric feels awkward. maybe number?
media is vague. but understood that 'image/video/audio' is awkward. maybe split them
into different options?
```
Reported by `yanokwa` on 2011-11-21 00:44:29
|
1.0
|
clean up terminology in bottom bar - *Migrated to opendatakit/build#70 by [spacetelescope/github-issues-import](https://github.com/spacetelescope/github-issues-import)*
Originally reported on Google Code with ID 439
```
these are pretty nitpicky, but a recent training showed some stumbling blocks. i don't
feel strongly about any of these.
either make it "choose one" and "choose multiple" or "select one" and "select multiple".
might also be clearer to say "radio button" or "checkboxes"
numeric feels awkward. maybe number?
media is vague. but understood that 'image/video/audio' is awkward. maybe split them
into different options?
```
Reported by `yanokwa` on 2011-11-21 00:44:29
|
non_process
|
clean up terminology in bottom bar migrated to opendatakit build by originally reported on google code with id these are pretty nitpicky but a recent training showed some stumbling blocks i don t feel strongly about any of these either make it choose one and choose multiple or select one and select multiple might also be clearer to say radio button or checkboxes numeric feels awkward maybe number media is vague but understood that image video audio is awkward maybe split them into different options reported by yanokwa on
| 0
|
516
| 2,989,928,774
|
IssuesEvent
|
2015-07-21 04:54:27
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
On the main portal, "bring to life" the "Authorization via digital signature (EDS)" button (BankID-style)
|
In process of testing test
|
To invoke the EDS on production:
https://bankid.org.ua/DataAccessService/das/authorize?response_type=code&client_id=5d06b5ae-35ed-49aa-9249-36a1681be6a9&eds=true&redirect_uri=https://igov.org.ua:443/documents/user/bankid
To invoke the EDS on the test environment:
https://bankid.privatbank.ua/DataAccessService/das/authorize?response_type=code&client_id=5d06b5ae-35ed-49aa-9249-36a1681be6a9&eds=true&redirect_uri=https://poligon.igov.org.ua:443/documents/user/bankid
IMPORTANT: the project config already has ready-made variables that store the BankID host:
, sProtocol_AccessService_BankID: 'https' //Test
, sHost_AccessService_BankID: 'bankid.privatbank.ua' //Test
, so please do NOT hardcode the host)
|
1.0
|
On the main portal, "bring to life" the "Authorization via digital signature (EDS)" button (BankID-style) - To invoke the EDS on production:
https://bankid.org.ua/DataAccessService/das/authorize?response_type=code&client_id=5d06b5ae-35ed-49aa-9249-36a1681be6a9&eds=true&redirect_uri=https://igov.org.ua:443/documents/user/bankid
To invoke the EDS on the test environment:
https://bankid.privatbank.ua/DataAccessService/das/authorize?response_type=code&client_id=5d06b5ae-35ed-49aa-9249-36a1681be6a9&eds=true&redirect_uri=https://poligon.igov.org.ua:443/documents/user/bankid
IMPORTANT: the project config already has ready-made variables that store the BankID host:
, sProtocol_AccessService_BankID: 'https' //Test
, sHost_AccessService_BankID: 'bankid.privatbank.ua' //Test
, so please do NOT hardcode the host)
|
process
|
on the main portal bring to life the authorization via eds button bankid style to invoke the eds on production to invoke the eds on the test environment important the project config already has ready made variables that store the bankid host sprotocol accessservice bankid https test shost accessservice bankid bankid privatbank ua test so please do not hardcode the host
| 1
|
13,978
| 16,749,491,711
|
IssuesEvent
|
2021-06-11 20:27:17
|
googleapis/repo-automation-bots
|
https://api.github.com/repos/googleapis/repo-automation-bots
|
closed
|
A canary is not chirping
|
type: process
|
The dependencies and their versions are: {"dayjs":"^1.10.5","gcf-utils":"^8.0.2"}
at 2021-06-10 23:18:24
|
1.0
|
A canary is not chirping - The dependencies and their versions are: {"dayjs":"^1.10.5","gcf-utils":"^8.0.2"}
at 2021-06-10 23:18:24
|
process
|
a canary is not chirping the dependencies and their versions are dayjs gcf utils at
| 1
|
12,141
| 14,741,131,081
|
IssuesEvent
|
2021-01-07 10:08:47
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Keener - credit card processing - something strange happened
|
anc-process anp-important ant-bug ant-enhancement ant-support has attachment
|
In GitLab by @kdjstudios on Jan 2, 2019, 11:21
**Submitted by:** Gaylan Garrett <gaylan@keenercom.net>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-01-02-82674/conversation
**Server:** External
**Client/Site:** Keener
**Account:** Multiple
**Issue:**
I was attempting to process the MAIN billing cycle credit cards and something happened that has never happened before. Every client beginning with Sturdivant and the rest of the way down to the end of the list failed (there were 34; see below). These 34 totaled 20,738.59, and if you add the ones that actually did not process, which was 9 accounts for 4,343.78, that equals the 25,082.37. So is it safe to say these 34 plus the other 9 did not process? However, if you check their accounts, the credit card payment has been processed. 20,738.59 is a lot of money to be unsure about, especially since one thing is saying it did not process but the payment has been applied to the account.
But then here is what it is saying was charged


|
1.0
|
Keener - credit card processing - something strange happened - In GitLab by @kdjstudios on Jan 2, 2019, 11:21
**Submitted by:** Gaylan Garrett <gaylan@keenercom.net>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-01-02-82674/conversation
**Server:** External
**Client/Site:** Keener
**Account:** Multiple
**Issue:**
I was attempting to process the MAIN billing cycle credit cards and something happened that has never happened before. Every client beginning with Sturdivant and the rest of the way down to the end of the list failed (there were 34; see below). These 34 totaled 20,738.59, and if you add the ones that actually did not process, which was 9 accounts for 4,343.78, that equals the 25,082.37. So is it safe to say these 34 plus the other 9 did not process? However, if you check their accounts, the credit card payment has been processed. 20,738.59 is a lot of money to be unsure about, especially since one thing is saying it did not process but the payment has been applied to the account.
But then here is what it is saying was charged


|
process
|
keener credit card processing something strange happened in gitlab by kdjstudios on jan submitted by gaylan garrett helpdesk server external client site keener account multiple issue i was attempting to process the main billing cycle credit cards and something happened that has never happened before every client beginning with sturdivant and the rest of the way down to the end of the list there were see below these totaled and if you add the ones that actually did not process which was accounts for that equals the so is it safe to say these plus the other did not process however if you check their accounts the credit card payment has been processed is a lot of money not to be sure if it processed or not especially since one thing is saying it did not process but then the payment has been applied to the account but then here is what it is saying was charged uploads image png uploads image png
| 1
|
18,774
| 24,677,504,251
|
IssuesEvent
|
2022-10-18 18:16:08
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
'::moniker-end' displayed in rendered docs
|
Pri1 azure-devops-pipelines/svc azure-devops-pipelines-process/subsvc
|
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#conditionally-run-a-step

Ping @KathrynEE
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/expressions.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
'::moniker-end' displayed in rendered docs - https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#conditionally-run-a-step

Ping @KathrynEE
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/expressions.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
moniker end displayed in rendered docs ping kathrynee document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service azure devops pipelines sub service azure devops pipelines process github login juliakm microsoft alias jukullam
| 1
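The row above reports a `::: moniker-end` directive leaking into rendered output. As a hedged sketch (the directive syntax is assumed from the docs.microsoft.com authoring convention, not taken from the actual renderer), a post-processing filter that drops unconsumed moniker directives might look like:

```python
import re

# Remove docs-style moniker range directives (e.g. ::: moniker range="..."
# and ::: moniker-end) that should never reach the rendered page.
# The directive syntax here is assumed, not taken from the real pipeline.
MONIKER_RE = re.compile(r"^\s*:{2,3}\s*moniker(-end|\s+range=.*)?\s*$")

def strip_moniker_directives(markdown: str) -> str:
    """Drop any line that is solely a moniker directive."""
    return "\n".join(
        line for line in markdown.splitlines()
        if not MONIKER_RE.match(line)
    )

sample = 'before\n::: moniker range=">= azure-devops"\nbody\n::: moniker-end\nafter'
print(strip_moniker_directives(sample))
```

Running this prints `before`, `body`, and `after` on three lines, with both directive lines removed.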
|
119,869
| 4,777,070,379
|
IssuesEvent
|
2016-10-27 15:24:11
|
wow-mania/Redemption
|
https://api.github.com/repos/wow-mania/Redemption
|
closed
|
Graveyards in WG still not rezzing properly
|
Priority
|
Using Wow-Mania Launcher and Client
Windows 10
Have installed the latest updates.
Problem. During the battle for Wintergrasp the Spirit healer at the Horde Camp and the one at the Broken Temple Workshop perform the count down to rez but when the timer ends you dont rez. Instead the timer simply restarts. This forces the player to run back to corpse and wait, which is normally 2 mins after your first death. Sat dead for 12 mins out of a 15 min wintergrasp. it was ok since i had alot of dead people to talk to and the alliance easily rolled to victory. I asked if allies had same problem with their GY not working and some said yes. have attached screen shots.


|
1.0
|
Graveyards in WG still not rezzing properly - Using Wow-Mania Launcher and Client
Windows 10
Have installed the latest updates.
Problem. During the battle for Wintergrasp the Spirit healer at the Horde Camp and the one at the Broken Temple Workshop perform the count down to rez but when the timer ends you dont rez. Instead the timer simply restarts. This forces the player to run back to corpse and wait, which is normally 2 mins after your first death. Sat dead for 12 mins out of a 15 min wintergrasp. it was ok since i had alot of dead people to talk to and the alliance easily rolled to victory. I asked if allies had same problem with their GY not working and some said yes. have attached screen shots.


|
non_process
|
graveyards in wg still not rezzing properly using wow mania launcher and client windows have installed the latest updates problem during the battle for wintergrasp the spirit healer at the horde camp and the one at the broken temple workshop perform the count down to rez but when the timer ends you dont rez instead the timer simply restarts this forces the player to run back to corpse and wait which is normally mins after your first death sat dead for mins out of a min wintergrasp it was ok since i had alot of dead people to talk to and the alliance easily rolled to victory i asked if allies had same problem with their gy not working and some said yes have attached screen shots
| 0
|
215,102
| 7,290,297,320
|
IssuesEvent
|
2018-02-24 00:48:41
|
vwolfley/SchoolsData-App
|
https://api.github.com/repos/vwolfley/SchoolsData-App
|
closed
|
Add Student Enrollment Chart - Total Enrollment
|
Issue: Feature Priority: Medium
|
Bar chart showing total enrollment with percent change over time. (Growth)
|
1.0
|
Add Student Enrollment Chart - Total Enrollment - Bar chart showing total enrollment with percent change over time. (Growth)
|
non_process
|
add student enrollment chart total enrollment bar chart showing total enrollment with percent change over time growth
| 0
|
95,759
| 16,106,939,168
|
IssuesEvent
|
2021-04-27 15:57:57
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Please remove wrong warming
|
Pri2 assigned-to-author doc-enhancement security-center/svc triaged
|
As we discussed https://github.com/MicrosoftDocs/azure-docs/issues/73748, AKS cluster supports the Azure Defender and Log analytics agent.
> _Can the customer use the Azure Defender and Log analytics agent? Yes._
Could you please remove wrong important warning as below? It cause confusion. Thank you.
> Important
> We don't currently support installation of the Log Analytics agent on Azure Kubernetes Service clusters that are running on virtual machine scale sets.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ad7c05f1-f69c-c033-0409-39f5c8f03674
* Version Independent ID: ebcceb7b-0d9c-618c-9576-b3b1713fe1ef
* Content: [Container security with Azure Security Center and Azure Defender](https://docs.microsoft.com/en-us/azure/security-center/container-security#run-time-protection-for-kubernetes-nodes-and-clusters)
* Content Source: [articles/security-center/container-security.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/security-center/container-security.md)
* Service: **security-center**
* GitHub Login: @memildin
* Microsoft Alias: **memildin**
|
True
|
Please remove wrong warming - As we discussed https://github.com/MicrosoftDocs/azure-docs/issues/73748, AKS cluster supports the Azure Defender and Log analytics agent.
> _Can the customer use the Azure Defender and Log analytics agent? Yes._
Could you please remove wrong important warning as below? It cause confusion. Thank you.
> Important
> We don't currently support installation of the Log Analytics agent on Azure Kubernetes Service clusters that are running on virtual machine scale sets.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ad7c05f1-f69c-c033-0409-39f5c8f03674
* Version Independent ID: ebcceb7b-0d9c-618c-9576-b3b1713fe1ef
* Content: [Container security with Azure Security Center and Azure Defender](https://docs.microsoft.com/en-us/azure/security-center/container-security#run-time-protection-for-kubernetes-nodes-and-clusters)
* Content Source: [articles/security-center/container-security.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/security-center/container-security.md)
* Service: **security-center**
* GitHub Login: @memildin
* Microsoft Alias: **memildin**
|
non_process
|
please remove wrong warming as we discussed aks cluster supports the azure defender and log analytics agent can the customer use the azure defender and log analytics agent yes could you please remove wrong important warning as below it cause confusion thank you important we don t currently support installation of the log analytics agent on azure kubernetes service clusters that are running on virtual machine scale sets document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service security center github login memildin microsoft alias memildin
| 0
|
546,810
| 16,019,541,051
|
IssuesEvent
|
2021-04-20 20:38:55
|
ngageoint/hootenanny
|
https://api.github.com/repos/ngageoint/hootenanny
|
closed
|
'Line should be a closed area based on the tag "military=revetment"' when mapping an aircraft revetment
|
Category: Translation Priority: Medium Status: In Progress Type: Bug
|
**Describe the bug**
Received warning in iD when attempting to map an aircraft revetment with a non-closed line in iD under the MGCP schema.
**To Reproduce**
Steps to reproduce the behavior:
1. Start iD editor
1. Draw a non-closed line
1. Make sure "Tag Schema:" is set to "MGCP"
1. Search for "Revet"

1. select "Aircraft Revetment (GB050)"
2. iD displays warning:

1. MGCP TRD 4.6 allows for linear Aircraft Revetments, in fact area revetments are not allowed.
2. Note: I think it is possible, although not probable, to have a linear closed line Aircraft Revetment, although it is not likely, and it should not be treated as an area.
**Expected behavior**
- Allow for linear, non-closed, Aircraft Revetments
**Screenshots**
**Desktop (please complete the following information):**
- OS: Ubuntu 16.04
- Browser: Firefox
- Version: 85.0.1 (64 bit)
**Smartphone (please complete the following information):**
- N/A
**Additional context**
- None
|
1.0
|
'Line should be a closed area based on the tag "military=revetment"' when mapping an aircraft revetment - **Describe the bug**
Received warning in iD when attempting to map an aircraft revetment with a non-closed line in iD under the MGCP schema.
**To Reproduce**
Steps to reproduce the behavior:
1. Start iD editor
1. Draw a non-closed line
1. Make sure "Tag Schema:" is set to "MGCP"
1. Search for "Revet"

1. select "Aircraft Revetment (GB050)"
2. iD displays warning:

1. MGCP TRD 4.6 allows for linear Aircraft Revetments, in fact area revetments are not allowed.
2. Note: I think it is possible, although not probable, to have a linear closed line Aircraft Revetment, although it is not likely, and it should not be treated as an area.
**Expected behavior**
- Allow for linear, non-closed, Aircraft Revetments
**Screenshots**
**Desktop (please complete the following information):**
- OS: Ubuntu 16.04
- Browser: Firefox
- Version: 85.0.1 (64 bit)
**Smartphone (please complete the following information):**
- N/A
**Additional context**
- None
|
non_process
|
line should be a closed area based on the tag military revetment when mapping an aircraft revetment describe the bug received warning in id when attempting to map an aircraft revetment with a non closed line in id under the mgcp schema to reproduce steps to reproduce the behavior start id editor draw a non closed line make sure tag schema is set to mgcp search for revet select aircraft revetment id displays warning mgcp trd allows for linear aircraft revetments in fact area revetments are not allowed note i think it is possible although not probable to have a linear closed line aircraft revetment although it is not likely and it should not be treated as an area expected behavior allow for linear non closed aircraft revetments screenshots desktop please complete the following information os ubuntu browser firefox version bit smartphone please complete the following information n a additional context none
| 0
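The validation described in the row above (iD flagging a line because its tag implies a closed area) can be sketched as a tag-driven check. The `AREA_TAGS` set and function names below are illustrative, not iD's actual rule set; per the reporter's reading of MGCP TRD 4.6, `military=revetment` is valid as a linear feature, so it is deliberately excluded:

```python
# Sketch of a tag-driven area check: certain tags imply an area, so a way
# carrying them should be closed (first node == last node). AREA_TAGS is
# illustrative only; military=revetment is excluded because linear
# revetments are valid under MGCP TRD 4.6.
AREA_TAGS = {("landuse", "forest"), ("building", "yes")}

def should_be_closed_area(tags: dict) -> bool:
    """Return True if any tag on the way implies a closed area."""
    return any((k, v) in AREA_TAGS for k, v in tags.items())

def is_closed(node_ids: list) -> bool:
    """A closed way repeats its first node as its last (min. 3 distinct nodes)."""
    return len(node_ids) > 3 and node_ids[0] == node_ids[-1]

print(should_be_closed_area({"military": "revetment"}))  # False
```

The fix requested in the issue amounts to moving `military=revetment` out of whatever area-implying set the validator consults.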
|
41,015
| 12,812,506,412
|
IssuesEvent
|
2020-07-04 06:53:35
|
shrivastava-prateek/angularjs-es6-webpack
|
https://api.github.com/repos/shrivastava-prateek/angularjs-es6-webpack
|
opened
|
WS-2019-0032 (Medium) detected in js-yaml-3.6.1.tgz, js-yaml-3.4.6.tgz
|
security vulnerability
|
## WS-2019-0032 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>js-yaml-3.6.1.tgz</b>, <b>js-yaml-3.4.6.tgz</b></p></summary>
<p>
<details><summary><b>js-yaml-3.6.1.tgz</b></p></summary>
<p>YAML 1.2 parser and serializer</p>
<p>Library home page: <a href="https://registry.npmjs.org/js-yaml/-/js-yaml-3.6.1.tgz">https://registry.npmjs.org/js-yaml/-/js-yaml-3.6.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/js-yaml/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-0.4.2.tgz (Root Library)
- istanbul-0.3.22.tgz
- :x: **js-yaml-3.6.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>js-yaml-3.4.6.tgz</b></p></summary>
<p>YAML 1.2 parser and serializer</p>
<p>Library home page: <a href="https://registry.npmjs.org/js-yaml/-/js-yaml-3.4.6.tgz">https://registry.npmjs.org/js-yaml/-/js-yaml-3.4.6.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/jscs/node_modules/js-yaml/package.json</p>
<p>
Dependency Hierarchy:
- gulp-jscs-2.0.0.tgz (Root Library)
- jscs-2.11.0.tgz
- :x: **js-yaml-3.4.6.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/shrivastava-prateek/angularjs-es6-webpack/commit/5a7519c9340d9d27cd18c80cc9093d3b1193db9d">5a7519c9340d9d27cd18c80cc9093d3b1193db9d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions js-yaml prior to 3.13.0 are vulnerable to Denial of Service. By parsing a carefully-crafted YAML file, the node process stalls and may exhaust system resources leading to a Denial of Service.
<p>Publish Date: 2019-03-20
<p>URL: <a href=https://github.com/nodeca/js-yaml/commit/a567ef3c6e61eb319f0bfc2671d91061afb01235>WS-2019-0032</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/788/versions">https://www.npmjs.com/advisories/788/versions</a></p>
<p>Release Date: 2019-03-20</p>
<p>Fix Resolution: js-yaml - 3.13.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0032 (Medium) detected in js-yaml-3.6.1.tgz, js-yaml-3.4.6.tgz - ## WS-2019-0032 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>js-yaml-3.6.1.tgz</b>, <b>js-yaml-3.4.6.tgz</b></p></summary>
<p>
<details><summary><b>js-yaml-3.6.1.tgz</b></p></summary>
<p>YAML 1.2 parser and serializer</p>
<p>Library home page: <a href="https://registry.npmjs.org/js-yaml/-/js-yaml-3.6.1.tgz">https://registry.npmjs.org/js-yaml/-/js-yaml-3.6.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/js-yaml/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-0.4.2.tgz (Root Library)
- istanbul-0.3.22.tgz
- :x: **js-yaml-3.6.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>js-yaml-3.4.6.tgz</b></p></summary>
<p>YAML 1.2 parser and serializer</p>
<p>Library home page: <a href="https://registry.npmjs.org/js-yaml/-/js-yaml-3.4.6.tgz">https://registry.npmjs.org/js-yaml/-/js-yaml-3.4.6.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/jscs/node_modules/js-yaml/package.json</p>
<p>
Dependency Hierarchy:
- gulp-jscs-2.0.0.tgz (Root Library)
- jscs-2.11.0.tgz
- :x: **js-yaml-3.4.6.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/shrivastava-prateek/angularjs-es6-webpack/commit/5a7519c9340d9d27cd18c80cc9093d3b1193db9d">5a7519c9340d9d27cd18c80cc9093d3b1193db9d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions js-yaml prior to 3.13.0 are vulnerable to Denial of Service. By parsing a carefully-crafted YAML file, the node process stalls and may exhaust system resources leading to a Denial of Service.
<p>Publish Date: 2019-03-20
<p>URL: <a href=https://github.com/nodeca/js-yaml/commit/a567ef3c6e61eb319f0bfc2671d91061afb01235>WS-2019-0032</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/788/versions">https://www.npmjs.com/advisories/788/versions</a></p>
<p>Release Date: 2019-03-20</p>
<p>Fix Resolution: js-yaml - 3.13.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in js yaml tgz js yaml tgz ws medium severity vulnerability vulnerable libraries js yaml tgz js yaml tgz js yaml tgz yaml parser and serializer library home page a href path to dependency file tmp ws scm angularjs webpack package json path to vulnerable library tmp ws scm angularjs webpack node modules js yaml package json dependency hierarchy karma coverage tgz root library istanbul tgz x js yaml tgz vulnerable library js yaml tgz yaml parser and serializer library home page a href path to dependency file tmp ws scm angularjs webpack package json path to vulnerable library tmp ws scm angularjs webpack node modules jscs node modules js yaml package json dependency hierarchy gulp jscs tgz root library jscs tgz x js yaml tgz vulnerable library found in head commit a href vulnerability details versions js yaml prior to are vulnerable to denial of service by parsing a carefully crafted yaml file the node process stalls and may exhaust system resources leading to a denial of service publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution js yaml step up your open source security game with whitesource
| 0
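The advisory row above gives a fix resolution of js-yaml 3.13.0, with every earlier version vulnerable to the parsing DoS. A minimal version-range check against that boundary can be sketched as follows; the parsing handles only plain `x.y.z` versions, whereas a real resolver handles prereleases and ranges:

```python
# Check whether an installed js-yaml version falls in the vulnerable range
# reported by WS-2019-0032 (all versions prior to the 3.13.0 fix).
# Only plain x.y.z versions are handled in this sketch.
FIXED = (3, 13, 0)

def parse(version: str) -> tuple:
    """Split 'x.y.z' into an integer tuple for lexicographic comparison."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(version: str) -> bool:
    return parse(version) < FIXED

for v in ("3.4.6", "3.6.1", "3.13.0"):
    print(v, is_vulnerable(v))  # 3.4.6 True / 3.6.1 True / 3.13.0 False
```

Both versions flagged in the row (3.4.6 via gulp-jscs and 3.6.1 via karma-coverage) fall below the fix boundary.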
|
298,649
| 9,200,629,710
|
IssuesEvent
|
2019-03-07 17:30:53
|
qissue-bot/QGIS
|
https://api.github.com/repos/qissue-bot/QGIS
|
closed
|
qgis 0.8 - unable to open database crash
|
Category: Projection Support Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report
|
---
Author Name: **anonymous -** (anonymous -)
Original Redmine Issue: 214, https://issues.qgis.org/issues/214
Original Assignee: Gary Sherman
---
Doing anything related to projection selection results in crash.
OS: [[WinXP]] Home
QGIS: 0.8 preview 1 and 060724.
E:\\qgis-0.8.0-win32-060724>qgis
Can't open database: unable to open database file
Assertion failed: myResult == 0, file qgsprojectionselector.cpp, line 533
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
E:\\qgis-0.8.0-win32-060724>qgis
Can't open database: unable to open database file
Assertion failed: myResult == 0, file qgsoptions.cpp, line 303
E:\\qgis-0.8.0-win32-060724>qgis
mPixmap.isNull() = 0
Can't open database: unable to open database file
Assertion failed: myResult == 0, file qgsprojectionselector.cpp, line 533
|
1.0
|
qgis 0.8 - unable to open database crash - ---
Author Name: **anonymous -** (anonymous -)
Original Redmine Issue: 214, https://issues.qgis.org/issues/214
Original Assignee: Gary Sherman
---
Doing anything related to projection selection results in crash.
OS: [[WinXP]] Home
QGIS: 0.8 preview 1 and 060724.
E:\\qgis-0.8.0-win32-060724>qgis
Can't open database: unable to open database file
Assertion failed: myResult == 0, file qgsprojectionselector.cpp, line 533
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
E:\\qgis-0.8.0-win32-060724>qgis
Can't open database: unable to open database file
Assertion failed: myResult == 0, file qgsoptions.cpp, line 303
E:\\qgis-0.8.0-win32-060724>qgis
mPixmap.isNull() = 0
Can't open database: unable to open database file
Assertion failed: myResult == 0, file qgsprojectionselector.cpp, line 533
|
non_process
|
qgis unable to open database crash author name anonymous anonymous original redmine issue original assignee gary sherman doing anything related to projection selection results in crash os home qgis preview and e qgis qgis can t open database unable to open database file assertion failed myresult file qgsprojectionselector cpp line this application has requested the runtime to terminate it in an unusual way please contact the application s support team for more information e qgis qgis can t open database unable to open database file assertion failed myresult file qgsoptions cpp line e qgis qgis mpixmap isnull can t open database unable to open database file assertion failed myresult file qgsprojectionselector cpp line
| 0
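The crash in the row above stems from an unchecked database-open result followed by an assertion (`Assertion failed: myResult == 0`). As a hedged illustration of the failure mode only (QGIS itself is C++; this is not its code), a defensive open in Python's `sqlite3` degrades gracefully instead of aborting:

```python
import sqlite3

# Open a SQLite database read-only and fail gracefully instead of
# asserting, illustrating the unchecked-open crash in the report above.
def open_db(path: str):
    try:
        # mode=ro refuses to create a missing file, surfacing the error
        # instead of silently creating an empty database.
        return sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    except sqlite3.OperationalError as exc:
        print(f"Can't open database: {exc}")
        return None

conn = open_db("/no/such/srs.db")
print(conn is None)  # True
```

In the reported crash the equivalent C++ path asserted on the nonzero open result, terminating the application rather than reporting the missing or unreadable database file.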
|
134,262
| 19,101,349,616
|
IssuesEvent
|
2021-11-29 23:04:44
|
hackforla/website
|
https://api.github.com/repos/hackforla/website
|
closed
|
Update "Current Pages" in Figma (Nav Components, Toolkit, Guide Page, Donate)
|
good first issue To Update ! role: design Feature: Design system
|
### Overview
We need to update the "Current Pages" file in Figma so that it accurately reflects both the current live versions of the **Nav Components (Top Nav & Footer), Toolkit, Guides, Donate** pages on the site, as well as the most updated Figma designs (which may or may not currently be live, but are approved for development).
### Action Items
- [x] Go to the "Current Pages" section in Figma
- [x] Locate the section in the gray box which provides guidance on how to structure the work you'll be doing on the page
- [x] Start a new section below what's currently on the page (following our Figma page layout guidance).
- [x] Find the Figma pages for each: **Nav Components (Top Nav & Footer), Toolkit, Guides, Donate**
- [x] For each one, navigate to that particular page on the current live website (if there is a live version, use sitemap page to check page status)
- [x] Going back to the Figma page for the page you're working on, find the design that is currently live on the site. Copy and paste that design (both desktop and mobile) into the Current Pages file, following the structure shown in the example section of gray boxes.
- [x] Go back to the Figma page for the page you're working on and find the design located in the red box, which represents the most up-to-date design for the page that has been approved for development. Copy and paste this version to the Current Pages file below the "Current live version" you just added.
- [x] Do this for all 4 pages listed in this issue.
### Resources/Instructions
[Figma - Current Pages](https://www.figma.com/file/0RRPy1Ph7HafI3qOITg0Mr/Hack-for-LA-Website?node-id=3464%3A0)
[HfLA - Toolkit](https://www.hackforla.org/toolkit/)
[HfLA - Guide Page](https://www.hackforla.org/toolkit/)
[HfLA - Donate](https://www.hackforla.org/donate/)
|
2.0
|
Update "Current Pages" in Figma (Nav Components, Toolkit, Guide Page, Donate) - ### Overview
We need to update the "Current Pages" file in Figma so that it accurately reflects both the current live versions of the **Nav Components (Top Nav & Footer), Toolkit, Guides, Donate** pages on the site, as well as the most updated Figma designs (which may or may not currently be live, but are approved for development).
### Action Items
- [x] Go to the "Current Pages" section in Figma
- [x] Locate the section in the gray box which provides guidance on how to structure the work you'll be doing on the page
- [x] Start a new section below what's currently on the page (following our Figma page layout guidance).
- [x] Find the Figma pages for each: **Nav Components (Top Nav & Footer), Toolkit, Guides, Donate**
- [x] For each one, navigate to that particular page on the current live website (if there is a live version, use sitemap page to check page status)
- [x] Going back to the Figma page for the page you're working on, find the design that is currently live on the site. Copy and paste that design (both desktop and mobile) into the Current Pages file, following the structure shown in the example section of gray boxes.
- [x] Go back to the Figma page for the page you're working on and find the design located in the red box, which represents the most up-to-date design for the page that has been approved for development. Copy and paste this version to the Current Pages file below the "Current live version" you just added.
- [x] Do this for all 4 pages listed in this issue.
### Resources/Instructions
[Figma - Current Pages](https://www.figma.com/file/0RRPy1Ph7HafI3qOITg0Mr/Hack-for-LA-Website?node-id=3464%3A0)
[HfLA - Toolkit](https://www.hackforla.org/toolkit/)
[HfLA - Guide Page](https://www.hackforla.org/toolkit/)
[HfLA - Donate](https://www.hackforla.org/donate/)
|
non_process
|
update current pages in figma nav components toolkit guide page donate overview we need to update the current pages file in figma so that it accurately reflects both the current live versions of the nav components top nav footer toolkit guides donate pages on the site as well as the most updated figma designs which may or may not currently be live but are approved for development action items go to the current pages section in figma locate the section in the gray box which provides guidance on how to structure the work you ll be doing on the page start a new section below what s currently on the page following our figma page layout guidance find the figma pages for each nav components top nav footer toolkit guides donate for each one navigate to that particular page on the current live website if there is a live version use sitemap page to check page status going back to the figma page for the page you re working on find the design that is currently live on the site copy and paste that design both desktop and mobile into the current pages file following the structure shown in the example section of gray boxes go back to the figma page for the page you re working on and find the design located in the red box which represents the most up to date design for the page that has been approved for development copy and paste this version to the current pages file below the current live version you just added do this for all pages listed in this issue resources instructions
| 0
|
5,785
| 13,160,154,506
|
IssuesEvent
|
2020-08-10 17:04:26
|
18F/tts-tech-portfolio
|
https://api.github.com/repos/18F/tts-tech-portfolio
|
opened
|
determine hosting choice(s) for AI Portfolio content
|
g: initial i: enterprise architecture t: weeks
|
## Background information
The AI Portfolio is trying to launch a couple of things:
Thing | Visibility | Target delivery date
Use Case Library | Fed-only | September or earlier
Body of Knowledge | public | mid-October
Need to help them figure out what platforms/sites those will be hosted on.
## User stories
<!-- one or more "As a ..., I want ... so that ..." -->
## Acceptance criteria
- [definitive thing]
- [other definitive thing]
---
The assignee should add some checkboxes as a "sketch" of the steps to complete, which may evolve.
|
1.0
|
determine hosting choice(s) for AI Portfolio content - ## Background information
The AI Portfolio is trying to launch a couple of things:
Thing | Visibility | Target delivery date
Use Case Library | Fed-only | September or earlier
Body of Knowledge | public | mid-October
Need to help them figure out what platforms/sites those will be hosted on.
## User stories
<!-- one or more "As a ..., I want ... so that ..." -->
## Acceptance criteria
- [definitive thing]
- [other definitive thing]
---
The assignee should add some checkboxes as a "sketch" of the steps to complete, which may evolve.
|
non_process
|
determine hosting choice s for ai portfolio content background information the ai portfolio is trying to launch a couple of things thing visibility target delivery date use case library fed only september or earlier body of knowledge public mid october need to help them figure out what platforms sites those will be hosted on user stories acceptance criteria the assignee should add some checkboxes as a sketch of the steps to complete which may evolve
| 0
|
15,077
| 18,779,528,223
|
IssuesEvent
|
2021-11-08 03:37:20
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add See You Next Wednesday
|
suggested title in process
|
Please add as much of the following info as you can:
Title:
See You Next Wednesday
Type (film/tv show):
Film
Film or show in which it appears:
An American Werewolf in London
Is the parent film/show streaming anywhere?
Yes
About when in the parent film/show does it appear?
Towards the end.
Actual footage of the film/show can be seen (yes/no)?
Yes.
https://www.youtube.com/watch?v=q6_w7aa9MHI
|
1.0
|
Add See You Next Wednesday - Please add as much of the following info as you can:
Title:
See You Next Wednesday
Type (film/tv show):
Film
Film or show in which it appears:
An American Werewolf in London
Is the parent film/show streaming anywhere?
Yes
About when in the parent film/show does it appear?
Towards the end.
Actual footage of the film/show can be seen (yes/no)?
Yes.
https://www.youtube.com/watch?v=q6_w7aa9MHI
|
process
|
add see you next wednesday please add as much of the following info as you can title see you next wednesday type film tv show film film or show in which it appears an american werewolf in london is the parent film show streaming anywhere yes about when in the parent film show does it appear towards the end actual footage of the film show can be seen yes no yes
| 1
|
4,449
| 7,315,155,146
|
IssuesEvent
|
2018-03-01 10:03:32
|
our-city-app/oca-backend
|
https://api.github.com/repos/our-city-app/oca-backend
|
closed
|
Payconiq remarks
|
priority_critical process_duplicate type_ticket
|
See #601
Some remarks:
- [ ] 1/ Make sure transaction are created in an idempotent way. If an error happens after the Payconiq payment is made (which I had during testing) and the user retries by pressing the PAY button again, then the money shouldn't be withdrawn from his account a second time. Eg. by always using the same transaction id for the same message. Perhaps use the message key as transaction id.
- [ ] 2/ The wallet is empty. We should have a way to define embedded apps without adding them in the sidebar.
- [ ] 3/ We'll have to keep track of all transaction server-side. This is for monthly reports that we'll give to Payconiq.
- [ ] 3.1/ Check if we can add a reference from our app to the payconic transaction (eg: `45475646547654: Bestelling via Lochristi app bij keurslager de bruycker`
- [ ] 4/ Auto-submit the pay step in the order flow. We don't want the user to press the "Skip" button after payment.
- [ ] 5/ The payment provider, transaction id and status that are shown in the PAY step after payment are handy for developers, but not for the end users. When they look in the Payconiq app, they see the amount, date and time, description and transaction reference (which is different from the transaction id), I'd show these fields instead of the payment provider and transaction id. Also the dutch word for payment provider doesn't fit in the table.

- [ ] 6/ Rename "SDD Mandaat" in the download link to "SEPA Mandaat", because this is how it's called in the setup instructions
- [ ] 7/ When payment is enabled, but the user chose to skip the pay step, then mention this in the order detail
- EN: The order is not paid yet.
- NL: De bestelling is nog niet betaald
- [ ] 8/ This last remark is open for debate. The setup instructions are now downloadable in the form of a PDF. I got the remarks that it would be nicer if this is a separate page. Eg. a new tab called "PAyconiq" or "Mobile payments" next to "Order" (under "Settings"). Then we could show the setup instructions in the page itself (so no more pdf), with buttons for the documents that need to be downloaded.
- 1 ....................................
..................
.....
- A [ [SEPA Mandaat](https://github.com/our-city-app/oca-backend/blob/55f811a96371d0376e4b70eedf85d15e65afe901/src/static/files/payment/payconiq/mandate-nl.pdf) ]
- B [ [Belfius](https://github.com/our-city-app/oca-backend/blob/55f811a96371d0376e4b70eedf85d15e65afe901/src/static/files/payment/payconiq/agreement-belfius-nl.pdf) ] [ [KBC](https://github.com/our-city-app/oca-backend/blob/55f811a96371d0376e4b70eedf85d15e65afe901/src/static/files/payment/payconiq/agreement-kbc-nl.pdf) ] [ [Other](https://github.com/our-city-app/oca-backend/blob/55f811a96371d0376e4b70eedf85d15e65afe901/src/static/files/payment/payconiq/agreement-ing-nl.pdf) ]
- 2 .............
.........
- 3 .......................
- 4 ..........................
...........
Merchant ID: [ textfield ]
Account Key: [ textblock ]
So in a separate page the above instead of the current popup. It was also not clear for Steven that the green cog symbol was a button that could be pressed.

|
1.0
|
Payconiq remarks - See #601
Some remarks:
- [ ] 1/ Make sure transaction are created in an idempotent way. If an error happens after the Payconiq payment is made (which I had during testing) and the user retries by pressing the PAY button again, then the money shouldn't be withdrawn from his account a second time. Eg. by always using the same transaction id for the same message. Perhaps use the message key as transaction id.
- [ ] 2/ The wallet is empty. We should have a way to define embedded apps without adding them in the sidebar.
- [ ] 3/ We'll have to keep track of all transaction server-side. This is for monthly reports that we'll give to Payconiq.
- [ ] 3.1/ Check if we can add a reference from our app to the payconic transaction (eg: `45475646547654: Bestelling via Lochristi app bij keurslager de bruycker`
- [ ] 4/ Auto-submit the pay step in the order flow. We don't want the user to press the "Skip" button after payment.
- [ ] 5/ The payment provider, transaction id and status that are shown in the PAY step after payment are handy for developers, but not for the end users. When they look in the Payconiq app, they see the amount, date and time, description and transaction reference (which is different from the transaction id), I'd show these fields instead of the payment provider and transaction id. Also the dutch word for payment provider doesn't fit in the table.

- [ ] 6/ Rename "SDD Mandaat" in the download link to "SEPA Mandaat", because this is how it's called in the setup instructions
- [ ] 7/ When payment is enabled, but the user chose to skip the pay step, then mention this in the order detail
- EN: The order is not paid yet.
- NL: De bestelling is nog niet betaald
- [ ] 8/ This last remark is open for debate. The setup instructions are now downloadable in the form of a PDF. I got the remarks that it would be nicer if this is a separate page. Eg. a new tab called "PAyconiq" or "Mobile payments" next to "Order" (under "Settings"). Then we could show the setup instructions in the page itself (so no more pdf), with buttons for the documents that need to be downloaded.
- 1 ....................................
..................
.....
- A [ [SEPA Mandaat](https://github.com/our-city-app/oca-backend/blob/55f811a96371d0376e4b70eedf85d15e65afe901/src/static/files/payment/payconiq/mandate-nl.pdf) ]
- B [ [Belfius](https://github.com/our-city-app/oca-backend/blob/55f811a96371d0376e4b70eedf85d15e65afe901/src/static/files/payment/payconiq/agreement-belfius-nl.pdf) ] [ [KBC](https://github.com/our-city-app/oca-backend/blob/55f811a96371d0376e4b70eedf85d15e65afe901/src/static/files/payment/payconiq/agreement-kbc-nl.pdf) ] [ [Other](https://github.com/our-city-app/oca-backend/blob/55f811a96371d0376e4b70eedf85d15e65afe901/src/static/files/payment/payconiq/agreement-ing-nl.pdf) ]
- 2 .............
.........
- 3 .......................
- 4 ..........................
...........
Merchant ID: [ textfield ]
Account Key: [ textblock ]
So in a separate page the above instead of the current popup. It was also not clear for Steven that the green cog symbol was a button that could be pressed.

|
process
|
payconiq remarks see some remarks make sure transaction are created in an idempotent way if an error happens after the payconiq payment is made which i had during testing and the user retries by pressing the pay button again then the money shouldn t be withdrawn from his account a second time eg by always using the same transaction id for the same message perhaps use the message key as transaction id the wallet is empty we should have a way to define embedded apps without adding them in the sidebar we ll have to keep track of all transaction server side this is for monthly reports that we ll give to payconiq check if we can add a reference from our app to the payconic transaction eg bestelling via lochristi app bij keurslager de bruycker auto submit the pay step in the order flow we don t want the user to press the skip button after payment the payment provider transaction id and status that are shown in the pay step after payment are handy for developers but not for the end users when they look in the payconiq app they see the amount date and time description and transaction reference which is different from the transaction id i d show these fields instead of the payment provider and transaction id also the dutch word for payment provider doesn t fit in the table rename sdd mandaat in the download link to sepa mandaat because this is how it s called in the setup instructions when payment is enabled but the user chose to skip the pay step then mention this in the order detail en the order is not paid yet nl de bestelling is nog niet betaald this last remark is open for debate the setup instructions are now downloadable in the form of a pdf i got the remarks that it would be nicer if this is a separate page eg a new tab called payconiq or mobile payments next to order under settings then we could show the setup instructions in the page itself so no more pdf with buttons for the documents that need to be downloaded a b merchant id account key so in a separate page the above instead of the current popup it was also not clear for steven that the green cog symbol was a button that could be pressed
| 1
|
126,166
| 4,973,426,320
|
IssuesEvent
|
2016-12-06 00:59:19
|
gravityview/GravityView
|
https://api.github.com/repos/gravityview/GravityView
|
opened
|
GV Widgets rendering twice
|
Bug Core: GV Widget Priority: High
|
Widgets sometimes are rendering twice, duplicating the layout. Haven't yet determined if it's reproducable.
Reported here: https://secure.helpscout.net/conversation/287424884/7594/
|
1.0
|
GV Widgets rendering twice - Widgets sometimes are rendering twice, duplicating the layout. Haven't yet determined if it's reproducable.
Reported here: https://secure.helpscout.net/conversation/287424884/7594/
|
non_process
|
gv widgets rendering twice widgets sometimes are rendering twice duplicating the layout haven t yet determined if it s reproducable reported here
| 0
|
2,835
| 8,378,302,109
|
IssuesEvent
|
2018-10-06 12:49:47
|
ryota-murakami/blog
|
https://api.github.com/repos/ryota-murakami/blog
|
closed
|
JS build stack migration
|
current-scope🔎 re-architecture🚀
|
What we want to do
- [ ] Stop using CoffeeScript and write in ES2018
- [ ] Stop managing JS modules with Ruby gems and migrate to npm
- [ ] Turbolinks stays
We don't particularly want to use webpack, but we'll consider it if the requirements can't be met with the asset pipeline.
|
1.0
|
JS build stack migration - What we want to do
- [ ] Stop using CoffeeScript and write in ES2018
- [ ] Stop managing JS modules with Ruby gems and migrate to npm
- [ ] Turbolinks stays
We don't particularly want to use webpack, but we'll consider it if the requirements can't be met with the asset pipeline.
|
non_process
|
js build stack migration what we want to do stop managing js modules with ruby gems and migrate to npm turbolinks stays we don t particularly want to use webpack but we ll consider it if the requirements can t be met with assetspipeline
| 0
|
349,862
| 24,959,289,070
|
IssuesEvent
|
2022-11-01 14:20:52
|
AIdictive/tuberia
|
https://api.github.com/repos/AIdictive/tuberia
|
closed
|
Define auto-doc
|
documentation 📖
|
Define how the auto-documentation feature will look like in the MVP.
* Should we generate the full `mkdocs` directory and config file or just generate a bunch of Markdown files that the user can manually incorporate to his/her `mkdocs` project? Another option is to create an `mkdocs` plugin like: https://ecurtin2.github.io/mkdocs-apidoc/ and https://mkdocstrings.github.io/.
* Define TOC of the autogenerated documentation. For example:
```
tuberia_overview.md
flows/
├─ flow00.md
├─ flow01.md
├─ ...
tables/
├─ table00.md
├─ table01.md
├─ ...
```
* Define how the overview/flow/table page will look like.
Note: This can be useful to extract docstrings from existing code: https://docs.python.org/3/library/inspect.html#inspect.getcomments.
|
1.0
|
Define auto-doc - Define how the auto-documentation feature will look like in the MVP.
* Should we generate the full `mkdocs` directory and config file or just generate a bunch of Markdown files that the user can manually incorporate to his/her `mkdocs` project? Another option is to create an `mkdocs` plugin like: https://ecurtin2.github.io/mkdocs-apidoc/ and https://mkdocstrings.github.io/.
* Define TOC of the autogenerated documentation. For example:
```
tuberia_overview.md
flows/
├─ flow00.md
├─ flow01.md
├─ ...
tables/
├─ table00.md
├─ table01.md
├─ ...
```
* Define how the overview/flow/table page will look like.
Note: This can be useful to extract docstrings from existing code: https://docs.python.org/3/library/inspect.html#inspect.getcomments.
|
non_process
|
define auto doc define how the auto documentation feature will look like in the mvp should we generate the full mkdocs directory and config file or just generate a bunch of markdown files that the user can manually incorporate to his her mkdocs project another option is to create an mkdocs plugin like and define toc of the autogenerated documentation for example tuberia overview md flows ├─ md ├─ md ├─ tables ├─ md ├─ md ├─ define how the overview flow table page will look like note this can be useful to extract docstrings from existing code
| 0
|
21,226
| 28,311,138,440
|
IssuesEvent
|
2023-04-10 15:29:49
|
cse442-at-ub/project_s23-cinco
|
https://api.github.com/repos/cse442-at-ub/project_s23-cinco
|
closed
|
Connect Create Event to Backend
|
Processing Task Sprint 3
|
**Task Tests**
*Test 1*
Checking for successful image uploads
1) Visit https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/create-event
If prompted to log in, log in
2) Fill out the create event form. Preferably with multiple images uploaded in event images
3) Check out the webserver and see if the files are properly uploaded. Should be post{id}_thumbnail and all the images as post{id}_img{number}
4) Check out data base too if you want and check for similarities in content
*Test 2*
Checking for sanitizing content
1) Visit https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/create-event
If prompted to log in, log in
2) Fill out the create event form, but with html tags and stuff in the input
3) Check the database, should look like post id 1's title
|
1.0
|
Connect Create Event to Backend - **Task Tests**
*Test 1*
Checking for successful image uploads
1) Visit https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/create-event
If prompted to log in, log in
2) Fill out the create event form. Preferably with multiple images uploaded in event images
3) Check out the webserver and see if the files are properly uploaded. Should be post{id}_thumbnail and all the images as post{id}_img{number}
4) Check out data base too if you want and check for similarities in content
*Test 2*
Checking for sanitizing content
1) Visit https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/create-event
If prompted to log in, log in
2) Fill out the create event form, but with html tags and stuff in the input
3) Check the database, should look like post id 1's title
|
process
|
connect create event to backend task tests test checking for successful image uploads visit if prompted to log in log in fill out the create event form preferably with multiple images uploaded in event images check out the webserver and see if the files are properly uploaded should be post id thumbnail and all the images as post id img number check out data base too if you want and check for similarities in content test checking for sanitizing content visit if prompted to log in log in fill out the create event form but with html tags and stuff in the input check the database should look like post id s title
| 1
|
5,150
| 7,930,171,686
|
IssuesEvent
|
2018-07-06 17:44:29
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
closed
|
TIME: failed to parse
|
ontology processing problem
|
Both submissions of the [TIME ontology](http://bioportal.bioontology.org/ontologies/TIME) failed to parse in BioPortal. According to the parsing log file, the failure occurs when we generate missing labels and try to load that data into the triplestore:
```
I, [2018-02-07T11:34:38.235297 #23635] INFO -- : ["OWLAPI Java command: parsing finished successfully."]
I, [2018-02-07T11:34:38.235458 #23635] INFO -- : ["Output size 118088 in `/srv/ncbo/repository/TIME/2/owlapi.xrdf`"]
I, [2018-02-07T11:34:39.391623 #23635] INFO -- : ["Triples /srv/ncbo/repository/TIME/2/owlapi.xrdf appended in <http://data.bioontology.org/ontologies/TIME/submissions/2>"]
E, [2018-02-07T11:34:39.794608 #23635] ERROR -- : ["Exception: Rapper cannot parse turtle file at /tmp/data_triple_store20180207-23635-1lftv1j: rapper: Parsing URI file:///tmp/data_triple_store20180207-23635-1lftv1j with parser turtle
rapper: Serializing with serializer ntriples
rapper: Error - URI file:///tmp/data_triple_store20180207-23635-1lftv1j:6 - syntax error at '<'
rapper: Parsing returned 5 triples
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-999634f3a875/lib/goo/sparql/client.rb:59:in `bnodes_filter_file'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-999634f3a875/lib/goo/sparql/client.rb:80:in `append_triples_no_bnodes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-999634f3a875/lib/goo/sparql/client.rb:121:in `append_data_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-999634f3a875/lib/goo/sparql/client.rb:147:in `append_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:562:in `generate_missing_labels_pre'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:519:in `call'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:519:in `block in loop_classes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:478:in `block in process_callbacks'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:474:in `delete_if'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:474:in `process_callbacks'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:518:in `loop_classes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:930:in `process_submission'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:177:in `process_submission'
bin/ncbo_ontology_process:98:in `block in <main>'
bin/ncbo_ontology_process:81:in `each'
bin/ncbo_ontology_process:81:in `<main>'"]
```
|
1.0
|
TIME: failed to parse - Both submissions of the [TIME ontology](http://bioportal.bioontology.org/ontologies/TIME) failed to parse in BioPortal. According to the parsing log file, the failure occurs when we generate missing labels and try to load that data into the triplestore:
```
I, [2018-02-07T11:34:38.235297 #23635] INFO -- : ["OWLAPI Java command: parsing finished successfully."]
I, [2018-02-07T11:34:38.235458 #23635] INFO -- : ["Output size 118088 in `/srv/ncbo/repository/TIME/2/owlapi.xrdf`"]
I, [2018-02-07T11:34:39.391623 #23635] INFO -- : ["Triples /srv/ncbo/repository/TIME/2/owlapi.xrdf appended in <http://data.bioontology.org/ontologies/TIME/submissions/2>"]
E, [2018-02-07T11:34:39.794608 #23635] ERROR -- : ["Exception: Rapper cannot parse turtle file at /tmp/data_triple_store20180207-23635-1lftv1j: rapper: Parsing URI file:///tmp/data_triple_store20180207-23635-1lftv1j with parser turtle
rapper: Serializing with serializer ntriples
rapper: Error - URI file:///tmp/data_triple_store20180207-23635-1lftv1j:6 - syntax error at '<'
rapper: Parsing returned 5 triples
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-999634f3a875/lib/goo/sparql/client.rb:59:in `bnodes_filter_file'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-999634f3a875/lib/goo/sparql/client.rb:80:in `append_triples_no_bnodes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-999634f3a875/lib/goo/sparql/client.rb:121:in `append_data_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-999634f3a875/lib/goo/sparql/client.rb:147:in `append_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:562:in `generate_missing_labels_pre'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:519:in `call'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:519:in `block in loop_classes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:478:in `block in process_callbacks'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:474:in `delete_if'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:474:in `process_callbacks'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:518:in `loop_classes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-d6718646d37c/lib/ontologies_linked_data/models/ontology_submission.rb:930:in `process_submission'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:177:in `process_submission'
bin/ncbo_ontology_process:98:in `block in <main>'
bin/ncbo_ontology_process:81:in `each'
bin/ncbo_ontology_process:81:in `<main>'"]
```
|
process
|
time failed to parse both submissions of the failed to parse in bioportal according to the parsing log file the failure occurs when we generate missing labels and try to load that data into the triplestore i info i info i info e error exception rapper cannot parse turtle file at tmp data triple rapper parsing uri file tmp data triple with parser turtle rapper serializing with serializer ntriples rapper error uri file tmp data triple syntax error at rapper parsing returned triples srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in bnodes filter file srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in append triples no bnodes srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in append data triples srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in append triples srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in generate missing labels pre srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in call srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in block in loop classes srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in block in process callbacks srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in delete if srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in process callbacks srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in loop classes srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in process submission srv ncbo ncbo cron lib ncbo cron ontology submission parser rb in process submission bin ncbo ontology process in block in bin ncbo ontology process in each bin ncbo ontology process in
| 1
|
763,337
| 26,752,855,862
|
IssuesEvent
|
2023-01-30 21:08:24
|
dmwm/WMCore
|
https://api.github.com/repos/dmwm/WMCore
|
closed
|
Flaw in the GQ -> LQ data acquisition pagination logic
|
BUG Operations WMAgent High Priority CouchDB QPrio: High
|
**Impact of the bug**
WMAgent
**Describe the bug**
We received a report of lack of pressure in the condor pool - via MM this morning - and the reason was that the agents were no longer acquiring work from the global workqueue.
**How to reproduce it**
Very specific scenario, but apparently caused by ~5k ACDCs injected into the system yesterday (likely a correlation of target sites and very high priority).
**Expected behavior**
When the agent is running the GQ to LQ data acquisition (where "data" here is a workqueue element), the expected behavior and logic is:
* it has to iterate over the GQE in slices/pages
* whenever the target number of elements is found (passed in the query string as `num_elem`), stop iterating in the CouchDB list function and return data to the client
* keep iterating through the elements until the whole data is exhausted, or until num_elem are found.
So, for this issue, we need to correct how pagination is done to ensure that the list `workRestrictions` will receive all the required pages from the view function `availableByPriority`, instead of passing over only the first page and returning results to the end user.
**Additional context and error message**
The agent uses `availableWork()` method to pull work from GQ to the LQ. A few options are defined:
https://github.com/dmwm/WMCore/blob/master/src/python/WMCore/WorkQueue/WorkQueueBackend.py#L446-L471
including the use of `skip` and `limit`, to act as a pagination over all the global workqueue elements and evaluate them into pages (slices).
This call gets mapped to something similar to (from the couchdb log):
```
GET /workqueue/_design/WorkQueue/_list/workRestrictions/availableByPriority?
include_docs=true&
descending=true&
resources=%7B%22T2_CH_CERN%22%3A+20975.0%2C+%22T2_CH_CERN_P5%22%3A+187....5.0%7D&
limit=1000&
num_elem=1000&
team=%22production%22&
skip=0
```
|
1.0
|
Flaw in the GQ -> LQ data acquisition pagination logic - **Impact of the bug**
WMAgent
**Describe the bug**
We received a report of lack of pressure in the condor pool - via MM this morning - and the reason was that the agents were no longer acquiring work from the global workqueue.
**How to reproduce it**
Very specific scenario, but apparently caused by ~5k ACDCs injected into the system yesterday (likely a correlation of target sites and very high priority).
**Expected behavior**
When the agent is running the GQ to LQ data acquisition (where "data" here is a workqueue element), the expected behavior and logic is:
* it has to iterate over the GQE in slices/pages
* whenever the target number of elements is found (passed in the query string as `num_elem`), stop iterating in the CouchDB list function and return data to the client
* keep iterating through the elements until the whole data is exhausted, or until num_elem are found.
So, for this issue, we need to correct how pagination is done to ensure that the list `workRestrictions` will receive all the required pages from the view function `availableByPriority`, instead of passing over only the first page and returning results to the end user.
**Additional context and error message**
The agent uses `availableWork()` method to pull work from GQ to the LQ. A few options are defined:
https://github.com/dmwm/WMCore/blob/master/src/python/WMCore/WorkQueue/WorkQueueBackend.py#L446-L471
including the use of `skip` and `limit`, to act as a pagination over all the global workqueue elements and evaluate them into pages (slices).
This call gets mapped to something similar to (from the couchdb log):
```
GET /workqueue/_design/WorkQueue/_list/workRestrictions/availableByPriority?
include_docs=true&
descending=true&
resources=%7B%22T2_CH_CERN%22%3A+20975.0%2C+%22T2_CH_CERN_P5%22%3A+187....5.0%7D&
limit=1000&
num_elem=1000&
team=%22production%22&
skip=0
```
|
non_process
|
flaw in the gq lq data acquisition pagination logic impact of the bug wmagent describe the bug we received a report of lack of pressure in the condor pool via mm this morning and the reason was that the agents were no longer acquiring work from the global workqueue how to reproduce it very specific scenario but apparently caused by acdcs injected into the system yesterday likely a correlation of target sites and very high priority expected behavior when the agent is running the gq to lq data acquisition where data here is a workqueue element the expected behavior and logic is it has to iterate over the gqe in slices pages whenever the target number of elements is found passed in the query string as num elem stop iterating in the couchdb list function and return data to the client keep iterating through the elements until the whole data is exhausted or until num elem are found so for this issue we need to correct how pagination is done to ensure that the list workrestrictions will receive all the required pages from the view function availablebypriority instead of passing over only the first page and returning results to the end user additional context and error message the agent uses availablework method to pull work from gq to the lq a few options are defined including the use of skip and limit to act as a pagination over all the global workqueue elements and evaluate them into pages slices this call gets mapped to something similar to from the couchdb log get workqueue design workqueue list workrestrictions availablebypriority include docs true descending true resources ch cern ch cern limit num elem team skip
| 0
|
38,764
| 5,198,333,644
|
IssuesEvent
|
2017-01-23 17:49:28
|
hacklabr/povosisolados
|
https://api.github.com/repos/hacklabr/povosisolados
|
closed
|
For logged-in users, display "edit" with a link to the page's edit screen in the admin
|
status: teste
|
It's not appearing for pages; I believe only for news posts. Test on all post types
|
1.0
|
For logged-in users, display "edit" with a link to the page's edit screen in the admin - It's not appearing for pages; I believe only for news posts. Test on all post types
|
non_process
|
for logged in users display the edit button with a link to the page edit screen in the admin it is not appearing for pages i believe only for news posts test on all post types
| 0
|
28,055
| 6,935,795,841
|
IssuesEvent
|
2017-12-03 13:37:38
|
nosma/BankTransactionsAnalyzer
|
https://api.github.com/repos/nosma/BankTransactionsAnalyzer
|
opened
|
Refactor dist module
|
clean code
|
Verify that the dist module creates a zip with the new package structure.
All the namings are correct and the zip produced will have the following name
**BankTransactionAnalyzer-webApp-1.0**
|
1.0
|
Refactor dist module - Verify that the dist module creates a zip with the new package structure.
All the namings are correct and the zip produced will have the following name
**BankTransactionAnalyzer-webApp-1.0**
|
non_process
|
refactor dist module verify that the dist module creates a zip with the new package structure all the namings are correct and the zip produced will have the following name banktransactionanalyzer webapp
| 0
|
6,585
| 9,662,429,412
|
IssuesEvent
|
2019-05-20 20:48:22
|
googleapis/release-please
|
https://api.github.com/repos/googleapis/release-please
|
opened
|
make test coverage less abysmal
|
priority: p2 type: process
|
the test coverage on release-please is quite low, I'd like to put some work into increasing it (now that we're starting to roll out to a wider audience).
|
1.0
|
make test coverage less abysmal - the test coverage on release-please is quite low, I'd like to put some work into increasing it (now that we're starting to roll out to a wider audience).
|
process
|
make test coverage less abysmal the test coverage on release please is quite low i d like to put some work into increasing it now that we re starting to roll out to a wider audience
| 1
|
5,252
| 8,041,332,841
|
IssuesEvent
|
2018-07-31 02:18:09
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
Changing loaded projects during a unit test corrupts the RubberduckParserState cache.
|
bug critical feature-unit-testing parse-tree-processing
|
[See the discussion here](http://chat.stackexchange.com/transcript/message/35491743#35491743).
The project event handlers in the parser state reject events when the VBE isn't in design mode, and the state doesn't get refreshed after the test run completes. This creates "issues" if the code being tested does something dodgy like this:
```
'@TestMethod
Public Sub TestMethod1() 'TODO Rename test
On Error GoTo TestFail
Application.Workbooks("Book1.xlsx").Close
Assert.Inconclusive
TestExit:
Exit Sub
TestFail:
Assert.Fail "Test raised an error: #" & Err.Number & " - " & Err.Description
End Sub
```
Opening and renaming a project create similar issues. The tests complete just fine, but starting with the next parser refresh, the wheels come off. The first time this happened I had a hard Excel crash, other times I was getting miscellaneous errors on parsing sometimes, crashes at others.
My guess is that this is an issue with the API too - any code that calls `.Parse` and then modifies the project landscape in the same procedure is likely going to be problematic.
|
1.0
|
Changing loaded projects during a unit test corrupts the RubberduckParserState cache. - [See the discussion here](http://chat.stackexchange.com/transcript/message/35491743#35491743).
The project event handlers in the parser state reject events when the VBE isn't in design mode, and the state doesn't get refreshed after the test run completes. This creates "issues" if the code being tested does something dodgy like this:
```
'@TestMethod
Public Sub TestMethod1() 'TODO Rename test
On Error GoTo TestFail
Application.Workbooks("Book1.xlsx").Close
Assert.Inconclusive
TestExit:
Exit Sub
TestFail:
Assert.Fail "Test raised an error: #" & Err.Number & " - " & Err.Description
End Sub
```
Opening and renaming a project create similar issues. The tests complete just fine, but starting with the next parser refresh, the wheels come off. The first time this happened I had a hard Excel crash, other times I was getting miscellaneous errors on parsing sometimes, crashes at others.
My guess is that this is an issue with the API too - any code that calls `.Parse` and then modifies the project landscape in the same procedure is likely going to be problematic.
|
process
|
changing loaded projects during a unit test corrupts the rubberduckparserstate cache the project event handlers in the parser state reject events when the vbe isn t in design mode and the state doesn t get refreshed after the test run completes this creates issues if the code being tested does something dodgy like this testmethod public sub todo rename test on error goto testfail application workbooks xlsx close assert inconclusive testexit exit sub testfail assert fail test raised an error err number err description end sub opening and renaming a project create similar issues the tests complete just fine but starting with the next parser refresh the wheels come off the first time this happened i had a hard excel crash other times i was getting miscellaneous errors on parsing sometimes crashes at others my guess is that this is an issue with the api too any code that calls parse and then modifies the project landscape in the same procedure is likely going to be problematic
| 1
|
16,586
| 2,919,300,479
|
IssuesEvent
|
2015-06-24 13:36:47
|
LittleWhite-tb/libdvfs
|
https://api.github.com/repos/LittleWhite-tb/libdvfs
|
closed
|
Kernel requirements for the use of this library
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. I'm running a red-hat flavor of linux (2.6.32-279.19.1.el6.x86_64) and
trying to install libdvfs on a sandy-bridge machine (Intel(R) Xeon(R) CPU
E5-2670)
2. The path /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor is missing
What is the expected output? What do you see instead?
Expected install to complete but I see the missing path instead.
What version of the product are you using? On what operating system?
red-hat enterprise linux (2.6.32-279.19.1.el6.x86_64) on a sandy-bridge machine
(Intel(R) Xeon(R) CPU E5-2670)
Please provide any additional information below.
Question:
Is this library supported only beyond a certain kernel version of linux? If so,
can you point at which one? Are there kernel modules that can be loaded instead?
Thanks a lot
```
Original issue reported on code.google.com by `akshay.v...@gmail.com` on 6 Jun 2014 at 8:42
|
1.0
|
Kernel requirements for the use of this library - ```
What steps will reproduce the problem?
1. I'm running a red-hat flavor of linux (2.6.32-279.19.1.el6.x86_64) and
trying to install libdvfs on a sandy-bridge machine (Intel(R) Xeon(R) CPU
E5-2670)
2. The path /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor is missing
What is the expected output? What do you see instead?
Expected install to complete but I see the missing path instead.
What version of the product are you using? On what operating system?
red-hat enterprise linux (2.6.32-279.19.1.el6.x86_64) on a sandy-bridge machine
(Intel(R) Xeon(R) CPU E5-2670)
Please provide any additional information below.
Question:
Is this library supported only beyond a certain kernel version of linux? If so,
can you point at which one? Are there kernel modules that can be loaded instead?
Thanks a lot
```
Original issue reported on code.google.com by `akshay.v...@gmail.com` on 6 Jun 2014 at 8:42
|
non_process
|
kernel requirements for the use of this library what steps will reproduce the problem i m running a red hat flavor of linux and trying to install libdvfs on a sandy bridge machine intel r xeon r cpu the path sys devices system cpu cpu cpufreq scaling governor is missing what is the expected output what do you see instead expected install to complete but i see the missing path instead what version of the product are you using on what operating system red hat enterprise linux on a sandy bridge machine intel r xeon r cpu please provide any additional information below question is this library supported only beyond a certain kernel version of linux if so can you point at which one are there kernel modules that can be loaded instead thanks a lot original issue reported on code google com by akshay v gmail com on jun at
| 0
|
267,132
| 23,284,269,751
|
IssuesEvent
|
2022-08-05 14:55:49
|
dask/distributed
|
https://api.github.com/repos/dask/distributed
|
opened
|
Flaky `distributed/tests/test_nanny.py::test_repeated_restarts`
|
flaky test
|
I saw this in https://github.com/dask/distributed/pull/6829#issuecomment-1205634945 as well. Good news, it's not caused by that PR, bad news, it seems to actually be flaky. I thought I might have fixed this in https://github.com/dask/distributed/pull/6823, but apparently not. This one is a real normal restart failure: `Waited for 2 worker(s) to reconnect after restarting, but after 20s, only 1 have returned`.
The `Worker process still alive after 15.999998664855958 seconds, killing` seems a little concerning. It's possible this test needs to be rewritten after https://github.com/dask/distributed/pull/6504, since before it probably didn't actually care if the workers shut down on time.
cc @hendrikmakait since I'm curious if https://github.com/dask/distributed/pull/6427 would help here—that cleans up the `Nanny.kill` implementation a bit.
```
____________________________ test_repeated_restarts ____________________________
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
distributed/utils_test.py:1111: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed/utils_test.py:376: in _run_and_close_tornado
return asyncio.run(inner_fn())
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/asyncio/runners.py:44: in run
return loop.run_until_complete(main)
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/asyncio/base_events.py:634: in run_until_complete
self.run_forever()
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/asyncio/base_events.py:601: in run_forever
self._run_once()
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/asyncio/base_events.py:1869: in _run_once
event_list = self._selector.select(timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <selectors.EpollSelector object at 0x7f3a87ff0f40>, timeout = 0.065
def select(self, timeout=None):
if timeout is None:
timeout = -1
elif timeout <= 0:
timeout = 0
else:
# epoll_wait() has a resolution of 1 millisecond, round away
# from zero to wait *at least* timeout seconds.
timeout = math.ceil(timeout * 1e3) * 1e-3
# epoll_wait() expects `maxevents` to be greater than zero;
# we want to make sure that `select()` can be called when no
# FD is registered.
max_ev = max(len(self._fd_to_key), 1)
ready = []
try:
> fd_event_list = self._selector.poll(timeout, max_ev)
E Failed: Timeout >300.0s
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/selectors.py:469: Failed
----------------------------- Captured stdout call -----------------------------
Dumped cluster state to test_cluster_dump/test_repeated_restarts.yaml
----------------------------- Captured stderr call -----------------------------
2022-08-05 06:16:59,283 - distributed.scheduler - INFO - State start
2022-08-05 06:16:59,285 - distributed.scheduler - INFO - Clear task state
2022-08-05 06:16:59,286 - distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:45653
2022-08-05 06:16:59,286 - distributed.scheduler - INFO - dashboard at: 127.0.0.1:42243
2022-08-05 06:16:59,298 - distributed.nanny - INFO - Start Nanny at: 'tcp://127.0.0.1:35423'
2022-08-05 06:16:59,299 - distributed.nanny - INFO - Start Nanny at: 'tcp://127.0.0.1:34089'
2022-08-05 06:17:00,590 - distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42031
2022-08-05 06:17:00,590 - distributed.worker - INFO - Listening to: tcp://127.0.0.1:42031
2022-08-05 06:17:00,590 - distributed.worker - INFO - dashboard at: 127.0.0.1:44619
2022-08-05 06:17:00,590 - distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45653
2022-08-05 06:17:00,590 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:00,590 - distributed.worker - INFO - Threads: 2
2022-08-05 06:17:00,591 - distributed.worker - INFO - Memory: 6.78 GiB
2022-08-05 06:17:00,591 - distributed.worker - INFO - Local Directory: /tmp/dask-worker-space/worker-16dathge
2022-08-05 06:17:00,591 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:00,610 - distributed.worker - INFO - Start worker at: tcp://127.0.0.1:37995
2022-08-05 06:17:00,610 - distributed.worker - INFO - Listening to: tcp://127.0.0.1:37995
2022-08-05 06:17:00,611 - distributed.worker - INFO - dashboard at: 127.0.0.1:37315
2022-08-05 06:17:00,611 - distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45653
2022-08-05 06:17:00,611 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:00,611 - distributed.worker - INFO - Threads: 1
2022-08-05 06:17:00,611 - distributed.worker - INFO - Memory: 6.78 GiB
2022-08-05 06:17:00,611 - distributed.worker - INFO - Local Directory: /tmp/dask-worker-space/worker-z6_9djz4
2022-08-05 06:17:00,611 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:01,027 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42031', name: 1, status: init, memory: 0, processing: 0>
2022-08-05 06:17:01,028 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42031
2022-08-05 06:17:01,028 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:01,029 - distributed.worker - INFO - Registered to: tcp://127.0.0.1:45653
2022-08-05 06:17:01,030 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:01,030 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:01,059 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37995', name: 0, status: init, memory: 0, processing: 0>
2022-08-05 06:17:01,059 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37995
2022-08-05 06:17:01,060 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:01,060 - distributed.worker - INFO - Registered to: tcp://127.0.0.1:45653
2022-08-05 06:17:01,060 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:01,061 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:01,090 - distributed.scheduler - INFO - Receive client connection: Client-37aff111-1486-11ed-8acf-000d3aec3edc
2022-08-05 06:17:01,091 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:01,095 - distributed.scheduler - INFO - Releasing all requested keys
2022-08-05 06:17:01,095 - distributed.scheduler - INFO - Clear task state
2022-08-05 06:17:01,108 - distributed.nanny - INFO - Nanny asking worker to close
2022-08-05 06:17:01,109 - distributed.nanny - INFO - Nanny asking worker to close
2022-08-05 06:17:01,111 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37995
2022-08-05 06:17:01,112 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42031
2022-08-05 06:17:01,113 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c2d1f8e0-49c4-4e4e-bb40-2c755ee8617d Address tcp://127.0.0.1:37995 Status: Status.closing
2022-08-05 06:17:01,114 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-17093a9c-4d9b-472f-89df-0c14cfb9fe80 Address tcp://127.0.0.1:42031 Status: Status.closing
2022-08-05 06:17:01,128 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37995', name: 0, status: closing, memory: 0, processing: 0>
2022-08-05 06:17:01,128 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37995
2022-08-05 06:17:01,129 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42031', name: 1, status: closing, memory: 0, processing: 0>
2022-08-05 06:17:01,129 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42031
2022-08-05 06:17:01,129 - distributed.scheduler - INFO - Lost all workers
2022-08-05 06:17:01,284 - distributed.nanny - WARNING - Restarting worker
2022-08-05 06:17:02,547 - distributed.worker - INFO - Start worker at: tcp://127.0.0.1:38789
2022-08-05 06:17:02,547 - distributed.worker - INFO - Listening to: tcp://127.0.0.1:38789
2022-08-05 06:17:02,547 - distributed.worker - INFO - dashboard at: 127.0.0.1:42551
2022-08-05 06:17:02,547 - distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45653
2022-08-05 06:17:02,547 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:02,547 - distributed.worker - INFO - Threads: 1
2022-08-05 06:17:02,547 - distributed.worker - INFO - Memory: 6.78 GiB
2022-08-05 06:17:02,548 - distributed.worker - INFO - Local Directory: /tmp/dask-worker-space/worker-gdowzjl6
2022-08-05 06:17:02,548 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:02,983 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38789', name: 0, status: init, memory: 0, processing: 0>
2022-08-05 06:17:02,984 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38789
2022-08-05 06:17:02,985 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:02,985 - distributed.worker - INFO - Registered to: tcp://127.0.0.1:45653
2022-08-05 06:17:02,985 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:02,986 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:17,132 - distributed.nanny - WARNING - Worker process still alive after 15.999998664855958 seconds, killing
2022-08-05 06:17:21,152 - distributed.core - ERROR - Waited for 2 worker(s) to reconnect after restarting, but after 20s, only 1 have returned. Consider a longer timeout, or `wait_for_workers=False`.
Traceback (most recent call last):
File "/home/runner/work/distributed/distributed/distributed/utils.py", line 799, in wrapper
return await func(*args, **kwargs)
File "/home/runner/work/distributed/distributed/distributed/scheduler.py", line 5288, in restart
raise TimeoutError(msg) from None
asyncio.exceptions.TimeoutError: Waited for 2 worker(s) to reconnect after restarting, but after 20s, only 1 have returned. Consider a longer timeout, or `wait_for_workers=False`.
2022-08-05 06:17:21,153 - distributed.core - ERROR - Exception while handling op restart
Traceback (most recent call last):
File "/home/runner/work/distributed/distributed/distributed/core.py", line 769, in _handle_comm
result = await result
File "/home/runner/work/distributed/distributed/distributed/utils.py", line 799, in wrapper
return await func(*args, **kwargs)
File "/home/runner/work/distributed/distributed/distributed/scheduler.py", line 5288, in restart
raise TimeoutError(msg) from None
asyncio.exceptions.TimeoutError: Waited for 2 worker(s) to reconnect after restarting, but after 20s, only 1 have returned. Consider a longer timeout, or `wait_for_workers=False`.
2022-08-05 06:17:59,283 - distributed.scheduler - INFO - Remove client Client-37aff111-1486-11ed-8acf-000d3aec3edc
2022-08-05 06:17:59,283 - distributed.scheduler - INFO - Remove client Client-37aff111-1486-11ed-8acf-000d3aec3edc
2022-08-05 06:17:59,284 - distributed.scheduler - INFO - Close client connection: Client-37aff111-1486-11ed-8acf-000d3aec3edc
2022-08-05 06:17:59,285 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:35423'.
2022-08-05 06:17:59,285 - distributed.nanny - INFO - Nanny asking worker to close
2022-08-05 06:17:59,286 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:34089'.
2022-08-05 06:17:59,287 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38789
2022-08-05 06:17:59,289 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-18510263-0ed9-4644-89db-405e9a768c10 Address tcp://127.0.0.1:38789 Status: Status.closing
2022-08-05 06:17:59,289 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38789', name: 0, status: closing, memory: 0, processing: 0>
2022-08-05 06:17:59,289 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38789
2022-08-05 06:17:59,289 - distributed.scheduler - INFO - Lost all workers
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
Stack of AsyncProcess Dask Worker process (from Nanny) watch message queue (139887856572160)
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/home/runner/work/distributed/distributed/distributed/process.py", line 216, in _watch_message_queue
msg = q.get()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/queue.py", line 171, in get
self.not_empty.wait()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 312, in wait
waiter.acquire()
~~~~~~~~~~~~~~~~~~~~~ Stack of asyncio_1 (139888427005696) ~~~~~~~~~~~~~~~~~~~~~
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/concurrent/futures/thread.py", line 81, in _worker
work_item = work_queue.get(block=True)
~~~~~~~~~~~~~~~~~~~~~ Stack of asyncio_0 (139888158570240) ~~~~~~~~~~~~~~~~~~~~~
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/concurrent/futures/thread.py", line 81, in _worker
work_item = work_queue.get(block=True)
~~~~~~~~~~~~~~ Stack of Dask-Callback-Thread_0 (139887873353472) ~~~~~~~~~~~~~~~
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/concurrent/futures/thread.py", line 81, in _worker
work_item = work_queue.get(block=True)
~~~~~~~~~~~~~~~~~~ Stack of Dask-Offload_0 (139889176532736) ~~~~~~~~~~~~~~~~~~~
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/concurrent/futures/thread.py", line 81, in _worker
work_item = work_queue.get(block=True)
```
https://github.com/dask/distributed/runs/7685981380?check_suite_focus=true#step:11:1327
|
1.0
|
Flaky `distributed/tests/test_nanny.py::test_repeated_restarts` - I saw this in https://github.com/dask/distributed/pull/6829#issuecomment-1205634945 as well. Good news, it's not caused by that PR, bad news, it seems to actually be flaky. I thought I might have fixed this in https://github.com/dask/distributed/pull/6823, but apparently not. This one is a real normal restart failure: `Waited for 2 worker(s) to reconnect after restarting, but after 20s, only 1 have returned`.
The `Worker process still alive after 15.999998664855958 seconds, killing` seems a little concerning. It's possible this test needs to be rewritten after https://github.com/dask/distributed/pull/6504, since before it probably didn't actually care if the workers shut down on time.
cc @hendrikmakait since I'm curious if https://github.com/dask/distributed/pull/6427 would help here—that cleans up the `Nanny.kill` implementation a bit.
```
____________________________ test_repeated_restarts ____________________________
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
distributed/utils_test.py:1111: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed/utils_test.py:376: in _run_and_close_tornado
return asyncio.run(inner_fn())
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/asyncio/runners.py:44: in run
return loop.run_until_complete(main)
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/asyncio/base_events.py:634: in run_until_complete
self.run_forever()
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/asyncio/base_events.py:601: in run_forever
self._run_once()
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/asyncio/base_events.py:1869: in _run_once
event_list = self._selector.select(timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <selectors.EpollSelector object at 0x7f3a87ff0f40>, timeout = 0.065
def select(self, timeout=None):
if timeout is None:
timeout = -1
elif timeout <= 0:
timeout = 0
else:
# epoll_wait() has a resolution of 1 millisecond, round away
# from zero to wait *at least* timeout seconds.
timeout = math.ceil(timeout * 1e3) * 1e-3
# epoll_wait() expects `maxevents` to be greater than zero;
# we want to make sure that `select()` can be called when no
# FD is registered.
max_ev = max(len(self._fd_to_key), 1)
ready = []
try:
> fd_event_list = self._selector.poll(timeout, max_ev)
E Failed: Timeout >300.0s
/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/selectors.py:469: Failed
----------------------------- Captured stdout call -----------------------------
Dumped cluster state to test_cluster_dump/test_repeated_restarts.yaml
----------------------------- Captured stderr call -----------------------------
2022-08-05 06:16:59,283 - distributed.scheduler - INFO - State start
2022-08-05 06:16:59,285 - distributed.scheduler - INFO - Clear task state
2022-08-05 06:16:59,286 - distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:45653
2022-08-05 06:16:59,286 - distributed.scheduler - INFO - dashboard at: 127.0.0.1:42243
2022-08-05 06:16:59,298 - distributed.nanny - INFO - Start Nanny at: 'tcp://127.0.0.1:35423'
2022-08-05 06:16:59,299 - distributed.nanny - INFO - Start Nanny at: 'tcp://127.0.0.1:34089'
2022-08-05 06:17:00,590 - distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42031
2022-08-05 06:17:00,590 - distributed.worker - INFO - Listening to: tcp://127.0.0.1:42031
2022-08-05 06:17:00,590 - distributed.worker - INFO - dashboard at: 127.0.0.1:44619
2022-08-05 06:17:00,590 - distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45653
2022-08-05 06:17:00,590 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:00,590 - distributed.worker - INFO - Threads: 2
2022-08-05 06:17:00,591 - distributed.worker - INFO - Memory: 6.78 GiB
2022-08-05 06:17:00,591 - distributed.worker - INFO - Local Directory: /tmp/dask-worker-space/worker-16dathge
2022-08-05 06:17:00,591 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:00,610 - distributed.worker - INFO - Start worker at: tcp://127.0.0.1:37995
2022-08-05 06:17:00,610 - distributed.worker - INFO - Listening to: tcp://127.0.0.1:37995
2022-08-05 06:17:00,611 - distributed.worker - INFO - dashboard at: 127.0.0.1:37315
2022-08-05 06:17:00,611 - distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45653
2022-08-05 06:17:00,611 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:00,611 - distributed.worker - INFO - Threads: 1
2022-08-05 06:17:00,611 - distributed.worker - INFO - Memory: 6.78 GiB
2022-08-05 06:17:00,611 - distributed.worker - INFO - Local Directory: /tmp/dask-worker-space/worker-z6_9djz4
2022-08-05 06:17:00,611 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:01,027 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42031', name: 1, status: init, memory: 0, processing: 0>
2022-08-05 06:17:01,028 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42031
2022-08-05 06:17:01,028 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:01,029 - distributed.worker - INFO - Registered to: tcp://127.0.0.1:45653
2022-08-05 06:17:01,030 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:01,030 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:01,059 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37995', name: 0, status: init, memory: 0, processing: 0>
2022-08-05 06:17:01,059 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37995
2022-08-05 06:17:01,060 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:01,060 - distributed.worker - INFO - Registered to: tcp://127.0.0.1:45653
2022-08-05 06:17:01,060 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:01,061 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:01,090 - distributed.scheduler - INFO - Receive client connection: Client-37aff111-1486-11ed-8acf-000d3aec3edc
2022-08-05 06:17:01,091 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:01,095 - distributed.scheduler - INFO - Releasing all requested keys
2022-08-05 06:17:01,095 - distributed.scheduler - INFO - Clear task state
2022-08-05 06:17:01,108 - distributed.nanny - INFO - Nanny asking worker to close
2022-08-05 06:17:01,109 - distributed.nanny - INFO - Nanny asking worker to close
2022-08-05 06:17:01,111 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37995
2022-08-05 06:17:01,112 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42031
2022-08-05 06:17:01,113 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-c2d1f8e0-49c4-4e4e-bb40-2c755ee8617d Address tcp://127.0.0.1:37995 Status: Status.closing
2022-08-05 06:17:01,114 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-17093a9c-4d9b-472f-89df-0c14cfb9fe80 Address tcp://127.0.0.1:42031 Status: Status.closing
2022-08-05 06:17:01,128 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37995', name: 0, status: closing, memory: 0, processing: 0>
2022-08-05 06:17:01,128 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:37995
2022-08-05 06:17:01,129 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42031', name: 1, status: closing, memory: 0, processing: 0>
2022-08-05 06:17:01,129 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:42031
2022-08-05 06:17:01,129 - distributed.scheduler - INFO - Lost all workers
2022-08-05 06:17:01,284 - distributed.nanny - WARNING - Restarting worker
2022-08-05 06:17:02,547 - distributed.worker - INFO - Start worker at: tcp://127.0.0.1:38789
2022-08-05 06:17:02,547 - distributed.worker - INFO - Listening to: tcp://127.0.0.1:38789
2022-08-05 06:17:02,547 - distributed.worker - INFO - dashboard at: 127.0.0.1:42551
2022-08-05 06:17:02,547 - distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45653
2022-08-05 06:17:02,547 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:02,547 - distributed.worker - INFO - Threads: 1
2022-08-05 06:17:02,547 - distributed.worker - INFO - Memory: 6.78 GiB
2022-08-05 06:17:02,548 - distributed.worker - INFO - Local Directory: /tmp/dask-worker-space/worker-gdowzjl6
2022-08-05 06:17:02,548 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:02,983 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38789', name: 0, status: init, memory: 0, processing: 0>
2022-08-05 06:17:02,984 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38789
2022-08-05 06:17:02,985 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:02,985 - distributed.worker - INFO - Registered to: tcp://127.0.0.1:45653
2022-08-05 06:17:02,985 - distributed.worker - INFO - -------------------------------------------------
2022-08-05 06:17:02,986 - distributed.core - INFO - Starting established connection
2022-08-05 06:17:17,132 - distributed.nanny - WARNING - Worker process still alive after 15.999998664855958 seconds, killing
2022-08-05 06:17:21,152 - distributed.core - ERROR - Waited for 2 worker(s) to reconnect after restarting, but after 20s, only 1 have returned. Consider a longer timeout, or `wait_for_workers=False`.
Traceback (most recent call last):
File "/home/runner/work/distributed/distributed/distributed/utils.py", line 799, in wrapper
return await func(*args, **kwargs)
File "/home/runner/work/distributed/distributed/distributed/scheduler.py", line 5288, in restart
raise TimeoutError(msg) from None
asyncio.exceptions.TimeoutError: Waited for 2 worker(s) to reconnect after restarting, but after 20s, only 1 have returned. Consider a longer timeout, or `wait_for_workers=False`.
2022-08-05 06:17:21,153 - distributed.core - ERROR - Exception while handling op restart
Traceback (most recent call last):
File "/home/runner/work/distributed/distributed/distributed/core.py", line 769, in _handle_comm
result = await result
File "/home/runner/work/distributed/distributed/distributed/utils.py", line 799, in wrapper
return await func(*args, **kwargs)
File "/home/runner/work/distributed/distributed/distributed/scheduler.py", line 5288, in restart
raise TimeoutError(msg) from None
asyncio.exceptions.TimeoutError: Waited for 2 worker(s) to reconnect after restarting, but after 20s, only 1 have returned. Consider a longer timeout, or `wait_for_workers=False`.
2022-08-05 06:17:59,283 - distributed.scheduler - INFO - Remove client Client-37aff111-1486-11ed-8acf-000d3aec3edc
2022-08-05 06:17:59,283 - distributed.scheduler - INFO - Remove client Client-37aff111-1486-11ed-8acf-000d3aec3edc
2022-08-05 06:17:59,284 - distributed.scheduler - INFO - Close client connection: Client-37aff111-1486-11ed-8acf-000d3aec3edc
2022-08-05 06:17:59,285 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:35423'.
2022-08-05 06:17:59,285 - distributed.nanny - INFO - Nanny asking worker to close
2022-08-05 06:17:59,286 - distributed.nanny - INFO - Closing Nanny at 'tcp://127.0.0.1:34089'.
2022-08-05 06:17:59,287 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38789
2022-08-05 06:17:59,289 - distributed.worker - INFO - Connection to scheduler broken. Closing without reporting. ID: Worker-18510263-0ed9-4644-89db-405e9a768c10 Address tcp://127.0.0.1:38789 Status: Status.closing
2022-08-05 06:17:59,289 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38789', name: 0, status: closing, memory: 0, processing: 0>
2022-08-05 06:17:59,289 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:38789
2022-08-05 06:17:59,289 - distributed.scheduler - INFO - Lost all workers
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
Stack of AsyncProcess Dask Worker process (from Nanny) watch message queue (139887856572160)
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/home/runner/work/distributed/distributed/distributed/process.py", line 216, in _watch_message_queue
msg = q.get()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/queue.py", line 171, in get
self.not_empty.wait()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 312, in wait
waiter.acquire()
~~~~~~~~~~~~~~~~~~~~~ Stack of asyncio_1 (139888427005696) ~~~~~~~~~~~~~~~~~~~~~
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/concurrent/futures/thread.py", line 81, in _worker
work_item = work_queue.get(block=True)
~~~~~~~~~~~~~~~~~~~~~ Stack of asyncio_0 (139888158570240) ~~~~~~~~~~~~~~~~~~~~~
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/concurrent/futures/thread.py", line 81, in _worker
work_item = work_queue.get(block=True)
~~~~~~~~~~~~~~ Stack of Dask-Callback-Thread_0 (139887873353472) ~~~~~~~~~~~~~~~
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/concurrent/futures/thread.py", line 81, in _worker
work_item = work_queue.get(block=True)
~~~~~~~~~~~~~~~~~~ Stack of Dask-Offload_0 (139889176532736) ~~~~~~~~~~~~~~~~~~~
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 937, in _bootstrap
self._bootstrap_inner()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 980, in _bootstrap_inner
self.run()
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "/usr/share/miniconda3/envs/dask-distributed/lib/python3.9/concurrent/futures/thread.py", line 81, in _worker
work_item = work_queue.get(block=True)
```
https://github.com/dask/distributed/runs/7685981380?check_suite_focus=true#step:11:1327
|
non_process
|
flaky distributed tests test nanny py test repeated restarts i saw this in as well good news it s not caused by that pr bad news it seems to actually be flaky i thought i might have fixed this in but apparently not this one is a real normal restart failure waited for worker s to reconnect after restarting but after only have returned the worker process still alive after seconds killing seems a little concerning it s possible this test needs to be rewritten after since before it probably didn t actually care if the workers shut down on time cc hendrikmakait since i m curious if would help here—that cleans up the nanny kill implementation a bit test repeated restarts args kwds wraps func def inner args kwds with self recreate cm return func args kwds usr share envs dask distributed lib contextlib py distributed utils test py in test func return run and close tornado async fn outer distributed utils test py in run and close tornado return asyncio run inner fn usr share envs dask distributed lib asyncio runners py in run return loop run until complete main usr share envs dask distributed lib asyncio base events py in run until complete self run forever usr share envs dask distributed lib asyncio base events py in run forever self run once usr share envs dask distributed lib asyncio base events py in run once event list self selector select timeout self timeout def select self timeout none if timeout is none timeout elif timeout timeout else epoll wait has a resolution of millisecond round away from zero to wait at least timeout seconds timeout math ceil timeout epoll wait expects maxevents to be greater than zero we want to make sure that select can be called when no fd is registered max ev max len self fd to key ready try fd event list self selector poll timeout max ev e failed timeout usr share envs dask distributed lib selectors py failed captured stdout call dumped cluster state to test cluster dump test repeated restarts yaml captured stderr call distributed 
scheduler info state start distributed scheduler info clear task state distributed scheduler info scheduler at tcp distributed scheduler info dashboard at distributed nanny info start nanny at tcp distributed nanny info start nanny at tcp distributed worker info start worker at tcp distributed worker info listening to tcp distributed worker info dashboard at distributed worker info waiting to connect to tcp distributed worker info distributed worker info threads distributed worker info memory gib distributed worker info local directory tmp dask worker space worker distributed worker info distributed worker info start worker at tcp distributed worker info listening to tcp distributed worker info dashboard at distributed worker info waiting to connect to tcp distributed worker info distributed worker info threads distributed worker info memory gib distributed worker info local directory tmp dask worker space worker distributed worker info distributed scheduler info register worker distributed scheduler info starting worker compute stream tcp distributed core info starting established connection distributed worker info registered to tcp distributed worker info distributed core info starting established connection distributed scheduler info register worker distributed scheduler info starting worker compute stream tcp distributed core info starting established connection distributed worker info registered to tcp distributed worker info distributed core info starting established connection distributed scheduler info receive client connection client distributed core info starting established connection distributed scheduler info releasing all requested keys distributed scheduler info clear task state distributed nanny info nanny asking worker to close distributed nanny info nanny asking worker to close distributed worker info stopping worker at tcp distributed worker info stopping worker at tcp distributed worker info connection to scheduler broken closing without 
reporting id worker address tcp status status closing distributed worker info connection to scheduler broken closing without reporting id worker address tcp status status closing distributed scheduler info remove worker distributed core info removing comms to tcp distributed scheduler info remove worker distributed core info removing comms to tcp distributed scheduler info lost all workers distributed nanny warning restarting worker distributed worker info start worker at tcp distributed worker info listening to tcp distributed worker info dashboard at distributed worker info waiting to connect to tcp distributed worker info distributed worker info threads distributed worker info memory gib distributed worker info local directory tmp dask worker space worker distributed worker info distributed scheduler info register worker distributed scheduler info starting worker compute stream tcp distributed core info starting established connection distributed worker info registered to tcp distributed worker info distributed core info starting established connection distributed nanny warning worker process still alive after seconds killing distributed core error waited for worker s to reconnect after restarting but after only have returned consider a longer timeout or wait for workers false traceback most recent call last file home runner work distributed distributed distributed utils py line in wrapper return await func args kwargs file home runner work distributed distributed distributed scheduler py line in restart raise timeouterror msg from none asyncio exceptions timeouterror waited for worker s to reconnect after restarting but after only have returned consider a longer timeout or wait for workers false distributed core error exception while handling op restart traceback most recent call last file home runner work distributed distributed distributed core py line in handle comm result await result file home runner work distributed distributed distributed utils py line 
in wrapper return await func args kwargs file home runner work distributed distributed distributed scheduler py line in restart raise timeouterror msg from none asyncio exceptions timeouterror waited for worker s to reconnect after restarting but after only have returned consider a longer timeout or wait for workers false distributed scheduler info remove client client distributed scheduler info remove client client distributed scheduler info close client connection client distributed nanny info closing nanny at tcp distributed nanny info nanny asking worker to close distributed nanny info closing nanny at tcp distributed worker info stopping worker at tcp distributed worker info connection to scheduler broken closing without reporting id worker address tcp status status closing distributed scheduler info remove worker distributed core info removing comms to tcp distributed scheduler info lost all workers timeout stack of asyncprocess dask worker process from nanny watch message queue file usr share envs dask distributed lib threading py line in bootstrap self bootstrap inner file usr share envs dask distributed lib threading py line in bootstrap inner self run file usr share envs dask distributed lib threading py line in run self target self args self kwargs file home runner work distributed distributed distributed process py line in watch message queue msg q get file usr share envs dask distributed lib queue py line in get self not empty wait file usr share envs dask distributed lib threading py line in wait waiter acquire stack of asyncio file usr share envs dask distributed lib threading py line in bootstrap self bootstrap inner file usr share envs dask distributed lib threading py line in bootstrap inner self run file usr share envs dask distributed lib threading py line in run self target self args self kwargs file usr share envs dask distributed lib concurrent futures thread py line in worker work item work queue get block true stack of asyncio file usr 
share envs dask distributed lib threading py line in bootstrap self bootstrap inner file usr share envs dask distributed lib threading py line in bootstrap inner self run file usr share envs dask distributed lib threading py line in run self target self args self kwargs file usr share envs dask distributed lib concurrent futures thread py line in worker work item work queue get block true stack of dask callback thread file usr share envs dask distributed lib threading py line in bootstrap self bootstrap inner file usr share envs dask distributed lib threading py line in bootstrap inner self run file usr share envs dask distributed lib threading py line in run self target self args self kwargs file usr share envs dask distributed lib concurrent futures thread py line in worker work item work queue get block true stack of dask offload file usr share envs dask distributed lib threading py line in bootstrap self bootstrap inner file usr share envs dask distributed lib threading py line in bootstrap inner self run file usr share envs dask distributed lib threading py line in run self target self args self kwargs file usr share envs dask distributed lib concurrent futures thread py line in worker work item work queue get block true
| 0
|
14,808
| 18,110,508,221
|
IssuesEvent
|
2021-09-23 02:48:16
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
TLS parsing not working with mod_gnutls
|
log-processing
|
Hi!
I noticed that TLS parsing doesn't work with my Apache server. I'm using mod_gnutls and it outputs this in access.log
```
"TLS1.3" "ECDHE_RSA_CHACHA20_POLY1305"
```
Log-format is
```
log-format %v %h %^[%d %t.%f %^%^] "%r" %s %b "%K" "%k" "%R" "%u"
date-format %Y-%m-%d
time-format %H:%M:%S
```
Looking at #1967 I think there is a missing 'v' in TLS1.3, and the cipher suites use '_' instead of '-'.
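Until GoAccess accepts these variants natively, one workaround is to pre-process the log before feeding it in. Below is a minimal, hypothetical Python sketch (not part of GoAccess) that rewrites the mod_gnutls forms into the OpenSSL-style spellings the `%K`/`%k` specifiers were written against:

```python
import re

def normalize_gnutls_tls_fields(line: str) -> str:
    """Rewrite mod_gnutls-style "%K"/"%k" values into the OpenSSL-style
    forms discussed in #1967 (a pre-processing workaround sketch, not
    part of GoAccess itself)."""
    # "TLS1.3" -> "TLSv1.3": insert the missing 'v' after the protocol name.
    line = re.sub(r'"TLS(\d+(?:\.\d+)?)"', r'"TLSv\1"', line)
    # "ECDHE_RSA_CHACHA20_POLY1305" -> "ECDHE-RSA-CHACHA20-POLY1305":
    # cipher-suite tokens use '-' in the OpenSSL spelling, not '_'.
    line = re.sub(
        r'"([A-Z0-9]+(?:_[A-Z0-9]+)+)"',
        lambda m: '"' + m.group(1).replace("_", "-") + '"',
        line,
    )
    return line

print(normalize_gnutls_tls_fields('"TLS1.3" "ECDHE_RSA_CHACHA20_POLY1305"'))
# -> "TLSv1.3" "ECDHE-RSA-CHACHA20-POLY1305"
```

Piped through this filter, the same log-format line should then parse without changes on the GoAccess side.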
|
1.0
|
TLS parsing not working with mod_gnutls - Hi!
I noticed that TLS parsing doesn't work with my Apache server. I'm using mod_gnutls and it outputs this in access.log
```
"TLS1.3" "ECDHE_RSA_CHACHA20_POLY1305"
```
Log-format is
```
log-format %v %h %^[%d %t.%f %^%^] "%r" %s %b "%K" "%k" "%R" "%u"
date-format %Y-%m-%d
time-format %H:%M:%S
```
Looking at #1967 I think there is a missing 'v' in TLS1.3, and the cipher suites use '_' instead of '-'.
|
process
|
tls parsing not working with mod gnutls hi i noticed that tls parsing doesn t work with my apache server i m using mod gnutls and it outputs this in access log ecdhe rsa log format is log format v h r s b k k r u date format y m d time format h m s looking at i think there is a missing v in and the cipher suites uses instead of
| 1
|
8,725
| 11,861,546,615
|
IssuesEvent
|
2020-03-25 16:30:27
|
ZenHubHQ/george
|
https://api.github.com/repos/ZenHubHQ/george
|
closed
|
Begin developing a curriculum to up-level the team's knowledge of Agile development - content should be used for Webinars as well
|
Internal Process
|
**As Measured By:**
- [ ]
- [ ]
- [ ]
|
1.0
|
Begin developing a curriculum to up-level the team's knowledge of Agile development - content should be used for Webinars as well - **As Measured By:**
- [ ]
- [ ]
- [ ]
|
process
|
begin developing a curriculum to up level the teams knowledge of agile development content should be used for webinars as well as measured by
| 1
|
21,957
| 30,453,753,245
|
IssuesEvent
|
2023-07-16 16:18:49
|
winter-telescope/mirar
|
https://api.github.com/repos/winter-telescope/mirar
|
opened
|
[BUG] Unclosed resource in AstrometryStatsWriter
|
bug processors
|
**Describe the bug**
I'm not sure what is causing this, but testing with winter I get many many warnings of the sort:
`/Users/robertstein/anaconda3/envs/mirar/lib/python3.11/socket.py:777: ResourceWarning: unclosed <ssl.SSLSocket fd=15, family=2, type=1, proto=0, laddr=('192.168.0.103', 52977), raddr=('193.147.152.106', 443)>
self._sock = None`
I think something is going wrong there, or in one of the code pieces imported by that processor. Might be astroquery-related.
|
1.0
|
[BUG] Unclosed resource in AstrometryStatsWriter - **Describe the bug**
I'm not sure what is causing this, but testing with winter I get many many warnings of the sort:
`/Users/robertstein/anaconda3/envs/mirar/lib/python3.11/socket.py:777: ResourceWarning: unclosed <ssl.SSLSocket fd=15, family=2, type=1, proto=0, laddr=('192.168.0.103', 52977), raddr=('193.147.152.106', 443)>
self._sock = None`
I think something is going wrong there, or in one of the code pieces imported by that processor. Might be astroquery-related.
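A general CPython technique for pinning down the origin of such leaks (nothing mirar-specific) is to record the ResourceWarnings deliberately and enable `tracemalloc`, which makes each warning carry the allocation traceback of the leaked object — i.e. the exact line that opened the socket. A minimal sketch:

```python
import gc
import socket
import tracemalloc
import warnings

def capture_resource_warnings(make_leak):
    """Create a resource via make_leak, drop the only reference to it, and
    return the ResourceWarnings emitted while it is finalized."""
    leaked = make_leak()
    with warnings.catch_warnings(record=True) as caught:
        # ResourceWarning is ignored by default; record every occurrence.
        warnings.simplefilter("always", ResourceWarning)
        del leaked   # refcount hits zero -> __del__ warns about the leak
        gc.collect() # also sweep objects kept alive by reference cycles
    return [w for w in caught if issubclass(w.category, ResourceWarning)]

if __name__ == "__main__":
    # With tracemalloc started, CPython attaches an allocation traceback
    # to each ResourceWarning, pointing at the code that opened the fd.
    tracemalloc.start()
    for w in capture_resource_warnings(
        lambda: socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    ):
        print(w.message)
```

Running the test suite with `python -W error::ResourceWarning -X tracemalloc=10 -m pytest` should achieve the same thing without code changes, turning each warning into a hard failure with the offending traceback.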
|
process
|
unclosed resource in astrometrystatswriter describe the bug i m not sure what is causing this but testing with winter i get many many warnings of the sort users robertstein envs mirar lib socket py resourcewarning unclosed self sock none i think something is going wrong there or in one of the code pieces imported by that processor might be astroquery related
| 1
|
1,090
| 3,560,311,488
|
IssuesEvent
|
2016-01-23 01:19:03
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
When joining in a way that returns columns with same name, columns are suffixed and missing metadata
|
Bug Query Processor
|
e.g. `name` and `name_2`. I believe this is done automatically by JDBC.
* Since we don't know the source of `name_2` the correct Metadata doesn't get returned along with it.
* It would be better to give it a name like `<table_name>.name`.
See also #1447.
|
1.0
|
When joining in a way that returns columns with same name, columns are suffixed and missing metadata - e.g. `name` and `name_2`. I believe this is done automatically by JDBC.
* Since we don't know the source of `name_2` the correct Metadata doesn't get returned along with it.
* It would be better to give it a name like `<table_name>.name`.
See also #1447.
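The proposed `<table_name>.name` scheme could be sketched like this (illustrative Python only, not Metabase's actual query-processor code) — qualify only the names that collide, leaving unique columns untouched:

```python
from collections import Counter

def disambiguate_columns(cols):
    """Given (table, column) pairs from a join, qualify only the colliding
    names as '<table_name>.name' instead of the opaque 'name_2' suffix a
    JDBC result set hands back (a sketch of the proposal, not Metabase's
    actual code)."""
    counts = Counter(name for _table, name in cols)
    return [f"{table}.{name}" if counts[name] > 1 else name
            for table, name in cols]

print(disambiguate_columns(
    [("venues", "name"), ("checkins", "id"), ("categories", "name")]
))
# -> ['venues.name', 'id', 'categories.name']
```

With the table qualifier preserved, the metadata lookup for each output column stays unambiguous, which is exactly what the `name_2` suffix destroys.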
|
process
|
when joining in a way that returns columns with same name columns are suffixed and missing metadata e g name and name i believe this is done automatically by jdbc since we don t know the source of name the correct metadata doesn t get returned along with it it would be better to give it a name like name see also
| 1
|
156,932
| 13,656,553,689
|
IssuesEvent
|
2020-09-28 03:05:26
|
Requisitos-de-Software/2020.1-GuardioesdaSaude
|
https://api.github.com/repos/Requisitos-de-Software/2020.1-GuardioesdaSaude
|
closed
|
Creation of the First Things First Document
|
documentation
|
### Create the First Things First document
Create the First Things First document for the analyzed app
This issue will deliver:
- The First Things First document for the analyzed app;
- Publishing the document on gh pages.
Acceptance criteria:
- [ ] First Things First created
|
1.0
|
Creation of the First Things First Document - ### Create the First Things First document
Create the First Things First document for the analyzed app
This issue will deliver:
- The First Things First document for the analyzed app;
- Publishing the document on gh pages.
Acceptance criteria:
- [ ] First Things First created
|
non_process
|
creation of the first things first document create the first things first document for the analyzed app this issue will deliver the first things first document for the analyzed app publishing the document on gh pages acceptance criteria first things first created
| 0
|
531,873
| 15,526,631,046
|
IssuesEvent
|
2021-03-13 02:01:28
|
linkerd/linkerd2
|
https://api.github.com/repos/linkerd/linkerd2
|
closed
|
Update documentation to remove references to .globals values
|
area/docs priority/P0
|
The Linkerd documentation contains references to the `globals` Helm values struct which has been removed.
For example: https://linkerd.io/2/tasks/install-helm/#helm-install-procedure
There are likely other places as well which need to be updated.
|
1.0
|
Update documentation to remove references to .globals values - The Linkerd documentation contains references to the `globals` Helm values struct which has been removed.
For example: https://linkerd.io/2/tasks/install-helm/#helm-install-procedure
There are likely other places as well which need to be updated.
|
non_process
|
update documentation to remove references to globals values the linkerd documentation contains references to the globals helm values struct which has been removed for example there are likely other places as well which need to be updated
| 0
|
221,314
| 24,611,240,286
|
IssuesEvent
|
2022-10-14 21:46:55
|
mendts-workshop2/TimFranklin
|
https://api.github.com/repos/mendts-workshop2/TimFranklin
|
opened
|
derby-10.8.3.0.jar: 2 vulnerabilities (highest severity is: 5.3)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>derby-10.8.3.0.jar</b></p></summary>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop2/TimFranklin/commit/d15bae22b9837020a3400632342347880fd21d8e">d15bae22b9837020a3400632342347880fd21d8e</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2018-1313](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1313) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | derby-10.8.3.0.jar | Direct | 10.14.2.0 | ✅ |
| [CVE-2015-1832](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.8 | derby-10.8.3.0.jar | Direct | 10.12.1.1 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-1313</summary>
### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **derby-10.8.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop2/TimFranklin/commit/d15bae22b9837020a3400632342347880fd21d8e">d15bae22b9837020a3400632342347880fd21d8e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Apache Derby 10.3.1.4 to 10.14.1.0, a specially-crafted network packet can be used to request the Derby Network Server to boot a database whose location and contents are under the user's control. If the Derby Network Server is not running with a Java Security Manager policy file, the attack is successful. If the server is using a policy file, the policy file must permit the database location to be read for the attack to work. The default Derby Network Server policy file distributed with the affected releases includes a permissive policy as the default Network Server policy, which allows the attack to work.
<p>Publish Date: 2018-05-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1313>CVE-2018-1313</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313</a></p>
<p>Release Date: 2018-05-07</p>
<p>Fix Resolution: 10.14.2.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2015-1832</summary>
### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **derby-10.8.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop2/TimFranklin/commit/d15bae22b9837020a3400632342347880fd21d8e">d15bae22b9837020a3400632342347880fd21d8e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby before 10.12.1.1, when a Java Security Manager is not in place, allows context-dependent attackers to read arbitrary files or cause a denial of service (resource consumption) via vectors involving XmlVTI and the XML datatype.
<p>Publish Date: 2016-10-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832>CVE-2015-1832</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832</a></p>
<p>Release Date: 2016-10-03</p>
<p>Fix Resolution: 10.12.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
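As an aside on the XXE class of bug described under CVE-2015-1832: a parser is safe against external entity expansion when it simply refuses to resolve external entities. Python's stdlib `xml.etree`, for instance, leaves them undefined and fails closed (illustrative only — unrelated to Derby's `SqlXmlUtil`):

```python
import xml.etree.ElementTree as ET

# Classic XXE payload: an external SYSTEM entity pointing at a local file.
XXE = (
    '<?xml version="1.0"?>'
    '<!DOCTYPE r [<!ENTITY x SYSTEM "file:///etc/passwd">]>'
    '<r>&x;</r>'
)

def parse_untrusted(xml_text: str):
    """Parse untrusted XML; ElementTree does not fetch external entities,
    so the &x; reference stays undefined and parsing fails closed."""
    try:
        return ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return f"rejected: {exc}"

print(parse_untrusted(XXE))
```

Derby's fix in 10.12.1.1 follows the same principle: the vulnerable code path stops resolving attacker-controlled entities rather than trying to sanitize their contents.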
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
True
|
derby-10.8.3.0.jar: 2 vulnerabilities (highest severity is: 5.3) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>derby-10.8.3.0.jar</b></p></summary>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop2/TimFranklin/commit/d15bae22b9837020a3400632342347880fd21d8e">d15bae22b9837020a3400632342347880fd21d8e</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2018-1313](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1313) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | derby-10.8.3.0.jar | Direct | 10.14.2.0 | ✅ |
| [CVE-2015-1832](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.8 | derby-10.8.3.0.jar | Direct | 10.12.1.1 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-1313</summary>
### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **derby-10.8.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop2/TimFranklin/commit/d15bae22b9837020a3400632342347880fd21d8e">d15bae22b9837020a3400632342347880fd21d8e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Apache Derby 10.3.1.4 to 10.14.1.0, a specially-crafted network packet can be used to request the Derby Network Server to boot a database whose location and contents are under the user's control. If the Derby Network Server is not running with a Java Security Manager policy file, the attack is successful. If the server is using a policy file, the policy file must permit the database location to be read for the attack to work. The default Derby Network Server policy file distributed with the affected releases includes a permissive policy as the default Network Server policy, which allows the attack to work.
<p>Publish Date: 2018-05-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1313>CVE-2018-1313</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313</a></p>
<p>Release Date: 2018-05-07</p>
<p>Fix Resolution: 10.14.2.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2015-1832</summary>
### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **derby-10.8.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop2/TimFranklin/commit/d15bae22b9837020a3400632342347880fd21d8e">d15bae22b9837020a3400632342347880fd21d8e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby before 10.12.1.1, when a Java Security Manager is not in place, allows context-dependent attackers to read arbitrary files or cause a denial of service (resource consumption) via vectors involving XmlVTI and the XML datatype.
<p>Publish Date: 2016-10-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832>CVE-2015-1832</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832</a></p>
<p>Release Date: 2016-10-03</p>
<p>Fix Resolution: 10.12.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
non_process
|
derby jar vulnerabilities highest severity is vulnerable library derby jar contains the core apache derby database engine which also includes the embedded jdbc driver path to dependency file pom xml path to vulnerable library epository org apache derby derby derby jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium derby jar direct medium derby jar direct details cve vulnerable library derby jar contains the core apache derby database engine which also includes the embedded jdbc driver path to dependency file pom xml path to vulnerable library epository org apache derby derby derby jar dependency hierarchy x derby jar vulnerable library found in head commit a href found in base branch master vulnerability details in apache derby to a specially crafted network packet can be used to request the derby network server to boot a database whose location and contents are under the user s control if the derby network server is not running with a java security manager policy file the attack is successful if the server is using a policy file the policy file must permit the database location to be read for the attack to work the default derby network server policy file distributed with the affected releases includes a permissive policy as the default network server policy which allows the attack to work publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library derby jar contains the core apache derby database engine which also includes the embedded jdbc driver path to dependency file pom xml path to vulnerable library epository org apache derby derby derby jar dependency hierarchy x derby jar vulnerable library found in head commit a href found in base branch master vulnerability details xml external entity xxe vulnerability in the sqlxmlutil code in apache derby before when a java security manager is not in place allows context dependent attackers to read arbitrary files or cause a denial of service resource consumption via vectors involving xmlvti and the xml datatype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue
| 0
|
816,766
| 30,611,339,519
|
IssuesEvent
|
2023-07-23 16:46:51
|
mojnp/mojnp
|
https://api.github.com/repos/mojnp/mojnp
|
opened
|
Translate every bit of content on all frontend applications from english to bosnian
|
enhancement help wanted UI PRIORITY patch frontend hotfix
|
This includes frontend application, partners application and super admin application
|
1.0
|
Translate every bit of content on all frontend applications from english to bosnian - This includes frontend application, partners application and super admin application
|
non_process
|
translate every bit of content on all frontend applications from english to bosnian this includes frontend application partners application and super admin application
| 0
|
62,092
| 3,171,849,879
|
IssuesEvent
|
2015-09-23 01:53:48
|
ChristianMurphy/nicest
|
https://api.github.com/repos/ChristianMurphy/nicest
|
opened
|
Code Project - Clearly show user warning text
|
bug Priority Medium
|
Currently it is hidden in the title, which requires mouse hover.
Find a better way to clearly communicate to the user what the issue is.
|
1.0
|
Code Project - Clearly show user warning text - Currently it is hidden in the title, which requires mouse hover.
Find a better way to clearly communicate to the user what the issue is.
|
non_process
|
code project clearly show user warning text currently it is hidden in the title which requires mouse hover find a better way to clearly communicate to the user what the issue is
| 0
|
6,023
| 8,823,770,959
|
IssuesEvent
|
2019-01-02 14:50:58
|
docker/docker.github.io
|
https://api.github.com/repos/docker/docker.github.io
|
closed
|
Add examples, more documentation around health checks
|
content/engine process/top25
|
### Problem description
Healthchecks are a powerful feature, but the documentation is currently sparse. There is some basic information on its usage in the [`builder documentation`](https://github.com/moby/moby/blob/83a4afe2645d2e39cf6b10a86e9e681a7ff06eb2/docs/reference/builder.md#healthcheck), and the flags are included in the [`docker container run`](https://github.com/moby/moby/blob/ebe0a489a5cc66406345f1aafa0f2f866fd64172/docs/reference/commandline/run.md) and [`docker service create`](https://github.com/moby/moby/blob/5fc912d2c87c1986e830a7d4f6b62bec385e9a14/docs/reference/commandline/service_create.md) command line reference, but no practical examples are included in the documentation.
### Suggestions for a fix
Docker 17.05 and up adds additional options for health checks (start period), and I wrote a comment on that pull request to illustrate the functionality which may be useful as a starting point https://github.com/moby/moby/pull/28938#issuecomment-301736698, https://github.com/moby/moby/issues/35881#issuecomment-356619230, and I saw @tomwillfixit has a demo repository to show health checks (https://github.com/tomwillfixit/healthcheck).
Those examples are very basic, but can be useful in itself (possibly we may want to add a "real" example as well).
/cc @mstanleyjones
also /cc @dongluochen @tianon who may be able to provide more information on this subject 👍
|
1.0
|
Add examples, more documentation around health checks - ### Problem description
Healthchecks are a powerful feature, but the documentation is currently sparse. There is some basic information on its usage in the [`builder documentation`](https://github.com/moby/moby/blob/83a4afe2645d2e39cf6b10a86e9e681a7ff06eb2/docs/reference/builder.md#healthcheck), and the flags are included in the [`docker container run`](https://github.com/moby/moby/blob/ebe0a489a5cc66406345f1aafa0f2f866fd64172/docs/reference/commandline/run.md) and [`docker service create`](https://github.com/moby/moby/blob/5fc912d2c87c1986e830a7d4f6b62bec385e9a14/docs/reference/commandline/service_create.md) command line reference, but no practical examples are included in the documentation.
### Suggestions for a fix
Docker 17.05 and up adds additional options for health checks (start period), and I wrote a comment on that pull request to illustrate the functionality which may be useful as a starting point https://github.com/moby/moby/pull/28938#issuecomment-301736698, https://github.com/moby/moby/issues/35881#issuecomment-356619230, and I saw @tomwillfixit has a demo repository to show health checks (https://github.com/tomwillfixit/healthcheck).
Those examples are very basic, but can be useful in itself (possibly we may want to add a "real" example as well).
/cc @mstanleyjones
also /cc @dongluochen @tianon who may be able to provide more information on this subject 👍
|
process
|
add examples more documentation around health checks problem description healthchecks are a powerful feature but the documentation is currently sparse there is some basic information on its usage in the and the flags are included in the and command line reference but no practical examples are included in the documentation suggestions for a fix docker and up adds additional options for health checks start period and i wrote a comment on that pull request to illustrate the functionality which may be useful as a starting point and i saw tomwillfixit has a demo repository to show health checks those examples are very basic but can be useful in itself possibly we may want to add a real example as well cc mstanleyjones also cc dongluochen tianon who may be able to provide more information on this subject 👍
| 1
|
20,019
| 26,491,811,777
|
IssuesEvent
|
2023-01-17 23:39:42
|
aolabNeuro/analyze
|
https://api.github.com/repos/aolabNeuro/analyze
|
closed
|
lfp samplerate vs samplerate
|
bug preprocessing
|
in the preprocessed lfp metadata we saved `samplerate` along with `lfp_samplerate`
the former is 25khz and the latter is 1khz. i think we should just name `samplerate` 1khz
|
1.0
|
lfp samplerate vs samplerate - in the preprocessed lfp metadata we saved `samplerate` along with `lfp_samplerate`
the former is 25khz and the latter is 1khz. i think we should just name `samplerate` 1khz
|
process
|
lfp samplerate vs samplerate in the preprocessed lfp metadata we saved samplerate along with lfp samplerate the former is and the latter is i think we should just name samplerate
| 1
|
11,817
| 14,632,440,018
|
IssuesEvent
|
2020-12-23 22:24:58
|
ClickHouse/ClickHouse
|
https://api.github.com/repos/ClickHouse/ClickHouse
|
closed
|
Logs in `checkSource` not consistent with code
|
comp-processors unexpected behaviour
|
**Describe the bug**
See the code in `Pipe.cpp`. The logic of `if` not consistent with the logs below. Can `checkSource` be passed when `source.getOutputs().size() == 2` ?
```
static void checkSource(const IProcessor & source)
{
......
if (source.getOutputs().size() > 1)
throw Exception("Source for pipe should have single or two outputs, but " + source.getName() + " has " +
toString(source.getOutputs().size()) + " outputs.", ErrorCodes::LOGICAL_ERROR);
}
```
|
1.0
|
Logs in `checkSource` not consistent with code -
**Describe the bug**
See the code in `Pipe.cpp`. The logic of `if` not consistent with the logs below. Can `checkSource` be passed when `source.getOutputs().size() == 2` ?
```
static void checkSource(const IProcessor & source)
{
......
if (source.getOutputs().size() > 1)
throw Exception("Source for pipe should have single or two outputs, but " + source.getName() + " has " +
toString(source.getOutputs().size()) + " outputs.", ErrorCodes::LOGICAL_ERROR);
}
```
|
process
|
logs in checksource not consistent with code describe the bug see the code in pipe cpp the logic of if not consistent with the logs below can checksource be passed when source getoutputs size static void checksource const iprocessor source if source getoutputs size throw exception source for pipe should have single or two outputs but source getname has tostring source getoutputs size outputs errorcodes logical error
| 1
|
12,148
| 14,741,384,730
|
IssuesEvent
|
2021-01-07 10:32:22
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
SA Billing - Speed-Ez - Invalid Late Fees
|
anc-process anp-important ant-bug
|
In GitLab by @kdjstudios on Jan 16, 2019, 08:25
**Submitted by:** Jo Ann Browne <joann@speedez.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6612343
**Server:** External
**Client/Site:** Speed Ez
**Account:** Multiple
**Issue:**
This is unbelievable and I believe not the first time. Has my other problems been addressed yet with account 1002. I will work on this LARGE list tomorrow
|
1.0
|
SA Billing - Speed-Ez - Invalid Late Fees - In GitLab by @kdjstudios on Jan 16, 2019, 08:25
**Submitted by:** Jo Ann Browne <joann@speedez.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6612343
**Server:** External
**Client/Site:** Speed Ez
**Account:** Multiple
**Issue:**
This is unbelievable and I believe not the first time. Has my other problems been addressed yet with account 1002. I will work on this LARGE list tomorrow
|
process
|
sa billing speed ez invalid late fees in gitlab by kdjstudios on jan submitted by jo ann browne helpdesk server external client site speed ez account multiple issue this is unbelievable and i believe not the first time has my other problems been addressed yet with account i will work on this large list tomorrow
| 1
|
761,135
| 26,668,370,260
|
IssuesEvent
|
2023-01-26 07:50:45
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
support.mozilla.org - desktop site instead of mobile site
|
browser-firefox-mobile priority-critical engine-gecko
|
<!-- @browser: Firefox Mobile 81.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:81.0) Gecko/81.0 Firefox/81.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/117348 -->
**URL**: https://support.mozilla.org/zh-CN/kb/firefox-drm?redirectslug=enable-drm&redirectlocale=zh-CN
**Browser / Version**: Firefox Mobile 81.0
**Operating System**: Android 10
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
oooooooooooooooooooooooooooooooooo
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200804091327</li><li>channel: nightly</li><li>hasTouchScreen: true</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
support.mozilla.org - desktop site instead of mobile site - <!-- @browser: Firefox Mobile 81.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:81.0) Gecko/81.0 Firefox/81.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/117348 -->
**URL**: https://support.mozilla.org/zh-CN/kb/firefox-drm?redirectslug=enable-drm&redirectlocale=zh-CN
**Browser / Version**: Firefox Mobile 81.0
**Operating System**: Android 10
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
oooooooooooooooooooooooooooooooooo
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200804091327</li><li>channel: nightly</li><li>hasTouchScreen: true</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
support mozilla org desktop site instead of mobile site url browser version firefox mobile operating system android tested another browser no problem type desktop site instead of mobile site description desktop site instead of mobile site steps to reproduce oooooooooooooooooooooooooooooooooo browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true from with ❤️
| 0
|
241,602
| 26,256,845,563
|
IssuesEvent
|
2023-01-06 02:02:46
|
mattdanielbrown/qcobjects-demo-app
|
https://api.github.com/repos/mattdanielbrown/qcobjects-demo-app
|
opened
|
CVE-2021-23383 (High) detected in handlebars-4.7.6.tgz
|
security vulnerability
|
## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.7.6.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.7.6.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.7.6.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- qcobjects-cli-0.1.141.tgz (Root Library)
- :x: **handlebars-4.7.6.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution (handlebars): 4.7.7</p>
<p>Direct dependency fix Resolution (qcobjects-cli): 0.1.143</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23383 (High) detected in handlebars-4.7.6.tgz - ## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.7.6.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.7.6.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.7.6.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- qcobjects-cli-0.1.141.tgz (Root Library)
- :x: **handlebars-4.7.6.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution (handlebars): 4.7.7</p>
<p>Direct dependency fix Resolution (qcobjects-cli): 0.1.143</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file package json path to vulnerable library node modules handlebars package json dependency hierarchy qcobjects cli tgz root library x handlebars tgz vulnerable library found in base branch master vulnerability details the package handlebars before are vulnerable to prototype pollution when selecting certain compiling options to compile templates coming from an untrusted source publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars direct dependency fix resolution qcobjects cli step up your open source security game with mend
| 0
|
3,751
| 6,733,153,386
|
IssuesEvent
|
2017-10-18 14:00:31
|
york-region-tpss/stp
|
https://api.github.com/repos/york-region-tpss/stp
|
closed
|
Tree Planting Deficiency Dashboard - Process Snapshot
|
enhancement process workflow
|
Allow user to create a snapshot of current data and store in database for historical records
|
1.0
|
Tree Planting Deficiency Dashboard - Process Snapshot - Allow user to create a snapshot of current data and store in database for historical records
|
process
|
tree planting deficiency dashboard process snapshot allow user to create a snapshot of current data and store in database for historical records
| 1
|
134,432
| 5,226,331,177
|
IssuesEvent
|
2017-01-27 21:02:06
|
magnolo/newhere
|
https://api.github.com/repos/magnolo/newhere
|
closed
|
Map view responsiveness
|
bug front end medium priority
|
UNSURE IF RESPONSIVENESS OR OTHER BUG - see issue #142
Sometimes the map view takes a long time to catch up with the users selection of categories and offers, sometimes never showing the correct map view for the users selection:

Chrome Version 54.0.2840.59 beta (64-bit)/ OS X El Capitan Version 10.11.6
|
1.0
|
Map view responsiveness - UNSURE IF RESPONSIVENESS OR OTHER BUG - see issue #142
Sometimes the map view takes a long time to catch up with the users selection of categories and offers, sometimes never showing the correct map view for the users selection:

Chrome Version 54.0.2840.59 beta (64-bit)/ OS X El Capitan Version 10.11.6
|
non_process
|
map view responsiveness unsure if responsiveness or other bug see issue sometimes the map view takes a long time to catch up with the users selection of categories and offers sometimes never showing the correct map view for the users selection chrome version beta bit os x el capitan version
| 0
|
190,549
| 15,242,879,946
|
IssuesEvent
|
2021-02-19 10:28:12
|
jenkinsci/kubernetes-operator
|
https://api.github.com/repos/jenkinsci/kubernetes-operator
|
closed
|
Usage of getLatestAction unclear/wrong
|
bug documentation
|
**Describe the bug**
In v0.5.0 the backup/restore was updated to (optionally?) make use of some 'getLatestAction' configuration and script
- Documentation says this should be set on 'spec.backup.getLatestAction' https://github.com/jenkinsci/kubernetes-operator/blob/master/website/content/en/docs/Getting%20Started/latest/configure-backup-and-restore.md
- we see a log message that says 'spec.restore.getLatestAction' is missing, coming from here https://github.com/jenkinsci/kubernetes-operator/blob/a78b9ea77be7e88994d0a9e7af6248e22625bd73/pkg/configuration/backuprestore/backuprestore.go#L167
- we see the spec/type to be set on spec.restore https://github.com/jenkinsci/kubernetes-operator/blob/master/pkg/apis/jenkins/v1alpha2/jenkins_types.go#L606
So, what's wrong here?
Maybe it's just documentation?
Also, it seems like there is a corresponding value in the helm chart. But is it used anywhere? Doesn't look like it to me
https://github.com/jenkinsci/kubernetes-operator/blob/a78b9ea77be7e88994d0a9e7af6248e22625bd73/chart/jenkins-operator/values.yaml#L204
**To Reproduce**
upgrade to v0.5.0 and update configuration to latest best practice
|
1.0
|
Usage of getLatestAction unclear/wrong - **Describe the bug**
In v0.5.0 the backup/restore was updated to (optionally?) make use of some 'getLatestAction' configuration and script
- Documentation says this should be set on 'spec.backup.getLatestAction' https://github.com/jenkinsci/kubernetes-operator/blob/master/website/content/en/docs/Getting%20Started/latest/configure-backup-and-restore.md
- we see a log message that says 'spec.restore.getLatestAction' is missing, coming from here https://github.com/jenkinsci/kubernetes-operator/blob/a78b9ea77be7e88994d0a9e7af6248e22625bd73/pkg/configuration/backuprestore/backuprestore.go#L167
- we see the spec/type to be set on spec.restore https://github.com/jenkinsci/kubernetes-operator/blob/master/pkg/apis/jenkins/v1alpha2/jenkins_types.go#L606
So, what's wrong here?
Maybe it's just documentation?
Also, it seems like there is a corresponding value in the helm chart. But is it used anywhere? Doesn't look like it to me
https://github.com/jenkinsci/kubernetes-operator/blob/a78b9ea77be7e88994d0a9e7af6248e22625bd73/chart/jenkins-operator/values.yaml#L204
**To Reproduce**
upgrade to v0.5.0 and update configuration to latest best practice
|
non_process
|
usage of getlatestaction unclear wrong describe the bug in the backup restore was updated to optionally make use of some getlatestaction configuration and script documentation says this should be set on spec backup getlatestaction we see a log message that says spec restore getlatestaction is missing coming from here we see the spec type to be set on spec restore so what s wrong here maybe it s just documentation also it seems like there is a corresponding value in the helm chart but is it used anywhere doesn t look like it to me to reproduce upgrade to and update configuration to latest best practice
| 0
|
17,311
| 23,132,015,879
|
IssuesEvent
|
2022-07-28 11:15:43
|
mdsreq-fga-unb/2022.1-Meio-a-Meio
|
https://api.github.com/repos/mdsreq-fga-unb/2022.1-Meio-a-Meio
|
closed
|
Falta de Padrão
|
Processo de Requisitos Processo de Desenvolvimento
|
**Descrição**
vcs deveriam entender o processo como uma coisa só, padronizada, não separada. As informações que são apresentadas por MDS e REQ são distintas. Vejam:
3.2 Processo e procedimentos
**Disciplina | Atividade | Método | Ferramenta | Responsável | Entrega**
4.1 Elicitação de Requisitos
**Atividade | Método | Ferramenta**
- [x] Padronizar a estrutura de informações das seções
- [ ] Inserir papel responsável e responsável
- [ ] Inserir Entrega, em requisitos
|
2.0
|
Falta de Padrão - **Descrição**
vcs deveriam entender o processo como uma coisa só, padronizada, não separada. As informações que são apresentadas por MDS e REQ são distintas. Vejam:
3.2 Processo e procedimentos
**Disciplina | Atividade | Método | Ferramenta | Responsável | Entrega**
4.1 Elicitação de Requisitos
**Atividade | Método | Ferramenta**
- [x] Padronizar a estrutura de informações das seções
- [ ] Inserir papel responsável e responsável
- [ ] Inserir Entrega, em requisitos
|
process
|
falta de padrão descrição vcs deveriam entender o processo como uma coisa só padronizada não separada as informações que são apresentadas por mds e req são distintas vejam processo e procedimentos disciplina atividade método ferramenta responsável entrega elicitação de requisitos atividade método ferramenta padronizar a estrutura de informações das seções inserir papel responsável e responsável inserir entrega em requisitos
| 1
|
53,139
| 27,985,566,971
|
IssuesEvent
|
2023-03-26 16:57:19
|
romanz/electrs
|
https://api.github.com/repos/romanz/electrs
|
opened
|
Feature: consider using `bitcoin_slices` for parsing
|
enhancement performance
|
**Is your feature request related to a problem? Please describe.**
It should improve block parsing performance (by doing less memory allocations).
**Describe the solution you'd like**
We can parse Bitcoin blocks received via p2p using https://github.com/RCasatta/bitcoin_slices.
|
True
|
Feature: consider using `bitcoin_slices` for parsing - **Is your feature request related to a problem? Please describe.**
It should improve block parsing performance (by doing less memory allocations).
**Describe the solution you'd like**
We can parse Bitcoin blocks received via p2p using https://github.com/RCasatta/bitcoin_slices.
|
non_process
|
feature consider using bitcoin slices for parsing is your feature request related to a problem please describe it should improve block parsing performance by doing less memory allocations describe the solution you d like we can parse bitcoin blocks received via using
| 0
|
79,879
| 9,961,366,437
|
IssuesEvent
|
2019-07-07 03:31:01
|
KR4/Kaiserreich
|
https://api.github.com/repos/KR4/Kaiserreich
|
closed
|
Irredentism of the Socialist Republic of Italy
|
Suggestion Working as Designed
|
The socialist Republic of Italy, can throw a claim to Corsica, but it has no claims to Slovenia.
|
1.0
|
Irredentism of the Socialist Republic of Italy - The socialist Republic of Italy, can throw a claim to Corsica, but it has no claims to Slovenia.
|
non_process
|
irredentism of the socialist republic of italy the socialist republic of italy can throw a claim to corsica but it has no claims to slovenia
| 0
|
636,405
| 20,599,449,045
|
IssuesEvent
|
2022-03-06 02:23:00
|
InfusionBot/Welcome-Bot
|
https://api.github.com/repos/InfusionBot/Welcome-Bot
|
closed
|
Feature request: Add a warnings command
|
Priority: medium Status: stale Type: enhancement Version: minor hacktoberfest
|
### Checks
- [X] I [searched the issues page](https://github.com/Welcome-Bot/welcome-bot/issues?q=is%3Aissue) for already existing feature requests.
### Description
Similar to #335 and #336
We could add a warnings command
### Alternatives considered
_No response_
|
1.0
|
Feature request: Add a warnings command - ### Checks
- [X] I [searched the issues page](https://github.com/Welcome-Bot/welcome-bot/issues?q=is%3Aissue) for already existing feature requests.
### Description
Similar to #335 and #336
We could add a warnings command
### Alternatives considered
_No response_
|
non_process
|
feature request add a warnings command checks i for already existing feature requests description similar to and we could add a warnings command alternatives considered no response
| 0
|
713,396
| 24,527,425,348
|
IssuesEvent
|
2022-10-11 14:04:17
|
carbon-design-system/carbon-for-ibm-dotcom
|
https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom
|
closed
|
[content-block-card-static]: create React wrapper
|
dev priority: medium package: web components
|
A React wrapper is not available for this web component and should be made available.
#### Additional Information
- experimental
#### Acceptance criteria
- [ ] added to Storybook
|
1.0
|
[content-block-card-static]: create React wrapper - A React wrapper is not available for this web component and should be made available.
#### Additional Information
- experimental
#### Acceptance criteria
- [ ] added to Storybook
|
non_process
|
create react wrapper a react wrapper is not available for this web component and should be made available additional information experimental acceptance criteria added to storybook
| 0
|
308,746
| 9,449,448,797
|
IssuesEvent
|
2019-04-16 01:57:35
|
smacademic/project-cgkm
|
https://api.github.com/repos/smacademic/project-cgkm
|
closed
|
Back Button Functionality In Arc View Is Missing
|
mode - missing priority - high severity - minor source - implementation type - enhancement
|
Currently the back button is non-functional/not working correctly in the arc-view section of the application. The back button should have two functionality:
- If at "master arc" section it should return to the home screen
- If at any subarc view then it should return to the next level up, parent view.
To do this I have a proposed solution:
- Have back button store the parent arc or possibly UUID (holding parent arc would be easiest to implement to current structure but possibly in-efficient)
- If at master-arc level store null and if back button is pressed and sees null go to home screen. (getting back to master-arc level may cause issue with this)
- Clicking on subarc should send parent arc to the back button so if pressed on next level it can send it back to stream and call get-children.
|
1.0
|
Back Button Functionality In Arc View Is Missing - Currently the back button is non-functional/not working correctly in the arc-view section of the application. The back button should have two functionality:
- If at "master arc" section it should return to the home screen
- If at any subarc view then it should return to the next level up, parent view.
To do this I have a proposed solution:
- Have back button store the parent arc or possibly UUID (holding parent arc would be easiest to implement to current structure but possibly in-efficient)
- If at master-arc level store null and if back button is pressed and sees null go to home screen. (getting back to master-arc level may cause issue with this)
- Clicking on subarc should send parent arc to the back button so if pressed on next level it can send it back to stream and call get-children.
|
non_process
|
back button functionality in arc view is missing currently the back button is non functional not working correctly in the arc view section of the application the back button should have two functionality if at master arc section it should return to the home screen if at any subarc view then it should return to the next level up parent view to do this i have a proposed solution have back button store the parent arc or possibly uuid holding parent arc would be easiest to implement to current structure but possibly in efficient if at master arc level store null and if back button is pressed and sees null go to home screen getting back to master arc level may cause issue with this clicking on subarc should send parent arc to the back button so if pressed on next level it can send it back to stream and call get children
| 0
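The proposed back-button fix above boils down to tracking the current arc's parent reference, with `None` standing in for the master-arc level. A minimal sketch of that idea — all class and attribute names here are hypothetical, not taken from the project's actual codebase:

```python
class ArcNavigator:
    """Sketch of the proposed back-button behavior.

    The back button stores the parent of the currently viewed arc.
    None means we are at the master-arc level, so pressing back
    returns to the home screen; otherwise we move one level up.
    """

    def __init__(self):
        self.parent = None  # master-arc level initially

    def open_subarc(self, parent_arc):
        # Clicking a subarc sends the parent arc to the back button,
        # so a later press can return one level up.
        self.parent = parent_arc

    def back(self):
        if self.parent is None:
            return "home"  # at master-arc level -> home screen
        target = self.parent
        # Walk up: the new stored parent is the target's own parent
        # (None again once we are back at the master-arc level).
        self.parent = getattr(target, "parent", None)
        return target
```

Storing the parent arc directly (rather than a UUID) matches the "easiest to implement" option the report suggests; a UUID variant would only change what `open_subarc` stores and how `back` resolves it.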
|
129,446
| 27,494,596,424
|
IssuesEvent
|
2023-03-05 01:42:37
|
toeverything/blocksuite
|
https://api.github.com/repos/toeverything/blocksuite
|
closed
|
Edgeless mode code block can not switch language
|
bug mod:code mod:edgeless
|
The language picker does not show up in edgeless mode.
I see the language picker hidden in some cases(https://github.com/toeverything/blocksuite/blob/master/packages/blocks/src/code-block/code-block.ts#L242).
|
1.0
|
Edgeless mode code block can not switch language - The language picker does not show up in edgeless mode.
I see the language picker hidden in some cases(https://github.com/toeverything/blocksuite/blob/master/packages/blocks/src/code-block/code-block.ts#L242).
|
non_process
|
edgeless mode code block can not switch language the language picker does not show up in edgeless mode i see the language picker hidden in some cases
| 0
|
356,602
| 25,176,222,218
|
IssuesEvent
|
2022-11-11 09:29:47
|
yuxuanleong/pe
|
https://api.github.com/repos/yuxuanleong/pe
|
opened
|
Details in sequence diagram too small
|
severity.VeryLow type.DocumentationBug
|
### Description
The font size is visibly smaller than the plain-text in the DG
### Screenshot

<!--session: 1668150336020-b5d0861b-2832-4db8-be6d-8e9944e1b887-->
<!--Version: Web v3.4.4-->
|
1.0
|
Details in sequence diagram too small - ### Description
The font size is visibly smaller than the plain-text in the DG
### Screenshot

<!--session: 1668150336020-b5d0861b-2832-4db8-be6d-8e9944e1b887-->
<!--Version: Web v3.4.4-->
|
non_process
|
details in sequence diagram too small description the font size is visibly smaller than the plain text in the dg screenshot
| 0
|
16,359
| 21,037,814,548
|
IssuesEvent
|
2022-03-31 09:28:19
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
opened
|
[SQL Connector] Finish Implementation TODO checklist
|
compute/data-processing type/feature
|
There are some TODOs and questions remain unanswered.
- [ ] When should the PulsarSerializationSchema implements ResultTypeQueriable?
- [ ] What stop semantics should be supported in Pulsar SQL Connector ?
- [ ] How should sink parallelism affect PulsarSink runtime configs ?
- [ ] Need to understand how the flink decoding format works (UPDATE_BEFORE, UPDATE_AFTER)
- [ ] Why should. the PulsarTableSource have copy() / hashcode()/ equals() method ?
|
1.0
|
[SQL Connector] Finish Implementation TODO checklist - There are some TODOs and questions remain unanswered.
- [ ] When should the PulsarSerializationSchema implements ResultTypeQueriable?
- [ ] What stop semantics should be supported in Pulsar SQL Connector ?
- [ ] How should sink parallelism affect PulsarSink runtime configs ?
- [ ] Need to understand how the flink decoding format works (UPDATE_BEFORE, UPDATE_AFTER)
- [ ] Why should. the PulsarTableSource have copy() / hashcode()/ equals() method ?
|
process
|
finish implementation todo checklist there are some todos and questions remain unanswered when should the pulsarserializationschema implements resulttypequeriable what stop semantics should be supported in pulsar sql connector how should sink parallelism affect pulsarsink runtime configs need to understand how the flink decoding format works update before update after why should the pulsartablesource have copy hashcode equals method
| 1
|
16,839
| 23,178,777,562
|
IssuesEvent
|
2022-07-31 20:23:58
|
OroArmor/Netherite-Plus-Mod
|
https://api.github.com/repos/OroArmor/Netherite-Plus-Mod
|
closed
|
Game crashes right before reaching the main menu when used with Configured
|
Compatibility
|
When using Netherite Plus with Configured, the game crashes right before reaching the main menu.
Both mods I used were the latest release version for Minecraft 1.16.
I don't know which side the problem is on... but I'll post the crash report here.
I'm using a large mod pack so I don't know if this is caused by other mods? or Optifine?
[crash-2021-07-06_18.48.39-fml.txt](https://github.com/OroArmor/Netherite-Plus-Mod/files/6773985/crash-2021-07-06_18.48.39-fml.txt)
|
True
|
Game crashes right before reaching the main menu when used with Configured - When using Netherite Plus with Configured, the game crashes right before reaching the main menu.
Both mods I used were the latest release version for Minecraft 1.16.
I don't know which side the problem is on... but I'll post the crash report here.
I'm using a large mod pack so I don't know if this is caused by other mods? or Optifine?
[crash-2021-07-06_18.48.39-fml.txt](https://github.com/OroArmor/Netherite-Plus-Mod/files/6773985/crash-2021-07-06_18.48.39-fml.txt)
|
non_process
|
game crashes right before reaching the main menu when used with configured when using netherite plus with configured the game crashes right before reaching the main menu both mods i used were the latest release version for minecraft i don t know which side the problem is on but i ll post the crash report here i m using a large mod pack so i don t know if this is caused by other mods or optifine
| 0
|
16,422
| 21,220,597,614
|
IssuesEvent
|
2022-04-11 11:33:30
|
FOLIO-FSE/folio_migration_tools
|
https://api.github.com/repos/FOLIO-FSE/folio_migration_tools
|
opened
|
Add logging filter to only print first error of certain messages
|
enhancement/new feature simplify_migration_process
|
https://stackoverflow.com/questions/31953272/logging-print-message-only-once
"Cannot create property '{property_name_level1}'. Unsupported schema format: {schema_anomaly}"
|
1.0
|
Add logging filter to only print first error of certain messages - https://stackoverflow.com/questions/31953272/logging-print-message-only-once
"Cannot create property '{property_name_level1}'. Unsupported schema format: {schema_anomaly}"
|
process
|
add logging filter to only print first error of certain messages cannot create property property name unsupported schema format schema anomaly
| 1
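The StackOverflow approach linked in the issue above is a `logging.Filter` that remembers which message templates it has already let through. A minimal sketch — keying on the raw, unformatted `record.msg` so that `"Cannot create property '{property_name_level1}'"` is reported once regardless of which property name gets substituted in:

```python
import logging


class FirstOccurrenceFilter(logging.Filter):
    """Let each distinct message template through only once."""

    def __init__(self):
        super().__init__()
        self._seen = set()

    def filter(self, record):
        # record.msg is the template before %-formatting, so repeated
        # errors that differ only in their arguments are deduplicated.
        if record.msg in self._seen:
            return False
        self._seen.add(record.msg)
        return True
```

Attaching it is one line, e.g. `logging.getLogger("migration").addFilter(FirstOccurrenceFilter())`; attach it to a specific handler instead if only one output (say, the console) should be deduplicated.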
|
11,698
| 14,544,857,350
|
IssuesEvent
|
2020-12-15 18:47:25
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Is there a way to add git commit hash to the build name?
|
Pri3 devops-cicd-process/tech devops/prod product-question ready-to-doc
|
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Is there a way to add git commit hash to the build name? -
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
is there a way to add git commit hash to the build name document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
69,752
| 17,839,092,789
|
IssuesEvent
|
2021-09-03 07:40:37
|
inmanta/inmanta-core
|
https://api.github.com/repos/inmanta/inmanta-core
|
opened
|
web-console clean_up_packages fails
|
build master task
|
The following indicates an error in `/clean_up_packages.js` that gets triggered during the cleanup phase of the [nightly builds](https://jenkins.inmanta.com/job/releases/job/npm/job/web-console-release/job/master/539/console):
```
$ node clean_up_packages
internal/modules/cjs/loader.js:1102
throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath);
^
Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: /home/jenkins/workspace/s_npm_web-console-release_master/web-console/node_modules/node-fetch/src/index.js
require() of ES modules is not supported.
require() of /home/jenkins/workspace/s_npm_web-console-release_master/web-console/node_modules/node-fetch/src/index.js from /home/jenkins/workspace/s_npm_web-console-release_master/web-console/clean_up_packages.js is an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which defines all .js files in that package scope as ES modules.
Instead rename index.js to end in .cjs, change the requiring code to use import(), or remove "type": "module" from /home/jenkins/workspace/s_npm_web-console-release_master/web-console/node_modules/node-fetch/package.json.
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1102:13)
at Module.load (internal/modules/cjs/loader.js:950:32)
at Function.Module._load (internal/modules/cjs/loader.js:790:14)
at Module.require (internal/modules/cjs/loader.js:974:19)
at require (internal/modules/cjs/helpers.js:92:18)
at Object.<anonymous> (/home/jenkins/workspace/s_npm_web-console-release_master/web-console/clean_up_packages.js:1:16)
at Module._compile (internal/modules/cjs/loader.js:1085:14)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
at Module.load (internal/modules/cjs/loader.js:950:32)
at Function.Module._load (internal/modules/cjs/loader.js:790:14) {
code: 'ERR_REQUIRE_ESM'
}
```
|
1.0
|
web-console clean_up_packages fails - The following indicates an error in `/clean_up_packages.js` that gets triggered during the cleanup phase of the [nightly builds](https://jenkins.inmanta.com/job/releases/job/npm/job/web-console-release/job/master/539/console):
```
$ node clean_up_packages
internal/modules/cjs/loader.js:1102
throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath);
^
Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: /home/jenkins/workspace/s_npm_web-console-release_master/web-console/node_modules/node-fetch/src/index.js
require() of ES modules is not supported.
require() of /home/jenkins/workspace/s_npm_web-console-release_master/web-console/node_modules/node-fetch/src/index.js from /home/jenkins/workspace/s_npm_web-console-release_master/web-console/clean_up_packages.js is an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which defines all .js files in that package scope as ES modules.
Instead rename index.js to end in .cjs, change the requiring code to use import(), or remove "type": "module" from /home/jenkins/workspace/s_npm_web-console-release_master/web-console/node_modules/node-fetch/package.json.
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1102:13)
at Module.load (internal/modules/cjs/loader.js:950:32)
at Function.Module._load (internal/modules/cjs/loader.js:790:14)
at Module.require (internal/modules/cjs/loader.js:974:19)
at require (internal/modules/cjs/helpers.js:92:18)
at Object.<anonymous> (/home/jenkins/workspace/s_npm_web-console-release_master/web-console/clean_up_packages.js:1:16)
at Module._compile (internal/modules/cjs/loader.js:1085:14)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
at Module.load (internal/modules/cjs/loader.js:950:32)
at Function.Module._load (internal/modules/cjs/loader.js:790:14) {
code: 'ERR_REQUIRE_ESM'
}
```
|
non_process
|
web console clean up packages fails the following indicates an error in clean up packages js that gets triggered during the cleanup phase of the node clean up packages internal modules cjs loader js throw new err require esm filename parentpath packagejsonpath error must use import to load es module home jenkins workspace s npm web console release master web console node modules node fetch src index js require of es modules is not supported require of home jenkins workspace s npm web console release master web console node modules node fetch src index js from home jenkins workspace s npm web console release master web console clean up packages js is an es module file as it is a js file whose nearest parent package json contains type module which defines all js files in that package scope as es modules instead rename index js to end in cjs change the requiring code to use import or remove type module from home jenkins workspace s npm web console release master web console node modules node fetch package json at object module extensions js internal modules cjs loader js at module load internal modules cjs loader js at function module load internal modules cjs loader js at module require internal modules cjs loader js at require internal modules cjs helpers js at object home jenkins workspace s npm web console release master web console clean up packages js at module compile internal modules cjs loader js at object module extensions js internal modules cjs loader js at module load internal modules cjs loader js at function module load internal modules cjs loader js code err require esm
| 0
|
145,826
| 19,348,814,365
|
IssuesEvent
|
2021-12-15 13:44:21
|
iceColdChris/ZodiacWebsite
|
https://api.github.com/repos/iceColdChris/ZodiacWebsite
|
opened
|
CVE-2020-28498 (Medium) detected in elliptic-6.4.0.tgz
|
security vulnerability
|
## CVE-2020-28498 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.4.0.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz</a></p>
<p>Path to dependency file: ZodiacWebsite/package.json</p>
<p>Path to vulnerable library: ZodiacWebsite/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.4.tgz (Root Library)
- webpack-3.8.1.tgz
- node-libs-browser-2.1.0.tgz
- crypto-browserify-3.12.0.tgz
- create-ecdh-4.0.3.tgz
- :x: **elliptic-6.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/iceColdChris/ZodiacWebsite/git/commits/0e43f8a8784f998d39b6419534b6a54485225e6e">0e43f8a8784f998d39b6419534b6a54485225e6e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package elliptic before 6.5.4 are vulnerable to Cryptographic Issues via the secp256k1 implementation in elliptic/ec/key.js. There is no check to confirm that the public key point passed into the derive function actually exists on the secp256k1 curve. This results in the potential for the private key used in this implementation to be revealed after a number of ECDH operations are performed.
<p>Publish Date: 2021-02-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28498>CVE-2020-28498</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498</a></p>
<p>Release Date: 2021-02-02</p>
<p>Fix Resolution: v6.5.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28498 (Medium) detected in elliptic-6.4.0.tgz - ## CVE-2020-28498 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.4.0.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz</a></p>
<p>Path to dependency file: ZodiacWebsite/package.json</p>
<p>Path to vulnerable library: ZodiacWebsite/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.4.tgz (Root Library)
- webpack-3.8.1.tgz
- node-libs-browser-2.1.0.tgz
- crypto-browserify-3.12.0.tgz
- create-ecdh-4.0.3.tgz
- :x: **elliptic-6.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/iceColdChris/ZodiacWebsite/git/commits/0e43f8a8784f998d39b6419534b6a54485225e6e">0e43f8a8784f998d39b6419534b6a54485225e6e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package elliptic before 6.5.4 are vulnerable to Cryptographic Issues via the secp256k1 implementation in elliptic/ec/key.js. There is no check to confirm that the public key point passed into the derive function actually exists on the secp256k1 curve. This results in the potential for the private key used in this implementation to be revealed after a number of ECDH operations are performed.
<p>Publish Date: 2021-02-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28498>CVE-2020-28498</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498</a></p>
<p>Release Date: 2021-02-02</p>
<p>Fix Resolution: v6.5.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in elliptic tgz cve medium severity vulnerability vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file zodiacwebsite package json path to vulnerable library zodiacwebsite node modules elliptic package json dependency hierarchy react scripts tgz root library webpack tgz node libs browser tgz crypto browserify tgz create ecdh tgz x elliptic tgz vulnerable library found in head commit a href vulnerability details the package elliptic before are vulnerable to cryptographic issues via the implementation in elliptic ec key js there is no check to confirm that the public key point passed into the derive function actually exists on the curve this results in the potential for the private key used in this implementation to be revealed after a number of ecdh operations are performed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
9,205
| 12,238,910,677
|
IssuesEvent
|
2020-05-04 20:40:07
|
googleapis/repo-automation-bots
|
https://api.github.com/repos/googleapis/repo-automation-bots
|
closed
|
scaffolding: our bot generation script has some bugs in the newest version of probot
|
type: process
|
1. the logic for instantiating a mock probot app is slightly different:
```ts
probot = new Probot({});
probot.app = {
getSignedJsonWebToken() {
return 'abc123';
},
getInstallationAccessToken(): Promise<string> {
return Promise.resolve('abc123');
},
};
probot.load(myProbotApp);
```
2. The dependencies have drifted a bit out of date, we're now explicit with the `@octokit/rest`, `probot`.
3. we aren't on the latest version of `gts`.
4. we don't include a `files` field in the package.json, so were triggering the `eslint-plugin-node/no-unpublished-import` linting rule.
see: https://github.com/googleapis/repo-automation-bots/pull/486
|
1.0
|
scaffolding: our bot generation script has some bugs in the newest version of probot - 1. the logic for instantiating a mock probot app is slightly different:
```ts
probot = new Probot({});
probot.app = {
getSignedJsonWebToken() {
return 'abc123';
},
getInstallationAccessToken(): Promise<string> {
return Promise.resolve('abc123');
},
};
probot.load(myProbotApp);
```
2. The dependencies have drifted a bit out of date, we're now explicit with the `@octokit/rest`, `probot`.
3. we aren't on the latest version of `gts`.
4. we don't include a `files` field in the package.json, so were triggering the `eslint-plugin-node/no-unpublished-import` linting rule.
see: https://github.com/googleapis/repo-automation-bots/pull/486
|
process
|
scaffolding our bot generation script has some bugs in the newest version of probot the logic for instantiating a mock probot app is slightly different js probot new probot probot app getsignedjsonwebtoken return getinstallationaccesstoken promise return promise resolve probot load myprobotapp the dependencies have drifted a bit out of date we re now explicit with the octokit rest probot we aren t on the latest version of gts we don t include a files field in the package json so were triggering the eslint plugin node no unpublished import linting rule see
| 1
|
16,408
| 21,191,410,542
|
IssuesEvent
|
2022-04-08 17:52:20
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Fix flaky darwin kitchensink jobs
|
stage: backlog process: flaky test
|
`darwin-test-kitchensink` and `darwin-test-binary-against-kitchensink` fail about 2/3 of the time in `develop`:

Also, they take a very long time to run, and are not properly parallelized.
They should be parallelized and the cause of the flake understood and fixed.
|
1.0
|
Fix flaky darwin kitchensink jobs - `darwin-test-kitchensink` and `darwin-test-binary-against-kitchensink` fail about 2/3 of the time in `develop`:

Also, they take a very long time to run, and are not properly parallelized.
They should be parallelized and the cause of the flake understood and fixed.
|
process
|
fix flaky darwin kitchensink jobs darwin test kitchensink and darwin test binary against kitchensink fail about of the time in develop also they take a very long time to run and are not properly parallelized they should be parallelized and the cause of the flake understood and fixed
| 1
|
4,602
| 7,451,307,888
|
IssuesEvent
|
2018-03-29 02:09:28
|
shobrook/BitVision
|
https://api.github.com/repos/shobrook/BitVision
|
closed
|
Fix the feature scaling function
|
medium priority preprocessing
|
I think we're polluting our test set with future data when we standardize it. I need to look into this more and if there's a proper way to scale features for time series prediction tasks.
|
1.0
|
Fix the feature scaling function - I think we're polluting our test set with future data when we standardize it. I need to look into this more and if there's a proper way to scale features for time series prediction tasks.
|
process
|
fix the feature scaling function i think we re polluting our test set with future data when we standardize it i need to look into this more and if there s a proper way to scale features for time series prediction tasks
| 1
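The leakage the issue above describes happens when scaling statistics are computed over the whole series, letting future (test) values influence how past (train) values are standardized. The standard remedy for time-series tasks is to fit the scaler on the training window only and reuse those statistics on the test window. A dependency-free sketch (the function names are illustrative, not from the project):

```python
def fit_scaler(train):
    # Statistics come from the training window only -- the test
    # window must never influence them, or future data leaks in.
    n = len(train)
    mean = sum(train) / n
    std = (sum((x - mean) ** 2 for x in train) / n) ** 0.5
    return mean, (std if std else 1.0)  # guard against constant series


def transform(values, mean, std):
    return [(x - mean) / std for x in values]


def walk_forward_split(series, train_size):
    """Standardize train and test using train-only statistics."""
    train, test = series[:train_size], series[train_size:]
    mean, std = fit_scaler(train)  # never sees `test`
    return transform(train, mean, std), transform(test, mean, std)
```

Note the test portion is deliberately not zero-centered after transformation; that asymmetry is the signature of a leak-free split. For a rolling prediction setup, the same fit/transform pair is simply refit at each walk-forward step.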
|
2,212
| 5,051,347,383
|
IssuesEvent
|
2016-12-20 21:35:37
|
cfpb/design-manual
|
https://api.github.com/repos/cfpb/design-manual
|
closed
|
Design Manual Content Strategy
|
content strategy process and planning
|
I'm excited to chat more next week. Thought I'd start an issue to track our progress and discussions.
Doc with feedback:
https://docs.google.com/a/collab.cfpb.gov/document/d/1ddjsehWFoWue4vpZ-3Ms0VTSCaINjEfyZvKCg3194Q4/edit?usp=drive_web
Some things we should discuss at our meeting on 10/20:
- Process and timing for creating content standards and implementation
- What is "in" or "out" of scope of our effort (ie, nav labels, guides, etc.)
- Review document of existing frustrations
- Tasks to move forward
Did I miss anything you want to discuss?
@benguhin @schaferjh @marteki @keelerr @sonnakim @nataliafitzgerald
|
1.0
|
Design Manual Content Strategy - I'm excited to chat more next week. Thought I'd start an issue to track our progress and discussions.
Doc with feedback:
https://docs.google.com/a/collab.cfpb.gov/document/d/1ddjsehWFoWue4vpZ-3Ms0VTSCaINjEfyZvKCg3194Q4/edit?usp=drive_web
Some things we should discuss at our meeting on 10/20:
- Process and timing for creating content standards and implementation
- What is "in" or "out" of scope of our effort (ie, nav labels, guides, etc.)
- Review document of existing frustrations
- Tasks to move forward
Did I miss anything you want to discuss?
@benguhin @schaferjh @marteki @keelerr @sonnakim @nataliafitzgerald
|
process
|
design manual content strategy i m excited to chat more next week thought i d start an issue to track our progress and discussions doc with feedback some things we should discuss at our meeting on process and timing for creating content standards and implementation what is in or out of scope of our effort ie nav labels guides etc review document of existing frustrations tasks to move forward did i miss anything you want to discuss benguhin schaferjh marteki keelerr sonnakim nataliafitzgerald
| 1
|
21,957
| 4,760,584,530
|
IssuesEvent
|
2016-10-25 03:50:29
|
electron/electron.atom.io
|
https://api.github.com/repos/electron/electron.atom.io
|
closed
|
Markdown reference links not properly parsed
|
bug documentation
|
Links like this:
```
[MAS builds][mas-builds].
```
with reusable references like this:
```
[mas-builds]: ../tutorial/mac-app-store-submission-guide.md
```
Look like this on GitHub:
<img width="362" alt="screen shot 2016-10-24 at 8 38 27 pm" src="https://cloud.githubusercontent.com/assets/2289/19672296/0a332f28-9a2a-11e6-90d7-65d3b5d91e3f.png">
And like this on the website:
<img width="608" alt="screen shot 2016-10-24 at 8 40 17 pm" src="https://cloud.githubusercontent.com/assets/2289/19672305/1d57471a-9a2a-11e6-9dd5-874c380f3189.png">
with errant HTML like this:
```html
<p><strong>Note:</strong> This API has no effect on
docs/tutorial/mac-app-store-submission-guide.</p>
```
|
1.0
|
Markdown reference links not properly parsed - Links like this:
```
[MAS builds][mas-builds].
```
with reusable references like this:
```
[mas-builds]: ../tutorial/mac-app-store-submission-guide.md
```
Look like this on GitHub:
<img width="362" alt="screen shot 2016-10-24 at 8 38 27 pm" src="https://cloud.githubusercontent.com/assets/2289/19672296/0a332f28-9a2a-11e6-90d7-65d3b5d91e3f.png">
And like this on the website:
<img width="608" alt="screen shot 2016-10-24 at 8 40 17 pm" src="https://cloud.githubusercontent.com/assets/2289/19672305/1d57471a-9a2a-11e6-9dd5-874c380f3189.png">
with errant HTML like this:
```html
<p><strong>Note:</strong> This API has no effect on
docs/tutorial/mac-app-store-submission-guide.</p>
```
|
non_process
|
markdown reference links not properly parsed links like this with reusable references like this tutorial mac app store submission guide md look like this on github img width alt screen shot at pm src and like this on the website img width alt screen shot at pm src with errant html like this html note this api has no effect on docs tutorial mac app store submission guide
| 0
|
10,188
| 13,044,162,867
|
IssuesEvent
|
2020-07-29 03:47:37
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `LeastString` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `LeastString` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `LeastString` from TiDB -
## Description
Port the scalar function `LeastString` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function leaststring from tidb description port the scalar function leaststring from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
20,685
| 27,356,828,848
|
IssuesEvent
|
2023-02-27 13:25:50
|
camunda/issues
|
https://api.github.com/repos/camunda/issues
|
opened
|
Complex Process Instance Modification
|
component:operate component:zeebe component:zeebe-process-automation public kind:epic feature-parity potential:8.2
|
### Value Proposition Statement
Repair process instances by moving running flow node instances, add or cancel existing ones in a process instances inside multi-instance sub-process.
### User Problem
After #100, Customers might face an issue when moving or adding new flow node instances. If they have a multi-instance sub-process in their diagrams, they cannot move/add/cancel these element instances.
### User Stories
_(Please also refer to kick-off notes for more comments)_
- As a Developer, I can perform cancel/move operations on elements within multi-instance subprocess (cancel all, move all)
- As a Developer, I can terminate an element instance (i.e. cancel a flow node instance) inside a multi-instance sub-process in Operate
_API in Zeebe, can pass different instructions with terminate command, ie. pass elementId. Zeebe already only allows termination on elementInstanceKey._
_Operate currently only passes flownode id_
_Change only needed on Operate (BE & FE) : select from instance history tree (need hint to let user know to do this)_
- As a Developer, I can move an element inside multi-instance sub-process in Operate
_For Operate will still need to select from history tree._
_For zeebe, we will need to allow the activation if an (indirect) flowscope is multi-instance (currently rejected), we also need to have ancestor selection if there are more than 1 flowscopes_
Out of scope:
- As a developer, I can activate an element (i.e. add a flow node instance) inside a multi-instance sub-process in Operate
- As a Developer, I can add/move tokens to start events, boundary events and events attached to event-based gateways
- - As a developer,[ I can add/move tokens to elements with multiple running scopes ](https://github.com/camunda/zeebe/issues/9646) - moved to https://github.com/camunda/product-hub/issues/957
### Implementation Notes
In order to cover the use cases, following features should be implemented:
- As a Developer, I can cancel/move one flow node instance, on a flow node (with supported type) that has multiple instances
Remaining Operate tasks:
- [x] https://github.com/camunda/operate/issues/2955
- [ ] https://github.com/camunda/operate/issues/3476
- [ ] https://github.com/camunda/operate/issues/3666
Remaining Zeebe issues:
- [x] https://github.com/camunda/zeebe/issues/9570
### Breakdown
> This section links to various sub-issues / -tasks contributing to respective epic phase or phase results where appropriate.
#### Discovery phase ##
<!-- Example: link to "Conduct customer interview with xyz" -->
#### Define phase ##
<!-- Consider: UI, UX, technical design, documentation design -->
<!-- Example: link to "Define User-Journey Flow" or "Define target architecture" -->
Design Planning
* Reviewed by design: Oct 2022
* Designer assigned: @gastonpillet01 @karl-heinsen
Design Deliverables
* Design Brief - https://github.com/camunda/product-design/issues/180
* Flow - https://github.com/camunda/product-design/issues/181
* High Fidelity Wireframe (HFW) - https://github.com/camunda/product-design/issues/182
* Prototype - https://github.com/camunda/product-design/issues/216
* Specifications - https://github.com/camunda/product-design/issues/184
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
<!-- Example: link to "Implement User Story xyz". Should not only include core implementation, but also documentation. -->
#### Validate phase ##
|
1.0
|
Complex Process Instance Modification - ### Value Proposition Statement
Repair process instances by moving running flow node instances, add or cancel existing ones in a process instances inside multi-instance sub-process.
### User Problem
After #100, Customers might face an issue when moving or adding new flow node instances. If they have a multi-instance sub-process in their diagrams, they cannot move/add/cancel these element instances.
### User Stories
_(Please also refer to kick-off notes for more comments)_
- As a Developer, I can perform cancel/move operations on elements within multi-instance subprocess (cancel all, move all)
- As a Developer, I can terminate an element instance (i.e. cancel a flow node instance) inside a multi-instance sub-process in Operate
_API in Zeebe, can pass different instructions with terminate command, ie. pass elementId. Zeebe already only allows termination on elementInstanceKey._
_Operate currently only passes flownode id_
_Change only needed on Operate (BE & FE) : select from instance history tree (need hint to let user know to do this)_
- As a Developer, I can move an element inside multi-instance sub-process in Operate
_For Operate will still need to select from history tree._
_For zeebe, we will need to allow the activation if an (indirect) flowscope is multi-instance (currently rejected), we also need to have ancestor selection if there are more than 1 flowscopes_
Out of scope:
- As a developer, I can activate an element (i.e. add a flow node instance) inside a multi-instance sub-process in Operate
- As a Developer, I can add/move tokens to start events, boundary events and events attached to event-based gateways
- - As a developer,[ I can add/move tokens to elements with multiple running scopes ](https://github.com/camunda/zeebe/issues/9646) - moved to https://github.com/camunda/product-hub/issues/957
### Implementation Notes
In order to cover the use cases, following features should be implemented:
- As a Developer, I can cancel/move one flow node instance, on a flow node (with supported type) that has multiple instances
Remaining Operate tasks:
- [x] https://github.com/camunda/operate/issues/2955
- [ ] https://github.com/camunda/operate/issues/3476
- [ ] https://github.com/camunda/operate/issues/3666
Remaining Zeebe issues:
- [x] https://github.com/camunda/zeebe/issues/9570
### Breakdown
> This section links to various sub-issues / -tasks contributing to respective epic phase or phase results where appropriate.
#### Discovery phase ##
<!-- Example: link to "Conduct customer interview with xyz" -->
#### Define phase ##
<!-- Consider: UI, UX, technical design, documentation design -->
<!-- Example: link to "Define User-Journey Flow" or "Define target architecture" -->
Design Planning
* Reviewed by design: Oct 2022
* Designer assigned: @gastonpillet01 @karl-heinsen
Design Deliverables
* Design Brief - https://github.com/camunda/product-design/issues/180
* Flow - https://github.com/camunda/product-design/issues/181
* High Fidelity Wireframe (HFW) - https://github.com/camunda/product-design/issues/182
* Prototype - https://github.com/camunda/product-design/issues/216
* Specifications - https://github.com/camunda/product-design/issues/184
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
<!-- Example: link to "Implement User Story xyz". Should not only include core implementation, but also documentation. -->
#### Validate phase ##
|
process
|
complex process instance modification value proposition statement repair process instances by moving running flow node instances add or cancel existing ones in a process instances inside multi instance sub process user problem after customers might face an issue when moving or adding new flow node instances if they have a multi instance sub process in their diagrams they cannot move add cancel these element instances user stories please also refer to kick off notes for more comments as a developer i can perform cancel move operations on elements within multi instance subprocess cancel all move all as a developer i can terminate an element instance i e cancel a flow node instance inside a multi instance sub process in operate api in zeebe can pass different instructions with terminate command ie pass elementid zeebe already only allows termination on elementinstancekey operate currently only passes flownode id change only needed on operate be fe select from instance history tree need hint to let user know to do this as a developer i can move an element inside multi instance sub process in operate for operate will still need to select from history tree for zeebe we will need to allow the activation if an indirect flowscope is multi instance currently rejected we also need to have ancestor selection if there are more than flowscopes out of scope as a developer i can activate an element i e add a flow node instance inside a multi instance sub process in operate as a developer i can add move tokens to start events boundary events and events attached to event based gateways as a developer moved to implementation notes in order to cover the use cases following features should be implemented as a developer i can cancel move one flow node instance on a flow node with supported type that has multiple instances remaining operate tasks remaining zeebe issues breakdown this section links to various sub issues tasks contributing to respective epic phase or phase results where 
appropriate discovery phase define phase design planning reviewed by design oct designer assigned karl heinsen design deliverables design brief flow high fidelity wireframe hfw prototype specifications documentation planning risk management risk class risk treatment implement phase validate phase
| 1
|
11,974
| 14,737,027,047
|
IssuesEvent
|
2021-01-07 00:39:43
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Towne - another comment & problem?
|
anc-core anc-external anc-process anc-ui anp-important ant-bug ant-support
|
In GitLab by @kdjstudios on Apr 10, 2018, 08:58
**Submitted by:** Deb Crown <dcrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-10-85723
**Server:** Hosted (Both?)
**Client/Site:** Towne (All)
**Account:** NA
**Issue:**
While I am in here and emailing, I would like to voice my opinion on the ‘duplicate’ button in the edit section:
It is pretty annoying that the button covers up the beginning of the word that you wish to possibly duplicate.
Another thing that is probably a problem is if you click on the ‘dup’ button and it duplicates that charge and you elect to not use it and click the trash can,
when you click on the ‘dup’ button again, it simply duplicates empty boxes.
|
1.0
|
Towne - another comment & problem? - In GitLab by @kdjstudios on Apr 10, 2018, 08:58
**Submitted by:** Deb Crown <dcrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-10-85723
**Server:** Hosted (Both?)
**Client/Site:** Towne (All)
**Account:** NA
**Issue:**
While I am in here and emailing, I would like to voice my opinion on the ‘duplicate’ button in the edit section:
It is pretty annoying that the button covers up the beginning of the word that you wish to possibly duplicate.
Another thing that is probably a problem is if you click on the ‘dup’ button and it duplicates that charge and you elect to not use it and click the trash can,
when you click on the ‘dup’ button again, it simply duplicates empty boxes.
|
process
|
towne another comment problem in gitlab by kdjstudios on apr submitted by deb crown helpdesk server hosted both client site towne all account na issue while i am in here and emailing i would like to voice my opinion on the ‘duplicate’ button in the edit section it is pretty annoying that the button covers up the beginning of the word that you wish to possibly duplicate another thing that is probably a problem is if you click on the ‘dup’ button and it duplicates that charge and you elect to not use it and click the trash can when you click on the ‘dup’ button again it simply duplicates empty boxes
| 1
|
55,171
| 23,403,997,685
|
IssuesEvent
|
2022-08-12 10:53:42
|
PreMiD/Presences
|
https://api.github.com/repos/PreMiD/Presences
|
opened
|
CollabVM
|
service request
|
### Website name
collabvm
### Website URL
https://computernewb.com/collab-vm/
### Website logo
https://computernewb.com/w/images/Collabvmlogo.png
### Prerequisites
- [ ] It is a paid service
- [ ] It displays NSFW content
- [ ] It is region restricted
### Description
I want the presence to display the vm's name that the user is in. (the vm name is stored in `window.vmName`, it is `null` if the user is not in a vm, otherwise it's a string).
|
1.0
|
CollabVM - ### Website name
collabvm
### Website URL
https://computernewb.com/collab-vm/
### Website logo
https://computernewb.com/w/images/Collabvmlogo.png
### Prerequisites
- [ ] It is a paid service
- [ ] It displays NSFW content
- [ ] It is region restricted
### Description
I want the presence to display the vm's name that the user is in. (the vm name is stored in `window.vmName`, it is `null` if the user is not in a vm, otherwise it's a string).
|
non_process
|
collabvm website name collabvm website url website logo prerequisites it is a paid service it displays nsfw content it is region restricted description i want the presence to display the vm s name that the user is in the vm name is stored in window vmname it is null if the user is not in a vm otherwise it s a string
| 0
|
16,662
| 21,730,013,820
|
IssuesEvent
|
2022-05-11 11:07:53
|
mttschltz/exnota
|
https://api.github.com/repos/mttschltz/exnota
|
opened
|
Set up integration testing for in-browser testing
|
process
|
A cursory search hints this might not be simple
|
1.0
|
Set up integration testing for in-browser testing - A cursory search hints this might not be simple
|
process
|
set up integration testing for in browser testing a cursory search hints this might not be simple
| 1
|
4,416
| 7,299,806,762
|
IssuesEvent
|
2018-02-26 21:21:18
|
jansensan/kinetic-cabinet
|
https://api.github.com/repos/jansensan/kinetic-cabinet
|
opened
|
Create CV demo version & prod version
|
processing
|
Create modes for sketch:
- CV demo version: to show how camera and PixelFlow are used;
- Prod version: that does not render camera graphics unnecessarily, to improve CPU usage.
|
1.0
|
Create CV demo version & prod version - Create modes for sketch:
- CV demo version: to show how camera and PixelFlow are used;
- Prod version: that does not render camera graphics unnecessarily, to improve CPU usage.
|
process
|
create cv demo version prod version create modes for sketch cv demo version to show how camera and pixelflow are used prod version that does not render camera graphics unnecessarily to improve cpu usage
| 1
|
20,177
| 3,560,665,408
|
IssuesEvent
|
2016-01-23 07:06:19
|
coddingtonbear/inthe.am
|
https://api.github.com/repos/coddingtonbear/inthe.am
|
closed
|
Add fade-out or slight delay when displaying/hiding progress indicator
|
design
|
For some processes -- like saving configs and triggering a bugwarrior build -- the progress indicator appears for just a fraction of a second. This should maybe be set up in such a way that it doesn't look quite so jarring.
|
1.0
|
Add fade-out or slight delay when displaying/hiding progress indicator - For some processes -- like saving configs and triggering a bugwarrior build -- the progress indicator appears for just a fraction of a second. This should maybe be set up in such a way that it doesn't look quite so jarring.
|
non_process
|
add fade out or slight delay when displaying hiding progress indicator for some processes like saving configs and triggering a bugwarrior build the progress indicator appears for just a fraction of a second this should maybe be set up in such a way that it doesn t look quite so jarring
| 0
|
6,819
| 9,962,377,332
|
IssuesEvent
|
2019-07-07 14:05:00
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Clean data in column
|
cxp machine-learning/svc product-question team-data-science-process/subsvc triaged
|
Is there a Sample on How to Clean all String column to remove special characters like , \n etc ?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 84a9bf80-52b7-1c0c-74aa-1d624b65ccf2
* Version Independent ID: bcbecc73-1cbd-0621-4bf4-6b4f21cb6595
* Content: [Clean and prepare data for Azure Machine Learning - Team Data Science Process](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/prepare-data)
* Content Source: [articles/machine-learning/team-data-science-process/prepare-data.md](https://github.com/Microsoft/azure-docs/blob/master/articles/machine-learning/team-data-science-process/prepare-data.md)
* Service: **machine-learning**
* Sub-service: **team-data-science-process**
* GitHub Login: @marktab
* Microsoft Alias: **tdsp**
|
1.0
|
Clean data in column - Is there a Sample on How to Clean all String column to remove special characters like , \n etc ?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 84a9bf80-52b7-1c0c-74aa-1d624b65ccf2
* Version Independent ID: bcbecc73-1cbd-0621-4bf4-6b4f21cb6595
* Content: [Clean and prepare data for Azure Machine Learning - Team Data Science Process](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/prepare-data)
* Content Source: [articles/machine-learning/team-data-science-process/prepare-data.md](https://github.com/Microsoft/azure-docs/blob/master/articles/machine-learning/team-data-science-process/prepare-data.md)
* Service: **machine-learning**
* Sub-service: **team-data-science-process**
* GitHub Login: @marktab
* Microsoft Alias: **tdsp**
|
process
|
clean data in column is there a sample on how to clean all string column to remove special characters like n etc document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service machine learning sub service team data science process github login marktab microsoft alias tdsp
| 1
|
9,830
| 12,827,886,667
|
IssuesEvent
|
2020-07-06 19:24:11
|
googleapis/code-suggester
|
https://api.github.com/repos/googleapis/code-suggester
|
opened
|
Framework-core library: handle existing fork
|
type: process
|
- [ ] Branches can be created on a pre-existing but up-to-date fork
### Solution
Given that a fork already exists, ensure that branches can still be created.
The branch should be based off of an existing fork. If the fork already exists, update the fork.
### Alternatives
Deleting the pre-existing fork is in-ideal because it can break potentially other active branches.
### Additional Information
From the [GitHub V3 API](https://developer.github.com/v3/repos/forks/#create-a-fork) if a fork is created for the first time the response is `202 Accepted`. If the fork already exists and the create fork API is called, `202 Accepted` is called but the same fork persists and is returned.
|
1.0
|
Framework-core library: handle existing fork - - [ ] Branches can be created on a pre-existing but up-to-date fork
### Solution
Given that a fork already exists, ensure that branches can still be created.
The branch should be based off of an existing fork. If the fork already exists, update the fork.
### Alternatives
Deleting the pre-existing fork is in-ideal because it can break potentially other active branches.
### Additional Information
From the [GitHub V3 API](https://developer.github.com/v3/repos/forks/#create-a-fork) if a fork is created for the first time the response is `202 Accepted`. If the fork already exists and the create fork API is called, `202 Accepted` is called but the same fork persists and is returned.
|
process
|
framework core library handle existing fork branches can be created on a pre existing but up to date fork solution given that a fork already exists ensure that branches can still be created the branch should be based off of an existing fork if the fork already exists update the fork alternatives deleting the pre existing fork is in ideal because it can break potentially other active branches additional information from the if a fork is created for the first time the response is accepted if the fork already exists and the create fork api is called accepted is called but the same fork persists and is returned
| 1
|
20,614
| 27,288,405,880
|
IssuesEvent
|
2023-02-23 15:01:50
|
googleapis/python-bigquery-pandas
|
https://api.github.com/repos/googleapis/python-bigquery-pandas
|
closed
|
TST: test pandas-gbq on Python 3.8
|
type: process api: bigquery
|
Python 3.8 was released in October. We should start running CI against it.
|
1.0
|
TST: test pandas-gbq on Python 3.8 - Python 3.8 was released in October. We should start running CI against it.
|
process
|
tst test pandas gbq on python python was released in october we should start running ci against it
| 1
|
14,024
| 8,444,222,331
|
IssuesEvent
|
2018-10-18 17:48:27
|
apollographql/react-apollo
|
https://api.github.com/repos/apollographql/react-apollo
|
closed
|
Apollo Query and Mutation components: JSX props should not use functions (react/jsx-no-bind)
|
docs has-reproduction performance
|
I'm using the Apollo's `<Mutation>` component like this:
```js
import gql from "graphql-tag";
import { Mutation } from "react-apollo";
const ADD_TODO = gql`
mutation addTodo($type: String!) {
addTodo(type: $type) {
id
type
}
}
`;
const AddTodo = () => {
let input;
return (
<Mutation mutation={ADD_TODO}>
{(addTodo, { data }) => (
<div>
<form
onSubmit={e => {
e.preventDefault();
addTodo({ variables: { type: input.value } });
input.value = "";
}}
>
<input
ref={node => {
input = node;
}}
/>
<button type="submit">Add Todo</button>
</form>
</div>
)}
</Mutation>
);
};
```
I get this error:
`[eslint] JSX props should not use functions (react/jsx-no-bind)`
I think it is complaining about:
```js
onSubmit={e => {
e.preventDefault();
addTodo({ variables: { type: input.value } });
input.value = "";
}}
```
because is an arrow function.
But it's the default in Apollo React Docs: https://www.apollographql.com/docs/react/essentials/mutations.html.
I asked eslint-plugin-react team how to proceed, here: https://github.com/yannickcr/eslint-plugin-react/issues/1872
<!--**Issue Labels**
While not necessary, you can help organize our issues by labeling this issue when you open it. To add a label automatically, simply [x] mark the appropriate box below:
- [x] has-reproduction
- [ ] feature
- [x] docs
- [ ] blocking
- [ ] good first issue
/label performances
/label lint
To add a label not listed above, simply place `/label another-label-name` on a line by itself.
-->
|
True
|
Apollo Query and Mutation components: JSX props should not use functions (react/jsx-no-bind) - I'm using the Apollo's `<Mutation>` component like this:
```js
import gql from "graphql-tag";
import { Mutation } from "react-apollo";
const ADD_TODO = gql`
mutation addTodo($type: String!) {
addTodo(type: $type) {
id
type
}
}
`;
const AddTodo = () => {
let input;
return (
<Mutation mutation={ADD_TODO}>
{(addTodo, { data }) => (
<div>
<form
onSubmit={e => {
e.preventDefault();
addTodo({ variables: { type: input.value } });
input.value = "";
}}
>
<input
ref={node => {
input = node;
}}
/>
<button type="submit">Add Todo</button>
</form>
</div>
)}
</Mutation>
);
};
```
I get this error:
`[eslint] JSX props should not use functions (react/jsx-no-bind)`
I think it is complaining about:
```js
onSubmit={e => {
e.preventDefault();
addTodo({ variables: { type: input.value } });
input.value = "";
}}
```
because is an arrow function.
But it's the default in Apollo React Docs: https://www.apollographql.com/docs/react/essentials/mutations.html.
I asked eslint-plugin-react team how to proceed, here: https://github.com/yannickcr/eslint-plugin-react/issues/1872
<!--**Issue Labels**
While not necessary, you can help organize our issues by labeling this issue when you open it. To add a label automatically, simply [x] mark the appropriate box below:
- [x] has-reproduction
- [ ] feature
- [x] docs
- [ ] blocking
- [ ] good first issue
/label performances
/label lint
To add a label not listed above, simply place `/label another-label-name` on a line by itself.
-->
|
non_process
|
apollo query and mutation components jsx props should not use functions react jsx no bind i m using the apollo s component like this js import gql from graphql tag import mutation from react apollo const add todo gql mutation addtodo type string addtodo type type id type const addtodo let input return addtodo data form onsubmit e e preventdefault addtodo variables type input value input value input ref node input node add todo i get this error jsx props should not use functions react jsx no bind i think it is complaining about js onsubmit e e preventdefault addtodo variables type input value input value because is an arrow function but it s the default in apollo react docs i asked eslint plugin react team how to proceed here issue labels while not necessary you can help organize our issues by labeling this issue when you open it to add a label automatically simply mark the appropriate box below has reproduction feature docs blocking good first issue label performances label lint to add a label not listed above simply place label another label name on a line by itself
| 0
|
1,432
| 3,996,347,516
|
IssuesEvent
|
2016-05-10 18:28:59
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
EPERM when spawning a child process as a different group and user
|
child_process
|
<!--
Thanks for wanting to report an issue you've found in Node.js. Please fill in
the template below by replacing the html comments with an appropriate answer.
If unsure about something, just do as best as you're able.
version: usually output of `node -v`
platform: either `uname -a` output, or if Windows, version and 32 or 64-bit.
subsystem: optional -- if known please specify affected core module name.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided. Ideally this test case should not have any external
dependencies. We understand that it is not always possible to reduce your code
to a small test case, but we would appreciate to have as
much data as possible.
Thank you!
-->
* **v15.10.1**:
* **Linux Cake 4.2.0-35-generic #40-Ubuntu SMP Tue Mar 15 22:15:45 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux**:
* **Child_Process**:
<!-- Enter your issue details below this comment. -->
Consider the following code (because i could be doing something wrong)
```
// Require Node.js standard library function to spawn a child process
var spawn = require('child_process').spawn;
// Create a child process for the Minecraft server using the same java process
// invocation we used manually before
var minecraftServerProcess = spawn('java', [
'-server',
'-Xmx2048M',
'-XX:+UseConcMarkSweepGC',
'-XX:+UseParNewGC',
'-XX:+CMSIncrementalPacing',
'-XX:ParallelGCThreads=2',
'-XX:+AggressiveOpts',
'-jar',
'minecraft_server.1.9.2.jar',
'nogui'
], {
uid: 1007,
gid: 1007
});
// Listen for events coming from the minecraft server process - in this case,
// just log out messages coming from the server
function log(data) {
process.stdout.write(data.toString());
}
minecraftServerProcess.stdout.on('data', log);
minecraftServerProcess.stderr.on('data', log);
// Create an express web app that can parse HTTP POST requests
var app = require('express')();
app.use(require('body-parser').urlencoded({
extended:false
}));
// Create a route that will respond to a POST request
app.get('/command', function(request, response) {
// Get the command from the HTTP request and send it to the Minecraft
// server process
var command = request.param('Body');
minecraftServerProcess.stdin.write(command+'\n');
// buffer output for a quarter of a second, then reply to HTTP request
var buffer = [];
var collector = function(data) {
data = data.toString();
buffer.push(data.split(']: ')[1]);
};
minecraftServerProcess.stdout.on('data', collector);
setTimeout(function() {
minecraftServerProcess.stdout.removeListener('data', collector);
response.send(buffer.join(''));
}, 250);
});
// Listen for incoming HTTP requests on port 3000
app.listen(3000,'localhost');
```
As you can see i am spawning a minecraft server as a different user (because minecraft is in its own folder and group)
And i am receiving the following error.
```
belldandu@Cake:/home/minecraft$ node app.js
internal/child_process.js:302
throw errnoException(err, 'spawn');
^
Error: spawn EPERM
at exports._errnoException (util.js:890:11)
at ChildProcess.spawn (internal/child_process.js:302:11)
at exports.spawn (child_process.js:367:9)
at Object.<anonymous> (/home/minecraft/app.js:6:30)
at Module._compile (module.js:413:34)
at Object.Module._extensions..js (module.js:422:10)
at Module.load (module.js:357:32)
at Function.Module._load (module.js:314:12)
at Function.Module.runMain (module.js:447:10)
at startup (node.js:146:18)
```
|
1.0
|
EPERM when spawning a child process as a different group and user - <!--
Thanks for wanting to report an issue you've found in Node.js. Please fill in
the template below by replacing the html comments with an appropriate answer.
If unsure about something, just do as best as you're able.
version: usually output of `node -v`
platform: either `uname -a` output, or if Windows, version and 32 or 64-bit.
subsystem: optional -- if known please specify affected core module name.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided. Ideally this test case should not have any external
dependencies. We understand that it is not always possible to reduce your code
to a small test case, but we would appreciate to have as
much data as possible.
Thank you!
-->
* **v15.10.1**:
* **Linux Cake 4.2.0-35-generic #40-Ubuntu SMP Tue Mar 15 22:15:45 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux**:
* **Child_Process**:
<!-- Enter your issue details below this comment. -->
Consider the following code (because i could be doing something wrong)
```
// Require Node.js standard library function to spawn a child process
var spawn = require('child_process').spawn;
// Create a child process for the Minecraft server using the same java process
// invocation we used manually before
var minecraftServerProcess = spawn('java', [
'-server',
'-Xmx2048M',
'-XX:+UseConcMarkSweepGC',
'-XX:+UseParNewGC',
'-XX:+CMSIncrementalPacing',
'-XX:ParallelGCThreads=2',
'-XX:+AggressiveOpts',
'-jar',
'minecraft_server.1.9.2.jar',
'nogui'
], {
uid: 1007,
gid: 1007
});
// Listen for events coming from the minecraft server process - in this case,
// just log out messages coming from the server
function log(data) {
process.stdout.write(data.toString());
}
minecraftServerProcess.stdout.on('data', log);
minecraftServerProcess.stderr.on('data', log);
// Create an express web app that can parse HTTP POST requests
var app = require('express')();
app.use(require('body-parser').urlencoded({
extended:false
}));
// Create a route that will respond to a POST request
app.get('/command', function(request, response) {
// Get the command from the HTTP request and send it to the Minecraft
// server process
var command = request.param('Body');
minecraftServerProcess.stdin.write(command+'\n');
// buffer output for a quarter of a second, then reply to HTTP request
var buffer = [];
var collector = function(data) {
data = data.toString();
buffer.push(data.split(']: ')[1]);
};
minecraftServerProcess.stdout.on('data', collector);
setTimeout(function() {
minecraftServerProcess.stdout.removeListener('data', collector);
response.send(buffer.join(''));
}, 250);
});
// Listen for incoming HTTP requests on port 3000
app.listen(3000,'localhost');
```
As you can see i am spawning a minecraft server as a different user (because minecraft is in its own folder and group)
And i am receiving the following error.
```
belldandu@Cake:/home/minecraft$ node app.js
internal/child_process.js:302
throw errnoException(err, 'spawn');
^
Error: spawn EPERM
at exports._errnoException (util.js:890:11)
at ChildProcess.spawn (internal/child_process.js:302:11)
at exports.spawn (child_process.js:367:9)
at Object.<anonymous> (/home/minecraft/app.js:6:30)
at Module._compile (module.js:413:34)
at Object.Module._extensions..js (module.js:422:10)
at Module.load (module.js:357:32)
at Function.Module._load (module.js:314:12)
at Function.Module.runMain (module.js:447:10)
at startup (node.js:146:18)
```
|
process
|
eperm when spawning a child process as a different group and user thanks for wanting to report an issue you ve found in node js please fill in the template below by replacing the html comments with an appropriate answer if unsure about something just do as best as you re able version usually output of node v platform either uname a output or if windows version and or bit subsystem optional if known please specify affected core module name it will be much easier for us to fix the issue if a test case that reproduces the problem is provided ideally this test case should not have any external dependencies we understand that it is not always possible to reduce your code to a small test case but we would appreciate to have as much data as possible thank you linux cake generic ubuntu smp tue mar utc gnu linux child process consider the following code because i could be doing something wrong require node js standard library function to spawn a child process var spawn require child process spawn create a child process for the minecraft server using the same java process invocation we used manually before var minecraftserverprocess spawn java server xx useconcmarksweepgc xx useparnewgc xx cmsincrementalpacing xx parallelgcthreads xx aggressiveopts jar minecraft server jar nogui uid gid listen for events coming from the minecraft server process in this case just log out messages coming from the server function log data process stdout write data tostring minecraftserverprocess stdout on data log minecraftserverprocess stderr on data log create an express web app that can parse http post requests var app require express app use require body parser urlencoded extended false create a route that will respond to a post request app get command function request response get the command from the http request and send it to the minecraft server process var command request param body minecraftserverprocess stdin write command n buffer output for a quarter of a second then reply to http 
request var buffer var collector function data data data tostring buffer push data split minecraftserverprocess stdout on data collector settimeout function minecraftserverprocess stdout removelistener data collector response send buffer join listen for incoming http requests on port app listen localhost as you can see i am spawning a minecraft server as a different user because minecraft is in its own folder and group and i am receiving the following error belldandu cake home minecraft node app js internal child process js throw errnoexception err spawn error spawn eperm at exports errnoexception util js at childprocess spawn internal child process js at exports spawn child process js at object home minecraft app js at module compile module js at object module extensions js module js at module load module js at function module load module js at function module runmain module js at startup node js
| 1
|
11,416
| 14,244,212,279
|
IssuesEvent
|
2020-11-19 06:26:53
|
eduardofrancisco1733533/4a
|
https://api.github.com/repos/eduardofrancisco1733533/4a
|
opened
|
complete_size_estemating_template
|
process dashboard
|
-completar el formato de estimación de LOC con los valores obtenidos
|
1.0
|
complete_size_estemating_template - -completar el formato de estimación de LOC con los valores obtenidos
|
process
|
complete size estemating template completar el formato de estimación de loc con los valores obtenidos
| 1
|
9,343
| 2,607,936,639
|
IssuesEvent
|
2015-02-26 00:29:07
|
chrsmithdemos/minify
|
https://api.github.com/repos/chrsmithdemos/minify
|
opened
|
Mod_ReWrite for Groups
|
auto-migrated Priority-Low Type-Enhancement
|
```
Can we use /min/something.js or /min/group.css to reference a group?
I added the following 2 rules to my .htaccess on my server and it works great.
I know the rules are vary, vary crude for a more global implementation but the
same concept should work.
RewriteRule ^(.*).js index.php?g=$1 [L,NE]
RewriteRule ^(.*).css index.php?g=$1 [L,NE]
```
-----
Original issue reported on code.google.com by `scot...@gmail.com` on 31 Dec 2011 at 12:41
|
1.0
|
Mod_ReWrite for Groups - ```
Can we use /min/something.js or /min/group.css to reference a group?
I added the following 2 rules to my .htaccess on my server and it works great.
I know the rules are vary, vary crude for a more global implementation but the
same concept should work.
RewriteRule ^(.*).js index.php?g=$1 [L,NE]
RewriteRule ^(.*).css index.php?g=$1 [L,NE]
```
-----
Original issue reported on code.google.com by `scot...@gmail.com` on 31 Dec 2011 at 12:41
|
non_process
|
mod rewrite for groups can we use min something js or min group css to reference a group i added the following rules to my htaccess on my server and it works great i know the rules are vary vary crude for a more global implementation but the same concept should work rewriterule js index php g rewriterule css index php g original issue reported on code google com by scot gmail com on dec at
| 0
|
235,818
| 18,061,148,283
|
IssuesEvent
|
2021-09-20 14:06:46
|
ita-social-projects/dokazovi-requirements
|
https://api.github.com/repos/ita-social-projects/dokazovi-requirements
|
opened
|
[Test for Story #189 ]Verify if the system shows the admin a warning message if the user don't put video in field 'Відео' and press button 'Опублікувати'
|
documentation test case
|
*Story link**
[#189 Story](https://github.com/ita-social-projects/dokazovi-requirements/issues/189)
### Status:
Not executed
### Title:
Verify if the system shows the admin a warning message if the user don't put video in field 'Відео' and press button 'Опублікувати'
### Description:
### Pre-conditions:
1.Go to the web application as an admin.
2.Should be opened tab 'Відео'.
Step № | Test Steps | Test data | Expected result | Status (Pass/Fail/Not executed) | Notes
------------ | ------------ | ------------ | ------------ | ------------ | ------------
1 | Fill all field but in field 'Відео' don't put video| | User can fill fields | Not executed|Додайте, будь ласка, відео
2 | Press 'Опублікувати'| | Showing an error 'Додайте, будь ласка, відео'| Not Executed|
### Dependencies:
[#189 Story](https://github.com/ita-social-projects/dokazovi-requirements/issues/189)
### [Gantt Chart](https://docs.google.com/spreadsheets/d/1bgaEJDOf3OhfNRfP-WWPKmmZFW5C3blOUxamE3wSCbM/edit#gid=775577959)
|
1.0
|
[Test for Story #189 ]Verify if the system shows the admin a warning message if the user don't put video in field 'Відео' and press button 'Опублікувати' - *Story link**
[#189 Story](https://github.com/ita-social-projects/dokazovi-requirements/issues/189)
### Status:
Not executed
### Title:
Verify if the system shows the admin a warning message if the user don't put video in field 'Відео' and press button 'Опублікувати'
### Description:
### Pre-conditions:
1.Go to the web application as an admin.
2.Should be opened tab 'Відео'.
Step № | Test Steps | Test data | Expected result | Status (Pass/Fail/Not executed) | Notes
------------ | ------------ | ------------ | ------------ | ------------ | ------------
1 | Fill all field but in field 'Відео' don't put video| | User can fill fields | Not executed|Додайте, будь ласка, відео
2 | Press 'Опублікувати'| | Showing an error 'Додайте, будь ласка, відео'| Not Executed|
### Dependencies:
[#189 Story](https://github.com/ita-social-projects/dokazovi-requirements/issues/189)
### [Gantt Chart](https://docs.google.com/spreadsheets/d/1bgaEJDOf3OhfNRfP-WWPKmmZFW5C3blOUxamE3wSCbM/edit#gid=775577959)
|
non_process
|
verify if the system shows the admin a warning message if the user don t put video in field відео and press button опублікувати story link status not executed title erify if the system shows the admin a warning message if the user don t put video in field відео and press button опублікувати description pre conditions go to the web application as an admin should be opened tab відео step № test steps test data expected result status pass fail not executed notes fill all field but in field відео don t put video user can fill fields not executed додайте будь ласка відео press опублікувати showing an error додайте будь ласка відео not executed dependencies
| 0
|
1,514
| 4,105,792,533
|
IssuesEvent
|
2016-06-06 04:35:04
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
documentation on Stream.write misses that callback does not ensure that data written to process.stdout/.stderr is flushed
|
doc process
|
```javascript
var empty = new Buffer(0)
process.stdout.write(empty, function() {
process.stderr.write(empty, function() {
process.exit(code);
});
});
```
doesn't reliably ensure that a consumer of node.js will get all data (yes - it is a PIPE, not a FILE !!).
|
1.0
|
documentation on Stream.write misses that callback does not ensure that data written to process.stdout/.stderr is flushed - ```javascript
var empty = new Buffer(0)
process.stdout.write(empty, function() {
process.stderr.write(empty, function() {
process.exit(code);
});
});
```
doesn't reliably ensure that a consumer of node.js will get all data (yes - it is a PIPE, not a FILE !!).
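One way to state the doc gap concretely: the usual workaround is to avoid `process.exit()` and set `process.exitCode` instead, so Node exits on its own once the event loop (and pending pipe writes) has drained. A minimal sketch of that pattern — the `finish` helper is illustrative:

```javascript
// Instead of process.exit(code), which may terminate before piped
// stdout/stderr writes reach the consumer, record the exit code and let
// the process exit naturally once pending writes have drained.
function finish(code) {
  process.exitCode = code;
}

process.stdout.write('all output is flushed before exit\n');
finish(0);
```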
|
process
|
documentation on stream write misses that callback does not ensure that data written to process stdout stderr is flushed javascript var empty new buffer process stdout write empty function process stderr write empty function process exit code doesn t reliable ensure that a consumer of node js will get all data yes it is a pipe not a file
| 1
|
7,661
| 10,746,276,952
|
IssuesEvent
|
2019-10-30 10:43:46
|
microsoft/ptvsd
|
https://api.github.com/repos/microsoft/ptvsd
|
closed
|
Subprocesses should inherit PydevdCustomization
|
Bug Upstream-pydevd area:Multiprocessing
|
The crucial one is `DEFAULT_PROTOCOL`. At the moment, pydevd in subprocesses tries to connect to the adapter using the hardcoded default, which is not JSON.
|
1.0
|
Subprocesses should inherit PydevdCustomization - The crucial one is `DEFAULT_PROTOCOL`. At the moment, pydevd in subprocesses tries to connect to the adapter using the hardcoded default, which is not JSON.
|
process
|
subprocesses should inherit pydevdcustomization the crucial one is default protocol at the moment pydevd in subprocesses tries to connect to the adapter using the hardcoded default which is not json
| 1
|
10,389
| 13,197,150,112
|
IssuesEvent
|
2020-08-13 22:16:10
|
GoogleCloudPlatform/stackdriver-sandbox
|
https://api.github.com/repos/GoogleCloudPlatform/stackdriver-sandbox
|
closed
|
logging: istio installation reports failure (maybe unnecessarily?)
|
priority: p2 type: process
|
Upon running `./install.sh`, when it begins istio installation:
TLDR: The log says `Failed` and prints lots of stuff, suggesting that something has failed, then says it `Trying with...` and then `Istio 1.6.2 Download Complete!`
My initial guess is that it's trying two different methods and reporting when the first one didn't work, which is misleading (users might think something went wrong unnecessarily). Then, I'd suggest making this log less verbose.
Another could be that something actually failed (it doesn't seem this way to me) - then this is a deeper issue.
Full log of what I'm talking about:
```bash
null_resource.install_istio: Creating...
null_resource.install_istio: Provisioning with 'local-exec'...
null_resource.install_istio (local-exec): Executing: ["/bin/sh" "-c" "./istio/install_istio.sh"]
null_resource.install_istio (local-exec): ###
null_resource.install_istio (local-exec): ### Begin install istio control plane
null_resource.install_istio (local-exec): ###
null_resource.install_istio (local-exec): Downloading Istio 1.6.2...
null_resource.install_istio (local-exec): % Total % Received % Xferd Average Speed Time Time Time Current
null_resource.install_istio (local-exec): Dload Upload Total Spent Left Speed
null_resource.install_istio (local-exec): 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
null_resource.install_istio (local-exec): 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
null_resource.install_istio (local-exec): 0 3896 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
null_resource.install_istio (local-exec): 100 3896 100 3896 0 0 6334 0 --:--:-- --:--:-- --:--:-- 3804k
null_resource.install_istio (local-exec): Downloading istio-1.6.2 from https://github.com/istio/istio/releases/download/1.6.2/istio-1.6.2-linux.tar.gz ...
null_resource.install_istio (local-exec): Failed.
null_resource.install_istio (local-exec): Trying with TARGET_ARCH. Downloading istio-1.6.2 from https://github.com/istio/istio/releases/download/1.6.2/istio-1.6.2-linux-amd64.tar.gz ...
null_resource.install_istio (local-exec): Istio 1.6.2 Download Complete!
null_resource.install_istio (local-exec): Istio has been successfully downloaded into the istio-1.6.2 folder on your system.
```
|
1.0
|
logging: istio installation reports failure (maybe unnecessarily?) - Upon running `./install.sh`, when it begins istio installation:
TLDR: The log says `Failed` and prints lots of stuff, suggesting that something has failed, then says it `Trying with...` and then `Istio 1.6.2 Download Complete!`
My initial guess is that it's trying two different methods and reporting when the first one didn't work, which is misleading (users might think something went wrong unnecessarily). Then, I'd suggest making this log less verbose.
Another could be that something actually failed (it doesn't seem this way to me) - then this is a deeper issue.
Full log of what I'm talking about:
```bash
null_resource.install_istio: Creating...
null_resource.install_istio: Provisioning with 'local-exec'...
null_resource.install_istio (local-exec): Executing: ["/bin/sh" "-c" "./istio/install_istio.sh"]
null_resource.install_istio (local-exec): ###
null_resource.install_istio (local-exec): ### Begin install istio control plane
null_resource.install_istio (local-exec): ###
null_resource.install_istio (local-exec): Downloading Istio 1.6.2...
null_resource.install_istio (local-exec): % Total % Received % Xferd Average Speed Time Time Time Current
null_resource.install_istio (local-exec): Dload Upload Total Spent Left Speed
null_resource.install_istio (local-exec): 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
null_resource.install_istio (local-exec): 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
null_resource.install_istio (local-exec): 0 3896 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
null_resource.install_istio (local-exec): 100 3896 100 3896 0 0 6334 0 --:--:-- --:--:-- --:--:-- 3804k
null_resource.install_istio (local-exec): Downloading istio-1.6.2 from https://github.com/istio/istio/releases/download/1.6.2/istio-1.6.2-linux.tar.gz ...
null_resource.install_istio (local-exec): Failed.
null_resource.install_istio (local-exec): Trying with TARGET_ARCH. Downloading istio-1.6.2 from https://github.com/istio/istio/releases/download/1.6.2/istio-1.6.2-linux-amd64.tar.gz ...
null_resource.install_istio (local-exec): Istio 1.6.2 Download Complete!
null_resource.install_istio (local-exec): Istio has been successfully downloaded into the istio-1.6.2 folder on your system.
```
|
process
|
logging istio installation reports failure maybe unnecessarily upon running install sh when it begins istio installation tldr the log says failed and prints lots of stuff suggesting that something has failed then says it trying with and then istio download complete my initial guess is that it s trying two different methods and reporting when the first one didn t work which is misleading users might think something went wrong unnecessarily then i d suggest making this log less verbose another could be that something actually failed it doesn t seem this way to me then this is a deeper issue full log of what i m talking about bash null resource install istio creating null resource install istio provisioning with local exec null resource install istio local exec executing null resource install istio local exec null resource install istio local exec begin install istio control plane null resource install istio local exec null resource install istio local exec downloading istio null resource install istio local exec total received xferd average speed time time time current null resource install istio local exec dload upload total spent left speed null resource install istio local exec null resource install istio local exec null resource install istio local exec null resource install istio local exec null resource install istio local exec downloading istio from linux tar gz null resource install istio local exec failed null resource install istio local exec trying with target arch downloading istio from download istio linux tar gz null resource install istio local exec istio download complete null resource install istio local exec istio has been successfully downloaded into the istio folder on your system
| 1
|
698,359
| 23,976,041,249
|
IssuesEvent
|
2022-09-13 11:41:23
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.kicker.de - site is not usable
|
priority-important browser-fenix engine-gecko android13
|
<!-- @browser: Firefox Mobile 106.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 13; Mobile; rv:106.0) Gecko/106.0 Firefox/106.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/110734 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.kicker.de/
**Browser / Version**: Firefox Mobile 106.0
**Operating System**: Android 13
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
The problem only occurs on Firefox 106. The page loads but is not usable, because I can't scroll down or up.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/e9c1c6e0-0605-4394-8685-422b72fca8f7.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220911091736</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/9/a966a321-7b71-43d3-85f7-5f1b333c1d46)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.kicker.de - site is not usable - <!-- @browser: Firefox Mobile 106.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 13; Mobile; rv:106.0) Gecko/106.0 Firefox/106.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/110734 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.kicker.de/
**Browser / Version**: Firefox Mobile 106.0
**Operating System**: Android 13
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
The problem only occurs on Firefox 106. The page loads but is not usable, because I can't scroll down or up.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/e9c1c6e0-0605-4394-8685-422b72fca8f7.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220911091736</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/9/a966a321-7b71-43d3-85f7-5f1b333c1d46)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce the problem only occurs on firefox the page loads but is not usable because i can t scroll down or up view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
5,405
| 7,131,566,158
|
IssuesEvent
|
2018-01-22 11:28:48
|
terraform-providers/terraform-provider-aws
|
https://api.github.com/repos/terraform-providers/terraform-provider-aws
|
closed
|
aws_dynamodb_table resource reading tags from dynamodb-local throws UnknownOperationException
|
bug service/dynamodb upstream
|
_This issue was originally opened by @cstavro as hashicorp/terraform#11926. It was migrated here as part of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._
<hr>
### Terraform Version
0.8.6
### Affected Resource(s)
- aws_dynamodb_table
### Terraform Configuration Files
```hcl
provider "aws" {
region = "us-east-1"
dynamodb_endpoint = "http://localhost:8000"
}
resource "aws_dynamodb_table" "mytable" {
name = "mytable"
read_capacity = 1
write_capacity = 1
hash_key = "id"
attribute {
name = "id"
type = "S"
}
}
```
### Debug Output
https://gist.github.com/cstavro/5e23b233ebfc29ffa48e5ead52d27c34
### Expected Behavior
- Should be able to plan and apply a config
- Should be able to plan after an apply
### Actual Behavior
- The initial plan works as expected but the apply errors when refreshing state. It does appear to create the tables and stores them in the tfstate file but you are unable to plan or apply any changes after that initial apply.
The output is here: https://gist.github.com/cstavro/d9d3686454fafd2d95a6ebc7cba43363
### Steps to Reproduce
1. `terraform apply`
### Important Factoids
- This works fine in v0.8.5
- Running latest version of dynamodb-local for development
### References
Pretty sure this is the guy that causes the problem
- https://github.com/hashicorp/terraform/pull/11617
|
1.0
|
aws_dynamodb_table resource reading tags from dynamodb-local throws UnknownOperationException - _This issue was originally opened by @cstavro as hashicorp/terraform#11926. It was migrated here as part of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._
<hr>
### Terraform Version
0.8.6
### Affected Resource(s)
- aws_dynamodb_table
### Terraform Configuration Files
```hcl
provider "aws" {
region = "us-east-1"
dynamodb_endpoint = "http://localhost:8000"
}
resource "aws_dynamodb_table" "mytable" {
name = "mytable"
read_capacity = 1
write_capacity = 1
hash_key = "id"
attribute {
name = "id"
type = "S"
}
}
```
### Debug Output
https://gist.github.com/cstavro/5e23b233ebfc29ffa48e5ead52d27c34
### Expected Behavior
- Should be able to plan and apply a config
- Should be able to plan after an apply
### Actual Behavior
- The initial plan works as expected but the apply errors when refreshing state. It does appear to create the tables and stores them in the tfstate file but you are unable to plan or apply any changes after that initial apply.
The output is here: https://gist.github.com/cstavro/d9d3686454fafd2d95a6ebc7cba43363
### Steps to Reproduce
1. `terraform apply`
### Important Factoids
- This works fine in v0.8.5
- Running latest version of dynamodb-local for development
### References
Pretty sure this is the guy that causes the problem
- https://github.com/hashicorp/terraform/pull/11617
|
non_process
|
aws dynamodb table resource reading tags from dynamodb local throws unknownoperationexception this issue was originally opened by cstavro as hashicorp terraform it was migrated here as part of the the original body of the issue is below terraform version affected resource s aws dynamodb table terraform configuration files hcl provider aws region us east dynamodb endpoint resource aws dynamodb table mytable name mytable read capacity write capacity hash key id attribute name id type s debug output expected behavior should be able to plan and apply a config should be able to plan after an apply actual behavior the initial plan works as expected but the apply errors when refreshing state it does appear to create the tables and stores them in the tfstate file but you are unable to plan or apply any changes after that initial apply the output is here steps to reproduce terraform apply important factoids this works fine in running latest version of dynamodb local for development references pretty sure this is the guy that causes the problem
| 0
|
198,548
| 22,659,661,181
|
IssuesEvent
|
2022-07-02 01:14:07
|
loftwah/grindmodecypher.com
|
https://api.github.com/repos/loftwah/grindmodecypher.com
|
closed
|
CVE-2020-11022 (Medium) detected in phpunit/php-code-coverage-7.0.15, jquery-3.4.1.min.js - autoclosed
|
security vulnerability
|
## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>phpunit/php-code-coverage-7.0.15</b>, <b>jquery-3.4.1.min.js</b></p></summary>
<p>
<details><summary><b>phpunit/php-code-coverage-7.0.15</b></p></summary>
<p>Library that provides collection, processing, and rendering functionality for PHP code coverage information.</p>
<p>Library home page: <a href="https://api.github.com/repos/sebastianbergmann/php-code-coverage/zipball/819f92bba8b001d4363065928088de22f25a3a48">https://api.github.com/repos/sebastianbergmann/php-code-coverage/zipball/819f92bba8b001d4363065928088de22f25a3a48</a></p>
<p>
Dependency Hierarchy:
- antecedent/patchwork-2.1.17 (Root Library)
- phpunit/phpunit-8.5.23
- :x: **phpunit/php-code-coverage-7.0.15** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.4.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js</a></p>
<p>Path to vulnerable library: /wp-content/plugins/jetpack/vendor/phpunit/php-code-coverage/src/Report/Html/Renderer/Template/js/jquery.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.4.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/loftwah/grindmodecypher.com/commit/9796a73ba2275a3470dfddbffcb782959f17d4c7">9796a73ba2275a3470dfddbffcb782959f17d4c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-11022 (Medium) detected in phpunit/php-code-coverage-7.0.15, jquery-3.4.1.min.js - autoclosed
|
non_process
|
cve medium detected in phpunit php code coverage jquery min js autoclosed cve medium severity vulnerability vulnerable libraries phpunit php code coverage jquery min js phpunit php code coverage library that provides collection processing and rendering functionality for php code coverage information library home page a href dependency hierarchy antecedent patchwork root library phpunit phpunit x phpunit php code coverage vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library wp content plugins jetpack vendor phpunit php code coverage src report html renderer template js jquery min js dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
400,678
| 11,778,945,473
|
IssuesEvent
|
2020-03-16 17:07:59
|
eobanb/indianapublicmedia-web
|
https://api.github.com/repos/eobanb/indianapublicmedia-web
|
closed
|
/events/ displaying in wrong order on index
|
bug high priority
|
Events should display in chronological order according to the date entered in the 'Event Date, Machine Readable' field, with the events coming up the soonest at the top. Currently they are appearing in an unknown/unpredictable order:
https://indianapublicmedia.org/events/
|
1.0
|
/events/ displaying in wrong order on index
|
non_process
|
events displaying in wrong order on index events should display in chronological order according to the date entered in the event date machine readable field with the events coming up the soonest at the top currently they are appearing in an unknown unpredictable order
| 0
|
17,346
| 23,171,714,261
|
IssuesEvent
|
2022-07-30 20:31:41
|
open-ephys/GUI
|
https://api.github.com/repos/open-ephys/GUI
|
closed
|
On-board audio monitor only works for headstage port A
|
Processors Rhythm interface
|
Inside the Rhythm source module, selecting one or two channels for audio monitoring only works if the data is coming from port A. We need to add something that checks which headstage is sending data and updates the channel number accordingly.
|
1.0
|
On-board audio monitor only works for headstage port A
|
process
|
on board audio monitor only works for headstage port a inside the rhythm source module selecting one or two channels for audio monitoring only works if the data is coming from port a we need to add something that checks which headstage is sending data and updates the channel number accordingly
| 1
|
20,967
| 27,819,092,780
|
IssuesEvent
|
2023-03-19 02:00:08
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Fri, 17 Mar 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Global Knowledge Calibration for Fast Open-Vocabulary Segmentation
- **Authors:** Kunyang Han, Yong Liu, Jun Hao Liew, Henghui Ding, Yunchao Wei, Jiajun Liu, Yitong Wang, Yansong Tang, Yujiu Yang, Jiashi Feng, Yao Zhao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09181
- **Pdf link:** https://arxiv.org/pdf/2303.09181
- **Abstract**
Recent advancements in pre-trained vision-language models, such as CLIP, have enabled the segmentation of arbitrary concepts solely from textual inputs, a process commonly referred to as open-vocabulary semantic segmentation (OVS). However, existing OVS techniques confront a fundamental challenge: the trained classifier tends to overfit on the base classes observed during training, resulting in suboptimal generalization performance to unseen classes. To mitigate this issue, recent studies have proposed the use of an additional frozen pre-trained CLIP for classification. Nonetheless, this approach incurs heavy computational overheads as the CLIP vision encoder must be repeatedly forward-passed for each mask, rendering it impractical for real-world applications. To address this challenge, our objective is to develop a fast OVS model that can perform comparably or better without the extra computational burden of the CLIP image encoder during inference. To this end, we propose a core idea of preserving the generalizable representation when fine-tuning on known classes. Specifically, we introduce a text diversification strategy that generates a set of synonyms for each training category, which prevents the learned representation from collapsing onto specific known category names. Additionally, we employ a text-guided knowledge distillation method to preserve the generalizable knowledge of CLIP. Extensive experiments demonstrate that our proposed model achieves robust generalization performance across various datasets. Furthermore, we perform a preliminary exploration of open-vocabulary video segmentation and present a benchmark that can facilitate future open-vocabulary research in the video domain.
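The text diversification strategy described above amounts to expanding each training category name into a set of synonyms before building text embeddings, so the classifier does not collapse onto one specific name. The synonym table below is a hypothetical stand-in; the paper generates synonyms automatically rather than from a fixed table:

```python
# Hypothetical synonym table; a real system would generate these with a
# thesaurus or language model rather than hardcode them.
SYNONYMS = {
    "sofa": ["couch", "settee"],
    "car": ["automobile", "motorcar"],
}

def diversify(category):
    """Return the set of names used for a category during training:
    the category itself plus its synonyms (if any)."""
    return [category] + SYNONYMS.get(category, [])

print(diversify("sofa"))  # ['sofa', 'couch', 'settee']
print(diversify("tree"))  # ['tree']
```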
### Reduction of rain-induced errors for wind speed estimation on SAR observations using convolutional neural networks
- **Authors:** Aurélien Colin (1, 2), Pierre Tandeo (1, 3), Charles Peureux (2), Romain Husson (2), Ronan Fablet (1, 3)
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Atmospheric and Oceanic Physics (physics.ao-ph)
- **Arxiv link:** https://arxiv.org/abs/2303.09200
- **Pdf link:** https://arxiv.org/pdf/2303.09200
- **Abstract**
Synthetic Aperture Radar is known to be able to provide high-resolution estimates of surface wind speed. These estimates usually rely on a Geophysical Model Function (GMF) that has difficulties accounting for non-wind processes such as rain events. Convolutional neural networks, on the other hand, have the capacity to use contextual information and have demonstrated their ability to delimit rainfall areas. By carefully building a large dataset of SAR observations from the Copernicus Sentinel-1 mission, collocated with both GMF and atmospheric model wind speeds as well as rainfall estimates, we were able to train a wind speed estimator with reduced errors under rain. Collocations with in-situ wind speed measurements from buoys show a root mean square error that is reduced by 27% (resp. 45%) under rainfall estimated at more than 1 mm/h (resp. 3 mm/h). These results demonstrate the capacity of deep learning models to correct rain-related errors in SAR products.
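The 27%/45% figures above are relative RMSE reductions against buoy measurements. A minimal sketch of that comparison, with made-up wind speeds (none of these numbers are from the paper):

```python
def rmse(pred, truth):
    # Root mean square error between predicted and reference wind speeds.
    return (sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred)) ** 0.5

# Hypothetical buoy collocations (m/s): a corrected model whose rain-time
# errors shrink relative to the GMF baseline.
truth = [5.0, 7.0, 9.0, 11.0]
gmf = [7.0, 9.0, 7.0, 13.0]    # baseline with large rain-induced errors
cnn = [5.5, 7.5, 8.5, 11.5]    # corrected estimates
reduction = 1 - rmse(cnn, truth) / rmse(gmf, truth)
print(round(reduction, 2))  # 0.75
```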
### SpectralCLIP: Preventing Artifacts in Text-Guided Style Transfer from a Spectral Perspective
- **Authors:** Zipeng Xu, Songlong Xing, Enver Sangineto, Nicu Sebe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09270
- **Pdf link:** https://arxiv.org/pdf/2303.09270
- **Abstract**
Contrastive Language-Image Pre-Training (CLIP) has refreshed the state of the art for a broad range of vision-language cross-modal tasks. Particularly, it has created an intriguing research line of text-guided image style transfer, dispensing with the need for style reference images as in traditional style transfer methods. However, directly using CLIP to guide the transfer of style leads to undesirable artifacts (mainly written words and unrelated visual entities) spread over the image, partly due to the entanglement of visual and written concepts inherent in CLIP. Inspired by the use of spectral analysis in filtering linguistic information at different granular levels, we analyse the patch embeddings from the last layer of the CLIP vision encoder from the perspective of spectral analysis and find that the presence of undesirable artifacts is highly correlated to some certain frequency components. We propose SpectralCLIP, which implements a spectral filtering layer on top of the CLIP vision encoder, to alleviate the artifact issue. Experimental results show that SpectralCLIP prevents the generation of artifacts effectively in quantitative and qualitative terms, without impairing the stylisation quality. We further apply SpectralCLIP to text-conditioned image generation and show that it prevents written words in the generated images. Code is available at https://github.com/zipengxuc/SpectralCLIP.
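The spectral filtering idea can be illustrated on a single embedding channel: transform to the frequency domain, zero the bins associated with artifacts, and transform back. The sketch below is a toy low-pass filter built on a naive DFT, not the paper's actual SpectralCLIP layer; `spectral_filter` and the choice of kept bins are illustrative assumptions:

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform of a real-valued sequence.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    # Inverse DFT; keeps only the real part (input was real-valued).
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def spectral_filter(x, keep):
    # Zero every frequency bin not listed in `keep`, then transform back.
    X = dft(x)
    return idft([X[k] if k in keep else 0 for k in range(len(X))])

# A constant channel lives entirely in bin 0, so a low-pass filter keeping
# only k=0 returns it unchanged, while a pure Nyquist-frequency channel
# (the alternating sequence) is removed entirely.
print([round(v, 6) for v in spectral_filter([1.0, 1.0, 1.0, 1.0], {0})])  # [1.0, 1.0, 1.0, 1.0]
print([round(abs(v), 6) for v in spectral_filter([1.0, -1.0, 1.0, -1.0], {0})])
```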
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Commonsense Knowledge Assisted Deep Learning for Resource-constrained and Fine-grained Object Detection
- **Authors:** Pu Zhang, Bin Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.09026
- **Pdf link:** https://arxiv.org/pdf/2303.09026
- **Abstract**
In this paper, we consider fine-grained image object detection in resource-constrained cases such as edge computing. Deep learning (DL), namely learning with deep neural networks (DNNs), has become the dominating approach to object detection. To achieve accurate fine-grained detection, one needs to employ a large enough DNN model and a vast amount of data annotations, which brings a challenge for using modern DL object detectors in resource-constrained cases. To this end, we propose an approach, which leverages commonsense knowledge to assist a coarse-grained object detector to get accurate fine-grained detection results. Specifically, we introduce a commonsense knowledge inference module (CKIM) to process coarse-grained labels given by a benchmark DL detector to produce fine-grained labels. We consider both crisp-rule and fuzzy-rule based inference in our CKIM; the latter is used to handle ambiguity in the target semantic labels. We implement our method based on several modern DL detectors, namely YOLOv4, Mobilenetv3-SSD and YOLOv7-tiny. Experimental results show that our approach outperforms benchmark detectors remarkably in terms of accuracy, model size and processing latency.
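A crisp-rule version of the CKIM idea amounts to a lookup of coarse label plus attributes: the first matching rule refines the label, otherwise the coarse label is kept. The rule table and names below (`RULES`, `refine`, the vehicle attributes) are invented for illustration and are not from the paper:

```python
# Hypothetical rule table: (coarse label, attribute predicate, fine label).
RULES = [
    ("vehicle", lambda a: a.get("wheels") == 2, "motorcycle"),
    ("vehicle", lambda a: a.get("wheels") == 4 and a.get("length_m", 0) > 6, "truck"),
    ("vehicle", lambda a: a.get("wheels") == 4, "car"),
]

def refine(coarse_label, attributes):
    """Crisp-rule inference: the first matching rule wins; otherwise the
    coarse label is returned unchanged."""
    for coarse, predicate, fine in RULES:
        if coarse == coarse_label and predicate(attributes):
            return fine
    return coarse_label

print(refine("vehicle", {"wheels": 4, "length_m": 8.0}))  # truck
print(refine("vehicle", {"wheels": 2}))                   # motorcycle
print(refine("animal", {}))                               # animal
```

A fuzzy-rule variant would replace the boolean predicates with membership degrees and pick the label with the highest degree.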
### MixCycle: Mixup Assisted Semi-Supervised 3D Single Object Tracking with Cycle Consistency
- **Authors:** Qiao Wu, Jiaqi Yang, Kun Sun, Chu'ai Zhang, Yanning Zhang, Mathieu Salzmann
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09219
- **Pdf link:** https://arxiv.org/pdf/2303.09219
- **Abstract**
3D single object tracking (SOT) is an indispensable part of automated driving. Existing approaches rely heavily on large, densely labeled datasets. However, annotating point clouds is both costly and time-consuming. Inspired by the great success of cycle tracking in unsupervised 2D SOT, we introduce the first semi-supervised approach to 3D SOT. Specifically, we introduce two cycle-consistency strategies for supervision: 1) Self tracking cycles, which leverage labels to help the model converge better in the early stages of training; 2) forward-backward cycles, which strengthen the tracker's robustness to motion variations and the template noise caused by the template update strategy. Furthermore, we propose a data augmentation strategy named SOTMixup to improve the tracker's robustness to point cloud diversity. SOTMixup generates training samples by sampling points in two point clouds with a mixing rate and assigns a reasonable loss weight for training according to the mixing rate. The resulting MixCycle approach generalizes to appearance matching-based trackers. On the KITTI benchmark, based on the P2B tracker, MixCycle trained with $\textbf{10%}$ labels outperforms P2B trained with $\textbf{100%}$ labels, and achieves a $\textbf{28.4%}$ precision improvement when using $\textbf{1%}$ labels. Our code will be publicly released.
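The SOTMixup sampling step can be pictured as drawing a mixing rate and splicing two point clouds. The sketch below is an assumption-laden simplification (`sot_mixup`, and using the rate itself as the loss weight) rather than the paper's exact procedure:

```python
import random

def sot_mixup(cloud_a, cloud_b, rng):
    """Mix two point clouds: take a lam-fraction of points from A, the rest
    from B, and return the mixed cloud plus lam as the training loss weight."""
    n = len(cloud_a)
    lam = rng.random()                       # mixing rate in [0, 1)
    k = round(lam * n)
    mixed = rng.sample(cloud_a, k) + rng.sample(cloud_b, n - k)
    return mixed, lam

rng = random.Random(0)
a = [(float(i), 0.0, 0.0) for i in range(100)]
b = [(0.0, float(i), 0.0) for i in range(100)]
mixed, lam = sot_mixup(a, b, rng)
print(len(mixed), 0.0 <= lam < 1.0)  # 100 True
```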
### SpectralCLIP: Preventing Artifacts in Text-Guided Style Transfer from a Spectral Perspective
- **Authors:** Zipeng Xu, Songlong Xing, Enver Sangineto, Nicu Sebe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09270
- **Pdf link:** https://arxiv.org/pdf/2303.09270
- **Abstract**
Contrastive Language-Image Pre-Training (CLIP) has refreshed the state of the art for a broad range of vision-language cross-modal tasks. Particularly, it has created an intriguing research line of text-guided image style transfer, dispensing with the need for style reference images as in traditional style transfer methods. However, directly using CLIP to guide the transfer of style leads to undesirable artifacts (mainly written words and unrelated visual entities) spread over the image, partly due to the entanglement of visual and written concepts inherent in CLIP. Inspired by the use of spectral analysis in filtering linguistic information at different granular levels, we analyse the patch embeddings from the last layer of the CLIP vision encoder from the perspective of spectral analysis and find that the presence of undesirable artifacts is highly correlated to some certain frequency components. We propose SpectralCLIP, which implements a spectral filtering layer on top of the CLIP vision encoder, to alleviate the artifact issue. Experimental results show that SpectralCLIP prevents the generation of artifacts effectively in quantitative and qualitative terms, without impairing the stylisation quality. We further apply SpectralCLIP to text-conditioned image generation and show that it prevents written words in the generated images. Code is available at https://github.com/zipengxuc/SpectralCLIP.
### Tackling Clutter in Radar Data -- Label Generation and Detection Using PointNet++
- **Authors:** Johannes Kopp, Dominik Kellner, Aldi Piroli, Klaus Dietmayer
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Robotics (cs.RO); Signal Processing (eess.SP)
- **Arxiv link:** https://arxiv.org/abs/2303.09530
- **Pdf link:** https://arxiv.org/pdf/2303.09530
- **Abstract**
Radar sensors employed for environment perception, e.g. in autonomous vehicles, output a lot of unwanted clutter. These points, for which no corresponding real objects exist, are a major source of errors in following processing steps like object detection or tracking. We therefore present two novel neural network setups for identifying clutter. The input data, network architectures and training configuration are adjusted specifically for this task. Special attention is paid to the downsampling of point clouds composed of multiple sensor scans. In an extensive evaluation, the new setups display substantially better performance than existing approaches. Because there is no suitable public data set in which clutter is annotated, we design a method to automatically generate the respective labels. By applying it to existing data with object annotations and releasing its code, we effectively create the first freely available radar clutter data set representing real-world driving scenarios. Code and instructions are accessible at www.github.com/kopp-j/clutter-ds.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval
- **Authors:** Yi Xie, Huaidong Zhang, Xuemiao Xu, Jianqing Zhu, Shengfeng He
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09230
- **Pdf link:** https://arxiv.org/pdf/2303.09230
- **Abstract**
Previous Knowledge Distillation based efficient image retrieval methods employs a lightweight network as the student model for fast inference. However, the lightweight student model lacks adequate representation capacity for effective knowledge imitation during the most critical early training period, causing final performance degeneration. To tackle this issue, we propose a Capacity Dynamic Distillation framework, which constructs a student model with editable representation capacity. Specifically, the employed student model is initially a heavy model to fruitfully learn distilled knowledge in the early training epochs, and the student model is gradually compressed during the training. To dynamically adjust the model capacity, our dynamic framework inserts a learnable convolutional layer within each residual block in the student model as the channel importance indicator. The indicator is optimized simultaneously by the image retrieval loss and the compression loss, and a retrieval-guided gradient resetting mechanism is proposed to release the gradient conflict. Extensive experiments show that our method has superior inference speed and accuracy, e.g., on the VeRi-776 dataset, given the ResNet101 as a teacher, our method saves 67.13% model parameters and 65.67% FLOPs (around 24.13% and 21.94% higher than state-of-the-arts) without sacrificing accuracy (around 2.11% mAP higher than state-of-the-arts).
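At pruning time, the learnable channel-importance indicator reduces to thresholding per-channel gate values. The sketch below (`prune_by_indicator`, the gate values, the threshold) is a toy illustration of that step, not the paper's training-time mechanism:

```python
def prune_by_indicator(gates, threshold=0.05):
    """Keep the channels whose importance gate clears the threshold and
    report the fraction of channels (a proxy for parameters) removed."""
    kept = [i for i, g in enumerate(gates) if abs(g) >= threshold]
    removed_frac = 1.0 - len(kept) / len(gates)
    return kept, removed_frac

# Toy gate values that a compression loss might have driven toward zero.
gates = [0.9, 0.01, 0.4, 0.002, 0.7, 0.03, 0.55, 0.8]
kept, frac = prune_by_indicator(gates)
print(kept, frac)  # [0, 2, 4, 6, 7] 0.375
```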
## Keyword: RAW
### Unsupervised Facial Expression Representation Learning with Contrastive Local Warping
- **Authors:** Fanglei Xue, Yifan Sun, Yi Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09034
- **Pdf link:** https://arxiv.org/pdf/2303.09034
- **Abstract**
This paper investigates unsupervised representation learning for facial expression analysis. We think Unsupervised Facial Expression Representation (UFER) deserves exploration and has the potential to address some key challenges in facial expression analysis, such as scaling, annotation bias, the discrepancy between discrete labels and continuous emotions, and model pre-training. Such motivated, we propose a UFER method with contrastive local warping (ContraWarping), which leverages the insight that the emotional expression is robust to current global transformation (affine transformation, color jitter, etc.) but can be easily changed by random local warping. Therefore, given a facial image, ContraWarping employs some global transformations and local warping to generate its positive and negative samples and sets up a novel contrastive learning framework. Our in-depth investigation shows that: 1) the positive pairs from global transformations may be exploited with general self-supervised learning (e.g., BYOL) and already bring some informative features, and 2) the negative pairs from local warping explicitly introduce expression-related variation and further bring substantial improvement. Based on ContraWarping, we demonstrate the benefit of UFER under two facial expression analysis scenarios: facial expression recognition and image retrieval. For example, directly using ContraWarping features for linear probing achieves 79.14% accuracy on RAF-DB, significantly reducing the gap towards the full-supervised counterpart (88.92% / 84.81% with/without pre-training).
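ContraWarping's sample construction can be mimicked on a toy "image": a global transformation yields the positive sample, a local warp the negative. The functions below (`global_jitter`, `local_warp`) are illustrative stand-ins for the paper's augmentations:

```python
import random

def global_jitter(img, delta):
    # Global transformation: uniform brightness shift (expression preserved).
    return [[p + delta for p in row] for row in img]

def local_warp(img, rng):
    # Local warping: swap two random entries, changing only local structure.
    out = [row[:] for row in img]
    h, w = len(out), len(out[0])
    r1, c1 = rng.randrange(h), rng.randrange(w)
    r2, c2 = rng.randrange(h), rng.randrange(w)
    out[r1][c1], out[r2][c2] = out[r2][c2], out[r1][c1]
    return out

rng = random.Random(1)
img = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
positive = global_jitter(img, 10)   # positive sample for contrastive learning
negative = local_warp(img, rng)     # negative sample
print(positive[0])  # [10, 11, 12]
```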
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Fri, 17 Mar 23 - ## Keyword: events
### Global Knowledge Calibration for Fast Open-Vocabulary Segmentation
- **Authors:** Kunyang Han, Yong Liu, Jun Hao Liew, Henghui Ding, Yunchao Wei, Jiajun Liu, Yitong Wang, Yansong Tang, Yujiu Yang, Jiashi Feng, Yao Zhao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09181
- **Pdf link:** https://arxiv.org/pdf/2303.09181
- **Abstract**
Recent advancements in pre-trained vision-language models, such as CLIP, have enabled the segmentation of arbitrary concepts solely from textual inputs, a process commonly referred to as open-vocabulary semantic segmentation (OVS). However, existing OVS techniques confront a fundamental challenge: the trained classifier tends to overfit on the base classes observed during training, resulting in suboptimal generalization performance to unseen classes. To mitigate this issue, recent studies have proposed the use of an additional frozen pre-trained CLIP for classification. Nonetheless, this approach incurs heavy computational overheads as the CLIP vision encoder must be repeatedly forward-passed for each mask, rendering it impractical for real-world applications. To address this challenge, our objective is to develop a fast OVS model that can perform comparably or better without the extra computational burden of the CLIP image encoder during inference. To this end, we propose a core idea of preserving the generalizable representation when fine-tuning on known classes. Specifically, we introduce a text diversification strategy that generates a set of synonyms for each training category, which prevents the learned representation from collapsing onto specific known category names. Additionally, we employ a text-guided knowledge distillation method to preserve the generalizable knowledge of CLIP. Extensive experiments demonstrate that our proposed model achieves robust generalization performance across various datasets. Furthermore, we perform a preliminary exploration of open-vocabulary video segmentation and present a benchmark that can facilitate future open-vocabulary research in the video domain.
### Reduction of rain-induced errors for wind speed estimation on SAR observations using convolutional neural networks
- **Authors:** Aurélien Colin (1, 2), Pierre Tandeo (1, 3), Charles Peureux (2), Romain Husson (2), Ronan Fablet (1, 3)
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Atmospheric and Oceanic Physics (physics.ao-ph)
- **Arxiv link:** https://arxiv.org/abs/2303.09200
- **Pdf link:** https://arxiv.org/pdf/2303.09200
- **Abstract**
Synthetic Aperture Radar is known to be able to provide high-resolution estimates of surface wind speed. These estimates usually rely on a Geophysical Model Function (GMF) that has difficulties accounting for non-wind processes such as rain events. Convolutional neural networks, on the other hand, have the capacity to use contextual information and have demonstrated their ability to delimit rainfall areas. By carefully building a large dataset of SAR observations from the Copernicus Sentinel-1 mission, collocated with both GMF and atmospheric model wind speeds as well as rainfall estimates, we were able to train a wind speed estimator with reduced errors under rain. Collocations with in-situ wind speed measurements from buoys show a root mean square error that is reduced by 27% (resp. 45%) under rainfall estimated at more than 1 mm/h (resp. 3 mm/h). These results demonstrate the capacity of deep learning models to correct rain-related errors in SAR products.
### SpectralCLIP: Preventing Artifacts in Text-Guided Style Transfer from a Spectral Perspective
- **Authors:** Zipeng Xu, Songlong Xing, Enver Sangineto, Nicu Sebe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09270
- **Pdf link:** https://arxiv.org/pdf/2303.09270
- **Abstract**
Contrastive Language-Image Pre-Training (CLIP) has refreshed the state of the art for a broad range of vision-language cross-modal tasks. Particularly, it has created an intriguing research line of text-guided image style transfer, dispensing with the need for style reference images as in traditional style transfer methods. However, directly using CLIP to guide the transfer of style leads to undesirable artifacts (mainly written words and unrelated visual entities) spread over the image, partly due to the entanglement of visual and written concepts inherent in CLIP. Inspired by the use of spectral analysis in filtering linguistic information at different granular levels, we analyse the patch embeddings from the last layer of the CLIP vision encoder from the perspective of spectral analysis and find that the presence of undesirable artifacts is highly correlated to some certain frequency components. We propose SpectralCLIP, which implements a spectral filtering layer on top of the CLIP vision encoder, to alleviate the artifact issue. Experimental results show that SpectralCLIP prevents the generation of artifacts effectively in quantitative and qualitative terms, without impairing the stylisation quality. We further apply SpectralCLIP to text-conditioned image generation and show that it prevents written words in the generated images. Code is available at https://github.com/zipengxuc/SpectralCLIP.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Commonsense Knowledge Assisted Deep Learning for Resource-constrained and Fine-grained Object Detection
- **Authors:** Pu Zhang, Bin Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.09026
- **Pdf link:** https://arxiv.org/pdf/2303.09026
- **Abstract**
In this paper, we consider fine-grained image object detection in resource-constrained cases such as edge computing. Deep learning (DL), namely learning with deep neural networks (DNNs), has become the dominating approach to object detection. To achieve accurate fine-grained detection, one needs to employ a large enough DNN model and a vast amount of data annotations, which brings a challenge for using modern DL object detectors in resource-constrained cases. To this end, we propose an approach, which leverages commonsense knowledge to assist a coarse-grained object detector to get accurate fine-grained detection results. Specifically, we introduce a commonsense knowledge inference module (CKIM) to process coarse-grained labels given by a benchmark DL detector to produce fine-grained labels. We consider both crisp-rule and fuzzy-rule based inference in our CKIM; the latter is used to handle ambiguity in the target semantic labels. We implement our method based on several modern DL detectors, namely YOLOv4, Mobilenetv3-SSD and YOLOv7-tiny. Experimental results show that our approach outperforms benchmark detectors remarkably in terms of accuracy, model size and processing latency.
### MixCycle: Mixup Assisted Semi-Supervised 3D Single Object Tracking with Cycle Consistency
- **Authors:** Qiao Wu, Jiaqi Yang, Kun Sun, Chu'ai Zhang, Yanning Zhang, Mathieu Salzmann
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09219
- **Pdf link:** https://arxiv.org/pdf/2303.09219
- **Abstract**
3D single object tracking (SOT) is an indispensable part of automated driving. Existing approaches rely heavily on large, densely labeled datasets. However, annotating point clouds is both costly and time-consuming. Inspired by the great success of cycle tracking in unsupervised 2D SOT, we introduce the first semi-supervised approach to 3D SOT. Specifically, we introduce two cycle-consistency strategies for supervision: 1) Self tracking cycles, which leverage labels to help the model converge better in the early stages of training; 2) forward-backward cycles, which strengthen the tracker's robustness to motion variations and the template noise caused by the template update strategy. Furthermore, we propose a data augmentation strategy named SOTMixup to improve the tracker's robustness to point cloud diversity. SOTMixup generates training samples by sampling points in two point clouds with a mixing rate and assigns a reasonable loss weight for training according to the mixing rate. The resulting MixCycle approach generalizes to appearance matching-based trackers. On the KITTI benchmark, based on the P2B tracker, MixCycle trained with $\textbf{10%}$ labels outperforms P2B trained with $\textbf{100%}$ labels, and achieves a $\textbf{28.4%}$ precision improvement when using $\textbf{1%}$ labels. Our code will be publicly released.
### SpectralCLIP: Preventing Artifacts in Text-Guided Style Transfer from a Spectral Perspective
- **Authors:** Zipeng Xu, Songlong Xing, Enver Sangineto, Nicu Sebe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09270
- **Pdf link:** https://arxiv.org/pdf/2303.09270
- **Abstract**
Contrastive Language-Image Pre-Training (CLIP) has refreshed the state of the art for a broad range of vision-language cross-modal tasks. Particularly, it has created an intriguing research line of text-guided image style transfer, dispensing with the need for style reference images as in traditional style transfer methods. However, directly using CLIP to guide the transfer of style leads to undesirable artifacts (mainly written words and unrelated visual entities) spread over the image, partly due to the entanglement of visual and written concepts inherent in CLIP. Inspired by the use of spectral analysis in filtering linguistic information at different granular levels, we analyse the patch embeddings from the last layer of the CLIP vision encoder from the perspective of spectral analysis and find that the presence of undesirable artifacts is highly correlated to some certain frequency components. We propose SpectralCLIP, which implements a spectral filtering layer on top of the CLIP vision encoder, to alleviate the artifact issue. Experimental results show that SpectralCLIP prevents the generation of artifacts effectively in quantitative and qualitative terms, without impairing the stylisation quality. We further apply SpectralCLIP to text-conditioned image generation and show that it prevents written words in the generated images. Code is available at https://github.com/zipengxuc/SpectralCLIP.
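The core operation, filtering frequency components of the patch-embedding sequence, can be illustrated with a naive DFT; this is a conceptual sketch only (which bins to drop, and the exact placement in the encoder, come from the paper and are not reproduced here):

```python
import cmath

def spectral_filter(embeddings, drop_bins):
    """Illustrative spectral filtering layer: each embedding channel is treated
    as a 1-D signal over the patch sequence, transformed with a naive DFT, the
    frequency bins listed in `drop_bins` are zeroed, and the signal is rebuilt
    with the inverse DFT. Bin indices here are placeholders."""
    n = len(embeddings)
    num_channels = len(embeddings[0])
    filtered = [[0.0] * num_channels for _ in range(n)]
    for c in range(num_channels):
        signal = [embeddings[t][c] for t in range(n)]
        # forward DFT over the patch dimension
        spectrum = [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n)) for k in range(n)]
        for k in drop_bins:  # suppress the artifact-correlated components
            spectrum[k] = 0.0
        # inverse DFT (real part; the inputs are real-valued)
        for t in range(n):
            filtered[t][c] = sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                                 for k in range(n)).real / n
    return filtered
```

A real implementation would use a batched FFT for efficiency; the point of the sketch is only that zeroing selected bins leaves the remaining signal intact.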
### Tackling Clutter in Radar Data -- Label Generation and Detection Using PointNet++
- **Authors:** Johannes Kopp, Dominik Kellner, Aldi Piroli, Klaus Dietmayer
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Robotics (cs.RO); Signal Processing (eess.SP)
- **Arxiv link:** https://arxiv.org/abs/2303.09530
- **Pdf link:** https://arxiv.org/pdf/2303.09530
- **Abstract**
Radar sensors employed for environment perception, e.g. in autonomous vehicles, output a lot of unwanted clutter. These points, for which no corresponding real objects exist, are a major source of errors in following processing steps like object detection or tracking. We therefore present two novel neural network setups for identifying clutter. The input data, network architectures and training configuration are adjusted specifically for this task. Special attention is paid to the downsampling of point clouds composed of multiple sensor scans. In an extensive evaluation, the new setups display substantially better performance than existing approaches. Because there is no suitable public data set in which clutter is annotated, we design a method to automatically generate the respective labels. By applying it to existing data with object annotations and releasing its code, we effectively create the first freely available radar clutter data set representing real-world driving scenarios. Code and instructions are accessible at www.github.com/kopp-j/clutter-ds.
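The abstract highlights the downsampling of merged multi-scan point clouds; farthest-point sampling is a common choice in PointNet++-style pipelines and is sketched below as a generic illustration (not the authors' exact procedure):

```python
def farthest_point_sampling(points, k):
    """Generic farthest-point downsampling: greedily pick k points so that each
    new point is the one farthest from everything already selected, preserving
    the spatial spread of a merged multi-scan radar point cloud."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    selected = [0]  # start from the first point
    min_d = [dist2(p, points[0]) for p in points]
    while len(selected) < k:
        idx = max(range(len(points)), key=lambda i: min_d[i])
        selected.append(idx)
        for i, p in enumerate(points):
            min_d[i] = min(min_d[i], dist2(p, points[idx]))
    return [points[i] for i in selected]
```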
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval
- **Authors:** Yi Xie, Huaidong Zhang, Xuemiao Xu, Jianqing Zhu, Shengfeng He
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09230
- **Pdf link:** https://arxiv.org/pdf/2303.09230
- **Abstract**
Previous Knowledge Distillation based efficient image retrieval methods employ a lightweight network as the student model for fast inference. However, the lightweight student model lacks adequate representation capacity for effective knowledge imitation during the most critical early training period, causing final performance degradation. To tackle this issue, we propose a Capacity Dynamic Distillation framework, which constructs a student model with editable representation capacity. Specifically, the employed student model is initially a heavy model to fruitfully learn distilled knowledge in the early training epochs, and the student model is gradually compressed during the training. To dynamically adjust the model capacity, our dynamic framework inserts a learnable convolutional layer within each residual block in the student model as the channel importance indicator. The indicator is optimized simultaneously by the image retrieval loss and the compression loss, and a retrieval-guided gradient resetting mechanism is proposed to release the gradient conflict. Extensive experiments show that our method has superior inference speed and accuracy, e.g., on the VeRi-776 dataset, given the ResNet101 as a teacher, our method saves 67.13% model parameters and 65.67% FLOPs (around 24.13% and 21.94% higher than state-of-the-arts) without sacrificing accuracy (around 2.11% mAP higher than state-of-the-arts).
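The channel-importance idea can be sketched as a simple pruning step; the function below is a hypothetical illustration (the paper's indicator is a learnable convolutional layer trained jointly with the retrieval and compression losses, which is not reproduced here):

```python
def prune_by_indicator(weights, indicator, keep_ratio):
    """Illustrative channel pruning: `indicator` stands in for the learnable
    per-channel importance values, and the `keep_ratio` fraction of channels
    with the largest indicator values is retained, shrinking the student."""
    num_keep = max(1, round(keep_ratio * len(indicator)))
    order = sorted(range(len(indicator)), key=lambda i: indicator[i], reverse=True)
    kept = sorted(order[:num_keep])
    return [weights[i] for i in kept], kept
```

Repeating such a step during training, with a gradually shrinking `keep_ratio`, mirrors the "heavy early, compressed later" schedule the abstract describes.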
## Keyword: RAW
### Unsupervised Facial Expression Representation Learning with Contrastive Local Warping
- **Authors:** Fanglei Xue, Yifan Sun, Yi Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.09034
- **Pdf link:** https://arxiv.org/pdf/2303.09034
- **Abstract**
This paper investigates unsupervised representation learning for facial expression analysis. We think Unsupervised Facial Expression Representation (UFER) deserves exploration and has the potential to address some key challenges in facial expression analysis, such as scaling, annotation bias, the discrepancy between discrete labels and continuous emotions, and model pre-training. So motivated, we propose a UFER method with contrastive local warping (ContraWarping), which leverages the insight that the emotional expression is robust to common global transformations (affine transformation, color jitter, etc.) but can be easily changed by random local warping. Therefore, given a facial image, ContraWarping employs some global transformations and local warping to generate its positive and negative samples and sets up a novel contrastive learning framework. Our in-depth investigation shows that: 1) the positive pairs from global transformations may be exploited with general self-supervised learning (e.g., BYOL) and already bring some informative features, and 2) the negative pairs from local warping explicitly introduce expression-related variation and further bring substantial improvement. Based on ContraWarping, we demonstrate the benefit of UFER under two facial expression analysis scenarios: facial expression recognition and image retrieval. For example, directly using ContraWarping features for linear probing achieves 79.14% accuracy on RAF-DB, significantly reducing the gap towards the full-supervised counterpart (88.92% / 84.81% with/without pre-training).
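The positive/negative construction can be sketched on a toy grayscale grid; this is a hedged reading of the abstract, and the specific transforms (a brightness shift as the global transform, a 2x2 pixel swap as the local warp) are simplifications chosen for illustration:

```python
import random

def make_contrastive_pair(image, rng=None):
    """Hypothetical ContraWarping-style sample generation on a 2-D grayscale
    image (list of rows): the positive keeps the expression intact via a global
    brightness shift, while the negative applies a random local warp that swaps
    a small patch of pixels, mimicking an expression-changing deformation."""
    rng = rng or random.Random(0)
    h, w = len(image), len(image[0])
    shift = rng.uniform(-0.1, 0.1)
    positive = [[pix + shift for pix in row] for row in image]  # global transform
    negative = [row[:] for row in image]
    r, c = rng.randrange(h - 1), rng.randrange(w - 1)           # local 2x2 warp
    negative[r][c], negative[r + 1][c + 1] = negative[r + 1][c + 1], negative[r][c]
    return positive, negative
```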
## Keyword: raw image
There is no result
---
- **Row:** 10,394 — id 13,198,038,837 (IssuesEvent, closed, created 2020-08-14 01:06:38)
- **Repo:** googleapis/google-cloud-cpp — https://api.github.com/repos/googleapis/google-cloud-cpp
- **Title:** is the `gh-pages` branch in this repo still used?
- **Labels:** type: process
- **Body:** I noticed that the [gh-pages branch](https://github.com/googleapis/google-cloud-cpp/tree/gh-pages) in this repo is ~6 months behind... but the docs on https://googleapis.github.io/google-cloud-cpp/ still appear to be up to date. Do we need this branch, or can we delete it? Note there are also some references in the docs: https://github.com/googleapis/google-cloud-cpp/search?q=gh-pages&unscoped_q=gh-pages
- **Label:** process (binary_label: 1)
---
- **Row:** 317,777 — id 23,689,114,958 (IssuesEvent, reopened, created 2022-08-29 09:11:06)
- **Repo:** unitaryfund/mitiq — https://api.github.com/repos/unitaryfund/mitiq
- **Title:** Complete CDR theory subsection of the Mitiq user guide
- **Labels:** documentation, cdr, feature-request
- **Body:**
  - Pre-Request Checklist: [ ] I checked to make sure that this feature has not already been requested.
  - Issue Description: The current docs section [What is the theory behind CDR?](https://mitiq.readthedocs.io/en/stable/guide/cdr-5-theory.html) contains references to research papers.
  - Proposed Solution: Add some theoretical content similar to what we have now for ZNE and PEC. The content should be short (without oversimplifying), easy to read and understand, and not too technical (for minor technical details, we can cite research papers).
  - Additional References: The two references cited in [What is the theory behind CDR?](https://mitiq.readthedocs.io/en/stable/guide/cdr-5-theory.html) are the original references in which CDR and vnCDR were introduced. The Mitiq paper also includes a short summary of both CDR and vnCDR that may be useful for this issue.
- **Label:** non_process (binary_label: 0)