| Column | Dtype | Values / lengths |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | lengths 7 – 112 |
| repo_url | string | lengths 36 – 141 |
| action | string | 3 classes |
| title | string | lengths 1 – 744 |
| labels | string | lengths 4 – 574 |
| body | string | lengths 9 – 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 – 211k |
| label | string | 2 classes |
| text | string | lengths 96 – 188k |
| binary_label | int64 | 0 or 1 |
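As a minimal sketch of how rows with this schema could be consumed (the three example rows below reuse `id`, `repo`, `label`, and `binary_label` values from the sample records that follow; everything else about the loading pipeline is left out and the dict representation is only illustrative, not the dataset's actual storage format):

```python
# Toy sketch: a few rows from the sample below as plain dicts,
# filtered on the binary_label column (1 appears alongside
# label == "process", 0 alongside label == "non_process").
rows = [
    {"id": 20188208541, "repo": "savitamittalmsft/WAS-SEC-TEST",
     "label": "process", "binary_label": 1},
    {"id": 24670876593, "repo": "WordPress/openverse-frontend",
     "label": "non_process", "binary_label": 0},
    {"id": 18049232673, "repo": "aiidateam/aiida-core",
     "label": "process", "binary_label": 1},
]

# Keep only the process-labelled rows.
process_rows = [r for r in rows if r["binary_label"] == 1]
print([r["repo"] for r in process_rows])
# → ['savitamittalmsft/WAS-SEC-TEST', 'aiidateam/aiida-core']
```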
16,002
| 20,188,208,541
|
IssuesEvent
|
2022-02-11 01:18:09
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Use penetration testing and red team exercises to validate security defenses for this workload
|
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Deployment & Testing Testing & Validation
|
<a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-test#penetration-testing-pentesting">Use penetration testing and red team exercises to validate security defenses for this workload</a>
<p><b>Why Consider This?</b></p>
Penetration tests provide a point-in-time validation of security defences; red teams can help provide ongoing visibility and assurance that your defences work as designed, potentially testing across different levels within your workload(s).
<p><b>Context</b></p>
<p><span>Penetration tests or red team programs can be used to simulate either one time, or persistent threats against an organization to validate defenses that have been put in place to protect organizational resources.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Use penetration testing and red team exercises to validate security defenses for this workload</span></p><p><span>&nbsp;</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/security/fundamentals/pen-testing" target="_blank"><span>Azure Penetration testing</span></a><span /></p><p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/governance#penetration-testing" target="_blank"><span>Penetration testing</span></a><span /></p>
|
1.0
|
Use penetration testing and red team exercises to validate security defenses for this workload - <a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-test#penetration-testing-pentesting">Use penetration testing and red team exercises to validate security defenses for this workload</a>
<p><b>Why Consider This?</b></p>
Penetration tests provide a point-in-time validation of security defences; red teams can help provide ongoing visibility and assurance that your defences work as designed, potentially testing across different levels within your workload(s).
<p><b>Context</b></p>
<p><span>Penetration tests or red team programs can be used to simulate either one time, or persistent threats against an organization to validate defenses that have been put in place to protect organizational resources.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Use penetration testing and red team exercises to validate security defenses for this workload</span></p><p><span>&nbsp;</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/security/fundamentals/pen-testing" target="_blank"><span>Azure Penetration testing</span></a><span /></p><p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/governance#penetration-testing" target="_blank"><span>Penetration testing</span></a><span /></p>
|
process
|
use penetration testing and red team exercises to validate security defenses for this workload why consider this penetration tests provide a point in time validation of security defences red teams can help provide ongoing visibility and assurance that your defences work as designed potentially testing across different levels within your workload s context penetration tests or red team programs can be used to simulate either one time or persistent threats against an organization to validate defenses that have been put in place to protect organizational resources suggested actions use penetration testing and red team exercises to validate security defenses for this workload nbsp learn more azure penetration testing penetration testing
| 1
|
717,310
| 24,670,876,593
|
IssuesEvent
|
2022-10-18 13:41:59
|
WordPress/openverse-frontend
|
https://api.github.com/repos/WordPress/openverse-frontend
|
closed
|
'Get this media' button scales to match title height
|
good first issue help wanted 🟨 priority: medium 🛠 goal: fix 🕹 aspect: interface 🎨 tech: css
|
## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
The 'get this media' button scales to match the height of a work's title.
We should truncate the title to 2-3 lines and make the button align with the top rather than stretch to fill.
## Reproduction
<!-- Provide detailed steps to reproduce the bug. -->
1. Visit a single result with a long title, or modify the image title to span multiple lines
2. Observe.
## Screenshots
<!-- Add screenshots to show the problem; or delete the section entirely. -->
Screenshot: https://share.cleanshot.com/C5SZVX
|
1.0
|
'Get this media' button scales to match title height - ## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
The 'get this media' button scales to match the height of a work's title.
We should truncate the title to 2-3 lines and make the button align with the top rather than stretch to fill.
## Reproduction
<!-- Provide detailed steps to reproduce the bug. -->
1. Visit a single result with a long title, or modify the image title to span multiple lines
2. Observe.
## Screenshots
<!-- Add screenshots to show the problem; or delete the section entirely. -->
Screenshot: https://share.cleanshot.com/C5SZVX
|
non_process
|
get this media button scales to match title height description the get this media button scales to match the height of a work s title we should truncate the title to lines and make the button align with the top rather than stretch to fill reproduction visit a single result with a long title or modify the image title to span multiple lines observe screenshots screenshot
| 0
|
14,773
| 18,049,232,673
|
IssuesEvent
|
2021-09-19 12:56:05
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
opened
|
Register `list` for the `to_aiida_type` dispatch
|
type/accepted feature priority/nice-to-have topic/processes
|
This will allow users to pass normal base type lists to an input port accepting a `List` node and defining the serializer `to_aiida_type`.
|
1.0
|
Register `list` for the `to_aiida_type` dispatch - This will allow users to pass normal base type lists to an input port accepting a `List` node and defining the serializer `to_aiida_type`.
|
process
|
register list for the to aiida type dispatch this will allow users to pass normal base type lists to an input port accepting a list node and defining the serializer to aiida type
| 1
|
8,555
| 11,730,869,226
|
IssuesEvent
|
2020-03-10 22:24:39
|
MobileOrg/mobileorg
|
https://api.github.com/repos/MobileOrg/mobileorg
|
opened
|
Two unit-tests never pass on the first run
|
bug development process
|
We have two unit-tests that depend on the environment and never pass on the first run ([example](https://travis-ci.org/MobileOrg/mobileorg/builds/660833144) on TravisCI):
```
MobileOrgTests.OrgfileParserTests
789 testParseOrgFileDifferentTodoWords, failed
790 /Users/travis/build/MobileOrg/mobileorg/MobileOrgTests/OrgfileParserTests.swift:96
791 ```
792 } else {
793 XCTFail()
794 }
795 ```
796
797 testParseOrgFileDifferentTodoWords, failed
798 /Users/travis/build/MobileOrg/mobileorg/MobileOrgTests/OrgfileParserTests.swift:106
799 ```
800 } else {
801 XCTFail()
802 }
803 ```
```
Let's fix them.
|
1.0
|
Two unit-tests never pass on the first run - We have two unit-tests that depend on the environment and never pass on the first run ([example](https://travis-ci.org/MobileOrg/mobileorg/builds/660833144) on TravisCI):
```
MobileOrgTests.OrgfileParserTests
789 testParseOrgFileDifferentTodoWords, failed
790 /Users/travis/build/MobileOrg/mobileorg/MobileOrgTests/OrgfileParserTests.swift:96
791 ```
792 } else {
793 XCTFail()
794 }
795 ```
796
797 testParseOrgFileDifferentTodoWords, failed
798 /Users/travis/build/MobileOrg/mobileorg/MobileOrgTests/OrgfileParserTests.swift:106
799 ```
800 } else {
801 XCTFail()
802 }
803 ```
```
Let's fix them.
|
process
|
two unit tests never pass on the first run we have two unit tests that depend on the environment and never pass on the first run on travisci mobileorgtests orgfileparsertests testparseorgfiledifferenttodowords failed users travis build mobileorg mobileorg mobileorgtests orgfileparsertests swift else xctfail testparseorgfiledifferenttodowords failed users travis build mobileorg mobileorg mobileorgtests orgfileparsertests swift else xctfail let s fix them
| 1
|
673,192
| 22,952,201,731
|
IssuesEvent
|
2022-07-19 08:26:37
|
JonasMuehlmann/datastructures.go
|
https://api.github.com/repos/JonasMuehlmann/datastructures.go
|
opened
|
Implement `SinglyLinkedlist.ValueIterator`
|
enhancement effort: low priority: medium refactor
|
Use template for `ValueIterator` and `ElementIterator`
|
1.0
|
Implement `SinglyLinkedlist.ValueIterator` - Use template for `ValueIterator` and `ElementIterator`
|
non_process
|
implement singlylinkedlist valueiterator use template for valueiterator and elementiterator
| 0
|
188,047
| 14,437,380,498
|
IssuesEvent
|
2020-12-07 11:28:01
|
celery/celery
|
https://api.github.com/repos/celery/celery
|
closed
|
celery_worker pytest fixture timeouts since celery 5.0.3
|
Component: Cache Results Backend Component: Pytest Integration Issue Type: Bug Report Priority: Critical Status: Confirmed ✔ Status: Has Testcase ✔
|
Since the 5.0.3 release of celery, the `celery_worker` pytest fixture leads to a timeout when performing ping check.
The issue can be reproduced using this simple test file:
```python
pytest_plugins = ["celery.contrib.pytest"]
def test_create_task(celery_app, celery_worker):
@celery_app.task
def mul(x, y):
return x * y
assert mul.delay(4, 4).get(timeout=10) == 16
```
Below is the pytest output:
```
$ pytest -sv test_celery_worker.py
============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.7.3, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- /home/anlambert/.virtualenvs/swh/bin/python3
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/anlambert/tmp/.hypothesis/examples')
rootdir: /home/anlambert/tmp
plugins: postgresql-2.5.2, asyncio-0.14.0, mock-3.3.1, cov-2.10.1, django-4.1.0, requests-mock-1.8.0, hypothesis-5.41.3, forked-1.3.0, swh.core-0.9.2.dev4+g6f9779f, flask-1.1.0, xdist-2.1.0, dash-1.17.0, swh.journal-0.5.2.dev1+g12b31a2
collected 1 item
test_celery_worker.py::test_create_task ERROR
===================================================================================================== ERRORS =====================================================================================================
_______________________________________________________________________________________ ERROR at setup of test_create_task _______________________________________________________________________________________
request = <SubRequest 'celery_worker' for <Function test_create_task>>, celery_app = <Celery celery.tests at 0x7f99b4b91d30>, celery_includes = (), celery_worker_pool = 'solo', celery_worker_parameters = {}
@pytest.fixture()
def celery_worker(request,
celery_app,
celery_includes,
celery_worker_pool,
celery_worker_parameters):
# type: (Any, Celery, Sequence[str], str, Any) -> WorkController
"""Fixture: Start worker in a thread, stop it when the test returns."""
if not NO_WORKER:
for module in celery_includes:
celery_app.loader.import_task_module(module)
with worker.start_worker(celery_app,
pool=celery_worker_pool,
> **celery_worker_parameters) as w:
../dev/celery/celery/contrib/pytest.py:196:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.7/contextlib.py:112: in __enter__
return next(self.gen)
../dev/celery/celery/contrib/testing/worker.py:82: in start_worker
assert ping.delay().get(timeout=ping_task_timeout) == 'pong'
../dev/celery/celery/result.py:230: in get
on_message=on_message,
../dev/celery/celery/backends/base.py:655: in wait_for_pending
no_ack=no_ack,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <celery.backends.cache.CacheBackend object at 0x7f99b411fb00>, task_id = '98b047a2-2027-453c-a317-eb31f44a2547', timeout = 10.0, interval = 0.5, no_ack = True, on_interval = <promise@0x7f99b4a2adf0>
def wait_for(self, task_id,
timeout=None, interval=0.5, no_ack=True, on_interval=None):
"""Wait for task and return its result.
If the task raises an exception, this exception
will be re-raised by :func:`wait_for`.
Raises:
celery.exceptions.TimeoutError:
If `timeout` is not :const:`None`, and the operation
takes longer than `timeout` seconds.
"""
self._ensure_not_eager()
time_elapsed = 0.0
while 1:
meta = self.get_task_meta(task_id)
if meta['status'] in states.READY_STATES:
return meta
if on_interval:
on_interval()
# avoid hammering the CPU checking status.
time.sleep(interval)
time_elapsed += interval
if timeout and time_elapsed >= timeout:
> raise TimeoutError('The operation timed out.')
E celery.exceptions.TimeoutError: The operation timed out.
../dev/celery/celery/backends/base.py:687: TimeoutError
============================================================================================ short test summary info =============================================================================================
ERROR test_celery_worker.py::test_create_task - celery.exceptions.TimeoutError: The operation timed out.
=============================================================================================== 1 error in 10.41s ================================================================================================
```
After a quick `git bisect` session, I managed to identify the commit that introduced the issue: https://github.com/celery/celery/commit/e2031688284484d5b5a57ba29cd9cae2d9a81e39
|
2.0
|
celery_worker pytest fixture timeouts since celery 5.0.3 - Since the 5.0.3 release of celery, the `celery_worker` pytest fixture leads to a timeout when performing ping check.
The issue can be reproduced using this simple test file:
```python
pytest_plugins = ["celery.contrib.pytest"]
def test_create_task(celery_app, celery_worker):
@celery_app.task
def mul(x, y):
return x * y
assert mul.delay(4, 4).get(timeout=10) == 16
```
Below is the pytest output:
```
$ pytest -sv test_celery_worker.py
============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.7.3, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- /home/anlambert/.virtualenvs/swh/bin/python3
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/anlambert/tmp/.hypothesis/examples')
rootdir: /home/anlambert/tmp
plugins: postgresql-2.5.2, asyncio-0.14.0, mock-3.3.1, cov-2.10.1, django-4.1.0, requests-mock-1.8.0, hypothesis-5.41.3, forked-1.3.0, swh.core-0.9.2.dev4+g6f9779f, flask-1.1.0, xdist-2.1.0, dash-1.17.0, swh.journal-0.5.2.dev1+g12b31a2
collected 1 item
test_celery_worker.py::test_create_task ERROR
===================================================================================================== ERRORS =====================================================================================================
_______________________________________________________________________________________ ERROR at setup of test_create_task _______________________________________________________________________________________
request = <SubRequest 'celery_worker' for <Function test_create_task>>, celery_app = <Celery celery.tests at 0x7f99b4b91d30>, celery_includes = (), celery_worker_pool = 'solo', celery_worker_parameters = {}
@pytest.fixture()
def celery_worker(request,
celery_app,
celery_includes,
celery_worker_pool,
celery_worker_parameters):
# type: (Any, Celery, Sequence[str], str, Any) -> WorkController
"""Fixture: Start worker in a thread, stop it when the test returns."""
if not NO_WORKER:
for module in celery_includes:
celery_app.loader.import_task_module(module)
with worker.start_worker(celery_app,
pool=celery_worker_pool,
> **celery_worker_parameters) as w:
../dev/celery/celery/contrib/pytest.py:196:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.7/contextlib.py:112: in __enter__
return next(self.gen)
../dev/celery/celery/contrib/testing/worker.py:82: in start_worker
assert ping.delay().get(timeout=ping_task_timeout) == 'pong'
../dev/celery/celery/result.py:230: in get
on_message=on_message,
../dev/celery/celery/backends/base.py:655: in wait_for_pending
no_ack=no_ack,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <celery.backends.cache.CacheBackend object at 0x7f99b411fb00>, task_id = '98b047a2-2027-453c-a317-eb31f44a2547', timeout = 10.0, interval = 0.5, no_ack = True, on_interval = <promise@0x7f99b4a2adf0>
def wait_for(self, task_id,
timeout=None, interval=0.5, no_ack=True, on_interval=None):
"""Wait for task and return its result.
If the task raises an exception, this exception
will be re-raised by :func:`wait_for`.
Raises:
celery.exceptions.TimeoutError:
If `timeout` is not :const:`None`, and the operation
takes longer than `timeout` seconds.
"""
self._ensure_not_eager()
time_elapsed = 0.0
while 1:
meta = self.get_task_meta(task_id)
if meta['status'] in states.READY_STATES:
return meta
if on_interval:
on_interval()
# avoid hammering the CPU checking status.
time.sleep(interval)
time_elapsed += interval
if timeout and time_elapsed >= timeout:
> raise TimeoutError('The operation timed out.')
E celery.exceptions.TimeoutError: The operation timed out.
../dev/celery/celery/backends/base.py:687: TimeoutError
============================================================================================ short test summary info =============================================================================================
ERROR test_celery_worker.py::test_create_task - celery.exceptions.TimeoutError: The operation timed out.
=============================================================================================== 1 error in 10.41s ================================================================================================
```
After a quick `git bisect` session, I managed to identify the commit that introduced the issue: https://github.com/celery/celery/commit/e2031688284484d5b5a57ba29cd9cae2d9a81e39
|
non_process
|
celery worker pytest fixture timeouts since celery since the release of celery the celery worker pytest fixture leads to a timeout when performing ping check the issue can be reproduced using this simple test file python pytest plugins def test create task celery app celery worker celery app task def mul x y return x y assert mul delay get timeout below is the pytest output pytest sv test celery worker py test session starts platform linux python pytest py pluggy home anlambert virtualenvs swh bin cachedir pytest cache hypothesis profile default database directorybasedexampledatabase home anlambert tmp hypothesis examples rootdir home anlambert tmp plugins postgresql asyncio mock cov django requests mock hypothesis forked swh core flask xdist dash swh journal collected item test celery worker py test create task error errors error at setup of test create task request celery app celery includes celery worker pool solo celery worker parameters pytest fixture def celery worker request celery app celery includes celery worker pool celery worker parameters type any celery sequence str any workcontroller fixture start worker in a thread stop it when the test returns if not no worker for module in celery includes celery app loader import task module module with worker start worker celery app pool celery worker pool celery worker parameters as w dev celery celery contrib pytest py usr lib contextlib py in enter return next self gen dev celery celery contrib testing worker py in start worker assert ping delay get timeout ping task timeout pong dev celery celery result py in get on message on message dev celery celery backends base py in wait for pending no ack no ack self task id timeout interval no ack true on interval def wait for self task id timeout none interval no ack true on interval none wait for task and return its result if the task raises an exception this exception will be re raised by func wait for raises celery exceptions timeouterror if timeout is not const 
none and the operation takes longer than timeout seconds self ensure not eager time elapsed while meta self get task meta task id if meta in states ready states return meta if on interval on interval avoid hammering the cpu checking status time sleep interval time elapsed interval if timeout and time elapsed timeout raise timeouterror the operation timed out e celery exceptions timeouterror the operation timed out dev celery celery backends base py timeouterror short test summary info error test celery worker py test create task celery exceptions timeouterror the operation timed out error in after a quick git bisect session i managed to identify the commit that introduced the issue
| 0
|
48,703
| 13,184,720,994
|
IssuesEvent
|
2020-08-12 19:58:22
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
cmake ROOTCINT() could warn better (Trac #77)
|
Incomplete Migration Migrated from Trac cmake defect
|
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/77, reported by troy and owned by troy_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-07-19T13:36:23",
"description": "the rootcint macro could warn if the files listed don't exist rather than waiting until make bails out.\n\n",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1184852183000000",
"component": "cmake",
"summary": "cmake ROOTCINT() could warn better",
"priority": "normal",
"keywords": "",
"time": "2007-07-11T13:25:39",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
cmake ROOTCINT() could warn better (Trac #77) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/77, reported by troy and owned by troy_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-07-19T13:36:23",
"description": "the rootcint macro could warn if the files listed don't exist rather than waiting until make bails out.\n\n",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1184852183000000",
"component": "cmake",
"summary": "cmake ROOTCINT() could warn better",
"priority": "normal",
"keywords": "",
"time": "2007-07-11T13:25:39",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
cmake rootcint could warn better trac migrated from reported by troy and owned by troy json status closed changetime description the rootcint macro could warn if the files listed don t exist rather than waiting until make bails out n n reporter troy cc resolution fixed ts component cmake summary cmake rootcint could warn better priority normal keywords time milestone owner troy type defect
| 0
|
41,514
| 12,832,342,592
|
IssuesEvent
|
2020-07-07 07:29:55
|
rvvergara/todolist-react-version
|
https://api.github.com/repos/rvvergara/todolist-react-version
|
closed
|
CVE-2019-15657 (High) detected in eslint-utils-1.3.1.tgz
|
security vulnerability
|
## CVE-2019-15657 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eslint-utils-1.3.1.tgz</b></p></summary>
<p>Utilities for ESLint plugins.</p>
<p>Library home page: <a href="https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.3.1.tgz">https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.3.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/todolist-react-version/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/todolist-react-version/node_modules/eslint-utils/package.json</p>
<p>
Dependency Hierarchy:
- eslint-5.16.0.tgz (Root Library)
- :x: **eslint-utils-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/todolist-react-version/commit/85fba0e7c02424e61ae0ebd7a786b50a67132bf3">85fba0e7c02424e61ae0ebd7a786b50a67132bf3</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In eslint-utils before 1.4.1, the getStaticValue function can execute arbitrary code.
<p>Publish Date: 2019-08-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15657>CVE-2019-15657</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15657">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15657</a></p>
<p>Release Date: 2019-08-26</p>
<p>Fix Resolution: eslint-utils - 1.4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-15657 (High) detected in eslint-utils-1.3.1.tgz - ## CVE-2019-15657 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eslint-utils-1.3.1.tgz</b></p></summary>
<p>Utilities for ESLint plugins.</p>
<p>Library home page: <a href="https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.3.1.tgz">https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.3.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/todolist-react-version/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/todolist-react-version/node_modules/eslint-utils/package.json</p>
<p>
Dependency Hierarchy:
- eslint-5.16.0.tgz (Root Library)
- :x: **eslint-utils-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/todolist-react-version/commit/85fba0e7c02424e61ae0ebd7a786b50a67132bf3">85fba0e7c02424e61ae0ebd7a786b50a67132bf3</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In eslint-utils before 1.4.1, the getStaticValue function can execute arbitrary code.
<p>Publish Date: 2019-08-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15657>CVE-2019-15657</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15657">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15657</a></p>
<p>Release Date: 2019-08-26</p>
<p>Fix Resolution: eslint-utils - 1.4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in eslint utils tgz cve high severity vulnerability vulnerable library eslint utils tgz utilities for eslint plugins library home page a href path to dependency file tmp ws scm todolist react version package json path to vulnerable library tmp ws scm todolist react version node modules eslint utils package json dependency hierarchy eslint tgz root library x eslint utils tgz vulnerable library found in head commit a href vulnerability details in eslint utils before the getstaticvalue function can execute arbitrary code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution eslint utils step up your open source security game with whitesource
| 0
|
431,141
| 30,219,958,190
|
IssuesEvent
|
2023-07-05 18:32:06
|
bcgov/nr-rfc-grib-copy
|
https://api.github.com/repos/bcgov/nr-rfc-grib-copy
|
opened
|
Listener Docs
|
documentation
|
Review existing documentation and pull together into easy to navigate docs.
* put detailed docs in ./docs folder
* provide links to docs from the root readme.md
* Add the following:
* describe how the listener works from a high level
* review / update running backend
* Describe the various jobs that are defined in scripts
* how executed / scheduled
|
1.0
|
Listener Docs - Review existing documentation and pull together into easy to navigate docs.
* put detailed docs in ./docs folder
* provide links to docs from the root readme.md
* Add the following:
* describe how the listener works from a high level
* review / update running backend
* Describe the various jobs that are defined in scripts
* how executed / scheduled
|
non_process
|
listener docs review existing documentation and pull together into easy to navigate docs put detailed docs in docs folder provide links to docs from the root readme md add the following describe how the listener works from a high level review update running backend describe the various jobs that are defined in scripts how executed scheduled
| 0
|
51,351
| 13,207,440,318
|
IssuesEvent
|
2020-08-14 23:06:35
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
Light yield calculation in HitMaker (Trac #265)
|
Incomplete Migration Migrated from Trac combo simulation defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/265">https://code.icecube.wisc.edu/projects/icecube/ticket/265</a>, reported by hwissingand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2011-05-26T16:18:42",
"_ts": "1306426722000000",
"description": "As reported by Dima\n\nhttp://lists.icecube.wisc.edu/pipermail/ice3sim/2011-May/006794.html\n\nHitMaker calculates the muon light yield for the muon energy at creation rather than for the muon energy at the detector.\n\nAccording to Gary (cc), this mistake has persisted since AMANDA days, and was first pointed out more than a decade ago. \n\nThe fix would be critical for meaningful comparisons between Monte Carlo produced with ppc and photonics. ",
"reporter": "hwissing",
"cc": "ghill@amanda.wisc.edu",
"resolution": "fixed",
"time": "2011-05-25T20:16:50",
"component": "combo simulation",
"summary": "Light yield calculation in HitMaker",
"priority": "critical",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Light yield calculation in HitMaker (Trac #265) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/265">https://code.icecube.wisc.edu/projects/icecube/ticket/265</a>, reported by hwissingand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2011-05-26T16:18:42",
"_ts": "1306426722000000",
"description": "As reported by Dima\n\nhttp://lists.icecube.wisc.edu/pipermail/ice3sim/2011-May/006794.html\n\nHitMaker calculates the muon light yield for the muon energy at creation rather than for the muon energy at the detector.\n\nAccording to Gary (cc), this mistake has persisted since AMANDA days, and was first pointed out more than a decade ago. \n\nThe fix would be critical for meaningful comparisons between Monte Carlo produced with ppc and photonics. ",
"reporter": "hwissing",
"cc": "ghill@amanda.wisc.edu",
"resolution": "fixed",
"time": "2011-05-25T20:16:50",
"component": "combo simulation",
"summary": "Light yield calculation in HitMaker",
"priority": "critical",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
light yield calculation in hitmaker trac migrated from json status closed changetime ts description as reported by dima n n calculates the muon light yield for the muon energy at creation rather than for the muon energy at the detector n naccording to gary cc this mistake has persisted since amanda days and was first pointed out more than a decade ago n nthe fix would be critical for meaningful comparisons between monte carlo produced with ppc and photonics reporter hwissing cc ghill amanda wisc edu resolution fixed time component combo simulation summary light yield calculation in hitmaker priority critical keywords milestone owner olivas type defect
| 0
|
7,787
| 10,927,585,281
|
IssuesEvent
|
2019-11-22 17:00:39
|
edwardsmarc/CASFRI
|
https://api.github.com/repos/edwardsmarc/CASFRI
|
closed
|
Data type of geometry column missing geometry type and SRID.
|
bug high post-translation process
|
QGIS will not render the geometries without the geometry type and SRID present.
I suggest adding a step to the workflow. Perhaps something like "04_geometrytype.sql" which modifies the geometry column to include the geometry type and SRID. For example:
ALTER TABLE geo_all
ALTER COLUMN geometry TYPE geometry(multipolygon, 900914);
|
1.0
|
Data type of geometry column missing geometry type and SRID. - QGIS will not render the geometries without the geometry type and SRID present.
I suggest adding a step to the workflow. Perhaps something like "04_geometrytype.sql" which modifies the geometry column to include the geometry type and SRID. For example:
ALTER TABLE geo_all
ALTER COLUMN geometry TYPE geometry(multipolygon, 900914);
|
process
|
data type of geometry column missing geometry type and srid qgis will not render the geometries without the geometry type and srid present i suggest adding a step to the workflow perhaps something like geometrytype sql which modifies the geometry column to include the geometry type and srid for example alter table geo all alter column geometry type geometry multipolygon
| 1
|
22,130
| 30,674,050,213
|
IssuesEvent
|
2023-07-26 02:37:02
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] [Bug] `group-columns` works incorrectly with `join-condition-rhs-columns`
|
.Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
When `join-condition-rhs-columns` are passed to `group-columns`, they get grouped into the source table instead of a join table.
### To Reproduce
Create a query from the sample Orders table and run this:
```js
const peopleTable = Lib.tableOrCardMetadata(5)
const columns = joinConditionRHSColumns(query, 0, peopleTable)
const [group] = ML.groupColumns(columns)
Lib.displayInfo(query, 0, group)
// Expected
{ name: "PEOPLE", displayName: "People", ...rest }
// Actual
{ name: "ORDERS", displayName: "Orders", ...rest }
```
|
1.0
|
[MLv2] [Bug] `group-columns` works incorrectly with `join-condition-rhs-columns` - When `join-condition-rhs-columns` are passed to `group-columns`, they get grouped into the source table instead of a join table.
### To Reproduce
Create a query from the sample Orders table and run this:
```js
const peopleTable = Lib.tableOrCardMetadata(5)
const columns = joinConditionRHSColumns(query, 0, peopleTable)
const [group] = ML.groupColumns(columns)
Lib.displayInfo(query, 0, group)
// Expected
{ name: "PEOPLE", displayName: "People", ...rest }
// Actual
{ name: "ORDERS", displayName: "Orders", ...rest }
```
|
process
|
group columns works incorrectly with join condition rhs columns when join condition rhs columns are passed to group columns they get grouped into the source table instead of a join table to reproduce create a query from the sample orders table and run this js const peopletable lib tableorcardmetadata const columns joinconditionrhscolumns query peopletable const ml groupcolumns columns lib displayinfo query group expected name people displayname people rest actual name orders displayname orders rest
| 1
|
214,809
| 24,117,309,285
|
IssuesEvent
|
2022-09-20 15:38:14
|
Gal-Doron/aspnet_accord-gal
|
https://api.github.com/repos/Gal-Doron/aspnet_accord-gal
|
opened
|
xunit.runner.visualstudio.2.4.1.nupkg: 2 vulnerabilities (highest severity is: 7.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xunit.runner.visualstudio.2.4.1.nupkg</b></p></summary>
<p></p>
<p>Path to dependency file: /tests/Conduit.IntegrationTests/Conduit.IntegrationTests.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/newtonsoft.json/10.0.1/newtonsoft.json.10.0.1.nupkg</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/aspnet_accord-gal/commit/0dd73f2ef81cc5067295e46daa3ff0d359896d36">0dd73f2ef81cc5067295e46daa3ff0d359896d36</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2019-0820](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0820) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | system.text.regularexpressions.4.3.0.nupkg | Transitive | N/A | ❌ |
| [WS-2022-0161](https://github.com/JamesNK/Newtonsoft.Json/commit/7e77bbe1beccceac4fc7b174b53abfefac278b66) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | newtonsoft.json.10.0.1.nupkg | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-0820</summary>
### Vulnerable Library - <b>system.text.regularexpressions.4.3.0.nupkg</b></p>
<p>Provides the System.Text.RegularExpressions.Regex class, an implementation of a regular expression e...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p>
<p>Path to dependency file: /tests/Conduit.IntegrationTests/Conduit.IntegrationTests.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p>
<p>
Dependency Hierarchy:
- xunit.runner.visualstudio.2.4.1.nupkg (Root Library)
- microsoft.net.test.sdk.16.5.0.nupkg
- microsoft.testplatform.testhost.16.5.0.nupkg
- newtonsoft.json.10.0.1.nupkg
- system.xml.xdocument.4.3.0.nupkg
- system.xml.readerwriter.4.3.0.nupkg
- :x: **system.text.regularexpressions.4.3.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/aspnet_accord-gal/commit/0dd73f2ef81cc5067295e46daa3ff0d359896d36">0dd73f2ef81cc5067295e46daa3ff0d359896d36</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A denial of service vulnerability exists when .NET Framework and .NET Core improperly process RegEx strings, aka '.NET Framework and .NET Core Denial of Service Vulnerability'. This CVE ID is unique from CVE-2019-0980, CVE-2019-0981.
Mend Note: After conducting further research, Mend has determined that CVE-2019-0820 only affects environments with versions 4.3.0 and 4.3.1 only on netcore50 environment of system.text.regularexpressions.nupkg.
<p>Publish Date: 2019-05-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0820>CVE-2019-0820</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cmhx-cq75-c4mj">https://github.com/advisories/GHSA-cmhx-cq75-c4mj</a></p>
<p>Release Date: 2019-05-16</p>
<p>Fix Resolution: System.Text.RegularExpressions - 4.3.1</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2022-0161</summary>
### Vulnerable Library - <b>newtonsoft.json.10.0.1.nupkg</b></p>
<p>Json.NET is a popular high-performance JSON framework for .NET</p>
<p>Library home page: <a href="https://api.nuget.org/packages/newtonsoft.json.10.0.1.nupkg">https://api.nuget.org/packages/newtonsoft.json.10.0.1.nupkg</a></p>
<p>Path to dependency file: /tests/Conduit.IntegrationTests/Conduit.IntegrationTests.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/newtonsoft.json/10.0.1/newtonsoft.json.10.0.1.nupkg</p>
<p>
Dependency Hierarchy:
- xunit.runner.visualstudio.2.4.1.nupkg (Root Library)
- microsoft.net.test.sdk.16.5.0.nupkg
- microsoft.testplatform.testhost.16.5.0.nupkg
- :x: **newtonsoft.json.10.0.1.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/aspnet_accord-gal/commit/0dd73f2ef81cc5067295e46daa3ff0d359896d36">0dd73f2ef81cc5067295e46daa3ff0d359896d36</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Improper Handling of Exceptional Conditions in Newtonsoft.Json.
Newtonsoft.Json prior to version 13.0.1 is vulnerable to Insecure Defaults due to improper handling of StackOverFlow exception (SOE) whenever nested expressions are being processed. Exploiting this vulnerability results in Denial Of Service (DoS), and it is exploitable when an attacker sends 5 requests that cause SOE in time frame of 5 minutes. This vulnerability affects Internet Information Services (IIS) Applications.
<p>Publish Date: 2022-06-22
<p>URL: <a href=https://github.com/JamesNK/Newtonsoft.Json/commit/7e77bbe1beccceac4fc7b174b53abfefac278b66>WS-2022-0161</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-06-22</p>
<p>Fix Resolution: Newtonsoft.Json - 13.0.1;Microsoft.Extensions.ApiDescription.Server - 6.0.0</p>
</p>
<p></p>
</details>
|
True
|
xunit.runner.visualstudio.2.4.1.nupkg: 2 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xunit.runner.visualstudio.2.4.1.nupkg</b></p></summary>
<p></p>
<p>Path to dependency file: /tests/Conduit.IntegrationTests/Conduit.IntegrationTests.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/newtonsoft.json/10.0.1/newtonsoft.json.10.0.1.nupkg</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/aspnet_accord-gal/commit/0dd73f2ef81cc5067295e46daa3ff0d359896d36">0dd73f2ef81cc5067295e46daa3ff0d359896d36</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2019-0820](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0820) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | system.text.regularexpressions.4.3.0.nupkg | Transitive | N/A | ❌ |
| [WS-2022-0161](https://github.com/JamesNK/Newtonsoft.Json/commit/7e77bbe1beccceac4fc7b174b53abfefac278b66) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | newtonsoft.json.10.0.1.nupkg | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-0820</summary>
### Vulnerable Library - <b>system.text.regularexpressions.4.3.0.nupkg</b></p>
<p>Provides the System.Text.RegularExpressions.Regex class, an implementation of a regular expression e...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p>
<p>Path to dependency file: /tests/Conduit.IntegrationTests/Conduit.IntegrationTests.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p>
<p>
Dependency Hierarchy:
- xunit.runner.visualstudio.2.4.1.nupkg (Root Library)
- microsoft.net.test.sdk.16.5.0.nupkg
- microsoft.testplatform.testhost.16.5.0.nupkg
- newtonsoft.json.10.0.1.nupkg
- system.xml.xdocument.4.3.0.nupkg
- system.xml.readerwriter.4.3.0.nupkg
- :x: **system.text.regularexpressions.4.3.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/aspnet_accord-gal/commit/0dd73f2ef81cc5067295e46daa3ff0d359896d36">0dd73f2ef81cc5067295e46daa3ff0d359896d36</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A denial of service vulnerability exists when .NET Framework and .NET Core improperly process RegEx strings, aka '.NET Framework and .NET Core Denial of Service Vulnerability'. This CVE ID is unique from CVE-2019-0980, CVE-2019-0981.
Mend Note: After conducting further research, Mend has determined that CVE-2019-0820 only affects environments with versions 4.3.0 and 4.3.1 only on netcore50 environment of system.text.regularexpressions.nupkg.
<p>Publish Date: 2019-05-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0820>CVE-2019-0820</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cmhx-cq75-c4mj">https://github.com/advisories/GHSA-cmhx-cq75-c4mj</a></p>
<p>Release Date: 2019-05-16</p>
<p>Fix Resolution: System.Text.RegularExpressions - 4.3.1</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2022-0161</summary>
### Vulnerable Library - <b>newtonsoft.json.10.0.1.nupkg</b></p>
<p>Json.NET is a popular high-performance JSON framework for .NET</p>
<p>Library home page: <a href="https://api.nuget.org/packages/newtonsoft.json.10.0.1.nupkg">https://api.nuget.org/packages/newtonsoft.json.10.0.1.nupkg</a></p>
<p>Path to dependency file: /tests/Conduit.IntegrationTests/Conduit.IntegrationTests.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/newtonsoft.json/10.0.1/newtonsoft.json.10.0.1.nupkg</p>
<p>
Dependency Hierarchy:
- xunit.runner.visualstudio.2.4.1.nupkg (Root Library)
- microsoft.net.test.sdk.16.5.0.nupkg
- microsoft.testplatform.testhost.16.5.0.nupkg
- :x: **newtonsoft.json.10.0.1.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/aspnet_accord-gal/commit/0dd73f2ef81cc5067295e46daa3ff0d359896d36">0dd73f2ef81cc5067295e46daa3ff0d359896d36</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Improper Handling of Exceptional Conditions in Newtonsoft.Json.
Newtonsoft.Json prior to version 13.0.1 is vulnerable to Insecure Defaults due to improper handling of StackOverFlow exception (SOE) whenever nested expressions are being processed. Exploiting this vulnerability results in Denial Of Service (DoS), and it is exploitable when an attacker sends 5 requests that cause SOE in time frame of 5 minutes. This vulnerability affects Internet Information Services (IIS) Applications.
<p>Publish Date: 2022-06-22
<p>URL: <a href=https://github.com/JamesNK/Newtonsoft.Json/commit/7e77bbe1beccceac4fc7b174b53abfefac278b66>WS-2022-0161</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-06-22</p>
<p>Fix Resolution: Newtonsoft.Json - 13.0.1;Microsoft.Extensions.ApiDescription.Server - 6.0.0</p>
</p>
<p></p>
</details>
|
non_process
|
xunit runner visualstudio nupkg vulnerabilities highest severity is vulnerable library xunit runner visualstudio nupkg path to dependency file tests conduit integrationtests conduit integrationtests csproj path to vulnerable library home wss scanner nuget packages newtonsoft json newtonsoft json nupkg found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high system text regularexpressions nupkg transitive n a high newtonsoft json nupkg transitive n a details cve vulnerable library system text regularexpressions nupkg provides the system text regularexpressions regex class an implementation of a regular expression e library home page a href path to dependency file tests conduit integrationtests conduit integrationtests csproj path to vulnerable library home wss scanner nuget packages system text regularexpressions system text regularexpressions nupkg dependency hierarchy xunit runner visualstudio nupkg root library microsoft net test sdk nupkg microsoft testplatform testhost nupkg newtonsoft json nupkg system xml xdocument nupkg system xml readerwriter nupkg x system text regularexpressions nupkg vulnerable library found in head commit a href found in base branch main vulnerability details a denial of service vulnerability exists when net framework and net core improperly process regex strings aka net framework and net core denial of service vulnerability this cve id is unique from cve cve mend note after conducting further research mend has determined that cve only affects environments with versions and only on environment of system text regularexpressions nupkg publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution system text regularexpressions ws vulnerable library newtonsoft json nupkg json net is a popular high performance json framework for net library home page a href path to dependency file tests conduit integrationtests conduit integrationtests csproj path to vulnerable library home wss scanner nuget packages newtonsoft json newtonsoft json nupkg dependency hierarchy xunit runner visualstudio nupkg root library microsoft net test sdk nupkg microsoft testplatform testhost nupkg x newtonsoft json nupkg vulnerable library found in head commit a href found in base branch main vulnerability details improper handling of exceptional conditions in newtonsoft json newtonsoft json prior to version is vulnerable to insecure defaults due to improper handling of stackoverflow exception soe whenever nested expressions are being processed exploiting this vulnerability results in denial of service dos and it is exploitable when an attacker sends requests that cause soe in time frame of minutes this vulnerability affects internet information services iis applications publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution newtonsoft json microsoft extensions apidescription server
| 0
|
21,785
| 30,295,023,941
|
IssuesEvent
|
2023-07-09 18:51:56
|
The-Data-Alchemists-Manipal/MindWave
|
https://api.github.com/repos/The-Data-Alchemists-Manipal/MindWave
|
closed
|
Image-processing-Matalab
|
gssoc23 level2 image-processing
|
### Is your feature request related to a problem? Please describe.
Image sampling and quantization; spatial and frequency domain image enhancement techniques; signal processing theories used for digital image processing, such as one- and two-dimensional convolution, and two-dimensional Fourier transformation;
### Describe the solution you'd like
morphological image processing; color models and basic color image processing.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
1.0
|
Image-processing-Matalab - ### Is your feature request related to a problem? Please describe.
Image sampling and quantization; spatial and frequency domain image enhancement techniques; signal processing theories used for digital image processing, such as one- and two-dimensional convolution, and two-dimensional Fourier transformation;
### Describe the solution you'd like
morphological image processing; color models and basic color image processing.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
process
|
image processing matalab is your feature request related to a problem please describe image sampling and quantization spatial and frequency domain image enhancement techniques signal processing theories used for digital image processing such as one and two dimensional convolution and two dimensional fourier transformation describe the solution you d like morphological image processing color models and basic color image processing describe alternatives you ve considered no response additional context no response code of conduct i agree to follow this project s code of conduct
| 1
|
316,070
| 27,134,136,242
|
IssuesEvent
|
2023-02-16 11:59:16
|
wazuh/wazuh
|
https://api.github.com/repos/wazuh/wazuh
|
closed
|
Release 4.4.0 - RC1 - E2E UX tests - Centralized configuration - Agent groups
|
module/configuration team/cicd type/test/manual release test/4.4.0
|
The following issue aims to run the specified test for the current release candidate, report the results, and open new issues for any encountered errors.
## Test information
| | |
|-------------------------|--------------------------------------------|
| **Test name** | Centralized configuration - Agent groups |
| **Category** | Configuration |
| **Deployment option** |[Installation assistant](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/installation-assistant.html)|
| **Main release issue** | https://github.com/wazuh/wazuh/issues/16132 |
| **Main E2E UX test issue** | https://github.com/wazuh/wazuh/issues/16135 |
| **Release candidate #** | RC1 |
## Environment
| | | |
|-|-|-|
| **Component** | **OS** | **Installation** |
| Wazuh dashboard | Amazon Linux 2 | [Installation assistant](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/installation-assistant.html)|
| Wazuh indexer | Amazon Linux 2 | [Installation assistant](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/installation-assistant.html)|
| Wazuh server | Amazon Linux 2 | [Installation assistant](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/installation-assistant.html)|
| Wazuh agent | Windows | Wazuh WUI one-liner deploy IP GROUP (created beforehand) |
## Test description
Test the functionality of agent groups for centralized configuration:
- [x] Try different groups for different OS
- [x] Try tags in the same group, to apply different blocks of configuration to different OS or agents.
- [x] Try creating groups with different files of diferent types and sizes (try to reach the limits)
- [x] Try creating multigroups and add agents to them, check configuration applies in order
- [x] Check if propagation times are acceptable
## Test report procedure
All test results must have one of the following statuses:
| | |
|---------------------------------|--------------------------------------------|
| :green_circle: | All checks passed. |
| :red_circle: | There is at least one failed result. |
| :yellow_circle: | There is at least one expected failure or skipped test and no failures. |
## Conclusions
All tests have been executed and the results can be found in the issue updates.
| **Status** | **Test** | **Failure type** | **Notes** |
|----------------|-------------|---------------------|----------------|
| :green_circle: | Installing the Wazuh manager, dashboard, and indexer | - | - |
| :green_circle: | Test 1: Deploy an agent on version 4.4.0 and set it to a group upon deployment | - | - |
| :yellow_circle: | Test 2: Modifying configuration, adding agent to new groups. | Known issue | https://github.com/wazuh/wazuh-kibana-app/issues/5133 |
## Auditors validation
The definition of done for this one is the validation of the conclusions and the test results from all auditors.
All checks from below must be accepted in order to close this issue.
- [x] @wazuh/binary-beasts
- [x] @davidjiglesias
|
2.0
|
Release 4.4.0 - RC1 - E2E UX tests - Centralized configuration - Agent groups - The following issue aims to run the specified test for the current release candidate, report the results, and open new issues for any encountered errors.
## Test information
| | |
|-------------------------|--------------------------------------------|
| **Test name** | Centralized configuration - Agent groups |
| **Category** | Configuration |
| **Deployment option** |[Installation assistant](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/installation-assistant.html)|
| **Main release issue** | https://github.com/wazuh/wazuh/issues/16132 |
| **Main E2E UX test issue** | https://github.com/wazuh/wazuh/issues/16135 |
| **Release candidate #** | RC1 |
## Environment
| | | |
|-|-|-|
| **Component** | **OS** | **Installation** |
| Wazuh dashboard | Amazon Linux 2 | [Installation assistant](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/installation-assistant.html)|
| Wazuh indexer | Amazon Linux 2 | [Installation assistant](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/installation-assistant.html)|
| Wazuh server | Amazon Linux 2 | [Installation assistant](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/installation-assistant.html)|
| Wazuh agent | Windows | Wazuh WUI one-liner deploy IP GROUP (created beforehand) |
## Test description
Test the functionality of agent groups for centralized configuration:
- [x] Try different groups for different OS
- [x] Try tags in the same group, to apply different blocks of configuration to different OS or agents.
- [x] Try creating groups with different files of diferent types and sizes (try to reach the limits)
- [x] Try creating multigroups and add agents to them, check configuration applies in order
- [x] Check if propagation times are acceptable
## Test report procedure
All test results must have one of the following statuses:
| | |
|---------------------------------|--------------------------------------------|
| :green_circle: | All checks passed. |
| :red_circle: | There is at least one failed result. |
| :yellow_circle: | There is at least one expected failure or skipped test and no failures. |
## Conclusions
All tests have been executed and the results can be found in the issue updates.
| **Status** | **Test** | **Failure type** | **Notes** |
|----------------|-------------|---------------------|----------------|
| :green_circle: | Installing the Wazuh manager, dashboard, and indexer | - | - |
| :green_circle: | Test 1: Deploy an agent on version 4.4.0 and set it to a group upon deployment | - | - |
| :yellow_circle: | Test 2: Modifying configuration, adding agent to new groups. | Known issue | https://github.com/wazuh/wazuh-kibana-app/issues/5133 |
## Auditors validation
The definition of done for this one is the validation of the conclusions and the test results from all auditors.
All checks from below must be accepted in order to close this issue.
- [x] @wazuh/binary-beasts
- [x] @davidjiglesias
|
non_process
|
release ux tests centralized configuration agent groups the following issue aims to run the specified test for the current release candidate report the results and open new issues for any encountered errors test information test name centralized configuration agent groups category configuration deployment option main release issue main ux test issue release candidate environment component os installation wazuh dashboard amazon linux wazuh indexer amazon linux wazuh server amazon linux wazuh agent windows wazuh wui one liner deploy ip group created beforehand test description test the functionality of agent groups for centralized configuration try different groups for different os try tags in the same group to apply different blocks of configuration to different os or agents try creating groups with different files of diferent types and sizes try to reach the limits try creating multigroups and add agents to them check configuration applies in order check if propagation times are acceptable test report procedure all test results must have one of the following statuses green circle all checks passed red circle there is at least one failed result yellow circle there is at least one expected failure or skipped test and no failures conclusions all tests have been executed and the results can be found in the issue updates status test failure type notes green circle installing the wazuh manager dashboard and indexer green circle test deploy an agent on version and set it to a group upon deployment yellow circle test modifying configuration adding agent to new groups known issue auditors validation the definition of done for this one is the validation of the conclusions and the test results from all auditors all checks from below must be accepted in order to close this issue wazuh binary beasts davidjiglesias
| 0
|
761,608
| 26,688,199,977
|
IssuesEvent
|
2023-01-27 00:36:30
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
opened
|
[DocDB] Load balancer should allow for moves between zones if minimum replicas is not violated
|
area/docdb priority/medium
|
### Description
When performing normal moves (i.e. not because of wrong placement or under-replication), the load balancer currently checks that the two servers are in the same placement block. This is not strictly necessary. For example, we could have two read replica nodes in different zones with `minimum_num_replicas = 0` and RF=1, but still not be able to move data from one to the other (this could happen, for example, when removing one of the nodes from the blacklist).
Another case is if we had an RF=3 cluster with 3 nodes in z0, and we want to add 3 nodes in z1. If minimum_num_replicas is 0 for the 3 new nodes, nothing will move over. If it is 1, only 1 replica will move over and we will have a 2-to-1 imbalance between zone 0 and zone 1.
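The relaxed rule described above can be sketched as follows. This is hypothetical Python for illustration only, not YugabyteDB's actual C++ load balancer; `move_allowed`, `replicas`, and `minimums` are illustrative names. The idea: a cross-zone move is permitted whenever the source zone keeps at least its configured `minimum_num_replicas` and the move strictly reduces the imbalance.

```python
def move_allowed(replicas, minimums, src, dst):
    """Allow a replica move from zone `src` to zone `dst` if the source
    zone stays at or above its minimum and the imbalance shrinks."""
    if replicas[src] - 1 < minimums[src]:
        return False  # would violate the source zone's minimum_num_replicas
    before = abs(replicas[src] - replicas[dst])
    after = abs((replicas[src] - 1) - (replicas[dst] + 1))
    return after < before  # only move in the direction that evens the load

# The blacklist example above: z1 has minimum_num_replicas = 0, yet moving
# data from z0 into it should still be allowed under the relaxed rule.
assert move_allowed({"z0": 3, "z1": 0}, {"z0": 0, "z1": 0}, "z0", "z1")
```

Under this sketch, the same-placement-block check is dropped entirely; only the minimum-replica constraint and the imbalance direction gate the move.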
|
1.0
|
[DocDB] Load balancer should allow for moves between zones if minimum replicas is not violated - ### Description
When performing normal moves (i.e. not because of wrong placement or under-replication), the load balancer currently checks that the two servers are in the same placement block. This is not strictly necessary. For example, we could have two read replica nodes in different zones with `minimum_num_replicas = 0` and RF=1, but still not be able to move data from one to the other (this could happen, for example, when removing one of the nodes from the blacklist).
Another case is if we had an RF=3 cluster with 3 nodes in z0, and we want to add 3 nodes in z1. If minimum_num_replicas is 0 for the 3 new nodes, nothing will move over. If it is 1, only 1 replica will move over and we will have a 2-to-1 imbalance between zone 0 and zone 1.
|
non_process
|
load balancer should allow for moves between zones if minimum replicas is not violated description when performing normal moves i e not because of wrong placement or under replication the load balancer currently checks that the two servers are in the same placement block this is not strictly necessary for example we could have two read replica nodes in different zones with minimum num replicas and rf but still not be able to move data from one to the other this could happen for example when removing one of the nodes from the blacklist another case is if we had an rf cluster with nodes in and we want to add nodes in if minimum num replicas is for the new nodes nothing will move over if it is only replica will move over and we will have a to imbalance between zone and zone
| 0
|
34,772
| 7,460,308,129
|
IssuesEvent
|
2018-03-30 19:05:23
|
kerdokullamae/test_koik_issued
|
https://api.github.com/repos/kerdokullamae/test_koik_issued
|
closed
|
Improving the importer's error log
|
C: AIS P: highest R: fixed T: defect
|
**Reported by sven syld on 10 Jun 2016 10:51 UTC**
'''Object'''
Importing data from AIS1 and other systems
'''Description'''
Data import is a complex activity that depends, among other things, on data already imported into the database. Rewrite the data import logging so that archivists can find the likely data error in the error log and thereby improve import quality.
An error log entry should contain at least:
- the source of the error (e.g. a file or a database/table)
- the line in the file or the record in the table
- a more detailed description of the error, e.g. "Could not find fonds ABC.1 under which to add the item"
|
1.0
|
Improving the importer's error log - **Reported by sven syld on 10 Jun 2016 10:51 UTC**
'''Object'''
Importing data from AIS1 and other systems
'''Description'''
Data import is a complex activity that depends, among other things, on data already imported into the database. Rewrite the data import logging so that archivists can find the likely data error in the error log and thereby improve import quality.
An error log entry should contain at least:
- the source of the error (e.g. a file or a database/table)
- the line in the file or the record in the table
- a more detailed description of the error, e.g. "Could not find fonds ABC.1 under which to add the item"
|
non_process
|
importeri vealogi täiendamine reported by sven syld on jun utc objekt andmete importimine st ja teistest süsteemidest kirjeldus andmeimport on keerukas tegevus mis sõltub mh juba andmebaasi imporditud andmetest kirjutada andmete impordi logimine ümber sedasi et arhivaaridel oleks võimalik vealogist leida üles võimalik andmeviga ning seeläbi parandada impordi kvaliteeti vealogi kirje peaks sisaldama vähemalt vea allikas nt fail või andmebaas tabel rida failis või kirje tabelis vea täpsem kirjeldus nt ei leidnud fondi abc mille alla säilik lisada
| 0
|
6,129
| 8,996,815,632
|
IssuesEvent
|
2019-02-02 05:02:55
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
CMake Error on MacOS
|
priority: p2 type: process
|
When following installation instructions:
```
brew install curl cmake libressl c-ares doxygen graphviz
git submodule update --init
export OPENSSL_ROOT_DIR=/usr/local/opt/libressl
cmake -H. -Bbuild-output
```
I get:
```
CMake Error at build-output/CMakeDoxygenDefaults.cmake:471 (set):
Syntax error in cmake code at
/Users/XXX/Code/google-cloud-cpp/build-output/CMakeDoxygenDefaults.cmake:471
when parsing string
\makeindex
Invalid character escape '\m'.
Call Stack (most recent call first):
/usr/local/Cellar/cmake/3.13.2/share/cmake/Modules/FindDoxygen.cmake:958 (include)
cmake/GoogleCloudCppCommon.cmake:94 (doxygen_add_docs)
google/cloud/CMakeLists.txt:53 (include)
```
|
1.0
|
CMake Error on MacOS - When following installation instructions:
```
brew install curl cmake libressl c-ares doxygen graphviz
git submodule update --init
export OPENSSL_ROOT_DIR=/usr/local/opt/libressl
cmake -H. -Bbuild-output
```
I get:
```
CMake Error at build-output/CMakeDoxygenDefaults.cmake:471 (set):
Syntax error in cmake code at
/Users/XXX/Code/google-cloud-cpp/build-output/CMakeDoxygenDefaults.cmake:471
when parsing string
\makeindex
Invalid character escape '\m'.
Call Stack (most recent call first):
/usr/local/Cellar/cmake/3.13.2/share/cmake/Modules/FindDoxygen.cmake:958 (include)
cmake/GoogleCloudCppCommon.cmake:94 (doxygen_add_docs)
google/cloud/CMakeLists.txt:53 (include)
```
|
process
|
cmake error on macos when following installation instructions brew install curl cmake libressl c ares doxygen graphviz git submodule update init export openssl root dir usr local opt libressl cmake h bbuild output i get cmake error at build output cmakedoxygendefaults cmake set syntax error in cmake code at users xxx code google cloud cpp build output cmakedoxygendefaults cmake when parsing string makeindex invalid character escape m call stack most recent call first usr local cellar cmake share cmake modules finddoxygen cmake include cmake googlecloudcppcommon cmake doxygen add docs google cloud cmakelists txt include
| 1
|
13,309
| 15,781,683,851
|
IssuesEvent
|
2021-04-01 11:44:50
|
wekan/wekan
|
https://api.github.com/repos/wekan/wekan
|
closed
|
Stable tag for latest release
|
Meta:Release-process
|
For automatic updates of containerized deployments used in production, it would be very helpful to have a "stable" tag referencing the latest release of Wekan in the registries docker.io and quay.io. At the moment there is only latest which is bleeding edge and updates too often for a production deployment. Periodically checking the registry places an (avoidable) burden on operators wishing to track the latest stable release.
|
1.0
|
Stable tag for latest release - For automatic updates of containerized deployments used in production, it would be very helpful to have a "stable" tag referencing the latest release of Wekan in the registries docker.io and quay.io. At the moment there is only latest which is bleeding edge and updates too often for a production deployment. Periodically checking the registry places an (avoidable) burden on operators wishing to track the latest stable release.
|
process
|
stable tag for latest release for automatic updates of containerized deployments used in production it would be very helpful to have a stable tag referencing the latest release of wekan in the registries docker io and quay io at the moment there is only latest which is bleeding edge and updates too often for a production deployment periodically checking the registry places an avoidable burden on operators wishing to track the latest stable release
| 1
|
4,381
| 7,262,732,927
|
IssuesEvent
|
2018-02-19 07:57:22
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
opened
|
Wrong processing of 'for...in' expression
|
AREA: server SYSTEM: resource processing TYPE: bug
|
Original script:
```js
for (utag._i in utag.loader.GV(utag_cfg_ovrd))
utag.cfg[utag._i] = utag_cfg_ovrd[utag._i]
```
Processed script:
```js
for ( var __set$temp in utag.loader.GV(utag_cfg_ovrd))
__set$(utag.cfg,utag._i,__get$(utag_cfg_ovrd,utag._i))tag.cfg[utag._i] = utag_cfg_ovrd[utag._i] utag._i=__set$temp;tag.cfg[utag._i] = utag_cfg_ovrd[utag._i]
```
|
1.0
|
Wrong processing of 'for...in' expression - Original script:
```js
for (utag._i in utag.loader.GV(utag_cfg_ovrd))
utag.cfg[utag._i] = utag_cfg_ovrd[utag._i]
```
Processed script:
```js
for ( var __set$temp in utag.loader.GV(utag_cfg_ovrd))
__set$(utag.cfg,utag._i,__get$(utag_cfg_ovrd,utag._i))tag.cfg[utag._i] = utag_cfg_ovrd[utag._i] utag._i=__set$temp;tag.cfg[utag._i] = utag_cfg_ovrd[utag._i]
```
|
process
|
wrong processing of for in expression original script js for utag i in utag loader gv utag cfg ovrd utag cfg utag cfg ovrd processed script js for var set temp in utag loader gv utag cfg ovrd set utag cfg utag i get utag cfg ovrd utag i tag cfg utag cfg ovrd utag i set temp tag cfg utag cfg ovrd
| 1
|
386,932
| 11,453,306,370
|
IssuesEvent
|
2020-02-06 15:11:11
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
[0.9.0 staging-1330] Orders use craft time from parent recipe.
|
Priority: Medium
|

It should be: 6 min; 1,5 min; 1 min; 1 min; 2 min
|
1.0
|
[0.9.0 staging-1330] Orders use craft time from parent recipe. - 
It should be: 6 min; 1,5 min; 1 min; 1 min; 2 min
|
non_process
|
orders use craft time from parent recipe it should be min min min min min
| 0
|
19,447
| 25,725,792,109
|
IssuesEvent
|
2022-12-07 16:34:03
|
googleapis/gax-dotnet
|
https://api.github.com/repos/googleapis/gax-dotnet
|
closed
|
Can't debug into GAX NuGet packages
|
type: process
|
Our .nupkg files contain pdbs, but for some reason I'm not able to step into them in the debugger.
It's possible this is just with local package sources, but we should investigate.
|
1.0
|
Can't debug into GAX NuGet packages - Our .nupkg files contain pdbs, but for some reason I'm not able to step into them in the debugger.
It's possible this is just with local package sources, but we should investigate.
|
process
|
can t debug into gax nuget packages our nupkg files contain pdbs but for some reason i m not able to step into them in the debugger it s possible this is just with local package sources but we should investigate
| 1
|
12,976
| 15,353,597,102
|
IssuesEvent
|
2021-03-01 08:47:04
|
topcoder-platform/community-app
|
https://api.github.com/repos/topcoder-platform/community-app
|
opened
|
Alignment varies if the Recommended skills are more in Challenge details page.
|
P4 ShapeupProcess challenge- recommender-tool
|
Description:
Alignment varies if the Recommended skills are more in Challenge details page.

|
1.0
|
Alignment varies if the Recommended skills are more in Challenge details page. - Description:
Alignment varies if the Recommended skills are more in Challenge details page.

|
process
|
alignment varies if the recommended skills are more in challenge details page description alignment varies if the recommended skills are more in challenge details page
| 1
|
22,967
| 11,811,487,075
|
IssuesEvent
|
2020-03-19 18:17:49
|
sourcegraph/sourcegraph
|
https://api.github.com/repos/sourcegraph/sourcegraph
|
opened
|
Implement concat search pattern semantics for literal search
|
customer estimate/1d roadmap search team/core-services
|
This issue tracks implementing the branching logic for concat on literal search patterns. For example, if the pattern is `foo bar baz`, it implies `foo<single space>bar<single space>baz`. Our current implementation treats whitespace sensitively.
In time we may not necessarily want to _exactly_ match whitespace (e.g., to the exact number of spaces), but rather just match the presence of whitespace. There is some existing feedback/request that we be more flexible with handling whitespace in search patterns, and the concat semantics is the place to introduce such a change.
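The flexible-whitespace semantics suggested above can be sketched in a few lines. This is hypothetical Python for illustration, not Sourcegraph's actual Go implementation; `literal_to_regex` is an illustrative name. Each literal token is escaped, and the whitespace between tokens is compiled to `\s+` so that the presence of whitespace, rather than its exact width, is matched.

```python
import re

def literal_to_regex(pattern):
    """Compile a literal pattern so runs of whitespace between tokens
    match any run of whitespace in the searched text."""
    tokens = pattern.split()
    return re.compile(r"\s+".join(re.escape(t) for t in tokens))

rx = literal_to_regex("foo bar baz")
assert rx.search("foo bar baz")        # single spaces match
assert rx.search("foo  bar\tbaz")      # multiple spaces / tabs also match
assert not rx.search("foobarbaz")      # whitespace presence is still required
```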
|
1.0
|
Implement concat search pattern semantics for literal search - This issue tracks implementing the branching logic for concat on literal search patterns. For example, if the pattern is `foo bar baz`, it implies `foo<single space>bar<single space>baz`. Our current implementation treats whitespace sensitively.
In time we may not necessarily want to _exactly_ match whitespace (e.g., to the exact number of spaces), but rather just match the presence of whitespace. There is some existing feedback/request that we be more flexible with handling whitespace in search patterns, and the concat semantics is the place to introduce such a change.
|
non_process
|
implement concat search pattern semantics for literal search this issue tracks implementing the branching the logic for concat on literal search patterns for example if the pattern is foo bar baz it implies foo bar baz our current implementation treats whitespace sensitively in time we may not necessarily want to exactly match whitespace e g to the exact number of spaces but rather just match the presence of whitespace there is some existing feedback request that we be more flexible with handling whitespace in search patterns and the concat semantics is the place to introduce such a change
| 0
|
15,420
| 19,606,005,570
|
IssuesEvent
|
2022-01-06 09:33:37
|
plazi/community
|
https://api.github.com/repos/plazi/community
|
opened
|
to be processed https://doi.org/10.1007/s12526-021-01208-6
|
process request
|
another article to be processed from the list of CAS new species press release
https://doi.org/10.1007/s12526-021-01208-6
[marineBiodiversity.51.58.pdf](https://github.com/plazi/community/files/7820826/marineBiodiversity.51.58.pdf)
|
1.0
|
to be processed https://doi.org/10.1007/s12526-021-01208-6 - another article to be processed from the list of CAS new species press release
https://doi.org/10.1007/s12526-021-01208-6
[marineBiodiversity.51.58.pdf](https://github.com/plazi/community/files/7820826/marineBiodiversity.51.58.pdf)
|
process
|
to be processed another article to be processed from the list of cas new species press release
| 1
|
6,225
| 9,161,977,441
|
IssuesEvent
|
2019-03-01 12:02:44
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
opened
|
ntr & def edits - linear elements
|
PomBase assembly and biogenesis cell cycle and DNA processes cellular component community curation protein complex start_and_end textual definition
|
We've recently learned of some ways to improve the representation of linear elements and their assembly in GO.
1. CC - edit one definition and add one new term
a. linear element ; GO:0030998
current def: "A proteinaceous scaffold associated with S. pombe chromosomes during meiotic prophase. Linear elements have a structure related to but not equivalent to the synaptonemal complex."
suggested new def: "A proteinaceous scaffold associated with fission yeast chromosomes during meiotic prophase. Linear elements consist of a protein complex, LinE, with four main structural components (Rec10, Rec25, Rec27, and Mug20 in S. pombe) associated with chromatin. The resulting structure is related to but not equivalent to the synaptonemal complex."
or delete more of the old def; I don't mind
b. new term
id: GO:new1
name: LinE complex
def: "A protein complex that associates with chromatin to form linear elements in fission yeast. In S. pombe, the LinE complex contains four main structural components (Rec10, Rec25, Rec27, and Mug20) and other associated proteins."
GO:0032991 ! protein-containing complex
is_a: GO:0044428 ! nuclear part
As I understand it, this complex first forms in the nucleus and then binds to chromatin to form linear elements.
2. BP - edit one existing def and add two new terms
a. linear element assembly ; GO:0030999
current def: "The cell cycle process in which a proteinaceous scaffold, related to the synaptonemal complex, is assembled in association with S. pombe chromosomes during meiotic prophase."
suggested new def: "The cell cycle process in which linear elements are assembled in association with fission yeast chromosomes during meiotic prophase. Linear element assembly begins with LinE complex formation and ends when LinE complexes are associated with chromatin in structures visible as nuclear foci. A linear element is a proteinaceous scaffold related to the synaptonemal complex."
b. new terms
id: GO:new2
name: LinE complex assembly
def: "The aggregation, arrangement and bonding together of a set of components during meiotic prophase to form a LinE complex, the protein complex that associates with chromatin to form linear elements in fission yeast. In S. pombe, the LinE complex contains four main structural components (Rec10, Rec25, Rec27, and Mug20) and other associated proteins."
GO:0034622 ! cellular protein-containing complex assembly
relationship: part_of GO:0030999 ! linear element assembly
id: GO:new3
name: choose one of "linear element maturation", "LinE focus formation", "LinE chromosome loading"
def: "The close association of LinE complexes with chromatin during meiotic prophase to form mature linear elements."
synonyms: any of the above not used as the term name
is_a: ??? maybe GO:0070192 ! chromosome organization involved in meiotic cell cycle; otherwise I'm not sure of anything more specific than GO:0016043 ! cellular component organization
relationship: part_of GO:0030999 ! linear element assembly
The two parts of linear element assembly can be separated phenotypically - there are mutants in which LinE complexes form apparently normally but don't associate with chromatin.
This is all based on personal communications to @Antonialock from Cristina Martín Castellanos, who has done an awesome job of community curation for PMID:30640914. I think you can use that paper as a definition reference for all of the edited defs and new terms, or we can ask Cristina if she can recommend other papers.
(enough labels on this one?)
|
1.0
|
ntr & def edits - linear elements - We've recently learned of some ways to improve the representation of linear elements and their assembly in GO.
1. CC - edit one definition and add one new term
a. linear element ; GO:0030998
current def: "A proteinaceous scaffold associated with S. pombe chromosomes during meiotic prophase. Linear elements have a structure related to but not equivalent to the synaptonemal complex."
suggested new def: "A proteinaceous scaffold associated with fission yeast chromosomes during meiotic prophase. Linear elements consist of a protein complex, LinE, with four main structural components (Rec10, Rec25, Rec27, and Mug20 in S. pombe) associated with chromatin. The resulting structure is related to but not equivalent to the synaptonemal complex."
or delete more of the old def; I don't mind
b. new term
id: GO:new1
name: LinE complex
def: "A protein complex that associates with chromatin to form linear elements in fission yeast. In S. pombe, the LinE complex contains four main structural components (Rec10, Rec25, Rec27, and Mug20) and other associated proteins."
GO:0032991 ! protein-containing complex
is_a: GO:0044428 ! nuclear part
As I understand it, this complex first forms in the nucleus and then binds to chromatin to form linear elements.
2. BP - edit one existing def and add two new terms
a. linear element assembly ; GO:0030999
current def: "The cell cycle process in which a proteinaceous scaffold, related to the synaptonemal complex, is assembled in association with S. pombe chromosomes during meiotic prophase."
suggested new def: "The cell cycle process in which linear elements are assembled in association with fission yeast chromosomes during meiotic prophase. Linear element assembly begins with LinE complex formation and ends when LinE complexes are associated with chromatin in structures visible as nuclear foci. A linear element is a proteinaceous scaffold related to the synaptonemal complex."
b. new terms
id: GO:new2
name: LinE complex assembly
def: "The aggregation, arrangement and bonding together of a set of components during meiotic prophase to form a LinE complex, the protein complex that associates with chromatin to form linear elements in fission yeast. In S. pombe, the LinE complex contains four main structural components (Rec10, Rec25, Rec27, and Mug20) and other associated proteins."
GO:0034622 ! cellular protein-containing complex assembly
relationship: part_of GO:0030999 ! linear element assembly
id: GO:new3
name: choose one of "linear element maturation", "LinE focus formation", "LinE chromosome loading"
def: "The close association of LinE complexes with chromatin during meiotic prophase to form mature linear elements."
synonyms: any of the above not used as the term name
is_a: ??? maybe GO:0070192 ! chromosome organization involved in meiotic cell cycle; otherwise I'm not sure of anything more specific than GO:0016043 ! cellular component organization
relationship: part_of GO:0030999 ! linear element assembly
The two parts of linear element assembly can be separated phenotypically - there are mutants in which LinE complexes form apparently normally but don't associate with chromatin.
This is all based on personal communications to @Antonialock from Cristina Martín Castellanos, who has done an awesome job of community curation for PMID:30640914. I think you can use that paper as a definition reference for all of the edited defs and new terms, or we can ask Cristina if she can recommend other papers.
(enough labels on this one?)
|
process
|
ntr def edits linear elements we ve recently learned of some ways to improve the representation of linear elements and their assembly in go cc edit one definition and add one new term a linear element go current def a proteinaceous scaffold associated with s pombe chromosomes during meiotic prophase linear elements have a structure related to but not equivalent to the synaptonemal complex suggested new def a proteinaceous scaffold associated with fission yeast chromosomes during meiotic prophase linear elements consist of a protein complex line with four main structural components and in s pombe associated with chromatin the resulting structure is related to but not equivalent to the synaptonemal complex or delete more of the old def i don t mind b new term id go name line complex def a protein complex that associates with chromatin to form linear elements in fission yeast in s pombe the line complex contains four main structural components and and other associated proteins go protein containing complex is a go nuclear part as i understand it this complex first forms in the nucleus and then binds to chromatin to form linear elements bp edit one existing def and add two new terms a linear element assembly go current def the cell cycle process in which a proteinaceous scaffold related to the synaptonemal complex is assembled in association with s pombe chromosomes during meiotic prophase suggested new def the cell cycle process in which linear elements are assembled in association with fission yeast chromosomes during meiotic prophase linear element assembly begins with line complex formation and ends when line complexes are associated with chromatin in structures visible as nuclear foci a linear element is a proteinaceous scaffold related to the synaptonemal complex b new terms id go name line complex assembly def the aggregation arrangement and bonding together of a set of components during meiotic prophase to form a line complex the protein complex that associates 
with chromatin to form linear elements in fission yeast in s pombe the line complex contains four main structural components and and other associated proteins go cellular protein containing complex assembly relationship part of go linear element assembly id go name choose one of linear element maturation line focus formation line chromosome loading def the close association of line complexes with chromatin during meiotic prophase to form mature linear elements synonyms any of the above not used as the term name is a maybe go chromosome organization involved in meiotic cell cycle otherwise i m not sure of anything more specific than go cellular component organization relationship part of go linear element assembly the two parts of linear element assembly can be separated phenotypically there are mutants in which line complexes form apparently normally but don t associate with chromatin this is all based on personal communications to antonialock from cristina martín castellanos who has done an awesome job of community curation for pmid i think you can use that paper as a definition reference for all of the edited defs and new terms or we can ask cristina if she can recommend other papers enough labels on this one
| 1
|
8,398
| 11,567,218,825
|
IssuesEvent
|
2020-02-20 13:57:03
|
ESMValGroup/ESMValCore
|
https://api.github.com/repos/ESMValGroup/ESMValCore
|
closed
|
running mean / filter preprocessor
|
enhancement preprocessor
|
Is there a preprocessor that can calculate running means (or a more general filter preprocessor)? I would need it to work over the time dimension but other dimensions may be helpful as well.
**Would you be able to help out?**
Probably not as I don't usually work with iris.
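The operation being requested above can be sketched in plain Python. This is for illustration only, not an actual ESMValCore/iris preprocessor; `running_mean` is a hypothetical name. It averages a centered moving window over a 1-D series, which is what a running-mean preprocessor would do along the time dimension.

```python
def running_mean(values, window):
    """Running mean over a 1-D series; the output is shorter by window - 1
    because only full windows are averaged."""
    n = len(values) - window + 1
    return [sum(values[i:i + window]) / window for i in range(n)]

print(running_mean([1, 2, 3, 4, 5], 3))  # → [2.0, 3.0, 4.0]
```

A general filter preprocessor would replace the uniform `1/window` weights with an arbitrary kernel.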
|
1.0
|
running mean / filter preprocessor - Is there a preprocessor that can calculate running means (or a more general filter preprocessor)? I would need it to work over the time dimension but other dimensions may be helpful as well.
**Would you be able to help out?**
Probably not as I don't usually work with iris.
|
process
|
running mean filter preprocessor is there a preprocessor that can calculate running means or a more general filter preprocessor i would need it to work over the time dimension but other dimensions may be helpful as well would you be able to help out probably not as i don t usually work with iris
| 1
|
5,899
| 8,717,107,770
|
IssuesEvent
|
2018-12-07 16:12:49
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
API Core PyType fails intermittently/changing errors
|
api: core flaky type: process
|
https://github.com/googleapis/google-cloud-python/pull/6769
Copying from #6769 and @rchen152
> @crwilcox It looks like what's happening is that pytype is somehow finding multiple files named path_template.py (and timeout.py and grpc_helpers.py - the error probably differs because it varies which file pytype discovers first). The other thing I noticed is that the file count in the status message ("Analyzing 53 sources with 0 dependencies") differs from the one I see (28 sources). Any idea why there might be two copies of some source files?
>
> EDIT: if there are two files with the same name in different directories, this may be google/pytype#198.
|
1.0
|
API Core PyType fails intermittently/changing errors - https://github.com/googleapis/google-cloud-python/pull/6769
Copying from #6769 and @rchen152
> @crwilcox It looks like what's happening is that pytype is somehow finding multiple files named path_template.py (and timeout.py and grpc_helpers.py - the error probably differs because it varies which file pytype discovers first). The other thing I noticed is that the file count in the status message ("Analyzing 53 sources with 0 dependencies") differs from the one I see (28 sources). Any idea why there might be two copies of some source files?
>
> EDIT: if there are two files with the same name in different directories, this may be google/pytype#198.
|
process
|
api core pytype fails intermittently changing errors copying from and crwilcox it looks like what s happening is that pytype is somehow finding multiple files named path template py and timeout py and grpc helpers py the error probably differs because it varies which file pytype discovers first the other thing i noticed is that the file count in the status message analyzing sources with dependencies differs from the one i see sources any idea why there might be two copies of some source files edit if there are two files with the same name in different directories this may be google pytype
| 1
|
6,142
| 9,013,286,163
|
IssuesEvent
|
2019-02-05 19:05:43
|
zammad/zammad
|
https://api.github.com/repos/zammad/zammad
|
opened
|
Manual importing of some mails fails
|
bug mail processing verified
|
<!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 2.8
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
### Expected behavior:
* being able to import failed messages
### Actual behavior:
* failed messages (some) can't be imported
### Steps to reproduce the behavior:
* try to manually import an unprocessable mail:
```
[xx@xx]$ rails r 'Channel::EmailParser.process_unprocessable_mails'
"ERROR: Can't process email, you will find it for bug reporting under /xxx/xxxx/zammad/tmp/unprocessable_mail/f43b0086aadcc3ff1385e8f50e43a806.eml, please create an issue at https://github.com/zammad/zammad/issues"
"ERROR: #<NoMethodError: undefined method `new' for Mail::Encodings::UnixToUnix:Module>"
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:125:in `rescue in process': #<NoMethodError: undefined method `new' for Mail::Encodings::UnixToUnix:Module> (RuntimeError)
/usr/local/rvm/gems/ruby-2.4.4/gems/mail-2.6.6/lib/mail/encodings/transfer_encoding.rb:13:in `can_transport?'
/usr/local/rvm/gems/ruby-2.4.4/gems/mail-2.6.6/lib/mail/encodings/transfer_encoding.rb:36:in `get_best_compatible'
/usr/local/rvm/gems/ruby-2.4.4/gems/mail-2.6.6/lib/mail/body.rb:144:in `get_best_encoding'
/usr/local/rvm/gems/ruby-2.4.4/gems/mail-2.6.6/lib/mail/body.rb:157:in `encoded'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:589:in `get_attachments'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:575:in `block in collect_attachments'
/usr/local/rvm/rubies/ruby-2.4.4/lib/ruby/2.4.0/delegate.rb:341:in `each'
/usr/local/rvm/rubies/ruby-2.4.4/lib/ruby/2.4.0/delegate.rb:341:in `block in delegating_block'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:573:in `collect_attachments'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:529:in `message_body_hash'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:76:in `parse'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:131:in `_process'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:108:in `process'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:475:in `block in process_unprocessable_mails'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:474:in `glob'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:474:in `process_unprocessable_mails'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `perform'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `eval'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `perform'
/usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor/command.rb:27:in `run'
/usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor/invocation.rb:126:in `invoke_command'
/usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor.rb:387:in `dispatch'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/command/base.rb:63:in `perform'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/command.rb:44:in `invoke'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands.rb:16:in `<top (required)>'
bin/rails:9:in `require'
bin/rails:9:in `<main>'
from /xxx/xxxx/zammad/app/models/channel/email_parser.rb:106:in `process'
from /xxx/xxxx/zammad/app/models/channel/email_parser.rb:475:in `block in process_unprocessable_mails'
from /xxx/xxxx/zammad/app/models/channel/email_parser.rb:474:in `glob'
from /xxx/xxxx/zammad/app/models/channel/email_parser.rb:474:in `process_unprocessable_mails'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `perform'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `eval'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `perform'
from /usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor/command.rb:27:in `run'
from /usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor/invocation.rb:126:in `invoke_command'
from /usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor.rb:387:in `dispatch'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/command/base.rb:63:in `perform'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/command.rb:44:in `invoke'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands.rb:16:in `<top (required)>'
from bin/rails:9:in `require'
from bin/rails:9:in `<main>'
```
Yes I'm sure this is a bug and no feature request or a general question.
|
1.0
|
Manual importing of some mails fails - <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 2.8
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
### Expected behavior:
* being able to import failed messages
### Actual behavior:
* failed messages (some) can't be imported
### Steps to reproduce the behavior:
* try to manually import an unprocessable mail:
```
[xx@xx]$ rails r 'Channel::EmailParser.process_unprocessable_mails'
"ERROR: Can't process email, you will find it for bug reporting under /xxx/xxxx/zammad/tmp/unprocessable_mail/f43b0086aadcc3ff1385e8f50e43a806.eml, please create an issue at https://github.com/zammad/zammad/issues"
"ERROR: #<NoMethodError: undefined method `new' for Mail::Encodings::UnixToUnix:Module>"
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:125:in `rescue in process': #<NoMethodError: undefined method `new' for Mail::Encodings::UnixToUnix:Module> (RuntimeError)
/usr/local/rvm/gems/ruby-2.4.4/gems/mail-2.6.6/lib/mail/encodings/transfer_encoding.rb:13:in `can_transport?'
/usr/local/rvm/gems/ruby-2.4.4/gems/mail-2.6.6/lib/mail/encodings/transfer_encoding.rb:36:in `get_best_compatible'
/usr/local/rvm/gems/ruby-2.4.4/gems/mail-2.6.6/lib/mail/body.rb:144:in `get_best_encoding'
/usr/local/rvm/gems/ruby-2.4.4/gems/mail-2.6.6/lib/mail/body.rb:157:in `encoded'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:589:in `get_attachments'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:575:in `block in collect_attachments'
/usr/local/rvm/rubies/ruby-2.4.4/lib/ruby/2.4.0/delegate.rb:341:in `each'
/usr/local/rvm/rubies/ruby-2.4.4/lib/ruby/2.4.0/delegate.rb:341:in `block in delegating_block'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:573:in `collect_attachments'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:529:in `message_body_hash'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:76:in `parse'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:131:in `_process'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:108:in `process'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:475:in `block in process_unprocessable_mails'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:474:in `glob'
/xxx/xxxx/zammad/app/models/channel/email_parser.rb:474:in `process_unprocessable_mails'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `perform'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `eval'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `perform'
/usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor/command.rb:27:in `run'
/usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor/invocation.rb:126:in `invoke_command'
/usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor.rb:387:in `dispatch'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/command/base.rb:63:in `perform'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/command.rb:44:in `invoke'
/usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands.rb:16:in `<top (required)>'
bin/rails:9:in `require'
bin/rails:9:in `<main>'
from /xxx/xxxx/zammad/app/models/channel/email_parser.rb:106:in `process'
from /xxx/xxxx/zammad/app/models/channel/email_parser.rb:475:in `block in process_unprocessable_mails'
from /xxx/xxxx/zammad/app/models/channel/email_parser.rb:474:in `glob'
from /xxx/xxxx/zammad/app/models/channel/email_parser.rb:474:in `process_unprocessable_mails'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `perform'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `eval'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands/runner/runner_command.rb:37:in `perform'
from /usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor/command.rb:27:in `run'
from /usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor/invocation.rb:126:in `invoke_command'
from /usr/local/rvm/gems/ruby-2.4.4/gems/thor-0.20.0/lib/thor.rb:387:in `dispatch'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/command/base.rb:63:in `perform'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/command.rb:44:in `invoke'
from /usr/local/rvm/gems/ruby-2.4.4/gems/railties-5.1.5/lib/rails/commands.rb:16:in `<top (required)>'
from bin/rails:9:in `require'
from bin/rails:9:in `<main>'
```
Yes I'm sure this is a bug and no feature request or a general question.
|
process
|
manual importing of some mails fails hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version installation method source package nay operating system any database version any elasticsearch version any browser version any expected behavior being able to import failed messages actual behavior failed messages some can t be imported steps to reproduce the behavior try to manually import a unprocessible mail rails r channel emailparser process unprocessable mails error can t process email you will find it for bug reporting under xxx xxxx zammad tmp unprocessable mail eml please create an issue at error xxx xxxx zammad app models channel email parser rb in rescue in process runtimeerror usr local rvm gems ruby gems mail lib mail encodings transfer encoding rb in can transport usr local rvm gems ruby gems mail lib mail encodings transfer encoding rb in get best compatible usr local rvm gems ruby gems mail lib mail body rb in get best encoding usr local rvm 
gems ruby gems mail lib mail body rb in encoded xxx xxxx zammad app models channel email parser rb in get attachments xxx xxxx zammad app models channel email parser rb in block in collect attachments usr local rvm rubies ruby lib ruby delegate rb in each usr local rvm rubies ruby lib ruby delegate rb in block in delegating block xxx xxxx zammad app models channel email parser rb in collect attachments xxx xxxx zammad app models channel email parser rb in message body hash xxx xxxx zammad app models channel email parser rb in parse xxx xxxx zammad app models channel email parser rb in process xxx xxxx zammad app models channel email parser rb in process xxx xxxx zammad app models channel email parser rb in block in process unprocessable mails xxx xxxx zammad app models channel email parser rb in glob xxx xxxx zammad app models channel email parser rb in process unprocessable mails usr local rvm gems ruby gems railties lib rails commands runner runner command rb in perform usr local rvm gems ruby gems railties lib rails commands runner runner command rb in eval usr local rvm gems ruby gems railties lib rails commands runner runner command rb in perform usr local rvm gems ruby gems thor lib thor command rb in run usr local rvm gems ruby gems thor lib thor invocation rb in invoke command usr local rvm gems ruby gems thor lib thor rb in dispatch usr local rvm gems ruby gems railties lib rails command base rb in perform usr local rvm gems ruby gems railties lib rails command rb in invoke usr local rvm gems ruby gems railties lib rails commands rb in bin rails in require bin rails in from xxx xxxx zammad app models channel email parser rb in process from xxx xxxx zammad app models channel email parser rb in block in process unprocessable mails from xxx xxxx zammad app models channel email parser rb in glob from xxx xxxx zammad app models channel email parser rb in process unprocessable mails from usr local rvm gems ruby gems railties lib rails commands runner runner 
command rb in perform from usr local rvm gems ruby gems railties lib rails commands runner runner command rb in eval from usr local rvm gems ruby gems railties lib rails commands runner runner command rb in perform from usr local rvm gems ruby gems thor lib thor command rb in run from usr local rvm gems ruby gems thor lib thor invocation rb in invoke command from usr local rvm gems ruby gems thor lib thor rb in dispatch from usr local rvm gems ruby gems railties lib rails command base rb in perform from usr local rvm gems ruby gems railties lib rails command rb in invoke from usr local rvm gems ruby gems railties lib rails commands rb in from bin rails in require from bin rails in yes i m sure this is a bug and no feature request or a general question
| 1
|
20,986
| 27,852,600,993
|
IssuesEvent
|
2023-03-20 19:57:08
|
dDevTech/tapas-top-frontend
|
https://api.github.com/repos/dDevTech/tapas-top-frontend
|
closed
|
Registration page security 22/03/2023
|
pending in process require testing complex
|
**This task can be carried out once the account-info page creation and age verification tasks are complete**
-Prevent the registration data-entry pages from being accessed directly via their URL.
For the essential-information registration page (email, user, password), age must have been verified beforehand.
For the additional-information registration page (first name, last name, photo), age must have been verified and the essential-information form completed.
This can be done with Redux by keeping a boolean variable holding the state of the registration process and the age verification.
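The boolean-flag approach described above could be sketched roughly as follows. This is a minimal illustration with a hypothetical state shape and action names; a real app would wire the reducer into the Redux store and check the guards in its router.

```typescript
// Registration-flow guard sketch (hypothetical names, plain reducer in the
// Redux style: each action returns a new immutable state object).
interface RegistrationState {
  ageVerified: boolean;           // age-verification step completed
  essentialInfoCompleted: boolean; // email/user/password form completed
}

type RegistrationAction =
  | { type: "AGE_VERIFIED" }
  | { type: "ESSENTIAL_INFO_COMPLETED" };

const initialState: RegistrationState = {
  ageVerified: false,
  essentialInfoCompleted: false,
};

function registrationReducer(
  state: RegistrationState = initialState,
  action: RegistrationAction
): RegistrationState {
  switch (action.type) {
    case "AGE_VERIFIED":
      return { ...state, ageVerified: true };
    case "ESSENTIAL_INFO_COMPLETED":
      return { ...state, essentialInfoCompleted: true };
    default:
      return state;
  }
}

// Route guards: the essential-info page requires prior age verification;
// the additional-info page also requires the essential form to be done.
function canAccessEssentialInfo(s: RegistrationState): boolean {
  return s.ageVerified;
}
function canAccessAdditionalInfo(s: RegistrationState): boolean {
  return s.ageVerified && s.essentialInfoCompleted;
}
```

A router would consult these guards on navigation and redirect to the start of the flow when a guard returns `false`, which prevents the pages from being reached directly via their URL.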
|
1.0
|
Registration page security 22/03/2023 - **This task can be carried out once the account-info page creation and age verification tasks are complete**
-Prevent the registration data-entry pages from being accessed directly via their URL.
For the essential-information registration page (email, user, password), age must have been verified beforehand.
For the additional-information registration page (first name, last name, photo), age must have been verified and the essential-information form completed.
This can be done with Redux by keeping a boolean variable holding the state of the registration process and the age verification.
|
process
|
registration page security this task can be carried out once the account info page creation and age verification tasks are complete prevent the registration data entry pages from being accessed directly via their url for the essential information registration page email user password age must have been verified beforehand for the additional information registration page first name last name photo age must have been verified and the essential information form completed this can be done with redux by keeping a boolean variable holding the state of the registration process and the age verification
| 1
|
109,842
| 16,892,170,920
|
IssuesEvent
|
2021-06-23 10:34:30
|
epam/TimeBase
|
https://api.github.com/repos/epam/TimeBase
|
closed
|
CVE-2021-31812 (High) detected in pdfbox-2.0.11.jar - autoclosed
|
security vulnerability
|
## CVE-2021-31812 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>pdfbox-2.0.11.jar</b></p></summary>
<p>The Apache PDFBox library is an open source Java tool for working with PDF documents.</p>
<p>Path to dependency file: TimeBase/java/installer/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.pdfbox/pdfbox/2.0.11/eb7e033d9ae41bd4f0b83681bc5dc01c2488d250/pdfbox-2.0.11.jar</p>
<p>
Dependency Hierarchy:
- izpack-compiler-5.1.3.jar (Root Library)
- tika-parsers-1.19.jar
- :x: **pdfbox-2.0.11.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/epam/TimeBase/commit/76d75f5eb2971c940ed61bb66cd24661abe01546">76d75f5eb2971c940ed61bb66cd24661abe01546</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache PDFBox, a carefully crafted PDF file can trigger an infinite loop while loading the file. This issue affects Apache PDFBox version 2.0.23 and prior 2.0.x versions.
<p>Publish Date: 2021-06-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31812>CVE-2021-31812</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31812">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31812</a></p>
<p>Release Date: 2021-06-12</p>
<p>Fix Resolution: org.apache.pdfbox:pdfbox:2.0.24, org.apache.pdfbox:pdfbox-app:2.0.24</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.pdfbox","packageName":"pdfbox","packageVersion":"2.0.11","packageFilePaths":["/java/installer/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.codehaus.izpack:izpack-compiler:5.1.3;org.apache.tika:tika-parsers:1.19;org.apache.pdfbox:pdfbox:2.0.11","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.pdfbox:pdfbox:2.0.24, org.apache.pdfbox:pdfbox-app:2.0.24"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-31812","vulnerabilityDetails":"In Apache PDFBox, a carefully crafted PDF file can trigger an infinite loop while loading the file. This issue affects Apache PDFBox version 2.0.23 and prior 2.0.x versions.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31812","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-31812 (High) detected in pdfbox-2.0.11.jar - autoclosed - ## CVE-2021-31812 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>pdfbox-2.0.11.jar</b></p></summary>
<p>The Apache PDFBox library is an open source Java tool for working with PDF documents.</p>
<p>Path to dependency file: TimeBase/java/installer/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.pdfbox/pdfbox/2.0.11/eb7e033d9ae41bd4f0b83681bc5dc01c2488d250/pdfbox-2.0.11.jar</p>
<p>
Dependency Hierarchy:
- izpack-compiler-5.1.3.jar (Root Library)
- tika-parsers-1.19.jar
- :x: **pdfbox-2.0.11.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/epam/TimeBase/commit/76d75f5eb2971c940ed61bb66cd24661abe01546">76d75f5eb2971c940ed61bb66cd24661abe01546</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache PDFBox, a carefully crafted PDF file can trigger an infinite loop while loading the file. This issue affects Apache PDFBox version 2.0.23 and prior 2.0.x versions.
<p>Publish Date: 2021-06-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31812>CVE-2021-31812</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31812">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31812</a></p>
<p>Release Date: 2021-06-12</p>
<p>Fix Resolution: org.apache.pdfbox:pdfbox:2.0.24, org.apache.pdfbox:pdfbox-app:2.0.24</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.pdfbox","packageName":"pdfbox","packageVersion":"2.0.11","packageFilePaths":["/java/installer/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.codehaus.izpack:izpack-compiler:5.1.3;org.apache.tika:tika-parsers:1.19;org.apache.pdfbox:pdfbox:2.0.11","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.pdfbox:pdfbox:2.0.24, org.apache.pdfbox:pdfbox-app:2.0.24"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-31812","vulnerabilityDetails":"In Apache PDFBox, a carefully crafted PDF file can trigger an infinite loop while loading the file. This issue affects Apache PDFBox version 2.0.23 and prior 2.0.x versions.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31812","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in pdfbox jar autoclosed cve high severity vulnerability vulnerable library pdfbox jar the apache pdfbox library is an open source java tool for working with pdf documents path to dependency file timebase java installer build gradle path to vulnerable library home wss scanner gradle caches modules files org apache pdfbox pdfbox pdfbox jar dependency hierarchy izpack compiler jar root library tika parsers jar x pdfbox jar vulnerable library found in head commit a href found in base branch main vulnerability details in apache pdfbox a carefully crafted pdf file can trigger an infinite loop while loading the file this issue affects apache pdfbox version and prior x versions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache pdfbox pdfbox org apache pdfbox pdfbox app isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org codehaus izpack izpack compiler org apache tika tika parsers org apache pdfbox pdfbox isminimumfixversionavailable true minimumfixversion org apache pdfbox pdfbox org apache pdfbox pdfbox app basebranches vulnerabilityidentifier cve vulnerabilitydetails in apache pdfbox a carefully crafted pdf file can trigger an infinite loop while loading the file this issue affects apache pdfbox version and prior x versions vulnerabilityurl
| 0
|
201,565
| 15,213,609,908
|
IssuesEvent
|
2021-02-17 12:03:08
|
pace-neutrons/Horace
|
https://api.github.com/repos/pace-neutrons/Horace
|
opened
|
test_tobyfit/test_refine_crystal fails if use_mex=0
|
Testing Tobyfit bug
|
Running the test_tobyfit subfolders, test_refine_crystal's single test file fails if use_mex=0. It reports that for each call to cut_sqw in bragg_positions, the number of pixels retained is 0. In comparison, a run with master typically gives over 5000 pixels retained. This does not look like a small rounding error in the number of pixels. With use_mex=1, the test passes.
|
1.0
|
test_tobyfit/test_refine_crystal fails if use_mex=0 - Running the test_tobyfit subfolders, test_refine_crystal's single test file fails if use_mex=0. It reports that for each call to cut_sqw in bragg_positions, the number of pixels retained is 0. In comparison, a run with master typically gives over 5000 pixels retained. This does not look like a small rounding error in the number of pixels. With use_mex=1, the test passes.
|
non_process
|
test tobyfit test refine crystal fails if use mex running the test tobyfit subfolders test refine crystal s single test file fails if use mex it reports that for each call to cut sqw in bragg positions the number of pixels retained is in comparison a run with master typically gives over pixels retained this does not look like a small rounding error in the number of pixels with use mex the test passes
| 0
|
320,707
| 27,452,919,431
|
IssuesEvent
|
2023-03-02 18:49:22
|
BoBAdministration/QA-Bug-Reports
|
https://api.github.com/repos/BoBAdministration/QA-Bug-Reports
|
closed
|
The Dreaded Infinite Food Has Returned
|
Fixed-PendingTesting
|
**Describe the Bug**
This model of seaweed is now providing infinite food for lurds.


**To Reproduce**
1. Logged onto o2, any titania server should work but this was confirmed on o2
2. Hop on a lurd and head to the ocean, looking for the type of seaweed in the above pictures. There is one at (X=307874.531250,Y=425485.781250,Z=-15124.646484) which is the first one I ate from in the evidence video.
3. The seaweed doesn't vanish, instead just being infinitely eaten and giving easy food to lurd.
4. Lurd profit.
**Expected behavior**
The seaweed should be consumed and not exist anymore until it respawns normally.
**Actual behavior**
This model of seaweed in particular is eaten infinitely, giving easy and infinite food to lurds.
**Screenshots & Video**
https://youtu.be/TvJ4VKcQWPw
**Branch Version**
Live Branch
**Character Information**
Random spawn lurd, 0.6 at the time of filming.
**Additional Information**
Official Titania map.
|
1.0
|
The Dreaded Infinite Food Has Returned - **Describe the Bug**
This model of seaweed is now providing infinite food for lurds.


**To Reproduce**
1. Logged onto o2, any titania server should work but this was confirmed on o2
2. Hop on a lurd and head to the ocean, looking for the type of seaweed in the above pictures. There is one at (X=307874.531250,Y=425485.781250,Z=-15124.646484) which is the first one I ate from in the evidence video.
3. The seaweed doesn't vanish, instead just being infinitely eaten and giving easy food to lurd.
4. Lurd profit.
**Expected behavior**
The seaweed should be consumed and not exist anymore until it respawns normally.
**Actual behavior**
This model of seaweed in particular is eaten infinitely, giving easy and infinite food to lurds.
**Screenshots & Video**
https://youtu.be/TvJ4VKcQWPw
**Branch Version**
Live Branch
**Character Information**
Random spawn lurd, 0.6 at the time of filming.
**Additional Information**
Official Titania map.
|
non_process
|
the dreaded infinite food has returned describe the bug this model of seaweed is now providing infinite food for lurds to reproduce logged onto any titania server should work but this was confirmed on hop on a lurd and head to the ocean looking for the type of seaweed in the above pictures there is one at x y z which is the first one i ate from in the evidence video the seaweed doesn t vanish instead just being infinitely eaten and giving easy food to lurd lurd profit expected behavior the seaweed should be consumed and not exist anymore until it respawns normally actual behavior this model of seaweed in particular is eaten infinitely giving easy and infinite food to lurds screenshots video branch version live branch character information random spawn lurd at the time of filming additional information official titania map
| 0
|
20,977
| 27,830,896,526
|
IssuesEvent
|
2023-03-20 04:44:26
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
AWS Lambda Resource Detection Processor
|
enhancement Stale processor/resourcedetection
|
### Component(s)
processor/resourcedetection
### Is your feature request related to a problem? Please describe.
Currently there is no seamless way to add AWS Lambda resource information into traces, metrics, and logs. Since there is support for running the Open Telemetry collector [within the AWS Lambda runtime](https://github.com/open-telemetry/opentelemetry-lambda), ideally there would be an easy way to collect the AWS Lambda _host_ resource information via the collector.
### Describe the solution you'd like
Add a new AWS Lambda Resource Detection Processor, similar to the existing [aws](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor/internal/aws) resource detection processors.
The implementation should be similar to the [lambda detector](https://github.com/bhautikpip/opentelemetry-go-contrib/tree/main/detectors/aws/lambda) built into the `opentelemetry-go-contrib` repo where we can simply use the AWS Lambda [reserved environment variables](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime) to gather the necessary resource information.
Additionally, it would be nice if, like the [AWS EC2](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor#aws-ec2) resource detection processor, the lambda resource detection processor supported gathering tags for the Lambda Function. Like the EC2 resource detection processor, this would require that users add an IAM permission to the Lambda Function IAM role in order to get the function's tags, specifically the `lambda:ListTags` permission.
### Describe alternatives you've considered
Since the AWS Lambda Function resource information is exposed as environment variables, technically it should be possible to use the [Resource Processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourceprocessor) to insert / upsert attributes using environment variable values, but this would mean all users would need to know and understand the correct standard open telemetry attribute names and their corresponding AWS Lambda [reserved environment variable](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime) values. Also, this would not support collecting the lambda function's tags.
Another alternative that most users likely do today is to use a resource detector built into their programming language's open telemetry instrumentation library, like Go's [lambda detector](https://github.com/bhautikpip/opentelemetry-go-contrib/tree/main/detectors/aws/lambda), Python's [AwsLambdaResourceDetector](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/sdk-extension/opentelemetry-sdk-extension-aws/src/opentelemetry/sdk/extension/aws/resource/_lambda.py), Java's [LambdaResource](https://github.com/open-telemetry/opentelemetry-java-contrib/blob/main/aws-resources/src/main/java/io/opentelemetry/contrib/aws/resource/LambdaResource.java), .NET's [AWSLambdaResourceDetector](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/main/src/OpenTelemetry.Contrib.Extensions.AWSXRay/Resources/AWSLambdaResourceDetector.cs), JavaScript's [AwsLambdaDetector](https://github.com/open-telemetry/opentelemetry-js-contrib/blob/main/detectors/node/opentelemetry-resource-detector-aws/src/detectors/AwsLambdaDetector.ts), etc.
### Additional context
Assuming a new AWS Lambda Resource Detection Processor is created, likely the [OpenTelemetry Lambda](https://github.com/open-telemetry/opentelemetry-lambda) project and the corresponding [AWS managed OpenTelemetry Lambda Layers](https://github.com/aws-observability/aws-otel-lambda) project would need to be updated to include the Resource Detection Processor in their distributions.
|
1.0
|
AWS Lambda Resource Detection Processor - ### Component(s)
processor/resourcedetection
### Is your feature request related to a problem? Please describe.
Currently there is no seamless way to add AWS Lambda resource information into traces, metrics, and logs. Since there is support for running the Open Telemetry collector [within the AWS Lambda runtime](https://github.com/open-telemetry/opentelemetry-lambda), ideally there would be an easy way to collect the AWS Lambda _host_ resource information via the collector.
### Describe the solution you'd like
Add a new AWS Lambda Resource Detection Processor, similar to the existing [aws](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor/internal/aws) resource detection processors.
The implementation should be similar to the [lambda detector](https://github.com/bhautikpip/opentelemetry-go-contrib/tree/main/detectors/aws/lambda) built into the `opentelemetry-go-contrib` repo where we can simply use the AWS Lambda [reserved environment variables](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime) to gather the necessary resource information.
Additionally, it would be nice if, like the [AWS EC2](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor#aws-ec2) resource detection processor, the lambda resource detection processor supported gathering tags for the Lambda Function. Like the EC2 resource detection processor, this would require that users add an IAM permission to the Lambda Function IAM role in order to get the function's tags, specifically the `lambda:ListTags` permission.
### Describe alternatives you've considered
Since the AWS Lambda Function resource information is exposed as environment variables, technically it should be possible to use the [Resource Processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourceprocessor) to insert / upsert attributes using environment variable values, but this would mean all users would need to know and understand the correct standard open telemetry attribute names and their corresponding AWS Lambda [reserved environment variable](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime) values. Also, this would not support collecting the lambda function's tags.
Another alternative that most users likely do today is to use a resource detector built into their programming language's open telemetry instrumentation library, like Go's [lambda detector](https://github.com/bhautikpip/opentelemetry-go-contrib/tree/main/detectors/aws/lambda), Python's [AwsLambdaResourceDetector](https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/sdk-extension/opentelemetry-sdk-extension-aws/src/opentelemetry/sdk/extension/aws/resource/_lambda.py), Java's [LambdaResource](https://github.com/open-telemetry/opentelemetry-java-contrib/blob/main/aws-resources/src/main/java/io/opentelemetry/contrib/aws/resource/LambdaResource.java), .NET's [AWSLambdaResourceDetector](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/main/src/OpenTelemetry.Contrib.Extensions.AWSXRay/Resources/AWSLambdaResourceDetector.cs), JavaScript's [AwsLambdaDetector](https://github.com/open-telemetry/opentelemetry-js-contrib/blob/main/detectors/node/opentelemetry-resource-detector-aws/src/detectors/AwsLambdaDetector.ts), etc.
### Additional context
Assuming a new AWS Lambda Resource Detection Processor is created, likely the [OpenTelemetry Lambda](https://github.com/open-telemetry/opentelemetry-lambda) project and the corresponding [AWS managed OpenTelemetry Lambda Layers](https://github.com/aws-observability/aws-otel-lambda) project would need to be updated to include the Resource Detection Processor in their distributions.
|
process
|
aws lambda resource detection processor component s processor resourcedetection is your feature request related to a problem please describe currently there is no seamless way to add aws lambda resource information into traces metrics and logs since there is support for running the open telemetry collector ideally there would be an easy way collect the aws lambda host resource information via the collector describe the solution you d like add a new aws lambda resource detection processor similar to the existing resource detection processors the implementation should be similar the the built into the opentelemetry go contrib repo where we can simply use the aws lambda to gather the necessary resource information additionally it would be nice if like the resource detection processor the lambda resource detection processor supported gathering tags for the lambda function like the resource detection processor this would require that users add an iam permission to the lambda function iam role in order to get the function s tags specifically the lambda listtags permission describe alternatives you ve considered since the aws lambda function resource information is exposed as environment variables technically it should be possible to use the to insert upsert attributes using environment variable values but this would mean all users would need to know and understand the correct standard open telemetry attribute names and their corresponding aws lambda values also this would not support collecting the lambda function s tags another alternative that most users likely do today is to use a resource detector built into their programming language s open telemetry instrumentation library like go s python s java s net s javascript s etc additional context assuming a new aws lambda resource detection processor is created likely the project and the corresponding project would need to be updated to include the resource detection processor in their distributions
| 1
|
16,831
| 22,061,919,759
|
IssuesEvent
|
2022-05-30 19:12:41
|
NixOS/nixpkgs
|
https://api.github.com/repos/NixOS/nixpkgs
|
closed
|
22.05 Feature Freeze
|
6.topic: release process
|
It's time for another feature freeze!
Let's clarify any blocking concerns for the 22.05 release in this thread, which will go live on May 30th.
Feature Freeze issue of the previous release: #140168
Release Schedule: #165792
**Edit**: I have crossed out subsystems that have responded that there are no blockers
Nix/nix-cli ecosystem: @Profpatsch @edolstra @grahamc @nbp
Mobile: @samueldr
~~NixOS Modules / internals: @ericson2314 @infinisil @alyssais @roberth~~
~~NixOS tests: @tfc~~
Marketing: @garbas @tomberek
~~Docs: @ryantm~~
Release: @NixOS/nixos-release-managers
Darwin: @NixOS/darwin-maintainers
~~BEAM: @NixOS/beam @minijackson~~
C: @matthewbauer @mic92
Coq: @CohenCyril @Zimmi48 @siraben @vbgl
~~Dhall: @Gabriel439 @ehmry~~
~~Emacs: @adisbladis~~
~~Vim/Neovim: @jonringer @softinio @teto~~
Go: @c00w @cstrahan @Frostman @kalbasit @mic92 @orivej @rvolosatovs @zowoq
Haskell: @NixOS/haskell
~~Python: @fridh @mweinelt @jonringer~~
Perl: @stigtsp
~~PHP: @NixOS/php @ma27~~
Ruby: @marsam
Rust: @andir @lnl7 @mic92 @zowoq
~~R: @bcdarwin @jbedo~~
Bazel: @mboes @marsam @uri-canva @avdv @olebedev @groodt @aherrmann @ylecornec
Blockchains: @mmahut @RaghavSood
Cinnamon: @mkg20001
~~DockerTools: @roberth @utdemir~~
GNOME: @NixOS/gnome @bobby285271 @dasj19 @maxeaubrey
~~Pantheon: @NixOS/pantheon~~
Podman: @NixOS/podman
PostgreSQL: @thoughtpolice
Qt / KDE: @NixOS/qt-kde
systemd: @NixOS/systemd
Everyone else: @NixOS/nixpkgs-committers @NixOS/release-engineers
If you think some subsystem/person/GitHub team should be added or removed for the next release, you can modify the list [here](https://github.com/NixOS/nixpkgs/blob/master/maintainers/team-list.nix#L17).
No issue is too big or too small, but let's remember that we are all working on the project voluntarily in our free time here, so let's focus on the issues that can be realistically addressed by release time. Thanks everyone!
|
1.0
|
22.05 Feature Freeze - It's time for another feature freeze!
Let's clarify any blocking concerns for the 22.05 release in this thread, which will go live on May 30th.
Feature Freeze issue of the previous release: #140168
Release Schedule: #165792
**Edit**: I have crossed out subsystems that have responded that there are no blockers
Nix/nix-cli ecosystem: @Profpatsch @edolstra @grahamc @nbp
Mobile: @samueldr
~~NixOS Modules / internals: @ericson2314 @infinisil @alyssais @roberth~~
~~NixOS tests: @tfc~~
Marketing: @garbas @tomberek
~~Docs: @ryantm~~
Release: @NixOS/nixos-release-managers
Darwin: @NixOS/darwin-maintainers
~~BEAM: @NixOS/beam @minijackson~~
C: @matthewbauer @mic92
Coq: @CohenCyril @Zimmi48 @siraben @vbgl
~~Dhall: @Gabriel439 @ehmry~~
~~Emacs: @adisbladis~~
~~Vim/Neovim: @jonringer @softinio @teto~~
Go: @c00w @cstrahan @Frostman @kalbasit @mic92 @orivej @rvolosatovs @zowoq
Haskell: @NixOS/haskell
~~Python: @fridh @mweinelt @jonringer~~
Perl: @stigtsp
~~PHP: @NixOS/php @ma27~~
Ruby: @marsam
Rust: @andir @lnl7 @mic92 @zowoq
~~R: @bcdarwin @jbedo~~
Bazel: @mboes @marsam @uri-canva @avdv @olebedev @groodt @aherrmann @ylecornec
Blockchains: @mmahut @RaghavSood
Cinnamon: @mkg20001
~~DockerTools: @roberth @utdemir~~
GNOME: @NixOS/gnome @bobby285271 @dasj19 @maxeaubrey
~~Pantheon: @NixOS/pantheon~~
Podman: @NixOS/podman
PostgreSQL: @thoughtpolice
Qt / KDE: @NixOS/qt-kde
systemd: @NixOS/systemd
Everyone else: @NixOS/nixpkgs-committers @NixOS/release-engineers
If you think some subsystem/person/GitHub team should be added or removed for the next release, you can modify the list [here](https://github.com/NixOS/nixpkgs/blob/master/maintainers/team-list.nix#L17).
No issue is too big or too small, but let's remember that we are all working on the project voluntarily in our free time here, so let's focus on the issues that can be realistically addressed by release time. Thanks everyone!
|
process
|
feature freeze it s time for another feature freeze let s clarify any blocking concerns for the release in this thread which will go live on may feature freeze issue of th previous release release schedule edit i have crossed out subsystems that have responded that there are no blockers nix nix cli ecosystem profpatsch edolstra grahamc nbp mobile samueldr nixos modules internals infinisil alyssais roberth nixos tests tfc marketing garbas tomberek docs ryantm release nixos nixos release managers darwin nixos darwin maintainers beam nixos beam minijackson c matthewbauer coq cohencyril siraben vbgl dhall ehmry emacs adisbladis vim neovim jonringer softinio teto go cstrahan frostman kalbasit orivej rvolosatovs zowoq haskell nixos haskell python fridh mweinelt jonringer perl stigtsp php nixos php ruby marsam rust andir zowoq r bcdarwin jbedo bazel mboes marsam uri canva avdv olebedev groodt aherrmann ylecornec blockchains mmahut raghavsood cinnamon dockertools roberth utdemir gnome nixos gnome maxeaubrey pantheon nixos pantheon podman nixos podman postgresql thoughtpolice qt kde nixos qt kde systemd nixos systemd everyone else nixos nixpkgs committers nixos release engineers if you think some subsystem person github team should be added or removed for the next release you can modify the list no issue is too big or too small but let s remember that we are all working on the project voluntarily in our free time here so let s focus on the issues that can be realistically addressed by release time thanks everyone
| 1
|
10,894
| 13,672,883,112
|
IssuesEvent
|
2020-09-29 09:06:04
|
GoogleCloudPlatform/dotnet-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples
|
closed
|
Move run/events-* to eventarc/
|
api: run priority: p3 samples type: process
|
The Events for Cloud Run product/API is not separate from Run.
Given that, we should move the `run/events-*` samples to `eventarc/` and change the README and region tags appropriately.
.NET samples:
https://github.com/GoogleCloudPlatform/dotnet-docs-samples/tree/master/run
Example:
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/eventarc
---
We should leave the README for `run` since we still have the Knative sample. Probably just use the old README.
|
1.0
|
Move run/events-* to eventarc/ - The Events for Cloud Run product/API is not separate from Run.
Given that, we should move the `run/events-*` samples to `eventarc/` and change the README and region tags appropriately.
.NET samples:
https://github.com/GoogleCloudPlatform/dotnet-docs-samples/tree/master/run
Example:
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/eventarc
---
We should leave the README for `run` since we still have the Knative sample. Probably just use the old README.
|
process
|
move run events to eventarc the events for cloud run product api is not separate from run given that we should move the run events samples to eventarc and change the readme and region tags appropriately net samples example we should leave the readme for run since we still have the knative sample probably just use the old readme
| 1
|
262,564
| 22,917,910,840
|
IssuesEvent
|
2022-07-17 08:22:12
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
test_runner: add timeout for tests
|
feature request test_runner
|
### What is the problem this feature will solve?
currently - tests can easily be stuck or never end, which makes it very hard to understand and locate where your problem might originate
for example, this program will be very hard to debug:
```js
test('top level', { concurrency: 2 }, async (t) => {
await t.test('endless', () => new Promise(() => {
setTimeout(() => {}, /* Large number */);
}));
// ... many other tests
});
```
### What is the feature you are proposing to solve the problem?
add a timeout to tests running via `node:test`, I propose adding a default timeout that will be configurable
one of the most important parts of testing is failing fast and knowing what the failure is, and timing out will help determine **which** test is misfunctioning
### What alternatives have you considered?
_No response_
|
1.0
|
test_runner: add timeout for tests - ### What is the problem this feature will solve?
currently - tests can easily be stuck or never end, which makes it very hard to understand and locate where your problem might originate
for example, this program will be very hard to debug:
```js
test('top level', { concurrency: 2 }, async (t) => {
await t.test('endless', () => new Promise(() => {
setTimeout(() => {}, /* Large number */);
}));
// ... many other tests
});
```
### What is the feature you are proposing to solve the problem?
add a timeout to tests running via `node:test`, I propose adding a default timeout that will be configurable
one of the most important parts of testing is failing fast and knowing what the failure is, and timing out will help determine **which** test is misfunctioning
### What alternatives have you considered?
_No response_
|
non_process
|
test runner add timeout for tests what is the problem this feature will solve currently tests can easily be stuck or never end which makes it very hard to understand and locate where your problem might originate for example this program will be very hard to debug js test top level concurrency async t await t test endless new promise settimeout large number many other tests what is the feature you are proposing to solve the problem add a timeout to tests running via node test i propose adding a default timeout that will be configurable one of the most important parts of testing is failing fast and knowing what the failure is and timing out will help determine which test is misfunctioning what alternatives have you considered no response
| 0
|
245,724
| 18,792,228,375
|
IssuesEvent
|
2021-11-08 18:00:26
|
Green-Software-Foundation/software_carbon_intensity
|
https://api.github.com/repos/Green-Software-Foundation/software_carbon_intensity
|
closed
|
Methodology section: provide method to derive operational emissions of a given piece of software
|
documentation low
|
As stated earlier - the goal of the standard is to provide a "practical" method. Thus, it is necessary to also define "operational emissions of a given piece of software" and provide a method how to establish this.
|
1.0
|
Methodology section: provide method to derive operational emissions of a given piece of software - As stated earlier - the goal of the standard is to provide a "practical" method. Thus, it is necessary to also define "operational emissions of a given piece of software" and provide a method how to establish this.
|
non_process
|
methodology section provide method to derive operational emissions of a given piece of software as stated earlier the goal of the standard is to provide a practical method thus it is necessary to also define operational emissions of a given piece of software and provide a method how to establish this
| 0
|
56,999
| 13,962,905,732
|
IssuesEvent
|
2020-10-25 11:49:16
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
CUDA problem with Linux Kernel 5.9
|
module: build module: cuda triaged
|
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
torch.cuda.is_available() returns True on Ubuntu 20.04 with linux kernel 5.8.13, but returns False when the kernel is upgraded to 5.9-rc8. nvidia-smi and other gpu-related programs work as expected, only pytorch stops detecting the GPU.
## To Reproduce
In a Ubuntu 20.04 OS with linux kernel 5.9-rc8, run:
```python -c 'import torch; print(torch.cuda.is_available())'```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Pytorch should detect the GPU on Linux kernel 5.9.
## Environment
- PyTorch Version (e.g., 1.0): 1.6.0
- OS (e.g., Linux): Ubuntu 20.04.1 LTS (x86_64)
- How you installed PyTorch (`conda`, `pip`, source): pip
- Python version: 3.8
- CUDA version: 10.2 in pytorch, N/A in the system.
- GPU models and configuration: GeForce RTX 3090
cc @malfet @seemethere @walterddr @ngimel
|
1.0
|
CUDA problem with Linux Kernel 5.9 - ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
torch.cuda.is_available() returns True on Ubuntu 20.04 with linux kernel 5.8.13, but returns False when the kernel is upgraded to 5.9-rc8. nvidia-smi and other gpu-related programs work as expected, only pytorch stops detecting the GPU.
## To Reproduce
In a Ubuntu 20.04 OS with linux kernel 5.9-rc8, run:
```python -c 'import torch; print(torch.cuda.is_available())'```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Pytorch should detect the GPU on Linux kernel 5.9.
## Environment
- PyTorch Version (e.g., 1.0): 1.6.0
- OS (e.g., Linux): Ubuntu 20.04.1 LTS (x86_64)
- How you installed PyTorch (`conda`, `pip`, source): pip
- Python version: 3.8
- CUDA version: 10.2 in pytorch, N/A in the system.
- GPU models and configuration: GeForce RTX 3090
cc @malfet @seemethere @walterddr @ngimel
|
non_process
|
cuda problem with linux kernel 🐛 bug torch cuda is available returns true on ubuntu with linux kernel but returns false when the kernel is upgraded to nvidia smi and other gpu related programs work as expected only pytorch stops detecting the gpu to reproduce in a ubuntu os with linux kernel run python c import torch print torch cuda is available expected behavior pytorch should detect the gpu on linux kernel environment pytorch version e g os e g linux ubuntu lts how you installed pytorch conda pip source pip python version cuda version in pytorch n a in the system gpu models and configuration geforce rtx cc malfet seemethere walterddr ngimel
| 0
|
251,774
| 8,027,160,647
|
IssuesEvent
|
2018-07-27 08:05:12
|
smashingmagazine/redesign
|
https://api.github.com/repos/smashingmagazine/redesign
|
closed
|
i have latest browser of chrome and firefox but anyway text and buttons not showing up for me
|
bug high priority
|
<!--
Meow! Thanks for your patience and kind help. If you are reporting a new issue,
please double check that we do not have any duplicates already open. You can
ensure this by searching the issue list for this repository. If there is a
duplicate, please close your issue and add a comment to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you have an issue that
can be shown visually, please provide a screenshot or gif of the problem as well.
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Please use the questions below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST though.
-->
**- Do you want to request a *feature* or report a *bug*?**
**- What is the current behavior?**
**- If the current behavior is a bug, please provide the steps to reproduce.**
**- What is the expected behavior?**
**- Please mention your operating system version and the version of your browser**
|
1.0
|
i have latest browser of chrome and firefox but anyway text and buttons not showing up for me - <!--
Meow! Thanks for your patience and kind help. If you are reporting a new issue,
please double check that we do not have any duplicates already open. You can
ensure this by searching the issue list for this repository. If there is a
duplicate, please close your issue and add a comment to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you have an issue that
can be shown visually, please provide a screenshot or gif of the problem as well.
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Please use the questions below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST though.
-->
**- Do you want to request a *feature* or report a *bug*?**
**- What is the current behavior?**
**- If the current behavior is a bug, please provide the steps to reproduce.**
**- What is the expected behavior?**
**- Please mention your operating system version and the version of your browser**
|
non_process
|
i have latest browser of chrome and firefox but anyway text and buttons not showing up for me meow thanks for your patience and kind help if you are reporting a new issue please double check that we do not have any duplicates already open you can ensure this by searching the issue list for this repository if there is a duplicate please close your issue and add a comment to the existing issue instead if you suspect your issue is a bug please edit your issue description to include the bug report information shown below if you have an issue that can be shown visually please provide a screenshot or gif of the problem as well bug report information please use the questions below to provide key information from your environment you do not have to include this information if this is a feature request though do you want to request a feature or report a bug what is the current behavior if the current behavior is a bug please provide the steps to reproduce what is the expected behavior please mention your operating system version and the version of your browser
| 0
|
164,274
| 20,364,427,458
|
IssuesEvent
|
2022-02-21 02:46:25
|
prashantgodhwani/phonefriend
|
https://api.github.com/repos/prashantgodhwani/phonefriend
|
opened
|
CVE-2021-3664 (Medium) detected in url-parse-1.4.7.tgz
|
security vulnerability
|
## CVE-2021-3664 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /phonefriend/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-1.7.2.tgz (Root Library)
- webpack-dev-server-2.11.5.tgz
- sockjs-client-1.1.5.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution (url-parse): 1.5.2</p>
<p>Direct dependency fix Resolution (laravel-mix): 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-3664 (Medium) detected in url-parse-1.4.7.tgz - ## CVE-2021-3664 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /phonefriend/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-1.7.2.tgz (Root Library)
- webpack-dev-server-2.11.5.tgz
- sockjs-client-1.1.5.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution (url-parse): 1.5.2</p>
<p>Direct dependency fix Resolution (laravel-mix): 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file phonefriend package json path to vulnerable library node modules url parse package json dependency hierarchy laravel mix tgz root library webpack dev server tgz sockjs client tgz x url parse tgz vulnerable library vulnerability details url parse is vulnerable to url redirection to untrusted site publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse direct dependency fix resolution laravel mix step up your open source security game with whitesource
| 0
|
169,774
| 20,841,920,489
|
IssuesEvent
|
2022-03-21 01:51:18
|
nycbeardo/react-metronome
|
https://api.github.com/repos/nycbeardo/react-metronome
|
opened
|
CVE-2022-24771 (High) detected in node-forge-0.7.5.tgz
|
security vulnerability
|
## CVE-2022-24771 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.7.5.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz</a></p>
<p>Path to dependency file: /react-metronome/package.json</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- webpack-dev-server-3.2.1.tgz
- selfsigned-1.10.4.tgz
- :x: **node-forge-0.7.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code is lenient in checking the digest algorithm structure. This can allow a crafted structure that steals padding bytes and uses unchecked portion of the PKCS#1 encoded message to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
<p>Publish Date: 2022-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24771>CVE-2022-24771</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24771">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24771</a></p>
<p>Release Date: 2022-03-18</p>
<p>Fix Resolution: node-forge - 1.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
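The flaw class behind CVE-2022-24771 is a Bleichenbacher-style low-exponent forgery: if a PKCS#1 v1.5 verifier is lenient about the encoded structure and effectively ignores trailing bytes, an attacker can forge signatures for e = 3 by taking an integer cube root. The sketch below is a conceptual model of that attack, not node-forge's actual code, and it omits the RSA modulus for clarity (the cube stays below a 2048-bit modulus anyway):

```python
import hashlib

def icbrt(n):
    # Integer cube root via binary search
    lo, hi = 0, 1 << ((n.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

K = 256  # modulus size in bytes (RSA-2048), public exponent e = 3

def lenient_verify(sig, digest):
    # BROKEN verifier: checks only the leading padding + digest and
    # ignores everything after it (the lenient-structure flaw class)
    m = (sig ** 3).to_bytes(K, "big")
    return m.startswith(b"\x00\x01\xff\x00" + digest)

def forge(digest):
    # Attacker: pick s so that s^3 lands in the window whose high bytes
    # equal the expected prefix; the low bytes become unchecked garbage.
    prefix = b"\x00\x01\xff\x00" + digest
    lo = int.from_bytes(prefix + b"\x00" * (K - len(prefix)), "big")
    hi = lo + (1 << (8 * (K - len(prefix)))) - 1
    s = icbrt(hi)
    assert s ** 3 >= lo  # cubes are dense enough that one always fits
    return s

digest = hashlib.sha256(b"message the attacker never signed").digest()
print(lenient_verify(forge(digest), digest))  # True: forged without any key
```

A strict verifier re-encodes the expected PKCS#1 structure and compares the full k-byte message, which is what the fixed node-forge 1.3.0 release enforces.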
|
True
|
CVE-2022-24771 (High) detected in node-forge-0.7.5.tgz - ## CVE-2022-24771 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.7.5.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz</a></p>
<p>Path to dependency file: /react-metronome/package.json</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- webpack-dev-server-3.2.1.tgz
- selfsigned-1.10.4.tgz
- :x: **node-forge-0.7.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code is lenient in checking the digest algorithm structure. This can allow a crafted structure that steals padding bytes and uses unchecked portion of the PKCS#1 encoded message to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
<p>Publish Date: 2022-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24771>CVE-2022-24771</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24771">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24771</a></p>
<p>Release Date: 2022-03-18</p>
<p>Fix Resolution: node-forge - 1.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in node forge tgz cve high severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file react metronome package json path to vulnerable library node modules node forge package json dependency hierarchy react scripts tgz root library webpack dev server tgz selfsigned tgz x node forge tgz vulnerable library vulnerability details forge also called node forge is a native implementation of transport layer security in javascript prior to version rsa pkcs signature verification code is lenient in checking the digest algorithm structure this can allow a crafted structure that steals padding bytes and uses unchecked portion of the pkcs encoded message to forge a signature when a low public exponent is being used the issue has been addressed in node forge version there are currently no known workarounds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge step up your open source security game with whitesource
| 0
|
7,468
| 10,563,693,578
|
IssuesEvent
|
2019-10-04 21:45:36
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Error_reporting: 'test_report_exception' systest flakes w/ backoff error
|
api: clouderrorreporting flaky testing type: process
|
See [failed system test runs](https://source.cloud.google.com/results/invocations/b4cf10d9-0eee-474d-8120-ef168ccf0efb/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Ferror_reporting/log).
```python
___________________ TestErrorReporting.test_report_exception ___________________
self = <test_system.TestErrorReporting testMethod=test_report_exception>
def test_report_exception(self):
# Get a class name unique to this test case.
class_name = "RuntimeError" + unique_resource_id("_")
# Simulate an error: group won't exist until we report
# first exception.
_simulate_exception(class_name, Config.CLIENT)
is_one = functools.partial(operator.eq, 1)
is_one.__name__ = "is_one" # partial() has no name.
retry = RetryResult(is_one, max_tries=6)
wrapped_get_count = retry(_get_error_count)
> error_count = wrapped_get_count(class_name, Config.CLIENT)
tests/system/test_system.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = ('RuntimeError_1545140110332', <google.cloud.error_reporting.client.Client object at 0x7fb7b6633828>)
kwargs = {}, tries = 6, result = None, delay = 32
msg = 'is_one. Trying again in 32 seconds...'
@wraps(to_wrap)
def wrapped_function(*args, **kwargs):
tries = 0
while tries < self.max_tries:
result = to_wrap(*args, **kwargs)
if self.result_predicate(result):
return result
delay = self.delay * self.backoff**tries
msg = "%s. Trying again in %d seconds..." % (
self.result_predicate.__name__, delay,)
self.logger(msg)
time.sleep(delay)
tries += 1
> raise BackoffFailed()
E test_utils.retry.BackoffFailed
../test_utils/test_utils/retry.py:155: BackoffFailed
```
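The wrapper in the traceback retries until a predicate accepts the result, sleeping `delay * backoff**tries` between attempts and raising `BackoffFailed` once `max_tries` is exhausted. A minimal standalone sketch of the same pattern (names are illustrative, not the `test_utils.retry` API):

```python
import time

class BackoffFailed(Exception):
    """Raised when the predicate never accepted a result."""

def retry_result(predicate, func, *args, max_tries=6, delay=1, backoff=2,
                 sleep=time.sleep, **kwargs):
    # Mirror of the loop in the traceback: geometric backoff between tries
    for tries in range(max_tries):
        result = func(*args, **kwargs)
        if predicate(result):
            return result
        sleep(delay * backoff ** tries)
    raise BackoffFailed()

# Demo: succeeds on the third call; a no-op sleep is injected so the
# demo does not actually wait
calls = []
def flaky():
    calls.append(1)
    return 1 if len(calls) == 3 else 0

print(retry_result(lambda r: r == 1, flaky, sleep=lambda s: None))  # 1
```

With `max_tries=6` and `backoff=2`, a flake like the one above simply means the error count never reached 1 within roughly `delay * (2**6 - 1)` seconds of polling.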
|
1.0
|
Error_reporting: 'test_report_exception' systest flakes w/ backoff error - See [failed system test runs](https://source.cloud.google.com/results/invocations/b4cf10d9-0eee-474d-8120-ef168ccf0efb/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Ferror_reporting/log).
```python
___________________ TestErrorReporting.test_report_exception ___________________
self = <test_system.TestErrorReporting testMethod=test_report_exception>
def test_report_exception(self):
# Get a class name unique to this test case.
class_name = "RuntimeError" + unique_resource_id("_")
# Simulate an error: group won't exist until we report
# first exception.
_simulate_exception(class_name, Config.CLIENT)
is_one = functools.partial(operator.eq, 1)
is_one.__name__ = "is_one" # partial() has no name.
retry = RetryResult(is_one, max_tries=6)
wrapped_get_count = retry(_get_error_count)
> error_count = wrapped_get_count(class_name, Config.CLIENT)
tests/system/test_system.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = ('RuntimeError_1545140110332', <google.cloud.error_reporting.client.Client object at 0x7fb7b6633828>)
kwargs = {}, tries = 6, result = None, delay = 32
msg = 'is_one. Trying again in 32 seconds...'
@wraps(to_wrap)
def wrapped_function(*args, **kwargs):
tries = 0
while tries < self.max_tries:
result = to_wrap(*args, **kwargs)
if self.result_predicate(result):
return result
delay = self.delay * self.backoff**tries
msg = "%s. Trying again in %d seconds..." % (
self.result_predicate.__name__, delay,)
self.logger(msg)
time.sleep(delay)
tries += 1
> raise BackoffFailed()
E test_utils.retry.BackoffFailed
../test_utils/test_utils/retry.py:155: BackoffFailed
```
|
process
|
error reporting test report exception systest flakes w backoff error see python testerrorreporting test report exception self def test report exception self get a class name unique to this test case class name runtimeerror unique resource id simulate an error group won t exist until we report first exception simulate exception class name config client is one functools partial operator eq is one name is one partial has no name retry retryresult is one max tries wrapped get count retry get error count error count wrapped get count class name config client tests system test system py args runtimeerror kwargs tries result none delay msg is one trying again in seconds wraps to wrap def wrapped function args kwargs tries while tries self max tries result to wrap args kwargs if self result predicate result return result delay self delay self backoff tries msg s trying again in d seconds self result predicate name delay self logger msg time sleep delay tries raise backofffailed e test utils retry backofffailed test utils test utils retry py backofffailed
| 1
|
19,560
| 25,884,728,102
|
IssuesEvent
|
2022-12-14 13:48:20
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
closed
|
Ensure that `ProcessNode.get_builder_restart` fully restores all inputs including `non_db` ones
|
requires discussion priority/nice-to-have topic/processes
|
I have a simple workchain to wrap a calculation. I used expose_inputs to forward everything, and it was using a namespace until recently.
But I would like to get rid of the namespace in this case, and simply forward every input to the calculation job, to get rid of the extra layer of indirection. But metadata.options seems to be impossible to forward directly, as validation for the options key only works for calculations.
The offending line seems to be:
> aiida/engine/processes/process.py in _setup_metadata(self)
> 650 elif name == 'options':
> 651 for option_name, option_value in metadata.items():
>--> 652 self.node.set_option(option_name, option_value)
> 653 else:
> 654 raise RuntimeError('unsupported metadata key: {}'.format(name))
>
> AttributeError: 'WorkChainNode' object has no attribute 'set_option'
set_option is only defined in calcjob.py, but not in workchain.py (or process.py). Adding it manually seems to just work in my case. I'm not sure if this has unwanted side effects, though, I only tried a local job for now.
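The `AttributeError` here is ordinary Python behaviour when a method lives on only one sibling subclass. The hypothetical sketch below (class names are illustrative, not the real aiida hierarchy) shows the failure and why hoisting the option API into a shared base, rather than patching one subclass, is the safer fix:

```python
class ProcessNode:
    pass

class CalcJobNode(ProcessNode):
    def set_option(self, name, value):
        setattr(self, name, value)

class WorkChainNode(ProcessNode):
    pass

try:
    WorkChainNode().set_option("resources", 1)  # same failure as reported
except AttributeError as exc:
    print(exc)  # 'WorkChainNode' object has no attribute 'set_option'

# Fix sketch: define the option API once on a shared base class so every
# node kind that carries options inherits the same behaviour
class OptionedProcessNode(ProcessNode):
    def set_option(self, name, value):
        setattr(self, name, value)

class FixedWorkChainNode(OptionedProcessNode):
    pass

FixedWorkChainNode().set_option("resources", 1)  # now works
```

This also explains the reporter's observation that manually adding the method "just works": the metadata loop only needs the attribute to exist, but a shared definition avoids divergent copies with different side effects.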
|
1.0
|
Ensure that `ProcessNode.get_builder_restart` fully restores all inputs including `non_db` ones - I have a simple workchain to wrap a calculation. I used expose_inputs to forward everything, and it was using a namespace until recently.
But I would like to get rid of the namespace in this case, and simply forward every input to the calculation job, to get rid of the extra layer of indirection. But metadata.options seems to be impossible to forward directly, as validation for the options key only works for calculations.
The offending line seems to be:
> aiida/engine/processes/process.py in _setup_metadata(self)
> 650 elif name == 'options':
> 651 for option_name, option_value in metadata.items():
>--> 652 self.node.set_option(option_name, option_value)
> 653 else:
> 654 raise RuntimeError('unsupported metadata key: {}'.format(name))
>
> AttributeError: 'WorkChainNode' object has no attribute 'set_option'
set_option is only defined in calcjob.py, but not in workchain.py (or process.py). Adding it manually seems to just work in my case. I'm not sure if this has unwanted side effects, though, I only tried a local job for now.
|
process
|
ensure that processnode get builder restart fully restores all inputs including non db ones i have a simple workchain to wrap a calculation i used expose inputs to forward everything and it was using a namespace until recently but i would like to get rid of the namespace in this case and simply forward every input to the calculation job to get rid of the extra layer of indirection but metadata options seems to be impossible to forward directly as validation for the options key only works for calculations the offending line seems to be aiida engine processes process py in setup metadata self elif name options for option name option value in metadata items self node set option option name option value else raise runtimeerror unsupported metadata key format name attributeerror workchainnode object has no attribute set option set option is only defined in calcjob py but not in workchain py or process py adding it manually seems to just work in my case i m not sure if this has unwanted side effects though i only tried a local job for now
| 1
|
32,809
| 2,760,294,331
|
IssuesEvent
|
2015-04-28 11:14:30
|
ufal/lindat-dspace
|
https://api.github.com/repos/ufal/lindat-dspace
|
closed
|
Item marked hidden is visible
|
bug high priority lindat-specific
|
This was probably not merged completely, I wanted to investigate private items
@vidiecan private items? what do you think?
|
1.0
|
Item marked hidden is visible - This was probably not merged completely, I wanted to investigate private items
@vidiecan private items? what do you think?
|
non_process
|
item marked hidden is visible this was probably not merged completely i wanted to investigate private items vidiecan private items what do you think
| 0
|
360,539
| 10,693,979,878
|
IssuesEvent
|
2019-10-23 09:53:50
|
ntop/ntopng
|
https://api.github.com/repos/ntop/ntopng
|
closed
|
Category lists might lead to a DoS
|
bug in progress priority ticket
|
ntopng relies on external category lists (/lua/admin/edit_category_lists.lua). If one of the external lists has too many hosts/IPs this can lead to crashes or DoS when the list is too long and thus ntopng will be killed by the OOM.
It is requested to limit the number of entries ntopng will load from a file, in order to avoid side effects as in today's build.
|
1.0
|
Category lists might lead to a DoS - ntopng relies on external category lists (/lua/admin/edit_category_lists.lua). If one of the external lists has too many hosts/IPs this can lead to crashes or DoS when the list is too long and thus ntopng will be killed by the OOM.
It is requested to limit the number of entries ntopng will load from a file, in order to avoid side effects as in today's build.
|
non_process
|
category lists might lead to a dos ntopng relies on external category lists lua admin edit category lists lua if one of the external lists has too many hosts ips this can lead to crashes or dos when the list is too long and thus ntopng will be killed by the oom it is requested to limit the number of entries ntopng will load from a file in order to avoid side effects as in today s build
| 0
|
281,289
| 21,315,392,516
|
IssuesEvent
|
2022-04-16 07:17:26
|
zunedz/pe
|
https://api.github.com/repos/zunedz/pe
|
opened
|
Inconsistent Capitalization
|
type.DocumentationBug severity.VeryLow
|

The word `Available` should be capitalized to follow the format
<!--session: 1650086595107-47c3d0f7-d20b-4206-951f-db3057011ad8-->
<!--Version: Web v3.4.2-->
|
1.0
|
Inconsistent Capitalization - 
The word `Available` should be capitalized to follow the format
<!--session: 1650086595107-47c3d0f7-d20b-4206-951f-db3057011ad8-->
<!--Version: Web v3.4.2-->
|
non_process
|
inconsistent capitalization the word available should be capitalized to follow the format
| 0
|
20,712
| 27,408,989,054
|
IssuesEvent
|
2023-03-01 09:14:23
|
X-Sharp/XSharpPublic
|
https://api.github.com/repos/X-Sharp/XSharpPublic
|
reopened
|
X# preprocessor doesn't understand nested #translate
|
bug Preprocessor
|
**Describe the bug**
X# preprocessor doesn't understand nested #translate
**To Reproduce**
```
#xtranslate __xlangext_SuppressNoInit(<h> [, <t>]) => (<h> := <h>[, <t> := <t>])
#xtranslate __xlangext_Apply(<f>, <h> [, <t>]) => <f><h>[, <f><t>]
#xtranslate __xlangext_Last(<h>) => <h>
#xtranslate __xlangext_Last(<h>, <t,...>) => __xlangext_Last(<t>)
//
#xtranslate local {<args,...>} := <expr>;
=>;
local <args> := ({<args>} := <expr>, __xlangext_Last(<args>))
#xtranslate {<args,...>} := <expr>;
=>;
(__xlangext_SuppressNoInit(<args>),;
xlangext_AssignTuple(<expr>, __xlangext_Apply(@, <args>)))
local {a, b, c} := _Get()
{a, b, c} := _Get()
```
**Expected behavior (xBase ppo)**
```
local a,b,c := (((a := a, b := b, c := c),xlangext_AssignTuple(_Get(), @a, @ b, @ c)), c)
((a := a, b := b, c := c),xlangext_AssignTuple(_Get(), @a, @ b, @ c))
```
**Actual behavior (X# ppo)**
```
local a, b, c := ((__xlangext_SuppressNoInit(a, b, c ), xlangext_AssignTuple(_Get() , __xlangext_Apply(@, a, b, c ))) , c)
(__xlangext_SuppressNoInit(a, b, c ), xlangext_AssignTuple(_Get() , __xlangext_Apply(@, a, b, c )))
```
**Error message**
error XS9111: The error is most likely related to the token 'local' that was used at this location.
error XS9002: Parser: unexpected input ','
**Additional context**
X# Compiler version 2.14.0.4 (release)
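The underlying requirement is that after a `#translate` rule rewrites a line, the preprocessor must re-scan the output so that rules matching the expansion (here `__xlangext_SuppressNoInit` and friends) fire as well. A language-agnostic sketch of that fix-point expansion, with made-up regex rules standing in for the real X# ones:

```python
import re

# (pattern, replacement) pairs play the role of #xtranslate rules
RULES = [
    (re.compile(r"LAST\((\w+)\)"), r"\1"),              # LAST(<h>)         => <h>
    (re.compile(r"LAST\((\w+), (.+)\)"), r"LAST(\2)"),  # LAST(<h>, <t,...>) => LAST(<t>)
]

def expand(text, max_passes=100):
    # Re-scan until no rule matches: the step a single-pass expander
    # misses, which is why nested translates stay unexpanded
    for _ in range(max_passes):
        new = text
        for pat, repl in RULES:
            new = pat.sub(repl, new)
        if new == text:
            return text
        text = new
    raise RuntimeError("rule set does not terminate")

print(expand("LAST(a, b, c)"))  # c  -- needs three nested passes
```

The xBase ppo in "Expected behavior" is exactly such a fix-point result; the X# ppo shows the helper macros left unexpanded after one pass.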
|
1.0
|
X# preprocessor doesn't understand nested #translate - **Describe the bug**
X# preprocessor doesn't understand nested #translate
**To Reproduce**
```
#xtranslate __xlangext_SuppressNoInit(<h> [, <t>]) => (<h> := <h>[, <t> := <t>])
#xtranslate __xlangext_Apply(<f>, <h> [, <t>]) => <f><h>[, <f><t>]
#xtranslate __xlangext_Last(<h>) => <h>
#xtranslate __xlangext_Last(<h>, <t,...>) => __xlangext_Last(<t>)
//
#xtranslate local {<args,...>} := <expr>;
=>;
local <args> := ({<args>} := <expr>, __xlangext_Last(<args>))
#xtranslate {<args,...>} := <expr>;
=>;
(__xlangext_SuppressNoInit(<args>),;
xlangext_AssignTuple(<expr>, __xlangext_Apply(@, <args>)))
local {a, b, c} := _Get()
{a, b, c} := _Get()
```
**Expected behavior (xBase ppo)**
```
local a,b,c := (((a := a, b := b, c := c),xlangext_AssignTuple(_Get(), @a, @ b, @ c)), c)
((a := a, b := b, c := c),xlangext_AssignTuple(_Get(), @a, @ b, @ c))
```
**Actual behavior (X# ppo)**
```
local a, b, c := ((__xlangext_SuppressNoInit(a, b, c ), xlangext_AssignTuple(_Get() , __xlangext_Apply(@, a, b, c ))) , c)
(__xlangext_SuppressNoInit(a, b, c ), xlangext_AssignTuple(_Get() , __xlangext_Apply(@, a, b, c )))
```
**Error message**
error XS9111: The error is most likely related to the token 'local' that was used at this location.
error XS9002: Parser: unexpected input ','
**Additional context**
X# Compiler version 2.14.0.4 (release)
|
process
|
x preprocessor doesn t understand nested translate describe the bug x preprocessor doesn t understand nested translate to reproduce xtranslate xlangext suppressnoinit xtranslate xlangext apply xtranslate xlangext last xtranslate xlangext last xlangext last xtranslate local local xlangext last xtranslate xlangext suppressnoinit xlangext assigntuple xlangext apply local a b c get a b c get expected behavior xbase ppo local a b c a a b b c c xlangext assigntuple get a b c c a a b b c c xlangext assigntuple get a b c actual behavior x ppo local a b c xlangext suppressnoinit a b c xlangext assigntuple get xlangext apply a b c c xlangext suppressnoinit a b c xlangext assigntuple get xlangext apply a b c error message error the error is most likely related to the token local that was used at this location error parser unexpected input additional context x compiler version release
| 1
|
16,726
| 21,890,460,030
|
IssuesEvent
|
2022-05-20 00:36:18
|
googleapis/nodejs-notebooks
|
https://api.github.com/repos/googleapis/nodejs-notebooks
|
closed
|
promote library to GA
|
type: process api: notebooks
|
Package name: **FIXME**
Current release: **beta**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [ ] 28 days elapsed since last beta release with new API surface
- [ ] Server API is GA
- [ ] Package API is stable, and we can commit to backward compatibility
- [ ] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
1.0
|
promote library to GA - Package name: **FIXME**
Current release: **beta**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [ ] 28 days elapsed since last beta release with new API surface
- [ ] Server API is GA
- [ ] Package API is stable, and we can commit to backward compatibility
- [ ] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
process
|
promote library to ga package name fixme current release beta proposed release ga instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required days elapsed since last beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
| 1
|
663,777
| 22,206,283,618
|
IssuesEvent
|
2022-06-07 15:07:12
|
PolyhedralDev/TerraOverworldConfig
|
https://api.github.com/repos/PolyhedralDev/TerraOverworldConfig
|
closed
|
Add packed mud to various biomes
|
feature priority=high
|
- Arid spikes
- Golden spikes
- Various savanna biomes
- Steppes
- More(?)
|
1.0
|
Add packed mud to various biomes - - Arid spikes
- Golden spikes
- Various savanna biomes
- Steppes
- More(?)
|
non_process
|
add packed mud to various biomes arid spikes golden spikes various savanna biomes steppes more
| 0
|
12,726
| 15,095,557,212
|
IssuesEvent
|
2021-02-07 11:36:29
|
emacs-ess/ESS
|
https://api.github.com/repos/emacs-ess/ESS
|
closed
|
New R processes are not shown anymore
|
process:windows
|
Hi everyone,
Since the last update, I get `Wrong type argument: window-live-p, nil` when trying to eval some code with `C-c C-c` on a source file with no active session.
Pretty easy to reproduce:
1. Open a fresh Emacs session
2. Create a `source.R` file with the content `x <- 2`
3. Try to execute it with `C-c C-c` or whatever.
This works again if the R session is created manually with `M-x R`.
|
1.0
|
New R processes are not shown anymore - Hi everyone,
Since the last update, I get `Wrong type argument: window-live-p, nil` when trying to eval some code with `C-c C-c` on a source file with no active session.
Pretty easy to reproduce:
1. Open a fresh Emacs session
2. Create a `source.R` file with the content `x <- 2`
3. Try to execute it with `C-c C-c` or whatever.
This works again if the R session is created manually with `M-x R`.
|
process
|
new r processes are not shown anymore hi everyone since the last update i get wrong type argument window live p nil when trying to eval some code with c c c c on a source file with no active session pretty easy to reproduce open a fresh emacs session create a source r file with the content x try to execute it with c c c c or whatever this works again if the r session is created manually with m x r
| 1
|
14,378
| 17,400,556,437
|
IssuesEvent
|
2021-08-02 19:00:58
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
google-cloud-* libraries should allow `google-api-core`, `google-cloud-core`, `google-auth` >1, <3
|
type: process
|
For all `google-cloud-*` libraries that require `python >=3.6`, expand `google-api-core`, `google-cloud-core` and `google-auth` pins to `>1.x.x, <3.0.0dev`.
Leave a comment in the setup.py for these libraries asking library maintainers to not require >=2.x.x versions until https://github.com/googleapis/google-cloud-python/issues/10566 has been closed.
Release `google-cloud-*` libraries with expanded pins.
```py
dependencies = [
# NOTE: Maintainers, please do not require google-api-core>=2.x.x
# Until this issue is resolved: https://github.com/googleapis/google-cloud-python/issues/10566
"google-api-core[grpc] >= 1.26.0, <3.0.0dev",
"proto-plus >= 1.15.0",
"packaging >= 14.3",
]
```
This issue can be closed when all google-cloud-* libraries have latest releases with <3.0.0dev pins.
|
1.0
|
google-cloud-* libraries should allow `google-api-core`, `google-cloud-core`, `google-auth` >1, <3 - For all `google-cloud-*` libraries that require `python >=3.6`, expand `google-api-core`, `google-cloud-core` and `google-auth` pins to `>1.x.x, <3.0.0dev`.
Leave a comment in the setup.py for these libraries asking library maintainers to not require >=2.x.x versions until https://github.com/googleapis/google-cloud-python/issues/10566 has been closed.
Release `google-cloud-*` libraries with expanded pins.
```py
dependencies = [
# NOTE: Maintainers, please do not require google-api-core>=2.x.x
# Until this issue is resolved: https://github.com/googleapis/google-cloud-python/issues/10566
"google-api-core[grpc] >= 1.26.0, <3.0.0dev",
"proto-plus >= 1.15.0",
"packaging >= 14.3",
]
```
This issue can be closed when all google-cloud-* libraries have latest releases with <3.0.0dev pins.
|
process
|
google cloud libraries should allow google api core google cloud core google auth expand google api core google cloud core and google auth pins to x x leave a comment in the setup py for these libraries asking library maintainers to not require x x versions until has been closed release google cloud libraries with expanded pins py dependencies note maintainers please do not require google api core x x until this issue is resolved google api core proto plus packaging this issue can be closed when all google cloud libraries have latest releases with pins
| 1
|
9,170
| 12,225,156,875
|
IssuesEvent
|
2020-05-03 03:18:22
|
naoki-shigehisa/paper
|
https://api.github.com/repos/naoki-shigehisa/paper
|
closed
|
Gaussian Process Latent Variable Model Factorization for Context-aware Recommender Systems
|
2019 Gaussian Process recommendation あとで調べる 使ってみたい
|
## 0. Paper
Title: [Gaussian Process Latent Variable Model Factorization for Context-aware Recommender Systems](https://arxiv.org/abs/1912.09593)
Authors:

arXiv submission date: 2019/12/19
Conference/Journal:
## 1. What is it?
Proposes Gaussian Process Latent Variable Model Factorization (GPLVMF), a model that discovers latent factors
## 2. How does it improve on prior work?
Better recommendation performance
The importance of each context can be understood
## 3. What is the key technique?
The input is a recommendation dataset made up of two users
(gray: users, green: items, yellow: context 1, blue: context 2, orange: ratings)
Factorizes it into a mean and a kernel

## 4. How was it validated?
Uses the following four real datasets

Results

## 5. Any discussion?
The content looks difficult but interesting. Will read it carefully.
## 6. What papers to read next?
Gaussian process for machine learning,
|
1.0
|
Gaussian Process Latent Variable Model Factorization for Context-aware Recommender Systems - ## 0. Paper
Title: [Gaussian Process Latent Variable Model Factorization for Context-aware Recommender Systems](https://arxiv.org/abs/1912.09593)
Authors:

arXiv submission date: 2019/12/19
Conference/Journal:
## 1. What is it?
Proposes Gaussian Process Latent Variable Model Factorization (GPLVMF), a model that discovers latent factors
## 2. What makes it better than prior work?
Improved recommendation performance
Can capture the importance of context
## 3. What is the key technique?
Input is a recommendation dataset consisting of two users
(gray: users, green: items, yellow: context 1, blue: context 2, orange: ratings)
Factorizes into a mean and a kernel

## 4. How was it validated?
Uses the following four real-world datasets

Results

## 5. Any discussion?
The content seems difficult but looks interesting. Will look into it properly.
## 6. What to read next?
Gaussian process for machine learning,
|
process
|
gaussian process latent variable model factorization for context aware recommender systems 論文 タイトル: 著者: arxiv投稿日: 学会 ジャーナル: どんなもの? 潜在的要因を探し出すモデルgaussian process latent variable model factorization gplvmf を提案 先行研究と比べてどこがすごい? レコメンドパフォーマンスの向上 コンテキストの重要性が把握できる 技術や手法のキモはどこ? 灰色:ユーザー、緑:アイテム、黄色: 、青: 、オレンジ:評価 平均とカーネルに因数分解する どうやって有効だと検証した? 結果 議論はある? 内容は難しそうだけど面白そう。ちゃんと調べる。 次に読むべき論文は? gaussian process for machine learning
| 1
|
2,990
| 5,967,921,181
|
IssuesEvent
|
2017-05-30 16:58:46
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
reopened
|
[I18n] Document process for translating strings
|
is:Needing research type:Process where:Internationalization
|
Our components have a variety of strings that must be translated into our supported languages.
We should document this process and include it in our contributing info.
|
1.0
|
[I18n] Document process for translating strings - Our components have a variety of strings that must be translated into our supported languages.
We should document this process and include it in our contributing info.
|
process
|
document process for translating strings our components have a variety of strings that must be translated into our supported languages we should document this process and include it in our contributing info
| 1
|
139,189
| 12,839,182,369
|
IssuesEvent
|
2020-07-07 18:51:02
|
openenclave/openenclave
|
https://api.github.com/repos/openenclave/openenclave
|
closed
|
Document which features use which system ocalls
|
core documentation triaged
|
#2676 Tracks adding support to give developers more control over which ocalls the OE runtime may call on their behalf. Since developers will now have a choice to disable the OE runtime from making certain ocalls, we should document which features make which ocalls.
For example, if a developer wants to disable the syscall ocalls (by not including them in EDL), there should be enough documentation that the developer knows which posix functions they must not call in oelibc to resolve unresolved symbol linker errors.
|
1.0
|
Document which features use which system ocalls - #2676 Tracks adding support to give developers more control over which ocalls the OE runtime may call on their behalf. Since developers will now have a choice to disable the OE runtime from making certain ocalls, we should document which features make which ocalls.
For example, if a developer wants to disable the syscall ocalls (by not including them in EDL), there should be enough documentation that the developer knows which posix functions they must not call in oelibc to resolve unresolved symbol linker errors.
|
non_process
|
document which features use which system ocalls tracks adding support to give developers more control over which ocalls the oe runtime may call on their behalf since developers will now have a choice to disable the oe runtime from making certain ocalls we should document which features make which ocalls for example if a developer wants to disable the syscall ocalls by not including them in edl there should be enough documentation that the developer knows which posix functions they must not call in oelibc to resolve unresolved symbol linker errors
| 0
|
379,226
| 11,217,842,558
|
IssuesEvent
|
2020-01-07 10:08:14
|
wso2/product-microgateway
|
https://api.github.com/repos/wso2/product-microgateway
|
closed
|
Upgrade ballerina version to 1.1.0
|
Priority/Normal Type/New Feature
|
### Describe your problem(s)
Needs to upgrade the ballerina version to the latest. (v1.1.0)
### Describe your solution
N/A
### How will you implement it
N/A
---
#### Suggested Labels:
Improvement
|
1.0
|
Upgrade ballerina version to 1.1.0 - ### Describe your problem(s)
Needs to upgrade the ballerina version to the latest. (v1.1.0)
### Describe your solution
N/A
### How will you implement it
N/A
---
#### Suggested Labels:
Improvement
|
non_process
|
upgrade ballerina version to describe your problem s needs to upgrade the ballerina version to the latest describe your solution n a how will you implement it n a suggested labels improvement
| 0
|
11,979
| 14,737,077,887
|
IssuesEvent
|
2021-01-07 00:48:26
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Site062 GM cannot log into SAB
|
anc-ops anc-process anp-0.5 ant-bug ant-child/secondary ant-support
|
In GitLab by @kdjstudios on Apr 12, 2018, 10:43
**Submitted by:** "Denise Joseph" <denise.joseph@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-12-33528
**Server:** Internal
**Client/Site:** 062
**Account:** NA
**Issue:**
I have tried several times to log into SAB using my credentials I always used and for some reason I cannot get into SAB to review Toronto site accounts. Can you please look into this for me.
Thank you in advance for your assistance.
|
1.0
|
Site062 GM cannot log into SAB - In GitLab by @kdjstudios on Apr 12, 2018, 10:43
**Submitted by:** "Denise Joseph" <denise.joseph@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-12-33528
**Server:** Internal
**Client/Site:** 062
**Account:** NA
**Issue:**
I have tried several times to log into SAB using my credentials I always used and for some reason I cannot get into SAB to review Toronto site accounts. Can you please look into this for me.
Thank you in advance for your assistance.
|
process
|
gm cannot log into sab in gitlab by kdjstudios on apr submitted by denise joseph helpdesk server internal client site account na issue i have tried several times to log into sab using my credentials i always used and for some reason i cannot get into sab to review toronto site accounts can you please look into this for me thank you in advance for your assistance
| 1
|
19,124
| 10,321,348,982
|
IssuesEvent
|
2019-08-31 01:07:42
|
r888888888/danbooru
|
https://api.github.com/repos/r888888888/danbooru
|
closed
|
Optimize related tags
|
Performance
|
Related tags can be calculated pretty quickly in a single SQL query. Even tags up to 25k posts can be calculated in about a second or less. Larger tags can be approximated by operating on a small sample of the full tag.
This would make related tags an order of magnitude faster and the implementation simpler. We don't have to offload large tags to Reportbooru, we can do it all in delayed jobs on Danbooru.
This also tends to give better results, since it finds small tags that the current method tends to miss due to undersampling.
```
-- monster_girl, 25k posts
SELECT
tag,
count(*),
count(*) / (sqrt(25031.0 * tags.post_count)+0.00001) as cosine_sim
FROM posts, unnest(string_to_array(tag_string, ' ')) tag
JOIN tags on tags.name = tag
WHERE tag_index @@ 'monster_girl'
GROUP BY tag, tags.post_count
ORDER by cosine_sim DESC
LIMIT 100;
tag | count | cosine_sim
---------------------------------------+-------+------------------------
monster_girl | 25031 | 0.99999999960049538588
mermaid | 5843 | 0.48314650545850923041
head_fins | 3538 | 0.32660657738893507326
lamia | 2148 | 0.29293956870473529281
wakasagihime | 2430 | 0.28365156888126750165
monster_girl_encyclopedia | 2319 | 0.26389676343888820123
goo_girl | 1494 | 0.24430716153640192567
scales | 2701 | 0.24371230359126778246
monster_musume_no_iru_nichijou | 1816 | 0.23543018974897058610
harpy | 1370 | 0.23394899437526761605
insect_girl | 1756 | 0.23028243127000465695
plant_girl | 870 | 0.18643202861432030962
spider_girl | 1005 | 0.18360304757238975398
kenkou_cross | 726 | 0.16173334896089661840
centaur | 741 | 0.15366368719514002475
miia_(monster_musume) | 581 | 0.15105784707847674151
monsterification | 595 | 0.14184048319721711827
arachne | 493 | 0.14034093263116872556
extra_eyes | 999 | 0.13861726859996715691
fins | 785 | 0.13635986523093149717
fish_girl | 489 | 0.12169981011580615716
multiple_legs | 414 | 0.12121832577557585777
dragon_girl | 1526 | 0.12091619478774899993
blue_skin | 1654 | 0.11806967653800120442
talons | 522 | 0.11173086461119717746
rachnera_arachnera | 316 | 0.11113394211779918043
centorea_shianus | 311 | 0.10903843433417879459
scylla | 287 | 0.10707839345185582255
claws | 2115 | 0.10701027252663585165
papi_(monster_musume) | 280 | 0.10520241556977752685
carapace | 297 | 0.10092047788180225168
shell_bikini | 395 | 0.09822945521554505997
suu_(monster_musume) | 232 | 0.09565660253840222014
gills | 306 | 0.09494197826171912818
pointy_ears | 4840 | 0.08974122330161811389
bee_girl | 258 | 0.08766829391752518833
Time: 1291.083 ms (00:01.291)
```
|
True
|
Optimize related tags - Related tags can be calculated pretty quickly in a single SQL query. Even tags up to 25k posts can be calculated in about a second or less. Larger tags can be approximated by operating on a small sample of the full tag.
This would make related tags an order of magnitude faster and the implementation simpler. We don't have to offload large tags to Reportbooru, we can do it all in delayed jobs on Danbooru.
This also tends to give better results, since it finds small tags that the current method tends to miss due to undersampling.
```
-- monster_girl, 25k posts
SELECT
tag,
count(*),
count(*) / (sqrt(25031.0 * tags.post_count)+0.00001) as cosine_sim
FROM posts, unnest(string_to_array(tag_string, ' ')) tag
JOIN tags on tags.name = tag
WHERE tag_index @@ 'monster_girl'
GROUP BY tag, tags.post_count
ORDER by cosine_sim DESC
LIMIT 100;
tag | count | cosine_sim
---------------------------------------+-------+------------------------
monster_girl | 25031 | 0.99999999960049538588
mermaid | 5843 | 0.48314650545850923041
head_fins | 3538 | 0.32660657738893507326
lamia | 2148 | 0.29293956870473529281
wakasagihime | 2430 | 0.28365156888126750165
monster_girl_encyclopedia | 2319 | 0.26389676343888820123
goo_girl | 1494 | 0.24430716153640192567
scales | 2701 | 0.24371230359126778246
monster_musume_no_iru_nichijou | 1816 | 0.23543018974897058610
harpy | 1370 | 0.23394899437526761605
insect_girl | 1756 | 0.23028243127000465695
plant_girl | 870 | 0.18643202861432030962
spider_girl | 1005 | 0.18360304757238975398
kenkou_cross | 726 | 0.16173334896089661840
centaur | 741 | 0.15366368719514002475
miia_(monster_musume) | 581 | 0.15105784707847674151
monsterification | 595 | 0.14184048319721711827
arachne | 493 | 0.14034093263116872556
extra_eyes | 999 | 0.13861726859996715691
fins | 785 | 0.13635986523093149717
fish_girl | 489 | 0.12169981011580615716
multiple_legs | 414 | 0.12121832577557585777
dragon_girl | 1526 | 0.12091619478774899993
blue_skin | 1654 | 0.11806967653800120442
talons | 522 | 0.11173086461119717746
rachnera_arachnera | 316 | 0.11113394211779918043
centorea_shianus | 311 | 0.10903843433417879459
scylla | 287 | 0.10707839345185582255
claws | 2115 | 0.10701027252663585165
papi_(monster_musume) | 280 | 0.10520241556977752685
carapace | 297 | 0.10092047788180225168
shell_bikini | 395 | 0.09822945521554505997
suu_(monster_musume) | 232 | 0.09565660253840222014
gills | 306 | 0.09494197826171912818
pointy_ears | 4840 | 0.08974122330161811389
bee_girl | 258 | 0.08766829391752518833
Time: 1291.083 ms (00:01.291)
```
|
non_process
|
optimize related tags related tags can be calculated pretty quickly in a single sql query even tags up to posts can be calculated in about a second or less larger tags can be approximated by operating on a small sample of the full tag this would make related tags an order of magnitude faster and the implementation simpler we don t have to offload large tags to reportbooru we can do it all in delayed jobs on danbooru this also tends to give better results since it finds small tags that the current method tends to miss due to undersampling monster girl posts select tag count count sqrt tags post count as cosine sim from posts unnest string to array tag string tag join tags on tags name tag where tag index monster girl group by tag tags post count order by cosine sim desc limit tag count cosine sim monster girl mermaid head fins lamia wakasagihime monster girl encyclopedia goo girl scales monster musume no iru nichijou harpy insect girl plant girl spider girl kenkou cross centaur miia monster musume monsterification arachne extra eyes fins fish girl multiple legs dragon girl blue skin talons rachnera arachnera centorea shianus scylla claws papi monster musume carapace shell bikini suu monster musume gills pointy ears bee girl time ms
| 0
|
22,291
| 30,842,579,260
|
IssuesEvent
|
2023-08-02 11:43:51
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
@yiminghe/update-notifier 6.0.2 has 1 guarddog issues
|
npm-silent-process-execution
|
```{"npm-silent-process-execution":[{"code":"\t\tspawn(process.execPath, [path.join(__dirname, 'check.js'), JSON.stringify(this.#options)], {\n\t\t\tdetached: true,\n\t\t\tstdio: 'ignore',\n\t\t}).unref();","location":"package/update-notifier.js:112","message":"This package is silently executing another executable"}]}```
|
1.0
|
@yiminghe/update-notifier 6.0.2 has 1 guarddog issues - ```{"npm-silent-process-execution":[{"code":"\t\tspawn(process.execPath, [path.join(__dirname, 'check.js'), JSON.stringify(this.#options)], {\n\t\t\tdetached: true,\n\t\t\tstdio: 'ignore',\n\t\t}).unref();","location":"package/update-notifier.js:112","message":"This package is silently executing another executable"}]}```
|
process
|
yiminghe update notifier has guarddog issues npm silent process execution n t t tdetached true n t t tstdio ignore n t t unref location package update notifier js message this package is silently executing another executable
| 1
|
228,902
| 25,263,138,498
|
IssuesEvent
|
2022-11-16 01:09:05
|
Satheesh575555/linux-3.0.35
|
https://api.github.com/repos/Satheesh575555/linux-3.0.35
|
opened
|
CVE-2011-4604 (Medium) detected in linuxlinux-3.0.40
|
security vulnerability
|
## CVE-2011-4604 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.40</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Satheesh575555/linux-3.0.35/commit/d886d5c33aadc1c4f116214d0060f5869b445fe1">d886d5c33aadc1c4f116214d0060f5869b445fe1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/batman-adv/icmp_socket.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The bat_socket_read function in net/batman-adv/icmp_socket.c in the Linux kernel before 3.3 allows remote attackers to cause a denial of service (memory corruption) or possibly have unspecified other impact via a crafted batman-adv ICMP packet.
<p>Publish Date: 2013-06-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2011-4604>CVE-2011-4604</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2011-4604">https://nvd.nist.gov/vuln/detail/CVE-2011-4604</a></p>
<p>Release Date: 2013-06-07</p>
<p>Fix Resolution: 3.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2011-4604 (Medium) detected in linuxlinux-3.0.40 - ## CVE-2011-4604 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.40</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Satheesh575555/linux-3.0.35/commit/d886d5c33aadc1c4f116214d0060f5869b445fe1">d886d5c33aadc1c4f116214d0060f5869b445fe1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/batman-adv/icmp_socket.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The bat_socket_read function in net/batman-adv/icmp_socket.c in the Linux kernel before 3.3 allows remote attackers to cause a denial of service (memory corruption) or possibly have unspecified other impact via a crafted batman-adv ICMP packet.
<p>Publish Date: 2013-06-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2011-4604>CVE-2011-4604</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2011-4604">https://nvd.nist.gov/vuln/detail/CVE-2011-4604</a></p>
<p>Release Date: 2013-06-07</p>
<p>Fix Resolution: 3.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files net batman adv icmp socket c vulnerability details the bat socket read function in net batman adv icmp socket c in the linux kernel before allows remote attackers to cause a denial of service memory corruption or possibly have unspecified other impact via a crafted batman adv icmp packet publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
15,649
| 19,846,638,259
|
IssuesEvent
|
2022-01-21 07:25:07
|
ooi-data/CE09OSSM-SBD11-06-METBKA000-recovered_host-metbk_a_dcl_instrument_recovered
|
https://api.github.com/repos/ooi-data/CE09OSSM-SBD11-06-METBKA000-recovered_host-metbk_a_dcl_instrument_recovered
|
opened
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T07:25:06.328958.
## Details
Flow name: `CE09OSSM-SBD11-06-METBKA000-recovered_host-metbk_a_dcl_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T07:25:06.328958.
## Details
Flow name: `CE09OSSM-SBD11-06-METBKA000-recovered_host-metbk_a_dcl_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered host metbk a dcl instrument recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages 
dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
| 1
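The `ValueError: not enough values to unpack` in the record above is Python's standard tuple-unpacking failure, raised while `zip`-ing chunk coordinates inside zarr's `get_selection`. A minimal, self-contained illustration of the generic failure mode (this is not the harvester's actual data, just the same error class):

```python
# Unpacking fewer values than there are target names raises ValueError.
# The traceback above fails the same way in zarr's
# `lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)`
# when the indexer yields shorter tuples than expected.
try:
    x, y, z = (1, 2)  # two values, three names
    msg = ""
except ValueError as exc:
    msg = str(exc)

print(msg)  # not enough values to unpack (expected 3, got 2)
```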
|
10,492
| 13,258,414,137
|
IssuesEvent
|
2020-08-20 15:21:11
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
env property not documented
|
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
|
According to this github issue, it is possible to set environment variables for a task, but it's not documented here
https://github.com/SpecFlowOSS/SpecFlow/issues/1912#issuecomment-612057014
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8098f527-ebdf-60d5-3989-5228b7a207c1
* Version Independent ID: ce27c817-9599-00ef-5af2-3ac1dbad8dc6
* Content: [Build and Release Tasks - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/tasks.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/tasks.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
env property not documented -
According to this github issue, it is possible to set environment variables for a task, but it's not documented here
https://github.com/SpecFlowOSS/SpecFlow/issues/1912#issuecomment-612057014
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8098f527-ebdf-60d5-3989-5228b7a207c1
* Version Independent ID: ce27c817-9599-00ef-5af2-3ac1dbad8dc6
* Content: [Build and Release Tasks - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/tasks.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/tasks.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
env property not documented according to this github issue it is possible to set environment variables for a task but it s not documented here document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id ebdf version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
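For reference, the undocumented `env` property the record above asks about attaches environment variables to a single step in an Azure Pipelines YAML file. A minimal hedged sketch (the step contents and variable names here are illustrative, not taken from the linked issue):

```yaml
steps:
  - script: printenv MY_SETTING
    displayName: Show a task-scoped environment variable
    env:
      MY_SETTING: some-value           # plain value, visible only to this step
      MY_SECRET: $(mySecretVariable)   # secret variables must be mapped in explicitly via env
```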
|
41,542
| 10,732,332,544
|
IssuesEvent
|
2019-10-28 21:36:37
|
scalameta/metals
|
https://api.github.com/repos/scalameta/metals
|
closed
|
Re-enable Windows CI for pull requests
|
build windows
|
I just disabled Appveyor CI for pull requests because it's repeatedly failing with out of memory errors. It's a bad contributing experience when the CI fails for valid pull requests. Appveyor will continue to run against merged pull requests, making it possible to still manually check once in a while if master works on Windows.
We should try out the upcoming GitHub CI once it's out to see if it provides more powerful machines to run our test suites.
|
1.0
|
Re-enable Windows CI for pull requests - I just disabled Appveyor CI for pull requests because it's repeatedly failing with out of memory errors. It's a bad contributing experience when the CI fails for valid pull requests. Appveyor will continue to run against merged pull requests, making it possible to still manually check once in a while if master works on Windows.
We should try out the upcoming GitHub CI once it's out to see if it provides more powerful machines to run our test suites.
|
non_process
|
re enable windows ci for pull requests i just disabled appveyor ci for pull requests because it s repeatedly failing with out of memory errors it s a bad contributing experience when the ci fails for valid pull requests appveyor will continue to run against merged pull requests making it possible to still manually check once in a while if master works on windows we should try out the upcoming github ci once it s out to see if provides more powerful machines to run our test suites
| 0
|
10,822
| 13,609,292,395
|
IssuesEvent
|
2020-09-23 04:50:53
|
googleapis/java-shared-dependencies
|
https://api.github.com/repos/googleapis/java-shared-dependencies
|
closed
|
Dependency Dashboard
|
type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/iam.version -->deps: update iam.version to v1.0.1 (`com.google.api.grpc:grpc-google-iam-v1`, `com.google.api.grpc:proto-google-iam-v1`)
- [ ] <!-- rebase-branch=renovate/opencensus.version -->deps: update opencensus.version to v0.27.0 (`io.opencensus:opencensus-impl-core`, `io.opencensus:opencensus-impl`, `io.opencensus:opencensus-exporter-trace-stackdriver`, `io.opencensus:opencensus-exporter-stats-stackdriver`, `io.opencensus:opencensus-contrib-zpages`, `io.opencensus:opencensus-contrib-http-util`, `io.opencensus:opencensus-contrib-grpc-util`, `io.opencensus:opencensus-api`)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/iam.version -->deps: update iam.version to v1.0.1 (`com.google.api.grpc:grpc-google-iam-v1`, `com.google.api.grpc:proto-google-iam-v1`)
- [ ] <!-- rebase-branch=renovate/opencensus.version -->deps: update opencensus.version to v0.27.0 (`io.opencensus:opencensus-impl-core`, `io.opencensus:opencensus-impl`, `io.opencensus:opencensus-exporter-trace-stackdriver`, `io.opencensus:opencensus-exporter-stats-stackdriver`, `io.opencensus:opencensus-contrib-zpages`, `io.opencensus:opencensus-contrib-http-util`, `io.opencensus:opencensus-contrib-grpc-util`, `io.opencensus:opencensus-api`)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any deps update iam version to com google api grpc grpc google iam com google api grpc proto google iam deps update opencensus version to io opencensus opencensus impl core io opencensus opencensus impl io opencensus opencensus exporter trace stackdriver io opencensus opencensus exporter stats stackdriver io opencensus opencensus contrib zpages io opencensus opencensus contrib http util io opencensus opencensus contrib grpc util io opencensus opencensus api check this box to trigger a request for renovate to run again on this repository
| 1
|
18,596
| 24,570,815,317
|
IssuesEvent
|
2022-10-13 08:30:26
|
pyanodon/pybugreports
|
https://api.github.com/repos/pyanodon/pybugreports
|
closed
|
Incompatible with push button mod - dependency loop
|
mod:pypostprocessing postprocess-fail compatibility
|
### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [X] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [ ] Pypostprocessing failure
- [ ] Other
### What is the problem?
ERROR: Dependency loop detected
stack traceback:
[C]: in function 'error'
__pypostprocessing__/prototypes/functions/auto_tech.lua:150: in function 'run'
__pypostprocessing__/data-final-fixes.lua:144: in main chunk
### Steps to reproduce
Add mod : https://mods.factorio.com/mod/pushbutton
Explode
### Additional context
_No response_
### Log file
_No response_
|
2.0
|
Incompatible with push button mod - dependency loop - ### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [X] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [ ] Pypostprocessing failure
- [ ] Other
### What is the problem?
ERROR: Dependency loop detected
stack traceback:
[C]: in function 'error'
__pypostprocessing__/prototypes/functions/auto_tech.lua:150: in function 'run'
__pypostprocessing__/data-final-fixes.lua:144: in main chunk
### Steps to reproduce
Add mod : https://mods.factorio.com/mod/pushbutton
Explode
### Additional context
_No response_
### Log file
_No response_
|
process
|
incompatible with push button mod dependency loop mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem error dependency loop detected stack traceback in function error pypostprocessing prototypes functions auto tech lua in function run pypostprocessing data final fixes lua in main chunk steps to reproduce add mod explode additional context no response log file no response
| 1
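The `Dependency loop detected` error in the record above is the generic failure of topologically ordering a dependency graph that contains a cycle. A minimal Python sketch of how such a check typically works (illustrative only; the mod's actual check lives in `auto_tech.lua`):

```python
def has_cycle(deps):
    """deps maps a node to the list of nodes it depends on.
    Returns True when following dependencies revisits a node that is
    still on the current path (a dependency loop)."""
    WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / on current path / done
    color = {node: WHITE for node in deps}

    def visit(node):
        color[node] = GRAY
        for dep in deps.get(node, []):
            state = color.get(dep, BLACK)   # unknown nodes have no outgoing edges
            if state == GRAY:
                return True                 # back edge: loop detected
            if state == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in deps if color[n] == WHITE)

print(has_cycle({"a": ["b"], "b": ["a"]}))  # True
print(has_cycle({"a": ["b"], "b": []}))     # False
```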
|
50,861
| 7,641,457,893
|
IssuesEvent
|
2018-05-08 05:05:43
|
ConsenSys/mythril
|
https://api.github.com/repos/ConsenSys/mythril
|
closed
|
Installation instructions problems
|
need documentation
|
Hi,
I have two issues with the installation instructions available at the README:
1. It's not mentioned there, but in macos you need to install `leveldb` before installing `mythril` from `pip`. `brew install leveldb` worked for me. If you don't install it first, `plyvel`'s compilation will fail.
2. It's not clear if this message "If you plan to analyze Solidity code you'll also need the native version of solc. Solcjs is not supported." also applies when running `myth --truffle`, as you have to compile everything first.
|
1.0
|
Installation instructions problems - Hi,
I have two issues with the installation instructions available at the README:
1. It's not mentioned there, but in macos you need to install `leveldb` before installing `mythril` from `pip`. `brew install leveldb` worked for me. If you don't install it first, `plyvel`'s compilation will fail.
2. It's not clear if this message "If you plan to analyze Solidity code you'll also need the native version of solc. Solcjs is not supported." also applies when running `myth --truffle`, as you have to compile everything first.
|
non_process
|
installation instructions problems hi i have two issues with the installation instructions available at the readme it s not mentioned there but in macos you need to install leveldb before installing mythril from pip brew install leveldb worked for me if you don t install it first plyvel s compilation will fail it s not unclear if this message if you plan to analyze solidity code you ll also need the native version of solc solcjs is not supported also applies when running myth truffle as you have to compile everything first
| 0
|
15,941
| 20,161,271,735
|
IssuesEvent
|
2022-02-09 21:51:11
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
opened
|
DISABLED test_success_first_then_exception (__main__.SpawnTest)
|
module: multiprocessing triaged module: flaky-tests skipped
|
Platforms: linux
This test was disabled because it is failing on trunk. See [recent examples](http://torch-ci.com/failure/test_success_first_then_exception%2C%20SpawnTest) and the most recent [workflow logs](https://github.com/pytorch/pytorch/actions/runs/1820236472).
Over the past 6 hours, it has been determined flaky in 1 workflows with 1 red and 3 green.
|
1.0
|
DISABLED test_success_first_then_exception (__main__.SpawnTest) - Platforms: linux
This test was disabled because it is failing on trunk. See [recent examples](http://torch-ci.com/failure/test_success_first_then_exception%2C%20SpawnTest) and the most recent [workflow logs](https://github.com/pytorch/pytorch/actions/runs/1820236472).
Over the past 6 hours, it has been determined flaky in 1 workflows with 1 red and 3 green.
|
process
|
disabled test success first then exception main spawntest platforms http ci com success first then exception https com
| 1
|
61,977
| 7,540,549,412
|
IssuesEvent
|
2018-04-17 06:55:13
|
otavanopisto/muikku
|
https://api.github.com/repos/otavanopisto/muikku
|
opened
|
Discussion - Pinned and locked icon missing from message thread
|
DISCUSSIONS REDESIGN2017 bug
|
You can see the icons in the discussion front page view, but not in the thread view.
|
1.0
|
Discussion - Pinned and locked icon missing from message thread - You can see the icons in the discussion front page view, but not in the thread view.
|
non_process
|
discussion pinned and locked icon missing from message thread you can see the icons in the discussion front page view but not in the thread view
| 0
|
3,718
| 6,732,876,652
|
IssuesEvent
|
2017-10-18 13:09:31
|
lockedata/rcms
|
https://api.github.com/repos/lockedata/rcms
|
opened
|
Build agenda
|
conference team oconf processes
|
## Detailed task
Create a schedule over multiple rooms (and days if required)
## Assessing the task
Try to perform the task. Use google and the system documentation to help - part of what we're trying to assess how easy it is for people to work out how to do tasks.
Use a 👍 (`:+1:`) reaction to this task if you were able to perform the task. Use a 👎 (`:-1:`) reaction to the task if you could not complete it. Add a reply with any comments or feedback.
## Extra Info
- Site: [oconf](https://mysterious-coast-84721.herokuapp.com)
- System documentation: [ocw docs](http://openconferenceware.org/)
- Role: Conference team
- Area: Processes
|
1.0
|
Build agenda - ## Detailed task
Create a schedule over multiple rooms (and days if required)
## Assessing the task
Try to perform the task. Use google and the system documentation to help - part of what we're trying to assess how easy it is for people to work out how to do tasks.
Use a 👍 (`:+1:`) reaction to this task if you were able to perform the task. Use a 👎 (`:-1:`) reaction to the task if you could not complete it. Add a reply with any comments or feedback.
## Extra Info
- Site: [oconf](https://mysterious-coast-84721.herokuapp.com)
- System documentation: [ocw docs](http://openconferenceware.org/)
- Role: Conference team
- Area: Processes
|
process
|
build agenda detailed task create a schedule over multiple rooms and days if required assessing the task try to perform the task use google and the system documentation to help part of what we re trying to assess how easy it is for people to work out how to do tasks use a 👍 reaction to this task if you were able to perform the task use a 👎 reaction to the task if you could not complete it add a reply with any comments or feedback extra info site system documentation role conference team area processes
| 1
|
5,953
| 8,780,374,641
|
IssuesEvent
|
2018-12-19 17:07:57
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
opened
|
Bigtable: systests leaking instances
|
api: bigtable testing type: process
|
Similar to #4935. We have a bunch of leaked instances, and are now seeing systest failures due to quota exhaustion. With the labels, it is possible to see that several different "kinds" of instances are leaked from systests:
- `dif-####` (leaked from `test_create_instance_w_two_clusters`).
- `g-c-p-###` w/ display name "Foo Bar Baz" (leaked from `test_update_display_name_and_labels`)
- `g-c-p-###` w/o display name (the default instance in `tests/system.py`)
- `new-###` (leaked from `test_create_instance` systest)
and from snippets:
- `inst-my-###` (leaked one of two snippets: `test_bigtable_create_instance` or `test_bigtable_delete_instance`)
- `snippet-###` w/ display name "My new instance" (default instance for snippets; display name set by 'test_bigtable_update_instance`)
- `snippet-###` w/o display name (default instance for snippets)
I will delete all but the most recent example of each kind.
|
1.0
|
Bigtable: systests leaking instances - Similar to #4935. We have a bunch of leaked instances, and are now seeing systest failures due to quota exhaustion. With the labels, it is possible to see that several different "kinds" of instances are leaked from systests:
- `dif-####` (leaked from `test_create_instance_w_two_clusters`).
- `g-c-p-###` w/ display name "Foo Bar Baz" (leaked from `test_update_display_name_and_labels`)
- `g-c-p-###` w/o display name (the default instance in `tests/system.py`)
- `new-###` (leaked from `test_create_instance` systest)
and from snippets:
- `inst-my-###` (leaked one of two snippets: `test_bigtable_create_instance` or `test_bigtable_delete_instance`)
- `snippet-###` w/ display name "My new instance" (default instance for snippets; display name set by 'test_bigtable_update_instance`)
- `snippet-###` w/o display name (default instance for snippets)
I will delete all but the most recent example of each kind.
|
process
|
bigtable systests leaking instances similar to we have a bunch of leaked instances and are now seeing systest failures due to quota exhaustion with the labels it is possible to see that several different kinds of instances are leaked from systests dif leaked from test create instance w two clusters g c p w display name foo bar baz leaked from test update display name and labels g c p w o display name the default instance in tests system py new leaked from test create instance systest and from snippets inst my leaked one of two snippets test bigtable create instance or test bigtable delete instance snippet w display name my new instance default instance for snippets display name set by test bigtable update instance snippet w o display name default instance for snippets i will delete all but the most recent example of each kind
| 1
|
6,945
| 10,113,061,283
|
IssuesEvent
|
2019-07-30 15:53:06
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
closed
|
Audit all unit test source files and make sure BUILD targets exist for them
|
skill:Bazel type:Process
|
From the internal issue:
> Go through each component and make sure all unit test source files are included in a unit test BUILD target. Be aware that Swift and Objective-C have to be in separate libraries.
---
This is an internal issue. If you are a Googler, please visit [b/117431223](http://b/117431223) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/117431223](http://b/117431223)
|
1.0
|
Audit all unit test source files and make sure BUILD targets exist for them - From the internal issue:
> Go through each component and make sure all unit test source files are included in a unit test BUILD target. Be aware that Swift and Objective-C have to be in separate libraries.
---
This is an internal issue. If you are a Googler, please visit [b/117431223](http://b/117431223) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/117431223](http://b/117431223)
|
process
|
audit all unit test source files and make sure build targets exist for them from the internal issue go through each component and make sure all unit test source files are included in a unit test build target be aware that swift and objective c have to be in separate libraries this is an internal issue if you are a googler please visit for more details internal data associated internal bug
| 1
|
14,227
| 17,147,693,449
|
IssuesEvent
|
2021-07-13 16:19:47
|
googleapis/python-bigtable
|
https://api.github.com/repos/googleapis/python-bigtable
|
closed
|
Unit tests emit deprecation warnings
|
api: bigtable type: process
|
```bash
$ .nox/unit-3-8/bin/py.test tests/unit/
============================= test session starts ==============================
platform linux -- Python 3.8.1, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /home/tseaver/projects/agendaless/Google/src/python-bigtable
plugins: cov-2.12.1, asyncio-0.15.1
collected 1315 items
tests/unit/test_app_profile.py ..................... [ 1%]
tests/unit/test_backup.py .............................................. [ 5%]
.. [ 5%]
tests/unit/test_batcher.py ........ [ 5%]
tests/unit/test_client.py .................................... [ 8%]
tests/unit/test_cluster.py .................. [ 9%]
tests/unit/test_column_family.py ....................................... [ 12%]
[ 12%]
tests/unit/test_encryption_info.py .............. [ 13%]
tests/unit/test_error.py ........... [ 14%]
tests/unit/test_instance.py ................................... [ 17%]
tests/unit/test_policy.py ................ [ 18%]
tests/unit/test_row.py ...................................... [ 21%]
tests/unit/test_row_data.py ............................................ [ 24%]
................................................................ [ 29%]
tests/unit/test_row_filters.py ......................................... [ 32%]
........................................... [ 36%]
tests/unit/test_row_set.py ....................... [ 37%]
tests/unit/test_table.py ............................................... [ 41%]
..................................... [ 44%]
tests/unit/gapic/bigtable_admin_v2/test_bigtable_instance_admin.py ..... [ 44%]
........................................................................ [ 50%]
........................................................................ [ 55%]
........................................................................ [ 61%]
.................s..s..ss................................. [ 65%]
tests/unit/gapic/bigtable_admin_v2/test_bigtable_table_admin.py ........ [ 66%]
........................................................................ [ 71%]
........................................................................ [ 77%]
........................................................................ [ 82%]
.......................................................s..s..ss......... [ 88%]
............................ [ 90%]
tests/unit/gapic/bigtable_v2/test_bigtable.py .......................... [ 92%]
......................................................................s. [ 97%]
.s..ss......................... [100%]
=============================== warnings summary ===============================
tests/unit/test_table.py::TestTable::test_row_factory_append
/home/tseaver/projects/agendaless/Google/src/python-bigtable/tests/unit/test_table.py:248: PendingDeprecationWarning: This method will be deprecated in future versions. Please use Table.append_row(), Table.conditional_row() and Table.direct_row() methods instead.
row = table.row(row_key, append=True)
tests/unit/test_table.py::TestTable::test_row_factory_conditional
/home/tseaver/projects/agendaless/Google/src/python-bigtable/tests/unit/test_table.py:238: PendingDeprecationWarning: This method will be deprecated in future versions. Please use Table.append_row(), Table.conditional_row() and Table.direct_row() methods instead.
row = table.row(row_key, filter_=filter_)
tests/unit/test_table.py::TestTable::test_row_factory_direct
/home/tseaver/projects/agendaless/Google/src/python-bigtable/tests/unit/test_table.py:227: PendingDeprecationWarning: This method will be deprecated in future versions. Please use Table.append_row(), Table.conditional_row() and Table.direct_row() methods instead.
row = table.row(row_key)
tests/unit/test_table.py::TestTable::test_row_factory_failure
/home/tseaver/projects/agendaless/Google/src/python-bigtable/tests/unit/test_table.py:288: PendingDeprecationWarning: This method will be deprecated in future versions. Please use Table.append_row(), Table.conditional_row() and Table.direct_row() methods instead.
table.row(row_key, filter_=object(), append=True)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================ 1303 passed, 12 skipped, 4 warnings in 15.68s =================
```
|
1.0
|
Unit tests emit deprecation warnings - ```bash
$ .nox/unit-3-8/bin/py.test tests/unit/
============================= test session starts ==============================
platform linux -- Python 3.8.1, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /home/tseaver/projects/agendaless/Google/src/python-bigtable
plugins: cov-2.12.1, asyncio-0.15.1
collected 1315 items
tests/unit/test_app_profile.py ..................... [ 1%]
tests/unit/test_backup.py .............................................. [ 5%]
.. [ 5%]
tests/unit/test_batcher.py ........ [ 5%]
tests/unit/test_client.py .................................... [ 8%]
tests/unit/test_cluster.py .................. [ 9%]
tests/unit/test_column_family.py ....................................... [ 12%]
[ 12%]
tests/unit/test_encryption_info.py .............. [ 13%]
tests/unit/test_error.py ........... [ 14%]
tests/unit/test_instance.py ................................... [ 17%]
tests/unit/test_policy.py ................ [ 18%]
tests/unit/test_row.py ...................................... [ 21%]
tests/unit/test_row_data.py ............................................ [ 24%]
................................................................ [ 29%]
tests/unit/test_row_filters.py ......................................... [ 32%]
........................................... [ 36%]
tests/unit/test_row_set.py ....................... [ 37%]
tests/unit/test_table.py ............................................... [ 41%]
..................................... [ 44%]
tests/unit/gapic/bigtable_admin_v2/test_bigtable_instance_admin.py ..... [ 44%]
........................................................................ [ 50%]
........................................................................ [ 55%]
........................................................................ [ 61%]
.................s..s..ss................................. [ 65%]
tests/unit/gapic/bigtable_admin_v2/test_bigtable_table_admin.py ........ [ 66%]
........................................................................ [ 71%]
........................................................................ [ 77%]
........................................................................ [ 82%]
.......................................................s..s..ss......... [ 88%]
............................ [ 90%]
tests/unit/gapic/bigtable_v2/test_bigtable.py .......................... [ 92%]
......................................................................s. [ 97%]
.s..ss......................... [100%]
=============================== warnings summary ===============================
tests/unit/test_table.py::TestTable::test_row_factory_append
/home/tseaver/projects/agendaless/Google/src/python-bigtable/tests/unit/test_table.py:248: PendingDeprecationWarning: This method will be deprecated in future versions. Please use Table.append_row(), Table.conditional_row() and Table.direct_row() methods instead.
row = table.row(row_key, append=True)
tests/unit/test_table.py::TestTable::test_row_factory_conditional
/home/tseaver/projects/agendaless/Google/src/python-bigtable/tests/unit/test_table.py:238: PendingDeprecationWarning: This method will be deprecated in future versions. Please use Table.append_row(), Table.conditional_row() and Table.direct_row() methods instead.
row = table.row(row_key, filter_=filter_)
tests/unit/test_table.py::TestTable::test_row_factory_direct
/home/tseaver/projects/agendaless/Google/src/python-bigtable/tests/unit/test_table.py:227: PendingDeprecationWarning: This method will be deprecated in future versions. Please use Table.append_row(), Table.conditional_row() and Table.direct_row() methods instead.
row = table.row(row_key)
tests/unit/test_table.py::TestTable::test_row_factory_failure
/home/tseaver/projects/agendaless/Google/src/python-bigtable/tests/unit/test_table.py:288: PendingDeprecationWarning: This method will be deprecated in future versions. Please use Table.append_row(), Table.conditional_row() and Table.direct_row() methods instead.
table.row(row_key, filter_=object(), append=True)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================ 1303 passed, 12 skipped, 4 warnings in 15.68s =================
```
|
process
|
unit tests emit deprecation warnings bash nox unit bin py test tests unit test session starts platform linux python pytest py pluggy rootdir home tseaver projects agendaless google src python bigtable plugins cov asyncio collected items tests unit test app profile py tests unit test backup py tests unit test batcher py tests unit test client py tests unit test cluster py tests unit test column family py tests unit test encryption info py tests unit test error py tests unit test instance py tests unit test policy py tests unit test row py tests unit test row data py tests unit test row filters py tests unit test row set py tests unit test table py tests unit gapic bigtable admin test bigtable instance admin py s s ss tests unit gapic bigtable admin test bigtable table admin py s s ss tests unit gapic bigtable test bigtable py s s ss warnings summary tests unit test table py testtable test row factory append home tseaver projects agendaless google src python bigtable tests unit test table py pendingdeprecationwarning this method will be deprecated in future versions please use table append row table conditional row and table direct row methods instead row table row row key append true tests unit test table py testtable test row factory conditional home tseaver projects agendaless google src python bigtable tests unit test table py pendingdeprecationwarning this method will be deprecated in future versions please use table append row table conditional row and table direct row methods instead row table row row key filter filter tests unit test table py testtable test row factory direct home tseaver projects agendaless google src python bigtable tests unit test table py pendingdeprecationwarning this method will be deprecated in future versions please use table append row table conditional row and table direct row methods instead row table row row key tests unit test table py testtable test row factory failure home tseaver projects agendaless google src python bigtable 
tests unit test table py pendingdeprecationwarning this method will be deprecated in future versions please use table append row table conditional row and table direct row methods instead table row row key filter object append true docs passed skipped warnings in
| 1
|
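The warnings summarized in the test log above come from a generic `row()` factory that emits `PendingDeprecationWarning` in favor of explicit per-type constructors. A minimal, library-independent sketch of that pattern (the `Table` class and method bodies here are illustrative stand-ins, not the actual `google-cloud-bigtable` implementation):

```python
import warnings


class Table:
    """Toy table whose generic row() factory is being phased out."""

    def row(self, row_key, filter_=None, append=False):
        # Emit the same class of warning seen in the test log above.
        warnings.warn(
            "This method will be deprecated in future versions. Please use "
            "append_row(), conditional_row() and direct_row() instead.",
            PendingDeprecationWarning,
            stacklevel=2,
        )
        if append:
            return self.append_row(row_key)
        if filter_ is not None:
            return self.conditional_row(row_key, filter_)
        return self.direct_row(row_key)

    # Explicit replacements: one method per row type, no ambiguous flags.
    def append_row(self, row_key):
        return ("append", row_key)

    def conditional_row(self, row_key, filter_):
        return ("conditional", row_key, filter_)

    def direct_row(self, row_key):
        return ("direct", row_key)
```

Callers migrate by invoking the explicit method directly (`table.direct_row(key)`), which silences the warning and removes the flag-dependent dispatch.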
76,672
| 7,543,200,640
|
IssuesEvent
|
2018-04-17 14:55:50
|
ODM2/ODM2DataSharingPortal
|
https://api.github.com/repos/ODM2/ODM2DataSharingPortal
|
closed
|
Restyle Site: Styling issue on links to companion sites in main landing page
|
bug in progress ready for testing tested
|

The last line of the description of the companion site overlaps the "view site" link
|
2.0
|
Restyle Site: Styling issue on links to companion sites in main landing page - 
The last line of the description of the companion site overlaps the "view site" link
|
non_process
|
restyle site styling issue on links to companion sites in main landing page the last line of the description of the companion site overlaps the view site link
| 0
|
278,384
| 24,150,461,518
|
IssuesEvent
|
2022-09-21 23:45:56
|
gitpod-io/gitpod
|
https://api.github.com/repos/gitpod-io/gitpod
|
opened
|
[preview environments] unstable network led to many workarounds in integration tests
|
type: bug aspect: testing
|
### Bug description
:wave: hey there Platform team, in https://github.com/gitpod-io/gitpod/issues/12248, we found it necessary to make many code changes to make the integration tests and related port-forwarding more fault tolerant of unstable network conditions.
I'm not sure if the problem is exclusive to Harvester, or core-dev in GCP, or both, and wanted to give you a heads up. For example, this might impact @geropl and @akosyakov when they dedicate energy to integration tests for WebApp and IDE.
cc: @corneliusludmann , I'm not sure if you run integration tests in Harvester preview environments.
### Steps to reproduce
Run integration tests for webapp or ide, they may experience disconnects.
### Workspace affected
_No response_
### Expected behavior
The network for preview environments should be more stable
### Example repository
_No response_
### Anything else?
Not sure if this is impacting any other werft jobs
|
1.0
|
[preview environments] unstable network led to many workarounds in integration tests - ### Bug description
:wave: hey there Platform team, in https://github.com/gitpod-io/gitpod/issues/12248, we found it necessary to make many code changes to make the integration tests and related port-forwarding more fault tolerant of unstable network conditions.
I'm not sure if the problem is exclusive to Harvester, or core-dev in GCP, or both, and wanted to give you a heads up. For example, this might impact @geropl and @akosyakov when they dedicate energy to integration tests for WebApp and IDE.
cc: @corneliusludmann , I'm not sure if you run integration tests in Harvester preview environments.
### Steps to reproduce
Run integration tests for webapp or ide, they may experience disconnects.
### Workspace affected
_No response_
### Expected behavior
The network for preview environments should be more stable
### Example repository
_No response_
### Anything else?
Not sure if this is impacting any other werft jobs
|
non_process
|
unstable network led to many workarounds in integration tests bug description wave hey there platform team in we found it necessary to make many code changes to make the integration tests and related port forwarding more fault tolerant of unstable network conditions i m not sure if the problem is exclusive to harvester or core dev in gcp or both and wanted to give you a heads up for example this might impact geropl and akosyakov when they dedicate energy to integration tests for webapp and ide cc corneliusludmann i m not sure if you run integration tests in harvester preview environments steps to reproduce run integration tests for webapp or ide they may experience disconnects workspace affected no response expected behavior the network for preview environments should be more stable example repository no response anything else not sure if this is impacting any other werft jobs
| 0
|
222,662
| 7,434,920,168
|
IssuesEvent
|
2018-03-26 12:45:10
|
nutofem/nuto
|
https://api.github.com/repos/nutofem/nuto
|
opened
|
Damage material parameters
|
low priority question
|
### Problem
Currently, a lot of code is required to instantiate the appropriate [laws](https://github.com/nutofem/nuto/blob/PDE_reviewed/test/mechanics/constitutive/LocalIsotropicDamage.cpp#L38)/[integrands](https://github.com/nutofem/nuto/blob/PDE_reviewed/test/mechanics/integrands/GradientDamage.cpp#L114). Like
~~~cpp
template <int TDim>
auto TestLaw()
{
// Define the law using policy based design principles, hopefully applied correctly
using Damage = Constitutive::DamageLawExponential;
using StrainNorm = Constitutive::ModifiedMisesStrainNorm<TDim>;
using Evolution = Laws::EvolutionImplicit<TDim>;
using Law = Laws::LocalIsotropicDamage<TDim, Damage, Evolution>;
Damage dmg(kappa0, beta, alpha);
StrainNorm strainNorm(nu, fc / ft);
Evolution evolutionEq(StrainNorm(nu, fc / ft), /*numCells=*/1, /*numIps=*/1);
Laws::LinearElasticDamage<TDim> elasticDamage(E, nu);
return Law(elasticDamage, dmg, evolutionEq);
}
~~~
Some material parameters are required multiple times (`kappa0 = ft/E`) which is error prone and long.
### Solution
1) Define a material class.
Dealing with mechanical softening problems often involves the same set of parameters. Like
~~~cpp
struct SofteningMaterial
{
double youngsModulus;
double poissonRatio;
double tensileStrength;
double compressiveStrength;
double fractureEnergy;
double nonlocalRadius;
}
~~~
2) Add constructors that take a `SofteningMaterial` to the softening laws/integrands, as well as the deeper classes like (strain norms, damage laws, linear elastic damage). This allows a proper policy based design with defaulted template arguments. The code above would then look like
~~~cpp
template <int TDim>
auto TestLaw(SofteningMaterial material)
{
return Laws::LocalIsotropicDamage<TDim>(material); // with reasonable default template arguments
}
~~~
3) Still allow construction with the individual parameters. Still allow customization by changing the template parameters.
### Discussion:
- Is that relevant for someone else? I think so as we often deal with concrete.
- Naming alright? Access via public members alright? With `m` (`mYoungsModulus` vs `youngsModulus`)?
|
1.0
|
Damage material parameters - ### Problem
Currently, a lot of code is required to instantiate the appropriate [laws](https://github.com/nutofem/nuto/blob/PDE_reviewed/test/mechanics/constitutive/LocalIsotropicDamage.cpp#L38)/[integrands](https://github.com/nutofem/nuto/blob/PDE_reviewed/test/mechanics/integrands/GradientDamage.cpp#L114). Like
~~~cpp
template <int TDim>
auto TestLaw()
{
// Define the law using policy based design principles, hopefully applied correctly
using Damage = Constitutive::DamageLawExponential;
using StrainNorm = Constitutive::ModifiedMisesStrainNorm<TDim>;
using Evolution = Laws::EvolutionImplicit<TDim>;
using Law = Laws::LocalIsotropicDamage<TDim, Damage, Evolution>;
Damage dmg(kappa0, beta, alpha);
StrainNorm strainNorm(nu, fc / ft);
Evolution evolutionEq(StrainNorm(nu, fc / ft), /*numCells=*/1, /*numIps=*/1);
Laws::LinearElasticDamage<TDim> elasticDamage(E, nu);
return Law(elasticDamage, dmg, evolutionEq);
}
~~~
Some material parameters are required multiple times (`kappa0 = ft/E`) which is error prone and long.
### Solution
1) Define a material class.
Dealing with mechanical softening problems often involves the same set of parameters. Like
~~~cpp
struct SofteningMaterial
{
double youngsModulus;
double poissonRatio;
double tensileStrength;
double compressiveStrength;
double fractureEnergy;
double nonlocalRadius;
}
~~~
2) Add constructors that take a `SofteningMaterial` to the softening laws/integrands, as well as the deeper classes like (strain norms, damage laws, linear elastic damage). This allows a proper policy based design with defaulted template arguments. The code above would then look like
~~~cpp
template <int TDim>
auto TestLaw(SofteningMaterial material)
{
return Laws::LocalIsotropicDamage<TDim>(material); // with reasonable default template arguments
}
~~~
3) Still allow construction with the individual parameters. Still allow customization by changing the template parameters.
### Discussion:
- Is that relevant for someone else? I think so as we often deal with concrete.
- Naming alright? Access via public members alright? With `m` (`mYoungsModulus` vs `youngsModulus`)?
|
non_process
|
damage material parameters problem currently a lot of code is required to instantiate the appropriate like cpp template auto testlaw define the law using policy based design principles hopefully applied correctly using damage constitutive damagelawexponential using strainnorm constitutive modifiedmisesstrainnorm using evolution laws evolutionimplicit using law laws localisotropicdamage damage dmg beta alpha strainnorm strainnorm nu fc ft evolution evolutioneq strainnorm nu fc ft numcells numips laws linearelasticdamage elasticdamage e nu return law elasticdamage dmg evolutioneq some material parameters are required multiple times ft e which is error prone and long solution define a material class dealing with mechanical softening problems often involves the same set of parameters like cpp struct softeningmaterial double youngsmodulus double poissonratio double tensilestrength double compressivestrength double fractureenergy double nonlocalradius add constructors that take a softeningmaterial to the softening laws integrands as well as the deeper classes like strain norms damage laws linear elastic damage this allows a proper policy based design with defaulted template arguments the code above would then look like cpp template auto testlaw softeningmaterial material return laws localisotropicdamage material with reasonable default template arguments still allow construction with the individual parameters still allow customization by changing the template parameters discussion is that relevant for someone else i think so as we often deal with concrete naming alright access via public members alright with m myoungsmodulus vs youngsmodulus
| 0
|
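The `SofteningMaterial` proposal in the record above maps naturally onto a parameter object that derives repeated quantities (such as `kappa0 = ft / E`) exactly once. A sketch in Python rather than C++, purely to illustrate the grouping; the field names mirror the struct in the issue and the `make_law` helper is a hypothetical stand-in for `Laws::LocalIsotropicDamage<TDim>(material)`:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SofteningMaterial:
    """Bundle of softening-law parameters, mirroring the proposed struct."""

    youngs_modulus: float        # E
    poisson_ratio: float         # nu
    tensile_strength: float      # ft
    compressive_strength: float  # fc
    fracture_energy: float
    nonlocal_radius: float

    @property
    def kappa0(self) -> float:
        # Derived once here instead of being recomputed (and possibly
        # mistyped) at every construction site, as in the original TestLaw().
        return self.tensile_strength / self.youngs_modulus

    @property
    def strength_ratio(self) -> float:
        # fc / ft, consumed by the modified Mises strain norm.
        return self.compressive_strength / self.tensile_strength


def make_law(material: SofteningMaterial) -> dict:
    """Hypothetical stand-in for a law constructor taking the material."""
    return {
        "kappa0": material.kappa0,
        "strength_ratio": material.strength_ratio,
        "nu": material.poisson_ratio,
    }
```

Construction with individual parameters stays possible (the dataclass constructor takes them all), which matches point 3 of the proposal.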
17,238
| 22,960,690,727
|
IssuesEvent
|
2022-07-19 15:08:28
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Release checklist 0.60
|
enhancement P1 process
|
### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on [relevant issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing [open](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.60.0) for milestone
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
## Staging
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
1.0
|
Release checklist 0.60 - ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on [relevant issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing [open](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.60.0) for milestone
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
## Staging
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
process
|
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on nothing for milestone github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release integration deploy to vm performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests migrations tested against mainnet clone previewnet deploy to vm staging deploy to kubernetes eu deploy to kubernetes na testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm deploy to etl alternatives no response
| 1
|
14,100
| 16,989,404,244
|
IssuesEvent
|
2021-06-30 18:20:24
|
googleapis/python-security-private-ca
|
https://api.github.com/repos/googleapis/python-security-private-ca
|
closed
|
Release as GA
|
api: security-privateca type: process
|
[GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface, on or after June 16, 2021
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
1.0
|
Release as GA - [GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface, on or after June 16, 2021
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
process
|
release as ga required days elapsed since last beta release with new api surface on or after june server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga
| 1
|
4,912
| 7,787,083,521
|
IssuesEvent
|
2018-06-06 21:06:39
|
bio-miga/miga
|
https://api.github.com/repos/bio-miga/miga
|
closed
|
clades projects shouldn't have hAAI
|
Processing bug
|
First, hAAI is virtually useless for clades, since the assumption is that all genomes are too close for them to be resolved like this. Second, in the unlikely event that hAAI can resolve a pair, this would break `:ogs`.
|
1.0
|
clades projects shouldn't have hAAI - First, hAAI is virtually useless for clades, since the assumption is that all genomes are too close for them to be resolved like this. Second, in the unlikely event that hAAI can resolve a pair, this would break `:ogs`.
|
process
|
clades projects shouldn t have haai first haai is virtually useless for clades since the assumption is that all genomes are too close for them to be resolved like this second in the unlikely event that haai can resolve a pair this would break ogs
| 1
|
12,872
| 15,263,981,329
|
IssuesEvent
|
2021-02-22 04:15:15
|
syncfusion/ej2-react-ui-components
|
https://api.github.com/repos/syncfusion/ej2-react-ui-components
|
closed
|
Document Editor Headings translation
|
word-processor
|
Using L10n to load translations (picked from ej2-locale repo fr.json)
```
"Heading": "Titre",
"Heading 1": "Rubrique 1",
"Heading 2": "Rubrique 2",
"Heading 3": "Rubrique 3",
"Heading 4": "Rubrique 4",
"Heading 5": "Rubrique 5",
"Heading 6": "Rubrique 6",
```
result : opening the styles dialog, it shows the "Heading 1", 2, 3... AND "Rubrique 1", 2, 3 ...
clicking on "modify" button generates an error :
> dictionary.js:57 Uncaught RangeError: No item with the specified key has been added.
> at Dictionary.push../node_modules/@syncfusion/ej2-documenteditor/src/document-editor/base/dictionary.js.Dictionary.get (dictionary.js:57)
> at StyleDialog.push../node_modules/@syncfusion/ej2-documenteditor/src/document-editor/implementation/dialogs/style-dialog.js.StyleDialog.getStyle (style-dialog.js:853)
> at StyleDialog.push../node_modules/@syncfusion/ej2-documenteditor/src/document-editor/implementation/dialogs/style-dialog.js.StyleDialog.show (style-dialog.js:617)
> at HTMLButtonElement.StylesDialog.modifyStyles (styles-dialog.js:29)
|
1.0
|
Document Editor Headings translation - Using L10n to load translations (picked from ej2-locale repo fr.json)
```
"Heading": "Titre",
"Heading 1": "Rubrique 1",
"Heading 2": "Rubrique 2",
"Heading 3": "Rubrique 3",
"Heading 4": "Rubrique 4",
"Heading 5": "Rubrique 5",
"Heading 6": "Rubrique 6",
```
result : opening the styles dialog, it shows the "Heading 1", 2, 3... AND "Rubrique 1", 2, 3 ...
clicking on "modify" button generates an error :
> dictionary.js:57 Uncaught RangeError: No item with the specified key has been added.
> at Dictionary.push../node_modules/@syncfusion/ej2-documenteditor/src/document-editor/base/dictionary.js.Dictionary.get (dictionary.js:57)
> at StyleDialog.push../node_modules/@syncfusion/ej2-documenteditor/src/document-editor/implementation/dialogs/style-dialog.js.StyleDialog.getStyle (style-dialog.js:853)
> at StyleDialog.push../node_modules/@syncfusion/ej2-documenteditor/src/document-editor/implementation/dialogs/style-dialog.js.StyleDialog.show (style-dialog.js:617)
> at HTMLButtonElement.StylesDialog.modifyStyles (styles-dialog.js:29)
|
process
|
document editor headings translation using to load translations picked from locale repo fr json heading titre heading rubrique heading rubrique heading rubrique heading rubrique heading rubrique heading rubrique result opening the styles dialog it shows the heading and rubrique clicking on modify button generates an error dictionary js uncaught rangeerror no item with the specified key has been added at dictionary push node modules syncfusion documenteditor src document editor base dictionary js dictionary get dictionary js at styledialog push node modules syncfusion documenteditor src document editor implementation dialogs style dialog js styledialog getstyle style dialog js at styledialog push node modules syncfusion documenteditor src document editor implementation dialogs style dialog js styledialog show style dialog js at htmlbuttonelement stylesdialog modifystyles styles dialog js
| 1
|
325,473
| 24,050,989,475
|
IssuesEvent
|
2022-09-16 12:48:05
|
pkdc/simplified-tetris-with-score-handling
|
https://api.github.com/repos/pkdc/simplified-tetris-with-score-handling
|
closed
|
Document the keys to press for each action
|
documentation
|
Document the keys to press for each action
- [ ] End the game and display the record form
- [ ] start game
- [ ] left
- [ ] right
- [ ] accelerated fall
|
1.0
|
Document the keys to press for each action - Document the keys to press for each action
- [ ] End the game and display the record form
- [ ] start game
- [ ] left
- [ ] right
- [ ] accelerated fall
|
non_process
|
document the keys to press for each action document the keys to press for each action end the game and display the record form start game left right accelerated fall
| 0
|
168,752
| 6,385,276,040
|
IssuesEvent
|
2017-08-03 08:08:24
|
arquillian/smart-testing
|
https://api.github.com/repos/arquillian/smart-testing
|
closed
|
ArrayIndexOutOfBounds is thrown when using Surefire 2.20
|
Component: Core Priority: High Type: Bug
|
##### Issue Overview
`ArrayIndexOutOfBounds` is thrown when using Surefire 2.20 because `surefire` plugin does not use the semantic version format of having at least three sections in the version such as major.minor.patch
##### Expected Behaviour
Work with surefire 2.20
##### Current Behaviour
Throws an exception
|
1.0
|
ArrayIndexOutOfBounds is thrown when using Surefire 2.20 -
##### Issue Overview
`ArrayIndexOutOfBounds` is thrown when using Surefire 2.20 because `surefire` plugin does not use the semantic version format of having at least three sections in the version such as major.minor.patch
##### Expected Behaviour
Work with surefire 2.20
##### Current Behaviour
Throws an exception
|
non_process
|
arrayindexoutofbounds is thrown when using surefire issue overview arrayindexoutofbounds is thrown when using surefire because surefire plugin does not use the semantic version format of having at least three sections in the version such as major minor patch expected behaviour work with surefire current behaviour throws an exception
| 0
|
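The failure in the record above comes from assuming every version string has three dot-separated components, while Surefire 2.20 has only two. A defensive parser can pad missing components instead of indexing blindly; this is a Python sketch of the idea, not the actual Smart Testing code:

```python
def parse_version(version: str, parts: int = 3) -> tuple:
    """Parse 'major[.minor[.patch]]', padding missing components with 0.

    A non-numeric suffix on a component (e.g. '2.20.1-SNAPSHOT') is
    stripped before conversion.
    """
    numbers = []
    for raw in version.split(".")[:parts]:
        digits = ""
        for ch in raw:
            if not ch.isdigit():
                break
            digits += ch
        numbers.append(int(digits) if digits else 0)
    # Pad so "2.20" compares like "2.20.0" instead of raising IndexError.
    while len(numbers) < parts:
        numbers.append(0)
    return tuple(numbers)
```

With this shape, version comparisons reduce to tuple comparisons and two-component versions no longer trip an out-of-bounds access.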
20,210
| 26,796,775,180
|
IssuesEvent
|
2023-02-01 12:28:36
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
INTERNAL ASSERT FAILED at "..\\aten\\src\\ATen\\MapAllocator.cpp":135
|
high priority module: windows module: multiprocessing triaged
|
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. Here's a simple script I just wrote naively mirroring what I would do with a standard multiprocessing queue:
```
import numpy
import torch
import time
import torch.multiprocessing as mp
import queue
from PIL import Image
def myproducer(thequeue,writerfinishqueue):
for i in range(2000):
thearray=numpy.full((2048,2048),255,dtype=numpy.uint8)
thequeue.put(torch.from_numpy(thearray))
def mywriter(thequeue,writerfinishqueue):
filecounter=0
starttime=0
while True:
try:
mytensor=thequeue.get_nowait()
filecounter=filecounter+1
if filecounter==1:
starttime=time.time()
mynumpyarray=mytensor.numpy()
Image.fromarray(mynumpyarray).convert("L").save(str(filecounter)+".bmp","BMP")
if filecounter==2000:
print(time.time()-starttime)
break
except queue.Empty:
pass
if __name__ == '__main__':
thequeue=mp.Queue()
myproducerproc=mp.Process(target=myproducer,args=(thequeue))
myproducerproc.start()
mywriterproc=mp.Process(target=mywriter,args=(thequeue))
mywriterproc.start()
myproducerproc.join()
mywriterproc.join()
```
Running this results in this error message:
```Process Process-2:
Traceback (most recent call last):
File "C:\Users\myuser\.conda\envs\ppump38\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\myuser\.conda\envs\ppump38\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\myuser\Desktop\writertest\testtens.py", line 22, in mywriter
mytensor=thequeue.get_nowait()
File "C:\Users\myuser\.conda\envs\ppump38\lib\multiprocessing\queues.py", line 129, in get_nowait
return self.get(False)
File "C:\Users\myuser\.conda\envs\ppump38\lib\multiprocessing\queues.py", line 116, in get
return _ForkingPickler.loads(res)
File "C:\Users\myuser\.conda\envs\ppump38\lib\site-packages\torch\multiprocessing\reductions.py", line 305, in rebuild_storage_filename
storage = cls._new_shared_filename(manager, handle, size)
RuntimeError: falseINTERNAL ASSERT FAILED at "..\\aten\\src\\ATen\\MapAllocator.cpp":135, please report a bug to PyTorch. Couldn't open shared file mapping: <00000218E50CF0F2>, error code: <2>
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Ideally this would run to completion without throwing any errors (in my trials it crashes ~20 frames before the end of the program). If I understand correctly, it's crashing because the producer process ends while the writer process is still reading the queue backed by shared memory. I solved this by adding an extra queue for the producer queue to receive a message from the writer queue signalling it's done, but if pytorch could extend the life of the shared memory behind the scenes I think that would be a better option. (If there's a more elegant way of handling this I would appreciate a pointer)
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
```
PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.12 (default, Oct 12 2021, 03:01:40) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19043-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1060 3GB
Nvidia driver version: 471.96
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.10.0
[pip3] torchvision==0.11.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h59b6b97_2
[conda] mkl 2021.3.0 haa95532_524
[conda] mkl-service 2.4.0 py38h2bbff1b_0
[conda] mkl_fft 1.3.1 py38h277e83a_0
[conda] mkl_random 1.2.2 py38hf11a4ad_0
[conda] numpy 1.21.2 py38hfca59bb_0
[conda] numpy-base 1.21.2 py38h0829f74_0
[conda] pytorch 1.10.0 py3.8_cuda11.3_cudnn8_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchvision 0.11.1 py38_cu113 pytorch
```
## Additional context
<!-- Add any other context about the problem here. -->
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @peterjc123 @mszhanyi @skyline75489 @nbcsm @VitalyFedyunin
|
1.0
|
INTERNAL ASSERT FAILED at "..\\aten\\src\\ATen\\MapAllocator.cpp":135 - ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. Here's a simple script I just wrote naively mirroring what I would do with a standard multiprocessing queue:
```
import numpy
import torch
import time
import torch.multiprocessing as mp
import queue
from PIL import Image
def myproducer(thequeue,writerfinishqueue):
for i in range(2000):
thearray=numpy.full((2048,2048),255,dtype=numpy.uint8)
thequeue.put(torch.from_numpy(thearray))
def mywriter(thequeue,writerfinishqueue):
filecounter=0
starttime=0
while True:
try:
mytensor=thequeue.get_nowait()
filecounter=filecounter+1
if filecounter==1:
starttime=time.time()
mynumpyarray=mytensor.numpy()
Image.fromarray(mynumpyarray).convert("L").save(str(filecounter)+".bmp","BMP")
if filecounter==2000:
print(time.time()-starttime)
break
except queue.Empty:
pass
if __name__ == '__main__':
thequeue=mp.Queue()
myproducerproc=mp.Process(target=myproducer,args=(thequeue))
myproducerproc.start()
mywriterproc=mp.Process(target=mywriter,args=(thequeue))
mywriterproc.start()
myproducerproc.join()
mywriterproc.join()
```
Running this results in this error message:
```Process Process-2:
Traceback (most recent call last):
File "C:\Users\myuser\.conda\envs\ppump38\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\myuser\.conda\envs\ppump38\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\myuser\Desktop\writertest\testtens.py", line 22, in mywriter
mytensor=thequeue.get_nowait()
File "C:\Users\myuser\.conda\envs\ppump38\lib\multiprocessing\queues.py", line 129, in get_nowait
return self.get(False)
File "C:\Users\myuser\.conda\envs\ppump38\lib\multiprocessing\queues.py", line 116, in get
return _ForkingPickler.loads(res)
File "C:\Users\myuser\.conda\envs\ppump38\lib\site-packages\torch\multiprocessing\reductions.py", line 305, in rebuild_storage_filename
storage = cls._new_shared_filename(manager, handle, size)
RuntimeError: falseINTERNAL ASSERT FAILED at "..\\aten\\src\\ATen\\MapAllocator.cpp":135, please report a bug to PyTorch. Couldn't open shared file mapping: <00000218E50CF0F2>, error code: <2>
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Ideally this would run to completion without throwing any errors (in my trials it crashes ~20 frames before the end of the program). If I understand correctly, it's crashing because the producer process ends while the writer process is still reading the queue backed by shared memory. I solved this by adding an extra queue for the producer queue to receive a message from the writer queue signalling it's done, but if pytorch could extend the life of the shared memory behind the scenes I think that would be a better option. (If there's a more elegant way of handling this I would appreciate a pointer)
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
```
PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.12 (default, Oct 12 2021, 03:01:40) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19043-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1060 3GB
Nvidia driver version: 471.96
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.10.0
[pip3] torchvision==0.11.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h59b6b97_2
[conda] mkl 2021.3.0 haa95532_524
[conda] mkl-service 2.4.0 py38h2bbff1b_0
[conda] mkl_fft 1.3.1 py38h277e83a_0
[conda] mkl_random 1.2.2 py38hf11a4ad_0
[conda] numpy 1.21.2 py38hfca59bb_0
[conda] numpy-base 1.21.2 py38h0829f74_0
[conda] pytorch 1.10.0 py3.8_cuda11.3_cudnn8_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchvision 0.11.1 py38_cu113 pytorch
```
## Additional context
<!-- Add any other context about the problem here. -->
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @peterjc123 @mszhanyi @skyline75489 @nbcsm @VitalyFedyunin
|
process
|
internal assert failed at aten src aten mapallocator cpp 🐛 bug to reproduce steps to reproduce the behavior here s a simple script i just wrote naively mirroring what i would do with a standard multiprocessing queue import numpy import torch import time import torch multiprocessing as mp import queue from pil import image def myproducer thequeue writerfinishqueue for i in range thearray numpy full dtype numpy thequeue put torch from numpy thearray def mywriter thequeue writerfinishqueue filecounter starttime while true try mytensor thequeue get nowait filecounter filecounter if filecounter starttime time time mynumpyarray mytensor numpy image fromarray mynumpyarray convert l save str filecounter bmp bmp if filecounter print time time starttime break except queue empty pass if name main thequeue mp queue myproducerproc mp process target myproducer args thequeue myproducerproc start mywriterproc mp process target mywriter args thequeue mywriterproc start myproducerproc join mywriterproc join running this results in this error message process process traceback most recent call last file c users myuser conda envs lib multiprocessing process py line in bootstrap self run file c users myuser conda envs lib multiprocessing process py line in run self target self args self kwargs file c users myuser desktop writertest testtens py line in mywriter mytensor thequeue get nowait file c users myuser conda envs lib multiprocessing queues py line in get nowait return self get false file c users myuser conda envs lib multiprocessing queues py line in get return forkingpickler loads res file c users myuser conda envs lib site packages torch multiprocessing reductions py line in rebuild storage filename storage cls new shared filename manager handle size runtimeerror falseinternal assert failed at aten src aten mapallocator cpp please report a bug to pytorch couldn t open shared file mapping error code expected behavior ideally this would run to completion without throwing any 
errors in my trials it crashes frames before the end of the program if i understand correctly it s crashing because the producer process ends while the writer process is still reading the queue backed by shared memory i solved this by adding an extra queue for the producer queue to receive a message from the writer queue signalling it s done but if pytorch could extend the life of the shared memory behind the scenes i think that would be a better option if there s a more elegant way of handling this i would appreciate a pointer environment please copy and paste the output from our or fill out the checklist below manually you can get the script and run it with wget for security purposes please check the contents of collect env py before running it python collect env py pytorch version e g os e g linux how you installed pytorch conda pip source build command you used if compiling from source python version cuda cudnn version gpu models and configuration any other relevant information pytorch version is debug build false cuda used to build pytorch rocm used to build pytorch n a os microsoft windows pro gcc version could not collect clang version could not collect cmake version could not collect libc version n a python version default oct bit runtime python platform windows is cuda available true cuda runtime version could not collect gpu models and configuration gpu nvidia geforce gtx nvidia driver version cudnn version could not collect hip runtime version n a miopen runtime version n a versions of relevant libraries numpy torch torchvision blas mkl cudatoolkit mkl mkl service mkl fft mkl random numpy numpy base pytorch pytorch pytorch mutex cuda pytorch torchvision pytorch additional context cc ezyang gchanan bdhirsh jbschlosser mszhanyi nbcsm vitalyfedyunin
| 1
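The workaround described in the record above — the producer waits for a signal from the writer before exiting, so the shared memory behind the queued tensors stays alive — can be sketched with plain `multiprocessing` primitives. This is an illustrative pattern only: an `Event` stands in for the reporter's extra queue, and plain integers stand in for the shared-memory-backed tensors.

```python
import multiprocessing as mp

def producer(q, writer_done):
    # Enqueue work items; in the original report these were torch
    # tensors whose storage lives in producer-owned shared memory.
    for i in range(100):
        q.put(i)
    q.put(None)          # sentinel: nothing more will be sent
    writer_done.wait()   # stay alive until the writer has finished,
                         # otherwise the consumer may read freed storage

def writer(q, writer_done, result):
    total = 0
    while True:
        item = q.get()
        if item is None:
            break
        total += item
    result.put(total)
    writer_done.set()    # only now may the producer exit safely

def run_demo():
    q, result = mp.Queue(), mp.Queue()
    done = mp.Event()
    p = mp.Process(target=producer, args=(q, done))
    w = mp.Process(target=writer, args=(q, done, result))
    p.start(); w.start()
    p.join(); w.join()
    return result.get()

if __name__ == "__main__":
    print(run_demo())   # 4950
```

With torch tensors the same ordering matters because `torch.multiprocessing` rebuilds queued tensors from shared files or handles owned by the sending process, so the sender must outlive the last read.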
|
167,695
| 26,535,505,339
|
IssuesEvent
|
2023-01-19 15:22:55
|
raft-tech/TANF-app
|
https://api.github.com/repos/raft-tech/TANF-app
|
opened
|
[Research Synthesis] Adhoc revisions to projects updates (draft)
|
Research & Design
|
**Description:**
OFA needs some revisions to the project updates.
**AC:**
- [ ] A hack.md with the drafted synthesis has been reviewed.
- [ ] PR has been opened containing the final draft of the synthesis.
**Tasks:**
- [ ] --Aggregate notes from research sessions--
- [ ] Delete video recording(s) once notes are compiled
- [ ] Document synthesis - Scope (research goal(s)), High-level findings, Actionable learnings, Issues that have to be added to Zenhub, Takeaways for future research
**Supporting Documentation:**
- --Link to hack.md--
|
1.0
|
[Research Synthesis] Adhoc revisions to projects updates (draft) - **Description:**
OFA needs some revisions to the project updates.
**AC:**
- [ ] A hack.md with the drafted synthesis has been reviewed.
- [ ] PR has been opened containing the final draft of the synthesis.
**Tasks:**
- [ ] --Aggregate notes from research sessions--
- [ ] Delete video recording(s) once notes are compiled
- [ ] Document synthesis - Scope (research goal(s)), High-level findings, Actionable learnings, Issues that have to be added to Zenhub, Takeaways for future research
**Supporting Documentation:**
- --Link to hack.md--
|
non_process
|
adhoc revisions to projects updates draft description ofa needs some revisions to the project updates ac a hack md with the drafted synthesis has been reviewed pr has been opened containing the final draft of the synthesis tasks aggregate notes from research sessions delete video recording s once notes are compiled document synthesis scope research goal s high level findings actionable learnings issues that have to be added to zenhub takeaways for future research supporting documentation link to hack md
| 0
|
665,624
| 22,323,926,832
|
IssuesEvent
|
2022-06-14 08:58:53
|
fasten-project/fasten
|
https://api.github.com/repos/fasten-project/fasten
|
opened
|
Recent changes to DB logic in server and plugins have broken the Quality Analyzer plugin
|
bug Priority: Critical
|
## Describe the bug
Since recent changes to DB logic in the FASTEN server and plugins, the Quality Analyzer plugin has been broken. It runs into DataAccess exceptions when accessing the DB.
These are the relevant changes:
<img width="486" alt="image" src="https://user-images.githubusercontent.com/3635696/173537600-36632fae-5ce8-4763-b59e-8565a6fb6d27.png">
## To Reproduce
Steps to reproduce the behavior:
1. Check out DC branch https://github.com/fasten-project/fasten-docker-deployment/tree/fasten-pipeline-release-0.0.9
2. Delete `docker-volumes/fasten`
3. `docker-compose --profile java up -d --build`
4. Browse to http://localhost:9080/api/mvn/packages/log4j:log4j/1.2.17
5. Observe the container log of fasten-docker-deployment-fasten-rapid-metadata-plugin-1 to see it run into DataAccess exceptions.
|
1.0
|
Recent changes to DB logic in server and plugins have broken the Quality Analyzer plugin - ## Describe the bug
Since recent changes to DB logic in the FASTEN server and plugins, the Quality Analyzer plugin has been broken. It runs into DataAccess exceptions when accessing the DB.
These are the relevant changes:
<img width="486" alt="image" src="https://user-images.githubusercontent.com/3635696/173537600-36632fae-5ce8-4763-b59e-8565a6fb6d27.png">
## To Reproduce
Steps to reproduce the behavior:
1. Check out DC branch https://github.com/fasten-project/fasten-docker-deployment/tree/fasten-pipeline-release-0.0.9
2. Delete `docker-volumes/fasten`
3. `docker-compose --profile java up -d --build`
4. Browse to http://localhost:9080/api/mvn/packages/log4j:log4j/1.2.17
5. Observe the container log of fasten-docker-deployment-fasten-rapid-metadata-plugin-1 to see it run into DataAccess exceptions.
|
non_process
|
recent changes to db logic in server and plugins have broken the quality analyzer plugin describe the bug since recent changes to db logic in the fasten server and plugins the quality analyzer plugin has been broken it runs into dataaccess exceptions when accessing the db these are the relevant changes img width alt image src to reproduce steps to reproduce the behavior check out dc branch delete docker volumes fasten docker compose profile java up d build browse to observe the container log of fasten docker deployment fasten rapid metadata plugin to see it run into dataaccess exceptions
| 0
|
10,500
| 13,260,031,385
|
IssuesEvent
|
2020-08-20 17:35:01
|
GoogleCloudPlatform/openmrs-fhir-analytics
|
https://api.github.com/repos/GoogleCloudPlatform/openmrs-fhir-analytics
|
opened
|
Evaluate the option of integrating FHIR module inside the pipeline code.
|
P2:should process
|
Currently we rely on the source OpenMRS to have FHIR module installed and query that module to get FHIR resources. Given that some concerns have been raised re. installing new modules in current OMRS implementations (e.g., see #3 and Atom Feed module) the same argument may apply to FHIR module too.
One solution to avoid any extra module installations is to integrate the FHIR module code inside our pipelines and only use the MySQL DB (or a replica) to access data. This has the extra benefits of:
1) Having some performance benefits by avoiding encoding and decoding FHIR resources as JSON (see [here](https://docs.google.com/document/d/1KPiY_9ziEcsh7tAft3QtKuZ7XlDlPxE3yN7xnUO0LJg/edit#bookmark=id.aifjnbavo7w6)).
2) Make it easier to use a replica DB instead of the primary MySQL DB (to reduce load on the main EMR).
|
1.0
|
Evaluate the option of integrating FHIR module inside the pipeline code. - Currently we rely on the source OpenMRS to have FHIR module installed and query that module to get FHIR resources. Given that some concerns have been raised re. installing new modules in current OMRS implementations (e.g., see #3 and Atom Feed module) the same argument may apply to FHIR module too.
One solution to avoid any extra module installations is to integrate the FHIR module code inside our pipelines and only use the MySQL DB (or a replica) to access data. This has the extra benefits of:
1) Having some performance benefits by avoiding encoding and decoding FHIR resources as JSON (see [here](https://docs.google.com/document/d/1KPiY_9ziEcsh7tAft3QtKuZ7XlDlPxE3yN7xnUO0LJg/edit#bookmark=id.aifjnbavo7w6)).
2) Make it easier to use a replica DB instead of the primary MySQL DB (to reduce load on the main EMR).
|
process
|
evaluate the option of integrating fhir module inside the pipeline code currently we rely on the source openmrs to have fhir module installed and query that module to get fhir resources given that some concerns have been raised re installing new modules in current omrs implementations e g see and atom feed module the same argument may apply to fhir module too one solution to avoid any extra module installations is to integrate the fhir module code inside our pipelines and only use the mysql db or a replica to access data this has the extra benefits of having some performance benefits by avoiding encoding and decoding fhir resources as json see make it easier to use a replica db instead of the primary mysql db to reduce load on the main emr
| 1
|
11,275
| 14,074,388,014
|
IssuesEvent
|
2020-11-04 07:12:38
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Create Points Layer From Table algorithm does not work in processing modeler
|
Bug Feedback Modeller Processing
|
<!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The algorithm `Create Points Layer From Table` works if used directly. But when using it in the processing modeler, geometries are created with `NULL` coordinates.
**How to Reproduce**
100%
1. Extract and open the [project](https://github.com/qgis/QGIS/files/5473133/project.zip).
2. In the processing toolbox, Open `Project models` > `Test` > `Test`
3. Using the layer `data` as input, run the algorithm.
You will see no geometries in the output layers although I'm not confident if I used the algorithm correctly in the modeller 🤔.
**QGIS and OS versions**
QGIS version | 3.17.0-Master | QGIS code branch | master 9935bbe05e6e8709bc3fc120c23c721e862726f5
-- | -- | -- | --
Compiled against Qt | 5.15.1 | Running against Qt | 5.15.1
Compiled against GDAL/OGR | 3.1.3 | Running against GDAL/OGR | 3.1.3
Compiled against GEOS | 3.8.1-CAPI-1.13.3 | Running against GEOS | 3.8.1-CAPI-1.13.3
Compiled against SQLite | 3.33.0 | Running against SQLite | 3.33.0
PostgreSQL Client Version | 12.4 | SpatiaLite Version | 5.0.0-beta0
QWT Version | 6.1.5 | QScintilla2 Version | 2.11.2
Compiled against PROJ | 6.3.2 | Running against PROJ | Rel. 6.3.2, May 1st, 2020
OS Version | Fedora 33 (Workstation Edition)
Active python plugins | string_reader; quick_map_services; MetaSearch; db_manager; processing
|
1.0
|
Create Points Layer From Table algorithm does not work in processing modeler - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The algorithm `Create Points Layer From Table` works if used directly. But when using it in the processing modeler, geometries are created with `NULL` coordinates.
**How to Reproduce**
100%
1. Extract and open the [project](https://github.com/qgis/QGIS/files/5473133/project.zip).
2. In the processing toolbox, Open `Project models` > `Test` > `Test`
3. Using the layer `data` as input, run the algorithm.
You will see no geometries in the output layers although I'm not confident if I used the algorithm correctly in the modeller 🤔.
**QGIS and OS versions**
QGIS version | 3.17.0-Master | QGIS code branch | master 9935bbe05e6e8709bc3fc120c23c721e862726f5
-- | -- | -- | --
Compiled against Qt | 5.15.1 | Running against Qt | 5.15.1
Compiled against GDAL/OGR | 3.1.3 | Running against GDAL/OGR | 3.1.3
Compiled against GEOS | 3.8.1-CAPI-1.13.3 | Running against GEOS | 3.8.1-CAPI-1.13.3
Compiled against SQLite | 3.33.0 | Running against SQLite | 3.33.0
PostgreSQL Client Version | 12.4 | SpatiaLite Version | 5.0.0-beta0
QWT Version | 6.1.5 | QScintilla2 Version | 2.11.2
Compiled against PROJ | 6.3.2 | Running against PROJ | Rel. 6.3.2, May 1st, 2020
OS Version | Fedora 33 (Workstation Edition)
Active python plugins | string_reader; quick_map_services; MetaSearch; db_manager; processing
|
process
|
create points layer from table algorithm does not work in processing modeler bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug the algorithm create points layer from table works if used directly but when using it in the processing modeler geometries are created with null coordinates how to reproduce extract and open the in the processing toolbox open project models test test using the layer data as input run the algorithm you will see no geometries in the output layers although i m not confident if i used the algorithm correctly in the modeller 🤔 qgis and os versions qgis version master qgis code branch master compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version fedora workstation edition active python plugins string reader quick map services metasearch db manager processing
| 1
|
646
| 3,105,858,200
|
IssuesEvent
|
2015-08-31 23:27:01
|
pwittchen/ReactiveNetwork
|
https://api.github.com/repos/pwittchen/ReactiveNetwork
|
closed
|
Update version to 0.0.3 in README.md after Maven Sync
|
release process
|
Version `0.0.3` was uploaded to Maven Central Repository.
Version number needs to be updated in `README.md` after Maven Sync.
|
1.0
|
Update version to 0.0.3 in README.md after Maven Sync - Version `0.0.3` was uploaded to Maven Central Repository.
Version number needs to be updated in `README.md` after Maven Sync.
|
process
|
update version to in readme md after maven sync version was uploaded to maven central repository version number needs to be updated in readme md after maven sync
| 1
|
6,459
| 9,546,572,626
|
IssuesEvent
|
2019-05-01 20:20:31
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Department of State: Languages
|
Apply Process Approved Requirements Ready State Dept.
|
Who: Student Applicant
What: Language and Skills Page
Why: As a student I would like to add my language and skills to my application
A/C
- There will be a header: "Languages & Skills" (Bold)
- "All fields are optional" (In the right margin)
- This information will be populated from the user's USAJOBS profile
- There will be a card for each language that will include the following:
- A + sign that will expand the card
- The Language
- The + Add language link will take the user to a blank language page
- https://opm.invisionapp.com/d/main/#/console/15360465/319289346/preview
- there will be a header "Language" with a drop down list of languages
- There will be a header "Speaking language skill level" with 4 radio buttons
- None
- Novice
- Intermediate
- Advanced
- There will be a header "Writing language skill level" with 4 radio buttons
- None
- Novice
- Intermediate
- Advanced
- There will be a header "Reading language skill level with 4 radio buttons
- None
- Novice
- Intermediate
- Advanced
- There will be a link "What does novice, intermediate and advanced mean?" that will take the user to the Help Center in a new window to the following link: <need link>
- When the user clicks the "Cancel and return" button they will return to the "Language & Skills" page and the new information will be discarded
- When the user clicks the "Save language" button the will return to the "Language & Skills" page with the new information listed.
InVision Mock: https://opm.invisionapp.com/d/main/#/console/15360465/319289343/preview
Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54
|
1.0
|
Department of State: Languages - Who: Student Applicant
What: Language and Skills Page
Why: As a student I would like to add my language and skills to my application
A/C
- There will be a header: "Languages & Skills" (Bold)
- "All fields are optional" (In the right margin)
- This information will be populated from the user's USAJOBS profile
- There will be a card for each language that will include the following:
- A + sign that will expand the card
- The Language
- The + Add language link will take the user to a blank language page
- https://opm.invisionapp.com/d/main/#/console/15360465/319289346/preview
- there will be a header "Language" with a drop down list of languages
- There will be a header "Speaking language skill level" with 4 radio buttons
- None
- Novice
- Intermediate
- Advanced
- There will be a header "Writing language skill level" with 4 radio buttons
- None
- Novice
- Intermediate
- Advanced
- There will be a header "Reading language skill level with 4 radio buttons
- None
- Novice
- Intermediate
- Advanced
- There will be a link "What does novice, intermediate and advanced mean?" that will take the user to the Help Center in a new window to the following link: <need link>
- When the user clicks the "Cancel and return" button they will return to the "Language & Skills" page and the new information will be discarded
- When the user clicks the "Save language" button the will return to the "Language & Skills" page with the new information listed.
InVision Mock: https://opm.invisionapp.com/d/main/#/console/15360465/319289343/preview
Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54
|
process
|
department of state languages who student applicant what language and skills page why as a student i would like to add my language and skills to my application a c there will be a header languages skills bold all fields are optional in the right margin this information will be populated from the user s usajobs profile there will be a card for each language that will include the following a sign that will expand the card the language the add language link will take the user to a blank language page there will be a header language with a drop down list of languages there will be a header speaking language skill level with radio buttons none novice intermediate advanced there will be a header writing language skill level with radio buttons none novice intermediate advanced there will be a header reading language skill level with radio buttons none novice intermediate advanced there will be a link what does novice intermediate and advanced mean that will take the user to the help center in a new window to the following link when the user clicks the cancel and return button they will return to the language skills page and the new information will be discarded when the user clicks the save language button they will return to the language skills page with the new information listed invision mock public link
| 1
|
718,047
| 24,702,341,731
|
IssuesEvent
|
2022-10-19 16:10:09
|
KelvinTegelaar/CIPP
|
https://api.github.com/repos/KelvinTegelaar/CIPP
|
closed
|
[Feature Request]:
|
enhancement no-priority
|
### Description of the new feature - must be an in-depth explanation of the feature you want, reasoning why, and the added benefits for MSPs as a whole.
We would like to have an additional field on the MFA report that notates the assigned user license and if possible exclude mailboxes that don't have licenses from the report entirely. We often find conflicting information with the current report, and this would help us in our quarterly meetings with our clients to explain what is missing and how we need to address it moving forward using just one report.
### PowerShell commands you would normally use to achieve above request
_No response_
|
1.0
|
[Feature Request]: - ### Description of the new feature - must be an in-depth explanation of the feature you want, reasoning why, and the added benefits for MSPs as a whole.
We would like to have an additional field on the MFA report that notates the assigned user license and if possible exclude mailboxes that don't have licenses from the report entirely. We often find conflicting information with the current report, and this would help us in our quarterly meetings with our clients to explain what is missing and how we need to address it moving forward using just one report.
### PowerShell commands you would normally use to achieve above request
_No response_
|
non_process
|
description of the new feature must be an in depth explanation of the feature you want reasoning why and the added benefits for msps as a whole we would like to have an additional field on the mfa report that notates the assigned user license and if possible exclude mailboxes that don t have licenses from the report entirely we often find conflicting information with the current report and this would help us in our quarterly meetings with our clients to explain what is missing and how we need to address it moving forward using just one report powershell commands you would normally use to achieve above request no response
| 0
|
11,746
| 2,664,695,420
|
IssuesEvent
|
2015-03-20 15:59:07
|
holahmeds/remotedroid
|
https://api.github.com/repos/holahmeds/remotedroid
|
closed
|
Support sending ctrl/alt/etc for soft-keyboard supporting them
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Install a "full keyboard" such as Hacker's Keyboard
2. Connect to the server
3. See how ctrl/alt/f-buttons etc are not affecting the server side.
What is the expected output?
The keys should have had an effect on the server side.
What do you see instead?
Nothing happened
What version of the product are you using?
1.4
On what operating system?
ICS 4.0.3 on SGS2
```
Original issue reported on code.google.com by `sugoib...@gmail.com` on 12 Feb 2013 at 11:10
|
1.0
|
Support sending ctrl/alt/etc for soft-keyboard supporting them - ```
What steps will reproduce the problem?
1. Install a "full keyboard" such as Hacker's Keyboard
2. Connect to the server
3. See how ctrl/alt/f-buttons etc are not affecting the server side.
What is the expected output?
The keys should have had an effect on the server side.
What do you see instead?
Nothing happened
What version of the product are you using?
1.4
On what operating system?
ICS 4.0.3 on SGS2
```
Original issue reported on code.google.com by `sugoib...@gmail.com` on 12 Feb 2013 at 11:10
|
non_process
|
support sending ctrl alt etc for soft keyboard supporting them what steps will reproduce the problem install a full keyboard such as hacker s keyboard connect to the server see how ctrl alt f buttons etc are not affecting the server side what is the expected output the keys should have had an effect on the server side what do you see instead nothing happened what version of the product are you using on what operating system ics on original issue reported on code google com by sugoib gmail com on feb at
| 0
|
16,921
| 22,266,716,206
|
IssuesEvent
|
2022-06-10 08:12:16
|
camunda/feel-scala
|
https://api.github.com/repos/camunda/feel-scala
|
closed
|
range function before/after are failing with date and time objects
|
type: bug team/process-automation
|
**Describe the bug**
The FEEL expression
`after(date and time(now()), date and time(today(),time("14:00:00")))`
fails with
`14:44:02.988 [main] WARN org.camunda.feel.FeelEngine - Suppressed failure: illegal arguments: List(ValDateTime(2022-05-13T14:44:02.987611+02:00[Europe/Berlin]))
14:44:02.990 [main] WARN org.camunda.feel.FeelEngine - Suppressed failure: illegal arguments: List(ValNull, ValLocalDateTime(2022-05-13T14:00))`
**Expected behavior**
range functions work as documented
**Environment**
* FEEL engine version: FEEL Engine REPL (1.14.2) Ammonite Repl 2.5.3 (Scala 2.13.8 Java 11.0.13) | also: zeebe_8.0.1
|
1.0
|
range function before/after are failing with date and time objects - **Describe the bug**
The FEEL expression
`after(date and time(now()), date and time(today(),time("14:00:00")))`
fails with
`14:44:02.988 [main] WARN org.camunda.feel.FeelEngine - Suppressed failure: illegal arguments: List(ValDateTime(2022-05-13T14:44:02.987611+02:00[Europe/Berlin]))
14:44:02.990 [main] WARN org.camunda.feel.FeelEngine - Suppressed failure: illegal arguments: List(ValNull, ValLocalDateTime(2022-05-13T14:00))`
**Expected behavior**
range functions work as documented
**Environment**
* FEEL engine version: FEEL Engine REPL (1.14.2) Ammonite Repl 2.5.3 (Scala 2.13.8 Java 11.0.13) | also: zeebe_8.0.1
|
process
|
range function before after are failing with date and time objects describe the bug the feel expression after date and time now date and time today time fails with warn org camunda feel feelengine suppressed failure illegal arguments list valdatetime warn org camunda feel feelengine suppressed failure illegal arguments list valnull vallocaldatetime expected behavior range functions work as documented environment feel engine version feel engine repl ammonite repl scala java also zeebe
| 1
|
4,878
| 7,755,609,815
|
IssuesEvent
|
2018-05-31 10:45:55
|
gvwilson/teachtogether.tech
|
https://api.github.com/repos/gvwilson/teachtogether.tech
|
opened
|
Ch06 Aleksandra Pawlik
|
Ch06 Process
|
- So important to emphasise and explain why reverse instructional design is NOT teaching to the test! I myself know it is not but struggle to explain it to people (which means I don't quite understand the difference).
|
1.0
|
Ch06 Aleksandra Pawlik - - So important to emphasise and explain why reverse instructional design is NOT teaching to the test! I myself know it is not but struggle to explain it to people (which means I don't quite understand the difference).
|
process
|
aleksandra pawlik so important to emphasise and explain why reverse instructional design is not teaching to the test i myself know it is not but struggle to explain it to people which means i don t quite understand the difference
| 1
|
11,742
| 14,582,233,792
|
IssuesEvent
|
2020-12-18 12:01:54
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
Terms of Service page > UI issue
|
Bug P2 Participant manager Process: Dev Process: Fixed Process: Reopened Process: Tested QA
|
AR : title is displaying as Terms and Condition
ER : Title should be 'Terms of Service'
[ Note : Terms of Service page should be same as About page except title]

|
4.0
|
Terms of Service page > UI issue - AR : title is displaying as Terms and Condition
ER : Title should be 'Terms of Service'
[ Note : Terms of Service page should be same as About page except title]

|
process
|
terms of service page ui issue ar title is displaying as terms and condition er title should be terms of service
| 1
|
69,478
| 17,688,407,779
|
IssuesEvent
|
2021-08-24 06:46:53
|
realthunder/FreeCAD_assembly3
|
https://api.github.com/repos/realthunder/FreeCAD_assembly3
|
closed
|
problems compiling on ubuntu 20.04
|
build question
|
The FreeCAD/FreeCAD repository seems to compile just fine.
I used the [freecad-daily PPA](https://launchpad.net/~freecad-maintainers/+archive/ubuntu/freecad-daily).
When trying to cmake i got the error below. As it seems the linkstage3 branch still uses qt4. The qt4 packages are not part of the ubuntu standard repositories and in the PPA there seem to be not all necessary packages related to qt4 (like shiboken).
Will there be a switch to qt5 in the near future to match the mainline FreeCAD?
How is linkstage3 meant to be compiled?
Am I missing / overlooking something here, i.e. is there an easy way to make it compile?
```
CMake Warning at cMake/FreeCAD_Helpers/SetupShibokenAndPyside.cmake:122 (find_package):
By not providing "FindShiboken.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "Shiboken",
but CMake did not find one.
Could not find a package configuration file provided by "Shiboken" with any
of the following names:
ShibokenConfig.cmake
shiboken-config.cmake
Add the installation prefix of "Shiboken" to CMAKE_PREFIX_PATH or set
"Shiboken_DIR" to a directory containing one of the above files. If
"Shiboken" provides a separate development package or SDK, be sure it has
been installed.
Call Stack (most recent call first):
CMakeLists.txt:75 (SetupShibokenAndPyside)
CMake Error at cMake/FreeCAD_Helpers/SetupShibokenAndPyside.cmake:124 (message):
===================
shiboken not found.
===================
Call Stack (most recent call first):
CMakeLists.txt:75 (SetupShibokenAndPyside)
```
Compiling shiboken manually also fails btw, with following error. Looks pretty outdated.
```
>>> make -j 3 ‹git:master ✔› 08:49.37 Mon Mar 08 2021 >>>
[ 1%] Built target libminimal
[ 5%] Built target libshiboken
[ 5%] Building CXX object tests/libsample/CMakeFiles/libsample.dir/size.cpp.o
[ 6%] Building CXX object tests/libsample/CMakeFiles/libsample.dir/simplefile.cpp.o
[ 16%] Built target apiextractor
[ 17%] Generating testmodifydocumentation.moc
Scanning dependencies of target testmodifydocumentation
[ 17%] Building CXX object ApiExtractor/tests/CMakeFiles/testmodifydocumentation.dir/testmodifydocumentation.cpp.o
[ 17%] Generating testtyperevision.moc
Scanning dependencies of target testtyperevision
[ 18%] Building CXX object ApiExtractor/tests/CMakeFiles/testtyperevision.dir/testtyperevision.cpp.o
/home/username/build/Shiboken/tests/libsample/simplefile.cpp: In member function ‘bool SimpleFile::exists() const’:
/home/username/build/Shiboken/tests/libsample/simplefile.cpp:93:12: error: cannot convert ‘std::ifstream’ {aka ‘std::basic_ifstream<char>’} to ‘bool’ in return
93 | return ifile;
| ^~~~~
/home/username/build/Shiboken/tests/libsample/simplefile.cpp: In static member function ‘static bool SimpleFile::exists(const char*)’:
/home/username/build/Shiboken/tests/libsample/simplefile.cpp:100:12: error: cannot convert ‘std::ifstream’ {aka ‘std::basic_ifstream<char>’} to ‘bool’ in return
100 | return ifile;
| ^~~~~
make[2]: *** [tests/libsample/CMakeFiles/libsample.dir/build.make:563: tests/libsample/CMakeFiles/libsample.dir/simplefile.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1614: tests/libsample/CMakeFiles/libsample.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 18%] Linking CXX executable testmodifydocumentation
[ 18%] Linking CXX executable testtyperevision
[ 18%] Built target testmodifydocumentation
[ 18%] Built target testtyperevision
make: *** [Makefile:160: all] Error 2
```
|
1.0
|
problems compiling on ubuntu 20.04 - The FreeCAD/FreeCAD repository seems to compile just fine.
I used the [freecad-daily PPA](https://launchpad.net/~freecad-maintainers/+archive/ubuntu/freecad-daily).
When trying to run CMake, I got the error below. It seems the linkstage3 branch still uses Qt4. The Qt4 packages are not part of the standard Ubuntu repositories, and the PPA does not seem to contain all the necessary Qt4-related packages (like shiboken).
Will there be a switch to Qt5 in the near future to match mainline FreeCAD?
How is linkstage3 meant to be compiled?
Am I missing or overlooking something here, i.e. is there an easy way to make it compile?
```
CMake Warning at cMake/FreeCAD_Helpers/SetupShibokenAndPyside.cmake:122 (find_package):
By not providing "FindShiboken.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "Shiboken",
but CMake did not find one.
Could not find a package configuration file provided by "Shiboken" with any
of the following names:
ShibokenConfig.cmake
shiboken-config.cmake
Add the installation prefix of "Shiboken" to CMAKE_PREFIX_PATH or set
"Shiboken_DIR" to a directory containing one of the above files. If
"Shiboken" provides a separate development package or SDK, be sure it has
been installed.
Call Stack (most recent call first):
CMakeLists.txt:75 (SetupShibokenAndPyside)
CMake Error at cMake/FreeCAD_Helpers/SetupShibokenAndPyside.cmake:124 (message):
===================
shiboken not found.
===================
Call Stack (most recent call first):
CMakeLists.txt:75 (SetupShibokenAndPyside)
```
Compiling shiboken manually also fails, by the way, with the following error. It looks pretty outdated.
```
>>> make -j 3 ‹git:master ✔› 08:49.37 Mon Mar 08 2021 >>>
[ 1%] Built target libminimal
[ 5%] Built target libshiboken
[ 5%] Building CXX object tests/libsample/CMakeFiles/libsample.dir/size.cpp.o
[ 6%] Building CXX object tests/libsample/CMakeFiles/libsample.dir/simplefile.cpp.o
[ 16%] Built target apiextractor
[ 17%] Generating testmodifydocumentation.moc
Scanning dependencies of target testmodifydocumentation
[ 17%] Building CXX object ApiExtractor/tests/CMakeFiles/testmodifydocumentation.dir/testmodifydocumentation.cpp.o
[ 17%] Generating testtyperevision.moc
Scanning dependencies of target testtyperevision
[ 18%] Building CXX object ApiExtractor/tests/CMakeFiles/testtyperevision.dir/testtyperevision.cpp.o
/home/username/build/Shiboken/tests/libsample/simplefile.cpp: In member function ‘bool SimpleFile::exists() const’:
/home/username/build/Shiboken/tests/libsample/simplefile.cpp:93:12: error: cannot convert ‘std::ifstream’ {aka ‘std::basic_ifstream<char>’} to ‘bool’ in return
93 | return ifile;
| ^~~~~
/home/username/build/Shiboken/tests/libsample/simplefile.cpp: In static member function ‘static bool SimpleFile::exists(const char*)’:
/home/username/build/Shiboken/tests/libsample/simplefile.cpp:100:12: error: cannot convert ‘std::ifstream’ {aka ‘std::basic_ifstream<char>’} to ‘bool’ in return
100 | return ifile;
| ^~~~~
make[2]: *** [tests/libsample/CMakeFiles/libsample.dir/build.make:563: tests/libsample/CMakeFiles/libsample.dir/simplefile.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1614: tests/libsample/CMakeFiles/libsample.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 18%] Linking CXX executable testmodifydocumentation
[ 18%] Linking CXX executable testtyperevision
[ 18%] Built target testmodifydocumentation
[ 18%] Built target testtyperevision
make: *** [Makefile:160: all] Error 2
```
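The `cannot convert ‘std::ifstream’ to ‘bool’` errors in the log above are a C++11 breakage: pre-C++11, `std::basic_ios` had an implicit `operator void*`, so `return ifile;` from a `bool`-returning function compiled; C++11 replaced it with `explicit operator bool()`. A minimal sketch of a conforming rewrite of the failing check (function name mirrors `simplefile.cpp`, but this is an illustrative reconstruction, not the upstream patch):

```cpp
#include <fstream>

// Old shiboken test code did `std::ifstream ifile(name); return ifile;`,
// which relied on the pre-C++11 implicit stream-to-pointer conversion.
// Under C++11 and later the conversion must be explicit:
bool exists(const char* name)
{
    std::ifstream ifile(name);
    // static_cast invokes the explicit operator bool(), i.e. !ifile.fail().
    return static_cast<bool>(ifile);
}
```

Equivalently, `return ifile.good();` works here, since a freshly opened stream that failed to open has `failbit` set.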
|
non_process
|
problems compiling on ubuntu the freecad freecad repository seems to compile just fine i used the when trying to cmake i got the error below as it seems the branch still uses the packages are not part of the ubuntu standard repositories and in the ppa there seem to be not all necessary packages related to like shiboken will there be a switch to in the near future to match the mainline freecad how is meant to be compiled am i missing overlook something here i e is there an easy way to make it compile cmake warning at cmake freecad helpers setupshibokenandpyside cmake find package by not providing findshiboken cmake in cmake module path this project has asked cmake to find a package configuration file provided by shiboken but cmake did not find one could not find a package configuration file provided by shiboken with any of the following names shibokenconfig cmake shiboken config cmake add the installation prefix of shiboken to cmake prefix path or set shiboken dir to a directory containing one of the above files if shiboken provides a separate development package or sdk be sure it has been installed call stack most recent call first cmakelists txt setupshibokenandpyside cmake error at cmake freecad helpers setupshibokenandpyside cmake message shiboken not found call stack most recent call first cmakelists txt setupshibokenandpyside compiling shiboken manually also fails btw with following error looks pretty outdated make j ‹git master ✔› mon mar built target libminimal built target libshiboken building cxx object tests libsample cmakefiles libsample dir size cpp o building cxx object tests libsample cmakefiles libsample dir simplefile cpp o built target apiextractor generating testmodifydocumentation moc scanning dependencies of target testmodifydocumentation building cxx object apiextractor tests cmakefiles testmodifydocumentation dir testmodifydocumentation cpp o generating testtyperevision moc scanning dependencies of target testtyperevision building cxx object 
apiextractor tests cmakefiles testtyperevision dir testtyperevision cpp o home username build shiboken tests libsample simplefile cpp in member function ‘bool simplefile exists const’ home username build shiboken tests libsample simplefile cpp error cannot convert ‘std ifstream’ aka ‘std basic ifstream ’ to ‘bool’ in return return ifile home username build shiboken tests libsample simplefile cpp in static member function ‘static bool simplefile exists const char ’ home username build shiboken tests libsample simplefile cpp error cannot convert ‘std ifstream’ aka ‘std basic ifstream ’ to ‘bool’ in return return ifile make error make error make waiting for unfinished jobs linking cxx executable testmodifydocumentation linking cxx executable testtyperevision built target testmodifydocumentation built target testtyperevision make error
| 0
|
143
| 2,575,872,201
|
IssuesEvent
|
2015-02-12 03:23:43
|
dominikwilkowski/bronzies
|
https://api.github.com/repos/dominikwilkowski/bronzies
|
closed
|
Create a localStorage wrapper with an offline-first approach
|
In process
|
https://github.com/marcuswestin/store.js/ seems to be the best candidate for this.
Will have to create new callbacks and structure to get this done.
|
1.0
|
Create a localStorage wrapper with an offline-first approach - https://github.com/marcuswestin/store.js/ seems to be the best candidate for this.
Will have to create new callbacks and structure to get this done.
|
process
|
create a localstorage wrapper with an offline first approach seems to be the best candidate for this will have to create new callbacks and structure to get this done
| 1
|
815,557
| 30,561,448,710
|
IssuesEvent
|
2023-07-20 14:51:02
|
openfheorg/openfhe-development
|
https://api.github.com/repos/openfheorg/openfhe-development
|
closed
|
Add scheme switching between CKKS and FHEW/TFHE (both ways)
|
new feature Priority: HIGH
|
Also add support for
* comparisons
* argmin/argmax
|
1.0
|
Add scheme switching between CKKS and FHEW/TFHE (both ways) - Also add support for
* comparisons
* argmin/argmax
|
non_process
|
add scheme switching between ckks and fhew tfhe both ways also add support for comparisons argmin argmax
| 0
|
19,132
| 25,186,586,104
|
IssuesEvent
|
2022-11-11 18:38:30
|
googleapis/nodejs-dms
|
https://api.github.com/repos/googleapis/nodejs-dms
|
closed
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'dms' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'dms' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname dms invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
|
12,122
| 14,740,758,850
|
IssuesEvent
|
2021-01-07 09:35:09
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
SITE062 SAB Holiday Charges
|
anc-process anp-1 ant-enhancement ant-support
|
In GitLab by @kdjstudios on Dec 3, 2018, 08:37
**Submitted by:** "Denise Joseph" <denise.joseph@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-11-30-19957
**Server:** Internal
**Client/Site:** Toronto
**Account:** ALL
**Issue:**
You mentioned the support team did need to manually adjust a few more items from the back end. I wonder if any of these adjustments would impact the holiday fees, causing them to double. The fee for this billing should be $30.00, but now for most accounts it shows $60.00 was charged. We need to have this corrected before billing can be finalized. Please help.
The holiday fee we charge per holiday is $15.00. For this billing, a charge of $30.00 for the Christmas and Boxing Day holidays per account needs to be applied, not $60.00. We need to have this corrected before billing can be finalized. Please help.
Thank you in advance for your assistance.
|
1.0
|
SITE062 SAB Holiday Charges - In GitLab by @kdjstudios on Dec 3, 2018, 08:37
**Submitted by:** "Denise Joseph" <denise.joseph@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-11-30-19957
**Server:** Internal
**Client/Site:** Toronto
**Account:** ALL
**Issue:**
You mentioned the support team did need to manually adjust a few more items from the back end. I wonder if any of these adjustments would impact the holiday fees, causing them to double. The fee for this billing should be $30.00, but now for most accounts it shows $60.00 was charged. We need to have this corrected before billing can be finalized. Please help.
The holiday fee we charge per holiday is $15.00. For this billing, a charge of $30.00 for the Christmas and Boxing Day holidays per account needs to be applied, not $60.00. We need to have this corrected before billing can be finalized. Please help.
Thank you in advance for your assistance.
|
process
|
sab holiday charges in gitlab by kdjstudios on dec submitted by denise joseph helpdesk server internal client site toronto account all issue you mentioned the support team did need to manually adjust a few more items from the back end i wander if any of these adjustments would impact the holiday fees causing them to double the fee for this billing should be but now for most accounts it shows was charged we need to have this corrected before billing can be finalized please help the holiday fee we charge per holiday is for this billing a charge of for the christmas and boxing day holidays per account needs to be applied not we need to have this corrected before billing can be finalized please help thank you in advance for your assistance
| 1
|
18,869
| 24,799,053,635
|
IssuesEvent
|
2022-10-24 19:56:54
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Release checklist 0.66
|
enhancement process
|
### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.62.0)
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
## Staging
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
1.0
|
Release checklist 0.66 - ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.62.0)
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
## Staging
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
process
|
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on relevant nothing open for github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release integration deploy to vm performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests migrations tested against mainnet clone previewnet deploy to vm staging deploy to kubernetes eu deploy to kubernetes na testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm deploy to etl alternatives no response
| 1
|
2,952
| 5,945,867,318
|
IssuesEvent
|
2017-05-26 00:35:04
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
opened
|
Chunking fails when chunked file in subdir
|
bug preprocess/chunking
|
My map is: `sample/upandover.ditamap`
It references files outside of the `sample` directory, so there is extra "uplevels" processing. The map also has this reference for chunking:
`<topicref href="subdir/forchunks.dita" chunk="by-topic"/>`
The build crashes with the following error:
```
chunk:
[chunk] Processing file:/C:/DITA-OT/dita-ot-2.4.6/test/P025007/temp1/sample/upandover.ditamap
[chunk] Processing file:/C:/DITA-OT/dita-ot-2.4.6/test/P025007/temp1/sample/subdir/forchunks.dita
[chunk] C:\DITA-OT\dita-ot-2.4.6\test\P025007\temp1\sample\sample\subchunk.dita (The system cannot find the path specified.)
Error: java.util.NoSuchElementException
```
When I use a map that has the same references (but no uplevels), the build completes. However, the root chunk from `subdir/forchunks.dita` is missing; nothing is generated, and the `index.html` file has a link to `subdir/forchunks.dita.chunk` instead of to something in the map's directory, as reported in #2655 -- there are also a bunch of errors for that missing topic.
Attaching my sample files, the batch script (Windows) used to run, and the logs. I'm building `input.ditamap` (no uplevels), and `upandover.ditamap` with both `generate.copy.outer=1` and `generate.copy.outer=3`. The up-and-over test fails for each case. Tests were run with 2.4.6, but results match the `develop` code for 2.5.
[chunkcrash.zip](https://github.com/dita-ot/dita-ot/files/1030514/chunkcrash.zip)
|
1.0
|
Chunking fails when chunked file in subdir - My map is: `sample/upandover.ditamap`
It references files outside of the `sample` directory, so there is extra "uplevels" processing. The map also has this reference for chunking:
`<topicref href="subdir/forchunks.dita" chunk="by-topic"/>`
The build crashes with the following error:
```
chunk:
[chunk] Processing file:/C:/DITA-OT/dita-ot-2.4.6/test/P025007/temp1/sample/upandover.ditamap
[chunk] Processing file:/C:/DITA-OT/dita-ot-2.4.6/test/P025007/temp1/sample/subdir/forchunks.dita
[chunk] C:\DITA-OT\dita-ot-2.4.6\test\P025007\temp1\sample\sample\subchunk.dita (The system cannot find the path specified.)
Error: java.util.NoSuchElementException
```
When I use a map that has the same references (but no uplevels), the build completes. However, the root chunk from `subdir/forchunks.dita` is missing; nothing is generated, and the `index.html` file has a link to `subdir/forchunks.dita.chunk` instead of to something in the map's directory, as reported in #2655 -- there are also a bunch of errors for that missing topic.
Attaching my sample files, the batch script (Windows) used to run, and the logs. I'm building `input.ditamap` (no uplevels), and `upandover.ditamap` with both `generate.copy.outer=1` and `generate.copy.outer=3`. The up-and-over test fails for each case. Tests were run with 2.4.6, but results match the `develop` code for 2.5.
[chunkcrash.zip](https://github.com/dita-ot/dita-ot/files/1030514/chunkcrash.zip)
|
process
|
chunking fails when chunked file in subdir my map is sample upandover ditamap it references files outside of the sample directory so there is extra uplevels processing the map also has this reference for chunking the build crashes with the following error chunk processing file c dita ot dita ot test sample upandover ditamap processing file c dita ot dita ot test sample subdir forchunks dita c dita ot dita ot test sample sample subchunk dita the system cannot find the path specified error java util nosuchelementexception when i use a map that has the same references but no uplevels the build completes however the root chunk from subdir forchunks dita is missing nothing is generated and the index html file has a link to subdir forchunks dita chunk instead of to something in the map s directory as in reported in there are also a bunch of errors for that missing topic attaching my sample files the batch script windows used to run and the logs i m building input ditamap no uplevels and updandover ditamap with both generate copy outer and generate copy outer the up and over test fails for each case test run with but results match develop code for
| 1
|