Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,604 | 6,572,392,258 | IssuesEvent | 2017-09-11 01:58:06 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | LXC_CONTAINER clone should support -P {new_directory} and -p {original_directory} for cloning into a new directory in lxc v1.0.x | affects_2.3 cloud feature_idea waiting_on_maintainer | ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
lxc_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0 (devel aa1ec8af17)
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ansible running on Ubuntu 12.04 LTS
lxc running on CentOS release 6.7 (Final)
##### SUMMARY
Presently there is not a way to clone (without snapshot) a directory based container to a new filesystem path. However this is supported by the underlying lxc-clone command:
lxc-clone -p /original/container/path/ -P /mnt/lxc_new_container_path -o original_container_name -n new_container_name
Usage: lxc-clone [-s] [-B backingstore] [-L size[unit]] [-K] [-M] [-H]
[-p lxcpath] [-P newlxcpath] orig new
-s: snapshot rather than copy
-B: use specified new backingstore. Default is the same as
the original. Options include aufs, btrfs, lvm, overlayfs,
dir and loop
-L: for blockdev-backed backingstore, use specified size \* specified
unit. Default size is the size of the source blockdev, default
unit is MB
-K: Keep name - do not change the container name
-M: Keep macaddr - do not choose a random new mac address
-p: use container orig from custom lxcpath
-P: create container new in custom lxcpath
##### EXPECTED RESULTS
Wanted the ability to clone a directory based container to a new filesystem location
##### ACTUAL RESULTS
Was not possible without modifying lxc_container.py src
| True | LXC_CONTAINER clone should support -P {new_directory} and -p {original_directory} for cloning into a new directory in lxc v1.0.x - ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
lxc_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0 (devel aa1ec8af17)
```
##### CONFIGURATION
##### OS / ENVIRONMENT
ansible running on Ubuntu 12.04 LTS
lxc running on CentOS release 6.7 (Final)
##### SUMMARY
Presently there is not a way to clone (without snapshot) a directory based container to a new filesystem path. However this is supported by the underlying lxc-clone command:
lxc-clone -p /original/container/path/ -P /mnt/lxc_new_container_path -o original_container_name -n new_container_name
Usage: lxc-clone [-s] [-B backingstore] [-L size[unit]] [-K] [-M] [-H]
[-p lxcpath] [-P newlxcpath] orig new
-s: snapshot rather than copy
-B: use specified new backingstore. Default is the same as
the original. Options include aufs, btrfs, lvm, overlayfs,
dir and loop
-L: for blockdev-backed backingstore, use specified size \* specified
unit. Default size is the size of the source blockdev, default
unit is MB
-K: Keep name - do not change the container name
-M: Keep macaddr - do not choose a random new mac address
-p: use container orig from custom lxcpath
-P: create container new in custom lxcpath
##### EXPECTED RESULTS
Wanted the ability to clone a directory based container to a new filesystem location
##### ACTUAL RESULTS
Was not possible without modifying lxc_container.py src
| main | lxc container clone should support p new directory and p original directory for cloning into a new directory in lxc x issue type feature idea component name lxc container ansible version ansible devel configuration os environment ansible running on ubuntu lts lxc running on centos release final summary presently there is not a way to clone without snapshot a directory based container to a new filesystem path however this is supported by the underlying lxc clone command lxc clone p original container path p mnt lxc new container path o original container name n new container name usage lxc clone orig new s snapshot rather than copy b use specified new backingstore default is the same as the original options include aufs btrfs lvm overlayfs dir and loop l for blockdev backed backingstore use specified size specified unit default size is the size of the source blockdev default unit is mb k keep name do not change the container name m keep macaddr do not choose a random new mac address p use container orig from custom lxcpath p create container new in custom lxcpath expected results wanted the ability to clone a directory based container to a new filesystem location actual results was not possible without modifying lxc container py src | 1 |
3,733 | 15,611,886,577 | IssuesEvent | 2021-03-19 14:47:47 | precice/precice | https://api.github.com/repos/precice/precice | opened | Clean multi-step configuration | maintainability | **Please describe the problem you are trying to solve.**
Currently the configuration of preCICE is directly tied into the parsing of the XML file.
This consists out of:
* LibXML2 which parses an XML file and uses callbacks for entering/leaving tags.
* The `XMLTag`s and `XMLAttribute`s, which are used to validate the XML and contain information on which `Configuration` callbacks should be dispatched to. The `XMLAttribute` knows how to validate and parse its value to the final type (`int`, `double`, `string`). Both of them also contain documentation.
* The ConfigParser, which uses libxml2 to parse an XML file and dispatches the callbacks from LibXML2 to `Listeners` based on the AST of `XMLTag`s given by the root tag.
* The `Configuration`s, which implement `xml::XMLTag::Listener`, receiving callbacks and build and configuring preCICE objects. They also build the AST of `XMLTag`s and `XMLAttribute`s and hold configured preCICE objects
* The `SolverInterface`, which created the base configuration, runs the ConfigParser on the Configuration (object and file), extracts the required configured objects from the Configuration and finalizes their configuration if required.
**Describe the solution you propose.**
Restructure the parsing to multiple stages:
1. Use LibXML2 to create an XML AST using the callbacks. This simply contains nested tags and attributes.
2. Transform the XML AST to a Configuration AST. This part checks tag occurrence and validates attributes.
3. Transform the Configuration AST to the required preCICE objects using contextual information (participant name, current rank, total ranks).
The benefits of this strategy is:
* Easier to test as stages can be checked separately
* Easier to debug as the output relies only on the input and both can be inspected separately.
* Allows to integrate location information (from the XML file) for each object in the Configuration AST. This should allow to generate very detailed error messages.
* The configuration AST contains all information, hence the order in the XML will not matter any more.
The downsides are:
* A lot of work
* Spreads out information into the transformation functions and the objects.
Open Questions:
* Where to put the XML documentation?
* How to distribute the information over the packages (`com`, `m2n`, ...)
**Describe alternatives you've considered**
Leave it as it is. | True | Clean multi-step configuration - **Please describe the problem you are trying to solve.**
Currently the configuration of preCICE is directly tied into the parsing of the XML file.
This consists out of:
* LibXML2 which parses an XML file and uses callbacks for entering/leaving tags.
* The `XMLTag`s and `XMLAttribute`s, which are used to validate the XML and contain information on which `Configuration` callbacks should be dispatched to. The `XMLAttribute` knows how to validate and parse its value to the final type (`int`, `double`, `string`). Both of them also contain documentation.
* The ConfigParser, which uses libxml2 to parse an XML file and dispatches the callbacks from LibXML2 to `Listeners` based on the AST of `XMLTag`s given by the root tag.
* The `Configuration`s, which implement `xml::XMLTag::Listener`, receiving callbacks and build and configuring preCICE objects. They also build the AST of `XMLTag`s and `XMLAttribute`s and hold configured preCICE objects
* The `SolverInterface`, which created the base configuration, runs the ConfigParser on the Configuration (object and file), extracts the required configured objects from the Configuration and finalizes their configuration if required.
**Describe the solution you propose.**
Restructure the parsing to multiple stages:
1. Use LibXML2 to create an XML AST using the callbacks. This simply contains nested tags and attributes.
2. Transform the XML AST to a Configuration AST. This part checks tag occurrence and validates attributes.
3. Transform the Configuration AST to the required preCICE objects using contextual information (participant name, current rank, total ranks).
The benefits of this strategy is:
* Easier to test as stages can be checked separately
* Easier to debug as the output relies only on the input and both can be inspected separately.
* Allows to integrate location information (from the XML file) for each object in the Configuration AST. This should allow to generate very detailed error messages.
* The configuration AST contains all information, hence the order in the XML will not matter any more.
The downsides are:
* A lot of work
* Spreads out information into the transformation functions and the objects.
Open Questions:
* Where to put the XML documentation?
* How to distribute the information over the packages (`com`, `m2n`, ...)
**Describe alternatives you've considered**
Leave it as it is. | main | clean multi step configuration please describe the problem you are trying to solve currently the configuration of precice is directly tied into the parsing of the xml file this consists out of which parses an xml file and uses callbacks for entering leaving tags the xmltag s and xmlattribute s which are used to validate the xml and contain information on which configuration callbacks should be dispatched to the xmlattribute knows how to validate and parse its value to the final type int double string both of them also contain documentation the configparser which uses to parse an xml file and dispatches the callbacks from to listeners based on the ast of xmltag s given by the root tag the configuration s which implement xml xmltag listener receiving callbacks and build and configuring precice objects they also build the ast of xmltag s and xmlattribute s and hold configured precice objects the solverinterface which created the base configuration runs the configparser on the configuration object and file extracts the required configured objects from the configuration and finalizes their configuration if required describe the solution you propose restructure the parsing to multiple stages use to create an xml ast using the callbacks this simply contains nested tags and attributes transform the xml ast to a configuration ast this part checks tag occurrence and validates attributes transform the configuration ast to the required precice objects using contextual information participant name current rank total ranks the benefits of this strategy is easier to test as stages can be checked separately easier to debug as the output relies only on the input and both can be inspected separately allows to integrate location information from the xml file for each object in the configuration ast this should allow to generate very detailed error messages the configuration ast contains all information hence the order in the xml will not matter any more the downsides are a lot of work spreads out information into the transformation functions and the objects open questions where to put the xml documentation how to distribute the information over the packages com describe alternatives you ve considered leave it as it is | 1 |
209,883 | 23,730,913,282 | IssuesEvent | 2022-08-31 01:33:30 | Baneeishaque/ageeri-pai-gold-and-diamonds-website | https://api.github.com/repos/Baneeishaque/ageeri-pai-gold-and-diamonds-website | opened | CVE-2020-11023 (Medium) detected in jquery-1.12.4.min.js | security vulnerability | ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.12.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.min.js</a></p>
<p>Path to dependency file: /index.html@p=1322.html</p>
<p>Path to vulnerable library: /index.html@p=1322.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.12.4.min.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-11023 (Medium) detected in jquery-1.12.4.min.js - ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.12.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.min.js</a></p>
<p>Path to dependency file: /index.html@p=1322.html</p>
<p>Path to vulnerable library: /index.html@p=1322.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.12.4.min.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file index html p html path to vulnerable library index html p html dependency hierarchy x jquery min js vulnerable library vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery jquery rails step up your open source security game with mend | 0 |
12,600 | 9,875,146,411 | IssuesEvent | 2019-06-23 09:05:02 | OpenCHS/openchs-product | https://api.github.com/repos/OpenCHS/openchs-product | closed | Script to automate build and deploy | 0.6 Complete Infrastructure/other Should Story | Create a script to automatically build and deploy to demo environment. Doing this manually causes steps to be missed, and time spent diagnosing such issues. It is best this piece be automated.
Build rpm (openchs and openchs-reports) and apk with just one command
Given 2 rpms, recreate db, setup openchs and reports, and run impl/health modules etc
Dev Observations:
From build to deployment on vagrant takes 1m 24s on local machine.
./gradlew clean build_and_fetch_resources deploy | 1.0 | Script to automate build and deploy - Create a script to automatically build and deploy to demo environment. Doing this manually causes steps to be missed, and time spent diagnosing such issues. It is best this piece be automated.
Build rpm (openchs and openchs-reports) and apk with just one command
Given 2 rpms, recreate db, setup openchs and reports, and run impl/health modules etc
Dev Observations:
From build to deployment on vagrant takes 1m 24s on local machine.
./gradlew clean build_and_fetch_resources deploy | non_main | script to automate build and deploy create a script to automatically build and deploy to demo environment doing this manually causes steps to be missed and time spent diagnosing such issues it is best this piece be automated build rpm openchs and openchs reports and apk with just one command given rpms recreate db setup openchs and reports and run impl health modules etc dev observations from build to deployment on vagrant takes on local machine gradlew clean build and fetch resources deploy | 0 |
1,383 | 6,007,803,391 | IssuesEvent | 2017-06-06 05:18:57 | electron/electron | https://api.github.com/repos/electron/electron | closed | Make the Chrome cache accessible to electron. | triage/enhancement waiting/maintainer-feedback | Hi, I've started work on a new browser project today. It fits in well with another project I'm working on at the moment, so I've decided it's time. It already looks pretty great thanks to @adamschwartz 's chrome-tabs project.

I wrote down a few of my ideas a few days ago here (and the accompanying gist): https://news.ycombinator.com/item?id=13496660
I was just wondering if there is an appetite on the electron team to help me implement this and the accompanying urlcache idea, as described in the gist. Looking forward to your feedback. Thanks!
| True | Make the Chrome cache accessible to electron. - Hi, I've started work on a new browser project today. It fits in well with another project I'm working on at the moment, so I've decided it's time. It already looks pretty great thanks to @adamschwartz 's chrome-tabs project.

I wrote down a few of my ideas a few days ago here (and the accompanying gist): https://news.ycombinator.com/item?id=13496660
I was just wondering if there is an appetite on the electron team to help me implement this and the accompanying urlcache idea, as described in the gist. Looking forward to your feedback. Thanks!
| main | make the chrome cache accessible to electron hi i ve started work on a new browser project today it fits in well with another project i m working on at the moment so i ve decided it s time it already looks pretty great thanks to adamschwartz s chrome tabs project i wrote down a few of my ideas a few days ago here and the accompanying gist i was just wondering if there is an appetite on the electron team to help me implement this and the accompanying urlcache idea as described in the gist looking forward to your feedback thanks | 1 |
367,667 | 10,860,440,583 | IssuesEvent | 2019-11-14 09:04:44 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.youtube.com - see bug description | browser-firefox-reality engine-gecko priority-critical | <!-- @browser: Android 7.1.1 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.1; Mobile VR; rv:71.0) Gecko/71.0 Firefox/71.0 -->
<!-- @reported_with: browser-fxr -->
<!-- @extra_labels: browser-firefox-reality -->
**URL**: https://www.youtube.com/watch?v=ciqbLezoppM
**Browser / Version**: Android 7.1.1
**Operating System**: Android 7.1.1
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: vr180 not working, oculus go
**Steps to Reproduce**:
180vr video works in youtube vr app but not firefox reality
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.youtube.com - see bug description - <!-- @browser: Android 7.1.1 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.1; Mobile VR; rv:71.0) Gecko/71.0 Firefox/71.0 -->
<!-- @reported_with: browser-fxr -->
<!-- @extra_labels: browser-firefox-reality -->
**URL**: https://www.youtube.com/watch?v=ciqbLezoppM
**Browser / Version**: Android 7.1.1
**Operating System**: Android 7.1.1
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: vr180 not working, oculus go
**Steps to Reproduce**:
180vr video works in youtube vr app but not firefox reality
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | see bug description url browser version android operating system android tested another browser yes problem type something else description not working oculus go steps to reproduce video works in youtube vr app but not firefox reality browser configuration none from with ❤️ | 0 |
537,714 | 15,733,799,468 | IssuesEvent | 2021-03-29 20:02:42 | antoine-leguillou/Tynkle | https://api.github.com/repos/antoine-leguillou/Tynkle | closed | Suite issue 29 suppression question "devenir helper" et "obtenir de l'aide" | enhancement priority | Comme je te le disais dans l'issue 29, je te fais des issues pour chaque éléments à modifier suite à l'interface commune à tous les users:
suite à la validation de l'interface commune à tous les users et non plus les comptes Helper/Demandeur il y a donc certaines modif :
Une seule façon de créer un compte à tous les users
Sur la page [création de compte] il faut donc lever la question "Devenir Helper ?"
également sur les autres endroits: sur les pages profil il faut aussi lever "Devenir Helper?" et "Obtenir également de l'aide?"
Merci :) | 1.0 | Suite issue 29 suppression question "devenir helper" et "obtenir de l'aide" - Comme je te le disais dans l'issue 29, je te fais des issues pour chaque éléments à modifier suite à l'interface commune à tous les users:
suite à la validation de l'interface commune à tous les users et non plus les comptes Helper/Demandeur il y a donc certaines modif :
Une seule façon de créer un compte à tous les users
Sur la page [création de compte] il faut donc lever la question "Devenir Helper ?"
également sur les autres endroits: sur les pages profil il faut aussi lever "Devenir Helper?" et "Obtenir également de l'aide?"
Merci :) | non_main | suite issue suppression question devenir helper et obtenir de l aide comme je te le disais dans l issue je te fais des issues pour chaque éléments à modifier suite à l interface commune à tous les users suite à la validation de l interface commune à tous les users et non plus les comptes helper demandeur il y a donc certaines modif une seule façon de créer un compte à tous les users sur la page il faut donc lever la question devenir helper également sur les autres endroits sur les pages profil il faut aussi lever devenir helper et obtenir également de l aide merci | 0 |
5,283 | 26,686,533,318 | IssuesEvent | 2023-01-26 22:30:18 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Trying to add/patch a user with a set of roles results in a 500 | type: bug work: backend status: blocked restricted: maintainers | ## Description
Trying to add a new user with roles results in a 500.
POST: /users/
```
{
full_name: 'user123',
username: 'user123',
password: 'user123',
database_roles: [{ database: 1, role: 'manager' }],
}
```
Error:
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/api/ui/v0/users/
Django Version: 3.1.14
Python Version: 3.9.16
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'mathesar.middleware.CursorClosedHandlerMiddleware',
'mathesar.middleware.PasswordChangeNeededMiddleware',
'django_userforeignkey.middleware.UserForeignKeyMiddleware',
'django_request_cache.middleware.RequestCacheMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 59, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 19, in create
self.perform_create(serializer)
File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 24, in perform_create
serializer.save()
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 205, in save
self.instance = self.create(validated_data)
File "/code/mathesar/api/ui/serializers/users.py", line 53, in create
user = User(**validated_data)
File "/usr/local/lib/python3.9/site-packages/django/db/models/base.py", line 496, in __init__
_setattr(self, prop, kwargs[prop])
File "/usr/local/lib/python3.9/site-packages/django/db/models/fields/related_descriptors.py", line 545, in __set__
raise TypeError(
Exception Type: TypeError at /api/ui/v0/users/
Exception Value: Direct assignment to the reverse side of a related set is prohibited. Use database_roles.set() instead.
```
The frontend needs to be able to create a user and assign roles to the user with a single request because the UX is designed to be that way. | True | Trying to add/patch a user with a set of roles results in a 500 - ## Description
Trying to add a new user with roles results in a 500.
POST: /users/
```
{
full_name: 'user123',
username: 'user123',
password: 'user123',
database_roles: [{ database: 1, role: 'manager' }],
}
```
Error:
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/api/ui/v0/users/
Django Version: 3.1.14
Python Version: 3.9.16
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'mathesar.middleware.CursorClosedHandlerMiddleware',
'mathesar.middleware.PasswordChangeNeededMiddleware',
'django_userforeignkey.middleware.UserForeignKeyMiddleware',
'django_request_cache.middleware.RequestCacheMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 59, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 19, in create
self.perform_create(serializer)
File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 24, in perform_create
serializer.save()
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 205, in save
self.instance = self.create(validated_data)
File "/code/mathesar/api/ui/serializers/users.py", line 53, in create
user = User(**validated_data)
File "/usr/local/lib/python3.9/site-packages/django/db/models/base.py", line 496, in __init__
_setattr(self, prop, kwargs[prop])
File "/usr/local/lib/python3.9/site-packages/django/db/models/fields/related_descriptors.py", line 545, in __set__
raise TypeError(
Exception Type: TypeError at /api/ui/v0/users/
Exception Value: Direct assignment to the reverse side of a related set is prohibited. Use database_roles.set() instead.
```
The frontend needs to be able to create a user and assign roles to the user with a single request because the UX is designed to be that way. | main | trying to add patch a user with a set of roles results in a description trying to add a new user with roles results in a post users full name username password database roles error environment request method post request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware mathesar middleware cursorclosedhandlermiddleware mathesar middleware passwordchangeneededmiddleware django userforeignkey middleware userforeignkeymiddleware django request cache middleware requestcachemiddleware traceback most recent call last file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception response exception handler exc context file code mathesar exception handlers py line in mathesar exception handler raise exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file usr local lib site packages rest framework mixins py line in create self perform create serializer file usr local lib site packages rest framework mixins py line in perform create serializer save file usr local lib site packages rest framework serializers py line in save self instance self create validated data file code mathesar api ui serializers users py line in create user user validated data file usr local lib site packages django db models base py line in init setattr self prop kwargs file usr local lib site packages django db models fields related descriptors py line in set raise typeerror exception type typeerror at api ui users exception value direct assignment to the reverse side of a related set is prohibited use database roles set instead the frontend needs to be able to create a user and assign roles to the user with a single request because the ux is designed to be that way | 1 |
296,267 | 22,293,444,181 | IssuesEvent | 2022-06-12 17:57:12 | xam1002/TFG_Deteccion_Parkinson | https://api.github.com/repos/xam1002/TFG_Deteccion_Parkinson | closed | Trabajos relacionados | documentation | Te dejo algunos artículos/proyectos relacionados con visión artificial y Parkinson. La mayoría se centran en identificarlo a partir de fotos de dibujos o escritura, pero son trabajos que deberían ir a la parte de "Trabajos Relacionados":
- https://link.springer.com/chapter/10.1007/978-981-16-2937-2_15
- https://www.bradford.ac.uk/dhez/projects/parkinsons-vision/
- https://pyimagesearch.com/2019/04/29/detecting-parkinsons-disease-with-opencv-computer-vision-and-the-spiral-wave-test/
- https://pubmed.ncbi.nlm.nih.gov/27686705/
De hecho este trabajo publicado el año pasado hacen exactamente lo que nosotros:
- https://link.springer.com/chapter/10.1007/978-3-030-87094-2_38 | 1.0 | Trabajos relacionados - Te dejo algunos artículos/proyectos relacionados con visión artificial y Parkinson. La mayoría se centran en identificarlo a partir de fotos de dibujos o escritura, pero son trabajos que deberían ir a la parte de "Trabajos Relacionados":
- https://link.springer.com/chapter/10.1007/978-981-16-2937-2_15
- https://www.bradford.ac.uk/dhez/projects/parkinsons-vision/
- https://pyimagesearch.com/2019/04/29/detecting-parkinsons-disease-with-opencv-computer-vision-and-the-spiral-wave-test/
- https://pubmed.ncbi.nlm.nih.gov/27686705/
De hecho este trabajo publicado el año pasado hacen exactamente lo que nosotros:
- https://link.springer.com/chapter/10.1007/978-3-030-87094-2_38 | non_main | trabajos relacionados te dejo algunos artículos proyectos relacionados con visión artificial y parkinson la mayoría se centran en identificarlo a partir de fotos de dibujos o escritura pero son trabajos que deberían ir a la parte de trabajos relacionados de hecho este trabajo publicado el año pasado hacen exactamente lo que nosotros | 0 |
3,246 | 12,368,707,100 | IssuesEvent | 2020-05-18 14:13:33 | Kashdeya/Tiny-Progressions | https://api.github.com/repos/Kashdeya/Tiny-Progressions | closed | Minecraft failed to start! | Version not Maintainted | I got an error message saying "Minecraft failed to start!"
It said "The following mod(s) have been identified as potential causes: Tiny Progression"
and gave me this link https://paste.dimdev.org/fakivulihe.mccrash | True | Minecraft failed to start! - I got an error message saying "Minecraft failed to start!"
It said "The following mod(s) have been identified as potential causes: Tiny Progression"
and gave me this link https://paste.dimdev.org/fakivulihe.mccrash | main | minecraft failed to start i got an error message saying minecraft failed to start it said the following mod s have been identified as potential causes tiny progression and gave me this link | 1 |
354 | 3,264,076,922 | IssuesEvent | 2015-10-22 09:38:40 | embox/embox | https://api.github.com/repos/embox/embox | closed | file_desc's cursor moved by file systems | enhancement imported maintainability module:fs prio:normal | ##### What new or enhanced feature are you proposing?
Make cursor movement by vfs
##### What goals would this enhancement help you to achieve?
fs driver development will be less error-prone
##### How are you going to implement the enhancement?
Move it in fs-independent code.
Cc: @mrgaz | True | file_desc's cursor moved by file systems - ##### What new or enhanced feature are you proposing?
Make cursor movement by vfs
##### What goals would this enhancement help you to achieve?
fs driver development will be less error-prone
##### How are you going to implement the enhancement?
Move it in fs-independent code.
Cc: @mrgaz | main | file desc s cursor moved by file systems what new or enhanced feature are you proposing make cursor movement by vfs what goals would this enhancement help you to achieve fs driver development will be less error prone how are you going to implement the enhancement move it in fs independent code cc mrgaz | 1 |
275,337 | 23,908,458,079 | IssuesEvent | 2022-09-09 05:11:05 | icon-project/icon-bridge | https://api.github.com/repos/icon-project/icon-bridge | opened | Test to check for given user address if user is Blacklisted or not | test :test_tube: near draft | ## Feature
- [ ] icon-project/icon-bridge#43
## Scenario
Test to check if user is blacklisted or not
* Given
* User address
* When
* IsUserBlackListed is called
* Then
* output should show if user is blacklisted or not
| 2.0 | Test to check for given user address if user is Blacklisted or not - ## Feature
- [ ] icon-project/icon-bridge#43
## Scenario
Test to check if user is blacklisted or not
* Given
* User address
* When
* IsUserBlackListed is called
* Then
* output should show if user is blacklisted or not
| non_main | test to check for given user address if user is blacklisted or not feature icon project icon bridge scenario test to check if user is blacklisted or not given user address when isuserblacklisted is called then output should show if user is blacklisted or not | 0 |
195,067 | 6,902,372,612 | IssuesEvent | 2017-11-25 19:47:19 | OperationCode/operationcode_backend | https://api.github.com/repos/OperationCode/operationcode_backend | closed | Create object for SecureSet in code_schools.yaml | beginner friendly Priority: High Status: In Progress Type: Feature | # Feature
## Why is this feature being added?
SecureSet is an amazing partner, and also a Cyber Security Bootcamp. They've requested to be listed on our /code_schools page.
## What should your feature do?
Add SecureSet to ./config/code_schools.yaml in observation of rules set out in the top of that file.
**VA-Approved**
**Notes:** Not a traditional coding bootcamp. SecureSet focuses on security training relevant to the IT industry.
**Locations:**
- Colorado Springs, CO
- Denver, CO
**Website:** https://secureset.com/
| 1.0 | Create object for SecureSet in code_schools.yaml - # Feature
## Why is this feature being added?
SecureSet is an amazing partner, and also a Cyber Security Bootcamp. They've requested to be listed on our /code_schools page.
## What should your feature do?
Add SecureSet to ./config/code_schools.yaml in observation of rules set out in the top of that file.
**VA-Approved**
**Notes:** Not a traditional coding bootcamp. SecureSet focuses on security training relevant to the IT industry.
**Locations:**
- Colorado Springs, CO
- Denver, CO
**Website:** https://secureset.com/
| non_main | create object for secureset in code schools yaml feature why is this feature being added secureset is an amazing partner and also a cyber security bootcamp they ve requested to be listed on our code schools page what should your feature do add secureset to config code schools yaml in observation of rules set out in the top of that file va approved notes not a traditional coding bootcamp secureset focuses on security training relevant to the it industry locations colorado springs co denver co website | 0 |
4,260 | 21,261,069,963 | IssuesEvent | 2022-04-13 04:21:24 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | SAM cli version - official AWS site out of date and github docs unhelpful to get latest version | type/ux type/question blocked/close-if-inactive maintainer/need-followup | ### Description:
sam cli is more difficult to update to the latest version (Linux Ubuntu 21.10) than it should be - missing step in docs.
On Linux when using sam cli we are prompted regularly with: `SAM CLI update available (1.40.1); (1.36.0 installed)`
However when you do using the installer found here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-linux.html ie, download the zip, unzip it.
And run the install with the update flag: `sudo ./install --update`
Then it returns `Found same AWS SAM CLI version: /usr/local/aws-sam-cli/1.36.0. Skipping install.`
The only way I can force this to update, is to delete everything existing in the SAM folder, ie I remove:
aws-sam-cli-src
dist
install
THIRD-PARTY-LICENSES
Then try the `sudo ./install --update` and it does indeed update correctly..
This is very frustrating as the update does not appear to work out of the box for me, without deleting the existing source folders
Thanks
| True | SAM cli version - official AWS site out of date and github docs unhelpful to get latest version - ### Description:
sam cli is more difficult to update to the latest version (Linux Ubuntu 21.10) than it should be - missing step in docs.
On Linux when using sam cli we are prompted regularly with: `SAM CLI update available (1.40.1); (1.36.0 installed)`
However when you do using the installer found here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-linux.html ie, download the zip, unzip it.
And run the install with the update flag: `sudo ./install --update`
Then it returns `Found same AWS SAM CLI version: /usr/local/aws-sam-cli/1.36.0. Skipping install.`
The only way I can force this to update, is to delete everything existing in the SAM folder, ie I remove:
aws-sam-cli-src
dist
install
THIRD-PARTY-LICENSES
Then try the `sudo ./install --update` and it does indeed update correctly..
This is very frustrating as the update does not appear to work out of the box for me, without deleting the existing source folders
Thanks
| main | sam cli version official aws site out of date and github docs unhelpful to get latest version description sam cli is more difficult to update to the latest version linux ubuntu than it should be missing step in docs on linux when using sam cli we are prompted regularly with sam cli update available installed however when you do using the installer found here ie download the zip unzip it and run the install with the update flag sudo install update then it returns found same aws sam cli version usr local aws sam cli skipping install the only way i can force this to update is to delete everything existing in the sam folder ie i remove aws sam cli src dist install third party licenses then try the sudo install update and it does indeed update correctly this is very frustrating as the update does not appear to work out of the box for me without deleting the existing source folders thanks | 1 |
5,699 | 30,017,296,724 | IssuesEvent | 2023-06-26 19:48:48 | ipfs/helia | https://api.github.com/repos/ipfs/helia | closed | Storing data on the network not possible with helia in the browser? | kind/bug need/maintainers-input | I was trying to get some sort of "hello world" project going in helia.
Retrieving data is no issue, but storing my own does not seem to work.
This is my test file: [helia_.html](https://ipfs.io/ipfs/QmYUrkVUfNQ7pkhtKEnmo1c1H487Re7bSo9cR3U3FyNJtZ?filename=helia_.html)
looking for the file on https://ipfs.io/ipfs/CID or trying to get the data via cid in another ipfs client does not work.
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>IPFS in the Browser via Helia</title>
<link rel="icon favicon" href="https://unpkg.com/@helia/css@1.0.1/logos/favicon.ico" />
<link href="https://cdn.jsdelivr.net/npm/prismjs/themes/prism.css" rel="stylesheet" />
</head>
<body>
<h1>IPFS in the Browser via Helia</h1>
This is testing if you can successfully store data on the network, using helia.
</p>
<hr />
<div>
<button onclick="logging = true">Enable Logging</button>
<button onclick="logging = false">Disable Logging</button>
</div>
<h1 id="status">Node status: <span id="statusValue">Not Started</span></h1>
<div id="nodeInfo">
<h3>ID: <span id="nodeId">unknown</span></h3>
<h3>Discovered Peers: <span id="discoveredPeerCount">0</span></h3>
<h3>Connected Peers: <span id="connectedPeerCount">0</span></h3>
<ul id="connectedPeersList"></ul>
</div>
<hr />
<h1 id="testStatus">Test data status: <span id="testStatusValue">Not Added</span></h1>
<div>
<h3>Hash: <span id="testHash">-</span></h3>
<h3>Content: <span id="testContent">-</span></h3>
</div>
<hr />
<h2>Event Log:</h2>
<article id="runningLog"></article>
</body>
<style>
#runningLog span {
display: block;
}
</style>
<script src="https://cdn.jsdelivr.net/npm/prismjs/components/prism-core.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/prismjs/plugins/autoloader/prism-autoloader.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/helia@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@helia/unixfs@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/libp2p@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@chainsafe/libp2p-yamux@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@chainsafe/libp2p-noise@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@libp2p/websockets@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@libp2p/bootstrap@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/blockstore-core@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/datastore-core@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@libp2p/kad-dht@latest/dist/index.min.js" defer></script>
<script src="https://unpkg.com/@helia/strings/dist/index.min.js"></script>
<script>
var logging = false
</script>
<script type="module" defer>
/* global Helia, BlockstoreCore, DatastoreCore, HeliaUnixfs */
const statusValueEl = document.getElementById('statusValue')
const discoveredPeerCountEl = document.getElementById('discoveredPeerCount')
const connectedPeerCountEl = document.getElementById('connectedPeerCount')
const connectedPeersListEl = document.getElementById('connectedPeersList')
const logEl = document.getElementById('runningLog')
const nodeIdEl = document.getElementById('nodeId')
document.addEventListener('DOMContentLoaded', async () => {
const helia = window.helia = await instantiateHeliaNode()
window.heliaFs = await HeliaUnixfs.unixfs(helia)
helia.libp2p.addEventListener('peer:discovery', (evt) => {
window.discoveredPeers.set(evt.detail.id.toString(), evt.detail)
addToLog(`Discovered peer ${evt.detail.id.toString()}`)
})
helia.libp2p.addEventListener('peer:connect', (evt) => {
addToLog(`Connected to ${evt.detail.toString()}`)
})
helia.libp2p.addEventListener('peer:disconnect', (evt) => {
addToLog(`Disconnected from ${evt.detail.toString()}`)
})
setInterval(() => {
statusValueEl.innerHTML = helia.libp2p.isStarted() ? 'Online' : 'Offline'
updateConnectedPeers()
updateDiscoveredPeers()
}, 500)
const id = await helia.libp2p.peerId.toString()
nodeIdEl.innerHTML = id
/**
* You can write more code here to use it.
*
* https://github.com/ipfs/helia
* - helia.start
* - helia.stop
*
* https://github.com/ipfs/helia-unixfs
* - heliaFs.addBytes
* - heliaFs.addFile
* - heliaFs.ls
* - heliaFs.cat
*/
})
function ms2TimeString (a) {
const k = a % 1e3
const s = a / 1e3 % 60 | 0
const m = a / 6e4 % 60 | 0
const h = a / 36e5 % 24 | 0
return (h ? (h < 10 ? '0' + h : h) + ':' : '00:') +
(m < 10 ? 0 : '') + m + ':' +
(s < 10 ? 0 : '') + s + ':' +
(k < 100 ? k < 10 ? '00' : 0 : '') + k
}
const getLogLineEl = (msg) => {
const logLine = document.createElement('span')
logLine.innerHTML = `${ms2TimeString(performance.now())} - ${msg}`
return logLine
}
const addToLog = (msg) => {
if(logging) {
logEl.appendChild(getLogLineEl(msg))
}
}
let heliaInstance = null
const instantiateHeliaNode = async () => {
heliaInstance = await Helia.createHelia()
addToLog('Created Helia instance')
return heliaInstance
}
window.discoveredPeers = new Map()
const updateConnectedPeers = () => {
const peers = window.helia.libp2p.getPeers()
connectedPeerCountEl.innerHTML = peers.length
connectedPeersListEl.innerHTML = ''
for (const peer of peers) {
const peerEl = document.createElement('li')
peerEl.innerText = peer.toString()
connectedPeersListEl.appendChild(peerEl)
}
}
const updateDiscoveredPeers = () => {
discoveredPeerCountEl.innerHTML = window.discoveredPeers.size
}
</script>
<script>
async function testdata(){
if ((window.helia.libp2p.getPeers()).length > 5) {
const s = HeliaStrings.strings(helia)
const myImmutableAddress = await s.add('hello world ' + Math.random())
document.getElementById('testStatusValue').innerText = 'Added to local IPFS'
console.log(myImmutableAddress.toString())
document.getElementById('testHash').innerHTML = '<a href="https://ipfs.io/ipfs/'+myImmutableAddress.toString()+'" target="_blank">'+myImmutableAddress.toString()+'</a>'
document.getElementById('testContent').innerHTML = await s.get(myImmutableAddress)
console.log(await s.get(myImmutableAddress))
} else {
setTimeout(testdata, 1000)
}
}
setTimeout(testdata, 5000)
</script>
<script nomodule>
alert('Your browser does not support importing ESM modules')
</script>
</html>
``` | True | Storing data on the network not possible with helia in the browser? - I was trying to get some sort of "hello world" project going in helia.
Retrieving data is no issue, but storing my own does not seem to work.
This is my test file: [helia_.html](https://ipfs.io/ipfs/QmYUrkVUfNQ7pkhtKEnmo1c1H487Re7bSo9cR3U3FyNJtZ?filename=helia_.html)
looking for the file on https://ipfs.io/ipfs/CID or trying to get the data via cid in another ipfs client does not work.
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>IPFS in the Browser via Helia</title>
<link rel="icon favicon" href="https://unpkg.com/@helia/css@1.0.1/logos/favicon.ico" />
<link href="https://cdn.jsdelivr.net/npm/prismjs/themes/prism.css" rel="stylesheet" />
</head>
<body>
<h1>IPFS in the Browser via Helia</h1>
This is testing if you can successfully store data on the network, using helia.
</p>
<hr />
<div>
<button onclick="logging = true">Enable Logging</button>
<button onclick="logging = false">Disable Logging</button>
</div>
<h1 id="status">Node status: <span id="statusValue">Not Started</span></h1>
<div id="nodeInfo">
<h3>ID: <span id="nodeId">unknown</span></h3>
<h3>Discovered Peers: <span id="discoveredPeerCount">0</span></h3>
<h3>Connected Peers: <span id="connectedPeerCount">0</span></h3>
<ul id="connectedPeersList"></ul>
</div>
<hr />
<h1 id="testStatus">Test data status: <span id="testStatusValue">Not Added</span></h1>
<div>
<h3>Hash: <span id="testHash">-</span></h3>
<h3>Content: <span id="testContent">-</span></h3>
</div>
<hr />
<h2>Event Log:</h2>
<article id="runningLog"></article>
</body>
<style>
#runningLog span {
display: block;
}
</style>
<script src="https://cdn.jsdelivr.net/npm/prismjs/components/prism-core.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/prismjs/plugins/autoloader/prism-autoloader.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/helia@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@helia/unixfs@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/libp2p@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@chainsafe/libp2p-yamux@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@chainsafe/libp2p-noise@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@libp2p/websockets@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@libp2p/bootstrap@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/blockstore-core@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/datastore-core@latest/dist/index.min.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/@libp2p/kad-dht@latest/dist/index.min.js" defer></script>
<script src="https://unpkg.com/@helia/strings/dist/index.min.js"></script>
<script>
var logging = false
</script>
<script type="module" defer>
/* global Helia, BlockstoreCore, DatastoreCore, HeliaUnixfs */
const statusValueEl = document.getElementById('statusValue')
const discoveredPeerCountEl = document.getElementById('discoveredPeerCount')
const connectedPeerCountEl = document.getElementById('connectedPeerCount')
const connectedPeersListEl = document.getElementById('connectedPeersList')
const logEl = document.getElementById('runningLog')
const nodeIdEl = document.getElementById('nodeId')
document.addEventListener('DOMContentLoaded', async () => {
const helia = window.helia = await instantiateHeliaNode()
window.heliaFs = await HeliaUnixfs.unixfs(helia)
helia.libp2p.addEventListener('peer:discovery', (evt) => {
window.discoveredPeers.set(evt.detail.id.toString(), evt.detail)
addToLog(`Discovered peer ${evt.detail.id.toString()}`)
})
helia.libp2p.addEventListener('peer:connect', (evt) => {
addToLog(`Connected to ${evt.detail.toString()}`)
})
helia.libp2p.addEventListener('peer:disconnect', (evt) => {
addToLog(`Disconnected from ${evt.detail.toString()}`)
})
setInterval(() => {
statusValueEl.innerHTML = helia.libp2p.isStarted() ? 'Online' : 'Offline'
updateConnectedPeers()
updateDiscoveredPeers()
}, 500)
const id = await helia.libp2p.peerId.toString()
nodeIdEl.innerHTML = id
/**
* You can write more code here to use it.
*
* https://github.com/ipfs/helia
* - helia.start
* - helia.stop
*
* https://github.com/ipfs/helia-unixfs
* - heliaFs.addBytes
* - heliaFs.addFile
* - heliaFs.ls
* - heliaFs.cat
*/
})
function ms2TimeString (a) {
const k = a % 1e3
const s = a / 1e3 % 60 | 0
const m = a / 6e4 % 60 | 0
const h = a / 36e5 % 24 | 0
return (h ? (h < 10 ? '0' + h : h) + ':' : '00:') +
(m < 10 ? 0 : '') + m + ':' +
(s < 10 ? 0 : '') + s + ':' +
(k < 100 ? k < 10 ? '00' : 0 : '') + k
}
const getLogLineEl = (msg) => {
const logLine = document.createElement('span')
logLine.innerHTML = `${ms2TimeString(performance.now())} - ${msg}`
return logLine
}
const addToLog = (msg) => {
if(logging) {
logEl.appendChild(getLogLineEl(msg))
}
}
let heliaInstance = null
const instantiateHeliaNode = async () => {
heliaInstance = await Helia.createHelia()
addToLog('Created Helia instance')
return heliaInstance
}
window.discoveredPeers = new Map()
const updateConnectedPeers = () => {
const peers = window.helia.libp2p.getPeers()
connectedPeerCountEl.innerHTML = peers.length
connectedPeersListEl.innerHTML = ''
for (const peer of peers) {
const peerEl = document.createElement('li')
peerEl.innerText = peer.toString()
connectedPeersListEl.appendChild(peerEl)
}
}
const updateDiscoveredPeers = () => {
discoveredPeerCountEl.innerHTML = window.discoveredPeers.size
}
</script>
<script>
async function testdata(){
if ((window.helia.libp2p.getPeers()).length > 5) {
const s = HeliaStrings.strings(helia)
const myImmutableAddress = await s.add('hello world ' + Math.random())
document.getElementById('testStatusValue').innerText = 'Added to local IPFS'
console.log(myImmutableAddress.toString())
document.getElementById('testHash').innerHTML = '<a href="https://ipfs.io/ipfs/'+myImmutableAddress.toString()+'" target="_blank">'+myImmutableAddress.toString()+'</a>'
document.getElementById('testContent').innerHTML = await s.get(myImmutableAddress)
console.log(await s.get(myImmutableAddress))
} else {
setTimeout(testdata, 1000)
}
}
setTimeout(testdata, 5000)
</script>
<script nomodule>
alert('Your browser does not support importing ESM modules')
</script>
</html>
``` | main | storing data on the network not possible with helia in the browser i was trying to get some sort of hello world project going in helia retrieving data is no issue but storing my own does not seem to work this is my test file looking for the file on or trying to get the data via cid in another ipfs client does not work ipfs in the browser via helia ipfs in the browser via helia this is testing if you can successfully store data on the network using helia enable logging disable logging node status not started id unknown discovered peers connected peers test data status not added hash content event log runninglog span display block script src var logging false global helia blockstorecore datastorecore heliaunixfs const statusvalueel document getelementbyid statusvalue const discoveredpeercountel document getelementbyid discoveredpeercount const connectedpeercountel document getelementbyid connectedpeercount const connectedpeerslistel document getelementbyid connectedpeerslist const logel document getelementbyid runninglog const nodeidel document getelementbyid nodeid document addeventlistener domcontentloaded async const helia window helia await instantiatehelianode window heliafs await heliaunixfs unixfs helia helia addeventlistener peer discovery evt window discoveredpeers set evt detail id tostring evt detail addtolog discovered peer evt detail id tostring helia addeventlistener peer connect evt addtolog connected to evt detail tostring helia addeventlistener peer disconnect evt addtolog disconnected from evt detail tostring setinterval statusvalueel innerhtml helia isstarted online offline updateconnectedpeers updatediscoveredpeers const id await helia peerid tostring nodeidel innerhtml id you can write more code here to use it helia start helia stop heliafs addbytes heliafs addfile heliafs ls heliafs cat function a const k a const s a const m a const h a return h h h h m m s s k k k const getloglineel msg const logline document createelement span logline innerhtml performance now msg return logline const addtolog msg if logging logel appendchild getloglineel msg let heliainstance null const instantiatehelianode async heliainstance await helia createhelia addtolog created helia instance return heliainstance window discoveredpeers new map const updateconnectedpeers const peers window helia getpeers connectedpeercountel innerhtml peers length connectedpeerslistel innerhtml for const peer of peers const peerel document createelement li peerel innertext peer tostring connectedpeerslistel appendchild peerel const updatediscoveredpeers discoveredpeercountel innerhtml window discoveredpeers size async function testdata if window helia getpeers length const s heliastrings strings helia const myimmutableaddress await s add hello world math random document getelementbyid teststatusvalue innertext added to local ipfs console log myimmutableaddress tostring document getelementbyid testhash innerhtml myimmutableaddress tostring document getelementbyid testcontent innerhtml await s get myimmutableaddress console log await s get myimmutableaddress else settimeout testdata settimeout testdata alert your browser does not support importing esm modules | 1 |
78,197 | 27,365,396,463 | IssuesEvent | 2023-02-27 18:45:34 | scipy/scipy | https://api.github.com/repos/scipy/scipy | reopened | BUG: solve_ivp inaccurate for piecewise-smooth system | defect scipy.integrate | ### Describe your issue.
When solving a simple piecewise-smooth system using the default configuration for solve_ivp (specifically rtol=1e-3 and atol=1e-6), the solution is significantly incorrect (on the order of 1e-2). I've compared the solution with a manual rk4 solver and as expected the solution quickly converges to the same oscillation, regardless of initial condition. Whereas solve_ivp produces two very different solutions even when the initial conditions differ by 1e-7, and both of these solutions are completely wrong.
The output from solve_ivp can be made accurate by decreasing the tolerances, but I would have thought that the default values should be enough to get a reasonably accurate solution. It seems that the solver is not taking sufficiently small step sizes in order to correctly catch the transition in the ODE. I also tried adding in an event function with the expectation that the step size would correctly shrink close to this transition but it doesn't seem to effect the accuracy, just outputs the time of the event.
The attached code solves the same ODE with manual rk4 and solve_ivp. The rk4 solutions converge correctly despite different initial conditions, and the solve_ivp solutions do not, despite very similar initial conditions.
Thanks
### Reproducing Code Example
```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
t = np.linspace(0,300,3001)
def dy(t, y):
x = 0.02 * np.sin(t) + 0.4
if x >= y:
b = 0.2
else:
b = 0.02
return b * (x - y)
def rk4(t,y,h):
k1 = h*dy(t, y)
k2 = h*dy(t, y + k1/2)
k3 = h*dy(t, y + k2/2)
k4 = h*dy(t, y + k3)
return (k1 + 2*k2 + 2*k3 + k4)/6
def solve_rk4(dy,t_span,y0,max_step):
dt = t_span[1]-t_span[0]
sol = np.empty_like(t_span)
sol[0] = y0
sub_step_n = int(dt/max_step)
for i,t in enumerate(t_span[:-1]):
yi = sol[i]
for j in range(sub_step_n):
yi += rk4(t,yi,max_step)
sol[i+1] = yi
return sol
def event(t,y):
return 0.02 * np.sin(t) + 0.4 - y
sol1 = solve_ivp(dy, t[[0,-1]], (0.42,), events=event, t_eval=t, rtol=1e-3, atol=1e-6).y.squeeze()
sol2 = solve_ivp(dy, t[[0,-1]], (0.4200001,), events=event, t_eval=t, rtol=1e-3, atol=1e-6).y.squeeze()
sol3 = solve_rk4(dy, t, 0.42, 0.1)
sol4 = solve_rk4(dy, t, 0.43, 0.1)
plt.plot(t,sol1,label='solve_ivp 0.42')
plt.plot(t,sol2,label='solve_ivp 0.4200001')
plt.plot(t,sol3,label='RK4 0.42')
plt.plot(t,sol4,label='RK4 0.43')
plt.legend()
plt.show()
```
### Error message
```shell
NA
```
### SciPy/NumPy/Python version information
blas_mkl_info: NOT AVAILABLE blis_info: NOT AVAILABLE openblas_info: libraries = ['openblas', 'openblas'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None)] runtime_library_dirs = ['/usr/local/lib'] blas_opt_info: libraries = ['openblas', 'openblas'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None)] runtime_library_dirs = ['/usr/local/lib'] lapack_mkl_info: NOT AVAILABLE openblas_lapack_info: libraries = ['openblas', 'openblas'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None)] runtime_library_dirs = ['/usr/local/lib'] lapack_opt_info: libraries = ['openblas', 'openblas'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None)] runtime_library_dirs = ['/usr/local/lib'] Supported SIMD extensions in this NumPy install: baseline = SSE,SSE2,SSE3 found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2,AVX512F,AVX512CD,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL not found = AVX512_KNL | 1.0 | BUG: solve_ivp inaccurate for piecewise-smooth system - ### Describe your issue.
When solving a simple piecewise-smooth system using the default configuration for solve_ivp (specifically rtol=1e-3 and atol=1e-6), the solution is significantly incorrect (on the order of 1e-2). I've compared the solution with a manual rk4 solver and as expected the solution quickly converges to the same oscillation, regardless of initial condition. Whereas solve_ivp produces two very different solutions even when the initial conditions differ by 1e-7, and both of these solutions are completely wrong.
The output from solve_ivp can be made accurate by decreasing the tolerances, but I would have thought that the default values should be enough to get a reasonably accurate solution. It seems that the solver is not taking sufficiently small step sizes in order to correctly catch the transition in the ODE. I also tried adding in an event function with the expectation that the step size would correctly shrink close to this transition but it doesn't seem to effect the accuracy, just outputs the time of the event.
The attached code solves the same ODE with manual rk4 and solve_ivp. The rk4 solutions converge correctly despite different initial conditions, and the solve_ivp solutions do not, despite very similar initial conditions.
Thanks
### Reproducing Code Example
```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
t = np.linspace(0,300,3001)
def dy(t, y):
x = 0.02 * np.sin(t) + 0.4
if x >= y:
b = 0.2
else:
b = 0.02
return b * (x - y)
def rk4(t,y,h):
k1 = h*dy(t, y)
k2 = h*dy(t, y + k1/2)
k3 = h*dy(t, y + k2/2)
k4 = h*dy(t, y + k3)
return (k1 + 2*k2 + 2*k3 + k4)/6
def solve_rk4(dy,t_span,y0,max_step):
dt = t_span[1]-t_span[0]
sol = np.empty_like(t_span)
sol[0] = y0
sub_step_n = int(dt/max_step)
for i,t in enumerate(t_span[:-1]):
yi = sol[i]
for j in range(sub_step_n):
yi += rk4(t,yi,max_step)
sol[i+1] = yi
return sol
def event(t,y):
return 0.02 * np.sin(t) + 0.4 - y
sol1 = solve_ivp(dy, t[[0,-1]], (0.42,), events=event, t_eval=t, rtol=1e-3, atol=1e-6).y.squeeze()
sol2 = solve_ivp(dy, t[[0,-1]], (0.4200001,), events=event, t_eval=t, rtol=1e-3, atol=1e-6).y.squeeze()
sol3 = solve_rk4(dy, t, 0.42, 0.1)
sol4 = solve_rk4(dy, t, 0.43, 0.1)
plt.plot(t,sol1,label='solve_ivp 0.42')
plt.plot(t,sol2,label='solve_ivp 0.4200001')
plt.plot(t,sol3,label='RK4 0.42')
plt.plot(t,sol4,label='RK4 0.43')
plt.legend()
plt.show()
```
### Error message
```shell
NA
```
### SciPy/NumPy/Python version information
blas_mkl_info: NOT AVAILABLE blis_info: NOT AVAILABLE openblas_info: libraries = ['openblas', 'openblas'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None)] runtime_library_dirs = ['/usr/local/lib'] blas_opt_info: libraries = ['openblas', 'openblas'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None)] runtime_library_dirs = ['/usr/local/lib'] lapack_mkl_info: NOT AVAILABLE openblas_lapack_info: libraries = ['openblas', 'openblas'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None)] runtime_library_dirs = ['/usr/local/lib'] lapack_opt_info: libraries = ['openblas', 'openblas'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None)] runtime_library_dirs = ['/usr/local/lib'] Supported SIMD extensions in this NumPy install: baseline = SSE,SSE2,SSE3 found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2,AVX512F,AVX512CD,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL not found = AVX512_KNL | non_main | bug solve ivp inaccurate for piecewise smooth system describe your issue when solving a simple piecewise smooth system using the default configuration for solve ivp specifically rtol and atol the solution is significantly incorrect on the order of i ve compared the solution with a manual solver and as expected the solution quickly converges to the same oscillation regardless of initial condition whereas solve ivp produces two very different solutions even when the initial conditions differ by and both of these solutions are completely wrong the output from solve ivp can be made accurate by decreasing the tolerances but i would have thought that the default values should be enough to get a reasonably accurate solution it seems that the solver is not taking sufficiently small step sizes in order to correctly catch the transition in the ode i also tried adding in an event function with the expectation that the step size would correctly shrink close to this transition but it doesn t seem to effect the accuracy just outputs the time of the event the attached code solves the same ode with manual and solve ivp the solutions converge correctly despite different initial conditions and the solve ivp solutions do not despite very similar initial conditions thanks reproducing code example python import numpy as np from scipy integrate import solve ivp import matplotlib pyplot as plt t np linspace def dy t y x np sin t if x y b else b return b x y def t y h h dy t y h dy t y h dy t y h dy t y return def solve dy t span max step dt t span t span sol np empty like t span sol sub step n int dt max step for i t in enumerate t span yi sol for j in range sub step n yi t yi max step sol yi return sol def event t y return np sin t y solve ivp dy t events event t eval t rtol atol y squeeze solve ivp dy t events event t eval t rtol atol y squeeze solve dy t solve dy t plt plot t label solve ivp plt plot t label solve ivp plt plot t label plt plot t label plt legend plt show error message shell na scipy numpy python version information blas mkl info not available blis info not available openblas info libraries library dirs language c define macros runtime library dirs blas opt info libraries library dirs language c define macros runtime library dirs lapack mkl info not available openblas lapack info libraries library dirs language c define macros runtime library dirs lapack opt info libraries library dirs language c define macros runtime library dirs supported simd extensions in this numpy install baseline sse found popcnt avx skx clx cnl icl not found knl | 0 |
7,663 | 8,028,683,124 | IssuesEvent | 2018-07-27 13:43:56 | ga4gh/dockstore | https://api.github.com/repos/ga4gh/dockstore | closed | Descriptor Type Pagination Sorting | enhancement gui2 web service | ## Feature Request
### Desired behaviour
Allow descriptor type pagination sorting in the Dockstore webservice (ascending and descending)
| 1.0 | Descriptor Type Pagination Sorting - ## Feature Request
### Desired behaviour
Allow descriptor type pagination sorting in the Dockstore webservice (ascending and descending)
| non_main | descriptor type pagination sorting feature request desired behaviour allow descriptor type pagination sorting in the dockstore webservice ascending and descending | 0 |
3,799 | 16,332,052,140 | IssuesEvent | 2021-05-12 10:24:35 | precice/precice | https://api.github.com/repos/precice/precice | opened | Simplify geometric operations on mesh primitives | good first issue help wanted maintainability | **Please describe the problem you are trying to solve.**
The current implementation of the barycentric coordinates are overly complicated and explicitly require the normals of triangles or edges. There are simpler ways to implement this.
**Describe the solution you propose.**
* Simplify `math/barycenter.[ch]pp`
* Remove `Edge::computeNormal()` as it is unnecessary.
**Describe alternatives you've considered**
_none_
**Additional context**
#1000
#179
| True | Simplify geometric operations on mesh primitives - **Please describe the problem you are trying to solve.**
The current implementation of the barycentric coordinates are overly complicated and explicitly require the normals of triangles or edges. There are simpler ways to implement this.
**Describe the solution you propose.**
* Simplify `math/barycenter.[ch]pp`
* Remove `Edge::computeNormal()` as it is unnecessary.
**Describe alternatives you've considered**
_none_
**Additional context**
#1000
#179
| main | simplify geometric operations on mesh primitives please describe the problem you are trying to solve the current implementation of the barycentric coordinates are overly complicated and explicitly require the normals of triangles or edges there are simpler ways to implement this describe the solution you propose simplify math barycenter pp remove edge computenormal as it is unnecessary describe alternatives you ve considered none additional context | 1 |
5,139 | 26,199,517,098 | IssuesEvent | 2023-01-03 16:12:59 | ipfs/aegir | https://api.github.com/repos/ipfs/aegir | closed | Standardize automatic NPM publishing | P1 kind/maintenance need/analysis effort/weeks need/maintainer-input | This is a placeholder issue for figuring out interplanetary conventions for automatic publishing to NPM.
@hugomrdias lmk if you have any preference here, below is my summary of past conversations
## Current state
- We have competing conventions and publishing flows across JS libs (IPFS/libp2p/ipld/multiformates):
- JS IPFS libs usually require maintainer to do manual publishing via `aegir release`
- Various IPLD repos use [mikeal/merge-release](https://github.com/mikeal/merge-release) + Github Action to publish to NPM on every merge to the main branch (major, minor or patch sem. version is picked based on commit messages)
## Unifying publishing across JS repos
Details TBD, but during triage discussion we hinted at upstreaming some convention, perhaps to aegir/CI template, just like we did for tests and other checks in https://github.com/ipfs/aegir/blob/master/md/github-actions.md
**Hard requirement:** adopted convention should work fine in repos without aegir.
### Prior Art
- [mikeal/merge-release](https://github.com/mikeal/merge-release) avoid commits on purpose
- merge-release had some problems recently dealing with >1 commits in a merge and figuring out the right semver bump (a major ended up as a patch and required a scramble to fix – annoying, but fixable)
- [release-please](https://github.com/googleapis/release-please) makes a PR so the maintainer is the one making the commits.
- This hybrid approach gives maintainer a bit of control (to batch changes together or to release every commit), and makes it possible to eyeball what the next release will be, before it's released.
- Seems to be blocked on support for npm workspaces/monorepos like js-ipfs one
- Release-please also has an extra tool that allows 2FA to be on and avoid automation tokens. But you need to configure it and deploy it to Google engine thing and that will do the 2FA for you.
- @hugomrdias did a PoC in [hugomrdias/playwright-test ](https://github.com/hugomrdias/playwright-test) where a PR is created from any outstanding changes and publish to NPM occurs when the aggregator PR is merged.
- [semantic-release](https://semantic-release.gitbook.io/semantic-release/) is what lerna uses behind the scenes, so it's being used by js-ipfs – we can see what the generated release notes look like, etc.
- note from @achingbrain regarding js-ipfs monorepo:
> Whatever solution we settle on, it should support monorepos as well as single-module repos, that way we can use npm 7 workspaces for dep hoisting and running scripts in packages, and this tool for releases and we can drop lerna.
Lerna works and works well, but npm 7 is faster and the long term maintenance of the project has looked a bit wobbly for a while.
| True | Standardize automatic NPM publishing - This is a placeholder issue for figuring out interplanetary conventions for automatic publishing to NPM.
@hugomrdias lmk if you have any preference here, below is my summary of past conversations
## Current state
- We have competing conventions and publishing flows across JS libs (IPFS/libp2p/ipld/multiformates):
- JS IPFS libs usually require maintainer to do manual publishing via `aegir release`
- Various IPLD repos use [mikeal/merge-release](https://github.com/mikeal/merge-release) + Github Action to publish to NPM on every merge to the main branch (major, minor or patch sem. version is picked based on commit messages)
## Unifying publishing across JS repos
Details TBD, but during triage discussion we hinted at upstreaming some convention, perhaps to aegir/CI template, just like we did for tests and other checks in https://github.com/ipfs/aegir/blob/master/md/github-actions.md
**Hard requirement:** adopted convention should work fine in repos without aegir.
### Prior Art
- [mikeal/merge-release](https://github.com/mikeal/merge-release) avoid commits on purpose
- merge-release had some problems recently dealing with >1 commits in a merge and figuring out the right semver bump (a major ended up as a patch and required a scramble to fix – annoying, but fixable)
- [release-please](https://github.com/googleapis/release-please) makes a PR so the maintainer is the one making the commits.
- This hybrid approach gives maintainer a bit of control (to batch changes together or to release every commit), and makes it possible to eyeball what the next release will be, before it's released.
- Seems to be blocked on support for npm workspaces/monorepos like js-ipfs one
- Release-please also has an extra tool that allows 2FA to be on and avoid automation tokens. But you need to configure it and deploy it to Google engine thing and that will do the 2FA for you.
- @hugomrdias did a PoC in [hugomrdias/playwright-test ](https://github.com/hugomrdias/playwright-test) where a PR is created from any outstanding changes and publish to NPM occurs when the aggregator PR is merged.
- [semantic-release](https://semantic-release.gitbook.io/semantic-release/) is what lerna uses behind the scenes, so it's being used by js-ipfs – we can see what the generated release notes look like, etc.
- note from @achingbrain regarding js-ipfs monorepo:
> Whatever solution we settle on, it should support monorepos as well as single-module repos, that way we can use npm 7 workspaces for dep hoisting and running scripts in packages, and this tool for releases and we can drop lerna.
Lerna works and works well, but npm 7 is faster and the long term maintenance of the project has looked a bit wobbly for a while.
| main | standardize automatic npm publishing this is a placeholder issue for figuring out interplanetary conventions for automatic publishing to npm hugomrdias lmk if you have any preference here below is my summary of past conversations current state we have competing conventions and publishing flows across js libs ipfs ipld multiformates js ipfs libs usually require maintainer to do manual publishing via aegir release various ipld repos use github action to publish to npm on every merge to the main branch major minor or patch sem version is picked based on commit messages unifying publishing across js repos details tbd but during triage discussion we hinted at upstreaming some convention perhaps to aegir ci template just like we did for tests and other checks in hard requirement adopted convention should work fine in repos without aegir prior art avoid commits on purpose merge release had some problems recently dealing with commits in a merge and figuring out the right semver bump a major ended up as a patch and required a scramble to fix – annoying but fixable makes a pr so the maintainer is the one making the commits this hybrid approach gives maintainer a bit of control to batch changes together or to release every commit and makes it possible to eyeball what the next release will be before it s released seems to be blocked on support for npm workspaces monorepos like js ipfs one release please also has an extra tool that allows to be on and avoid automation tokens but you need to configure it and deploy it to google engine thing and that will do the for you hugomrdias did a poc in where a pr is created from any outstanding changes and publish to npm occurs when the aggregator pr is merged is what lerna uses behind the scenes so it s being used by js ipfs – we can see what the generated release notes look like etc note from achingbrain regarding js ipfs monorepo whatever solution we settle on it should support monorepos as well as single module repos that way we can use npm workspaces for dep hoisting and running scripts in packages and this tool for releases and we can drop lerna lerna works and works well but npm is faster and the long term maintenance of the project has looked a bit wobbly for a while | 1 |
57,924 | 16,166,871,293 | IssuesEvent | 2021-05-01 17:23:24 | ascott18/TellMeWhen | https://api.github.com/repos/ascott18/TellMeWhen | closed | [Bug] Can't move a locked group | S: invalid T: defect | **What version of TellMeWhen are you using? **
v9.0.6 Classic
**What steps will reproduce the problem?**
1. Left click on an icon
2. Drag
**What do you expect to happen? What happens instead?**
Expect Icon to move, with my mouse, but it does not.
**Screenshots and Export Strings**
**String 1 [Icon: Buff/Debuff]**
```
^1^T^SType^Sbuff ^SName^SCorruption;~`Curse~`of~`Agony;~`Immolate;~`Siphon~`Life ^SEnabled^B ^t^N90601^S~`~| ^Sicon^^
```
**String 2 [Group: Dots (Group: 1)]**
```
^1^T^SGUID^STMW:group:1U6ucUgXeR8R ^SScale^F4866192577658899 ^f-52^SRows ^N4^STextureName ^SDetails~`Serenity^SOnlyInCombat ^B^SLocked ^B^SView ^Sbar^SColumns ^N1^SLayoutDirection ^N5^SName ^SDoTs^SSettingsPerView ^T^Sbar^T ^SSizeX^N135.9 ^SSizeY^N19.9 ^SPadding^N1 ^t^t^SIcons^T ^N1^T ^SType^Sbuff ^SName^SCorruption;~`Curse~`of~`Agony;~`Immolate;~`Siphon~`Life ^SEnabled^B ^t^N2^T ^SType^Scooldown ^SName^SCurse~`of~`Agony ^SEnabled^B ^t^N3^T ^SType^Scooldown ^SName^SImmolate ^SEnabled^B ^t^N4^T ^SType^Scooldown ^SName^SSiphon~`Life ^SEnabled^B ^t^t^SPoint^T ^Sy^F5266421121111656 ^f-45^Sx ^F-7403317190447788^f-46 ^Spoint^SRIGHT ^SrelativePoint^SRIGHT ^t^t^N90601^S~`~| ^Sgroup^N1 ^^
```

**Additional Info**
Regarding the image, the top group is the one that won't move. The Icons below (each is its own group) can be dragged perfectly fine
| 1.0 | [Bug] Can't move a locked group - **What version of TellMeWhen are you using? **
v9.0.6 Classic
**What steps will reproduce the problem?**
1. Left click on an icon
2. Drag
**What do you expect to happen? What happens instead?**
Expect Icon to move, with my mouse, but it does not.
**Screenshots and Export Strings**
**String 1 [Icon: Buff/Debuff]**
```
^1^T^SType^Sbuff ^SName^SCorruption;~`Curse~`of~`Agony;~`Immolate;~`Siphon~`Life ^SEnabled^B ^t^N90601^S~`~| ^Sicon^^
```
**String 2 [Group: Dots (Group: 1)]**
```
^1^T^SGUID^STMW:group:1U6ucUgXeR8R ^SScale^F4866192577658899 ^f-52^SRows ^N4^STextureName ^SDetails~`Serenity^SOnlyInCombat ^B^SLocked ^B^SView ^Sbar^SColumns ^N1^SLayoutDirection ^N5^SName ^SDoTs^SSettingsPerView ^T^Sbar^T ^SSizeX^N135.9 ^SSizeY^N19.9 ^SPadding^N1 ^t^t^SIcons^T ^N1^T ^SType^Sbuff ^SName^SCorruption;~`Curse~`of~`Agony;~`Immolate;~`Siphon~`Life ^SEnabled^B ^t^N2^T ^SType^Scooldown ^SName^SCurse~`of~`Agony ^SEnabled^B ^t^N3^T ^SType^Scooldown ^SName^SImmolate ^SEnabled^B ^t^N4^T ^SType^Scooldown ^SName^SSiphon~`Life ^SEnabled^B ^t^t^SPoint^T ^Sy^F5266421121111656 ^f-45^Sx ^F-7403317190447788^f-46 ^Spoint^SRIGHT ^SrelativePoint^SRIGHT ^t^t^N90601^S~`~| ^Sgroup^N1 ^^
```

**Additional Info**
Regarding the image, the top group is the one that won't move. The Icons below (each is its own group) can be dragged perfectly fine
| non_main | can t move a locked group what version of tellmewhen are you using classic what steps will reproduce the problem left click on an icon drag what do you expect to happen what happens instead expect icon to move with my mouse but it does not screenshots and export strings string t stype sbuff sname scorruption curse of agony immolate siphon life senabled b t s sicon string t sguid stmw group sscale f srows stexturename sdetails serenity sonlyincombat b slocked b sview sbar scolumns slayoutdirection sname sdots ssettingsperview t sbar t ssizex ssizey spadding t t sicons t t stype sbuff sname scorruption curse of agony immolate siphon life senabled b t t stype scooldown sname scurse of agony senabled b t t stype scooldown sname simmolate senabled b t t stype scooldown sname ssiphon life senabled b t t spoint t sy f sx f f spoint sright srelativepoint sright t t s sgroup additional info regarding the image the top group is the one that won t move the icons below each is its own group can be dragged perfectly fine | 0 |
113,686 | 11,812,094,460 | IssuesEvent | 2020-03-19 19:28:06 | jonrau1/ElectricEye | https://api.github.com/repos/jonrau1/ElectricEye | closed | Migrate from Severity.Normalized to Severity.Label | documentation enhancement | **Story**
As a user of ElectricEye, I want to migrate my finding Severity from `Severity.Normalized` to `Severity.Label` so that I can have the latest ASFF changes reflected in my changes and have an easy to read and parse finding severity that is based on strings and not integers.
**Definition of Done**
- All instances of `Severity.Normalized` translated to `Severity.Label`
- Updated documentation
**Nice to Have**
We should consider changing the CloudWatch Event / EventBridge Rules that look at the `ProductFields.aws/securityhub/SeverityLabel` to `Severity.Label` instead as that namespace is populated on the backend
**Additional Information**
https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format.html#asff-severity | 1.0 | Migrate from Severity.Normalized to Severity.Label - **Story**
As a user of ElectricEye, I want to migrate my finding Severity from `Severity.Normalized` to `Severity.Label` so that I can have the latest ASFF changes reflected in my changes and have an easy to read and parse finding severity that is based on strings and not integers.
**Definition of Done**
- All instances of `Severity.Normalized` translated to `Severity.Label`
- Updated documentation
**Nice to Have**
We should consider changing the CloudWatch Event / EventBridge Rules that look at the `ProductFields.aws/securityhub/SeverityLabel` to `Severity.Label` instead as that namespace is populated on the backend
**Additional Information**
https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format.html#asff-severity | non_main | migrate from severity normalized to severity label story as a user of electriceye i want to migrate my finding severity from severity normalized to severity label so that i can have the latest asff changes reflected in my changes and have an easy to read and parse finding severity that is based on strings and not integers definition of done all instances of severity normalized translated to severity label updated documentation nice to have we should consider changing the cloudwatch event eventbridge rules that look at the productfields aws securityhub severitylabel to severity label instead as that namespace is populated on the backend additional information | 0 |
238,603 | 18,245,597,930 | IssuesEvent | 2021-10-01 17:57:14 | uriahf/rtichoke | https://api.github.com/repos/uriahf/rtichoke | closed | Change naming convention, from "performance_table" to "performance_data" | documentation rtichoke function | "performance_data" is a better name than "performance_table" and it is usefull to distinguish between the data as an object and a rendered table.
The function `create_performance_table()` should be renamed to `prepare_performance_data()`.
The output should be tibble instead of data.frame. | 1.0 | Change naming convention, from "performance_table" to "performance_data" - "performance_data" is a better name than "performance_table" and it is usefull to distinguish between the data as an object and a rendered table.
The function `create_performance_table()` should be renamed to `prepare_performance_data()`.
The output should be tibble instead of data.frame. | non_main | change naming convention from performance table to performance data performance data is a better name than performance table and it is usefull to distinguish between the data as an object and a rendered table the function create performance table should be renamed to prepare performance data the output should be tibble instead of data frame | 0 |
5,116 | 26,046,767,841 | IssuesEvent | 2022-12-22 15:01:35 | jesus2099/konami-command | https://api.github.com/repos/jesus2099/konami-command | opened | Mobile: Remove current input refocus after click | client compatibility mb_POWER-VOTE improvement maintainability | This is seriously bogging me on mobile, scrolling back all the time.
See if removal of this feature will also be good for desktop. | True | Mobile: Remove current input refocus after click - This is seriously bogging me on mobile, scrolling back all the time.
See if removal of this feature will also be good for desktop. | main | mobile remove current input refocus after click this is seriously bogging me on mobile scrolling back all the time see if removal of this feature will also be good for desktop | 1 |
4,782 | 24,607,370,028 | IssuesEvent | 2022-10-14 17:34:22 | duckduckgo/zeroclickinfo-fathead | https://api.github.com/repos/duckduckgo/zeroclickinfo-fathead | closed | PerlDoc: Indentation broken. Code snippets need to be wrapped in `<pre><code>` | Bug Maintainer Input Requested Status: Needs a Developer Topic: Perl Skill: Perl | None of the code snippets for this Instant Answer are indented correctly.
I believe the cause is that our internal processing scripts is merging the spaces used for indentation because the code snippets are not wrapped in a `<pre><code>` block.
Once they are, the internal pipeline should no longer strip the extra spaces.
Example: https://duckduckgo.com/?q=perl+foreach&t=ffab&atb=v80-1_w&ia=web
------
IA Page: http://duck.co/ia/view/perl_doc
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @GuiltyDolphin | True | PerlDoc: Indentation broken. Code snippets need to be wrapped in `<pre><code>` - None of the code snippets for this Instant Answer are indented correctly.
I believe the cause is that our internal processing scripts is merging the spaces used for indentation because the code snippets are not wrapped in a `<pre><code>` block.
Once they are, the internal pipeline should no longer strip the extra spaces.
Example: https://duckduckgo.com/?q=perl+foreach&t=ffab&atb=v80-1_w&ia=web
------
IA Page: http://duck.co/ia/view/perl_doc
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @GuiltyDolphin | main | perldoc indentation broken code snippets need to be wrapped in none of the code snippets for this instant answer are indented correctly i believe the cause is that our internal processing scripts is merging the spaces used for indentation because the code snippets are not wrapped in a block once they are the internal pipeline should no longer strip the extra spaces example ia page guiltydolphin | 1 |
5,142 | 26,217,970,324 | IssuesEvent | 2023-01-04 12:38:19 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Table not found error | type: bug work: frontend status: ready restricted: maintainers | ## Steps to reproduce
1. Navigate to http://localhost:8000/mathesar_tables/
1. Click on "Library Management" to open schema with id `217` (or similar)
1. On the "Authors" table card, hover the "Go to Table" hyperlink. Observe that it uses schema `217` (or equivalent) in the URL. Good.
1. In the navigation header, open the "Choose a Schema" dropdown, and click on "public", navigating to http://localhost:8000/mathesar_tables/1/
1. Observe that table cards now use schema id `1` in their URLs. Good.
1. Use the same schema switcher to navigate back to "Library Management".
1. Hover the "Go to Table" hyperlinks for various tables.
1. Expect these URLs to use schema `217` (or equivalent).
1. Instead observe the table URLs to use schema `1`. Bad.
1. Clicking on one of these URLs rightly displays an error message like:
> Table with id 1267 not found.
| True | Table not found error - ## Steps to reproduce
1. Navigate to http://localhost:8000/mathesar_tables/
1. Click on "Library Management" to open schema with id `217` (or similar)
1. On the "Authors" table card, hover the "Go to Table" hyperlink. Observe that it uses schema `217` (or equivalent) in the URL. Good.
1. In the navigation header, open the "Choose a Schema" dropdown, and click on "public", navigating to http://localhost:8000/mathesar_tables/1/
1. Observe that table cards now use schema id `1` in their URLs. Good.
1. Use the same schema switcher to navigate back to "Library Management".
1. Hover the "Go to Table" hyperlinks for various tables.
1. Expect these URLs to use schema `217` (or equivalent).
1. Instead observe the table URLs to use schema `1`. Bad.
1. Clicking on one of these URLs rightly displays an error message like:
> Table with id 1267 not found.
| main | table not found error steps to reproduce navigate to click on library management to open schema with id or similar on the authors table card hover the go to table hyperlink observe that it uses schema or equivalent in the url good in the navigation header open the choose a schema dropdown and click on public navigating to observe that table cards now use schema id in their urls good use the same schema switcher to navigate back to library management hover the go to table hyperlinks for various tables expect these urls to use schema or equivalent instead observe the table urls to use schema bad clicking on one of these urls rightly displays an error message like table with id not found | 1 |
54,244 | 13,284,441,349 | IssuesEvent | 2020-08-24 06:18:15 | spack/spack | https://api.github.com/repos/spack/spack | opened | Installation issue: older versions of ocaml and heppdt should have fcommon to compile with gcc10 | build-error | ### Steps to reproduce the issue
$ spack install whizard@2.8.2
$ spack install heppdt@2.99.99his
This will pull in older version of ocaml and heppdt which will break on gcc10 without -fcommon
...
```
### Information on your system
[user@dda6cf507e3a packages]$ spack debug report
* **Spack:** 0.15.4
* **Python:** 3.8.5
* **Platform:** linux-mageia8-skylake
ocaml
450 /lib/spack/env/gcc/gcc -O2 -fno-strict-aliasing -fwrapv -Wall -fno-
tree-vrp -g -D_FILE_OFFSET_BITS=64 -D_REENTRANT -DCAML_NAME_SPACE
-DOCAML_STDLIB_DIR='"/opt/spack/linux-mageia8-skylake/gcc-10.2.0/oc
aml-4.08.1-f3m2tsdrxfzvn2gthrbthev4n2wnuth5/lib/ocaml"' -Wl,-E -o
ocamlruni prims.o libcamlruni.a -lm -ldl -lpthread
451 /usr/bin/ld/usr/bin/ld: : libcamlrund.a(backtrace_bd.o):/tmp/user/s
pack-stage/spack-stage-ocaml-4.08.1-f3m2tsdrxfzvn2gthrbthev4n2wnuth
5/spack-src/runtime/backtrace.c:libcamlruni.a(backtrace_bi.o)31: mu
ltiple definition of `caml_debug_info'; :/tmp/user/spack-stage/spac
k-stage-ocaml-4.08.1-f3m2tsdrxfzvn2gthrbthev4n2wnuth5/spack-src/run
time/backtrace.c:31: multiple definition of `caml_debug_info'; libc
amlrund.a(backtrace_byt_bd.o):/tmp/user/spack-stage/spack-stage-oca
ml-4.08.1-f3m2tsdrxfzvn2gthrbthev4n2wnuth5/spack-src/runtime/backtr
ace_byt.c:47: first defined here
452 libcamlruni.a(backtrace_byt_bi.o):/tmp/user/spack-stage/spack-stage
-ocaml-4.08.1-f3m2tsdrxfzvn2gthrbthev4n2wnuth5/spack-src/runtime/ba
cktrace_byt.c:47: first defined here
### Additional information
scemama
<!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. -->
### General information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ X] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ X] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [ X] I have uploaded the build log and environment files
- [ ] I have searched the issues of this repo and believe this is not a duplicate
| 1.0 | Installation issue: older versions of ocaml and heppdt should have fcommon to compile with gcc10 - ### Steps to reproduce the issue
$ spack install whizard@2.8.2
$ spack install heppdt@2.99.99his
This will pull in older version of ocaml and heppdt which will break on gcc10 without -fcommon
...
```
### Information on your system
[user@dda6cf507e3a packages]$ spack debug report
* **Spack:** 0.15.4
* **Python:** 3.8.5
* **Platform:** linux-mageia8-skylake
ocaml
450 /lib/spack/env/gcc/gcc -O2 -fno-strict-aliasing -fwrapv -Wall -fno-
tree-vrp -g -D_FILE_OFFSET_BITS=64 -D_REENTRANT -DCAML_NAME_SPACE
-DOCAML_STDLIB_DIR='"/opt/spack/linux-mageia8-skylake/gcc-10.2.0/oc
aml-4.08.1-f3m2tsdrxfzvn2gthrbthev4n2wnuth5/lib/ocaml"' -Wl,-E -o
ocamlruni prims.o libcamlruni.a -lm -ldl -lpthread
451 /usr/bin/ld/usr/bin/ld: : libcamlrund.a(backtrace_bd.o):/tmp/user/s
pack-stage/spack-stage-ocaml-4.08.1-f3m2tsdrxfzvn2gthrbthev4n2wnuth
5/spack-src/runtime/backtrace.c:libcamlruni.a(backtrace_bi.o)31: mu
ltiple definition of `caml_debug_info'; :/tmp/user/spack-stage/spac
k-stage-ocaml-4.08.1-f3m2tsdrxfzvn2gthrbthev4n2wnuth5/spack-src/run
time/backtrace.c:31: multiple definition of `caml_debug_info'; libc
amlrund.a(backtrace_byt_bd.o):/tmp/user/spack-stage/spack-stage-oca
ml-4.08.1-f3m2tsdrxfzvn2gthrbthev4n2wnuth5/spack-src/runtime/backtr
ace_byt.c:47: first defined here
452 libcamlruni.a(backtrace_byt_bi.o):/tmp/user/spack-stage/spack-stage
-ocaml-4.08.1-f3m2tsdrxfzvn2gthrbthev4n2wnuth5/spack-src/runtime/ba
cktrace_byt.c:47: first defined here
### Additional information
scemama
<!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. -->
### General information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ X] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ X] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [ X] I have uploaded the build log and environment files
- [ ] I have searched the issues of this repo and believe this is not a duplicate
| non_main | installation issue older versions of ocaml and heppdt should have fcommon to compile with steps to reproduce the issue spack install whizard spack install heppdt this will pull in older version of ocaml and heppdt which will break on without fcommon information on your system spack debug report spack python platform linux skylake ocaml lib spack env gcc gcc fno strict aliasing fwrapv wall fno tree vrp g d file offset bits d reentrant dcaml name space docaml stdlib dir opt spack linux skylake gcc oc aml lib ocaml wl e o ocamlruni prims o libcamlruni a lm ldl lpthread usr bin ld usr bin ld libcamlrund a backtrace bd o tmp user s pack stage spack stage ocaml spack src runtime backtrace c libcamlruni a backtrace bi o mu ltiple definition of caml debug info tmp user spack stage spac k stage ocaml spack src run time backtrace c multiple definition of caml debug info libc amlrund a backtrace byt bd o tmp user spack stage spack stage oca ml spack src runtime backtr ace byt c first defined here libcamlruni a backtrace byt bi o tmp user spack stage spack stage ocaml spack src runtime ba cktrace byt c first defined here additional information scemama and mention them here if they exist general information i have run spack debug report and reported the version of spack python platform i have run spack maintainers and mentioned any maintainers i have uploaded the build log and environment files i have searched the issues of this repo and believe this is not a duplicate | 0 |
247,500 | 26,711,685,128 | IssuesEvent | 2023-01-28 01:23:26 | panasalap/linux-4.1.15 | https://api.github.com/repos/panasalap/linux-4.1.15 | reopened | CVE-2018-20509 (Medium) detected in linuxlinux-4.1.17, linuxlinux-4.1.17 | security vulnerability | ## CVE-2018-20509 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.1.17</b>, <b>linuxlinux-4.1.17</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The print_binder_ref_olocked function in drivers/android/binder.c in the Linux kernel 4.14.90 allows local users to obtain sensitive address information by reading " ref *desc *node" lines in a debugfs file.
<p>Publish Date: 2019-04-30
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-20509>CVE-2018-20509</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20509">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20509</a></p>
<p>Release Date: 2019-04-30</p>
<p>Fix Resolution: v4.14-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-20509 (Medium) detected in linuxlinux-4.1.17, linuxlinux-4.1.17 - ## CVE-2018-20509 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.1.17</b>, <b>linuxlinux-4.1.17</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The print_binder_ref_olocked function in drivers/android/binder.c in the Linux kernel 4.14.90 allows local users to obtain sensitive address information by reading " ref *desc *node" lines in a debugfs file.
<p>Publish Date: 2019-04-30
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-20509>CVE-2018-20509</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20509">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20509</a></p>
<p>Release Date: 2019-04-30</p>
<p>Fix Resolution: v4.14-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in linuxlinux linuxlinux cve medium severity vulnerability vulnerable libraries linuxlinux linuxlinux vulnerability details the print binder ref olocked function in drivers android binder c in the linux kernel allows local users to obtain sensitive address information by reading ref desc node lines in a debugfs file publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
45,319 | 7,177,953,466 | IssuesEvent | 2018-01-31 15:11:45 | ktorio/ktor | https://api.github.com/repos/ktorio/ktor | closed | location-based routing doesn't seem to parse body into lambda param with Gson | documentation | Hi !
I've been having some issues with location-based routing. I have:
```kotlin
@Location("/login")
data class Login(val username: String = "", val password: String = "")
fun Route.login() {
post<Login> { userInfo ->
println(userInfo.username) // Prints an empty line
val test = call.receive<Login>()
println("Test: $test") // Correctly prints the data class
call.respond(Login("test", "testPass")) // works as expected
}
}
fun Application.main() {
... // feature installation
install(ContentNegotiation) {
gson {
setDateFormat(DateFormat.LONG)
setPrettyPrinting()
}
}
routing {
login()
}
}
```
But I couldn't find any way to make the lambda parameter filled w/o filling it myself by calling `receive` from a wrapper function. I thought that was kind of the point of the type arg we pass to `post`
Note that this exact code works in Thinkter demo, but the packages are quite outdated.
Any ideas ? | 1.0 | location-based routing doesn't seem to parse body into lambda param with Gson - Hi !
I've been having some issues with location-based routing. I have:
```kotlin
@Location("/login")
data class Login(val username: String = "", val password: String = "")
fun Route.login() {
post<Login> { userInfo ->
println(userInfo.username) // Prints an empty line
val test = call.receive<Login>()
println("Test: $test") // Correctly prints the data class
call.respond(Login("test", "testPass")) // works as expected
}
}
fun Application.main() {
... // feature installation
install(ContentNegotiation) {
gson {
setDateFormat(DateFormat.LONG)
setPrettyPrinting()
}
}
routing {
login()
}
}
```
But I couldn't find any way to make the lambda parameter filled w/o filling it myself by calling `receive` from a wrapper function. I thought that was kind of the point of the type arg we pass to `post`
Note that this exact code works in Thinkter demo, but the packages are quite outdated.
Any ideas ? | non_main | location based routing doesn t seem to parse body into lambda param with gson hi i ve been having some issues with location based routing i have kotlin location login data class login val username string val password string fun route login post userinfo println userinfo username prints an empty line val test call receive println test test correctly prints the data class call respond login test testpass works as expected fun application main feature installation install contentnegotiation gson setdateformat dateformat long setprettyprinting routing login but i couldn t find any way to make the lambda parameter filled w o filling it myself by calling receive from a wrapper function i thought that was kind of the point of the type arg we pass to post note that this exact code works in thinkter demo but the packages are quite outdated any ideas | 0 |
2,603 | 8,838,314,748 | IssuesEvent | 2019-01-05 15:57:40 | nicwaller/yourls-authmgr-plugin | https://api.github.com/repos/nicwaller/yourls-authmgr-plugin | closed | Can detailed action hooks be added to YOURLS core? | maintainability | Most actions that we might want to intercept (eg. add/delete) are handled by YOURLS in admin.php, and it doesn't always send notifications that an action is about to occur.
Ideally, the authmgr would register hooks for each individual action in YOURLS that needs to be controlled. But since that's not possible, it needs to intercept loading of the admin page and run some custom logic to determine what action is happening. This is prone to break in the future as YOURLS internal behaviour changes.
| True | Can detailed action hooks be added to YOURLS core? - Most actions that we might want to intercept (eg. add/delete) are handled by YOURLS in admin.php, and it doesn't always send notifications that an action is about to occur.
Ideally, the authmgr would register hooks for each individual action in YOURLS that needs to be controlled. But since that's not possible, it needs to intercept loading of the admin page and run some custom logic to determine what action is happening. This is prone to break in the future as YOURLS internal behaviour changes.
| main | can detailed action hooks be added to yourls core most actions that we might want to intercept eg add delete are handled by yourls in admin php and it doesn t always send notifications that an action is about to occur ideally the authmgr would register hooks for each individual action in yourls that needs to be controlled but since that s not possible it needs to intercept loading of the admin page and run some custom logic to determine what action is happening this is prone to break in the future as yourls internal behaviour changes | 1 |
477,261 | 13,758,687,718 | IssuesEvent | 2020-10-07 00:46:16 | canonical-web-and-design/maas-ui | https://api.github.com/repos/canonical-web-and-design/maas-ui | closed | Proptypes error in ScriptsUpload.test.js | Bug 🐛 Priority: Low | **Describe the bug**
Proptypes error when running tests in ScriptsUpload.test.js
```
PASS src/app/settings/views/Scripts/ScriptsUpload/ScriptsUpload.test.js
● Console
console.error ../node_modules/prop-types/checkPropTypes.js:20
Warning: Failed prop type: The prop `submitLabel` is marked as required in `FormCardButtons`, but its value is `undefined`.
in FormCardButtons (at ScriptsUpload.js:150)
in ScriptsUpload (at ScriptsUpload.test.js:49)
in Router (created by MemoryRouter)
in MemoryRouter (at ScriptsUpload.test.js:48)
in Provider (created by WrapperComponent)
in WrapperComponent
console.error ../node_modules/prop-types/checkPropTypes.js:20
Warning: Failed prop type: The prop `children` is marked as required in `ActionButton`, but its value is `undefined`.
in ActionButton (at FormCardButtons.js:56)
in FormCardButtons (at ScriptsUpload.js:150)
in form (created by Form)
in Form (at ScriptsUpload.js:127)
in div (created by Row)
in Row (at ScriptsUpload.js:126)
in div (created by Card)
in div (created by Card)
in Card (at FormCard.js:19)
in FormCard (at ScriptsUpload.js:105)
in ScriptsUpload (at ScriptsUpload.test.js:49)
in Router (created by MemoryRouter)
in MemoryRouter (at ScriptsUpload.test.js:48)
in Provider (created by WrapperComponent)
in WrapperComponent
```
**MAAS version**
master
**To Reproduce**
`yarn test ScriptsUpload.test.js`. | 1.0 | Proptypes error in ScriptsUpload.test.js - **Describe the bug**
Proptypes error when running tests in ScriptsUpload.test.js
```
PASS src/app/settings/views/Scripts/ScriptsUpload/ScriptsUpload.test.js
● Console
console.error ../node_modules/prop-types/checkPropTypes.js:20
Warning: Failed prop type: The prop `submitLabel` is marked as required in `FormCardButtons`, but its value is `undefined`.
in FormCardButtons (at ScriptsUpload.js:150)
in ScriptsUpload (at ScriptsUpload.test.js:49)
in Router (created by MemoryRouter)
in MemoryRouter (at ScriptsUpload.test.js:48)
in Provider (created by WrapperComponent)
in WrapperComponent
console.error ../node_modules/prop-types/checkPropTypes.js:20
Warning: Failed prop type: The prop `children` is marked as required in `ActionButton`, but its value is `undefined`.
in ActionButton (at FormCardButtons.js:56)
in FormCardButtons (at ScriptsUpload.js:150)
in form (created by Form)
in Form (at ScriptsUpload.js:127)
in div (created by Row)
in Row (at ScriptsUpload.js:126)
in div (created by Card)
in div (created by Card)
in Card (at FormCard.js:19)
in FormCard (at ScriptsUpload.js:105)
in ScriptsUpload (at ScriptsUpload.test.js:49)
in Router (created by MemoryRouter)
in MemoryRouter (at ScriptsUpload.test.js:48)
in Provider (created by WrapperComponent)
in WrapperComponent
```
**MAAS version**
master
**To Reproduce**
`yarn test ScriptsUpload.test.js`. | non_main | proptypes error in scriptsupload test js describe the bug proptypes error when running tests in scriptsupload test js pass src app settings views scripts scriptsupload scriptsupload test js ● console console error node modules prop types checkproptypes js warning failed prop type the prop submitlabel is marked as required in formcardbuttons but its value is undefined in formcardbuttons at scriptsupload js in scriptsupload at scriptsupload test js in router created by memoryrouter in memoryrouter at scriptsupload test js in provider created by wrappercomponent in wrappercomponent console error node modules prop types checkproptypes js warning failed prop type the prop children is marked as required in actionbutton but its value is undefined in actionbutton at formcardbuttons js in formcardbuttons at scriptsupload js in form created by form in form at scriptsupload js in div created by row in row at scriptsupload js in div created by card in div created by card in card at formcard js in formcard at scriptsupload js in scriptsupload at scriptsupload test js in router created by memoryrouter in memoryrouter at scriptsupload test js in provider created by wrappercomponent in wrappercomponent maas version master to reproduce yarn test scriptsupload test js | 0 |
157,669 | 24,706,542,110 | IssuesEvent | 2022-10-19 19:35:50 | dotnet/efcore | https://api.github.com/repos/dotnet/efcore | closed | Bulk updates/deletes: Support in-memory provider | closed-by-design customer-reported | For unit testing it would be great if the in-memory provider could also support bulk updates and deletes from #795, as this simplifies testing greatly.
Currently it fails with a rather obscure error message:
```
System.InvalidOperationException : The LINQ expression 'DbSet<Source>()
.Where(s => s.CopiedFromId == __source_Id_0)
.Select(s => IncludeExpression(
EntityExpression:
IncludeExpression(
EntityExpression:
IncludeExpression(
EntityExpression:
s,
NavigationExpression:
EF.Property<UserMetadata>(s, "CreatedBy"), CreatedBy)
,
NavigationExpression:
EF.Property<UserMetadata>(s, "LastModifiedBy"), LastModifiedBy)
,
NavigationExpression:
EF.Property<OwnershipMetadata>(s, "OwnedBy"), OwnedBy)
)
.ExecuteUpdate(action => action.SetProperty<int?>(
propertyExpression: s => s.CopiedFromId,
valueExpression: _ => null))' could not be translated. Either rewrite the query in a form that can be translated, or switch to client evaluation explicitly by inserting a call to 'AsEnumerable', 'AsAsyncEnumerable', 'ToList', or 'ToListAsync'. See https://go.microsoft.com/fwlink/?linkid=2101038 for more information.
at Microsoft.EntityFrameworkCore.Query.QueryableMethodTranslatingExpressionVisitor.VisitMethodCall(MethodCallExpression methodCallExpression)
```
Note that this owned type is also included in the expression tree apparently, which makes it even more difficult to understand.
| 1.0 | Bulk updates/deletes: Support in-memory provider - For unit testing it would be great if the in-memory provider could also support bulk updates and deletes from #795, as this simplifies testing greatly.
Currently it fails with a rather obscure error message:
```
System.InvalidOperationException : The LINQ expression 'DbSet<Source>()
.Where(s => s.CopiedFromId == __source_Id_0)
.Select(s => IncludeExpression(
EntityExpression:
IncludeExpression(
EntityExpression:
IncludeExpression(
EntityExpression:
s,
NavigationExpression:
EF.Property<UserMetadata>(s, "CreatedBy"), CreatedBy)
,
NavigationExpression:
EF.Property<UserMetadata>(s, "LastModifiedBy"), LastModifiedBy)
,
NavigationExpression:
EF.Property<OwnershipMetadata>(s, "OwnedBy"), OwnedBy)
)
.ExecuteUpdate(action => action.SetProperty<int?>(
propertyExpression: s => s.CopiedFromId,
valueExpression: _ => null))' could not be translated. Either rewrite the query in a form that can be translated, or switch to client evaluation explicitly by inserting a call to 'AsEnumerable', 'AsAsyncEnumerable', 'ToList', or 'ToListAsync'. See https://go.microsoft.com/fwlink/?linkid=2101038 for more information.
at Microsoft.EntityFrameworkCore.Query.QueryableMethodTranslatingExpressionVisitor.VisitMethodCall(MethodCallExpression methodCallExpression)
```
Note that this owned type is also included in the expression tree apparently, which makes it even more difficult to understand.
| non_main | bulk updates deletes support in memory provider for unit testing it would be great if the in memory provider could also support bulk updates and deletes from as this simplifies testing greatly currently it fails with a rather obscure error message system invalidoperationexception the linq expression dbset where s s copiedfromid source id select s includeexpression entityexpression includeexpression entityexpression includeexpression entityexpression s navigationexpression ef property s createdby createdby navigationexpression ef property s lastmodifiedby lastmodifiedby navigationexpression ef property s ownedby ownedby executeupdate action action setproperty propertyexpression s s copiedfromid valueexpression null could not be translated either rewrite the query in a form that can be translated or switch to client evaluation explicitly by inserting a call to asenumerable asasyncenumerable tolist or tolistasync see for more information at microsoft entityframeworkcore query queryablemethodtranslatingexpressionvisitor visitmethodcall methodcallexpression methodcallexpression note that this owned type is also included in the expression tree apparently which makes it even more difficult to understand | 0 |
4,726 | 24,393,000,501 | IssuesEvent | 2022-10-04 16:41:24 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Bug]: Search - clear button does not call `onChange` with proper event | severity: 3 type: bug 🐛 status: waiting for maintainer response 💬 | ### Package
@carbon/react
### Browser
Chrome
### Package version
1.12.0
### React version
18.2.0
### Description
When clicking the close button after inputting text and clicking outside the component to trigger a blur/defocus event, the component is not collapsed even though the text input is empty.
In the Search component, when the Close button is clicked, `clearInput` is called. The event passed to the handler is for the button click.
When the Search component's `value` prop is undefined/null, then the `onChange` prop is called with the button click event.
When the Search component's `value` prop is defined, then the `onChange` prop is called with a modified event:
```js
{
...event.target,
target: { value: '' },
}
```
This event is not constructed properly and should either be a simple mock event:
```js
{
target: { value: '' }
}
```
or overwriting the value:
```js
{
...event,
target: {
...event.target,
value: '',
},
}
```
### Reproduction/example
https://react.carbondesignsystem.com/?path=/story/components-search--expandable
### Steps to reproduce
https://react.carbondesignsystem.com/?path=/story/components-search--expandable
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [Bug]: Search - clear button does not call `onChange` with proper event - ### Package
@carbon/react
### Browser
Chrome
### Package version
1.12.0
### React version
18.2.0
### Description
When clicking the close button after inputting text and clicking outside the component to trigger a blur/defocus event, the component is not collapsed even though the text input is empty.
In the Search component, when the Close button is clicked, `clearInput` is called. The event passed to the handler is for the button click.
When the Search component's `value` prop is undefined/null, then the `onChange` prop is called with the button click event.
When the Search component's `value` prop is defined, then the `onChange` prop is called with a modified event:
```js
{
...event.target,
target: { value: '' },
}
```
This event is not constructed properly and should either be a simple mock event:
```js
{
target: { value: '' }
}
```
or overwriting the value:
```js
{
...event,
target: {
...event.target,
value: '',
},
}
```
### Reproduction/example
https://react.carbondesignsystem.com/?path=/story/components-search--expandable
### Steps to reproduce
https://react.carbondesignsystem.com/?path=/story/components-search--expandable
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | search clear button does not call onchange with proper event package carbon react browser chrome package version react version description when clicking the close button after inputting text and clicking outside the component to trigger a blur defocus event the component is not collapsed even though the text input is empty in the search component when the close button is clicked clearinput is called the event passed to the handler is for the button click when the search component s value prop is undefined null then the onchange prop is called with the button click event when the search component s value prop is defined then the onchange prop is called with a modified event js event target target value this event is not constructed properly and should either be a simple mock event js target value or overwriting the value js event target event target value reproduction example steps to reproduce code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
2,133 | 7,307,028,524 | IssuesEvent | 2018-02-28 00:37:08 | AndrewJGregory/Procify | https://api.github.com/repos/AndrewJGregory/Procify | closed | SCSS color codes | maintainability | Throughout the stylesheets there are raw references to hexadecimal colors. This is not readable nor DRY. These colors should be refactored into a colors.scss file with appropriate names. | True | SCSS color codes - Throughout the stylesheets there are raw references to hexadecimal colors. This is not readable nor DRY. These colors should be refactored into a colors.scss file with appropriate names. | main | scss color codes throughout the stylesheets there are raw references to hexadecimal colors this is not readable nor dry these colors should be refactored into a colors scss file with appropriate names | 1 |
189,204 | 15,180,631,212 | IssuesEvent | 2021-02-15 00:40:39 | dankamongmen/notcurses | https://api.github.com/repos/dankamongmen/notcurses | closed | ncplayer ought use direct mode when invoked with -k | documentation enhancement perf | A surprising number of people seem to be using `ncplayer` as a one-shot image display tool (`ncls` is probably closer to what they want, but that's not obvious). And indeed, I can see this particular use case becoming a benchmark. So let's improve `ncplayer` for this case.
I'd think the best current invocation to be `ncplayer -q -k -t0 file`. this doesn't print the frame number/time, exits immediately, and doesn't use the alternate screen.
`ncls file` takes consistently about 3/4 the time of `ncplayer -q -k -t0 file`:
```
[schwarzgerat](1) $ ( for i in `seq 0 3 ` ; do time ./ncls ../data/worldmap.png ; done ) | grep real
real 0m0.053s
user 0m0.050s
sys 0m0.005s
real 0m0.050s
user 0m0.036s
sys 0m0.017s
real 0m0.051s
user 0m0.031s
sys 0m0.021s
real 0m0.048s
user 0m0.036s
sys 0m0.013s
[schwarzgerat](1) $
```
vs
```
[schwarzgerat](1) $ ( for i in `seq 0 3 ` ; do time ./ncplayer -q -t0 -k ../data/worldmap.png 2> /dev/null ; done ) | grep real
real 0m0.079s
user 0m0.032s
sys 0m0.016s
real 0m0.081s
user 0m0.033s
sys 0m0.016s
real 0m0.081s
user 0m0.037s
sys 0m0.012s
real 0m0.079s
user 0m0.039s
sys 0m0.008s
[schwarzgerat](1) $
```
also, `ncplayer` really, really wants to print banners, which surely don't speed up anything. | 1.0 | ncplayer ought use direct mode when invoked with -k - A surprising number of people seem to be using `ncplayer` as a one-shot image display tool (`ncls` is probably closer to what they want, but that's not obvious). And indeed, I can see this particular use case becoming a benchmark. So let's improve `ncplayer` for this case.
I'd think the best current invocation to be `ncplayer -q -k -t0 file`. this doesn't print the frame number/time, exits immediately, and doesn't use the alternate screen.
`ncls file` takes consistently about 3/4 the time of `ncplayer -q -k -t0 file`:
```
[schwarzgerat](1) $ ( for i in `seq 0 3 ` ; do time ./ncls ../data/worldmap.png ; done ) | grep real
real 0m0.053s
user 0m0.050s
sys 0m0.005s
real 0m0.050s
user 0m0.036s
sys 0m0.017s
real 0m0.051s
user 0m0.031s
sys 0m0.021s
real 0m0.048s
user 0m0.036s
sys 0m0.013s
[schwarzgerat](1) $
```
vs
```
[schwarzgerat](1) $ ( for i in `seq 0 3 ` ; do time ./ncplayer -q -t0 -k ../data/worldmap.png 2> /dev/null ; done ) | grep real
real 0m0.079s
user 0m0.032s
sys 0m0.016s
real 0m0.081s
user 0m0.033s
sys 0m0.016s
real 0m0.081s
user 0m0.037s
sys 0m0.012s
real 0m0.079s
user 0m0.039s
sys 0m0.008s
[schwarzgerat](1) $
```
also, `ncplayer` really, really wants to print banners, which surely don't speed up anything. | non_main | ncplayer ought use direct mode when invoked with k a surprising number of people seem to be using ncplayer as a one shot image display tool ncls is probably closer to what they want but that s not obvious and indeed i can see this particular use case becoming a benchmark so let s improve ncplayer for this case i d think the best current invocation to be ncplayer q k file this doesn t print the frame number time exits immediately and doesn t use the alternate screen ncls file takes consistently about the time of ncplayer q k file for i in seq do time ncls data worldmap png done grep real real user sys real user sys real user sys real user sys vs for i in seq do time ncplayer q k data worldmap png dev null done grep real real user sys real user sys real user sys real user sys also ncplayer really really wants to print banners which surely don t speed up anything | 0 |
12,370 | 3,603,447,390 | IssuesEvent | 2016-02-03 19:05:41 | sys-bio/roadrunner | https://api.github.com/repos/sys-bio/roadrunner | closed | Test simulate | Documentation Error priority URGENT | Provide a comprehensive test suite for simulate. Make sure the documentation is consistent with the call itself. | 1.0 | Test simulate - Provide a comprehensive test suite for simulate. Make sure the documentation is consistent with the call itself. | non_main | test simulate provide a comprehensive test suite for simulate make sure the documentation is consistent with the call itself | 0 |
43,227 | 5,613,327,161 | IssuesEvent | 2017-04-03 08:57:12 | SAP/techne | https://api.github.com/repos/SAP/techne | opened | Design main Layout animation | design library Website | Goal: Inform the main layout hierarchy via animation.
Elements to animate:
**Bar at the top**: linear moving animation during 0.5s; sliding down as a whole element from the top of the browser window
**Bar at the side**: delayed 0.5s, linear moving animation during 0.5s; sliding from the left of the browser window
**Content area**: delayed 0.5s, sequenced linear fade-in animation; Sequenced delays and timing as per the line 65 of the following framer prototype
`````
Framer.Extras.Preloader.enable()
maxWidth = Framer.Device.screen.width
maxHeight = Framer.Device.screen.height
bg = new Layer
backgroundColor: "#fff"
width: maxWidth
height: maxHeight
start = new TextLayer
text: "START"
fontSize: 20
y: maxHeight - 150
x: maxWidth - 150
z: 100
reset = new TextLayer
text: "RESET"
fontSize: 20
y: maxHeight - 100
x: maxWidth - 150
z: 100
Main_Nav = new Layer
width: Framer.Device.screen.width
height: 50
backgroundColor: '#0486E0'
y: -50
z: 1
Main_Nav.states.hide =
y: -50
Main_Nav.states.intro =
y: 0
sideNavWidth = Framer.Device.screen.width * (0.61 / 2.5)
Side_Nav = new Layer
y: 50
width: sideNavWidth
height: Framer.Device.screen.height
backgroundColor: '#003459'
z: 0
x: sideNavWidth * -1
Side_Nav.states.hide =
x: sideNavWidth * -1
Side_Nav.states.intro =
x: 0
sectionHeight = (maxHeight * 0.61 / 2)
maxSections = maxHeight / sectionHeight
Sections = []
for i in [0..maxSections]
Sections[i] = new Layer
width: maxWidth - sideNavWidth
height: sectionHeight
y: 50 + i * sectionHeight
x: sideNavWidth
opacity: 0
backgroundColor: Color.random().lighten(20).desaturate(100)
animationOptions:
delay: 1 + (i/2 * 0.2)
time: 0.5
start.onClick ->
Main_Nav.animate "intro",
time: 0.5
Side_Nav.animate "intro",
time: 0.5
delay: 0.5
for i in [0..maxSections]
Sections[i].animate
y: 50 + i * sectionHeight
opacity: 1
reset.onClick () ->
Main_Nav.stateSwitch "hide"
Side_Nav.stateSwitch "hide"
for i in [0..maxSections]
Sections[i].animate
opacity: 0
options:
delay: 0
time: 0
`````
All animation is in the attached video
[main-layout-animation.mov.zip](https://github.com/SAP/techne/files/889462/main-layout-animation.mov.zip)
| 1.0 | Design main Layout animation - Goal: Inform the main layout hierarchy via animation.
Elements to animate:
**Bar at the top**: linear moving animation during 0.5s; sliding down as a whole element from the top of the browser window
**Bar at the side**: delayed 0.5s, linear moving animation during 0.5s; sliding from the left of the browser window
**Content area**: delayed 0.5s, sequenced linear fade-in animation; Sequenced delays and timing as per the line 65 of the following framer prototype
`````
Framer.Extras.Preloader.enable()
maxWidth = Framer.Device.screen.width
maxHeight = Framer.Device.screen.height
bg = new Layer
backgroundColor: "#fff"
width: maxWidth
height: maxHeight
start = new TextLayer
text: "START"
fontSize: 20
y: maxHeight - 150
x: maxWidth - 150
z: 100
reset = new TextLayer
text: "RESET"
fontSize: 20
y: maxHeight - 100
x: maxWidth - 150
z: 100
Main_Nav = new Layer
width: Framer.Device.screen.width
height: 50
backgroundColor: '#0486E0'
y: -50
z: 1
Main_Nav.states.hide =
y: -50
Main_Nav.states.intro =
y: 0
sideNavWidth = Framer.Device.screen.width * (0.61 / 2.5)
Side_Nav = new Layer
y: 50
width: sideNavWidth
height: Framer.Device.screen.height
backgroundColor: '#003459'
z: 0
x: sideNavWidth * -1
Side_Nav.states.hide =
x: sideNavWidth * -1
Side_Nav.states.intro =
x: 0
sectionHeight = (maxHeight * 0.61 / 2)
maxSections = maxHeight / sectionHeight
Sections = []
for i in [0..maxSections]
Sections[i] = new Layer
width: maxWidth - sideNavWidth
height: sectionHeight
y: 50 + i * sectionHeight
x: sideNavWidth
opacity: 0
backgroundColor: Color.random().lighten(20).desaturate(100)
animationOptions:
delay: 1 + (i/2 * 0.2)
time: 0.5
start.onClick ->
Main_Nav.animate "intro",
time: 0.5
Side_Nav.animate "intro",
time: 0.5
delay: 0.5
for i in [0..maxSections]
Sections[i].animate
y: 50 + i * sectionHeight
opacity: 1
reset.onClick () ->
Main_Nav.stateSwitch "hide"
Side_Nav.stateSwitch "hide"
for i in [0..maxSections]
Sections[i].animate
opacity: 0
options:
delay: 0
time: 0
`````
All animation is in the attached video
[main-layout-animation.mov.zip](https://github.com/SAP/techne/files/889462/main-layout-animation.mov.zip)
| non_main | design main layout animation goal inform the main layout hierarchy via animation elements to animate bar at the top linear moving animation during sliding down as a whole element from the top of the browser window bar at the side delayed linear moving animation during sliding from the left of the browser window content area delayed sequenced linear fade in animation sequenced delays and timing as per the line of the following framer prototype framer extras preloader enable maxwidth framer device screen width maxheight framer device screen height bg new layer backgroundcolor fff width maxwidth height maxheight start new textlayer text start fontsize y maxheight x maxwidth z reset new textlayer text reset fontsize y maxheight x maxwidth z main nav new layer width framer device screen width height backgroundcolor y z main nav states hide y main nav states intro y sidenavwidth framer device screen width side nav new layer y width sidenavwidth height framer device screen height backgroundcolor z x sidenavwidth side nav states hide x sidenavwidth side nav states intro x sectionheight maxheight maxsections maxheight sectionheight sections for i in sections new layer width maxwidth sidenavwidth height sectionheight y i sectionheight x sidenavwidth opacity backgroundcolor color random lighten desaturate animationoptions delay i time start onclick main nav animate intro time side nav animate intro time delay for i in sections animate y i sectionheight opacity reset onclick main nav stateswitch hide side nav stateswitch hide for i in sections animate opacity options delay time all animation is in the attached video | 0 |
218,833 | 16,772,215,131 | IssuesEvent | 2021-06-14 16:03:14 | BetterThanTomorrow/calva | https://api.github.com/repos/BetterThanTomorrow/calva | closed | Add info about clojure-lsp version setting in documentation | documentation | Made this to not forget, since I'm currently too busy to add it quickly. | 1.0 | Add info about clojure-lsp version setting in documentation - Made this to not forget, since I'm currently too busy to add it quickly. | non_main | add info about clojure lsp version setting in documentation made this to not forget since i m currently too busy to add it quickly | 0 |
31,788 | 5,997,029,064 | IssuesEvent | 2017-06-03 19:45:48 | dealii/dealii | https://api.github.com/repos/dealii/dealii | closed | Fix some nonrelative links in the documentation | Documentation | There are a few places in the documentation where we link to a page on `dealii.org` instead of internally in our own documentation. For example, in step-17:
```
target="_top">METIS</a> to partition meshes. The installation of deal.II
together with these two additional libraries is described in the <a
href="https://www.dealii.org/developer/readme.html" target="body">README</a> file.
```
we should use a relative link here so that things work with offline documentation.
Here is an exhaustive superset (not all of these need to be fixed) list of such links:
```
[drwells@archway deal-ii]$ pwd
/usr/share/doc/deal-ii
[drwells@archway deal-ii]$ ack -l 'www.dealii.org'
users/doxygen.html
readme.html
developers/porting.html
documentation.html
doxygen/deal.II/subscriptor_8h_source.html
doxygen/deal.II/changes_between_3_3_and_3_4.html
doxygen/deal.II/grid__in_8cc_source.html
doxygen/deal.II/menudata.js
doxygen/deal.II/step-45_8h_source.html
doxygen/deal.II/classFE__Q__Hierarchical.html
doxygen/deal.II/changes_between_2_0_and_3_0.html
doxygen/deal.II/changes_between_6_2_and_6_3.html
doxygen/deal.II/classFE__Q.html
doxygen/deal.II/changes_between_3_4_and_4_0.html
doxygen/deal.II/DEALGlossary.html
doxygen/deal.II/step_34.html
doxygen/deal.II/classFE__DGPNonparametric.html
doxygen/deal.II/index.html
doxygen/deal.II/group__Exceptions.html
doxygen/deal.II/changes_between_6_1_and_6_2.html
doxygen/deal.II/step_18.html
doxygen/deal.II/classFE__DGP.html
doxygen/deal.II/step_17.html
doxygen/deal.II/step_1.html
doxygen/deal.II/step_53.html
doxygen/deal.II/step_4.html
doxygen/deal.II/step_6.html
doxygen/deal.II/step_36.html
doxygen/deal.II/classFE__DGPMonomial.html
navbar.html
``` | 1.0 | Fix some nonrelative links in the documentation - There are a few places in the documentation where we link to a page on `dealii.org` instead of internally in our own documentation. For example, in step-17:
```
target="_top">METIS</a> to partition meshes. The installation of deal.II
together with these two additional libraries is described in the <a
href="https://www.dealii.org/developer/readme.html" target="body">README</a> file.
```
we should use a relative link here so that things work with offline documentation.
Here is an exhaustive superset (not all of these need to be fixed) list of such links:
```
[drwells@archway deal-ii]$ pwd
/usr/share/doc/deal-ii
[drwells@archway deal-ii]$ ack -l 'www.dealii.org'
users/doxygen.html
readme.html
developers/porting.html
documentation.html
doxygen/deal.II/subscriptor_8h_source.html
doxygen/deal.II/changes_between_3_3_and_3_4.html
doxygen/deal.II/grid__in_8cc_source.html
doxygen/deal.II/menudata.js
doxygen/deal.II/step-45_8h_source.html
doxygen/deal.II/classFE__Q__Hierarchical.html
doxygen/deal.II/changes_between_2_0_and_3_0.html
doxygen/deal.II/changes_between_6_2_and_6_3.html
doxygen/deal.II/classFE__Q.html
doxygen/deal.II/changes_between_3_4_and_4_0.html
doxygen/deal.II/DEALGlossary.html
doxygen/deal.II/step_34.html
doxygen/deal.II/classFE__DGPNonparametric.html
doxygen/deal.II/index.html
doxygen/deal.II/group__Exceptions.html
doxygen/deal.II/changes_between_6_1_and_6_2.html
doxygen/deal.II/step_18.html
doxygen/deal.II/classFE__DGP.html
doxygen/deal.II/step_17.html
doxygen/deal.II/step_1.html
doxygen/deal.II/step_53.html
doxygen/deal.II/step_4.html
doxygen/deal.II/step_6.html
doxygen/deal.II/step_36.html
doxygen/deal.II/classFE__DGPMonomial.html
navbar.html
``` | non_main | fix some nonrelative links in the documentation there are a few places in the documentation where we link to a page on dealii org instead of internally in our own documentation for example in step target top metis to partition meshes the installation of deal ii together with these two additional libraries is described in the a href target body readme file we should use a relative link here so that things work with offline documentation here is an exhaustive superset not all of these need to be fixed list of such links pwd usr share doc deal ii ack l users doxygen html readme html developers porting html documentation html doxygen deal ii subscriptor source html doxygen deal ii changes between and html doxygen deal ii grid in source html doxygen deal ii menudata js doxygen deal ii step source html doxygen deal ii classfe q hierarchical html doxygen deal ii changes between and html doxygen deal ii changes between and html doxygen deal ii classfe q html doxygen deal ii changes between and html doxygen deal ii dealglossary html doxygen deal ii step html doxygen deal ii classfe dgpnonparametric html doxygen deal ii index html doxygen deal ii group exceptions html doxygen deal ii changes between and html doxygen deal ii step html doxygen deal ii classfe dgp html doxygen deal ii step html doxygen deal ii step html doxygen deal ii step html doxygen deal ii step html doxygen deal ii step html doxygen deal ii step html doxygen deal ii classfe dgpmonomial html navbar html | 0 |
2,984 | 10,774,910,062 | IssuesEvent | 2019-11-03 10:25:51 | vostpt/mobile-app | https://api.github.com/repos/vostpt/mobile-app | closed | Onboarding Screen | Needs Maintainers Help | **Description**
Create tutorial for the app
**File Location**
```
- presentation
|__ ui
```
**Requirements**
- When swipping on the screen we go to the next (swipe right) or left (swipe left) screen
- There is one screen with an explanation about the location permission for the app. That screen will have an ""Accept"" and ""Refuse"" buttons
- Accept will show the OS's permission dialog to accept the permission
- Refuse will show a pop-up screen warning the user that some features of the app will not work properly, such as the map
**UI**

| True | Onboarding Screen - **Description**
Create tutorial for the app
**File Location**
```
- presentation
|__ ui
```
**Requirements**
- When swipping on the screen we go to the next (swipe right) or left (swipe left) screen
- There is one screen with an explanation about the location permission for the app. That screen will have an ""Accept"" and ""Refuse"" buttons
- Accept will show the OS's permission dialog to accept the permission
- Refuse will show a pop-up screen warning the user that some features of the app will not work properly, such as the map
**UI**

| main | onboarding screen description create tutorial for the app file location presentation ui requirements when swipping on the screen we go to the next swipe right or left swipe left screen there is one screen with an explanation about the location permission for the app that screen will have an accept and refuse buttons accept will show the os s permission dialog to accept the permission refuse will show a pop up screen warning the user that some features of the app will not work properly such as the map ui | 1 |
65,295 | 8,797,285,323 | IssuesEvent | 2018-12-23 17:41:43 | hugoShaka/ansible-mailserver | https://api.github.com/repos/hugoShaka/ansible-mailserver | closed | Write definition of done, PR template | documentation | We could improve the PR quality by providing a simple template with checks to do.
Checks may include:
- documentation
- CI
- tests / lint
- code quality
| 1.0 | Write definition of done, PR template - We could improve the PR quality by providing a simple template with checks to do.
Checks may include:
- documentation
- CI
- tests / lint
- code quality
| non_main | write definition of done pr template we could improve the pr quality by providing a simple template with checks to do checks may include documentation ci tests lint code quality | 0 |
100,635 | 16,490,122,864 | IssuesEvent | 2021-05-25 01:39:03 | rgordon95/advanced-react-redux-demo | https://api.github.com/repos/rgordon95/advanced-react-redux-demo | opened | CVE-2021-23383 (High) detected in handlebars-4.1.1.tgz | security vulnerability | ## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.1.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz</a></p>
<p>Path to dependency file: /advanced-react-redux-demo/package.json</p>
<p>Path to vulnerable library: advanced-react-redux-demo/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jest-24.5.0.tgz (Root Library)
- jest-cli-24.5.0.tgz
- core-24.5.0.tgz
- reporters-24.5.0.tgz
- istanbul-api-2.1.1.tgz
- istanbul-reports-2.1.1.tgz
- :x: **handlebars-4.1.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: handlebars - v4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23383 (High) detected in handlebars-4.1.1.tgz - ## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.1.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz</a></p>
<p>Path to dependency file: /advanced-react-redux-demo/package.json</p>
<p>Path to vulnerable library: advanced-react-redux-demo/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jest-24.5.0.tgz (Root Library)
- jest-cli-24.5.0.tgz
- core-24.5.0.tgz
- reporters-24.5.0.tgz
- istanbul-api-2.1.1.tgz
- istanbul-reports-2.1.1.tgz
- :x: **handlebars-4.1.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: handlebars - v4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file advanced react redux demo package json path to vulnerable library advanced react redux demo node modules handlebars package json dependency hierarchy jest tgz root library jest cli tgz core tgz reporters tgz istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library vulnerability details the package handlebars before are vulnerable to prototype pollution when selecting certain compiling options to compile templates coming from an untrusted source publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource | 0 |
947 | 4,681,890,443 | IssuesEvent | 2016-10-09 00:55:10 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Bug Report: Fetch fails if ansible_ssh_host is localhost | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
fetch
##### ANSIBLE VERSION
2.1.0.0
##### CONFIGURATION
These are the most relevant config items though I don't know that there is a correlation:
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When ansible_ssh_host is set to localhost, fetch says it succeeds, but it never gets the file. It's important that localhost works the same as every other host value for testing purposes.
##### STEPS TO REPRODUCE
Use the play below with ansible_ssh_host set to localhost
- hosts: '{{ hosts }}'
gather_facts: False
tasks:
- fetch:
src: /tmp/remote_file
dest: /tmp/local_file
flat: true
fail_on_missing: true
##### EXPECTED RESULTS
I would expect the behavior to be the same for localhost as it is for every other host.
##### ACTUAL RESULTS
Fetch says it succeeds, but verbose output actually shows this error (doesn't matter if the file exists or not).
ok: [localhost] => {"changed": false, "file": "/tmp/remote_file", "invocation": {"module_args": {"dest": "/tmp/local_file", "fail_on_missing": true, "flat": true, "src": "/tmp/remote_file"}, "module_name": "fetch"}, "msg": "unable to calculate the checksum of the remote file"}
| True | Bug Report: Fetch fails if ansible_ssh_host is localhost - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
fetch
##### ANSIBLE VERSION
2.1.0.0
##### CONFIGURATION
These are the most relevant config items though I don't know that there is a correlation:
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When ansible_ssh_host is set to localhost, fetch says it succeeds, but it never gets the file. It's important that localhost works the same as every other host value for testing purposes.
##### STEPS TO REPRODUCE
Use the play below with ansible_ssh_host set to localhost
- hosts: '{{ hosts }}'
gather_facts: False
tasks:
- fetch:
src: /tmp/remote_file
dest: /tmp/local_file
flat: true
fail_on_missing: true
##### EXPECTED RESULTS
I would expect the behavior to be the same for localhost as it is for every other host.
##### ACTUAL RESULTS
Fetch says it succeeds, but verbose output actually shows this error (doesn't matter if the file exists or not).
ok: [localhost] => {"changed": false, "file": "/tmp/remote_file", "invocation": {"module_args": {"dest": "/tmp/local_file", "fail_on_missing": true, "flat": true, "src": "/tmp/remote_file"}, "module_name": "fetch"}, "msg": "unable to calculate the checksum of the remote file"}
| main | bug report fetch fails if ansible ssh host is localhost issue type bug report component name fetch ansible version configuration these are the most relevant config items though i don t know that there is a correlation ssh args o controlmaster auto o controlpersist os environment n a summary when ansible ssh host is set to localhost fetch says it succeeds but it never gets the file it s important that localhost works the same as every other host value for testing purposes steps to reproduce use the play below with ansible ssh host set to localhost hosts hosts gather facts false tasks fetch src tmp remote file dest tmp local file flat true fail on missing true expected results i would expect the behavior to be the same for localhost as it is for every other host actual results fetch says it succeeds but verbose output actually shows this error doesn t matter if the file exists or not ok changed false file tmp remote file invocation module args dest tmp local file fail on missing true flat true src tmp remote file module name fetch msg unable to calculate the checksum of the remote file | 1 |
162,278 | 13,885,515,053 | IssuesEvent | 2020-10-18 20:20:27 | okezieobi/server-my-diary-demo | https://api.github.com/repos/okezieobi/server-my-diary-demo | closed | Authenticated user can get all associated entries | documentation enhancement | An authenticated user can send a request to an endpoint of the app to get all authenticated entries | 1.0 | Authenticated user can get all associated entries - An authenticated user can send a request to an endpoint of the app to get all authenticated entries | non_main | authenticated user can get all associated entries an authenticated user can send a request to an endpoint of the app to get all authenticated entries | 0 |
659 | 2,551,808,549 | IssuesEvent | 2015-02-02 12:57:10 | haoxun/clidoc | https://api.github.com/repos/haoxun/clidoc | opened | Remove Cpp11 dependent libs. | to code | The goal:
* AST --> C++ code gen --> a header file + an implementation file.
* remove dependencies of flex generated scanner and bison/xpressive. | 1.0 | Remove Cpp11 dependent libs. - The goal:
* AST --> C++ code gen --> a header file + an implementation file.
* remove dependencies of flex generated scanner and bison/xpressive. | non_main | remove dependent libs the goal ast c code gen a header file an implementation file remove dependencies of flex generated scanner and bison xpressive | 0 |
1,674 | 6,574,094,254 | IssuesEvent | 2017-09-11 11:27:36 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | docker_image_facts: unable to deal with image IDs | affects_2.2 bug_report cloud docker waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- `docker_image_facts`
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/schwarz/code/infrastructure/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Debian GNU/Linux
##### SUMMARY
`docker` allows addressing images by ID. Ansible should do the same. Otherwise it's impossible to inspect an unnamed image.
##### STEPS TO REPRODUCE
``` sh
$ docker pull alpine
$ docker inspect --format={{.Id}} alpine
sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3
$ docker inspect
$ diff -q <(docker inspect alpine) <(docker inspect sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3)
$ echo $?
0
$ ansible -m docker_image_facts -a name=sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 localhost
```
##### EXPECTED RESULTS
The output should be the same as from `ansible -m docker_image_facts -a name=alpine localhost`.
```
localhost | SUCCESS => {
"changed": false,
"images": [
{
"Architecture": "amd64",
"Author": "",
"Comment": "",
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": null,
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Hostname": "1d811a9194c4",
"Image": "",
"Labels": null,
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Container": "1d811a9194c47475510bc53700001c32f2b0eb8e3aca0914c5424109c0cd2056",
"ContainerConfig": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ADD file:7afbc23fda8b0b3872623c16af8e3490b2cee951aed14b3794389c2f946cc8c7 in / "
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Hostname": "1d811a9194c4",
"Image": "",
"Labels": null,
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2016-10-18T20:31:22.321427771Z",
"DockerVersion": "1.12.1",
"GraphDriver": {
"Data": {
"RootDir": "/var/lib/docker/overlay/7be156a62962247279d73aaafb6ccb6b9c3d25d188641fef7f447ea88563aa4f/root"
},
"Name": "overlay"
},
"Id": "sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3",
"Os": "linux",
"Parent": "",
"RepoDigests": [
"alpine@sha256:1354db23ff5478120c980eca1611a51c9f2b88b61f24283ee8200bf9a54f2e5c"
],
"RepoTags": [
"alpine:latest"
],
"RootFS": {
"Layers": [
"sha256:011b303988d241a4ae28a6b82b0d8262751ef02910f0ae2265cb637504b72e36"
],
"Type": "layers"
},
"Size": 4799225,
"VirtualSize": 4799225
}
]
}
```
##### ACTUAL RESULTS
Instead no matching image is returned.
```
localhost | SUCCESS => {
"changed": false,
"images": []
}
```
| True | docker_image_facts: unable to deal with image IDs - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- `docker_image_facts`
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/schwarz/code/infrastructure/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Debian GNU/Linux
##### SUMMARY
`docker` allows addressing images by ID. Ansible should do the same. Otherwise it's impossible to inspect an unnamed image.
##### STEPS TO REPRODUCE
``` sh
$ docker pull alpine
$ docker inspect --format={{.Id}} alpine
sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3
$ docker inspect
$ diff -q <(docker inspect alpine) <(docker inspect sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3)
$ echo $?
0
$ ansible -m docker_image_facts -a name=sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 localhost
```
##### EXPECTED RESULTS
The output should be the same as from `ansible -m docker_image_facts -a name=alpine localhost`.
```
localhost | SUCCESS => {
"changed": false,
"images": [
{
"Architecture": "amd64",
"Author": "",
"Comment": "",
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": null,
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Hostname": "1d811a9194c4",
"Image": "",
"Labels": null,
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Container": "1d811a9194c47475510bc53700001c32f2b0eb8e3aca0914c5424109c0cd2056",
"ContainerConfig": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ADD file:7afbc23fda8b0b3872623c16af8e3490b2cee951aed14b3794389c2f946cc8c7 in / "
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Hostname": "1d811a9194c4",
"Image": "",
"Labels": null,
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2016-10-18T20:31:22.321427771Z",
"DockerVersion": "1.12.1",
"GraphDriver": {
"Data": {
"RootDir": "/var/lib/docker/overlay/7be156a62962247279d73aaafb6ccb6b9c3d25d188641fef7f447ea88563aa4f/root"
},
"Name": "overlay"
},
"Id": "sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3",
"Os": "linux",
"Parent": "",
"RepoDigests": [
"alpine@sha256:1354db23ff5478120c980eca1611a51c9f2b88b61f24283ee8200bf9a54f2e5c"
],
"RepoTags": [
"alpine:latest"
],
"RootFS": {
"Layers": [
"sha256:011b303988d241a4ae28a6b82b0d8262751ef02910f0ae2265cb637504b72e36"
],
"Type": "layers"
},
"Size": 4799225,
"VirtualSize": 4799225
}
]
}
```
##### ACTUAL RESULTS
Instead no matching image is returned.
```
localhost | SUCCESS => {
"changed": false,
"images": []
}
```
| main | docker image facts unable to deal with image ids issue type bug report component name docker image facts ansible version ansible config file home schwarz code infrastructure ansible cfg configured module search path default w o overrides configuration n a os environment debian gnu linux summary docker allows addressing images by id ansible should do the same otherwise it s impossible to inspect an unnamed image steps to reproduce sh docker pull alpine docker inspect format id alpine docker inspect diff q docker inspect alpine docker inspect echo ansible m docker image facts a name localhost expected results the output should be the same as from ansible m docker image facts a name alpine localhost localhost success changed false images architecture author comment config attachstderr false attachstdin false attachstdout false cmd null domainname entrypoint null env path usr local sbin usr local bin usr sbin usr bin sbin bin hostname image labels null onbuild null openstdin false stdinonce false tty false user volumes null workingdir container containerconfig attachstderr false attachstdin false attachstdout false cmd bin sh c nop add file in domainname entrypoint null env path usr local sbin usr local bin usr sbin usr bin sbin bin hostname image labels null onbuild null openstdin false stdinonce false tty false user volumes null workingdir created dockerversion graphdriver data rootdir var lib docker overlay root name overlay id os linux parent repodigests alpine repotags alpine latest rootfs layers type layers size virtualsize actual results instead no matching image is returned localhost success changed false images | 1 |
154,011 | 12,180,491,774 | IssuesEvent | 2020-04-28 12:33:43 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Single word keywords does not increase purchase intent when searching | QA/Test-Plan-Specified QA/Yes bug feature/ads priority/P2 | Follow up to https://github.com/brave/brave-browser/issues/8047
Searching for `audi` does not increase purchase intent
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Clean profile
2. Connect to US
3. Run Brave with command line: `/usr/bin/brave-browser --enable-logging=stderr --vmodule=brave_ads=3 --brave-ads-staging --rewards=staging=true`
4. Enable Rewards
5. Search for `audi` in URL bar
## Actual result:
<!--Please add screenshots if needed-->
purchase intent weight is not increased
`automotive purchase intent by make-audi` in `Default/ads_service/client.json` is empty or doesn't exists
## Expected result:
purchase intent weight is increased
`automotive purchase intent by make-audi` in `Default/ads_service/client.json` is not empty and contains one element
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% repro rate
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 1.7.78 Chromium: 80.0.3987.149 (Official Build) dev (64-bit)
-- | --
Revision | 5f4eb224680e5d7dca88504586e9fd951840cac6-refs/branch-heads/3987_137@{#16}
OS | Ubuntu 18.04 LTS
cc @brave/legacy_qa @rebron @tmancey @moritzhaller | 1.0 | Single word keywords does not increase purchase intent when searching - Follow up to https://github.com/brave/brave-browser/issues/8047
Searching for `audi` does not increase purchase intent
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Clean profile
2. Connect to US
3. Run Brave with command line: `/usr/bin/brave-browser --enable-logging=stderr --vmodule=brave_ads=3 --brave-ads-staging --rewards=staging=true`
4. Enable Rewards
5. Search for `audi` in URL bar
## Actual result:
<!--Please add screenshots if needed-->
purchase intent weight is not increased
`automotive purchase intent by make-audi` in `Default/ads_service/client.json` is empty or doesn't exists
## Expected result:
purchase intent weight is increased
`automotive purchase intent by make-audi` in `Default/ads_service/client.json` is not empty and contains one element
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% repro rate
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 1.7.78 Chromium: 80.0.3987.149 (Official Build) dev (64-bit)
-- | --
Revision | 5f4eb224680e5d7dca88504586e9fd951840cac6-refs/branch-heads/3987_137@{#16}
OS | Ubuntu 18.04 LTS
cc @brave/legacy_qa @rebron @tmancey @moritzhaller | non_main | single word keywords does not increase purchase intent when searching follow up to searching for audi does not increase purchase intent steps to reproduce clean profile connect to us run brave with command line usr bin brave browser enable logging stderr vmodule brave ads brave ads staging rewards staging true enable rewards search for audi in url bar actual result purchase intent weight is not increased automotive purchase intent by make audi in default ads service client json is empty or doesn t exists expected result purchase intent weight is increased automotive purchase intent by make audi in default ads service client json is not empty and contains one element reproduces how often repro rate brave version brave version info brave chromium official build dev bit revision refs branch heads os ubuntu lts cc brave legacy qa rebron tmancey moritzhaller | 0 |
196,107 | 14,813,287,608 | IssuesEvent | 2021-01-14 01:40:01 | kubernetes/org | https://api.github.com/repos/kubernetes/org | closed | REQUEST: New membership for heqg | area/github-membership sig/node sig/release sig/testing | ### GitHub Username
@heqg
### Organization you are requesting membership in
@kubernetes
### Requirements
- [x] I have reviewed the community membership guidelines (https://git.k8s.io/community/community-membership.md)
- [x] I have enabled 2FA on my GitHub account (https://github.com/settings/security)
- [x] I have subscribed to the kubernetes-dev e-mail list (https://groups.google.com/forum/#!forum/kubernetes-dev)
- [x] I am actively contributing to 1 or more Kubernetes subprojects
- [x] I have two sponsors that meet the sponsor requirements listed in the community membership guidelines
- [x] I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application
### Sponsors
- @klueska
- @MrHohn
### List of contributions to the Kubernetes project
- PRs reviewed / authored
https://github.com/kubernetes/kubernetes/pull/97911
https://github.com/kubernetes/kubernetes/pull/97873
https://github.com/kubernetes/kubernetes/pull/97787
https://github.com/kubernetes/kubernetes/pull/97749
https://github.com/kubernetes/kubernetes/pull/97711
https://github.com/kubernetes/kubernetes/pull/97666
https://github.com/kubernetes/kubernetes/pull/97653
https://github.com/kubernetes/kubernetes/pull/97631
https://github.com/kubernetes/kubernetes/pull/97629
https://github.com/kubernetes/kubernetes/pull/97590
https://github.com/kubernetes/kubernetes/pull/97587
https://github.com/kubernetes/kubernetes/pull/97536
https://github.com/kubernetes/kubernetes/pull/97518
https://github.com/kubernetes/kubernetes/pull/97477
https://github.com/kubernetes/website/pull/17063
- Issues responded to
https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+author%3Aheqg+
- SIG projects I am involved with
/sig node
/sig testing
/sig release | 1.0 | REQUEST: New membership for heqg - ### GitHub Username
@heqg
### Organization you are requesting membership in
@kubernetes
### Requirements
- [x] I have reviewed the community membership guidelines (https://git.k8s.io/community/community-membership.md)
- [x] I have enabled 2FA on my GitHub account (https://github.com/settings/security)
- [x] I have subscribed to the kubernetes-dev e-mail list (https://groups.google.com/forum/#!forum/kubernetes-dev)
- [x] I am actively contributing to 1 or more Kubernetes subprojects
- [x] I have two sponsors that meet the sponsor requirements listed in the community membership guidelines
- [x] I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application
### Sponsors
- @klueska
- @MrHohn
### List of contributions to the Kubernetes project
- PRs reviewed / authored
https://github.com/kubernetes/kubernetes/pull/97911
https://github.com/kubernetes/kubernetes/pull/97873
https://github.com/kubernetes/kubernetes/pull/97787
https://github.com/kubernetes/kubernetes/pull/97749
https://github.com/kubernetes/kubernetes/pull/97711
https://github.com/kubernetes/kubernetes/pull/97666
https://github.com/kubernetes/kubernetes/pull/97653
https://github.com/kubernetes/kubernetes/pull/97631
https://github.com/kubernetes/kubernetes/pull/97629
https://github.com/kubernetes/kubernetes/pull/97590
https://github.com/kubernetes/kubernetes/pull/97587
https://github.com/kubernetes/kubernetes/pull/97536
https://github.com/kubernetes/kubernetes/pull/97518
https://github.com/kubernetes/kubernetes/pull/97477
https://github.com/kubernetes/website/pull/17063
- Issues responded to
https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+author%3Aheqg+
- SIG projects I am involved with
/sig node
/sig testing
/sig release | non_main | request new membership for heqg github username heqg organization you are requesting membership in kubernetes requirements i have reviewed the community membership guidelines i have enabled on my github account i have subscribed to the kubernetes dev e mail list i am actively contributing to or more kubernetes subprojects i have two sponsors that meet the sponsor requirements listed in the community membership guidelines i have spoken to my sponsors ahead of this application and they have agreed to sponsor my application sponsors klueska mrhohn list of contributions to the kubernetes project prs reviewed authored issues responded to sig projects i am involved with sig node sig testing sig release | 0 |
1,992 | 6,694,297,401 | IssuesEvent | 2017-10-10 00:58:42 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Forecast: Rounding Issue | Maintainer Input Requested Status: PR Received | The humidity has a rounding issue - a long series of 9's. See the attached screen shot (far left box).

---
IA Page: http://duck.co/ia/view/forecast
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @himanshu0113
| True | Forecast: Rounding Issue - The humidity has a rounding issue - a long series of 9's. See the attached screen shot (far left box).

---
IA Page: http://duck.co/ia/view/forecast
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @himanshu0113
| main | forecast rounding issue the humidity has a rounding issue a long series of s see the attached screen shot far left box ia page | 1 |
5,146 | 26,235,325,156 | IssuesEvent | 2023-01-05 06:29:26 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Param --docker-volume-basedir in sam local start-lambda is not used as expected | stage/needs-investigation maintainer/need-followup | ### Description:
Consider a scenario where Docker is running on a remote machine (or SAM CLI is running on a container itself).
Based on the [docs](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-start-lambda.html), which for `--docker-volume-basedir` say:
> If Docker is running on a remote machine, you must mount the path where the AWS SAM file exists on the Docker machine, and modify this value to match the remote machine.
If I understand things correctly (which very likely may not be the case), running `sam local start-lambda --docker-volume-basedir /path/to/code/in/docker/machine`, should take the contents of the specified path **on the Docker host** and mount them on the Lambda Container, since the path exists in the Docker host machine, **not** in the local machine.
However, in the current implementation, it seems that `--docker-volume-basedir` is ignored in favor of the path containing the template file when looking for the code to be mounted in the Lambda Container.
### Steps to reproduce:
Suppose you have your env variable `DOCKER_HOST=tcp://docker_host:2375` which tells the CLI to use docker from a remote host (this could be also defined with the parameter `--container-host`), and you have mounted the source code to the Docker host as it is specified in the docs, to a directory e.g. /home/docker-host/stock-checker.
Now, you have a sam app such as the Stock Trader sample in your current directory, e.g. `/home/my-work-dir/stock-checker/" and you start lambda locally and execute a function with:
```
$ sam local start-lambda --debug --docker-volume-basedir /home/docker-host/stock-checker/.aws-sam/build
$ aws lambda invoke --endpoint http://localhost:3001 --function-name StockCheckerFunction response.json
```
### Observed result:
You will see the log:
`Mounting /home/my-work-dir/stock-checker/.aws-sam/build/StockCheckerFunctionas /var/task:ro,delegated inside runtime container`
Which will fail because that path does not exist in the Docker host. The resulting exception is that the handler module cannot be loaded.
### Expected result:
The mounting path should be the one specified by the parameter `--docker-volume-basedir`, and the log should say
`Mounting /home/docker-host/stock-checker/.aws-sam/build/StockCheckerFunctionas /var/task:ro,delegated inside runtime container`
Which is a valid path in the Docker host, and the source code is correctly mounted into the Lambda Container.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Ubuntu
2. `sam --version`: 1.36.0
3. AWS region: eu-west-1
`Add --debug flag to command you are running`
### Suggestion
The way the path to be mounted is defined by a few factors:
* First, when a Function object is created, its codeuri is set to an absolute path which is the same as where the template file exists.
* This happens as a consequence of the following: [nvoke_context.py#L195](https://github.com/aws/aws-sam-cli/blob/develop/samcli/commands/local/cli_common/invoke_context.py#L195), which calls `SamFunctionProvider()` without a value for use_raw_codeuri, which defaults to False, which in turn makes the created Function's `codeuri` to always have that absolute path relative to template directory.
* Second, when the mounting path for the Lambda Contianer is evaluated in [local_lambda.py#L184](https://github.com/aws/aws-sam-cli/blob/develop/samcli/commands/local/lib/local_lambda.py#L184), it follows into [codeuri.py#L44](https://github.com/aws/aws-sam-cli/blob/develop/samcli/lib/utils/codeuri.py#L44), which will return `codeuri` only because it is already an absolute path (as shown in the previous point), and because calling `os.path.join(cwd, codeuri)` would still return just the value of `codeuri`, since it is an absolute path already. Completely disregarding `cwd` which at this point holds the value passed to `--docker-volume-basedir`
As a possible solution, I have tried adding a check for `docker_volume_basedir` in [invoke_context.py#L195](https://github.com/aws/aws-sam-cli/blob/develop/samcli/commands/local/cli_common/invoke_context.py#L195), like this:
```
self._function_provider = SamFunctionProvider(self._stacks, use_raw_codeuri=bool(self._docker_volume_basedir))
```
In order to set a value for use_raw_codeuri` which will in turn make the resulting Function's `codeuri` field to be a relative path, and thus making `resolve_code_path` return a path that utilizes the value of `cwd` (same as `docker_volume_basedir`)
If this solution seems feasible and fits your overall plan/vision, I'd like to chip in with a PR, or if you have any plans for this issue in the future I'd like to contribute as well.
Note, doing so messes up 4 test cases, need to review further how/if it is possible to implement this suggestion safely
| True | Param --docker-volume-basedir in sam local start-lambda is not used as expected - ### Description:
Consider a scenario where Docker is running on a remote machine (or SAM CLI is running on a container itself).
Based on the [docs](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-start-lambda.html), which for `--docker-volume-basedir` say:
> If Docker is running on a remote machine, you must mount the path where the AWS SAM file exists on the Docker machine, and modify this value to match the remote machine.
If I understand things correctly (which very likely may not be the case), running `sam local start-lambda --docker-volume-basedir /path/to/code/in/docker/machine`, should take the contents of the specified path **on the Docker host** and mount them on the Lambda Container, since the path exists in the Docker host machine, **not** in the local machine.
However, in the current implementation, it seems that `--docker-volume-basedir` is ignored in favor of the path containing the template file when looking for the code to be mounted in the Lambda Container.
### Steps to reproduce:
Suppose you have your env variable `DOCKER_HOST=tcp://docker_host:2375` which tells the CLI to use docker from a remote host (this could be also defined with the parameter `--container-host`), and you have mounted the source code to the Docker host as it is specified in the docs, to a directory e.g. /home/docker-host/stock-checker.
Now, you have a sam app such as the Stock Trader sample in your current directory, e.g. `/home/my-work-dir/stock-checker/" and you start lambda locally and execute a function with:
```
$ sam local start-lambda --debug --docker-volume-basedir /home/docker-host/stock-checker/.aws-sam/build
$ aws lambda invoke --endpoint http://localhost:3001 --function-name StockCheckerFunction response.json
```
### Observed result:
You will see the log:
`Mounting /home/my-work-dir/stock-checker/.aws-sam/build/StockCheckerFunctionas /var/task:ro,delegated inside runtime container`
Which will fail because that path does not exist in the Docker host. The resulting exception is that the handler module cannot be loaded.
### Expected result:
The mounting path should be the one specified by the parameter `--docker-volume-basedir`, and the log should say
`Mounting /home/docker-host/stock-checker/.aws-sam/build/StockCheckerFunctionas /var/task:ro,delegated inside runtime container`
Which is a valid path in the Docker host, and the source code is correctly mounted into the Lambda Container.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Ubuntu
2. `sam --version`: 1.36.0
3. AWS region: eu-west-1
`Add --debug flag to command you are running`
### Suggestion
The way the path to be mounted is defined by a few factors:
* First, when a Function object is created, its codeuri is set to an absolute path which is the same as where the template file exists.
* This happens as a consequence of the following: [nvoke_context.py#L195](https://github.com/aws/aws-sam-cli/blob/develop/samcli/commands/local/cli_common/invoke_context.py#L195), which calls `SamFunctionProvider()` without a value for use_raw_codeuri, which defaults to False, which in turn makes the created Function's `codeuri` to always have that absolute path relative to template directory.
* Second, when the mounting path for the Lambda Contianer is evaluated in [local_lambda.py#L184](https://github.com/aws/aws-sam-cli/blob/develop/samcli/commands/local/lib/local_lambda.py#L184), it follows into [codeuri.py#L44](https://github.com/aws/aws-sam-cli/blob/develop/samcli/lib/utils/codeuri.py#L44), which will return `codeuri` only because it is already an absolute path (as shown in the previous point), and because calling `os.path.join(cwd, codeuri)` would still return just the value of `codeuri`, since it is an absolute path already. Completely disregarding `cwd` which at this point holds the value passed to `--docker-volume-basedir`
As a possible solution, I have tried adding a check for `docker_volume_basedir` in [invoke_context.py#L195](https://github.com/aws/aws-sam-cli/blob/develop/samcli/commands/local/cli_common/invoke_context.py#L195), like this:
```
self._function_provider = SamFunctionProvider(self._stacks, use_raw_codeuri=bool(self._docker_volume_basedir))
```
In order to set a value for use_raw_codeuri` which will in turn make the resulting Function's `codeuri` field to be a relative path, and thus making `resolve_code_path` return a path that utilizes the value of `cwd` (same as `docker_volume_basedir`)
If this solution seems feasible and fits your overall plan/vision, I'd like to chip in with a PR, or if you have any plans for this issue in the future I'd like to contribute as well.
Note, doing so messes up 4 test cases, need to review further how/if it is possible to implement this suggestion safely
| main | param docker volume basedir in sam local start lambda is not used as expected description consider a scenario where docker is running on a remote machine or sam cli is running on a container itself based on the which for docker volume basedir say if docker is running on a remote machine you must mount the path where the aws sam file exists on the docker machine and modify this value to match the remote machine if i understand things correctly which very likely may not be the case running sam local start lambda docker volume basedir path to code in docker machine should take the contents of the specified path on the docker host and mount them on the lambda container since the path exists in the docker host machine not in the local machine however in the current implementation it seems that docker volume basedir is ignored in favor of the path containing the template file when looking for the code to be mounted in the lambda container steps to reproduce suppose you have your env variable docker host tcp docker host which tells the cli to use docker from a remote host this could be also defined with the parameter container host and you have mounted the source code to the docker host as it is specified in the docs to a directory e g home docker host stock checker now you have a sam app such as the stock trader sample in your current directory e g home my work dir stock checker and you start lambda locally and execute a function with sam local start lambda debug docker volume basedir home docker host stock checker aws sam build aws lambda invoke endpoint function name stockcheckerfunction response json observed result you will see the log mounting home my work dir stock checker aws sam build stockcheckerfunctionas var task ro delegated inside runtime container which will fail because that path does not exist in the docker host the resulting exception is that the handler module cannot be loaded expected result the mounting path should be the one specified by the parameter docker volume basedir and the log should say mounting home docker host stock checker aws sam build stockcheckerfunctionas var task ro delegated inside runtime container which is a valid path in the docker host and the source code is correctly mounted into the lambda container additional environment details ex windows mac amazon linux etc os ubuntu sam version aws region eu west add debug flag to command you are running suggestion the way the path to be mounted is defined by a few factors first when a function object is created its codeuri is set to an absolute path which is the same as where the template file exists this happens as a consequence of the following which calls samfunctionprovider without a value for use raw codeuri which defaults to false which in turn makes the created function s codeuri to always have that absolute path relative to template directory second when the mounting path for the lambda contianer is evaluated in it follows into which will return codeuri only because it is already an absolute path as shown in the previous point and because calling os path join cwd codeuri would still return just the value of codeuri since it is an absolute path already completely disregarding cwd which at this point holds the value passed to docker volume basedir as a possible solution i have tried adding a check for docker volume basedir in like this self function provider samfunctionprovider self stacks use raw codeuri bool self docker volume basedir in order to set a value for use raw codeuri which will in turn make the resulting function s codeuri field to be a relative path and thus making resolve code path return a path that utilizes the value of cwd same as docker volume basedir if this solution seems feasible and fits your overall plan vision i d like to chip in with a pr or if you have any plans for this issue in the future i d like to contribute as well note doing so messes up test cases need to review further how if it is possible to implement this suggestion safely | 1 |
5,029 | 25,804,698,450 | IssuesEvent | 2022-12-11 09:49:57 | diofant/diofant | https://api.github.com/repos/diofant/diofant | closed | Replace setup.* with pyproject.toml | maintainability | Setuptools doesn't support declarative configuration with this file yet. This blocks replacement.
Other tools:
- [x] pylint - https://github.com/PyCQA/pylint/issues/617
- [x] pytest - https://github.com/pytest-dev/pytest/issues/1556
- [x] flake8 - https://github.com/PyCQA/flake8/issues/234. Use [flakehell](https://flakehell.readthedocs.io/)? https://github.com/carlosperate/awesome-pyproject/issues/47 ?
- [x] coverage - https://github.com/nedbat/coveragepy/issues/664
- [x] isort - https://github.com/timothycrosley/isort/issues/705
- [x] setuptools - https://github.com/pypa/setuptools/issues/1688. Use [flit](https://flit.readthedocs.io/en/stable/) (but see https://github.com/pypa/flit/issues/257) or [poetry](https://python-poetry.org/docs/) (has CoC!)?
- [x] setuptools_scm - https://github.com/pypa/setuptools_scm/pull/364
See also [this](https://github.com/carlosperate/awesome-pyproject/).
| True | Replace setup.* with pyproject.toml - Setuptools doesn't support declarative configuration with this file yet. This blocks replacement.
Other tools:
- [x] pylint - https://github.com/PyCQA/pylint/issues/617
- [x] pytest - https://github.com/pytest-dev/pytest/issues/1556
- [x] flake8 - https://github.com/PyCQA/flake8/issues/234. Use [flakehell](https://flakehell.readthedocs.io/)? https://github.com/carlosperate/awesome-pyproject/issues/47 ?
- [x] coverage - https://github.com/nedbat/coveragepy/issues/664
- [x] isort - https://github.com/timothycrosley/isort/issues/705
- [x] setuptools - https://github.com/pypa/setuptools/issues/1688. Use [flit](https://flit.readthedocs.io/en/stable/) (but see https://github.com/pypa/flit/issues/257) or [poetry](https://python-poetry.org/docs/) (has CoC!)?
- [x] setuptools_scm - https://github.com/pypa/setuptools_scm/pull/364
See also [this](https://github.com/carlosperate/awesome-pyproject/).
| main | replace setup with pyproject toml setuptools doesn t support declarative configuration with this file yet this blocks replacement other tools pylint pytest use coverage isort setuptools use but see or has coc setuptools scm see also | 1 |
135,090 | 10,962,006,748 | IssuesEvent | 2019-11-27 16:24:42 | raiden-network/raiden | https://api.github.com/repos/raiden-network/raiden | closed | watch_for_unlock_failures is too slow | Flag / Testing Topic / Flaky Tests Type / Optimization | A profiling of some flaky integration tests show that they spend way too much time in the `watch_for_unlock_failures` context manager.
Proposed fix: Rather than recovering every event from the database, patch the WAL and count events as they happen.
Related issue: #4803 (Although there will be more problems with the stress test) | 2.0 | watch_for_unlock_failures is too slow - A profiling of some flaky integration tests show that they spend way too much time in the `watch_for_unlock_failures` context manager.
Proposed fix: Rather than recovering every event from the database, patch the WAL and count events as they happen.
Related issue: #4803 (Although there will be more problems with the stress test) | non_main | watch for unlock failures is too slow a profiling of some flaky integration tests show that they spend way too much time in the watch for unlock failures context manager proposed fix rather than recovering every event from the database patch the wal and count events as they happen related issue although there will be more problems with the stress test | 0 |
4,377 | 22,284,608,945 | IssuesEvent | 2022-06-11 12:19:24 | BioArchLinux/Packages | https://api.github.com/repos/BioArchLinux/Packages | closed | [MAINTAIN] groHMM | maintain | <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
upstream error
**Log of the bug**
<details>
http://bioconductor.org/checkResults/release/bioc-LATEST/groHMM/
</details>
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
Add any other context about the problem here.
| True | [MAINTAIN] groHMM - <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
upstream error
**Log of the bug**
<details>
http://bioconductor.org/checkResults/release/bioc-LATEST/groHMM/
</details>
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
Add any other context about the problem here.
| main | grohmm please report the error of one package in one issue use multi issues to report multi bugs thanks upstream error log of the bug packages please complete the following information package name description add any other context about the problem here | 1 |
4,908 | 25,230,241,948 | IssuesEvent | 2022-11-14 19:10:23 | sul-dlss/preservation_catalog | https://api.github.com/repos/sul-dlss/preservation_catalog | closed | CompleteMoabHandler#check_exist refactor - it's shameful green, really. | refactor complete-moab-handler-more-maintainable | One approach:
- create small private methods that can be used by both confirm_online_version and check_existence
- create private method with some of logic in public check_existence method.
...
orig ticket:
-------
For discussion of the issue, see https://github.com/sul-dlss/preservation_catalog/pull/423#discussion_r157289371
The code to extract is here: https://github.com/sul-dlss/preservation_catalog/blob/7b89f23b27cac88c9498356d2e9219550c3043d9/app/services/preserved_object_handler.rb#L68-L109
The proposed new helper method name is `check_online_existence` | True | CompleteMoabHandler#check_exist refactor - it's shameful green, really. - One approach:
- create small private methods that can be used by both confirm_online_version and check_existence
- create private method with some of logic in public check_existence method.
...
orig ticket:
-------
For discussion of the issue, see https://github.com/sul-dlss/preservation_catalog/pull/423#discussion_r157289371
The code to extract is here: https://github.com/sul-dlss/preservation_catalog/blob/7b89f23b27cac88c9498356d2e9219550c3043d9/app/services/preserved_object_handler.rb#L68-L109
The proposed new helper method name is `check_online_existence` | main | completemoabhandler check exist refactor it s shameful green really one approach create small private methods that can be used by both confirm online version and check existence create private method with some of logic in public check existence method orig ticket for discussion of the issue see the code to extract is here the proposed new helper method name is check online existence | 1 |
86,711 | 10,515,389,477 | IssuesEvent | 2019-09-28 09:24:58 | backdrop/backdrop-issues | https://api.github.com/repos/backdrop/backdrop-issues | reopened | [DX] How to specify alternative configuration settings for elements using `system_settings_form()`? | type - documentation type - question | In https://api.backdropcms.org/api/backdrop/1/search/system_settings_form it mentions how you can specify a different config file for some of the form elements/values via `'#config'`. What do you do though if you want to save to the same top-level `'#config'`, but to a different setting?
So say that your form element is `$form['my']['cool']['element'] = array( ... );` but you want the setting to be saved as `my_cool_element` in the .json? ...is a custom submit handler the only option in that case?
So basically, I understand that you can do this:
```php
$primary_config = config('mymodule.settings');
$secondary_config = config('mymodule.moar.settings');
$form = array('#config' => 'mymodule.settings');
$form['first_setting'] = array( ... );
$form['second_setting'] = array(
...
'#config' => 'mymodule.moar.settings',
...
);
```
...and that this saves `first_setting` in `mymodule.settings.json`, while `second_setting` is saved in `mymodule.moar.settings.json`.
What I need to do though is something like this:
```php
$config = config('mymodule.settings');
$form = array('#config' => 'mymodule.settings');
$form['first_setting'] = array( ... );
$form['second_setting'] = array(
...
'#config_setting' => 'call_this_something_else',
...
);
```
...so both settings will be saved in the same `mymodule.settings.json` file. The first one as `"first_setting"`, while the second one as `"call_this_something_else"`. So instead of this:
```json
{
"_config_name": "mymodule.settings",
"_module": "mymodule",
"first_setting": 123,
"second_setting": "abc",
}
```
...I would instead want to have this:
```json
{
"_config_name": "mymodule.settings",
"_module": "mymodule",
"first_setting": 123,
"call_this_something_else": "abc",
}
``` | 1.0 | [DX] How to specify alternative configuration settings for elements using `system_settings_form()`? - In https://api.backdropcms.org/api/backdrop/1/search/system_settings_form it mentions how you can specify a different config file for some of the form elements/values via `'#config'`. What do you do though if you want to save to the same top-level `'#config'`, but to a different setting?
So say that your form element is `$form['my']['cool']['element'] = array( ... );` but you want the setting to be saved as `my_cool_element` in the .json? ...is a custom submit handler the only option in that case?
So basically, I understand that you can do this:
```php
$primary_config = config('mymodule.settings');
$secondary_config = config('mymodule.moar.settings');
$form = array('#config' => 'mymodule.settings');
$form['first_setting'] = array( ... );
$form['second_setting'] = array(
...
'#config' => 'mymodule.moar.settings',
...
);
```
...and that this saves `first_setting` in `mymodule.settings.json`, while `second_setting` is saved in `mymodule.moar.settings.json`.
What I need to do though is something like this:
```php
$config = config('mymodule.settings');
$form = array('#config' => 'mymodule.settings');
$form['first_setting'] = array( ... );
$form['second_setting'] = array(
...
'#config_setting' => 'call_this_something_else',
...
);
```
...so both settings will be saved in the same `mymodule.settings.json` file. The first one as `"first_setting"`, while the second one as `"call_this_something_else"`. So instead of this:
```json
{
"_config_name": "mymodule.settings",
"_module": "mymodule",
"first_setting": 123,
"second_setting": "abc",
}
```
...I would instead want to have this:
```json
{
"_config_name": "mymodule.settings",
"_module": "mymodule",
"first_setting": 123,
"call_this_something_else": "abc",
}
``` | non_main | how to specify alternative configuration settings for elements using system settings form in it mentions how you can specify a different config file for some of the form elements values via config what do you do though if you want to save to the same top level config but to a different setting so say that your form element is form array but you want the setting to be saved as my cool element in the json is a custom submit handler the only option in that case so basically i understand that you can do this php primary config config mymodule settings secondary config config mymodule moar settings form array config mymodule settings form array form array config mymodule moar settings and that this saves first setting in mymodule settings json while second setting is saved in mymodule moar settings json what i need to do though is something like this php config config mymodule settings form array config mymodule settings form array form array config setting call this something else so both settings will be saved in the same mymodule settings json file the first one as first setting while the second one as call this something else so instead of this json config name mymodule settings module mymodule first setting second setting abc i would instead want to have this json config name mymodule settings module mymodule first setting call this something else abc | 0 |
444,275 | 12,809,070,091 | IssuesEvent | 2020-07-03 14:51:47 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | closed | Member Counts Inaccurate in Parties | priority: medium section: Guilds section: Party Page status: issue: in progress type: medium level coding | ### Description
[//]: # (Describe bug in detail here. Include screenshots if helpful.)
Clean-up of #7454. You can visit that ticket to see the full history of discussion on this issue.
`memberCount` in group records is a stored computed value that we currently attempt to keep up to date by incrementing and decrementing as members join and leave. It frequently gets out of sync, leading to people with 6-member parties showing a member count of 7, etc.; this is a nuisance at best, and in some cases involving the 30-person party limit, can actually interfere with people joining or inviting members.
This is the desired fix:
> change the group remove, leave and join routes to recompute `memberCount` each time it needs to be updated (i.e., count all existing members, rather than add or subtract one from the current value of `memberCount`)
...when **Parties** are involved. At this time, we're leaving Guilds alone for performance reasons. See #12286 for the status of that arm of the issue. | 1.0 | Member Counts Inaccurate in Parties - ### Description
[//]: # (Describe bug in detail here. Include screenshots if helpful.)
Clean-up of #7454. You can visit that ticket to see the full history of discussion on this issue.
`memberCount` in group records is a stored computed value that we currently attempt to keep up to date by incrementing and decrementing as members join and leave. It frequently gets out of sync, leading to people with 6-member parties showing a member count of 7, etc.; this is a nuisance at best, and in some cases involving the 30-person party limit, can actually interfere with people joining or inviting members.
This is the desired fix:
> change the group remove, leave and join routes to recompute `memberCount` each time it needs to be updated (i.e., count all existing members, rather than add or subtract one from the current value of `memberCount`)
...when **Parties** are involved. At this time, we're leaving Guilds alone for performance reasons. See #12286 for the status of that arm of the issue. | non_main | member counts inaccurate in parties description describe bug in detail here include screenshots if helpful clean up of you can visit that ticket to see the full history of discussion on this issue membercount in group records is a stored computed value that we currently attempt to keep up to date by incrementing and decrementing as members join and leave it frequently gets out of sync leading to people with member parties showing a member count of etc this is a nuisance at best and in some cases involving the person party limit can actually interfere with people joining or inviting members this is the desired fix change the group remove leave and join routes to recompute membercount each time it needs to be updated i e count all existing members rather than add or subtract one from the current value of membercount when parties are involved at this time we re leaving guilds alone for performance reasons see for the status of that arm of the issue | 0 |
610 | 4,105,790,419 | IssuesEvent | 2016-06-06 04:33:42 | Microsoft/DirectXMath | https://api.github.com/repos/Microsoft/DirectXMath | closed | Remove old VS 2010/2012 support adapters | maintainence | Future versions of DirectXMath are likely to only support VS 2013 and VS 2015. Therefore, I can remove a number of things that are present for older compilers:
Remove the ``#ifdef`` guard from ``#pragma once``
Simplify this expression
#if ((_MSC_FULL_VER >= 170065501) && (_MSC_VER < 1800)) || (_MSC_FULL_VER >= 180020418)
#define _XM_VECTORCALL_ 1
#endif
Remove ``XM_CTOR_DEFAULT`` adapter and replace it with ``=default``
Remove ``XM_VMULQ_N_F32``, ``XM_VMLAQ_N_F32``, ``XM_VMULQ_LANE_F32``, ``XM_VMLAQ_LANE_F32`` adapters and replace it with ``vmulq_n_f32``, ``vmlaq_n_f32``, ``vmulq_lane_f32``, ``vmlaq_lane_f32``
| True | Remove old VS 2010/2012 support adapters - Future versions of DirectXMath are likely to only support VS 2013 and VS 2015. Therefore, I can remove a number of things that are present for older compilers:
Remove the ``#ifdef`` guard from ``#pragma once``
Simplify this expression
#if ((_MSC_FULL_VER >= 170065501) && (_MSC_VER < 1800)) || (_MSC_FULL_VER >= 180020418)
#define _XM_VECTORCALL_ 1
#endif
Remove ``XM_CTOR_DEFAULT`` adapter and replace it with ``=default``
Remove ``XM_VMULQ_N_F32``, ``XM_VMLAQ_N_F32``, ``XM_VMULQ_LANE_F32``, ``XM_VMLAQ_LANE_F32`` adapters and replace it with ``vmulq_n_f32``, ``vmlaq_n_f32``, ``vmulq_lane_f32``, ``vmlaq_lane_f32``
| main | remove old vs support adapters future versions of directxmath are likely to only support vs and vs therefore i can remove a number of things that are present for older compilers remove the ifdef guard from pragma once simplify this expression if msc full ver msc ver define xm vectorcall endif remove xm ctor default adapter and replace it with default remove xm vmulq n xm vmlaq n xm vmulq lane xm vmlaq lane adapters and replace it with vmulq n vmlaq n vmulq lane vmlaq lane | 1 |
1,607 | 6,572,399,251 | IssuesEvent | 2017-09-11 02:01:13 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | iptables invert (!) feature not working properly | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
iptables module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
There is no way (or I don't know it) how to use invert feature in iptables module. Regarding to iptables documentation invert sign (!) should be before flag not after.
<!--- You can also paste gist.github.com links for larger files -->
using command below
```
iptables: chain=DOCKER destination=172.224.10.11/32 in_interface=!some_docker_network out_interface=some_docker_network protocol=sctp match=sctp destination_port=4739 jump=ACCEPT
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
```
iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION
-A DOCKER -d 172.224.10.11/32 ! -i some_docker_network -o some_docker_network -p sctp -m sctp --dport 4739 -j ACCEPT
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION
-A DOCKER -d 172.224.10.11/32 -i !some_docker_network -o some_docker_network -p sctp -m sctp --dport 4739 -j ACCEPT
```
Anyway you can pass this problem by writing shell script with iptables rules and run it as shell.
Thx for great and useful tool.
| True | iptables invert (!) feature not working properly - ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
iptables module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
There is no way (or I don't know it) how to use invert feature in iptables module. Regarding to iptables documentation invert sign (!) should be before flag not after.
<!--- You can also paste gist.github.com links for larger files -->
using command below
```
iptables: chain=DOCKER destination=172.224.10.11/32 in_interface=!some_docker_network out_interface=some_docker_network protocol=sctp match=sctp destination_port=4739 jump=ACCEPT
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
```
iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION
-A DOCKER -d 172.224.10.11/32 ! -i some_docker_network -o some_docker_network -p sctp -m sctp --dport 4739 -j ACCEPT
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION
-A DOCKER -d 172.224.10.11/32 -i !some_docker_network -o some_docker_network -p sctp -m sctp --dport 4739 -j ACCEPT
```
Anyway you can pass this problem by writing shell script with iptables rules and run it as shell.
Thx for great and useful tool.
| main | iptables invert feature not working properly issue type bug report component name iptables module ansible version ansible os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary there is no way or i don t know it how to use invert feature in iptables module regarding to iptables documentation invert sign should be before flag not after using command below iptables chain docker destination in interface some docker network out interface some docker network protocol sctp match sctp destination port jump accept expected results iptables s p input accept p forward accept p output accept n docker n docker isolation a docker d i some docker network o some docker network p sctp m sctp dport j accept actual results iptables s p input accept p forward accept p output accept n docker n docker isolation a docker d i some docker network o some docker network p sctp m sctp dport j accept anyway you can pass this problem by writing shell script with iptables rules and run it as shell thx for great and useful tool | 1 |
4,313 | 21,712,222,654 | IssuesEvent | 2022-05-10 14:44:03 | arcticicestudio/styleguide-markdown | https://api.github.com/repos/arcticicestudio/styleguide-markdown | opened | Update to `tmpl` template repository version `0.11.0` | type-improvement context-workflow scope-compatibility scope-maintainability | Update to [`tmpl` version `0.11.0`][1], including the versions in between starting from [0.10.0][2]:
1. [Optimized GitHub action workflow scope][3].
2. [Updated Node.js packages & GitHub actions][4] [^1] [^2].
3. [Opts-in the Dependabot version update configuration][5].
This will also include changes required for any linter matches.
[1]: https://github.com/svengreb/tmpl/releases/tag/v0.11.0
[2]: https://github.com/svengreb/tmpl/releases/tag/v0.10.0
[3]: https://github.com/svengreb/tmpl/issues/84
[4]: https://github.com/svengreb/tmpl/issues/86
[5]: https://github.com/svengreb/tmpl/issues/94
[^1]: https://github.com/svengreb/tmpl/issues/78
[^2]: https://github.com/svengreb/tmpl/issues/83
| True | Update to `tmpl` template repository version `0.11.0` - Update to [`tmpl` version `0.11.0`][1], including the versions in between starting from [0.10.0][2]:
1. [Optimized GitHub action workflow scope][3].
2. [Updated Node.js packages & GitHub actions][4] [^1] [^2].
3. [Opts-in the Dependabot version update configuration][5].
This will also include changes required for any linter matches.
[1]: https://github.com/svengreb/tmpl/releases/tag/v0.11.0
[2]: https://github.com/svengreb/tmpl/releases/tag/v0.10.0
[3]: https://github.com/svengreb/tmpl/issues/84
[4]: https://github.com/svengreb/tmpl/issues/86
[5]: https://github.com/svengreb/tmpl/issues/94
[^1]: https://github.com/svengreb/tmpl/issues/78
[^2]: https://github.com/svengreb/tmpl/issues/83
| main | update to tmpl template repository version update to including the versions in between starting from this will also include changes required for any linter matches | 1 |
382,777 | 26,514,816,569 | IssuesEvent | 2023-01-18 19:54:32 | newrelic/newrelic-ruby-agent | https://api.github.com/repos/newrelic/newrelic-ruby-agent | closed | Update documentation examples for Rails 7.0 Support | documentation | ### Feature Description
It can be frustrating to find documentation that suits your use case, but end up with an example that continues to raise errors.
Let's take some time to verify the examples in the Ruby Agent's documentation work on Rails 7.0. All examples should be technically accurate and functional for Rails 7.0 to complete this story. If code changes will take time that exceeds the estimate, reach out to the team to separate changes out into additional issues.
### Documentation to Verify
- [x] [Ruby Custom Metrics](https://docs.newrelic.com/docs/apm/agents/ruby-agent/api-guides/ruby-custom-metrics/)
- [x] [Ignoring Specific Transactions](https://docs.newrelic.com/docs/apm/agents/ruby-agent/api-guides/ignoring-specific-transactions/ )
- [x] Blocking all instrumentation - transactions blocked.
- [x] Ignoring specific actions with Rails
- [x] `:only`
- [x] `except`
- [x] Ignoring Apdex contributions
- [x] Block browser instrumentation
- [x] Ignoring transactions dynamically
- [x] NewRelic::Agent.ignore_transaction
- [x] NewRelic::Agent.ignore_apdex
- [x] NewRelic::Agent.ignore_enduser
- [x] Ignoring transactions by URL with configuration
- [x] [Sending Handled Errors](https://docs.newrelic.com/docs/apm/agents/ruby-agent/api-guides/sending-handled-errors-new-relic/ )
- [x] [Ruby Custom Instrumentation](https://docs.newrelic.com/docs/apm/agents/ruby-agent/api-guides/ruby-custom-instrumentation/)
- [x] Tracing in class definitions
- [x] Tracing initializers
- [x] Tracing blocks of code
- [x] Naming transactions
### Additional context
Currently, there is a disclaimer listed at the top of articles with examples that have not been tested with Rails 7.0. This should be removed once examples are tested and, if necessary, updated. | 1.0 | Update documentation examples for Rails 7.0 Support - ### Feature Description
It can be frustrating to find documentation that suits your use case, but end up with an example that continues to raise errors.
Let's take some time to verify the examples in the Ruby Agent's documentation work on Rails 7.0. All examples should be technically accurate and functional for Rails 7.0 to complete this story. If code changes will take time that exceeds the estimate, reach out to the team to separate changes out into additional issues.
### Documentation to Verify
- [x] [Ruby Custom Metrics](https://docs.newrelic.com/docs/apm/agents/ruby-agent/api-guides/ruby-custom-metrics/)
- [x] [Ignoring Specific Transactions](https://docs.newrelic.com/docs/apm/agents/ruby-agent/api-guides/ignoring-specific-transactions/ )
- [x] Blocking all instrumentation - transactions blocked.
- [x] Ignoring specific actions with Rails
- [x] `:only`
- [x] `except`
- [x] Ignoring Apdex contributions
- [x] Block browser instrumentation
- [x] Ignoring transactions dynamically
- [x] NewRelic::Agent.ignore_transaction
- [x] NewRelic::Agent.ignore_apdex
- [x] NewRelic::Agent.ignore_enduser
- [x] Ignoring transactions by URL with configuration
- [x] [Sending Handled Errors](https://docs.newrelic.com/docs/apm/agents/ruby-agent/api-guides/sending-handled-errors-new-relic/ )
- [x] [Ruby Custom Instrumentation](https://docs.newrelic.com/docs/apm/agents/ruby-agent/api-guides/ruby-custom-instrumentation/)
- [x] Tracing in class definitions
- [x] Tracing initializers
- [x] Tracing blocks of code
- [x] Naming transactions
### Additional context
Currently, there is a disclaimer listed at the top of articles with examples that have not been tested with Rails 7.0. This should be removed once examples are tested and, if necessary, updated. | non_main | update documentation examples for rails support feature description it can be frustrating to find documentation that suits your use case but end up with an example that continues to raise errors let s take some time to verify the examples in the ruby agent s documentation work on rails all examples should be technically accurate and functional for rails to complete this story if code changes will take time that exceeds the estimate reach out to the team to separate changes out into additional issues documentation to verify blocking all instrumentation transactions blocked ignoring specific actions with rails only except ignoring apdex contributions block browser instrumentation ignoring transactions dynamically newrelic agent ignore transaction newrelic agent ignore apdex newrelic agent ignore enduser ignoring transactions by url with configuration tracing in class definitions tracing initializers tracing blocks of code naming transactions additional context currently there is a disclaimer listed at the top of articles with examples that have not been tested with rails this should be removed once examples are tested and if necessary updated | 0 |
5,027 | 25,801,825,584 | IssuesEvent | 2022-12-11 03:23:11 | deislabs/spiderlightning | https://api.github.com/repos/deislabs/spiderlightning | closed | configs.usersecrets doesn't work on Windows | 🐛 bug 🚧 maintainer issue | **Description of the bug**
It doesn't truncate the file.
**To Reproduce**
n/a
**Additional context**
n/a | True | configs.usersecrets doesn't work on Windows - **Description of the bug**
It doesn't truncate the file.
**To Reproduce**
n/a
**Additional context**
n/a | main | configs usersecrets doesn t work on windows description of the bug it doesn t truncate the file to reproduce n a additional context n a | 1 |
2,540 | 8,666,789,983 | IssuesEvent | 2018-11-29 06:04:51 | arcticicestudio/nord-docs | https://api.github.com/repos/arcticicestudio/nord-docs | opened | Use "binary" Git attribute for "Adobe Illustrator" project files | scope-compatibility scope-maintainability type-task | [“Adobe Illustrator“][wiki-ai] `.ai` artwork project files are currently handled as "normal" plain text by Git. This will be changed to the `binary` attribute instead to prevent encoding problems and noisy diff views.
[wiki-ai]: https://en.wikipedia.org/wiki/Adobe_Illustrator_Artwork | True | Use "binary" Git attribute for "Adobe Illustrator" project files - [“Adobe Illustrator“][wiki-ai] `.ai` artwork project files are currently handled as "normal" plain text by Git. This will be changed to the `binary` attribute instead to prevent encoding problems and noisy diff views.
[wiki-ai]: https://en.wikipedia.org/wiki/Adobe_Illustrator_Artwork | main | use binary git attribute for adobe illustrator project files ai artwork project files are currently handled as normal plain text by git this will be changed to the binary attribute instead to prevent encoding problems and noisy diff views | 1 |
193,757 | 14,661,576,025 | IssuesEvent | 2020-12-29 04:14:30 | github-vet/rangeloop-pointer-findings | https://api.github.com/repos/github-vet/rangeloop-pointer-findings | closed | mrsplayground/OCI_TF_LAB1: oci/core_volume_attachment_test.go; 16 LoC | fresh small test |
Found a possible issue in [mrsplayground/OCI_TF_LAB1](https://www.github.com/mrsplayground/OCI_TF_LAB1) at [oci/core_volume_attachment_test.go](https://github.com/mrsplayground/OCI_TF_LAB1/blob/0754a23ebdf6429de9d286019594972a20306eda/oci/core_volume_attachment_test.go#L196-L211)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to volumeAttachmentId is reassigned at line 200
[Click here to see the code in its original context.](https://github.com/mrsplayground/OCI_TF_LAB1/blob/0754a23ebdf6429de9d286019594972a20306eda/oci/core_volume_attachment_test.go#L196-L211)
<details>
<summary>Click here to show the 16 line(s) of Go which triggered the analyzer.</summary>
```go
for _, volumeAttachmentId := range volumeAttachmentIds {
if ok := SweeperDefaultResourceId[volumeAttachmentId]; !ok {
detachVolumeRequest := oci_core.DetachVolumeRequest{}
detachVolumeRequest.VolumeAttachmentId = &volumeAttachmentId
detachVolumeRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "core")
_, error := computeClient.DetachVolume(context.Background(), detachVolumeRequest)
if error != nil {
fmt.Printf("Error deleting VolumeAttachment %s %s, It is possible that the resource is already deleted. Please verify manually \n", volumeAttachmentId, error)
continue
}
waitTillCondition(testAccProvider, &volumeAttachmentId, volumeAttachmentSweepWaitCondition, time.Duration(3*time.Minute),
volumeAttachmentSweepResponseFetchOperation, "core", true)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 0754a23ebdf6429de9d286019594972a20306eda
| 1.0 | mrsplayground/OCI_TF_LAB1: oci/core_volume_attachment_test.go; 16 LoC -
Found a possible issue in [mrsplayground/OCI_TF_LAB1](https://www.github.com/mrsplayground/OCI_TF_LAB1) at [oci/core_volume_attachment_test.go](https://github.com/mrsplayground/OCI_TF_LAB1/blob/0754a23ebdf6429de9d286019594972a20306eda/oci/core_volume_attachment_test.go#L196-L211)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to volumeAttachmentId is reassigned at line 200
[Click here to see the code in its original context.](https://github.com/mrsplayground/OCI_TF_LAB1/blob/0754a23ebdf6429de9d286019594972a20306eda/oci/core_volume_attachment_test.go#L196-L211)
<details>
<summary>Click here to show the 16 line(s) of Go which triggered the analyzer.</summary>
```go
for _, volumeAttachmentId := range volumeAttachmentIds {
if ok := SweeperDefaultResourceId[volumeAttachmentId]; !ok {
detachVolumeRequest := oci_core.DetachVolumeRequest{}
detachVolumeRequest.VolumeAttachmentId = &volumeAttachmentId
detachVolumeRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "core")
_, error := computeClient.DetachVolume(context.Background(), detachVolumeRequest)
if error != nil {
fmt.Printf("Error deleting VolumeAttachment %s %s, It is possible that the resource is already deleted. Please verify manually \n", volumeAttachmentId, error)
continue
}
waitTillCondition(testAccProvider, &volumeAttachmentId, volumeAttachmentSweepWaitCondition, time.Duration(3*time.Minute),
volumeAttachmentSweepResponseFetchOperation, "core", true)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 0754a23ebdf6429de9d286019594972a20306eda
| non_main | mrsplayground oci tf oci core volume attachment test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to volumeattachmentid is reassigned at line click here to show the line s of go which triggered the analyzer go for volumeattachmentid range volumeattachmentids if ok sweeperdefaultresourceid ok detachvolumerequest oci core detachvolumerequest detachvolumerequest volumeattachmentid volumeattachmentid detachvolumerequest requestmetadata retrypolicy getretrypolicy true core error computeclient detachvolume context background detachvolumerequest if error nil fmt printf error deleting volumeattachment s s it is possible that the resource is already deleted please verify manually n volumeattachmentid error continue waittillcondition testaccprovider volumeattachmentid volumeattachmentsweepwaitcondition time duration time minute volumeattachmentsweepresponsefetchoperation core true leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 0 |
437,199 | 12,564,904,134 | IssuesEvent | 2020-06-08 08:48:36 | geosolutions-it/ckanext-faoclh | https://api.github.com/repos/geosolutions-it/ckanext-faoclh | closed | Base Multi Lingual Support | FAO-CLH Priority: High | The CLH CKAN portal needs to provide a multilingual support not only for the UI labels but also for the published contents like:
- Dataset content (title and description)
- Group content (title and description)
- Organization content (title and description)
- Resources content (title and description)
Required language are Italian, English, French, Spanish, Arabic, Russian, Chinese (these languages must be enabled in CKAN production.ini file and the "how to" installation section in the README file updated accordingly)
Also custom vocabulary entries need to be localized (see #3 and #28)
The [README file](https://github.com/geosolutions-it/ckanext-faoclh/blob/master/README.md) must be also updated including a section that document how to install and enable the [ckanext-multilng](https://github.com/geosolutions-it/ckanext-multilang) extension for FAO CLH.
| 1.0 | Base Multi Lingual Support - The CLH CKAN portal needs to provide a multilingual support not only for the UI labels but also for the published contents like:
- Dataset content (title and description)
- Group content (title and description)
- Organization content (title and description)
- Resources content (title and description)
Required language are Italian, English, French, Spanish, Arabic, Russian, Chinese (these languages must be enabled in CKAN production.ini file and the "how to" installation section in the README file updated accordingly)
Also custom vocabulary entries need to be localized (see #3 and #28)
The [README file](https://github.com/geosolutions-it/ckanext-faoclh/blob/master/README.md) must be also updated including a section that document how to install and enable the [ckanext-multilng](https://github.com/geosolutions-it/ckanext-multilang) extension for FAO CLH.
| non_main | base multi lingual support the clh ckan portal needs to provide a multilingual support not only for the ui labels but also for the published contents like dataset content title and description group content title and description organization content title and description resources content title and description required language are italian english french spanish arabic russian chinese these languages must be enabled in ckan production ini file and the how to installation section in the readme file updated accordingly also custom vocabulary entries need to be localized see and the must be also updated including a section that document how to install and enable the extension for fao clh | 0 |
1,055 | 4,864,134,304 | IssuesEvent | 2016-11-14 17:09:46 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | yum module saying packages are up to date when it's not | affects_2.1 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
yum
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
CentOS release 6.7 (Final)
##### SUMMARY
<!--- Explain the problem briefly -->
yum module says package is up to date while it isn't. An update with the command module works.
I'm sure there's a new version because Maven deploys it on a Nexus repo and yum check-update says there's a new version.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Install RPM
yum: name=app-name state=latest
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Status changed. The following :
```
- name: Install RPM
command: yum install -y app-name
```
Installs the new version correctly. Why doesn't the yum module do the same ?
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
ok: [host] => {"changed": false, "invocation": {"module_args": {"conf_file": null, "disable_gpg_check": false, "disablerepo": null, "enablerepo": null, "exclude": null, "install_repoquery": true, "list": null, "name": ["app-name"], "state": "latest", "update_cache": false, "validate_certs": true}, "module_name": "yum"}, "msg": " Warning: Due to potential bad behaviour with rhnplugin and certificates, used slower repoquery calls instead of Yum API.", "rc": 0, "results": ["All packages providing app-name are up to date", ""]}
```
| True | yum module saying packages are up to date when it's not - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
yum
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
CentOS release 6.7 (Final)
##### SUMMARY
<!--- Explain the problem briefly -->
yum module says package is up to date while it isn't. An update with the command module works.
I'm sure there's a new version because Maven deploys it on a Nexus repo and yum check-update says there's a new version.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Install RPM
yum: name=app-name state=latest
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Status changed. The following :
```
- name: Install RPM
command: yum install -y app-name
```
Installs the new version correctly. Why doesn't the yum module do the same ?
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
ok: [host] => {"changed": false, "invocation": {"module_args": {"conf_file": null, "disable_gpg_check": false, "disablerepo": null, "enablerepo": null, "exclude": null, "install_repoquery": true, "list": null, "name": ["app-name"], "state": "latest", "update_cache": false, "validate_certs": true}, "module_name": "yum"}, "msg": " Warning: Due to potential bad behaviour with rhnplugin and certificates, used slower repoquery calls instead of Yum API.", "rc": 0, "results": ["All packages providing app-name are up to date", ""]}
```
| main | yum module saying packages are up to date when it s not issue type bug report component name yum ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables default os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific centos release final summary yum module says package is up to date while it isn t an update with the command module works i m sure there s a new version because maven deploys it on a nexus repo and yum check update says there s a new version steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name install rpm yum name app name state latest expected results status changed the following name install rpm command yum install y app name installs the new version correctly why doesn t the yum module do the same actual results ok changed false invocation module args conf file null disable gpg check false disablerepo null enablerepo null exclude null install repoquery true list null name state latest update cache false validate certs true module name yum msg warning due to potential bad behaviour with rhnplugin and certificates used slower repoquery calls instead of yum api rc results | 1 |
250,979 | 7,993,407,642 | IssuesEvent | 2018-07-20 07:33:05 | OpenNebula/one | https://api.github.com/repos/OpenNebula/one | closed | (vCenter) wait poweron/off to be performed | Category: vCenter Priority: Normal Status: Accepted Type: Bug | **Description**
vCenter Virtualization Driver (VMM) has an asyn behaviour considering poweron and poweroff actions.
This provoke certain failures.
**To Reproduce**
Start a vCenter VM -> vCenter driver (deploy operation) finishes, OpenNebula believe that the machine is already running but the fact is that is in the starting state -> turn off the machine before OpenNebula realizes the correct state.
**Expected behavior**
poweron and poweroff driver operations should be blocking, in that way opennebula could wait and set the proper state.
**Details**
- Affected Component: vCenter Driver
- Hypervisor: vcenter
- Version: Development
| 1.0 | (vCenter) wait poweron/off to be performed - **Description**
vCenter Virtualization Driver (VMM) has an asyn behaviour considering poweron and poweroff actions.
This provoke certain failures.
**To Reproduce**
Start a vCenter VM -> vCenter driver (deploy operation) finishes, OpenNebula believe that the machine is already running but the fact is that is in the starting state -> turn off the machine before OpenNebula realizes the correct state.
**Expected behavior**
poweron and poweroff driver operations should be blocking, in that way opennebula could wait and set the proper state.
**Details**
- Affected Component: vCenter Driver
- Hypervisor: vcenter
- Version: Development
| non_main | vcenter wait poweron off to be performed description vcenter virtualization driver vmm has an asyn behaviour considering poweron and poweroff actions this provoke certain failures to reproduce start a vcenter vm vcenter driver deploy operation finishes opennebula believe that the machine is already running but the fact is that is in the starting state turn off the machine before opennebula realizes the correct state expected behavior poweron and poweroff driver operations should be blocking in that way opennebula could wait and set the proper state details affected component vcenter driver hypervisor vcenter version development | 0 |
6,290 | 5,348,571,835 | IssuesEvent | 2017-02-18 06:32:06 | zsh-users/zsh-autosuggestions | https://api.github.com/repos/zsh-users/zsh-autosuggestions | closed | Slow speed of cd | performance pull-request-welcome | After initialization of zsh-autosuggestions the speed of change dir in Midnight commander(regular movements in mc) dramaticaly falls down. I use oh-my-zsh zsh version - 5.2. Is there way to solve it?
| True | Slow speed of cd - After initialization of zsh-autosuggestions the speed of change dir in Midnight commander(regular movements in mc) dramaticaly falls down. I use oh-my-zsh zsh version - 5.2. Is there way to solve it?
| non_main | slow speed of cd after initialization of zsh autosuggestions the speed of change dir in midnight commander regular movements in mc dramaticaly falls down i use oh my zsh zsh version is there way to solve it | 0 |
1,452 | 6,292,573,384 | IssuesEvent | 2017-07-20 06:14:37 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Add a YAML file module? | affects_2.3 feature_idea waiting_on_maintainer | Hi,
There is already a module to tweak settings in ini files. But many softwares already use YAML configuration files, and as Ansible uses them for its own backend, I think it could be possible to add a module to handle settings in YAML files as well, at little cost.
This could be especially interesting as YAML syntax is prompt to multilines and cannot be parsed easily with lineinfile module.
What do you think about it?
Thanks
| True | Add a YAML file module? - Hi,
There is already a module to tweak settings in ini files. But many softwares already use YAML configuration files, and as Ansible uses them for its own backend, I think it could be possible to add a module to handle settings in YAML files as well, at little cost.
This could be especially interesting as YAML syntax is prompt to multilines and cannot be parsed easily with lineinfile module.
What do you think about it?
Thanks
| main | add a yaml file module hi there is already a module to tweak settings in ini files but many softwares already use yaml configuration files and as ansible uses them for its own backend i think it could be possible to add a module to handle settings in yaml files as well at little cost this could be especially interesting as yaml syntax is prompt to multilines and cannot be parsed easily with lineinfile module what do you think about it thanks | 1 |
111,551 | 17,028,311,732 | IssuesEvent | 2021-07-04 02:28:44 | ballerina-platform/ballerina-standard-library | https://api.github.com/repos/ballerina-platform/ballerina-standard-library | closed | Security Implementation for Swan Lake | SwanLakeDump Team/PCP Type/Summary Type/Task area/security module/auth module/jwt module/ldap module/oauth2 | ## Important Links
- Dashboard: https://ldclakmal.me/ballerina-security
- `Area/Security` Issues: https://ldclakmal.me/ballerina-security/issues/
### Proposed Designs:
- Design of Ballerina Authentication & Authorization Framework - Swan Lake Version
https://docs.google.com/document/d/1dGw5uUP6kqZNTwMfQ_Ik-k0HTMKhX70XpEA3tys9_kk/edit?usp=sharing
- Re-Design of Ballerina SecureSocket API - Swan Lake Version
https://docs.google.com/document/d/1Y2kLTOw9-sRK1vSEzw5NYdWSA4nwVCvPf3wrbwNDA4s/edit?usp=sharing
- [Review] Ballerina Security APIs of StdLib PCMs https://docs.google.com/document/d/16r_gjBi7SIqVffKVLtKGBevHQRxp7Fnoo9ELyIWV1BM/edit?usp=sharing
---
# Swan Lake Alpha
#### ballerina/auth
- [x] Update and refactor ballerina/auth module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/715
#### ballerina/jwt
- [x] Update and refactor ballerina/jwt module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/716
#### ballerina/oauth2
- [x] Update and refactor ballerina/oauth2 module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/717
- [x] Add support to add optional parameters in OAuth2 introspection request https://github.com/ballerina-platform/ballerina-standard-library/issues/23
- [x] Add support to read custom fields of OAuth2 introspection response https://github.com/ballerina-platform/ballerina-standard-library/issues/16
- [x] oauth2:OutboundOAuth2Provider is not renewing access token when downstream web API returns 403 https://github.com/ballerina-platform/ballerina-standard-library/issues/17
#### ballerina/ldap
- [x] Remove ballerina/ldap module by moving implementation to ballerina/auth module https://github.com/ballerina-platform/ballerina-standard-library/issues/718
#### ballerina/http
- [x] Implement imperative auth design for ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/752
- [x] Implement declarative auth design for ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/813
- [x] Align stdlib annotations with spec https://github.com/ballerina-platform/ballerina-standard-library/issues/74
- [x] Improve Ballerina authn & authz configurations https://github.com/ballerina-platform/ballerina-standard-library/issues/63
- [x] Add support to provide a custom claim name as authorization claim field https://github.com/ballerina-platform/ballerina-standard-library/issues/553
#### Common
- [x] Revisit security related BBEs with all the supported features https://github.com/ballerina-platform/ballerina-standard-library/issues/60
---
# Swan Lake Beta
#### Common
- [x] Improve error messages and log messages of security modules https://github.com/ballerina-platform/ballerina-standard-library/issues/1242
#### ballerina/http
- [x] Error while trying to authorize the request when `scopes` filed is not configured https://github.com/ballerina-platform/ballerina-standard-library/issues/972
- [x] Append auth provider error message to `http:Unauthorized` and `http:Forbidden` response types https://github.com/ballerina-platform/ballerina-standard-library/issues/974
- [x] Replace ballerina/reflect API usages in ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/1012
- [x] Extend listener auth handler APIs for `http:Headers` class https://github.com/ballerina-platform/ballerina-standard-library/issues/1013
- [x] Update `SecureSocket` API of HTTP https://github.com/ballerina-platform/ballerina-standard-library/issues/917
#### ballerina/auth
- [x] Enable basic auth file user store support https://github.com/ballerina-platform/ballerina-standard-library/issues/862
- [x] Update SecureSocket API of LDAP https://github.com/ballerina-platform/ballerina-standard-library/issues/1215
- [x] Remove encrypted and hashed password support https://github.com/ballerina-platform/ballerina-standard-library/issues/1214
- [x] Improve ballerina/auth test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1011
#### ballerina/jwt
- [x] Split JWT validation API for 2 APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/1213
- [x] Replace base64 URL encode/decode APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/1212
- [x] Extend private key/public cert support for JWT signature generation/validation https://github.com/ballerina-platform/ballerina-standard-library/issues/822
- [x] Improve SSL configurations in JDK11 HTTP client used for auth modules https://github.com/ballerina-platform/ballerina-standard-library/issues/936
- [x] Add `jti` claim as a user input for JWT generation https://github.com/ballerina-platform/ballerina-standard-library/issues/1210
- [x] Improve ballerina/jwt test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1010
#### ballerina/oauth2
- [x] JDK11 HTTP client used for OAuth2 introspection should support OAuth2 client authentication https://github.com/ballerina-platform/ballerina-standard-library/issues/935
- [x] Improve SSL configurations in JDK11 HTTP client used for auth modules https://github.com/ballerina-platform/ballerina-standard-library/issues/936
- [x] Improve the logic of extracting refresh_token from the authorization endpoint response https://github.com/ballerina-platform/ballerina-standard-library/issues/1206
- [x] Improve ballerina/oauth2 test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1009
#### ballerina/ldap
- [x] Move ballerina/ldap module to [ballerina-attic](https://github.com/ballerina-attic)
#### ballerina/crypto
- [x] Add support for reading public/private keys from PEM files https://github.com/ballerina-platform/ballerina-standard-library/issues/67
- [x] Improve private key decoding for PKCS8 format https://github.com/ballerina-platform/ballerina-standard-library/issues/1208
- [x] Update and refactor ballerina/crypto module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/908
- [x] Improve ballerina/crypto test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1297
#### ballerina/encoding
- [x] Update and refactor ballerina/encoding module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/907
#### ballerina/websocket
- [x] Add auth support for WebSocket clients https://github.com/ballerina-platform/ballerina-standard-library/issues/820
#### ballerina/graphql
- [x] Implement declarative auth design for GraphQL module https://github.com/ballerina-platform/ballerina-standard-library/issues/1336
---
# Swan Lake GA
#### Common
- [x] Revisit security related APIs across all StdLibs https://github.com/ballerina-platform/ballerina-standard-library/issues/1066
#### ballerina/websocket
- [x] Implement declarative auth design for server side https://github.com/ballerina-platform/ballerina-standard-library/issues/1405
- [x] Need to improve return of WebSocket server side auth errors https://github.com/ballerina-platform/ballerina-standard-library/issues/1230
#### ballerina/ftp
- [x] Implement Security for FTP https://github.com/ballerina-platform/ballerina-standard-library/issues/1438 | True | Security Implementation for Swan Lake - ## Important Links
- Dashboard: https://ldclakmal.me/ballerina-security
- `Area/Security` Issues: https://ldclakmal.me/ballerina-security/issues/
### Proposed Designs:
- Design of Ballerina Authentication & Authorization Framework - Swan Lake Version
https://docs.google.com/document/d/1dGw5uUP6kqZNTwMfQ_Ik-k0HTMKhX70XpEA3tys9_kk/edit?usp=sharing
- Re-Design of Ballerina SecureSocket API - Swan Lake Version
https://docs.google.com/document/d/1Y2kLTOw9-sRK1vSEzw5NYdWSA4nwVCvPf3wrbwNDA4s/edit?usp=sharing
- [Review] Ballerina Security APIs of StdLib PCMs https://docs.google.com/document/d/16r_gjBi7SIqVffKVLtKGBevHQRxp7Fnoo9ELyIWV1BM/edit?usp=sharing
---
# Swan Lake Alpha
#### ballerina/auth
- [x] Update and refactor ballerina/auth module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/715
#### ballerina/jwt
- [x] Update and refactor ballerina/jwt module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/716
#### ballerina/oauth2
- [x] Update and refactor ballerina/oauth2 module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/717
- [x] Add support to add optional parameters in OAuth2 introspection request https://github.com/ballerina-platform/ballerina-standard-library/issues/23
- [x] Add support to read custom fields of OAuth2 introspection response https://github.com/ballerina-platform/ballerina-standard-library/issues/16
- [x] oauth2:OutboundOAuth2Provider is not renewing access token when downstream web API returns 403 https://github.com/ballerina-platform/ballerina-standard-library/issues/17
#### ballerina/ldap
- [x] Remove ballerina/ldap module by moving implementation to ballerina/auth module https://github.com/ballerina-platform/ballerina-standard-library/issues/718
#### ballerina/http
- [x] Implement imperative auth design for ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/752
- [x] Implement declarative auth design for ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/813
- [x] Align stdlib annotations with spec https://github.com/ballerina-platform/ballerina-standard-library/issues/74
- [x] Improve Ballerina authn & authz configurations https://github.com/ballerina-platform/ballerina-standard-library/issues/63
- [x] Add support to provide a custom claim name as authorization claim field https://github.com/ballerina-platform/ballerina-standard-library/issues/553
#### Common
- [x] Revisit security related BBEs with all the supported features https://github.com/ballerina-platform/ballerina-standard-library/issues/60
---
# Swan Lake Beta
#### Common
- [x] Improve error messages and log messages of security modules https://github.com/ballerina-platform/ballerina-standard-library/issues/1242
#### ballerina/http
- [x] Error while trying to authorize the request when `scopes` filed is not configured https://github.com/ballerina-platform/ballerina-standard-library/issues/972
- [x] Append auth provider error message to `http:Unauthorized` and `http:Forbidden` response types https://github.com/ballerina-platform/ballerina-standard-library/issues/974
- [x] Replace ballerina/reflect API usages in ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/1012
- [x] Extend listener auth handler APIs for `http:Headers` class https://github.com/ballerina-platform/ballerina-standard-library/issues/1013
- [x] Update `SecureSocket` API of HTTP https://github.com/ballerina-platform/ballerina-standard-library/issues/917
#### ballerina/auth
- [x] Enable basic auth file user store support https://github.com/ballerina-platform/ballerina-standard-library/issues/862
- [x] Update SecureSocket API of LDAP https://github.com/ballerina-platform/ballerina-standard-library/issues/1215
- [x] Remove encrypted and hashed password support https://github.com/ballerina-platform/ballerina-standard-library/issues/1214
- [x] Improve ballerina/auth test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1011
#### ballerina/jwt
- [x] Split JWT validation API for 2 APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/1213
- [x] Replace base64 URL encode/decode APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/1212
- [x] Extend private key/public cert support for JWT signature generation/validation https://github.com/ballerina-platform/ballerina-standard-library/issues/822
- [x] Improve SSL configurations in JDK11 HTTP client used for auth modules https://github.com/ballerina-platform/ballerina-standard-library/issues/936
- [x] Add `jti` claim as a user input for JWT generation https://github.com/ballerina-platform/ballerina-standard-library/issues/1210
- [x] Improve ballerina/jwt test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1010
#### ballerina/oauth2
- [x] JDK11 HTTP client used for OAuth2 introspection should support OAuth2 client authentication https://github.com/ballerina-platform/ballerina-standard-library/issues/935
- [x] Improve SSL configurations in JDK11 HTTP client used for auth modules https://github.com/ballerina-platform/ballerina-standard-library/issues/936
- [x] Improve the logic of extracting refresh_token from the authorization endpoint response https://github.com/ballerina-platform/ballerina-standard-library/issues/1206
- [x] Improve ballerina/oauth2 test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1009
#### ballerina/ldap
- [x] Move ballerina/ldap module to [ballerina-attic](https://github.com/ballerina-attic)
#### ballerina/crypto
- [x] Add support for reading public/private keys from PEM files https://github.com/ballerina-platform/ballerina-standard-library/issues/67
- [x] Improve private key decoding for PKCS8 format https://github.com/ballerina-platform/ballerina-standard-library/issues/1208
- [x] Update and refactor ballerina/crypto module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/908
- [x] Improve ballerina/crypto test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1297
#### ballerina/encoding
- [x] Update and refactor ballerina/encoding module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/907
#### ballerina/websocket
- [x] Add auth support for WebSocket clients https://github.com/ballerina-platform/ballerina-standard-library/issues/820
#### ballerina/graphql
- [x] Implement declarative auth design for GraphQL module https://github.com/ballerina-platform/ballerina-standard-library/issues/1336
---
# Swan Lake GA
#### Common
- [x] Revisit security related APIs across all StdLibs https://github.com/ballerina-platform/ballerina-standard-library/issues/1066
#### ballerina/websocket
- [x] Implement declarative auth design for server side https://github.com/ballerina-platform/ballerina-standard-library/issues/1405
- [x] Need to improve return of WebSocket server side auth errors https://github.com/ballerina-platform/ballerina-standard-library/issues/1230
#### ballerina/ftp
- [x] Implement Security for FTP https://github.com/ballerina-platform/ballerina-standard-library/issues/1438 | non_main | security implementation for swan lake important links dashboard area security issues proposed designs design of ballerina authentication authorization framework swan lake version re design of ballerina securesocket api swan lake version ballerina security apis of stdlib pcms swan lake alpha ballerina auth update and refactor ballerina auth module apis ballerina jwt update and refactor ballerina jwt module apis ballerina update and refactor ballerina module apis add support to add optional parameters in introspection request add support to read custom fields of introspection response is not renewing access token when downstream web api returns ballerina ldap remove ballerina ldap module by moving implementation to ballerina auth module ballerina http implement imperative auth design for ballerina http module implement declarative auth design for ballerina http module align stdlib annotations with spec improve ballerina authn authz configurations add support to provide a custom claim name as authorization claim field common revisit security related bbes with all the supported features swan lake beta common improve error messages and log messages of security modules ballerina http error while trying to authorize the request when scopes filed is not configured append auth provider error message to http unauthorized and http forbidden response types replace ballerina reflect api usages in ballerina http module extend listener auth handler apis for http headers class update securesocket api of http ballerina auth enable basic auth file user store support update securesocket api of ldap remove encrypted and hashed password support improve ballerina auth test coverage ballerina jwt split jwt validation api for apis replace url encode decode apis extend private key public cert support for jwt signature generation validation improve ssl configurations in http client used for auth modules add jti claim as a user input for jwt generation improve ballerina jwt test coverage ballerina http client used for introspection should support client authentication improve ssl configurations in http client used for auth modules improve the logic of extracting refresh token from the authorization endpoint response improve ballerina test coverage ballerina ldap move ballerina ldap module to ballerina crypto add support for reading public private keys from pem files improve private key decoding for format update and refactor ballerina crypto module apis improve ballerina crypto test coverage ballerina encoding update and refactor ballerina encoding module apis ballerina websocket add auth support for websocket clients ballerina graphql implement declarative auth design for graphql module swan lake ga common revisit security related apis across all stdlibs ballerina websocket implement declarative auth design for server side need to improve return of websocket server side auth errors ballerina ftp implement security for ftp | 0 |
3,820 | 16,614,498,309 | IssuesEvent | 2021-06-02 15:08:12 | keptn/community | https://api.github.com/repos/keptn/community | closed | REQUEST: New membership maintainer for @Kirdock (Klaus Strießnig) | membership:maintainer status:approved | ### Klaus Strießnig
@Kirdock (currently approver)
### Requirements
- [x] I have reviewed the community membership guidelines (https://github.com/keptn/community/blob/master/COMMUNITY_MEMBERSHIP.md)
- [x] I have enabled 2FA on my GitHub account. See https://github.com/settings/security
- [x] I have subscribed to the [Keptn Slack channel](http://slack.keptn.sh/)
- [x] I am actively contributing to 1 or more Keptn subprojects
- [x] I have two sponsors that meet the sponsor requirements listed in the community membership guidelines. Among other requirements, sponsors must be approvers or maintainers of at least one repository
- [x] I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application
### Sponsors
- @christian-kreuzberger-dtx
- @johannes-b
Each sponsor should reply to this issue with the comment "*I support*".
Please remember, it is an applicant's responsibility to get their sponsors' confirmation before submitting the request.
### List of contributions to the Keptn project
Klaus is one of the core contributors to Keptn Bridge, and has touched several parts of the product already for more than 6 months.

* 90+ Commits: https://github.com/keptn/keptn/commits?author=Kirdock
* 47+ merged PRs: https://github.com/keptn/keptn/pulls?q=is%3Apr+author%3AKirdock+is%3Aclosed
* 24+ Issues created: https://github.com/keptn/keptn/issues/created_by/Kirdock
| True | REQUEST: New membership maintainer for @Kirdock (Klaus Strießnig) - ### Klaus Strießnig
@Kirdock (currently approver)
### Requirements
- [x] I have reviewed the community membership guidelines (https://github.com/keptn/community/blob/master/COMMUNITY_MEMBERSHIP.md)
- [x] I have enabled 2FA on my GitHub account. See https://github.com/settings/security
- [x] I have subscribed to the [Keptn Slack channel](http://slack.keptn.sh/)
- [x] I am actively contributing to 1 or more Keptn subprojects
- [x] I have two sponsors that meet the sponsor requirements listed in the community membership guidelines. Among other requirements, sponsors must be approvers or maintainers of at least one repository
- [x] I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application
### Sponsors
- @christian-kreuzberger-dtx
- @johannes-b
Each sponsor should reply to this issue with the comment "*I support*".
Please remember, it is an applicant's responsibility to get their sponsors' confirmation before submitting the request.
### List of contributions to the Keptn project
Klaus is one of the core contributors to Keptn Bridge, and has touched several parts of the product already for more than 6 months.

* 90+ Commits: https://github.com/keptn/keptn/commits?author=Kirdock
* 47+ merged PRs: https://github.com/keptn/keptn/pulls?q=is%3Apr+author%3AKirdock+is%3Aclosed
* 24+ Issues created: https://github.com/keptn/keptn/issues/created_by/Kirdock
| main | request new membership maintainer for kirdock klaus strießnig klaus strießnig kirdock currently approver requirements i have reviewed the community membership guidelines i have enabled on my github account see i have subscribed to the i am actively contributing to or more keptn subprojects i have two sponsors that meet the sponsor requirements listed in the community membership guidelines among other requirements sponsors must be approvers or maintainers of at least one repository i have spoken to my sponsors ahead of this application and they have agreed to sponsor my application sponsors christian kreuzberger dtx johannes b each sponsor should reply to this issue with the comment i support please remember it is an applicant s responsibility to get their sponsors confirmation before submitting the request list of contributions to the keptn project klaus is one of the core contributors to keptn bridge and has touched several parts of the product already for more than months commits merged prs issues created | 1 |
506,594 | 14,668,441,379 | IssuesEvent | 2020-12-29 21:16:06 | microsoft/PowerToys | https://api.github.com/repos/microsoft/PowerToys | closed | [Keyboard Navigation - Settings>Image Resizer>File Explorer]: Keyboard focus is lost after activating 'Settings' link in Image Resizer dialog. | Accessibility [E+D] Area-Accessibility Issue-Bug Priority-2 Product-Image Resizer Resolution-Fix-Committed | [Power Toys Settings - Image Resizer > File Explorer(System)]
**User Experience:**
This will impact the keyboard users if after activating a setting link focus is lost on the window.
**Test Environment:**
OS Version: 20221.1000
App Name: Power Toy Preview
App Version: v0.23.0
Screen Reader: Narrator
**Repro Steps:**
1. Open File Explorer using Win+E key.
2. Navigate to any photo and press right click.
3. Navigate to 'Image Resizer' and activate it. Image Resizer will open.
4. Navigate to settings link and activate it and verify the issue.
**Actual Result:**
Keyboard focus is getting lost after activating 'Settings' link. On pressing tab key focus jumps to 'Size' tab.
**Note:**
Same issue is repro on activating 'Delete' button.
**Expected Result:**
Keyboard focus should land on 'Size' tab after activating 'Settings' link.
**MAS Reference:**
https://microsoft.sharepoint.com/teams/msenable/_layouts/15/WopiFrame.aspx?sourcedoc={0de7fbe1-ad7e-48e5-bcbb-8d986691e2b9}
[32_Image Resizer_MAS2.4.3_After activating delete button focus is lost on the pane.zip](https://github.com/microsoft/PowerToys/files/5328964/32_Image.Resizer_MAS2.4.3_After.activating.delete.button.focus.is.lost.on.the.pane.zip)
[32_Image Resizer_MAS2.4.3_After activating Settings button focus is lost on the pane.zip](https://github.com/microsoft/PowerToys/files/5328966/32_Image.Resizer_MAS2.4.3_After.activating.Settings.button.focus.is.lost.on.the.pane.zip)
| 1.0 | [Keyboard Navigation - Settings>Image Resizer>File Explorer]: Keyboard focus is lost after activating 'Settings' link in Image Resizer dialog. - [Power Toys Settings - Image Resizer > File Explorer(System)]
**User Experience:**
This will impact the keyboard users if after activating a setting link focus is lost on the window.
**Test Environment:**
OS Version: 20221.1000
App Name: Power Toy Preview
App Version: v0.23.0
Screen Reader: Narrator
**Repro Steps:**
1. Open File Explorer using Win+E key.
2. Navigate to any photo and press right click.
3. Navigate to 'Image Resizer' and activate it. Image Resizer will open.
4. Navigate to settings link and activate it and verify the issue.
**Actual Result:**
Keyboard focus is getting lost after activating 'Settings' link. On pressing tab key focus jumps to 'Size' tab.
**Note:**
Same issue is repro on activating 'Delete' button.
**Expected Result:**
Keyboard focus should land on 'Size' tab after activating 'Settings' link.
**MAS Reference:**
https://microsoft.sharepoint.com/teams/msenable/_layouts/15/WopiFrame.aspx?sourcedoc={0de7fbe1-ad7e-48e5-bcbb-8d986691e2b9}
[32_Image Resizer_MAS2.4.3_After activating delete button focus is lost on the pane.zip](https://github.com/microsoft/PowerToys/files/5328964/32_Image.Resizer_MAS2.4.3_After.activating.delete.button.focus.is.lost.on.the.pane.zip)
[32_Image Resizer_MAS2.4.3_After activating Settings button focus is lost on the pane.zip](https://github.com/microsoft/PowerToys/files/5328966/32_Image.Resizer_MAS2.4.3_After.activating.Settings.button.focus.is.lost.on.the.pane.zip)
| non_main | keyboard focus is lost after activating settings link in image resizer dialog user experience this will impact the keyboard users if after activating a setting link focus is lost on the window test environment os version app name power toy preview app version screen reader narrator repro steps open file explorer using win e key navigate to any photo and press right click navigate to image resizer and activate it image resizer will open navigate to settings link and activate it and verify the issue actual result keyboard focus is getting lost after activating settings link on pressing tab key focus jumps to size tab note same issue is repro on activating delete button expected result keyboard focus should land on size tab after activating settings link mas reference | 0 |
1,704 | 6,574,406,212 | IssuesEvent | 2017-09-11 12:46:48 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | apt module fail to upgrade installed only packages list if one package isn't available | affects_2.1 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apt
##### ANSIBLE VERSION
2.1.1.0
##### CONFIGURATION
standard
##### OS / ENVIRONMENT
Debian
##### SUMMARY
I use apt module to perform upgrades on a list of packages and make sure only installed packages are upgraded:
- name: upgrade "{{packages}}"
become: true
become_user: root
apt: name={{item}} state=latest update_cache=no only_upgrade=yes
with_items: "{{packages}}"
{{packages}} is a list of packages to upgrade if they are installed
but if one of the package is *not available* (and so not installed), the whole upgrade fail with
"No package matching 'xxxxxx' is available"
I think the package_status method in apt modules should return one more information named "available".
Then in "install" method, we can skip non available packages in the same way we skip non installed packages for only_upgrade=yes.
##### STEPS TO REPRODUCE
Execute a playbook with content
- name: upgrade "{{packages}}"
become: true
become_user: root
apt: name={{item}} state=latest update_cache=no only_upgrade=yes
with_items: "{{packages}}"
ona debian jessie machine with args:
ansible-playbook -l "online" playbooks/upgrade-packages.yml -e '{ "packages": [ "upgradable installed package", "non upgradable installed package", "non installed package", "non available package", ] }'
##### EXPECTED RESULTS
"upgradable installed package": should be upgraded
"non upgradable installed package": shouldn't be changed
"non installed package": should not be installed
"non available package": should be ignored
##### ACTUAL RESULTS
no packages are upgraded with result: "No package matching 'non available package' is available"
| True | apt module fail to upgrade installed only packages list if one package isn't available - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apt
##### ANSIBLE VERSION
2.1.1.0
##### CONFIGURATION
standard
##### OS / ENVIRONMENT
Debian
##### SUMMARY
I use apt module to perform upgrades on a list of packages and make sure only installed packages are upgraded:
- name: upgrade "{{packages}}"
become: true
become_user: root
apt: name={{item}} state=latest update_cache=no only_upgrade=yes
with_items: "{{packages}}"
{{packages}} is a list of packages to upgrade if they are installed
but if one of the package is *not available* (and so not installed), the whole upgrade fail with
"No package matching 'xxxxxx' is available"
I think the package_status method in apt modules should return one more information named "available".
Then in "install" method, we can skip non available packages in the same way we skip non installed packages for only_upgrade=yes.
##### STEPS TO REPRODUCE
Execute a playbook with content
- name: upgrade "{{packages}}"
become: true
become_user: root
apt: name={{item}} state=latest update_cache=no only_upgrade=yes
with_items: "{{packages}}"
ona debian jessie machine with args:
ansible-playbook -l "online" playbooks/upgrade-packages.yml -e '{ "packages": [ "upgradable installed package", "non upgradable installed package", "non installed package", "non available package", ] }'
##### EXPECTED RESULTS
"upgradable installed package": should be upgraded
"non upgradable installed package": shouldn't be changed
"non installed package": should not be installed
"non available package": should be ignored
##### ACTUAL RESULTS
no packages are upgraded with result: "No package matching 'non available package' is available"
| main | apt module fail to upgrade installed only packages list if one package isn t available issue type bug report component name apt ansible version configuration standard os environment debian summary i use apt module to perform upgrades on a list of packages and make sure only installed packages are upgraded name upgrade packages become true become user root apt name item state latest update cache no only upgrade yes with items packages packages is a list of packages to upgrade if they are installed but if one of the package is not available and so not installed the whole upgrade fail with no package matching xxxxxx is available i think the package status method in apt modules should return one more information named available then in install method we can skip non available packages in the same way we skip non installed packages for only upgrade yes steps to reproduce execute a playbook with content name upgrade packages become true become user root apt name item state latest update cache no only upgrade yes with items packages ona debian jessie machine with args ansible playbook l online playbooks upgrade packages yml e packages expected results upgradable installed package should be upgraded non upgradable installed package shouldn t be changed non installed package should not be installed non available package should be ignored actual results no packages are upgraded with result no package matching non available package is available | 1 |
120,616 | 15,785,800,686 | IssuesEvent | 2021-04-01 16:50:05 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | closed | Add UserStore<TUser> : UserStore<TUser, DbContext, TKey> | area-identity design-proposal enhancement | <!--
This template is useful to build consensus about whether work should be done, and if so, the high-level shape of how it should be approached. Use this before fixating on a particular implementation.
-->
## Summary
When you want to override UserStore in order to be able to use it with custom ```UnitofWork``` and ```IdentityUser<TKey>``` is used, there is no way to do it as there is only one constructor signature which defaults to ```UserStore : UserStore<IdentityUser<string>>```. Bear in mind that ```TRole``` is not used so I cannot use that overload
## In scope
A modern SaaS project does include logic that goes beyond TUser (Tenant creation). Usually a tenant and a user have some kind of exclusive relationship therefore they have to be created within a single transaction. Using current implemenation there is no way to do this as you first have to create user and then tenant or vice versa. If something is wrong with either entities creation you have to write manual rollbacks and in my scenario I have also 3-4 entites created when a user is registered which causes a huge mess
| 1.0 | Add UserStore<TUser> : UserStore<TUser, DbContext, TKey> - <!--
This template is useful to build consensus about whether work should be done, and if so, the high-level shape of how it should be approached. Use this before fixating on a particular implementation.
-->
## Summary
When you want to override UserStore in order to be able to use it with custom ```UnitofWork``` and ```IdentityUser<TKey>``` is used, there is no way to do it as there is only one constructor signature which defaults to ```UserStore : UserStore<IdentityUser<string>>```. Bear in mind that ```TRole``` is not used so I cannot use that overload
## In scope
A modern SaaS project does include logic that goes beyond TUser (Tenant creation). Usually a tenant and a user have some kind of exclusive relationship therefore they have to be created within a single transaction. Using current implemenation there is no way to do this as you first have to create user and then tenant or vice versa. If something is wrong with either entities creation you have to write manual rollbacks and in my scenario I have also 3-4 entites created when a user is registered which causes a huge mess
| non_main | add userstore userstore this template is useful to build consensus about whether work should be done and if so the high level shape of how it should be approached use this before fixating on a particular implementation summary when you want to override userstore in order to be able to use it with custom unitofwork and identityuser is used there is no way to do it as there is only one constructor signature which defaults to userstore userstore bear in mind that trole is not used so i cannot use that overload in scope a modern saas project does include logic that goes beyond tuser tenant creation usually a tenant and a user have some kind of exclusive relationship therefore they have to be created within a single transaction using current implemenation there is no way to do this as you first have to create user and then tenant or vice versa if something is wrong with either entities creation you have to write manual rollbacks and in my scenario i have also entites created when a user is registered which causes a huge mess | 0 |
9,534 | 24,773,358,943 | IssuesEvent | 2022-10-23 12:30:21 | R-Type-Epitech-Nantes/R-Type | https://api.github.com/repos/R-Type-Epitech-Nantes/R-Type | closed | Re-do the architecture of the Components, System and Entities Creation librairies | Architecture E.C.S. ECS Game Systems ECS Game Shared Resources | Of all this three librairies should be in a single librairy | 1.0 | Re-do the architecture of the Components, System and Entities Creation librairies - Of all this three librairies should be in a single librairy | non_main | re do the architecture of the components system and entities creation librairies of all this three librairies should be in a single librairy | 0 |
99,996 | 30,595,259,186 | IssuesEvent | 2023-07-21 21:12:54 | rpopuc/gha-build-homolog | https://api.github.com/repos/rpopuc/gha-build-homolog | closed | Via create-issue: | build-homolog | ## Description
Realiza deploy automatizado da aplicação.
## Environments
environment_1
## Branches
feat/robson
| 1.0 | Via create-issue: - ## Description
Realiza deploy automatizado da aplicação.
## Environments
environment_1
## Branches
feat/robson
| non_main | via create issue description realiza deploy automatizado da aplicação environments environment branches feat robson | 0 |
22,426 | 3,956,154,969 | IssuesEvent | 2016-04-30 01:21:53 | vignek/workcollabration | https://api.github.com/repos/vignek/workcollabration | opened | not able to send message to vignesh | TestObject | logged in with megha. Cannot send message to Vignesh.
Message can be sent to all other contacts.
### Reporter
Vignesh Kumar
### App Version Under Test
Name from APK: Work Collabration
Version from APK: 1.0
Version code: 1
Your version name: 1.0
Package: com.sheikbro.onlinechat
Uploaded: April 28, 2016 — 04:48 AM
TestObject ID: 2
### Device
Name: LG Nexus 4 E960
Android version: 5.1.1
API level: 22
Resolution: 768 x 1280 (xhdpi)
Screen size: 4.7"
CPU: ARM | quad core | 2300 MHz
RAM: 2048 MB
Internal storage: 8192 MB
Model number: E960
Detailed specification: http://www.gsmarena.com/results.php3?sName=E960
TestObject ID: LG_Nexus_4_E960_real
TestObject Manual Testing: https://app.testobject.com/#/vignek/work-collabration/manual/viewer?device=LG_Nexus_4_E960_real
### Issue on TestObject
https://app.testobject.com/#/vignek/work-collabration/issues/15 | 1.0 | not able to send message to vignesh - logged in with megha. Cannot send message to Vignesh.
Message can be sent to all other contacts.
### Reporter
Vignesh Kumar
### App Version Under Test
Name from APK: Work Collabration
Version from APK: 1.0
Version code: 1
Your version name: 1.0
Package: com.sheikbro.onlinechat
Uploaded: April 28, 2016 — 04:48 AM
TestObject ID: 2
### Device
Name: LG Nexus 4 E960
Android version: 5.1.1
API level: 22
Resolution: 768 x 1280 (xhdpi)
Screen size: 4.7"
CPU: ARM | quad core | 2300 MHz
RAM: 2048 MB
Internal storage: 8192 MB
Model number: E960
Detailed specification: http://www.gsmarena.com/results.php3?sName=E960
TestObject ID: LG_Nexus_4_E960_real
TestObject Manual Testing: https://app.testobject.com/#/vignek/work-collabration/manual/viewer?device=LG_Nexus_4_E960_real
### Issue on TestObject
https://app.testobject.com/#/vignek/work-collabration/issues/15 | non_main | not able to send message to vignesh logged in with megha cannot send message to vignesh message can be sent to all other contacts reporter vignesh kumar app version under test name from apk work collabration version from apk version code your version name package com sheikbro onlinechat uploaded april — am testobject id device name lg nexus android version api level resolution x xhdpi screen size cpu arm quad core mhz ram mb internal storage mb model number detailed specification testobject id lg nexus real testobject manual testing issue on testobject | 0 |
2,974 | 10,708,108,267 | IssuesEvent | 2019-10-24 18:56:40 | 18F/cg-product | https://api.github.com/repos/18F/cg-product | closed | Fix Kubernetes pod not running false alarms | contractor-3-maintainability operations | Starting in August or September, we started getting alerts in Prometheus for "Kubernetes pod not running" that seem to be for pods that either are running, or have been deleted and replaced. These alerts seem to never clear, making it difficult to tell when there are real issues with kubernetes.
[Slack thread about this issue](https://gsa-tts.slack.com/archives/C0ENP71UG/p1565960918162500?thread_ts=1565960762.161600&cid=C0ENP71UG) :lock:
## Notes
- We currently manually clear these alerts by deleting the pods that continuously report this, if the alert doesn't clear itself
- `kubectl get pods -a | grep Evicted | awk '{print $1}' | xargs kubectl delete pod `
- The issue doesn't seem to be with Prometheus
## Next steps
- Research whether this is a known issue with Kubernetes
- Determine a path to fix this issue | True | Fix Kubernetes pod not running false alarms - Starting in August or September, we started getting alerts in Prometheus for "Kubernetes pod not running" that seem to be for pods that either are running, or have been deleted and replaced. These alerts seem to never clear, making it difficult to tell when there are real issues with kubernetes.
[Slack thread about this issue](https://gsa-tts.slack.com/archives/C0ENP71UG/p1565960918162500?thread_ts=1565960762.161600&cid=C0ENP71UG) :lock:
## Notes
- We currently manually clear these alerts by deleting the pods that continuously report this, if the alert doesn't clear itself
- `kubectl get pods -a | grep Evicted | awk '{print $1}' | xargs kubectl delete pod `
- The issue doesn't seem to be with Prometheus
## Next steps
- Research whether this is a known issue with Kubernetes
- Determine a path to fix this issue | main | fix kubernetes pod not running false alarms starting in august or september we started getting alerts in prometheus for kubernetes pod not running that seem to be for pods that either are running or have been deleted and replaced these alerts seem to never clear making it difficult to tell when there are real issues with kubernetes lock notes we currently manually clear these alerts by deleting the pods that continuously report this if the alert doesn t clear itself kubectl get pods a grep evicted awk print xargs kubectl delete pod the issue doesn t seem to be with prometheus next steps research whether this is a known issue with kubernetes determine a path to fix this issue | 1 |
1,601 | 6,572,381,742 | IssuesEvent | 2017-09-11 01:52:50 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Feature request: comment on authorized_key module | affects_2.3 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
authorized_key module
##### ANSIBLE VERSION
N/A
##### SUMMARY
It would be great, if you can add a comment parameter to the authroized_key module. If I take the public key from github, they don't have one. Somethings like this:
name: Add Authorized Key PR authorized_key: user=root key=https://github.com/puchrojo.keys state=present comment="PR@feder"
Regards,
Isaac
| True | Feature request: comment on authorized_key module - ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
authorized_key module
##### ANSIBLE VERSION
N/A
##### SUMMARY
It would be great, if you can add a comment parameter to the authroized_key module. If I take the public key from github, they don't have one. Somethings like this:
name: Add Authorized Key PR authorized_key: user=root key=https://github.com/puchrojo.keys state=present comment="PR@feder"
Regards,
Isaac
| main | feature request comment on authorized key module issue type feature idea component name authorized key module ansible version n a summary it would be great if you can add a comment parameter to the authroized key module if i take the public key from github they don t have one somethings like this name add authorized key pr authorized key user root key state present comment pr feder regards isaac | 1 |
528,974 | 15,378,296,430 | IssuesEvent | 2021-03-02 18:08:19 | agrc/electrofishing-query | https://api.github.com/repos/agrc/electrofishing-query | opened | Add more Species/Length selections | lower priority | In the `Species and Length` filter, increase the number of selection options from 4 to 8.

| 1.0 | Add more Species/Length selections - In the `Species and Length` filter, increase the number of selection options from 4 to 8.

| non_main | add more species length selections in the species and length filter increase the number of selection options from to | 0 |
1,770 | 6,575,049,292 | IssuesEvent | 2017-09-11 14:53:00 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Add timeout option to gce module | affects_2.3 cloud feature_idea gce waiting_on_maintainer | ##### ISSUE TYPE: Feature Idea
##### COMPONENT NAME: `cloud/gce.py`
##### SUMMARY:
Add possibility to override default libcloud timeout and get rid of such errors:
```
14:50:51 TASK [Create GCE instance] *****************************************************
14:50:51 task path: /var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/playbooks/loadtesting-update-gce-image.yml:11
14:50:52 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
14:50:52 <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504 `" && echo ansible-tmp-1475841052.07-172163251633504="` echo $HOME/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504 `" ) && sleep 0'
14:50:52 <localhost> PUT /tmp/tmpSINDmh TO /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce
14:50:52 <localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/ /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce && sleep 0'
14:50:52 <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce; rm -rf "/var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/" > /dev/null 2>&1 && sleep 0'
14:54:01 An exception occurred during task execution. The full traceback is:
14:54:01 Traceback (most recent call last):
14:54:01 File "/tmp/ansible_wfsdcR/ansible_module_gce.py", line 640, in <module>
14:54:01 main()
14:54:01 File "/tmp/ansible_wfsdcR/ansible_module_gce.py", line 602, in main
14:54:01 module, gce, inames)
14:54:01 File "/tmp/ansible_wfsdcR/ansible_module_gce.py", line 433, in create_instances
14:54:01 pd = gce.create_volume(None, "%s" % name, image=lc_image())
14:54:01 File "/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/compute/drivers/gce.py", line 3571, in create_volume
14:54:01 data=volume_data, params=params)
14:54:01 File "/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/common/base.py", line 1007, in async_request
14:54:01 (self.timeout))
14:54:01 libcloud.common.types.LibcloudError: <LibcloudError in None 'Job did not complete in 180 seconds'>
14:54:01 fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "gce"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_wfsdcR/ansible_module_gce.py\", line 640, in <module>\n main()\n File \"/tmp/ansible_wfsdcR/ansible_module_gce.py\", line 602, in main\n module, gce, inames)\n File \"/tmp/ansible_wfsdcR/ansible_module_gce.py\", line 433, in create_instances\n pd = gce.create_volume(None, \"%s\" % name, image=lc_image())\n File \"/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/compute/drivers/gce.py\", line 3571, in create_volume\n data=volume_data, params=params)\n File \"/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/common/base.py\", line 1007, in async_request\n (self.timeout))\nlibcloud.common.types.LibcloudError: <LibcloudError in None 'Job did not complete in 180 seconds'>\n", "module_stdout": "", "msg": "MODULE FAILURE"}
```
| True | Add timeout option to gce module - ##### ISSUE TYPE: Feature Idea
##### COMPONENT NAME: `cloud/gce.py`
##### SUMMARY:
Add possibility to override default libcloud timeout and get rid of such errors:
```
14:50:51 TASK [Create GCE instance] *****************************************************
14:50:51 task path: /var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/playbooks/loadtesting-update-gce-image.yml:11
14:50:52 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
14:50:52 <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504 `" && echo ansible-tmp-1475841052.07-172163251633504="` echo $HOME/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504 `" ) && sleep 0'
14:50:52 <localhost> PUT /tmp/tmpSINDmh TO /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce
14:50:52 <localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/ /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce && sleep 0'
14:50:52 <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 python /var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/gce; rm -rf "/var/lib/jenkins/.ansible/tmp/ansible-tmp-1475841052.07-172163251633504/" > /dev/null 2>&1 && sleep 0'
14:54:01 An exception occurred during task execution. The full traceback is:
14:54:01 Traceback (most recent call last):
14:54:01 File "/tmp/ansible_wfsdcR/ansible_module_gce.py", line 640, in <module>
14:54:01 main()
14:54:01 File "/tmp/ansible_wfsdcR/ansible_module_gce.py", line 602, in main
14:54:01 module, gce, inames)
14:54:01 File "/tmp/ansible_wfsdcR/ansible_module_gce.py", line 433, in create_instances
14:54:01 pd = gce.create_volume(None, "%s" % name, image=lc_image())
14:54:01 File "/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/compute/drivers/gce.py", line 3571, in create_volume
14:54:01 data=volume_data, params=params)
14:54:01 File "/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/common/base.py", line 1007, in async_request
14:54:01 (self.timeout))
14:54:01 libcloud.common.types.LibcloudError: <LibcloudError in None 'Job did not complete in 180 seconds'>
14:54:01 fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "gce"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_wfsdcR/ansible_module_gce.py\", line 640, in <module>\n main()\n File \"/tmp/ansible_wfsdcR/ansible_module_gce.py\", line 602, in main\n module, gce, inames)\n File \"/tmp/ansible_wfsdcR/ansible_module_gce.py\", line 433, in create_instances\n pd = gce.create_volume(None, \"%s\" % name, image=lc_image())\n File \"/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/compute/drivers/gce.py\", line 3571, in create_volume\n data=volume_data, params=params)\n File \"/var/lib/jenkins/jobs/loadtesting-cloud-mysql-build/workspace/ansible/.venv/lib/python2.7/site-packages/libcloud/common/base.py\", line 1007, in async_request\n (self.timeout))\nlibcloud.common.types.LibcloudError: <LibcloudError in None 'Job did not complete in 180 seconds'>\n", "module_stdout": "", "msg": "MODULE FAILURE"}
```
| main | add timeout option to gce module issue type feature idea component name cloud gce py summary add possibility to override default libcloud timeout and get rid of such errors task task path var lib jenkins jobs loadtesting cloud mysql build workspace ansible playbooks loadtesting update gce image yml establish local connection for user jenkins exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpsindmh to var lib jenkins ansible tmp ansible tmp gce exec bin sh c chmod u x var lib jenkins ansible tmp ansible tmp var lib jenkins ansible tmp ansible tmp gce sleep exec bin sh c lang en us utf lc all en us utf lc messages en us utf python var lib jenkins ansible tmp ansible tmp gce rm rf var lib jenkins ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible wfsdcr ansible module gce py line in main file tmp ansible wfsdcr ansible module gce py line in main module gce inames file tmp ansible wfsdcr ansible module gce py line in create instances pd gce create volume none s name image lc image file var lib jenkins jobs loadtesting cloud mysql build workspace ansible venv lib site packages libcloud compute drivers gce py line in create volume data volume data params params file var lib jenkins jobs loadtesting cloud mysql build workspace ansible venv lib site packages libcloud common base py line in async request self timeout libcloud common types libclouderror fatal failed changed false failed true invocation module name gce module stderr traceback most recent call last n file tmp ansible wfsdcr ansible module gce py line in n main n file tmp ansible wfsdcr ansible module gce py line in main n module gce inames n file tmp ansible wfsdcr ansible module gce py line in create instances n pd gce create volume none s name image lc image n file var lib jenkins jobs loadtesting cloud mysql build workspace ansible venv lib site packages libcloud compute drivers gce py line in create volume n data volume data params params n file var lib jenkins jobs loadtesting cloud mysql build workspace ansible venv lib site packages libcloud common base py line in async request n self timeout nlibcloud common types libclouderror n module stdout msg module failure | 1 |
2,156 | 7,481,784,397 | IssuesEvent | 2018-04-04 21:53:42 | lansuite/lansuite | https://api.github.com/repos/lansuite/lansuite | closed | Tournament overview not displayed if played as league | bug pending-maintainer-response | Originally reported on LS page: http://lansuite.orgapage.de/index.php?mod=board&action=thread&tid=1392&posts_page=0#pid7770
If a tournament is created and the mode is league then the detailed overview of the matches contains no data. Neither team names nor match results are shown.
## Expected Behavior
Display of team names on vertical and horizontal top/left row/column
Display of match results in the table.
Link to match details per table entry
## Current Behavior
Real-Life example here:
https://berg-lan.de/index.php?mod=tournament2&action=tree&step=2&tournamentid=45
If a tournament is created and the mode is league then the detailed overview of the matches contains no data. Neither team names nor match results are shown.
## Possible Solution
To-be investigated. Simplest solution: Replace by simple table
## Steps to Reproduce (for bugs)
1. Create tournament with mode = league
2. Play a few matches
3. Look at the match overview
## Context
Implications are only a loss of comfort, as the match listing still works.
## Your Environment
* Version used: maluz/lansuite/HEAD (Basically 4.2 with a few fixes)
* Operating System and version: Debian 8
* Enabled features: tournament2 | True | Tournament overview not displayed if played as league - Originally reported on LS page: http://lansuite.orgapage.de/index.php?mod=board&action=thread&tid=1392&posts_page=0#pid7770
If a tournament is created and the mode is league then the detailed overview of the matches contains no data. Neither team names nor match results are shown.
## Expected Behavior
Display of team names on vertical and horizontal top/left row/column
Display of match results in the table.
Link to match details per table entry
## Current Behavior
Real-Life example here:
https://berg-lan.de/index.php?mod=tournament2&action=tree&step=2&tournamentid=45
If a tournament is created and the mode is league then the detailed overview of the matches contains no data. Neither team names nor match results are shown.
## Possible Solution
To-be investigated. Simplest solution: Replace by simple table
## Steps to Reproduce (for bugs)
1. Create tournament with mode = league
2. Play a few matches
3. Look at the match overview
## Context
Implications are only a loss of comfort, as the match listing still works.
## Your Environment
* Version used: maluz/lansuite/HEAD (Basically 4.2 with a few fixes)
* Operating System and version: Debian 8
* Enabled features: tournament2 | main | tournament overview not displayed if played as league originally reported on ls page if a tournament is created and the mode is league then the detailed overview of the matches contains no data neither team names nor match results are shown expected behavior display of team names on vertical and horizontal top left row column display of match results in the table link to match details per table entry current behavior real life example here if a tournament is created and the mode is league then the detailed overview of the matches contains no data neither team names nor match results are shown possible solution to be investigated simplest solution replace by simple table steps to reproduce for bugs create tournament with mode league play a few matches look at the match overview context implications are only a loss of comfort as the match listing still works your environment version used maluz lansuite head basically with a few fixes operating system and version debian enabled features | 1 |
65,294 | 3,227,386,971 | IssuesEvent | 2015-10-11 05:10:20 | sohelvali/Test-Git-Issue | https://api.github.com/repos/sohelvali/Test-Git-Issue | closed | UK Labels-generic names not listed in exports | Export Matrix Medium Priority | They do show in search results but not in matrix or other exports. | 1.0 | UK Labels-generic names not listed in exports - They do show in search results but not in matrix or other exports. | non_main | uk labels generic names not listed in exports they do show in search results but not in matrix or other exports | 0 |
2,008 | 6,720,528,164 | IssuesEvent | 2017-10-16 08:12:34 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Singularity jumped straight past Level 3 after an emergency shutdown and restart of PA | Maintainability/Hinders improvements | Singularity jumped straight from Level 1 to Level 4 after an emergency shutdown of the PA and restart.
I was Engineer and was getting the Singularity up and running. I manged to get it up to Stage 2 and was about to leave it when I saw the Tesla generator come into view. I performed an emergency shutdown of the PA and attempted to recover the Tesla generator, which took several minutes.
The Singularity dropped down to a Stage 1 during this time, and stopped moving at the 9-o'clock containment field generator. After recovery was successful, I attempted to get the Singulo back up to Stage 2 by turning on the PA back up to 2. After a few minutes passed, I debated with Chief Engineer Godwin Ivanov (RumblyStubble) about hacking the PA to allow Level 3, as Level 2 was not appearing to work, but before I got around to it, the Singulo visibly jumped straight to Stage 4 or 5, I'm not really sure which. (See image below.)

I ahelped the issue, and Okand was able to confirm that it jumped to Level 3 in less than a second, but noted that it was 'right after it was created', implying the Singulo died, then recovered and went back up to Stage 3.
I was logging at the time, and the log is attached. I play under the character name 'Andreas Spitzer'.
[log 2016-11-15 (3 50 pm).htm.zip](https://github.com/tgstation/tgstation/files/593249/log.2016-11-15.3.50.pm.htm.zip)
MetaStation/Bagil, 2016-11-15
Windows 7 Ultimate, build 7601, byond 511.1363
[Admins]: # (If you are reporting a bug that occured AFTER you used varedit/admin buttons to alter an object out of normal operating conditions, please verify that you can re-create the bug without the varedit usage/admin buttons before reporting the issue.)
| True | Singularity jumped straight past Level 3 after an emergency shutdown and restart of PA - Singularity jumped straight from Level 1 to Level 4 after an emergency shutdown of the PA and restart.
I was Engineer and was getting the Singularity up and running. I manged to get it up to Stage 2 and was about to leave it when I saw the Tesla generator come into view. I performed an emergency shutdown of the PA and attempted to recover the Tesla generator, which took several minutes.
The Singularity dropped down to a Stage 1 during this time, and stopped moving at the 9-o'clock containment field generator. After recovery was successful, I attempted to get the Singulo back up to Stage 2 by turning on the PA back up to 2. After a few minutes passed, I debated with Chief Engineer Godwin Ivanov (RumblyStubble) about hacking the PA to allow Level 3, as Level 2 was not appearing to work, but before I got around to it, the Singulo visibly jumped straight to Stage 4 or 5, I'm not really sure which. (See image below.)

I ahelped the issue, and Okand was able to confirm that it jumped to Level 3 in less than a second, but noted that it was 'right after it was created', implying the Singulo died, then recovered and went back up to Stage 3.
I was logging at the time, and the log is attached. I play under the character name 'Andreas Spitzer'.
[log 2016-11-15 (3 50 pm).htm.zip](https://github.com/tgstation/tgstation/files/593249/log.2016-11-15.3.50.pm.htm.zip)
MetaStation/Bagil, 2016-11-15
Windows 7 Ultimate, build 7601, byond 511.1363
[Admins]: # (If you are reporting a bug that occured AFTER you used varedit/admin buttons to alter an object out of normal operating conditions, please verify that you can re-create the bug without the varedit usage/admin buttons before reporting the issue.)
| main | singularity jumped straight past level after an emergency shutdown and restart of pa singularity jumped straight from level to level after an emergency shutdown of the pa and restart i was engineer and was getting the singularity up and running i manged to get it up to stage and was about to leave it when i saw the tesla generator come into view i performed an emergency shutdown of the pa and attempted to recover the tesla generator which took several minutes the singularity dropped down to a stage during this time and stopped moving at the o clock containment field generator after recovery was successful i attempted to get the singulo back up to stage by turning on the pa back up to after a few minutes passed i debated with chief engineer godwin ivanov rumblystubble about hacking the pa to allow level as level was not appearing to work but before i got around to it the singulo visibly jumped straight to stage or i m not really sure which see image below i ahelped the issue and okand was able to confirm that it jumped to level in less than a second but noted that it was right after it was created implying the singulo died then recovered and went back up to stage i was logging at the time and the log is attached i play under the character name andreas spitzer metastation bagil windows ultimate build byond if you are reporting a bug that occured after you used varedit admin buttons to alter an object out of normal operating conditions please verify that you can re create the bug without the varedit usage admin buttons before reporting the issue | 1 |
392,072 | 11,583,100,826 | IssuesEvent | 2020-02-22 08:34:14 | Disfactory/Disfactory | https://api.github.com/repos/Disfactory/Disfactory | closed | 新增一個 API 可以 query 該工廠的 report record | Backend medium priority | **Is your feature request related to a problem? Please describe.**
使用者會想看到該工廠過去的修改紀錄
**Describe the solution you'd like**
新增一個 API `GET /factories/{factory_id}/report_records`
**Describe alternatives you've considered**
也是可以在原本的工廠 query 裡直接帶進去,但這樣可能就會很大包。
| 1.0 | 新增一個 API 可以 query 該工廠的 report record - **Is your feature request related to a problem? Please describe.**
使用者會想看到該工廠過去的修改紀錄
**Describe the solution you'd like**
新增一個 API `GET /factories/{factory_id}/report_records`
**Describe alternatives you've considered**
也是可以在原本的工廠 query 裡直接帶進去,但這樣可能就會很大包。
| non_main | 新增一個 api 可以 query 該工廠的 report record is your feature request related to a problem please describe 使用者會想看到該工廠過去的修改紀錄 describe the solution you d like 新增一個 api get factories factory id report records describe alternatives you ve considered 也是可以在原本的工廠 query 裡直接帶進去,但這樣可能就會很大包。 | 0 |
723 | 4,318,957,342 | IssuesEvent | 2016-07-24 11:04:03 | gogits/gogs | https://api.github.com/repos/gogits/gogs | closed | Diff split view not working on pull requests | kind/bug status/assigned to maintainer status/needs feedback | The diff split view is not working on pull requests. Clicking on the Split view button does not throw an error, it just shows no difference to the unified view. It does work on single commits though.
I have made an example on try.gogs.io:
https://try.gogs.io/meb/splitViewTest/compare/master...feature/test?style=split
- Gogs version (or commit ref): 0.9.0.0306
- Git version: 2.1.4
- Operating system: Linux x86_64
- Database:
- [ ] PostgreSQL
- [x] MySQL
- [ ] SQLite
- Can you reproduce the bug at http://try.gogs.io:
- [x] Yes
- [ ] No
- [ ] Not relevant | True | Diff split view not working on pull requests - The diff split view is not working on pull requests. Clicking on the Split view button does not throw an error, it just shows no difference to the unified view. It does work on single commits though.
I have made an example on try.gogs.io:
https://try.gogs.io/meb/splitViewTest/compare/master...feature/test?style=split
- Gogs version (or commit ref): 0.9.0.0306
- Git version: 2.1.4
- Operating system: Linux x86_64
- Database:
- [ ] PostgreSQL
- [x] MySQL
- [ ] SQLite
- Can you reproduce the bug at http://try.gogs.io:
- [x] Yes
- [ ] No
- [ ] Not relevant | main | diff split view not working on pull requests the diff split view is not working on pull requests clicking on the split view button does not throw an error it just shows no difference to the unified view it does work on single commits though i have made an example on try gogs io gogs version or commit ref git version operating system linux database postgresql mysql sqlite can you reproduce the bug at yes no not relevant | 1 |
387 | 3,420,714,660 | IssuesEvent | 2015-12-08 15:54:55 | rumpkernel/rumprun-packages | https://api.github.com/repos/rumpkernel/rumprun-packages | opened | Packages missing maintainer | maintainer wanted | rumprun-packages$ echo */README.md | xargs grep -c Maintainer | grep :0
erlang/README.md:0
libxml2/README.md:0
ngircd/README.md:0
pcre/README.md:0
rust/README.md:0
In case the contributors want to take maintainership, I'm pinging them here:
erlang by @neeraj9
libxml2 by @Incognito
ngircd by @ether42
pcre by @mato
rust by @gandro | True | Packages missing maintainer - rumprun-packages$ echo */README.md | xargs grep -c Maintainer | grep :0
erlang/README.md:0
libxml2/README.md:0
ngircd/README.md:0
pcre/README.md:0
rust/README.md:0
In case the contributors want to take maintainership, I'm pinging them here:
erlang by @neeraj9
libxml2 by @Incognito
ngircd by @ether42
pcre by @mato
rust by @gandro | main | packages missing maintainer rumprun packages echo readme md xargs grep c maintainer grep erlang readme md readme md ngircd readme md pcre readme md rust readme md in case the contributors want to take maintainership i m pinging them here erlang by by incognito ngircd by pcre by mato rust by gandro | 1 |
655 | 4,171,887,872 | IssuesEvent | 2016-06-21 02:19:27 | Particular/NServiceBus | https://api.github.com/repos/Particular/NServiceBus | closed | Make `ErrorQueueSettings` and `AuditConfigReader` public | Project: V6 Launch State: In Progress - Maintainer Prio Tag: Maintainer Prio | Currently in V6 there is no way outside of the core to discover the location of error/audit queues. This functionality is provided by `ErrorQueueSettings` and `AuditConfigReader`. If we expose these two classes then other plugins will be able to make use of this information.
This is currently required by the [ServiceControl V6 plugins](https://github.com/Particular/ServiceControl/issues/742). | True | Make `ErrorQueueSettings` and `AuditConfigReader` public - Currently in V6 there is no way outside of the core to discover the location of error/audit queues. This functionality is provided by `ErrorQueueSettings` and `AuditConfigReader`. If we expose these two classes then other plugins will be able to make use of this information.
This is currently required by the [ServiceControl V6 plugins](https://github.com/Particular/ServiceControl/issues/742). | main | make errorqueuesettings and auditconfigreader public currently in there is no way outside of the core to discover the location of error audit queues this functionality is provided by errorqueuesettings and auditconfigreader if we expose these two classes then other plugins will be able to make use of this information this is currently required by the | 1 |
2,049 | 6,902,062,327 | IssuesEvent | 2017-11-25 16:00:35 | NucleusPowered/Nucleus | https://api.github.com/repos/NucleusPowered/Nucleus | closed | CommandSpy not outputting anything in-game | bug for-maintainence-release | [nucleus-info-20171116-180753](https://github.com/NucleusPowered/Nucleus/files/1479462/nucleus-info-20171116-180753.txt)
After the latest update (**Nucleus-1.1.7-LTS-S5.1-MC1.10.2**) for Sponge, CommandSpy has stopped outputting anything to my Staff members. /commandspy says it enables/disables it, yet nothing changes. Console command logging still works as configured in my settings.
No errors occur in the console/chat.
SocialSpy still works as expected.
**Settings**
```
# Allows users with permission to see commands that other players are executing in real time
command-spy=ENABLED
```
```
command-spy {
# The blacklist (or whitelist if filter-is-whitelist is true) to use when determining which commands to spy on.
command-filter=[]
# If true, command-filter acts as a whitelist of commands to spy on, else, it functions as a blacklist.
filter-is-whitelist=false
# The prefix to use when displaying the player's command.
prefix="&8[&c!&8]&c{{name}} : "
}
```
| True | CommandSpy not outputting anything in-game - [nucleus-info-20171116-180753](https://github.com/NucleusPowered/Nucleus/files/1479462/nucleus-info-20171116-180753.txt)
After the latest update (**Nucleus-1.1.7-LTS-S5.1-MC1.10.2**) for Sponge, CommandSpy has stopped outputting anything to my Staff members. /commandspy says it enables/disables it, yet nothing changes. Console command logging still works as configured in my settings.
No errors occur in the console/chat.
SocialSpy still works as expected.
**Settings**
```
# Allows users with permission to see commands that other players are executing in real time
command-spy=ENABLED
```
```
command-spy {
# The blacklist (or whitelist if filter-is-whitelist is true) to use when determining which commands to spy on.
command-filter=[]
# If true, command-filter acts as a whitelist of commands to spy on, else, it functions as a blacklist.
filter-is-whitelist=false
# The prefix to use when displaying the player's command.
prefix="&8[&c!&8]&c{{name}} : "
}
```
| main | commandspy not outputting anything in game after the latest update nucleus lts for sponge commandspy has stopped outputting anything to my staff members commandspy says it enables disables it yet nothing changes console command logging still works as configured in my settings no errors occur in the console chat socialspy still works as expected settings allows users with permission to see commands that other players are executing in real time command spy enabled command spy the blacklist or whitelist if filter is whitelist is true to use when determining which commands to spy on command filter if true command filter acts as a whitelist of commands to spy on else it functions as a blacklist filter is whitelist false the prefix to use when displaying the player s command prefix c name | 1 |
132,586 | 5,189,040,513 | IssuesEvent | 2017-01-20 21:52:38 | CNMAT/CNMAT-MMJ-Depot | https://api.github.com/repos/CNMAT/CNMAT-MMJ-Depot | closed | music_calculator-add midi-in/menu | low priority | please add a midi-in menu/selection to music_calculator. This would be tied to the midi key section of the patch and would post up the Midi note in cents/freq/nearest note name on the display when played from a midi keyboard
| 1.0 | music_calculator-add midi-in/menu - please add a midi-in menu/selection to music_calculator. This would be tied to the midi key section of the patch and would post up the Midi note in cents/freq/nearest note name on the display when played from a midi keyboard
| non_main | music calculator add midi in menu please add a midi in menu selection to music calculator this would be tied to the midi key section of the patch and would post up the midi note in cents freq nearest note name on the display when played from a midi keyboard | 0 |
750 | 4,351,318,341 | IssuesEvent | 2016-07-31 19:51:43 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | apt-rpm | bug_report waiting_on_maintainer | ##### Issue Type:
Bug Report
##### Component Name:
apt_rpm
##### Ansible Version:
N/A
##### Summary:
When I try to install a package using this command. It gives me a error.
ansible remote_hosts -m apt_rpm -s -a "pkg=elinks state=present"


FYI, I tried installing through Apt-get command and it works. | True | apt-rpm - ##### Issue Type:
Bug Report
##### Component Name:
apt_rpm
##### Ansible Version:
N/A
##### Summary:
When I try to install a package using this command. It gives me a error.
ansible remote_hosts -m apt_rpm -s -a "pkg=elinks state=present"


FYI, I tried installing through Apt-get command and it works. | main | apt rpm issue type bug report component name apt rpm ansible version n a summary when i try to install a package using this command it gives me a error ansible remote hosts m apt rpm s a pkg elinks state present fyi i tried installing through apt get command and it works | 1 |
641,777 | 20,834,350,292 | IssuesEvent | 2022-03-20 00:15:34 | LemonUIbyLemon/LemonUI | https://api.github.com/repos/LemonUIbyLemon/LemonUI | closed | Event for checking Items before they are added to NativeMenu's | priority: p3 low type: feature request status: acknowledged | Or making Add(NativeItem) overridable.
| 1.0 | Event for checking Items before they are added to NativeMenu's - Or making Add(NativeItem) overridable.
| non_main | event for checking items before they are added to nativemenu s or making add nativeitem overridable | 0 |
248,960 | 26,869,317,999 | IssuesEvent | 2023-02-04 08:44:44 | lukebrogan/WebGoat | https://api.github.com/repos/lukebrogan/WebGoat | closed | commons-text-1.9.jar: 1 vulnerabilities (highest severity is: 9.8) - autoclosed | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-text-1.9.jar</b></p></summary>
<p>Apache Commons Text is a library focused on algorithms working on strings.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-text">https://commons.apache.org/proper/commons-text</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-text/1.9/commons-text-1.9.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/lukebrogan/WebGoat/commit/f4b8c92895152a8f2409ca352cafbc342b6b7ffb">f4b8c92895152a8f2409ca352cafbc342b6b7ffb</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (commons-text version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-42889](https://www.mend.io/vulnerability-database/CVE-2022-42889) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | commons-text-1.9.jar | Direct | 1.10.0 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-42889</summary>
### Vulnerable Library - <b>commons-text-1.9.jar</b></p>
<p>Apache Commons Text is a library focused on algorithms working on strings.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-text">https://commons.apache.org/proper/commons-text</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-text/1.9/commons-text-1.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-text-1.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lukebrogan/WebGoat/commit/f4b8c92895152a8f2409ca352cafbc342b6b7ffb">f4b8c92895152a8f2409ca352cafbc342b6b7ffb</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Commons Text performs variable interpolation, allowing properties to be dynamically evaluated and expanded. The standard format for interpolation is "${prefix:name}", where "prefix" is used to locate an instance of org.apache.commons.text.lookup.StringLookup that performs the interpolation. Starting with version 1.5 and continuing through 1.9, the set of default Lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers. These lookups are: - "script" - execute expressions using the JVM script execution engine (javax.script) - "dns" - resolve dns records - "url" - load values from urls, including from remote servers Applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used. Users are recommended to upgrade to Apache Commons Text 1.10.0, which disables the problematic interpolators by default.
<p>Publish Date: 2022-10-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42889>CVE-2022-42889</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openwall.com/lists/oss-security/2022/10/13/4">https://www.openwall.com/lists/oss-security/2022/10/13/4</a></p>
<p>Release Date: 2022-10-13</p>
<p>Fix Resolution: 1.10.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | True | commons-text-1.9.jar: 1 vulnerabilities (highest severity is: 9.8) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-text-1.9.jar</b></p></summary>
<p>Apache Commons Text is a library focused on algorithms working on strings.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-text">https://commons.apache.org/proper/commons-text</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-text/1.9/commons-text-1.9.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/lukebrogan/WebGoat/commit/f4b8c92895152a8f2409ca352cafbc342b6b7ffb">f4b8c92895152a8f2409ca352cafbc342b6b7ffb</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (commons-text version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-42889](https://www.mend.io/vulnerability-database/CVE-2022-42889) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | commons-text-1.9.jar | Direct | 1.10.0 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-42889</summary>
### Vulnerable Library - <b>commons-text-1.9.jar</b></p>
<p>Apache Commons Text is a library focused on algorithms working on strings.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-text">https://commons.apache.org/proper/commons-text</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-text/1.9/commons-text-1.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-text-1.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lukebrogan/WebGoat/commit/f4b8c92895152a8f2409ca352cafbc342b6b7ffb">f4b8c92895152a8f2409ca352cafbc342b6b7ffb</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Commons Text performs variable interpolation, allowing properties to be dynamically evaluated and expanded. The standard format for interpolation is "${prefix:name}", where "prefix" is used to locate an instance of org.apache.commons.text.lookup.StringLookup that performs the interpolation. Starting with version 1.5 and continuing through 1.9, the set of default Lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers. These lookups are: - "script" - execute expressions using the JVM script execution engine (javax.script) - "dns" - resolve dns records - "url" - load values from urls, including from remote servers Applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used. Users are recommended to upgrade to Apache Commons Text 1.10.0, which disables the problematic interpolators by default.
<p>Publish Date: 2022-10-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42889>CVE-2022-42889</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openwall.com/lists/oss-security/2022/10/13/4">https://www.openwall.com/lists/oss-security/2022/10/13/4</a></p>
<p>Release Date: 2022-10-13</p>
<p>Fix Resolution: 1.10.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | non_main | commons text jar vulnerabilities highest severity is autoclosed vulnerable library commons text jar apache commons text is a library focused on algorithms working on strings library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org apache commons commons text commons text jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in commons text version remediation available high commons text jar direct details cve vulnerable library commons text jar apache commons text is a library focused on algorithms working on strings library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org apache commons commons text commons text jar dependency hierarchy x commons text jar vulnerable library found in head commit a href found in base branch main vulnerability details apache commons text performs variable interpolation allowing properties to be dynamically evaluated and expanded the standard format for interpolation is prefix name where prefix is used to locate an instance of org apache commons text lookup stringlookup that performs the interpolation starting with version and continuing through the set of default lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers these lookups are script execute expressions using the jvm script execution engine javax script dns resolve dns records url load values from urls including from remote servers applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used users are recommended to upgrade to apache commons text which disables the problematic interpolators by default publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue | 0 |
125,324 | 4,955,541,748 | IssuesEvent | 2016-12-01 20:43:14 | WalkingMachine/sara_commun | https://api.github.com/repos/WalkingMachine/sara_commun | closed | Firefox bug d'affichage | bug Priority : LOW | 
Il semble avoir une case vide qui apparaît quand il y a un sous-titre donné a un membre. | 1.0 | Firefox bug d'affichage - 
Il semble avoir une case vide qui apparaît quand il y a un sous-titre donné a un membre. | non_main | firefox bug d affichage il semble avoir une case vide qui apparaît quand il y a un sous titre donné a un membre | 0 |
119,803 | 4,776,167,255 | IssuesEvent | 2016-10-27 12:59:39 | pmem/issues | https://api.github.com/repos/pmem/issues | opened | pmempool dump cli: Unable to provide range in format: -r 1- | Exposure: Low Priority: 4 low Type: Bug | ```
pmempool create log pool.log
pmempool dump -r 1- pool.log
```
> error: invalid range value specified -- '1-'
Found on: 1.1-879-g3d09c5c | 1.0 | pmempool dump cli: Unable to provide range in format: -r 1- - ```
pmempool create log pool.log
pmempool dump -r 1- pool.log
```
> error: invalid range value specified -- '1-'
Found on: 1.1-879-g3d09c5c | non_main | pmempool dump cli unable to provide range in format r pmempool create log pool log pmempool dump r pool log error invalid range value specified found on | 0 |
217,135 | 24,313,173,358 | IssuesEvent | 2022-09-30 01:58:41 | RG4421/ampere-centos-kernel | https://api.github.com/repos/RG4421/ampere-centos-kernel | reopened | CVE-2021-0512 (High) detected in linuxv5.2 | security vulnerability | ## CVE-2021-0512 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/hid/hid-core.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In __hidinput_change_resolution_multipliers of hid-input.c, there is a possible out of bounds write due to a heap buffer overflow. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android kernelAndroid ID: A-173843328References: Upstream kernel
<p>Publish Date: 2021-06-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-0512>CVE-2021-0512</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://source.android.com/security/bulletin/2021-06-01">https://source.android.com/security/bulletin/2021-06-01</a></p>
<p>Release Date: 2021-06-21</p>
<p>Fix Resolution: ASB-2021-02-05_mainline</p>
</p>
</details>
<p></p>
| True | CVE-2021-0512 (High) detected in linuxv5.2 - ## CVE-2021-0512 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/hid/hid-core.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In __hidinput_change_resolution_multipliers of hid-input.c, there is a possible out of bounds write due to a heap buffer overflow. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android kernelAndroid ID: A-173843328References: Upstream kernel
<p>Publish Date: 2021-06-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-0512>CVE-2021-0512</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://source.android.com/security/bulletin/2021-06-01">https://source.android.com/security/bulletin/2021-06-01</a></p>
<p>Release Date: 2021-06-21</p>
<p>Fix Resolution: ASB-2021-02-05_mainline</p>
</p>
</details>
<p></p>
| non_main | cve high detected in cve high severity vulnerability vulnerable library linux kernel source tree library home page a href found in base branch amp centos kernel vulnerable source files drivers hid hid core c vulnerability details in hidinput change resolution multipliers of hid input c there is a possible out of bounds write due to a heap buffer overflow this could lead to local escalation of privilege with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android kernelandroid id a upstream kernel publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution asb mainline | 0 |
6,097 | 2,610,220,981 | IssuesEvent | 2015-02-26 19:10:09 | chrsmith/somefinders | https://api.github.com/repos/chrsmith/somefinders | opened | jvc kd g547 инструкция.txt | auto-migrated Priority-Medium Type-Defect | ```
'''Анис Павлов'''
Привет всем не подскажите где можно найти
.jvc kd g547 инструкция.txt. где то видел уже
'''Виль Кузнецов'''
Вот хороший сайт где можно скачать
http://bit.ly/1dyXx5C
'''Вольдемар Шилов'''
Спасибо вроде то но просит телефон вводить
'''Виталий Алексеев'''
Не это не влияет на баланс
'''Гаральд Смирнов'''
Не это не влияет на баланс
Информация о файле: jvc kd g547 инструкция.txt
Загружен: В этом месяце
Скачан раз: 635
Рейтинг: 1229
Средняя скорость скачивания: 923
Похожих файлов: 38
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 9:41 | 1.0 | jvc kd g547 инструкция.txt - ```
'''Анис Павлов'''
Привет всем не подскажите где можно найти
.jvc kd g547 инструкция.txt. где то видел уже
'''Виль Кузнецов'''
Вот хороший сайт где можно скачать
http://bit.ly/1dyXx5C
'''Вольдемар Шилов'''
Спасибо вроде то но просит телефон вводить
'''Виталий Алексеев'''
Не это не влияет на баланс
'''Гаральд Смирнов'''
Не это не влияет на баланс
Информация о файле: jvc kd g547 инструкция.txt
Загружен: В этом месяце
Скачан раз: 635
Рейтинг: 1229
Средняя скорость скачивания: 923
Похожих файлов: 38
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 9:41 | non_main | jvc kd инструкция txt анис павлов привет всем не подскажите где можно найти jvc kd инструкция txt где то видел уже виль кузнецов вот хороший сайт где можно скачать вольдемар шилов спасибо вроде то но просит телефон вводить виталий алексеев не это не влияет на баланс гаральд смирнов не это не влияет на баланс информация о файле jvc kd инструкция txt загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at | 0 |
252,853 | 8,047,375,328 | IssuesEvent | 2018-08-01 00:16:38 | MrBlizzard/RCAdmins-Tracker | https://api.github.com/repos/MrBlizzard/RCAdmins-Tracker | closed | TM Shop | awaiting completion enhancement priority:normal | TM shop is a good idea in theory, but someone has to sit down and do the math and figure out how much each TM can go for without breaking the economy. For example, if you can buy a TM from the invmenus store, then sell it to a shopkeeper for pokedollars and buy several, you could create an infinite money glitch.
Alternatively, we could make a shop that only sells, and not buys. Even still, the prices couldn't be very high and some math will still have to be done.
First task is to come up with prices for TMs, and what TMs will be in the shop (as there are quite a few of them.
Feel free to discuss here until we come up with some ideas on how to organize, and whoever wants to take on the finding out prices part can. | 1.0 | TM Shop - TM shop is a good idea in theory, but someone has to sit down and do the math and figure out how much each TM can go for without breaking the economy. For example, if you can buy a TM from the invmenus store, then sell it to a shopkeeper for pokedollars and buy several, you could create an infinite money glitch.
Alternatively, we could make a shop that only sells, and not buys. Even still, the prices couldn't be very high and some math will still have to be done.
First task is to come up with prices for TMs, and what TMs will be in the shop (as there are quite a few of them.
Feel free to discuss here until we come up with some ideas on how to organize, and whoever wants to take on the finding out prices part can. | non_main | tm shop tm shop is a good idea in theory but someone has to sit down and do the math and figure out how much each tm can go for without breaking the economy for example if you can buy a tm from the invmenus store then sell it to a shopkeeper for pokedollars and buy several you could create an infinite money glitch alternatively we could make a shop that only sells and not buys even still the prices couldn t be very high and some math will still have to be done first task is to come up with prices for tms and what tms will be in the shop as there are quite a few of them feel free to discuss here until we come up with some ideas on how to organize and whoever wants to take on the finding out prices part can | 0 |
316,664 | 9,653,037,863 | IssuesEvent | 2019-05-18 23:11:10 | nimona/go-nimona | https://api.github.com/repos/nimona/go-nimona | closed | Add logger | Priority: Medium Type: Enhancement | A structured logger should be implemented for use across the project. Preferably over zap. | 1.0 | Add logger - A structured logger should be implemented for use across the project. Preferably over zap. | non_main | add logger a structured logger should be implemented for use across the project preferably over zap | 0 |
495,164 | 14,272,851,927 | IssuesEvent | 2020-11-21 18:50:18 | rism-ch/verovio | https://api.github.com/repos/rism-ch/verovio | opened | Strange transient beam angle behavior | bug low priority | Using the example data given below, the beam angle of double-dotted-eighth/32nd note rhythms keep changing from horizontal to sloped for no reason. I suspect that there is an uninitialized variable causing the problem (or possibly but less likely a memory leak causing it). Here is an animation showing the problem:

Notice that I placed a comment at the bottom of the text, and I am typing the letter `x` repeatedly. After each letter is inserted, the data is sent to verovio to be re-rendered. Each successive rendering will randomly change the slope of the dotted rhythm beams, but other beams such as the 8th/32nd/32nd beam, for example. When I try to reproduce with a single measure, I am not getting the behavior. This test was done with the most recent development version compiled to the javascript toolkit.
Here are snapshots of the music showing various states of the beams:
<img width="940" alt="Screen Shot 2020-11-21 at 9 11 26 AM" src="https://user-images.githubusercontent.com/3487289/99885028-f9e5ee00-2be6-11eb-811e-277222a5286f.png">
<img width="954" alt="Screen Shot 2020-11-21 at 9 11 04 AM" src="https://user-images.githubusercontent.com/3487289/99885029-fbafb180-2be6-11eb-9be6-11c5b9a1ddce.png">
Test MEI data:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-model href="https://music-encoding.org/schema/4.0.0/mei-all.rng" type="application/xml" schematypens="http://relaxng.org/ns/structure/1.0"?>
<?xml-model href="https://music-encoding.org/schema/4.0.0/mei-all.rng" type="application/xml" schematypens="http://purl.oclc.org/dsdl/schematron"?>
<mei xmlns="http://www.music-encoding.org/ns/mei" meiversion="4.0.0">
<meiHead>
<fileDesc>
<titleStmt>
<title />
</titleStmt>
<pubStmt />
</fileDesc>
<encodingDesc>
<appInfo>
<application isodate="2020-11-21T09:21:11" version="3.1.0-dev-0b664f2">
<name>Verovio</name>
<p>Transcoded from Humdrum</p>
</application>
</appInfo>
<projectDesc>
<p>Encoded by: Craigt Sapp</p>
<p>Version: 2014/07/13/ (added ottavas)</p>
</projectDesc>
</encodingDesc>
<workList>
<work>
<title xml:id="title-L1" analog="humdrum:Xfi" type="translated">extract -s 1,3</title>
<title xml:id="title-L49" analog="humdrum:Xfi" type="translated">myank -m</title>
</work>
</workList>
<extMeta>
<frames xmlns="http://www.humdrum.org/ns/humxml">
<metaFrame n="0" token="!!!Xfilter: extract -s 1,3" xml:id="L1">
<frameInfo>
<startTime float="0" />
<frameType>reference</frameType>
<referenceKey>Xfilter</referenceKey>
<referenceValue>extract -s 1,3</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="48" token="!!!Xfilter: myank -m" xml:id="L49">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>Xfilter</referenceKey>
<referenceValue>myank -m</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="49" token="!!!ENC: Craigt Sapp" xml:id="L50">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>ENC</referenceKey>
<referenceValue>Craigt Sapp</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="50" token="!!!END: 2004/04/07/!!!!!!!!!!!" xml:id="L51">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>END</referenceKey>
<referenceValue>2004/04/07/!!!!!!!!!!!</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="51" token="!!!ONB: not proofread yet. some rest/note interpolations by SharpEye" xml:id="L52">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>ONB</referenceKey>
<referenceValue>not proofread yet. some rest/note interpolations by SharpEye</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="52" token="!!!EEV: 2014/07/13/ (added ottavas)" xml:id="L53">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>EEV</referenceKey>
<referenceValue>2014/07/13/ (added ottavas)</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="53" token="!!!RDF**kern: > = above" xml:id="L54">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>RDF**kern</referenceKey>
<referenceValue>> = above</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="54" token="!!!RDF**kern: < = below" xml:id="L55">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>RDF**kern</referenceKey>
<referenceValue>< = below</referenceValue>
</frameInfo>
</metaFrame>
</frames>
</extMeta>
</meiHead>
<music>
<body>
<mdiv xml:id="mdiv-0000001489277474">
<score xml:id="score-0000001061504672">
<scoreDef xml:id="scoredef-0000001186972249">
<staffGrp xml:id="staffgrp-0000001834787554" symbol="brace" bar.thru="true">
<label xml:id="label-0000001490130711">Piano</label>
<staffDef xml:id="staffdef-0000001947893997" n="1" lines="5">
<clef xml:id="clef-L4F2" shape="G" line="2" />
<keySig xml:id="keysig-L5F2" sig="3f" />
<meterSig xml:id="metersig-L6F2" count="4" unit="4" />
<instrDef xml:id="instrdef-0000000892253951" midi.instrnum="0" midi.instrname="Acoustic_Grand_Piano" />
</staffDef>
<staffDef xml:id="staffdef-0000000057604220" n="2" lines="5">
<clef xml:id="clef-L4F1" shape="F" line="4" />
<keySig xml:id="keysig-L5F1" sig="3f" />
<meterSig xml:id="metersig-L6F1" count="4" unit="4" />
<instrDef xml:id="instrdef-0000000187362019" midi.instrnum="0" midi.instrname="Acoustic_Grand_Piano" />
</staffDef>
</staffGrp>
</scoreDef>
<section xml:id="section-L2F1">
<measure xml:id="measure-L1" n="1">
<staff xml:id="staff-0000001496639669" n="1">
<layer xml:id="layer-L2F2N1" n="1">
<rest xml:id="rest-L8F2" dots="2" dur="8" />
<chord xml:id="chord-L9F2" dur="32">
<note xml:id="note-L9F2S1" oct="4" pname="e" accid.ges="f" />
<note xml:id="note-L9F2S2" oct="4" pname="a" accid="n" />
<note xml:id="note-L9F2S3" oct="5" pname="c" accid.ges="n" />
<note xml:id="note-L9F2S4" oct="5" pname="e" accid.ges="f" />
</chord>
<chord xml:id="chord-L10F2" dur="4">
<note xml:id="note-L10F2S1" oct="4" pname="e" accid.ges="f" />
<note xml:id="note-L10F2S2" oct="4" pname="a" accid.ges="n" />
<note xml:id="note-L10F2S3" oct="5" pname="c" accid.ges="n" />
<note xml:id="note-L10F2S4" oct="5" pname="e" accid.ges="f" />
</chord>
<beam xml:id="beam-L11F2-L12F2">
<chord xml:id="chord-L11F2" dots="2" dur="8">
<note xml:id="note-L11F2S1" oct="4" pname="e" accid.ges="f" />
<note xml:id="note-L11F2S2" oct="4" pname="a" accid.ges="n" />
<note xml:id="note-L11F2S3" oct="5" pname="c" accid.ges="n" />
<note xml:id="note-L11F2S4" oct="5" pname="e" accid.ges="f" />
</chord>
<chord xml:id="chord-L12F2" dur="32">
<note xml:id="note-L12F2S1" oct="4" pname="a" accid.ges="n" />
<note xml:id="note-L12F2S2" oct="5" pname="c" accid.ges="n" />
</chord>
</beam>
<beam xml:id="beam-L14F2-L16F2">
<note xml:id="note-L14F2" dots="1" dur="8" oct="5" pname="c" accid.ges="n" />
<note xml:id="note-L15F2" dur="32" oct="4" pname="b" accid="n" />
<note xml:id="note-L16F2" dur="32" oct="5" pname="c" accid.ges="n" />
</beam>
</layer>
<layer xml:id="layer-L4F2N2" n="2">
<space xml:id="space-0000001408382826" dots="1" dur="2" />
<note xml:id="note-L14F3" dur="4" oct="4" pname="a" accid.ges="n" />
</layer>
</staff>
<staff xml:id="staff-0000001379610814" n="2">
<layer xml:id="layer-L2F1N1" n="1">
<beam xml:id="beam-L8F1-L9F1">
<chord xml:id="chord-L8F1" dots="2" dur="8">
<note xml:id="note-L8F1S1" oct="2" pname="f" accid="s" />
<note xml:id="note-L8F1S2" oct="3" pname="f" accid="s" />
</chord>
<chord xml:id="chord-L9F1" dur="32">
<note xml:id="note-L9F1S1" oct="1" pname="f" accid="s" />
<note xml:id="note-L9F1S2" oct="2" pname="f" accid.ges="s" />
</chord>
</beam>
<chord xml:id="chord-L10F1" dur="4">
<note xml:id="note-L10F1S1" oct="2" pname="f" accid.ges="s" />
<note xml:id="note-L10F1S2" oct="3" pname="f" accid.ges="s" />
</chord>
<beam xml:id="beam-L11F1-L12F1">
<chord xml:id="chord-L11F1" dots="2" dur="8">
<note xml:id="note-L11F1S1" oct="2" pname="f" accid.ges="s" />
<note xml:id="note-L11F1S2" oct="3" pname="f" accid.ges="s" />
</chord>
<chord xml:id="chord-L12F1" dur="32">
<note xml:id="note-L12F1S1" oct="3" pname="f" accid.ges="s" />
<note xml:id="note-L12F1S2" oct="4" pname="e" accid.ges="f" />
</chord>
</beam>
<chord xml:id="chord-L14F1" dur="4">
<note xml:id="note-L14F1S1" oct="3" pname="f" accid.ges="s" />
<note xml:id="note-L14F1S2" oct="4" pname="e" accid.ges="f" />
</chord>
</layer>
</staff>
<tie xml:id="tie-L10F2S1-L11F2S1" startid="#note-L10F2S1" endid="#note-L11F2S1" />
<tie xml:id="tie-L10F2S2-L11F2S2" startid="#note-L10F2S2" endid="#note-L11F2S2" />
<tie xml:id="tie-L10F2S3-L11F2S3" startid="#note-L10F2S3" endid="#note-L11F2S3" />
<tie xml:id="tie-L10F2S4-L11F2S4" startid="#note-L10F2S4" endid="#note-L11F2S4" />
<tie xml:id="tie-L10F1S1-L11F1S1" startid="#note-L10F1S1" endid="#note-L11F1S1" />
<tie xml:id="tie-L10F1S2-L11F1S2" startid="#note-L10F1S2" endid="#note-L11F1S2" />
</measure>
<measure xml:id="measure-L18" n="2">
<staff xml:id="staff-L18F2N1" n="1">
<layer xml:id="layer-L18F2N1" n="1">
<chord xml:id="chord-L19F2" dur="8">
<note xml:id="note-L19F2S1" oct="4" pname="g" accid.ges="n" />
<note xml:id="note-L19F2S2" oct="4" pname="b" accid="n" />
</chord>
<rest xml:id="rest-L20F2" dur="8" />
<chord xml:id="chord-L21F2" dur="8">
<note xml:id="note-L21F2S1" oct="4" pname="e" accid.ges="f" />
<note xml:id="note-L21F2S2" oct="4" pname="g" accid.ges="n" />
<note xml:id="note-L21F2S3" oct="5" pname="c" accid.ges="n" />
</chord>
<rest xml:id="rest-L22F2" dur="8" />
<beam xml:id="beam-L23F2-L24F2">
<note xml:id="note-L23F2" dur="4" oct="4" pname="d" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L24F2" dur="4" oct="4" pname="g" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L25F2" dur="4" oct="4" pname="b" grace="unacc" stem.visible="false" accid.ges="n" />
</beam>
<chord xml:id="chord-L30F2" dur="8">
<note xml:id="note-L30F2S1" oct="4" pname="d" accid.ges="n" />
<note xml:id="note-L30F2S2" oct="4" pname="g" accid.ges="n" />
<note xml:id="note-L30F2S3" oct="5" pname="d" accid.ges="n" />
</chord>
<rest xml:id="rest-L31F2" dur="8" />
<rest xml:id="rest-L32F2" dur="4" />
</layer>
</staff>
<staff xml:id="staff-L18F1N1" n="2">
<layer xml:id="layer-L18F1N1" n="1">
<chord xml:id="chord-L19F1" dur="8">
<note xml:id="note-L19F1S1" oct="3" pname="g" accid.ges="n" />
<note xml:id="note-L19F1S2" oct="4" pname="d" accid.ges="n" />
</chord>
<rest xml:id="rest-L20F1" dur="8" />
<chord xml:id="chord-L21F1" dur="8">
<note xml:id="note-L21F1S1" oct="3" pname="c" accid.ges="n" />
<note xml:id="note-L21F1S2" oct="3" pname="g" accid.ges="n" />
<note xml:id="note-L21F1S3" oct="4" pname="c" accid.ges="n" />
</chord>
<rest xml:id="rest-L22F1" dur="8" />
<beam xml:id="beam-L23F1-L24F1">
<note xml:id="note-L23F1" dur="4" oct="1" pname="b" grace="unacc" stem.visible="false" accid="n" />
<note xml:id="note-L24F1" dur="4" oct="2" pname="d" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L25F1" dur="4" oct="2" pname="g" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L26F1" dur="4" oct="2" pname="b" grace="unacc" stem.visible="false" accid="n" />
<note xml:id="note-L27F1" dur="4" oct="3" pname="d" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L28F1" dur="4" oct="3" pname="g" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L29F1" dur="4" oct="3" pname="b" grace="unacc" stem.visible="false" accid="n" />
</beam>
<chord xml:id="chord-L30F1" dur="8">
<note xml:id="note-L30F1S1" oct="1" pname="b">
<accid xml:id="accid-L30F1S1" accid="n" func="caution" />
</note>
<note xml:id="note-L30F1S2" oct="2" pname="b">
<accid xml:id="accid-L30F1S2" accid="n" func="caution" />
</note>
</chord>
<rest xml:id="rest-L31F1" dur="8" />
<rest xml:id="rest-L32F1" dots="2" dur="8" />
<chord xml:id="chord-L33F1" dur="32">
<note xml:id="note-L33F1S1" oct="3" pname="a" accid.ges="f" />
<note xml:id="note-L33F1S2" oct="4" pname="a" accid.ges="f" />
</chord>
</layer>
</staff>
<slur xml:id="slur-L23F2-L30F2" staff="1" startid="#note-L23F2" endid="#chord-L30F2" />
<slur xml:id="slur-L23F1-L29F1" staff="2" startid="#note-L23F1" endid="#note-L29F1" />
</measure>
<measure xml:id="measure-L34" n="3">
<staff xml:id="staff-L34F2N1" n="1">
<layer xml:id="layer-L34F2N1" n="1">
<rest xml:id="rest-L35F2" dots="2" dur="8" />
<chord xml:id="chord-L36F2" dur="32">
<note xml:id="note-L36F2S1" oct="4" pname="a" accid.ges="f" />
<note xml:id="note-L36F2S2" oct="5" pname="d" accid.ges="n" />
<note xml:id="note-L36F2S3" oct="5" pname="f" accid.ges="n" />
<note xml:id="note-L36F2S4" oct="5" pname="a" accid.ges="f" />
</chord>
<chord xml:id="chord-L37F2" dur="4">
<note xml:id="note-L37F2S1" oct="4" pname="a" accid.ges="f" />
<note xml:id="note-L37F2S2" oct="5" pname="d" accid.ges="n" />
<note xml:id="note-L37F2S3" oct="5" pname="f" accid.ges="n" />
<note xml:id="note-L37F2S4" oct="5" pname="a" accid.ges="f" />
</chord>
<beam xml:id="beam-L38F2-L40F2">
<chord xml:id="chord-L38F2" dots="2" dur="8">
<note xml:id="note-L38F2S1" oct="4" pname="a" accid.ges="f" />
<note xml:id="note-L38F2S2" oct="5" pname="d" accid.ges="n" />
<note xml:id="note-L38F2S3" oct="5" pname="f" accid.ges="n" />
<note xml:id="note-L38F2S4" oct="5" pname="a" accid.ges="f" />
</chord>
<chord xml:id="chord-L40F2" dur="32">
<note xml:id="note-L40F2S1" oct="5" pname="d" accid.ges="n" />
<note xml:id="note-L40F2S2" oct="5" pname="f" accid.ges="n" />
</chord>
</beam>
<note xml:id="note-L42F2" dur="4" oct="5" pname="d" stem.dir="down" accid.ges="n" />
</layer>
<layer xml:id="layer-L42F3N2" n="2">
<space xml:id="space-0000001261319241" dots="1" dur="2" />
<beam xml:id="beam-L42F3-L44F3">
<note xml:id="note-L42F3" dots="1" dur="8" oct="5" pname="f" stem.dir="up" accid.ges="n" />
<note xml:id="note-L43F3" dur="32" oct="5" pname="e" stem.dir="up" accid="n" />
<note xml:id="note-L44F3" dur="32" oct="5" pname="f" stem.dir="up" accid.ges="n" />
</beam>
</layer>
</staff>
<staff xml:id="staff-L34F1N1" n="2">
<layer xml:id="layer-L34F1N1" n="1">
<beam xml:id="beam-L35F1-L36F1">
<chord xml:id="chord-L35F1" dots="2" dur="8">
<note xml:id="note-L35F1S1" oct="2" pname="b" accid="n" />
<note xml:id="note-L35F1S2" oct="3" pname="b" accid="n" />
</chord>
<chord xml:id="chord-L36F1" dur="32">
<note xml:id="note-L36F1S1" oct="1" pname="b" accid="n" />
<note xml:id="note-L36F1S2" oct="2" pname="b" accid.ges="n" />
</chord>
</beam>
<chord xml:id="chord-L37F1" dur="4">
<note xml:id="note-L37F1S1" oct="2" pname="b" accid.ges="n" />
<note xml:id="note-L37F1S2" oct="3" pname="b" accid.ges="n" />
</chord>
<beam xml:id="beam-L38F1-L40F1">
<chord xml:id="chord-L38F1" dots="2" dur="8">
<note xml:id="note-L38F1S1" oct="2" pname="b" accid.ges="n" />
<note xml:id="note-L38F1S2" oct="3" pname="b" accid.ges="n" />
</chord>
<clef xml:id="clef-L39F1" shape="G" line="2" />
<chord xml:id="chord-L40F1" dur="32">
<note xml:id="note-L40F1S1" oct="3" pname="b" accid.ges="n" />
<note xml:id="note-L40F1S2" oct="4" pname="a" accid.ges="f" />
</chord>
</beam>
<chord xml:id="chord-L42F1" dur="4">
<note xml:id="note-L42F1S1" oct="3" pname="b" accid.ges="n" />
<note xml:id="note-L42F1S2" oct="4" pname="a" accid.ges="f" />
</chord>
<clef xml:id="clef-L45F1" shape="F" line="4" />
</layer>
</staff>
<tie xml:id="tie-L37F2S1-L38F2S1" startid="#note-L37F2S1" endid="#note-L38F2S1" />
<tie xml:id="tie-L37F2S2-L38F2S2" startid="#note-L37F2S2" endid="#note-L38F2S2" />
<tie xml:id="tie-L37F2S3-L38F2S3" startid="#note-L37F2S3" endid="#note-L38F2S3" />
<tie xml:id="tie-L37F2S4-L38F2S4" startid="#note-L37F2S4" endid="#note-L38F2S4" />
<tie xml:id="tie-L37F1S1-L38F1S1" startid="#note-L37F1S1" endid="#note-L38F1S1" />
<tie xml:id="tie-L37F1S2-L38F1S2" startid="#note-L37F1S2" endid="#note-L38F1S2" />
</measure>
</section>
</score>
</mdiv>
</body>
</music>
</mei>
<!-- XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX -->
```
| 1.0 | Strange transient beam angle behavior - Using the example data given below, the beam angle of double-dotted-eighth/32nd note rhythms keep changing from horizontal to sloped for no reason. I suspect that there is an uninitialized variable causing the problem (or possibly but less likely a memory leak causing it). Here is an animation showing the problem:

Notice that I placed a comment at the bottom of the text, and I am typing the letter `x` repeatedly. After each letter is inserted, the data is sent to verovio to be re-rendered. Each successive rendering will randomly change the slope of the dotted rhythm beams, but other beams such as the 8th/32nd/32nd beam, for example. When I try to reproduce with a single measure, I am not getting the behavior. This test was done with the most recent development version compiled to the javascript toolkit.
Here are snapshots of the music showing various states of the beams:
<img width="940" alt="Screen Shot 2020-11-21 at 9 11 26 AM" src="https://user-images.githubusercontent.com/3487289/99885028-f9e5ee00-2be6-11eb-811e-277222a5286f.png">
<img width="954" alt="Screen Shot 2020-11-21 at 9 11 04 AM" src="https://user-images.githubusercontent.com/3487289/99885029-fbafb180-2be6-11eb-9be6-11c5b9a1ddce.png">
Test MEI data:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-model href="https://music-encoding.org/schema/4.0.0/mei-all.rng" type="application/xml" schematypens="http://relaxng.org/ns/structure/1.0"?>
<?xml-model href="https://music-encoding.org/schema/4.0.0/mei-all.rng" type="application/xml" schematypens="http://purl.oclc.org/dsdl/schematron"?>
<mei xmlns="http://www.music-encoding.org/ns/mei" meiversion="4.0.0">
<meiHead>
<fileDesc>
<titleStmt>
<title />
</titleStmt>
<pubStmt />
</fileDesc>
<encodingDesc>
<appInfo>
<application isodate="2020-11-21T09:21:11" version="3.1.0-dev-0b664f2">
<name>Verovio</name>
<p>Transcoded from Humdrum</p>
</application>
</appInfo>
<projectDesc>
<p>Encoded by: Craigt Sapp</p>
<p>Version: 2014/07/13/ (added ottavas)</p>
</projectDesc>
</encodingDesc>
<workList>
<work>
<title xml:id="title-L1" analog="humdrum:Xfi" type="translated">extract -s 1,3</title>
<title xml:id="title-L49" analog="humdrum:Xfi" type="translated">myank -m</title>
</work>
</workList>
<extMeta>
<frames xmlns="http://www.humdrum.org/ns/humxml">
<metaFrame n="0" token="!!!Xfilter: extract -s 1,3" xml:id="L1">
<frameInfo>
<startTime float="0" />
<frameType>reference</frameType>
<referenceKey>Xfilter</referenceKey>
<referenceValue>extract -s 1,3</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="48" token="!!!Xfilter: myank -m" xml:id="L49">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>Xfilter</referenceKey>
<referenceValue>myank -m</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="49" token="!!!ENC: Craigt Sapp" xml:id="L50">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>ENC</referenceKey>
<referenceValue>Craigt Sapp</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="50" token="!!!END: 2004/04/07/!!!!!!!!!!!" xml:id="L51">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>END</referenceKey>
<referenceValue>2004/04/07/!!!!!!!!!!!</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="51" token="!!!ONB: not proofread yet. some rest/note interpolations by SharpEye" xml:id="L52">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>ONB</referenceKey>
<referenceValue>not proofread yet. some rest/note interpolations by SharpEye</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="52" token="!!!EEV: 2014/07/13/ (added ottavas)" xml:id="L53">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>EEV</referenceKey>
<referenceValue>2014/07/13/ (added ottavas)</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="53" token="!!!RDF**kern: > = above" xml:id="L54">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>RDF**kern</referenceKey>
<referenceValue>> = above</referenceValue>
</frameInfo>
</metaFrame>
<metaFrame n="54" token="!!!RDF**kern: < = below" xml:id="L55">
<frameInfo>
<startTime float="12" />
<frameType>reference</frameType>
<referenceKey>RDF**kern</referenceKey>
<referenceValue>< = below</referenceValue>
</frameInfo>
</metaFrame>
</frames>
</extMeta>
</meiHead>
<music>
<body>
<mdiv xml:id="mdiv-0000001489277474">
<score xml:id="score-0000001061504672">
<scoreDef xml:id="scoredef-0000001186972249">
<staffGrp xml:id="staffgrp-0000001834787554" symbol="brace" bar.thru="true">
<label xml:id="label-0000001490130711">Piano</label>
<staffDef xml:id="staffdef-0000001947893997" n="1" lines="5">
<clef xml:id="clef-L4F2" shape="G" line="2" />
<keySig xml:id="keysig-L5F2" sig="3f" />
<meterSig xml:id="metersig-L6F2" count="4" unit="4" />
<instrDef xml:id="instrdef-0000000892253951" midi.instrnum="0" midi.instrname="Acoustic_Grand_Piano" />
</staffDef>
<staffDef xml:id="staffdef-0000000057604220" n="2" lines="5">
<clef xml:id="clef-L4F1" shape="F" line="4" />
<keySig xml:id="keysig-L5F1" sig="3f" />
<meterSig xml:id="metersig-L6F1" count="4" unit="4" />
<instrDef xml:id="instrdef-0000000187362019" midi.instrnum="0" midi.instrname="Acoustic_Grand_Piano" />
</staffDef>
</staffGrp>
</scoreDef>
<section xml:id="section-L2F1">
<measure xml:id="measure-L1" n="1">
<staff xml:id="staff-0000001496639669" n="1">
<layer xml:id="layer-L2F2N1" n="1">
<rest xml:id="rest-L8F2" dots="2" dur="8" />
<chord xml:id="chord-L9F2" dur="32">
<note xml:id="note-L9F2S1" oct="4" pname="e" accid.ges="f" />
<note xml:id="note-L9F2S2" oct="4" pname="a" accid="n" />
<note xml:id="note-L9F2S3" oct="5" pname="c" accid.ges="n" />
<note xml:id="note-L9F2S4" oct="5" pname="e" accid.ges="f" />
</chord>
<chord xml:id="chord-L10F2" dur="4">
<note xml:id="note-L10F2S1" oct="4" pname="e" accid.ges="f" />
<note xml:id="note-L10F2S2" oct="4" pname="a" accid.ges="n" />
<note xml:id="note-L10F2S3" oct="5" pname="c" accid.ges="n" />
<note xml:id="note-L10F2S4" oct="5" pname="e" accid.ges="f" />
</chord>
<beam xml:id="beam-L11F2-L12F2">
<chord xml:id="chord-L11F2" dots="2" dur="8">
<note xml:id="note-L11F2S1" oct="4" pname="e" accid.ges="f" />
<note xml:id="note-L11F2S2" oct="4" pname="a" accid.ges="n" />
<note xml:id="note-L11F2S3" oct="5" pname="c" accid.ges="n" />
<note xml:id="note-L11F2S4" oct="5" pname="e" accid.ges="f" />
</chord>
<chord xml:id="chord-L12F2" dur="32">
<note xml:id="note-L12F2S1" oct="4" pname="a" accid.ges="n" />
<note xml:id="note-L12F2S2" oct="5" pname="c" accid.ges="n" />
</chord>
</beam>
<beam xml:id="beam-L14F2-L16F2">
<note xml:id="note-L14F2" dots="1" dur="8" oct="5" pname="c" accid.ges="n" />
<note xml:id="note-L15F2" dur="32" oct="4" pname="b" accid="n" />
<note xml:id="note-L16F2" dur="32" oct="5" pname="c" accid.ges="n" />
</beam>
</layer>
<layer xml:id="layer-L4F2N2" n="2">
<space xml:id="space-0000001408382826" dots="1" dur="2" />
<note xml:id="note-L14F3" dur="4" oct="4" pname="a" accid.ges="n" />
</layer>
</staff>
<staff xml:id="staff-0000001379610814" n="2">
<layer xml:id="layer-L2F1N1" n="1">
<beam xml:id="beam-L8F1-L9F1">
<chord xml:id="chord-L8F1" dots="2" dur="8">
<note xml:id="note-L8F1S1" oct="2" pname="f" accid="s" />
<note xml:id="note-L8F1S2" oct="3" pname="f" accid="s" />
</chord>
<chord xml:id="chord-L9F1" dur="32">
<note xml:id="note-L9F1S1" oct="1" pname="f" accid="s" />
<note xml:id="note-L9F1S2" oct="2" pname="f" accid.ges="s" />
</chord>
</beam>
<chord xml:id="chord-L10F1" dur="4">
<note xml:id="note-L10F1S1" oct="2" pname="f" accid.ges="s" />
<note xml:id="note-L10F1S2" oct="3" pname="f" accid.ges="s" />
</chord>
<beam xml:id="beam-L11F1-L12F1">
<chord xml:id="chord-L11F1" dots="2" dur="8">
<note xml:id="note-L11F1S1" oct="2" pname="f" accid.ges="s" />
<note xml:id="note-L11F1S2" oct="3" pname="f" accid.ges="s" />
</chord>
<chord xml:id="chord-L12F1" dur="32">
<note xml:id="note-L12F1S1" oct="3" pname="f" accid.ges="s" />
<note xml:id="note-L12F1S2" oct="4" pname="e" accid.ges="f" />
</chord>
</beam>
<chord xml:id="chord-L14F1" dur="4">
<note xml:id="note-L14F1S1" oct="3" pname="f" accid.ges="s" />
<note xml:id="note-L14F1S2" oct="4" pname="e" accid.ges="f" />
</chord>
</layer>
</staff>
<tie xml:id="tie-L10F2S1-L11F2S1" startid="#note-L10F2S1" endid="#note-L11F2S1" />
<tie xml:id="tie-L10F2S2-L11F2S2" startid="#note-L10F2S2" endid="#note-L11F2S2" />
<tie xml:id="tie-L10F2S3-L11F2S3" startid="#note-L10F2S3" endid="#note-L11F2S3" />
<tie xml:id="tie-L10F2S4-L11F2S4" startid="#note-L10F2S4" endid="#note-L11F2S4" />
<tie xml:id="tie-L10F1S1-L11F1S1" startid="#note-L10F1S1" endid="#note-L11F1S1" />
<tie xml:id="tie-L10F1S2-L11F1S2" startid="#note-L10F1S2" endid="#note-L11F1S2" />
</measure>
<measure xml:id="measure-L18" n="2">
<staff xml:id="staff-L18F2N1" n="1">
<layer xml:id="layer-L18F2N1" n="1">
<chord xml:id="chord-L19F2" dur="8">
<note xml:id="note-L19F2S1" oct="4" pname="g" accid.ges="n" />
<note xml:id="note-L19F2S2" oct="4" pname="b" accid="n" />
</chord>
<rest xml:id="rest-L20F2" dur="8" />
<chord xml:id="chord-L21F2" dur="8">
<note xml:id="note-L21F2S1" oct="4" pname="e" accid.ges="f" />
<note xml:id="note-L21F2S2" oct="4" pname="g" accid.ges="n" />
<note xml:id="note-L21F2S3" oct="5" pname="c" accid.ges="n" />
</chord>
<rest xml:id="rest-L22F2" dur="8" />
<beam xml:id="beam-L23F2-L24F2">
<note xml:id="note-L23F2" dur="4" oct="4" pname="d" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L24F2" dur="4" oct="4" pname="g" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L25F2" dur="4" oct="4" pname="b" grace="unacc" stem.visible="false" accid.ges="n" />
</beam>
<chord xml:id="chord-L30F2" dur="8">
<note xml:id="note-L30F2S1" oct="4" pname="d" accid.ges="n" />
<note xml:id="note-L30F2S2" oct="4" pname="g" accid.ges="n" />
<note xml:id="note-L30F2S3" oct="5" pname="d" accid.ges="n" />
</chord>
<rest xml:id="rest-L31F2" dur="8" />
<rest xml:id="rest-L32F2" dur="4" />
</layer>
</staff>
<staff xml:id="staff-L18F1N1" n="2">
<layer xml:id="layer-L18F1N1" n="1">
<chord xml:id="chord-L19F1" dur="8">
<note xml:id="note-L19F1S1" oct="3" pname="g" accid.ges="n" />
<note xml:id="note-L19F1S2" oct="4" pname="d" accid.ges="n" />
</chord>
<rest xml:id="rest-L20F1" dur="8" />
<chord xml:id="chord-L21F1" dur="8">
<note xml:id="note-L21F1S1" oct="3" pname="c" accid.ges="n" />
<note xml:id="note-L21F1S2" oct="3" pname="g" accid.ges="n" />
<note xml:id="note-L21F1S3" oct="4" pname="c" accid.ges="n" />
</chord>
<rest xml:id="rest-L22F1" dur="8" />
<beam xml:id="beam-L23F1-L24F1">
<note xml:id="note-L23F1" dur="4" oct="1" pname="b" grace="unacc" stem.visible="false" accid="n" />
<note xml:id="note-L24F1" dur="4" oct="2" pname="d" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L25F1" dur="4" oct="2" pname="g" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L26F1" dur="4" oct="2" pname="b" grace="unacc" stem.visible="false" accid="n" />
<note xml:id="note-L27F1" dur="4" oct="3" pname="d" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L28F1" dur="4" oct="3" pname="g" grace="unacc" stem.visible="false" accid.ges="n" />
<note xml:id="note-L29F1" dur="4" oct="3" pname="b" grace="unacc" stem.visible="false" accid="n" />
</beam>
<chord xml:id="chord-L30F1" dur="8">
<note xml:id="note-L30F1S1" oct="1" pname="b">
<accid xml:id="accid-L30F1S1" accid="n" func="caution" />
</note>
<note xml:id="note-L30F1S2" oct="2" pname="b">
<accid xml:id="accid-L30F1S2" accid="n" func="caution" />
</note>
</chord>
<rest xml:id="rest-L31F1" dur="8" />
<rest xml:id="rest-L32F1" dots="2" dur="8" />
<chord xml:id="chord-L33F1" dur="32">
<note xml:id="note-L33F1S1" oct="3" pname="a" accid.ges="f" />
<note xml:id="note-L33F1S2" oct="4" pname="a" accid.ges="f" />
</chord>
</layer>
</staff>
<slur xml:id="slur-L23F2-L30F2" staff="1" startid="#note-L23F2" endid="#chord-L30F2" />
<slur xml:id="slur-L23F1-L29F1" staff="2" startid="#note-L23F1" endid="#note-L29F1" />
</measure>
<measure xml:id="measure-L34" n="3">
<staff xml:id="staff-L34F2N1" n="1">
<layer xml:id="layer-L34F2N1" n="1">
<rest xml:id="rest-L35F2" dots="2" dur="8" />
<chord xml:id="chord-L36F2" dur="32">
<note xml:id="note-L36F2S1" oct="4" pname="a" accid.ges="f" />
<note xml:id="note-L36F2S2" oct="5" pname="d" accid.ges="n" />
<note xml:id="note-L36F2S3" oct="5" pname="f" accid.ges="n" />
<note xml:id="note-L36F2S4" oct="5" pname="a" accid.ges="f" />
</chord>
<chord xml:id="chord-L37F2" dur="4">
<note xml:id="note-L37F2S1" oct="4" pname="a" accid.ges="f" />
<note xml:id="note-L37F2S2" oct="5" pname="d" accid.ges="n" />
<note xml:id="note-L37F2S3" oct="5" pname="f" accid.ges="n" />
<note xml:id="note-L37F2S4" oct="5" pname="a" accid.ges="f" />
</chord>
<beam xml:id="beam-L38F2-L40F2">
<chord xml:id="chord-L38F2" dots="2" dur="8">
<note xml:id="note-L38F2S1" oct="4" pname="a" accid.ges="f" />
<note xml:id="note-L38F2S2" oct="5" pname="d" accid.ges="n" />
<note xml:id="note-L38F2S3" oct="5" pname="f" accid.ges="n" />
<note xml:id="note-L38F2S4" oct="5" pname="a" accid.ges="f" />
</chord>
<chord xml:id="chord-L40F2" dur="32">
<note xml:id="note-L40F2S1" oct="5" pname="d" accid.ges="n" />
<note xml:id="note-L40F2S2" oct="5" pname="f" accid.ges="n" />
</chord>
</beam>
<note xml:id="note-L42F2" dur="4" oct="5" pname="d" stem.dir="down" accid.ges="n" />
</layer>
<layer xml:id="layer-L42F3N2" n="2">
<space xml:id="space-0000001261319241" dots="1" dur="2" />
<beam xml:id="beam-L42F3-L44F3">
<note xml:id="note-L42F3" dots="1" dur="8" oct="5" pname="f" stem.dir="up" accid.ges="n" />
<note xml:id="note-L43F3" dur="32" oct="5" pname="e" stem.dir="up" accid="n" />
<note xml:id="note-L44F3" dur="32" oct="5" pname="f" stem.dir="up" accid.ges="n" />
</beam>
</layer>
</staff>
<staff xml:id="staff-L34F1N1" n="2">
<layer xml:id="layer-L34F1N1" n="1">
<beam xml:id="beam-L35F1-L36F1">
<chord xml:id="chord-L35F1" dots="2" dur="8">
<note xml:id="note-L35F1S1" oct="2" pname="b" accid="n" />
<note xml:id="note-L35F1S2" oct="3" pname="b" accid="n" />
</chord>
<chord xml:id="chord-L36F1" dur="32">
<note xml:id="note-L36F1S1" oct="1" pname="b" accid="n" />
<note xml:id="note-L36F1S2" oct="2" pname="b" accid.ges="n" />
</chord>
</beam>
<chord xml:id="chord-L37F1" dur="4">
<note xml:id="note-L37F1S1" oct="2" pname="b" accid.ges="n" />
<note xml:id="note-L37F1S2" oct="3" pname="b" accid.ges="n" />
</chord>
<beam xml:id="beam-L38F1-L40F1">
<chord xml:id="chord-L38F1" dots="2" dur="8">
<note xml:id="note-L38F1S1" oct="2" pname="b" accid.ges="n" />
<note xml:id="note-L38F1S2" oct="3" pname="b" accid.ges="n" />
</chord>
<clef xml:id="clef-L39F1" shape="G" line="2" />
<chord xml:id="chord-L40F1" dur="32">
<note xml:id="note-L40F1S1" oct="3" pname="b" accid.ges="n" />
<note xml:id="note-L40F1S2" oct="4" pname="a" accid.ges="f" />
</chord>
</beam>
<chord xml:id="chord-L42F1" dur="4">
<note xml:id="note-L42F1S1" oct="3" pname="b" accid.ges="n" />
<note xml:id="note-L42F1S2" oct="4" pname="a" accid.ges="f" />
</chord>
<clef xml:id="clef-L45F1" shape="F" line="4" />
</layer>
</staff>
<tie xml:id="tie-L37F2S1-L38F2S1" startid="#note-L37F2S1" endid="#note-L38F2S1" />
<tie xml:id="tie-L37F2S2-L38F2S2" startid="#note-L37F2S2" endid="#note-L38F2S2" />
<tie xml:id="tie-L37F2S3-L38F2S3" startid="#note-L37F2S3" endid="#note-L38F2S3" />
<tie xml:id="tie-L37F2S4-L38F2S4" startid="#note-L37F2S4" endid="#note-L38F2S4" />
<tie xml:id="tie-L37F1S1-L38F1S1" startid="#note-L37F1S1" endid="#note-L38F1S1" />
<tie xml:id="tie-L37F1S2-L38F1S2" startid="#note-L37F1S2" endid="#note-L38F1S2" />
</measure>
</section>
</score>
</mdiv>
</body>
</music>
</mei>
<!-- XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX -->
```
| non_main | strange transient beam angle behavior using the example data given below the beam angle of double dotted eighth note rhythms keep changing from horizontal to sloped for no reason i suspect that there is an uninitialized variable causing the problem or possibly but less likely a memory leak causing it here is an animation showing the problem notice that i placed a comment at the bottom of the text and i am typing the letter x repeatedly after each letter is inserted the data is sent to verovio to be re rendered each successive rendering will randomly change the slope of the dotted rhythm beams but other beams such as the beam for example when i try to reproduce with a single measure i am not getting the behavior this test was done with the most recent development version compiled to the javascript toolkit here are snapshots of the music showing various states of the beams img width alt screen shot at am src img width alt screen shot at am src test mei data xml xml model href type application xml schematypens xml model href type application xml schematypens verovio transcoded from humdrum encoded by craigt sapp version added ottavas extract s myank m frames xmlns reference xfilter extract s reference xfilter myank m reference enc craigt sapp reference end reference onb not proofread yet some rest note interpolations by sharpeye reference eev added ottavas above xml id reference rdf kern gt above reference rdf kern lt below piano | 0 |
421,587 | 28,326,197,803 | IssuesEvent | 2023-04-11 07:23:43 | opensquare-network/statescan-v2 | https://api.github.com/repos/opensquare-network/statescan-v2 | opened | runtime, comparison | documentation | <img width="1163" alt="image" src="https://user-images.githubusercontent.com/19513289/231086149-38d40904-ab87-4164-9ee9-8bfdd16c7003.png">
---
<img width="1061" alt="image" src="https://user-images.githubusercontent.com/19513289/231086221-3b617af5-c084-4ae4-bd39-5b679a791402.png">
| 1.0 | runtime, comparison - <img width="1163" alt="image" src="https://user-images.githubusercontent.com/19513289/231086149-38d40904-ab87-4164-9ee9-8bfdd16c7003.png">
---
<img width="1061" alt="image" src="https://user-images.githubusercontent.com/19513289/231086221-3b617af5-c084-4ae4-bd39-5b679a791402.png">
| non_main | runtime comparison img width alt image src img width alt image src | 0 |
3,677 | 15,036,159,790 | IssuesEvent | 2021-02-02 14:57:03 | IITIDIDX597/sp_2021_team1 | https://api.github.com/repos/IITIDIDX597/sp_2021_team1 | opened | Usage analytics | Epic: 1 Consuming Information Epic: 5 Maintaining the system Story Week 3 | **Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care.
**Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform.
**Sub-Hill Statements:**
1. The learning platform will be routinely updated with S Lab's own research advancements, as well as outside discoveries and best practices developed for rehabilitation treatments.
### **Story Details:**
As a: admin/analyst
I want: to see what articles clinicians are consuming
So that: I can get insight into what clinicians are consuming compared to the daily operations of the clinic | True | Usage analytics - **Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care.
**Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform.
**Sub-Hill Statements:**
1. The learning platform will be routinely updated with S Lab's own research advancements, as well as outside discoveries and best practices developed for rehabilitation treatments.
### **Story Details:**
As a: admin/analyst
I want: to see what articles clinicians are consuming
So that: I can get insight into what clinicians are consuming compared to the daily operations of the clinic | main | usage analytics project goal s lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way while at the same time foster deeper learning experiences in order to deliver better abilitylab patient care hill statement individual clinicians can reference relevant continuously evolving information for their patient s therapy needs to self manage their approach patient care plan development in a single platform sub hill statements the learning platform will be routinely updated with s lab s own research advancements as well as outside discoveries and best practices developed for rehabilitation treatments story details as a admin analyst i want to see what articles clinicians are consuming so that i can get insight into what clinicians are consuming compared to the daily operations of the clinic | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.