Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,664 | 6,574,059,542 | IssuesEvent | 2017-09-11 11:17:57 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | apt: Packages were downgraded and -y was used without --allow-downgrades | affects_2.2 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apt
##### ANSIBLE VERSION
```
[WARNING]: log file at /var/log/ansible.log is not writeable and we cannot create it, aborting
ansible 2.2.0.0
config file = /home/davidak/code/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Controller: elementary OS 0.4 Loki (based on Ubuntu 16.04 LTS)
Host: Ubuntu Server 16.04 LTS
##### SUMMARY
apt module doesn't downgrade packages
##### STEPS TO REPRODUCE
Deploy this playbook:
```
- name: install bacula-fd
apt:
name: "{{ item }}=5.2.*"
update_cache: yes
cache_valid_time: "{{ apt_update_cache_valid_time | default(3600) }}"
state: present
with_items:
- bacula-common
- bacula-fd
notify: hold bacula-fd package on xenial
[...]
```
##### EXPECTED RESULTS
Ansible instructs apt to downgrade the package to the specified version.
(It is currently 7.0 on the server.)
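For reference, on Ansible versions whose apt module exposes a downgrade switch (the `allow_downgrade` parameter in modern releases; it did not exist in 2.2, where `force: yes` — mapping to the blunter `apt-get --force-yes` — was the closest workaround), the intent could be expressed as a sketch like:

```yaml
# Sketch only: assumes a modern Ansible whose apt module supports
# allow_downgrade; on 2.2 the closest (less safe) equivalent was force: yes.
- name: install bacula packages at the pinned 5.2 version
  apt:
    name:
      - "bacula-common=5.2.*"
      - "bacula-fd=5.2.*"
    state: present
    update_cache: yes
    allow_downgrade: yes
```

This maps to `apt-get install --allow-downgrades`, which is exactly the flag the error message below says is missing.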
##### ACTUAL RESULTS
```
TASK [basecom.bacula-fd : install bacula-fd] ***********************************
task path: /home/davidak/code/ansible/roles/basecom.bacula-fd/tasks/Ubuntu.yml:1
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py
<dev-ansible-xenial.cust.basecom.de> ESTABLISH SSH CONNECTION FOR USER: root
<dev-ansible-xenial.cust.basecom.de> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/davidak/.ansible/cp/ansible-ssh-%h-%p-%r dev-ansible-xenial.cust.basecom.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359 `" && echo ansible-tmp-1480590369.65-131741054149359="` echo $HOME/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359 `" ) && sleep 0'"'"''
<dev-ansible-xenial.cust.basecom.de> PUT /tmp/tmpiAy8LQ TO /root/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359/apt.py
<dev-ansible-xenial.cust.basecom.de> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/davidak/.ansible/cp/ansible-ssh-%h-%p-%r '[dev-ansible-xenial.cust.basecom.de]'
<dev-ansible-xenial.cust.basecom.de> ESTABLISH SSH CONNECTION FOR USER: root
<dev-ansible-xenial.cust.basecom.de> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/davidak/.ansible/cp/ansible-ssh-%h-%p-%r dev-ansible-xenial.cust.basecom.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359/ /root/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359/apt.py && sleep 0'"'"''
<dev-ansible-xenial.cust.basecom.de> ESTABLISH SSH CONNECTION FOR USER: root
<dev-ansible-xenial.cust.basecom.de> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/davidak/.ansible/cp/ansible-ssh-%h-%p-%r -tt dev-ansible-xenial.cust.basecom.de '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359/apt.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359/" > /dev/null 2>&1 && sleep 0'"'"''
failed: [dev-ansible-xenial.cust.basecom.de] (item=[u'bacula-common=5.2.*', u'bacula-fd=5.2.*']) => {
"cache_update_time": 1480589184,
"cache_updated": false,
"failed": true,
"invocation": {
"module_args": {
"allow_unauthenticated": false,
"autoremove": false,
"cache_valid_time": 3600,
"deb": null,
"default_release": null,
"dpkg_options": "force-confdef,force-confold",
"force": false,
"install_recommends": null,
"name": [
"bacula-common=5.2.*",
"bacula-fd=5.2.*"
],
"only_upgrade": false,
"package": [
"bacula-common=5.2.*",
"bacula-fd=5.2.*"
],
"purge": false,
"state": "present",
"update_cache": true,
"upgrade": null
},
"module_name": "apt"
},
"item": [
"bacula-common=5.2.*",
"bacula-fd=5.2.*"
],
"msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'bacula-common=5.2.*' 'bacula-fd=5.2.*'' failed: E: Packages were downgraded and -y was used without --allow-downgrades.\n",
"stderr": "E: Packages were downgraded and -y was used without --allow-downgrades.\n",
"stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSuggested packages:\n bacula-doc bacula-traymonitor\nThe following packages will be DOWNGRADED:\n bacula-common bacula-fd\n0 upgraded, 0 newly installed, 2 downgraded, 0 to remove and 0 not upgraded.\n",
"stdout_lines": [
"Reading package lists...",
"Building dependency tree...",
"Reading state information...",
"Suggested packages:",
" bacula-doc bacula-traymonitor",
"The following packages will be DOWNGRADED:",
" bacula-common bacula-fd",
"0 upgraded, 0 newly installed, 2 downgraded, 0 to remove and 0 not upgraded."
]
}
to retry, use: --limit @/home/davidak/code/ansible/site.retry
PLAY RECAP *********************************************************************
dev-ansible-xenial.cust.basecom.de : ok=49 changed=0 unreachable=0 failed=1
```
| True | apt: Packages were downgraded and -y was used without --allow-downgrades - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apt
##### ANSIBLE VERSION
```
[WARNING]: log file at /var/log/ansible.log is not writeable and we cannot create it, aborting
ansible 2.2.0.0
config file = /home/davidak/code/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Controller: elementary OS 0.4 Loki (based on Ubuntu 16.04 LTS)
Host: Ubuntu Server 16.04 LTS
##### SUMMARY
apt module doesn't downgrade packages
##### STEPS TO REPRODUCE
Deploy this playbook:
```
- name: install bacula-fd
apt:
name: "{{ item }}=5.2.*"
update_cache: yes
cache_valid_time: "{{ apt_update_cache_valid_time | default(3600) }}"
state: present
with_items:
- bacula-common
- bacula-fd
notify: hold bacula-fd package on xenial
[...]
```
##### EXPECTED RESULTS
Ansible instructs apt to downgrade the package to the specified version.
(It is currently 7.0 on the server.)
##### ACTUAL RESULTS
```
TASK [basecom.bacula-fd : install bacula-fd] ***********************************
task path: /home/davidak/code/ansible/roles/basecom.bacula-fd/tasks/Ubuntu.yml:1
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py
<dev-ansible-xenial.cust.basecom.de> ESTABLISH SSH CONNECTION FOR USER: root
<dev-ansible-xenial.cust.basecom.de> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/davidak/.ansible/cp/ansible-ssh-%h-%p-%r dev-ansible-xenial.cust.basecom.de '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359 `" && echo ansible-tmp-1480590369.65-131741054149359="` echo $HOME/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359 `" ) && sleep 0'"'"''
<dev-ansible-xenial.cust.basecom.de> PUT /tmp/tmpiAy8LQ TO /root/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359/apt.py
<dev-ansible-xenial.cust.basecom.de> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/davidak/.ansible/cp/ansible-ssh-%h-%p-%r '[dev-ansible-xenial.cust.basecom.de]'
<dev-ansible-xenial.cust.basecom.de> ESTABLISH SSH CONNECTION FOR USER: root
<dev-ansible-xenial.cust.basecom.de> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/davidak/.ansible/cp/ansible-ssh-%h-%p-%r dev-ansible-xenial.cust.basecom.de '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359/ /root/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359/apt.py && sleep 0'"'"''
<dev-ansible-xenial.cust.basecom.de> ESTABLISH SSH CONNECTION FOR USER: root
<dev-ansible-xenial.cust.basecom.de> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/davidak/.ansible/cp/ansible-ssh-%h-%p-%r -tt dev-ansible-xenial.cust.basecom.de '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359/apt.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1480590369.65-131741054149359/" > /dev/null 2>&1 && sleep 0'"'"''
failed: [dev-ansible-xenial.cust.basecom.de] (item=[u'bacula-common=5.2.*', u'bacula-fd=5.2.*']) => {
"cache_update_time": 1480589184,
"cache_updated": false,
"failed": true,
"invocation": {
"module_args": {
"allow_unauthenticated": false,
"autoremove": false,
"cache_valid_time": 3600,
"deb": null,
"default_release": null,
"dpkg_options": "force-confdef,force-confold",
"force": false,
"install_recommends": null,
"name": [
"bacula-common=5.2.*",
"bacula-fd=5.2.*"
],
"only_upgrade": false,
"package": [
"bacula-common=5.2.*",
"bacula-fd=5.2.*"
],
"purge": false,
"state": "present",
"update_cache": true,
"upgrade": null
},
"module_name": "apt"
},
"item": [
"bacula-common=5.2.*",
"bacula-fd=5.2.*"
],
"msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'bacula-common=5.2.*' 'bacula-fd=5.2.*'' failed: E: Packages were downgraded and -y was used without --allow-downgrades.\n",
"stderr": "E: Packages were downgraded and -y was used without --allow-downgrades.\n",
"stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSuggested packages:\n bacula-doc bacula-traymonitor\nThe following packages will be DOWNGRADED:\n bacula-common bacula-fd\n0 upgraded, 0 newly installed, 2 downgraded, 0 to remove and 0 not upgraded.\n",
"stdout_lines": [
"Reading package lists...",
"Building dependency tree...",
"Reading state information...",
"Suggested packages:",
" bacula-doc bacula-traymonitor",
"The following packages will be DOWNGRADED:",
" bacula-common bacula-fd",
"0 upgraded, 0 newly installed, 2 downgraded, 0 to remove and 0 not upgraded."
]
}
to retry, use: --limit @/home/davidak/code/ansible/site.retry
PLAY RECAP *********************************************************************
dev-ansible-xenial.cust.basecom.de : ok=49 changed=0 unreachable=0 failed=1
```
| main | apt packages were downgraded and y was used without allow downgrades issue type bug report component name apt ansible version log file at var log ansible log is not writeable and we cannot create it aborting ansible config file home davidak code ansible ansible cfg configured module search path default w o overrides configuration os environment controller elementary os loki based on ubuntu lts host ubuntu server lts summary apt module don t downgrade package steps to reproduce deploy this playbook name install bacula fd apt name item update cache yes cache valid time apt update cache valid time default state present with items bacula common bacula fd notify hold bacula fd package on xenial expected results ansible instruct apt to downgrade the package to the specified version it currently is on the server actual results task task path home davidak code ansible roles basecom bacula fd tasks ubuntu yml using module file usr lib dist packages ansible modules core packaging os apt py establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home davidak ansible cp ansible ssh h p r dev ansible xenial cust basecom de bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp apt py ssh exec sftp b vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home davidak ansible cp ansible ssh h p r establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased 
publickey o passwordauthentication no o user root o connecttimeout o controlpath home davidak ansible cp ansible ssh h p r dev ansible xenial cust basecom de bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp apt py sleep establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home davidak ansible cp ansible ssh h p r tt dev ansible xenial cust basecom de bin sh c usr bin python root ansible tmp ansible tmp apt py rm rf root ansible tmp ansible tmp dev null sleep failed item cache update time cache updated false failed true invocation module args allow unauthenticated false autoremove false cache valid time deb null default release null dpkg options force confdef force confold force false install recommends null name bacula common bacula fd only upgrade false package bacula common bacula fd purge false state present update cache true upgrade null module name apt item bacula common bacula fd msg usr bin apt get y o dpkg options force confdef o dpkg options force confold install bacula common bacula fd failed e packages were downgraded and y was used without allow downgrades n stderr e packages were downgraded and y was used without allow downgrades n stdout reading package lists nbuilding dependency tree nreading state information nsuggested packages n bacula doc bacula traymonitor nthe following packages will be downgraded n bacula common bacula fd upgraded newly installed downgraded to remove and not upgraded n stdout lines reading package lists building dependency tree reading state information suggested packages bacula doc bacula traymonitor the following packages will be downgraded bacula common bacula fd upgraded newly installed downgraded to remove and not upgraded to retry use limit home davidak code ansible site 
retry play recap dev ansible xenial cust basecom de ok changed unreachable failed | 1 |
4,525 | 23,530,750,031 | IssuesEvent | 2022-08-19 15:05:42 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Bug]: TileGroup is setting internal state even if valueSelected is provided | type: bug 🐛 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | ### Package
carbon-components-react
### Browser
Chrome
### Package version
7.57.4
### React version
^17.0.2
### Description
I have a TileGroup wrapping two RadioTile components and am passing valueSelected into the TileGroup. When the user selects the tile that is not currently selected, I show a modal asking them to confirm the new selection. The modal's cancel button does not change the value passed into valueSelected. The modal's submit button confirms the change, updates my component's internal state, and passes the new valueSelected to the TileGroup.
However, the TileGroup code always changes the internal state of the TileGroup to have the new selection.
```
handleChange = (newSelection, value, evt) => {
if (newSelection !== this.state.selected) {
this.setState({ selected: newSelection });
this.props.onChange(newSelection, this.props.name, evt);
}
};
```
https://github.com/carbon-design-system/carbon/blame/main/packages/react/src/components/TileGroup/TileGroup.js#L107
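A common fix for this class of bug is to treat the component as controlled whenever the prop is supplied, so internal state never overrides it. A minimal sketch of that precedence rule (hypothetical helpers, not Carbon's actual code):

```javascript
// Hypothetical helpers (not Carbon's actual implementation) illustrating the
// controlled-component precedence rule: when the parent supplies
// valueSelected, it must win over any internal state.
function resolveSelected(valueSelectedProp, internalSelected) {
  const isControlled = valueSelectedProp !== undefined;
  return isControlled ? valueSelectedProp : internalSelected;
}

// A controlled TileGroup would only notify the parent on change and skip
// updating local state, leaving the parent to decide what renders next.
function handleChange(newSelection, props, setState, onChange) {
  if (props.valueSelected === undefined) {
    setState({ selected: newSelection }); // uncontrolled: keep local state
  }
  onChange(newSelection, props.name);
}
```

With this split, cancelling the modal simply means the parent never updates valueSelected, and the tile stays on the old selection.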
### Reproduction/example
https://codesandbox.io/s/sleepy-tree-98cyiz?file=/src/index.js
### Steps to reproduce
Notice that the onChange for the TileGroup is setting the state as a hard-coded value, 'first'. That state is always passing in 'first' to valueSelected on TileGroup. When you select the other tile, it is showing as selected. The internal state of TileGroup is updating regardless of what I pass in valueSelected.
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [Bug]: TileGroup is setting internal state even if valueSelected is provided - ### Package
carbon-components-react
### Browser
Chrome
### Package version
7.57.4
### React version
^17.0.2
### Description
I have a TileGroup wrapping two RadioTile components and am passing valueSelected into the TileGroup. When the user selects the tile that is not currently selected, I show a modal asking them to confirm the new selection. The modal's cancel button does not change the value passed into valueSelected. The modal's submit button confirms the change, updates my component's internal state, and passes the new valueSelected to the TileGroup.
However, the TileGroup code always changes the internal state of the TileGroup to have the new selection.
```
handleChange = (newSelection, value, evt) => {
if (newSelection !== this.state.selected) {
this.setState({ selected: newSelection });
this.props.onChange(newSelection, this.props.name, evt);
}
};
```
https://github.com/carbon-design-system/carbon/blame/main/packages/react/src/components/TileGroup/TileGroup.js#L107
### Reproduction/example
https://codesandbox.io/s/sleepy-tree-98cyiz?file=/src/index.js
### Steps to reproduce
Notice that the onChange for the TileGroup is setting the state as a hard-coded value, 'first'. That state is always passing in 'first' to valueSelected on TileGroup. When you select the other tile, it is showing as selected. The internal state of TileGroup is updating regardless of what I pass in valueSelected.
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | tilegroup is setting internal state even if valueselected is provided package carbon components react browser chrome package version react version description i have a tilegroup wrapping two radiotile components i am passing valueselected into tilegroup when selecting the tile that is not currently selected i am showing a modal to confirm they want to continue with the new selection the modal s cancel does not change the value passed into valueselected the submit button on the modal does confirm their change and my component s internal state changes and passes in the new valueselected to the tilegroup however the tilegroup code always changes the internal state of the tilegroup to have the new selection handlechange newselection value evt if newselection this state selected this setstate selected newselection this props onchange newselection this props name evt reproduction example steps to reproduce notice that the onchange for the tilegroup is setting the state as a hard coded value first that state is always passing in first to valueselected on tilegroup when you select the other tile it is showing as selected the internal state of tilegroup is updating regardless of what i pass in valueselected code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
507,158 | 14,679,929,001 | IssuesEvent | 2020-12-31 08:31:07 | k8smeetup/website-tasks | https://api.github.com/repos/k8smeetup/website-tasks | opened | /docs/concepts/scheduling-eviction/kube-scheduler.md | lang/zh priority/P0 sync/update version/master welcome | Source File: [/docs/concepts/scheduling-eviction/kube-scheduler.md](https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md)
Diff command reference:
```bash
# Compare the original document with the translated document to see what changed
git diff --no-index -- content/en/docs/concepts/scheduling-eviction/kube-scheduler.md content/zh/docs/concepts/scheduling-eviction/kube-scheduler.md
# Compare the original document across branches to see upstream changes
git diff release-1.19 master -- content/en/docs/concepts/scheduling-eviction/kube-scheduler.md
``` | 1.0 | /docs/concepts/scheduling-eviction/kube-scheduler.md - Source File: [/docs/concepts/scheduling-eviction/kube-scheduler.md](https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md)
Diff command reference:
```bash
# Compare the original document with the translated document to see what changed
git diff --no-index -- content/en/docs/concepts/scheduling-eviction/kube-scheduler.md content/zh/docs/concepts/scheduling-eviction/kube-scheduler.md
# Compare the original document across branches to see upstream changes
git diff release-1.19 master -- content/en/docs/concepts/scheduling-eviction/kube-scheduler.md
``` | non_main | docs concepts scheduling eviction kube scheduler md source file diff 命令参考 bash 查看原始文档与翻译文档更新差异 git diff no index content en docs concepts scheduling eviction kube scheduler md content zh docs concepts scheduling eviction kube scheduler md 跨分支持查看原始文档更新差异 git diff release master content en docs concepts scheduling eviction kube scheduler md | 0 |
171,253 | 20,957,543,190 | IssuesEvent | 2022-03-27 10:02:15 | AlexRogalskiy/java-patterns | https://api.github.com/repos/AlexRogalskiy/java-patterns | opened | CVE-2022-23646 (High) detected in next-11.1.4.tgz | security vulnerability | ## CVE-2022-23646 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>next-11.1.4.tgz</b></p></summary>
<p>The React Framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/next/-/next-11.1.4.tgz">https://registry.npmjs.org/next/-/next-11.1.4.tgz</a></p>
<p>Path to dependency file: /tilt_modules/tilt_inspector/package.json</p>
<p>Path to vulnerable library: /tilt_modules/tilt_inspector/node_modules/next/package.json</p>
<p>
Dependency Hierarchy:
- tilt-inspector-0.1.8.tgz (Root Library)
- :x: **next-11.1.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/java-patterns/commit/5c5074f1ebcf4d633b1abf0bab05b33c2caf9059">5c5074f1ebcf4d633b1abf0bab05b33c2caf9059</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Next.js is a React framework. Starting with version 10.0.0 and prior to version 12.1.0, Next.js is vulnerable to User Interface (UI) Misrepresentation of Critical Information. In order to be affected, the `next.config.js` file must have an `images.domains` array assigned and the image host assigned in `images.domains` must allow user-provided SVG. If the `next.config.js` file has `images.loader` assigned to something other than default, the instance is not affected. Version 12.1.0 contains a patch for this issue. As a workaround, change `next.config.js` to use a different `loader configuration` other than the default.
<p>Publish Date: 2022-02-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-23646>CVE-2022-23646</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23646">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23646</a></p>
<p>Release Date: 2022-02-17</p>
<p>Fix Resolution: next - 12.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-23646 (High) detected in next-11.1.4.tgz - ## CVE-2022-23646 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>next-11.1.4.tgz</b></p></summary>
<p>The React Framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/next/-/next-11.1.4.tgz">https://registry.npmjs.org/next/-/next-11.1.4.tgz</a></p>
<p>Path to dependency file: /tilt_modules/tilt_inspector/package.json</p>
<p>Path to vulnerable library: /tilt_modules/tilt_inspector/node_modules/next/package.json</p>
<p>
Dependency Hierarchy:
- tilt-inspector-0.1.8.tgz (Root Library)
- :x: **next-11.1.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/java-patterns/commit/5c5074f1ebcf4d633b1abf0bab05b33c2caf9059">5c5074f1ebcf4d633b1abf0bab05b33c2caf9059</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Next.js is a React framework. Starting with version 10.0.0 and prior to version 12.1.0, Next.js is vulnerable to User Interface (UI) Misrepresentation of Critical Information. In order to be affected, the `next.config.js` file must have an `images.domains` array assigned and the image host assigned in `images.domains` must allow user-provided SVG. If the `next.config.js` file has `images.loader` assigned to something other than default, the instance is not affected. Version 12.1.0 contains a patch for this issue. As a workaround, change `next.config.js` to use a different `loader configuration` other than the default.
<p>Publish Date: 2022-02-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-23646>CVE-2022-23646</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23646">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23646</a></p>
<p>Release Date: 2022-02-17</p>
<p>Fix Resolution: next - 12.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in next tgz cve high severity vulnerability vulnerable library next tgz the react framework library home page a href path to dependency file tilt modules tilt inspector package json path to vulnerable library tilt modules tilt inspector node modules next package json dependency hierarchy tilt inspector tgz root library x next tgz vulnerable library found in head commit a href found in base branch master vulnerability details next js is a react framework starting with version and prior to version next js is vulnerable to user interface ui misrepresentation of critical information in order to be affected the next config js file must have an images domains array assigned and the image host assigned in images domains must allow user provided svg if the next config js file has images loader assigned to something other than default the instance is not affected version contains a patch for this issue as a workaround change next config js to use a different loader configuration other than the default publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution next step up your open source security game with whitesource | 0 |
307,485 | 26,536,629,738 | IssuesEvent | 2023-01-19 16:05:14 | US-EPA-CAMD/easey-testing | https://api.github.com/repos/US-EPA-CAMD/easey-testing | opened | Linearity Global Smoke Test | Test Automation | ## Context
- navigate to QA in global
- Select a configuration
- View a linearity test record in the view modal | 1.0 | Linearity Global Smoke Test - ## Context
- navigate to QA in global
- Select a configuration
- View a linearity test record in the view modal | non_main | linearity global smoke test context navigate to qa in global select a configuration view a linearity test record in the view modal | 0 |
4,038 | 2,812,220,908 | IssuesEvent | 2015-05-18 07:01:49 | hashicorp/terraform | https://api.github.com/repos/hashicorp/terraform | closed | Missing documentation for terraform_remote_state resource. | documentation | This is a great resource to have! My `terraform.tfvars` was getting out of control. | 1.0 | Missing documentation for terraform_remote_state resource. - This is a great resource to have! My `terraform.tfvars` was getting out of control. | non_main | missing documentation for terraform remote state resource this is a great resource to have my terraform tfvars was getting out of control | 0 |
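As a sketch of what such documentation might cover — using the modern data-source form with placeholder backend values, since the 2015-era resource form differed:

```hcl
# Illustrative only: modern Terraform exposes remote state as a data source
# (early versions used a terraform_remote_state *resource*). Bucket, key,
# and region below are placeholder assumptions.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# Outputs from the remote state can then replace hand-copied tfvars entries,
# e.g. data.terraform_remote_state.network.outputs.vpc_id
```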
5,549 | 27,776,961,306 | IssuesEvent | 2023-03-16 17:54:54 | cosmos/ibc-rs | https://api.github.com/repos/cosmos/ibc-rs | closed | [ICS02] Replace specific `verify_functions` with generic `verify_membership` and `verify_non_membership` | A: breaking O: maintainability | <!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Please also ensure that this is not a duplicate issue :)
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Summary
In continuation of #404, part of #173, aiming toward https://github.com/cosmos/ibc/issues/684 implementation in IBC-rs
Replace specific verify_functions of `ClientState` trait with generic `verify_membership` and `verify_non_membership` methods
## Proposal
By separating the `verify_conn_delay_passed` process from the proof verification steps and isolating the concern, it has become apparent that the proof verification method in `ics02-clienstate` is carrying out a very similar task. With some minor refactoring, we can make use of generic interfaces and impose fewer implementation requirements on the builders.
| True | [ICS02] Replace specific `verify_functions` with generic `verify_membership` and `verify_non_membership` - <!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Please also ensure that this is not a duplicate issue :)
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Summary
In continuation of #404, part of #173, aiming toward https://github.com/cosmos/ibc/issues/684 implementation in IBC-rs
Replace specific verify_functions of `ClientState` trait with generic `verify_membership` and `verify_non_membership` methods
## Proposal
By separating the `verify_conn_delay_passed` process from the proof verification steps and isolating the concern, it has become apparent that the proof verification method in `ics02-clienstate` is carrying out a very similar task. With some minor refactoring, we can make use of generic interfaces and impose fewer implementation requirements on the builders.
| main | replace specific verify functions with generic verify membership and verify non membership ☺ v ✰ thanks for opening an issue ✰ v before smashing the submit button please review the template v please also ensure that this is not a duplicate issue ☺ summary in continuation of part of aiming toward implementation in ibc rs replace specific verify functions of clientstate trait with generic verify membership and verify non membership methods proposal by separating the verify conn delay passed process from the proof verification steps and isolating the concern it has become apparent that the proof verification method in clienstate is carrying out a very similar task with some minor refactoring we can make use of generic interfaces and impose fewer implementation requirements on the builders | 1 |
146,352 | 11,734,285,524 | IssuesEvent | 2020-03-11 09:03:55 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] testLeaderDisconnectionWithoutDisconnectEventDetectedQuickly failure | :Distributed/Cluster Coordination >test-failure | log:https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-unix-compatibility/os=ubuntu-18.04&&immutable/622/console
gradle:https://gradle-enterprise.elastic.co/s/olunhuu5qmlyk
reproduces locally
failure:
```
2> REPRODUCE WITH: ./gradlew ':server:test' --tests "org.elasticsearch.cluster.coordination.CoordinatorTests.testLeaderDisconnectionWithoutDisconnectEventDetectedQuickly" -Dtests.seed=30C7DB320E81F553 -Dtests.security.manager=true -Dtests.locale=es-VE -Dtests.timezone=Australia/Currie -Dcompiler.java=13
2> java.lang.AssertionError: node0 is a follower of node4
Expected: is <FOLLOWER>
but: was <CANDIDATE>
at __randomizedtesting.SeedInfo.seed([30C7DB320E81F553:2208AF1950E0880]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.elasticsearch.cluster.coordination.AbstractCoordinatorTestCase$Cluster.stabilise(AbstractCoordinatorTestCase.java:530)
at org.elasticsearch.cluster.coordination.AbstractCoordinatorTestCase$Cluster.stabilise(AbstractCoordinatorTestCase.java:490)
at org.elasticsearch.cluster.coordination.CoordinatorTests.testLeaderDisconnectionWithoutDisconnectEventDetectedQuickly(CoordinatorTests.java:409)
2> NOTE: leaving temporary files on disk at: /home/hendrik/work/git-elastic-prod/elasticsearch/server/build/testrun/test/temp/org.elasticsearch.cluster.coordination.CoordinatorTests_30C7DB320E81F553-003
2> NOTE: test params are: codec=Lucene84, sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@407d2d52), locale=es-VE, timezone=Australia/Currie
2> NOTE: Linux 5.3.0-40-generic amd64/AdoptOpenJDK 13.0.2 (64-bit)/cpus=16,threads=1,free=464710688,total=536870912
2> NOTE: All tests run in this JVM: [CoordinatorTests]
```
repro:
```
./gradlew ':server:test' --tests "org.elasticsearch.cluster.coordination.CoordinatorTests.testLeaderDisconnectionWithoutDisconnectEventDetectedQuickly" \
-Dtests.seed=30C7DB320E81F553 \
-Dtests.security.manager=true \
-Dtests.locale=es-VE \
-Dtests.timezone=Australia/Currie \
-Dcompiler.java=13
``` | 1.0 | [CI] testLeaderDisconnectionWithoutDisconnectEventDetectedQuickly failure - log:https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-unix-compatibility/os=ubuntu-18.04&&immutable/622/console
gradle:https://gradle-enterprise.elastic.co/s/olunhuu5qmlyk
reproduces locally
failure:
```
2> REPRODUCE WITH: ./gradlew ':server:test' --tests "org.elasticsearch.cluster.coordination.CoordinatorTests.testLeaderDisconnectionWithoutDisconnectEventDetectedQuickly" -Dtests.seed=30C7DB320E81F553 -Dtests.security.manager=true -Dtests.locale=es-VE -Dtests.timezone=Australia/Currie -Dcompiler.java=13
2> java.lang.AssertionError: node0 is a follower of node4
Expected: is <FOLLOWER>
but: was <CANDIDATE>
at __randomizedtesting.SeedInfo.seed([30C7DB320E81F553:2208AF1950E0880]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.elasticsearch.cluster.coordination.AbstractCoordinatorTestCase$Cluster.stabilise(AbstractCoordinatorTestCase.java:530)
at org.elasticsearch.cluster.coordination.AbstractCoordinatorTestCase$Cluster.stabilise(AbstractCoordinatorTestCase.java:490)
at org.elasticsearch.cluster.coordination.CoordinatorTests.testLeaderDisconnectionWithoutDisconnectEventDetectedQuickly(CoordinatorTests.java:409)
2> NOTE: leaving temporary files on disk at: /home/hendrik/work/git-elastic-prod/elasticsearch/server/build/testrun/test/temp/org.elasticsearch.cluster.coordination.CoordinatorTests_30C7DB320E81F553-003
2> NOTE: test params are: codec=Lucene84, sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@407d2d52), locale=es-VE, timezone=Australia/Currie
2> NOTE: Linux 5.3.0-40-generic amd64/AdoptOpenJDK 13.0.2 (64-bit)/cpus=16,threads=1,free=464710688,total=536870912
2> NOTE: All tests run in this JVM: [CoordinatorTests]
```
repro:
```
./gradlew ':server:test' --tests "org.elasticsearch.cluster.coordination.CoordinatorTests.testLeaderDisconnectionWithoutDisconnectEventDetectedQuickly" \
-Dtests.seed=30C7DB320E81F553 \
-Dtests.security.manager=true \
-Dtests.locale=es-VE \
-Dtests.timezone=Australia/Currie \
-Dcompiler.java=13
``` | non_main | testleaderdisconnectionwithoutdisconnecteventdetectedquickly failure log gradle reproduces locally failure reproduce with gradlew server test tests org elasticsearch cluster coordination coordinatortests testleaderdisconnectionwithoutdisconnecteventdetectedquickly dtests seed dtests security manager true dtests locale es ve dtests timezone australia currie dcompiler java java lang assertionerror is a follower of expected is but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org elasticsearch cluster coordination abstractcoordinatortestcase cluster stabilise abstractcoordinatortestcase java at org elasticsearch cluster coordination abstractcoordinatortestcase cluster stabilise abstractcoordinatortestcase java at org elasticsearch cluster coordination coordinatortests testleaderdisconnectionwithoutdisconnecteventdetectedquickly coordinatortests java note leaving temporary files on disk at home hendrik work git elastic prod elasticsearch server build testrun test temp org elasticsearch cluster coordination coordinatortests note test params are codec sim asserting org apache lucene search similarities assertingsimilarity locale es ve timezone australia currie note linux generic adoptopenjdk bit cpus threads free total note all tests run in this jvm repro gradlew server test tests org elasticsearch cluster coordination coordinatortests testleaderdisconnectionwithoutdisconnecteventdetectedquickly dtests seed dtests security manager true dtests locale es ve dtests timezone australia currie dcompiler java | 0 |
564 | 4,029,969,437 | IssuesEvent | 2016-05-18 12:48:11 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | opened | Indeed Jobs: overtriggering on "time in (city)" queries | Maintainer Input Requested Triggering | A DuckDuckGo user submitted feedback that this IA is triggering for searches about times in a specific city, such as "time in paoli." The results show part-time jobs.
------
IA Page: http://duck.co/ia/view/indeed_jobs
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @tagawa | True | Indeed Jobs: overtriggering on "time in (city)" queries - A DuckDuckGo user submitted feedback that this IA is triggering for searches about times in a specific city, such as "time in paoli." The results show part-time jobs.
------
IA Page: http://duck.co/ia/view/indeed_jobs
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @tagawa | main | indeed jobs overtriggering on time in city queries a duckduckgo user submitted feedback that this ia is triggering for searches about times in a specific city such as time in paoli the results show part time jobs ia page tagawa | 1 |
341,630 | 24,706,247,814 | IssuesEvent | 2022-10-19 19:20:39 | crescentpartha/CheatSheets-for-Developers | https://api.github.com/repos/crescentpartha/CheatSheets-for-Developers | closed | Add vscode-cheatsheet.md file for VSCode Keyboard Shortcuts | documentation enhancement good first issue hacktoberfest hacktoberfest-2022 | **A new feature? Please describe.**
> Add `vscode-cheatsheet.md` file.
**Describe the solution you'd like**
> Commonly used `VSCode Keyboard Shortcuts` are added to `boost your productivity`.
| 1.0 | Add vscode-cheatsheet.md file for VSCode Keyboard Shortcuts - **A new feature? Please describe.**
> Add `vscode-cheatsheet.md` file.
**Describe the solution you'd like**
> Commonly used `VSCode Keyboard Shortcuts` are added to `boost your productivity`.
| non_main | add vscode cheatsheet md file for vscode keyboard shortcuts a new feature please describe add vscode cheatsheet md file describe the solution you d like commonly used vscode keyboard shortcuts are added to boost your productivity | 0 |
5,336 | 26,924,628,224 | IssuesEvent | 2023-02-07 12:58:38 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Unable to debug golang function using dlv from AWS documentation (`GLIBC_2.32' not found) | stage/bug-repro maintainer/need-followup | ### Description
Unable to debug golang applications using sam local invoke following the documentation provided by AWS. The glibc used to build dlv doesn't seem to be compatible with the glibc in the docker image. sam build cannot build in a docker container, so we're not sure what we are supposed to do about this.
### Steps to reproduce
```
mkdir delve
GOARCH=amd64 GOOS=linux go build -o delve/dlv github.com/go-delve/delve/cmd/dlv
GOARCH=amd64 GOOS=linux go build -o dlv github.com/go-delve/delve/cmd/dlv
sam local invoke CrmEntityImportWorkerFunction -e unittestdata/crmentityimporter/events/110791.event --profile=tbrunomind -d 5986 --debugger-path delve/ --debug-args "-delveAPI=2" --debug"
```
### Observed result
```
local invoke command is called
Collected default values for parameters: {'ENV': 'dev'}
...
...
...
Private data, just list of ENV vars and template values
..
...
19 resources found in the template
Found Serverless function with name='CrmEntityImportWorkerFunction' and CodeUri='./crm-entity-import-worker/'
....
...
....
(Private data, just a list of all functions)
...
...
...
...
Found one Lambda function with name 'CrmEntityImportWorkerFunction'
Invoking crm-entity-import-worker (go1.x)
Environment variables overrides data is standard format
Loading AWS credentials from session with profile 'mywork'
Resolving code path. Cwd=/home/tbruno/Projects/GolandProjects/lambda-crm-integration, CodeUri=./crm-entity-import-worker/
Resolved absolute path to code is /home/tbruno/Projects/GolandProjects/lambda-crm-integration/crm-entity-import-worker
Code /home/tbruno/Projects/GolandProjects/lambda-crm-integration/crm-entity-import-worker is not a zip/jar file
Image was not found.
Building image...Adding custom GO Bootstrap to support debugging
.........................
Failed to download image with name amazon/aws-sam-cli-emulation-image-go1.x:debug-1.6.2
Failed to download a new amazon/aws-sam-cli-emulation-image-go1.x:debug-1.6.2 image. Invoking with the already downloaded image.
Mounting /home/tbruno/Projects/GolandProjects/lambda-crm-integration/crm-entity-import-worker as /var/task:ro,delegated inside runtime container
Setting up SIGTERM interrupt handler
/tmp/lambci_debug_files/dlv: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /tmp/lambci_debug_files/dlv)
```
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: ArchLinux/Manjaro
2. `sam --version`: SAM CLI, version 1.6.2 | True | Unable to debug golang function using dlv from AWS documentation (`GLIBC_2.32' not found) - ### Description
Unable to debug golang applications using sam local invoke following the documentation provided by AWS. The glibc used to build dlv doesn't seem to be compatible with the glibc in the docker image. sam build cannot build in a docker container, so we're not sure what we are supposed to do about this.
### Steps to reproduce
```
mkdir delve
GOARCH=amd64 GOOS=linux go build -o delve/dlv github.com/go-delve/delve/cmd/dlv
GOARCH=amd64 GOOS=linux go build -o dlv github.com/go-delve/delve/cmd/dlv
sam local invoke CrmEntityImportWorkerFunction -e unittestdata/crmentityimporter/events/110791.event --profile=tbrunomind -d 5986 --debugger-path delve/ --debug-args "-delveAPI=2" --debug"
```
### Observed result
```
local invoke command is called
Collected default values for parameters: {'ENV': 'dev'}
...
...
...
Private data, just list of ENV vars and template values
..
...
19 resources found in the template
Found Serverless function with name='CrmEntityImportWorkerFunction' and CodeUri='./crm-entity-import-worker/'
....
...
....
(Private data, just a list of all functions)
...
...
...
...
Found one Lambda function with name 'CrmEntityImportWorkerFunction'
Invoking crm-entity-import-worker (go1.x)
Environment variables overrides data is standard format
Loading AWS credentials from session with profile 'mywork'
Resolving code path. Cwd=/home/tbruno/Projects/GolandProjects/lambda-crm-integration, CodeUri=./crm-entity-import-worker/
Resolved absolute path to code is /home/tbruno/Projects/GolandProjects/lambda-crm-integration/crm-entity-import-worker
Code /home/tbruno/Projects/GolandProjects/lambda-crm-integration/crm-entity-import-worker is not a zip/jar file
Image was not found.
Building image...Adding custom GO Bootstrap to support debugging
.........................
Failed to download image with name amazon/aws-sam-cli-emulation-image-go1.x:debug-1.6.2
Failed to download a new amazon/aws-sam-cli-emulation-image-go1.x:debug-1.6.2 image. Invoking with the already downloaded image.
Mounting /home/tbruno/Projects/GolandProjects/lambda-crm-integration/crm-entity-import-worker as /var/task:ro,delegated inside runtime container
Setting up SIGTERM interrupt handler
/tmp/lambci_debug_files/dlv: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /tmp/lambci_debug_files/dlv)
```
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: ArchLinux/Manjaro
2. `sam --version`: SAM CLI, version 1.6.2 | main | unable to debug golang function using dlv from aws documentation glibc not found description unable to debug golang applications using sam local invoke following the documentation provided by aws glibc used to build dlv doesn t seem to be compat with the glibc in the docker image sam build cannot build in a docker container so not sure what we are suppose to do about this steps to reproduce mkdir delve goarch goos linux go build o delve dlv github com go delve delve cmd dlv goarch goos linux go build o dlv github com go delve delve cmd dlv sam local invoke crmentityimportworkerfunction e unittestdata crmentityimporter events event profile tbrunomind d debugger path delve debug args delveapi debug observed result local invoke command is called collected default values for parameters env dev private data just list of env vars and template values resources found in the template found serverless function with name crmentityimportworkerfunction and codeuri crm entity import worker private data just a list of all functions found one lambda function with name crmentityimportworkerfunction invoking crm entity import worker x environment variables overrides data is standard format loading aws credentials from session with profile mywork resolving code path cwd home tbruno projects golandprojects lambda crm integration codeuri crm entity import worker resolved absolute path to code is home tbruno projects golandprojects lambda crm integration crm entity import worker code home tbruno projects golandprojects lambda crm integration crm entity import worker is not a zip jar file image was not found building image adding custom go bootstrap to support debugging failed to download image with name amazon aws sam cli emulation image x debug failed to download a new amazon aws sam cli emulation image x debug image invoking with the already downloaded image mounting home tbruno projects golandprojects lambda crm integration crm 
entity import worker as var task ro delegated inside runtime container setting up sigterm interrupt handler tmp lambci debug files dlv libc so version glibc not found required by tmp lambci debug files dlv additional environment details ex windows mac amazon linux etc os archlinux manjaro sam version sam cli version | 1 |
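A workaround often suggested for this class of ``GLIBC_x.y' not found` errors — an assumption on our part, not something stated in the report above — is to build `dlv` with cgo disabled, so the binary is statically linked and never loads the container's older glibc at all. A minimal sketch, reusing the `delve/` output directory from the reproduction steps:

```shell
#!/usr/bin/env sh
# Hypothetical workaround sketch: with CGO_ENABLED=0 the Go toolchain
# produces a statically linked dlv that does not depend on the host's
# glibc, so the older libc inside the lambci container no longer matters.
# Assumes a local Go toolchain; the output directory defaults to "delve",
# the directory passed to --debugger-path in the reproduction steps.
build_static_dlv() {
  outdir="${1:-delve}"
  mkdir -p "$outdir"
  CGO_ENABLED=0 GOARCH=amd64 GOOS=linux \
    go build -o "$outdir/dlv" github.com/go-delve/delve/cmd/dlv
}

# Usage (on the host, before `sam local invoke ... --debugger-path delve/`):
#   build_static_dlv delve
```

Whether Delve still needs cgo on a given platform should be verified against its own build documentation; this is a sketch, not a confirmed fix.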
5,409 | 27,148,615,784 | IssuesEvent | 2023-02-16 22:20:13 | arcticicestudio/nord | https://api.github.com/repos/arcticicestudio/nord | opened | `nordtheme` organization migration | type-task context-workflow scope-maintainability | As part of the [“Northern Post — The state and roadmap of Nord“][2] announcement, this repository will be migrated to [the `nordtheme` GitHub organization][1].
This issue only tracks the actual move as well as preparations steps to do so. The detailed plan, including tasklists for actual tasks, will follow later on for all Nord repositories.
[1]: https://github.com/nordtheme
[2]: https://github.com/arcticicestudio/nord/issues/180
| True | `nordtheme` organization migration - As part of the [“Northern Post — The state and roadmap of Nord“][2] announcement, this repository will be migrated to [the `nordtheme` GitHub organization][1].
This issue only tracks the actual move as well as preparations steps to do so. The detailed plan, including tasklists for actual tasks, will follow later on for all Nord repositories.
[1]: https://github.com/nordtheme
[2]: https://github.com/arcticicestudio/nord/issues/180
| main | nordtheme organization migration as part of the announcement this repository will be migrated to this issue only tracks the actual move as well as preparations steps to do so the detailed plan including tasklists for actual tasks will follow later on for all nord repositories | 1 |
117,923 | 9,965,453,726 | IssuesEvent | 2019-07-08 08:48:41 | ubtue/DatenProbleme | https://api.github.com/repos/ubtue/DatenProbleme | closed | ISSN 2150-9301 Religion and society : advances in research Abstract und Keywords | Zotero_AUTO ready for testing | The articles display an abstract and keywords.
https://www.berghahnjournals.com/view/journals/religion-and-society/9/1/arrs090103.xml
Neither is transferred.
 | 1.0 | ISSN 2150-9301 Religion and society : advances in research Abstract und Keywords - The articles display an abstract and keywords.
https://www.berghahnjournals.com/view/journals/religion-and-society/9/1/arrs090103.xml
Neither is transferred.
 | non_main | issn religion and society advances in research abstract and keywords the articles display an abstract and keywords neither is transferred | 0
1,524 | 6,572,215,785 | IssuesEvent | 2017-09-11 00:09:29 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | lxc_container: add a way to wait for the container to get an IP address | affects_2.1 bug_report cloud waiting_on_maintainer | ##### Issue Type:
- Bug Report
##### Plugin Name:
lxc_container
##### Ansible Version:
```
ansible 2.1.0 (devel bb5a11d079) last updated 2016/03/08 17:47:42 (GMT +300)
lib/ansible/modules/core: (detached HEAD 81cc84ab11) last updated 2016/03/08 18:34:46 (GMT +300)
lib/ansible/modules/extras: (detached HEAD b51efc51bc) last updated 2016/03/08 15:44:14 (GMT +300)
config file = /home/elnur/proj/doccafe/ansible.cfg
configured module search path = Default w/o overrides
```
##### Environment:
Ubuntu 14.04.4
##### Summary:
When creating a container, the module returns before an IP address gets assigned and hence the registered variable doesn't have the IP address further tasks could use to provision the container. Need some kind of `wait` option in the module.
##### Steps To Reproduce:
<!-- For bugs, please show exactly how to reproduce the problem.
For new features, show how the feature would be used. -->
```
- hosts: localhost
connection: local
become: yes
tasks:
- name: create database server
lxc_container:
name: database
template: ubuntu
state: started
container_config:
- lxc.start.auto = 1
register: database
- debug: var=database
```
My workaround is using the `container_command` option:
```
container_command: |
while [[ ! $(ifconfig eth0) =~ "inet addr:" ]]; do
sleep .5
done
```
It works _most of the time_, but sometimes our tests on CI fail because of the `ips` list being empty. I think the reason here is that the `container_command` gets executed first and then `container_config` gets executed second and that forces the container to restart and lose the IP address for a moment. That's just an assumption and might be wrong, but if it is right, the wait check should kick in after the config change as well. My workaround doesn't work here because it gets executed before the config change.
##### Expected Results:
```
"database": {
"changed": true,
"lxc_container": {
"init_pid": 31720,
"interfaces": [
"eth0",
"lo"
],
"ips": [
"10.0.3.202" # ← here it is
],
"state": "running"
}
}
```
##### Actual Results:
```
"database": {
"changed": true,
"lxc_container": {
"init_pid": 31720,
"interfaces": [
"eth0",
"lo"
],
"ips": [],
"state": "running"
}
}
```
| True | lxc_container: add a way to wait for the container to get an IP address - ##### Issue Type:
- Bug Report
##### Plugin Name:
lxc_container
##### Ansible Version:
```
ansible 2.1.0 (devel bb5a11d079) last updated 2016/03/08 17:47:42 (GMT +300)
lib/ansible/modules/core: (detached HEAD 81cc84ab11) last updated 2016/03/08 18:34:46 (GMT +300)
lib/ansible/modules/extras: (detached HEAD b51efc51bc) last updated 2016/03/08 15:44:14 (GMT +300)
config file = /home/elnur/proj/doccafe/ansible.cfg
configured module search path = Default w/o overrides
```
##### Environment:
Ubuntu 14.04.4
##### Summary:
When creating a container, the module returns before an IP address gets assigned and hence the registered variable doesn't have the IP address further tasks could use to provision the container. Need some kind of `wait` option in the module.
##### Steps To Reproduce:
<!-- For bugs, please show exactly how to reproduce the problem.
For new features, show how the feature would be used. -->
```
- hosts: localhost
connection: local
become: yes
tasks:
- name: create database server
lxc_container:
name: database
template: ubuntu
state: started
container_config:
- lxc.start.auto = 1
register: database
- debug: var=database
```
My workaround is using the `container_command` option:
```
container_command: |
while [[ ! $(ifconfig eth0) =~ "inet addr:" ]]; do
sleep .5
done
```
It works _most of the time_, but sometimes our tests on CI fail because of the `ips` list being empty. I think the reason here is that the `container_command` gets executed first and then `container_config` gets executed second and that forces the container to restart and lose the IP address for a moment. That's just an assumption and might be wrong, but if it is right, the wait check should kick in after the config change as well. My workaround doesn't work here because it gets executed before the config change.
##### Expected Results:
```
"database": {
"changed": true,
"lxc_container": {
"init_pid": 31720,
"interfaces": [
"eth0",
"lo"
],
"ips": [
"10.0.3.202" # ← here it is
],
"state": "running"
}
}
```
##### Actual Results:
```
"database": {
"changed": true,
"lxc_container": {
"init_pid": 31720,
"interfaces": [
"eth0",
"lo"
],
"ips": [],
"state": "running"
}
}
```
| main | lxc container add a way to wait for the container to get an ip address issue type bug report plugin name lxc container ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file home elnur proj doccafe ansible cfg configured module search path default w o overrides environment ubuntu summary when creating a container the module returns before an ip address gets assigned and hence the registered variable doesn t have the ip address further tasks could use to provision the container need some kind of wait option in the module steps to reproduce for bugs please show exactly how to reproduce the problem for new features show how the feature would be used hosts localhost connection local become yes tasks name create database server lxc container name database template ubuntu state started container config lxc start auto register database debug var database my workaround is using the container command option container command while do sleep done it works most of the time but sometimes our tests on ci fail because of the ips list being empty i think the reason here is that the container command gets executed first and then container config gets executed second and that forces the container to restart and lose the ip address for a moment that s just an assumption and might be wrong but if it is right the wait check should kick in after the config change as well my workaround doesn t work here because it gets executed before the config change expected results database changed true lxc container init pid interfaces lo ips ← here it is state running actual results database changed true lxc container init pid interfaces lo ips state running | 1 |
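The requested `wait` option boils down to polling with a timeout, rather than the unbounded `while` loop in the `container_command` workaround above. A generic, hypothetical sketch of that behaviour as a standalone helper (the function name, defaults, and the `lxc-info` example are ours, not the module's):

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the requested "wait" behaviour: poll a command
# until its output matches a pattern, giving up after a timeout instead
# of spinning forever like the container_command workaround above.
wait_for_match() {
  cmd="$1"; pattern="$2"; timeout="${3:-30}"
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    # Intentional word splitting of $cmd so arguments are passed through.
    if $cmd 2>/dev/null | grep -q "$pattern"; then
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}

# Example: wait up to 60s for the container to report an IPv4 address.
#   wait_for_match "lxc-info -n database -iH" "[0-9]" 60
```

Run from the host, this would also cover the case where the config change restarts the container and the address briefly disappears, since the helper can simply be invoked again after the restart.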
505,917 | 14,654,543,254 | IssuesEvent | 2020-12-28 08:56:42 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | dbtagriculture.bihar.gov.in - site is not usable | browser-fenix engine-gecko ml-needsdiagnosis-false priority-normal | <!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64306 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://dbtagriculture.bihar.gov.in/
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/c10d202a-cf36-4387-8e35-c38b1481e96d.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201220193140</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/0d7682a4-1e4c-44d6-9b66-6c32bb9fee51)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | dbtagriculture.bihar.gov.in - site is not usable - <!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/64306 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://dbtagriculture.bihar.gov.in/
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/c10d202a-cf36-4387-8e35-c38b1481e96d.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201220193140</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/0d7682a4-1e4c-44d6-9b66-6c32bb9fee51)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | dbtagriculture bihar gov in site is not usable url browser version firefox mobile operating system android tested another browser yes other problem type site is not usable description page not loading correctly steps to reproduce view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 0 |
5,800 | 30,717,080,527 | IssuesEvent | 2023-07-27 13:41:14 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Bug]: TextInput focus lost when user is typing and type change from "number" to "text" | type: bug 🐛 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | ### Package
carbon-components, carbon-components-react
### Browser
Chrome
### Package version
"carbon-components": "10.58.7", "carbon-components-react": "7.59.8",
### React version
16.14.0
### Description
We have a situation in an IBM product where, whenever the user exceeds the number limit in a TextInput with type "number", we change the type to "text", but the cursor then jumps to the start of the input while the user is typing, which is confusing.
I created a sandbox. Try typing something; after 3 seconds the cursor in the input will jump to the start.
https://codesandbox.io/s/textinput-focus-lost-when-type-change-vjclgv?file=/index.js
### Reproduction/example
https://codesandbox.io/s/textinput-focus-lost-when-type-change-vjclgv?file=/index.js
### Steps to reproduce
Try typing something; after 3 seconds the cursor in the input will jump to the start.
https://github.com/carbon-design-system/carbon/assets/23103157/314d2ac6-94ec-48a1-a192-bc8b852b36bc
### Suggested Severity
None
### Application/PAL
IBM product
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [ ] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [Bug]: TextInput focus lost when user is typing and type change from "number" to "text" | main | 1 |
8,928 | 6,037,262,030 | IssuesEvent | 2017-06-09 18:13:41 | orbeon/orbeon-forms | https://api.github.com/repos/orbeon/orbeon-forms | opened | Repeated grid: consider removing/making menu optional | Form Runner Usability XBL Components | The menu was introduced so rows could be moved. But menus are not very efficient to use.
With #2461, moving can be done with drag and drop, and we should also consider keyboard shortcuts.
With that, moves are handled, and we are left with:
- insert after
- remove
These two could be simple icons on each row.
Inserting before can be handled by inserting after and then moving. | True | Repeated grid: consider removing/making menu optional | non_main | 0 |
988 | 4,756,341,225 | IssuesEvent | 2016-10-24 13:45:39 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Fix minor typos in modules | affects_2.3 docs_report networking waiting_on_maintainer | ##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
ios_template
The ios_template module has the following text:
```
Deprecated in 2.2. Use eos_config instead
```
I'm guessing this wasn't an attempt to sell more Arista switches :-) | True | Fix minor typos in modules | main | 1 |
272,534 | 8,514,544,657 | IssuesEvent | 2018-10-31 18:52:17 | gitcoinco/web | https://api.github.com/repos/gitcoinco/web | opened | Duplicated Kudos badges in the Profile section | Gitcoin Kudos bug priority: low | **Describe the bug**
This is different from https://github.com/gitcoinco/web/issues/2590. There seem to be duplicates of the Kudos I've sent to members of the Gitcoin team. When hovering over them, one instance has details and the other has none.
For the "Out of this World Programmer" Kudos I sent to Mark, the duplicate instance with details is completely missing.
**To Reproduce**
1. Send a kudos to a Gitcoin team member.
2. Check your profile page.
3. Hover over duplicates.
**Expected behavior**
There should only be one instance of the Kudos I send to others, with details on the hover-over.
**Screenshots**

**Live**

**Desktop:**
- OS: macOS High Sierra 10.13.4
- Browser: Google Chrome
- Browser Version: 70.0.3538.77 (Official Build) (64-bit) | 1.0 | Duplicated Kudos badges in the Profile section | non_main | 0 |
1,401 | 6,025,460,174 | IssuesEvent | 2017-06-08 08:45:52 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Feature: Make Subversion module better behaved with local modified files | affects_2.0 feature_idea waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
subversion
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Stock
##### OS / ENVIRONMENT
Ansible server: ubuntu server 14.04LTS
target(s): ubuntu server 14.04LTS
##### SUMMARY
This is a feature request. Currently, when doing an SVN update using the subversion toolset, an `update` will leave locally modified files alone and update the rest.
With Ansible, using the subversion module, the documentation says that if the repository exists, then the play will update the files, but if there are locally modified files, it will **fail**. Alternatively, we can set the `force` option and it will discard the local modifications.
This behavior is not consistent with the SVN tool set. Update has a specific meaning and behavior. The force option essentially is a `revert` and `update`.
I propose that the module behave more like the SVN tool set: let an update be an update, a checkout be a checkout, and a revert be a revert.
I do, however, find it useful to be able to know that there are local mods, so maybe an expanded option set would be best.
##### STEPS TO REPRODUCE
Instead of `force`, use `revert`.
`subversion: repo=https://svnserver/svn/mob/trunk dest=/var/site-roots/mob revert=true`
This option would be the same as `svn revert -R <path>` being issued and then an `update`.
`subversion: repo=https://svnserver/svn/mob/trunk dest=/var/site-roots/mob`
With no options specified, it would be like a `svn update <path>` if the repo exists, or a `svn co <path>` if the repo doesn't exist.
If the repo does exist and there are local mods, then the update could issue warning or info text to the Ansible user.
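As a hedged sketch of how such a warning path could detect local mods (a hypothetical helper, not the module's actual code — it assumes the standard `svn status` output format, where locally modified files carry an `M` in the first column):

```python
def modified_paths(svn_status_output):
    """Return paths that `svn status` reports as locally modified ('M' flag)."""
    return [line.split(None, 1)[1]
            for line in svn_status_output.splitlines()
            if line.startswith('M')]

# Example `svn status` output: two modified files and one untracked file.
sample = "M       src/app.c\n?       notes.txt\nM       README\n"
print(modified_paths(sample))  # ['src/app.c', 'README']
```

The module could then emit a warning listing these paths instead of failing outright.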
##### EXPECTED RESULTS
**instead of**
`fatal: [www.local]: FAILED! => {"changed": false, "failed": true, "msg": "ERROR: modified files exist in the repository."}`
**Maybe:**
`info: [www.local]: INFO! => {"changed": true, "failed": false, "msg": "WARNING: modified files exist in the repository."}`
and of course the files would have been updated.
| True | Feature: Make Subversion module better behaved with local modified files | main | 1 |
457,081 | 13,151,469,280 | IssuesEvent | 2020-08-09 16:54:35 | opencv/opencv | https://api.github.com/repos/opencv/opencv | closed | (python) opencv and theano conflict | bug category: gpu/cuda (contrib) category: ocl category: python bindings priority: low | When importing theano (a machine learning library that uses gpu for computation), opencv crashes in warpAffine:
``` python
import numpy as np
import cv2
import theano
def _rotate(img, pose):
    center = np.mean(pose, axis=0, dtype=int)
    angle = np.random.rand() * 20 - 10
    r = cv2.getRotationMatrix2D(tuple(center), angle, 1.0)
    print("tadaaaa")
    img = cv2.warpAffine(img, r, (img.shape[1], img.shape[0]))
    print("not tadaaa")
    pose = np.dot((pose - center[None, :]), np.transpose(r[:, :2])) + center
    return img, pose

def main():
    img = cv2.imread("picture.jpg")
    pose = np.array([[450, 250],
                     [350, 250],
                     [400, 350]])
    img2, pose2 = _rotate(img, pose)
    cv2.drawMarker(img, (int(pose[0, 0]), int(pose[0, 1])), (0, 0, 255))
    cv2.drawMarker(img, (int(pose[1, 0]), int(pose[1, 1])), (0, 0, 255))
    cv2.drawMarker(img2, (int(pose2[0, 0]), int(pose2[0, 1])), (0, 0, 255))
    cv2.drawMarker(img2, (int(pose2[1, 0]), int(pose2[1, 1])), (0, 0, 255))
    cv2.namedWindow('preview')
    cv2.imshow('preview', img)
    cv2.waitKey()
    cv2.imshow('preview', img2)
    cv2.waitKey()
    cv2.destroyAllWindows()

main()
```
```
python test.py
Using gpu device 0: GeForce GTX 980 Ti (CNMeM is enabled with initial size: 80.0% of memory, cuDNN 5005)
tadaaaa
zsh: segmentation fault (core dumped) python test.py
```
I have OpenCV 3.1.0 (build script [here](https://git.archlinux.org/svntogit/packages.git/tree/trunk/PKGBUILD?h=packages/opencv)). I have strictly no idea how to debug this, but I can run some tests for you if you tell me how.
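As an aside, the pose part of `_rotate` is plain 2-D affine math and can be sanity-checked without cv2 or theano. A stdlib-only sketch that rebuilds the 2x3 matrix layout documented for `cv2.getRotationMatrix2D` and applies it to a point (the center/angle values here are made up, just to isolate the geometry from the crash):

```python
import math

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    # Same 2x3 layout as cv2.getRotationMatrix2D:
    #   [[ a, b, (1-a)*cx - b*cy],
    #    [-b, a, b*cx + (1-a)*cy]]
    a = scale * math.cos(math.radians(angle_deg))
    b = scale * math.sin(math.radians(angle_deg))
    cx, cy = center
    return [[a, b, (1 - a) * cx - b * cy],
            [-b, a, b * cx + (1 - a) * cy]]

def transform_point(r, p):
    # Apply the 2x3 affine matrix to a single (x, y) point.
    x, y = p
    return (r[0][0] * x + r[0][1] * y + r[0][2],
            r[1][0] * x + r[1][1] * y + r[1][2])

r = rotation_matrix_2d((0, 0), 90.0)
print(transform_point(r, (1.0, 0.0)))  # ~(0.0, -1.0) in image coordinates (y down)
```

This matches the `np.dot((pose - center), r[:, :2].T) + center` line in the script, so the pose transform itself is fine; the crash is entirely in `warpAffine`.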
| 1.0 | (python) opencv and theano conflict | non_main | 0 |
546,361 | 16,010,287,113 | IssuesEvent | 2021-04-20 09:41:02 | pombase/canto | https://api.github.com/repos/pombase/canto | closed | Use Greek symbols for genes in genotype table on management page | FlyBase genotype management low priority | Currently it shows gene names like "beta4GalNAcTB" instead of "β4GalNAcTB". | 1.0 | Use Greek symbols for genes in genotype table on management page | non_main | 0 |
595,863 | 18,076,427,437 | IssuesEvent | 2021-09-21 10:23:39 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | stm32h747i_disco M4 not working following merge of 9fa5437447712eece9c88e728ac05ac10fb01c4a | bug priority: medium platform: STM32 Regression | **Describe the bug**
Since the merge of 9fa5437447712eece9c88e728ac05ac10fb01c4a, the blinky sample no longer works on the stm32h747i_disco_m4 target.
**To Reproduce**
cd zephyr
west build -p -b stm32h747i_disco_m4 samples/basic/blinky
west flash
**Expected behavior**
Sample should work.
**Environment (please complete the following information):**
- OS: Linux
- SDK 0.13.0
- SHA1: Starting 9fa5437447712eece9c88e728ac05ac10fb01c4a
**Additional context**
Both procedures below have a positive impact:
- Run following command on m7 core: `west debug` `continue` > Blink starts
- Disable MPU and SERIAL (CONFIG_ARM_MPU=n, CONFIG_SERIAL=n) > Blink starts
Potentially linked to https://github.com/zephyrproject-rtos/zephyr/issues/37827.
Although the issue appears after 9fa5437447712eece9c88e728ac05ac10fb01c4a, I don't think this change has a direct impact; rather, it reveals an already existing, silent issue.
| 1.0 | stm32h747i_disco M4 not working following merge of 9fa5437447712eece9c88e728ac05ac10fb01c4a | non_main | 0 |
605,826 | 18,740,878,954 | IssuesEvent | 2021-11-04 13:28:58 | Edgeryders-Participio/multi-dreams | https://api.github.com/repos/Edgeryders-Participio/multi-dreams | closed | Fix issue with apollo client/cache (currentOrg and currentOrgMember goes null on client side transition) | Priority: 1 (now - within 1 month) | The `TOP_LEVEL_QUERY` in `ui/pages/_app.js` is supposed to fetch the basics needed in many places, such as `currentOrgMember` and `currentOrg`, but since switching to Apollo Client 3 there seem to be some issues with this cache: when doing a client-side transition, these values go null.
Might need to look at the apollo client setup. | 1.0 | Fix issue with apollo client/cache (currentOrg and currentOrgMember goes null on client side transition) | non_main | 0 |
621,574 | 19,591,762,803 | IssuesEvent | 2022-01-05 13:45:59 | bounswe/2021SpringGroup6 | https://api.github.com/repos/bounswe/2021SpringGroup6 | closed | Search equipment | Platform: Front-end Priority: Medium Status: Waiting Review | Any user shall be able to search equipment from the equipment page. Since it is very similar to the event page, @38programmer61 may implement it. | 1.0 | Search equipment | non_main | 0 |
42,455 | 5,445,559,818 | IssuesEvent | 2017-03-07 08:02:00 | bounswe/bounswe2017group6 | https://api.github.com/repos/bounswe/bounswe2017group6 | closed | Creating new mock-up for Admin Control Panel (web) | design | The mock-up for Admin Control Panel should be redone with respect to newly agreed-upon style choices, and with new requirements. | 1.0 | Creating new mock-up for Admin Control Panel (web) | non_main | 0 |
5,346 | 26,962,350,988 | IssuesEvent | 2023-02-08 19:13:57 | NIAEFEUP/website-niaefeup-backend | https://api.github.com/repos/NIAEFEUP/website-niaefeup-backend | closed | security: change deprecated API | maintainability | Since Spring Boot 3, our security config is marked as deprecated. | True | security: change deprecated API | main | 1 |
1,616 | 6,572,638,155 | IssuesEvent | 2017-09-11 03:58:31 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | npm is not "becoming" root | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
npm
##### ANSIBLE VERSION
```
ansible 2.1.2.0
```
##### CONFIGURATION
```
[defaults]
host_key_checking = False
[ssh_connection]
pipelining = True
ssh_args = -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s
control_path = /tmp/%%h-%%p-%%r
```
##### OS / ENVIRONMENT
OSX 10.10.5
##### SUMMARY
When installing anything with the npm module, it sets the permissions of all the module's files/directories to the user that was used to connect to the remote host, regardless of whether become is set or not.
##### STEPS TO REPRODUCE
1. Run the following task
```
- name: install pm2
become: yes
npm:
name: pm2
global: yes
production: yes
state: present
```
1. check permissions on the module directory/files (`/usr/lib/node_modules/pm2`)
##### EXPECTED RESULTS
Files/directories should have root user permissions
##### ACTUAL RESULTS
Files/directories have ssh user permissions
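For what it's worth, the ownership check from step 2 can be automated. A small stdlib-only sketch (the path below is just the reporter's example and would vary per system) that walks a tree and verifies every entry is owned by a given uid:

```python
import os

def owned_by(path, uid):
    """Return True if `path` and everything under it is owned by `uid`."""
    for dirpath, _dirnames, filenames in os.walk(path):
        entries = [dirpath] + [os.path.join(dirpath, f) for f in filenames]
        for entry in entries:
            if os.lstat(entry).st_uid != uid:
                return False
    return True

# Expected to hold after `become: yes` (root is uid 0):
# owned_by('/usr/lib/node_modules/pm2', 0)
```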
| True | npm is not "becoming" root | main | 1 |
2,196 | 7,753,473,117 | IssuesEvent | 2018-05-31 00:52:26 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Health scanners displaying cat people as human. | Bug Consistency Issue Maintainability/Hinders improvements | Issue reported from Round ID: 82075 (/tg/Station Bagil [ENGLISH] [US-EAST] [100% FREE LAG]
Small issue: after being informed that cat people were not human, I checked with a health scanner and it showed them as human.
| True | Health scanners displaying cat people as human. | main | 1 |
178,669 | 6,613,317,121 | IssuesEvent | 2017-09-20 08:49:07 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | closed | draw hexagon at wrong location ? | > Bug Concerns GAML Display Java2D OS OSX Priority Medium Topology Grids Version Git | ### Steps to reproduce
1. I write a model to display the cells of a hexagon grid using an aspect aiming at drawing a hexagon.
```
global {}

grid plot height: 10 width: 10 neighbors: 6 {
    aspect g {
        draw hexagon(6) color: #blue border:#red;
    }
}

experiment testHexa type: gui {
    output {
        display d {
            grid plot lines: #black;
            species plot aspect: g transparency: 0.7;
        }
    }
}
```
### Expected behavior
I was expected to have the hexagon displayed on the grid cell.
### Actual behavior
The hexagons displayed with the aspect are translated.
I have to use the following aspect to locate them correctly:
```
aspect g {
    draw hexagon(6) at_location self.location color: #blue border:#black;
}
```
The bug comes from the fact that, as far as I am aware, the two following lines should be equivalent:
```
draw hexagon(6) at_location self.location color: #blue border:#black;
draw hexagon(6) color: #blue border:#black;
```
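To make the expected equivalence concrete: drawing a shape "at" a location is just the origin-centred shape translated by that location, which is what the version without `at_location` should do implicitly with the agent's own location. A plain-Python sketch of that translation (hexagon size and cell centre are made-up values):

```python
import math

def hexagon(size, center=(0.0, 0.0)):
    """Vertices of a regular hexagon of the given size around `center`."""
    cx, cy = center
    return [(cx + size * math.cos(math.radians(60 * k)),
             cy + size * math.sin(math.radians(60 * k)))
            for k in range(6)]

cell_centre = (40.0, 35.0)
origin_hex = hexagon(6)
located_hex = hexagon(6, center=cell_centre)
# located_hex is exactly origin_hex shifted by cell_centre — the bug is that
# the no-at_location draw appears to skip this implicit translation.
```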
### System and version
GAMA continuous build
MacOSX
| 1.0 | draw hexagon at wrong location ? | non_main | 0 |
2,459 | 8,639,898,590 | IssuesEvent | 2018-11-23 22:30:22 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | IQFLOAT isn't parsed corretly or not documented enough | V1 related (not maintained) | Hello, I just can't get rpitx to work with IQFLOAT data (complex type of GNURadio) to work properly with rpitx. I have a simple FM transmitter under gnu radio and pipe complex floats via network with sampling rate of 190000 (doesn't really matter as no sampling rate is working)
If i use
```
while true; do (nc -l -p 8011; dd if=/dev/zero bs=4096 count=30); done | sudo rpitx -i- -m IQFLOAT -f 99000 -c 1 -s 190000
```
and i get an to wide output with 2 peaks in them, the signal is unusable:

If I try the exact same IQ data, but first passing it via csdr
```
while true; do (nc -l -p 8011; dd if=/dev/zero bs=4096 count=30); done | csdr fmdemod_quadri_cf|csdr gain_ff 75000 |csdr convert_f_samplerf 5263 | sudo rpitx -i- -m RF -f 99000
```
I get a much better signal (actually quite good one) proper width:

It would be enough to use csdr, but it introduces additional latencies, adds requirement for gain multiplication and in general would be nice if rpitx would properly handle IQ data itself :)
In README.md about streaming from GNURadio its written to "shift" the signal by 5khz, and thats centers the signal on one of 2 peaks. But that doesn't fix the issue that signal itself is not a good representation of passed IQ values
| True | IQFLOAT isn't parsed corretly or not documented enough - Hello, I just can't get rpitx to work with IQFLOAT data (complex type of GNURadio) to work properly with rpitx. I have a simple FM transmitter under gnu radio and pipe complex floats via network with sampling rate of 190000 (doesn't really matter as no sampling rate is working)
If i use
```
while true; do (nc -l -p 8011; dd if=/dev/zero bs=4096 count=30); done | sudo rpitx -i- -m IQFLOAT -f 99000 -c 1 -s 190000
```
and i get an to wide output with 2 peaks in them, the signal is unusable:

If I try the exact same IQ data, but first passing it via csdr
```
while true; do (nc -l -p 8011; dd if=/dev/zero bs=4096 count=30); done | csdr fmdemod_quadri_cf|csdr gain_ff 75000 |csdr convert_f_samplerf 5263 | sudo rpitx -i- -m RF -f 99000
```
I get a much better signal (actually quite good one) proper width:

It would be enough to use csdr, but it introduces additional latencies, adds requirement for gain multiplication and in general would be nice if rpitx would properly handle IQ data itself :)
In README.md about streaming from GNURadio its written to "shift" the signal by 5khz, and thats centers the signal on one of 2 peaks. But that doesn't fix the issue that signal itself is not a good representation of passed IQ values
| main | iqfloat isn t parsed corretly or not documented enough hello i just can t get rpitx to work with iqfloat data complex type of gnuradio to work properly with rpitx i have a simple fm transmitter under gnu radio and pipe complex floats via network with sampling rate of doesn t really matter as no sampling rate is working if i use while true do nc l p dd if dev zero bs count done sudo rpitx i m iqfloat f c s and i get an to wide output with peaks in them the signal is unusable if i try the exact same iq data but first passing it via csdr while true do nc l p dd if dev zero bs count done csdr fmdemod quadri cf csdr gain ff csdr convert f samplerf sudo rpitx i m rf f i get a much better signal actually quite good one proper width it would be enough to use csdr but it introduces additional latencies adds requirement for gain multiplication and in general would be nice if rpitx would properly handle iq data itself in readme md about streaming from gnuradio its written to shift the signal by and thats centers the signal on one of peaks but that doesn t fix the issue that signal itself is not a good representation of passed iq values | 1 |
203,796 | 15,389,034,487 | IssuesEvent | 2021-03-03 11:32:49 | wazuh/wazuh | https://api.github.com/repos/wazuh/wazuh | closed | wazuh-logtest: add support for testing location information | core/logtest enhancement | |Wazuh version|Component|Install type|Install method|Platform|
|---|---|---|---|---|
| 4.1 | wazuh-logtest| Manager | Any | Any |
Hi team!
It would be really great if `wazuh-logtest` tool allow to set different values for `location` field rather than current `stdin` fixed value. The main motivation of this feature is to check the accurate handling and comparing rule's `location` values.
This could be implemented by adding a optional command line parameter, and then use it as long as the `wazuh-logtest` session is active.
Regards,
Nico | 1.0 | wazuh-logtest: add support for testing location information - |Wazuh version|Component|Install type|Install method|Platform|
|---|---|---|---|---|
| 4.1 | wazuh-logtest| Manager | Any | Any |
Hi team!
It would be really great if `wazuh-logtest` tool allow to set different values for `location` field rather than current `stdin` fixed value. The main motivation of this feature is to check the accurate handling and comparing rule's `location` values.
This could be implemented by adding a optional command line parameter, and then use it as long as the `wazuh-logtest` session is active.
Regards,
Nico | non_main | wazuh logtest add support for testing location information wazuh version component install type install method platform wazuh logtest manager any any hi team it would be really great if wazuh logtest tool allow to set different values for location field rather than current stdin fixed value the main motivation of this feature is to check the accurate handling and comparing rule s location values this could be implemented by adding a optional command line parameter and then use it as long as the wazuh logtest session is active regards nico | 0 |
158,229 | 20,015,928,821 | IssuesEvent | 2022-02-01 12:04:25 | ghc-dev/Lindsey-Fletcher | https://api.github.com/repos/ghc-dev/Lindsey-Fletcher | opened | CVE-2021-44228 (High) detected in log4j-core-2.8.2.jar | security vulnerability | ## CVE-2021-44228 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.8.2.jar</b></p></summary>
<p>The Apache Log4j Implementation</p>
<p>Library home page: <a href="https://logging.apache.org/log4j/2.x/log4j-core/">https://logging.apache.org/log4j/2.x/log4j-core/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /tory/org/apache/logging/log4j/log4j-core/2.8.2/log4j-core-2.8.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-core-2.8.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Lindsey-Fletcher/commit/aebc2399746edb9c505a5fd03c49888d0c31e0c7">aebc2399746edb9c505a5fd03c49888d0c31e0c7</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Log4j2 2.0-beta9 through 2.15.0 (excluding security releases 2.12.2, 2.12.3, and 2.3.1) JNDI features used in configuration, log messages, and parameters do not protect against attacker controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default. From version 2.16.0 (along with 2.12.2, 2.12.3, and 2.3.1), this functionality has been completely removed. Note that this vulnerability is specific to log4j-core and does not affect log4net, log4cxx, or other Apache Logging Services projects.
<p>Publish Date: 2021-12-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44228>CVE-2021-44228</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>10.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://logging.apache.org/log4j/2.x/security.html">https://logging.apache.org/log4j/2.x/security.html</a></p>
<p>Release Date: 2021-12-10</p>
<p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.15.0;org.ops4j.pax.logging:pax-logging-log4j2:1.11.10,2.0.11</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.logging.log4j","packageName":"log4j-core","packageVersion":"2.8.2","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.logging.log4j:log4j-core:2.8.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.15.0;org.ops4j.pax.logging:pax-logging-log4j2:1.11.10,2.0.11","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-44228","vulnerabilityDetails":"Apache Log4j2 2.0-beta9 through 2.15.0 (excluding security releases 2.12.2, 2.12.3, and 2.3.1) JNDI features used in configuration, log messages, and parameters do not protect against attacker controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default. From version 2.16.0 (along with 2.12.2, 2.12.3, and 2.3.1), this functionality has been completely removed. Note that this vulnerability is specific to log4j-core and does not affect log4net, log4cxx, or other Apache Logging Services projects.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44228","cvss3Severity":"high","cvss3Score":"10.0","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-44228 (High) detected in log4j-core-2.8.2.jar - ## CVE-2021-44228 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.8.2.jar</b></p></summary>
<p>The Apache Log4j Implementation</p>
<p>Library home page: <a href="https://logging.apache.org/log4j/2.x/log4j-core/">https://logging.apache.org/log4j/2.x/log4j-core/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /tory/org/apache/logging/log4j/log4j-core/2.8.2/log4j-core-2.8.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-core-2.8.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Lindsey-Fletcher/commit/aebc2399746edb9c505a5fd03c49888d0c31e0c7">aebc2399746edb9c505a5fd03c49888d0c31e0c7</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Log4j2 2.0-beta9 through 2.15.0 (excluding security releases 2.12.2, 2.12.3, and 2.3.1) JNDI features used in configuration, log messages, and parameters do not protect against attacker controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default. From version 2.16.0 (along with 2.12.2, 2.12.3, and 2.3.1), this functionality has been completely removed. Note that this vulnerability is specific to log4j-core and does not affect log4net, log4cxx, or other Apache Logging Services projects.
<p>Publish Date: 2021-12-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44228>CVE-2021-44228</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>10.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://logging.apache.org/log4j/2.x/security.html">https://logging.apache.org/log4j/2.x/security.html</a></p>
<p>Release Date: 2021-12-10</p>
<p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.15.0;org.ops4j.pax.logging:pax-logging-log4j2:1.11.10,2.0.11</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.logging.log4j","packageName":"log4j-core","packageVersion":"2.8.2","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.logging.log4j:log4j-core:2.8.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.15.0;org.ops4j.pax.logging:pax-logging-log4j2:1.11.10,2.0.11","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-44228","vulnerabilityDetails":"Apache Log4j2 2.0-beta9 through 2.15.0 (excluding security releases 2.12.2, 2.12.3, and 2.3.1) JNDI features used in configuration, log messages, and parameters do not protect against attacker controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default. From version 2.16.0 (along with 2.12.2, 2.12.3, and 2.3.1), this functionality has been completely removed. 
Note that this vulnerability is specific to log4j-core and does not affect log4net, log4cxx, or other Apache Logging Services projects.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44228","cvss3Severity":"high","cvss3Score":"10.0","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in core jar cve high severity vulnerability vulnerable library core jar the apache implementation library home page a href path to dependency file pom xml path to vulnerable library tory org apache logging core core jar dependency hierarchy x core jar vulnerable library found in head commit a href found in base branch main vulnerability details apache through excluding security releases and jndi features used in configuration log messages and parameters do not protect against attacker controlled ldap and other jndi related endpoints an attacker who can control log messages or log message parameters can execute arbitrary code loaded from ldap servers when message lookup substitution is enabled from this behavior has been disabled by default from version along with and this functionality has been completely removed note that this vulnerability is specific to core and does not affect or other apache logging services projects publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache logging core org pax logging pax logging isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache logging core 
isminimumfixversionavailable true minimumfixversion org apache logging core org pax logging pax logging isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails apache through excluding security releases and jndi features used in configuration log messages and parameters do not protect against attacker controlled ldap and other jndi related endpoints an attacker who can control log messages or log message parameters can execute arbitrary code loaded from ldap servers when message lookup substitution is enabled from this behavior has been disabled by default from version along with and this functionality has been completely removed note that this vulnerability is specific to core and does not affect or other apache logging services projects vulnerabilityurl | 0 |
36,488 | 6,536,572,488 | IssuesEvent | 2017-08-31 18:40:31 | blueCFD/Core | https://api.github.com/repos/blueCFD/Core | opened | Add matplotlib to the software stack | documentation enhancement help wanted | Back in July 2013, we at [blueCAPE](http://www.bluecape.com.pt) were asked how to add [matplotlib](https://matplotlib.org/) to the blueCFD-Core software stack.
The objective was due to the presentation _Python Scripting for Gluing CFD Applications: A Case Study Demonstrating Automation of Grid Generation, Parameter Variation,Flow Simulation, Analysis, and Plotting by Eric Paterson_, available here: http://www.tfd.chalmers.se/~hani/kurser/OS_CFD_2010/
Back then, the following steps could be performed:
1. Go to [matplotlib's download page](http://matplotlib.org/downloads.html)
2. Download `matplotlib-1.2.1.win-amd64-py2.7.exe` and/or `matplotlib-1.2.1.win32-py2.7.exe`, for 64-bit or 32-bit respectively.
3. Install it, for example by following the instructions from the FAQ entry [How to install additional software, such as Gmsh](http://bluecfd.github.io/Core/FAQ/how-to-install-additional-software-such-as-gmsh/).
A side note was that `matplotlib` is already provided with _PyLab_, therefore it might be a lot easier nowadays to install it, specially since blueCFD-Core 2016-1 uses MSys2 and Python 2.7 was installed in it, therefore it might be installable with Python's `pip` or `easy_install`.
| 1.0 | Add matplotlib to the software stack - Back in July 2013, we at [blueCAPE](http://www.bluecape.com.pt) were asked how to add [matplotlib](https://matplotlib.org/) to the blueCFD-Core software stack.
The objective was due to the presentation _Python Scripting for Gluing CFD Applications: A Case Study Demonstrating Automation of Grid Generation, Parameter Variation,Flow Simulation, Analysis, and Plotting by Eric Paterson_, available here: http://www.tfd.chalmers.se/~hani/kurser/OS_CFD_2010/
Back then, the following steps could be performed:
1. Go to [matplotlib's download page](http://matplotlib.org/downloads.html)
2. Download `matplotlib-1.2.1.win-amd64-py2.7.exe` and/or `matplotlib-1.2.1.win32-py2.7.exe`, for 64-bit or 32-bit respectively.
3. Install it, for example by following the instructions from the FAQ entry [How to install additional software, such as Gmsh](http://bluecfd.github.io/Core/FAQ/how-to-install-additional-software-such-as-gmsh/).
A side note was that `matplotlib` is already provided with _PyLab_, therefore it might be a lot easier nowadays to install it, specially since blueCFD-Core 2016-1 uses MSys2 and Python 2.7 was installed in it, therefore it might be installable with Python's `pip` or `easy_install`.
| non_main | add matplotlib to the software stack back in july we at were asked how to add to the bluecfd core software stack the objective was due to the presentation python scripting for gluing cfd applications a case study demonstrating automation of grid generation parameter variation flow simulation analysis and plotting by eric paterson available here back then the following steps could be performed go to download matplotlib win exe and or matplotlib exe for bit or bit respectively install it for example by following the instructions from the faq entry a side note was that matplotlib is already provided with pylab therefore it might be a lot easier nowadays to install it specially since bluecfd core uses and python was installed in it therefore it might be installable with python s pip or easy install | 0 |
1,612 | 6,572,632,314 | IssuesEvent | 2017-09-11 03:55:24 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | ec2_eni can't gracefully create and/or attach to instance | affects_2.1 aws cloud feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_eni
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /Projects/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
I would like to use Ansible to create an ENI with a predetermined private IP address in AWS if it doesn't already exist, attach it to an particular instance if it's not already attached, and raise an error if the ENI exists but is already attached to a different instance. The current `ec2_eni` module makes this difficult.
##### STEPS TO REPRODUCE
This is what I've currently got to emulate this behavior. It won't error out if the interface is already attached to a different instance, but it will at least handle creating/attaching an available one gracefully.
```
- name: Find existing network interface
ec2_eni_facts:
filters:
private_ip_address: "{{ eni_ip }}"
register: ec2_eni
- name: Check if existing network interface is already attached.
ec2_eni_facts:
filters:
private_ip_address: "{{ eni_ip }}"
status: available
register: ec2_eni_available
- name: Attach existing network interface
ec2_eni:
eni_id: "{{ ec2_eni.interfaces[0].id }}"
instance_id: "{{ instance_id }}"
private_ip_address: "{{ eni_ip }}"
subnet_id: subnet-11112222
state: present
device_index: 1
when: ec2_eni.interfaces|length and ec2_eni_available.interfaces|length
- name: Create network interface and attach
ec2_eni:
instance_id: "{{ instance_id }}"
private_ip_address: "{{ eni_ip }}"
subnet_id: subnet-11112222
state: present
device_index: 1
when: not ec2_eni.interfaces|length
```
It would be really nice to be able to cut that down to just this:
```
- name: Create and/or attach network interface
ec2_eni:
instance_id: "{{ instance_id }}"
private_ip_address: "{{ eni_ip }}"
subnet_id: subnet-11112222
state: present
device_index: 1
```
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
| True | ec2_eni can't gracefully create and/or attach to instance - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_eni
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /Projects/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
I would like to use Ansible to create an ENI with a predetermined private IP address in AWS if it doesn't already exist, attach it to an particular instance if it's not already attached, and raise an error if the ENI exists but is already attached to a different instance. The current `ec2_eni` module makes this difficult.
##### STEPS TO REPRODUCE
This is what I've currently got to emulate this behavior. It won't error out if the interface is already attached to a different instance, but it will at least handle creating/attaching an available one gracefully.
```
- name: Find existing network interface
ec2_eni_facts:
filters:
private_ip_address: "{{ eni_ip }}"
register: ec2_eni
- name: Check if existing network interface is already attached.
ec2_eni_facts:
filters:
private_ip_address: "{{ eni_ip }}"
status: available
register: ec2_eni_available
- name: Attach existing network interface
ec2_eni:
eni_id: "{{ ec2_eni.interfaces[0].id }}"
instance_id: "{{ instance_id }}"
private_ip_address: "{{ eni_ip }}"
subnet_id: subnet-11112222
state: present
device_index: 1
when: ec2_eni.interfaces|length and ec2_eni_available.interfaces|length
- name: Create network interface and attach
ec2_eni:
instance_id: "{{ instance_id }}"
private_ip_address: "{{ eni_ip }}"
subnet_id: subnet-11112222
state: present
device_index: 1
when: not ec2_eni.interfaces|length
```
It would be really nice to be able to cut that down to just this:
```
- name: Create and/or attach network interface
ec2_eni:
instance_id: "{{ instance_id }}"
private_ip_address: "{{ eni_ip }}"
subnet_id: subnet-11112222
state: present
device_index: 1
```
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
| main | eni can t gracefully create and or attach to instance issue type feature idea component name eni ansible version ansible config file projects ansible ansible cfg configured module search path default w o overrides configuration n a os environment n a summary i would like to use ansible to create an eni with a predetermined private ip address in aws if it doesn t already exist attach it to an particular instance if it s not already attached and raise an error if the eni exists but is already attached to a different instance the current eni module makes this difficult steps to reproduce this is what i ve currently got to emulate this behavior it won t error out if the interface is already attached to a different instance but it will at least handle creating attaching an available one gracefully name find existing network interface eni facts filters private ip address eni ip register eni name check if existing network interface is already attached eni facts filters private ip address eni ip status available register eni available name attach existing network interface eni eni id eni interfaces id instance id instance id private ip address eni ip subnet id subnet state present device index when eni interfaces length and eni available interfaces length name create network interface and attach eni instance id instance id private ip address eni ip subnet id subnet state present device index when not eni interfaces length it would be really nice to be able to cut that down to just this name create and or attach network interface eni instance id instance id private ip address eni ip subnet id subnet state present device index expected results n a actual results n a | 1 |
136,888 | 30,600,911,488 | IssuesEvent | 2023-07-22 11:16:58 | JuliaIPU/IPUToolkit.jl | https://api.github.com/repos/JuliaIPU/IPUToolkit.jl | opened | Expose functionality to only build a binary `.gp` codelet, without adding it to a graph | enhancement code generation | Currently we only provide the macro [`@codelet`](https://juliaipu.github.io/IPUToolkit.jl/stable/compiler/#IPUToolkit.IPUCompiler.@codelet) to build the `.gp` codelet _and_ add it to a graph, but it was suggested that it can be useful to just compile the `.gp` file, outside of a specific program. We should then expose this functionality: we already do that under the hood, we only need to reorganise the code and the API to allow users to only get the `.gp` file. | 1.0 | Expose functionality to only build a binary `.gp` codelet, without adding it to a graph - Currently we only provide the macro [`@codelet`](https://juliaipu.github.io/IPUToolkit.jl/stable/compiler/#IPUToolkit.IPUCompiler.@codelet) to build the `.gp` codelet _and_ add it to a graph, but it was suggested that it can be useful to just compile the `.gp` file, outside of a specific program. We should then expose this functionality: we already do that under the hood, we only need to reorganise the code and the API to allow users to only get the `.gp` file. | non_main | expose functionality to only build a binary gp codelet without adding it to a graph currently we only provide the macro to build the gp codelet and add it to a graph but it was suggested that it can be useful to just compile the gp file outside of a specific program we should then expose this functionality we already do that under the hood we only need to reorganise the code and the api to allow users to only get the gp file | 0 |
452,620 | 13,056,929,636 | IssuesEvent | 2020-07-30 06:10:14 | erxes/erxes | https://api.github.com/repos/erxes/erxes | closed | Improve menu of product/service | priority: Medium type: enhancement | Changes below:
1. Rename Contacts menu to Data
2. Add new tab in the "data" menu and to name a new tab to Product/Service
3. Move to product/service to the new tab | 1.0 | Improve menu of product/service - Changes below:
1. Rename Contacts menu to Data
2. Add new tab in the "data" menu and to name a new tab to Product/Service
3. Move to product/service to the new tab | non_main | improve menu of product service changes below rename contacts menu to data add new tab in the data menu and to name a new tab to product service move to product service to the new tab | 0 |
1,787 | 6,575,880,306 | IssuesEvent | 2017-09-11 17:41:19 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | group_by doesn't add hosts on second run (when group exists) | affects_2.3 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
group_by
##### ANSIBLE VERSION
HEAD
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When group_by is used twice, any new hosts are not added the second time.
The host will have the group in group_names, but will not appear in groups.groupname.
This seems to be a problem with the `self._inventory.clear_group_dict_cache()` call being misplaced in the new group creation block, not the host add block.
Pull request submitted: https://github.com/ansible/ansible/pull/17766
##### STEPS TO REPRODUCE
```
- name: Test grouping
hosts: 192.168.2.245 192.168.2.246
tasks:
- group_by:
key: test
when: inventory_hostname == '192.168.2.245'
- group_by:
key: test
when: inventory_hostname == '192.168.2.246'
- debug:
var: groups.test
- debug:
var: group_names
```
##### EXPECTED RESULTS
```
PLAY [Test grouping] ***********************************************************
TASK [setup] *******************************************************************
ok: [192.168.2.245]
ok: [192.168.2.246]
TASK [group_by] ****************************************************************
ok: [192.168.2.245]
TASK [group_by] ****************************************************************
ok: [192.168.2.246]
TASK [debug] *******************************************************************
ok: [192.168.2.246] => {
"groups.test": [
"192.168.2.245",
"192.168.2.246"
]
}
ok: [192.168.2.245] => {
"groups.test": [
"192.168.2.245",
"192.168.2.246"
]
}
TASK [debug] *******************************************************************
ok: [192.168.2.245] => {
"group_names": [
"test"
]
}
ok: [192.168.2.246] => {
"group_names": [
"test"
]
}
```
##### ACTUAL RESULTS
```
PLAY [Test] ********************************************************************
TASK [setup] *******************************************************************
ok: [192.168.2.246]
ok: [192.168.2.245]
TASK [group_by] ****************************************************************
ok: [192.168.2.245]
TASK [group_by] ****************************************************************
ok: [192.168.2.246]
TASK [debug] *******************************************************************
ok: [192.168.2.246] => {
"groups.test": [
"192.168.2.245"
]
}
ok: [192.168.2.245] => {
"groups.test": [
"192.168.2.245"
]
}
TASK [debug] *******************************************************************
ok: [192.168.2.246] => {
"group_names": [
"test"
]
}
ok: [192.168.2.245] => {
"group_names":
"test"
]
}
```
| True | group_by doesn't add hosts on second run (when group exists) - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
group_by
##### ANSIBLE VERSION
HEAD
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When group_by is used twice, any new hosts are not added the second time.
The host will have the group in group_names, but will not appear in groups.groupname.
This seems to be a problem with the `self._inventory.clear_group_dict_cache()` call being misplaced in the new group creation block, not the host add block.
Pull request submitted: https://github.com/ansible/ansible/pull/17766
##### STEPS TO REPRODUCE
```
- name: Test grouping
hosts: 192.168.2.245 192.168.2.246
tasks:
- group_by:
key: test
when: inventory_hostname == '192.168.2.245'
- group_by:
key: test
when: inventory_hostname == '192.168.2.246'
- debug:
var: groups.test
- debug:
var: group_names
```
##### EXPECTED RESULTS
```
PLAY [Test grouping] ***********************************************************
TASK [setup] *******************************************************************
ok: [192.168.2.245]
ok: [192.168.2.246]
TASK [group_by] ****************************************************************
ok: [192.168.2.245]
TASK [group_by] ****************************************************************
ok: [192.168.2.246]
TASK [debug] *******************************************************************
ok: [192.168.2.246] => {
"groups.test": [
"192.168.2.245",
"192.168.2.246"
]
}
ok: [192.168.2.245] => {
"groups.test": [
"192.168.2.245",
"192.168.2.246"
]
}
TASK [debug] *******************************************************************
ok: [192.168.2.245] => {
"group_names": [
"test"
]
}
ok: [192.168.2.246] => {
"group_names": [
"test"
]
}
```
##### ACTUAL RESULTS
```
PLAY [Test] ********************************************************************
TASK [setup] *******************************************************************
ok: [192.168.2.246]
ok: [192.168.2.245]
TASK [group_by] ****************************************************************
ok: [192.168.2.245]
TASK [group_by] ****************************************************************
ok: [192.168.2.246]
TASK [debug] *******************************************************************
ok: [192.168.2.246] => {
"groups.test": [
"192.168.2.245"
]
}
ok: [192.168.2.245] => {
"groups.test": [
"192.168.2.245"
]
}
TASK [debug] *******************************************************************
ok: [192.168.2.246] => {
"group_names": [
"test"
]
}
ok: [192.168.2.245] => {
"group_names":
"test"
]
}
```
| main | group by doesn t add hosts on second run when group exists issue type bug report component name group by ansible version head os environment n a summary when group by is used twice any new hosts are not added the second time the host will have the group in group names but will not appear in groups groupname this seems to be a problem with the self inventory clear group dict cache call being misplaced in the new group creation block not the host add block pull request submitted steps to reproduce name test grouping hosts tasks group by key test when inventory hostname group by key test when inventory hostname debug var groups test debug var group names expected results play task ok ok task ok task ok task ok groups test ok groups test task ok group names test ok group names test actual results play task ok ok task ok task ok task ok groups test ok groups test task ok group names test ok group names test | 1 |
264,756 | 8,319,278,439 | IssuesEvent | 2018-09-25 16:46:03 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.esic.in - desktop site instead of mobile site | browser-firefox priority-normal | <!-- @browser: Firefox 63.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:63.0) Gecko/20100101 Firefox/63.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://www.esic.in/ESICInsurance1/RevenueOne/Monthly%20Contribution/OnlineMonthlyContribution.aspx?ContributionPeriod=aj/07AZFgv8Xbovpy64c4w==&EmployerCode=rK2ozUC9vMNsVF6p1/6/jCX82zGIzJua&ContributionType=JUGm1Rbj4gM=
**Browser / Version**: Firefox 63.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Desktop site instead of mobile site
**Description**: no. of days of wages is not fill
**Steps to Reproduce**:
monthl;y contribution
[](https://webcompat.com/uploads/2018/9/7a2705ba-4179-4aa1-8c6c-3fde83705a7e.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>buildID: 20180920135444</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.all: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>channel: beta</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.esic.in - desktop site instead of mobile site - <!-- @browser: Firefox 63.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:63.0) Gecko/20100101 Firefox/63.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://www.esic.in/ESICInsurance1/RevenueOne/Monthly%20Contribution/OnlineMonthlyContribution.aspx?ContributionPeriod=aj/07AZFgv8Xbovpy64c4w==&EmployerCode=rK2ozUC9vMNsVF6p1/6/jCX82zGIzJua&ContributionType=JUGm1Rbj4gM=
**Browser / Version**: Firefox 63.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Desktop site instead of mobile site
**Description**: no. of days of wages is not fill
**Steps to Reproduce**:
monthl;y contribution
[](https://webcompat.com/uploads/2018/9/7a2705ba-4179-4aa1-8c6c-3fde83705a7e.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>buildID: 20180920135444</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.all: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>channel: beta</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | desktop site instead of mobile site url browser version firefox operating system windows tested another browser yes problem type desktop site instead of mobile site description no of days of wages is not fill steps to reproduce monthl y contribution browser configuration mixed active content blocked false buildid tracking content blocked false gfx webrender blob images true gfx webrender all false mixed passive content blocked false gfx webrender enabled false image mem shared true channel beta from with ❤️ | 0 |
750,807 | 26,218,449,483 | IssuesEvent | 2023-01-04 13:01:00 | Kristify/Kristify | https://api.github.com/repos/Kristify/Kristify | opened | Backend crashes | bug Priority: High | If some fields in the config file are missing or wrong, the backend expects them to exist and be connect anyways, making it crash.
And because of issue #19 the enduser doesnt know why. **Those two things need to be fixed.** | 1.0 | Backend crashes - If some fields in the config file are missing or wrong, the backend expects them to exist and be connect anyways, making it crash.
And because of issue #19 the enduser doesnt know why. **Those two things need to be fixed.** | non_main | backend crashes if some fields in the config file are missing or wrong the backend expects them to exist and be connect anyways making it crash and because of issue the enduser doesnt know why those two things need to be fixed | 0 |
23,833 | 2,664,151,572 | IssuesEvent | 2015-03-20 12:40:22 | cs2103jan2015-t11-1c/main | https://api.github.com/repos/cs2103jan2015-t11-1c/main | closed | A user can know the start time and end time of a task | priority.high type.story | ...so that the user can track what is due and when. | 1.0 | A user can know the start time and end time of a task - ...so that the user can track what is due and when. | non_main | a user can know the start time and end time of a task so that the user can track what is due and when | 0 |
96,971 | 16,175,245,308 | IssuesEvent | 2021-05-03 05:10:50 | Hisham-TK/Node-TS-NEST-boilerplate | https://api.github.com/repos/Hisham-TK/Node-TS-NEST-boilerplate | opened | CVE-2021-23369 (High) detected in handlebars-4.7.6.tgz | security vulnerability | ## CVE-2021-23369 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.7.6.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.7.6.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.7.6.tgz</a></p>
<p>Path to dependency file: Node-TS-NEST-boilerplate/package.json</p>
<p>Path to vulnerable library: Node-TS-NEST-boilerplate/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- compodoc-1.1.11.tgz (Root Library)
- :x: **handlebars-4.7.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Hisham-TK/Node-TS-NEST-boilerplate/commit/a593fdd2f63e1d333cbbbdd64402698c36f4b41e">a593fdd2f63e1d333cbbbdd64402698c36f4b41e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23369>CVE-2021-23369</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution: handlebars - 4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23369 (High) detected in handlebars-4.7.6.tgz - ## CVE-2021-23369 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.7.6.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.7.6.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.7.6.tgz</a></p>
<p>Path to dependency file: Node-TS-NEST-boilerplate/package.json</p>
<p>Path to vulnerable library: Node-TS-NEST-boilerplate/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- compodoc-1.1.11.tgz (Root Library)
- :x: **handlebars-4.7.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Hisham-TK/Node-TS-NEST-boilerplate/commit/a593fdd2f63e1d333cbbbdd64402698c36f4b41e">a593fdd2f63e1d333cbbbdd64402698c36f4b41e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23369>CVE-2021-23369</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution: handlebars - 4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file node ts nest boilerplate package json path to vulnerable library node ts nest boilerplate node modules handlebars package json dependency hierarchy compodoc tgz root library x handlebars tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package handlebars before are vulnerable to remote code execution rce when selecting certain compiling options to compile templates coming from an untrusted source publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource | 0 |
2,155 | 7,481,683,118 | IssuesEvent | 2018-04-04 21:30:33 | lansuite/lansuite | https://api.github.com/repos/lansuite/lansuite | closed | Install on web-server with external db-server | pending-maintainer-response question | Why does the DB have to run on the web server?
The first page of the install script checks if there is a local mysql server. I think it will be better to check only if a php mod for database connection is available | True | Install on web-server with external db-server - Why does the DB have to run on the web server?
The first page of the install script checks if there is a local mysql server. I think it will be better to check only if a php mod for database connection is available | main | install on web server with external db server why does the db have to run on the web server the first page of the install script checks if there is a local mysql server i think it will be better to check only if a php mod for database connection is available | 1 |
4,089 | 19,301,952,766 | IssuesEvent | 2021-12-13 07:10:33 | Chaw-Chaw/ChawChawBack2 | https://api.github.com/repos/Chaw-Chaw/ChawChawBack2 | closed | 도메인 변경 | maintain | # 목적
> 호스팅 유지
# 상세 내용
- 도메인 기간만료로 인해 변경 (mylifeforcoding.com -> chawchaw.xyz)
- 기존 ssl 인증서 삭제 및 신규 발급 | True | 도메인 변경 - # 목적
> 호스팅 유지
# 상세 내용
- 도메인 기간만료로 인해 변경 (mylifeforcoding.com -> chawchaw.xyz)
- 기존 ssl 인증서 삭제 및 신규 발급 | main | 도메인 변경 목적 호스팅 유지 상세 내용 도메인 기간만료로 인해 변경 mylifeforcoding com chawchaw xyz 기존 ssl 인증서 삭제 및 신규 발급 | 1 |
4,780 | 24,607,328,916 | IssuesEvent | 2022-10-14 17:31:56 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | TimezoneConverter: Shows MSK as UTC+4, should be UTC+3 | Bug Relevancy Maintainer Timeout | Reported through DDG Feedback. I used a VPN to confirm the bug though:

---
IA Page: http://duck.co/ia/view/timezone_converter
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @GlitchMr
| True | TimezoneConverter: Shows MSK as UTC+4, should be UTC+3 - Reported through DDG Feedback. I used a VPN to confirm the bug though:

---
IA Page: http://duck.co/ia/view/timezone_converter
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @GlitchMr
| main | timezoneconverter shows msk as utc should be utc reported through ddg feedback i used a vpn to confirm the bug though ia page glitchmr | 1 |
257,912 | 22,263,108,274 | IssuesEvent | 2022-06-10 03:41:15 | Uuvana-Studios/longvinter-windows-client | https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client | opened | All the servers are dead | Bug Not Tested | All the servers are dead
Everyone else can play the game
I'm the only one who can't
Click on the server and it will appear on the main screen
Help me ;ㅅ; | 1.0 | All the servers are dead - All the servers are dead
Everyone else can play the game
I'm the only one who can't
Click on the server and it will appear on the main screen
Help me ;ㅅ; | non_main | all the servers are dead all the servers are dead everyone else can play the game i m the only one who can t click on the server and it will appear on the main screen help me ㅅ | 0 |
2,597 | 8,823,693,459 | IssuesEvent | 2019-01-02 14:35:00 | citrusframework/citrus | https://api.github.com/repos/citrusframework/citrus | closed | Breaking change in waitFor().file(File) | Prio: High Type: Maintainance | **Citrus Version**
>= 2.7.7
**Description**
If you upgrade your Citrus version to 2.7.7 or higher, we've a breaking change in the file wait builder API. We'll correct this with one of the future releases to ensure effortless version upgrades
**API before change**
```java
waitFor().file(new File("/path/to/file"));
```
**API after change**
```java
waitFor().file().resource(new File("/path/to/file"));
```
**Additional information**
* Issue:#417
* Commit: https://github.com/citrusframework/citrus/commit/515e840f9133383d19304916db197ce5fdb9ac83#diff-f106d4946b18253678933a5267aa2540L133
BR,
Sven | True | Breaking change in waitFor().file(File) - **Citrus Version**
>= 2.7.7
**Description**
If you upgrade your Citrus version to 2.7.7 or higher, we've a breaking change in the file wait builder API. We'll correct this with one of the future releases to ensure effortless version upgrades
**API before change**
```java
waitFor().file(new File("/path/to/file"));
```
**API after change**
```java
waitFor().file().resource(new File("/path/to/file"));
```
**Additional information**
* Issue:#417
* Commit: https://github.com/citrusframework/citrus/commit/515e840f9133383d19304916db197ce5fdb9ac83#diff-f106d4946b18253678933a5267aa2540L133
BR,
Sven | main | breaking change in waitfor file file citrus version description if you upgrade your citrus version to or higher we ve a breaking change in the file wait builder api we ll correct this with one of the future releases to ensure effortless version upgrades api before change java waitfor file new file path to file api after change java waitfor file resource new file path to file additional information issue commit br sven | 1 |
2,071 | 7,008,592,227 | IssuesEvent | 2017-12-19 16:11:02 | chocolatey/chocolatey-package-requests | https://api.github.com/repos/chocolatey/chocolatey-package-requests | opened | RFM - cameyo | Status: Available For Maintainer(s) | The chocolatey core team packages maintainers have decided to drop this package, and opening a request for new maintainers.
The package was dropped due to changes on the software authors website, and there hasn't been any interest in providing a fix to keep the package up to date.
For anyone interested, the source of the package (including the non-working AU update script) can be seen here:
https://github.com/chocolatey/chocolatey-coreteampackages/tree/3166927a7614b0e56848ed20e0939c5bd63f6ded/automatic/cameyo | True | RFM - cameyo - The chocolatey core team packages maintainers have decided to drop this package, and opening a request for new maintainers.
The package was dropped due to changes on the software authors website, and there hasn't been any interest in providing a fix to keep the package up to date.
For anyone interested, the source of the package (including the non-working AU update script) can be seen here:
https://github.com/chocolatey/chocolatey-coreteampackages/tree/3166927a7614b0e56848ed20e0939c5bd63f6ded/automatic/cameyo | main | rfm cameyo the chocolatey core team packages maintainers have decided to drop this package and opening a request for new maintainers the package was dropped due to changes on the software authors website and there hasn t been any interest in providing a fix to keep the package up to date for anyone interested the source of the package including the non working au update script can be seen here | 1 |
4,224 | 20,908,842,699 | IssuesEvent | 2022-03-24 07:04:57 | pypiserver/pypiserver | https://api.github.com/repos/pypiserver/pypiserver | opened | Preparing new `pypiserver` release | type.Maintainance status.CRITICAL | > TO BE ELABORATED.
# Purpose
Hey everyone, I wanted to make a public strategy for the upcoming release collected in this issue as, unfortunately, it is taking me longer than I have anticipated. I'm getting there but would like to apologize for the delays. I'm promising to get it up and running.
Here I would like to make a draft roadmap of things I'm working on to get it ready.
Any help and suggestions on this are very welcome!
## Timeline
- [x] Receive the release credentials and access to DockerHub and Pypi
- [ ] Prepare a low-labor release process (in progress #417)
- [ ] Setup the release candidate preparation flow (#417)
- [ ] Setup the release CI/CD pipeline (_not started yet_)
- [ ] Get a new release published
- [ ] Document release process
| True | Preparing new `pypiserver` release - > TO BE ELABORATED.
# Purpose
Hey everyone, I wanted to make a public strategy for the upcoming release collected in this issue as, unfortunately, it is taking me longer than I have anticipated. I'm getting there but would like to apologize for the delays. I'm promising to get it up and running.
Here I would like to make a draft roadmap of things I'm working on to get it ready.
Any help and suggestions on this are very welcome!
## Timeline
- [x] Receive the release credentials and access to DockerHub and Pypi
- [ ] Prepare a low-labor release process (in progress #417)
- [ ] Setup the release candidate preparation flow (#417)
- [ ] Setup the release CI/CD pipeline (_not started yet_)
- [ ] Get a new release published
- [ ] Document release process
| main | preparing new pypiserver release to be elaborated purpose hey everyone i wanted to make a public strategy for the upcoming release collected in this issue as unfortunately it is taking me longer than i have anticipated i m getting there but would like to apologize for the delays i m promising to get it up and running here i would like to make a draft roadmap of things i m working on to get it ready any help and suggestions on this are very welcome timeline receive the release credentials and access to dockerhub and pypi prepare a low labor release process in progress setup the release candidate preparation flow setup the release ci cd pipeline not started yet get a new release published document release process | 1 |
5,740 | 30,347,123,882 | IssuesEvent | 2023-07-11 16:08:45 | ChimeraPy/ChimeraPy-Engine | https://api.github.com/repos/ChimeraPy/ChimeraPy-Engine | opened | Intre-Service communication using psygnal | maintainence | Instead of using a Facade pattern that connects services through input parameters, use [psygnal](https://pypi.org/project/psygnal/) as an event bus instead. | True | Intre-Service communication using psygnal - Instead of using a Facade pattern that connects services through input parameters, use [psygnal](https://pypi.org/project/psygnal/) as an event bus instead. | main | intre service communication using psygnal instead of using a facade pattern that connects services through input parameters use as an event bus instead | 1 |
33,658 | 12,216,793,230 | IssuesEvent | 2020-05-01 15:51:41 | OSWeekends/eventpoints-backend | https://api.github.com/repos/OSWeekends/eventpoints-backend | opened | CVE-2020-10109 (High) detected in Twisted-19.7.0-cp36-cp36m-manylinux1_x86_64.whl | security vulnerability | ## CVE-2020-10109 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Twisted-19.7.0-cp36-cp36m-manylinux1_x86_64.whl</b></p></summary>
<p>An asynchronous networking framework written in Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/14/49/eb654da38b15285d1f594933eefff36ce03106356197dba28ee8f5721a79/Twisted-19.7.0-cp36-cp36m-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/14/49/eb654da38b15285d1f594933eefff36ce03106356197dba28ee8f5721a79/Twisted-19.7.0-cp36-cp36m-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/eventpoints-backend/scrapers/requirements.txt</p>
<p>Path to vulnerable library: /eventpoints-backend/scrapers/requirements.txt</p>
<p>
Dependency Hierarchy:
- Scrapy-1.5.1-py2.py3-none-any.whl (Root Library)
- :x: **Twisted-19.7.0-cp36-cp36m-manylinux1_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/OSWeekends/eventpoints-backend/commit/e1161180f794a408d085a12c47a25ffd29eb6240">e1161180f794a408d085a12c47a25ffd29eb6240</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with a content-length and a chunked encoding header, the content-length took precedence and the remainder of the request body was interpreted as a pipelined request.
<p>Publish Date: 2020-03-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10109>CVE-2020-10109</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-p5xh-vx83-mxcj">https://github.com/advisories/GHSA-p5xh-vx83-mxcj</a></p>
<p>Release Date: 2020-03-12</p>
<p>Fix Resolution: twisted - 20.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-10109 (High) detected in Twisted-19.7.0-cp36-cp36m-manylinux1_x86_64.whl - ## CVE-2020-10109 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Twisted-19.7.0-cp36-cp36m-manylinux1_x86_64.whl</b></p></summary>
<p>An asynchronous networking framework written in Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/14/49/eb654da38b15285d1f594933eefff36ce03106356197dba28ee8f5721a79/Twisted-19.7.0-cp36-cp36m-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/14/49/eb654da38b15285d1f594933eefff36ce03106356197dba28ee8f5721a79/Twisted-19.7.0-cp36-cp36m-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/eventpoints-backend/scrapers/requirements.txt</p>
<p>Path to vulnerable library: /eventpoints-backend/scrapers/requirements.txt</p>
<p>
Dependency Hierarchy:
- Scrapy-1.5.1-py2.py3-none-any.whl (Root Library)
- :x: **Twisted-19.7.0-cp36-cp36m-manylinux1_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/OSWeekends/eventpoints-backend/commit/e1161180f794a408d085a12c47a25ffd29eb6240">e1161180f794a408d085a12c47a25ffd29eb6240</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with a content-length and a chunked encoding header, the content-length took precedence and the remainder of the request body was interpreted as a pipelined request.
<p>Publish Date: 2020-03-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10109>CVE-2020-10109</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-p5xh-vx83-mxcj">https://github.com/advisories/GHSA-p5xh-vx83-mxcj</a></p>
<p>Release Date: 2020-03-12</p>
<p>Fix Resolution: twisted - 20.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in twisted whl cve high severity vulnerability vulnerable library twisted whl an asynchronous networking framework written in python library home page a href path to dependency file tmp ws scm eventpoints backend scrapers requirements txt path to vulnerable library eventpoints backend scrapers requirements txt dependency hierarchy scrapy none any whl root library x twisted whl vulnerable library found in head commit a href vulnerability details in twisted web through there was an http request splitting vulnerability when presented with a content length and a chunked encoding header the content length took precedence and the remainder of the request body was interpreted as a pipelined request publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution twisted step up your open source security game with whitesource | 0 |
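The suggested fix above pins the first safe release at twisted 20.3.0. A quick way to sanity-check a pinned version against that floor is a dotted-number comparison; the snippet below is a minimal sketch (the `includes_fix` helper and its naive parsing are my own illustration, not part of WhiteSource's tooling — for pre-releases or epochs a real resolver such as `packaging.version` should be used instead).

```python
# Minimal sketch: check whether a pinned Twisted version already includes the
# CVE-2020-10109 fix (first fixed release: 20.3.0, per the advisory above).
# Naive dotted-number comparison only; pre-releases/epochs need packaging.version.

FIXED = (20, 3, 0)

def includes_fix(version_string, fixed=FIXED):
    """Return True if version_string is at or above the fixed release."""
    parts = tuple(int(p) for p in version_string.split(".")[:3])
    # Pad short versions like "20.3" so the tuple comparison is well defined.
    parts += (0,) * (len(fixed) - len(parts))
    return parts >= fixed

print(includes_fix("19.7.0"))   # the vulnerable pin from requirements.txt -> False
print(includes_fix("20.3.0"))   # the advisory's fix resolution -> True
```

Tuple comparison works here because Python compares tuples element by element, which matches how plain dotted versions order.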
5,843 | 31,025,883,988 | IssuesEvent | 2023-08-10 09:06:24 | jupyter-naas/awesome-notebooks | https://api.github.com/repos/jupyter-naas/awesome-notebooks | opened | LinkedIn - Get sentiment analysis from post comments | templates maintainer | This notebook provides a sentiment analysis of post comments from LinkedIn. It uses the LinkedIn API and Python libraries to extract the comments and analyze the sentiment of each comment. This is useful for organizations to understand the sentiment of their posts and the reactions of their followers.
| True | LinkedIn - Get sentiment analysis from post comments - This notebook provides a sentiment analysis of post comments from LinkedIn. It uses the LinkedIn API and Python libraries to extract the comments and analyze the sentiment of each comment. This is useful for organizations to understand the sentiment of their posts and the reactions of their followers.
| main | linkedin get sentiment analysis from post comments this notebook provides a sentiment analysis of post comments from linkedin it uses the linkedin api and python libraries to extract the comments and analyze the sentiment of each comment this is useful for organizations to understand the sentiment of their posts and the reactions of their followers | 1 |
1,790 | 6,575,881,782 | IssuesEvent | 2017-09-11 17:41:39 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | `assemble` does not create parent directory | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`assemble`
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### SUMMARY
If a `dest` path contains a folder that is not yet present on the target system, `assemble` will fail.
##### STEPS TO REPRODUCE
If you create an `assemble` task with a `dest` containing a path that is not created on the target system at the moment you execute the task, it will not create it and will therefore fail:
```
- name: Set up authorized_keys
assemble:
src: files/public_keys
dest: /home/{{my.user}}/.ssh/authorized_keys
owner: {{my.user}}
group: {{my.user}}
mode: 0600
```
##### ACTUAL RESULTS
The example above will fail because the folder `.ssh` was not present:
```
TASK [Set up authorized_keys] **************************
fatal: [my-host]: FAILED! => {"changed": false, "failed": true, "msg": "Could not replace file: /tmp/tmpJaCmAZ to /home/my.user/.ssh/authorized_keys: [Errno 2] No such file or directory"}
```
| True | `assemble` does not create parent directory - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`assemble`
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### SUMMARY
If a `dest` path contains a folder that is not yet present on the target system, `assemble` will fail.
##### STEPS TO REPRODUCE
If you create an `assemble` task with a `dest` containing a path that is not created on the target system at the moment you execute the task, it will not create it and will therefore fail:
```
- name: Set up authorized_keys
assemble:
src: files/public_keys
dest: /home/{{my.user}}/.ssh/authorized_keys
owner: {{my.user}}
group: {{my.user}}
mode: 0600
```
##### ACTUAL RESULTS
The example above will fail because the folder `.ssh` was not present:
```
TASK [Set up authorized_keys] **************************
fatal: [my-host]: FAILED! => {"changed": false, "failed": true, "msg": "Could not replace file: /tmp/tmpJaCmAZ to /home/my.user/.ssh/authorized_keys: [Errno 2] No such file or directory"}
```
| main | assemble does not create parent directory issue type bug report component name assemble ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides summary if a dest path contains a folder that is not yet present on the target system assemble will fail steps to reproduce if you create an assemble task with a dest containing a path that is not created on the target system at the moment you execute the task it will not create it and will therefore fail name set up authorized keys assemble src files public keys dest home my user ssh authorized keys owner my user group my user mode actual results the example above will fail because the folder ssh was not present task fatal failed changed false failed true msg could not replace file tmp tmpjacmaz to home my user ssh authorized keys no such file or directory | 1 |
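The `assemble` failure above comes down to a missing parent directory at the moment the temp file is moved into place. As a hedged sketch (illustrative Python mirroring what the module would need to do, not the actual Ansible source — `assemble_into` is a made-up helper), the fix is essentially an `os.makedirs(..., exist_ok=True)` before the move; the playbook-side workaround is an explicit `file` task with `state: directory` run before `assemble`.

```python
# Sketch of the missing step, not the actual module code: before moving the
# assembled temp file into place, make sure the parent directory of `dest`
# exists. Without that, the final move fails with ENOENT, as in the report.
import os
import shutil
import tempfile

def assemble_into(dest, fragments):
    """Concatenate text fragments into dest, creating parent dirs as needed."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)  # the step the module skips
    fd, tmp = tempfile.mkstemp()
    with os.fdopen(fd, "w") as out:
        for fragment in fragments:
            out.write(fragment)
    shutil.move(tmp, dest)  # this is where the module fails when .ssh is missing

with tempfile.TemporaryDirectory() as root:
    dest = os.path.join(root, "home", "user", ".ssh", "authorized_keys")
    assemble_into(dest, ["ssh-rsa AAA... key1\n", "ssh-rsa BBB... key2\n"])
    print(os.path.exists(dest))  # prints True
```

On the playbook side, the equivalent is a `file` task with `path: /home/{{my.user}}/.ssh` and `state: directory` placed before the `assemble` task.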
33,511 | 15,987,623,488 | IssuesEvent | 2021-04-19 01:13:05 | man-group/dtale | https://api.github.com/repos/man-group/dtale | closed | Graphs slow with big data | Performance UI good first issue | I love the interactivity of plotly to plot data and visualize it, but the problem is when you try to do it with medium to big datasets. In my case I'm trying with a dataset of (235395, 21). I was surprised because in general it is fast, but I wanted to mention a couple of points.
- When I check the correlation matrix it appears a little bit squeezed; maybe have a button to see it in a new tab? Or resize it directly in that view?
- The main annoying thing I saw was in the describe section
- I expected to be able to use the arrow keys to move up and down the columns and not use the mouse all the time. This could be another issue called Keyboard integration: use ↑↓ to move across the columns and ←→ to move across the "describe, histogram, categories..." sections
- And also in the Q-Q plot I noticed some slowness; I can't really view the values of the chart because it freezes. So I thought of showing you https://github.com/serant/lenspy, which worked for me in a case of plotting a big dataset, although I'm not sure if it is really active; I think it doesn't support all kinds of plots...
- Is mainly for the developers, but I saw in other GitHub projects a tag per issue, and that way we are able to see quickly the improvements or bugs. | True | Graphs slow with big data - I love the interactivity of plotly to plot data and visualize it, but the problem is when you try to do it with medium to big datasets. In my case I'm trying with a dataset of (235395, 21). I was surprised because in general it is fast, but I wanted to mention a couple of points.
- When I check the correlation matrix it appears a little bit squeezed; maybe have a button to see it in a new tab? Or resize it directly in that view?
- The main annoying thing I saw was in the describe section
- I expected to be able to use the arrow keys to move up and down the columns and not use the mouse all the time. This could be another issue called Keyboard integration: use ↑↓ to move across the columns and ←→ to move across the "describe, histogram, categories..." sections
- And also in the Q-Q plot I noticed some slowness; I can't really view the values of the chart because it freezes. So I thought of showing you https://github.com/serant/lenspy, which worked for me in a case of plotting a big dataset, although I'm not sure if it is really active; I think it doesn't support all kinds of plots...
- Is mainly for the developers but I saw in others github tag per issue, and that way we able to see quickly the improvements or bugs. | non_main | graphs slow with big data i love the interactivity of plotly to plot data and visualize it but the problem is when you try to do it with medium to big datasets in my case i m trying with a dataset of i was surprise because in general is fast but i wanted to mention a couple points when i check the correlation matrix it appears a little bit squeeze maybe have a button to see it in a new tab or redimension it directly in that view the main annoying thing i saw was in describe section i expected to be able to use arrows keys to move up and down the columns and not use the mouse all the time this could be another issue call keyboard integration as use ↑↓ to move across the columns and ←→ to move across describe histogram categories sections and also in the q q plot i noticed some slowness i can t really view the values of the chart because it freezes so i thought in showing you that worked for me in a case of plotting a big dataset although not sure if it really active i think it doesn t support all kinds of plots is mainly for the developers but i saw in others github tag per issue and that way we able to see quickly the improvements or bugs | 0 |
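The Q-Q plot slowness reported above is the usual too-many-points problem: the renderer chokes long before pandas does. Tools like the linked lenspy work by thinning the data before plotting; the snippet below is a deliberately naive stride-based version of that idea (my own sketch — lenspy's real API is not shown here, and production tools keep per-bucket min/max so spikes survive the thinning).

```python
# Hypothetical sketch of the idea behind plot-thinning helpers like lenspy:
# cap the number of points handed to the renderer by taking every k-th sample.
# Real tools are smarter (min/max per bucket to preserve spikes); this is the
# simplest possible version.

def downsample(points, max_points=5000):
    """Return at most max_points items, evenly strided across the input."""
    if len(points) <= max_points:
        return list(points)
    stride = -(-len(points) // max_points)  # ceiling division
    return points[::stride]

big = list(range(235_395))   # same order of magnitude as the dataset above
small = downsample(big)
print(len(small))            # bounded by max_points, so the chart stays responsive
```

The returned subset can be passed to the plotting call unchanged, since it keeps the original ordering.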
4,982 | 25,585,251,525 | IssuesEvent | 2022-12-01 08:50:48 | gbif/ipt | https://api.github.com/repos/gbif/ipt | closed | replace itext-rtf | Component-Logic Maintainability Security | itext-rtf is dead and has been stuck at version 2.1.7 since 2009. This brings in lots of outdated dependencies.
It should be replaced by something modern. As RTF is a text-based format, the simplest approach would be to use a freemarker template for the Eml2Rtf conversion. Freemarker supports escaping for the [RTF output format](http://freemarker.org/docs/ref_directive_outputformat.html#autoid_123). | True | replace itext-rtf - itext-rtf is dead and has been stuck at version 2.1.7 since 2009. This brings in lots of outdated dependencies.
It should be replaced by something modern. As RTF is a text-based format, the simplest approach would be to use a freemarker template for the Eml2Rtf conversion. Freemarker supports escaping for the [RTF output format](http://freemarker.org/docs/ref_directive_outputformat.html#autoid_123). | main | replace itext rtf itext rtf is dead and stuck in version since this brings in lots of outdated dependencies it should be replaced by something modern as rtf is a text based format the simplest would be to use a freemarker template for the conversion freemarker supports escaping for the | 1 |
189,816 | 6,802,055,157 | IssuesEvent | 2017-11-02 18:49:33 | NREL/OpenStudio-BuildStock | https://api.github.com/repos/NREL/OpenStudio-BuildStock | opened | Use WattTime for hourly emissions and primary energy estimates | priority low | - [ ] Average and marginal
- [ ] CO2e and other criteria pollutants (NOx, SOx, methane, etc.)
- [ ] primary energy (account for different heat rates aka efficiencies for different types of gas plants—peaker vs CCT, etc., e.g., 6,000 vs 11,000)
cc @joseph-robertson @rHorsey | 1.0 | Use WattTime for hourly emissions and primary energy estimates - - [ ] Average and marginal
- [ ] CO2e and other criteria pollutants (NOx, SOx, methane, etc.)
- [ ] primary energy (account for different heat rates aka efficiencies for different types of gas plants—peaker vs CCT, etc., e.g., 6,000 vs 11,000)
cc @joseph-robertson @rHorsey | non_main | use watttime for hourly emissions and primary energy estimates average and marginal and other criteria pollutants nox sox methane etc primary energy account for different heat rates aka efficiencies for different types of gas plants—peaker vs cct etc e g vs cc joseph robertson rhorsey | 0 |
70,424 | 8,551,512,500 | IssuesEvent | 2018-11-07 18:16:07 | mkdocs/mkdocs | https://api.github.com/repos/mkdocs/mkdocs | closed | "Edit on github" expects "docs" for content location and ignores "docs_dir" value | Needs design decision | I had content in a directory with a name different from "docs". I had it properly set in the docs_dir in mkdocs.yml. However, the "edit on github" button still directed me to an (invalid) location that assumed "docs" is the home of my content.
Pointer to the repository and commit renaming content directory to "docs" to work around this bug: https://github.com/QIICR/DICOM4QI/commit/a1160c4a21c70d42de2939da1520f52aa0c9b736
I am using "material" theme, not sure if this is theme-specific. | 1.0 | "Edit on github" expects "docs" for content location and ignores "docs_dir" value - I had content in a directory called different from "docs". I had it properly set in the docs_dir in mkdocs.yml. However, the "edit on github" button still directed me to an (invalid) location that assumed "docs" is the home of my content.
Pointer to the repository and commit renaming content directory to "docs" to work around this bug: https://github.com/QIICR/DICOM4QI/commit/a1160c4a21c70d42de2939da1520f52aa0c9b736
I am using "material" theme, not sure if this is theme-specific. | non_main | edit on github expects docs for content location and ignores docs dir value i had content in a directory called different from docs i had it properly set in the docs dir in mkdocs yml however the edit on github button still directed me to an invalid location that assumed docs is the home of my content pointer to the repository and commit renaming content directory to docs to work around this bug i am using material theme not sure if this is theme specific | 0 |
5,255 | 26,598,365,020 | IssuesEvent | 2023-01-23 14:08:35 | ortuman/jackal | https://api.github.com/repos/ortuman/jackal | opened | ⚠️ Looking for a new maintainer ⚠️ | help wanted waiting on new maintainer | I'm looking for a new maintainer (or maintainers, plural). Unfortunately, I no longer have time to fully dedicate to maintaining code across the project.
If I don't have any luck finding new maintainer(s) in the next **12 months** or so, it's likely I'll mark this project as in maintenance mode only and archive the repo.
Please keep the replies on-topic. | True | ⚠️ Looking for a new maintainer ⚠️ - I'm looking for a new maintainer (or maintainers, plural). Unfortunately, I no longer have time to fully dedicate to maintaining code across the project.
If I don't have any luck finding new maintainer(s) in the next **12 months** or so, it's likely I'll mark this project as in maintenance mode only and archive the repo.
Please keep the replies on-topic. | main | ⚠️ looking for a new maintainer ⚠️ i m looking for a new maintainer or maintainers plural unfortunately i no longer have time to fully dedicate to maintaining code across the project if i don t have any luck finding new maintainer s in the next months or so it s likely i ll mark this project as in maintenance mode only and archive the repo please keep the replies on topic | 1 |
1,022 | 4,817,003,281 | IssuesEvent | 2016-11-04 12:11:09 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Git module does not work via http proxy | affects_2.1 bug_report waiting_on_maintainer | <!---
Please do not report issues/requests related to Ansible modules here !!
Report them to the appropriate modules-core or modules-extras project:
- https://github.com/ansible/ansible-modules-core/issues
- https://github.com/ansible/ansible-modules-extras/issues
Also verify first that your issue/request is not already reported in GitHub
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/task/feature -->
- git module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
- ansible 2.1.2.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
--> Suse 12
##### SUMMARY
<!--- Explain the problem briefly -->
Git module does not work via http proxy
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
The connection that is initialized on the remote host does not include the proxy and goes straight to the destination IP via the default gateway.
Example task
```
- name: git reset
environment:
http_proxy: 'http://x.x.x.x:3128'
https_proxy: 'http://x.x.x.x:3128'
git:
repo: https://bitbucket.org/account/myproject.git
dest: /tmp
update: yes
force: yes
```
It ended with a connection timed out error:
```
fatal: [remote.local]: FAILED! => {"changed": false, "cmd": "/usr/bin/git ls-remote origin -h refs/heads/master", "failed": true, "invocation": {"module_args": {"accept_hostkey": false, "bare": false, "clone": true, "depth": null, "dest": "/tmp", "executable": null, "force": true, "key_file": null, "recursive": true, "reference": null, "refspec": null, "remote": "origin", "repo": "https://bitbucket.org/account/myproject.git", "ssh_opts": null, "track_submodules": false, "update": true, "verify_commit": false, "version": "HEAD"}, "module_name": "git"}, "msg": "fatal: unable to access 'https://bitbucket.org/account/myproject.git/': Failed to connect to bitbucket.org port 443: Connection timed out", "rc": 128, "stderr": "fatal: unable to access 'https://bitbucket.org/account/myproject.git/': Failed to connect to bitbucket.org port 443: Connection timed out\n", "stdout": "", "stdout_lines": []}
```
Tcpdump on a remote machine shows
```
12:10:26.074852 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447144044 ecr 0,nop,wscale 7], length 0
12:10:42.090905 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447148048 ecr 0,nop,wscale 7], length 0
12:11:14.154911 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447156064 ecr 0,nop,wscale 7], length 0
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The git module, on the remote host, should initialize the connection via the proxy server to connect to the remote repository.
| True | Git module does not work via http proxy - <!---
Please do not report issues/requests related to Ansible modules here !!
Report them to the appropriate modules-core or modules-extras project:
- https://github.com/ansible/ansible-modules-core/issues
- https://github.com/ansible/ansible-modules-extras/issues
Also verify first that your issue/request is not already reported in GitHub
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/task/feature -->
- git module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
- ansible 2.1.2.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
--> Suse 12
##### SUMMARY
<!--- Explain the problem briefly -->
Git module does not work via http proxy
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
The connection that is initialized on the remote host does not include the proxy and goes straight to the destination IP via the default gateway.
Example task
```
- name: git reset
environment:
http_proxy: 'http://x.x.x.x:3128'
https_proxy: 'http://x.x.x.x:3128'
git:
repo: https://bitbucket.org/account/myproject.git
dest: /tmp
update: yes
force: yes
```
It ended with a connection timed out error:
```
fatal: [remote.local]: FAILED! => {"changed": false, "cmd": "/usr/bin/git ls-remote origin -h refs/heads/master", "failed": true, "invocation": {"module_args": {"accept_hostkey": false, "bare": false, "clone": true, "depth": null, "dest": "/tmp", "executable": null, "force": true, "key_file": null, "recursive": true, "reference": null, "refspec": null, "remote": "origin", "repo": "https://bitbucket.org/account/myproject.git", "ssh_opts": null, "track_submodules": false, "update": true, "verify_commit": false, "version": "HEAD"}, "module_name": "git"}, "msg": "fatal: unable to access 'https://bitbucket.org/account/myproject.git/': Failed to connect to bitbucket.org port 443: Connection timed out", "rc": 128, "stderr": "fatal: unable to access 'https://bitbucket.org/account/myproject.git/': Failed to connect to bitbucket.org port 443: Connection timed out\n", "stdout": "", "stdout_lines": []}
```
Tcpdump on a remote machine shows
```
12:10:26.074852 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447144044 ecr 0,nop,wscale 7], length 0
12:10:42.090905 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447148048 ecr 0,nop,wscale 7], length 0
12:11:14.154911 IP remote.local.44928 > bitbucket.org.https: Flags [S], seq 2674356411, win 29200, options [mss 1460,sackOK,TS val 1447156064 ecr 0,nop,wscale 7], length 0
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The git module, on the remote host, should initialize the connection via the proxy server to connect to the remote repository.
| main | git module does not work via http proxy please do not report issues requests related to ansible modules here report them to the appropriate modules core or modules extras project also verify first that your issue request is not already reported in github issue type bug report component name git module ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific suse summary git module does not work via http proxy steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used the connection which is initialized on the remote host does not include proxy and is going straight to destination ip vi default gateway example task name git reset environment http proxy https proxy git repo dest tmp update yes force yes ended with connection timed error fatal failed changed false cmd usr bin git ls remote origin h refs heads master failed true invocation module args accept hostkey false bare false clone true depth null dest tmp executable null force true key file null recursive true reference null refspec null remote origin repo ssh opts null track submodules false update true verify commit false version head module name git msg fatal unable to access failed to connect to bitbucket org port connection timed out rc stderr fatal unable to access failed to connect to bitbucket org port connection timed out n stdout stdout lines tcpdump on a remote machine shows ip remote local bitbucket org https flags seq win options length ip remote local bitbucket org https flags seq win options length ip remote local bitbucket org https flags seq win options length expected results git module on the remote host should initialize connection via proxy server to connect to remote repository | 1 |
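For the proxy bug above, the key detail is that environment variables set on the task have to actually reach the `git` child process (or git has to be told explicitly with `-c http.proxy=...`). The sketch below shows the env-propagation half with a stand-in child process so it runs without git or a network; `run_with_proxy` and the probe command are my own illustration, not the module's real code, and the proxy address is the placeholder from the report.

```python
# Sketch of the two ways the proxy could reach git (assumed workaround, not
# the module's real code): pass the proxy env vars down to the child process,
# or set git's own http.proxy with `-c`. Demonstrated with a stand-in child
# process so it runs without git or a network.
import os
import subprocess
import sys

def run_with_proxy(cmd, proxy="http://x.x.x.x:3128"):
    env = dict(os.environ)
    env["http_proxy"] = proxy    # lower-case form, read by libcurl/git
    env["https_proxy"] = proxy
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# Stand-in child process that just reports what it inherited:
probe = [sys.executable, "-c", "import os; print(os.environ['https_proxy'])"]
result = run_with_proxy(probe)
print(result.stdout.strip())  # prints http://x.x.x.x:3128

# The equivalent explicit form for git would be roughly:
#   git -c http.proxy=http://x.x.x.x:3128 ls-remote origin -h refs/heads/master
```

If the child never sees these variables, it connects directly — exactly the straight-to-port-443 SYNs visible in the tcpdump above.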
1,711 | 2,565,195,755 | IssuesEvent | 2015-02-07 03:53:48 | waterbearlang/waterbear | https://api.github.com/repos/waterbearlang/waterbear | opened | Template for waterbear tutorials | UX / Design | @JarrettSpiker we can keep track of ideas here.
Goal: Provide an easy way for users to get started creating a fun project or solving an interesting problem.
Ideas:
- Create a series of projects, with step by step tutorials with milestones
- The results pane would have two tabs, "results" and "tutorials" (or getting started), so that the user can go back and forth between instruction and testing. This idea is slightly inspired by CodeAcademy.
This gives a decently scoped project. There is no testing or forcing the user to get a correct answer to go to the next step, but it does give a nice interface to store tutorials and makes Waterbear more accessible to people who don't have a mentor to help them get started. It also provides the ability to easily add on features. For example, make it more like code academy where the code is "tested" as they go to be able to unlock the next step.
I attached my initial sketches of my ideas.


| 1.0 | Template for waterbear tutorials - @JarrettSpiker we can keep track of ideas here.
Goal: Provide an easy way for users to get started creating a fun project or solving an interesting problem.
Ideas:
- Create a series of projects, with step by step tutorials with milestones
- The results pane would have two tabs, "results" and "tutorials" (or getting started), so that the user can go back and forth between instruction and testing. This idea is slightly inspired by CodeAcademy.
This gives a decently scoped project. There is no testing or forcing the user to get a correct answer to go to the next step, but it does give a nice interface to store tutorials and makes Waterbear more accessible to people who don't have a mentor to help them get started. It also provides the ability to easily add on features. For example, make it more like code academy where the code is "tested" as they go to be able to unlock the next step.
I attached my initial sketches of my ideas.


| non_main | template for waterbear tutorials jarrettspiker we can keep track of ideas here goal provide an easy way for users to get started creating a fun project or solving an interesting problem ideas create a series of projects with step by step tutorials with milestones the results pane would have two tabs results and tutorials or getting started so that the user can go back and forth between instruction and testing this idea is slightly inspired by codeacademy this gives a decently scoped project there is no testing or forcing the user to get a correct answer to go to the next step but it does give a nice interface to store tutorials and makes waterbear more accessible to people who don t have a mentor to help them get started it also provides the ability to easily add on features for example make it more like code academy where the code is tested as they go to be able to unlock the next step i attached my initial sketches of my ideas | 0 |
193,839 | 14,662,848,254 | IssuesEvent | 2020-12-29 08:19:16 | github-vet/rangeloop-pointer-findings | https://api.github.com/repos/github-vet/rangeloop-pointer-findings | closed | blugelabs/bluge: search/searcher/search_phrase_test.go; 48 LoC | fresh small test |
Found a possible issue in [blugelabs/bluge](https://www.github.com/blugelabs/bluge) at [search/searcher/search_phrase_test.go](https://github.com/blugelabs/bluge/blob/3f56c73b42596ced744093756398811e204e7a0f/search/searcher/search_phrase_test.go#L59-L106)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable test used in defer or goroutine at line 61
[Click here to see the code in its original context.](https://github.com/blugelabs/bluge/blob/3f56c73b42596ced744093756398811e204e7a0f/search/searcher/search_phrase_test.go#L59-L106)
<details>
<summary>Click here to show the 48 line(s) of Go which triggered the analyzer.</summary>
```go
for testIndex, test := range tests {
defer func() {
err := test.searcher.Close()
if err != nil {
t.Fatal(err)
}
}()
ctx := &search.Context{
DocumentMatchPool: search.NewDocumentMatchPool(test.searcher.DocumentMatchPoolSize(), 0),
}
next, err := test.searcher.Next(ctx)
i := 0
for err == nil && next != nil {
next.Complete(nil)
if i < len(test.results) {
if next.Number != test.results[i].Number {
t.Errorf("expected result %d to have number %d got %d for test %d\n", i, test.results[i].Number, next.Number, testIndex)
}
if next.Score != test.results[i].Score {
t.Errorf("expected result %d to have score %v got %v for test %d\n", i, test.results[i].Score, next.Score, testIndex)
t.Logf("scoring explanation: %s\n", next.Explanation)
}
for _, ft := range test.fieldterms {
locs := next.Locations[ft[0]][ft[1]]
explocs := test.locations[ft[0]][ft[1]]
if len(explocs) != len(locs) {
t.Fatalf("expected result %d to have %d Locations (%#v) but got %d (%#v) for test %d with field %q and term %q\n", i, len(explocs), explocs, len(locs), locs, testIndex, ft[0], ft[1])
}
for ind, exploc := range explocs {
if !reflect.DeepEqual(*locs[ind], exploc) {
t.Errorf("expected result %d to have Location %v got %v for test %d\n", i, exploc, locs[ind], testIndex)
}
}
}
}
ctx.DocumentMatchPool.Put(next)
next, err = test.searcher.Next(ctx)
i++
}
if err != nil {
t.Fatalf("error iterating searcher: %v for test %d", err, testIndex)
}
if len(test.results) != i {
t.Errorf("expected %d results got %d for test %d", len(test.results), i, testIndex)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 3f56c73b42596ced744093756398811e204e7a0f
| 1.0 | blugelabs/bluge: search/searcher/search_phrase_test.go; 48 LoC -
Found a possible issue in [blugelabs/bluge](https://www.github.com/blugelabs/bluge) at [search/searcher/search_phrase_test.go](https://github.com/blugelabs/bluge/blob/3f56c73b42596ced744093756398811e204e7a0f/search/searcher/search_phrase_test.go#L59-L106)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable test used in defer or goroutine at line 61
[Click here to see the code in its original context.](https://github.com/blugelabs/bluge/blob/3f56c73b42596ced744093756398811e204e7a0f/search/searcher/search_phrase_test.go#L59-L106)
<details>
<summary>Click here to show the 48 line(s) of Go which triggered the analyzer.</summary>
```go
for testIndex, test := range tests {
defer func() {
err := test.searcher.Close()
if err != nil {
t.Fatal(err)
}
}()
ctx := &search.Context{
DocumentMatchPool: search.NewDocumentMatchPool(test.searcher.DocumentMatchPoolSize(), 0),
}
next, err := test.searcher.Next(ctx)
i := 0
for err == nil && next != nil {
next.Complete(nil)
if i < len(test.results) {
if next.Number != test.results[i].Number {
t.Errorf("expected result %d to have number %d got %d for test %d\n", i, test.results[i].Number, next.Number, testIndex)
}
if next.Score != test.results[i].Score {
t.Errorf("expected result %d to have score %v got %v for test %d\n", i, test.results[i].Score, next.Score, testIndex)
t.Logf("scoring explanation: %s\n", next.Explanation)
}
for _, ft := range test.fieldterms {
locs := next.Locations[ft[0]][ft[1]]
explocs := test.locations[ft[0]][ft[1]]
if len(explocs) != len(locs) {
t.Fatalf("expected result %d to have %d Locations (%#v) but got %d (%#v) for test %d with field %q and term %q\n", i, len(explocs), explocs, len(locs), locs, testIndex, ft[0], ft[1])
}
for ind, exploc := range explocs {
if !reflect.DeepEqual(*locs[ind], exploc) {
t.Errorf("expected result %d to have Location %v got %v for test %d\n", i, exploc, locs[ind], testIndex)
}
}
}
}
ctx.DocumentMatchPool.Put(next)
next, err = test.searcher.Next(ctx)
i++
}
if err != nil {
t.Fatalf("error iterating searcher: %v for test %d", err, testIndex)
}
if len(test.results) != i {
t.Errorf("expected %d results got %d for test %d", len(test.results), i, testIndex)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 3f56c73b42596ced744093756398811e204e7a0f
| non_main | blugelabs bluge search searcher search phrase test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable test used in defer or goroutine at line click here to show the line s of go which triggered the analyzer go for testindex test range tests defer func err test searcher close if err nil t fatal err ctx search context documentmatchpool search newdocumentmatchpool test searcher documentmatchpoolsize next err test searcher next ctx i for err nil next nil next complete nil if i len test results if next number test results number t errorf expected result d to have number d got d for test d n i test results number next number testindex if next score test results score t errorf expected result d to have score v got v for test d n i test results score next score testindex t logf scoring explanation s n next explanation for ft range test fieldterms locs next locations explocs test locations if len explocs len locs t fatalf expected result d to have d locations v but got d v for test d with field q and term q n i len explocs explocs len locs locs testindex ft ft for ind exploc range explocs if reflect deepequal locs exploc t errorf expected result d to have location v got v for test d n i exploc locs testindex ctx documentmatchpool put next next err test searcher next ctx i if err nil t fatalf error iterating searcher v for test d err testindex if len test results i t errorf expected d results got d for test d len test results i testindex leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 0 |
154,092 | 13,537,058,721 | IssuesEvent | 2020-09-16 09:55:46 | backdrop/backdrop-issues | https://api.github.com/repos/backdrop/backdrop-issues | opened | Improve installation instructions for non-technical users | type - documentation | **Description of the need**
It actually popped up in the Zulip chat. A user had a really hard time figuring out how to install Backdrop with MAMP.
The [Backdrop installation instructions](https://backdropcms.org/installation) require some basic technical knowledge regarding web. That's fine for people coming from Drupal or people who already did web CMS installs.
The instructions seem, however, a little sparse for people with few know-how regarding the web. (How to create a database, what to put where...)
Comparing our instructions to the [Wordpress instructions](https://wordpress.org/support/article/how-to-install-wordpress/) might be helpful for improvements.
They provide phpMyAdmin screenshots and overall more details regarding what-and-how-and-where.
**Proposed solution**
We could keep the current instructions but also provide a "newbie page" with more step-by-step info and screenshots.
| 1.0 | Improve installation instructions for non-technical users - **Description of the need**
It actually popped up in the Zulip chat. A user had a really hard time figuring out how to install Backdrop with MAMP.
The [Backdrop installation instructions](https://backdropcms.org/installation) require some basic technical knowledge regarding web. That's fine for people coming from Drupal or people who already did web CMS installs.
The instructions seem, however, a little sparse for people with few know-how regarding the web. (How to create a database, what to put where...)
Comparing our instructions to the [Wordpress instructions](https://wordpress.org/support/article/how-to-install-wordpress/) might be helpful for improvements.
They provide phpMyAdmin screenshots and overall more details regarding what-and-how-and-where.
**Proposed solution**
We could keep the current instructions but also provide a "newbie page" with more step-by-step info and screenshots.
| non_main | improve installation instructions for non technical users description of the need it actually popped up in the zulip chat a user had a really hard time to figure out how to install backdrop with mamp the require some basic technical knowledge regarding web that s fine for people coming from drupal or people who already did web cms installs the instructions seem however a little sparse for people with few know how regarding the web how to create a database what to put where comparing our instructions to the might be helpful for improvements they provide phpmyadmin screenshots and overall more details regarding what and how and where proposed solution we could keep the current instructions but also provide a newbie page with more step by step info and screenshots | 0 |
228,143 | 18,161,614,572 | IssuesEvent | 2021-09-27 10:14:34 | ressec/Cherry | https://api.github.com/repos/ressec/Cherry | closed | Deliver the Email Address integration tests | Activity: Coding Model: Person Module: Persistence Category: Integration Test | ## Purpose
This task aims to deliver Email Address `integration tests` to ensure the persistent email address entities are working as expected on a target database.
### Integration Tests
- testCreateEmailAddressWithoutDocument
- testCreateEmailAddressWithDocument
- testUpdateEmailAddress
- testDeleteEmailAddress
- testRemovalOrphanEmailAddress | 1.0 | Deliver the Email Address integration tests - ## Purpose
This task aims to deliver Email Address `integration tests` to ensure the persistent email address entities are working as expected on a target database.
### Integration Tests
- testCreateEmailAddressWithoutDocument
- testCreateEmailAddressWithDocument
- testUpdateEmailAddress
- testDeleteEmailAddress
- testRemovalOrphanEmailAddress | non_main | deliver the email address integration tests purpose this task aims to deliver email address integration tests to ensure the persistent email address entities are working as expected on a target database integration tests testcreateemailaddresswithoutdocument testcreateemailaddresswithdocument testupdateemailaddress testdeleteemailaddress testremovalorphanemailaddress | 0 |
358,982 | 25,211,521,386 | IssuesEvent | 2022-11-14 04:37:44 | SigNoz/signoz-website | https://api.github.com/repos/SigNoz/signoz-website | closed | Linking internal pages | documentation | Will this type of linking work?
<img width="841" alt="Screenshot 2022-10-28 at 4 52 39 PM" src="https://user-images.githubusercontent.com/83692067/198575728-69341b32-1786-43ab-92e7-00b31b269ef5.png">
Currently, I am using the entire link like shown below. The below opens in a new page. I think for docs section opening in the same tab makes more sense just like our current behaviour.
<img width="942" alt="Screenshot 2022-10-28 at 4 54 12 PM" src="https://user-images.githubusercontent.com/83692067/198575916-e64924b6-ab07-47b2-94f0-8770e717e90c.png">
| 1.0 | Linking internal pages - Will this type of linking work?
<img width="841" alt="Screenshot 2022-10-28 at 4 52 39 PM" src="https://user-images.githubusercontent.com/83692067/198575728-69341b32-1786-43ab-92e7-00b31b269ef5.png">
Currently, I am using the entire link like shown below. The below opens in a new page. I think for docs section opening in the same tab makes more sense just like our current behaviour.
<img width="942" alt="Screenshot 2022-10-28 at 4 54 12 PM" src="https://user-images.githubusercontent.com/83692067/198575916-e64924b6-ab07-47b2-94f0-8770e717e90c.png">
| non_main | linking internal pages will this type of linking work img width alt screenshot at pm src currently i am using the entire link like shown below the below opens in a new page i think for docs section opening in the same tab makes more sense just like our current behaviour img width alt screenshot at pm src | 0 |
189,720 | 22,047,098,562 | IssuesEvent | 2022-05-30 03:53:28 | madhans23/linux-4.1.15 | https://api.github.com/repos/madhans23/linux-4.1.15 | closed | CVE-2017-5550 (Medium) detected in linux-stable-rtv4.1.33 - autoclosed | security vulnerability | ## CVE-2017-5550 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.1.15/commit/f9d19044b0eef1965f9bc412d7d9e579b74ec968">f9d19044b0eef1965f9bc412d7d9e579b74ec968</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/iov_iter.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Off-by-one error in the pipe_advance function in lib/iov_iter.c in the Linux kernel before 4.9.5 allows local users to obtain sensitive information from uninitialized heap-memory locations in opportunistic circumstances by reading from a pipe after an incorrect buffer-release decision.
<p>Publish Date: 2017-02-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5550>CVE-2017-5550</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-5550">https://nvd.nist.gov/vuln/detail/CVE-2017-5550</a></p>
<p>Release Date: 2017-02-06</p>
<p>Fix Resolution: 4.9.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2017-5550 (Medium) detected in linux-stable-rtv4.1.33 - autoclosed - ## CVE-2017-5550 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.1.15/commit/f9d19044b0eef1965f9bc412d7d9e579b74ec968">f9d19044b0eef1965f9bc412d7d9e579b74ec968</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/iov_iter.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Off-by-one error in the pipe_advance function in lib/iov_iter.c in the Linux kernel before 4.9.5 allows local users to obtain sensitive information from uninitialized heap-memory locations in opportunistic circumstances by reading from a pipe after an incorrect buffer-release decision.
<p>Publish Date: 2017-02-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5550>CVE-2017-5550</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-5550">https://nvd.nist.gov/vuln/detail/CVE-2017-5550</a></p>
<p>Release Date: 2017-02-06</p>
<p>Fix Resolution: 4.9.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files lib iov iter c vulnerability details off by one error in the pipe advance function in lib iov iter c in the linux kernel before allows local users to obtain sensitive information from uninitialized heap memory locations in opportunistic circumstances by reading from a pipe after an incorrect buffer release decision publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
724,228 | 24,921,407,800 | IssuesEvent | 2022-10-31 00:38:22 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | pm_device causes assertion error in sched.c with lis2dh | bug priority: low area: Sensors area: Power Management Stale | **Describe the bug**
When pm_power_state_force set to OFF, the idle thread tries to suspend sensors.
`kernel/idle.c`:
```c
if (pm_system_suspend(_kernel.idle) == false) {
k_cpu_idle();
}
```
But lis2dh has pm action handler that uses i2c bus.
Call sequence:
- `subsys/pm/pm.c`: function pm_suspend_devices
- `subsys/pm/device.c`: function pm_device_action_run
- `drivers/sensor/lis2dh/lis2dh.c`: lis2dh_pm_action (lis2dh->hw_tf->write_reg)
- `drivers/sensor/lis2dh/lis2dh_i2c.c`: lis2dh_i2c_write_reg
The i2c driver for nordic has semaphore inside to wait operation done:
`drivers/i2c/i2c_nrfx_twi.c`:
```c
ret = k_sem_take(&(get_dev_data(dev)->completion_sync),
I2C_TRANSFER_TIMEOUT_MSEC);
```
Then function `k_sem_take` calls function `z_pend_curr` in `sched.c`.
This call sequence in `sched.c` causes assertion error:
- z_pend_curr
- pend
- add_to_waitq_locked
- z_priq_wait_add
- z_priq_dumb_add (here error: `__ASSERT_NO_MSG(!z_is_idle_thread_object(thread));`)
**To Reproduce**
Steps to reproduce the behavior:
Set the configs:
```
CONFIG_I2C=y
CONFIG_SENSOR=y
CONFIG_PM=y
CONFIG_PM_DEVICE=y
CONFIG_LIS2DH=y
```
Call the `pm_power_state_force(0, { PM_STATE_SOFT_OFF, 0, 0 });` from delayed work.
**Expected behavior**
Sensors and CPU halted without error.
Suspending sensors is expected to happen outside the idle thread.
Probably, pm needs to be improved:
- Suspend devices in separate thread/work
- When devices suspended, switch to the idle thread
**Logs and console output**
```
[00:00:30.289,337] <inf> event_manager: e: power_down_event
[00:00:30.289,398] <inf> event_manager: e:module_state_event module:buttons state:STANDBY
ASSERTION FAIL [!z_is_idle_thread_object(thread)] @ WEST_TOPDIR/zephyr/kernel/sched.c:186
[00:00:30.336,364] <err> os: r0/a1: 0x00000004 r1/a2: 0x000000ba r2/a3: 0x200007e0
[00:00:30.336,395] <err> os: r3/a4: 0x20000e20 r12/ip: 0x00000000 r14/lr: 0x0000d4d5
[00:00:30.336,395] <err> os: xpsr: 0x61000000
[00:00:30.336,395] <err> os: Faulting instruction address (r15/pc): 0x0000efe6
[00:00:30.336,395] <err> os: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0
[00:00:30.336,395] <err> os: Current thread: 0x200007e0 (unknown)
```
**Environment (please complete the following information):**
- OS: Linux
- Toolchain: gnuarmeabi
- board: thingy52_nrf52832
| 1.0 | pm_device causes assertion error in sched.c with lis2dh - **Describe the bug**
When pm_power_state_force is set to OFF, the idle thread tries to suspend sensors.
`kernel/idle.c`:
```c
if (pm_system_suspend(_kernel.idle) == false) {
k_cpu_idle();
}
```
But lis2dh has pm action handler that uses i2c bus.
Call sequence:
- `subsys/pm/pm.c`: function pm_suspend_devices
- `subsys/pm/device.c`: function pm_device_action_run
- `drivers/sensor/lis2dh/lis2dh.c`: lis2dh_pm_action (lis2dh->hw_tf->write_reg)
- `drivers/sensor/lis2dh/lis2dh_i2c.c`: lis2dh_i2c_write_reg
The i2c driver for nordic has semaphore inside to wait operation done:
`drivers/i2c/i2c_nrfx_twi.c`:
```c
ret = k_sem_take(&(get_dev_data(dev)->completion_sync),
I2C_TRANSFER_TIMEOUT_MSEC);
```
Then function `k_sem_take` calls function `z_pend_curr` in `sched.c`.
This call sequence in `sched.c` causes assertion error:
- z_pend_curr
- pend
- add_to_waitq_locked
- z_priq_wait_add
- z_priq_dumb_add (here error: `__ASSERT_NO_MSG(!z_is_idle_thread_object(thread));`)
**To Reproduce**
Steps to reproduce the behavior:
Set the configs:
```
CONFIG_I2C=y
CONFIG_SENSOR=y
CONFIG_PM=y
CONFIG_PM_DEVICE=y
CONFIG_LIS2DH=y
```
Call the `pm_power_state_force(0, { PM_STATE_SOFT_OFF, 0, 0 });` from delayed work.
**Expected behavior**
Sensors and CPU halted without error.
Suspending sensors is expected to happen outside the idle thread.
Probably, pm needs to be improved:
- Suspend devices in separate thread/work
- When devices suspended, switch to the idle thread
**Logs and console output**
```
[00:00:30.289,337] <inf> event_manager: e: power_down_event
[00:00:30.289,398] <inf> event_manager: e:module_state_event module:buttons state:STANDBY
ASSERTION FAIL [!z_is_idle_thread_object(thread)] @ WEST_TOPDIR/zephyr/kernel/sched.c:186
[00:00:30.336,364] <err> os: r0/a1: 0x00000004 r1/a2: 0x000000ba r2/a3: 0x200007e0
[00:00:30.336,395] <err> os: r3/a4: 0x20000e20 r12/ip: 0x00000000 r14/lr: 0x0000d4d5
[00:00:30.336,395] <err> os: xpsr: 0x61000000
[00:00:30.336,395] <err> os: Faulting instruction address (r15/pc): 0x0000efe6
[00:00:30.336,395] <err> os: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0
[00:00:30.336,395] <err> os: Current thread: 0x200007e0 (unknown)
```
**Environment (please complete the following information):**
- OS: Linux
- Toolchain: gnuarmeabi
- board: thingy52_nrf52832
| non_main | pm device causes assertion error in sched c with describe the bug when pm power state force set to off the idle thread tries to suspend sensors kernel idle c c if pm system suspend kernel idle false k cpu idle but has pm action handler that uses bus call sequence subsys pm pm c function pm suspend devices subsys pm device c function pm device action run drivers sensor c pm action hw tf write reg drivers sensor c write reg the driver for nordic has semaphore inside to wait operation done drivers nrfx twi c c ret k sem take get dev data dev completion sync transfer timeout msec then function k sem take calls function z pend curr in sched c this call sequence in sched c causes assertion error z pend curr pend add to waitq locked z priq wait add z priq dumb add here error assert no msg z is idle thread object thread to reproduce steps to reproduce the behavior set the configs config y config sensor y config pm y config pm device y config y call the pm power state force pm state soft off from delayed work expected behavior sensors and cpu halted without error expected suspending sensors not from idle thread probably pm needs to be improved suspend devices in separate thread work when devices suspended switch to the idle thread logs and console output event manager e power down event event manager e module state event module buttons state standby assertion fail west topdir zephyr kernel sched c os os ip lr os xpsr os faulting instruction address pc os zephyr fatal error kernel panic on cpu os current thread unknown environment please complete the following information os linux toolchain gnuarmeabi board | 0 |
27,018 | 5,310,327,384 | IssuesEvent | 2017-02-12 19:07:44 | gpbl/react-day-picker | https://api.github.com/repos/gpbl/react-day-picker | closed | fromMonth not working as expected | documentation support | My code below is not influencing the fromMonth in any way. Is there a bug or am I doing it wrong?
**Render function**
```
render() {
const month = new Date();
month.setMonth(month.getMonth() - 10);
return (
<DayPicker
firstDayOfWeek={ 1 }
onDayClick={ this.handleDayClick }
selectedDays={ day => DateUtils.isDayInRange(day, { from: this.state.from, to: this.state.to }) }
disabledDays={this.disabledDays}
enableOutsideDays={true}
fixedWeeks={true}
numberOfMonths={this.props.numberOfMonths}
toMonth={this.props.disableFuture ? new Date() : null}
fromMonth={month}
/>
);
}
``` | 1.0 | fromMonth not working as expected - My code below is not influencing the fromMonth in any way. Is there a bug or am I doing it wrong?
**Render function**
```
render() {
const month = new Date();
month.setMonth(month.getMonth() - 10);
return (
<DayPicker
firstDayOfWeek={ 1 }
onDayClick={ this.handleDayClick }
selectedDays={ day => DateUtils.isDayInRange(day, { from: this.state.from, to: this.state.to }) }
disabledDays={this.disabledDays}
enableOutsideDays={true}
fixedWeeks={true}
numberOfMonths={this.props.numberOfMonths}
toMonth={this.props.disableFuture ? new Date() : null}
fromMonth={month}
/>
);
}
``` | non_main | frommonth not working as expected my code below is not influencing the frommonth in any way is there a bug or am i doing it wrong render function render const month new date month setmonth month getmonth return daypicker firstdayofweek ondayclick this handledayclick selecteddays day dateutils isdayinrange day from this state from to this state to disableddays this disableddays enableoutsidedays true fixedweeks true numberofmonths this props numberofmonths tomonth this props disablefuture new date null frommonth month | 0 |
190,817 | 22,162,172,395 | IssuesEvent | 2022-06-04 17:05:33 | lettucebo/Ci.Extension | https://api.github.com/repos/lettucebo/Ci.Extension | opened | CVE-2017-0248 (High) detected in system.net.http.4.3.0.nupkg | security vulnerability | ## CVE-2017-0248 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.net.http.4.3.0.nupkg</b></p></summary>
<p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p>
<p>Path to dependency file: /Ci.Extensions.Test/Ci.Extensions.Test.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p>
<p>
Dependency Hierarchy:
- mstest.testadapter.2.2.10.nupkg (Root Library)
- newtonsoft.json.10.0.3.nupkg
- netstandard.library.1.6.1.nupkg
- :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lettucebo/Ci.Extension/commit/64486d8b998c8e372d17b403c47d3e7af392c719">64486d8b998c8e372d17b403c47d3e7af392c719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Microsoft .NET Framework 2.0, 3.5, 3.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2 and 4.7 allow an attacker to bypass Enhanced Security Usage taggings when they present a certificate that is invalid for a specific use, aka ".NET Security Feature Bypass Vulnerability."
<p>Publish Date: 2017-05-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-0248>CVE-2017-0248</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/aspnet/Announcements/issues/239">https://github.com/aspnet/Announcements/issues/239</a></p>
<p>Release Date: 2017-05-12</p>
<p>Fix Resolution: System.Text.Encodings.Web - 4.0.1, 4.3.1;System.Net.Http - 4.1.2, 4.3.2;System.Net.Http.WinHttpHandler - 4.0.2, 4.3.1;System.Net.Security - 4.0.1, 4.3.1;System.Net.WebSockets.Client - 4.0.1, 4.3.1;Microsoft.AspNetCore.Mvc - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Core - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Abstractions - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.ApiExplorer - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Cors - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.DataAnnotations - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Json - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Xml - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Localization - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Razor.Host - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Razor - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.TagHelpers - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.ViewFeatures - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.WebApiCompatShim - 1.0.4, 1.1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2017-0248 (High) detected in system.net.http.4.3.0.nupkg - ## CVE-2017-0248 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.net.http.4.3.0.nupkg</b></p></summary>
<p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p>
<p>Path to dependency file: /Ci.Extensions.Test/Ci.Extensions.Test.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p>
<p>
Dependency Hierarchy:
- mstest.testadapter.2.2.10.nupkg (Root Library)
- newtonsoft.json.10.0.3.nupkg
- netstandard.library.1.6.1.nupkg
- :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lettucebo/Ci.Extension/commit/64486d8b998c8e372d17b403c47d3e7af392c719">64486d8b998c8e372d17b403c47d3e7af392c719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Microsoft .NET Framework 2.0, 3.5, 3.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2 and 4.7 allow an attacker to bypass Enhanced Security Usage taggings when they present a certificate that is invalid for a specific use, aka ".NET Security Feature Bypass Vulnerability."
<p>Publish Date: 2017-05-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-0248>CVE-2017-0248</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/aspnet/Announcements/issues/239">https://github.com/aspnet/Announcements/issues/239</a></p>
<p>Release Date: 2017-05-12</p>
<p>Fix Resolution: System.Text.Encodings.Web - 4.0.1, 4.3.1;System.Net.Http - 4.1.2, 4.3.2;System.Net.Http.WinHttpHandler - 4.0.2, 4.3.1;System.Net.Security - 4.0.1, 4.3.1;System.Net.WebSockets.Client - 4.0.1, 4.3.1;Microsoft.AspNetCore.Mvc - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Core - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Abstractions - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.ApiExplorer - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Cors - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.DataAnnotations - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Json - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Xml - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Localization - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Razor.Host - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.Razor - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.TagHelpers - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.ViewFeatures - 1.0.4, 1.1.3;Microsoft.AspNetCore.Mvc.WebApiCompatShim - 1.0.4, 1.1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in system net http nupkg cve high severity vulnerability vulnerable library system net http nupkg provides a programming interface for modern http applications including http client components that library home page a href path to dependency file ci extensions test ci extensions test csproj path to vulnerable library home wss scanner nuget packages system net http system net http nupkg dependency hierarchy mstest testadapter nupkg root library newtonsoft json nupkg netstandard library nupkg x system net http nupkg vulnerable library found in head commit a href found in base branch master vulnerability details microsoft net framework and allow an attacker to bypass enhanced security usage taggings when they present a certificate that is invalid for a specific use aka net security feature bypass vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution system text encodings web system net http system net http winhttphandler system net security system net websockets client microsoft aspnetcore mvc microsoft aspnetcore mvc core microsoft aspnetcore mvc abstractions microsoft aspnetcore mvc apiexplorer microsoft aspnetcore mvc cors microsoft aspnetcore mvc dataannotations microsoft aspnetcore mvc formatters json microsoft aspnetcore mvc formatters xml microsoft aspnetcore mvc localization microsoft aspnetcore mvc razor host microsoft aspnetcore mvc razor microsoft aspnetcore mvc taghelpers microsoft aspnetcore mvc viewfeatures microsoft aspnetcore mvc webapicompatshim 
step up your open source security game with mend | 0 |
8,675 | 2,875,826,555 | IssuesEvent | 2015-06-09 10:36:03 | HSLdevcom/digitransit-ui | https://api.github.com/repos/HSLdevcom/digitransit-ui | opened | Disruption info | designed | - [ ] Front page disruption info popup as in concept
- [ ] Highlight those lines on stop card that have disruption information as in concept | 1.0 | Disruption info - - [ ] Front page disruption info popup as in concept
- [ ] Highlight those lines on stop card that have disruption information as in concept | non_main | disruption info front page disruption info popup as in concept highlight those lines on stop card that have disruption information as in concept | 0 |
3,124 | 11,960,944,874 | IssuesEvent | 2020-04-05 05:57:56 | jayvdb/pypidb | https://api.github.com/repos/jayvdb/pypidb | opened | mwlib.* | network unmaintained | e.g. mwlib.ext
https://www.reportlab.org & http://www.reportlab.org fail.
```
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
WARNING pypidb._pypi:_pypi.py:459 http://www.reportlab.org: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
```
There is a lot of links to https://www.mediawiki.org/wiki/Special:ExtensionDistributor/Collection , which has no chance of helping.
They also load http://blog.pediapress.com/
timeout occurs in rtd, but it is after the rtd has resolved already.
```py
pypidb/_rtd.py:30: in __init__
token = _get_token("readthedocs.io")
``` | True | mwlib.* - e.g. mwlib.ext
https://www.reportlab.org & http://www.reportlab.org fail.
```
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
ERROR https_everywhere.adapter:adapter.py:124 handle_error requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
WARNING pypidb._pypi:_pypi.py:459 http://www.reportlab.org: HTTPSConnectionPool(host='www.reportlab.org', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f097d74bf40>, 'Connection to www.reportlab.org timed out. (connect timeout=15)'))
```
There is a lot of links to https://www.mediawiki.org/wiki/Special:ExtensionDistributor/Collection , which has no chance of helping.
They also load http://blog.pediapress.com/
timeout occurs in rtd, but it is after the rtd has resolved already.
```py
pypidb/_rtd.py:30: in __init__
token = _get_token("readthedocs.io")
``` | main | mwlib e g mwlib ext fail error https everywhere adapter adapter py handle error requests exceptions connecttimeout httpsconnectionpool host port max retries exceeded with url caused by connecttimeouterror connection to timed out connect timeout error https everywhere adapter adapter py handle error requests exceptions connecttimeout httpsconnectionpool host port max retries exceeded with url caused by connecttimeouterror connection to timed out connect timeout error https everywhere adapter adapter py handle error requests exceptions connecttimeout httpsconnectionpool host port max retries exceeded with url caused by connecttimeouterror connection to timed out connect timeout error https everywhere adapter adapter py handle error requests exceptions connecttimeout httpsconnectionpool host port max retries exceeded with url caused by connecttimeouterror connection to timed out connect timeout error https everywhere adapter adapter py handle error requests exceptions connecttimeout httpsconnectionpool host port max retries exceeded with url caused by connecttimeouterror connection to timed out connect timeout error https everywhere adapter adapter py handle error requests exceptions connecttimeout httpsconnectionpool host port max retries exceeded with url caused by connecttimeouterror connection to timed out connect timeout warning pypidb pypi pypi py httpsconnectionpool host port max retries exceeded with url caused by connecttimeouterror connection to timed out connect timeout there is a lot of links to which has no chance of helping they also load timeout occurs in rtd but it is after the rtd has resolved already py pypidb rtd py in init token get token readthedocs io | 1 |
203,014 | 15,864,980,746 | IssuesEvent | 2021-04-08 14:17:36 | tskit-dev/msprime | https://api.github.com/repos/tskit-dev/msprime | closed | Examples for coalescence rates and mean times | documentation | #1614 added some content for explaining the numerical methods on the demography debugger, and added a TODO section for the coalescence_rate_trajectory() and mean_coalescence_time() methods. I decided I wasn't the right person to explain these.
@petrelharp, @apragsdale, would one of you be able to take this up? We can probably reuse the examples from whichever paper we used this on, right? | 1.0 | Examples for coalescence rates and mean times - #1614 added some content for explaining the numerical methods on the demography debugger, and added a TODO section for the coalescence_rate_trajectory() and mean_coalescence_time() methods. I decided I wasn't the right person to explain these.
@petrelharp, @apragsdale, would one of you be able to take this up? We can probably reuse the examples from whichever paper we used this on, right? | non_main | examples for coalescence rates and mean times added some content for explaining the numerical methods on the demography debugger and added a todo section for the coalescence rate trajectory and mean coalescence time methods i decided i wasn t the right person to explain these petrelharp apragsdale would one of you be able to take this up we can probably reuse the examples from whichever paper we used this on right | 0 |
45,351 | 5,713,122,306 | IssuesEvent | 2017-04-19 06:44:04 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | kv: (unknown) failed under stress | Robot test-failure | SHA: https://github.com/cockroachdb/cockroach/commits/b6ca10fa19b59a6cb56df3fab904e9e9d446f96b
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=false
TAGS=
GOFLAGS=-race
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=228032&tab=buildLog
```
GOPATH set to /go
git submodule update --init
github.com/cockroachdb/cockroach/pkg/cmd/ncpus
github.com/cockroachdb/cockroach/pkg/cmd/returncheck
github.com/cockroachdb/cockroach/vendor/github.com/client9/misspell/cmd/misspell
github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/crlfmt
github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/stress
github.com/cockroachdb/cockroach/vendor/github.com/golang/lint/golint
github.com/cockroachdb/cockroach/pkg/cmd/metacheck
github.com/cockroachdb/cockroach/vendor/github.com/Masterminds/glide
github.com/cockroachdb/cockroach/vendor/github.com/google/pprof
github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
github.com/cockroachdb/cockroach/vendor/github.com/jteeuwen/go-bindata/go-bindata
github.com/cockroachdb/cockroach/vendor/github.com/kisielk/errcheck
github.com/cockroachdb/cockroach/vendor/github.com/mattn/goveralls
github.com/cockroachdb/cockroach/vendor/github.com/mdempsky/unconvert
github.com/cockroachdb/cockroach/vendor/github.com/mibk/dupl
github.com/cockroachdb/cockroach/vendor/github.com/wadey/gocovmerge
github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/cmd/goimports
github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/cmd/goyacc
github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/cmd/guru
github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/cmd/stringer
touch /go/src/github.com/cockroachdb/cockroach/bin/.bootstrap
go list -tags ' make x86_64_unknown_linux_gnu' -f ' CC=/x-tools/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc CXX=/x-tools/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-g++ go test -v -race -installsuffix release -tags '\'' make x86_64_unknown_linux_gnu'\'' -ldflags '\'' -s -w -extldflags "-static-libgcc -static-libstdc++" -X github.com/cockroachdb/cockroach/pkg/build.typ=release'\'' -i -c {{.ImportPath}} -o '\''{{.Dir}}'\''/stress.test && (cd '\''{{.Dir}}'\'' && if [ -f stress.test ]; then COCKROACH_STRESS=true stress -maxtime 15m -maxfails 1 -stderr ./stress.test -test.run '\''.'\'' -test.timeout 30m -test.v; fi)' github.com/cockroachdb/cockroach/pkg/kv | /bin/bash
runtime/internal/sys
runtime/internal/atomic
runtime
errors
runtime/cgo
runtime/race
internal/race
sync/atomic
unicode
unicode/utf8
encoding
math
container/list
crypto/subtle
crypto/internal/cipherhw
internal/nettrace
vendor/golang_org/x/crypto/curve25519
vendor/golang_org/x/crypto/poly1305
sync
unicode/utf16
github.com/cockroachdb/cockroach/vendor/github.com/biogo/store/llrb
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/internal
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/naming
github.com/cockroachdb/cockroach/vendor/github.com/VividCortex/ewma
container/ring
github.com/cockroachdb/cockroach/vendor/github.com/lib/pq/oid
io
syscall
internal/singleflight
github.com/cockroachdb/cockroach/pkg/util/syncutil
github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/singleflight
github.com/cockroachdb/cockroach/pkg/util/bufalloc
hash
crypto/cipher
runtime/trace
crypto/hmac
hash/crc32
hash/adler32
hash/fnv
bytes
strings
strconv
math/rand
github.com/cockroachdb/cockroach/pkg/util/shuffle
bufio
vendor/golang_org/x/text/transform
text/tabwriter
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/transform
github.com/cockroachdb/cockroach/vendor/github.com/kr/text
path
html
crypto
reflect
crypto/aes
crypto/rc4
encoding/base64
github.com/cockroachdb/cockroach/vendor/github.com/petermattis/goid
crypto/sha512
crypto/md5
crypto/sha1
time
internal/syscall/unix
crypto/sha256
github.com/cockroachdb/cockroach/vendor/golang.org/x/sys/unix
github.com/cockroachdb/cockroach/vendor/golang.org/x/crypto/blowfish
github.com/cockroachdb/cockroach/vendor/golang.org/x/crypto/ssh/terminal
os
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/keepalive
os/signal
fmt
encoding/binary
sort
encoding/pem
path/filepath
regexp/syntax
runtime/debug
github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/sortkeys
github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/utilities
github.com/cockroachdb/cockroach/vendor/github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg
github.com/cockroachdb/cockroach/vendor/github.com/facebookgo/clock
crypto/des
vendor/golang_org/x/crypto/chacha20poly1305/internal/chacha20
github.com/cockroachdb/cockroach/vendor/github.com/golang/snappy
io/ioutil
github.com/cockroachdb/cockroach/vendor/github.com/montanaflynn/stats
container/heap
vendor/golang_org/x/crypto/chacha20poly1305
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/internal/tag
math/big
encoding/gob
encoding/hex
context
github.com/cockroachdb/cockroach/vendor/github.com/pkg/errors
flag
encoding/json
log
encoding/csv
net
github.com/cockroachdb/cockroach/vendor/github.com/opentracing/opentracing-go/log
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/context
compress/flate
regexp
vendor/golang_org/x/net/http2/hpack
vendor/golang_org/x/net/idna
vendor/golang_org/x/text/unicode/norm
vendor/golang_org/x/text/width
mime
compress/gzip
github.com/cockroachdb/cockroach/pkg/util/caller
github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/proto
mime/quotedprintable
net/http/internal
net/url
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/internal/timeseries
text/template/parse
os/user
database/sql/driver
github.com/cockroachdb/cockroach/pkg/util/interval
github.com/cockroachdb/cockroach/vendor/github.com/golang/protobuf/proto
crypto/elliptic
encoding/asn1
crypto/rand
crypto/dsa
crypto/rsa
github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/apd
github.com/cockroachdb/cockroach/vendor/github.com/dustin/go-humanize
crypto/ecdsa
crypto/x509/pkix
text/template
github.com/cockroachdb/cockroach/pkg/util/duration
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/http2/hpack
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/idna
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/codes
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/grpclog
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/metadata
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/tap
encoding/xml
github.com/cockroachdb/cockroach/vendor/golang.org/x/crypto/bcrypt
github.com/cockroachdb/cockroach/pkg/util/encoding
github.com/cockroachdb/cockroach/pkg/util/retry
github.com/cockroachdb/cockroach/vendor/gopkg.in/yaml.v2
github.com/cockroachdb/cockroach/vendor/github.com/cenk/backoff
github.com/cockroachdb/cockroach/vendor/github.com/golang/protobuf/jsonpb
github.com/cockroachdb/cockroach/pkg/build
github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/types
html/template
github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/internal
github.com/cockroachdb/cockroach/vendor/github.com/golang/protobuf/ptypes/timestamp
github.com/cockroachdb/cockroach/vendor/github.com/opentracing/basictracer-go/wire
github.com/cockroachdb/cockroach/vendor/github.com/codahale/hdrhistogram
github.com/cockroachdb/cockroach/vendor/github.com/prometheus/client_model/go
github.com/cockroachdb/cockroach/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil
github.com/cockroachdb/cockroach/vendor/github.com/prometheus/common/model
internal/pprof/profile
testing
github.com/cockroachdb/cockroach/pkg/settings
compress/zlib
github.com/cockroachdb/cockroach/pkg/sql/privilege
github.com/cockroachdb/cockroach/vendor/github.com/knz/strtime
go/token
crypto/x509
github.com/cockroachdb/cockroach/vendor/github.com/spf13/pflag
vendor/golang_org/x/net/lex/httplex
net/textproto
github.com/cockroachdb/cockroach/vendor/github.com/satori/go.uuid
github.com/cockroachdb/cockroach/pkg/util/uuid
mime/multipart
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/lex/httplex
github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/jsonpb
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stats
log/syslog
runtime/pprof/internal/protopprof
runtime/pprof
go/constant
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/language
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/unicode/norm
crypto/tls
github.com/cockroachdb/cockroach/vendor/github.com/rcrowley/go-metrics
github.com/cockroachdb/cockroach/pkg/util/humanizeutil
github.com/cockroachdb/cockroach/pkg/util/log/logflags
github.com/cockroachdb/cockroach/pkg/util/envutil
github.com/cockroachdb/cockroach/pkg/sql/pgwire/pgerror
github.com/cockroachdb/cockroach/vendor/github.com/elastic/gosigar
github.com/cockroachdb/cockroach/pkg/util/timeutil
github.com/cockroachdb/cockroach/pkg/util/randutil
github.com/cockroachdb/cockroach/vendor/github.com/coreos/etcd/raft/raftpb
github.com/cockroachdb/cockroach/pkg/util
github.com/cockroachdb/cockroach/vendor/github.com/google/btree
github.com/cockroachdb/cockroach/vendor/github.com/kr/pretty
github.com/cockroachdb/cockroach/vendor/golang.org/x/time/rate
database/sql
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/internal/colltab
github.com/cockroachdb/cockroach/vendor/github.com/coreos/etcd/raft
github.com/cockroachdb/cockroach/pkg/ui
os/exec
go/scanner
go/ast
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/collate
github.com/cockroachdb/cockroach/pkg/util/sdnotify
github.com/cockroachdb/cockroach/pkg/util/leaktest
github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup
net/http/httptrace
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/credentials
github.com/cockroachdb/cockroach/vendor/github.com/lib/pq
testing/internal/testdeps
net/http
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/peer
go/doc
go/parser
go/build
github.com/cockroachdb/cockroach/pkg/testutils/buildutil
github.com/cockroachdb/cockroach/vendor/github.com/opentracing/opentracing-go
github.com/cockroachdb/cockroach/pkg/util/httputil
github.com/cockroachdb/cockroach/vendor/github.com/lightstep/lightstep-tracer-go/thrift_0_9_2/lib/go/thrift
github.com/cockroachdb/cockroach/vendor/github.com/rlmcpherson/s3gof3r
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/trace
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/http2
github.com/cockroachdb/cockroach/vendor/github.com/prometheus/common/expfmt
github.com/cockroachdb/cockroach/vendor/github.com/rubyist/circuitbreaker
github.com/cockroachdb/cockroach/vendor/github.com/elazarl/go-bindata-assetfs
github.com/cockroachdb/cockroach/vendor/github.com/opentracing/opentracing-go/ext
github.com/cockroachdb/cockroach/vendor/github.com/opentracing/basictracer-go
expvar
net/http/pprof
github.com/cockroachdb/cockroach/vendor/github.com/rcrowley/go-metrics/exp
github.com/cockroachdb/cockroach/pkg/util/log
github.com/cockroachdb/cockroach/pkg/util/hlc
github.com/cockroachdb/cockroach/pkg/util/metric
github.com/cockroachdb/cockroach/pkg/util/cache
github.com/cockroachdb/cockroach/pkg/storage/engine/enginepb
github.com/cockroachdb/cockroach/pkg/sql/mon
github.com/cockroachdb/cockroach/vendor/github.com/lightstep/lightstep-tracer-go/lightstep_thrift
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport
github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux
github.com/cockroachdb/cockroach/vendor/github.com/lightstep/lightstep-tracer-go/thrift_rpc
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc
github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime
github.com/cockroachdb/cockroach/vendor/github.com/lightstep/lightstep-tracer-go/collectorpb
github.com/cockroachdb/cockroach/vendor/github.com/lightstep/lightstep-tracer-go
github.com/cockroachdb/cockroach/pkg/util/protoutil
github.com/cockroachdb/cockroach/pkg/util/tracing
github.com/cockroachdb/cockroach/pkg/roachpb
github.com/cockroachdb/cockroach/pkg/util/stop
github.com/cockroachdb/cockroach/pkg/keys
github.com/cockroachdb/cockroach/pkg/sql/parser
github.com/cockroachdb/cockroach/pkg/storage/storagebase
github.com/cockroachdb/cockroach/pkg/ts/tspb
github.com/cockroachdb/cockroach/pkg/security
github.com/cockroachdb/cockroach/pkg/util/netutil
github.com/cockroachdb/cockroach/pkg/util/grpcutil
github.com/cockroachdb/cockroach/pkg/config
github.com/cockroachdb/cockroach/pkg/internal/client
github.com/cockroachdb/cockroach/pkg/base
github.com/cockroachdb/cockroach/pkg/security/securitytest
github.com/cockroachdb/cockroach/pkg/gossip/resolver
github.com/cockroachdb/cockroach/pkg/rpc
github.com/cockroachdb/cockroach/pkg/testutils
github.com/cockroachdb/cockroach/pkg/storage/engine
github.com/cockroachdb/cockroach/pkg/sql/sqlutil
github.com/cockroachdb/cockroach/pkg/gossip
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/roachpb/internal.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_roachpb_internal.pb.cc:1:
pkg/storage/engine/cockroach/pkg/roachpb/internal.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/roachpb/data.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_roachpb_data.pb.cc:1:
pkg/storage/engine/cockroach/pkg/roachpb/data.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
github.com/cockroachdb/cockroach/pkg/server/status
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/roachpb/metadata.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_roachpb_metadata.pb.cc:1:
pkg/storage/engine/cockroach/pkg/roachpb/metadata.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/server/status
pkg/server/status/runtime_jemalloc.go:25:32: fatal error: jemalloc/jemalloc.h: No such file or directory
// #include <jemalloc/jemalloc.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/storage/engine/enginepb/mvcc.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_storage_engine_enginepb_mvcc.pb.cc:1:
pkg/storage/engine/cockroach/pkg/storage/engine/enginepb/mvcc.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
pkg/storage/engine/encoding.cc:18:27: fatal error: rocksdb/slice.h: No such file or directory
#include "rocksdb/slice.h"
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
pkg/storage/engine/eventlistener.cc:17:38: fatal error: rocksdb/table_properties.h: No such file or directory
#include <rocksdb/table_properties.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/storage/engine/enginepb/rocksdb.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_storage_engine_enginepb_rocksdb.pb.cc:1:
pkg/storage/engine/cockroach/pkg/storage/engine/enginepb/rocksdb.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/util/hlc/timestamp.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_util_hlc_timestamp.pb.cc:1:
pkg/storage/engine/cockroach/pkg/util/hlc/timestamp.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
pkg/storage/engine/db.cc:19:48: fatal error: google/protobuf/stubs/stringprintf.h: No such file or directory
#include <google/protobuf/stubs/stringprintf.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/util/unresolved_addr.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_util_unresolved_addr.pb.cc:1:
pkg/storage/engine/cockroach/pkg/util/unresolved_addr.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
github.com/cockroachdb/cockroach/pkg/gossip/simulation
github.com/cockroachdb/cockroach/pkg/kv
github.com/cockroachdb/cockroach/pkg/sql/sqlbase
github.com/cockroachdb/cockroach/pkg/testutils/sqlutils
github.com/cockroachdb/cockroach/pkg/testutils/serverutils
github.com/cockroachdb/cockroach/pkg/sql/distsqlrun
github.com/cockroachdb/cockroach/pkg/sql/distsqlplan
make: *** [stress] Error 2
Makefile:192: recipe for target 'stress' failed
``` | 1.0 | kv: (unknown) failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/b6ca10fa19b59a6cb56df3fab904e9e9d446f96b
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=false
TAGS=
GOFLAGS=-race
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=228032&tab=buildLog
```
GOPATH set to /go
git submodule update --init
github.com/cockroachdb/cockroach/pkg/cmd/ncpus
github.com/cockroachdb/cockroach/pkg/cmd/returncheck
github.com/cockroachdb/cockroach/vendor/github.com/client9/misspell/cmd/misspell
github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/crlfmt
github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/stress
github.com/cockroachdb/cockroach/vendor/github.com/golang/lint/golint
github.com/cockroachdb/cockroach/pkg/cmd/metacheck
github.com/cockroachdb/cockroach/vendor/github.com/Masterminds/glide
github.com/cockroachdb/cockroach/vendor/github.com/google/pprof
github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
github.com/cockroachdb/cockroach/vendor/github.com/jteeuwen/go-bindata/go-bindata
github.com/cockroachdb/cockroach/vendor/github.com/kisielk/errcheck
github.com/cockroachdb/cockroach/vendor/github.com/mattn/goveralls
github.com/cockroachdb/cockroach/vendor/github.com/mdempsky/unconvert
github.com/cockroachdb/cockroach/vendor/github.com/mibk/dupl
github.com/cockroachdb/cockroach/vendor/github.com/wadey/gocovmerge
github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/cmd/goimports
github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/cmd/goyacc
github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/cmd/guru
github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/cmd/stringer
touch /go/src/github.com/cockroachdb/cockroach/bin/.bootstrap
go list -tags ' make x86_64_unknown_linux_gnu' -f ' CC=/x-tools/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc CXX=/x-tools/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-g++ go test -v -race -installsuffix release -tags '\'' make x86_64_unknown_linux_gnu'\'' -ldflags '\'' -s -w -extldflags "-static-libgcc -static-libstdc++" -X github.com/cockroachdb/cockroach/pkg/build.typ=release'\'' -i -c {{.ImportPath}} -o '\''{{.Dir}}'\''/stress.test && (cd '\''{{.Dir}}'\'' && if [ -f stress.test ]; then COCKROACH_STRESS=true stress -maxtime 15m -maxfails 1 -stderr ./stress.test -test.run '\''.'\'' -test.timeout 30m -test.v; fi)' github.com/cockroachdb/cockroach/pkg/kv | /bin/bash
runtime/internal/sys
runtime/internal/atomic
runtime
errors
runtime/cgo
runtime/race
internal/race
sync/atomic
unicode
unicode/utf8
encoding
math
container/list
crypto/subtle
crypto/internal/cipherhw
internal/nettrace
vendor/golang_org/x/crypto/curve25519
vendor/golang_org/x/crypto/poly1305
sync
unicode/utf16
github.com/cockroachdb/cockroach/vendor/github.com/biogo/store/llrb
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/internal
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/naming
github.com/cockroachdb/cockroach/vendor/github.com/VividCortex/ewma
container/ring
github.com/cockroachdb/cockroach/vendor/github.com/lib/pq/oid
io
syscall
internal/singleflight
github.com/cockroachdb/cockroach/pkg/util/syncutil
github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/singleflight
github.com/cockroachdb/cockroach/pkg/util/bufalloc
hash
crypto/cipher
runtime/trace
crypto/hmac
hash/crc32
hash/adler32
hash/fnv
bytes
strings
strconv
math/rand
github.com/cockroachdb/cockroach/pkg/util/shuffle
bufio
vendor/golang_org/x/text/transform
text/tabwriter
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/transform
github.com/cockroachdb/cockroach/vendor/github.com/kr/text
path
html
crypto
reflect
crypto/aes
crypto/rc4
encoding/base64
github.com/cockroachdb/cockroach/vendor/github.com/petermattis/goid
crypto/sha512
crypto/md5
crypto/sha1
time
internal/syscall/unix
crypto/sha256
github.com/cockroachdb/cockroach/vendor/golang.org/x/sys/unix
github.com/cockroachdb/cockroach/vendor/golang.org/x/crypto/blowfish
github.com/cockroachdb/cockroach/vendor/golang.org/x/crypto/ssh/terminal
os
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/keepalive
os/signal
fmt
encoding/binary
sort
encoding/pem
path/filepath
regexp/syntax
runtime/debug
github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/sortkeys
github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/utilities
github.com/cockroachdb/cockroach/vendor/github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg
github.com/cockroachdb/cockroach/vendor/github.com/facebookgo/clock
crypto/des
vendor/golang_org/x/crypto/chacha20poly1305/internal/chacha20
github.com/cockroachdb/cockroach/vendor/github.com/golang/snappy
io/ioutil
github.com/cockroachdb/cockroach/vendor/github.com/montanaflynn/stats
container/heap
vendor/golang_org/x/crypto/chacha20poly1305
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/internal/tag
math/big
encoding/gob
encoding/hex
context
github.com/cockroachdb/cockroach/vendor/github.com/pkg/errors
flag
encoding/json
log
encoding/csv
net
github.com/cockroachdb/cockroach/vendor/github.com/opentracing/opentracing-go/log
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/context
compress/flate
regexp
vendor/golang_org/x/net/http2/hpack
vendor/golang_org/x/net/idna
vendor/golang_org/x/text/unicode/norm
vendor/golang_org/x/text/width
mime
compress/gzip
github.com/cockroachdb/cockroach/pkg/util/caller
github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/proto
mime/quotedprintable
net/http/internal
net/url
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/internal/timeseries
text/template/parse
os/user
database/sql/driver
github.com/cockroachdb/cockroach/pkg/util/interval
github.com/cockroachdb/cockroach/vendor/github.com/golang/protobuf/proto
crypto/elliptic
encoding/asn1
crypto/rand
crypto/dsa
crypto/rsa
github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/apd
github.com/cockroachdb/cockroach/vendor/github.com/dustin/go-humanize
crypto/ecdsa
crypto/x509/pkix
text/template
github.com/cockroachdb/cockroach/pkg/util/duration
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/http2/hpack
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/idna
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/codes
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/grpclog
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/metadata
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/tap
encoding/xml
github.com/cockroachdb/cockroach/vendor/golang.org/x/crypto/bcrypt
github.com/cockroachdb/cockroach/pkg/util/encoding
github.com/cockroachdb/cockroach/pkg/util/retry
github.com/cockroachdb/cockroach/vendor/gopkg.in/yaml.v2
github.com/cockroachdb/cockroach/vendor/github.com/cenk/backoff
github.com/cockroachdb/cockroach/vendor/github.com/golang/protobuf/jsonpb
github.com/cockroachdb/cockroach/pkg/build
github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/types
html/template
github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/internal
github.com/cockroachdb/cockroach/vendor/github.com/golang/protobuf/ptypes/timestamp
github.com/cockroachdb/cockroach/vendor/github.com/opentracing/basictracer-go/wire
github.com/cockroachdb/cockroach/vendor/github.com/codahale/hdrhistogram
github.com/cockroachdb/cockroach/vendor/github.com/prometheus/client_model/go
github.com/cockroachdb/cockroach/vendor/github.com/matttproud/golang_protobuf_extensions/pbutil
github.com/cockroachdb/cockroach/vendor/github.com/prometheus/common/model
internal/pprof/profile
testing
github.com/cockroachdb/cockroach/pkg/settings
compress/zlib
github.com/cockroachdb/cockroach/pkg/sql/privilege
github.com/cockroachdb/cockroach/vendor/github.com/knz/strtime
go/token
crypto/x509
github.com/cockroachdb/cockroach/vendor/github.com/spf13/pflag
vendor/golang_org/x/net/lex/httplex
net/textproto
github.com/cockroachdb/cockroach/vendor/github.com/satori/go.uuid
github.com/cockroachdb/cockroach/pkg/util/uuid
mime/multipart
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/lex/httplex
github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/jsonpb
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stats
log/syslog
runtime/pprof/internal/protopprof
runtime/pprof
go/constant
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/language
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/unicode/norm
crypto/tls
github.com/cockroachdb/cockroach/vendor/github.com/rcrowley/go-metrics
github.com/cockroachdb/cockroach/pkg/util/humanizeutil
github.com/cockroachdb/cockroach/pkg/util/log/logflags
github.com/cockroachdb/cockroach/pkg/util/envutil
github.com/cockroachdb/cockroach/pkg/sql/pgwire/pgerror
github.com/cockroachdb/cockroach/vendor/github.com/elastic/gosigar
github.com/cockroachdb/cockroach/pkg/util/timeutil
github.com/cockroachdb/cockroach/pkg/util/randutil
github.com/cockroachdb/cockroach/vendor/github.com/coreos/etcd/raft/raftpb
github.com/cockroachdb/cockroach/pkg/util
github.com/cockroachdb/cockroach/vendor/github.com/google/btree
github.com/cockroachdb/cockroach/vendor/github.com/kr/pretty
github.com/cockroachdb/cockroach/vendor/golang.org/x/time/rate
database/sql
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/internal/colltab
github.com/cockroachdb/cockroach/vendor/github.com/coreos/etcd/raft
github.com/cockroachdb/cockroach/pkg/ui
os/exec
go/scanner
go/ast
github.com/cockroachdb/cockroach/vendor/golang.org/x/text/collate
github.com/cockroachdb/cockroach/pkg/util/sdnotify
github.com/cockroachdb/cockroach/pkg/util/leaktest
github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup
net/http/httptrace
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/credentials
github.com/cockroachdb/cockroach/vendor/github.com/lib/pq
testing/internal/testdeps
net/http
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/peer
go/doc
go/parser
go/build
github.com/cockroachdb/cockroach/pkg/testutils/buildutil
github.com/cockroachdb/cockroach/vendor/github.com/opentracing/opentracing-go
github.com/cockroachdb/cockroach/pkg/util/httputil
github.com/cockroachdb/cockroach/vendor/github.com/lightstep/lightstep-tracer-go/thrift_0_9_2/lib/go/thrift
github.com/cockroachdb/cockroach/vendor/github.com/rlmcpherson/s3gof3r
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/trace
github.com/cockroachdb/cockroach/vendor/golang.org/x/net/http2
github.com/cockroachdb/cockroach/vendor/github.com/prometheus/common/expfmt
github.com/cockroachdb/cockroach/vendor/github.com/rubyist/circuitbreaker
github.com/cockroachdb/cockroach/vendor/github.com/elazarl/go-bindata-assetfs
github.com/cockroachdb/cockroach/vendor/github.com/opentracing/opentracing-go/ext
github.com/cockroachdb/cockroach/vendor/github.com/opentracing/basictracer-go
expvar
net/http/pprof
github.com/cockroachdb/cockroach/vendor/github.com/rcrowley/go-metrics/exp
github.com/cockroachdb/cockroach/pkg/util/log
github.com/cockroachdb/cockroach/pkg/util/hlc
github.com/cockroachdb/cockroach/pkg/util/metric
github.com/cockroachdb/cockroach/pkg/util/cache
github.com/cockroachdb/cockroach/pkg/storage/engine/enginepb
github.com/cockroachdb/cockroach/pkg/sql/mon
github.com/cockroachdb/cockroach/vendor/github.com/lightstep/lightstep-tracer-go/lightstep_thrift
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport
github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/cmux
github.com/cockroachdb/cockroach/vendor/github.com/lightstep/lightstep-tracer-go/thrift_rpc
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc
github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime
github.com/cockroachdb/cockroach/vendor/github.com/lightstep/lightstep-tracer-go/collectorpb
github.com/cockroachdb/cockroach/vendor/github.com/lightstep/lightstep-tracer-go
github.com/cockroachdb/cockroach/pkg/util/protoutil
github.com/cockroachdb/cockroach/pkg/util/tracing
github.com/cockroachdb/cockroach/pkg/roachpb
github.com/cockroachdb/cockroach/pkg/util/stop
github.com/cockroachdb/cockroach/pkg/keys
github.com/cockroachdb/cockroach/pkg/sql/parser
github.com/cockroachdb/cockroach/pkg/storage/storagebase
github.com/cockroachdb/cockroach/pkg/ts/tspb
github.com/cockroachdb/cockroach/pkg/security
github.com/cockroachdb/cockroach/pkg/util/netutil
github.com/cockroachdb/cockroach/pkg/util/grpcutil
github.com/cockroachdb/cockroach/pkg/config
github.com/cockroachdb/cockroach/pkg/internal/client
github.com/cockroachdb/cockroach/pkg/base
github.com/cockroachdb/cockroach/pkg/security/securitytest
github.com/cockroachdb/cockroach/pkg/gossip/resolver
github.com/cockroachdb/cockroach/pkg/rpc
github.com/cockroachdb/cockroach/pkg/testutils
github.com/cockroachdb/cockroach/pkg/storage/engine
github.com/cockroachdb/cockroach/pkg/sql/sqlutil
github.com/cockroachdb/cockroach/pkg/gossip
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/roachpb/internal.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_roachpb_internal.pb.cc:1:
pkg/storage/engine/cockroach/pkg/roachpb/internal.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/roachpb/data.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_roachpb_data.pb.cc:1:
pkg/storage/engine/cockroach/pkg/roachpb/data.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
github.com/cockroachdb/cockroach/pkg/server/status
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/roachpb/metadata.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_roachpb_metadata.pb.cc:1:
pkg/storage/engine/cockroach/pkg/roachpb/metadata.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/server/status
pkg/server/status/runtime_jemalloc.go:25:32: fatal error: jemalloc/jemalloc.h: No such file or directory
// #include <jemalloc/jemalloc.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/storage/engine/enginepb/mvcc.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_storage_engine_enginepb_mvcc.pb.cc:1:
pkg/storage/engine/cockroach/pkg/storage/engine/enginepb/mvcc.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
pkg/storage/engine/encoding.cc:18:27: fatal error: rocksdb/slice.h: No such file or directory
#include "rocksdb/slice.h"
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
pkg/storage/engine/eventlistener.cc:17:38: fatal error: rocksdb/table_properties.h: No such file or directory
#include <rocksdb/table_properties.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/storage/engine/enginepb/rocksdb.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_storage_engine_enginepb_rocksdb.pb.cc:1:
pkg/storage/engine/cockroach/pkg/storage/engine/enginepb/rocksdb.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/util/hlc/timestamp.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_util_hlc_timestamp.pb.cc:1:
pkg/storage/engine/cockroach/pkg/util/hlc/timestamp.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
pkg/storage/engine/db.cc:19:48: fatal error: google/protobuf/stubs/stringprintf.h: No such file or directory
#include <google/protobuf/stubs/stringprintf.h>
^
compilation terminated.
# github.com/cockroachdb/cockroach/pkg/storage/engine
In file included from pkg/storage/engine/cockroach/pkg/util/unresolved_addr.pb.cc:5:0,
from pkg/storage/engine/cockroach_pkg_util_unresolved_addr.pb.cc:1:
pkg/storage/engine/cockroach/pkg/util/unresolved_addr.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
#include <google/protobuf/stubs/common.h>
^
compilation terminated.
github.com/cockroachdb/cockroach/pkg/gossip/simulation
github.com/cockroachdb/cockroach/pkg/kv
github.com/cockroachdb/cockroach/pkg/sql/sqlbase
github.com/cockroachdb/cockroach/pkg/testutils/sqlutils
github.com/cockroachdb/cockroach/pkg/testutils/serverutils
github.com/cockroachdb/cockroach/pkg/sql/distsqlrun
github.com/cockroachdb/cockroach/pkg/sql/distsqlplan
make: *** [stress] Error 2
Makefile:192: recipe for target 'stress' failed
``` | non_main | kv unknown failed under stress sha parameters cockroach proposer evaluated kv false tags goflags race stress build found a failed test gopath set to go git submodule update init github com cockroachdb cockroach pkg cmd ncpus github com cockroachdb cockroach pkg cmd returncheck github com cockroachdb cockroach vendor github com misspell cmd misspell github com cockroachdb cockroach vendor github com cockroachdb crlfmt github com cockroachdb cockroach vendor github com cockroachdb stress github com cockroachdb cockroach vendor github com golang lint golint github com cockroachdb cockroach pkg cmd metacheck github com cockroachdb cockroach vendor github com masterminds glide github com cockroachdb cockroach vendor github com google pprof github com cockroachdb cockroach vendor github com grpc ecosystem grpc gateway protoc gen grpc gateway github com cockroachdb cockroach vendor github com jteeuwen go bindata go bindata github com cockroachdb cockroach vendor github com kisielk errcheck github com cockroachdb cockroach vendor github com mattn goveralls github com cockroachdb cockroach vendor github com mdempsky unconvert github com cockroachdb cockroach vendor github com mibk dupl github com cockroachdb cockroach vendor github com wadey gocovmerge github com cockroachdb cockroach vendor golang org x tools cmd goimports github com cockroachdb cockroach vendor golang org x tools cmd goyacc github com cockroachdb cockroach vendor golang org x tools cmd guru github com cockroachdb cockroach vendor golang org x tools cmd stringer touch go src github com cockroachdb cockroach bin bootstrap go list tags make unknown linux gnu f cc x tools unknown linux gnu bin unknown linux gnu gcc cxx x tools unknown linux gnu bin unknown linux gnu g go test v race installsuffix release tags make unknown linux gnu ldflags s w extldflags static libgcc static libstdc x github com cockroachdb cockroach pkg build typ release i c importpath o dir stress test cd dir if then 
cockroach stress true stress maxtime maxfails stderr stress test test run test timeout test v fi github com cockroachdb cockroach pkg kv bin bash runtime internal sys runtime internal atomic runtime errors runtime cgo runtime race internal race sync atomic unicode unicode encoding math container list crypto subtle crypto internal cipherhw internal nettrace vendor golang org x crypto vendor golang org x crypto sync unicode github com cockroachdb cockroach vendor github com biogo store llrb github com cockroachdb cockroach vendor google golang org grpc internal github com cockroachdb cockroach vendor google golang org grpc naming github com cockroachdb cockroach vendor github com vividcortex ewma container ring github com cockroachdb cockroach vendor github com lib pq oid io syscall internal singleflight github com cockroachdb cockroach pkg util syncutil github com cockroachdb cockroach vendor golang org x sync singleflight github com cockroachdb cockroach pkg util bufalloc hash crypto cipher runtime trace crypto hmac hash hash hash fnv bytes strings strconv math rand github com cockroachdb cockroach pkg util shuffle bufio vendor golang org x text transform text tabwriter github com cockroachdb cockroach vendor golang org x text transform github com cockroachdb cockroach vendor github com kr text path html crypto reflect crypto aes crypto encoding github com cockroachdb cockroach vendor github com petermattis goid crypto crypto crypto time internal syscall unix crypto github com cockroachdb cockroach vendor golang org x sys unix github com cockroachdb cockroach vendor golang org x crypto blowfish github com cockroachdb cockroach vendor golang org x crypto ssh terminal os github com cockroachdb cockroach vendor google golang org grpc keepalive os signal fmt encoding binary sort encoding pem path filepath regexp syntax runtime debug github com cockroachdb cockroach vendor github com gogo protobuf sortkeys github com cockroachdb cockroach vendor github com grpc 
ecosystem grpc gateway utilities github com cockroachdb cockroach vendor github com prometheus common internal bitbucket org ww goautoneg github com cockroachdb cockroach vendor github com facebookgo clock crypto des vendor golang org x crypto internal github com cockroachdb cockroach vendor github com golang snappy io ioutil github com cockroachdb cockroach vendor github com montanaflynn stats container heap vendor golang org x crypto github com cockroachdb cockroach vendor golang org x text internal tag math big encoding gob encoding hex context github com cockroachdb cockroach vendor github com pkg errors flag encoding json log encoding csv net github com cockroachdb cockroach vendor github com opentracing opentracing go log github com cockroachdb cockroach vendor golang org x net context compress flate regexp vendor golang org x net hpack vendor golang org x net idna vendor golang org x text unicode norm vendor golang org x text width mime compress gzip github com cockroachdb cockroach pkg util caller github com cockroachdb cockroach vendor github com gogo protobuf proto mime quotedprintable net http internal net url github com cockroachdb cockroach vendor golang org x net internal timeseries text template parse os user database sql driver github com cockroachdb cockroach pkg util interval github com cockroachdb cockroach vendor github com golang protobuf proto crypto elliptic encoding crypto rand crypto dsa crypto rsa github com cockroachdb cockroach vendor github com cockroachdb apd github com cockroachdb cockroach vendor github com dustin go humanize crypto ecdsa crypto pkix text template github com cockroachdb cockroach pkg util duration github com cockroachdb cockroach vendor golang org x net hpack github com cockroachdb cockroach vendor golang org x net idna github com cockroachdb cockroach vendor google golang org grpc codes github com cockroachdb cockroach vendor google golang org grpc grpclog github com cockroachdb cockroach vendor google golang org 
grpc metadata github com cockroachdb cockroach vendor google golang org grpc tap encoding xml github com cockroachdb cockroach vendor golang org x crypto bcrypt github com cockroachdb cockroach pkg util encoding github com cockroachdb cockroach pkg util retry github com cockroachdb cockroach vendor gopkg in yaml github com cockroachdb cockroach vendor github com cenk backoff github com cockroachdb cockroach vendor github com golang protobuf jsonpb github com cockroachdb cockroach pkg build github com cockroachdb cockroach vendor github com gogo protobuf types html template github com cockroachdb cockroach vendor github com grpc ecosystem grpc gateway runtime internal github com cockroachdb cockroach vendor github com golang protobuf ptypes timestamp github com cockroachdb cockroach vendor github com opentracing basictracer go wire github com cockroachdb cockroach vendor github com codahale hdrhistogram github com cockroachdb cockroach vendor github com prometheus client model go github com cockroachdb cockroach vendor github com matttproud golang protobuf extensions pbutil github com cockroachdb cockroach vendor github com prometheus common model internal pprof profile testing github com cockroachdb cockroach pkg settings compress zlib github com cockroachdb cockroach pkg sql privilege github com cockroachdb cockroach vendor github com knz strtime go token crypto github com cockroachdb cockroach vendor github com pflag vendor golang org x net lex httplex net textproto github com cockroachdb cockroach vendor github com satori go uuid github com cockroachdb cockroach pkg util uuid mime multipart github com cockroachdb cockroach vendor golang org x net lex httplex github com cockroachdb cockroach vendor github com gogo protobuf jsonpb github com cockroachdb cockroach vendor google golang org grpc stats log syslog runtime pprof internal protopprof runtime pprof go constant github com cockroachdb cockroach vendor golang org x text language github com cockroachdb 
cockroach vendor golang org x text unicode norm crypto tls github com cockroachdb cockroach vendor github com rcrowley go metrics github com cockroachdb cockroach pkg util humanizeutil github com cockroachdb cockroach pkg util log logflags github com cockroachdb cockroach pkg util envutil github com cockroachdb cockroach pkg sql pgwire pgerror github com cockroachdb cockroach vendor github com elastic gosigar github com cockroachdb cockroach pkg util timeutil github com cockroachdb cockroach pkg util randutil github com cockroachdb cockroach vendor github com coreos etcd raft raftpb github com cockroachdb cockroach pkg util github com cockroachdb cockroach vendor github com google btree github com cockroachdb cockroach vendor github com kr pretty github com cockroachdb cockroach vendor golang org x time rate database sql github com cockroachdb cockroach vendor golang org x text internal colltab github com cockroachdb cockroach vendor github com coreos etcd raft github com cockroachdb cockroach pkg ui os exec go scanner go ast github com cockroachdb cockroach vendor golang org x text collate github com cockroachdb cockroach pkg util sdnotify github com cockroachdb cockroach pkg util leaktest github com cockroachdb cockroach vendor golang org x sync errgroup net http httptrace github com cockroachdb cockroach vendor google golang org grpc credentials github com cockroachdb cockroach vendor github com lib pq testing internal testdeps net http github com cockroachdb cockroach vendor google golang org grpc peer go doc go parser go build github com cockroachdb cockroach pkg testutils buildutil github com cockroachdb cockroach vendor github com opentracing opentracing go github com cockroachdb cockroach pkg util httputil github com cockroachdb cockroach vendor github com lightstep lightstep tracer go thrift lib go thrift github com cockroachdb cockroach vendor github com rlmcpherson github com cockroachdb cockroach vendor golang org x net trace github com cockroachdb 
cockroach vendor golang org x net github com cockroachdb cockroach vendor github com prometheus common expfmt github com cockroachdb cockroach vendor github com rubyist circuitbreaker github com cockroachdb cockroach vendor github com elazarl go bindata assetfs github com cockroachdb cockroach vendor github com opentracing opentracing go ext github com cockroachdb cockroach vendor github com opentracing basictracer go expvar net http pprof github com cockroachdb cockroach vendor github com rcrowley go metrics exp github com cockroachdb cockroach pkg util log github com cockroachdb cockroach pkg util hlc github com cockroachdb cockroach pkg util metric github com cockroachdb cockroach pkg util cache github com cockroachdb cockroach pkg storage engine enginepb github com cockroachdb cockroach pkg sql mon github com cockroachdb cockroach vendor github com lightstep lightstep tracer go lightstep thrift github com cockroachdb cockroach vendor google golang org grpc transport github com cockroachdb cockroach vendor github com cockroachdb cmux github com cockroachdb cockroach vendor github com lightstep lightstep tracer go thrift rpc github com cockroachdb cockroach vendor google golang org grpc github com cockroachdb cockroach vendor github com grpc ecosystem grpc gateway runtime github com cockroachdb cockroach vendor github com lightstep lightstep tracer go collectorpb github com cockroachdb cockroach vendor github com lightstep lightstep tracer go github com cockroachdb cockroach pkg util protoutil github com cockroachdb cockroach pkg util tracing github com cockroachdb cockroach pkg roachpb github com cockroachdb cockroach pkg util stop github com cockroachdb cockroach pkg keys github com cockroachdb cockroach pkg sql parser github com cockroachdb cockroach pkg storage storagebase github com cockroachdb cockroach pkg ts tspb github com cockroachdb cockroach pkg security github com cockroachdb cockroach pkg util netutil github com cockroachdb cockroach pkg util 
grpcutil github com cockroachdb cockroach pkg config github com cockroachdb cockroach pkg internal client github com cockroachdb cockroach pkg base github com cockroachdb cockroach pkg security securitytest github com cockroachdb cockroach pkg gossip resolver github com cockroachdb cockroach pkg rpc github com cockroachdb cockroach pkg testutils github com cockroachdb cockroach pkg storage engine github com cockroachdb cockroach pkg sql sqlutil github com cockroachdb cockroach pkg gossip github com cockroachdb cockroach pkg storage engine in file included from pkg storage engine cockroach pkg roachpb internal pb cc from pkg storage engine cockroach pkg roachpb internal pb cc pkg storage engine cockroach pkg roachpb internal pb h fatal error google protobuf stubs common h no such file or directory include compilation terminated github com cockroachdb cockroach pkg storage engine in file included from pkg storage engine cockroach pkg roachpb data pb cc from pkg storage engine cockroach pkg roachpb data pb cc pkg storage engine cockroach pkg roachpb data pb h fatal error google protobuf stubs common h no such file or directory include compilation terminated github com cockroachdb cockroach pkg server status github com cockroachdb cockroach pkg storage engine in file included from pkg storage engine cockroach pkg roachpb metadata pb cc from pkg storage engine cockroach pkg roachpb metadata pb cc pkg storage engine cockroach pkg roachpb metadata pb h fatal error google protobuf stubs common h no such file or directory include compilation terminated github com cockroachdb cockroach pkg server status pkg server status runtime jemalloc go fatal error jemalloc jemalloc h no such file or directory include compilation terminated github com cockroachdb cockroach pkg storage engine in file included from pkg storage engine cockroach pkg storage engine enginepb mvcc pb cc from pkg storage engine cockroach pkg storage engine enginepb mvcc pb cc pkg storage engine cockroach pkg 
storage engine enginepb mvcc pb h fatal error google protobuf stubs common h no such file or directory include compilation terminated github com cockroachdb cockroach pkg storage engine pkg storage engine encoding cc fatal error rocksdb slice h no such file or directory include rocksdb slice h compilation terminated github com cockroachdb cockroach pkg storage engine pkg storage engine eventlistener cc fatal error rocksdb table properties h no such file or directory include compilation terminated github com cockroachdb cockroach pkg storage engine in file included from pkg storage engine cockroach pkg storage engine enginepb rocksdb pb cc from pkg storage engine cockroach pkg storage engine enginepb rocksdb pb cc pkg storage engine cockroach pkg storage engine enginepb rocksdb pb h fatal error google protobuf stubs common h no such file or directory include compilation terminated github com cockroachdb cockroach pkg storage engine in file included from pkg storage engine cockroach pkg util hlc timestamp pb cc from pkg storage engine cockroach pkg util hlc timestamp pb cc pkg storage engine cockroach pkg util hlc timestamp pb h fatal error google protobuf stubs common h no such file or directory include compilation terminated github com cockroachdb cockroach pkg storage engine pkg storage engine db cc fatal error google protobuf stubs stringprintf h no such file or directory include compilation terminated github com cockroachdb cockroach pkg storage engine in file included from pkg storage engine cockroach pkg util unresolved addr pb cc from pkg storage engine cockroach pkg util unresolved addr pb cc pkg storage engine cockroach pkg util unresolved addr pb h fatal error google protobuf stubs common h no such file or directory include compilation terminated github com cockroachdb cockroach pkg gossip simulation github com cockroachdb cockroach pkg kv github com cockroachdb cockroach pkg sql sqlbase github com cockroachdb cockroach pkg testutils sqlutils github com 
cockroachdb cockroach pkg testutils serverutils github com cockroachdb cockroach pkg sql distsqlrun github com cockroachdb cockroach pkg sql distsqlplan make error makefile recipe for target stress failed | 0 |
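The repeated `fatal error: ... No such file or directory` messages in the row above all point at headers from CockroachDB's vendored C dependencies (protobuf, RocksDB, jemalloc), which are fetched as git submodules; the transcript itself starts with `git submodule update --init`, and a build attempted without that step fails exactly like this. As a quick triage step (a sketch, not from the original report; the log file name is hypothetical), the unique missing headers can be pulled out of a saved build log:

```shell
#!/bin/sh
# Recreate an abridged sample of the build log above (hypothetical file name).
cat > build.log <<'EOF'
pkg/storage/engine/cockroach/pkg/roachpb/internal.pb.h:9:42: fatal error: google/protobuf/stubs/common.h: No such file or directory
pkg/server/status/runtime_jemalloc.go:25:32: fatal error: jemalloc/jemalloc.h: No such file or directory
pkg/storage/engine/encoding.cc:18:27: fatal error: rocksdb/slice.h: No such file or directory
EOF

# Keep the 'fatal error: <path>' part (grep -o stops at the next colon),
# strip the prefix, and de-duplicate.
grep -o 'fatal error: [^:]*' build.log | sed 's/fatal error: //' | sort -u
# → google/protobuf/stubs/common.h
# → jemalloc/jemalloc.h
# → rocksdb/slice.h
```

Each missing header maps to one submodule-provided dependency, which is usually a faster diagnosis than scrolling the full package list.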
116,011 | 9,818,650,125 | IssuesEvent | 2019-06-13 19:49:20 | magento-engcom/msi | https://api.github.com/repos/magento-engcom/msi | closed | MFTF Logged in Customer ordered Grouped product with child products assigned to default Stock from Homepage | MFTF (Functional Test Coverage) | https://app.hiptest.com/projects/69435/test-plan/folders/419537/scenarios/1437519 | 1.0 | MFTF Logged in Customer ordered Grouped product with child products assigned to default Stock from Homepage - https://app.hiptest.com/projects/69435/test-plan/folders/419537/scenarios/1437519 | non_main | mftf logged in customer ordered grouped product with child products assigned to default stock from homepage | 0 |
4,524 | 23,523,218,282 | IssuesEvent | 2022-08-19 08:19:46 | rustsec/advisory-db | https://api.github.com/repos/rustsec/advisory-db | closed | `ansi_term` appears unmaintained | Unmaintained | A [maintenance inquiry](https://github.com/ogham/rust-ansi-term/issues/72) has been open since August 2021 without response. The most recent release & PR merge was in September 2019 and multiple PRs + issues are outstanding. | True | `ansi_term` appears unmaintained - A [maintenance inquiry](https://github.com/ogham/rust-ansi-term/issues/72) has been open since August 2021 without response. The most recent release & PR merge was in September 2019 and multiple PRs + issues are outstanding. | main | ansi term appears unmaintained a has been open since august without response the most recent release pr merge was in september and multiple prs issues are outstanding | 1 |
879 | 4,541,609,012 | IssuesEvent | 2016-09-09 18:22:57 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | OpenStack os_server module defines auto_ip and floating_ip_pools as mutually exclusive | affects_2.0 bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
/cloud/openstack
##### ANSIBLE VERSION
```
ansible 2.0.0.2
config file = /Users/sebastian/helpers/ansible/ansible-projects.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The os_server module doesn't allow auto_ip and floating_ip_pools to be defined together.
This prevents me from programmatically deciding whether to assign a floating IP or not.
##### STEPS TO REPRODUCE
I set up multiple servers, some of which should receive a floating IP and some which shouldn't.
So I define a var floating_ip_pool per server. If it is not set, I want to disable floating IP assignment via auto_ip: no and floating_ip_pools: no.
```
- name: Generate instances for all defined servers
os_server:
name: "{{ item }}"
image: "{{ os_image }}"
floating_ip_pools: "{{ item.floating_ip_pool | default(no) }}"
auto_ip: "{% if item.floating_ip_pool is defined %}yes{% else %}no{% endif %}"
with_items: "{{groups.all}}"
```
##### EXPECTED RESULTS
I want all servers with auto_ip: no to not get a floating IP, no matter what is defined in floating_ip_pools.
##### ACTUAL RESULTS
Ansible won't allow me to define both parameters at once, so there is no way to handle this use case programmatically.
```
TASK [os-server : Generate instances for all defined servers] ********
failed: [localhost] => (item=my-server0) => {"failed": true, "item": "my-server0", "msg": "parameters are mutually exclusive: ['auto_ip', 'floating_ip_pools']"}
```
| True | OpenStack os_server module defines auto_ip and floating_ip_pools as mutually exclusive - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
/cloud/openstack
##### ANSIBLE VERSION
```
ansible 2.0.0.2
config file = /Users/sebastian/helpers/ansible/ansible-projects.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The os_server module doesn't allow auto_ip and floating_ip_pools to be defined together.
This prevents me from programmatically deciding whether to assign a floating IP or not.
##### STEPS TO REPRODUCE
I set up multiple servers, some of which should receive a floating IP and some that shouldn't.
So I define a var floating_ip_pool per server. If it is not set, I want to disable floating-IP assignment via auto_ip: no and floating_ip_pools: no.
```
- name: Generate instances for all defined servers
os_server:
name: "{{ item }}"
image: "{{ os_image }}"
floating_ip_pools: "{{ item.floating_ip_pool | default(no) }}"
auto_ip: "{% if item.floating_ip_pool is defined %}yes{% else %}no{% endif %}"
with_items: "{{groups.all}}"
```
##### EXPECTED RESULTS
I want all servers with auto_ip: no to not get a floating IP, no matter what is defined in floating_ip_pools.
##### ACTUAL RESULTS
Ansible won't allow me to define both parameters at once, so there is no way to handle this use case programmatically.
```
TASK [os-server : Generate instances for all defined servers] ********
failed: [localhost] => (item=my-server0) => {"failed": true, "item": "my-server0", "msg": "parameters are mutually exclusive: ['auto_ip', 'floating_ip_pools']"}
```
| main | openstack os server module defines auto ip and floating ip pools as mutually exclusive issue type bug report component name cloud openstack ansible version ansible config file users sebastian helpers ansible ansible projects cfg configured module search path default w o overrides configuration n a os environment n a summary the os server module doesn t allow auto ip and floating ip pools to be defined together this prevents me to programmatically decide whether to assign an floatingip or not steps to reproduce i setup multiple servers some of which should receive a floatingip some don t so i define a var floating ip pool per server if it is not set i want to disable floatingip assignment via auto ip no and floating ip pools no name generate instances for all defined servers os server name item image os image floating ip pools item floating ip pool default no auto ip if item floating ip pool is defined yes else no endif with items groups all expected results i want all servers with auto ip no not have a floatingip no matter what is defined in floating ip pools actual results ansible won t allow me to define both parameters at once so there is no way to handle this use case programatically task failed item my failed true item my msg parameters are mutually exclusive | 1 |
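Since the module rejects these two parameters together, a workaround is to decide the networking arguments in one place and only ever pass one of them. A minimal plain-Python sketch of that decision follows; the helper name and dict shapes are illustrative, not part of the os_server module itself:

```python
def networking_args(server):
    """Pick mutually exclusive os_server networking arguments (sketch).

    If the host entry defines a floating_ip_pool, request an address from
    that pool; otherwise disable automatic floating-IP assignment.
    """
    if "floating_ip_pool" in server:
        return {"floating_ip_pools": [server["floating_ip_pool"]]}
    return {"auto_ip": False}


print(networking_args({"name": "web0", "floating_ip_pool": "public"}))  # -> {'floating_ip_pools': ['public']}
print(networking_args({"name": "db0"}))  # -> {'auto_ip': False}
```

In a playbook the same split can be expressed as two os_server tasks guarded by `when: item.floating_ip_pool is defined` and `when: item.floating_ip_pool is not defined`, so that neither task ever sets both parameters.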
1,764 | 6,575,021,315 | IssuesEvent | 2017-09-11 14:48:05 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | interface with vlan is returned as INTERFACE.VLAN instead of INTERFACE_VLAN | affects_2.2 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
module `setup`
in a playbook as `gather_facts: yes`
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
ansible running from centos 2.6.32-573.22.1.el6.x86_64
managing CheckPoint Gaia (RedHat based) 2.6.18-92cpx86_64
##### SUMMARY
<!--- Explain the problem briefly -->
An interface with a VLAN is returned as INTERFACE.VLAN instead of INTERFACE_VLAN, so it cannot be used as a variable.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
There is a VLAN interface in Linux
```
# ip a
...
eth2-01.100@eth2-01: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 00:1c:7f:65:91:d4 brd ff:ff:ff:ff:ff:ff
inet 10.35.192.12/28 brd 10.35.192.15 scope global eth2-01.100
...
```
ansible -m setup returns the VLAN interface like this
```
"ansible_eth2_01.100": {
"active": true,
"device": "eth2-01.100",
"ipv4": {
"address": "10.35.192.12",
"broadcast": "10.35.192.15",
"netmask": "255.255.255.240",
"network": "10.35.192.0"
},
"macaddress": "00:1c:7f:65:91:d4",
"mtu": 1500,
"promisc": false,
"type": "ether"
},
```
I want to use this fact in a playbook as a variable
```
{{ ansible_eth2_01.100.ipv4.address }}
```
but I got an error: `Error, in the future this will be a fatal error.: 'dict' object has no element 100.`
<!--- Paste example playbooks or commands between quotes below -->
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Following this issue https://github.com/ansible/ansible/issues/6879 I believe that this should be fixed by changing `ansible_eth2_01.100` to `ansible_eth2_01_100`
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
Error, in the future this will be a fatal error.: 'dict' object has no element 100
```
 | True | interface with vlan is returned as INTERFACE.VLAN instead of INTERFACE_VLAN - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
module `setup`
in a playbook as `gather_facts: yes`
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
ansible running from centos 2.6.32-573.22.1.el6.x86_64
managing CheckPoint Gaia (RedHat based) 2.6.18-92cpx86_64
##### SUMMARY
<!--- Explain the problem briefly -->
An interface with a VLAN is returned as INTERFACE.VLAN instead of INTERFACE_VLAN, so it cannot be used as a variable.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
There is a VLAN interface in Linux
```
# ip a
...
eth2-01.100@eth2-01: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 00:1c:7f:65:91:d4 brd ff:ff:ff:ff:ff:ff
inet 10.35.192.12/28 brd 10.35.192.15 scope global eth2-01.100
...
```
ansible -m setup returns the VLAN interface like this
```
"ansible_eth2_01.100": {
"active": true,
"device": "eth2-01.100",
"ipv4": {
"address": "10.35.192.12",
"broadcast": "10.35.192.15",
"netmask": "255.255.255.240",
"network": "10.35.192.0"
},
"macaddress": "00:1c:7f:65:91:d4",
"mtu": 1500,
"promisc": false,
"type": "ether"
},
```
I want to use this fact in a playbook as a variable
```
{{ ansible_eth2_01.100.ipv4.address }}
```
but I got an error: `Error, in the future this will be a fatal error.: 'dict' object has no element 100.`
<!--- Paste example playbooks or commands between quotes below -->
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Following this issue https://github.com/ansible/ansible/issues/6879 I believe that this should be fixed by changing `ansible_eth2_01.100` to `ansible_eth2_01_100`
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
Error, in the future this will be a fatal error.: 'dict' object has no element 100
```
| main | interface with vlan is returned with as interface vlan instead of interface vlan issue type bug report component name module setup in a playbook as gather facts yes ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ansible running from centos managing checkpoint gaia redhat based summary an interface with a vlan is returned with as interface vlan instead of interface vlan so it cannot be used as variable steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used there is an vlan interface in linux ip a mtu qdisc noqueue link ether brd ff ff ff ff ff ff inet brd scope global ansible m setup returns interface with vlan like this ansible active true device address broadcast netmask network macaddress mtu promisc false type ether i want to use this fact in playbook as variable ansible address but i got an error error in the future this will be a fatal error dict object has no element expected results following this issue i believe that this should be fixed by changing ansible to ansible actual results error in the future this will be a fatal error dict object has no element | 1 |
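The root of the error above is that the fact name itself contains a dot, so Jinja2's dot-style traversal stops at `.100` and looks for an element named `100` on a dict that has no such key. Bracket access on the full key sidesteps this even without the renaming fix. A plain-Python sketch with hypothetical values:

```python
# A dict standing in for gathered Ansible facts (values are hypothetical).
facts = {
    "ansible_eth2_01.100": {
        "ipv4": {"address": "10.35.192.12", "netmask": "255.255.255.240"},
    }
}

# Attribute-style traversal would split on every dot, so the ".100" part of
# the key is misread as a lookup for an element named "100".  Indexing with
# the complete key keeps the dot inside the key where it belongs.
address = facts["ansible_eth2_01.100"]["ipv4"]["address"]
print(address)  # -> 10.35.192.12
```

In a template the equivalent bracket form is `{{ vars['ansible_eth2_01.100'].ipv4.address }}`, or the same key looked up through `hostvars[inventory_hostname]`.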
1,160 | 5,053,154,144 | IssuesEvent | 2016-12-21 06:41:11 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Cask request: FortiClient | awaiting maintainer feedback cask request | ### Cask details
**Name**: FortiClient
**Homepage**: http://forticlient.com
**Download URL**: http://fortinetweb.s3.amazonaws.com/forticlient/downloads/FortiClientOnlineInstaller.dmg
**Description**: FortiClient extends the power of FortiGate's Unified threat management to endpoints on your network. Desktops, laptops, tablets & smartphones, FortiClient enables every device - local or remote, stationary or mobile - to integrate with your FortiGate. With no per-seat license fees, FortiClient takes the headaches out of managing multiple endpoints so your users & guests can work efficiently anywhere, without compromising your security. It's the end-point solution for your FortiGate network. | True | Cask request: FortiClient - ### Cask details
**Name**: FortiClient
**Homepage**: http://forticlient.com
**Download URL**: http://fortinetweb.s3.amazonaws.com/forticlient/downloads/FortiClientOnlineInstaller.dmg
**Description**: FortiClient extends the power of FortiGate's Unified threat management to endpoints on your network. Desktops, laptops, tablets & smartphones, FortiClient enables every device - local or remote, stationary or mobile - to integrate with your FortiGate. With no per-seat license fees, FortiClient takes the headaches out of managing multiple endpoints so your users & guests can work efficiently anywhere, without compromising your security. It's the end-point solution for your FortiGate network. | main | cask request forticlient cask details name forticlient homepage download url description forticlient extends the power of fortigate s unified threat management to endpoints on your network desktops laptops tablets smartphones forticlient enables every device local or remote stationary or mobile to integrate with your fortigate with no per seat license fees forticlient takes the headaches out of managing multiple endpoints so your users guests can work efficiently anywhere without compromising your security it s the end point solution for your fortigate network | 1 |
3,891 | 17,290,836,404 | IssuesEvent | 2021-07-24 18:09:42 | xanmod/linux | https://api.github.com/repos/xanmod/linux | reopened | Tons of bad page map errors | reported to maintainers | Hello,
I don't know if this bug report is actionable at all, as I am probably using too fancy compiler flags and running it on an unsupported distro, but maybe someone knows what's going wrong here. I was using Xanmod successfully on openSUSE Tumbleweed until 5.12.14 and 5.13, which spit out tons of "bad page map errors" during compilation workloads (see dmesg log attached).
As compiler I use LLVM/Clang 12 provided by SUSE and added a Kernel patch by a Google engineer (see: https://github.com/ClangBuiltLinux/linux/issues/1369#issuecomment-832307198) to make LTO work with TRIM_UNUSED_KSYMS.
[dmesg-errors.txt](https://github.com/xanmod/linux/files/6760346/dmesg-errors.txt)
[config.txt](https://github.com/xanmod/linux/files/6760347/config.txt)
[Makefile.txt](https://github.com/xanmod/linux/files/6760348/Makefile.txt)
I already replaced the CPU, as I faced some quirky WHEA-errors on Windows on the same machine the last days which are now gone, so the hardware should be fine. | True | Tons of bad page map errors - Hello,
I don't know if this bug report is actionable at all, as I am probably using too fancy compiler flags and running it on an unsupported distro, but maybe someone knows what's going wrong here. I was using Xanmod successfully on openSUSE Tumbleweed until 5.12.14 and 5.13, which spit out tons of "bad page map errors" during compilation workloads (see dmesg log attached).
As compiler I use LLVM/Clang 12 provided by SUSE and added a Kernel patch by a Google engineer (see: https://github.com/ClangBuiltLinux/linux/issues/1369#issuecomment-832307198) to make LTO work with TRIM_UNUSED_KSYMS.
[dmesg-errors.txt](https://github.com/xanmod/linux/files/6760346/dmesg-errors.txt)
[config.txt](https://github.com/xanmod/linux/files/6760347/config.txt)
[Makefile.txt](https://github.com/xanmod/linux/files/6760348/Makefile.txt)
I already replaced the CPU, as I faced some quirky WHEA-errors on Windows on the same machine the last days which are now gone, so the hardware should be fine. | main | tons of bad page map errors hello i don t know if this bug report is actionable at all as i am probably using too fancy compiler flags and run it on an unsupported distro but maybe someone knows what s going wrong here i was using xanmod succesfully on opensuse tumbleweed until and they spit out tons of bad page map errors during compilation workloads see dmesg log attached as compiler i use llvm clang provided by suse and added a kernel patch by a google engineer see to make lto work with trim unused ksyms i already replaced the cpu as i faced some quirky whea errors on windows on the same machine the last days which are now gone so the hardware should be fine | 1 |
399,198 | 27,230,920,188 | IssuesEvent | 2023-02-21 13:11:14 | KELLERAGfuerDruckmesstechnik/keller_protocol_python | https://api.github.com/repos/KELLERAGfuerDruckmesstechnik/keller_protocol_python | closed | Fix Readme | documentation | "todo: notes on keller_protocol.py + general workflow with an example of 2-3 calls"
@Luwe91 | 1.0 | Fix Readme - "todo: notes on keller_protocol.py + general workflow with an example of 2-3 calls"
@Luwe91 | non_main | fix readme todo notes on keller protocol py general workflow with an example of calls | 0 
2,703 | 9,499,032,555 | IssuesEvent | 2019-04-24 04:37:51 | hydroshare/hydroshare | https://api.github.com/repos/hydroshare/hydroshare | reopened | New owner requires refresh before it shows up on landing page | Maintainability page state | If manage access is used to add a new owner to a resource, then the manage access panel is closed, the new owner does not show up on the landing page until the landing page is refreshed. Consider having close manage access trigger a page refresh so that actions in manage access are shown to have taken effect. | True | New owner requires refresh before it shows up on landing page - If manage access is used to add a new owner to a resource, then the manage access panel is closed, the new owner does not show up on the landing page until the landing page is refreshed. Consider having close manage access trigger a page refresh so that actions in manage access are shown to have taken effect. | main | new owner requires refresh before it shows up on landing page if manage access is used to add a new owner to a resource then the manage access panel is closed the new owner does not show up on the landing page until the landing page is refreshed consider having close manage access trigger a page refresh so that actions in manage access are shown to have taken effect | 1 |
31,175 | 13,453,104,216 | IssuesEvent | 2020-09-08 23:57:51 | cityofaustin/atd-data-tech | https://api.github.com/repos/cityofaustin/atd-data-tech | closed | Update Officer registration and account pages | Product: Vision Zero in Action Service: Apps Workgroup: AMD Workgroup: VZ | Changes to [current form](https://atd.knack.com/vza#add-officer-account/) and "officer" user role:
- [ ] Rename "Add Account" page "Sign Up" and update slug
- [ ] Add conditional "What is your rank?" field
- Choices: "Probationary Officer", "Officer", "Corporal", "Detective"
- If "Probationary Officer" is selected, display "Probationary Officers are not eligible to participate in the Vision Zero in Action program."
- Otherwise, unhide the rest of the form
- [ ] Remove embedded APD Sector Map
Changes to [Account Settings modal](https://atd.knack.com/vza#account-settings/)

_Note: the tasks below are old; probably not applicable since we're using ADFS, right?..._
- [ ] Create strong default password for all officer accounts and record in 1Password
- [ ] Remove password field from form, set default officer password to 👆
- [ ] Remove "Email" field from form
- [ ] Create new "Email Input" field and add to form with label "Email"
- [ ] Rule to set (original) "Email" field value to `[Email Input]@ausps.org`
| 1.0 | Update Officer registration and account pages - Changes to [current form](https://atd.knack.com/vza#add-officer-account/) and "officer" user role:
- [ ] Rename "Add Account" page "Sign Up" and update slug
- [ ] Add conditional "What is your rank?" field
- Choices: "Probationary Officer", "Officer", "Corporal", "Detective"
- If "Probationary Officer" is selected, display "Probationary Officers are not eligible to participate in the Vision Zero in Action program."
- Otherwise, unhide the rest of the form
- [ ] Remove embedded APD Sector Map
Changes to [Account Settings modal](https://atd.knack.com/vza#account-settings/)

_Note: the tasks below are old; probably not applicable since we're using ADFS, right?..._
- [ ] Create strong default password for all officer accounts and record in 1Password
- [ ] Remove password field from form, set default officer password to 👆
- [ ] Remove "Email" field from form
- [ ] Create new "Email Input" field and add to form with label "Email"
- [ ] Rule to set (original) "Email" field value to `[Email Input]@ausps.org`
| non_main | update officer registration and account pages changes to and officer user role rename add account page sign up and update slug add conditional what is your rank field choices probationary officer officer corporal detective if probationary officer is selected display probationary officers are not eligible to participate in the vision zero in action program otherwise unhide the rest of the form remove embedded apd sector map changes to note the tasks below are old probably not applicable since we re using adfs right create strong default password for all officer accounts and record in remove password field from form set default officer password to 👆 remove email field from form create new email input field and add to form with label email rule to set original email field value to ausps org | 0 |
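The last two checklist items amount to deriving the stored Email value from the new "Email Input" field. A one-line sketch of that rule in plain Python; `officer_email` is a hypothetical helper name, and the note above marks these items as probably superseded by ADFS:

```python
def officer_email(email_input: str) -> str:
    """Build the stored Email value from the "Email Input" field (sketch)."""
    return f"{email_input.strip()}@ausps.org"


print(officer_email("jdoe"))  # -> jdoe@ausps.org
```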
5,453 | 27,289,180,701 | IssuesEvent | 2023-02-23 15:28:19 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Change the name of `User` role | type: enhancement affects: ux work: frontend status: blocked restricted: maintainers | The naming of the User role is a bit confusing. We are adding a new user and the name of the role is also User. We should come up with a different terminology for the non-admin user role.

| True | Change the name of `User` role - The naming of the User role is a bit confusing. We are adding a new user and the name of the role is also User. We should come up with a different terminology for the non-admin user role.

| main | change the name of user role the naming of the user role is a bit confusing we are adding a new user and the name of the role is also user we should come up with a different terminology for the non admin user role | 1 |
174,998 | 13,528,272,166 | IssuesEvent | 2020-09-15 16:28:58 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | opened | Feature Test Summary for grpc-1.0 and grpcClient-1.0 | Feature Test Summary team:Sirius | **1) Describe the test strategy & approach for this feature, and describe how the approach verifies the functions delivered by this feature. The description should include the positive and negative testing done, whether all testing is automated, what manual tests exist (if any) and where the tests are stored (source control). Automated testing is expected for all features with manual testing considered an exception to the rule.**
A new FAT bucket has been added to test both new gRPC features: `com.ibm.ws.grpc_fat`. The intention of this bucket is to test `grpc-1.0` and `grpcClient-1.0` as extensively as possible: basic functionality, the new configuration parameters, application deployment behavior, integration with other features, etc. are verified.
Current test classes:
`HelloWorldTest`: basic client and server test
`HelloWorldTlsTest`: basic test with TLS enabled
`HelloWorldCDITests`: basic test with CDI artifacts
`SecureHelloWorldTest`: basic test with servlet security enabled
`ClientConfigTests`: validate the server.xml grpcClient configuration options
`GrpcMetricsTest`: verifies metrics integration
`ServiceConfigTests`: validate the server.xml grpc (service) configuration options
`ServiceSupportTests`: verify various application configurations
`ServiceInterceptorTests`: test liberty ServerInterceptor integration
`ClientInterceptorTests`: test liberty ClientInterceptor integration
`StoreServicesTests`: test a large microservices-style collection of apps
`StoreServicesSecurityTests`: additional security tests
If delivering tests outside of the standard Liberty FAT framework, do the tests push the results into cognitive testing database (if not, consult with the CSI Team who can provide advice and verify if results are being received)?
N/A
**2) Collectively as a team you need to assess your confidence in the testing delivered based on the values below. This should be done as a team and not an individual to ensure more eyes are on it and that pressures to deliver quickly are absorbed by the team as a whole.**
Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values:
0 - No automated testing delivered
1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.
2 - We have delivered a reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues may exist in the golden paths
3 - We have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. There is a risk when the feature is used outside the golden paths however we are confident on the golden path. Note: This may still be a valid end state for a feature... things like Beta features may well suffice at this level.
4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide.
5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature.
**Confidence assessment TBD**
Based on your answer above, for any answer other than a 4 or 5 please provide details of what drove your answer. Please be aware, it may be perfectly reasonable in some scenarios to deliver with any value above. We may accept no automated testing is needed for some features, we may be happy with low levels of testing on samples for instance so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps, what is the risk etc. Please also provide links to the follow on work that is needed to close the gaps (should you deem it needed)
for epic: #8637
| 1.0 | Feature Test Summary for grpc-1.0 and grpcClient-1.0 - **1) Describe the test strategy & approach for this feature, and describe how the approach verifies the functions delivered by this feature. The description should include the positive and negative testing done, whether all testing is automated, what manual tests exist (if any) and where the tests are stored (source control). Automated testing is expected for all features with manual testing considered an exception to the rule.**
A new FAT bucket has been added to test both new gRPC features: `com.ibm.ws.grpc_fat`. The intention of this bucket is to test `grpc-1.0` and `grpcClient-1.0` as extensively as possible: basic functionality, the new configuration parameters, application deployment behavior, integration with other features, etc. are verified.
Current test classes:
`HelloWorldTest`: basic client and server test
`HelloWorldTlsTest`: basic test with TLS enabled
`HelloWorldCDITests`: basic test with CDI artifacts
`SecureHelloWorldTest`: basic test with servlet security enabled
`ClientConfigTests`: validate the server.xml grpcClient configuration options
`GrpcMetricsTest`: verifies metrics integration
`ServiceConfigTests`: validate the server.xml grpc (service) configuration options
`ServiceSupportTests`: verify various application configurations
`ServiceInterceptorTests`: test liberty ServerInterceptor integration
`ClientInterceptorTests`: test liberty ClientInterceptor integration
`StoreServicesTests`: test a large microservices-style collection of apps
`StoreServicesSecurityTests`: additional security tests
If delivering tests outside of the standard Liberty FAT framework, do the tests push the results into cognitive testing database (if not, consult with the CSI Team who can provide advice and verify if results are being received)?
N/A
**2) Collectively as a team you need to assess your confidence in the testing delivered based on the values below. This should be done as a team and not an individual to ensure more eyes are on it and that pressures to deliver quickly are absorbed by the team as a whole.**
Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values:
0 - No automated testing delivered
1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.
2 - We have delivered a reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues may exist in the golden paths
3 - We have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. There is a risk when the feature is used outside the golden paths however we are confident on the golden path. Note: This may still be a valid end state for a feature... things like Beta features may well suffice at this level.
4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide.
5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature.
**Confidence assessment TBD**
Based on your answer above, for any answer other than a 4 or 5 please provide details of what drove your answer. Please be aware, it may be perfectly reasonable in some scenarios to deliver with any value above. We may accept no automated testing is needed for some features, we may be happy with low levels of testing on samples for instance so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps, what is the risk etc. Please also provide links to the follow on work that is needed to close the gaps (should you deem it needed)
for epic: #8637
| non_main | feature test summary for grpc and grpcclient describe the test strategy approach for this feature and describe how the approach verifies the functions delivered by this feature the description should include the positive and negative testing done whether all testing is automated what manual tests exist if any and where the tests are stored source control automated testing is expected for all features with manual testing considered an exception to the rule a new fat bucket has been added to test both new grpc features com ibm ws grpc fat the intention of this bucket is to test grpc and grpcclient as extensively as possible basic functionality the new configuration parameters application deployment behavior integration with other features etc are verified current test classes helloworldtest basic client and server test helloworldtlstest basic test with tls enabled helloworldcditests basic test with cdi artifacts securehelloworldtest basic test with servlet security enabled clientconfigtests validate the server xml grpcclient configuration options grpcmetricstest verifies metrics integration serviceconfigtests validate the server xml grpc service configuration options servicesupporttests verify various application configurations serviceinterceptortests test liberty serverinterceptor integration clientinterceptortests test liberty clientinterceptor integration storeservicestests test a large microservices style collection of apps storeservicessecuritytests additional security tests if delivering tests outside of the standard liberty fat framework do the tests push the results into cognitive testing database if not consult with the csi team who can provide advice and verify if results are being received n a collectively as a team you need to assess your confidence in the testing delivered based on the values below this should be done as a team and not an individual to ensure more eyes are on it and that pressures to deliver quickly are absorbed by the team 
as a whole please indicate your confidence in the testing up to and including fat delivered with this feature by selecting one of these values no automated testing delivered we have minimal automated coverage of the feature including golden paths there is a relatively high risk that defects or issues could be found in this feature we have delivered a reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here error outlying scenarios are not really covered there are likely risks that issues may exist in the golden paths we have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error outlying scenarios there is a risk when the feature is used outside the golden paths however we are confident on the golden path note this may still be a valid end state for a feature things like beta features may well suffice at this level we have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error outlying scenarios while more testing of the error outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide we have delivered all automated testing we believe is needed for this feature the testing covers all golden path cases as well as all the error outlying scenarios that make sense we are not aware of any gaps in the testing at this time no manual testing is required to verify this feature confidence assessment tbd based on your answer above for any answer other than a or please provide details of what drove your answer please be aware it may be perfectly reasonable in some scenarios to deliver with any value above we may accept no automated testing is needed for some features we may be happy with low levels of testing on samples for instance so please don t feel the need to 
drive to a we need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid what are the gaps what is the risk etc please also provide links to the follow on work that is needed to close the gaps should you deem it needed for epic | 0 |
346,878 | 10,421,180,924 | IssuesEvent | 2019-09-16 04:55:13 | msep2019/MSEP_2019_3 | https://api.github.com/repos/msep2019/MSEP_2019_3 | closed | Extract keywords from description of CWE | Medium Priority functionality | Jamal wants to extract the useful keywords from the description field in the CWE database so that it can be used to search for attack patterns in CAPEC. | 1.0 | Extract keywords from description of CWE - Jamal wants to extract the useful keywords from the description field in the CWE database so that it can be used to search for attack patterns in CAPEC. | non_main | extract keywords from description of cwe jamal wants to extract the useful keywords from the description field in the cwe database so that it can be used to search for attack patterns in capec | 0 |
960 | 4,704,674,950 | IssuesEvent | 2016-10-13 12:21:34 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ios_facts: `dir all-filesystems | include Directory`not supported on all devices | affects_2.2 bug_report in progress networking waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ios_facts
##### ANSIBLE VERSION
```
ansible --version
ansible 2.2.0 (devel 9fe4308670) last updated 2016/09/06 19:17:13 (GMT +1100)
lib/ansible/modules/core: (detached HEAD 982c4557d2) last updated 2016/09/06 19:17:23 (GMT +1100)
lib/ansible/modules/extras: (detached HEAD 06bd2a5ce2) last updated 2016/09/06 19:17:32 (GMT +1100)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/my_modules/']
```
##### CONFIGURATION
##### OS / ENVIRONMENT
3750
flash:c3750-advipservicesk9-mz.122-44.SE4.bin"
WS-C3750-24PS-S
##### SUMMARY
ogenstad:
>`dir all-filesystems | include Directory` this is not a valid command on all ios devices.
>If it’s used in the ios_facts module there needs to be some checks to catch those errors.
>I haven’t tested the ios_facts module yet, but if you can just disable that check I’m guessing it would work.
> I.e. not use `gather_subset: all`
Thanks to @ben-cirrus (from networktocode Slack) for this bug report
##### STEPS TO REPRODUCE
<!---
- hosts: "{{ hosts }}"
any_errors_fatal: true
connection: local
gather_facts: no
vars:
cli:
host: "{{ ip_addr }}"
username: "{{ user }}"
password: "{{ password }}"
transport: cli
tasks:
- ios_facts:
provider: "{{ cli }}"
gather_subset: all
[lab]
labswitch ip_addr=10.254.9.11
-->
##### EXPECTED RESULTS
No backtrace, facts returned
##### ACTUAL RESULTS
```
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: get_ios_facts.yml ****************************************************
1 plays in get_ios_facts.yml
PLAY [lab,] ********************************************************************
TASK [ios_facts] ***************************************************************
task path: /root/napalm-testing/get_ios_facts.yml:14
Using module file /root/ansible/lib/ansible/modules/core/network/ios/ios_facts.py
<labswitch> ESTABLISH LOCAL CONNECTION FOR USER: root
<labswitch> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `" && echo ansible-tmp-1473154099.74-15933157338277="` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `" ) && sleep 0'
<labswitch> PUT /tmp/tmpxzHJfd TO /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py
<labswitch> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py && sleep 0'
<labswitch> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_6pqI6u/ansible_module_ios_facts.py", line 455, in <module>
main()
File "/tmp/ansible_6pqI6u/ansible_module_ios_facts.py", line 437, in main
runner.run()
File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py", line 163, in run
File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py", line 88, in run_commands
File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py", line 66, in run_commands
File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py", line 252, in execute
ansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory
^
% Invalid input detected at '^' marker.
NSW-CHQ-SW-LAB#
fatal: [labswitch]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ios_facts"
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\", line 455, in <module>\n main()\n File \"/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\", line 437, in main\n runner.run()\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 163, in run\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 88, in run_commands\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py\", line 66, in run_commands\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py\", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nNSW-CHQ-SW-LAB#\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
to retry, use: --limit @get_ios_facts.retry
PLAY RECAP *********************************************************************
labswitch : ok=0 changed=0 unreachable=0 failed=1
```
| True | ios_facts: `dir all-filesystems | include Directory`not supported on all devices
| main | ios facts dir all filesystems include directory not supported on all devices issue type bug report component name ios facts ansible version ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path configuration os environment flash mz bin ws s summary ogenstad dir all filesystems include directory this is not a valid command on all ios devices if it’s used in the ios facts module there needs to be some checks to catch those errors i haven’t tested the ios facts module yet but if you can just disable that check i’m guessing it would work i e not use gather subset all thanks to ben cirrus from networktocode slack for this bug report steps to reproduce hosts hosts any errors fatal true connection local gather facts no vars cli host ip addr username user password password transport cli tasks ios facts provider cli gather subset all labswitch ip addr expected results no backtrace facts returned actual results using etc ansible ansible cfg as config file playbook get ios facts yml plays in get ios facts yml play task task path root napalm testing get ios facts yml using module file root ansible lib ansible modules core network ios ios facts py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpxzhjfd to root ansible tmp ansible tmp ios facts py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp ios facts py sleep exec bin sh c usr bin python root ansible tmp ansible tmp ios facts py rm rf root ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module ios facts py line in main file tmp ansible ansible module ios facts py line in main 
runner run file tmp ansible ansible modlib zip ansible module utils netcli py line in run file tmp ansible ansible modlib zip ansible module utils netcli py line in run commands file tmp ansible ansible modlib zip ansible module utils ios py line in run commands file tmp ansible ansible modlib zip ansible module utils shell py line in execute ansible module utils network networkerror matched error in response dir all filesystems include directory invalid input detected at marker nsw chq sw lab fatal failed changed false failed true invocation module name ios facts module stderr traceback most recent call last n file tmp ansible ansible module ios facts py line in n main n file tmp ansible ansible module ios facts py line in main n runner run n file tmp ansible ansible modlib zip ansible module utils netcli py line in run n file tmp ansible ansible modlib zip ansible module utils netcli py line in run commands n file tmp ansible ansible modlib zip ansible module utils ios py line in run commands n file tmp ansible ansible modlib zip ansible module utils shell py line in execute nansible module utils network networkerror matched error in response dir all filesystems include directory r n r n invalid input detected at marker r n r nnsw chq sw lab n module stdout msg module failure to retry use limit get ios facts retry play recap labswitch ok changed unreachable failed | 1 |
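The ios_facts row above reports that `gather_subset: all` fails outright when one gathered command (`dir all-filesystems | include Directory`) is invalid on a device, and the reporter suggests adding checks to catch those errors. A minimal, hypothetical sketch of that suggestion (not the actual ios_facts implementation — `fake_ios_device` and the use of `ValueError` in place of ansible's `NetworkError` are stand-ins for illustration):

```python
def run_commands_tolerantly(run, commands):
    """Run each command; record None for commands the device rejects
    instead of aborting the whole facts run."""
    results = {}
    for cmd in commands:
        try:
            results[cmd] = run(cmd)
        except ValueError:  # stand-in for ansible's NetworkError
            results[cmd] = None  # mark unsupported, keep gathering
    return results


def fake_ios_device(cmd):
    """Simulated device that rejects 'dir all-filesystems', like the 3750."""
    if cmd.startswith("dir all-filesystems"):
        raise ValueError("% Invalid input detected at '^' marker.")
    return "ok"


facts = run_commands_tolerantly(
    fake_ios_device,
    ["show version", "dir all-filesystems | include Directory"],
)
```

With this approach the unsupported filesystem listing yields `None` while the remaining facts are still collected, instead of the MODULE FAILURE traceback shown in the row above.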
362,923 | 25,398,867,445 | IssuesEvent | 2022-11-22 10:35:10 | NetAppDocs/ontap-metrocluster | https://api.github.com/repos/NetAppDocs/ontap-metrocluster | closed | Update data LIF and cluster management LIF migration syntax information to include home-node | documentation | Page: [Removing a Disaster Recovery group](https://docs.netapp.com/us-en/ontap-metrocluster/upgrade/concept_removing_a_disaster_recovery_group.html)
Update the data LIF and cluster management LIF migration command syntax in steps e and f below to include node information.
e. Migrate all data LIFs to home nodes in another DR group.
network interface show -home-node old_node
network interface modify -vserver svm-name -lif data-lif -home-port port-id
f. Migrate the cluster management LIF to a home node in another DR group.
network interface show -role cluster-mgmt
network interface modify -vserver svm-name -lif data-lif -home-port port-id
Node management and inter-cluster LIFs are not migrated.
For the data LIF and cluster management LIF migration syntax, use the following syntax instead since we are moving them to a different home node.
network interface modify -vserver svm-name -lif data-lif -home-node new_node -home-port port-id
network interface modify -vserver svm-name -lif cluster_mgmt -home-node new_node -home-port port-id
| 1.0 | Update data LIF and cluster management LIF migration syntax information to include home-node
| non_main | update data lif and cluster management lif migration syntax information to include home node page update the data lif and cluster management lif migration command syntax in steps e and f below to include node information e migrate all data lifs to home nodes in another dr group network interface show home node old node network interface modify vserver svm name lif data lif home port port id f migrate the cluster management lif to a home node in another dr group network interface show role cluster mgmt network interface modify vserver svm name lif data lif home port port id node management and inter cluster lifs are not migrated for the data lif and cluster management lif migration syntax use the following syntax instead since we are moving them to a different home node network interface modify vserver svm name lif data lif home node new node home port port id network interface modify vserver svm name lif cluster mgmt home node new node home port port id | 0 |
1,869 | 6,577,493,199 | IssuesEvent | 2017-09-12 01:17:58 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | subversion update documentation. | affects_2.3 docs_report waiting_on_maintainer | ##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
subversion
##### SUMMARY
- `For password / secret arguments no_log=True should be set`: This isn't set for the `password` documentation section.
- `Requirements should be documented, using the requirements=[] field`: The `svn` requirement is documented under `notes`, but it looks like that should migrate to `requirements`.
- `Does module use check_mode? Could it be modified to use it? Document it`: It does, its not documented.
| True | subversion update documentation.
| main | subversion update documentation issue type documentation report component name subversion summary for password secret arguments no log true should be set this isn t set for the password documentation section requirements should be documented using the requirements field the svn requirement is documented under notes but it looks like that should migrate to requirements does module use check mode could it be modified to use it document it it does its not documented | 1 |
2,344 | 8,377,696,948 | IssuesEvent | 2018-10-06 04:36:56 | tgstation/tgstation-server | https://api.github.com/repos/tgstation/tgstation-server | closed | RepositoryManager.Dispose() is called twice on instance shutdown | Area: Repository Component Issue Maintainability Issue Review | Disconcerting to say the least | True | RepositoryManager.Dispose() is called twice on instance shutdown - Disconcerting to say the least | main | repositorymanager dispose is called twice on instance shutdown disconcerting to say the least | 1 |
1,020 | 4,805,063,421 | IssuesEvent | 2016-11-02 15:10:25 | saarcashflow/spider_reddituser | https://api.github.com/repos/saarcashflow/spider_reddituser | closed | get all comment data. not just some.. spider should get data not calculate shit and stuff | MAINTAINABILITY USEFULNESS | #4
| True | get all comment data. not just some.. spider should get data not calculate shit and stuff - #4
| main | get all comment data not just some spider should get data not calculate shit and stuff | 1 |
20 | 2,515,475,270 | IssuesEvent | 2015-01-15 18:54:06 | simplesamlphp/simplesamlphp | https://api.github.com/repos/simplesamlphp/simplesamlphp | closed | Remove the backwards-compatible auth source | enhancement maintainability started | `lib/SimpleSAML/Auth/BWC.php` must go away, it's been deprecated for a while now. | True | Remove the backwards-compatible auth source - `lib/SimpleSAML/Auth/BWC.php` must go away, it's been deprecated for a while now. | main | remove the backwards compatible auth source lib simplesaml auth bwc php must go away it s been deprecated for a while now | 1 |
4,108 | 19,513,324,035 | IssuesEvent | 2021-12-29 04:56:41 | aws/aws-sam-cli-app-templates | https://api.github.com/repos/aws/aws-sam-cli-app-templates | closed | Just like NodeJs, create .NET Core 3.1 C# **Quick Start: Web Backend** | maintainer/need-response | It would be nice to have the same NodeJs Quick Start Templates for .NET Core 3.1 | True | Just like NodeJs, create .NET Core 3.1 C# **Quick Start: Web Backend** - It would be nice to have the same NodeJs Quick Start Templates for .NET Core 3.1 | main | just like nodejs create net core c quick start web backend it would be nice to have the same nodejs quick start templates for net core | 1 |
289,977 | 21,797,658,081 | IssuesEvent | 2022-05-15 21:27:29 | Ergo-Lend/exle-dot | https://api.github.com/repos/Ergo-Lend/exle-dot | closed | Main page Documentation | documentation | Improve main page documentation. Include description, setup, running, debug, design?
Use docusaur to have a backend docs?
https://docusaurus.io/docs/installation
| 1.0 | Main page Documentation
| non_main | main page documentation improve main page documentation include description setup running debug design use docusaur to have a backend docs | 0 |
238,910 | 7,784,437,118 | IssuesEvent | 2018-06-06 13:17:20 | ansible/galaxy | https://api.github.com/repos/ansible/galaxy | closed | My Imports | priority/high | - [x] Allow any authenticated user to view any import on My Imports page. There's no reason to block access. Default filter to the current user.
- [x] Add filter params to the query string, so that users can share link to an import when filing an issue
- [x] Add pagination | 1.0 | My Imports | non_main | my imports allow any authenticated user to view any import on my imports page there s no reason to block access default filter to the current user add filter params to the query string so that users can share link to an import when filing an issue add pagination | 0
133,366 | 18,297,207,387 | IssuesEvent | 2021-10-05 21:43:22 | ghc-dev/Steven-Martin | https://api.github.com/repos/ghc-dev/Steven-Martin | opened | CVE-2019-16869 (High) detected in netty-codec-http-4.1.39.Final.jar | security vulnerability | ## CVE-2019-16869 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.39.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: Steven-Martin/build.gradle</p>
<p>Path to vulnerable library: /caches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.39.Final/732d06961162e27fa3ae5989541c4460853745d3/netty-codec-http-4.1.39.Final.jar</p>
<p>
Dependency Hierarchy:
- :x: **netty-codec-http-4.1.39.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Steven-Martin/commit/44043b2ba67622d5df46974213613fbc10f61032">44043b2ba67622d5df46974213613fbc10f61032</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Netty before 4.1.42.Final mishandles whitespace before the colon in HTTP headers (such as a "Transfer-Encoding : chunked" line), which leads to HTTP request smuggling.
<p>Publish Date: 2019-09-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16869>CVE-2019-16869</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16869">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16869</a></p>
<p>Release Date: 2019-09-26</p>
<p>Fix Resolution: io.netty:netty-all:4.1.42.Final,io.netty:netty-codec-http:4.1.42.Final</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.39.Final","packageFilePaths":["/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"io.netty:netty-codec-http:4.1.39.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all:4.1.42.Final,io.netty:netty-codec-http:4.1.42.Final"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-16869","vulnerabilityDetails":"Netty before 4.1.42.Final mishandles whitespace before the colon in HTTP headers (such as a \"Transfer-Encoding : chunked\" line), which leads to HTTP request smuggling.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16869","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-16869 (High) detected in netty-codec-http-4.1.39.Final.jar - ## CVE-2019-16869 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.39.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: Steven-Martin/build.gradle</p>
<p>Path to vulnerable library: /caches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.39.Final/732d06961162e27fa3ae5989541c4460853745d3/netty-codec-http-4.1.39.Final.jar</p>
<p>
Dependency Hierarchy:
- :x: **netty-codec-http-4.1.39.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Steven-Martin/commit/44043b2ba67622d5df46974213613fbc10f61032">44043b2ba67622d5df46974213613fbc10f61032</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Netty before 4.1.42.Final mishandles whitespace before the colon in HTTP headers (such as a "Transfer-Encoding : chunked" line), which leads to HTTP request smuggling.
<p>Publish Date: 2019-09-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16869>CVE-2019-16869</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16869">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16869</a></p>
<p>Release Date: 2019-09-26</p>
<p>Fix Resolution: io.netty:netty-all:4.1.42.Final,io.netty:netty-codec-http:4.1.42.Final</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.39.Final","packageFilePaths":["/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"io.netty:netty-codec-http:4.1.39.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all:4.1.42.Final,io.netty:netty-codec-http:4.1.42.Final"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-16869","vulnerabilityDetails":"Netty before 4.1.42.Final mishandles whitespace before the colon in HTTP headers (such as a \"Transfer-Encoding : chunked\" line), which leads to HTTP request smuggling.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16869","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in netty codec http final jar cve high severity vulnerability vulnerable library netty codec http final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file steven martin build gradle path to vulnerable library caches modules files io netty netty codec http final netty codec http final jar dependency hierarchy x netty codec http final jar vulnerable library found in head commit a href found in base branch master vulnerability details netty before final mishandles whitespace before the colon in http headers such as a transfer encoding chunked line which leads to http request smuggling publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics 
confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty all final io netty netty codec http final rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree io netty netty codec http final isminimumfixversionavailable true minimumfixversion io netty netty all final io netty netty codec http final basebranches vulnerabilityidentifier cve vulnerabilitydetails netty before final mishandles whitespace before the colon in http headers such as a transfer encoding chunked line which leads to http request smuggling vulnerabilityurl | 0 |
3,684 | 15,037,897,931 | IssuesEvent | 2021-02-02 16:50:45 | NixOS/nixpkgs | https://api.github.com/repos/NixOS/nixpkgs | closed | Vulnerability roundup 92: avian-1.2.0: 2 advisories [7.8] | 1.severity: security 9.needs: maintainer feedback | [search](https://search.nix.gsc.io/?q=avian&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=avian+in%3Apath&type=Code)
* [ ] [CVE-2020-17360](https://nvd.nist.gov/vuln/detail/CVE-2020-17360) CVSSv3=7.8 (nixos-unstable)
* [ ] [CVE-2020-17361](https://nvd.nist.gov/vuln/detail/CVE-2020-17361) CVSSv3=5.5 (nixos-unstable)
Scanned versions: nixos-unstable: c59ea8b8a0e.
Cc @earldouglas
| True | Vulnerability roundup 92: avian-1.2.0: 2 advisories [7.8] - [search](https://search.nix.gsc.io/?q=avian&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=avian+in%3Apath&type=Code)
* [ ] [CVE-2020-17360](https://nvd.nist.gov/vuln/detail/CVE-2020-17360) CVSSv3=7.8 (nixos-unstable)
* [ ] [CVE-2020-17361](https://nvd.nist.gov/vuln/detail/CVE-2020-17361) CVSSv3=5.5 (nixos-unstable)
Scanned versions: nixos-unstable: c59ea8b8a0e.
Cc @earldouglas
| main | vulnerability roundup avian advisories nixos unstable nixos unstable scanned versions nixos unstable cc earldouglas | 1 |
146,444 | 19,404,080,473 | IssuesEvent | 2021-12-19 17:48:13 | vincenzodistasio97/home-cloud | https://api.github.com/repos/vincenzodistasio97/home-cloud | opened | CVE-2021-29060 (Medium) detected in color-string-1.5.3.tgz | security vulnerability | ## CVE-2021-29060 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>color-string-1.5.3.tgz</b></p></summary>
<p>Parser and generator for CSS color strings</p>
<p>Library home page: <a href="https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz">https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz</a></p>
<p>Path to dependency file: home-cloud/client/package.json</p>
<p>Path to vulnerable library: home-cloud/client/node_modules/color-string/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.1.tgz (Root Library)
- optimize-css-assets-webpack-plugin-5.0.3.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-colormin-4.0.3.tgz
- color-3.1.2.tgz
- :x: **color-string-1.5.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/home-cloud/commit/0eb270221557ac4df481974af8dfb9ea1288bc9b">0eb270221557ac4df481974af8dfb9ea1288bc9b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Regular Expression Denial of Service (ReDOS) vulnerability was discovered in Color-String version 1.5.5 and below which occurs when the application is provided and checks a crafted invalid HWB string.
<p>Publish Date: 2021-06-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29060>CVE-2021-29060</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-257v-vj4p-3w2h">https://github.com/advisories/GHSA-257v-vj4p-3w2h</a></p>
<p>Release Date: 2021-06-21</p>
<p>Fix Resolution: color-string - 1.5.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-29060 (Medium) detected in color-string-1.5.3.tgz - ## CVE-2021-29060 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>color-string-1.5.3.tgz</b></p></summary>
<p>Parser and generator for CSS color strings</p>
<p>Library home page: <a href="https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz">https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz</a></p>
<p>Path to dependency file: home-cloud/client/package.json</p>
<p>Path to vulnerable library: home-cloud/client/node_modules/color-string/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.1.tgz (Root Library)
- optimize-css-assets-webpack-plugin-5.0.3.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-colormin-4.0.3.tgz
- color-3.1.2.tgz
- :x: **color-string-1.5.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/home-cloud/commit/0eb270221557ac4df481974af8dfb9ea1288bc9b">0eb270221557ac4df481974af8dfb9ea1288bc9b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Regular Expression Denial of Service (ReDOS) vulnerability was discovered in Color-String version 1.5.5 and below which occurs when the application is provided and checks a crafted invalid HWB string.
<p>Publish Date: 2021-06-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29060>CVE-2021-29060</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-257v-vj4p-3w2h">https://github.com/advisories/GHSA-257v-vj4p-3w2h</a></p>
<p>Release Date: 2021-06-21</p>
<p>Fix Resolution: color-string - 1.5.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in color string tgz cve medium severity vulnerability vulnerable library color string tgz parser and generator for css color strings library home page a href path to dependency file home cloud client package json path to vulnerable library home cloud client node modules color string package json dependency hierarchy react scripts tgz root library optimize css assets webpack plugin tgz cssnano tgz cssnano preset default tgz postcss colormin tgz color tgz x color string tgz vulnerable library found in head commit a href found in base branch master vulnerability details a regular expression denial of service redos vulnerability was discovered in color string version and below which occurs when the application is provided and checks a crafted invalid hwb string publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution color string step up your open source security game with whitesource | 0 |
5,618 | 28,113,910,948 | IssuesEvent | 2023-03-31 09:17:27 | MozillaFoundation/foundation.mozilla.org | https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org | closed | Wagtail 4 upgrade | engineering maintain | ### Description
Following on from #9674 - we want to upgrade Wagtail to version 4.0 (or possibly the latest version)
When it comes to upgrading, it's best to follow the increments in the versions. So to upgrade we will want to run the upgrades in the following order (ref)[https://docs.wagtail.org/en/stable/releases/index.html]:
- [x] upgrade to 3.0 1 then run test suite
- [x] upgrade to 3.0 2 then run test suite
- [x] upgrade to 3.0 3 then run test suite
- [x] upgrade to 4.0 then run test suite
- ... etc
It's a good idea to create single commits for each of the upgrade steps.
### What needs addressing for the upgrade to version 4
What needs to be addressed is explained in the [docs](https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations) but for clarity here is a brief outline:
- [x] Check for and `Page.serve()` overrides and fix accordingly
- [x] Live preview panel X-Frame-Options header [ref](https://docs.wagtail.org/en/stable/releases/4.0.html#opening-links-within-the-live-preview-panel)
- [x] The PageRevision model has been replaced with a generic Revision model. Check for use of PageRevision
- [x] Multiple method/class naming updates and replacements - E.G BaseSetting replaced by BaseSiteSetting
### Additional context
- Wagtail 4 release notes upgrade considerations https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations
### Developer notes
- [x] Create an upgrade branch from main
- [x] Check your project’s console output for any deprecation warnings, and fix them where necessary `python -Wa manage.py check`
- [x] Check the new version’s release notes https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations
- [x] Check the compatible Django / Python versions [table](https://docs.wagtail.io/en/stable/releases/upgrading.html#compatible-django-python-versions), for any dependencies that need upgrading first;
- [x] Upgrade supporting requirements (Python, Django) if necessary
- [x] Upgrade Wagtail
- [x] Make new migration (might result in none).
- [x] Migrate database changes (locally)
- [x] Implement needed changes from upgrade considerations (see above)
- [x] Perform testing
- [x] Run test suites
- [x] Smoke test site / testing journeys (manually on the site)
- [x] Smoke test admin (Click around in the admin to see if anything is broken)
- [x] Check for new deprecations `python -Wa manage.py check` and fix if necessary
## Acceptance criteria
- [x] Wagtail is upgraded to (at least) version 4.0
- [x] Infrastructure difference between prod/staging is documented
- [x] Generic upgrade plan is in place for future upgrades
| True | Wagtail 4 upgrade - ### Description
Following on from #9674 - we want to upgrade Wagtail to version 4.0 (or possibly the latest version)
When it comes to upgrading, it's best to follow the increments in the versions. So to upgrade we will want to run the upgrades in the following order (ref)[https://docs.wagtail.org/en/stable/releases/index.html]:
- [x] upgrade to 3.0 1 then run test suite
- [x] upgrade to 3.0 2 then run test suite
- [x] upgrade to 3.0 3 then run test suite
- [x] upgrade to 4.0 then run test suite
- ... etc
It's a good idea to create single commits for each of the upgrade steps.
### What needs addressing for the upgrade to version 4
What needs to be addressed is explained in the [docs](https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations) but for clarity here is a brief outline:
- [x] Check for and `Page.serve()` overrides and fix accordingly
- [x] Live preview panel X-Frame-Options header [ref](https://docs.wagtail.org/en/stable/releases/4.0.html#opening-links-within-the-live-preview-panel)
- [x] The PageRevision model has been replaced with a generic Revision model. Check for use of PageRevision
- [x] Multiple method/class naming updates and replacements - E.G BaseSetting replaced by BaseSiteSetting
### Additional context
- Wagtail 4 release notes upgrade considerations https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations
### Developer notes
- [x] Create an upgrade branch from main
- [x] Check your project’s console output for any deprecation warnings, and fix them where necessary `python -Wa manage.py check`
- [x] Check the new version’s release notes https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations
- [x] Check the compatible Django / Python versions [table](https://docs.wagtail.io/en/stable/releases/upgrading.html#compatible-django-python-versions), for any dependencies that need upgrading first;
- [x] Upgrade supporting requirements (Python, Django) if necessary
- [x] Upgrade Wagtail
- [x] Make new migration (might result in none).
- [x] Migrate database changes (locally)
- [x] Implement needed changes from upgrade considerations (see above)
- [x] Perform testing
- [x] Run test suites
- [x] Smoke test site / testing journeys (manually on the site)
- [x] Smoke test admin (Click around in the admin to see if anything is broken)
- [x] Check for new deprecations `python -Wa manage.py check` and fix if necessary
## Acceptance criteria
- [x] Wagtail is upgraded to (at least) version 4.0
- [x] Infrastructure difference between prod/staging is documented
- [x] Generic upgrade plan is in place for future upgrades
| main | wagtail upgrade description following on from we want to upgrade wagtail to version or possibly the latest version when it comes to upgrading it s best to follow the increments in the versions so to upgrade we will want to run the upgrades in the following order ref upgrade to then run test suite upgrade to then run test suite upgrade to then run test suite upgrade to then run test suite etc it s a good idea to create single commits for each of the upgrade steps what needs addressing for the upgrade to version what needs to be addressed is explained in the but for clarity here is a brief outline check for and page serve overrides and fix accordingly live preview panel x frame options header the pagerevision model has been replaced with a generic revision model check for use of pagerevision multiple method class naming updates and replacements e g basesetting replaced by basesitesetting additional context wagtail release notes upgrade considerations developer notes create an upgrade branch from main check your project’s console output for any deprecation warnings and fix them where necessary python wa manage py check check the new version’s release notes check the compatible django python versions for any dependencies that need upgrading first upgrade supporting requirements python django if necessary upgrade wagtail make new migration might result in none migrate database changes locally implement needed changes from upgrade considerations see above perform testing run test suites smoke test site testing journeys manually on the site smoke test admin click around in the admin to see if anything is broken check for new deprecations python wa manage py check and fix if necessary acceptance criteria wagtail is upgraded to at least version infrastructure difference between prod staging is documented generic upgrade plan is in place for future upgrades | 1 |
4,665 | 24,119,900,672 | IssuesEvent | 2022-09-20 17:43:53 | usefulmove/comp | https://api.github.com/repos/usefulmove/comp | closed | Unit test setup is cumbersome | maintainability | The current unit test structure is cumbersome and more difficult to maintain than is ideal. It makes sense to revisit this structure at some point. | True | Unit test setup is cumbersome - The current unit test structure is cumbersome and more difficult to maintain than is ideal. It makes sense to revisit this structure at some point. | main | unit test setup is cumbersome the current unit test structure is cumbersome and more difficult to maintain than is ideal it makes sense to revisit this structure at some point | 1 |
3,811 | 16,530,126,641 | IssuesEvent | 2021-05-27 04:06:53 | tensorflow/models | https://api.github.com/repos/tensorflow/models | closed | Unable to download BikeVideoDataset.tar for vid2depth | models:research stat:awaiting maintainer type:support | Hi,
I downloaded this dataset earlier but due to some reason, I need to re download the dataset but unable to do so. | True | Unable to download BikeVideoDataset.tar for vid2depth - Hi,
I downloaded this dataset earlier but due to some reason, I need to re download the dataset but unable to do so. | main | unable to download bikevideodataset tar for hi i downloaded this dataset earlier but due to some reason i need to re download the dataset but unable to do so | 1 |
25,683 | 5,191,166,896 | IssuesEvent | 2017-01-21 17:33:54 | facebook/jest | https://api.github.com/repos/facebook/jest | closed | New Jest Homepage Hero | Documentation | Also breaking this out of #2625, replicating @cpojer's comment here:
* Remove the excessive background color (only keep it in the top with the links).
* Center "Jest" and add "🃏 Painless JavaScript Testing" below, just like on the React website. * Remove the logo on the right that's shown on big screens.
* Maybe remove the two lines about how it is used and the news section (I know @hramos doesn't like it, so let's kill it).
Something like the following but vertically compressed:  | 1.0 | New Jest Homepage Hero - Also breaking this out of #2625, replicating @cpojer's comment here:
* Remove the excessive background color (only keep it in the top with the links).
* Center "Jest" and add "🃏 Painless JavaScript Testing" below, just like on the React website. * Remove the logo on the right that's shown on big screens.
* Maybe remove the two lines about how it is used and the news section (I know @hramos doesn't like it, so let's kill it).
Something like the following but vertically compressed:  | non_main | new jest homepage hero also breaking this out of replicating cpojer s comment here remove the excessive background color only keep it in the top with the links center jest and add 🃏 painless javascript testing below just like on the react website remove the logo on the right that s shown on big screens maybe remove the two lines about how it is used and the news section i know hramos doesn t like it so let s kill it something like the following but vertically compressed | 0 |