| column | dtype | values / lengths |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 4 – 112 |
| repo_url | string | length 33 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 1.02k |
| labels | string | length 4 – 1.54k |
| body | string | length 1 – 262k |
| index | string | 17 classes |
| text_combine | string | length 95 – 262k |
| label | string | 2 classes |
| text | string | length 96 – 252k |
| binary_label | int64 | 0 – 1 |
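A minimal sketch of working with this schema, using a two-row in-memory stand-in (values taken from the rows below; the real data would be loaded from its source file instead). In the sample rows, `binary_label` appears to be simply the `label` column encoded as `{non_test: 0, test: 1}`, and `created_at` is a fixed 19-character timestamp:

```python
import pandas as pd

# Two-row stand-in mirroring the schema above; values copied from the records below.
df = pd.DataFrame({
    "label": ["non_test", "test"],
    "binary_label": [0, 1],
    "created_at": ["2020-07-06 11:50:56", "2022-11-10 03:40:24"],
})

# binary_label looks like `label` encoded as {non_test: 0, test: 1};
# check that the encoding holds for every row.
df["encoded"] = df["label"].map({"non_test": 0, "test": 1})
consistent = bool((df["encoded"] == df["binary_label"]).all())

# created_at is a fixed-width "YYYY-MM-DD HH:MM:SS" string; parse it once up front.
df["created_at"] = pd.to_datetime(df["created_at"], format="%Y-%m-%d %H:%M:%S")
```

On the sample rows shown in this preview the encoding check passes; whether it holds for all 832k rows is an assumption worth verifying on the full data.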
**Row 102,800** (id 12,823,497,572) · IssuesEvent · 2020-07-06 11:50:56

- **repo:** Altinn/altinn-studio (https://api.github.com/repos/Altinn/altinn-studio)
- **action:** closed
- **title:** Setting app parameters in Studio designer
- **labels:** Epic area/app-parameters kind/analysis solution/studio/designer status/draft status/won't fix
- **index:** 1.0 · **label:** non_test · **binary_label:** 0

**body:**

## Description
EPIC for app-parameter issues in MVP3
Lorem ipsum
## In scope
> What's in scope of this analysis?
Lorem ipsum
## Out of scope
> What's **out** of scope for this analysis?
Lorem ipsum
## Constraints
> Constraints or requirements (technical or functional) that affects this analysis.
Lorem ipsum
## Analysis
Lorem ipsum
## Conclusion
> Short summary of the proposed solution.
## Tasks
- [ ] Is this issue labeled with a correct area label?
- [ ] QA has been done

**text_combine:** the title and body above, concatenated verbatim.

**text:** setting app parameters in studio designer description epic for app parameter issues in lorem ipsum in scope what s in scope of this analysis lorem ipsum out of scope what s out of scope for this analysis lorem ipsum constraints constraints or requirements technical or functional that affects this analysis lorem ipsum analysis lorem ipsum conclusion short summary of the proposed solution tasks is this issue labeled with a correct area label qa has been done
**Row 629,981** (id 20,073,272,862) · IssuesEvent · 2022-02-04 09:48:48

- **repo:** metabase/metabase (https://api.github.com/repos/metabase/metabase)
- **action:** closed
- **title:** Custom columns not appearing in result set when selecting subset of fields
- **labels:** Type:Bug Priority:P1 .Frontend .Reproduced
- **index:** 1.0 · **label:** non_test · **binary_label:** 0

**body:**

Create a question on the sample dataset on Orders, and create a custom column called "adjective" who's formula is `case([Total] > 100, "expensive", "cheap")`
The custom column appears in the results when all fields on Orders are selected, but does not appear when only a subset of fields are selected:
NOTE: most important is `release-x.42.x` It is possible this bug does not manifest in `master`. Will need a e2e test when we have narrowed and identified.

**text_combine:** the title and body above, concatenated verbatim.

**text:** custom columns not appearing in result set when selecting subset of fields create a question on the sample dataset on orders and create a custom column called adjective who s formula is case expensive cheap the custom column appears in the results when all fields on orders are selected but does not appear when only a subset of fields are selected note most important is release x x it is possible this bug does not manifest in master will need a test when we have narrowed and identified
**Row 291,405** (id 25,144,718,758) · IssuesEvent · 2022-11-10 03:40:24

- **repo:** ZcashFoundation/zebra (https://api.github.com/repos/ZcashFoundation/zebra)
- **action:** closed
- **title:** Get transactions from the non-finalized state in the send transactions test
- **labels:** C-bug S-needs-triage P-Medium :zap: I-slow C-testing A-rpc
- **index:** 1.0 · **label:** test · **binary_label:** 1

**body:**

## Motivation
Currently, the send transactions test is very slow, because it:
- copies the entire Zebra finalized state directory
- syncs to the tip
- gets transactions from the copied finalized state
- runs the test on the old finalized state
This is ok for now, but it might become a problem if the state gets much bigger, or we need to modify that test a lot.
### Designs
Instead, the test could:
- use the original Zebra cached state directory
- sync to the tip
- get transactions from at least 3 blocks in the non-finalized state via JSON-RPC
- run the test on the updated finalized state (which won't have all those non-finalized transactions)

**text_combine:** the title and body above, concatenated verbatim.

**text:** get transactions from the non finalized state in the send transactions test motivation currently the send transactions test is very slow because it copies the entire zebra finalized state directory syncs to the tip gets transactions from the copied finalized state runs the test on the old finalized state this is ok for now but it might become a problem if the state gets much bigger or we need to modify that test a lot designs instead the test could use the original zebra cached state directory sync to the tip get transactions from at least blocks in the non finalized state via json rpc run the test on the updated finalized state which won t have all those non finalized transactions
**Row 165,209** (id 20,574,342,649) · IssuesEvent · 2022-03-04 01:46:43

- **repo:** slothymonk/reviewing-a-pull-request (https://api.github.com/repos/slothymonk/reviewing-a-pull-request)
- **action:** closed
- **title:** CVE-2020-7595 (High) detected in nokogiri-1.10.3.gem - autoclosed
- **labels:** security vulnerability
- **index:** True · **label:** non_test · **binary_label:** 0

**body:**

## CVE-2020-7595 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nokogiri-1.10.3.gem</b></summary>
<p>Nokogiri (鋸) is an HTML, XML, SAX, and Reader parser. Among
Nokogiri's many features is the ability to search documents via XPath
or CSS3 selectors.</p>
<p>Library home page: <a href="https://rubygems.org/gems/nokogiri-1.10.3.gem">https://rubygems.org/gems/nokogiri-1.10.3.gem</a></p>
<p>Path to dependency file: /reviewing-a-pull-request/Gemfile.lock</p>
<p>Path to vulnerable library: /var/lib/gems/2.3.0/cache/nokogiri-1.10.3.gem</p>
<p>
Dependency Hierarchy:
- github-pages-198.gem (Root Library)
- jekyll-mentions-1.4.1.gem
- html-pipeline-2.11.0.gem
- :x: **nokogiri-1.10.3.gem** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
xmlStringLenDecodeEntities in parser.c in libxml2 2.9.10 has an infinite loop in a certain end-of-file situation.
<p>Publish Date: 2020-01-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7595>CVE-2020-7595</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://security.gentoo.org/glsa/202010-04">https://security.gentoo.org/glsa/202010-04</a></p>
<p>Fix Resolution: All libxml2 users should upgrade to the latest version # emerge --sync
# emerge --ask --oneshot --verbose >=dev-libs/libxml2-2.9.10 >= </p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)

**text_combine:** the title and body above, concatenated verbatim.

**text:** cve high detected in nokogiri gem autoclosed cve high severity vulnerability vulnerable library nokogiri gem nokogiri 鋸 is an html xml sax and reader parser among nokogiri s many features is the ability to search documents via xpath or selectors library home page a href path to dependency file reviewing a pull request gemfile lock path to vulnerable library var lib gems cache nokogiri gem dependency hierarchy github pages gem root library jekyll mentions gem html pipeline gem x nokogiri gem vulnerable library vulnerability details xmlstringlendecodeentities in parser c in has an infinite loop in a certain end of file situation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href fix resolution all users should upgrade to the latest version emerge sync emerge ask oneshot verbose dev libs step up your open source security game with whitesource
**Row 213,242** (id 16,507,340,314) · IssuesEvent · 2021-05-25 21:06:45

- **repo:** NillerMedDild/Enigmatica6 (https://api.github.com/repos/NillerMedDild/Enigmatica6)
- **action:** closed
- **title:** Twilight Forest
- **labels:** Status: Ready For Testing Suggestion
- **index:** 1.0 · **label:** test · **binary_label:** 1

**body:**

**CurseForge Link**
https://www.curseforge.com/minecraft/mc-mods/the-twilight-forest
**Mod description**
I feel like the description is pretty much known at this point :p
**Why would you like the mod added?**
Even though its not finished i believe the mod can actually have a new approach to the modpack with its new databased spawning, you guys can make the twilight forest have worth while loot in the final castle such as: Antimatter, Polonium, Totems of undying, Machine casing, Artifacts — anything that would make actually progressing through the twilight forest worthwhile; you guys could even buff the bosses up so it makes the fights a lot more challenging
Hope it makes it into the pack :)

**text_combine:** the title and body above, concatenated verbatim.

**text:** twilight forest curseforge link mod description i feel like the description is pretty much known at this point p why would you like the mod added even though its not finished i believe the mod can actually have a new approach to the modpack with its new databased spawning you guys can make the twilight forest have worth while loot in the final castle such as antimatter polonium totems of undying machine casing artifacts anything that would make actually progressing through the twilight forest worth whole you guys could even buff the bosses up so it makes the fights alot more challenging hope it makes it into the pack
**Row 257,772** (id 22,209,160,830) · IssuesEvent · 2022-06-07 17:28:44

- **repo:** Hamlib/Hamlib (https://api.github.com/repos/Hamlib/Hamlib)
- **action:** closed
- **title:** FT-991 rig split behavior
- **labels:** bug needs test fixed
- **index:** 1.0 · **label:** test · **binary_label:** 1

**body:**

Here are the results with rigctl Hamlib 4.5~git Wed Jun 01 22:20:14 2022 +0000 SHA=ce1d86:
- This driver looks more stable than the other ones. I saw neither absurd frequencies nor unwanted changes to the band stack memories.
- Only the effect that 0 Hz is displayed briefly with every transmission is still there. I therefore took a closer look. This happens as follows:
- During normal non-split operation the top line of my FT-991's display shows the rig QRG (usually VFO A), and in the second top line the Clarifier setting is displayed. Thus, usually this line shows: "CLAR 0 Hz".
- When Split Operation is set to "Rig", at my FT-991 the split mode is activated. This changes the second top line for example to "SPLIT VFO B 50.31150"
- Now it comes: When transmitting with Split Operation = Rig, each time it shows for about 0.5 seconds again "CLAR 0 Hz" and then the display switches back to "SPLIT VFO B 50.31150". (= The "0 Hz" comes from the Clarifier setting.) This means that, for whatever reason, split mode must be briefly disabled and then enabled again, during each transmission. This is the bug.

**text_combine:** the title and body above, concatenated verbatim.

**text:** ft rig split behavior here are the results with rigctl hamlib git wed jun sha this driver looks more stable than the other ones i saw neither absurd frequencies nor unwanted changes to the band stack memories only the effect that hz is displayed briefly with every transmission is still there i therefore took a closer look this happens as follows during normal non split operation the top line of my ft s display shows the rig qrg usually vfo a and in the second top line clarifier setting is displayed thus usually this line shows clar hz when split operation is set to rig at my ft the split mode is activated this changes the second top line for example to split vfo b now it comes when transmitting with split operation rig each time it comes for about seconds again clar hz and then display switches back to split vfo b the hz comes from the clarifier setting this means that for whatever reason split mode must be briefly disabled and the enabled again as said during each transmission this is the bug
**Row 103,714** (id 8,940,773,727) · IssuesEvent · 2019-01-24 01:11:55

- **repo:** apache/incubator-mxnet (https://api.github.com/repos/apache/incubator-mxnet)
- **action:** closed
- **title:** ARM QEMU test in CI failed unrelated PR
- **labels:** ARM CI Question Test
- **index:** 1.0 · **label:** test · **binary_label:** 1

**body:**

## Description
Test with ARM QEMU fails with some kind of network interruption...
Makes me wonder about these network issues where dependencies fail to download... should we put in a retry function, so that we don't have to restart our PRs when there's a transient error?
## Error
```
runtime_functions.py: 2018-11-26 03:47:02,687 ['run_ut_py3_qemu']
⢎⡑ ⣰⡀ ⢀⣀ ⡀⣀ ⣰⡀ ⠄ ⣀⡀ ⢀⡀ ⡎⢱ ⣏⡉ ⡷⢾ ⡇⢸
⠢⠜ ⠘⠤ ⠣⠼ ⠏ ⠘⠤ ⠇ ⠇⠸ ⣑⡺ ⠣⠪ ⠧⠤ ⠇⠸ ⠣⠜
runtime_functions.py: 2018-11-26 03:47:02,765 Starting VM, ssh port redirected to localhost:2222 (inside docker, not exposed by default)
runtime_functions.py: 2018-11-26 03:47:02,765 Starting in non-interactive mode. Terminal output is disabled.
runtime_functions.py: 2018-11-26 03:47:02,766 waiting for ssh to be open in the VM (timeout 300s)
runtime_functions.py: 2018-11-26 03:47:46,729 wait_ssh_open: port 127.0.0.1:2222 is open and ssh is ready
runtime_functions.py: 2018-11-26 03:47:46,729 VM is online and SSH is up
runtime_functions.py: 2018-11-26 03:47:46,729 Provisioning the VM with artifacts and sources
ssh_exchange_identification: read: Connection reset by peer
rsync: safe_write failed to write 4 bytes to socket [sender]: Broken pipe (32)
rsync error: unexplained error (code 255) at io.c(320) [sender=3.1.1]
runtime_functions.py: 2018-11-26 03:47:46,916 Shutdown via ssh
ssh_exchange_identification: read: Connection reset by peer
Traceback (most recent call last):
  File "./runtime_functions.py", line 66, in run_ut_py3_qemu
    qemu_provision(vm.ssh_port)
  File "/work/vmcontrol.py", line 186, in qemu_provision
    qemu_rsync(ssh_port, x, 'mxnet_dist/')
  File "/work/vmcontrol.py", line 175, in qemu_rsync
    check_call(['rsync', '-e', 'ssh -o StrictHostKeyChecking=no -p{}'.format(ssh_port), '-a', local_path, 'qemu@localhost:{}'.format(remote_path)])
  File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['rsync', '-e', 'ssh -o StrictHostKeyChecking=no -p2222', '-a', '/work/mxnet/build/mxnet-1.4.0-py2.py3-none-any.whl', 'qemu@localhost:mxnet_dist/']' returned non-zero exit status 255
```

**text_combine:** the title and body above, concatenated verbatim.

**text:** arm qemu test in ci failed unrelated pr description test with arm qemu fails with some kind of network interruption makes wonder about these network issues where dependencies fail to download should we put in a retry function so that we don t have to restart our prs when there s a transient error error runtime functions py starting vm ssh port redirected to localhost inside docker not exposed by default runtime functions py starting in non interactive mode terminal output is disabled runtime functions py waiting for ssh to be open in the vm timeout runtime functions py wait ssh open port is open and ssh is ready runtime functions py vm is online and ssh is up runtime functions py provisioning the vm with artifacts and sources ssh exchange identification read connection reset by peer rsync safe write failed to write bytes to socket broken pipe rsync error unexplained error code at io c runtime functions py shutdown via ssh ssh exchange identification read connection reset by peer traceback most recent call last file runtime functions py line in run ut qemu qemu provision vm ssh port file work vmcontrol py line in qemu provision qemu rsync ssh port x mxnet dist file work vmcontrol py line in qemu rsync check call file usr lib subprocess py line in check call raise calledprocesserror retcode cmd subprocess calledprocesserror command returned non zero exit status
**Row 275,713** (id 23,932,453,382) · IssuesEvent · 2022-09-10 18:59:15

- **repo:** hajimehoshi/ebiten (https://api.github.com/repos/hajimehoshi/ebiten)
- **action:** closed
- **title:** Test on Windows Server
- **labels:** os:windows test
- **index:** 1.0 · **label:** test · **binary_label:** 1

**body:**

OpenGL version might be unexpectedly old. DirectX should be used anyway?
Related #739

**text_combine:** the title and body above, concatenated verbatim.

**text:** test on windows server opengl version might be unexpectedly old directx should be used anyway related
**Row 469,610** (id 13,521,961,975) · IssuesEvent · 2020-09-15 07:51:33

- **repo:** buddyboss/buddyboss-platform (https://api.github.com/repos/buddyboss/buddyboss-platform)
- **action:** opened
- **title:** Sort Group Types Alphabetically
- **labels:** feature: enhancement priority: medium
- **index:** 1.0 · **label:** non_test · **binary_label:** 0

**body:**

**Is your feature request related to a problem? Please describe.**
When there are several group types, the group type dropdown is not user friendly; some users are having a hard time finding the group type they are looking for.
**Describe the solution you'd like**
Be able to sort the Group Type filter in the Group directory so that the user can easily find and select a group type.
**Screenshot**
(screenshot: Group-Type)
**Support ticket links**
https://secure.helpscout.net/conversation/1273251749/97145

**text_combine:** the title and body above, concatenated verbatim.

**text:** sort group types alphabetically is your feature request related to a problem please describe when there are several group types the group type dropdown is not user friendly some users are having some hard time to find the group type they are looking for describe the solution you d like be able to sort the group type filter in the group directory so that the user can easily find and select a group type screenshot support ticket links
**Row 461,818** (id 13,236,670,967) · IssuesEvent · 2020-08-18 20:13:28

- **repo:** amici-ursi/redbear (https://api.github.com/repos/amici-ursi/redbear)
- **action:** closed
- **title:** cog / redbear / config: implement member_commands, personal_commands, muted_members in Config
- **labels:** enhancement high priority
- **index:** 1.0 · **label:** non_test · **binary_label:** 0

**body:**

These were previously stored in pdsettings.p and we should use the new system. This needs to be in place for many other features to work.
#3

**text_combine:** the title and body above, concatenated verbatim.

**text:** cog redbear config implement member commands personal commands muted members in config these were previously stored in pdsettings p and we should use the new system this needs to be in place for many other features to work
**Row 87,293** (id 8,071,411,828) · IssuesEvent · 2018-08-06 13:07:27

- **repo:** timogoudzwaard/arcatering-app (https://api.github.com/repos/timogoudzwaard/arcatering-app)
- **action:** closed
- **title:** Add tests to Register component
- **labels:** unit test
- **index:** 1.0 · **label:** test · **binary_label:** 1

**body:**

Test if...
- the component is able to render
- child components render
- fields render correctly
- the onLoading function renders the correct field
- error handling works

**text_combine:** the title and body above, concatenated verbatim.

**text:** add tests to register component test if the component is able to render child components render fields render correctly the onloading function renders the correct field error handling works
**Row 190,271** (id 14,540,368,737) · IssuesEvent · 2020-12-15 13:13:43

- **repo:** ubtue/DatenProbleme (https://api.github.com/repos/ubtue/DatenProbleme)
- **action:** closed
- **title:** ISSN 1771-1347 | Reforme, Humanisme, Renaissance | DOI
- **labels:** Fehlerquelle: Translator Zotero_SEMI-AUTO ready for testing
- **index:** 1.0 · **label:** test · **binary_label:** 1

**body** (translated from German):

https://www.cairn.info/revue-reforme-humanisme-renaissance-2020-2-page-13.htm
At the bottom of the page there is a DOI. It is not being captured.

**text_combine:** the title and body above, concatenated verbatim.

**text:** issn reforme humanisme renaissance doi am unteren seitenrand befindet sich ein doi dieser wird nicht erfasst
**Row 136,606** (id 11,053,754,821) · IssuesEvent · 2019-12-10 12:04:46

- **repo:** brave/brave-ios (https://api.github.com/repos/brave/brave-ios)
- **action:** closed
- **title:** Unit Tests: Data Sync Classes
- **labels:** Epic: CI/Tests QA/No enhancement release-notes/exclude
- **index:** 1.0 · **label:** test · **binary_label:** 1

**body:**

All core data models have unit tests already.
The only untested classes in Data framework are those related to data sync.

**text_combine:** the title and body above, concatenated verbatim.

**text:** unit tests data sync classes all core data models have unit tests already the only untested classes in data framework are those related to data sync
**Row 18,402** (id 3,389,374,628) · IssuesEvent · 2015-11-30 01:15:51

- **repo:** jgirald/ES2015C (https://api.github.com/repos/jgirald/ES2015C)
- **action:** closed
- **title:** Sonido Animaciones Militar Persa (secondary actions)
- **labels:** Animation Character Design Medium Priority Persian Sound Team A
- **index:** 1.0 · **label:** non_test · **binary_label:** 0

**body** (translated from Spanish):

### Description
* Create sounds for the character animations.
* Add the sounds to the Persian Soldier ("Militar Persa") animations:
* Die
* Creation
### Acceptance criteria
Animation sounds ready to be used in integration.
### Estimated effort:
1/2 hr

**text_combine:** the title and body above, concatenated verbatim.

**text:** sonido animaciones militar persa secondary actions descripción crear sonidos para animaciones de los personajes añadir los sonidos a las animaciones del militar persa die creation acceptance criteria sonidos de las animaciones preparados para utilizar en integración esfuerzo estimado hr
19,268
| 6,694,890,040
|
IssuesEvent
|
2017-10-10 05:18:14
|
commontk/CTK
|
https://api.github.com/repos/commontk/CTK
|
closed
|
Building problem with qt5-ctk
|
Build System
|
Hello,
I would like to build the qt5-ctk library with cmake using QT 5.4.0 with mingw32.
I changed the version in ctkMacroSetupQt.cmake to "5" with
set(CTK_QT_VERSION "5" CACHE STRING "Expected Qt version")
but I get the same Error Message when trying with 4 (which is also installed).
Do you have any idea what this could cause?
This is the output of cmake:
CMake Error at CMake/ctkMacroSetupQt.cmake:74 (message):
error: Qt4 was not found on your system. You probably need to set the
QT_QMAKE_EXECUTABLE variable
Call Stack (most recent call first):
CMakeLists.txt:398 (ctkMacroSetupQt)
-- Found unsuitable Qt version "5.4.0" from C:/Qt/5.4/mingw491_32/bin/qmake.exe
-- Configuring incomplete, errors occurred!
See also "C:/Users/rsalinas/Documents/Software/HelloCmake/CTK-qt5/CTK-superbuild/CMakeFiles/CMakeOutput.log".
Thanks a lot,
Ricardo Salinas
|
1.0
|
Building problem with qt5-ctk - Hello,
I would like to build the qt5-ctk library with cmake using QT 5.4.0 with mingw32.
I changed the version in ctkMacroSetupQt.cmake to "5" with
set(CTK_QT_VERSION "5" CACHE STRING "Expected Qt version")
but I get the same Error Message when trying with 4 (which is also installed).
Do you have any idea what could cause this?
This is the output of cmake:
CMake Error at CMake/ctkMacroSetupQt.cmake:74 (message):
error: Qt4 was not found on your system. You probably need to set the
QT_QMAKE_EXECUTABLE variable
Call Stack (most recent call first):
CMakeLists.txt:398 (ctkMacroSetupQt)
-- Found unsuitable Qt version "5.4.0" from C:/Qt/5.4/mingw491_32/bin/qmake.exe
-- Configuring incomplete, errors occurred!
See also "C:/Users/rsalinas/Documents/Software/HelloCmake/CTK-qt5/CTK-superbuild/CMakeFiles/CMakeOutput.log".
Thanks a lot,
Ricardo Salinas
|
non_test
|
building problem with ctk hello i would like to build the ctk library with cmake using qt with i changed the version in ctkmacrosetupqt cmake to with set ctk qt version cache string expected qt version but i get the same error message when trying with which is also installed do you have any idea what this could cause this is the output of cmake cmake error at cmake ctkmacrosetupqt cmake message error was not found on your system you probably need to set the qt qmake executable variable call stack most recent call first cmakelists txt ctkmacrosetupqt found unsuitable qt version from c qt bin qmake exe configuring incomplete errors occurred see also c users rsalinas documents software hellocmake ctk ctk superbuild cmakefiles cmakeoutput log thanks a lot ricardo salinas
| 0
|
74,200
| 7,389,001,283
|
IssuesEvent
|
2018-03-16 06:30:17
|
brave/browser-laptop
|
https://api.github.com/repos/brave/browser-laptop
|
closed
|
Refactor needed because of deprecation of `did-get-response-details`
|
QA/test-plan-specified muon refactoring release-notes/exclude
|
## Test plan
1. Launch Brave and enable payments
2. Open 3 new tabs
3. Switch between the 3 tabs and load a unique site in each
4. Switch between the tabs and wait a few seconds, to build a history
5. Visit the payments screen. Visit time should be recorded properly
## Description
When doing Chromium 65 upgrade, the emitting of `did-get-response-details` was removed with https://github.com/brave/muon/commit/00d729f2585964027deff728a1ef3b32f7f21d65
Unless replacement code is found, this code will not execute
|
1.0
|
Refactor needed because of deprecation of `did-get-response-details` - ## Test plan
1. Launch Brave and enable payments
2. Open 3 new tabs
3. Switch between the 3 tabs and load a unique site in each
4. Switch between the tabs and wait a few seconds, to build a history
5. Visit the payments screen. Visit time should be recorded properly
## Description
When doing Chromium 65 upgrade, the emitting of `did-get-response-details` was removed with https://github.com/brave/muon/commit/00d729f2585964027deff728a1ef3b32f7f21d65
Unless replacement code is found, this code will not execute
|
test
|
refactor needed because of deprecation of did get response details test plan launch brave and enable payments open new tabs switch between the tabs and load a unique site in each switch between the tabs and wait a few seconds to build a history visit the payments screen visit time should be recorded properly description when doing chromium upgrade the emitting of did get response details was removed with unless replacement code is found this code will not execute
| 1
|
51,620
| 6,187,180,136
|
IssuesEvent
|
2017-07-04 06:35:13
|
Microsoft/vsts-tasks
|
https://api.github.com/repos/Microsoft/vsts-tasks
|
closed
|
TFS 2015 Release Cannot Get “Run Functional Tests” Task to Work on Multiple Machines
|
Area: Test
|
On-prem TFS 2015 Update 3.
I have multiple machines (different Operating Systems) that I want to run my tests on. I'm having issues getting this simple flow to work successfully. Here's what I've tried:
1. Deploy Test Agent task on multiple machines are successful.
2. If I put multiple machines in one "Run Functional Tests" task, it will execute the test on ONE of those machines in step 1 only (and will complete successfully if this is the first task). Logs here: [ReleaseLogs_57.zip](https://github.com/Microsoft/vsts-tasks/files/1077767/ReleaseLogs_57.zip)
2. If I set up 2 separate tasks, one for each machine, the 1st task will execute successfully, but as seen in bullet 2, the test is run on ANY ONE of the machines in step 1 (NOT the specific one specified for the task). In the example attached, the 1st task is set up to run on Win7, but the test was actually executed on the Win8 machine.
Then the 2nd task (which is set up to run against the Win10 machine) will not complete, no matter what machine or test I put in it. Logs for this scenario attached:
[ReleaseLogs_60.zip](https://github.com/Microsoft/vsts-tasks/files/1078061/ReleaseLogs_60.zip)
It seems that the PS script(s) for this task is broken in our environment. Here's the zip file of the entire "tasks" folder for your reference.
[tasks.zip](https://github.com/Microsoft/vsts-tasks/files/1080899/tasks.zip)
Thanks!
|
1.0
|
TFS 2015 Release Cannot Get “Run Functional Tests” Task to Work on Multiple Machines - On-prem TFS 2015 Update 3.
I have multiple machines (different Operating Systems) that I want to run my tests on. I'm having issues getting this simple flow to work successfully. Here's what I've tried:
1. Deploy Test Agent task on multiple machines are successful.
2. If I put multiple machines in one "Run Functional Tests" task, it will execute the test on ONE of those machines in step 1 only (and will complete successfully if this is the first task). Logs here: [ReleaseLogs_57.zip](https://github.com/Microsoft/vsts-tasks/files/1077767/ReleaseLogs_57.zip)
2. If I set up 2 separate tasks, one for each machine, the 1st task will execute successfully, but as seen in bullet 2, the test is run on ANY ONE of the machines in step 1 (NOT the specific one specified for the task). In the example attached, the 1st task is set up to run on Win7, but the test was actually executed on the Win8 machine.
Then the 2nd task (which is set up to run against the Win10 machine) will not complete, no matter what machine or test I put in it. Logs for this scenario attached:
[ReleaseLogs_60.zip](https://github.com/Microsoft/vsts-tasks/files/1078061/ReleaseLogs_60.zip)
It seems that the PS script(s) for this task is broken in our environment. Here's the zip file of the entire "tasks" folder for your reference.
[tasks.zip](https://github.com/Microsoft/vsts-tasks/files/1080899/tasks.zip)
Thanks!
|
test
|
tfs release cannot get “run functional tests” task to work on multiple machines on prem tfs update i have multiple machines different operating systems that i want to run my tests on i m having issues getting this simple flow to work successfully here s what i ve tried deploy test agent task on multiple machines are successful if i put multiple machines in one run functional tests task it will execute the test one one of those machines in step only and will complete successful if this is the first task logs here if i set up separate tasks one for each machine the task will execute successfully but as seen in bullet the test is run on any one of the machines in step not the specific one specified for the task in the example attached the task is set up to run on but the test was actually executed on the machine then the task which is set up to run against the machine will not complete no matter what machine or test i put in it logs for this scenario attached it seems that the ps script s for this task is broken in our environment here s the zip file of the entire tasks folder for your reference thanks
| 1
|
11,007
| 8,869,683,574
|
IssuesEvent
|
2019-01-11 06:39:47
|
hashmapinc/Tempus
|
https://api.github.com/repos/hashmapinc/Tempus
|
closed
|
Create an NLB for Tempus via the ingress controller
|
Complete bug infrastructure/issue
|
Blocked by #359
Child of #361
Create a layer 4 load balancer for tempus manually (until it is officially supported by K8s). This will attach to the ingress controller
|
1.0
|
Create an NLB for Tempus via the ingress controller - Blocked by #359
Child of #361
Create a layer 4 load balancer for tempus manually (until it is officially supported by K8s). This will attach to the ingress controller
|
non_test
|
create an nlb for tempus via the ingress controller blocked by child of create a layer load balancer for tempus manually until it is officially supported by this will attach to the ingress controller
| 0
|
326,614
| 28,006,698,916
|
IssuesEvent
|
2023-03-27 15:41:14
|
nucleus-security/Test-repo
|
https://api.github.com/repos/nucleus-security/Test-repo
|
opened
|
Nucleus: [Critical] - 440041
|
Test
|
Source: QUALYS
Description: CentOS has released security update for kernel to fix the vulnerabilities. Affected Products: centos 6
Impact: This vulnerability could be exploited to gain complete access to sensitive information. Malicious users could also use this vulnerability to change all the contents or configuration on the system. Additionally this vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable.
Target:
Asset name: 192.168.56.103 - IP: 192.168.56.103
Asset name: 192.168.56.131 - IP: 192.168.56.131
Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-May/022827.html) for updates and patch information.
Patch:
Following are links for downloading patches to fix the vulnerabilities:
CESA-2018:1319: centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-May/022827.html)
References:
QID: 440041
CVE: CVE-2017-5754, CVE-2018-8897, CVE-2017-7645, CVE-2017-8824, CVE-2017-13166, CVE-2017-18017, CVE-2017-1000410
Category: CentOS
PCI Flagged: yes
Vendor References: CESA-2018:1319 centos 6
Bugtraq IDs: 102101, 102378, 97950, 102056, 104071, 102367, 99843, 106128
Severity: Critical
Exploitable: Yes
Date Discovered: 2023-03-12 08:04:44
Please see https://nucleus-qa1.nucleussec.com/nucleus/public/app/index.php?sso=b3JnX2lkJTNEMSUyNmRvbWFpbiUzRG51Y2xldXNzZWMuY29t#vuln/1000028/NDQwMDQx/UVVBTFlT/VnVsbi1Db21wbGlhbmNl/false/MTAwMDAyOA--/c3VtbWFyeQ--/false/MjAyMy0wMy0xMiAwODowNDo0NA-- for more information on these vulnerabilities
Issue was manually created by Nucleus user: Selenium user
|
1.0
|
Nucleus: [Critical] - 440041 - Source: QUALYS
Description: CentOS has released security update for kernel to fix the vulnerabilities. Affected Products: centos 6
Impact: This vulnerability could be exploited to gain complete access to sensitive information. Malicious users could also use this vulnerability to change all the contents or configuration on the system. Additionally this vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable.
Target:
Asset name: 192.168.56.103 - IP: 192.168.56.103
Asset name: 192.168.56.131 - IP: 192.168.56.131
Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-May/022827.html) for updates and patch information.
Patch:
Following are links for downloading patches to fix the vulnerabilities:
CESA-2018:1319: centos 6 (https://lists.centos.org/pipermail/centos-announce/2018-May/022827.html)
References:
QID: 440041
CVE: CVE-2017-5754, CVE-2018-8897, CVE-2017-7645, CVE-2017-8824, CVE-2017-13166, CVE-2017-18017, CVE-2017-1000410
Category: CentOS
PCI Flagged: yes
Vendor References: CESA-2018:1319 centos 6
Bugtraq IDs: 102101, 102378, 97950, 102056, 104071, 102367, 99843, 106128
Severity: Critical
Exploitable: Yes
Date Discovered: 2023-03-12 08:04:44
Please see https://nucleus-qa1.nucleussec.com/nucleus/public/app/index.php?sso=b3JnX2lkJTNEMSUyNmRvbWFpbiUzRG51Y2xldXNzZWMuY29t#vuln/1000028/NDQwMDQx/UVVBTFlT/VnVsbi1Db21wbGlhbmNl/false/MTAwMDAyOA--/c3VtbWFyeQ--/false/MjAyMy0wMy0xMiAwODowNDo0NA-- for more information on these vulnerabilities
Issue was manually created by Nucleus user: Selenium user
|
test
|
nucleus source qualys description centos has released security update for kernel to fix the vulnerabilities affected products centos impact this vulnerability could be exploited to gain complete access to sensitive information malicious users could also use this vulnerability to change all the contents or configuration on the system additionally this vulnerability can also be used to cause a complete denial of service and could render the resource completely unavailable target asset name ip asset name ip solution to resolve this issue upgrade to the latest packages which contain a patch refer to centos advisory centos for updates and patch information patch following are links for downloading patches to fix the vulnerabilities cesa centos references qid cve cve cve cve cve cve cve cve category centos pci flagged yes vendor references cesa centos bugtraq ids severity critical exploitable yes date discovered please see for more information on these vulnerabilities issue was manually created by nucleus user selenium user
| 1
|
248,728
| 7,935,327,738
|
IssuesEvent
|
2018-07-09 04:19:44
|
minio/minio-py
|
https://api.github.com/repos/minio/minio-py
|
closed
|
fput_object: AWS S3 multipart upload fails
|
priority: medium
|
minio version: 4.0.2
## Reproduce Steps
The bug happened on our system, and some minimal steps would be this (not tested, sorry):
```python
from minio import Minio
minio_client = Minio(...) # setup to run against AWS S3
amz_meta_data = {'x_amz_meta-sha256': 'foo'}
minio_client.fput_object(bucket_name='our-bucket-name',
object_name='large_file.jpeg',
file_path='/path/to/large_file.jpeg',
content_type='image/jpeg',
metadata=amz_meta_data)
```
## Current Behaviour
An `InvalidArgument: InvalidArgument: message: Invalid Argument` exception is thrown, the upload is not finished.
## Possible Solution
I debugged this, and what happens is apparently that an AWS S3 multipart upload is initiated, which consists of (among other things) a [Initiate Multipart Upload](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadInitiate.html) and an [Upload Part](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html) request.
The initiate multipart request runs through, but the upload part request has a 400 error response:
```
<Error><Code>InvalidArgument</Code><Message>Metadata cannot be specified in this context.</Message><ArgumentName>x-amz-meta-sha256</ArgumentName><ArgumentValue>1f/29/1f29c29fb3fde0d6f9a211d74872c1b0b267d09e83e6a18bfb7b1a4d71c0352e
```
Some googling found [this answer on the AWS forum](https://forums.aws.amazon.com/thread.jspa?threadID=223994), where it is pointed out that you shouldn't pass metadata to the upload part request, only to the initiate multipart upload request.
Therefore, I suspect the solution is to adjust the minio code accordingly.
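The proposed adjustment can be illustrated with a small sketch (the helper names below are hypothetical, not the actual minio-py internals): user metadata is attached only to the Initiate Multipart Upload request, while Upload Part requests carry none of the `x-amz-meta-*` headers.

```python
def initiate_multipart_headers(content_type, metadata):
    # Initiate Multipart Upload: this is where user metadata belongs.
    headers = {'Content-Type': content_type}
    headers.update(metadata or {})
    return headers

def upload_part_headers(content_type):
    # Upload Part: no x-amz-meta-* headers, or AWS S3 rejects the part
    # with "Metadata cannot be specified in this context".
    return {'Content-Type': content_type}
```

Building the two header sets separately like this makes it impossible for the part requests to accidentally inherit the object metadata.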
|
1.0
|
fput_object: AWS S3 multipart upload fails - minio version: 4.0.2
## Reproduce Steps
The bug happened on our system, and some minimal steps would be this (not tested, sorry):
```python
from minio import Minio
minio_client = Minio(...) # setup to run against AWS S3
amz_meta_data = {'x_amz_meta-sha256': 'foo'}
minio_client.fput_object(bucket_name='our-bucket-name',
object_name='large_file.jpeg',
file_path='/path/to/large_file.jpeg',
content_type='image/jpeg',
metadata=amz_meta_data)
```
## Current Behaviour
An `InvalidArgument: InvalidArgument: message: Invalid Argument` exception is thrown, the upload is not finished.
## Possible Solution
I debugged this, and what happens is apparently that an AWS S3 multipart upload is initiated, which consists of (among other things) a [Initiate Multipart Upload](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadInitiate.html) and an [Upload Part](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html) request.
The initiate multipart request runs through, but the upload part request has a 400 error response:
```
<Error><Code>InvalidArgument</Code><Message>Metadata cannot be specified in this context.</Message><ArgumentName>x-amz-meta-sha256</ArgumentName><ArgumentValue>1f/29/1f29c29fb3fde0d6f9a211d74872c1b0b267d09e83e6a18bfb7b1a4d71c0352e
```
Some googling found [this answer on the AWS forum](https://forums.aws.amazon.com/thread.jspa?threadID=223994), where it is pointed out that you shouldn't pass metadata to the upload part request, only to the initiate multipart upload request.
Therefore, I suspect the solution is to adjust the minio code accordingly.
|
non_test
|
fput object aws multipart upload fails minio version reproduce steps the bug happened on our system and some minimal steps would be this not tested sorry python from minio import minio minio client minio setup to run against aws amz meta data x amz meta foo minio client fput object bucket name our bucket name object name large file jpeg file path path to large file jpeg content type image jpeg metadata amz meta data current behaviour an invalidargument invalidargument message invalid argument exception is thrown the upload is not finished possible solution i debugged this and what happens is apparently that an aws multipart upload is initiated which consists of among other things a and an request the initiate multipart request runs through but the upload part request has a error response invalidargument metadata cannot be specified in this context x amz meta some googling found where it is pointed out that you shouldn t pass metadata to the upload part request only to the initiate multipart upload request therefore i suspect the solution is to adjust the minio code accordingly
| 0
|
53,561
| 13,179,468,393
|
IssuesEvent
|
2020-08-12 10:59:52
|
rockyFierro/WEBPT19_TEAM_BUILDER
|
https://api.github.com/repos/rockyFierro/WEBPT19_TEAM_BUILDER
|
closed
|
FINAL STRETCH
|
build your form form submission functionality stretch
|
#### More Stretch Problems
After finishing your required elements, you can push your work further. These goals may or may not be things you have learned in this module but they build on the material you just studied. Time allowing, stretch your limits and see if you can deliver on the following optional goals:
- [ ] Follow the steps above to edit members. This is difficult to do, and the architecture is tough. But it is a great skill to practice! Pay attention to the implementation details, and to the architecture. There are many ways to accomplish this. When you finish, can you think of another way?
- [ ] Build another layer of your App so that you can keep track of multiple teams, each with their own encapsulated list of team members.
- [ ] Look into the various strategies around form validation. What happens if you try to enter a number as a team-members name? Does your App allow for that? Should it? What happens if you try and enter a function as the value to one of your fields? How could this be dangerous? How might you prevent it?
- [x] Style the forms. There are some subtle browser defaults for input tags that might need to be overwritten based on their state (active, focus, hover, etc.); Keep those CSS skill sharp.
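The validation concern above can be made concrete with a hedged sketch (in Python rather than the project's JavaScript; the function name and rules are invented for illustration): reject purely numeric names and code-like input before accepting a form submission.

```python
import re

def validate_member_name(name):
    """Accept a plausible team-member name; reject numbers and code-like input."""
    if not isinstance(name, str) or not name.strip():
        return False
    if name.strip().isdigit():           # e.g. "12345" entered as a name
        return False
    if re.search(r'[<>{}()=;]', name):   # crude guard against injected markup or functions
        return False
    return True
```

A real app would pair a check like this with framework-level escaping, since client-side validation alone can always be bypassed.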
|
1.0
|
FINAL STRETCH -
#### More Stretch Problems
After finishing your required elements, you can push your work further. These goals may or may not be things you have learned in this module but they build on the material you just studied. Time allowing, stretch your limits and see if you can deliver on the following optional goals:
- [ ] Follow the steps above to edit members. This is difficult to do, and the architecture is tough. But it is a great skill to practice! Pay attention to the implementation details, and to the architecture. There are many ways to accomplish this. When you finish, can you think of another way?
- [ ] Build another layer of your App so that you can keep track of multiple teams, each with their own encapsulated list of team members.
- [ ] Look into the various strategies around form validation. What happens if you try to enter a number as a team-members name? Does your App allow for that? Should it? What happens if you try and enter a function as the value to one of your fields? How could this be dangerous? How might you prevent it?
- [x] Style the forms. There are some subtle browser defaults for input tags that might need to be overwritten based on their state (active, focus, hover, etc.); Keep those CSS skill sharp.
|
non_test
|
final stretch more stretch problems after finishing your required elements you can push your work further these goals may or may not be things you have learned in this module but they build on the material you just studied time allowing stretch your limits and see if you can deliver on the following optional goals follow the steps above to edit members this is difficult to do and the architecture is tough but it is a great skill to practice pay attention the the implementation details and to the architecture there are many ways to accomplish this when you finish can you think of another way build another layer of your app so that you can keep track of multiple teams each with their own encapsulated list of team members look into the various strategies around form validation what happens if you try to enter a number as a team members name does your app allow for that should it what happens if you try and enter a function as the value to one of your fields how could this be dangerous how might you prevent it style the forms there are some subtle browser defaults for input tags that might need to be overwritten based on their state active focus hover etc keep those css skill sharp
| 0
|
714,413
| 24,560,688,545
|
IssuesEvent
|
2022-10-12 19:58:28
|
Poobslag/turbofat
|
https://api.github.com/repos/Poobslag/turbofat
|
opened
|
"insert line" level effect should be able to insert entire boxes
|
priority-4
|
Currently, line inserts are handled one-at-a-time which means 3x3 boxes can't be inserted -- only a series of 3x1 boxes. It would be better if line inserts could insert a bunch of rows at once, and insert an entire 3x3 box.
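The idea can be sketched in a few lines (hypothetical grid representation, not Turbo Fat's actual playfield code): splice all rows of a box into the grid in a single operation, so a 3x3 box arrives contiguously instead of as three separate 3x1 inserts.

```python
def insert_rows(playfield, index, new_rows):
    """Insert several rows at once so a multi-row box stays contiguous."""
    return playfield[:index] + [list(r) for r in new_rows] + playfield[index:]

box = [['b', 'b', 'b'] for _ in range(3)]      # a 3x3 box
field = [['.', '.', '.'] for _ in range(4)]    # an empty 4-row playfield
field = insert_rows(field, 2, box)             # rows 2..4 are now the box
```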
|
1.0
|
"insert line" level effect should be able to insert entire boxes - Currently, line inserts are handled one-at-a-time which means 3x3 boxes can't be inserted -- only a series of 3x1 boxes. It would be better if line inserts could insert a bunch of rows at once, and insert an entire 3x3 box.
|
non_test
|
insert line level effect should be able to insert entire boxes currently line inserts are handled one at a time which means boxes can t be inserted only a series of boxes it would be better if line inserts could insert a bunch of rows at once and insert an entire box
| 0
|
69,631
| 9,310,932,174
|
IssuesEvent
|
2019-03-25 20:03:49
|
exercism/javascript
|
https://api.github.com/repos/exercism/javascript
|
closed
|
Suggestion: prefer eslint syntax over style
|
chore documentation
|
With [more](https://github.com/prettier/prettier) and [more](https://github.com/xojs/xo) tools becoming available to automatically format the code base to a certain set of style rules (and [more comprehensive](https://github.com/prettier/prettier-eslint/issues/101) than `eslint --fix`, I'm also more inclined to stop teaching style preference, as indicated by mentor notes, mentor guidance docs and discussions on slack.
## The current state
Currently, the javascript `package.json` sets a _very restrictive_ code style (`airbnb`) and I *don't* think this:
- helps the student understand the language
- helps the student get fluent in a language
- helps the mentor mentoring as downloading exercises can result in wibbly wobblies all over the place
We don't instruct or enforce these rules (https://github.com/exercism/javascript/issues/44#issuecomment-416760562) strictly, but I seem them in mentoring comments and most IDE's will run these _automagically_ if present.
## Recommendation
I therefore recommend to drop down to [eslint:recommended](https://eslint.org/docs/rules/) or, if we must have a non-company style guide, use [standard](https://github.com/standard/eslint-config-standard) with semicolon rules disabled (people should decide about ASI themselves -- I don't think pushing people who know about ASI helps anyone, and TS doesn't have them either).
If this is something we want, I think doing it sooner rather than later is good and will speed up #480 greatly. The eslintignore list is still quite long and I don't think that's particularly helpful.
As a final note, I think we should _better_ communicate to the students to run `npm/yarn install` and then `npm/yarn test`. I also suggest running `test` followed by `lint` so that someone is only bugged once the test succeeds, instead of using a `"pretest": "lint"`:
```json
{
"scripts": {
"test": "test:jest && test:lint",
"test:jest": "...",
"test:lint": "..."
}
}
```
|
1.0
|
Suggestion: prefer eslint syntax over style - With [more](https://github.com/prettier/prettier) and [more](https://github.com/xojs/xo) tools becoming available to automatically format the code base to a certain set of style rules (and [more comprehensive](https://github.com/prettier/prettier-eslint/issues/101) than `eslint --fix`, I'm also more inclined to stop teaching style preference, as indicated by mentor notes, mentor guidance docs and discussions on slack.
## The current state
Currently, the javascript `package.json` sets a _very restrictive_ code style (`airbnb`) and I *don't* think this:
- helps the student understand the language
- helps the student get fluent in a language
- helps the mentor mentoring as downloading exercises can result in wibbly wobblies all over the place
We don't instruct or enforce these rules (https://github.com/exercism/javascript/issues/44#issuecomment-416760562) strictly, but I seem them in mentoring comments and most IDE's will run these _automagically_ if present.
## Recommendation
I therefore recommend to drop down to [eslint:recommended](https://eslint.org/docs/rules/) or, if we must have a non-company style guide, use [standard](https://github.com/standard/eslint-config-standard) with semicolon rules disabled (people should decide about ASI themselves -- I don't think pushing people who know about ASI helps anyone, and TS doesn't have them either).
If this is something we want, I think doing it sooner rather than later is good and will speed up #480 greatly. The eslintignore list is still quite long and I don't think that's particularly helpful.
As a final note, I think we should _better_ communicate to the students to run `npm/yarn install` and then `npm/yarn test`. I also suggest running `test` followed by `lint` so that someone is only bugged once the test succeeds, instead of using a `"pretest": "lint"`:
```json
{
"scripts": {
"test": "test:jest && test:lint",
"test:jest": "...",
"test:lint": "..."
}
}
```
|
non_test
|
suggestion prefer eslint syntax over style with and tools becoming available to automatically format the code base to a certain set of style rules and than eslint fix i m also more inclined to stop teaching style preference as indicated by mentor notes mentor guidance docs and discussions on slack the current state currently the javavscript package json sets a very restrictive code style airbnb and i don t think this helps the student understand the language helps the student get fluent in a language helps the mentor mentoring as downloading exercises can result in wibbly wobblies all over the place we don t instruct or enforce these rules strictly but i seem them in mentoring comments and most ide s will run these automagically if present recommendation i therefore recommend to drop down to eslint recommended or if we must have a non company style guide use with semicolon rules disabled people should decide about asi themselves i don t think pushing people who know about asi helps anyone and ts doesn t have them either if this is something we want i think doing it sooner rather than later is good and will speed up greatly the eslintignore list is still quite long and i don t think tháts particularly helpful as a final note i think we should better communicate to the students to run npm yarn install and then npm yarn test i also suggest running test followed by lint so that someone is only bugged once the test succeeds instead of using a pretest lint json scripts test test jest test lint test jest test lint
| 0
|
10,828
| 27,424,080,674
|
IssuesEvent
|
2023-03-01 18:51:49
|
Azure/azure-sdk
|
https://api.github.com/repos/Azure/azure-sdk
|
closed
|
Board Review: Azure IoT Models Repository Client (Python)
|
architecture board-review
|
## Contacts and Timeline
* Responsible service team: Azure IoT Portal & UX
* Main contacts:
* Developer: carter.tinney@microsoft.com
* Product Manager: ricardo.minguez@microsoft.com
* Dev Lead: paymaun.heidari@microsoft.com
* Expected code complete date: March 17 2020
* Expected release date: (Unknown - This is a preview version of a package)
## About the Service
* Link to documentation introducing/describing the service: N/A
* Link to the service REST APIs: N/A
* Link to GitHub issue for previous review sessions, if applicable: N/A
## About the client library
* Name of the client library: Azure Iot Models Repository (azure-iot-modelsrepository)
* Languages for this review: Python
## Artifacts required (per language)
### Python
* APIView Link: https://apiview.dev/Assemblies/Review/d043acebf7b7407fb4533d9814d1e334
* Link to Champion Scenarios/Quickstart samples: [Link to samples](https://github.com/cartertinney/azure-sdk-for-python/tree/master/sdk/iot/azure-iot-modelsrepository/samples)
* PR: https://github.com/Azure/azure-sdk-for-python/pull/17180
|
1.0
|
Board Review: Azure IoT Models Repository Client (Python) - ## Contacts and Timeline
* Responsible service team: Azure IoT Portal & UX
* Main contacts:
* Developer: carter.tinney@microsoft.com
* Product Manager: ricardo.minguez@microsoft.com
* Dev Lead: paymaun.heidari@microsoft.com
* Expected code complete date: March 17 2020
* Expected release date: (Unknown - This is a preview version of a package)
## About the Service
* Link to documentation introducing/describing the service: N/A
* Link to the service REST APIs: N/A
* Link to GitHub issue for previous review sessions, if applicable: N/A
## About the client library
* Name of the client library: Azure Iot Models Repository (azure-iot-modelsrepository)
* Languages for this review: Python
## Artifacts required (per language)
### Python
* APIView Link: https://apiview.dev/Assemblies/Review/d043acebf7b7407fb4533d9814d1e334
* Link to Champion Scenarios/Quickstart samples: [Link to samples](https://github.com/cartertinney/azure-sdk-for-python/tree/master/sdk/iot/azure-iot-modelsrepository/samples)
* PR: https://github.com/Azure/azure-sdk-for-python/pull/17180
|
non_test
|
board review azure iot models repository client python contacts and timeline responsible service team azure iot portal ux main contacts developer carter tinney microsoft com product manager ricardo minguez microsoft com dev lead paymaun heidari microsoft com expected code complete date march expected release date unknown this is a preview version of a package about the service link to documentation introducing describing the service n a link to the service rest apis n a link to github issue for previous review sessions if applicable n a about the client library name of the client library azure iot models repository azure iot modelsrepository languages for this review python artifacts required per language python apiview link link to champion scenarios quickstart samples pr
| 0
|
328,957
| 28,142,904,746
|
IssuesEvent
|
2023-04-02 06:05:56
|
gama-platform/gama
|
https://api.github.com/repos/gama-platform/gama
|
closed
|
Meta-data for required plugin (or invisible pragma) of GAML syntax
|
🤗 Enhancement 👍 Fix to be tested
|
**Is your request related to a problem? Please describe.**
Trying to have dynamic installation plugins/features , i have problem with finding the right plugin that provide the extensions of GAML syntax
**Describe the improvement you'd like**
For example, with a standard GAMA , when we edit the model that use extensions syntax (R, netcdf, Gaming, launchpad....) , an error syntax occurs ("xxx was not declared"). But we cant know which is/are the plugins need to be installed.
**Describe alternatives you've considered**
Some information should have , as "declared in ..... plugin" or even a quick fix with lead to the installation of that plugin.
|
1.0
|
Meta-data for required plugin (or invisible pragma) of GAML syntax - **Is your request related to a problem? Please describe.**
Trying to have dynamic installation plugins/features , i have problem with finding the right plugin that provide the extensions of GAML syntax
**Describe the improvement you'd like**
For example, with a standard GAMA , when we edit the model that use extensions syntax (R, netcdf, Gaming, launchpad....) , an error syntax occurs ("xxx was not declared"). But we cant know which is/are the plugins need to be installed.
**Describe alternatives you've considered**
Some information should have , as "declared in ..... plugin" or even a quick fix with lead to the installation of that plugin.
|
test
|
meta data for required plugin or invisible pragma of gaml syntax is your request related to a problem please describe trying to have dynamic installation plugins features i have problem with finding the right plugin that provide the extensions of gaml syntax describe the improvement you d like for example with a standard gama when we edit the model that use extensions syntax r netcdf gaming launchpad an error syntax occurs xxx was not declared but we cant know which is are the plugins need to be installed describe alternatives you ve considered some information should have as declared in plugin or even a quick fix with lead to the installation of that plugin
| 1
|
350,310
| 24,978,250,267
|
IssuesEvent
|
2022-11-02 09:39:05
|
AY2223S1-CS2103T-W13-1/tp
|
https://api.github.com/repos/AY2223S1-CS2103T-W13-1/tp
|
closed
|
[PE-D][Tester C] For unparticipate command suggestion
|
bug duplicate fixable bug.documentationbug
|
For unparticipate command, it works when you try to pass in a component that does not exist for a student, (command is executed since the input goes away), however just that nothing occurs. Maybe should state such a case in the user guide?
<!--session: 1666945157096-9798dbfe-290b-4667-b806-b70f2831affe-->
<!--Version: Web v3.4.4-->
-------------
Labels: `type.DocumentationBug` `severity.VeryLow`
original: maxng17/ped#11
|
1.0
|
[PE-D][Tester C] For unparticipate command suggestion - For unparticipate command, it works when you try to pass in a component that does not exist for a student, (command is executed since the input goes away), however just that nothing occurs. Maybe should state such a case in the user guide?
<!--session: 1666945157096-9798dbfe-290b-4667-b806-b70f2831affe-->
<!--Version: Web v3.4.4-->
-------------
Labels: `type.DocumentationBug` `severity.VeryLow`
original: maxng17/ped#11
|
non_test
|
for unparticipate command suggestion for unparticipate command it works when you try to pass in a component that does not exist for a student command is executed since the input goes away however just that nothing occurs maybe should state such a case in the user guide labels type documentationbug severity verylow original ped
| 0
|
100,420
| 11,194,945,872
|
IssuesEvent
|
2020-01-03 03:47:22
|
mgp25/Instagram-API
|
https://api.github.com/repos/mgp25/Instagram-API
|
closed
|
Update Wiki
|
documentation
|
## Prerequisites
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer completely.
- Put an `x` into all the boxes [ ] relevant to your issue (like so [x]).
- Use the *Preview* tab to see how your issue will actually look like, before sending it.
- Understand that we will *CLOSE* (without answering) *all* issues related to `challenge_required`, `checkpoint_required`, `feedback_required` or `sentry_block`. They've already been answered in the Wiki and *countless* closed tickets in the past!
- Do not post screenshots of error messages or code.
---
### Before submitting an issue make sure you have:
- [x] [Searched](https://github.com/mgp25/Instagram-API/search?type=Issues) the bugtracker for similar issues including **closed** ones
- [x] [Read the FAQ](https://github.com/mgp25/Instagram-API/wiki/FAQ)
- [x] [Read the wiki](https://github.com/mgp25/Instagram-API/wiki)
- [x] [Reviewed the examples](https://github.com/mgp25/Instagram-API/tree/master/examples)
- [x] [Installed the api using ``composer``](https://github.com/mgp25/Instagram-API#installation)
- [x] [Using latest API release](https://github.com/mgp25/Instagram-API/releases)
### Purpose of your issue?
- [ ] Bug report (encountered problems/errors)
- [ ] Feature request (request for new functionality)
- [ ] Question
- [x] Other
---
https://github.com/mgp25/Instagram-API/wiki#instagram-direct
This section has outdated information
`$ig->direct->sendPost($recipients, $mediaId);`
Instagram needs compulsory 3 parameters. Third parameter is missing.
3rd one is called options
```
$options = [
'media_type' => 'video', //compulsory, it doesn't care photo can also be sent as video and it will sent normally
'text' => 'wow 22', // optional, if you want to send some message with media.
];
```
`$ig->direct->sendPost($recipients, $mediaId, $options);`
|
1.0
|
Update Wiki - ## Prerequisites
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer completely.
- Put an `x` into all the boxes [ ] relevant to your issue (like so [x]).
- Use the *Preview* tab to see how your issue will actually look like, before sending it.
- Understand that we will *CLOSE* (without answering) *all* issues related to `challenge_required`, `checkpoint_required`, `feedback_required` or `sentry_block`. They've already been answered in the Wiki and *countless* closed tickets in the past!
- Do not post screenshots of error messages or code.
---
### Before submitting an issue make sure you have:
- [x] [Searched](https://github.com/mgp25/Instagram-API/search?type=Issues) the bugtracker for similar issues including **closed** ones
- [x] [Read the FAQ](https://github.com/mgp25/Instagram-API/wiki/FAQ)
- [x] [Read the wiki](https://github.com/mgp25/Instagram-API/wiki)
- [x] [Reviewed the examples](https://github.com/mgp25/Instagram-API/tree/master/examples)
- [x] [Installed the api using ``composer``](https://github.com/mgp25/Instagram-API#installation)
- [x] [Using latest API release](https://github.com/mgp25/Instagram-API/releases)
### Purpose of your issue?
- [ ] Bug report (encountered problems/errors)
- [ ] Feature request (request for new functionality)
- [ ] Question
- [x] Other
---
https://github.com/mgp25/Instagram-API/wiki#instagram-direct
This section has outdated information
`$ig->direct->sendPost($recipients, $mediaId);`
Instagram needs compulsory 3 parameters. Third parameter is missing.
3rd one is called options
```
$options = [
'media_type' => 'video', //compulsory, it doesn't care photo can also be sent as video and it will sent normally
'text' => 'wow 22', // optional, if you want to send some message with media.
];
```
`$ig->direct->sendPost($recipients, $mediaId, $options);`
|
non_test
|
update wiki prerequisites you will be asked some questions and requested to provide some information please read them carefully and answer completely put an x into all the boxes relevant to your issue like so use the preview tab to see how your issue will actually look like before sending it understand that we will close without answering all issues related to challenge required checkpoint required feedback required or sentry block they ve already been answered in the wiki and countless closed tickets in the past do not post screenshots of error messages or code before submitting an issue make sure you have the bugtracker for similar issues including closed ones purpose of your issue bug report encountered problems errors feature request request for new functionality question other this section has outdated information ig direct sendpost recipients mediaid instagram needs compulsory parameters third parameter is missing one is called options options media type video compulsory it doesn t care photo can also be sent as video and it will sent normally text wow optional if you want to send some message with media ig direct sendpost recipients mediaid options
| 0
|
491,728
| 14,170,118,060
|
IssuesEvent
|
2020-11-12 14:08:37
|
ajency/Dhanda-App
|
https://api.github.com/repos/ajency/Dhanda-App
|
closed
|
The staff type should be pre-selected since we are only editing it. It is by default set to Monthly even if the type is " daily " or " Weekly "
|
Assigned to QA Priority: High bug
|
Link: https://drive.google.com/file/d/1F0pBpKPn0C9aMjJ0DmB3vTMJP-JRBkJh/view
|
1.0
|
The staff type should be pre-selected since we are only editing it. It is by default set to Monthly even if the type is " daily " or " Weekly " -
Link: https://drive.google.com/file/d/1F0pBpKPn0C9aMjJ0DmB3vTMJP-JRBkJh/view
|
non_test
|
the staff type should be pre selected since we are only editing it it is by default set to monthly even if the type is daily or weekly link
| 0
|
91,616
| 8,310,279,687
|
IssuesEvent
|
2018-09-24 10:07:53
|
azerothcore/azerothcore-wotlk
|
https://api.github.com/repos/azerothcore/azerothcore-wotlk
|
closed
|
[RAID] MC Majordomo Executus' guards don't always attack the party.
|
DB Fix included Good first issue Help wanted Instance - Raid - Vanilla Needs Testing/Feedback
|
**Description**: Attacking the Flamewalker Healers doesn't trigger Majordomo Executus or the Flamewalker Elite mobs to attack the party. I was able to kill all of the healers without interference which ruined the fight mechanics. This may also occur when attacking the Flamewalker Elite mobs, but I haven't tested yet.
https://imgur.com/a/6Sbf9
**Branch:**
Master
**Commit Hash:**
c15f15ebb0a113a5cf20fc11afb89b2bcff9ec6d
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/48926042-raid-mc-majordomo-executus-guards-don-t-always-attack-the-party?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
1.0
|
[RAID] MC Majordomo Executus' guards don't always attack the party. - **Description**: Attacking the Flamewalker Healers doesn't trigger Majordomo Executus or the Flamewalker Elite mobs to attack the party. I was able to kill all of the healers without interference which ruined the fight mechanics. This may also occur when attacking the Flamewalker Elite mobs, but I haven't tested yet.
https://imgur.com/a/6Sbf9
**Branch:**
Master
**Commit Hash:**
c15f15ebb0a113a5cf20fc11afb89b2bcff9ec6d
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/48926042-raid-mc-majordomo-executus-guards-don-t-always-attack-the-party?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F40032087&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
test
|
mc majordomo executus guards don t always attack the party description attacking the flamewalker healers doesn t trigger majordomo executus or the flamewalker elite mobs to attack the party i was able to kill all of the healers without interference which ruined the fight mechanics this may also occur when attacking the flamewalker elite mobs but i haven t tested yet branch master commit hash want to back this issue we accept bounties via
| 1
|
77,181
| 15,499,744,605
|
IssuesEvent
|
2021-03-11 08:24:02
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
opened
|
Put User doesn't always overwrite custom metadata
|
:Security/Authentication >bug
|
https://discuss.elastic.co/t/how-to-remove-custom-metadata-fields-from-users-and-roles/266646/2
By design, a PUT on a user does not overwrite their password unless the password (or hash) is in the request body.
That means, that within the NativeUsersStore, the Put will actually perform an update on the underlying document.
Because `metadata` is stored as a nested object, the semantics of that update means that metadata fields are not always removed from the document, even though they are supposed to be (and are for other object types like roles).
To be extra confusing putting a user with `metadata: {}` will preserve all existing metadata, but not specifying `metadata` at all will remove all metadata (and `metadata: null` is not allowed by the rest parser).
We will need to think carefully about how to fix this without breaking existing workflows that rely on the bug.
Examples below:
```
GET /.security/_doc/user-test {}
===
{
"_index": ".security-7",
"_type": "_doc",
"_id": "user-test",
"found": false
}
===
PUT /_security/user/test {}
{"roles":[],"password":"this is a password","metadata":{"test 1":1}}
===
{ "created": true }
===
GET /.security/_doc/user-test {}
===
{
"_index": ".security-7",
"_type": "_doc",
"_id": "user-test",
"_version": 8,
"_seq_no": 23,
"_primary_term": 2,
"found": true,
"_source": {
"username": "test",
"password": "$2a$10$wDDS11yWheRfK2ypdYnlkOvX5Mbh/h38i9Ig9hE.1QHpUY0RlFeLq",
"roles": [],
"full_name": null,
"email": null,
"metadata": { "test 1": 1 },
"enabled": true,
"type": "user"
}
}
===
GET /_security/user/test {}
===
{
"test": {
"username": "test",
"roles": [],
"full_name": null,
"email": null,
"metadata": { "test 1": 1 },
"enabled": true
}
}
===
PUT /_security/user/test {}
{"roles":[],"metadata":{"test 2":2}}
===
{ "created": false }
===
GET /.security/_doc/user-test {}
===
{
"_index": ".security-7",
"_type": "_doc",
"_id": "user-test",
"_version": 9,
"_seq_no": 24,
"_primary_term": 2,
"found": true,
"_source": {
"username": "test",
"password": "$2a$10$wDDS11yWheRfK2ypdYnlkOvX5Mbh/h38i9Ig9hE.1QHpUY0RlFeLq",
"roles": [],
"full_name": null,
"email": null,
"metadata": {
"test 1": 1,
"test 2": 2
},
"enabled": true,
"type": "user"
}
}
===
GET /_security/user/test {}
===
{
"test": {
"username": "test",
"roles": [],
"full_name": null,
"email": null,
"metadata": {
"test 2": 2,
"test 1": 1
},
"enabled": true
}
}
===
PUT /_security/user/test {}
{"roles":[],"metadata":{}}
===
{ "created": false }
===
GET /.security/_doc/user-test {}
===
{
"_index": ".security-7",
"_type": "_doc",
"_id": "user-test",
"_version": 9,
"_seq_no": 24,
"_primary_term": 2,
"found": true,
"_source": {
"username": "test",
"password": "$2a$10$wDDS11yWheRfK2ypdYnlkOvX5Mbh/h38i9Ig9hE.1QHpUY0RlFeLq",
"roles": [],
"full_name": null,
"email": null,
"metadata": {
"test 1": 1,
"test 2": 2
},
"enabled": true,
"type": "user"
}
}
===
GET /_security/user/test {}
===
{
"test": {
"username": "test",
"roles": [],
"full_name": null,
"email": null,
"metadata": {
"test 2": 2,
"test 1": 1
},
"enabled": true
}
}
===
PUT /_security/user/test {}
{"roles":[]}
===
{ "created": false }
===
GET /.security/_doc/user-test {}
===
{
"_index": ".security-7",
"_type": "_doc",
"_id": "user-test",
"_version": 10,
"_seq_no": 25,
"_primary_term": 2,
"found": true,
"_source": {
"username": "test",
"password": "$2a$10$wDDS11yWheRfK2ypdYnlkOvX5Mbh/h38i9Ig9hE.1QHpUY0RlFeLq",
"roles": [],
"full_name": null,
"email": null,
"metadata": null,
"enabled": true,
"type": "user"
}
}
===
GET /_security/user/test {}
===
{
"test": {
"username": "test",
"roles": [],
"full_name": null,
"email": null,
"metadata": {},
"enabled": true
}
}
```
|
True
|
Put User doesn't always overwrite custom metadata - https://discuss.elastic.co/t/how-to-remove-custom-metadata-fields-from-users-and-roles/266646/2
By design, a PUT on a user does not overwrite their password unless the password (or hash) is in the request body.
That means, that within the NativeUsersStore, the Put will actually perform an update on the underlying document.
Because `metadata` is stored as a nested object, the semantics of that update means that metadata fields are not always removed from the document, even though they are supposed to be (and are for other object types like roles).
To be extra confusing putting a user with `metadata: {}` will preserve all existing metadata, but not specifying `metadata` at all will remove all metadata (and `metadata: null` is not allowed by the rest parser).
We will need to think carefully about how to fix this without breaking existing workflows that rely on the bug.
Examples below:
```
GET /.security/_doc/user-test {}
===
{
"_index": ".security-7",
"_type": "_doc",
"_id": "user-test",
"found": false
}
===
PUT /_security/user/test {}
{"roles":[],"password":"this is a password","metadata":{"test 1":1}}
===
{ "created": true }
===
GET /.security/_doc/user-test {}
===
{
"_index": ".security-7",
"_type": "_doc",
"_id": "user-test",
"_version": 8,
"_seq_no": 23,
"_primary_term": 2,
"found": true,
"_source": {
"username": "test",
"password": "$2a$10$wDDS11yWheRfK2ypdYnlkOvX5Mbh/h38i9Ig9hE.1QHpUY0RlFeLq",
"roles": [],
"full_name": null,
"email": null,
"metadata": { "test 1": 1 },
"enabled": true,
"type": "user"
}
}
===
GET /_security/user/test {}
===
{
"test": {
"username": "test",
"roles": [],
"full_name": null,
"email": null,
"metadata": { "test 1": 1 },
"enabled": true
}
}
===
PUT /_security/user/test {}
{"roles":[],"metadata":{"test 2":2}}
===
{ "created": false }
===
GET /.security/_doc/user-test {}
===
{
"_index": ".security-7",
"_type": "_doc",
"_id": "user-test",
"_version": 9,
"_seq_no": 24,
"_primary_term": 2,
"found": true,
"_source": {
"username": "test",
"password": "$2a$10$wDDS11yWheRfK2ypdYnlkOvX5Mbh/h38i9Ig9hE.1QHpUY0RlFeLq",
"roles": [],
"full_name": null,
"email": null,
"metadata": {
"test 1": 1,
"test 2": 2
},
"enabled": true,
"type": "user"
}
}
===
GET /_security/user/test {}
===
{
"test": {
"username": "test",
"roles": [],
"full_name": null,
"email": null,
"metadata": {
"test 2": 2,
"test 1": 1
},
"enabled": true
}
}
===
PUT /_security/user/test {}
{"roles":[],"metadata":{}}
===
{ "created": false }
===
GET /.security/_doc/user-test {}
===
{
"_index": ".security-7",
"_type": "_doc",
"_id": "user-test",
"_version": 9,
"_seq_no": 24,
"_primary_term": 2,
"found": true,
"_source": {
"username": "test",
"password": "$2a$10$wDDS11yWheRfK2ypdYnlkOvX5Mbh/h38i9Ig9hE.1QHpUY0RlFeLq",
"roles": [],
"full_name": null,
"email": null,
"metadata": {
"test 1": 1,
"test 2": 2
},
"enabled": true,
"type": "user"
}
}
===
GET /_security/user/test {}
===
{
"test": {
"username": "test",
"roles": [],
"full_name": null,
"email": null,
"metadata": {
"test 2": 2,
"test 1": 1
},
"enabled": true
}
}
===
PUT /_security/user/test {}
{"roles":[]}
===
{ "created": false }
===
GET /.security/_doc/user-test {}
===
{
"_index": ".security-7",
"_type": "_doc",
"_id": "user-test",
"_version": 10,
"_seq_no": 25,
"_primary_term": 2,
"found": true,
"_source": {
"username": "test",
"password": "$2a$10$wDDS11yWheRfK2ypdYnlkOvX5Mbh/h38i9Ig9hE.1QHpUY0RlFeLq",
"roles": [],
"full_name": null,
"email": null,
"metadata": null,
"enabled": true,
"type": "user"
}
}
===
GET /_security/user/test {}
===
{
"test": {
"username": "test",
"roles": [],
"full_name": null,
"email": null,
"metadata": {},
"enabled": true
}
}
```
|
non_test
|
put user doesn t always overwrite custom metadata by design a put on a user does not overwrite their password unless the password or hash is in the request body that means that within the nativeusersstore the put will actually perform an update on the underlying document because metadata is stored as a nested object the semantics of that update means that metadata fields are not always removed from the document even though they are supposed to be and are for other object types like roles to be extra confusing putting a user with metadata will preserve all existing metadata but not specifying metadata at all will remove all metadata and metadata null is not allowed by the rest parser we will need to think carefully about how to fix this without breaking existing workflows that rely on the bug examples below get security doc user test index security type doc id user test found false put security user test roles password this is a password metadata test created true get security doc user test index security type doc id user test version seq no primary term found true source username test password roles full name null email null metadata test enabled true type user get security user test test username test roles full name null email null metadata test enabled true put security user test roles metadata test created false get security doc user test index security type doc id user test version seq no primary term found true source username test password roles full name null email null metadata test test enabled true type user get security user test test username test roles full name null email null metadata test test enabled true put security user test roles metadata created false get security doc user test index security type doc id user test version seq no primary term found true source username test password roles full name null email null metadata test test enabled true type user get security user test test username test roles full name null email null metadata test 
test enabled true put security user test roles created false get security doc user test index security type doc id user test version seq no primary term found true source username test password roles full name null email null metadata null enabled true type user get security user test test username test roles full name null email null metadata enabled true
| 0
|
349,217
| 31,788,659,766
|
IssuesEvent
|
2023-09-13 00:14:07
|
ewlu/riscv-gnu-toolchain
|
https://api.github.com/repos/ewlu/riscv-gnu-toolchain
|
closed
|
Testsuite Status 4e2d53c943400e8b5d49a7d5aab4a1ad046aefba
|
bug testsuite-failure
|
# Summary
|Testsuite Failures|Additional Info|
|---|---|
|gcc-linux-rv64imafdcv_zicond_zawrs_zbc_zvkng_zvksg_zvbb_zvbc_zicsr_zba_zbb_zbs_zicbom_zicbop_zicboz_zfhmin_zkt-lp64d-4e2d53c943400e8b5d49a7d5aab4a1ad046aefba-non-multilib|Cannot find testsuite artifact. Likely caused by testsuite timeout.|
|New Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|Resolved Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|linux: rv32 Bitmanip ilp32d medlow|3/1|0/0|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|Unresolved Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|linux: rv32 Bitmanip ilp32d medlow|54/40|12/5|12/2|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv32 Vector Crypto ilp32d medlow|93/70|16/9|79/14|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv32gcv ilp32d medlow|90/69|16/9|79/14|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv64 Bitmanip lp64d medlow|35/31|10/4|12/2|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv64 Vector Crypto lp64d medlow|48/46|11/5|79/14|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv64gcv lp64d medlow|48/46|11/5|79/14|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv64imafdc lp64d medlow multilib|10/6|10/4|12/2|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: RVA23U64 profile lp64d medlow|1246/366|1202/333|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv32 Bitmanip ilp32d medlow|55/11|46/8|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv32 Vector Crypto ilp32d medlow|96/45|50/12|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv32gc ilp32d medlow multilib|55/11|46/8|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv32gcv ilp32d medlow|92/41|50/12|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv64 Bitmanip lp64d medlow|54/16|44/7|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv64 Vector Crypto lp64d medlow|74/36|45/8|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv64gc lp64d medlow multilib|47/9|44/7|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv64gcv lp64d medlow|70/32|45/8|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
## Resolved Failures Across All Affected Targets (1 targets / 15 total targets)
```
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 0" 2
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 2" 1
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 3" 1
```
Associated run is: https://github.com/patrick-rivos/riscv-gnu-toolchain/actions/runs/6056751094
|
1.0
|
Testsuite Status 4e2d53c943400e8b5d49a7d5aab4a1ad046aefba - # Summary
|Testsuite Failures|Additional Info|
|---|---|
|gcc-linux-rv64imafdcv_zicond_zawrs_zbc_zvkng_zvksg_zvbb_zvbc_zicsr_zba_zbb_zbs_zicbom_zicbop_zicboz_zfhmin_zkt-lp64d-4e2d53c943400e8b5d49a7d5aab4a1ad046aefba-non-multilib|Cannot find testsuite artifact. Likely caused by testsuite timeout.|
|New Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|Resolved Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|linux: rv32 Bitmanip ilp32d medlow|3/1|0/0|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|Unresolved Failures|gcc|g++|gfortran|Previous Hash|
|---|---|---|---|---|
|linux: rv32 Bitmanip ilp32d medlow|54/40|12/5|12/2|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv32 Vector Crypto ilp32d medlow|93/70|16/9|79/14|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv32gcv ilp32d medlow|90/69|16/9|79/14|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv64 Bitmanip lp64d medlow|35/31|10/4|12/2|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv64 Vector Crypto lp64d medlow|48/46|11/5|79/14|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv64gcv lp64d medlow|48/46|11/5|79/14|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|linux: rv64imafdc lp64d medlow multilib|10/6|10/4|12/2|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: RVA23U64 profile lp64d medlow|1246/366|1202/333|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv32 Bitmanip ilp32d medlow|55/11|46/8|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv32 Vector Crypto ilp32d medlow|96/45|50/12|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv32gc ilp32d medlow multilib|55/11|46/8|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv32gcv ilp32d medlow|92/41|50/12|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv64 Bitmanip lp64d medlow|54/16|44/7|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv64 Vector Crypto lp64d medlow|74/36|45/8|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv64gc lp64d medlow multilib|47/9|44/7|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
|newlib: rv64gcv lp64d medlow|70/32|45/8|0/0|[80907b03c8e72cdcd597f1359fda21163ec22107](https://github.com/gcc-mirror/gcc/compare/80907b03c8e72cdcd597f1359fda21163ec22107...4e2d53c943400e8b5d49a7d5aab4a1ad046aefba)|
## Resolved Failures Across All Affected Targets (1 targets / 15 total targets)
```
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 0" 2
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 2" 1
FAIL: gcc.dg/tree-prof/time-profiler-2.c scan-ipa-dump-times profile "Read tp_first_run: 3" 1
```
Associated run is: https://github.com/patrick-rivos/riscv-gnu-toolchain/actions/runs/6056751094
|
test
|
testsuite status summary testsuite failures additional info gcc linux zicond zawrs zbc zvkng zvksg zvbb zvbc zicsr zba zbb zbs zicbom zicbop zicboz zfhmin zkt non multilib cannot find testsuite artifact likely caused by testsuite timeout new failures gcc g gfortran previous hash resolved failures gcc g gfortran previous hash linux bitmanip medlow unresolved failures gcc g gfortran previous hash linux bitmanip medlow linux vector crypto medlow linux medlow linux bitmanip medlow linux vector crypto medlow linux medlow linux medlow multilib newlib profile medlow newlib bitmanip medlow newlib vector crypto medlow newlib medlow multilib newlib medlow newlib bitmanip medlow newlib vector crypto medlow newlib medlow multilib newlib medlow resolved failures across all affected targets targets total targets fail gcc dg tree prof time profiler c scan ipa dump times profile read tp first run fail gcc dg tree prof time profiler c scan ipa dump times profile read tp first run fail gcc dg tree prof time profiler c scan ipa dump times profile read tp first run associated run is
| 1
|
225,439
| 17,859,055,376
|
IssuesEvent
|
2021-09-05 16:01:00
|
tulip-control/tulip-control
|
https://api.github.com/repos/tulip-control/tulip-control
|
closed
|
updating the binaries used in CI tests for the dependency `storm`
|
testing
|
On the branch `ci_update_download` I have updated the CI configuration and dependencies (this was motivated by `bintray.com` being discontinued, as explained in a section below). The only remaining error:
https://travis-ci.com/github/tulip-control/tulip-control/jobs/510638673#L15866
reads:
```
File "/home/travis/build/tulip-control/tulip-control/tests/stormpy_interface_test.py", line 25, in <module>
from tulip.interfaces import stormpy as stormpy_int
File "/home/travis/virtualenv/python3.9.1/lib/python3.9/site-packages/tulip/interfaces/stormpy.py", line 39, in <module>
import stormpy
File "/home/travis/build/tulip-control/tulip-control/stormpy-1.6.2/lib/stormpy/__init__.py", line 6, in <module>
from . import core
ImportError: libboost_filesystem.so.1.65.1: cannot open shared object file: No such file or directory
```
This error appears to be from [`storm`](https://github.com/moves-rwth/storm), to which [`stormpy`](https://github.com/moves-rwth/stormpy) interfaces. `stormpy` is built on the CI run:
https://github.com/tulip-control/tulip-control/blob/ece84cdccb4deb4cb14b144df7e5b7f5697eb869/extern/get-stormpy.sh#L79
so I would expect that it links (if it links at all) to the version of `libboost_filesystem.so` that is present on `focal`, and thus would not raise the above error. (`stormpy` is built using [`pybind11`](https://github.com/pybind/pybind11), and the [`core`](https://github.com/moves-rwth/stormpy/tree/1.6.2/src/core) mentioned in the traceback above is a C++ extension that is [built with](https://github.com/moves-rwth/stormpy/blob/1.6.2/src/mod_core.cpp#L14) `pybind11`.) That the error is from `storm` is confirmed by attempting to run
[`storm --version`](https://www.stormchecker.org/documentation/usage/running-storm.html#first-steps)
in the CI run, which [raises](https://travis-ci.com/github/tulip-control/tulip-control/jobs/510660367#L15483):
```
./storm/build/bin/storm: error while loading shared libraries: libboost_system.so.1.65.1: cannot open shared object file: No such file or directory
```
(`libboost_filesystem.so.1.65.1` was the dependency in the traceback above, but that traceback came from `stormpy`, which is linked to `storm` when built; the libraries loaded via `stormpy`, and their order, can differ from those loaded when calling `storm --version` directly.)
Indeed, the dependency on `libboost_filesystem` is confirmed as follows after unpacking the archive downloaded from https://sourceforge.net/projects/cudd-mirror/files/storm.tar.xz/download and `cd`-ing into it:
```
> pwd
.../storm/build/lib
> strings * | ag libboost_filesystem
libboost_filesystem.so.1.65.1
> ag --search-binary libboost_filesystem
Binary file libstorm.so matches.
```
(Using [`strings`](https://en.wikipedia.org/wiki/Strings_(Unix)) and [`ag`](https://github.com/ggreer/the_silver_searcher).)
(Boost is a [requirement of `storm`](https://www.stormchecker.org/documentation/obtain-storm/dependencies.html#boost).)
Also, the error does not seem to be from `carl`, a dependency of `storm`, because [`ldd carl/libcarl.so.14.20` prints in the CI run](https://travis-ci.com/github/tulip-control/tulip-control/jobs/510660367#L2586-L2597):
```
linux-vdso.so.1 (0x00007ffe28fa5000)
libgmpxx.so.4 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libgmpxx.so.4 (0x00007f6708d80000)
libgmp.so.10 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libgmp.so.10 (0x00007f6708b0a000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f6708af5000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6708ad2000)
libcln.so.6 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libcln.so.6 (0x00007f670875b000)
libginac.so.6 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libginac.so.6 (0x00007f67082ba000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f67080d9000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f6707f8a000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f6707f6f000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6707d7d000)
/lib64/ld-linux-x86-64.so.2 (0x00007f670924a000)
```
whereas [`ldd storm/build/lib/libstorm.so` prints in the CI run](https://travis-ci.com/github/tulip-control/tulip-control/jobs/510660367#L15461-L15482):
```
linux-vdso.so.1 (0x00007ffd11d93000)
libcarl.so.14.20 => /home/travis/build/tulip-control/tulip-control/carl/libcarl.so.14.20 (0x00007f5bb1a68000)
libginac.so.6 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libginac.so.6 (0x00007f5bb15c9000)
libcln.so.6 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libcln.so.6 (0x00007f5bb1252000)
libgmpxx.so.4 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libgmpxx.so.4 (0x00007f5bb104b000)
libgmp.so.10 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libgmp.so.10 (0x00007f5bb0dd3000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5bb0da1000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f5bb0bc0000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5bb0a71000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f5bb0a56000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5bb0864000)
/lib64/ld-linux-x86-64.so.2 (0x00007f5bb3d42000)
libboost_filesystem.so.1.65.1 => not found
libboost_system.so.1.65.1 => not found
libz3.so.4 => /lib/x86_64-linux-gnu/libz3.so.4 (0x00007f5baf2c2000)
libglpk.so.40 => /lib/x86_64-linux-gnu/libglpk.so.40 (0x00007f5baefe3000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f5baefdd000)
libcolamd.so.2 => /lib/x86_64-linux-gnu/libcolamd.so.2 (0x00007f5baefd2000)
libamd.so.2 => /lib/x86_64-linux-gnu/libamd.so.2 (0x00007f5baefc7000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f5baefab000)
libltdl.so.7 => /lib/x86_64-linux-gnu/libltdl.so.7 (0x00007f5baefa0000)
libsuitesparseconfig.so.5 => /lib/x86_64-linux-gnu/libsuitesparseconfig.so.5 (0x00007f5baef9b000)
```
In particular, notice in the `ldd` output for `libstorm.so` the lines:
```
libboost_filesystem.so.1.65.1 => not found
libboost_system.so.1.65.1 => not found
```
So the above error is raised because the binary of `storm` has been linked to `libboost_filesystem.so.1.65.1`, which [is present on Ubuntu `bionic`](https://packages.ubuntu.com/bionic/libboost-filesystem-dev), but absent on Ubuntu `focal`. On Ubuntu `focal`, [`libboost-all-dev (1.71.0.0ubuntu2)` is available](https://packages.ubuntu.com/focal/libboost-filesystem-dev).
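The `ldd` check above can be automated; a minimal sketch follows (the `missing_libs` helper is hypothetical, not part of `tulip` or `storm`, and the sample lines are copied from the `libstorm.so` output quoted above):

```python
def missing_libs(ldd_output):
    """Return the shared libraries that `ldd` reported as unresolved."""
    missing = []
    for line in ldd_output.splitlines():
        line = line.strip()
        # `ldd` renders unresolved dependencies as "<soname> => not found".
        if line.endswith("=> not found"):
            missing.append(line.split(" => ")[0].strip())
    return missing

# Sample lines copied from the `ldd storm/build/lib/libstorm.so` output above.
sample = """\
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5bb0864000)
libboost_filesystem.so.1.65.1 => not found
libboost_system.so.1.65.1 => not found
"""
print(missing_libs(sample))
# → ['libboost_filesystem.so.1.65.1', 'libboost_system.so.1.65.1']
```

In practice the input would come from something like `subprocess.run(["ldd", path], capture_output=True, text=True).stdout`.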
Currently, binaries for `storm` are downloaded from https://sourceforge.net/projects/cudd-mirror/files/storm.tar.xz/download (also for https://sourceforge.net/projects/cudd-mirror/files/carl.tar.xz/download, though I do not know whether that program will run on `focal`--from the first `ldd` output above it seems that it might).
Approaches to consider:
- updating the binaries at: https://sourceforge.net/projects/cudd-mirror/files/
- uploading new binaries as "release" assets at: https://github.com/tulip-control/data. The `data` repository content would be minimal, simply describing textually what version of the executables are built using which script from the `tulip-control` repository. This would avoid committing binaries in `git` (even though the purpose of the `data` repository could be regarded as a place for binaries, among other kinds of files).
It will be some time before I will have access to a machine where I can build newer versions of these binaries suitable for use on Travis CI (the CI builds will fail until the issue is addressed). In any case, the changes from branch `ci_update_download` could be merged now into the mainline branch of `tulip`. Then the remaining update for the CI tests to pass would be to change the `storm` binaries.
## About caching binaries of dependencies on Travis CI
Caching on Travis CI the results of building `stormpy`-related packages has been [considered before](https://github.com/tulip-control/tulip-control/pull/237#issuecomment-703171430). However, building these programs takes hours (https://github.com/tulip-control/tulip-control/pull/237#issuecomment-703168131, https://github.com/tulip-control/tulip-control/pull/237#issuecomment-720674642), so I do not know how feasible it is to build the binaries on Travis CI.
An issue with such builds on Travis CI would be [timeouts](https://docs.travis-ci.com/user/customizing-the-build/#build-timeouts) (10 minutes if no output is produced by a CI instance, and 50 minutes overall for each CI instance). It appears that [timeouts can be extended](https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received) with `travis_wait`. However, `travis_wait` changes the timeout related to producing output, not the overall timeout. I do not know whether there is any way to change the overall timeout.
In addition, Travis CI [caches expire after 45 days](https://docs.travis-ci.com/user/caching/#caches-expiration) (for `travis-ci.com`). So building caches of executables on Travis CI would not avoid periodic rebuilds.
## Changes to CI configuration on branch `ci_update_download`
In the CI environment where `tulip` is tested, `gr1c` was downloaded from Bintray:
https://github.com/tulip-control/tulip-control/blob/bb004422b575dccea8d19c33acfeb04b37c62a5a/.travis.yml#L73
[Bintray](https://bintray.com) [shut down on May 1st, 2021](https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocenter-and-chartcenter/). The CI setup script [raises](https://travis-ci.org/github/tulip-control/tulip-control/jobs/772690567#L2031) an error about the SHA-256 hash.
On the branch `ci_update_download` I changed `tulip`'s CI configuration file `.travis.yml` to download the file https://github.com/tulip-control/gr1c/releases/download/v0.13.0/gr1c-0.13.0-Linux_x86-64.tar.gz (and in doing so I also bumped the version of `gr1c` used in the tests from version [0.11.0](https://github.com/tulip-control/gr1c/releases/tag/v0.11.0) to version [0.13.0](https://github.com/tulip-control/gr1c/releases/tag/v0.13.0)). This change happened in `tulip` commit https://github.com/tulip-control/tulip-control/commit/f63d882b7c42da816d233d250c6453c67c644de7.
Using this change, the [CI raised an error](https://travis-ci.org/github/tulip-control/tulip-control/jobs/772877421#L2833) that seems to relate to the version of [GLIBC](https://en.wikipedia.org/wiki/GNU_C_Library):
> ValueError: invalid literal for int() with base 10: "gr1c: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by gr1c)"
The requirement of GLIBC 2.29 is confirmed using `readelf -s gr1c-0.13.0-Linux_x86-64/gr1c`, which starts with:
```
Symbol table '.dynsym' contains 63 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND free@GLIBC_2.2.5 (2)
2: 0000000000000000 0 FUNC GLOBAL DEFAULT UND log2@GLIBC_2.29 (3)
```
(On macOS, [Readelf](https://en.wikipedia.org/wiki/Readelf) is available as `greadelf` by installing the [MacPorts](https://www.macports.org) package [`binutils`](https://ports.macports.org/port/binutils/summary); on Linux, `ldd gr1c-0.13.0-Linux_x86-64/gr1c` works too.)
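Extracting the GLIBC symbol versions a binary requires from `readelf` output can likewise be scripted; a sketch (the function name is illustrative, and the sample rows are taken from the symbol table above):

```python
import re

def glibc_versions(readelf_output):
    """Return the GLIBC symbol versions referenced, oldest first."""
    found = set(re.findall(r"GLIBC_(\d+(?:\.\d+)+)", readelf_output))
    # Sort numerically by version components, not lexicographically.
    return sorted(found, key=lambda v: tuple(int(x) for x in v.split(".")))

# Sample rows copied from the `readelf -s gr1c` symbol table above.
sample = """
1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND free@GLIBC_2.2.5 (2)
2: 0000000000000000 0 FUNC GLOBAL DEFAULT UND log2@GLIBC_2.29 (3)
"""
print(glibc_versions(sample))
# → ['2.2.5', '2.29']
```

The newest entry in the result is the minimum glibc version the binary needs at load time, which explains the `GLIBC_2.29' not found` error on `bionic`.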
So I changed the OS for the CI run to `focal` (Ubuntu 20.04), from `bionic` (Ubuntu 18.04). This [raised another error](https://travis-ci.org/github/tulip-control/tulip-control/jobs/772881862#L266):
> The command "sudo -E apt-get -yq --no-install-suggests --no-install-recommends $(travis_apt_get_options) install gfortran libatlas-base-dev liblapack-dev libgmp-dev libmpfr-dev graphviz libglpk-dev libboost-dev libboost-filesystem-dev libboost-program-options-dev libboost-regex-dev libboost-test-dev libeigen3-dev z3 libz3-dev python-z3 libhwloc-dev" failed and exited with 100 during .
This error is because the `apt` package [`python-z3`](https://packages.ubuntu.com/bionic/python-z3), which is available on `bionic`, is not available on `focal` (from the default package repository). Instead, the corresponding package on `focal` is [`python3-z3`](https://packages.ubuntu.com/focal/python3-z3). So I changed the CI configuration to install `python3-z3`.
(`tulip` now supports only Python 3, so installing `python3-z3` suffices. Even if Python 2 were still supported, and thus tested in CI, `python3-z3` would be required only for testing the interface to `stormpy`. The package `stormpy` requires Python 3, so the tests for that interface were run only on Python 3. Thus, it would have sufficed to conditionally install `python3-z3` only if the CI run was for Python 3. In any case, this conditional installation is not needed now.)
After these changes, and an update to fetch an archive of a more recent commit of `slugs` (which contains Python scripts updated to Python 3), the error from the `storm` binary remains, which is discussed above.
|
1.0
|
updating the binaries used in CI tests for the dependency `storm` - On the branch `ci_update_download` I have updated the CI configuration and dependencies (this was motivated by `bintray.com` being discontinued, as explained in a section below). The only remaining error:
https://travis-ci.com/github/tulip-control/tulip-control/jobs/510638673#L15866
reads:
```
File "/home/travis/build/tulip-control/tulip-control/tests/stormpy_interface_test.py", line 25, in <module>
from tulip.interfaces import stormpy as stormpy_int
File "/home/travis/virtualenv/python3.9.1/lib/python3.9/site-packages/tulip/interfaces/stormpy.py", line 39, in <module>
import stormpy
File "/home/travis/build/tulip-control/tulip-control/stormpy-1.6.2/lib/stormpy/__init__.py", line 6, in <module>
from . import core
ImportError: libboost_filesystem.so.1.65.1: cannot open shared object file: No such file or directory
```
This error appears to be from [`storm`](https://github.com/moves-rwth/storm), to which [`stormpy`](https://github.com/moves-rwth/stormpy) interfaces. `stormpy` is built on the CI run:
https://github.com/tulip-control/tulip-control/blob/ece84cdccb4deb4cb14b144df7e5b7f5697eb869/extern/get-stormpy.sh#L79
so I would expect that it links (if it links at all) to the version of `libboost_filesystem.so` that is present on `focal`, and thus would not raise the above error. (`stormpy` is built using [`pybind11`](https://github.com/pybind/pybind11), and the [`core`](https://github.com/moves-rwth/stormpy/tree/1.6.2/src/core) mentioned in the traceback above is a C++ extension that is [built with](https://github.com/moves-rwth/stormpy/blob/1.6.2/src/mod_core.cpp#L14) `pybind11`.) That the error is from `storm` is confirmed by attempting to run
[`storm --version`](https://www.stormchecker.org/documentation/usage/running-storm.html#first-steps)
in the CI run, which [raises](https://travis-ci.com/github/tulip-control/tulip-control/jobs/510660367#L15483):
```
./storm/build/bin/storm: error while loading shared libraries: libboost_system.so.1.65.1: cannot open shared object file: No such file or directory
```
(`libboost_filesystem.so.1.65.1` was the dependency in the traceback above, but that traceback came from `stormpy`, which is linked to `storm` when built; the libraries loaded via `stormpy`, and their order, can differ from those loaded when calling `storm --version` directly.)
Indeed, the dependency on `libboost_filesystem` is confirmed as follows after unpacking the archive downloaded from https://sourceforge.net/projects/cudd-mirror/files/storm.tar.xz/download and `cd`-ing into it:
```
> pwd
.../storm/build/lib
> strings * | ag libboost_filesystem
libboost_filesystem.so.1.65.1
> ag --search-binary libboost_filesystem
Binary file libstorm.so matches.
```
(Using [`strings`](https://en.wikipedia.org/wiki/Strings_(Unix)) and [`ag`](https://github.com/ggreer/the_silver_searcher).)
(Boost is a [requirement of `storm`](https://www.stormchecker.org/documentation/obtain-storm/dependencies.html#boost).)
Also, the error does not seem to be from `carl`, a dependency of `storm`, because [`ldd carl/libcarl.so.14.20` prints in the CI run](https://travis-ci.com/github/tulip-control/tulip-control/jobs/510660367#L2586-L2597):
```
linux-vdso.so.1 (0x00007ffe28fa5000)
libgmpxx.so.4 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libgmpxx.so.4 (0x00007f6708d80000)
libgmp.so.10 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libgmp.so.10 (0x00007f6708b0a000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f6708af5000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6708ad2000)
libcln.so.6 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libcln.so.6 (0x00007f670875b000)
libginac.so.6 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libginac.so.6 (0x00007f67082ba000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f67080d9000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f6707f8a000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f6707f6f000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6707d7d000)
/lib64/ld-linux-x86-64.so.2 (0x00007f670924a000)
```
whereas [`ldd storm/build/lib/libstorm.so` prints in the CI run](https://travis-ci.com/github/tulip-control/tulip-control/jobs/510660367#L15461-L15482):
```
linux-vdso.so.1 (0x00007ffd11d93000)
libcarl.so.14.20 => /home/travis/build/tulip-control/tulip-control/carl/libcarl.so.14.20 (0x00007f5bb1a68000)
libginac.so.6 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libginac.so.6 (0x00007f5bb15c9000)
libcln.so.6 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libcln.so.6 (0x00007f5bb1252000)
libgmpxx.so.4 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libgmpxx.so.4 (0x00007f5bb104b000)
libgmp.so.10 => /home/travis/build/tulip-control/tulip-control/carl/resources/lib/libgmp.so.10 (0x00007f5bb0dd3000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5bb0da1000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f5bb0bc0000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5bb0a71000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f5bb0a56000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5bb0864000)
/lib64/ld-linux-x86-64.so.2 (0x00007f5bb3d42000)
libboost_filesystem.so.1.65.1 => not found
libboost_system.so.1.65.1 => not found
libz3.so.4 => /lib/x86_64-linux-gnu/libz3.so.4 (0x00007f5baf2c2000)
libglpk.so.40 => /lib/x86_64-linux-gnu/libglpk.so.40 (0x00007f5baefe3000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f5baefdd000)
libcolamd.so.2 => /lib/x86_64-linux-gnu/libcolamd.so.2 (0x00007f5baefd2000)
libamd.so.2 => /lib/x86_64-linux-gnu/libamd.so.2 (0x00007f5baefc7000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f5baefab000)
libltdl.so.7 => /lib/x86_64-linux-gnu/libltdl.so.7 (0x00007f5baefa0000)
libsuitesparseconfig.so.5 => /lib/x86_64-linux-gnu/libsuitesparseconfig.so.5 (0x00007f5baef9b000)
```
In particular, notice in the `ldd` output for `libstorm.so` the lines:
```
libboost_filesystem.so.1.65.1 => not found
libboost_system.so.1.65.1 => not found
```
So the above error is raised because the binary of `storm` has been linked to `libboost_filesystem.so.1.65.1`, which [is present on Ubuntu `bionic`](https://packages.ubuntu.com/bionic/libboost-filesystem-dev), but absent on Ubuntu `focal`. On Ubuntu `focal`, [`libboost-all-dev (1.71.0.0ubuntu2)` is available](https://packages.ubuntu.com/focal/libboost-filesystem-dev).
Currently, binaries for `storm` are downloaded from https://sourceforge.net/projects/cudd-mirror/files/storm.tar.xz/download (also for https://sourceforge.net/projects/cudd-mirror/files/carl.tar.xz/download, though I do not know whether that program will run on `focal`--from the first `ldd` output above it seems that it might).
Approaches to consider:
- updating the binaries at: https://sourceforge.net/projects/cudd-mirror/files/
- uploading new binaries as "release" assets at: https://github.com/tulip-control/data. The `data` repository content would be minimal, simply describing textually what version of the executables are built using which script from the `tulip-control` repository. This would avoid committing binaries in `git` (even though the purpose of the `data` repository could be regarded as a place for binaries, among other kinds of files).
It will be some time before I will have access to a machine where I can build newer versions of these binaries suitable for use on Travis CI (the CI builds will fail until the issue is addressed). In any case, the changes from branch `ci_update_download` could be merged now into the mainline branch of `tulip`. Then the remaining update for the CI tests to pass would be to change the `storm` binaries.
## About caching binaries of dependencies on Travis CI
Caching on Travis CI the results of building `stormpy`-related packages has been [considered before](https://github.com/tulip-control/tulip-control/pull/237#issuecomment-703171430). However, building these programs takes hours (https://github.com/tulip-control/tulip-control/pull/237#issuecomment-703168131, https://github.com/tulip-control/tulip-control/pull/237#issuecomment-720674642), so I do not know how feasible it is to build the binaries on Travis CI.
An issue with such builds on Travis CI would be [timeouts](https://docs.travis-ci.com/user/customizing-the-build/#build-timeouts) (10 minutes if no output is produced by a CI instance, and 50 minutes overall for each CI instance). It appears that [timeouts can be extended](https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received) with `travis_wait`. However, `travis_wait` changes the timeout related to producing output, not the overall timeout. I do not know whether there is any way to change the overall timeout.
In addition, Travis CI [caches expire after 45 days](https://docs.travis-ci.com/user/caching/#caches-expiration) (for `travis-ci.com`). So building caches of executables on Travis CI would not avoid periodic rebuilds.
## Changes to CI configuration on branch `ci_update_download`
In the CI environment where `tulip` is tested, `gr1c` was downloaded from Bintray:
https://github.com/tulip-control/tulip-control/blob/bb004422b575dccea8d19c33acfeb04b37c62a5a/.travis.yml#L73
[Bintray](https://bintray.com) [shut down on May 1st, 2021](https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocenter-and-chartcenter/). The CI setup script [raises](https://travis-ci.org/github/tulip-control/tulip-control/jobs/772690567#L2031) an error about the SHA-256 hash.
On the branch `ci_update_download` I changed `tulip`'s CI configuration file `.travis.yml` to download the file https://github.com/tulip-control/gr1c/releases/download/v0.13.0/gr1c-0.13.0-Linux_x86-64.tar.gz (and in doing so I also bumped the version of `gr1c` used in the tests from version [0.11.0](https://github.com/tulip-control/gr1c/releases/tag/v0.11.0) to version [0.13.0](https://github.com/tulip-control/gr1c/releases/tag/v0.13.0)). This change happened in `tulip` commit https://github.com/tulip-control/tulip-control/commit/f63d882b7c42da816d233d250c6453c67c644de7.
Using this change, the [CI raised an error](https://travis-ci.org/github/tulip-control/tulip-control/jobs/772877421#L2833) that seems to relate to the version of [GLIBC](https://en.wikipedia.org/wiki/GNU_C_Library):
> ValueError: invalid literal for int() with base 10: "gr1c: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by gr1c)"
The requirement of GLIBC 2.29 is confirmed using `readelf -s gr1c-0.13.0-Linux_x86-64/gr1c`, which starts with:
```
Symbol table '.dynsym' contains 63 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND free@GLIBC_2.2.5 (2)
2: 0000000000000000 0 FUNC GLOBAL DEFAULT UND log2@GLIBC_2.29 (3)
```
(On macOS, [Readelf](https://en.wikipedia.org/wiki/Readelf) is available as `greadelf` by installing the [MacPorts](https://www.macports.org) package [`binutils`](https://ports.macports.org/port/binutils/summary); on Linux, `ldd gr1c-0.13.0-Linux_x86-64/gr1c` works too.)
So I changed the OS for the CI run to `focal` (Ubuntu 20.04), from `bionic` (Ubuntu 18.04). This [raised another error](https://travis-ci.org/github/tulip-control/tulip-control/jobs/772881862#L266):
> The command "sudo -E apt-get -yq --no-install-suggests --no-install-recommends $(travis_apt_get_options) install gfortran libatlas-base-dev liblapack-dev libgmp-dev libmpfr-dev graphviz libglpk-dev libboost-dev libboost-filesystem-dev libboost-program-options-dev libboost-regex-dev libboost-test-dev libeigen3-dev z3 libz3-dev python-z3 libhwloc-dev" failed and exited with 100 during .
This error is because the `apt` package [`python-z3`](https://packages.ubuntu.com/bionic/python-z3), which is available on `bionic`, is not available on `focal` (from the default package repository). Instead, the corresponding package on `focal` is [`python3-z3`](https://packages.ubuntu.com/focal/python3-z3). So I changed the CI configuration to install `python3-z3`.
(`tulip` now supports only Python 3, so installing `python3-z3` suffices. Even if Python 2 were still supported, and thus tested in CI, `python3-z3` would be required only for testing the interface to `stormpy`. The package `stormpy` requires Python 3, so the tests for that interface were run only on Python 3. Thus, it would have sufficed to conditionally install `python3-z3` only if the CI run was for Python 3. In any case, this conditional installation is not needed now.)
After these changes, and an update to fetch an archive of a more recent commit of `slugs` (which contains Python scripts updated to Python 3), the error from the `storm` binary remains, which is discussed above.
|
test
|
updating the binaries used in ci tests for the dependency storm on the branch ci update download i have updated the ci configuration and dependencies this was motivated by bintray com being discontinued as explained in a section below the only remaining error reads file home travis build tulip control tulip control tests stormpy interface test py line in from tulip interfaces import stormpy as stormpy int file home travis virtualenv lib site packages tulip interfaces stormpy py line in import stormpy file home travis build tulip control tulip control stormpy lib stormpy init py line in from import core importerror libboost filesystem so cannot open shared object file no such file or directory this error appears to be from to which interfaces stormpy is built on the ci run so i would expect that it links if it links at all to the version of libboost filesystem so that is present on focal and thus not raise the above error stormpy is built using and the mentioned in the traceback above is a c extension that is that the error is from storm is confirmed by attempting to run in the ci run which storm build bin storm error while loading shared libraries libboost system so cannot open shared object file no such file or directory libboost filesystem so was the dependence in the traceback from above but that was from stormpy which is linked to storm when built so which libraries are loaded from stormpy and in which order can differ from which libraries are loaded and in which order when calling storm version indeed the dependence on libboost filesystem from above is confirmed as follows after unpacking the archive downloaded from and cd inside it pwd storm build lib strings ag libboost filesystem libboost filesystem so ag search binary libboost filesystem binary file libstorm so matches using and boost is a also the error does not seem to be from the dependency of storm because linux vdso so libgmpxx so home travis build tulip control tulip control carl resources lib 
libgmpxx so libgmp so home travis build tulip control tulip control carl resources lib libgmp so libdl so lib linux gnu libdl so libpthread so lib linux gnu libpthread so libcln so home travis build tulip control tulip control carl resources lib libcln so libginac so home travis build tulip control tulip control carl resources lib libginac so libstdc so lib linux gnu libstdc so libm so lib linux gnu libm so libgcc s so lib linux gnu libgcc s so libc so lib linux gnu libc so ld linux so whereas linux vdso so libcarl so home travis build tulip control tulip control carl libcarl so libginac so home travis build tulip control tulip control carl resources lib libginac so libcln so home travis build tulip control tulip control carl resources lib libcln so libgmpxx so home travis build tulip control tulip control carl resources lib libgmpxx so libgmp so home travis build tulip control tulip control carl resources lib libgmp so libpthread so lib linux gnu libpthread so libstdc so lib linux gnu libstdc so libm so lib linux gnu libm so libgcc s so lib linux gnu libgcc s so libc so lib linux gnu libc so ld linux so libboost filesystem so not found libboost system so not found so lib linux gnu so libglpk so lib linux gnu libglpk so libdl so lib linux gnu libdl so libcolamd so lib linux gnu libcolamd so libamd so lib linux gnu libamd so libz so lib linux gnu libz so libltdl so lib linux gnu libltdl so libsuitesparseconfig so lib linux gnu libsuitesparseconfig so in particular notice in the ldd output for libstorm so the lines libboost filesystem so not found libboost system so not found so the above error is raised because the binary of storm has been linked to libboost filesystem so which but absent on ubuntu focal on ubuntu focal currently binaries for storm are downloaded from also for though i do not know whether that program will run on focal from the first ldd output above it seems that it might approaches to consider updating the binaries at uploading new binaries as 
release assets at the data repository content would be minimal simply describing textually what version of the executables are built using which script from the tulip control repository this would avoid committing binaries in git even though the purpose of the data repository could be regarded as a place for binaries among other kinds of files it will be some time before i will have access to a machine where i can build newer versions of these binaries suitable for use on travis ci the ci builds will fail until the issue is addressed in any case the changes from branch ci update download could be merged now into the mainline branch of tulip then the remaining update for the ci tests to pass would be to change the storm binaries about caching binaries of dependencies on travis ci caching on travis ci the results of building stormpy related packages has been however building these programs takes hours so i do not know how feasible it is to build the binaries on travis ci an issue with such builds on travis ci would be minutes if no output is produced by a ci instance and minutes overall for each ci instance it appears that with travis wait however travis wait changes the timeout related to producing output not the overall timeout i do not know whether there is any way to change the overall timeout in addition travis ci for travis ci com so building caches of executables on travis ci would not avoid periodic rebuilds changes to ci configuration on branch ci update download in the ci environment where tulip is tested was downloaded from bintray the ci setup script an error about the sha hash on the branch ci update download i changed tulip s ci configuration file travis yml to download the file and in doing so i also bumped the vesion of used in the tests from version to version this change happened in tulip commit using this change the that seems to relate to the version of valueerror invalid literal for int with base lib linux gnu libm so version glibc not found 
required by the requirement of glibc is confirmed using readelf s linux which starts with symbol table dynsym contains entries num value size type bind vis ndx name notype local default und func global default und free glibc func global default und glibc greadelf on macos is which can be installed on macos by installing the package on linux ldd linux is possible too so i changed the os for the ci run to focal ubuntu from bionic ubuntu this the command sudo e apt get yq no install suggests no install recommends travis apt get options install gfortran libatlas base dev liblapack dev libgmp dev libmpfr dev graphviz libglpk dev libboost dev libboost filesystem dev libboost program options dev libboost regex dev libboost test dev dev dev python libhwloc dev failed and exited with during this error is because the apt package which is available on bionic is not available on focal from the default package repository instead the corresponding package on focal is so i changed the ci configuration to install tulip now supports only python so installing suffices even if python was still supported and thus tested in ci is required only for testing the interface to stormpy the package stormpy requires python so the tests for that interface were run only on python thus it would have sufficed to conditionally intall only if the ci run was for python in any case this conditional installation is not needed now after these changes and an update to fetch an archive of a more recent commit of slugs which contains python scripts updated to python the error from the storm binary remains which is discussed above
| 1
|
84,485
| 7,923,456,466
|
IssuesEvent
|
2018-07-05 14:07:41
|
tendermint/tendermint
|
https://api.github.com/repos/tendermint/tendermint
|
closed
|
test_apps failure on Jenkins
|
bug test
|
Bug report
tendermint greg/persistent-script-fix branch (error unrelated to changes). Branched from develop.
Command:
```
make test_apps
```
Result: (When Starting counter_over grpc)
```
panic: message/group field types.Validator:bytes without pointer
```
Full details at [https://ci.interblock.io/job/01.Start.Suite2/20/console](https://ci.interblock.io/job/03.Test.Apps/45/console) , built from [https://ci.interblock.io/job/01.Start.Suite2/20/console](https://ci.interblock.io/job/01.Start.Suite2/20/console).
Is this maybe because of the go-amino changes?
|
1.0
|
test_apps failure on Jenkins - Bug report
tendermint greg/persistent-script-fix branch (error unrelated to changes). Branched from develop.
Command:
```
make test_apps
```
Result: (When Starting counter_over grpc)
```
panic: message/group field types.Validator:bytes without pointer
```
Full details at [https://ci.interblock.io/job/01.Start.Suite2/20/console](https://ci.interblock.io/job/03.Test.Apps/45/console) , built from [https://ci.interblock.io/job/01.Start.Suite2/20/console](https://ci.interblock.io/job/01.Start.Suite2/20/console).
Is this maybe because of the go-amino changes?
|
test
|
test apps failure on jenkins bug report tendermint greg persistent script fix branch error unrelated to changes branched from develop command make test apps result when starting counter over grpc panic message group field types validator bytes without pointer full details at built from is this maybe because of the go amino changes
| 1
|
54,046
| 23,135,567,950
|
IssuesEvent
|
2022-07-28 14:04:41
|
agera-edc/MinimumViableDataspace
|
https://api.github.com/repos/agera-edc/MinimumViableDataspace
|
closed
|
Registration Service - Participant onboarding- Verifies participant JWS
|
story registration-service
|
Feature agera-edc/MinimumViableDataspaceFork#24
After https://github.com/agera-edc/DataSpaceConnector/issues/311
After https://github.com/agera-edc/DataSpaceConnector/issues/269
## Description
When a participant interacts with the Registration service, it should send a signed JWS, and the registry service should verify that JWS. To do so, the registry service should first fetch the participant's DID document from the DID URL given in the JWS and use the public key from that DID to verify the signed JWS.
spec : https://github.com/Metaform/mvd/blob/main/registration-service/registration-service-tech-spec.md
Note that a fix for https://github.com/eclipse-dataspaceconnector/DataSpaceConnector/issues/1176 is being developed, in order to include audience check
## Acceptance Criteria
- [x] Update ADR about enrollment endpoint if required. (#209)
- [x] Participant should send a signed JWS when interacting with registration service.
- [x] Registration service verifies the participant's JWS using the participant's public key.
- [x] Test coverage.
- [x] Uses updated EDC code after fixing https://github.com/eclipse-dataspaceconnector/DataSpaceConnector/issues/1176
## Tasks
- [x] Update ADR.
- [x] Participant create signed JWS. It contains DID document url of participant.
- [x] Participant send signed JWS to registration service during any interaction.
- [x] Registration service able to verify participant's JWS.
- [x] Test coverage
|
1.0
|
Registration Service - Participant onboarding- Verifies participant JWS - Feature agera-edc/MinimumViableDataspaceFork#24
After https://github.com/agera-edc/DataSpaceConnector/issues/311
After https://github.com/agera-edc/DataSpaceConnector/issues/269
## Description
When a participant interacts with the Registration service, it should send a signed JWS, and the registry service should verify that JWS. To do so, the registry service should first fetch the participant's DID document from the DID URL given in the JWS and use the public key from that DID to verify the signed JWS.
spec : https://github.com/Metaform/mvd/blob/main/registration-service/registration-service-tech-spec.md
Note that a fix for https://github.com/eclipse-dataspaceconnector/DataSpaceConnector/issues/1176 is being developed, in order to include audience check
## Acceptance Criteria
- [x] Update ADR about enrollment endpoint if required. (#209)
- [x] Participant should send a signed JWS when interacting with registration service.
- [x] Registration service verifies the participant's JWS using the participant's public key.
- [x] Test coverage.
- [x] Uses updated EDC code after fixing https://github.com/eclipse-dataspaceconnector/DataSpaceConnector/issues/1176
## Tasks
- [x] Update ADR.
- [x] Participant create signed JWS. It contains DID document url of participant.
- [x] Participant send signed JWS to registration service during any interaction.
- [x] Registration service able to verify participant's JWS.
- [x] Test coverage
|
non_test
|
registration service participant onboarding verifies participant jws feature agera edc minimumviabledataspacefork after after description when a participant interacts with registration service it should send a signed jws to registry service and registry service should verify participant jws in order to do so registry service should first get participant did document from did url defined in jws and use public key from this did to verify signed jws spec note that a fix for is being developed in order to include audience check acceptance criteria update adr about enrollment endpoint if required participant should send a signed jws when interacting with registration service registration service verifies participant s jws using it s participant public key test coverage uses updated edc code after fixing tasks update adr participant create signed jws it contains did document url of participant participant send signed jws to registration service during any interaction registration service able to verify participant s jws test coverage
| 0
|
8,892
| 7,474,055,237
|
IssuesEvent
|
2018-04-03 17:08:46
|
roundcube/roundcubemail
|
https://api.github.com/repos/roundcube/roundcubemail
|
closed
|
MX injection and type juggling vulnerabilities
|
C: Security bug
|
Hello,
I'm here to report two vulnerabilities I have found while doing research on Roundcube 1.3.4, which are also present in your last release [1.3.5](https://github.com/roundcube/roundcubemail/releases/download/1.3.5/roundcubemail-1.3.5-complete.tar.gz).
These two bugs are **not** exploitable in the wild, at least to my current knowledge; nonetheless, fixing them should be a priority because they could be chained with other minor issues and then become exploitable in a realistic, attacker-efficient way. Plus, with the ongoing growth of this project you may introduce features that could be used to leverage them.
Since the bugs are not so easy to spot, especially the MX injection, I'll now try to explain myself in the clearest way possible; the code I'll refer to is the 1.3.5 release. I'll conclude with a brief summary.
**MX Injection**
On function **archive.php:move_messages()** we have:
<img width="817" alt="schermata 2018-03-28 alle 05 13 23" src="https://user-images.githubusercontent.com/8234144/38006857-12628354-3247-11e8-8cb6-8c1f2ef503ec.png">
A little bit of context:
- rcmail::get_uids, inside the foreach loop, is responsible for getting $mbox from $uids, which is passed via POST (line 132) (though passing them by GET works too); if provided with a format like ID-MBOX, it will split the value into $uids = array(ID) and $mbox = "MBOX". Fine.
- The first IF and ELSE IF (lines 153 and 157) set our prerequisites for exploiting the bug: the archive folder has to be set, and archive_type must be set and be different from "folder". That's because the function move_messages_worker() (line 168) does its job right: it calls archive.php:move_messages_worker(), which calls rcube_imap.php:move_message(), which calls rcube_storage.php:parse_uids(), which sanitizes $uids.
The problem lies in that else branch (archive.php line 170):
- **line 176** _archive.php:move_messages()_ calls fetch_headers($mbox, $uids);
- **line 1235** _rcube_imap.php:fetch_headers()_ calls fetchHeaders($folder,$msgs) where $folder is $mbox and $msgs is $uids
- **line 2600** _rcube_imap_generic.php:fetchHeaders()_ calls fetch($mailbox, $message_set, $is_uid, $query_items);
- rcube_imap_generic.php:fetch() it's a core function used everywhere for doing is job: fetching things.
<img width="893" alt="schermata 2018-03-28 alle 05 36 21" src="https://user-images.githubusercontent.com/8234144/38007501-0125cdfa-324a-11e8-9bca-b8447d74679e.png">
On **line 2360** $mailbox is checked and the function returns false, so the attacker can't exploit that; but no checks are done on $message_set, which is still our user-controlled input and ends up - **line 2369** - in the command sent to the MX server, causing an MX injection.
**PHP Type Juggling**
This one is far easier to spot and more straightforward. In few words: on _rcube.php:check_request()_ we have
<img width="950" alt="schermata 2018-03-28 alle 05 50 54" src="https://user-images.githubusercontent.com/8234144/38007921-2ee2d57e-324c-11e8-81f2-ef83d888583e.png">
as you can see, every check is performed with the == operator, which is a loose, not strict, operator.
This is not exploitable right now and is just a theoretical bug, because you only use HTTP parameters, which are untyped strings; but if you introduce JSON, then this will become easily exploitable and will allow a CSRF bypass.
`php > var_dump("84829randomstring-csrfs9499" == TRUE);
bool(true)
php > var_dump("84829randomstring-csrfs9499" === TRUE);
bool(false)
`
Nonetheless as I said in my introduction you should fix this: what if I opened a "JSON for post parameters" request as a feature request?
I hope I made myself clear enough; if you need more explanation, I am willing to help. When you fix this I'd like to write and publish a technical blog post about my findings (the MX injection is quite hidden and nice, I think) - if that's okay with you.
PS: I think this issue should be private; I'm not familiar with GitHub, but if that's possible maybe we should do that.
|
True
|
MX injection and type juggling vulnerabilities - Hello,
I'm here to report two vulnerabilities I have found while doing research on Roundcube 1.3.4, which are also present in your last release [1.3.5](https://github.com/roundcube/roundcubemail/releases/download/1.3.5/roundcubemail-1.3.5-complete.tar.gz).
These two bugs are **not** exploitable in the wild, at least to my current knowledge; nonetheless, fixing them should be a priority because they could be chained with other minor issues and then become exploitable in a realistic, attacker-efficient way. Plus, with the ongoing growth of this project you may introduce features that could be used to leverage them.
Since the bugs are not so easy to spot, especially the MX injection, I'll now try to explain myself in the clearest way possible; the code I'll refer to is the 1.3.5 release. I'll conclude with a brief summary.
**MX Injection**
On function **archive.php:move_messages()** we have:
<img width="817" alt="schermata 2018-03-28 alle 05 13 23" src="https://user-images.githubusercontent.com/8234144/38006857-12628354-3247-11e8-8cb6-8c1f2ef503ec.png">
A little bit of context:
- rcmail::get_uids, inside the foreach loop, is responsible for getting $mbox from $uids, which is passed via POST (line 132) (though passing them by GET works too); if provided with a format like ID-MBOX, it will split the value into $uids = array(ID) and $mbox = "MBOX". Fine.
- The first IF and ELSE IF (lines 153 and 157) set our prerequisites for exploiting the bug: the archive folder has to be set, and archive_type must be set and be different from "folder". That's because the function move_messages_worker() (line 168) does its job right: it calls archive.php:move_messages_worker(), which calls rcube_imap.php:move_message(), which calls rcube_storage.php:parse_uids(), which sanitizes $uids.
The problem lies in that else branch (archive.php line 170):
- **line 176** _archive.php:move_messages()_ calls fetch_headers($mbox, $uids);
- **line 1235** _rcube_imap.php:fetch_headers()_ calls fetchHeaders($folder,$msgs) where $folder is $mbox and $msgs is $uids
- **line 2600** _rcube_imap_generic.php:fetchHeaders()_ calls fetch($mailbox, $message_set, $is_uid, $query_items);
- rcube_imap_generic.php:fetch() it's a core function used everywhere for doing is job: fetching things.
<img width="893" alt="schermata 2018-03-28 alle 05 36 21" src="https://user-images.githubusercontent.com/8234144/38007501-0125cdfa-324a-11e8-9bca-b8447d74679e.png">
On **line 2360** $mailbox is checked and the function returns false, so the attacker can't exploit that; but no checks are done on $message_set, which is still our user-controlled input and ends up - **line 2369** - in the command sent to the MX server, causing an MX injection.
**PHP Type Juggling**
This one is far easier to spot and more straightforward. In few words: on _rcube.php:check_request()_ we have
<img width="950" alt="schermata 2018-03-28 alle 05 50 54" src="https://user-images.githubusercontent.com/8234144/38007921-2ee2d57e-324c-11e8-81f2-ef83d888583e.png">
as you can see, every check is performed with the == operator, which is a loose, not strict, operator.
This is not exploitable right now and is just a theoretical bug, because you only use HTTP parameters, which are untyped strings; but if you introduce JSON, then this will become easily exploitable and will allow a CSRF bypass.
`php > var_dump("84829randomstring-csrfs9499" == TRUE);
bool(true)
php > var_dump("84829randomstring-csrfs9499" === TRUE);
bool(false)
`
Nonetheless as I said in my introduction you should fix this: what if I opened a "JSON for post parameters" request as a feature request?
I hope I made myself clear enough; if you need more explanation, I am willing to help. When you fix this I'd like to write and publish a technical blog post about my findings (the MX injection is quite hidden and nice, I think) - if that's okay with you.
PS: I think this issue should be private; I'm not familiar with GitHub, but if that's possible maybe we should do that.
|
non_test
|
mx injection and type juggling vulnerabilities hello i m here to report two vulnerabilities i have found while doing research on roundcube which are also present in your last release this two bugs are not exploitable in the wild at least to my current knowledge nonetheless fixing them should be a priority of yours because they could be chained with other minor stuff and then become exploitable in a realistic attacker pov efficient way plus with the ongoing grow of this project you may introduce features that could be used to leverage this stuff since the bugs are not so easy to spot especially the mx injection i ll now try to explain myself in the clearest way possible the code i ll refer to it s the i ll conclude with a brief summary mx injection on function archive php move messages we have img width alt schermata alle src a little bit of context rcmail get uids inside the foreach cycle it s responsible to get mbox from uids which is passed via post line but anyway passing them by get will work too if provided with a format like id mbox it will split the thing and have uids array id and mbox mbox fine the first if and else if line and set our prerequisite to exploit the bug the archive folder has to be set and the archive type must be set and be different from folder that s because the function move messages worker line do his job right will call archive php move messages worker which will call rcube imap php move message which will call rcube storage php parse uids which sanitize uids the problem lies in that else branch archive php line line archive php move messages calls fetch headers mbox uids line rcube imap php fetch headers calls fetchheaders folder msgs where folder is mbox and msgs is uids line rcube imap generic php fetchheaders calls fetch mailbox message set is uid query items rcube imap generic php fetch it s a core function used everywhere for doing is job fetching things img width alt schermata alle src on line mailbox it s checked and the 
function returns false so the attacker can t exploit that but no check are done on message set which still is our user controlled input which will end in line the command to the mx server causing an mx injection php type juggling this is far more easy to spot and straightforward few words on rcube php check request we have img width alt schermata alle src as you can see every check it s performed just with the operator which is a loose not strict operator this is not exploitable right now and it s just a theorical bug because you just use http paramaters which are strings not typed but if you ll introduce json then this will become easily exploitable and will cause a csrf bypass php var dump true bool true php var dump true bool false nonetheless as i said in my introduction you should fix this what if i opened a json for post parameters request as a feature request i hope i made myself enough clear if you need more explanation i am willing to help when you fix this i d like to write and publish a technical blog post about my findings the mx injection it s quite hided and nice i think if that s okay with you ps i think this issue should be private not familiar with github if that s possible maybe we should do that
| 0
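The two fixes the report implies (sanitizing the message set before it reaches the IMAP conversation, and comparing request tokens strictly) can be sketched outside PHP as well. The Python snippet below is purely illustrative: the names `safe_message_set` and `token_ok` are mine, not Roundcube's, but they mirror what rcube_storage.php:parse_uids() and a strict, type-checked CSRF comparison would enforce.

```python
import hmac
import re

# An IMAP message set may contain only digits, ':', ',' and '*'
# (e.g. "1:5,7,9:*").  Anything else (spaces, CRLF, parentheses)
# could smuggle extra arguments or whole commands into the FETCH
# command, which is exactly the injection described above.
_MESSAGE_SET_RE = re.compile(r"^[0-9*][0-9,:*]*$")

def safe_message_set(uids: str) -> str:
    """Return the message set unchanged if well formed, else raise."""
    if not _MESSAGE_SET_RE.match(uids):
        raise ValueError("invalid IMAP message set: %r" % uids)
    return uids

def token_ok(expected: str, supplied) -> bool:
    """Strict CSRF-token check: correct type AND constant-time equality.

    A loose comparison (PHP's ==) is what makes the type-juggling
    bypass theoretically possible once non-string input can arrive.
    """
    return isinstance(supplied, str) and hmac.compare_digest(expected, supplied)
```

The type check runs before the comparison, so a boolean or number smuggled in via JSON fails cleanly instead of being coerced.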
|
14,737
| 3,420,348,704
|
IssuesEvent
|
2015-12-08 14:29:00
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
Implement a "configurable fields" mechanism in the task-timings service (/file/download_bp_timing)
|
active test _activiti _wf-base
|
- [x] 1) As in the /file/downloadTasksData service, add an optional parameter saFields, which may contain a list of fields and the formulas for computing them (separated by semicolons). Their results should be appended as extra fields of the resulting output table
- [x] 2) The format of each field element is as follows (for example):
"nAccepted=(nMinutes>&&sAssignedLogin!=null?1:0)"
where:
- nAccepted is the name of the field that will receive the final computed value
"nMinutes>&&bAssigned=true?1:0" is the formula that must be evaluated
- nMinutes is a variable holding the value of an already existing or previously computed field of the table being built
- sAssignedLogin is a variable holding a system value (in this case the login of the employee the user task is assigned to, described below)
- [x] 3) The formula must be evaluated with JS methods, as is already done for escalation (example):
Class: org.wf.dp.dniprorada.base.service.escalation.EscalationHelper
Method:
private boolean getResultOfCondition(Map<String, Object> jsonData,
- [x] 4) Create the system variables involved in the calculations:
- [x] 4.1) "sAssignedLogin" - must store the login of the employee assigned to the user task being processed
- [x] 4.2) "sID_UserTask" - must store the ID of the user task
- [x] 5) As a result, the saFields parameter may contain a value like:
"nAccepted=(nMinutes>&&sAssignedLogin!=null?1:0);nCount=(sID_UserTask='usertask1'?1:0);nRejected=(sSolution='reject1'?1:0)"
and this will append the following fields (columns) at the end of the table: nAccepted;nCount;nRejected
with the corresponding computed values
- [x] 6) Write up the documentation on the wiki
|
1.0
|
Implement a "configurable fields" mechanism in the task-timings service (/file/download_bp_timing) - - [x] 1) As in the /file/downloadTasksData service, add an optional parameter saFields, which may contain a list of fields and the formulas for computing them (separated by semicolons). Their results should be appended as extra fields of the resulting output table
- [x] 2) The format of each field element is as follows (for example):
"nAccepted=(nMinutes>&&sAssignedLogin!=null?1:0)"
where:
- nAccepted is the name of the field that will receive the final computed value
"nMinutes>&&bAssigned=true?1:0" is the formula that must be evaluated
- nMinutes is a variable holding the value of an already existing or previously computed field of the table being built
- sAssignedLogin is a variable holding a system value (in this case the login of the employee the user task is assigned to, described below)
- [x] 3) The formula must be evaluated with JS methods, as is already done for escalation (example):
Class: org.wf.dp.dniprorada.base.service.escalation.EscalationHelper
Method:
private boolean getResultOfCondition(Map<String, Object> jsonData,
- [x] 4) Create the system variables involved in the calculations:
- [x] 4.1) "sAssignedLogin" - must store the login of the employee assigned to the user task being processed
- [x] 4.2) "sID_UserTask" - must store the ID of the user task
- [x] 5) As a result, the saFields parameter may contain a value like:
"nAccepted=(nMinutes>&&sAssignedLogin!=null?1:0);nCount=(sID_UserTask='usertask1'?1:0);nRejected=(sSolution='reject1'?1:0)"
and this will append the following fields (columns) at the end of the table: nAccepted;nCount;nRejected
with the corresponding computed values
- [x] 6) Write up the documentation on the wiki
|
test
|
implement a configurable fields mechanism in the task timings service file download bp timing as in the file downloadtasksdata service add an optional parameter safields which may contain a list of fields and the formulas for computing them separated by semicolons their results should be appended as extra fields of the resulting output table the format of each field element is as follows for example naccepted nminutes sassignedlogin null where naccepted is the name of the field that will receive the final computed value nminutes bassigned true is the formula that must be evaluated nminutes is a variable holding the value of an already existing or previously computed field of the table being built sassignedlogin is a variable holding a system value in this case the login of the employee the user task is assigned to described below the formula must be evaluated with js methods as is already done for escalation example class org wf dp dniprorada base service escalation escalationhelper method private boolean getresultofcondition map jsondata create the system variables involved in the calculations sassignedlogin must store the login of the employee assigned to the user task being processed sid usertask must store the id of the user task as a result the safields parameter may contain a value like naccepted nminutes sassignedlogin null ncount sid usertask nrejected ssolution and this will append the following fields columns at the end of the table naccepted ncount nrejected with the corresponding computed values write up the documentation on the wiki
| 1
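The saFields mechanism above (extra columns computed per row from "name=(expression)" specs separated by semicolons) is easy to prototype. The sketch below is a hypothetical stand-in, not the project's code: it uses Python syntax for the expressions instead of the JS ternaries the issue shows, and a restricted eval() in place of the EscalationHelper-style JS evaluation, which a real implementation would need to sandbox properly.

```python
# Hypothetical sketch of the "configurable fields" idea: each spec is
# "name=(expression)"; specs are separated by ';' and evaluated per row,
# with the row's own fields visible as variables in the expression.
def add_computed_fields(rows, sa_fields):
    specs = []
    for item in filter(None, sa_fields.split(";")):
        name, expr = item.split("=", 1)  # split on the first '=' only
        specs.append((name.strip(), expr.strip()))
    out = []
    for row in rows:
        row = dict(row)  # do not mutate the caller's rows
        for name, expr in specs:
            # Expose only the row's variables to the expression; eval()
            # is unsafe in general, a real evaluator needs sandboxing.
            row[name] = int(eval(expr, {"__builtins__": {}}, row))
        out.append(row)
    return out

rows = [
    {"nMinutes": 90, "sAssignedLogin": "jdoe", "sID_UserTask": "usertask1"},
    {"nMinutes": 10, "sAssignedLogin": None, "sID_UserTask": "usertask2"},
]
result = add_computed_fields(
    rows,
    "nAccepted=(1 if nMinutes > 60 and sAssignedLogin is not None else 0);"
    "nCount=(1 if sID_UserTask == 'usertask1' else 0)",
)
```

Each computed column is appended to the row dict, mirroring how the issue's nAccepted/nCount/nRejected columns are appended at the end of the returned table.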
|
287,864
| 24,869,094,179
|
IssuesEvent
|
2022-10-27 14:03:25
|
opencollective/opencollective
|
https://api.github.com/repos/opencollective/opencollective
|
closed
|
Add tests for "sendMessage" mutation
|
complexity → simple api test bounty $200
|
We're [introducing](https://github.com/opencollective/opencollective-api/pull/8020) a new `sendMessage` mutation on our public API that is a port of the `sendMessageToCollective` mutation from our legacy API. This mutation did not have tests. We want to add some in `test/server/graphql/v2/mutation/AccountMutations.test.ts`:
```es6
describe('sendMessage', () => {
  it('sends the message by email', () => {});
  it('cannot inject code in the email (XSS)', () => {});
  it('returns an error if not authenticated', () => {});
  it('returns an error if collective cannot be contacted', () => {});
  it('returns an error if the feature is blocked for user', () => {});
  it('returns an error if the message is invalid', () => {});
});
```
|
1.0
|
Add tests for "sendMessage" mutation - We're [introducing](https://github.com/opencollective/opencollective-api/pull/8020) a new `sendMessage` mutation on our public API that is a portage of the `sendMessageToCollective` mutation from our legacy API. This mutation did not have tests. We want to add some in `test/server/graphql/v2/mutation/AccountMutations.test.ts`:
```es6
describe('sendMessage', () => {
  it('sends the message by email', () => {});
  it('cannot inject code in the email (XSS)', () => {});
  it('returns an error if not authenticated', () => {});
  it('returns an error if collective cannot be contacted', () => {});
  it('returns an error if the feature is blocked for user', () => {});
  it('returns an error if the message is invalid', () => {});
});
```
|
test
|
add tests for sendmessage mutation we re a new sendmessage mutation on our public api that is a portage of the sendmessagetocollective mutation from our legacy api this mutation did not have tests we want to add some in test server graphql mutation accountmutations test ts describe sendmessage it sends the message by email it cannot inject code in the email xss it returns an error if not authenticated it returns an error if collective cannot be contacted it returns an error if the feature is blocked for user it returns an error if the message is invalid
| 1
|
59,523
| 6,654,449,535
|
IssuesEvent
|
2017-09-29 12:53:12
|
openbmc/openbmc-test-automation
|
https://api.github.com/repos/openbmc/openbmc-test-automation
|
opened
|
REST image upload load testing
|
bug Test
|
- [x] Load file over network via REST
- [x] Continue the process for more iteration
- [x] Observe how the local client code and BMC REST sever behaves
|
1.0
|
REST image upload load testing - - [x] Load file over network via REST
- [x] Continue the process for more iteration
- [x] Observe how the local client code and BMC REST sever behaves
|
test
|
rest image upload load testing load file over network via rest continue the process for more iteration observe how the local client code and bmc rest sever behaves
| 1
|
167,866
| 6,348,057,774
|
IssuesEvent
|
2017-07-28 08:57:47
|
k0shk0sh/FastHub
|
https://api.github.com/repos/k0shk0sh/FastHub
|
closed
|
Viewing wiki doc of neovim gives error
|
Priority: Medium Status: Completed Type: Enhancement
|
**FastHub Version: 4.0.3**
Was just browsing the neovim wiki doc and clicking the section from introduction to users gives server communication error.
This error does not happen after the users section
|
1.0
|
Viewing wiki doc of neovim gives error - **FastHub Version: 4.0.3**
Was just browsing the neovim wiki doc and clicking the section from introduction to users gives server communication error.
This error does not happen after the users section
|
non_test
|
viewing wiki doc of neovim gives error fasthub version was just browsing the neovim wiki doc and clicking the section from introduction to users gives server communication error this error does not happen after the users section
| 0
|
703,136
| 24,147,560,514
|
IssuesEvent
|
2022-09-21 20:16:31
|
smcnab1/op-question-mark
|
https://api.github.com/repos/smcnab1/op-question-mark
|
closed
|
[BUG] Marquee text in dark mode
|
✔️Status: Confirmed 🐛Type: Bug 🏔Priority: High 👗For: Frontend
|
## **🐛Bug Report**
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
* Add text variable to marquee on home to keep with colour scheme in dark mode
---
**To Reproduce**
<!-- Steps to reproduce the error:
(e.g.:)
1. Use x argument / navigate to
2. Fill this information
3. Go to...
4. See error -->
<!-- Write the steps here (add or remove as many steps as needed)-->
1.
2.
3.
4.
---
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
*
---
**Screenshots**
<!-- If applicable, add screenshots or videos to help explain your problem. -->
---
**Desktop (please complete the following information):**
<!-- use all the applicable bulleted list element for this specific issue,
and remove all the bulleted list elements that are not relevant for this issue. -->
- OS:
- Browser
- Version
**Smartphone (please complete the following information):**
- Device:
- OS:
- Browser
- Version
---
**Additional context**
<!-- Add any other context or additional information about the problem here.-->
*
<!--📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛
Oh, hi there! 😄
To expedite issue processing please search open and closed issues before submitting a new one.
Please read our Rules of Conduct at this repository's `.github/CODE_OF_CONDUCT.md`
📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛-->
|
1.0
|
[BUG] Marquee text in dark mode - ## **🐛Bug Report**
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
* Add text variable to marquee on home to keep with colour scheme in dark mode
---
**To Reproduce**
<!-- Steps to reproduce the error:
(e.g.:)
1. Use x argument / navigate to
2. Fill this information
3. Go to...
4. See error -->
<!-- Write the steps here (add or remove as many steps as needed)-->
1.
2.
3.
4.
---
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
*
---
**Screenshots**
<!-- If applicable, add screenshots or videos to help explain your problem. -->
---
**Desktop (please complete the following information):**
<!-- use all the applicable bulleted list element for this specific issue,
and remove all the bulleted list elements that are not relevant for this issue. -->
- OS:
- Browser
- Version
**Smartphone (please complete the following information):**
- Device:
- OS:
- Browser
- Version
---
**Additional context**
<!-- Add any other context or additional information about the problem here.-->
*
<!--📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛
Oh, hi there! 😄
To expedite issue processing please search open and closed issues before submitting a new one.
Please read our Rules of Conduct at this repository's `.github/CODE_OF_CONDUCT.md`
📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛-->
|
non_test
|
marquee text in dark mode 🐛bug report describe the bug add text variable to marquee on home to keep with colour scheme in dark mode to reproduce steps to reproduce the error e g use x argument navigate to fill this information go to see error expected behavior screenshots desktop please complete the following information use all the applicable bulleted list element for this specific issue and remove all the bulleted list elements that are not relevant for this issue os browser version smartphone please complete the following information device os browser version additional context 📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛 oh hi there 😄 to expedite issue processing please search open and closed issues before submitting a new one please read our rules of conduct at this repository s github code of conduct md 📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛📛
| 0
|
102,215
| 31,862,200,594
|
IssuesEvent
|
2023-09-15 11:47:19
|
NixOS/nixpkgs
|
https://api.github.com/repos/NixOS/nixpkgs
|
closed
|
Build failure: katana
|
0.kind: build failure
|
### Steps To Reproduce
Steps to reproduce the behavior:
1. `nix build nixpkgs#katana`
2. observe that build exited with success code, but `result` directory is empty.
### Build log
```
katana> unpacking sources
katana> unpacking source archive /nix/store/rdivf3r4i8xigkbn3yvr4mjsqk5phmfa-source
katana> source root is source
katana> patching sources
katana> updateAutotoolsGnuConfigScriptsPhase
katana> configuring
katana> building
katana> Building subPackage ./cmd/katana
katana> package github.com/projectdiscovery/katana/cmd/katana
katana> imports github.com/projectdiscovery/katana/internal/runner
katana> imports github.com/projectdiscovery/katana/pkg/engine/hybrid
katana> imports github.com/projectdiscovery/katana/pkg/engine/common
katana> imports github.com/projectdiscovery/katana/pkg/engine/parser
katana> imports github.com/projectdiscovery/katana/pkg/utils
katana> imports github.com/BishopFox/jsluice
katana> imports github.com/smacker/go-tree-sitter/javascript: build constraints exclude all Go files in /build/source/vendor/github.com/smacker/go-tree-sitter/javascript
katana> running tests
katana> package github.com/projectdiscovery/katana/cmd/katana
katana> imports github.com/projectdiscovery/katana/internal/runner
katana> imports github.com/projectdiscovery/katana/pkg/engine/hybrid
katana> imports github.com/projectdiscovery/katana/pkg/engine/common
katana> imports github.com/projectdiscovery/katana/pkg/engine/parser
katana> imports github.com/projectdiscovery/katana/pkg/utils
katana> imports github.com/BishopFox/jsluice
katana> imports github.com/smacker/go-tree-sitter/javascript: build constraints exclude all Go files in /build/source/vendor/github.com/smacker/go-tree-sitter/javascript
katana> installing
katana> post-installation fixup
katana> shrinking RPATHs of ELF executables and libraries in /nix/store/4767aldkqi90586wahb3i2v508dz3379-katana-1.0.3
katana> checking for references to /build/ in /nix/store/4767aldkqi90586wahb3i2v508dz3379-katana-1.0.3...
katana> patching script interpreter paths in /nix/store/4767aldkqi90586wahb3i2v508dz3379-katana-1.0.3
```
### Additional context
Removing the following [line](https://github.com/NixOS/nixpkgs/blob/a7c9c812e0c4d544cebf4566158560220346c9b2/pkgs/tools/security/katana/default.nix#L19) fixes the issue for me.
```diff
- CGO_ENABLED = 0;
```
Disclaimer: I'm not familiar with golang so I'm not sure what this does or why it was added in the first place.
### Notify maintainers
<!--
Please @ people who are in the `meta.maintainers` list of the offending package or module.
If in doubt, check `git blame` for whoever last touched something.
-->
@dit7ya
### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
- system: `"x86_64-linux"`
- host os: `Linux 6.1.51, NixOS, 23.11 (Tapir), 23.11.20230908.db9208a`
- multi-user?: `yes`
- sandbox: `yes`
- version: `nix-env (Nix) 2.17.0`
- channels(root): `"home-manager, nixos, nixos-unstable"`
- nixpkgs: `/etc/nix/inputs/nixpkgs`
```
|
1.0
|
Build failure: katana - ### Steps To Reproduce
Steps to reproduce the behavior:
1. `nix build nixpkgs#katana`
2. observe that build exited with success code, but `result` directory is empty.
### Build log
```
katana> unpacking sources
katana> unpacking source archive /nix/store/rdivf3r4i8xigkbn3yvr4mjsqk5phmfa-source
katana> source root is source
katana> patching sources
katana> updateAutotoolsGnuConfigScriptsPhase
katana> configuring
katana> building
katana> Building subPackage ./cmd/katana
katana> package github.com/projectdiscovery/katana/cmd/katana
katana> imports github.com/projectdiscovery/katana/internal/runner
katana> imports github.com/projectdiscovery/katana/pkg/engine/hybrid
katana> imports github.com/projectdiscovery/katana/pkg/engine/common
katana> imports github.com/projectdiscovery/katana/pkg/engine/parser
katana> imports github.com/projectdiscovery/katana/pkg/utils
katana> imports github.com/BishopFox/jsluice
katana> imports github.com/smacker/go-tree-sitter/javascript: build constraints exclude all Go files in /build/source/vendor/github.com/smacker/go-tree-sitter/javascript
katana> running tests
katana> package github.com/projectdiscovery/katana/cmd/katana
katana> imports github.com/projectdiscovery/katana/internal/runner
katana> imports github.com/projectdiscovery/katana/pkg/engine/hybrid
katana> imports github.com/projectdiscovery/katana/pkg/engine/common
katana> imports github.com/projectdiscovery/katana/pkg/engine/parser
katana> imports github.com/projectdiscovery/katana/pkg/utils
katana> imports github.com/BishopFox/jsluice
katana> imports github.com/smacker/go-tree-sitter/javascript: build constraints exclude all Go files in /build/source/vendor/github.com/smacker/go-tree-sitter/javascript
katana> installing
katana> post-installation fixup
katana> shrinking RPATHs of ELF executables and libraries in /nix/store/4767aldkqi90586wahb3i2v508dz3379-katana-1.0.3
katana> checking for references to /build/ in /nix/store/4767aldkqi90586wahb3i2v508dz3379-katana-1.0.3...
katana> patching script interpreter paths in /nix/store/4767aldkqi90586wahb3i2v508dz3379-katana-1.0.3
```
### Additional context
Removing the following [line](https://github.com/NixOS/nixpkgs/blob/a7c9c812e0c4d544cebf4566158560220346c9b2/pkgs/tools/security/katana/default.nix#L19) fixes the issue for me.
```diff
- CGO_ENABLED = 0;
```
Disclaimer: I'm not familiar with golang so I'm not sure what this does or why it was added in the first place.
### Notify maintainers
<!--
Please @ people who are in the `meta.maintainers` list of the offending package or module.
If in doubt, check `git blame` for whoever last touched something.
-->
@dit7ya
### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
- system: `"x86_64-linux"`
- host os: `Linux 6.1.51, NixOS, 23.11 (Tapir), 23.11.20230908.db9208a`
- multi-user?: `yes`
- sandbox: `yes`
- version: `nix-env (Nix) 2.17.0`
- channels(root): `"home-manager, nixos, nixos-unstable"`
- nixpkgs: `/etc/nix/inputs/nixpkgs`
```
|
non_test
|
build failure katana steps to reproduce steps to reproduce the behavior nix build nixpkgs katana observe that build exited with success code but result directory is empty build log katana unpacking sources katana unpacking source archive nix store source katana source root is source katana patching sources katana updateautotoolsgnuconfigscriptsphase katana configuring katana building katana building subpackage cmd katana katana package github com projectdiscovery katana cmd katana katana imports github com projectdiscovery katana internal runner katana imports github com projectdiscovery katana pkg engine hybrid katana imports github com projectdiscovery katana pkg engine common katana imports github com projectdiscovery katana pkg engine parser katana imports github com projectdiscovery katana pkg utils katana imports github com bishopfox jsluice katana imports github com smacker go tree sitter javascript build constraints exclude all go files in build source vendor github com smacker go tree sitter javascript katana running tests katana package github com projectdiscovery katana cmd katana katana imports github com projectdiscovery katana internal runner katana imports github com projectdiscovery katana pkg engine hybrid katana imports github com projectdiscovery katana pkg engine common katana imports github com projectdiscovery katana pkg engine parser katana imports github com projectdiscovery katana pkg utils katana imports github com bishopfox jsluice katana imports github com smacker go tree sitter javascript build constraints exclude all go files in build source vendor github com smacker go tree sitter javascript katana installing katana post installation fixup katana shrinking rpaths of elf executables and libraries in nix store katana katana checking for references to build in nix store katana katana patching script interpreter paths in nix store katana additional context removing the following fixes the issue for me diff cgo enabled disclaimer i m not familiar with golang so i m not sure what this does or why it was added in the first place notify maintainers please people who are in the meta maintainers list of the offending package or module if in doubt check git blame for whoever last touched something metadata please run nix shell p nix info run nix info m and paste the result console nix shell p nix info run nix info m system linux host os linux nixos tapir multi user yes sandbox yes version nix env nix channels root home manager nixos nixos unstable nixpkgs etc nix inputs nixpkgs
| 0
|
62,185
| 6,779,689,857
|
IssuesEvent
|
2017-10-29 03:09:36
|
openshift/origin
|
https://api.github.com/repos/openshift/origin
|
closed
|
[Feature:Builds][Conformance] s2i build with a root user image should create a root build and pass with a privileged SCC [Suite:openshift/conformance/parallel]
|
kind/test-flake priority/P1
|
```
/tmp/openshift/build-rpm-release/tito/rpmbuild-originPTPqU0/BUILD/origin-3.7.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:104
Expected error:
<*util.ExitError | 0xc42197e600>: {
Cmd: "oc adm --config=/tmp/cluster-admin.kubeconfig --namespace=extended-test-s2i-build-root-mbwdf-mjc4g policy add-scc-to-user privileged system:serviceaccount:extended-test-s2i-build-root-mbwdf-mjc4g:builder",
StdErr: "Error from server (Conflict): Operation cannot be fulfilled on securitycontextconstraints.security.openshift.io \"privileged\": the object has been modified; please apply your changes to the latest version and try again",
ExitError: {
ProcessState: {
pid: 17913,
status: 256,
rusage: {
Utime: {Sec: 0, Usec: 196524},
Stime: {Sec: 0, Usec: 28545},
Maxrss: 48672,
Ixrss: 0,
Idrss: 0,
Isrss: 0,
Minflt: 13633,
Majflt: 0,
Nswap: 0,
Inblock: 0,
Oublock: 0,
Msgsnd: 0,
Msgrcv: 0,
Nsignals: 0,
Nvcsw: 956,
Nivcsw: 3,
},
},
Stderr: nil,
},
}
exit status 1
not to have occurred
/tmp/openshift/build-rpm-release/tito/rpmbuild-originPTPqU0/BUILD/origin-3.7.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:77
```
|
1.0
|
[Feature:Builds][Conformance] s2i build with a root user image should create a root build and pass with a privileged SCC [Suite:openshift/conformance/parallel] - ```
/tmp/openshift/build-rpm-release/tito/rpmbuild-originPTPqU0/BUILD/origin-3.7.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:104
Expected error:
<*util.ExitError | 0xc42197e600>: {
Cmd: "oc adm --config=/tmp/cluster-admin.kubeconfig --namespace=extended-test-s2i-build-root-mbwdf-mjc4g policy add-scc-to-user privileged system:serviceaccount:extended-test-s2i-build-root-mbwdf-mjc4g:builder",
StdErr: "Error from server (Conflict): Operation cannot be fulfilled on securitycontextconstraints.security.openshift.io \"privileged\": the object has been modified; please apply your changes to the latest version and try again",
ExitError: {
ProcessState: {
pid: 17913,
status: 256,
rusage: {
Utime: {Sec: 0, Usec: 196524},
Stime: {Sec: 0, Usec: 28545},
Maxrss: 48672,
Ixrss: 0,
Idrss: 0,
Isrss: 0,
Minflt: 13633,
Majflt: 0,
Nswap: 0,
Inblock: 0,
Oublock: 0,
Msgsnd: 0,
Msgrcv: 0,
Nsignals: 0,
Nvcsw: 956,
Nivcsw: 3,
},
},
Stderr: nil,
},
}
exit status 1
not to have occurred
/tmp/openshift/build-rpm-release/tito/rpmbuild-originPTPqU0/BUILD/origin-3.7.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:77
```
|
test
|
build with a root user image should create a root build and pass with a privileged scc tmp openshift build rpm release tito rpmbuild build origin output local go src github com openshift origin test extended builds root go expected error cmd oc adm config tmp cluster admin kubeconfig namespace extended test build root mbwdf policy add scc to user privileged system serviceaccount extended test build root mbwdf builder stderr error from server conflict operation cannot be fulfilled on securitycontextconstraints security openshift io privileged the object has been modified please apply your changes to the latest version and try again exiterror processstate pid status rusage utime sec usec stime sec usec maxrss ixrss idrss isrss minflt majflt nswap inblock oublock msgsnd msgrcv nsignals nvcsw nivcsw stderr nil exit status not to have occurred tmp openshift build rpm release tito rpmbuild build origin output local go src github com openshift origin test extended builds root go
| 1
|
52,273
| 6,225,954,243
|
IssuesEvent
|
2017-07-10 17:21:35
|
dotnet/coreclr
|
https://api.github.com/repos/dotnet/coreclr
|
closed
|
Test failure: Interop_RefCharArray._RefCharArrayTest_RefCharArrayTest_/_RefCharArrayTest_RefCharArrayTest_cmd
|
arch-arm32 test-run-uwp-coreclr
|
Opened on behalf of @Jiayili1
The test `Interop_RefCharArray._RefCharArrayTest_RefCharArrayTest_/_RefCharArrayTest_RefCharArrayTest_cmd` has failed.
Return code: 1
Raw output file: C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\5a1b9d39-82a2-498d-8010-97efb5cf2676\Unzip\Reports\Interop.RefCharArray\RefCharArrayTest\RefCharArrayTest.output.txt
Raw output:
BEGIN EXECUTION\r
"C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload\corerun.exe" RefCharArrayTest.exe \r
Beginning scenario: Pinvoke,Cdecl\r
ERROR!!!-e01: Unexpected ExceptionSystem.DllNotFoundException: Unable to load DLL 'RefCharArrayNative': The specified module could not be found. (Exception from HRESULT: 0x8007007E)\r
at Test.MarshalRefCharArray_Cdecl(Char[]& arr)\r
at Test.TestMethod_PInvoke_Cdecl()\r
Failed!\r
Expected: 100\r
Actual: 101\r
END EXECUTION - FAILED\r
FAILED\r
Test Harness Exitcode is : 1\r
To run the test:
> set CORE_ROOT=C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload
> C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\5a1b9d39-82a2-498d-8010-97efb5cf2676\Unzip\RefCharArrayTest\RefCharArrayTest.cmd
\r
Expected: True\r
Actual: False
Stack Trace:
at Interop_RefCharArray._RefCharArrayTest_RefCharArrayTest_._RefCharArrayTest_RefCharArrayTest_cmd()
Build : Master - 20170627.02 (Core Tests)
Failing configurations:
- windows.10.arm64
- arm
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcoreclr~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20170627.02/workItem/Interop.RefCharArray.XUnitWrapper/analysis/xunit/Interop_RefCharArray._RefCharArrayTest_RefCharArrayTest_~2F_RefCharArrayTest_RefCharArrayTest_cmd
|
1.0
|
Test failure: Interop_RefCharArray._RefCharArrayTest_RefCharArrayTest_/_RefCharArrayTest_RefCharArrayTest_cmd - Opened on behalf of @Jiayili1
The test `Interop_RefCharArray._RefCharArrayTest_RefCharArrayTest_/_RefCharArrayTest_RefCharArrayTest_cmd` has failed.
Return code: 1
Raw output file: C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\5a1b9d39-82a2-498d-8010-97efb5cf2676\Unzip\Reports\Interop.RefCharArray\RefCharArrayTest\RefCharArrayTest.output.txt
Raw output:
BEGIN EXECUTION\r
"C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload\corerun.exe" RefCharArrayTest.exe \r
Beginning scenario: Pinvoke,Cdecl\r
ERROR!!!-e01: Unexpected ExceptionSystem.DllNotFoundException: Unable to load DLL 'RefCharArrayNative': The specified module could not be found. (Exception from HRESULT: 0x8007007E)\r
at Test.MarshalRefCharArray_Cdecl(Char[]& arr)\r
at Test.TestMethod_PInvoke_Cdecl()\r
Failed!\r
Expected: 100\r
Actual: 101\r
END EXECUTION - FAILED\r
FAILED\r
Test Harness Exitcode is : 1\r
To run the test:
> set CORE_ROOT=C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload
> C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\5a1b9d39-82a2-498d-8010-97efb5cf2676\Unzip\RefCharArrayTest\RefCharArrayTest.cmd
\r
Expected: True\r
Actual: False
Stack Trace:
at Interop_RefCharArray._RefCharArrayTest_RefCharArrayTest_._RefCharArrayTest_RefCharArrayTest_cmd()
Build : Master - 20170627.02 (Core Tests)
Failing configurations:
- windows.10.arm64
- arm
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcoreclr~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20170627.02/workItem/Interop.RefCharArray.XUnitWrapper/analysis/xunit/Interop_RefCharArray._RefCharArrayTest_RefCharArrayTest_~2F_RefCharArrayTest_RefCharArrayTest_cmd
|
test
|
test failure interop refchararray refchararraytest refchararraytest refchararraytest refchararraytest cmd opened on behalf of the test interop refchararray refchararraytest refchararraytest refchararraytest refchararraytest cmd has failed return code raw output file c dotnetbuild work work unzip reports interop refchararray refchararraytest refchararraytest output txt raw output begin execution r c dotnetbuild work payload corerun exe refchararraytest exe r beginning scenario pinvoke cdecl r error unexpected exceptionsystem dllnotfoundexception unable to load dll refchararraynative the specified module could not be found exception from hresult r at test marshalrefchararray cdecl char arr r at test testmethod pinvoke cdecl r failed r expected r actual r end execution failed r failed r test harness exitcode is r to run the test set core root c dotnetbuild work payload c dotnetbuild work work unzip refchararraytest refchararraytest cmd r expected true r actual false stack trace at interop refchararray refchararraytest refchararraytest refchararraytest refchararraytest cmd build master core tests failing configurations windows arm detail
| 1
|
13,965
| 8,756,239,931
|
IssuesEvent
|
2018-12-14 17:05:47
|
php-coder/mystamps
|
https://api.github.com/repos/php-coder/mystamps
|
opened
|
/collection/{slug}: show a list of series at the end on small devices
|
area/markup area/usability kind/improvement trivial
|
When I open a page with a collection, I see a long list of the series first and only after that there is a collection statistics. In order to make it more usable, we should move a series list to the end.
|
True
|
/collection/{slug}: show a list of series at the end on small devices - When I open a page with a collection, I see a long list of the series first and only after that there is a collection statistics. In order to make it more usable, we should move a series list to the end.
|
non_test
|
collection slug show a list of series at the end on small devices when i open a page with a collection i see a long list of the series first and only after that there is a collection statistics in order to make it more usable we should move a series list to the end
| 0
|
323,177
| 23,937,480,047
|
IssuesEvent
|
2022-09-11 12:45:29
|
karelkryda/DKRCommands
|
https://api.github.com/repos/karelkryda/DKRCommands
|
closed
|
[Bug]: no constuctor?
|
bug documentation
|
### ⚠️ Please verify that this bug has NOT been raised before.
- [x] I checked and didn't find similar issue
### ✏ Description
basically can't initialize DKRCommands, it says
```ruby
This expression is not constructable.
Type 'typeof import(".../node_modules/dkrcommands/dist/index")' has no construct signatures.
```
my import is as follows:
```js
import DKRCommands from "dkrcommands"
```
### 👟 Reproduction steps
just ```npm install dkrcommands``` and paste basic example from documentation
### 🔢 DKRCommands version
1.0.0-beta.1
### 💻 Operating system
Linux
### 🔢 Node.js version
18.9.0
### 👀 Expected behavior
works
### 😥 Actual behavior
doesn't work
### 📝 Relevant log output
_No response_
|
1.0
|
[Bug]: no constuctor? - ### ⚠️ Please verify that this bug has NOT been raised before.
- [x] I checked and didn't find similar issue
### ✏ Description
basically can't initialize DKRCommands, it says
```ruby
This expression is not constructable.
Type 'typeof import(".../node_modules/dkrcommands/dist/index")' has no construct signatures.
```
my import is as follows:
```js
import DKRCommands from "dkrcommands"
```
### 👟 Reproduction steps
just ```npm install dkrcommands``` and paste basic example from documentation
### 🔢 DKRCommands version
1.0.0-beta.1
### 💻 Operating system
Linux
### 🔢 Node.js version
18.9.0
### 👀 Expected behavior
works
### 😥 Actual behavior
doesn't work
### 📝 Relevant log output
_No response_
|
non_test
|
no constuctor ⚠️ please verify that this bug has not been raised before i checked and didn t find similar issue ✏ description basically can t initialize dkrcommands it says ruby this expression is not constructable type typeof import node modules dkrcommands dist index has no construct signatures my import is as follows js import dkrcommands from dkrcommands 👟 reproduction steps just npm install dkrcommands and paste basic example from documentation 🔢 dkrcommands version beta 💻 operating system linux 🔢 node js version 👀 expected behavior works 😥 actual behavior doesn t work 📝 relevant log output no response
| 0
|
60,440
| 3,129,418,558
|
IssuesEvent
|
2015-09-09 01:03:58
|
cs2103aug2015-w15-3j/main
|
https://api.github.com/repos/cs2103aug2015-w15-3j/main
|
opened
|
Setup a basic UI with displayView
|
priority.medium
|
Used to display relevant search and result of command. Not to be confused with feedback view.
|
1.0
|
Setup a basic UI with displayView - Used to display relevant search and result of command. Not to be confused with feedback view.
|
non_test
|
setup a basic ui with displayview used to display relevant search and result of command not to be confused with feedback view
| 0
|
259,672
| 22,504,589,302
|
IssuesEvent
|
2022-06-23 14:31:34
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
opened
|
Teste de generalizacao para a tag Terceiro Setor - Dados de Parcerias - Santana do Garambéu
|
generalization test development
|
DoD: Realizar o teste de Generalização do validador da tag Terceiro Setor - Dados de Parcerias para o Município de Santana do Garambéu.
|
1.0
|
Teste de generalizacao para a tag Terceiro Setor - Dados de Parcerias - Santana do Garambéu - DoD: Realizar o teste de Generalização do validador da tag Terceiro Setor - Dados de Parcerias para o Município de Santana do Garambéu.
|
test
|
teste de generalizacao para a tag terceiro setor dados de parcerias santana do garambéu dod realizar o teste de generalização do validador da tag terceiro setor dados de parcerias para o município de santana do garambéu
| 1
|
29,188
| 4,474,307,970
|
IssuesEvent
|
2016-08-26 08:55:36
|
pixelhumain/communecter
|
https://api.github.com/repos/pixelhumain/communecter
|
closed
|
Impossible de se logguer après un changement d'email sur un user
|
asap bug to test
|
1. Changer son adresse mail sur sa page de profil
2. Se déconnecter
3. Essayer de se relogguer
=> impossible : l'email et le user ne correspondent pas.
Du à la manière d'encoder le password
|
1.0
|
Impossible de se logguer après un changement d'email sur un user - 1. Changer son adresse mail sur sa page de profil
2. Se déconnecter
3. Essayer de se relogguer
=> impossible : l'email et le user ne correspondent pas.
Du à la manière d'encoder le password
|
test
|
impossible de se logguer après un changement d email sur un user changer son adresse mail sur sa page de profil se déconnecter essayer de se relogguer impossible l email et le user ne correspondent pas du à la manière d encoder le password
| 1
|
740,750
| 25,765,970,145
|
IssuesEvent
|
2022-12-09 01:52:51
|
fsu-fall2022-capstone/Project-Group-4
|
https://api.github.com/repos/fsu-fall2022-capstone/Project-Group-4
|
closed
|
Expand Upon Shop UI
|
enhancement high priority
|
The Shop UI needs to be configured to be able to hold any number of shop items for towers and boons as referenced in #72. Conceivably, it can be done by setting up additional buttons on the UI to "go to the next page" so to speak. Perhaps separate it into Towers and Boons for the shop? It'll be up to whoever ends up implementing it.
In addition, perhaps adding a button to the UI that can hide/show the shop menu to allow for the screen to be decluttered?
|
1.0
|
Expand Upon Shop UI - The Shop UI needs to be configured to be able to hold any number of shop items for towers and boons as referenced in #72. Conceivably, it can be done by setting up additional buttons on the UI to "go to the next page" so to speak. Perhaps separate it into Towers and Boons for the shop? It'll be up to whoever ends up implementing it.
In addition, perhaps adding a button to the UI that can hide/show the shop menu to allow for the screen to be decluttered?
|
non_test
|
expand upon shop ui the shop ui needs to be configured to be able to hold any number of shop items for towers and boons as referenced in conceivably it can be done by setting up additional buttons on the ui to go to the next page so to speak perhaps separate it into towers and boons for the shop it ll be up to whoever ends up implementing it in addition perhaps adding a button to the ui that can hide show the shop menu to allow for the screen to be decluttered
| 0
|
68,798
| 7,110,795,125
|
IssuesEvent
|
2018-01-17 11:58:44
|
opengeospatial/ets-wfs20
|
https://api.github.com/repos/opengeospatial/ets-wfs20
|
closed
|
Data availability tests has opaque failures
|
enhancement priority:low status:to-verify testbed13
|
The data availability precondition is opaque in that it might fails for a host of other reasons, without reporting what the actual problem is. Examples found debugging GeoServer:
- The constraints reporting SOAP, XML and KVP binding support reported "true" instead of "TRUE" as the value. The internal checks found no bindings and merrily reported "no data", without any explanation. This test should be configured on its own
- If the first binding tried fails the rest is skipped because an exception is thrown, see here:
https://github.com/opengeospatial/ets-wfs20/blob/e4d7d143398400c0b0642c4213d0916ab674374e/src/main/java/org/opengis/cite/iso19142/util/DataSampler.java#L258
Again the tester is left with a criptic error message and little clue as to what might be going on (the notion that a failing binding disables everything else is not "obvious")
|
1.0
|
Data availability tests has opaque failures - The data availability precondition is opaque in that it might fails for a host of other reasons, without reporting what the actual problem is. Examples found debugging GeoServer:
- The constraints reporting SOAP, XML and KVP binding support reported "true" instead of "TRUE" as the value. The internal checks found no bindings and merrily reported "no data", without any explanation. This test should be configured on its own
- If the first binding tried fails the rest is skipped because an exception is thrown, see here:
https://github.com/opengeospatial/ets-wfs20/blob/e4d7d143398400c0b0642c4213d0916ab674374e/src/main/java/org/opengis/cite/iso19142/util/DataSampler.java#L258
Again the tester is left with a criptic error message and little clue as to what might be going on (the notion that a failing binding disables everything else is not "obvious")
|
test
|
data availability tests has opaque failures the data availability precondition is opaque in that it might fails for a host of other reasons without reporting what the actual problem is examples found debugging geoserver the constraints reporting soap xml and kvp binding support reported true instead of true as the value the internal checks found no bindings and merrily reported no data without any explanation this test should be configured on its own if the first binding tried fails the rest is skipped because an exception is thrown see here again the tester is left with a criptic error message and little clue as to what might be going on the notion that a failing binding disables everything else is not obvious
| 1
|
321,551
| 27,538,513,028
|
IssuesEvent
|
2023-03-07 06:29:02
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Update default search provider list for South Korea
|
l10n priority/P3 feature/search QA/Yes release-notes/include QA/Blocked feature/settings OS/Android OS/Desktop QA/Test-All-Platforms retention
|
It should have Naver and Daum search provider as a default.
|
1.0
|
Update default search provider list for South Korea - It should have Naver and Daum search provider as a default.
|
test
|
update default search provider list for south korea it should have naver and daum search provider as a default
| 1
|
266,975
| 23,270,357,670
|
IssuesEvent
|
2022-08-04 22:08:22
|
w3c/csswg-drafts
|
https://api.github.com/repos/w3c/csswg-drafts
|
closed
|
[selectors] [css-conditional] Detecting :has() restrictions
|
selectors-4 Closed Accepted by CSSWG Resolution Needs Testcase (WPT) css-conditional-4
|
Multiple implementations want to ship the `:has()` selector with a variety of limitations in order to prevent performance footguns and complexity:
* https://github.com/w3c/csswg-drafts/issues/6399
* https://github.com/w3c/csswg-drafts/issues/6952
* https://github.com/w3c/csswg-drafts/issues/7211
* https://github.com/w3c/csswg-drafts/issues/7212
However, since `:has()` has forgiving parsing like `:is()` / `:where()`, it's not possible to detect these limitations easily, and it won't be possible to do so if we ever remove these limitations.
For `:is()` / `:where()` it's not a problem since it's generally assumed that if the selector is valid outside of them it'll be valid inside of them... But for the limitations that folks are imposing on `:has()` (and which for the record I agree with) this is not true.
Perhaps we should special-case `@supports selector(..)` to _not_ use forgiving parsing? Otherwise the only way to potentially detect this is with script (by reading back the serialization of `:has()` and see if the relevant selectors have been dropped).
cc @byung-woo @anttijk
|
1.0
|
[selectors] [css-conditional] Detecting :has() restrictions - Multiple implementations want to ship the `:has()` selector with a variety of limitations in order to prevent performance footguns and complexity:
* https://github.com/w3c/csswg-drafts/issues/6399
* https://github.com/w3c/csswg-drafts/issues/6952
* https://github.com/w3c/csswg-drafts/issues/7211
* https://github.com/w3c/csswg-drafts/issues/7212
However, since `:has()` has forgiving parsing like `:is()` / `:where()`, it's not possible to detect these limitations easily, and it won't be possible to do so if we ever remove these limitations.
For `:is()` / `:where()` it's not a problem since it's generally assumed that if the selector is valid outside of them it'll be valid inside of them... But for the limitations that folks are imposing on `:has()` (and which for the record I agree with) this is not true.
Perhaps we should special-case `@supports selector(..)` to _not_ use forgiving parsing? Otherwise the only way to potentially detect this is with script (by reading back the serialization of `:has()` and see if the relevant selectors have been dropped).
cc @byung-woo @anttijk
|
test
|
detecting has restrictions multiple implementations want to ship the has selector with a variety of limitations in order to prevent performance footguns and complexity however since has has forgiving parsing like is where it s not possible to detect these limitations easily and it won t be possible to do so if we ever remove these limitations for is where it s not a problem since it s generally assumed that if the selector is valid outside of them it ll be valid inside of them but for the limitations that folks are imposing on has and which for the record i agree with this is not true perhaps we should special case supports selector to not use forgiving parsing otherwise the only way to potentially detect this is with script by reading back the serialization of has and see if the relevant selectors have been dropped cc byung woo anttijk
| 1
|
701,680
| 24,103,508,148
|
IssuesEvent
|
2022-09-20 04:40:16
|
WordPress/openverse-frontend
|
https://api.github.com/repos/WordPress/openverse-frontend
|
closed
|
Component: VFooter
|
🟨 priority: medium 🌟 goal: addition 🕹 aspect: interface
|
_All work for the 'Create new header and footer' milestone should be done under the `new_header` feature flag._
## Description
<!-- Describe the component, including different states. Do not include screenshots. -->
For search pages, we use a footer with links to other content pages. These links can be arranged linearly or, on small devices, in a 2-column grid.
For non-search pages (such as the home page and content pages), the footer does not show links or the Openverse logo as those are present in the header.
## API
<!-- Tentatively specify the props, state and emitted events of the component. -->
`VFooter` does not take any props, nor does it emit events. It uses the [`use-pages` composable](https://github.com/WordPress/openverse-frontend/blob/b91e74ef9a03e87df751cce8e963fa1578002657/src/composables/use-pages.ts) to get the list of pages and the current page.
## Code samples
<!-- Share pseudocode templates or high-level implementation code; or delete the section entirely. -->
## Dependencies
<!-- Name the components that this component depends on, including issues or PRs; or delete the section entirely if the component is independent. -->
- `VLocaleSwitcher` (for changing the site language)
## References
<!-- Include as many references to prior art as you deem necessary or helpful. -->
<!-- Place a link to the Figma node of the component from the Design Library: https://www.figma.com/file/GIIQ4sDbaToCfFQyKMvzr8/Openverse-Design-Library -->
- **Figma for Footer Content:** https://www.figma.com/file/GIIQ4sDbaToCfFQyKMvzr8/Openverse-Design-Library?node-id=2960%3A8483
- **Figma for Footer Internal:** https://www.figma.com/file/GIIQ4sDbaToCfFQyKMvzr8/Openverse-Design-Library?node-id=2960%3A8290
## Implementation
<!-- Replace the [ ] with [x] to check the box. -->
- [x] 🙋 I would be interested in implementing this component.
|
1.0
|
Component: VFooter - _All work for the 'Create new header and footer' milestone should be done under the `new_header` feature flag._
## Description
<!-- Describe the component, including different states. Do not include screenshots. -->
For search pages, we use a footer with links to other content pages. These links can be arranged linearly or, on small devices, in a 2-column grid.
For non-search pages (such as the home page and content pages), the footer does not show links or the Openverse logo as those are present in the header.
## API
<!-- Tentatively specify the props, state and emitted events of the component. -->
`VFooter` does not take any props, nor does it emit events. It uses the [`use-pages` composable](https://github.com/WordPress/openverse-frontend/blob/b91e74ef9a03e87df751cce8e963fa1578002657/src/composables/use-pages.ts) to get the list of pages and the current page.
## Code samples
<!-- Share pseudocode templates or high-level implementation code; or delete the section entirely. -->
## Dependencies
<!-- Name the components that this component depends on, including issues or PRs; or delete the section entirely if the component is independent. -->
- `VLocaleSwitcher` (for changing the site language)
## References
<!-- Include as many references to prior art as you deem necessary or helpful. -->
<!-- Place a link to the Figma node of the component from the Design Library: https://www.figma.com/file/GIIQ4sDbaToCfFQyKMvzr8/Openverse-Design-Library -->
- **Figma for Footer Content:** https://www.figma.com/file/GIIQ4sDbaToCfFQyKMvzr8/Openverse-Design-Library?node-id=2960%3A8483
- **Figma for Footer Internal:** https://www.figma.com/file/GIIQ4sDbaToCfFQyKMvzr8/Openverse-Design-Library?node-id=2960%3A8290
## Implementation
<!-- Replace the [ ] with [x] to check the box. -->
- [x] 🙋 I would be interested in implementing this component.
|
non_test
|
component vfooter all work for the create new header and footer milestone should be done under the new header feature flag description for search pages we use a footer with links to other content pages these links can be arranged linearly or on small devices in a column grid for non search pages such as the home page and content pages the footer does not show links or the openverse logo as those are present in the header api vfooter does not take any props nor does it emit events it uses the to get the list of pages and the current page code samples dependencies vlocaleswitcher for changing the site language references figma for footer content figma for footer internal implementation 🙋 i would be interested in implementing this component
| 0
|
16,290
| 3,516,659,537
|
IssuesEvent
|
2016-01-12 01:05:31
|
PlayScriptRedux/playscript
|
https://api.github.com/repos/PlayScriptRedux/playscript
|
opened
|
Functions within Functions - Error CS0584: Internal compiler error:
|
☠ language non-compliance ⚠test needed
|
Related: #112
Error CS0584: Internal compiler error: Expression `Mono.CSharp.LocalVariableReference' didn't set its type in DoResolve (CS0584) (functionOfFunctions)
Valid ActionScript:
```
package
{
import flash.display.Sprite;
import flash.events.Event;
public class MainClass
{
public static function Main():void
{
var sprite:Sprite = new Sprite();
sprite.addEventListener("myEvent",
function(e:Event) : void
{
trace("myEvent");
step1();
});
sprite.dispatchEvent( new Event("myEvent", true ) );
function step1():void {
trace("step1");
step2();
}
function step2():void {
trace("step2");
}
}
}
}
```
Output:
```
[trace] myEvent
[trace] step1
[trace] step2
```
|
1.0
|
Functions within Functions - Error CS0584: Internal compiler error: - Related: #112
Error CS0584: Internal compiler error: Expression `Mono.CSharp.LocalVariableReference' didn't set its type in DoResolve (CS0584) (functionOfFunctions)
Valid ActionScript:
```
package
{
import flash.display.Sprite;
import flash.events.Event;
public class MainClass
{
public static function Main():void
{
var sprite:Sprite = new Sprite();
sprite.addEventListener("myEvent",
function(e:Event) : void
{
trace("myEvent");
step1();
});
sprite.dispatchEvent( new Event("myEvent", true ) );
function step1():void {
trace("step1");
step2();
}
function step2():void {
trace("step2");
}
}
}
}
```
Output:
```
[trace] myEvent
[trace] step1
[trace] step2
```
|
test
|
functions within functions error internal compiler error related error internal compiler error expression mono csharp localvariablereference didn t set its type in doresolve functionoffunctions valid actionscript package import flash display sprite import flash events event public class mainclass public static function main void var sprite sprite new sprite sprite addeventlistener myevent function e event void trace myevent sprite dispatchevent new event myevent true function void trace function void trace output myevent
| 1
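For reference, the pattern exercised by the ActionScript in the record above — an event-listener closure calling function declarations that appear later in the same enclosing function — is valid in plain JavaScript thanks to function-declaration hoisting. The minimal event plumbing below is a stand-in for the flash APIs, not part of the original report:

```javascript
// Sketch of the same nested-function pattern: the listener closure calls
// step1/step2, which are declared after the dispatch but hoisted to the
// top of main()'s scope.
function main(log) {
  const listeners = {};
  const addEventListener = (name, fn) => { listeners[name] = fn; };
  const dispatchEvent = (name) => listeners[name]({ type: name });

  addEventListener("myEvent", (e) => {
    log.push(e.type);
    step1();
  });
  dispatchEvent("myEvent");

  function step1() { log.push("step1"); step2(); }
  function step2() { log.push("step2"); }
  return log;
}
```

Running `main([])` reproduces the expected trace order from the record: `myEvent`, `step1`, `step2`.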
|
243,571
| 20,425,135,579
|
IssuesEvent
|
2022-02-24 02:26:18
|
sharpvik/hasky
|
https://api.github.com/repos/sharpvik/hasky
|
closed
|
Runtime testing
|
test runtime
|
1. ~~Closure application and execution~~
2. Garbage collection for every base type.
|
1.0
|
Runtime testing - 1. ~~Closure application and execution~~
2. Garbage collection for every base type.
|
test
|
runtime testing closure application and execution garbage collection for every base type
| 1
|
119,666
| 10,060,348,300
|
IssuesEvent
|
2019-07-22 18:36:13
|
PRUNERS/FLiT
|
https://api.github.com/repos/PRUNERS/FLiT
|
closed
|
Bisect --delete removes fPIC objects
|
bug make python tests
|
## Bug Report
**Describe the problem**
The `--delete` argument for `flit bisect` will delete the `bisect-XX/obj` directory, containing fPIC object files. This means each step for the symbol bisect needs to recompile these object files. Not okay!
**Suggested Fix**
Don't remove the `obj` directory. Keep it around until that bisect is fully finished.
**Alternative approaches:**
Put the fPIC object files in the top-level `obj` directory. This, however, has problems of race conditions when running in parallel if two bisects are running against the same compilation and are doing symbol bisect on the same file (but perhaps a different precision or a different test).
|
1.0
|
Bisect --delete removes fPIC objects - ## Bug Report
**Describe the problem**
The `--delete` argument for `flit bisect` will delete the `bisect-XX/obj` directory, containing fPIC object files. This means each step for the symbol bisect needs to recompile these object files. Not okay!
**Suggested Fix**
Don't remove the `obj` directory. Keep it around until that bisect is fully finished.
**Alternative approaches:**
Put the fPIC object files in the top-level `obj` directory. This, however, has problems of race conditions when running in parallel if two bisects are running against the same compilation and are doing symbol bisect on the same file (but perhaps a different precision or a different test).
|
test
|
bisect delete removes fpic objects bug report describe the problem the delete argument for flit bisect will delete the bisect xx obj directory containing fpic object files this means each step for the symbol bisect needs to recompile these object files not okay suggested fix don t remove the obj directory keep it around until that bisect is fully finished alternative approaches put the fpic object files in the top level obj directory this however has problems of race conditions when running in parallel if two bisects are running against the same compilation and are doing symbol bisect on the same file but perhaps a different precision or a different test
| 1
|
208,782
| 15,932,743,290
|
IssuesEvent
|
2021-04-14 06:23:03
|
elastic/cloud-on-k8s
|
https://api.github.com/repos/elastic/cloud-on-k8s
|
opened
|
E2E OCP cluster is not created
|
:ci >test
|
Looks like it is related to the recent changes we made on the [OCP deployer](https://github.com/elastic/cloud-on-k8s/pull/4387) :
```
02:42:00 Error: clientVersion must not be empty
02:42:00 clientVersion must not be empty
02:42:00 make: *** [Makefile:500: run-deployer] Error 1
02:42:00 Makefile:48: recipe for target 'ci-internal' failed
```
At first glance I would say this is because `clientVersion` is not set for `ocp-ci`:
```yaml
- id: ocp-ci
operation: create
clusterName: ci
provider: ocp
machineType: n1-standard-8
serviceAccount: true
ocp:
region: europe-west2
nodeCount: 3
- id: ocp-dev
operation: create
clusterName: dev
clientVersion: 4.7.0
provider: ocp
machineType: n1-standard-8
serviceAccount: true
ocp:
region: europe-west1
nodeCount: 3
```
|
1.0
|
E2E OCP cluster is not created - Looks like it is related to the recent changes we made on the [OCP deployer](https://github.com/elastic/cloud-on-k8s/pull/4387) :
```
02:42:00 Error: clientVersion must not be empty
02:42:00 clientVersion must not be empty
02:42:00 make: *** [Makefile:500: run-deployer] Error 1
02:42:00 Makefile:48: recipe for target 'ci-internal' failed
```
At first glance I would say this is because `clientVersion` is not set for `ocp-ci`:
```yaml
- id: ocp-ci
operation: create
clusterName: ci
provider: ocp
machineType: n1-standard-8
serviceAccount: true
ocp:
region: europe-west2
nodeCount: 3
- id: ocp-dev
operation: create
clusterName: dev
clientVersion: 4.7.0
provider: ocp
machineType: n1-standard-8
serviceAccount: true
ocp:
region: europe-west1
nodeCount: 3
```
|
test
|
ocp cluster is not created looks like it is related to the recent changes we made on the error clientversion must not be empty clientversion must not be empty make error makefile recipe for target ci internal failed at first glance i would say this is because clientversion is not set for ocp ci yaml id ocp ci operation create clustername ci provider ocp machinetype standard serviceaccount true ocp region europe nodecount id ocp dev operation create clustername dev clientversion provider ocp machinetype standard serviceaccount true ocp region europe nodecount
| 1
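The diagnosis in the record above suggests a simple guard: every `create` plan for the `ocp` provider should carry a non-empty `clientVersion`. A hedged sketch of such a check — the plan shape mirrors the YAML in the record; this is illustrative, not part of the actual deployer:

```javascript
// Return the ids of ocp `create` plans that are missing clientVersion,
// i.e. the ones that would trip "clientVersion must not be empty".
function missingClientVersion(plans) {
  return plans
    .filter((p) => p.provider === "ocp" && p.operation === "create")
    .filter((p) => !p.clientVersion)
    .map((p) => p.id);
}
```

Against the two plans shown in the record, this flags `ocp-ci` and passes `ocp-dev`.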
|
114,055
| 9,673,334,046
|
IssuesEvent
|
2019-05-22 07:15:33
|
zkSNACKs/WalletWasabi
|
https://api.github.com/repos/zkSNACKs/WalletWasabi
|
closed
|
Coinjoin Transactions Network Fee
|
UX question/research stability/testing
|
Yesterday we experienced what happens when network fees suddenly increase significantly and keep high for a day, basically Wasabi continues broadcasting transactions until the mempool policies start preventing the acceptance of new descendant transactions. In this case users cannot mix and those that mixed cannot spend their coins.
The problem is complex and the solutions could be a bit tricky too. This issue is for discussion the alternatives. I list them below and then share my point of view at the end.
## Alternatives
### Alternative 1 - Do Nothing
Do nothing sound wrong but it is technically the best solution and we can see that nothing too terrible happened, except for the fact that there is a reduction in the coordinator's income.
### Alternative 2 - Increase the network fee (reduce the confirmation target)
The idea of doing one or two transaction a day was left behind and we are doing several transactions a day so, the risk of having a long chain of unconfirmed CJ transactions is currently higher than what we originally estimated.
### Alternative 3 - Introduce the previous unconfirmed CJ tx in the fee estimation calculus
This is child coinjoin transactions pay for parent coinjoin transactions. The idea could be implemented in many different way but the basic idea is pay more fees to the network while more unconfirmed cj are waiting in the mempool.
### Alternative 4 - Detect the chain of unconfirmed CJ transactions is too long and pay for confirmation
Let say we don't want to have more than X unconfirmed cj transactions so, when that threshold is reached we spend the coordinator's fee output paying enough for the whole chain of cj transactions.
-------------------
My point of view is that `alternative 1` is possible and it is hard to say in advance whether it is the best or not. OTOH this kind of situation breaks the flow of users and money so, it looks like we could try something else.
`Alternative 2` is the second more easy and it could help. The problem with fee estimation using past fee to predict current fee leads to very bad estimations when the target is long, because it says: given in the last 24hs transactions paid 2sat/bytes on average then the estimated fee is 2 sat/bytes, when in reality transactions in the last 4hs have been paying 150sat/bytes. In summary, reducing the target can help but given it is based on past data it will still pay less when the fees rise and will pay more when the fees drop.
`Alternative 3` requires an overreaction, I mean, if we see 10 unconfirmed coinjoin transactions then the next one has to see how much fee transactions paid in the last block on average and calculate how much fee to pay; basically it is calculated as the current cj tx size in bytes multiplied by the most recent fee rate plus the sum of the differences between the current fee and the paid fee for each previous cj tx.
`Alternative 4` is a bit crazy but possible. The easiest way to do it is manually by @nopara73, after detecting let say 10 cj transactions in the mempool (CJ1, CJ2, .... C10) we can spend the CJ5's coordinator fee output paying a very high fee to make profitable for miners to mine C1, C2, C3, C4 and C5. In this way the coinjoins creation doesn't stop.
|
1.0
|
Coinjoin Transactions Network Fee - Yesterday we experienced what happens when network fees suddenly increase significantly and keep high for a day, basically Wasabi continues broadcasting transactions until the mempool policies start preventing the acceptance of new descendant transactions. In this case users cannot mix and those that mixed cannot spend their coins.
The problem is complex and the solutions could be a bit tricky too. This issue is for discussion the alternatives. I list them below and then share my point of view at the end.
## Alternatives
### Alternative 1 - Do Nothing
Doing nothing sounds wrong, but it is technically the best solution, and we can see that nothing too terrible happened, except for the fact that there is a reduction in the coordinator's income.
### Alternative 2 - Increase the network fee (reduce the confirmation target)
The idea of doing one or two transaction a day was left behind and we are doing several transactions a day so, the risk of having a long chain of unconfirmed CJ transactions is currently higher than what we originally estimated.
### Alternative 3 - Introduce the previous unconfirmed CJ tx in the fee estimation calculus
This is child coinjoin transactions pay for parent coinjoin transactions. The idea could be implemented in many different way but the basic idea is pay more fees to the network while more unconfirmed cj are waiting in the mempool.
### Alternative 4 - Detect the chain of unconfirmed CJ transactions is too long and pay for confirmation
Let say we don't want to have more than X unconfirmed cj transactions so, when that threshold is reached we spend the coordinator's fee output paying enough for the whole chain of cj transactions.
-------------------
My point of view is that `alternative 1` is possible and it is hard to say in advance whether it is the best or not. OTOH this kind of situation breaks the flow of users and money so, it looks like we could try something else.
`Alternative 2` is the second more easy and it could help. The problem with fee estimation using past fee to predict current fee leads to very bad estimations when the target is long, because it says: given in the last 24hs transactions paid 2sat/bytes on average then the estimated fee is 2 sat/bytes, when in reality transactions in the last 4hs have been paying 150sat/bytes. In summary, reducing the target can help but given it is based on past data it will still pay less when the fees rise and will pay more when the fees drop.
`Alternative 3` requires an overreaction, I mean, if we see 10 unconfirmed coinjoin transactions then the next one has to see how much fee transactions paid in the last block on average and calculate how much fee to pay; basically it is calculated as the current cj tx size in bytes multiplied by the most recent fee rate plus the sum of the differences between the current fee and the paid fee for each previous cj tx.
`Alternative 4` is a bit crazy but possible. The easiest way to do it is manually by @nopara73, after detecting let say 10 cj transactions in the mempool (CJ1, CJ2, .... C10) we can spend the CJ5's coordinator fee output paying a very high fee to make profitable for miners to mine C1, C2, C3, C4 and C5. In this way the coinjoins creation doesn't stop.
|
test
|
coinjoin transactions network fee yesterday we experienced what happens when network fees suddenly increase significantly and keep high for a day basically wasabi continues broadcasting transactions until the mempool policies start preventing the acceptance of new descendant transactions in this case users cannot mix and those that mixed cannot spend their coins the problem is complex and the solutions could be a bit tricky too this issue is for discussion the alternatives i list them below and then share my point of view at the end alternatives alternative do nothing do nothing sound wrong but it is technically the best solution and we can see that nothing too terrible happened except for the fact that there is a reduction in the coordinator s income alternative increase the network fee reduce the confirmation target the idea of doing one or two transaction a day was left behind and we are doing several transactions a day so the risk of having a long chain of unconfirmed cj transactions is currently higher than what we originally estimated alternative introduce the previous unconfirmed cj tx in the fee estimation calculus this is child coinjoin transactions pay for parent coinjoin transactions the idea could be implemented in many different way but the basic idea is pay more fees to the network while more unconfirmed cj are waiting in the mempool alternative detect the chain of unconfirmed cj transactions is too long and pay for confirmation let say we don t want to have more than x unconfirmed cj transactions so when that threshold is reached we spend the coordinator s fee output paying enough for the whole chain of cj transactions my point of view is that alternative is possible and it is hard to say in advance whether it is the best or not otoh this kind of situation breaks the flow of users and money so it looks like we could try something else alternative is the second more easy and it could help the problem with fee estimation using past fee to predict 
current fee leads to very bad estimations when the target is long because it says given in the last transactions paid bytes on average then the estimated fee is sat bytes when in reality transactions in the last have been paying bytes in summary reducing the target can help but given it is based on past data it will still pay less when the fees rise and will pay more when the fees drop alternative requires an overreaction i mean if we see unconfimed coinjoin transaction then the next one have to see how much fee transacrions paid in the last block in average and do calculate how much fee to pay basically it is calculated as the current cj tx size in bytes multiplied by the more recent fee rate more the sum of the differences between the current fee and the paid fee for each previous cj tx alternative is a bit crazy but possible the easiest way to do it is manually by after detecting let say cj transactions in the mempool we can spend the s coordinator fee output paying a very high fee to make profitable for miners to mine and in this way the coinjoins creation doesn t stop
| 1
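The fee formula described under `Alternative 3` in the record above can be sketched as code. This is one illustrative reading of the prose — the next transaction's size times the latest fee rate, plus each unconfirmed ancestor coinjoin's shortfall at that rate — not the actual Wasabi implementation; all names and the plan shape are made up:

```javascript
// Child-pays-for-parent style fee bump for the next coinjoin:
// own fee at the latest rate, plus what each unconfirmed ancestor
// underpaid relative to that rate (never negative).
function alternative3Fee(currentTxBytes, recentFeeRate, unconfirmed) {
  // unconfirmed: [{ bytes, paidFee }] for each ancestor cj in the mempool
  const ownFee = currentTxBytes * recentFeeRate;
  const shortfall = unconfirmed.reduce(
    (sum, tx) => sum + Math.max(0, tx.bytes * recentFeeRate - tx.paidFee),
    0
  );
  return ownFee + shortfall;
}
```

For example, with no unconfirmed ancestors the result is just size × rate; each underpaying ancestor adds its gap to that baseline.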
|
75,303
| 7,468,409,459
|
IssuesEvent
|
2018-04-02 18:52:36
|
learn-co-curriculum/python-intro-to-strings
|
https://api.github.com/repos/learn-co-curriculum/python-intro-to-strings
|
closed
|
Images not showing up in curriculum
|
Test
|
<img width="1075" alt="screen shot 2018-03-18 at 1 33 31 pm" src="https://user-images.githubusercontent.com/13033515/37568840-02c32388-2ab1-11e8-9ee9-95abb583367d.png">
|
1.0
|
Images not showing up in curriculum - <img width="1075" alt="screen shot 2018-03-18 at 1 33 31 pm" src="https://user-images.githubusercontent.com/13033515/37568840-02c32388-2ab1-11e8-9ee9-95abb583367d.png">
|
test
|
images not showing up in curriculum img width alt screen shot at pm src
| 1
|
296,382
| 25,547,194,879
|
IssuesEvent
|
2022-11-29 19:54:41
|
LiskHQ/lisk-sdk
|
https://api.github.com/repos/LiskHQ/lisk-sdk
|
opened
|
Add missing unit tests for Transfer and Crosschain Transfer Command
|
type: test framework
|
## Transfer command
1. `Verify` should throw error if recipient address for tokenID doesn't exist in user sub-store.
## Crosschain Transfer Command
1. `Execute` should transfer amount from sender account to corresponding escrow account and log `TransferCrosschainEvent`.
|
1.0
|
Add missing unit tests for Transfer and Crosschain Transfer Command - ## Transfer command
1. `Verify` should throw error if recipient address for tokenID doesn't exist in user sub-store.
## Crosschain Transfer Command
1. `Execute` should transfer amount from sender account to corresponding escrow account and log `TransferCrosschainEvent`.
|
test
|
add missing unit tests for transfer and crosschain transfer command transfer command verify should throw error if recipient address for tokenid doesn t exist in user sub store crosschain transfer command execute should transfer amount from sender account to corresponding escrow account and log transfercrosschainevent
| 1
|
113,110
| 9,629,235,061
|
IssuesEvent
|
2019-05-15 09:09:16
|
khartec/waltz
|
https://api.github.com/repos/khartec/waltz
|
closed
|
when selecting app for data feed creation we should show a preview of the app
|
fixed (test & close) small change
|
Users reporting that just showing app name is insufficient. Should also have an 'open in new tab' link for the app.
|
1.0
|
when selecting app for data feed creation we should show a preview of the app - Users reporting that just showing app name is insufficient. Should also have an 'open in new tab' link for the app.
|
test
|
when selecting app for data feed creation we should show a preview of the app users reporting that just showing app name is insufficient should also have an open in new tab link for the app
| 1
|
162,793
| 12,691,239,964
|
IssuesEvent
|
2020-06-21 16:03:29
|
Vachok/ftpplus
|
https://api.github.com/repos/Vachok/ftpplus
|
closed
|
testGetRawResult
|
Medium TestQuality bug info
|
Execute InternetSyncTest::testGetRawResult**testGetRawResult**
*InternetSyncTest*
*{"16:02:08.453":"Creating: G:\\My\_Proj\\FtpClientPlus\\modules\\networker\\inetstats\\ok\\10.200.213.85-0.txt\nNo original FILE! 10.200.213.85.csv\nExiting: Sun Jun 14 16:01:38 MSK 2020 created 0 rows. 10.200.213.85 Last online: <a href=\"/ad?10.200.213.85\"><font color=\"red\">10.200.213.85 : Login isn"} expected [true] but found [false]*
*java.lang.AssertionError*
|
1.0
|
testGetRawResult - Execute InternetSyncTest::testGetRawResult**testGetRawResult**
*InternetSyncTest*
*{"16:02:08.453":"Creating: G:\\My\_Proj\\FtpClientPlus\\modules\\networker\\inetstats\\ok\\10.200.213.85-0.txt\nNo original FILE! 10.200.213.85.csv\nExiting: Sun Jun 14 16:01:38 MSK 2020 created 0 rows. 10.200.213.85 Last online: <a href=\"/ad?10.200.213.85\"><font color=\"red\">10.200.213.85 : Login isn"} expected [true] but found [false]*
*java.lang.AssertionError*
|
test
|
testgetrawresult execute internetsynctest testgetrawresult testgetrawresult internetsynctest creating g my proj ftpclientplus modules networker inetstats ok txt nno original file csv nexiting sun jun msk created rows last online login isn expected but found java lang assertionerror
| 1
|
685,130
| 23,444,651,589
|
IssuesEvent
|
2022-08-15 18:19:21
|
TheButterbrotMan/Deathdusk
|
https://api.github.com/repos/TheButterbrotMan/Deathdusk
|
closed
|
[Bug]: Crashing while using beds
|
bug help wanted high priority
|
### What happened?
Trying to use beds in foreign places, like those pre-fab houses or this one like camp tent just outside an MCA village. The game didn't actually crash, as in produce a crash report, it just froze and loaded infinitely. ALSO. It was saying the bed was occupied despite the bed being empty a few times.
### Version
2.1.1
### Provide the log
There was no crash log. It didn't produce one
|
1.0
|
[Bug]: Crashing while using beds - ### What happened?
Trying to use beds in foreign places, like those pre-fab houses or this one like camp tent just outside an MCA village. The game didn't actually crash, as in produce a crash report, it just froze and loaded infinitely. ALSO. It was saying the bed was occupied despite the bed being empty a few times.
### Version
2.1.1
### Provide the log
There was no crash log. It didn't produce one
|
non_test
|
crashing while using beds what happened trying to use beds in foreign places like those pre fab houses or this one like camp tent just outside an mca village the game didn t actually crash as in produce a crash report it just froze and loaded infinitely also it was saying the bed was occupied despite the bed being empty a few times version provide the log there was no crash log it didn t produce one
| 0
|
347,136
| 10,425,257,696
|
IssuesEvent
|
2019-09-16 15:05:26
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.pornhub.com - desktop site instead of mobile site
|
browser-firefox-mobile engine-gecko priority-critical type-tracking-protection-basic
|
<!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @extra_labels: type-tracking-protection-basic -->
**URL**: https://www.pornhub.com/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes
**Problem type**: Desktop site instead of mobile site
**Description**: very slow
**Steps to Reproduce**:
Very slow
[](https://webcompat.com/uploads/2019/9/2ee3f55e-6a60-4f73-86c2-9cd101e0a287.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190909131947</li><li>tracking content blocked: true (basic)</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "The resource at https://www.google-analytics.com/analytics.js was blocked because content blocking is enabled." {file: "https://www.pornhub.com/" line: 0}]', u'[console.log(ajaxPost: Error getting data from AJAX call) https://cdn1d-static-shared.phncdn.com/mg_utils-1.0.0.js?cache=2019091102:1:13430]', u'[console.log(ajaxPost: Error getting data from AJAX call) https://cdn1d-static-shared.phncdn.com/mg_utils-1.0.0.js?cache=2019091102:1:13430]', u'[console.log(ajaxPost: Error getting data from AJAX call) https://cdn1d-static-shared.phncdn.com/mg_utils-1.0.0.js?cache=2019091102:1:13430]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.pornhub.com - desktop site instead of mobile site - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @extra_labels: type-tracking-protection-basic -->
**URL**: https://www.pornhub.com/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes
**Problem type**: Desktop site instead of mobile site
**Description**: very slow
**Steps to Reproduce**:
Very slow
[](https://webcompat.com/uploads/2019/9/2ee3f55e-6a60-4f73-86c2-9cd101e0a287.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190909131947</li><li>tracking content blocked: true (basic)</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "The resource at https://www.google-analytics.com/analytics.js was blocked because content blocking is enabled." {file: "https://www.pornhub.com/" line: 0}]', u'[console.log(ajaxPost: Error getting data from AJAX call) https://cdn1d-static-shared.phncdn.com/mg_utils-1.0.0.js?cache=2019091102:1:13430]', u'[console.log(ajaxPost: Error getting data from AJAX call) https://cdn1d-static-shared.phncdn.com/mg_utils-1.0.0.js?cache=2019091102:1:13430]', u'[console.log(ajaxPost: Error getting data from AJAX call) https://cdn1d-static-shared.phncdn.com/mg_utils-1.0.0.js?cache=2019091102:1:13430]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
desktop site instead of mobile site url browser version firefox mobile operating system android tested another browser yes problem type desktop site instead of mobile site description very slow steps to reproduce very slow browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked true basic gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta console messages u u u from with ❤️
| 0
|
247,835
| 20,988,340,871
|
IssuesEvent
|
2022-03-29 06:53:18
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: sqlsmith/setup=tpch-sf1/setting=default failed
|
C-test-failure O-robot O-roachtest branch-master release-blocker T-sql-queries
|
roachtest.sqlsmith/setup=tpch-sf1/setting=default [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=artifacts#/sqlsmith/setup=tpch-sf1/setting=default) on master @ [29716850b181718594663889ddb5f479fef7a305](https://github.com/cockroachdb/cockroach/commits/29716850b181718594663889ddb5f479fef7a305):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /artifacts/sqlsmith/setup=tpch-sf1/setting=default/run_1
cluster.go:1868,sqlsmith.go:101,sqlsmith.go:304,test_runner.go:875: one or more parallel execution failure
(1) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).ParallelE
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:2042
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Parallel
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1923
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Start
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cockroach.go:167
| github.com/cockroachdb/cockroach/pkg/roachprod.Start
| github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:660
| main.(*clusterImpl).StartE
| main/pkg/cmd/roachtest/cluster.go:1826
| main.(*clusterImpl).Start
| main/pkg/cmd/roachtest/cluster.go:1867
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerSQLSmith.func3
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/sqlsmith.go:101
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerSQLSmith.func4.1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/sqlsmith.go:304
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:875
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (2) one or more parallel execution failure
Error types: (1) *withstack.withStack (2) *errutil.leafError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=tpch-sf1/setting=default.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-14417
|
2.0
|
roachtest: sqlsmith/setup=tpch-sf1/setting=default failed - roachtest.sqlsmith/setup=tpch-sf1/setting=default [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=artifacts#/sqlsmith/setup=tpch-sf1/setting=default) on master @ [29716850b181718594663889ddb5f479fef7a305](https://github.com/cockroachdb/cockroach/commits/29716850b181718594663889ddb5f479fef7a305):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /artifacts/sqlsmith/setup=tpch-sf1/setting=default/run_1
cluster.go:1868,sqlsmith.go:101,sqlsmith.go:304,test_runner.go:875: one or more parallel execution failure
(1) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).ParallelE
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:2042
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Parallel
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1923
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Start
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cockroach.go:167
| github.com/cockroachdb/cockroach/pkg/roachprod.Start
| github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:660
| main.(*clusterImpl).StartE
| main/pkg/cmd/roachtest/cluster.go:1826
| main.(*clusterImpl).Start
| main/pkg/cmd/roachtest/cluster.go:1867
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerSQLSmith.func3
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/sqlsmith.go:101
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerSQLSmith.func4.1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/sqlsmith.go:304
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:875
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (2) one or more parallel execution failure
Error types: (1) *withstack.withStack (2) *errutil.leafError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=tpch-sf1/setting=default.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-14417
|
test
|
roachtest sqlsmith setup tpch setting default failed roachtest sqlsmith setup tpch setting default with on master the test failed on branch master cloud gce test artifacts and logs in artifacts sqlsmith setup tpch setting default run cluster go sqlsmith go sqlsmith go test runner go one or more parallel execution failure attached stack trace stack trace github com cockroachdb cockroach pkg roachprod install syncedcluster parallele github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod install syncedcluster parallel github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod install syncedcluster start github com cockroachdb cockroach pkg roachprod install cockroach go github com cockroachdb cockroach pkg roachprod start github com cockroachdb cockroach pkg roachprod roachprod go main clusterimpl starte main pkg cmd roachtest cluster go main clusterimpl start main pkg cmd roachtest cluster go github com cockroachdb cockroach pkg cmd roachtest tests registersqlsmith github com cockroachdb cockroach pkg cmd roachtest tests sqlsmith go github com cockroachdb cockroach pkg cmd roachtest tests registersqlsmith github com cockroachdb cockroach pkg cmd roachtest tests sqlsmith go main testrunner runtest main pkg cmd roachtest test runner go runtime goexit goroot src runtime asm s wraps one or more parallel execution failure error types withstack withstack errutil leaferror help see see cc cockroachdb sql queries jira issue crdb
| 1
|
335,213
| 30,018,244,840
|
IssuesEvent
|
2023-06-26 20:33:25
|
microsoft/vscode-python-debugger
|
https://api.github.com/repos/microsoft/vscode-python-debugger
|
opened
|
TPI: Debugging Python, Test automatic configuration
|
testplan-item
|
Refs: https://github.com/microsoft/vscode-python/issues/19503
- [ ] macOS
- [ ] linux
- [ ] windows
Complexity: 3
Author: @paulacamargo25
---
Prerequisites:
- Install python, version >= 3.7
- Install the [`debugpy`](https://marketplace.visualstudio.com/items?itemName=ms-python.debugpy&ssr=false) extension.
Automatically detect a Flask application and run it with the correct Debug Configuration.
### Steps
Part 1: Debugging python file
1. Create a python file with a simple code.
2. Head over to the Run And Debug tab, and click on Show all automatic debug configurations.
<img width="390" alt="Screen Shot 2022-07-25 at 2 09 44 PM" src="https://user-images.githubusercontent.com/17892325/180874578-32d868f9-b93f-4460-8643-e1b18b3147e9.png">
3. A window will open with a list of options, choose `Debugpy`.
4. You should now see a list of debug options, and there should be the Python File option.
### Verification
1. Make sure that the application has been executed correctly, you can put some breakpoints, to test that the debugging works.
2. If you repeat the steps and instead of clicking the option, you click the wheel, it should open the launch.json file with the configuration prefilled. Make sure this is correct and can be debugged.
3. Another form to show the automatic configuration is typing 'debug ' (with a space) in Quick open (⌘P) or by triggering the Debug: Select and Start Debugging command. Test that the recognition works here too.
Part 2: Try other automatic configuration
1. There are automatic configurations implemented for Django, FastApi and Flask, if you have any of these projects you can try that they also work with them. Because this functionality has already been tested in the Python extension, you don't need to test each one. The idea of this tpi is to make sure that it also works in the `debugpy` extension. Trying one of them is enough.
|
1.0
|
TPI: Debugging Python, Test automatic configuration - Refs: https://github.com/microsoft/vscode-python/issues/19503
- [ ] macOS
- [ ] linux
- [ ] windows
Complexity: 3
Author: @paulacamargo25
---
Prerequisites:
- Install python, version >= 3.7
- Install the [`debugpy`](https://marketplace.visualstudio.com/items?itemName=ms-python.debugpy&ssr=false) extension.
Automatically detect a Flask application and run it with the correct Debug Configuration.
### Steps
Part 1: Debugging python file
1. Create a python file with a simple code.
2. Head over to the Run And Debug tab, and click on Show all automatic debug configurations.
<img width="390" alt="Screen Shot 2022-07-25 at 2 09 44 PM" src="https://user-images.githubusercontent.com/17892325/180874578-32d868f9-b93f-4460-8643-e1b18b3147e9.png">
3. A window will open with a list of options, choose `Debugpy`.
4. You should now see a list of debug options, and there should be the Python File option.
### Verification
1. Make sure that the application has been executed correctly, you can put some breakpoints, to test that the debugging works.
2. If you repeat the steps and instead of clicking the option, you click the wheel, it should open the launch.json file with the configuration prefilled. Make sure this is correct and can be debugged.
3. Another form to show the automatic configuration is typing 'debug ' (with a space) in Quick open (⌘P) or by triggering the Debug: Select and Start Debugging command. Test that the recognition works here too.
Part 2: Try other automatic configuration
1. There are automatic configurations implemented for Django, FastApi and Flask, if you have any of these projects you can try that they also work with them. Because this functionality has already been tested in the Python extension, you don't need to test each one. The idea of this tpi is to make sure that it also works in the `debugpy` extension. Trying one of them is enough.
|
test
|
tpi debugging python test automatic configuration refs macos linux windows complexity author prerequisites install python version install the extension automatically detect a flask application and run it with the correct debug configuration steps part debugging python file create a python file with a simple code head over to the run and debug tab and click on show all automatic debug configurations img width alt screen shot at pm src a window will open with a list of options choose debugpy you should now see a list of debug options and there should be the python file option verification make sure that the application has been executed correctly you can put some breakpoints to test that the debugging works if you repeat the steps and instead of clicking the option you click the wheel it should open the launch json file with the configuration prefilled make sure this is correct and can be debugged another form to show the automatic configuration is typing debug with a space in quick open ⌘p or by triggering the debug select and start debugging command test that the recognition works here too part try other automatic configuration there are automatic configurations implemented for django fastapi and flask if you have any of these projects you can try that they also work with them because this functionality has already been tested in the python extension you don t need to test each one the idea of this tpi is to make sure that it also works in the debugpy extension trying one of them is enough
| 1
|
129,780
| 18,109,741,651
|
IssuesEvent
|
2021-09-23 01:02:19
|
maorkuriel/ksa
|
https://api.github.com/repos/maorkuriel/ksa
|
opened
|
CVE-2017-9804 (High) detected in struts2-core-2.3.31.jar, xwork-core-2.3.31.jar
|
security vulnerability
|
## CVE-2017-9804 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>struts2-core-2.3.31.jar</b>, <b>xwork-core-2.3.31.jar</b></p></summary>
<p>
<details><summary><b>struts2-core-2.3.31.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Library home page: <a href="http://struts.apache.org/struts2-core/">http://struts.apache.org/struts2-core/</a></p>
<p>Path to dependency file: ksa/ksa-web-root/ksa-statistics-web/pom.xml</p>
<p>Path to vulnerable library: er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,ksa-web-root/ksa-web/target/ROOT/WEB-INF/lib/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar</p>
<p>
Dependency Hierarchy:
- :x: **struts2-core-2.3.31.jar** (Vulnerable Library)
</details>
<details><summary><b>xwork-core-2.3.31.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Library home page: <a href="http://struts.apache.org/">http://struts.apache.org/</a></p>
<p>Path to dependency file: ksa/ksa-web-root/ksa-bd-web/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,ksa-web-root/ksa-web/target/ROOT/WEB-INF/lib/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar</p>
<p>
Dependency Hierarchy:
- struts2-core-2.3.31.jar (Root Library)
- :x: **xwork-core-2.3.31.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Struts 2.3.7 through 2.3.33 and 2.5 through 2.5.12, if an application allows entering a URL in a form field and built-in URLValidator is used, it is possible to prepare a special URL which will be used to overload server process when performing validation of the URL. NOTE: this vulnerability exists because of an incomplete fix for S2-047 / CVE-2017-7672.
<p>Publish Date: 2017-09-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-9804>CVE-2017-9804</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/apache/struts/tree/STRUTS_2_3_34/">https://github.com/apache/struts/tree/STRUTS_2_3_34/</a></p>
<p>Release Date: 2017-09-20</p>
<p>Fix Resolution: org.apache.struts:struts2-core:2.3.34,org.apache.struts:struts2-core:2.5.13</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.struts","packageName":"struts2-core","packageVersion":"2.3.31","packageFilePaths":["/ksa-web-root/ksa-statistics-web/pom.xml","/ksa-web-root/ksa-web/pom.xml","/ksa-web-root/ksa-logistics-web/pom.xml","/ksa-web-root/ksa-system-web/pom.xml","/ksa-web-root/ksa-finance-web/pom.xml","/ksa-web-root/ksa-security-web/pom.xml","/ksa-web-core/pom.xml","/ksa-web-root/ksa-bd-web/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.struts:struts2-core:2.3.31","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.struts:struts2-core:2.3.34,org.apache.struts:struts2-core:2.5.13"},{"packageType":"Java","groupId":"org.apache.struts.xwork","packageName":"xwork-core","packageVersion":"2.3.31","packageFilePaths":["/ksa-web-root/ksa-bd-web/pom.xml","/ksa-web-root/ksa-finance-web/pom.xml","/ksa-web-root/ksa-system-web/pom.xml","/ksa-web-core/pom.xml","/ksa-web-root/ksa-web/pom.xml","/ksa-web-root/ksa-logistics-web/pom.xml","/ksa-web-root/ksa-security-web/pom.xml","/ksa-web-root/ksa-statistics-web/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.struts:struts2-core:2.3.31;org.apache.struts.xwork:xwork-core:2.3.31","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.struts:struts2-core:2.3.34,org.apache.struts:struts2-core:2.5.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-9804","vulnerabilityDetails":"In Apache Struts 2.3.7 through 2.3.33 and 2.5 through 2.5.12, if an application allows entering a URL in a form field and built-in URLValidator is used, it is possible to prepare a special URL which will be used to overload server process when performing validation of the URL. NOTE: this vulnerability exists because of an incomplete fix for S2-047 / CVE-2017-7672.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-9804","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-9804 (High) detected in struts2-core-2.3.31.jar, xwork-core-2.3.31.jar - ## CVE-2017-9804 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>struts2-core-2.3.31.jar</b>, <b>xwork-core-2.3.31.jar</b></p></summary>
<p>
<details><summary><b>struts2-core-2.3.31.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Library home page: <a href="http://struts.apache.org/struts2-core/">http://struts.apache.org/struts2-core/</a></p>
<p>Path to dependency file: ksa/ksa-web-root/ksa-statistics-web/pom.xml</p>
<p>Path to vulnerable library: er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,ksa-web-root/ksa-web/target/ROOT/WEB-INF/lib/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar,er/.m2/repository/org/apache/struts/struts2-core/2.3.31/struts2-core-2.3.31.jar</p>
<p>
Dependency Hierarchy:
- :x: **struts2-core-2.3.31.jar** (Vulnerable Library)
</details>
<details><summary><b>xwork-core-2.3.31.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Library home page: <a href="http://struts.apache.org/">http://struts.apache.org/</a></p>
<p>Path to dependency file: ksa/ksa-web-root/ksa-bd-web/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,ksa-web-root/ksa-web/target/ROOT/WEB-INF/lib/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar,/home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.31/xwork-core-2.3.31.jar</p>
<p>
Dependency Hierarchy:
- struts2-core-2.3.31.jar (Root Library)
- :x: **xwork-core-2.3.31.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Struts 2.3.7 through 2.3.33 and 2.5 through 2.5.12, if an application allows entering a URL in a form field and built-in URLValidator is used, it is possible to prepare a special URL which will be used to overload server process when performing validation of the URL. NOTE: this vulnerability exists because of an incomplete fix for S2-047 / CVE-2017-7672.
<p>Publish Date: 2017-09-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-9804>CVE-2017-9804</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/apache/struts/tree/STRUTS_2_3_34/">https://github.com/apache/struts/tree/STRUTS_2_3_34/</a></p>
<p>Release Date: 2017-09-20</p>
<p>Fix Resolution: org.apache.struts:struts2-core:2.3.34,org.apache.struts:struts2-core:2.5.13</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.struts","packageName":"struts2-core","packageVersion":"2.3.31","packageFilePaths":["/ksa-web-root/ksa-statistics-web/pom.xml","/ksa-web-root/ksa-web/pom.xml","/ksa-web-root/ksa-logistics-web/pom.xml","/ksa-web-root/ksa-system-web/pom.xml","/ksa-web-root/ksa-finance-web/pom.xml","/ksa-web-root/ksa-security-web/pom.xml","/ksa-web-core/pom.xml","/ksa-web-root/ksa-bd-web/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.struts:struts2-core:2.3.31","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.struts:struts2-core:2.3.34,org.apache.struts:struts2-core:2.5.13"},{"packageType":"Java","groupId":"org.apache.struts.xwork","packageName":"xwork-core","packageVersion":"2.3.31","packageFilePaths":["/ksa-web-root/ksa-bd-web/pom.xml","/ksa-web-root/ksa-finance-web/pom.xml","/ksa-web-root/ksa-system-web/pom.xml","/ksa-web-core/pom.xml","/ksa-web-root/ksa-web/pom.xml","/ksa-web-root/ksa-logistics-web/pom.xml","/ksa-web-root/ksa-security-web/pom.xml","/ksa-web-root/ksa-statistics-web/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.struts:struts2-core:2.3.31;org.apache.struts.xwork:xwork-core:2.3.31","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.struts:struts2-core:2.3.34,org.apache.struts:struts2-core:2.5.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-9804","vulnerabilityDetails":"In Apache Struts 2.3.7 through 2.3.33 and 2.5 through 2.5.12, if an application allows entering a URL in a form field and built-in URLValidator is used, it is possible to prepare a special URL which will be used to overload server process when performing validation of the URL. NOTE: this vulnerability exists because of an incomplete fix for S2-047 / CVE-2017-7672.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-9804","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in core jar xwork core jar cve high severity vulnerability vulnerable libraries core jar xwork core jar core jar apache struts library home page a href path to dependency file ksa ksa web root ksa statistics web pom xml path to vulnerable library er repository org apache struts core core jar er repository org apache struts core core jar er repository org apache struts core core jar er repository org apache struts core core jar er repository org apache struts core core jar er repository org apache struts core core jar ksa web root ksa web target root web inf lib core jar er repository org apache struts core core jar er repository org apache struts core core jar dependency hierarchy x core jar vulnerable library xwork core jar apache struts library home page a href path to dependency file ksa ksa web root ksa bd web pom xml path to vulnerable library home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar ksa web root ksa web target root web inf lib xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar home wss scanner repository org apache struts xwork xwork core xwork core jar dependency hierarchy core jar root library x xwork core jar vulnerable library found in base branch master vulnerability details in apache struts through and through if an application allows entering a url in a form field and built in urlvalidator is used it is possible to prepare a special url which will be used to overload server process when performing validation of the url note this vulnerability exists because of an incomplete 
fix for cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache struts core org apache struts core isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache struts core isminimumfixversionavailable true minimumfixversion org apache struts core org apache struts core packagetype java groupid org apache struts xwork packagename xwork core packageversion packagefilepaths istransitivedependency true dependencytree org apache struts core org apache struts xwork xwork core isminimumfixversionavailable true minimumfixversion org apache struts core org apache struts core basebranches vulnerabilityidentifier cve vulnerabilitydetails in apache struts through and through if an application allows entering a url in a form field and built in urlvalidator is used it is possible to prepare a special url which will be used to overload server process when performing validation of the url note this vulnerability exists because of an incomplete fix for cve vulnerabilityurl
| 0
|
231,149
| 18,750,594,234
|
IssuesEvent
|
2021-11-05 01:01:46
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
System.Text.Json.Tests hang on a Checked CoreCLR
|
area-System.Text.Json disabled-test no recent activity
|
https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-45451-merge-d5c2411bf7a94e53aa/System.Text.Json.Tests/console.8c3fdb0f.log?sv=2019-07-07&se=2020-12-22T01%3A58%3A57Z&sr=c&sp=rl&sig=jVvW5f1Z0z%2BebQhjuzWnYHsuErZs%2BRgJdJI0dR22FP4%3D
```
/private/tmp/helix/working/AACC08F8/w/BA4409BB/e /private/tmp/helix/working/AACC08F8/w/BA4409BB/e
Discovering: System.Text.Json.Tests (method display = ClassAndMethod, method display options = None)
Discovered: System.Text.Json.Tests (found 2262 of 2294 test cases)
Starting: System.Text.Json.Tests (parallel test collections = on, max threads = 4)
System.Text.Json.Tests: [Long Running Test] 'System.Text.Json.Serialization.Tests.ConstructorTests_String.ReadSimpleObjectAsync', Elapsed: 00:07:21
[Long Running Test] 'System.Text.Json.Tests.Utf8JsonWriterTests.Writing3MBBase64Bytes', Elapsed: 00:07:19
[Long Running Test] 'System.Text.Json.Serialization.Tests.ConstructorTests_Span.MultipleTypes', Elapsed: 00:07:24
[Long Running Test] 'System.Text.Json.Tests.JsonDocumentTests.ParseJson_UnseekableStream_Async_BadBOM', Elapsed: 00:06:39
[Long Running Test] 'System.Text.Json.Tests.Utf8JsonReaderTests.ReadInvalidJsonStringsWithComments', Elapsed: 00:02:03
[Long Running Test] 'System.Text.Json.Serialization.Tests.StreamTests.RoundTripAsync', Elapsed: 00:05:37
[Long Running Test] 'System.Text.Json.Serialization.Tests.NumberHandlingTests.DictionariesRoundTrip', Elapsed: 00:05:04
[Long Running Test] 'System.Text.Json.Serialization.Tests.ConstructorTests_Stream.MultipleTypes', Elapsed: 00:05:22
[Long Running Test] 'System.Text.Json.Serialization.Tests.ContinuationTests.ShouldWorkAtAnyPosition_Sequence', Elapsed: 00:04:02
```
Happened on OSX on a Checked CoreCLR.
I just checked on Kusto and this has happened 11 times in the past day.
Hit in: https://github.com/dotnet/runtime/pull/45451
Maybe we just need to mark as `SkipOnCoreClr("", RuntimeConfiguration.Checked)`?
|
1.0
|
System.Text.Json.Tests hang on a Checked CoreCLR - https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-45451-merge-d5c2411bf7a94e53aa/System.Text.Json.Tests/console.8c3fdb0f.log?sv=2019-07-07&se=2020-12-22T01%3A58%3A57Z&sr=c&sp=rl&sig=jVvW5f1Z0z%2BebQhjuzWnYHsuErZs%2BRgJdJI0dR22FP4%3D
```
/private/tmp/helix/working/AACC08F8/w/BA4409BB/e /private/tmp/helix/working/AACC08F8/w/BA4409BB/e
Discovering: System.Text.Json.Tests (method display = ClassAndMethod, method display options = None)
Discovered: System.Text.Json.Tests (found 2262 of 2294 test cases)
Starting: System.Text.Json.Tests (parallel test collections = on, max threads = 4)
System.Text.Json.Tests: [Long Running Test] 'System.Text.Json.Serialization.Tests.ConstructorTests_String.ReadSimpleObjectAsync', Elapsed: 00:07:21
[Long Running Test] 'System.Text.Json.Tests.Utf8JsonWriterTests.Writing3MBBase64Bytes', Elapsed: 00:07:19
[Long Running Test] 'System.Text.Json.Serialization.Tests.ConstructorTests_Span.MultipleTypes', Elapsed: 00:07:24
[Long Running Test] 'System.Text.Json.Tests.JsonDocumentTests.ParseJson_UnseekableStream_Async_BadBOM', Elapsed: 00:06:39
[Long Running Test] 'System.Text.Json.Tests.Utf8JsonReaderTests.ReadInvalidJsonStringsWithComments', Elapsed: 00:02:03
[Long Running Test] 'System.Text.Json.Serialization.Tests.StreamTests.RoundTripAsync', Elapsed: 00:05:37
[Long Running Test] 'System.Text.Json.Serialization.Tests.NumberHandlingTests.DictionariesRoundTrip', Elapsed: 00:05:04
[Long Running Test] 'System.Text.Json.Serialization.Tests.ConstructorTests_Stream.MultipleTypes', Elapsed: 00:05:22
[Long Running Test] 'System.Text.Json.Serialization.Tests.ContinuationTests.ShouldWorkAtAnyPosition_Sequence', Elapsed: 00:04:02
```
Happened on OSX on a Checked CoreCLR.
I just checked on Kusto and this has happened 11 times in the past day.
Hit in: https://github.com/dotnet/runtime/pull/45451
Maybe we just need to mark as `SkipOnCoreClr("", RuntimeConfiguration.Checked)`?
|
test
|
system text json tests hang on a checked coreclr private tmp helix working w e private tmp helix working w e discovering system text json tests method display classandmethod method display options none discovered system text json tests found of test cases starting system text json tests parallel test collections on max threads system text json tests system text json serialization tests constructortests string readsimpleobjectasync elapsed system text json tests elapsed system text json serialization tests constructortests span multipletypes elapsed system text json tests jsondocumenttests parsejson unseekablestream async badbom elapsed system text json tests readinvalidjsonstringswithcomments elapsed system text json serialization tests streamtests roundtripasync elapsed system text json serialization tests numberhandlingtests dictionariesroundtrip elapsed system text json serialization tests constructortests stream multipletypes elapsed system text json serialization tests continuationtests shouldworkatanyposition sequence elapsed happened on osx on a checked coreclr i just checked on kusto and this has happened times in the past day hit in maybe we just need to mark as skiponcoreclr runtimeconfiguration checked
| 1
|
265,302
| 23,159,222,058
|
IssuesEvent
|
2022-07-29 15:50:20
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test: TestLogic_udf failed
|
C-test-failure O-robot branch-master
|
pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test.TestLogic_udf [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_SqlLogicTestHighVModuleNightlyBazel/5902772?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_SqlLogicTestHighVModuleNightlyBazel/5902772?buildTab=artifacts#/) on master @ [1129fbc650fe3a037b03aea1e5f1d8078618cb1c](https://github.com/cockroachdb/cockroach/commits/1129fbc650fe3a037b03aea1e5f1d8078618cb1c):
```
=== RUN TestLogic_udf
test_log_scope.go:162: test logs captured to: /artifacts/tmp/_tmp/64a43d384fb5a8838750de8877e59d45/logTestLogic_udf2684271147
test_log_scope.go:80: use -show-logs to present logs inline
[06:43:57] setting distsql_workmem='77297B';
[06:43:57] rng seed: 3415018278470284350
[06:43:57] --- queries start here (file: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3538/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test_/fakedist-disk_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/udf)
[06:43:57] CREATE OR REPLACE FUNCTION f(a int) RETURNS INT LANGUAGE SQL AS 'SELECT 1';
[06:43:57] CREATE FUNCTION f(a int) RETURNS INT LEAKPROOF STABLE LANGUAGE SQL AS 'SELECT 1';
[06:43:57] CREATE FUNCTION f() RETURNS INT IMMUTABLE LANGUAGE SQL AS $$ SELECT 'hello' $$;
[06:43:57] CREATE TABLE t_implicit_type(a INT PRIMARY KEY, b STRING);;
[06:43:57] rewrote:
CREATE TABLE t_implicit_type (a INT8 PRIMARY KEY, b STRING, FAMILY (b), FAMILY (a));
[06:43:57] CREATE FUNCTION f() RETURNS INT IMMUTABLE LANGUAGE SQL AS $$ SELECT a, b from t_implicit_type $$;
[06:43:57] CREATE FUNCTION f() RETURNS t_implicit_type IMMUTABLE LANGUAGE SQL AS $$ SELECT * from t_implicit_type $$;
[06:43:57] -- OK;
[06:43:57] -- FAIL
logic.go:2216:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3538/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test_/fakedist-disk_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/udf:18:
expected:
pq: unimplemented: functions do not currently support \* expressions\nHINT: You have attempted to use a feature that is not yet implemented\.\nSee: https://go\.crdb\.dev/issue-v/10028/v22\.2
got:
pq: unimplemented: functions do not currently support * expressions
HINT: You have attempted to use a feature that is not yet implemented.
See: https://go.crdb.dev/issue-v/10028/dev
[06:43:57] --- done: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3538/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test_/fakedist-disk_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/udf with config fakedist-disk: 6 tests, 1 failures
logic.go:3570:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3538/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test_/fakedist-disk_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/udf:21: error while processing
logic.go:3570: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3538/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test_/fakedist-disk_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/udf:21: too many errors encountered, skipping the rest of the input
panic.go:500: -- test log scope end --
--- FAIL: TestLogic_udf (1.88s)
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestLogic_udf.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-18168
|
1.0
|
pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test: TestLogic_udf failed - pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test.TestLogic_udf [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_SqlLogicTestHighVModuleNightlyBazel/5902772?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_SqlLogicTestHighVModuleNightlyBazel/5902772?buildTab=artifacts#/) on master @ [1129fbc650fe3a037b03aea1e5f1d8078618cb1c](https://github.com/cockroachdb/cockroach/commits/1129fbc650fe3a037b03aea1e5f1d8078618cb1c):
```
=== RUN TestLogic_udf
test_log_scope.go:162: test logs captured to: /artifacts/tmp/_tmp/64a43d384fb5a8838750de8877e59d45/logTestLogic_udf2684271147
test_log_scope.go:80: use -show-logs to present logs inline
[06:43:57] setting distsql_workmem='77297B';
[06:43:57] rng seed: 3415018278470284350
[06:43:57] --- queries start here (file: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3538/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test_/fakedist-disk_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/udf)
[06:43:57] CREATE OR REPLACE FUNCTION f(a int) RETURNS INT LANGUAGE SQL AS 'SELECT 1';
[06:43:57] CREATE FUNCTION f(a int) RETURNS INT LEAKPROOF STABLE LANGUAGE SQL AS 'SELECT 1';
[06:43:57] CREATE FUNCTION f() RETURNS INT IMMUTABLE LANGUAGE SQL AS $$ SELECT 'hello' $$;
[06:43:57] CREATE TABLE t_implicit_type(a INT PRIMARY KEY, b STRING);;
[06:43:57] rewrote:
CREATE TABLE t_implicit_type (a INT8 PRIMARY KEY, b STRING, FAMILY (b), FAMILY (a));
[06:43:57] CREATE FUNCTION f() RETURNS INT IMMUTABLE LANGUAGE SQL AS $$ SELECT a, b from t_implicit_type $$;
[06:43:57] CREATE FUNCTION f() RETURNS t_implicit_type IMMUTABLE LANGUAGE SQL AS $$ SELECT * from t_implicit_type $$;
[06:43:57] -- OK;
[06:43:57] -- FAIL
logic.go:2216:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3538/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test_/fakedist-disk_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/udf:18:
expected:
pq: unimplemented: functions do not currently support \* expressions\nHINT: You have attempted to use a feature that is not yet implemented\.\nSee: https://go\.crdb\.dev/issue-v/10028/v22\.2
got:
pq: unimplemented: functions do not currently support * expressions
HINT: You have attempted to use a feature that is not yet implemented.
See: https://go.crdb.dev/issue-v/10028/dev
[06:43:57] --- done: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3538/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test_/fakedist-disk_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/udf with config fakedist-disk: 6 tests, 1 failures
logic.go:3570:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3538/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test_/fakedist-disk_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/udf:21: error while processing
logic.go:3570: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3538/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/sql/logictest/tests/fakedist-disk/fakedist-disk_test_/fakedist-disk_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/udf:21: too many errors encountered, skipping the rest of the input
panic.go:500: -- test log scope end --
--- FAIL: TestLogic_udf (1.88s)
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestLogic_udf.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-18168
|
test
|
pkg sql logictest tests fakedist disk fakedist disk test testlogic udf failed pkg sql logictest tests fakedist disk fakedist disk test testlogic udf with on master run testlogic udf test log scope go test logs captured to artifacts tmp tmp logtestlogic test log scope go use show logs to present logs inline setting distsql workmem rng seed queries start here file home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out dbg bin pkg sql logictest tests fakedist disk fakedist disk test fakedist disk test runfiles com github cockroachdb cockroach pkg sql logictest testdata logic test udf create or replace function f a int returns int language sql as select create function f a int returns int leakproof stable language sql as select create function f returns int immutable language sql as select hello create table t implicit type a int primary key b string rewrote create table t implicit type a primary key b string family b family a create function f returns int immutable language sql as select a b from t implicit type create function f returns t implicit type immutable language sql as select from t implicit type ok fail logic go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out dbg bin pkg sql logictest tests fakedist disk fakedist disk test fakedist disk test runfiles com github cockroachdb cockroach pkg sql logictest testdata logic test udf expected pq unimplemented functions do not currently support expressions nhint you have attempted to use a feature that is not yet implemented nsee got pq unimplemented functions do not currently support expressions hint you have attempted to use a feature that is not yet implemented see done home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out dbg bin pkg sql logictest tests fakedist disk fakedist disk test fakedist disk test runfiles com github cockroachdb cockroach pkg sql logictest testdata logic test udf with config fakedist disk tests failures logic go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out dbg bin pkg sql logictest tests fakedist disk fakedist disk test fakedist disk test runfiles com github cockroachdb cockroach pkg sql logictest testdata logic test udf error while processing logic go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out dbg bin pkg sql logictest tests fakedist disk fakedist disk test fakedist disk test runfiles com github cockroachdb cockroach pkg sql logictest testdata logic test udf too many errors encountered skipping the rest of the input panic go test log scope end fail testlogic udf help see also jira issue crdb
| 1
|
278,211
| 8,638,137,091
|
IssuesEvent
|
2018-11-23 13:46:41
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
mobile.twitter.com - design is broken
|
browser-firefox-mobile priority-critical
|
<!-- @browser: Firefox Mobile 65.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:65.0) Gecko/65.0 Firefox/65.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://mobile.twitter.com/login/error?username_or_email=donnashelton765%40gmail.com&redirect_after_login=%2Fintent%2Ftweet%3Fui_metrics%3D%257B%2522rf%2522%253A%257B%2522c2b102ffd94c0d8bc63f624b956c80cf84d47c72b8eb742ffe41e65edffe3420%2522%253A1%252C%2522a24031d44b02b90b36c0946dc2280a07045a310c26606c495a71d431c3a91a71%2522%253A0%252C%2522ac9eaf4e3307c19b602947f9d83df9b6e3f24f2c57e2483a84f8fb06da00acea%2522%253A-153%252C%2522e2f31cf7104cd32c413e1bd74fe34c920407f918c84b667be477700db34fe490%2522%253A8%257D%252C%2522s%2522%253A%2522YKXdf0n5yyQIJfoULl2vB5pE7gcqnzgrgmeAhtR1dDt9h4ADAxt0-3ezsYe8J_Vr9jF2GcqmyjygOa9w6F51Ki48ck0L49iTLg3lCSzjLmcls0ifO8Sg4q_VEN97H9deoOvC4hJKidCn_JZ_W35n3GUtauj4VNX9EC01MUOkaCY6T9R4-a5ktfxix2YN1nWfFmSOAIfryWh-_L1_cdqHd0ut8kvkfuAfWXa9Wct0beh2H2e-DiYqYtnTPONkW7MTSdD6b5IvxiwqGhoUAauAQ1Akq7FsOuGXLnADNFnHbg-cE2A7p13dkwzNlPur-ENc5-n5c-zGcdvcrPDEQ8yE2QAAAWdAsRt_%2522%257D%26url%3Dhttps%253A%252F%252Fwebcompat.com%252Fissues%252F21746%26text%3DI%2520just%2520filed%2520a%2520bug%2520on%2520the%2520internet%253A%26session%255Busername_or_email%255D%3Ddonnashelton765%2540gmail.com%26status%3DI%2520just%2520filed%2520a%2520bug%2520on%2520the%2520internet%253A%2520https%253A%252F%252Fwebcompat.com%252Fissues%252F21746%2520via%2520%2540webcompat%26via%3Dwebcompat
**Browser / Version**: Firefox Mobile 65.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: spam
**Steps to Reproduce**:
Site is unusable
[](https://webcompat.com/uploads/2018/11/0f410583-5391-44e7-b9ab-f54c681cb0ba.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181121100030</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
mobile.twitter.com - design is broken - <!-- @browser: Firefox Mobile 65.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:65.0) Gecko/65.0 Firefox/65.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://mobile.twitter.com/login/error?username_or_email=donnashelton765%40gmail.com&redirect_after_login=%2Fintent%2Ftweet%3Fui_metrics%3D%257B%2522rf%2522%253A%257B%2522c2b102ffd94c0d8bc63f624b956c80cf84d47c72b8eb742ffe41e65edffe3420%2522%253A1%252C%2522a24031d44b02b90b36c0946dc2280a07045a310c26606c495a71d431c3a91a71%2522%253A0%252C%2522ac9eaf4e3307c19b602947f9d83df9b6e3f24f2c57e2483a84f8fb06da00acea%2522%253A-153%252C%2522e2f31cf7104cd32c413e1bd74fe34c920407f918c84b667be477700db34fe490%2522%253A8%257D%252C%2522s%2522%253A%2522YKXdf0n5yyQIJfoULl2vB5pE7gcqnzgrgmeAhtR1dDt9h4ADAxt0-3ezsYe8J_Vr9jF2GcqmyjygOa9w6F51Ki48ck0L49iTLg3lCSzjLmcls0ifO8Sg4q_VEN97H9deoOvC4hJKidCn_JZ_W35n3GUtauj4VNX9EC01MUOkaCY6T9R4-a5ktfxix2YN1nWfFmSOAIfryWh-_L1_cdqHd0ut8kvkfuAfWXa9Wct0beh2H2e-DiYqYtnTPONkW7MTSdD6b5IvxiwqGhoUAauAQ1Akq7FsOuGXLnADNFnHbg-cE2A7p13dkwzNlPur-ENc5-n5c-zGcdvcrPDEQ8yE2QAAAWdAsRt_%2522%257D%26url%3Dhttps%253A%252F%252Fwebcompat.com%252Fissues%252F21746%26text%3DI%2520just%2520filed%2520a%2520bug%2520on%2520the%2520internet%253A%26session%255Busername_or_email%255D%3Ddonnashelton765%2540gmail.com%26status%3DI%2520just%2520filed%2520a%2520bug%2520on%2520the%2520internet%253A%2520https%253A%252F%252Fwebcompat.com%252Fissues%252F21746%2520via%2520%2540webcompat%26via%3Dwebcompat
**Browser / Version**: Firefox Mobile 65.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: spam
**Steps to Reproduce**:
Site is unusable
[](https://webcompat.com/uploads/2018/11/0f410583-5391-44e7-b9ab-f54c681cb0ba.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181121100030</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
mobile twitter com design is broken url browser version firefox mobile operating system android tested another browser yes problem type design is broken description spam steps to reproduce site is unusable browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel nightly console messages from with ❤️
| 0
|
334,449
| 29,865,791,920
|
IssuesEvent
|
2023-06-20 03:40:02
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
DISABLED test_avg_pool2d_backward3_cpu (__main__.CpuTests)
|
triaged module: flaky-tests skipped module: inductor
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_avg_pool2d_backward3_cpu&suite=CpuTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14022504720).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_avg_pool2d_backward3_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_torchinductor.py` or `inductor/test_torchinductor.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @bertmaher
|
1.0
|
DISABLED test_avg_pool2d_backward3_cpu (__main__.CpuTests) - Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_avg_pool2d_backward3_cpu&suite=CpuTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14022504720).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_avg_pool2d_backward3_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_torchinductor.py` or `inductor/test_torchinductor.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @bertmaher
|
test
|
disabled test avg cpu main cputests platforms rocm this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not assume things are okay if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test avg cpu there should be several instances run as flaky tests are rerun in ci from which you can study the logs test file path inductor test torchinductor py or inductor test torchinductor py cc voznesenskym penguinwu eikanwang guobing chen xiaobingsuper zhuhaozhe blzheng xia weiwen wenzhe nrv jiayisunx ipiszy ngimel bertmaher
| 1
|
238,345
| 26,098,136,715
|
IssuesEvent
|
2022-12-27 01:03:45
|
samqws-test/soundcloud-redux
|
https://api.github.com/repos/samqws-test/soundcloud-redux
|
closed
|
winston-2.3.1.tgz: 1 vulnerabilities (highest severity is: 7.8) - autoclosed
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>winston-2.3.1.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/winston/node_modules/async/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/samqws-test/soundcloud-redux/commit/7a35925975446ad0b61403482d228ef51ec67899">7a35925975446ad0b61403482d228ef51ec67899</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (winston version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-43138](https://www.mend.io/vulnerability-database/CVE-2021-43138) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.8 | async-1.0.0.tgz | Transitive | 2.4.6 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-43138</summary>
### Vulnerable Library - <b>async-1.0.0.tgz</b></p>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-1.0.0.tgz">https://registry.npmjs.org/async/-/async-1.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/winston/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- winston-2.3.1.tgz (Root Library)
- :x: **async-1.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-test/soundcloud-redux/commit/7a35925975446ad0b61403482d228ef51ec67899">7a35925975446ad0b61403482d228ef51ec67899</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Async before 2.6.4 and 3.x before 3.2.2, a malicious user can obtain privileges via the mapValues() method, aka lib/internal/iterator.js createObjectIterator prototype pollution.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 2.6.4</p>
<p>Direct dependency fix Resolution (winston): 2.4.6</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
True
|
winston-2.3.1.tgz: 1 vulnerabilities (highest severity is: 7.8) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>winston-2.3.1.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/winston/node_modules/async/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/samqws-test/soundcloud-redux/commit/7a35925975446ad0b61403482d228ef51ec67899">7a35925975446ad0b61403482d228ef51ec67899</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (winston version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-43138](https://www.mend.io/vulnerability-database/CVE-2021-43138) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.8 | async-1.0.0.tgz | Transitive | 2.4.6 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-43138</summary>
### Vulnerable Library - <b>async-1.0.0.tgz</b></p>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-1.0.0.tgz">https://registry.npmjs.org/async/-/async-1.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/winston/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- winston-2.3.1.tgz (Root Library)
- :x: **async-1.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-test/soundcloud-redux/commit/7a35925975446ad0b61403482d228ef51ec67899">7a35925975446ad0b61403482d228ef51ec67899</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Async before 2.6.4 and 3.x before 3.2.2, a malicious user can obtain privileges via the mapValues() method, aka lib/internal/iterator.js createObjectIterator prototype pollution.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 2.6.4</p>
<p>Direct dependency fix Resolution (winston): 2.4.6</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
non_test
|
winston tgz vulnerabilities highest severity is autoclosed vulnerable library winston tgz path to dependency file package json path to vulnerable library node modules winston node modules async package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in winston version remediation available high async tgz transitive details cve vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file package json path to vulnerable library node modules winston node modules async package json dependency hierarchy winston tgz root library x async tgz vulnerable library found in head commit a href found in base branch main vulnerability details in async before and x before a malicious user can obtain privileges via the mapvalues method aka lib internal iterator js createobjectiterator prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution async direct dependency fix resolution winston rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue
| 0
|
70,947
| 9,462,903,467
|
IssuesEvent
|
2019-04-17 16:25:53
|
botanicus/docker-project-manager
|
https://api.github.com/repos/botanicus/docker-project-manager
|
opened
|
Minimal ZSH configuration
|
documentation
|
This probably will just be a section in README, but it'd be handy to be able to copy & paste from somewhere:
1. Install ZSH.
2. `chsh -s $(which zsh)`
3. `wget https://raw.githubusercontent.com/botanicus/dotfiles/master/.zsh/host.zsh -O ~/.zshrc` (capital `-O` writes the download to the file; lowercase `-o` only redirects wget's log)
|
1.0
|
Minimal ZSH configuration - This probably will just be a section in README, but it'd be handy to be able to copy & paste from somewhere:
1. Install ZSH.
2. `chsh -s $(which zsh)`
3. `wget https://raw.githubusercontent.com/botanicus/dotfiles/master/.zsh/host.zsh -O ~/.zshrc` (capital `-O` writes the download to the file; lowercase `-o` only redirects wget's log)
|
non_test
|
minimal zsh configuration this probably will just be a section in readme but it d be handy to be able to copy paste from somewhere install zsh chsh s which zsh wget o zshrc
| 0
|
673,596
| 23,021,612,106
|
IssuesEvent
|
2022-07-22 05:30:42
|
GoogleContainerTools/skaffold
|
https://api.github.com/repos/GoogleContainerTools/skaffold
|
opened
|
skaffold run on a kpt sample fails
|
priority/p1 v2-beta1-bugbash
|
skaffold v2 run on a kpt sample fails with the error below
```
➜ kpt_demo git:(guestbook-sample-v2) ✗ skaffoldv2 run -d gcr.io/tejal-gke1
Generating tags...
- redis-slave -> gcr.io/tejal-gke1/redis-slave:latest
- php-redis -> gcr.io/tejal-gke1/php-redis:latest
- skaffold-helm -> gcr.io/tejal-gke1/skaffold-helm:latest
Checking cache...
- redis-slave: Not found. Building
- php-redis: Not found. Building
- skaffold-helm: Found Remotely
Starting build...
Building [php-redis]...
Target platforms: [linux/amd64]
[+] Building 2.0s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.28kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/php:5-apache 0.4s
=> [1/8] FROM docker.io/library/php:5-apache@sha256:0a40fd273961b99d8afe69a61a68c73c04bc0caa9de384d3b2dd9e7986eec86d 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 99B 0.0s
=> CACHED [2/8] RUN apt-get update 0.0s
=> CACHED [3/8] RUN pear channel-discover pear.nrk.io 0.0s
=> [4/8] RUN sed -i 's#ErrorLog /proc/self/fd/2#ErrorLog "|$/bin/cat 1>\&2"#' /etc/apache2/apache2.conf 0.4s
=> [5/8] RUN sed -i 's#CustomLog /proc/self/fd/1 combined#CustomLog "|/bin/cat" combined#' /etc/apache2/apache2.conf 0.6s
=> [6/8] ADD guestbook.php /var/www/html/guestbook.php 0.0s
=> [7/8] ADD controllers.js /var/www/html/controllers.js 0.0s
=> [8/8] ADD index.html /var/www/html/index.html 0.0s
=> exporting to image 0.2s
=> => exporting layers 0.2s
=> => writing image sha256:9fa5e148a120f132ab4ab6d28d13e1c00211386f333b67025f6396874a2b341f 0.0s
=> => naming to gcr.io/tejal-gke1/php-redis:latest 0.0s
The push refers to repository [gcr.io/tejal-gke1/php-redis]
9d8f79935bd0: Preparing
1ee55c2860d0: Preparing
09483629043a: Preparing
1c0461fb933b: Preparing
c0b9b72a15d2: Preparing
1bd657aec138: Preparing
b7220cccc556: Preparing
1aab22401f12: Preparing
13ab94c9aa15: Preparing
588ee8a7eeec: Preparing
bebcda512a6d: Preparing
5ce59bfe8a3a: Preparing
d89c229e40ae: Preparing
9311481e1bdc: Preparing
4dd88f8a7689: Preparing
b1841504f6c8: Preparing
6eb3cfd4ad9e: Preparing
82bded2c3a7c: Preparing
b87a266e6a9c: Preparing
3c816b4ead84: Preparing
588ee8a7eeec: Waiting
bebcda512a6d: Waiting
5ce59bfe8a3a: Waiting
d89c229e40ae: Waiting
9311481e1bdc: Waiting
4dd88f8a7689: Waiting
b1841504f6c8: Waiting
6eb3cfd4ad9e: Waiting
82bded2c3a7c: Waiting
b87a266e6a9c: Waiting
3c816b4ead84: Waiting
b7220cccc556: Waiting
1bd657aec138: Waiting
1aab22401f12: Waiting
13ab94c9aa15: Waiting
9d8f79935bd0: Pushed
09483629043a: Pushed
c0b9b72a15d2: Pushed
1ee55c2860d0: Pushed
1c0461fb933b: Pushed
1aab22401f12: Pushed
13ab94c9aa15: Pushed
1bd657aec138: Pushed
b7220cccc556: Pushed
588ee8a7eeec: Pushed
bebcda512a6d: Pushed
5ce59bfe8a3a: Pushed
d89c229e40ae: Pushed
6eb3cfd4ad9e: Layer already exists
82bded2c3a7c: Layer already exists
b87a266e6a9c: Layer already exists
9311481e1bdc: Pushed
3c816b4ead84: Layer already exists
4dd88f8a7689: Pushed
b1841504f6c8: Pushed
latest: digest: sha256:94b89c1b8d0a8fd0fe4794d557f75a9863dbe4643bf1ac007d8d39e549bc4fdd size: 4492
Build [php-redis] succeeded
Building [redis-slave]...
Target platforms: [linux/amd64]
[+] Building 1.2s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 40B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/redis:3.2.9 0.9s
=> [internal] load build context 0.0s
=> => transferring context: 28B 0.0s
=> [1/2] FROM docker.io/library/redis:3.2.9@sha256:613b3726ddff603e2730f7f4ae7796d63632f17a9cd82d787d60084b8b0109f1 0.0s
=> CACHED [2/2] COPY ./run.sh / 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:d42582432da23d01d50a4d43d650a06bc34ae7cfd2ed3bbdbd5cfc5325017460 0.0s
=> => naming to gcr.io/tejal-gke1/redis-slave:latest 0.0s
Build [redis-slave] succeeded
Starting test...
error: directory "redis-slave" already exists, please delete the directory and retry
Package ".kpt-pipeline":
Successfully executed 0 function(s) in 1 package(s).
Tags used in deployment:
- redis-slave -> gcr.io/tejal-gke1/redis-slave:latest@sha256:54ee313a1b259a7e027fdc22295b40d36a92eb24b1b1283d57f2d021a575e369
- php-redis -> gcr.io/tejal-gke1/php-redis:latest@sha256:94b89c1b8d0a8fd0fe4794d557f75a9863dbe4643bf1ac007d8d39e549bc4fdd
- skaffold-helm -> gcr.io/tejal-gke1/skaffold-helm:latest@sha256:c89a3d80352340151828252d1d1478617d3939d3923ab92dca7062e201b29c61
Starting deploy...
- error: no objects passed to apply
kubectl apply: exit status 1
➜ kpt_demo git:(guestbook-sample-v2) ✗ vim skaffold.yaml
➜ kpt_demo git:(guestbook-sample-v2) ✗ ls .kpt-pipeline
Kptfile manifests.yaml
➜ kpt_demo git:(guestbook-sample-v2) ✗ cat .kpt-pipeline/manifests.yaml
➜ kpt_demo git:(guestbook-sample-v2) ✗ cat .kpt-pipeline/Kptfile
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
name: .kpt-pipeline
info:
description: sample description
pipeline:
mutators:
- image: gcr.io/kpt-fn/set-annotations:v0.1
configMap:
author: yuwenma-5
- image: gcr.io/kpt-fn/create-setters:unstable
configMap:
app: guestbook
- image: gcr.io/kpt-fn/apply-setters:unstable
configMap:
app: guestbook-yuwen
validators:
- image: gcr.io/kpt-fn/kubeval:v0.1
➜ kpt_demo git:(guestbook-sample-v2) ✗
```
Steps to reproduce.
1) checkout sample https://github.com/yuwenma/skaffold/tree/guestbook-sample-v2
2) Fix the obsolete config: copy and paste the YAML below into skaffold.yaml
<details>
```
apiVersion: skaffold/v3alpha1
kind: Config
metadata:
name: guestbook
build:
artifacts:
- image: redis-slave
context: redis-slave/
- image: php-redis
context: php-redis
- image: skaffold-helm
context: helm
tagPolicy:
sha256: {}
local:
push: true
manifests:
kustomize:
paths:
- php-redis/config/*.yaml
rawYaml:
- redis-master/*.yaml
kpt:
- redis-slave/
helm:
releases:
- name: skaffold-helm
chartPath: helm/charts
transform:
- name: set-annotations
configMap:
- "author:yuwenma-5"
- name: create-setters
configMap:
- "app:guestbook"
- name: apply-setters
configMap:
- "app:guestbook-yuwen"
validate:
- name: kubeval
deploy:
kpt:
# namespace: default
# name: inventory-90195255
# inventoryID: 3cec9ce7-d9eb-4503-b44c-ffa6ca3a0d49
```
</details>
3) run `skaffold init`
|
1.0
|
skaffold run on a kpt sample fails - skaffold v2 run on a kpt sample fails with the error below
```
➜ kpt_demo git:(guestbook-sample-v2) ✗ skaffoldv2 run -d gcr.io/tejal-gke1
Generating tags...
- redis-slave -> gcr.io/tejal-gke1/redis-slave:latest
- php-redis -> gcr.io/tejal-gke1/php-redis:latest
- skaffold-helm -> gcr.io/tejal-gke1/skaffold-helm:latest
Checking cache...
- redis-slave: Not found. Building
- php-redis: Not found. Building
- skaffold-helm: Found Remotely
Starting build...
Building [php-redis]...
Target platforms: [linux/amd64]
[+] Building 2.0s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.28kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/php:5-apache 0.4s
=> [1/8] FROM docker.io/library/php:5-apache@sha256:0a40fd273961b99d8afe69a61a68c73c04bc0caa9de384d3b2dd9e7986eec86d 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 99B 0.0s
=> CACHED [2/8] RUN apt-get update 0.0s
=> CACHED [3/8] RUN pear channel-discover pear.nrk.io 0.0s
=> [4/8] RUN sed -i 's#ErrorLog /proc/self/fd/2#ErrorLog "|$/bin/cat 1>\&2"#' /etc/apache2/apache2.conf 0.4s
=> [5/8] RUN sed -i 's#CustomLog /proc/self/fd/1 combined#CustomLog "|/bin/cat" combined#' /etc/apache2/apache2.conf 0.6s
=> [6/8] ADD guestbook.php /var/www/html/guestbook.php 0.0s
=> [7/8] ADD controllers.js /var/www/html/controllers.js 0.0s
=> [8/8] ADD index.html /var/www/html/index.html 0.0s
=> exporting to image 0.2s
=> => exporting layers 0.2s
=> => writing image sha256:9fa5e148a120f132ab4ab6d28d13e1c00211386f333b67025f6396874a2b341f 0.0s
=> => naming to gcr.io/tejal-gke1/php-redis:latest 0.0s
The push refers to repository [gcr.io/tejal-gke1/php-redis]
9d8f79935bd0: Preparing
1ee55c2860d0: Preparing
09483629043a: Preparing
1c0461fb933b: Preparing
c0b9b72a15d2: Preparing
1bd657aec138: Preparing
b7220cccc556: Preparing
1aab22401f12: Preparing
13ab94c9aa15: Preparing
588ee8a7eeec: Preparing
bebcda512a6d: Preparing
5ce59bfe8a3a: Preparing
d89c229e40ae: Preparing
9311481e1bdc: Preparing
4dd88f8a7689: Preparing
b1841504f6c8: Preparing
6eb3cfd4ad9e: Preparing
82bded2c3a7c: Preparing
b87a266e6a9c: Preparing
3c816b4ead84: Preparing
588ee8a7eeec: Waiting
bebcda512a6d: Waiting
5ce59bfe8a3a: Waiting
d89c229e40ae: Waiting
9311481e1bdc: Waiting
4dd88f8a7689: Waiting
b1841504f6c8: Waiting
6eb3cfd4ad9e: Waiting
82bded2c3a7c: Waiting
b87a266e6a9c: Waiting
3c816b4ead84: Waiting
b7220cccc556: Waiting
1bd657aec138: Waiting
1aab22401f12: Waiting
13ab94c9aa15: Waiting
9d8f79935bd0: Pushed
09483629043a: Pushed
c0b9b72a15d2: Pushed
1ee55c2860d0: Pushed
1c0461fb933b: Pushed
1aab22401f12: Pushed
13ab94c9aa15: Pushed
1bd657aec138: Pushed
b7220cccc556: Pushed
588ee8a7eeec: Pushed
bebcda512a6d: Pushed
5ce59bfe8a3a: Pushed
d89c229e40ae: Pushed
6eb3cfd4ad9e: Layer already exists
82bded2c3a7c: Layer already exists
b87a266e6a9c: Layer already exists
9311481e1bdc: Pushed
3c816b4ead84: Layer already exists
4dd88f8a7689: Pushed
b1841504f6c8: Pushed
latest: digest: sha256:94b89c1b8d0a8fd0fe4794d557f75a9863dbe4643bf1ac007d8d39e549bc4fdd size: 4492
Build [php-redis] succeeded
Building [redis-slave]...
Target platforms: [linux/amd64]
[+] Building 1.2s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 40B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/redis:3.2.9 0.9s
=> [internal] load build context 0.0s
=> => transferring context: 28B 0.0s
=> [1/2] FROM docker.io/library/redis:3.2.9@sha256:613b3726ddff603e2730f7f4ae7796d63632f17a9cd82d787d60084b8b0109f1 0.0s
=> CACHED [2/2] COPY ./run.sh / 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:d42582432da23d01d50a4d43d650a06bc34ae7cfd2ed3bbdbd5cfc5325017460 0.0s
=> => naming to gcr.io/tejal-gke1/redis-slave:latest 0.0s
Build [redis-slave] succeeded
Starting test...
error: directory "redis-slave" already exists, please delete the directory and retry
Package ".kpt-pipeline":
Successfully executed 0 function(s) in 1 package(s).
Tags used in deployment:
- redis-slave -> gcr.io/tejal-gke1/redis-slave:latest@sha256:54ee313a1b259a7e027fdc22295b40d36a92eb24b1b1283d57f2d021a575e369
- php-redis -> gcr.io/tejal-gke1/php-redis:latest@sha256:94b89c1b8d0a8fd0fe4794d557f75a9863dbe4643bf1ac007d8d39e549bc4fdd
- skaffold-helm -> gcr.io/tejal-gke1/skaffold-helm:latest@sha256:c89a3d80352340151828252d1d1478617d3939d3923ab92dca7062e201b29c61
Starting deploy...
- error: no objects passed to apply
kubectl apply: exit status 1
➜ kpt_demo git:(guestbook-sample-v2) ✗ vim skaffold.yaml
➜ kpt_demo git:(guestbook-sample-v2) ✗ ls .kpt-pipeline
Kptfile manifests.yaml
➜ kpt_demo git:(guestbook-sample-v2) ✗ cat .kpt-pipeline/manifests.yaml
➜ kpt_demo git:(guestbook-sample-v2) ✗ cat .kpt-pipeline/Kptfile
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
name: .kpt-pipeline
info:
description: sample description
pipeline:
mutators:
- image: gcr.io/kpt-fn/set-annotations:v0.1
configMap:
author: yuwenma-5
- image: gcr.io/kpt-fn/create-setters:unstable
configMap:
app: guestbook
- image: gcr.io/kpt-fn/apply-setters:unstable
configMap:
app: guestbook-yuwen
validators:
- image: gcr.io/kpt-fn/kubeval:v0.1
➜ kpt_demo git:(guestbook-sample-v2) ✗
```
Steps to reproduce.
1) checkout sample https://github.com/yuwenma/skaffold/tree/guestbook-sample-v2
2) Fix the obsolete config: copy and paste the YAML below into skaffold.yaml
<details>
```
apiVersion: skaffold/v3alpha1
kind: Config
metadata:
name: guestbook
build:
artifacts:
- image: redis-slave
context: redis-slave/
- image: php-redis
context: php-redis
- image: skaffold-helm
context: helm
tagPolicy:
sha256: {}
local:
push: true
manifests:
kustomize:
paths:
- php-redis/config/*.yaml
rawYaml:
- redis-master/*.yaml
kpt:
- redis-slave/
helm:
releases:
- name: skaffold-helm
chartPath: helm/charts
transform:
- name: set-annotations
configMap:
- "author:yuwenma-5"
- name: create-setters
configMap:
- "app:guestbook"
- name: apply-setters
configMap:
- "app:guestbook-yuwen"
validate:
- name: kubeval
deploy:
kpt:
# namespace: default
# name: inventory-90195255
# inventoryID: 3cec9ce7-d9eb-4503-b44c-ffa6ca3a0d49
```
</details>
3) run `skaffold init`
|
non_test
|
skaffold run on a kpt sample fails skaffold run on a kpt sample fails with error below ➜ kpt demo git guestbook sample ✗ run d gcr io tejal generating tags redis slave gcr io tejal redis slave latest php redis gcr io tejal php redis latest skaffold helm gcr io tejal skaffold helm latest checking cache redis slave not found building php redis not found building skaffold helm found remotely starting build building target platforms building finished load build definition from dockerfile transferring dockerfile load dockerignore transferring context load metadata for docker io library php apache from docker io library php apache load build context transferring context cached run apt get update cached run pear channel discover pear nrk io run sed i s errorlog proc self fd errorlog bin cat etc conf run sed i s customlog proc self fd combined customlog bin cat combined etc conf add guestbook php var www html guestbook php add controllers js var www html controllers js add index html var www html index html exporting to image exporting layers writing image naming to gcr io tejal php redis latest the push refers to repository preparing preparing preparing preparing preparing preparing preparing preparing preparing preparing preparing preparing preparing preparing preparing preparing preparing preparing preparing preparing waiting waiting waiting waiting waiting waiting waiting waiting waiting waiting waiting waiting waiting waiting waiting pushed pushed pushed pushed pushed pushed pushed pushed pushed pushed pushed pushed pushed layer already exists layer already exists layer already exists pushed layer already exists pushed pushed latest digest size build succeeded building target platforms building finished load build definition from dockerfile transferring dockerfile load dockerignore transferring context load metadata for docker io library redis load build context transferring context from docker io library redis cached copy run sh exporting to image exporting layers 
writing image naming to gcr io tejal redis slave latest build succeeded starting test error directory redis slave already exists please delete the directory and retry package kpt pipeline successfully executed function s in package s tags used in deployment redis slave gcr io tejal redis slave latest php redis gcr io tejal php redis latest skaffold helm gcr io tejal skaffold helm latest starting deploy error no objects passed to apply kubectl apply exit status ➜ kpt demo git guestbook sample ✗ vim skaffold yaml ➜ kpt demo git guestbook sample ✗ ls kpt pipeline kptfile manifests yaml ➜ kpt demo git guestbook sample ✗ cat kpt pipeline manifests yaml ➜ kpt demo git guestbook sample ✗ cat kpt pipeline kptfile apiversion kpt dev kind kptfile metadata name kpt pipeline info description sample description pipeline mutators image gcr io kpt fn set annotations configmap author yuwenma image gcr io kpt fn create setters unstable configmap app guestbook image gcr io kpt fn apply setters unstable configmap app guestbook yuwen validators image gcr io kpt fn kubeval ➜ kpt demo git guestbook sample ✗ steps to reproduce checkout sample fix the obsolete config copy paste below yaml into skaffold yaml apiversion skaffold kind config metadata name guestbook build artifacts image redis slave context redis slave image php redis context php redis image skaffold helm context helm tagpolicy local push true manifests kustomize paths php redis config yaml rawyaml redis master yaml kpt redis slave helm releases name skaffold helm chartpath helm charts transform name set annotations configmap author yuwenma name create setters configmap app guestbook name apply setters configmap app guestbook yuwen validate name kubeval deploy kpt namespace default name inventory inventoryid run skaffold init
| 0
|
24,738
| 4,107,251,081
|
IssuesEvent
|
2016-06-06 12:18:19
|
handsontable/handsontable
|
https://api.github.com/repos/handsontable/handsontable
|
closed
|
jQuery noConflict bug
|
Bug Core: compatibility Core: handsontable Improvement suggestion Priority: normal Released Tested
|
Hi, I found an issue in version 0.24.2: when using jQuery in noConflict mode it throws an Uncaught TypeError
http://jsbin.com/warujotaku
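For context, here is a minimal model of what `jQuery.noConflict()` does to the global `$` — and why code that keeps reading that global breaks afterwards. This is an illustrative simulation, not jQuery's actual source:

```javascript
// Simplified model of jQuery.noConflict(): the library restores the
// previous owner of `$` and returns itself. Code that still reads the
// global `$` afterwards gets the wrong object (or undefined).
const globals = {};
globals.$ = { name: "other-lib" };   // some library already owns $
const previous$ = globals.$;
globals.$ = { name: "jquery" };      // jQuery loads and takes over $

function noConflict() {
  const self = globals.$;
  globals.$ = previous$;             // hand $ back to the previous owner
  return self;                       // caller must keep this reference
}

const jq = noConflict();
console.log(globals.$.name); // "other-lib" — global $ is no longer jQuery
console.log(jq.name);        // "jquery"  — only the returned handle is
```

A plugin that reads `$` directly (instead of a captured reference) would therefore hit a TypeError once noConflict runs.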
|
1.0
|
jQuery noConflict bug - Hi, I found an issue in version 0.24.2: when using jQuery in noConflict mode it throws an Uncaught TypeError
http://jsbin.com/warujotaku
|
test
|
jquery noconflict bug hi i found issue in version when using jquery in no conflict mode it throw uncaught typeerror
| 1
|
9,012
| 3,249,770,921
|
IssuesEvent
|
2015-10-18 12:50:41
|
borgbackup/borg
|
https://api.github.com/repos/borgbackup/borg
|
closed
|
add high level description about changes in borgbackup compared to attic
|
documentation
|
- in general: lots of tickets from attic issue tracker fixed (bugs, features), see #5 for a list
- less chunk management overhead (less memory and disk usage) via adjustable chunker params
- faster remote cache resync (useful when backing up multiple machines into same repo)
- compression: lz4 and lzma compression, adjustable compression levels
- better error messages / exception handling
- repokey replaces problematic passphrase mode (you can't change the passphrase nor the pbkdf2 iteration count in "passphrase" mode)
- simple sparse file support
- can read special files (e.g. block devices) or stdin directly
- mkdir-based locking is more compatible than attic's posix locking
- tested on misc. linux systems, 32 and 64bit, freebsd, openbsd, netbsd, mac os x
- use fadvise to not spoil / blow up the cache
- better output for verbose mode, progress indication
|
1.0
|
add high level description about changes in borgbackup compared to attic - - in general: lots of tickets from attic issue tracker fixed (bugs, features), see #5 for a list
- less chunk management overhead (less memory and disk usage) via adjustable chunker params
- faster remote cache resync (useful when backing up multiple machines into same repo)
- compression: lz4 and lzma compression, adjustable compression levels
- better error messages / exception handling
- repokey replaces problematic passphrase mode (you can't change the passphrase nor the pbkdf2 iteration count in "passphrase" mode)
- simple sparse file support
- can read special files (e.g. block devices) or stdin directly
- mkdir-based locking is more compatible than attic's posix locking
- tested on misc. linux systems, 32 and 64bit, freebsd, openbsd, netbsd, mac os x
- use fadvise to not spoil / blow up the cache
- better output for verbose mode, progress indication
|
non_test
|
add high level description about changes in borgbackup compared to attic in general lots of tickets from attic issue tracker fixed bugs features see for a list less chunk management overhead less memory and disk usage via adjustable chunker params faster remote cache resync useful when backing up multiple machines into same repo compression and lzma compression adjustable compression levels better error messages exception handling repokey replaces problematic passphrase mode you can t change the passphrase nor the iteration count in passphrase mode simple sparse file support can read special files e g block devices or stdin directly mkdir based locking is more compatible than attic s posix locking tested on misc linux systems and freebsd openbsd netbsd mac os x use fadvise to not spoil blow up the cache better output for verbose mode progress indication
| 0
|
259,024
| 22,365,906,197
|
IssuesEvent
|
2022-06-16 04:01:50
|
harvester/harvester
|
https://api.github.com/repos/harvester/harvester
|
closed
|
[BUG] test TestPlanHandler_OnChanged sometimes fails
|
bug area/backend priority/2 area/test area/upgrade-related not-require/test-plan backport-needed/1.0.3
|
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The unit tests sometimes fail with:
```
--- FAIL: TestPlanHandler_OnChanged (0.01s)
plan_controller_test.go:120:
Error Trace: plan_controller_test.go:120
Error: Not equal:
expected: upgrade.output{plan:(*v1.Plan)(nil), upgrade:(*v1beta1.Upgrade)(0xc000416b60), err:error(nil)}
actual : upgrade.output{plan:(*v1.Plan)(nil), upgrade:(*v1beta1.Upgrade)(0xc0000b5860), err:error(nil)}
Diff:
--- Expected
+++ Actual
@@ -68,3 +68,3 @@
Status: (v1.ConditionStatus) (len=4) "True",
- LastUpdateTime: (string) (len=20) "2022-03-31T07:00:51Z",
+ LastUpdateTime: (string) (len=20) "2022-03-31T07:00:52Z",
LastTransitionTime: (string) "",
Test: TestPlanHandler_OnChanged
Messages: case "set NodesPrepared condition when prepare plan completes"
time="2022-03-31T07:00:52Z" level=info msg="Creating upgrade repo image"
time="2022-03-31T07:00:52Z" level=info msg="Creating upgrade repo image"
FAIL
coverage: 26.7% of statements
FAIL github.com/harvester/harvester/pkg/controller/master/upgrade 0.232s
```
https://drone-publish.rancher.io/harvester/harvester/574/1/2
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Support bundle**
<!-- You can generate a support bundle in the bottom of Harvester UI. It includes logs and configurations that help diagnose the issue. -->
**Environment:**
- Harvester ISO version:
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630):
**Additional context**
Sometimes running the test takes more than 1 second, so we need to clear the plan timestamp before comparing.
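The fix suggested above — clearing volatile timestamps before comparing expected and actual objects — can be sketched as follows. This is a JS illustration of the idea only; the real fix belongs in the Go test, and the field names here mirror the diff above but are otherwise hypothetical:

```javascript
// Sketch: zero out volatile *Time fields before deep-comparing test
// objects, so a test that straddles a second boundary stays deterministic.
function stripTimestamps(obj) {
  if (obj === null || typeof obj !== "object") return obj;
  const out = Array.isArray(obj) ? [] : {};
  for (const [k, v] of Object.entries(obj)) {
    // Blank any key ending in "Time" (LastUpdateTime, LastTransitionTime…)
    out[k] = /Time$/.test(k) ? "" : stripTimestamps(v);
  }
  return out;
}

const expected = { status: "True", lastUpdateTime: "2022-03-31T07:00:51Z" };
const actual   = { status: "True", lastUpdateTime: "2022-03-31T07:00:52Z" };
const same = JSON.stringify(stripTimestamps(expected)) ===
             JSON.stringify(stripTimestamps(actual));
console.log(same); // true — the comparison no longer depends on wall-clock time
```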
|
2.0
|
[BUG] test TestPlanHandler_OnChanged sometimes fails - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The unit tests sometimes fail with:
```
--- FAIL: TestPlanHandler_OnChanged (0.01s)
plan_controller_test.go:120:
Error Trace: plan_controller_test.go:120
Error: Not equal:
expected: upgrade.output{plan:(*v1.Plan)(nil), upgrade:(*v1beta1.Upgrade)(0xc000416b60), err:error(nil)}
actual : upgrade.output{plan:(*v1.Plan)(nil), upgrade:(*v1beta1.Upgrade)(0xc0000b5860), err:error(nil)}
Diff:
--- Expected
+++ Actual
@@ -68,3 +68,3 @@
Status: (v1.ConditionStatus) (len=4) "True",
- LastUpdateTime: (string) (len=20) "2022-03-31T07:00:51Z",
+ LastUpdateTime: (string) (len=20) "2022-03-31T07:00:52Z",
LastTransitionTime: (string) "",
Test: TestPlanHandler_OnChanged
Messages: case "set NodesPrepared condition when prepare plan completes"
time="2022-03-31T07:00:52Z" level=info msg="Creating upgrade repo image"
time="2022-03-31T07:00:52Z" level=info msg="Creating upgrade repo image"
FAIL
coverage: 26.7% of statements
FAIL github.com/harvester/harvester/pkg/controller/master/upgrade 0.232s
```
https://drone-publish.rancher.io/harvester/harvester/574/1/2
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Support bundle**
<!-- You can generate a support bundle in the bottom of Harvester UI. It includes logs and configurations that help diagnose the issue. -->
**Environment:**
- Harvester ISO version:
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630):
**Additional context**
Sometimes running the test takes more than 1 second, so we need to clear the plan timestamp before comparing.
|
test
|
test testplanhandler onchanged sometimes fails describe the bug the unit tests sometimes fail with fail testplanhandler onchanged plan controller test go error trace plan controller test go error not equal expected upgrade output plan plan nil upgrade upgrade err error nil actual upgrade output plan plan nil upgrade upgrade err error nil diff expected actual status conditionstatus len true lastupdatetime string len lastupdatetime string len lasttransitiontime string test testplanhandler onchanged messages case set nodesprepared condition when prepare plan completes time level info msg creating upgrade repo image time level info msg creating upgrade repo image fail coverage of statements fail github com harvester harvester pkg controller master upgrade expected behavior support bundle environment harvester iso version underlying infrastructure e g baremetal with dell poweredge additional context sometimes the process of test takes more than second we need to clean the plan timestamp
| 1
|
201,867
| 15,227,981,887
|
IssuesEvent
|
2021-02-18 10:53:26
|
WPChill/modula-theme
|
https://api.github.com/repos/WPChill/modula-theme
|
closed
|
Border 1px solid #DDD
|
needs testing
|

---
I added borders to the forms; the contrast between the blue we use and the grey background was very weak ... you could hardly tell where a field begins or ends
|
1.0
|
Border 1px solid #DDD - 
---
I added borders to the forms; the contrast between the blue we use and the grey background was very weak ... you could hardly tell where a field begins or ends
|
test
|
border solid ddd am pus borders pe forms contrastul intre acel albastru folosit de noi si gri ul de fundal era f slab nu prea vedeai unde incepe sau unde se termina un field
| 1
|
145,298
| 11,683,959,923
|
IssuesEvent
|
2020-03-05 05:15:35
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
certain extensions breaking due to WebSQL being disabled
|
QA/Test-Plan-Specified QA/Yes bug feature/extensions
|
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
It appears that some extensions are failing due to us disabling WebSQL as per https://github.com/brave/brave-core/pull/4463.
<img width="611" alt="Screen Shot 2020-03-04 at 9 55 19 PM (1)" src="https://user-images.githubusercontent.com/2602313/75949779-4466e500-5e75-11ea-9ffa-c16fd1fa492d.png">
<img width="471" alt="Screen Shot 2020-03-04 at 11 48 38 PM" src="https://user-images.githubusercontent.com/2602313/75949780-44ff7b80-5e75-11ea-870d-378a641c61b9.png">
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. install `1.6.56 CR: 80.0.3987.122 (Dev)` or `1.7.39 CR: 80.0.3987.132 (Nightly)`
2. install https://chrome.google.com/webstore/detail/speed-dial-fvd-new-tab-pa/llaficoajjainaijghjlofdfmbjpebpa/related?hl=en
## Actual result:
<!--Please add screenshots if needed-->
<img width="1062" alt="Screen Shot 2020-03-05 at 12 00 15 AM" src="https://user-images.githubusercontent.com/2602313/75949666-edf9a680-5e74-11ea-83fb-846ece54e2c5.png">
## Expected result:
<img width="1127" alt="Screen Shot 2020-03-04 at 11 55 21 PM" src="https://user-images.githubusercontent.com/2602313/75949661-e9cd8900-5e74-11ea-8447-425860594a2f.png">
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% reproducible using the above STR.
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
```
Brave | 1.6.56 Chromium: 80.0.3987.122 (Official Build) dev (64-bit)
-- | --
Revision | cf72c4c4f7db75bc3da689cd76513962d31c7b52-refs/branch-heads/3987@{#943}
OS | macOS Version 10.15.3 (Build 19D76)
```
```
Brave | 1.7.39 Chromium: 80.0.3987.132 (Official Build) nightly (64-bit)
-- | --
Revision | fcea73228632975e052eb90fcf6cd1752d3b42b4-refs/branch-heads/3987@{#974}
OS | macOS Version 10.15.3 (Build 19D76)
```
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? `No`
- Can you reproduce this issue with the beta channel? `No`
- Can you reproduce this issue with the dev channel? `Yes`
- Can you reproduce this issue with the nightly channel? `Yes`
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? `N/A`
- Does the issue resolve itself when disabling Brave Rewards? `N/A`
- Is the issue reproducible on the latest version of Chrome? `N/A`
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
CCing @bsclifton @rebron @brave/legacy_qa
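Extensions hit by this can avoid the hard failure by feature-detecting WebSQL instead of calling `openDatabase` unconditionally. A hedged sketch (the backend names and fallback order here are illustrative, not any extension's real code):

```javascript
// Sketch: pick a storage backend instead of assuming WebSQL exists.
// Brave disables WebSQL, so `openDatabase` is simply absent there.
function pickStorageBackend(g) {
  if (typeof g.openDatabase === "function") return "websql";
  if (g.indexedDB) return "indexeddb";
  return "memory"; // last-resort in-memory fallback
}

console.log(pickStorageBackend({ openDatabase: function () {} })); // "websql"
console.log(pickStorageBackend({ indexedDB: {} }));                // "indexeddb"
console.log(pickStorageBackend({}));                               // "memory"
```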
|
1.0
|
certain extensions breaking due to WebSQL being disabled - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
It appears that some extensions are failing due to us disabling WebSQL as per https://github.com/brave/brave-core/pull/4463.
<img width="611" alt="Screen Shot 2020-03-04 at 9 55 19 PM (1)" src="https://user-images.githubusercontent.com/2602313/75949779-4466e500-5e75-11ea-9ffa-c16fd1fa492d.png">
<img width="471" alt="Screen Shot 2020-03-04 at 11 48 38 PM" src="https://user-images.githubusercontent.com/2602313/75949780-44ff7b80-5e75-11ea-870d-378a641c61b9.png">
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. install `1.6.56 CR: 80.0.3987.122 (Dev)` or `1.7.39 CR: 80.0.3987.132 (Nightly)`
2. install https://chrome.google.com/webstore/detail/speed-dial-fvd-new-tab-pa/llaficoajjainaijghjlofdfmbjpebpa/related?hl=en
## Actual result:
<!--Please add screenshots if needed-->
<img width="1062" alt="Screen Shot 2020-03-05 at 12 00 15 AM" src="https://user-images.githubusercontent.com/2602313/75949666-edf9a680-5e74-11ea-83fb-846ece54e2c5.png">
## Expected result:
<img width="1127" alt="Screen Shot 2020-03-04 at 11 55 21 PM" src="https://user-images.githubusercontent.com/2602313/75949661-e9cd8900-5e74-11ea-8447-425860594a2f.png">
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% reproducible using the above STR.
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
```
Brave | 1.6.56 Chromium: 80.0.3987.122 (Official Build) dev (64-bit)
-- | --
Revision | cf72c4c4f7db75bc3da689cd76513962d31c7b52-refs/branch-heads/3987@{#943}
OS | macOS Version 10.15.3 (Build 19D76)
```
```
Brave | 1.7.39 Chromium: 80.0.3987.132 (Official Build) nightly (64-bit)
-- | --
Revision | fcea73228632975e052eb90fcf6cd1752d3b42b4-refs/branch-heads/3987@{#974}
OS | macOS Version 10.15.3 (Build 19D76)
```
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? `No`
- Can you reproduce this issue with the beta channel? `No`
- Can you reproduce this issue with the dev channel? `Yes`
- Can you reproduce this issue with the nightly channel? `Yes`
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? `N/A`
- Does the issue resolve itself when disabling Brave Rewards? `N/A`
- Is the issue reproducible on the latest version of Chrome? `N/A`
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
CCing @bsclifton @rebron @brave/legacy_qa
|
test
|
certain extensions breaking due to websql being disabled have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description it appears that some extensions are failing due to us disabling websql as per img width alt screen shot at pm src img width alt screen shot at pm src steps to reproduce install cr dev or cr nightly install actual result img width alt screen shot at am src expected result img width alt screen shot at pm src reproduces how often reproducible using the above str brave version brave version info brave chromium official build dev bit revision refs branch heads os macos version build brave chromium official build nightly bit revision refs branch heads os macos version build version channel information can you reproduce this issue with the current release no can you reproduce this issue with the beta channel no can you reproduce this issue with the dev channel yes can you reproduce this issue with the nightly channel yes other additional information does the issue resolve itself when disabling brave shields n a does the issue resolve itself when disabling brave rewards n a is the issue reproducible on the latest version of chrome n a miscellaneous information ccing bsclifton rebron brave legacy qa
| 1
|
318,219
| 27,295,885,014
|
IssuesEvent
|
2023-02-23 20:17:57
|
rancher/fleet
|
https://api.github.com/repos/rancher/fleet
|
closed
|
Fleet reconciliation creates significant load on API server and it's database
|
[zube]: To Test area/performance kind/bug priority/high loe/XL area/fleet team/fleet fleet/epic
|
In our setup we have a K3S (v1.20.6+k3s1) cluster with MySQL backend (Aurora RDS) managed by Rancher (v2.5.8).
We use fleet (**v0.3.5**) to deploy resources into k3s cluster. Installation is quite massive and has **231 fleet bundles** installed.
We noticed that network throughput from the MySQL database became a bottleneck for API performance and during investigation noticed that fleet constantly pulls helm secrets from K8S API when reconciling failing workloads.
At the point when throughput reached about 4 Gbit/s we started investigation and was able to get it down to about 1.5 Gbit/s by limiting helm history to 2 releases in fleet bundles configuration.
**I suggest that fleet should have an option to tune the reconciliation behavior to somehow make it less aggressive.**
Here is some data from an experiment I conducted to demonstrate the behavior:
```
# K3S Cluster
kubectl get secrets --all-namespaces | grep ' sh.helm' | wc -l
564
```
```
# Rancher Cluster
$ kubectl -n fleet-default get bundles | tail -n +2 | wc -l
231
$ kubectl -n fleet-default get bundles | grep NotReady | grep 'statefulset\|deployment' | wc -l
4
```
With 4 failing deployments, database bandwidth stays around 1.5 Gbit/s and fleet is pulling about 3 secrets per second.
During the experiment, to see how behavior changes without failing deployments, I deleted bundles that were failing.
```
$ kubectl -n fleet-default delete bundles ...
bundle.fleet.cattle.io ... deleted
bundle.fleet.cattle.io ... deleted
bundle.fleet.cattle.io ... deleted
bundle.fleet.cattle.io ... deleted
$ kubectl -n fleet-default get bundles | grep NotReady | grep 'statefulset\|deployment' | wc -l
0
```
On the graphs below you can see database throughput and API read rate for secrets. Between blue bars are periods when there are no failing resources in fleet:
<img width="844" alt="Screenshot 2021-08-24 at 14 10 19" src="https://user-images.githubusercontent.com/751681/130613960-eb8df269-7b9c-4002-9b3f-059e032d54a3.png">
<img width="837" alt="Screenshot 2021-08-24 at 14 08 49" src="https://user-images.githubusercontent.com/751681/130613967-fb970d95-966c-4f5f-91e3-8fd93601d80b.png">
During high API rate periods fleet-agent logs are full of:
```
time="..." level=info msg="getting history for release ..."
```
and
```
time="..." level=error msg="bundle ...: deployment.apps ... error] Progress deadline exceeded"
```
|
1.0
|
Fleet reconciliation creates significant load on API server and it's database - In our setup we have a K3S (v1.20.6+k3s1) cluster with MySQL backend (Aurora RDS) managed by Rancher (v2.5.8).
We use fleet (**v0.3.5**) to deploy resources into k3s cluster. Installation is quite massive and has **231 fleet bundles** installed.
We noticed that network throughput from the MySQL database became a bottleneck for API performance and during investigation noticed that fleet constantly pulls helm secrets from K8S API when reconciling failing workloads.
At the point when throughput reached about 4 Gbit/s we started investigation and was able to get it down to about 1.5 Gbit/s by limiting helm history to 2 releases in fleet bundles configuration.
**I suggest that fleet should have an option to tune the reconciliation behavior to somehow make it less aggressive.**
Here is some data from an experiment I conducted to demonstrate the behavior:
```
# K3S Cluster
kubectl get secrets --all-namespaces | grep ' sh.helm' | wc -l
564
```
```
# Rancher Cluster
$ kubectl -n fleet-default get bundles | tail -n +2 | wc -l
231
$ kubectl -n fleet-default get bundles | grep NotReady | grep 'statefulset\|deployment' | wc -l
4
```
With 4 failing deployments, database bandwidth stays around 1.5 Gbit/s and fleet is pulling about 3 secrets per second.
During the experiment, to see how behavior changes without failing deployments, I deleted bundles that were failing.
```
$ kubectl -n fleet-default delete bundles ...
bundle.fleet.cattle.io ... deleted
bundle.fleet.cattle.io ... deleted
bundle.fleet.cattle.io ... deleted
bundle.fleet.cattle.io ... deleted
$ kubectl -n fleet-default get bundles | grep NotReady | grep 'statefulset\|deployment' | wc -l
0
```
On the graphs below you can see database throughput and API read rate for secrets. Between blue bars are periods when there are no failing resources in fleet:
<img width="844" alt="Screenshot 2021-08-24 at 14 10 19" src="https://user-images.githubusercontent.com/751681/130613960-eb8df269-7b9c-4002-9b3f-059e032d54a3.png">
<img width="837" alt="Screenshot 2021-08-24 at 14 08 49" src="https://user-images.githubusercontent.com/751681/130613967-fb970d95-966c-4f5f-91e3-8fd93601d80b.png">
During high API rate periods fleet-agent logs are full of:
```
time="..." level=info msg="getting history for release ..."
```
and
```
time="..." level=error msg="bundle ...: deployment.apps ... error] Progress deadline exceeded"
```
|
test
|
fleet reconciliation creates significant load on api server and it s database in our setup we have a cluster with mysql backend aurora rds managed by rancher we use fleet to deploy resources into cluster installation is quite massive and have fleet bundles installed we noticed that network throughput from the mysql database became a bottleneck for api performance and during investigation noticed that fleet constantly pulls helm secrets from api when reconciling failing workloads at the point when throughput reached about gbit s we started investigation and was able to get it down to about gbit s by limiting helm history to releases in fleet bundles configuration i suggest that fleet should have an option to tune the reconciliation behavior to somehow make it less aggressive here is some data from an experiment i conducted to demonstrate the behavior cluster kubectl get secrets all namespaces grep sh helm wc l rancher cluster kubectl n fleet default get bundles tail n wc l kubectl n fleet default get bundles grep notready grep statefulset deployment wc l with failing deployment database bandwidth stays around gbit s and fleet is pulling about secrets per second during the experiment to see how behavior changes without failing deployments i deleted bundles that were failing kubectl n fleet default delete bundles bundle fleet cattle io deleted bundle fleet cattle io deleted bundle fleet cattle io deleted bundle fleet cattle io deleted kubectl n fleet default get bundles grep notready grep statefulset deployment wc l on the graphs below you can see database throughput and api read rate for secrets between blue bars are periods when there are no failing resources in fleet img width alt screenshot at src img width alt screenshot at src during high api rate periods fleet agent logs are full of time level info msg getting history for release and time level error msg bundle deployment apps error progress deadline exceeded
| 1
|
243,880
| 20,595,206,185
|
IssuesEvent
|
2022-03-05 11:32:07
|
ManimCommunity/manim
|
https://api.github.com/repos/ManimCommunity/manim
|
opened
|
Add option to automatically generate diffs of failing graphical unit tests
|
enhancement testing
|
## Enhancement proposal
Will allow developers to more easily check how graphical unit tests are failing. One step towards allowing our GitHub CI to automatically post these diffs as well.
## Additional comments
<!-- Add further context that you think might be relevant. -->
|
1.0
|
Add option to automatically generate diffs of failing graphical unit tests - ## Enhancement proposal
Will allow developers to more easily check how graphical unit tests are failing. One step towards allowing our GitHub CI to automatically post these diffs as well.
## Additional comments
<!-- Add further context that you think might be relevant. -->
|
test
|
add option to automatically generate diffs of failing graphical unit tests enhancement proposal will allow developers to more easily check how graphical unit tests are failing one step towards allowing our github ci to automatically post these diffs as well additional comments
| 1
|
288,328
| 24,898,735,928
|
IssuesEvent
|
2022-10-28 18:27:59
|
dotnet/roslyn
|
https://api.github.com/repos/dotnet/roslyn
|
closed
|
add more unit tests for batch fixer
|
Area-IDE Test Concept-Continuous Improvement
|
see #320 for more detail.
basically, we need way more unit tests for batch fixer covering many corner cases.
|
1.0
|
add more unit tests for batch fixer - see #320 for more detail.
basically, we need way more unit tests for batch fixer covering many corner cases.
|
test
|
add more unit tests for batch fixer see for more detail basically we need way more unit tests for batch fixer covering many corner cases
| 1
|
40,246
| 5,283,274,799
|
IssuesEvent
|
2017-02-07 20:59:14
|
dealii/dealii
|
https://api.github.com/repos/dealii/dealii
|
closed
|
fe_abf_02 fails on Sierra with clang 8.0.0
|
Tests
|
```
2828 - fe/abf_02.debug (Failed)
2829 - fe/abf_02.release (Failed)
```
diffs look big:
```
----------------
##9 #:5 <== 3.76386
##9 #:5 ==> 2.64943
@ Absolute error = 1.1144300000e+0, Relative error = 4.2063009779e-1
----------------
##10 #:5 <== 16
##10 #:5 ==> 9
@ Absolute error = 7.0000000000e+0, Relative error = 7.7777777778e-1
----------------
##12 #:8 <== 2.50000
##12 #:8 ==> -0.618870
@ Absolute error = 3.1188700000e+0, Relative error = 5.0396205988e+0
```
|
1.0
|
fe_abf_02 fails on Sierra with clang 8.0.0 - ```
2828 - fe/abf_02.debug (Failed)
2829 - fe/abf_02.release (Failed)
```
diffs look big:
```
----------------
##9 #:5 <== 3.76386
##9 #:5 ==> 2.64943
@ Absolute error = 1.1144300000e+0, Relative error = 4.2063009779e-1
----------------
##10 #:5 <== 16
##10 #:5 ==> 9
@ Absolute error = 7.0000000000e+0, Relative error = 7.7777777778e-1
----------------
##12 #:8 <== 2.50000
##12 #:8 ==> -0.618870
@ Absolute error = 3.1188700000e+0, Relative error = 5.0396205988e+0
```
|
test
|
fe abf fails on sierra with clang fe abf debug failed fe abf release failed diffs look big absolute error relative error absolute error relative error absolute error relative error
| 1
|
236,612
| 18,103,667,854
|
IssuesEvent
|
2021-09-22 16:42:15
|
domxjs/domx
|
https://api.github.com/repos/domxjs/domx
|
closed
|
Publish package Router; v0.1.0 (and supporting packages)
|
documentation
|
## Packages to publish
- Router 0.1.0
- Event Map 0.7.0
- DataElement 0.5.0
- StateChange 0.7.0
- linkProp 0.3.0
- testUtils 0.2.0
## Development
- [x] Checkout package branch.
- [x] Merge master into branch.
- [x] Develop and commit to branch until feature completion.
## Feature completion
- [x] Build, run tests and coverage.
- [x] Create badges `npm run badges`
- [x] Update package readme, changelog, and package version.
- [x] Update root readme, and changelog.
- [x] Merge into master.
## Update master version
- [x] Update package version.
- [x] Push git tags `git push origin --tags`.
- [x] Add release to git; add changelog notes for the specific release.
- [x] Publish new packages.
|
1.0
|
Publish package Router; v0.1.0 (and supporting packages) - ## Packages to publish
- Router 0.1.0
- Event Map 0.7.0
- DataElement 0.5.0
- StateChange 0.7.0
- linkProp 0.3.0
- testUtils 0.2.0
## Development
- [x] Checkout package branch.
- [x] Merge master into branch.
- [x] Develop and commit to branch until feature completion.
## Feature completion
- [x] Build, run tests and coverage.
- [x] Create badges `npm run badges`
- [x] Update package readme, changelog, and package version.
- [x] Update root readme, and changelog.
- [x] Merge into master.
## Update master version
- [x] Update package version.
- [x] Push git tags `git push origin --tags`.
- [x] Add release to git; add changelog notes for the specific release.
- [x] Publish new packages.
|
non_test
|
publish package router and supporting packages packages to publish router event map dataelement statechange linkprop testutils development checkout package branch merge master into branch develop and commit to branch until feature completion feature completion build run tests and coverage create badges npm run badges update package readme changelog and package version update root readme and changelog merge into master update master version update package version push git tags git push origin tags add release to git add changelog notes for the specific release publish new packages
| 0
|
20,091
| 3,792,307,993
|
IssuesEvent
|
2016-03-22 09:07:35
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
ClientMapStoreTest.mapStore_OperationQueue_AtMaxCapacity_Test
|
Team: Client Team: Core Type: Test-Failure
|
```
java.lang.AssertionError: Expected exception: com.hazelcast.map.ReachedMaxSizeException
at org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:32)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:88)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
```
https://hazelcast-l337.ci.cloudbees.com/view/Official%20Builds/job/Hazelcast-3.x-sonar/com.hazelcast$hazelcast-client/996/testReport/junit/com.hazelcast.client.map/ClientMapStoreTest/mapStore_OperationQueue_AtMaxCapacity_Test/
|
1.0
|
ClientMapStoreTest.mapStore_OperationQueue_AtMaxCapacity_Test - ```
java.lang.AssertionError: Expected exception: com.hazelcast.map.ReachedMaxSizeException
at org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:32)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:88)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
```
https://hazelcast-l337.ci.cloudbees.com/view/Official%20Builds/job/Hazelcast-3.x-sonar/com.hazelcast$hazelcast-client/996/testReport/junit/com.hazelcast.client.map/ClientMapStoreTest/mapStore_OperationQueue_AtMaxCapacity_Test/
|
test
|
clientmapstoretest mapstore operationqueue atmaxcapacity test java lang assertionerror expected exception com hazelcast map reachedmaxsizeexception at org junit internal runners statements expectexception evaluate expectexception java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at java util concurrent futuretask run futuretask java at java lang thread run thread java
| 1
|
28,930
| 4,446,326,233
|
IssuesEvent
|
2016-08-20 16:10:27
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
circleci: failed tests: TestEventLog
|
Robot test-failure
|
The following test appears to have failed:
[#21598](https://circleci.com/gh/cockroachdb/cockroach/21598):
```
E160820 16:03:05.148101 acceptance/cluster/localcluster.go:521 node=0 status=die
E160820 16:03:05.577587 acceptance/cluster/localcluster.go:521 node=0 status=restart
I160820 16:03:52.785842 acceptance/cluster/localcluster.go:671 stopping
I160820 16:03:52.786039 acceptance/cluster/localcluster.go:571 event stream done, resetting...: net/http: request canceled
I160820 16:03:52.786058 acceptance/cluster/localcluster.go:594 events monitor exits
--- FAIL: TestEventLog (52.49s)
testing.go:117: acceptance/event_log_test.go:173, condition failed to evaluate within 45s: Expected only one node restart event, found 0
=== RUN TestDockerFinagle
--- SKIP: TestDockerFinagle (0.00s)
finagle_test.go:21: #8332. Upstream has a 2s timeout, disabled until we run tests somewhere more consistent.
=== RUN TestFreezeCluster
--- SKIP: TestFreezeCluster (0.00s)
freeze_test.go:40: #7957
=== RUN TestGossipPeerings
I160820 16:03:54.727903 acceptance/cluster/localcluster.go:299 Initializing Cluster AdHoc 3x1:
{"name":"AdHoc 3x1","nodes":[{"count":3,"stores":[{"count":1,"max_ranges":0}]}],"duration":5000000000}
--
--- SKIP: TestBuildBabyCluster (0.00s)
terraform_test.go:30: only enabled during testing
=== RUN TestFiveNodesAndWriters
--- SKIP: TestFiveNodesAndWriters (0.00s)
util_test.go:366: skipping since not run against remote cluster
FAIL
ok github.com/cockroachdb/cockroach/acceptance 1337s
```
Please assign, take a look and update the issue accordingly.
|
1.0
|
circleci: failed tests: TestEventLog - The following test appears to have failed:
[#21598](https://circleci.com/gh/cockroachdb/cockroach/21598):
```
E160820 16:03:05.148101 acceptance/cluster/localcluster.go:521 node=0 status=die
E160820 16:03:05.577587 acceptance/cluster/localcluster.go:521 node=0 status=restart
I160820 16:03:52.785842 acceptance/cluster/localcluster.go:671 stopping
I160820 16:03:52.786039 acceptance/cluster/localcluster.go:571 event stream done, resetting...: net/http: request canceled
I160820 16:03:52.786058 acceptance/cluster/localcluster.go:594 events monitor exits
--- FAIL: TestEventLog (52.49s)
testing.go:117: acceptance/event_log_test.go:173, condition failed to evaluate within 45s: Expected only one node restart event, found 0
=== RUN TestDockerFinagle
--- SKIP: TestDockerFinagle (0.00s)
finagle_test.go:21: #8332. Upstream has a 2s timeout, disabled until we run tests somewhere more consistent.
=== RUN TestFreezeCluster
--- SKIP: TestFreezeCluster (0.00s)
freeze_test.go:40: #7957
=== RUN TestGossipPeerings
I160820 16:03:54.727903 acceptance/cluster/localcluster.go:299 Initializing Cluster AdHoc 3x1:
{"name":"AdHoc 3x1","nodes":[{"count":3,"stores":[{"count":1,"max_ranges":0}]}],"duration":5000000000}
--
--- SKIP: TestBuildBabyCluster (0.00s)
terraform_test.go:30: only enabled during testing
=== RUN TestFiveNodesAndWriters
--- SKIP: TestFiveNodesAndWriters (0.00s)
util_test.go:366: skipping since not run against remote cluster
FAIL
ok github.com/cockroachdb/cockroach/acceptance 1337s
```
Please assign, take a look and update the issue accordingly.
|
test
|
circleci failed tests testeventlog the following test appears to have failed acceptance cluster localcluster go node status die acceptance cluster localcluster go node status restart acceptance cluster localcluster go stopping acceptance cluster localcluster go event stream done resetting net http request canceled acceptance cluster localcluster go events monitor exits fail testeventlog testing go acceptance event log test go condition failed to evaluate within expected only one node restart event found run testdockerfinagle skip testdockerfinagle finagle test go upstream has a timeout disabled until we run tests somewhere more consistent run testfreezecluster skip testfreezecluster freeze test go run testgossippeerings acceptance cluster localcluster go initializing cluster adhoc name adhoc nodes duration skip testbuildbabycluster terraform test go only enabled during testing run testfivenodesandwriters skip testfivenodesandwriters util test go skipping since not run against remote cluster fail ok github com cockroachdb cockroach acceptance please assign take a look and update the issue accordingly
| 1
|
407,920
| 27,637,235,960
|
IssuesEvent
|
2023-03-10 15:18:23
|
mongodben/mongodb-oracle
|
https://api.github.com/repos/mongodben/mongodb-oracle
|
closed
|
Update README
|
documentation
|
update the repo readme to have an overview of final state of the product.
include architecture overview in there.
|
1.0
|
Update README - update the repo readme to have an overview of final state of the product.
include architecture overview in there.
|
non_test
|
update readme update the repo readme to have an overview of final state of the product include architecture overview in there
| 0
|
239,439
| 19,897,469,187
|
IssuesEvent
|
2022-01-25 01:50:35
|
DnD-Montreal/session-tome
|
https://api.github.com/repos/DnD-Montreal/session-tome
|
opened
|
Create Cypress Tests for Create DM Entries
|
test acceptance test
|
## Description
Write E2E Cypress tests for #111 Create DM Entries.
## Possible Implementation
- View DM Entries
- Create a DM Entry
|
2.0
|
Create Cypress Tests for Create DM Entries - ## Description
Write E2E Cypress tests for #111 Create DM Entries.
## Possible Implementation
- View DM Entries
- Create a DM Entry
|
test
|
create cypress tests for create dm entries description write cypress tests for create dm entries possible implementation view dm entries create a dm entry
| 1
|
190,581
| 14,562,753,915
|
IssuesEvent
|
2020-12-17 00:48:47
|
BookStackApp/BookStack
|
https://api.github.com/repos/BookStackApp/BookStack
|
closed
|
PDF Export Issue if '&' Used in Title
|
:bug: Bug :mag: Testing required
|
### **Describe the bug**
I noticed I get an `An unknown error has occurred` issue without any mention in the logs. It only causes an issue when I put the `&` in the title line. e.g. "Categories & Packages"
**PDF Export working** if title is "Categories and Packages"
**PDF Export NOT working** if title is "Categories & Packages"
### **Your Configuration (please complete the following information):**
- Exact BookStack Version (Found in settings): 0.24.0
- PHP Version: 7.0.32
- Hosting Method (Nginx/Apache/Docker): Apache
### **Additional context**
Not sure if it is related to #804 or even #1097
|
1.0
|
PDF Export Issue if '&' Used in Title - ### **Describe the bug**
I noticed I get an `An unknown error has occurred` issue without any mention in the logs. It only causes an issue when I put the `&` in the title line. e.g. "Categories & Packages"
**PDF Export working** if title is "Categories and Packages"
**PDF Export NOT working** if title is "Categories & Packages"
### **Your Configuration (please complete the following information):**
- Exact BookStack Version (Found in settings): 0.24.0
- PHP Version: 7.0.32
- Hosting Method (Nginx/Apache/Docker): Apache
### **Additional context**
Not sure if it is related to #804 or even #1097
|
test
|
pdf export issue if used in title describe the bug i noticed i get an an unknown error has occurred issue without any mention in the logs it only causes an issue when i put the in the title line e g categories packages pdf export working if title is categories and packages pdf export not working if title is categories packages your configuration please complete the following information exact bookstack version found in settings php version hosting method nginx apache docker apache additional context not sure if it is related to or even
| 1
|
140,519
| 11,349,427,874
|
IssuesEvent
|
2020-01-24 04:53:22
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
[test-failed]: Chrome UI Functional Tests.test/functional/apps/home/_sample_data·ts - homepage app sample data dashboard should launch sample logs data set dashboard
|
failed-test test-cloud
|
**Version: 7.6**
**Class: Chrome UI Functional Tests.test/functional/apps/home/_sample_data·ts**
**Stack Trace:**
Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="launchSampleDataSetlogs"])
Wait timed out after 10017ms
at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/ossGrp1/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/node_modules/selenium-webdriver/lib/webdriver.js:841:17
at process._tickCallback (internal/process/next_tick.js:68:7)
at onFailure (test/common/services/retry/retry_for_success.ts:28:9)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:13)
_Platform: cloud_
_Build Num: 42_
|
2.0
|
[test-failed]: Chrome UI Functional Tests.test/functional/apps/home/_sample_data·ts - homepage app sample data dashboard should launch sample logs data set dashboard - **Version: 7.6**
**Class: Chrome UI Functional Tests.test/functional/apps/home/_sample_data·ts**
**Stack Trace:**
Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="launchSampleDataSetlogs"])
Wait timed out after 10017ms
at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/ossGrp1/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/node_modules/selenium-webdriver/lib/webdriver.js:841:17
at process._tickCallback (internal/process/next_tick.js:68:7)
at onFailure (test/common/services/retry/retry_for_success.ts:28:9)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:13)
_Platform: cloud_
_Build Num: 42_
|
test
|
chrome ui functional tests test functional apps home sample data·ts homepage app sample data dashboard should launch sample logs data set dashboard version class chrome ui functional tests test functional apps home sample data·ts stack trace error retry try timeout timeouterror waiting for element to be located by css selector wait timed out after at var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node linux immutable ci cloud common build kibana node modules selenium webdriver lib webdriver js at process tickcallback internal process next tick js at onfailure test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts platform cloud build num
| 1
|
273,772
| 23,784,528,447
|
IssuesEvent
|
2022-09-02 08:51:09
|
jdi-testing/jdi-light
|
https://api.github.com/repos/jdi-testing/jdi-light
|
closed
|
Update test-site: element "chip-group"
|
TestSite Vuetify
|
Update test-site: element "chip-group":
- [x] center-active (y/n)
- [x] theme (dark/light)
- [x] max (0-n)
- [x] next-icon(icon)
- [x] prev-icon(icon)
- [x] show-arrows (y/n)
|
1.0
|
Update test-site: element "chip-group" - Update test-site: element "chip-group":
- [x] center-active (y/n)
- [x] theme (dark/light)
- [x] max (0-n)
- [x] next-icon(icon)
- [x] prev-icon(icon)
- [x] show-arrows (y/n)
|
test
|
update test site element chip group update test site element chip group center active y n theme dark light max n next icon icon prev icon icon show arrows y n
| 1
|
103,185
| 8,882,663,782
|
IssuesEvent
|
2019-01-14 13:51:36
|
dictation-toolbox/dragonfly
|
https://api.github.com/repos/dictation-toolbox/dragonfly
|
closed
|
Dragonfly's unit and doc tests are broken
|
Bug Testing
|
The tests don't run properly at the moment (at least for me) because of relative importing. I'm working on fixing this. I'd like to make the test suites work properly so they can be run through Travis CI.
Various tests in [test_engine_text.py](https://github.com/Danesprite/dragonfly/blob/master/dragonfly/test/test_engine_text.py) are duplicates or should be moved elsewhere so that they apply to all engines. The Pocket Sphinx engine tests need to be adjusted too, but I'll do that with the rework I've been planning for that engine.
|
1.0
|
Dragonfly's unit and doc tests are broken - The tests don't run properly at the moment (at least for me) because of relative importing. I'm working on fixing this. I'd like to make the test suites work properly so they can be run through Travis CI.
Various tests in [test_engine_text.py](https://github.com/Danesprite/dragonfly/blob/master/dragonfly/test/test_engine_text.py) are duplicates or should be moved elsewhere so that they apply to all engines. The Pocket Sphinx engine tests need to be adjusted too, but I'll do that with the rework I've been planning for that engine.
|
test
|
dragonfly s unit and doc tests are broken the tests don t run properly at the moment at least for me because of relative importing i m working on fixing this i d like to make the test suites work properly so they can be run through travis ci various tests in are duplicates or should be moved elsewhere so that they apply to all engines the pocket sphinx engine tests need to be adjusted too but i ll do that with the rework i ve been planning for that engine
| 1
|
253,909
| 8,067,269,077
|
IssuesEvent
|
2018-08-05 04:37:54
|
Entrana/EntranaBugs
|
https://api.github.com/repos/Entrana/EntranaBugs
|
closed
|
Trading
|
bug high priority server issue
|
Trading currently doesn't add items to the correct interface item container. Server issue.
|
1.0
|
Trading - Trading currently doesn't add items to the correct interface item container. Server issue.
|
non_test
|
trading trading currently doesn t add items to the correct interface item container server issue
| 0
|
68,302
| 8,247,900,158
|
IssuesEvent
|
2018-09-11 16:48:48
|
GoogleCloudPlatform/agones
|
https://api.github.com/repos/GoogleCloudPlatform/agones
|
closed
|
Use functional parameters in Controller creation
|
kind/cleanup kind/design
|
The constructor for Controller in controller.go is becoming unwieldy and has the potential to be error-prone as more parameters are added.
Go makes use of [functional parameters](https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis) in a similar way that some other languages use Builders. We should consider doing so as well to help readability as more developers start working on the project.
|
1.0
|
Use functional parameters in Controller creation - The constructor for Controller in controller.go is becoming unwieldy and has the potential to be error-prone as more parameters are added.
Go makes use of [functional parameters](https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis) in a similar way that some other languages use Builders. We should consider doing so as well to help readability as more developers start working on the project.
|
non_test
|
use functional parameters in controller creation the constructor for controller in controller go is becoming unwieldy and has the potential to be error prone as more parameters are added go makes use of in a similar way that some other languages use builders we should consider doing so as well to help readability as more developers start working on the project
| 0
|
32,073
| 4,745,410,971
|
IssuesEvent
|
2016-10-21 07:13:01
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
closed
|
Use RequestFactor for post request mocking
|
area: testing-coverage help wanted
|
@showell thanks to @umairwaheed I learned that Django has a standard thing for making mock request objects that we should probably consider using:
https://docs.djangoproject.com/en/1.9/topics/testing/advanced/#the-request-factory
|
1.0
|
Use RequestFactor for post request mocking - @showell thanks to @umairwaheed I learned that Django has a standard thing for making mock request objects that we should probably consider using:
https://docs.djangoproject.com/en/1.9/topics/testing/advanced/#the-request-factory
|
test
|
use requestfactor for post request mocking showell thanks to umairwaheed i learned that django has a standard thing for making mock request objects that we should probably consider using
| 1
|
246,337
| 20,834,366,672
|
IssuesEvent
|
2022-03-20 00:19:29
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
RangeControl: Does not support half step increments (i.e. 0.5)
|
Needs Testing [Package] Components
|
### Description
I am using the RangeControl component and it doesn't support half steps (i.e. setting the `step` attribute to `0.5`)
```
<RangeControl
label="Rating (0 - 10)"
value={rating}
onChange={onChange}
min={minRating}
max={maxRating}
step={0.5}
/>
```
When I test the slider after making this update, the slider increments by 1, not by 0.5.
### Step-by-step reproduction instructions
1. Create a Gutenberg block.
2. Use the RangeControl component in your edit function and set the step attribute to 0.5.
3. Use RangleControl in Block Editor and see that it increments by 1, versus 0.5
### Screenshots, screen recording, code snippet
_No response_
### Environment info
- WordPress 5.9.2
### Please confirm that you have searched existing issues in the repo.
Yes
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
Yes
|
1.0
|
RangeControl: Does not support half step increments (i.e. 0.5) - ### Description
I am using the RangeControl component and it doesn't support half steps (i.e. setting the `step` attribute to `0.5`)
```
<RangeControl
label="Rating (0 - 10)"
value={rating}
onChange={onChange}
min={minRating}
max={maxRating}
step={0.5}
/>
```
When I test the slider after making this update, the slider increments by 1, not by 0.5.
### Step-by-step reproduction instructions
1. Create a Gutenberg block.
2. Use the RangeControl component in your edit function and set the step attribute to 0.5.
3. Use RangleControl in Block Editor and see that it increments by 1, versus 0.5
### Screenshots, screen recording, code snippet
_No response_
### Environment info
- WordPress 5.9.2
### Please confirm that you have searched existing issues in the repo.
Yes
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
Yes
|
test
|
rangecontrol does not support half step increments i e description i am using the rangecontrol component and it doesn t support half steps i e setting the step attribute to rangecontrol label rating value rating onchange onchange min minrating max maxrating step when i test the slider after making this update the slider increments by not by step by step reproduction instructions create a gutenberg block use the rangecontrol component in your edit function and set the step attribute to use ranglecontrol in block editor and see that it increments by versus screenshots screen recording code snippet no response environment info wordpress please confirm that you have searched existing issues in the repo yes please confirm that you have tested with all plugins deactivated except gutenberg yes
| 1
|
320,031
| 27,417,536,024
|
IssuesEvent
|
2023-03-01 14:43:22
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
acceptance: skip TestDockerCLI/test_sql_monitor
|
C-test-failure skipped-test T-server-and-security
|
[Test history](https://teamcity.cockroachdb.com/project.html?projectId=Cockroach_Ci_Tests&testNameId=-8278297318459355472&tab=testDetails) suggests it's flaked 7 times or so in the past 24 hrs
Jira issue: CRDB-23207
|
2.0
|
acceptance: skip TestDockerCLI/test_sql_monitor - [Test history](https://teamcity.cockroachdb.com/project.html?projectId=Cockroach_Ci_Tests&testNameId=-8278297318459355472&tab=testDetails) suggests it's flaked 7 times or so in the past 24 hrs
Jira issue: CRDB-23207
|
test
|
acceptance skip testdockercli test sql monitor suggests it s flaked times or so in the past hrs jira issue crdb
| 1
|
94,098
| 15,962,333,434
|
IssuesEvent
|
2021-04-16 01:04:38
|
RG4421/nucleus
|
https://api.github.com/repos/RG4421/nucleus
|
opened
|
CVE-2021-23337 (High) detected in lodash-3.10.1.tgz
|
security vulnerability
|
## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: nucleus/packages/@nucleus/package.json</p>
<p>Path to vulnerable library: nucleus/packages/@nucleus/node_modules/applause/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-addon-docs-0.6.16.tgz (Root Library)
- ember-component-css-0.7.4.tgz
- broccoli-replace-0.12.0.tgz
- applause-1.2.2.tgz
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.10.1","packageFilePaths":["/packages/@nucleus/package.json"],"isTransitiveDependency":true,"dependencyTree":"ember-cli-addon-docs:0.6.16;ember-component-css:0.7.4;broccoli-replace:0.12.0;applause:1.2.2;lodash:3.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23337","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23337 (High) detected in lodash-3.10.1.tgz - ## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: nucleus/packages/@nucleus/package.json</p>
<p>Path to vulnerable library: nucleus/packages/@nucleus/node_modules/applause/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-addon-docs-0.6.16.tgz (Root Library)
- ember-component-css-0.7.4.tgz
- broccoli-replace-0.12.0.tgz
- applause-1.2.2.tgz
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.10.1","packageFilePaths":["/packages/@nucleus/package.json"],"isTransitiveDependency":true,"dependencyTree":"ember-cli-addon-docs:0.6.16;ember-component-css:0.7.4;broccoli-replace:0.12.0;applause:1.2.2;lodash:3.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23337","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file nucleus packages nucleus package json path to vulnerable library nucleus packages nucleus node modules applause node modules lodash package json dependency hierarchy ember cli addon docs tgz root library ember component css tgz broccoli replace tgz applause tgz x lodash tgz vulnerable library found in base branch master vulnerability details lodash versions prior to are vulnerable to command injection via the template function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree ember cli addon docs ember component css broccoli replace applause lodash isminimumfixversionavailable true minimumfixversion lodash basebranches vulnerabilityidentifier cve vulnerabilitydetails lodash versions prior to are vulnerable to command injection via the template function vulnerabilityurl
| 0
|
6,053
| 2,806,801,248
|
IssuesEvent
|
2015-05-15 06:52:05
|
IDgis/geo-publisher-test
|
https://api.github.com/repos/IDgis/geo-publisher-test
|
closed
|
Tiling wordt niet uitgevoerd
|
readyfortest Vraag voor IDgis
|
De tiling van de groeplayer is door André Weijmer op acc aangevraagd, maar niet uitgevoerd. Ook geprobeerd de laag “gemeentegrenzen Overijssel” te tilen. Dit lukt ook niet. Het lijkt erop dat de actie in de wachtrij geplaatst wordt.
|
1.0
|
Tiling wordt niet uitgevoerd - De tiling van de groeplayer is door André Weijmer op acc aangevraagd, maar niet uitgevoerd. Ook geprobeerd de laag “gemeentegrenzen Overijssel” te tilen. Dit lukt ook niet. Het lijkt erop dat de actie in de wachtrij geplaatst wordt.
|
test
|
tiling wordt niet uitgevoerd de tiling van de groeplayer is door andré weijmer op acc aangevraagd maar niet uitgevoerd ook geprobeerd de laag “gemeentegrenzen overijssel” te tilen dit lukt ook niet het lijkt erop dat de actie in de wachtrij geplaatst wordt
| 1
|
45,065
| 18,360,611,150
|
IssuesEvent
|
2021-10-09 06:15:02
|
zabatonni/uptime
|
https://api.github.com/repos/zabatonni/uptime
|
opened
|
🛑 CMS radioservices is down
|
status cms-radioservices
|
In [`f882cc4`](https://github.com/zabatonni/uptime/commit/f882cc40a0c0f177b13173ce974e8289b817865a
), CMS radioservices (https://articles.cms.radioservices.sk/) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
1.0
|
🛑 CMS radioservices is down - In [`f882cc4`](https://github.com/zabatonni/uptime/commit/f882cc40a0c0f177b13173ce974e8289b817865a
), CMS radioservices (https://articles.cms.radioservices.sk/) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
non_test
|
🛑 cms radioservices is down in cms radioservices was down http code response time ms
| 0