Dataset schema (reconstructed from the flattened preview):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | lengths 7 – 112 |
| repo_url | string | lengths 36 – 141 |
| action | string | 3 classes |
| title | string | lengths 1 – 744 |
| labels | string | lengths 4 – 574 |
| body | string | lengths 9 – 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 – 211k |
| label | string | 2 classes (process / non_process) |
| text | string | lengths 96 – 188k |
| binary_label | int64 | 0 – 1 |

In each row, `text_combine` is the title and body concatenated, and `text` is a lower-cased, punctuation-stripped copy of `text_combine`.
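Read as a pandas frame, rows like the ones previewed here can be split on the binary target; a minimal sketch, using a small in-memory stand-in (the preview does not name the underlying file, so no real path is assumed):

```python
import pandas as pd

# Tiny stand-in for the previewed frame; titles are taken from the rows below.
# A real load might be e.g. pd.read_csv(...) on the actual dataset file.
df = pd.DataFrame({
    "title": ["Containerize the senz-web", "Johnny Chimpo", "flatpak build problem"],
    "label": ["non_process", "process", "non_process"],
    "binary_label": [0, 1, 0],
})

# binary_label mirrors label: 1 for "process", 0 for "non_process".
process = df[df["binary_label"] == 1]
non_process = df[df["binary_label"] == 0]
```

The same boolean-indexing pattern extends to any of the schema's columns, e.g. filtering on `action` or `type`.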
**Row 45,934 (id 9,829,960,687)**
- type: IssuesEvent
- created_at: 2019-06-16 03:26:47
- repo: scorelab/senz (https://api.github.com/repos/scorelab/senz)
- action: closed
- title: Containerize the senz-web
- labels: GoogleSummerOfCode2019
- index: 1.0
- label: non_process
- binary_label: 0
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

**Description**
The senz-web is not containerized
**Solution**
Make different containers for frontend and backend and use docker-compose to interact with one another.
**Row 21,150 (id 28,127,346,079)**
- type: IssuesEvent
- created_at: 2023-03-31 18:59:09
- repo: hsmusic/hsmusic-data (https://api.github.com/repos/hsmusic/hsmusic-data)
- action: closed
- title: Update/add listening links for UNDERTALE/DELTARUNE albums
- labels: scope: beyond type: data fix type: involved process what: URLs
- index: 1.0
- label: process
- binary_label: 1
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

UNDERTALE Soundtrack (excludes CD/Vinyl bonus tracks):
- Replace YouTube track links and playlist with official uploads: https://www.youtube.com/playlist?list=OLAK5uy_l6pEkEJgy577R-aDlJ3Gkp5rmlgIOu8bc
- Add Spotify links: https://open.spotify.com/album/2M2Ae2SvZe3fmzUtlVOV5Z

DELTARUNE Chapter 1 OST:
- Replace YouTube track links and playlist with official uploads: https://www.youtube.com/playlist?list=OLAK5uy_lTzhIII4QNQ8oQDT-4okKVtmoQqdFBiMs
- Add Spotify links: https://open.spotify.com/album/6putGW0KxGMrgTZzplp2pF

DELTARUNE Chapter 2 OST:
- Replace YouTube track links and playlist with official uploads: https://www.youtube.com/playlist?list=OLAK5uy_kADMv-61zFDA5jOl2rTBQpqaa8celjh54
- Add official full album YouTube link: https://www.youtube.com/watch?v=u9YheqFuyng
- Add Spotify links: https://open.spotify.com/album/7DAiPXN3HdbktwwFzQXqrZ
**Row 27,820 (id 13,431,927,380)**
- type: IssuesEvent
- created_at: 2020-09-07 07:43:17
- repo: trixi-framework/Trixi.jl (https://api.github.com/repos/trixi-framework/Trixi.jl)
- action: opened
- title: multiply_dimensionwise: mortars etc.
- labels: performance
- index: True
- label: non_process
- binary_label: 0
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

Once https://github.com/JuliaLang/julia/pull/37444 is merged into mainstream Julia, we should revisit the choice of `MortarMatrix` in DG methods. Currently, we use `mortar_forward_upper::SMatrix` etc., which basically disables LoopVectorization in `@tullio` there. Once the performance of getting pointers to `view`s is increased, we should check whether it's better to use `mortar_forward_upper::Matrix` with LoopVectorization instead.
Xref https://discourse.julialang.org/t/tullio-loopvectorization-pointer-becomes-slow-for-views-because-of-base-memory-offset-for-ndims-5/46178
**Row 58,448 (id 14,398,713,499)**
- type: IssuesEvent
- created_at: 2020-12-03 09:57:29
- repo: joncampbell123/dosbox-x (https://api.github.com/repos/joncampbell123/dosbox-x)
- action: closed
- title: flatpak build problem
- labels: build issues platform: Linux
- index: 1.0
- label: non_process
- binary_label: 0
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

**Describe the bug**
Ok, that only took one release :-)
For some reason that I'm not sure about, the flatpak build process no longer works. The build seems to get stuck in a continuous loop. It successfully completes the autogen.sh and configure steps, then starts the make step and shows the following:
```
...
config.status: executing depfiles commands
Running: make
make: Warning: File 'Makefile.am' has modification time 1275 s in the future
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh /run/build/dosbox-x/missing aclocal-1.16
cd . && /bin/sh /run/build/dosbox-x/missing automake-1.16 --foreign
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh /run/build/dosbox-x/missing autoconf
/bin/sh ./config.status --recheck
running CONFIG_SHELL=/bin/sh /bin/sh ./configure --enable-core-inline --enable-debug=heavy --enable-sdl2 CFLAGS=-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection LDFLAGS=-L/app/lib -Wl,-z,relro,-z,now -Wl,--as-needed CXXFLAGS=-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection --no-create --no-recursion
checking build system type... x86_64-pc-linux-gnu
...
```
And configure and make just keep looping.
**To Reproduce**
Steps to reproduce the behavior:
```
git clone --recursive https://github.com/flathub/com.dosbox_x.DOSBox-X.git
cd com.dosbox_x.DOSBox-X
flatpak install flathub org.freedesktop.Sdk//20.08 -y
flatpak-builder --force-clean --install --user -y builddir com.dosbox_x.DOSBox-X.yaml
```
**Additional context**
By default the yaml file uses buildsystem=autotool, which worked with the last version. I tried switching to buildsystem=simple and specifying the build-commands, but that runs into the same circular configure issue, e.g.:
```
  - name: dosbox-x
    buildsystem: simple
    build-commands:
      - ./autogen.sh
      - ./configure --enable-core-inline --enable-debug=heavy --enable-sdl2
      - make
      - make install
    sources:
      - type: archive
        url: https://github.com/joncampbell123/dosbox-x/archive/dosbox-x-v0.83.7.tar.gz
        sha256: 9cdfa3267c340a869255d8eb1c4ebf4adde47c22854e1d013da22190350bfbb3
    post-install:
      - install -Dm644 /app/share/icons/hicolor/scalable/apps/dosbox-x.svg /app/share/icons/hicolor/scalable/apps/${FLATPAK_ID}.svg
      - desktop-file-edit --set-key=Icon --set-value=${FLATPAK_ID} /app/share/applications/${FLATPAK_ID}.desktop
```
I also tried replacing the build-commands with just a simple ``./build-debug-sdl2``, and then it crashes out with:
```
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... configure: error: newly created file is older than distributed files!
Check your system clock
Error: module dosbox-x: Child process exited with code 1
```
To clarify, the flatpak build runs in an environment with EPOCH set to zero. This is intentional, and I don't think you can disable it.
Loop during regular build:
```
```
**Row 290,716 (id 21,896,949,439)**
- type: IssuesEvent
- created_at: 2022-05-20 09:32:49
- repo: Avaiga/taipy-doc (https://api.github.com/repos/Avaiga/taipy-doc)
- action: closed
- title: Fix canonical links during docs generation
- labels: documentation
- index: 1.0
- label: non_process
- binary_label: 0
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

Today we are generating the following canonical data:
`<link rel="canonical" href="../../../release/1.0/getting_started/step_02/ReadMe/">`
We should have something like:
`<link rel="canonical" href="https://docs.taipy.io/en/{latest}/release/1.0/getting_started/step_02/ReadMe/">`
Remove `taipy_version` from mkdocs_template and hard-code it together with the site_url. => Remove
```
GS_DOCLINK = re.compile(r"(href=\")http://docs.taipy\.io(.*?\")", re.M | re.S)
html_content, n_changes = GS_DOCLINK.subn(f"\\1{env.conf['site_url']}\\2", html_content)
if n_changes != 0:
    file_was_changed = True
```
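The snippet in this issue can be exercised stand-alone; a runnable sketch of the same substitution, with a stand-in `site_url` value in place of `env.conf['site_url']`:

```python
import re

# Same pattern as in the issue: capture the href=" prefix and the path suffix
# around the literal http://docs.taipy.io host.
GS_DOCLINK = re.compile(r"(href=\")http://docs.taipy\.io(.*?\")", re.M | re.S)

# Stand-in for env.conf['site_url']; the real value comes from the mkdocs config.
site_url = "https://docs.taipy.io/en/latest"

html_content = '<a href="http://docs.taipy.io/getting_started/">GS</a>'
html_content, n_changes = GS_DOCLINK.subn(rf"\1{site_url}\2", html_content)
file_was_changed = n_changes != 0
```

After the substitution the link points at the versioned site root instead of the bare `http://` host, which is the fix the issue asks for.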
**Row 99,422 (id 4,054,982,849)**
- type: IssuesEvent
- created_at: 2016-05-24 14:11:54
- repo: MinetestForFun/server-minetestforfun (https://api.github.com/repos/MinetestForFun/server-minetestforfun)
- action: closed
- title: some textures missing
- labels: Media ➤ Graphics Modding ➤ BugFix Priority: Low
- index: 1.0
- label: non_process
- binary_label: 0
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

After visiting MFF Classic, I noticed the following in my debug file:
> 2016-04-27 09:45:59: ERROR[Main]: generateImage(): Could not load image "moretrees_acacia_wood.png" while building texture
> 2016-04-27 09:46:02: WARNING[Main]: Irrlicht: Could not open file of texture: vachette2.png
> 2016-04-27 09:51:32: WARNING[Main]: Irrlicht: Could not open file of texture: Goat.png

Something seems wrong there.
**Row 14,687 (id 17,798,491,511)**
- type: IssuesEvent
- created_at: 2021-09-01 03:09:03
- repo: lynnandtonic/nestflix.fun (https://api.github.com/repos/lynnandtonic/nestflix.fun)
- action: closed
- title: Johnny Chimpo
- labels: suggested title in process
- index: 1.0
- label: process
- binary_label: 1
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

Please add as much of the following info as you can:
Title: Johnny Chimpo
Type (film/tv show): TV Show
Film or show in which it appears: Super Troopers
Is the parent film/show streaming anywhere? Yes
About when in the parent film/show does it appear? Middle
Actual footage of the film/show can be seen (yes/no)? Yes
https://www.youtube.com/watch?v=UJtQhv9dp9o
**Row 7,064 (id 10,219,209,380)**
- type: IssuesEvent
- created_at: 2019-08-15 17:58:52
- repo: toggl/mobileapp (https://api.github.com/repos/toggl/mobileapp)
- action: closed
- title: Write translation guide for Resources.resx files
- labels: process
- index: 1.0
- label: process
- binary_label: 1
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

Let's write a guide to help translators translate our `Resources.resx` files.
This guide is gonna be used in the issue and PR templates as a reference.
The PR to close this issue needs to be approved by both @amulware and @maritoggl.
**Row 25,303 (id 12,540,741,641)**
- type: IssuesEvent
- created_at: 2020-06-05 10:57:03
- repo: flutter/flutter (https://api.github.com/repos/flutter/flutter)
- action: reopened
- title: Defer Image Decode unavailable on OPPO A57 when scroll fast
- labels: created via performance template
- index: True
- label: non_process
- binary_label: 0
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

<!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a performance problem, then fill out the template below.
Please read our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Details
<!--
1. Please tell us exactly how to reproduce the problem you are running into, and how you measured the performance.
2. Please attach a small application (ideally just one main.dart file) that
reproduces the problem. You could use https://gist.github.com/ for this.
3. Switch flutter to master channel and run this app on a physical device
using profile or release mode. Verify that the performance issue can be
reproduced there.
The bleeding edge master channel is encouraged here because Flutter is
constantly fixing bugs and improving its performance. Your problem in an
older Flutter version may have already been solved in the master channel.
-->
<!--
Please tell us on which target platform(s) the problem occurs (Android / iOS / Web / macOS / Linux / Windows)
Which target OS version, or for Web, which browser, is the test system running?
Does the problem occur on emulator/simulator as well as on physical devices?
-->
**Flutter Version**
Flutter 1.17.1 • channel stable • https://github.com/flutter/flutter.git
Framework • revision f7a6a7906b • 2020-05-12 18:39:00 -0700
Engine • revision 6bc433c6b6
Tools • Dart 2.8.2
**Target OS version/browser:**
6.0.1
**Devices:**
OPPO A57
## Logs
I create a gridview to load images from assets. When I scroll very fast I notice the velocity of the gridview is 0, so Image.dart will decode all images.
**Row 18,981 (id 24,969,276,496)**
- type: IssuesEvent
- created_at: 2022-11-01 22:39:58
- repo: hashgraph/hedera-mirror-node (https://api.github.com/repos/hashgraph/hedera-mirror-node)
- action: closed
- title: Split from Issue 2334: Eliminate false positive "String literals should not be duplicated" code smells.
- labels: bug process
- index: 1.0
- label: process
- binary_label: 1
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

### Description
Just that one highly-visible code smell for this Issue; all other code smells still to be done in [Issue 2334](https://github.com/hashgraph/hedera-mirror-node/issues/2334).
### Steps to reproduce
Run SonarCloud on the hedera-mirror-node repository.
### Additional context
_No response_
### Hedera network
mainnet
### Version
v0.67
### Operating system
_No response_
**Row 340,456 (id 24,655,719,177)**
- type: IssuesEvent
- created_at: 2022-10-17 23:13:12
- repo: hbalki/SWE573 (https://api.github.com/repos/hbalki/SWE573)
- action: closed
- title: Creating User Scenario for Web App Project
- labels: documentation assignment high priority in progress
- index: 1.0
- label: non_process
- binary_label: 0
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

This issue was created to complete the Web Application Project user scenario. A real persona will be listened to in order to complete this task.
**Row 13,351 (id 15,813,519,286)**
- type: IssuesEvent
- created_at: 2021-04-05 07:49:11
- repo: e4exp/paper_manager_abstract (https://api.github.com/repos/e4exp/paper_manager_abstract)
- action: opened
- title: Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks
- labels: 2021 Data Augmentation Natural Language Processing Transformer
- index: 1.0
- label: process
- binary_label: 1
- text_combine / text: duplicates of title + body (verbatim / normalized)

body:

- https://arxiv.org/abs/2010.02394
- 2021

Mixup is a recent data augmentation technique that linearly interpolates input examples and their corresponding labels; by interpolating images at the pixel level, it has shown strong effectiveness for image classification. Inspired by this work, this paper investigates (i) how to apply mixup to natural language processing tasks, since text data can hardly be mixed in its raw format, and (ii) whether mixup is still effective in transformer-based learning models such as BERT. To achieve these goals, we incorporate mixup into a transformer-based pre-trained architecture, named "mixup-transformer", for a wide range of NLP tasks while keeping the training system end-to-end. We evaluate the proposed framework with extensive experiments on the GLUE benchmark. Furthermore, we also examine the performance of mixup-transformer in low-resource scenarios by reducing the training data by fixed ratios. Our studies show that mixup is a domain-independent data augmentation technique for pre-trained language models, resulting in significant performance improvements for transformer-based models.
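The mechanism the abstract describes, linear interpolation of inputs and labels, is small enough to sketch; a minimal numpy version over hypothetical sentence embeddings rather than raw text, as the paper motivates (the original mixup formulation draws the coefficient from a Beta distribution; a fixed value is used here for clarity):

```python
import numpy as np

def mixup(x_i, x_j, y_i, y_j, lam):
    """Linearly interpolate two examples and their one-hot labels."""
    x = lam * x_i + (1.0 - lam) * x_j
    y = lam * y_i + (1.0 - lam) * y_j
    return x, y

# Two stand-in "sentence embeddings" with one-hot labels.
x_i = np.array([1.0, 0.0]); y_i = np.array([1.0, 0.0])
x_j = np.array([0.0, 1.0]); y_j = np.array([0.0, 1.0])

x, y = mixup(x_i, x_j, y_i, y_j, lam=0.7)
```

Because raw text cannot be interpolated, mixup-transformer applies this kind of interpolation to hidden representations instead of pixels.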
22,221
| 30,771,268,297
|
IssuesEvent
|
2023-07-30 23:27:51
|
danrleypereira/verzel-pleno-prova
|
https://api.github.com/repos/danrleypereira/verzel-pleno-prova
|
closed
|
Aprimoramentos na aplicação de gerenciamento de veículos
|
enhancement feature Processo Seletivo
|
During development and code review of the vehicle-management application, we identified several areas where improvements can be made to increase the application's efficiency, security, and usability.
**Componentizing the Pagination Logic**
The pagination logic currently lives in the Cars component. To improve code reuse and maintainability, this logic should be moved into its own component.
**Creating a Vehicle Edit/Registration Component**
Vehicle editing and registration are currently handled in the HoveredVehicle component. This should be moved to a new component to separate the components' responsibilities.
**Protecting Routes Based on Token Expiration**
To improve the application's security, routes should be protected based on the expiration of the user's token. This means that if the user's token has expired, they should be redirected to the login screen.
**Handling Nonexistent Routes**
When the user tries to navigate to a route that does not exist, they should be redirected to a specific route, such as the home page or a custom error page.
**Improvements to the VehicleForm Component**
The VehicleForm component needs some improvements to avoid switching between controlled and uncontrolled inputs. The initial vehicle state should be defined to avoid uncontrolled inputs.
**_Tasks:_**
- [x] Move the pagination logic into its own component.
- [x] Create a vehicle edit/registration component.
- [x] Implement route protection based on token expiration.
- [x] Implement redirection to a specific route when the user tries to access a route that does not exist.
- [x] Improve the VehicleForm component to avoid switching between controlled and uncontrolled inputs.
|
1.0
|
Improvements to the vehicle-management application - During development and code review of the vehicle-management application, we identified several areas where improvements can be made to increase the application's efficiency, security, and usability.
**Componentizing the Pagination Logic**
The pagination logic currently lives in the Cars component. To improve code reuse and maintainability, this logic should be moved into its own component.
**Creating a Vehicle Edit/Registration Component**
Vehicle editing and registration are currently handled in the HoveredVehicle component. This should be moved to a new component to separate the components' responsibilities.
**Protecting Routes Based on Token Expiration**
To improve the application's security, routes should be protected based on the expiration of the user's token. This means that if the user's token has expired, they should be redirected to the login screen.
**Handling Nonexistent Routes**
When the user tries to navigate to a route that does not exist, they should be redirected to a specific route, such as the home page or a custom error page.
**Improvements to the VehicleForm Component**
The VehicleForm component needs some improvements to avoid switching between controlled and uncontrolled inputs. The initial vehicle state should be defined to avoid uncontrolled inputs.
**_Tasks:_**
- [x] Move the pagination logic into its own component.
- [x] Create a vehicle edit/registration component.
- [x] Implement route protection based on token expiration.
- [x] Implement redirection to a specific route when the user tries to access a route that does not exist.
- [x] Improve the VehicleForm component to avoid switching between controlled and uncontrolled inputs.
|
process
|
improvements to the vehicle management application during development and code review of the vehicle management application we identified several areas where improvements can be made to increase the application s efficiency security and usability componentizing the pagination logic the pagination logic currently lives in the cars component to improve code reuse and maintainability this logic should be moved into its own component creating a vehicle edit registration component vehicle editing and registration are currently handled in the hoveredvehicle component this should be moved to a new component to separate the components responsibilities protecting routes based on token expiration to improve the application s security routes should be protected based on the expiration of the user s token this means that if the user s token has expired they should be redirected to the login screen handling nonexistent routes when the user tries to navigate to a route that does not exist they should be redirected to a specific route such as the home page or a custom error page improvements to the vehicleform component the vehicleform component needs some improvements to avoid switching between controlled and uncontrolled inputs the initial vehicle state should be defined to avoid uncontrolled inputs tasks move the pagination logic into its own component create a vehicle edit registration component implement route protection based on token expiration implement redirection to a specific route when the user tries to access a route that does not exist improve the vehicleform component to avoid switching between controlled and uncontrolled inputs
| 1
|
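The token-expiration route guard described in the record above belongs to a React/Node app, but the underlying check is framework-agnostic. Below is a minimal Python sketch under stated assumptions: the token is a JWT whose payload carries an `exp` claim, and the `/login` and `/vehicles` route names are hypothetical. The sketch deliberately skips signature verification, so it only decides where to redirect; it is not an authentication check.

```python
import base64
import json
import time
from typing import Optional

def is_token_expired(jwt_token: str, now: Optional[float] = None) -> bool:
    """Return True if the JWT's `exp` claim is in the past.

    This decodes the payload only; it does NOT verify the signature,
    so it is suitable for client-side redirect logic, not for auth.
    """
    payload_b64 = jwt_token.split(".")[1]
    # Restore the padding that base64url encoding strips.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    now = time.time() if now is None else now
    return payload.get("exp", 0) <= now

def guard_route(jwt_token: str) -> str:
    """Hypothetical guard: return the route to render, or a login redirect."""
    return "/login" if is_token_expired(jwt_token) else "/vehicles"
```

In the actual app this decision would live in a route wrapper component, but the expiry comparison itself is the same.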
230,556
| 7,611,563,726
|
IssuesEvent
|
2018-05-01 14:25:10
|
mutasemNidal/Adopt-Saba
|
https://api.github.com/repos/mutasemNidal/Adopt-Saba
|
closed
|
Create login page for the application
|
Difficulty 2 Lower Priority
|
Create the login page, design it, and connect the login button to the main app page.
**duration** 3 days
|
1.0
|
Create login page for the application - Create the login page, design it, and connect the login button to the main app page.
**duration** 3 days
|
non_process
|
create login page for the application create the log in page and design it and connect the log in button to the main app page duration days
| 0
|
8,831
| 2,612,905,548
|
IssuesEvent
|
2015-02-27 17:25:43
|
chrsmith/windows-package-manager
|
https://api.github.com/repos/chrsmith/windows-package-manager
|
closed
|
Some packages install unwanted software
|
auto-migrated Type-Defect
|
```
What steps will reproduce the problem?
1. install a package
2. watch chrome getting installed and replace your current browser as default
3.
What is the expected output? What do you see instead?
self explanatory.
i think it was cdburnerxp
What version of the product are you using? On what operating system?
1.15.6 on windows xp sp3
Please provide any additional information below.
```
Original issue reported on code.google.com by `jerobarr...@gmail.com` on 13 Feb 2012 at 9:43
|
1.0
|
Some packages install unwanted software - ```
What steps will reproduce the problem?
1. install a package
2. watch chrome getting installed and replace your current browser as default
3.
What is the expected output? What do you see instead?
self explanatory.
i think it was cdburnerxp
What version of the product are you using? On what operating system?
1.15.6 on windows xp sp3
Please provide any additional information below.
```
Original issue reported on code.google.com by `jerobarr...@gmail.com` on 13 Feb 2012 at 9:43
|
non_process
|
some packages install unwanted software what steps will reproduce the problem install a package watch chrome getting installed and replace your current browser as default what is the expected output what do you see instead self explanatory i think it was cdburnerxp what version of the product are you using on what operating system on windows xp please provide any additional information below original issue reported on code google com by jerobarr gmail com on feb at
| 0
|
69,148
| 22,207,402,881
|
IssuesEvent
|
2022-06-07 15:56:51
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Different labels for same actions in widget title ↔ room info
|
T-Defect S-Tolerable A-Widgets X-Needs-Design O-Uncommon
|
### Steps to reproduce
1. Add a custom widget to a room
2. Open the room info menu
### Outcome
#### What did you expect?
- Room info should show the same buttons as in the widget title
#### What happened instead?
- It shows different buttons (that do the same)

### Operating system
Ubuntu 22.04 LTS
### Browser information
Firefox 100.0.2 (64-bit)
### URL for webapp
https://develop.element.io/
### Application version
Element version: b44df4bcc311-react-3a20cb17038b-js-2982bd79f610 Olm version: 3.2.8
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Different labels for same actions in widget title ↔ room info - ### Steps to reproduce
1. Add a custom widget to a room
2. Open the room info menu
### Outcome
#### What did you expect?
- Room info should show the same buttons as in the widget title
#### What happened instead?
- It shows different buttons (that do the same)

### Operating system
Ubuntu 22.04 LTS
### Browser information
Firefox 100.0.2 (64-bit)
### URL for webapp
https://develop.element.io/
### Application version
Element version: b44df4bcc311-react-3a20cb17038b-js-2982bd79f610 Olm version: 3.2.8
### Homeserver
_No response_
### Will you send logs?
No
|
non_process
|
different labels for same actions in widget title ↔ room info steps to reproduce add a custom widget to a room open the room info menu outcome what did you expect room info should show the same buttons as in the widget title what happened instead it shows different buttons that do the same operating system ubuntu lts browser information firefox bit url for webapp application version element version react js olm version homeserver no response will you send logs no
| 0
|
26,531
| 4,750,323,409
|
IssuesEvent
|
2016-10-22 09:01:19
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
interp1d (kind=zero) returns wrong value for rightmost interpolation point
|
defect scipy.interpolate
|
```python
import scipy
from scipy.interpolate import interp1d
f1 = interp1d([0, 1], [0.0, 1.0], kind='zero')
f2 = interp1d([0, 1, 2], [0.0, 1.0, 2.0], kind='zero')
f3 = interp1d([0, 1, 2, 3], [0.0, 1.0, 2.0, 3.0], kind='zero')
print(f1([0, 1]))
print(f2([0, 1, 2]))
print(f3([0, 1, 2, 3]))
print(scipy.__version__)
# Output:
# [ 0. 0.]
# [ 0. 1. 1.]
# [ 0. 1. 2. 2.]
# 0.17.1
# Expected output:
# [ 0. 1.]
# [ 0. 1. 2.]
# [ 0. 1. 2. 3.]
# 0.17.1
```
|
1.0
|
interp1d (kind=zero) returns wrong value for rightmost interpolation point - ```python
import scipy
from scipy.interpolate import interp1d
f1 = interp1d([0, 1], [0.0, 1.0], kind='zero')
f2 = interp1d([0, 1, 2], [0.0, 1.0, 2.0], kind='zero')
f3 = interp1d([0, 1, 2, 3], [0.0, 1.0, 2.0, 3.0], kind='zero')
print(f1([0, 1]))
print(f2([0, 1, 2]))
print(f3([0, 1, 2, 3]))
print(scipy.__version__)
# Output:
# [ 0. 0.]
# [ 0. 1. 1.]
# [ 0. 1. 2. 2.]
# 0.17.1
# Expected output:
# [ 0. 1.]
# [ 0. 1. 2.]
# [ 0. 1. 2. 3.]
# 0.17.1
```
|
non_process
|
kind zero returns wrong value for rightmost interpolation point python import scipy from scipy interpolate import kind zero kind zero kind zero print print print print scipy version output expected output
| 0
|
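Until a fixed SciPy is available, the right-endpoint behavior shown in the record above can be worked around by implementing the zero-order hold directly with `np.searchsorted`, clamping the interval index so the rightmost knot maps to its own value. This is a sketch of the intended semantics, not SciPy's internal implementation.

```python
import numpy as np

def interp_zero(xk, yk, xq):
    """Piecewise-constant ('zero') interpolation that returns yk[-1]
    at the rightmost knot, unlike the buggy interp1d shown above."""
    xk = np.asarray(xk, dtype=float)
    yk = np.asarray(yk, dtype=float)
    xq = np.asarray(xq, dtype=float)
    # Index of the interval each query point falls into; side='right'
    # makes an exact knot hit take the value at that knot.
    idx = np.searchsorted(xk, xq, side="right") - 1
    idx = np.clip(idx, 0, len(yk) - 1)
    return yk[idx]

print(interp_zero([0, 1, 2, 3], [0.0, 1.0, 2.0, 3.0], [0, 1, 2, 3]))
# [0. 1. 2. 3.]
```

Newer SciPy versions also offer `kind='previous'` in `interp1d`, which provides similar step semantics.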
490,941
| 14,142,443,544
|
IssuesEvent
|
2020-11-10 14:04:15
|
endel/mazmorra
|
https://api.github.com/repos/endel/mazmorra
|
closed
|
Rewrite CharacterBuilder to use the DOM instead.
|
priority-low
|
- [ ] Implement character preview (+rotation) in the DOM + Canvas
- [ ] Use DOM for character preferences
- [ ] Name
- [ ] Class
- [ ] Eye color
- [ ] Body color
- [ ] Hair style
- [ ] Hair color
- [ ] Start!
- [ ] (Remove all usages of `three-text2d`)
|
1.0
|
Rewrite CharacterBuilder to use the DOM instead. - - [ ] Implement character preview (+rotation) in the DOM + Canvas
- [ ] Use DOM for character preferences
- [ ] Name
- [ ] Class
- [ ] Eye color
- [ ] Body color
- [ ] Hair style
- [ ] Hair color
- [ ] Start!
- [ ] (Remove all usages of `three-text2d`)
|
non_process
|
rewrite characterbuilder to use the dom instead implement character preview rotation in the dom canvas use dom for character preferences name class eye color body color hair style hair color start remove all usages of three
| 0
|
3,649
| 6,688,376,207
|
IssuesEvent
|
2017-10-08 14:05:28
|
TraningManagementSystem/tms
|
https://api.github.com/repos/TraningManagementSystem/tms
|
opened
|
Try introducing CircleCI
|
dev process
|
### Description
Try introducing CircleCI.
----
### Details
A very basic build is already set up on TravisCI, but nothing has been done for CircleCI yet, so we will try out both of the CI tools that currently seem most popular.
----
### Relation Issue
None
----
|
1.0
|
Try introducing CircleCI - ### Description
Try introducing CircleCI.
----
### Details
A very basic build is already set up on TravisCI, but nothing has been done for CircleCI yet, so we will try out both of the CI tools that currently seem most popular.
----
### Relation Issue
None
----
|
process
|
try introducing circleci description try introducing circleci details a very basic build is already set up on travisci but nothing has been done for circleci yet so we will try out both of the ci tools that currently seem most popular relation issue none
| 1
|
200,105
| 15,089,853,212
|
IssuesEvent
|
2021-02-06 08:02:41
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: ycsb/C/nodes=3 failed
|
C-test-failure O-roachtest O-robot branch-master release-blocker
|
[(roachtest).ycsb/C/nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2650257&tab=buildLog) on [master@81a2c26a104fa8cc7e8b530b837ffb6ff85ddc5a](https://github.com/cockroachdb/cockroach/commits/81a2c26a104fa8cc7e8b530b837ffb6ff85ddc5a):
```
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (2) output in run_080226.636_n4_workload_run_ycsb
Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2650257-1612594943-37-n4cpu8:4 -- ./workload run ycsb --init --insert-count=1000000 --workload=C --concurrency=144 --splits=3 --histograms=perf/stats.json --select-for-update=true --ramp=1m --duration=10m {pgurl:1-3} returned
| stderr:
| ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload)
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 4. Command with error:
| | ```
| | ./workload run ycsb --init --insert-count=1000000 --workload=C --concurrency=144 --splits=3 --histograms=perf/stats.json --select-for-update=true --ramp=1m --duration=10m {pgurl:1-3}
| | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (4) exit status 20
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError
cluster.go:2687,ycsb.go:62,ycsb.go:79,test_runner.go:767: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
-- stack trace:
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2675
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2683
| main.registerYCSB.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/ycsb.go:62
| main.registerYCSB.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/ycsb.go:79
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:767
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2731
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
-- stack trace:
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2645
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5652
| runtime.main
| /usr/local/go/src/runtime/proc.go:191
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
<details><summary>More</summary><p>
Artifacts: [/ycsb/C/nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2650257&tab=artifacts#/ycsb/C/nodes=3)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aycsb%2FC%2Fnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: ycsb/C/nodes=3 failed - [(roachtest).ycsb/C/nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2650257&tab=buildLog) on [master@81a2c26a104fa8cc7e8b530b837ffb6ff85ddc5a](https://github.com/cockroachdb/cockroach/commits/81a2c26a104fa8cc7e8b530b837ffb6ff85ddc5a):
```
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (2) output in run_080226.636_n4_workload_run_ycsb
Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2650257-1612594943-37-n4cpu8:4 -- ./workload run ycsb --init --insert-count=1000000 --workload=C --concurrency=144 --splits=3 --histograms=perf/stats.json --select-for-update=true --ramp=1m --duration=10m {pgurl:1-3} returned
| stderr:
| ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload)
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 4. Command with error:
| | ```
| | ./workload run ycsb --init --insert-count=1000000 --workload=C --concurrency=144 --splits=3 --histograms=perf/stats.json --select-for-update=true --ramp=1m --duration=10m {pgurl:1-3}
| | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (4) exit status 20
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError
cluster.go:2687,ycsb.go:62,ycsb.go:79,test_runner.go:767: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
-- stack trace:
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2675
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2683
| main.registerYCSB.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/ycsb.go:62
| main.registerYCSB.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/ycsb.go:79
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:767
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2731
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
-- stack trace:
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2645
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5652
| runtime.main
| /usr/local/go/src/runtime/proc.go:191
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
<details><summary>More</summary><p>
Artifacts: [/ycsb/C/nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2650257&tab=artifacts#/ycsb/C/nodes=3)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aycsb%2FC%2Fnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_process
|
roachtest ycsb c nodes failed on runtime goexit usr local go src runtime asm s wraps output in run workload run ycsb wraps home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload run ycsb init insert count workload c concurrency splits histograms perf stats json select for update true ramp duration pgurl returned stderr workload lib linux gnu libm so version glibc not found required by workload error command problem exit status command problem wraps node command with error workload run ycsb init insert count workload c concurrency splits histograms perf stats json select for update true ramp duration pgurl wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout wraps exit status error types withstack withstack errutil withprefix main withcommanddetails exec exiterror cluster go ycsb go ycsb go test runner go monitor failure monitor task failed t fatal was called attached stack trace stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main registerycsb home agent work go src github com cockroachdb cockroach pkg cmd roachtest ycsb go main registerycsb home agent work go src github com cockroachdb cockroach pkg cmd roachtest ycsb go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types 
withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack errutil leaferror more artifacts powered by
| 0
|
212,622
| 23,931,395,450
|
IssuesEvent
|
2022-09-10 15:53:00
|
mheob/react-ui-library
|
https://api.github.com/repos/mheob/react-ui-library
|
closed
|
CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz
|
wontfix security vulnerability
|
## CVE-2020-28469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- stylelint-config-rational-order-0.1.2.tgz (Root Library)
- stylelint-9.10.1.tgz
- globby-9.2.0.tgz
- fast-glob-2.2.7.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mheob/react-ui-library/commit/c4400d34d77734d6906624db8b697771e5e294da">c4400d34d77734d6906624db8b697771e5e294da</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz - ## CVE-2020-28469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- stylelint-config-rational-order-0.1.2.tgz (Root Library)
- stylelint-9.10.1.tgz
- globby-9.2.0.tgz
- fast-glob-2.2.7.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mheob/react-ui-library/commit/c4400d34d77734d6906624db8b697771e5e294da">c4400d34d77734d6906624db8b697771e5e294da</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in glob parent tgz cve high severity vulnerability vulnerable library glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href path to dependency file package json path to vulnerable library node modules glob parent package json dependency hierarchy stylelint config rational order tgz root library stylelint tgz globby tgz fast glob tgz x glob parent tgz vulnerable library found in head commit a href found in base branch main vulnerability details this affects the package glob parent before the enclosure regex used to check for strings ending in enclosure containing path separator publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent step up your open source security game with mend
| 0
|
20,407
| 27,065,998,982
|
IssuesEvent
|
2023-02-14 00:30:34
|
nephio-project/sig-release
|
https://api.github.com/repos/nephio-project/sig-release
|
opened
|
Define and document contributor guidelines for the Nephio project.
|
area/process-mgmt
|
We need to define and document contributor guidelines for the project. We can take a lot of inspiration from the k8s project for this such as signing CLAs, contributor etiquettes, PR process, getting started etc. Here are the relevant links from the k8s project
https://github.com/kubernetes/community/tree/master/contributors/guide
https://github.com/kubernetes/community/blob/master/contributors/guide/contributing.md
https://www.kubernetes.dev/docs/contributor-cheatsheet/
Based on the above we need to define the guidelines as per our needs and document them in GitHub and/or wiki.
|
1.0
|
Define and document contributor guidelines for the Nephio project. - We need to define and document contributor guidelines for the project. We can take a lot of inspiration from the k8s project for this such as signing CLAs, contributor etiquettes, PR process, getting started etc. Here are the relevant links from the k8s project
https://github.com/kubernetes/community/tree/master/contributors/guide
https://github.com/kubernetes/community/blob/master/contributors/guide/contributing.md
https://www.kubernetes.dev/docs/contributor-cheatsheet/
Based on the above we need to define the guidelines as per our needs and document them in GitHub and/or wiki.
|
process
|
define and document contributor guidelines for the nephio project we need to define and document contributor guidelines for the project we can take a lot of inspiration from the project for this such as signing clas contributor etiquettes pr process getting started etc here are the relevant links from the project based on the above we need to define the guidelines as per our needs and document them in github and or wiki
| 1
|
825,159
| 31,275,453,429
|
IssuesEvent
|
2023-08-22 05:48:18
|
Team-Ampersand/Dotori-iOS
|
https://api.github.com/repos/Team-Ampersand/Dotori-iOS
|
opened
|
Separate the Adapter from UIKitUtil
|
🔨 Refactor 2️⃣Priority: Medium
|
### Describe
The Table/Collection Adapter currently lives in UIKitUtil; it would be better to split it out.
### Additional
_No response_
|
1.0
|
Separate the Adapter from UIKitUtil - ### Describe
The Table/Collection Adapter currently lives in UIKitUtil; it would be better to split it out.
### Additional
_No response_
|
non_process
|
separate the adapter from uikitutil describe the table collection adapter currently lives in uikitutil it would be better to split it out additional no response
| 0
|
20,975
| 27,825,806,877
|
IssuesEvent
|
2023-03-19 18:55:07
|
CrazyOldBuffalo/FYP-Smart-Speaker
|
https://api.github.com/repos/CrazyOldBuffalo/FYP-Smart-Speaker
|
closed
|
Implement TTS Services to allow CalDAVServices to output as Speech
|
CalDAV Development Voice Processing
|
Add TTS service to allow the application to output CalDAVServices requests as Sound to the user after they select a command via console inputs.
Apply TTS to the following commands from the CalDAVServices:
- [x] List all Calendars
- [x] List all events on a specific calendar day
- [x] Add an event to the calendar
- [ ] Delete an event on the calendar
- [ ] (Opt) Edit an event on the calendar
|
1.0
|
Implement TTS Services to allow CalDAVServices to output as Speech - Add TTS service to allow the application to output CalDAVServices requests as Sound to the user after they select a command via console inputs.
Apply TTS to the following commands from the CalDAVServices:
- [x] List all Calendars
- [x] List all events on a specific calendar day
- [x] Add an event to the calendar
- [ ] Delete an event on the calendar
- [ ] (Opt) Edit an event on the calendar
|
process
|
implement tts services to allow caldavservices to output as speech add tts service to allow the application to output caldavservices requests as sound to the user after they select a command via console inputs apply tts to the following commands from the caldavservices list all calendars list all events on a specific calendar day add an event to the calendar delete an event on the calendar opt edit an event on the calendar
| 1
|
318,243
| 23,709,293,540
|
IssuesEvent
|
2022-08-30 06:16:44
|
johannes-schliephake/nextcloud-passwords-ios
|
https://api.github.com/repos/johannes-schliephake/nextcloud-passwords-ios
|
closed
|
App not found on the App Store
|
documentation
|
Hello, I am on the French app store and the app cannot be found, whether on iPad or iPhone. When I try to airdrop the app to someone, the app store opens and the message that the app is not available in your region appears. However, I installed it barely 1 month ago.
|
1.0
|
App not found on the App Store - Hello, I am on the French app store and the app cannot be found, whether on iPad or iPhone. When I try to airdrop the app to someone, the app store opens and the message that the app is not available in your region appears. However, I installed it barely 1 month ago.
|
non_process
|
app not round on the app store hello i am on the french app store and the app cannot be found whether on ipad or iphone when i try to airdrop the app to someone the app store opens and the message that the app is not available in your region appears however i installed it barely month ago
| 0
|
88,732
| 8,175,669,223
|
IssuesEvent
|
2018-08-28 03:28:17
|
empirical-org/Empirical-Core
|
https://api.github.com/repos/empirical-org/Empirical-Core
|
closed
|
Set up continuous integration with Jenkins for the mono repo
|
chore in progress testing
|
**This issue will likely be split.**
I am branching off of the the mono repo branch and adding a CI layer with Jenkins.
Goals.
1. Pull requests with passing builds and tests should be automatically merged into develop.
2. Pushes and pull requests to develop should deploy to staging if the build and tests pass
3. Pushes and pull requests to master should deploy to production if the build tests pass
4. Test suite should be run in parallel and optimized for speed in all ways possible.
|
1.0
|
Set up continuous integration with Jenkins for the mono repo - **This issue will likely be split.**
I am branching off of the the mono repo branch and adding a CI layer with Jenkins.
Goals.
1. Pull requests with passing builds and tests should be automatically merged into develop.
2. Pushes and pull requests to develop should deploy to staging if the build and tests pass
3. Pushes and pull requests to master should deploy to production if the build tests pass
4. Test suite should be run in parallel and optimized for speed in all ways possible.
|
non_process
|
set up continuous integration with jenkins for the mono repo this issue will likely be split i am branching off of the the mono repo branch and adding a ci layer with jenkins goals pull requests with passing builds and tests should be automatically merged into develop pushes and pull requests to develop should deploy to staging if the build and tests pass pushes and pull requests to master should deploy to production if the build tests pass test suite should be run in parallel and optimized for speed in all ways possible
| 0
|
14,903
| 18,291,746,893
|
IssuesEvent
|
2021-10-05 15:56:22
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Add/admin screen > Change 'User' to 'admin' in the following error message
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
Steps:
1. Login to PM
2. Click on the admins tab
3. Click on add admin or edit admin
4. do not assign any permissions
5. Click on the 'Add admin user and invite' button and observe
ER: in the following error message, 'user' should be changed to 'admin'.

|
3.0
|
[PM] Add/admin screen > Change 'User' to 'admin' in the following error message - Steps:
1. Login to PM
2. Click on the admins tab
3. Click on add admin or edit admin
4. do not assign any permissions
5. Click on the 'Add admin user and invite' button and observe
ER: in the following error message, 'user' should be changed to 'admin'.

|
process
|
add admin screen change user to admin in the following error message steps login to pm click on the admins tab click on add admin or edit admin do not assign any permissions click on the add admin user and invite button and observe er in the following error message user should be changed to admin
| 1
|
195,661
| 14,742,372,451
|
IssuesEvent
|
2021-01-07 12:11:00
|
php-coder/mystamps
|
https://api.github.com/repos/php-coder/mystamps
|
closed
|
Update wiremock-maven-plugin to 7.0.0
|
area/integration tests kind/dependency-update
|
Changelogs:
* wiremock-maven-plugin
- [x] https://github.com/automatictester/wiremock-maven-plugin/releases/tag/5.0.1
- [x] https://github.com/automatictester/wiremock-maven-plugin/releases/tag/6.0.0
- [x] https://github.com/automatictester/wiremock-maven-plugin/releases/tag/7.0.0
* wiremock
- [x] 2.26.0 https://groups.google.com/g/wiremock-user/c/V1jU9hjylyU
- [x] 2.26.1 https://groups.google.com/g/wiremock-user/c/FUAm24E_w84
- [x] 2.26.2 https://groups.google.com/g/wiremock-user/c/kfC3ge5PFG4
- [x] 2.26.3 https://groups.google.com/g/wiremock-user/c/oeO1sNRYViI
- [x] 2.27.0 https://groups.google.com/g/wiremock-user/c/dFqib2uesiY
- [x] 2.27.1 and 2.27.2 https://groups.google.com/g/wiremock-user/c/MUziA29jCLA
|
1.0
|
Update wiremock-maven-plugin to 7.0.0 - Changelogs:
* wiremock-maven-plugin
- [x] https://github.com/automatictester/wiremock-maven-plugin/releases/tag/5.0.1
- [x] https://github.com/automatictester/wiremock-maven-plugin/releases/tag/6.0.0
- [x] https://github.com/automatictester/wiremock-maven-plugin/releases/tag/7.0.0
* wiremock
- [x] 2.26.0 https://groups.google.com/g/wiremock-user/c/V1jU9hjylyU
- [x] 2.26.1 https://groups.google.com/g/wiremock-user/c/FUAm24E_w84
- [x] 2.26.2 https://groups.google.com/g/wiremock-user/c/kfC3ge5PFG4
- [x] 2.26.3 https://groups.google.com/g/wiremock-user/c/oeO1sNRYViI
- [x] 2.27.0 https://groups.google.com/g/wiremock-user/c/dFqib2uesiY
- [x] 2.27.1 and 2.27.2 https://groups.google.com/g/wiremock-user/c/MUziA29jCLA
|
non_process
|
update wiremock maven plugin to changelogs wiremock maven plugin wiremock and
| 0
|
10,774
| 13,595,821,849
|
IssuesEvent
|
2020-09-22 04:18:30
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Auto reject release if new version is available.
|
Pri2 devops-cicd-process/tech devops/prod support-request
|
"If you have multiple runs executing simultaneously, you must approve or reject each of them independently."
CD pipeline could offer new version to be approved many times daily basis in case of full automatic CD.
It is very annoying to reject each old version manually - it should be option to allow define it on Approval & Check view that old versions are automatically rejected if new version is asked for approval.
If it is known REST API call to update pipeline/approval, please advise.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b067a175-f640-7503-9c1e-f0130c6dbeda
* Version Independent ID: ff743c7b-a103-eae6-4478-62ba995a4b36
* Content: [Pipeline deployment approvals - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass)
* Content Source: [docs/pipelines/process/approvals.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/approvals.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @shashban
* Microsoft Alias: **shashban**
|
1.0
|
Auto reject release if new version is available. - "If you have multiple runs executing simultaneously, you must approve or reject each of them independently."
CD pipeline could offer new version to be approved many times daily basis in case of full automatic CD.
It is very annoying to reject each old version manually - it should be option to allow define it on Approval & Check view that old versions are automatically rejected if new version is asked for approval.
If it is known REST API call to update pipeline/approval, please advise.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b067a175-f640-7503-9c1e-f0130c6dbeda
* Version Independent ID: ff743c7b-a103-eae6-4478-62ba995a4b36
* Content: [Pipeline deployment approvals - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass)
* Content Source: [docs/pipelines/process/approvals.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/approvals.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @shashban
* Microsoft Alias: **shashban**
|
process
|
auto reject release if new version is available if you have multiple runs executing simultaneously you must approve or reject each of them independently cd pipeline could offer new version to be approved many times daily basis in case of full automatic cd it is very annoying to reject each old version manually it should be option to allow define it on approval check view that old versions are automatically rejected if new version is asked for approval if it is known rest api call to update pipeline approval please advise document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login shashban microsoft alias shashban
| 1
|
484,029
| 13,933,017,634
|
IssuesEvent
|
2020-10-22 08:07:45
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
support.mozilla.org - video or audio doesn't play
|
browser-fenix engine-gecko ml-needsdiagnosis-false priority-important
|
<!-- @browser: Firefox Mobile 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0.1; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/60255 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://support.mozilla.org/fr/kb/nouveautes-firefox-android?as=u&utm_source=inproduct&redirectslug=nouveautes-firefox-android-79&redirectlocale=fr
**Browser / Version**: Firefox Mobile 83.0
**Operating System**: Android 6.0.1
**Tested Another Browser**: Yes Chrome
**Problem type**: Video or audio doesn't play
**Description**: There is no video
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201016094031</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/10/c46cdb21-96a7-482d-84f2-9133dc8d4e8e)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
support.mozilla.org - video or audio doesn't play - <!-- @browser: Firefox Mobile 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0.1; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/60255 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://support.mozilla.org/fr/kb/nouveautes-firefox-android?as=u&utm_source=inproduct&redirectslug=nouveautes-firefox-android-79&redirectlocale=fr
**Browser / Version**: Firefox Mobile 83.0
**Operating System**: Android 6.0.1
**Tested Another Browser**: Yes Chrome
**Problem type**: Video or audio doesn't play
**Description**: There is no video
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201016094031</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/10/c46cdb21-96a7-482d-84f2-9133dc8d4e8e)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
support mozilla org video or audio doesn t play url browser version firefox mobile operating system android tested another browser yes chrome problem type video or audio doesn t play description there is no video steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
433
| 2,865,855,429
|
IssuesEvent
|
2015-06-05 01:27:39
|
besasm/EMGAATS
|
https://api.github.com/repos/besasm/EMGAATS
|
opened
|
Figure out the details of the model class
|
process question
|
Model Work Flow
1. create model
2. fill network
3. edit network
4. create simulation
5 add boundary conditions
6. deploy to engine
7. run engine
8. extract and post process results (should these be stored in the simulation class?)
9. map results (determine connection between simulation and network)
Working assumptions:
A. Simulations and Network are owned by the model.
B. Simulations persist (we save simulations to file)
if we go backwards in the workflow do we purge dependent steps. For example once simulations are created (step 4) can we go back to step 3 and do more edits? If so should we purge simulation collections?
What is controlled by gui and what is controlled by class?
For example if a new (or revised) network is added to a model, does set Network class purge simulations?
If a user decides to go to step 4 then the network could be locked. If the user decides to unlock and edit then (after big warning) simulations get purged.
White board photo at link below.

|
1.0
|
Figure out the details of the model class - Model Work Flow
1. create model
2. fill network
3. edit network
4. create simulation
5 add boundary conditions
6. deploy to engine
7. run engine
8. extract and post process results (should these be stored in the simulation class?)
9. map results (determine connection between simulation and network)
Working assumptions:
A. Simulations and Network are owned by the model.
B. Simulations persist (we save simulations to file)
if we go backwards in the workflow do we purge dependent steps. For example once simulations are created (step 4) can we go back to step 3 and do more edits? If so should we purge simulation collections?
What is controlled by gui and what is controlled by class?
For example if a new (or revised) network is added to a model, does set Network class purge simulations?
If a user decides to go to step 4 then the network could be locked. If the user decides to unlock and edit then (after big warning) simulations get purged.
White board photo at link below.

|
process
|
figure out the details of the model class model work flow create model fill network edit network create simulation add boundary conditions deploy to engine run engine extract and post process results should these be stored in the simulation class map results determine connection between simulation and network working assumptions a simulations and network are owned by the model b simulations persist we save simulations to file if we go backwards in the workflow do we purge dependent steps for example once simulations are created step can we go back to step and do more edits if so should we purge simulation collections what is controlled by gui and what is controlled by class for example if a new or revised network is added to a model does set network class purge simulations if a user decides to go to step then the network could be locked if the user decides to unlock and edit then after big warning simulations get purged white board photo at link below
| 1
|
25,662
| 25,570,646,090
|
IssuesEvent
|
2022-11-30 17:23:50
|
ClickHouse/ClickHouse
|
https://api.github.com/repos/ClickHouse/ClickHouse
|
closed
|
Duplicated LLVM submodules
|
usability
|
With the update to LLVM 15, which was great, CH now has multiple copies of the same LLVM submodules:
* libunwind: Unwind fork. AFAICT heavily modified.
* libcxx: Only minor differences with libc++ (LLVM 14)
* libcxxabi: Only minor differences with libc++-abi (LLVM 14)
* llvm-project: Has all the submodules (LLVM 15)
I think it'd be way easier to keep things in sync and update properly if at least libcxx, libcxxabi were using the llvm-project submodule code instead. I'm not sure about libunwind since that seems to include lots of changes that might be harder to integrate and keep updated.
|
True
|
Duplicated LLVM submodules - With the update to LLVM 15, which was great, CH now has multiple copies of the same LLVM submodules:
* libunwind: Unwind fork. AFAICT heavily modified.
* libcxx: Only minor differences with libc++ (LLVM 14)
* libcxxabi: Only minor differences with libc++-abi (LLVM 14)
* llvm-project: Has all the submodules (LLVM 15)
I think it'd be way easier to keep things in sync and update properly if at least libcxx, libcxxabi were using the llvm-project submodule code instead. I'm not sure about libunwind since that seems to include lots of changes that might be harder to integrate and keep updated.
|
non_process
|
duplicated llvm submodules with the update to llvm which was great ch now has multiple copies of the same llvm submodules libunwind unwind fork afaict heavily modified libcxx only minor differences with libc llvm libcxxabi only minor differences with libc abi llvm llvm project has all the submodules llvm i think it d be way easier to keep things in sync and update properly if at least libcxx libcxxabi were using the llvm project submodule code instead i m not sure about libunwind since that seems to include lots of changes that might be harder to integrate and keep updated
| 0
|
8,543
| 11,715,170,088
|
IssuesEvent
|
2020-03-09 13:43:09
|
arcus-azure/arcus.messaging
|
https://api.github.com/repos/arcus-azure/arcus.messaging
|
closed
|
Provide support for running multiple pumps in same app
|
area:message-processing feature
|
Provide support for running multiple pumps in same app to allow resource consolidation
|
1.0
|
Provide support for running multiple pumps in same app - Provide support for running multiple pumps in same app to allow resource consolidation
|
process
|
provide support for running multiple pumps in same app provide support for running multiple pumps in same app to allow resource consolidation
| 1
|
221,140
| 24,590,716,039
|
IssuesEvent
|
2022-10-14 01:45:03
|
raindigi/snippet-generator
|
https://api.github.com/repos/raindigi/snippet-generator
|
opened
|
CVE-2022-37601 (High) detected in loader-utils-0.2.17.tgz, loader-utils-1.1.0.tgz
|
security vulnerability
|
## CVE-2022-37601 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>loader-utils-0.2.17.tgz</b>, <b>loader-utils-1.1.0.tgz</b></p></summary>
<p>
<details><summary><b>loader-utils-0.2.17.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz</a></p>
<p>Path to dependency file: /snippet-generator/package.json</p>
<p>Path to vulnerable library: /node_modules/html-webpack-plugin/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- html-webpack-plugin-3.2.0.tgz (Root Library)
- :x: **loader-utils-0.2.17.tgz** (Vulnerable Library)
</details>
<details><summary><b>loader-utils-1.1.0.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.1.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.1.0.tgz</a></p>
<p>Path to dependency file: /snippet-generator/package.json</p>
<p>Path to vulnerable library: /node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- babel-loader-7.1.4.tgz (Root Library)
- :x: **loader-utils-1.1.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/raindigi/snippet-generator/commit/048036a9ab01caa8b88a2ac5e83d724da33ace80">048036a9ab01caa8b88a2ac5e83d724da33ace80</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function parseQuery in parseQuery.js in webpack loader-utils 2.0.0 via the name variable in parseQuery.js.
<p>Publish Date: 2022-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37601>CVE-2022-37601</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-12</p>
<p>Fix Resolution (loader-utils): 2.0.0</p>
<p>Direct dependency fix Resolution (html-webpack-plugin): 5.0.0</p><p>Fix Resolution (loader-utils): 2.0.0</p>
<p>Direct dependency fix Resolution (babel-loader): 8.2.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-37601 (High) detected in loader-utils-0.2.17.tgz, loader-utils-1.1.0.tgz - ## CVE-2022-37601 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>loader-utils-0.2.17.tgz</b>, <b>loader-utils-1.1.0.tgz</b></p></summary>
<p>
<details><summary><b>loader-utils-0.2.17.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz</a></p>
<p>Path to dependency file: /snippet-generator/package.json</p>
<p>Path to vulnerable library: /node_modules/html-webpack-plugin/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- html-webpack-plugin-3.2.0.tgz (Root Library)
- :x: **loader-utils-0.2.17.tgz** (Vulnerable Library)
</details>
<details><summary><b>loader-utils-1.1.0.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.1.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.1.0.tgz</a></p>
<p>Path to dependency file: /snippet-generator/package.json</p>
<p>Path to vulnerable library: /node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- babel-loader-7.1.4.tgz (Root Library)
- :x: **loader-utils-1.1.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/raindigi/snippet-generator/commit/048036a9ab01caa8b88a2ac5e83d724da33ace80">048036a9ab01caa8b88a2ac5e83d724da33ace80</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function parseQuery in parseQuery.js in webpack loader-utils 2.0.0 via the name variable in parseQuery.js.
<p>Publish Date: 2022-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37601>CVE-2022-37601</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-12</p>
<p>Fix Resolution (loader-utils): 2.0.0</p>
<p>Direct dependency fix Resolution (html-webpack-plugin): 5.0.0</p><p>Fix Resolution (loader-utils): 2.0.0</p>
<p>Direct dependency fix Resolution (babel-loader): 8.2.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in loader utils tgz loader utils tgz cve high severity vulnerability vulnerable libraries loader utils tgz loader utils tgz loader utils tgz utils for webpack loaders library home page a href path to dependency file snippet generator package json path to vulnerable library node modules html webpack plugin node modules loader utils package json dependency hierarchy html webpack plugin tgz root library x loader utils tgz vulnerable library loader utils tgz utils for webpack loaders library home page a href path to dependency file snippet generator package json path to vulnerable library node modules loader utils package json dependency hierarchy babel loader tgz root library x loader utils tgz vulnerable library found in head commit a href vulnerability details prototype pollution vulnerability in function parsequery in parsequery js in webpack loader utils via the name variable in parsequery js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution loader utils direct dependency fix resolution html webpack plugin fix resolution loader utils direct dependency fix resolution babel loader step up your open source security game with mend
| 0
|
286,449
| 21,576,343,331
|
IssuesEvent
|
2022-05-02 14:07:10
|
plastinin/let-s-code
|
https://api.github.com/repos/plastinin/let-s-code
|
opened
|
Вынести задачи в отдельные каталог
|
documentation enhancement
|
- [ ] Реализовать сохранение задач в каталоге [task_repo] репозитория в формате json/yaml/xml
- [ ] Реализовать возможность загрузки задач из репозитория
|
1.0
|
Вынести задачи в отдельные каталог - - [ ] Реализовать сохранение задач в каталоге [task_repo] репозитория в формате json/yaml/xml
- [ ] Реализовать возможность загрузки задач из репозитория
|
non_process
|
вынести задачи в отдельные каталог реализовать сохранение задач в каталоге репозитория в формате json yaml xml реализовать возможность загрузки задач из репозитория
| 0
|
221,396
| 7,382,606,573
|
IssuesEvent
|
2018-03-15 05:57:24
|
minishift/minishift
|
https://api.github.com/repos/minishift/minishift
|
closed
|
Unable to restart/reboot when using Minikube ISO
|
kind/bug priority/major status/needs-investigation
|
```
$ minishift start --iso-url minikube
$ minishift ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ sudo reboot
Connection to 192.168.42.207 closed by remote host.
Cannot establish SSH connection to the VM: exit status 255
$ minishift ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
$
```
Reproducible with:
```
$ minishift delete --force; minishift start --iso-url minikube; minishift stop; minishift start --iso-url minikube; minishift ssh -- docker ps
```
It will actually never do the second `minishift start`. If you break it off or wait until:
```
-- Starting Minishift VM ........................................................... FAIL E0312 00:12:35.169344 12990 start.go:369] Error starting the VM: Error configuring authorization on host: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded. Retrying.
Error starting the VM: Error configuring authorization on host: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded
```
When you use `minishift ssh -- docker ps` you will notice the error message as above: `Cannot connect to the Docker daemon`
|
1.0
|
Unable to restart/reboot when using Minikube ISO -
```
$ minishift start --iso-url minikube
$ minishift ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ sudo reboot
Connection to 192.168.42.207 closed by remote host.
Cannot establish SSH connection to the VM: exit status 255
$ minishift ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
$
```
Reproducible with:
```
$ minishift delete --force; minishift start --iso-url minikube; minishift stop; minishift start --iso-url minikube; minishift ssh -- docker ps
```
It will actually never do the second `minishift start`. If you break it off or wait until:
```
-- Starting Minishift VM ........................................................... FAIL E0312 00:12:35.169344 12990 start.go:369] Error starting the VM: Error configuring authorization on host: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded. Retrying.
Error starting the VM: Error configuring authorization on host: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded
```
When you use `minishift ssh -- docker ps` you will notice the error message as above: `Cannot connect to the Docker daemon`
|
non_process
|
unable to restart reboot when using minikube iso minishift start iso url minikube minishift ssh docker ps container id image command created status ports names sudo reboot connection to closed by remote host cannot establish ssh connection to the vm exit status minishift ssh docker ps cannot connect to the docker daemon at unix var run docker sock is the docker daemon running reproducible with minishift delete force minishift start iso url minikube minishift stop minishift start iso url minikube minishift ssh docker ps it will actually never do the second minishift start if you break it off or wait until starting minishift vm fail start go error starting the vm error configuring authorization on host too many retries waiting for ssh to be available last error maximum number of retries exceeded retrying error starting the vm error configuring authorization on host too many retries waiting for ssh to be available last error maximum number of retries exceeded when you use minishift ssh docker ps you will notice the error message as above cannot connect to the docker daemon
| 0
|
2,986
| 5,966,911,260
|
IssuesEvent
|
2017-05-30 14:57:05
|
eranhd/Anti-Drug-Jerusalem
|
https://api.github.com/repos/eranhd/Anti-Drug-Jerusalem
|
closed
|
הוספת סוג המשתמש שנרצה להוסיף
|
in process
|
בהוספת משתמש נוכל ליצור משתמש רק דרך ההרשאות של אותו המשתמש והוא יהיה בן שלו אוטומטי, למשל מנהל מחוז לא יוכל ליצור מנהל מחוז אחר, רק מנהל כללי יוכל ליצור מנהל מחוז
|
1.0
|
Adding the type of user we want to add - When adding a user, we will be able to create a user only through that user's permissions, and it will automatically be its child; for example, a district manager will not be able to create another district manager, only a general manager will be able to create a district manager
|
process
|
adding the type of user we want to add when adding a user we will be able to create a user only through that user s permissions and it will automatically be its child for example a district manager will not be able to create another district manager only a general manager will be able to create a district manager
| 1
|
88,487
| 11,099,670,030
|
IssuesEvent
|
2019-12-16 17:28:32
|
ipfs/docs
|
https://api.github.com/repos/ipfs/docs
|
reopened
|
Docs beta metrics collection: Implement
|
OKR 2: Metrics/Research/Testing Size: M design-front-end difficulty:moderate docs-ipfs
|
**_This issue is part of [Epic 2A: Define/implement beta site metrics collection](https://github.com/ipfs/docs/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen++%22epic+2a%22)._**
Implement collection of the metrics defined in #310, including means of displaying/reporting on a twice-monthly basis.
See https://github.com/ipfs/docs/issues/310#issuecomment-562165787 for the list of metrics to be collected.
|
1.0
|
Docs beta metrics collection: Implement - **_This issue is part of [Epic 2A: Define/implement beta site metrics collection](https://github.com/ipfs/docs/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen++%22epic+2a%22)._**
Implement collection of the metrics defined in #310, including means of displaying/reporting on a twice-monthly basis.
See https://github.com/ipfs/docs/issues/310#issuecomment-562165787 for the list of metrics to be collected.
|
non_process
|
docs beta metrics collection implement this issue is part of implement collection of the metrics defined in including means of displaying reporting on a twice monthly basis see for the list of metrics to be collected
| 0
|
14,350
| 17,374,005,801
|
IssuesEvent
|
2021-07-30 17:54:56
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Confusion between deployment pipeline and a release pipeline
|
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
|
Hi - I've been getting a new CI/CD pipeline setup for a greenfield project, but one thing is bothering me:
I do not fully understand the difference between deployment pipelines and a release pipeline. They both seem to enable the same thing: pushing builds to environments. One is yaml based, one is UI based (from what I can tell).
I see in the documentation there is a menu item for "Deploy apps", and "Deploy apps (Classic)". Deploy apps (Classic) talks about the release pipeline stuff (stages, automations, artifacts -- this is what I've been building) - but its wording makes me think it is going to be discontinued, as if it is the "Old version".
When I try to search for any kind of help on "Release pipelines", most of the time it takes me to documentation on yaml-based pipelines, which is confusing.
Should I be building my greenfield deployment pipeline with the documentation linked in this issue (yaml based), or a release pipeline?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Confusion between deployment pipeline and a release pipeline -
Hi - I've been getting a new CI/CD pipeline setup for a greenfield project, but one thing is bothering me:
I do not fully understand the difference between deployment pipelines and a release pipeline. They both seem to enable the same thing: pushing builds to environments. One is yaml based, one is UI based (from what I can tell).
I see in the documentation there is a menu item for "Deploy apps", and "Deploy apps (Classic)". Deploy apps (Classic) talks about the release pipeline stuff (stages, automations, artifacts -- this is what I've been building) - but its wording makes me think it is going to be discontinued, as if it is the "Old version".
When I try to search for any kind of help on "Release pipelines", most of the time it takes me to documentation on yaml-based pipelines, which is confusing.
Should I be building my greenfield deployment pipeline with the documentation linked in this issue (yaml based), or a release pipeline?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
confusion between deployment pipeline and a release pipeline hi i ve been getting a new ci cd pipeline setup for a greenfield project but one thing is bothering me i do not fully understand the difference between deployment pipelines and a release pipeline they both seem to enable the same thing pushing builds to environments one is yaml based one is ui based from what i can tell i see in the documentation there is a menu item for deploy apps and deploy apps classic deploy apps classic talks about the release pipeline stuff stages automations artifacts this is what i ve been building but its wording makes me think it is going to be discontinued as if it is the old version when i try to search for any kind of help on release pipelines most of the time it takes me to documentation on yaml based pipelines which is confusing should i be building my greenfield deployment pipeline with the documentation linked in this issue yaml based or a release pipeline document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
14,670
| 17,787,654,412
|
IssuesEvent
|
2021-08-31 13:01:01
|
googleapis/python-trace
|
https://api.github.com/repos/googleapis/python-trace
|
closed
|
Dependency Dashboard
|
api: cloudtrace type: process
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/pytest-6.x -->[chore(deps): update dependency pytest to v6.2.5](../pull/124)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/pytest-6.x -->[chore(deps): update dependency pytest to v6.2.5](../pull/124)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue provides visibility into renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any pull check this box to trigger a request for renovate to run again on this repository
| 1
|
11,103
| 13,941,836,317
|
IssuesEvent
|
2020-10-22 20:02:35
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Tone Equalizer with eigf causes weird artifacts when zoomed in and sometimes crashes when zooming out
|
bug: pending priority: high scope: image processing
|
<!-- IMPORTANT
Bug reports that do not make an effort to help the developers will be closed without notice.
Make sure that this bug has not already been opened and/or closed by searching the issues on GitHub, as duplicate bug reports will be closed.
A bug report simply stating that Darktable crashes is unhelpful, so please fill in most of the items below and provide detailed information.
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
When using the new eigf masking option of the tone equalizer, everything works fine when completely zoomed out. While zooming in (seems to be at zoom >30%) the image looks completely streaky, like one giant artifact, however with retained colors (see screenshot). Zooming back out or deactivating tone equalizer while zoomed in reverts to the normal image, however sometimes (rarely) darktable segfaults upon zooming out or deactivating tone equalizer.
Switching to the mask view in tone equalizer while being zoomed into the picture shows a normal mask.
Exporting the file appears to work fine, the exported JPG looks normal.
**To Reproduce**
1. Open fresh file
2. activate tone equalizer with eigf (I do not even have to change any settings)
3. Zoom into the image (appears to start if zoomed to more than 30%)
4. Image is one big artifact
5. zooming back out or deactivating tone equalizer sometimes (not always) segfaults with error message:
```
munmap_chunk(): invalid pointer
[1] 161243 abort (core dumped) darktable
```
6. Switching on the mask view while zoomed in shows the proper mask
**Expected behavior**
No artifacts and no crashes...
**Screenshots**
Zoomed out:

Zoomed in:

**Platform (please complete the following information):**
- Darktable Version: 3.3.0~git1181.7cc1296fd
- OS: Pop.OS 20.04 (Kernel 5.4.0-7642-generic)
- OpenCL activated or not, makes no difference
- Intel i5-6200U, Intel HD Graphics 520
**Additional context**
I tried this on multiple olympus raw files, it always happened.
Also happens on JPG files
|
1.0
|
Tone Equalizer with eigf causes weird artifacts when zoomed in and sometimes crashes when zooming out - <!-- IMPORTANT
Bug reports that do not make an effort to help the developers will be closed without notice.
Make sure that this bug has not already been opened and/or closed by searching the issues on GitHub, as duplicate bug reports will be closed.
A bug report simply stating that Darktable crashes is unhelpful, so please fill in most of the items below and provide detailed information.
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
When using the new eigf masking option of the tone equalizer, everything works fine when completely zoomed out. While zooming in (seems to be at zoom >30%) the image looks completely streaky, like one giant artifact, however with retained colors (see screenshot). Zooming back out or deactivating tone equalizer while zoomed in reverts to the normal image, however sometimes (rarely) darktable segfaults upon zooming out or deactivating tone equalizer.
Switching to the mask view in tone equalizer while being zoomed into the picture shows a normal mask.
Exporting the file appears to work fine, the exported JPG looks normal.
**To Reproduce**
1. Open fresh file
2. activate tone equalizer with eigf (I do not even have to change any settings)
3. Zoom into the image (appears to start if zoomed to more than 30%)
4. Image is one big artifact
5. zooming back out or deactivating tone equalizer sometimes (not always) segfaults with error message:
```
munmap_chunk(): invalid pointer
[1] 161243 abort (core dumped) darktable
```
6. Switching on the mask view while zoomed in shows the proper mask
**Expected behavior**
No artifacts and no crashes...
**Screenshots**
Zoomed out:

Zoomed in:

**Platform (please complete the following information):**
- Darktable Version: 3.3.0~git1181.7cc1296fd
- OS: Pop.OS 20.04 (Kernel 5.4.0-7642-generic)
- OpenCL activated or not, makes no difference
- Intel i5-6200U, Intel HD Graphics 520
**Additional context**
I tried this on multiple olympus raw files, it always happened.
Also happens on JPG files
|
process
|
tone equalizer with eigf causes weird artifacts when zoomed in and sometimes crashes when zooming out important bug reports that do not make an effort to help the developers will be closed without notice make sure that this bug has not already been opened and or closed by searching the issues on github as duplicate bug reports will be closed a bug report simply stating that darktable crashes is unhelpful so please fill in most of the items below and provide detailed information describe the bug when using the new eigf masking option of the tone equalizer everything works fine when completely zoomed out while zooming in seems to be at zoom the image looks completely streaky like one giant artifact however with retained colors see screenshot zooming back out or deactivating tone equalizer while zoomed in reverts to the normal image however sometimes rarely darktable segfaults upon zooming out or deactivating tone equalizer switching to the mask view in tone equalizer while being zoomed into the picture shows a normal mask exporting the file appears to work fine the exported jpg looks normal to reproduce open fresh file activate tone equalizer with eigf i do not even have to change any settings zoom into the image appears to start if zoomed to more than image is one big artifact zooming back out or deactivating tone equalizer sometimes not always segfaults with error message munmap chunk invalid pointer abort core dumped darktable switching on the mask view while zoomed in shows the proper mask expected behavior no artifacts and no crashes screenshots zoomed out zoomed in platform please complete the following information darktable version os pop os kernel generic opencl activated or not makes no difference intel intel hd graphics additional context i tried this on multiple olympus raw files it always happened also happens on jpg files
| 1
|
61,051
| 6,721,891,458
|
IssuesEvent
|
2017-10-16 13:27:20
|
puikinsh/bonkers
|
https://api.github.com/repos/puikinsh/bonkers
|
closed
|
No quick edit shortcuts
|
tested
|

---
Just to be clear, I'm referring to the circled pencil icon. Screenshot above is taken from MedZone. Bonkers doesn't have any.
|
1.0
|
No quick edit shortcuts - 
---
Just to be clear, I'm referring to the circled pencil icon. Screenshot above is taken from MedZone. Bonkers doesn't have any.
|
non_process
|
no quick edit shortcuts just to be clear i m referring to the circled pencil icon screenshot above is taken from medzone bonkers doesn t have any
| 0
|
337,401
| 24,538,239,151
|
IssuesEvent
|
2022-10-11 23:27:00
|
MystenLabs/sui
|
https://api.github.com/repos/MystenLabs/sui
|
closed
|
Use zip215 Vs ed25519 everywhere
|
Type: Documentation crypto
|
This is to avoid any confusion and surprises. Will sync up with @joyqvq and @huitseeker
Apart from docs, we might need to change variable names keytool strings etc. We should proceed with caution due to potential backwards compatibility issues.
|
1.0
|
Use zip215 Vs ed25519 everywhere - This is to avoid any confusion and surprises. Will sync up with @joyqvq and @huitseeker
Apart from docs, we might need to change variable names keytool strings etc. We should proceed with caution due to potential backwards compatibility issues.
|
non_process
|
use vs everywhere this is to avoid any confusion and surprises will sync up with joyqvq and huitseeker apart from docs we might need to change variable names keytool strings etc we should proceed with caution due to potential backwards compatibility issues
| 0
|
17,022
| 22,392,184,530
|
IssuesEvent
|
2022-06-17 08:48:49
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
Add single file export option to atlas to PDF algorithm (Request in QGIS)
|
Processing Alg 3.24
|
### Request for documentation
From pull request QGIS/qgis#46260
Author: @nyalldawson
QGIS version: 3.24
**Add single file export option to atlas to PDF algorithm**
### PR Description:
Supersedes https://github.com/qgis/QGIS/pull/44915
### Commits tagged with [need-docs] or [FEATURE]
"[feature] Add processing algorithm for exporting an atlas to multiple\nPDF files\n\nFollows the same logic as the existing \"Export atlas layout as PDF\"\nalgorithm, but exports each atlas feature as a separate PDF file"
|
1.0
|
Add single file export option to atlas to PDF algorithm (Request in QGIS) - ### Request for documentation
From pull request QGIS/qgis#46260
Author: @nyalldawson
QGIS version: 3.24
**Add single file export option to atlas to PDF algorithm**
### PR Description:
Supersedes https://github.com/qgis/QGIS/pull/44915
### Commits tagged with [need-docs] or [FEATURE]
"[feature] Add processing algorithm for exporting an atlas to multiple\nPDF files\n\nFollows the same logic as the existing \"Export atlas layout as PDF\"\nalgorithm, but exports each atlas feature as a separate PDF file"
|
process
|
add single file export option to atlas to pdf algorithm request in qgis request for documentation from pull request qgis qgis author nyalldawson qgis version add single file export option to atlas to pdf algorithm pr description supersedes commits tagged with or add processing algorithm for exporting an atlas to multiple npdf files n nfollows the same logic as the existing export atlas layout as pdf nalgorithm but exports each atlas feature as a separate pdf file
| 1
|
4,591
| 2,867,374,929
|
IssuesEvent
|
2015-06-05 12:57:41
|
SaftIng/Saft
|
https://api.github.com/repos/SaftIng/Saft
|
closed
|
Create list of supported formats (Parser + Serializer)
|
API-change Backend Data Documentation
|
We need a list of format strings and their pendant of the real world. For instance rdfxml as format, which stands for RDF/XML. I would prefer the current usage from [EasyRdf](https://github.com/njh/easyrdf/blob/master/lib/Format.php#L253).
- rdf-json: `application/json`
- json-ld: `application/json`*
- rdfa: `text/html`
- rdf-xml: `application/rdf+xml`
- n-triples: `application/n-triples`
- n-quads: `application/n-quads`
- turtle: `text/turtle` (`application/x-turtle`, `application/turtle`)
- trig: `application/trig`
maybe:
- n3
- hdt
http://librdf.org/raptor/api/raptor-formats-types-index.html
|
1.0
|
Create list of supported formats (Parser + Serializer) - We need a list of format strings and their pendant of the real world. For instance rdfxml as format, which stands for RDF/XML. I would prefer the current usage from [EasyRdf](https://github.com/njh/easyrdf/blob/master/lib/Format.php#L253).
- rdf-json: `application/json`
- json-ld: `application/json`*
- rdfa: `text/html`
- rdf-xml: `application/rdf+xml`
- n-triples: `application/n-triples`
- n-quads: `application/n-quads`
- turtle: `text/turtle` (`application/x-turtle`, `application/turtle`)
- trig: `application/trig`
maybe:
- n3
- hdt
http://librdf.org/raptor/api/raptor-formats-types-index.html
|
non_process
|
create list of supported formats parser serializer we need a list of format strings and their pendant of the real world for instance rdfxml as format which stands for rdf xml i would prefer the current usage from rdf json application json json ld application json rdfa text html rdf xml application rdf xml n triples application n triples n quads application n quads turtle text turtle application x turtle application turtle trig application trig maybe hdt
| 0
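The format strings and MIME types listed in that row translate directly into a lookup table; a sketch in Python (the names `SUPPORTED_FORMATS` and `mime_for` are illustrative, not part of the Saft or EasyRdf API):

```python
# Format identifiers mapped to their primary MIME types,
# following the EasyRdf-style naming used in the issue.
SUPPORTED_FORMATS = {
    "rdf-json": "application/json",
    "json-ld": "application/json",
    "rdfa": "text/html",
    "rdf-xml": "application/rdf+xml",
    "n-triples": "application/n-triples",
    "n-quads": "application/n-quads",
    "turtle": "text/turtle",
    "trig": "application/trig",
}

# Secondary MIME types some tools also accept for the same format.
FORMAT_ALIASES = {
    "turtle": ["application/x-turtle", "application/turtle"],
}

def mime_for(format_name: str) -> str:
    """Return the primary MIME type for a format string, or raise KeyError."""
    return SUPPORTED_FORMATS[format_name]
```

For example, `mime_for("turtle")` yields the primary type while `FORMAT_ALIASES["turtle"]` keeps the parenthesised alternatives from the list.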
|
17,952
| 23,957,076,547
|
IssuesEvent
|
2022-09-12 15:44:46
|
GregTechCEu/gt-ideas
|
https://api.github.com/repos/GregTechCEu/gt-ideas
|
opened
|
Pink Drink Production (by Carbon)
|
processing chain
|
Gentlemen... now that the purple drink has been fleshed out and the lean chain is available to the general population (myself), I think we can agree that it is truly time to go above and beyond with our drink synthesis. The purple drink has many benefits to a gameplay-oriented project such as this; for instance, it is very funny. Therefore, in order to elevate this gameplay to a new level I would like to introduce a proposal. What, you may ask, could possibly compete with both the complexity and necessity of the purple drink? Allow me to share my innovative new processing line, the pink drink. A fitting companion to the purple drink, the pink drink would be created in a similar way.
Pepto-Bismol synthesis begins with the production of methylcellulose, which is created through the mixing of cellulose with dichloromethane. This methylcellulose is then mixed with Aluminum Magnesium Silicate and Bismuth Subsalicylate to form Pepto-Bismol Base Dust.
Bismuth Subsalicylate, a pre-requisite for the above dust, is formed by mixing bismuth, sodium hydroxide, and water in a IV or above mixer.
Benzoic acid is formed through partial oxidation of toluene over a cobalt naphthenate catalyst.
Pepto-Bismol, then, is created by mixing Pepto-Bismol Base Dust, Benzoic acid, any dyeRed, and water in an HV or above mixer.
Now we have determined a reasonable pathway to Pepto-Bismol production, we come to the second ingredient in the pink drink.
We begin with the production of Benzhydrole derivatives, which is a complex and mostly meaningless chain, entirely focused on the production of Benzhydrole Chloride (chlorodiphenylmethane). I have slightly simplified this (though I have not done any balancing yet) through the below reactions.
Benzyl Chloride + Benzene + Aluminum Chloride (catalyst) => Diphenylmethane
Diphenylmethane + Air + Copper Dust (catalyst) => Benzophenone
Sodium Carbonate + Borax => Sodium Metaborate
Sodium Metaborate + Magnesium + Hydrogen => Sodium Borohydride
Benzophenone + Ethanol + Sodium Hydroxide Solution + Sodium Borohydride => chlorodiphenylmethane + difluorobenzhydrole (Needs ZPM LCR)
ethylene oxide + dimethylamine => dimethylaminoethanol
chlorodiphenylmethane + dimethylaminoethanol => diphenhydramine hydrochloride (DPH)
stearic acid + Magnesium => Magnesium Stearate
diphenhydramine hydrochloride (DPH) + Magnesium Stearate + cellulose + any dyePink => Benadryl Dust
Benadryl Dust + nuggets mold => 18 Benadryl (forming press)
Benadryl => Crushed Benadryl (macerator)
36 Crushed Benadryl + 1000mb Pepto-Bismol => 1000mb Pink Drink Preparation (mixer)
1000mb Pink Drink Preparation + 500mb Carbon Dioxide => 1500mb Pink Drink (IV or above mixer)
## Sources
https://ptb.discord.com/channels/701354865217110096/704379576742183132/952383135339937922
|
1.0
|
Pink Drink Production (by Carbon) - Gentlemen... now that the purple drink has been fleshed out and the lean chain is available to the general population (myself), I think we can agree that it is truly time to go above and beyond with our drink synthesis. The purple drink has many benefits to a gameplay-oriented project such as this; for instance, it is very funny. Therefore, in order to elevate this gameplay to a new level I would like to introduce a proposal. What, you may ask, could possibly compete with both the complexity and necessity of the purple drink? Allow me to share my innovative new processing line, the pink drink. A fitting companion to the purple drink, the pink drink would be created in a similar way.
Pepto-Bismol synthesis begins with the production of methylcellulose, which is created through the mixing of cellulose with dichloromethane. This methylcellulose is then mixed with Aluminum Magnesium Silicate and Bismuth Subsalicylate to form Pepto-Bismol Base Dust.
Bismuth Subsalicylate, a pre-requisite for the above dust, is formed by mixing bismuth, sodium hydroxide, and water in a IV or above mixer.
Benzoic acid is formed through partial oxidation of toluene over a cobalt naphthenate catalyst.
Pepto-Bismol, then, is created by mixing Pepto-Bismol Base Dust, Benzoic acid, any dyeRed, and water in an HV or above mixer.
Now we have determined a reasonable pathway to Pepto-Bismol production, we come to the second ingredient in the pink drink.
We begin with the production of Benzhydrole derivatives, which is a complex and mostly meaningless chain, entirely focused on the production of Benzhydrole Chloride (chlorodiphenylmethane). I have slightly simplified this (though I have not done any balancing yet) through the below reactions.
Benzyl Chloride + Benzene + Aluminum Chloride (catalyst) => Diphenylmethane
Diphenylmethane + Air + Copper Dust (catalyst) => Benzophenone
Sodium Carbonate + Borax => Sodium Metaborate
Sodium Metaborate + Magnesium + Hydrogen => Sodium Borohydride
Benzophenone + Ethanol + Sodium Hydroxide Solution + Sodium Borohydride => chlorodiphenylmethane + difluorobenzhydrole (Needs ZPM LCR)
ethylene oxide + dimethylamine => dimethylaminoethanol
chlorodiphenylmethane + dimethylaminoethanol => diphenhydramine hydrochloride (DPH)
stearic acid + Magnesium => Magnesium Stearate
diphenhydramine hydrochloride (DPH) + Magnesium Stearate + cellulose + any dyePink => Benadryl Dust
Benadryl Dust + nuggets mold => 18 Benadryl (forming press)
Benadryl => Crushed Benadryl (macerator)
36 Crushed Benadryl + 1000mb Pepto-Bismol => 1000mb Pink Drink Preparation (mixer)
1000mb Pink Drink Preparation + 500mb Carbon Dioxide => 1500mb Pink Drink (IV or above mixer)
## Sources
https://ptb.discord.com/channels/701354865217110096/704379576742183132/952383135339937922
|
process
|
pink drink production by carbon gentlemen now that the purple drink has been fleshed out and the lean chain is available to the general population myself i think we can agree that it is truly time to go above and beyond with our drink synthesis the purple drink has many benefits to a gameplay oriented project such as this for instance it is very funny therefore in order to elevate this gameplay to a new level i would like to introduce a proposal what you may ask could possibly compete with both the complexity and necessity of the purple drink allow me to share my innovative new processing line the pink drink a fitting companion to the purple drink the pink drink would be created in a similar way pepto bismol synthesis begins with the production of methylcellulose which is created through the mixing of cellulose with dichloromethane this methylcellulose is then mixed with aluminum magnesium silicate and bismuth subsalicylate to form pepto bismol base dust bismuth subsalicylate a pre requisite for the above dust is formed by mixing bismuth sodium hydroxide and water in a iv or above mixer benzoic acid is formed through partial oxidation of toluene over a cobalt naphthenate catalyst pepto bismol then is created by mixing pepto bismol base dust benzoic acid any dyered and water in an hv or above mixer now we have determined a reasonable pathway to pepto bismol production we come to the second ingredient in the pink drink we begin with the production of benzhydrole derivatives which is a complex and mostly meaningless chain entirely focused on the production of benzhydrole chloride chlorodiphenylmethane i have slightly simplified this though i have not done any balancing yet through the below reactions benzyl chloride benzene aluminum chloride catalyst diphenylmethane diphenylmethane air copper dust catalyst benzophenone sodium carbonate borax sodium metaborate sodium metaborate magnesium hydrogen sodium borohydride benzophenone ethanol sodium hydroxide solution sodium 
borohydride chlorodiphenylmethane difluorobenzhydrole needs zpm lcr ethylene oxide dimethylamine dimethylaminoethanol chlorodiphenylmethane dimethylaminoethanol diphenhydramine hydrochloride dph stearic acid magnesium magnesium stearate diphenhydramine hydrochloride dph magnesium stearate cellulose any dyepink benadryl dust benadryl dust nuggets mold benadryl forming press benadryl crushed benadryl macerator crushed benadryl pepto bismol pink drink preparation mixer pink drink preparation carbon dioxide pink drink iv or above mixer sources
| 1
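The reaction sequence in that row is effectively a dependency graph of recipes; a sketch of checking that every intermediate is reachable from base inputs (a simplified, hypothetical encoding of only the first few steps, not the mod's actual recipe format):

```python
# Each product maps to the set of inputs its recipe consumes
# (an illustrative subset of the chain above).
RECIPES = {
    "methylcellulose": {"cellulose", "dichloromethane"},
    "bismuth subsalicylate": {"bismuth", "sodium hydroxide", "water"},
    "pepto-bismol base dust": {"methylcellulose",
                               "aluminum magnesium silicate",
                               "bismuth subsalicylate"},
    "pepto-bismol": {"pepto-bismol base dust", "benzoic acid",
                     "dye", "water"},
}

BASE_INPUTS = {"cellulose", "dichloromethane", "aluminum magnesium silicate",
               "bismuth", "sodium hydroxide", "water", "benzoic acid", "dye"}

def craftable(item, _stack=frozenset()):
    """True if `item` is a base input or every ingredient is itself craftable.

    `_stack` guards against recipe cycles.
    """
    if item in BASE_INPUTS:
        return True
    if item in _stack or item not in RECIPES:
        return False
    return all(craftable(i, _stack | {item}) for i in RECIPES[item])
```

A check like `all(craftable(p) for p in RECIPES)` would flag any step in a proposed chain whose inputs are never produced.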
|
5,487
| 8,359,268,397
|
IssuesEvent
|
2018-10-03 07:37:46
|
bitshares/bitshares-community-ui
|
https://api.github.com/repos/bitshares/bitshares-community-ui
|
closed
|
Dev branch
|
process question tools
|
Guys, what do you think if we got another branch for current development works (`dev`) that could be supervised by me and @youaresofunny, as we gonna accept PRs into it with work for current issues. And that `dev` branch would be merged into `staging` once every other week or so (aka our internal release) with your reviews? Basically we gonna ship PRs to staging in a batch. I think that would be convenient for everybody involved.
|
1.0
|
Dev branch - Guys, what do you think if we got another branch for current development works (`dev`) that could be supervised by me and @youaresofunny, as we gonna accept PRs into it with work for current issues. And that `dev` branch would be merged into `staging` once every other week or so (aka our internal release) with your reviews? Basically we gonna ship PRs to staging in a batch. I think that would be convenient for everybody involved.
|
process
|
dev branch guys what do you think if we got another branch for current development works dev that could be supervised by me and youaresofunny as we gonna accept prs into it with work for current issues and that dev branch would be merged into staging once every other week or so aka our internal release with your reviews basically we gonna ship prs to staging in a batch i think that would be convenient for everybody involved
| 1
|
9,417
| 12,414,988,255
|
IssuesEvent
|
2020-05-22 15:26:12
|
MHRA/products
|
https://api.github.com/repos/MHRA/products
|
closed
|
PARs - Upload success & error messages
|
EPIC - PARs process
|
Had a chat with @roughprada and it was decided:
- on successful form submission, there should be a success screen as per [the design](https://app.zeplin.io/project/5dd51ae21205c944f8c1d35b/screen/5ebbfcbb5f5cec74857f3c70)
- if there's an error submitting the form, an [error summary component](https://design-system.service.gov.uk/components/error-summary/) should be shown above the page heading on the "Check your answers" page (this matches the GOV.UK design system)
- no loading screen (for the time being at least), just submit the form when they click "Accept and send" on the "Check your answers" page and then either show them the success screen or add an error summary to the top of the page and scroll the user up to the top so they can see it.
|
1.0
|
PARs - Upload success & error messages - Had a chat with @roughprada and it was decided:
- on successful form submission, there should be a success screen as per [the design](https://app.zeplin.io/project/5dd51ae21205c944f8c1d35b/screen/5ebbfcbb5f5cec74857f3c70)
- if there's an error submitting the form, an [error summary component](https://design-system.service.gov.uk/components/error-summary/) should be shown above the page heading on the "Check your answers" page (this matches the GOV.UK design system)
- no loading screen (for the time being at least), just submit the form when they click "Accept and send" on the "Check your answers" page and then either show them the success screen or add an error summary to the top of the page and scroll the user up to the top so they can see it.
|
process
|
pars upload success error messages had a chat with roughprada and it was decided on successful form submission there should be a success screen as per if there s an error submitting the form an should be shown above the page heading on the check your answers page this matches the gov uk design system no loading screen for the time being at least just submit the form when they click accept and send on the check your answers page and then either show them the success screen or add an error summary to the top of the page and scroll the user up to the top so they can see it
| 1
|
200
| 2,609,398,259
|
IssuesEvent
|
2015-02-26 14:36:51
|
luc-github/Repetier-Firmware-0.92
|
https://api.github.com/repos/luc-github/Repetier-Firmware-0.92
|
closed
|
Loading left filament on 2.0 on left dripbox
|
enhancement Waiting to be processed
|
When loading new filament into the left extruder while the unit is at home position, the unit stays at the home position. This causes the new filament to dump out between the print bed and the right extruder's dripbox.
Luckily there is a straight opening from the left head to the base of the printer, but it would be more convenient for the old filament to go into the proper location.
|
1.0
|
Loading left filament on 2.0 on left dripbox - When loading new filament into the left extruder while the unit is at home position, the unit stays at the home position. This causes the new filament to dump out between the print bed and the right extruder's dripbox.
Luckily there is a straight opening from the left head to the base of the printer, but it would be more convenient for the old filament to go into the proper location.
|
process
|
loading left filament on on left dripbox when loading new filament into the left extruder while the unit is at home position the unit stays at the home position this causes the new filament to dump out between the print bed and the right extruder s dripbox luckily there is a straight opening from the left head to the base of the printer but it would be more convenient for the old filament to go into the proper location
| 1
|
678,840
| 23,212,900,372
|
IssuesEvent
|
2022-08-02 11:47:30
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.hyatt.com - desktop site instead of mobile site
|
browser-firefox priority-normal engine-gecko
|
<!-- @browser: Firefox 103.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:103.0) Gecko/20100101 Firefox/103.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/108283 -->
**URL**: https://www.hyatt.com/en-US/member/sign-in/traditional
**Browser / Version**: Firefox 103.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
If there are textboxes more than 3, the saved user name doesn't show in the the textbox for user name.
But the text box for password works.
This works fine on Google Chrome and Edge.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.hyatt.com - desktop site instead of mobile site - <!-- @browser: Firefox 103.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:103.0) Gecko/20100101 Firefox/103.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/108283 -->
**URL**: https://www.hyatt.com/en-US/member/sign-in/traditional
**Browser / Version**: Firefox 103.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
If there are textboxes more than 3, the saved user name doesn't show in the the textbox for user name.
But the text box for password works.
This works fine on Google Chrome and Edge.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
desktop site instead of mobile site url browser version firefox operating system windows tested another browser yes chrome problem type desktop site instead of mobile site description desktop site instead of mobile site steps to reproduce if there are textboxes more than the saved user name doesn t show in the the textbox for user name but the text box for password works this works fine on google chrome and edge browser configuration none from with ❤️
| 0
|
604,555
| 18,714,523,217
|
IssuesEvent
|
2021-11-03 01:24:55
|
bcgov/entity
|
https://api.github.com/repos/bcgov/entity
|
closed
|
Figure out what is causing all the duplicate payments in Name Request
|
Priority1 ENTITY Name Request
|
Namerequest has seen a significant increase in duplicate payment issues (*) in Prod environment.
This ticket is to investigate various causes and fix them (here or in separate tickets). See comments below for details.
(*) To be precise, the issue is that multiple (separate) NRs are being created and paid for, at least several minutes (and sometimes hours) apart, requesting the same name and (mostly) the same user info.
|
1.0
|
Figure out what is causing all the duplicate payments in Name Request - Namerequest has seen a significant increase in duplicate payment issues (*) in Prod environment.
This ticket is to investigate various causes and fix them (here or in separate tickets). See comments below for details.
(*) To be precise, the issue is that multiple (separate) NRs are being created and paid for, at least several minutes (and sometimes hours) apart, requesting the same name and (mostly) the same user info.
|
non_process
|
figure out what is causing all the duplicate payments in name request namerequest has seen a significant increase in duplicate payment issues in prod environment this ticket is to investigate various causes and fix them here or in separate tickets see comments below for details to be precise the issue is that multiple separate nrs are being created and paid for at least several minutes and sometimes hours apart requesting the same name and mostly the same user info
| 0
|
175,142
| 14,517,770,727
|
IssuesEvent
|
2020-12-13 20:56:09
|
sanskrit-lexicon/MWS
|
https://api.github.com/repos/sanskrit-lexicon/MWS
|
closed
|
Coding of Sanskrit words in MW -
|
Documentation
|
A comment [elsewhere](https://github.com/sanskrit-lexicon/Cologne/issues/190#issuecomment-345402396) deserves a full documentation of the relation between the printed text of MW and the various ways this text is represented in the digitization.
I'll attempt to do that documentation here when time permits.
|
1.0
|
Coding of Sanskrit words in MW - - A comment [elsewhere](https://github.com/sanskrit-lexicon/Cologne/issues/190#issuecomment-345402396) deserves a full documentation of the relation between the printed text of MW and the various ways this text is represented in the digitization.
I'll attempt to do that documentation here when time permits.
|
non_process
|
coding of sanskrit words in mw a comment deserves a full documentation of the relation between the printed text of mw and the various ways this text is represented in the digitization i ll attempt to do that documentation here when time permits
| 0
|
67,387
| 9,037,830,457
|
IssuesEvent
|
2019-02-09 14:39:05
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
Need documentation for MQTT
|
area: Documentation bug priority: medium
|
**Describe the bug**
Need documentation for the MQTT feature in zephyr
|
1.0
|
Need documentation for MQTT - **Describe the bug**
Need documentation for the MQTT feature in zephyr
|
non_process
|
need documentation for mqtt describe the bug need documentation for the mqtt feature in zephyr
| 0
|
2,170
| 5,019,622,482
|
IssuesEvent
|
2016-12-14 12:24:58
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
opened
|
[subtitles] [ENG]
|
Language: English Process: Someone is working on this issue Process: [1] Writing in progress
|
# Video title
POUR LA LIBERTÉ DU NET ET LA SOUVERAINETÉ NUMÉRIQUE
URL
https://www.youtube.com/watch?v=lx7kVw4Byrk
Youtube subtitle language
English
Duration
00:53
URL subtitles
https://www.youtube.com/timedtext_editor?bl=vmp&action_mde_edit_form=1&lang=en&ui=hd&v=lx7kVw4Byrk&tab=captions&ref=player
|
2.0
|
[subtitles] [ENG] - # Video title
POUR LA LIBERTÉ DU NET ET LA SOUVERAINETÉ NUMÉRIQUE
URL
https://www.youtube.com/watch?v=lx7kVw4Byrk
Youtube subtitle language
English
Duration
00:53
URL subtitles
https://www.youtube.com/timedtext_editor?bl=vmp&action_mde_edit_form=1&lang=en&ui=hd&v=lx7kVw4Byrk&tab=captions&ref=player
|
process
|
video title pour la liberté du net et la souveraineté numérique url youtube subtitle language english duration url subtitles
| 1
|
1,405
| 3,968,812,353
|
IssuesEvent
|
2016-05-03 20:59:44
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
ServiceControllerTests.StartWithArguments failed in CI
|
blocking-clean-ci System.ServiceProcess test bug
|
http://dotnet-ci.cloudapp.net/job/dotnet_corefx/job/outerloop_win10_debug/136/consoleFull
```
06:48:44 System.ServiceProcess.Tests.ServiceControllerTests.StartWithArguments [FAIL]
06:48:44 Assert.Equal() Failure
06:48:44 Expected: Running
06:48:44 Actual: StartPending
06:48:44 Stack Trace:
06:48:44 d:\j\workspace\outerloop_win---0cba2915\src\System.ServiceProcess.ServiceController\tests\System.ServiceProcess.ServiceController.Tests\ServiceControllerTests.cs(133,0): at System.ServiceProcess.Tests.ServiceControllerTests.StartWithArguments()
```
|
1.0
|
ServiceControllerTests.StartWithArguments failed in CI - http://dotnet-ci.cloudapp.net/job/dotnet_corefx/job/outerloop_win10_debug/136/consoleFull
```
06:48:44 System.ServiceProcess.Tests.ServiceControllerTests.StartWithArguments [FAIL]
06:48:44 Assert.Equal() Failure
06:48:44 Expected: Running
06:48:44 Actual: StartPending
06:48:44 Stack Trace:
06:48:44 d:\j\workspace\outerloop_win---0cba2915\src\System.ServiceProcess.ServiceController\tests\System.ServiceProcess.ServiceController.Tests\ServiceControllerTests.cs(133,0): at System.ServiceProcess.Tests.ServiceControllerTests.StartWithArguments()
```
|
process
|
servicecontrollertests startwitharguments failed in ci system serviceprocess tests servicecontrollertests startwitharguments assert equal failure expected running actual startpending stack trace d j workspace outerloop win src system serviceprocess servicecontroller tests system serviceprocess servicecontroller tests servicecontrollertests cs at system serviceprocess tests servicecontrollertests startwitharguments
| 1
|
13,148
| 15,572,771,756
|
IssuesEvent
|
2021-03-17 07:36:53
|
bitpal/bitpal_umbrella
|
https://api.github.com/repos/bitpal/bitpal_umbrella
|
opened
|
Recurrent payments
|
Payment processor enhancement
|
Would be fantastic to support recurring payments in some fashion.
In BCH these smart contract-based approaches exists:
* CashChannels for recurring payments
https://blog.bitjson.com/cashchannels-recurring-payments-for-bitcoin-cash-3b274fbfa6e2
* Mecenas recurring payment (support Patreon-like services)
https://github.com/KarolTrzeszczkowski/Mecenas-recurring-payment-EC-plugin
But we really need to coordinate with some wallet creator to get it rolling in a good way.
|
1.0
|
Recurrent payments - Would be fantastic to support recurring payments in some fashion.
In BCH these smart contract-based approaches exists:
* CashChannels for recurring payments
https://blog.bitjson.com/cashchannels-recurring-payments-for-bitcoin-cash-3b274fbfa6e2
* Mecenas recurring payment (support Patreon-like services)
https://github.com/KarolTrzeszczkowski/Mecenas-recurring-payment-EC-plugin
But we really need to coordinate with some wallet creator to get it rolling in a good way.
|
process
|
recurrent payments would be fantastic to support recurring payments in some fashion in bch these smart contract based approaches exists cashchannels for recurring payments mecenas recurring payment support patreon like services but we really need to coordinate with some wallet creator to get it rolling in a good way
| 1
|
382,037
| 11,299,919,264
|
IssuesEvent
|
2020-01-17 12:24:36
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
discordapp.com - see bug description
|
browser-firefox-reality engine-gecko ml-needsdiagnosis-false priority-important
|
<!-- @browser: Android 7.1.1 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.1; Mobile VR; rv:72.0) Gecko/72.0 Firefox/72.0 -->
<!-- @reported_with: browser-fxr -->
<!-- @extra_labels: browser-firefox-reality -->
**URL**: https://discordapp.com/channels/617401239604297728/624252010815815690
**Browser / Version**: Android 7.1.1
**Operating System**: Android 7.1.1
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: I cannot add an image to my discord post.
**Steps to Reproduce**:
Nothing happens when I click on the + sign to add an image
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
discordapp.com - see bug description - <!-- @browser: Android 7.1.1 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.1; Mobile VR; rv:72.0) Gecko/72.0 Firefox/72.0 -->
<!-- @reported_with: browser-fxr -->
<!-- @extra_labels: browser-firefox-reality -->
**URL**: https://discordapp.com/channels/617401239604297728/624252010815815690
**Browser / Version**: Android 7.1.1
**Operating System**: Android 7.1.1
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: I cannot add an image to my discord post.
**Steps to Reproduce**:
Nothing happens when I click on the + sign to add an image
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
discordapp com see bug description url browser version android operating system android tested another browser no problem type something else description i cannot add an image to my discord post steps to reproduce nothing happens when i click on the sign to add an image browser configuration none from with ❤️
| 0
|
241,627
| 20,152,573,695
|
IssuesEvent
|
2022-02-09 13:48:46
|
blockframes/blockframes
|
https://api.github.com/repos/blockframes/blockframes
|
closed
|
add a script to run e2e tests with Cypress in incognito mode
|
Dev - Test / Quality Assurance DevOps
|
add a small script so we can open the test in incognito mode with one line of code. That might be useful in the future if we want to test without any cache issues.
|
1.0
|
add a script to run e2e tests with Cypress in incognito mode -
add a small script so we can open the test in incognito mode with one line of code. That might be useful in the future if we want to test without any cache issues.
|
non_process
|
add a script to run tests with cypress in incognito mode add a small script so we can open the test in incognito mode with one line of code that might be useful in the future if we want to test without any cache issues
| 0
|
353,490
| 10,553,179,520
|
IssuesEvent
|
2019-10-03 16:38:34
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Tor window always starts with default shields setting
|
feature/shields feature/tor feature/tor/guest-semantics priority/P4
|
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
Tor window always starts with default shields settings when user has different shields settings in normal profile
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Change any settings in default shields settings in normal window
2. Launch tor window
3. Check it is applied to tor window
## Actual result:
<!--Please add screenshots if needed-->
Tor window always starts with default shields setting
## Expected result:
Tor window starts with custom shields settings of normal window
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
always
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
every version
### Reproducible on current release:
- Does it reproduce on brave-browser dev/beta builds?
yes
### Website problems only:
- Does the issue resolve itself when disabling Brave Shields? No
- Is the issue reproducible on the latest version of Chrome? No
### Additional Information
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
Currently, this is expected behavior because tor window uses different profile with other normal windows.
|
1.0
|
Tor window always starts with default shields setting - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
Tor window always starts with default shields settings when user has different shields settings in normal profile
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Change any settings in default shields settings in normal window
2. Launch tor window
3. Check it is applied to tor window
## Actual result:
<!--Please add screenshots if needed-->
Tor window always starts with default shields setting
## Expected result:
Tor window starts with custom shields settings of normal window
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
always
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
every version
### Reproducible on current release:
- Does it reproduce on brave-browser dev/beta builds?
yes
### Website problems only:
- Does the issue resolve itself when disabling Brave Shields? No
- Is the issue reproducible on the latest version of Chrome? No
### Additional Information
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
Currently, this is expected behavior because tor window uses different profile with other normal windows.
|
non_process
|
tor window always starts with default shields setting have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description tor window always starts with default shields settings when user has different shields settings in normal profile steps to reproduce change any settings in default shields settings in normal window launch tor window check it is applied to tor window actual result tor window always starts with default shields setting expected result tor window starts with custom shields settings of normal window reproduces how often always brave version brave version info every version reproducible on current release does it reproduce on brave browser dev beta builds yes website problems only does the issue resolve itself when disabling brave shields no is the issue reproducible on the latest version of chrome no additional information currently this is expected behavior because tor window uses different profile with other normal windows
| 0
|
21,595
| 29,995,049,035
|
IssuesEvent
|
2023-06-26 04:26:24
|
james77777778/keras-aug
|
https://api.github.com/repos/james77777778/keras-aug
|
closed
|
Verify mixed precision functionality
|
bug preprocessing augmentation
|
Should verify with `tf.debugging.enable_check_numerics()`
- RandomHSV: `tf.image.rgb_to_hsv`
|
1.0
|
Verify mixed precision functionality - Should verify with `tf.debugging.enable_check_numerics()`
- RandomHSV: `tf.image.rgb_to_hsv`
|
process
|
verify mixed precision functionality should verify with tf debugging enable check numerics randomhsv tf image rgb to hsv
| 1
|
4,724
| 7,568,231,463
|
IssuesEvent
|
2018-04-22 17:59:48
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Confusing process.release.compareVersion API
|
meta process
|
It might only be me but the current implementation is not intuitive for me because I expect the output to be exactly the opposite of what it is. Even though it is not right or wrong either way.
Let's give an example.
The current master version is `10.0.0-pre`.
That way I expect `9.0.0` to return `-1` because the input value is lower than the current version. Instead, it is `1`, because the current version is bigger than the input value. So it depends on what viewpoint the user is in.
For me, it is more intuitive to have the input value being the main important part and not the current version (I hope it is clear what I want to express...).
So I would like to hear from others @nodejs/collaborators how they feel about this (it did not yet land on any release, so there would theoretically still be time to change this without being semver-major).
|
1.0
|
Confusing process.release.compareVersion API - It might only be me but the current implementation is not intuitive for me because I expect the output to be exactly the opposite of what it is. Even though it is not right or wrong either way.
Let's give an example.
The current master version is `10.0.0-pre`.
That way I expect `9.0.0` to return `-1` because the input value is lower than the current version. Instead, it is `1`, because the current version is bigger than the input value. So it depends on what viewpoint the user is in.
For me, it is more intuitive to have the input value being the main important part and not the current version (I hope it is clear what I want to express...).
So I would like to hear from others @nodejs/collaborators how they feel about this (it did not yet land on any release, so there would theoretically still be time to change this without being semver-major).
|
process
|
confusing process release compareversion api it might only be me but the current implementation is not intuitive for me because i expect the output to be exactly the opposite of what it is even though it is not right or wrong either way let s give an example the current master version is pre that way i expect to return because the input value is lower than the current version instead it is because the current version is bigger than the input value so it depends on what viewpoint the user is in for me it is more intuitive to have the input value being the main important part and not the current version i hope it is clear what i want to express so i would like to hear from others nodejs collaborators how they feel about this it did not yet land on any release so there would theoretically still be time to change this without being semver major
| 1
|
137,070
| 30,622,872,298
|
IssuesEvent
|
2023-07-24 09:27:14
|
Dart-Code/Dart-Code
|
https://api.github.com/repos/Dart-Code/Dart-Code
|
closed
|
Duplicate DevTools in language status area
|
fixed in vs code
|
This was after change analyzer path in settings, so perhaps the old one is not being disposed correctly during a silent restart.

|
1.0
|
Duplicate DevTools in language status area - This was after change analyzer path in settings, so perhaps the old one is not being disposed correctly during a silent restart.

|
non_process
|
duplicate devtools in language status area this was after change analyzer path in settings so perhaps the old one is not being disposed correctly during a silent restart
| 0
|
18,650
| 24,581,073,167
|
IssuesEvent
|
2022-10-13 15:39:12
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[FHIR] Questionnaire resource > Active tasks > 'Start' date and 'end date' are getting displayed when active tasks are configured with anchor date schedule type
|
Bug P1 Response datastore Process: Fixed Process: Tested QA Process: Tested dev
|
AR: Questionnaire resource > Active tasks > 'Start' date and 'end date' getting displayed when active tasks are configured with anchor date schedule type
ER: Questionnaire resource > Active tasks > 'Start' date and 'end date' should not get displayed when active tasks are configured with anchor date schedule type
Note:
1. Issues observed for active tasks when those are configured with anchor date scheduling type
2. Issue observed for questionnaires when those are configured with a one-time anchor date

|
3.0
|
[FHIR] Questionnaire resource > Active tasks > 'Start' date and 'end date' are getting displayed when active tasks are configured with anchor date schedule type - AR: Questionnaire resource > Active tasks > 'Start' date and 'end date' getting displayed when active tasks are configured with anchor date schedule type
ER: Questionnaire resource > Active tasks > 'Start' date and 'end date' should not get displayed when active tasks are configured with anchor date schedule type
Note:
1. Issues observed for active tasks when those are configured with anchor date scheduling type
2. Issue observed for questionnaires when those are configured with a one-time anchor date

|
process
|
questionnaire resource active tasks start date and end date are getting displayed when active tasks are configured with anchor date schedule type ar questionnaire resource active tasks start date and end date getting displayed when active tasks are configured with anchor date schedule type er questionnaire resource active tasks start date and end date should not get displayed when active tasks are configured with anchor date schedule type note issues observed for active tasks when those are configured with anchor date scheduling type issue observed for questionnaires when those are configured with a one time anchor date
| 1
|
113,415
| 14,434,556,681
|
IssuesEvent
|
2020-12-07 07:18:08
|
Altinn/altinn-studio
|
https://api.github.com/repos/Altinn/altinn-studio
|
closed
|
Data modeling reuse - Designer POC
|
area/data-modeling kind/user-story solution/studio/designer
|
## Description
Should cover points 1-2 in #5206
## Screenshots
> Screenshots or links to Figma (make sure your sketch is public)
## Considerations
See #5206 for details.
## Acceptance criteria
> Describe criteria here (i.e. What is allowed/not allowed (negative tesing), validations, error messages and warnings etc.)
## Specification tasks
- [ ] Development tasks are defined
- [ ] Test design / decide test need
## Development tasks
- [ ]implement POC for designer
## Definition of done
Verify that this issue meets [DoD](https://confluence.brreg.no/display/T3KP/Definition+of+Done#DefinitionofDone-DoD%E2%80%93utvikling) (Only for project members) before closing.
- [ ] Documentation is updated (if relevant)
- [ ] Technical documentation (docs.altinn.studio)
- [ ] User documentation (altinn.github.io/docs)
- [ ] QA
- [ ] Manual test is complete (if relevant)
- [ ] Automated test is implemented (if relevant)
- [ ] All tasks in this userstory are closed (i.e. remaining tasks are moved to other user stories or marked obsolete)
|
1.0
|
Data modeling reuse - Designer POC - ## Description
Should cover points 1-2 in #5206
## Screenshots
> Screenshots or links to Figma (make sure your sketch is public)
## Considerations
See #5206 for details.
## Acceptance criteria
> Describe criteria here (i.e. What is allowed/not allowed (negative tesing), validations, error messages and warnings etc.)
## Specification tasks
- [ ] Development tasks are defined
- [ ] Test design / decide test need
## Development tasks
- [ ]implement POC for designer
## Definition of done
Verify that this issue meets [DoD](https://confluence.brreg.no/display/T3KP/Definition+of+Done#DefinitionofDone-DoD%E2%80%93utvikling) (Only for project members) before closing.
- [ ] Documentation is updated (if relevant)
- [ ] Technical documentation (docs.altinn.studio)
- [ ] User documentation (altinn.github.io/docs)
- [ ] QA
- [ ] Manual test is complete (if relevant)
- [ ] Automated test is implemented (if relevant)
- [ ] All tasks in this userstory are closed (i.e. remaining tasks are moved to other user stories or marked obsolete)
|
non_process
|
data modeling reuse designer poc description should cover points in screenshots screenshots or links to figma make sure your sketch is public considerations see for details acceptance criteria describe criteria here i e what is allowed not allowed negative tesing validations error messages and warnings etc specification tasks development tasks are defined test design decide test need development tasks implement poc for designer definition of done verify that this issue meets only for project members before closing documentation is updated if relevant technical documentation docs altinn studio user documentation altinn github io docs qa manual test is complete if relevant automated test is implemented if relevant all tasks in this userstory are closed i e remaining tasks are moved to other user stories or marked obsolete
| 0
|
289,445
| 24,989,723,985
|
IssuesEvent
|
2022-11-02 17:41:29
|
lowRISC/opentitan
|
https://api.github.com/repos/lowRISC/opentitan
|
closed
|
[chip-test] chip_sw_padctrl_attributes
|
Component:ChipLevelTest
|
### Test point name
[chip_sw_padctrl_attributes](https://github.com/lowRISC/opentitan/blob/b65e2705eb4ee814b46939d44f84d1adfee133c3/hw/top_earlgrey/data/chip_testplan.hjson#L406)
### Host side component
SystemVerilog
### OpenTitanTool infrastructure implemented
_No response_
### Contact person
@msfschaffner
### Checklist
Please fill out this checklist as items are completed. Link to PRs and issues as appropriate.
- [x] Check if existing test covers most or all of this testpoint (if so, either extend said test to cover all points, or skip the next 3 checkboxes)
- [x] Device-side (C) component developed
- [x] Bazel build rules developed
- [ ] Host-side component developed
- [ ] HJSON test plan updated with test name (so it shows up in the dashboard)
- [ ] Test added to dvsim nightly regression (and passing at time of checking)
|
1.0
|
[chip-test] chip_sw_padctrl_attributes - ### Test point name
[chip_sw_padctrl_attributes](https://github.com/lowRISC/opentitan/blob/b65e2705eb4ee814b46939d44f84d1adfee133c3/hw/top_earlgrey/data/chip_testplan.hjson#L406)
### Host side component
SystemVerilog
### OpenTitanTool infrastructure implemented
_No response_
### Contact person
@msfschaffner
### Checklist
Please fill out this checklist as items are completed. Link to PRs and issues as appropriate.
- [x] Check if existing test covers most or all of this testpoint (if so, either extend said test to cover all points, or skip the next 3 checkboxes)
- [x] Device-side (C) component developed
- [x] Bazel build rules developed
- [ ] Host-side component developed
- [ ] HJSON test plan updated with test name (so it shows up in the dashboard)
- [ ] Test added to dvsim nightly regression (and passing at time of checking)
|
non_process
|
chip sw padctrl attributes test point name host side component systemverilog opentitantool infrastructure implemented no response contact person msfschaffner checklist please fill out this checklist as items are completed link to prs and issues as appropriate check if existing test covers most or all of this testpoint if so either extend said test to cover all points or skip the next checkboxes device side c component developed bazel build rules developed host side component developed hjson test plan updated with test name so it shows up in the dashboard test added to dvsim nightly regression and passing at time of checking
| 0
|
7,651
| 10,739,135,963
|
IssuesEvent
|
2019-10-29 15:52:12
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
opened
|
Update modal - update the text to provide more information
|
Apply Process State Dept.
|
Who: Applicants
What: Update the update application modal
Why: in order to provide additional information
Acceptance Criteria:
Update the update application modal to provide more information related to closing time EST and if you update make sure you submit
Updated content needed:
Current Screen Shot:

|
1.0
|
Update modal - update the text to provide more information - Who: Applicants
What: Update the update application modal
Why: in order to provide additional information
Acceptance Criteria:
Update the update application modal to provide more information related to closing time EST and if you update make sure you submit
Updated content needed:
Current Screen Shot:

|
process
|
update modal update the text to provide more information who applicants what update the update application modal why in order to provide additional information acceptance criteria update the update application modal to provide more information related to closing time est and if you update make sure you submit updated content needed current screen shot
| 1
|
284,939
| 8,752,389,783
|
IssuesEvent
|
2018-12-14 02:46:56
|
atulmish/UnitConversion
|
https://api.github.com/repos/atulmish/UnitConversion
|
closed
|
StyleCop doesn't seem to recognize parameters in sln.dotsettings
|
low-priority
|
# Issue short description
As the title. Example picture:

## Expected Behavior
By adding "exceptions" to UnitConversion.sln.dotsettings StyleCop shouldn't show warnings for that specified issue.
## Actual Behavior
Issues are still displayed.
## Steps to Reproduce
By cloning the [UnitConversion repo](https://github.com/gkampolis/UnitConversion), this should be reproducible (or it's a problem in my settings somewhere).
## Technical Details (OS, project version, relevant details)
* Visual Studio Community 2017 15.5.5 (latest version)
* StyleCop as [Visual Studio extension](https://marketplace.visualstudio.com/items?itemName=ChrisDahlberg.StyleCop) Version: 5.0.6419.0 (latest)
* Windows 10
|
1.0
|
StyleCop doesn't seem to recognize parameters in sln.dotsettings - # Issue short description
As the title. Example picture:

## Expected Behavior
By adding "exceptions" to UnitConversion.sln.dotsettings StyleCop shouldn't show warnings for that specified issue.
## Actual Behavior
Issues are still displayed.
## Steps to Reproduce
By cloning the [UnitConversion repo](https://github.com/gkampolis/UnitConversion), this should be reproducible (or it's a problem in my settings somewhere).
## Technical Details (OS, project version, relevant details)
* Visual Studio Community 2017 15.5.5 (latest version)
* StyleCop as [Visual Studio extension](https://marketplace.visualstudio.com/items?itemName=ChrisDahlberg.StyleCop) Version: 5.0.6419.0 (latest)
* Windows 10
|
non_process
|
stylecop doesn t seem to recognize parameters in sln dotsettings issue short description as the title example picture expected behavior by adding exceptions to unitconversion sln dotsettings stylecop shouldn t show warnings for that specified issue actual behavior issues are still displayed steps to reproduce by cloning the this should be reproducible or it s a problem in my settings somewhere technical details os project version relevant details visual studio community latest version stylecop as version latest windows
| 0
|
3,638
| 6,670,944,583
|
IssuesEvent
|
2017-10-04 03:31:25
|
material-motion/material-motion-js
|
https://api.github.com/repos/material-motion/material-motion-js
|
closed
|
Investigate popmotion or anime.js for web tweens
|
Needs research Process
|
There are a [few motion libraries](https://material-motion.gitbooks.io/material-motion-starmap/content/community_index/web.html) for JS right now. They are usually DOMful, but may have already solved the transform overloading. Look into it, and see if we can use any of that logic (or the library wholesale) to solve #36.
|
1.0
|
Investigate popmotion or anime.js for web tweens - There are a [few motion libraries](https://material-motion.gitbooks.io/material-motion-starmap/content/community_index/web.html) for JS right now. They are usually DOMful, but may have already solved the transform overloading. Look into it, and see if we can use any of that logic (or the library wholesale) to solve #36.
|
process
|
investigate popmotion or anime js for web tweens there are a for js right now they are usually domful but may have already solved the transform overloading look into it and see if we can use any of that logic or the library wholesale to solve
| 1
|
16,818
| 22,060,935,572
|
IssuesEvent
|
2022-05-30 17:43:24
|
bitPogo/kmock
|
https://api.github.com/repos/bitPogo/kmock
|
closed
|
Allow multi interface mocks
|
enhancement kmock kmock-processor
|
## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently only single interface mocks are allowed, but not the union of n-parents with n>1.
This could be "easily" done by also consuming Arrays via the already existing Annotations or via new Annotations (possibly the better way since it keeps the type system intact).
|
1.0
|
Allow multi interface mocks - ## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently only single interface mocks are allowed, but not the union of n-parents with n>1.
This could be "easily" done by also consuming Arrays via the already existing Annotations or via new Annotations (possibly the better way since it keeps the type system intact).
|
process
|
allow multi interface mocks description currently only single interface mocks are allowed but not the union of n parents with n this could be easily done by also consuming arrays via the already existing annotations or via new annotations possibly the better way since it keeps the type system intact
| 1
|
17,540
| 23,350,087,733
|
IssuesEvent
|
2022-08-09 22:24:00
|
oxidecomputer/hubris
|
https://api.github.com/repos/oxidecomputer/hubris
|
opened
|
Want thermal trip black box data
|
service processor product
|
We recently had a thermal trip event occur because I forgot to install an air-flow shroud. While the system correctly detected that we had a thermal trip and a humility get_state made that very obvious, However, the next question I had was: What was the last temperature we saw?
To answer this I think there are a few things we want to do here in increasing involvement:
* Have some kind of ring buffer of data that we've read so we can denote what the last set of values we saw were. While the last valid measurement is useful, having a few entries that led there will help. This is probably tied into the broader thermal loop restructuring.
* Eventually we want to make sure that we're holding on to data and evacuating it over the management network and transforming it into metrics data (cc @jgallagher, @bnaecker). That may also want to have a different policy of how much data is kept around to cover some amount of network disconnects and other periods were such services are not available. As an example, say we cold start a rack and one system has a thermal trip of the CPU (or really any other component such as a VR), we'd want to keep that around long enough so someone else could contact and discover us and get that information out.
|
1.0
|
Want thermal trip black box data - We recently had a thermal trip event occur because I forgot to install an air-flow shroud. While the system correctly detected that we had a thermal trip and a humility get_state made that very obvious, However, the next question I had was: What was the last temperature we saw?
To answer this I think there are a few things we want to do here in increasing involvement:
* Have some kind of ring buffer of data that we've read so we can denote what the last set of values we saw were. While the last valid measurement is useful, having a few entries that led there will help. This is probably tied into the broader thermal loop restructuring.
* Eventually we want to make sure that we're holding on to data and evacuating it over the management network and transforming it into metrics data (cc @jgallagher, @bnaecker). That may also want to have a different policy of how much data is kept around to cover some amount of network disconnects and other periods were such services are not available. As an example, say we cold start a rack and one system has a thermal trip of the CPU (or really any other component such as a VR), we'd want to keep that around long enough so someone else could contact and discover us and get that information out.
|
process
|
want thermal trip black box data we recently had a thermal trip event occur because i forgot to install an air flow shroud while the system correctly detected that we had a thermal trip and a humility get state made that very obvious however the next question i had was what was the last temperature we saw to answer this i think there are a few things we want to do here in increasing involvement have some kind of ring buffer of data that we ve read so we can denote what the last set of values we saw were while the last valid measurement is useful having a few entries that led there will help this is probably tied into the broader thermal loop restructuring eventually we want to make sure that we re holding on to data and evacuating it over the management network and transforming it into metrics data cc jgallagher bnaecker that may also want to have a different policy of how much data is kept around to cover some amount of network disconnects and other periods were such services are not available as an example say we cold start a rack and one system has a thermal trip of the cpu or really any other component such as a vr we d want to keep that around long enough so someone else could contact and discover us and get that information out
| 1
|
85,292
| 16,627,828,797
|
IssuesEvent
|
2021-06-03 12:00:12
|
odpi/egeria
|
https://api.github.com/repos/odpi/egeria
|
closed
|
OLS deprecated function calls
|
code-quality
|
#4341 related
Following deprecated function calls should be replaced with new calls:
Class | Call | Line
--|--|--
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-api/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/ffdc/OpenLineageException.java | FunctionCall: OCFCheckedExceptionBase() | 30
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-api/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/ffdc/OpenLineageException.java | FunctionCall: OCFCheckedExceptionBase() | 57
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-server/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/admin/OpenLineageServerOperationalServices.java | FunctionCall: OMAGConfigurationErrorException() | 268
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-server/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/admin/OpenLineageServerOperationalServices.java | FunctionCall: OMAGConfigurationErrorException() | 306
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-server/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/admin/OpenLineageServerOperationalServices.java | FunctionCall: OMAGConfigurationErrorException() | 245
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-server/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/admin/OpenLineageServerOperationalServices.java | FunctionCall: PropertyServerException() | 288
|
1.0
|
OLS deprecated function calls - #4341 related
Following deprecated function calls should be replaced with new calls:
Class | Call | Line
--|--|--
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-api/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/ffdc/OpenLineageException.java | FunctionCall: OCFCheckedExceptionBase() | 30
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-api/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/ffdc/OpenLineageException.java | FunctionCall: OCFCheckedExceptionBase() | 57
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-server/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/admin/OpenLineageServerOperationalServices.java | FunctionCall: OMAGConfigurationErrorException() | 268
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-server/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/admin/OpenLineageServerOperationalServices.java | FunctionCall: OMAGConfigurationErrorException() | 306
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-server/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/admin/OpenLineageServerOperationalServices.java | FunctionCall: OMAGConfigurationErrorException() | 245
egeria/open-metadata-implementation/governance-servers/open-lineage-services/open-lineage-services-server/src/main/java/org/odpi/openmetadata/governanceservers/openlineage/admin/OpenLineageServerOperationalServices.java | FunctionCall: PropertyServerException() | 288
|
non_process
|
ols deprecated function calls related following deprecated function calls should be replaced with new calls class call line egeria open metadata implementation governance servers open lineage services open lineage services api src main java org odpi openmetadata governanceservers openlineage ffdc openlineageexception java functioncall ocfcheckedexceptionbase egeria open metadata implementation governance servers open lineage services open lineage services api src main java org odpi openmetadata governanceservers openlineage ffdc openlineageexception java functioncall ocfcheckedexceptionbase egeria open metadata implementation governance servers open lineage services open lineage services server src main java org odpi openmetadata governanceservers openlineage admin openlineageserveroperationalservices java functioncall omagconfigurationerrorexception egeria open metadata implementation governance servers open lineage services open lineage services server src main java org odpi openmetadata governanceservers openlineage admin openlineageserveroperationalservices java functioncall omagconfigurationerrorexception egeria open metadata implementation governance servers open lineage services open lineage services server src main java org odpi openmetadata governanceservers openlineage admin openlineageserveroperationalservices java functioncall omagconfigurationerrorexception egeria open metadata implementation governance servers open lineage services open lineage services server src main java org odpi openmetadata governanceservers openlineage admin openlineageserveroperationalservices java functioncall propertyserverexception
| 0
|
10,915
| 13,691,273,102
|
IssuesEvent
|
2020-09-30 15:20:51
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
When CLI and Client are not in sync, generate errors with TypeError: Cannot read property '0' of undefined
|
bug/2-confirmed kind/bug process/candidate team/typescript topic: cli-generate
|
## Bug description
Note: This is related to https://github.com/prisma/prisma/issues/3729, the only difference is that the versions of CLI and Client are switched in `package.json` and the returned error is different.
When CLI and Client are not in sync, generate errors with
`TypeError: Cannot read property 'type' of undefined`
## How to reproduce
1. package.json
```
{
"name": "p2-dev",
"version": "1.0.0",
"main": "index.js",
"license": "MIT",
"devDependencies": {
"@prisma/client": "2.8.0-dev.27",
"@prisma/cli": "2.8.0-dev.9"
}
}
```
2. schema.prisma
```
datasource db {
provider = "postgresql"
url = "postgresql://this-will-not-be-used-but-overwritten-in-the-prisma-client-constructor"
}
generator client {
provider = "prisma-client-js"
}
model User {
email String @default("")
id String @id
name String @default("")
}
```
3. Run `yarn; yarn prisma generate`
4. Observe the error
```
divyendusingh [p2-dev]$ yarn; yarn prisma generate 130 ↵
yarn install v1.22.4
[1/4] 🔍 Resolving packages...
[2/4] 🚚 Fetching packages...
[3/4] 🔗 Linking dependencies...
[4/4] 🔨 Building fresh packages...
success Saved lockfile.
✨ Done in 6.08s.
yarn run v1.22.4
$ /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/.bin/prisma generate
Prisma Schema loaded from schema.prisma
Error:
TypeError: Cannot read property '0' of undefined
at DMMFClass.resolveInputTypes (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9940:39)
at new DMMFClass (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9919:10)
at new TSClient (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:10316:17)
at buildClient (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:11391:18)
at generateClient (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:11446:45)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
```
## Expected behavior
It should not crash
## Prisma information
```
yarn run v1.22.4
$ /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/.bin/prisma --version
@prisma/cli : 2.8.0-dev.27
Current platform : darwin
Query Engine : query-engine 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/query-engine-darwin)
Migration Engine : migration-engine-cli 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/migration-engine-darwin)
Introspection Engine : introspection-core 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/introspection-engine-darwin)
Format Binary : prisma-fmt 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/prisma-fmt-darwin)
Studio : 0.291.0
Done in 2.05s.
```
|
1.0
|
When CLI and Client are not in sync, generate errors with TypeError: Cannot read property '0' of undefined -
## Bug description
Note: This is related to https://github.com/prisma/prisma/issues/3729, the only difference is that the versions of CLI and Client are switched in `package.json` and the returned error is different.
When CLI and Client are not in sync, generate errors with
`TypeError: Cannot read property 'type' of undefined`
## How to reproduce
1. package.json
```
{
"name": "p2-dev",
"version": "1.0.0",
"main": "index.js",
"license": "MIT",
"devDependencies": {
"@prisma/client": "2.8.0-dev.27",
"@prisma/cli": "2.8.0-dev.9"
}
}
```
2. schema.prisma
```
datasource db {
provider = "postgresql"
url = "postgresql://this-will-not-be-used-but-overwritten-in-the-prisma-client-constructor"
}
generator client {
provider = "prisma-client-js"
}
model User {
email String @default("")
id String @id
name String @default("")
}
```
3. Run `yarn; yarn prisma generate`
4. Observe the error
```
divyendusingh [p2-dev]$ yarn; yarn prisma generate 130 ↵
yarn install v1.22.4
[1/4] 🔍 Resolving packages...
[2/4] 🚚 Fetching packages...
[3/4] 🔗 Linking dependencies...
[4/4] 🔨 Building fresh packages...
success Saved lockfile.
✨ Done in 6.08s.
yarn run v1.22.4
$ /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/.bin/prisma generate
Prisma Schema loaded from schema.prisma
Error:
TypeError: Cannot read property '0' of undefined
at DMMFClass.resolveInputTypes (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9940:39)
at new DMMFClass (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9919:10)
at new TSClient (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:10316:17)
at buildClient (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:11391:18)
at generateClient (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:11446:45)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
```
## Expected behavior
It should not crash
## Prisma information
```
yarn run v1.22.4
$ /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/.bin/prisma --version
@prisma/cli : 2.8.0-dev.27
Current platform : darwin
Query Engine : query-engine 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/query-engine-darwin)
Migration Engine : migration-engine-cli 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/migration-engine-darwin)
Introspection Engine : introspection-core 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/introspection-engine-darwin)
Format Binary : prisma-fmt 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/prisma-fmt-darwin)
Studio : 0.291.0
Done in 2.05s.
```
|
process
|
when cli and client are not in sync generate errors with typeerror cannot read property of undefined bug description note this is related to the only difference is that the versions of cli and client are switched in package json and the returned error is different when cli and client are not in sync generate errors with typeerror cannot read property type of undefined how to reproduce package json name dev version main index js license mit devdependencies prisma client dev prisma cli dev schema prisma datasource db provider postgresql url postgresql this will not be used but overwritten in the prisma client constructor generator client provider prisma client js model user email string default id string id name string default run yarn yarn prisma generate observe the error divyendusingh yarn yarn prisma generate ↵ yarn install 🔍 resolving packages 🚚 fetching packages 🔗 linking dependencies 🔨 building fresh packages success saved lockfile ✨ done in yarn run users divyendusingh documents prisma triage dev node modules bin prisma generate prisma schema loaded from schema prisma error typeerror cannot read property of undefined at dmmfclass resolveinputtypes users divyendusingh documents prisma triage dev node modules prisma client generator build index js at new dmmfclass users divyendusingh documents prisma triage dev node modules prisma client generator build index js at new tsclient users divyendusingh documents prisma triage dev node modules prisma client generator build index js at buildclient users divyendusingh documents prisma triage dev node modules prisma client generator build index js at generateclient users divyendusingh documents prisma triage dev node modules prisma client generator build index js at processticksandrejections internal process task queues js expected behavior it should not crash prisma information yarn run users divyendusingh documents prisma triage dev node modules bin prisma version prisma cli dev current platform darwin query engine query engine at node modules prisma cli query engine darwin migration engine migration engine cli at node modules prisma cli migration engine darwin introspection engine introspection core at node modules prisma cli introspection engine darwin format binary prisma fmt at node modules prisma cli prisma fmt darwin studio done in
| 1
|
4,598
| 7,439,934,779
|
IssuesEvent
|
2018-03-27 08:30:02
|
openvstorage/alba
|
https://api.github.com/repos/openvstorage/alba
|
closed
|
Global proxy suffers from 'illegal argument String.blit' while uploading
|
process_duplicate type_bug
|
It causes either to the `AlbaOsd` to be disqualified (see #797 ) or (when the Fail fast workaround is in place, the proxy to shut die)
```
alba/proxy - 2264 - fatal - Fail fast:"Raised by primitive operation at file \"src/tools/lwt_bytes2.ml.cppo\", line 44, characters 14-39
Called from file \"src/tools/object_reader.ml\", line 93, characters 4-49
Called from file \"src/tools/prelude.ml\", line 971, characters 2-6
Called from file \"src/alba_client_upload.ml\", line 493, characters 6-104
Called from file \"src/alba_client_upload.ml\", line 667, characters 4-46
Called from file \"src/core/lwt.ml\", line 785, characters 20-24
Called from file \"src/alba_client_upload.ml\", line 670, characters 2-159
Called from file \"src/alba_client_sequence.ml\", line 30, characters 4-578
Called from file \"src/list.ml\", line 325, characters 13-17
Called from file \"src/list.ml\" (inlined), line 346, characters 15-31
Called from file \"src/tools/prelude.ml\" (inlined), line 140, characters 16-34
Called from file \"src/tools/prelude.ml\" (inlined), line 933, characters 13-25
Called from file \"src/alba_client_sequence.ml\", line 57, characters 2-285
Called from file \"src/core/lwt.ml\", line 715, characters 66-71
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 306, characters 2-34
Called from file \"src/unix/lwt_unix.cppo.ml\", line 218, characters 31-51
...
```
|
1.0
|
Global proxy suffers from 'illegal argument String.blit' while uploading - It causes either to the `AlbaOsd` to be disqualified (see #797 ) or (when the Fail fast workaround is in place, the proxy to shut die)
```
alba/proxy - 2264 - fatal - Fail fast:"Raised by primitive operation at file \"src/tools/lwt_bytes2.ml.cppo\", line 44, characters 14-39
Called from file \"src/tools/object_reader.ml\", line 93, characters 4-49
Called from file \"src/tools/prelude.ml\", line 971, characters 2-6
Called from file \"src/alba_client_upload.ml\", line 493, characters 6-104
Called from file \"src/alba_client_upload.ml\", line 667, characters 4-46
Called from file \"src/core/lwt.ml\", line 785, characters 20-24
Called from file \"src/alba_client_upload.ml\", line 670, characters 2-159
Called from file \"src/alba_client_sequence.ml\", line 30, characters 4-578
Called from file \"src/list.ml\", line 325, characters 13-17
Called from file \"src/list.ml\" (inlined), line 346, characters 15-31
Called from file \"src/tools/prelude.ml\" (inlined), line 140, characters 16-34
Called from file \"src/tools/prelude.ml\" (inlined), line 933, characters 13-25
Called from file \"src/alba_client_sequence.ml\", line 57, characters 2-285
Called from file \"src/core/lwt.ml\", line 715, characters 66-71
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 202, characters 8-15
Called from file \"src/core/lwt.ml\", line 306, characters 2-34
Called from file \"src/unix/lwt_unix.cppo.ml\", line 218, characters 31-51
...
```
|
process
|
global proxy suffers from illegal argument string blit while uploading it causes either to the albaosd to be disqualified see or when the fail fast workaround is in place the proxy to shut die alba proxy fatal fail fast raised by primitive operation at file src tools lwt ml cppo line characters called from file src tools object reader ml line characters called from file src tools prelude ml line characters called from file src alba client upload ml line characters called from file src alba client upload ml line characters called from file src core lwt ml line characters called from file src alba client upload ml line characters called from file src alba client sequence ml line characters called from file src list ml line characters called from file src list ml inlined line characters called from file src tools prelude ml inlined line characters called from file src tools prelude ml inlined line characters called from file src alba client sequence ml line characters called from file src core lwt ml line characters called from file src core lwt ml line characters called from file src core lwt ml line characters called from file src core lwt ml line characters called from file src core lwt ml line characters called from file src core lwt ml line characters called from file src core lwt ml line characters called from file src core lwt ml line characters called from file src core lwt ml line characters called from file src core lwt ml line characters called from file src core lwt ml line characters called from file src core lwt ml line characters called from file src unix lwt unix cppo ml line characters
| 1
|
305
| 2,736,749,196
|
IssuesEvent
|
2015-04-19 18:49:31
|
ChelseaStats/issues
|
https://api.github.com/repos/ChelseaStats/issues
|
closed
|
football_league April 17 2015 at 11:05AM
|
to process tweet
|
<blockquote class="twitter-tweet">
<p>So which of the three shortlisted players do you think should win the prize at the <a href="http://u.thechels.uk/1b9vjTj">#FLAwards</a> on Sunday evening? <a href="http://u.thechels.uk/1IloIiN">http://u.thechels.uk/1b9vjTk</a></p>
— The Football League (@football_league) <a href="http://u.thechels.uk/1IloCI7">April 17, 2015</a>
</blockquote>
<br><br>
April 17, 2015 at 11:05AM<br>
via Twitter<br><hr><br><br>
http://u.thechels.uk/1b9vj5z
|
1.0
|
football_league April 17 2015 at 11:05AM - <blockquote class="twitter-tweet">
<p>So which of the three shortlisted players do you think should win the prize at the <a href="http://u.thechels.uk/1b9vjTj">#FLAwards</a> on Sunday evening? <a href="http://u.thechels.uk/1IloIiN">http://u.thechels.uk/1b9vjTk</a></p>
— The Football League (@football_league) <a href="http://u.thechels.uk/1IloCI7">April 17, 2015</a>
</blockquote>
<br><br>
April 17, 2015 at 11:05AM<br>
via Twitter<br><hr><br><br>
http://u.thechels.uk/1b9vj5z
|
process
|
football league april at so which of the three shortlisted players do you think should win the prize at the a href on sunday evening a href mdash the football league football league april at via twitter
| 1
|
802,380
| 28,959,737,649
|
IssuesEvent
|
2023-05-10 00:43:32
|
steedos/steedos-platform
|
https://api.github.com/repos/steedos/steedos-platform
|
closed
|
[Bug]: 批准过程删除附件,服务端报错,前台貌似正常删除
|
bug done priority: High
|
### Description
服务端报错
### Steps To Reproduce 重现步骤

### Version 版本
2.4
|
1.0
|
[Bug]: 批准过程删除附件,服务端报错,前台貌似正常删除 - ### Description
服务端报错
### Steps To Reproduce 重现步骤

### Version 版本
2.4
|
non_process
|
批准过程删除附件,服务端报错,前台貌似正常删除 description 服务端报错 steps to reproduce 重现步骤 version 版本
| 0
|
14,216
| 17,136,455,206
|
IssuesEvent
|
2021-07-13 03:16:40
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Add the entity_id to the add_process_metadata processor
|
:Processors Auditbeat Stalled Team:SIEM enhancement
|
**Describe the enhancement:**
Add the event_id value from the Auditbeat System module to the add_process_metadata processor.
With the following metadata processor I should be able to enrich each Process event so that each event contains two entity_id fields, one for the event, and the entity_id of the parent process
processors:
- add_process_metadata:
match_pids: [process.ppid]
target: process.parent
**Describe a specific use case for the enhancement or feature:**
When investigating a process I would like to be able to also see all activity by the parent process. For example, if I am investigating a security alert on a process whose parent process is bash I would use a Kibana dashboard to query elasticsearch for all events with the matching parent process entity_id to see all activity by that specific bash process. Right now I only have the ppid for investigations and this field is not unique enough when querying an elasticsearch instance with events from more than one system.
See https://discuss.elastic.co/t/auditbeat-system-module-add-parent-process-entity-id-field-to-process-and-socket-events/174610 for discussion about this.
|
1.0
|
Add the entity_id to the add_process_metadata processor - **Describe the enhancement:**
Add the event_id value from the Auditbeat System module to the add_process_metadata processor.
With the following metadata processor I should be able to enrich each Process event so that each event contains two entity_id fields, one for the event, and the entity_id of the parent process
processors:
- add_process_metadata:
match_pids: [process.ppid]
target: process.parent
**Describe a specific use case for the enhancement or feature:**
When investigating a process I would like to be able to also see all activity by the parent process. For example, if I am investigating a security alert on a process whose parent process is bash I would use a Kibana dashboard to query elasticsearch for all events with the matching parent process entity_id to see all activity by that specific bash process. Right now I only have the ppid for investigations and this field is not unique enough when querying an elasticsearch instance with events from more than one system.
See https://discuss.elastic.co/t/auditbeat-system-module-add-parent-process-entity-id-field-to-process-and-socket-events/174610 for discussion about this.
|
process
|
add the entity id to the add process metadata processor describe the enhancement add the event id value from the auditbeat system module to the add process metadata processor with the following metadata processor i should be able to enrich each process event so that each event contains two entity id fields one for the event and the entity id of the parent process processors add process metadata match pids target process parent describe a specific use case for the enhancement or feature when investigating a process i would like to be able to also see all activity by the parent process for example if i am investigating a security alert on a process whose parent process is bash i would use a kibana dashboard to query elasticsearch for all events with the matching parent process entity id to see all activity by that specific bash process right now i only have the ppid for investigations and this field is not unique enough when querying an elasticsearch instance with events from more than one system see for discussion about this
| 1
|
473,007
| 13,634,947,171
|
IssuesEvent
|
2020-09-25 01:24:52
|
Kedyn/fusliez-notes
|
https://api.github.com/repos/Kedyn/fusliez-notes
|
closed
|
Implement a color change menu for the player images
|
Priority: High Status: In Progress Type: Enhancement
|
- [ ] When a user clicks on a player image, a menu displaying all available colors should appear above the clicked image.
- [ ] Only one color should be assigned to a player. For the case where a user selects a color that is already taken, the two colors should swap *(tentative?)*.
- [ ] Clicking on the same player image should toggle the menu's display.
- [ ] Clicking on a different player image while the menu is open should reposition the menu above the most recently clicked player image.
- [ ] Selecting a color from the menu should close the menu.
- [ ] Clicking away from the menu when it is open should close the menu.
|
1.0
|
Implement a color change menu for the player images - - [ ] When a user clicks on a player image, a menu displaying all available colors should appear above the clicked image.
- [ ] Only one color should be assigned to a player. For the case where a user selects a color that is already taken, the two colors should swap *(tentative?)*.
- [ ] Clicking on the same player image should toggle the menu's display.
- [ ] Clicking on a different player image while the menu is open should reposition the menu above the most recently clicked player image.
- [ ] Selecting a color from the menu should close the menu.
- [ ] Clicking away from the menu when it is open should close the menu.
|
non_process
|
implement a color change menu for the player images when a user clicks on a player image a menu displaying all available colors should appear above the clicked image only one color should be assigned to a player for the case where a user selects a color that is already taken the two colors should swap tentative clicking on the same player image should toggle the menu s display clicking on a different player image while the menu is open should reposition the menu above the most recently clicked player image selecting a color from the menu should close the menu clicking away from the menu when it is open should close the menu
| 0
|
2,527
| 5,288,720,476
|
IssuesEvent
|
2017-02-08 15:46:56
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
[Process] Windows argument escaping is fragile
|
Bug Process Status: Needs Review
|
It is quite easy to break `ProcessUtils::escapeArgument` on Windows, because it does not follow all the various argument-parsing rules.
I discovered this while working on Composer's [XdebugHandler](https://github.com/composer/composer/blob/master/src/Composer/XdebugHandler.php) (which restarts the process and needs a robust way to escape its passed-in arguments) and after much research came up with a single function [Winbox\Args::escape](https://github.com/johnstevenson/winbox-args/blob/master/src/Args.php#L29) to handle this.
To illustrate the problems, we can pass some contrived arguments to both escape functions and get php to print out its `$argv`:
Given this `composer.json`:
``` json
{
"require": {
"symfony/process": "^2.1",
"winbox/args": "^1.0"
}
}
```
and this script:
``` php
<?php
require __DIR__ . '/vendor/autoload.php';
$params = [
PHP_BINARY,
'-r',
'print_r($argv);',
'--',
'path=%PATH%',
'quote="',
'colors=red & blue',
];
run($params, ['Symfony\Component\Process\ProcessUtils', 'escapeArgument']);
run($params, ['Winbox\Args', 'escape']);
function run($params, $escaper)
{
$command = implode(' ', array_map($escaper, $params));
printf("%s command line:\n%s\n\n", $escaper[0], $command);
passthru($command);
echo PHP_EOL;
}
```
the expected output from `print_r($argv);` is:
```
Array
(
[0] => -
[1] => path=%PATH%
[2] => quote="
[3] => colors=red & blue
)
```
The actual output using `Winbox\Args` is as above, whereas the output using `ProcessUtils` is:
```
Array
(
[0] => -
[1] => path=C:\WINDOWS\system32 ... and the rest of the PATH variable
[2] => quote="
[3] => colors=red
)
'blue"' is not recognized as an internal or external command,
operable program or batch file.
```
The unexpected path-expansion is a simple logic error, but the argument splitting (`colors=red` followed by cmd.exe trying to run a program called `blue`) highlights a more serious problem:
- the escaped arguments are not totally self-contained.
What's happening here is that `quote="` is escaped as `"quote=\""` and while this will work fine on its own, the odd number of double-quotes may corrupt subsequent arguments. In this case the escaped `"colors=red & blue"` is interpreted by cmd.exe as an argument (`"colors=red`) followed by the special character `&` which signifies a separate command (`blue"`). See [How cmd.exe parses a command](https://github.com/johnstevenson/winbox-args/wiki/How-cmd.exe-parses-a-command) for more information.
The wiki at [winbox-args](https://github.com/johnstevenson/winbox-args) details the various hoops you have to go through to try and make this stuff work.
|
1.0
|
[Process] Windows argument escaping is fragile - It is quite easy to break `ProcessUtils::escapeArgument` on Windows, because it does not follow all the various argument-parsing rules.
I discovered this while working on Composer's [XdebugHandler](https://github.com/composer/composer/blob/master/src/Composer/XdebugHandler.php) (which restarts the process and needs a robust way to escape its passed-in arguments) and after much research came up with a single function [Winbox\Args::escape](https://github.com/johnstevenson/winbox-args/blob/master/src/Args.php#L29) to handle this.
To illustrate the problems, we can pass some contrived arguments to both escape functions and get php to print out its `$argv`:
Given this `composer.json`:
``` json
{
"require": {
"symfony/process": "^2.1",
"winbox/args": "^1.0"
}
}
```
and this script:
``` php
<?php
require __DIR__ . '/vendor/autoload.php';
$params = [
PHP_BINARY,
'-r',
'print_r($argv);',
'--',
'path=%PATH%',
'quote="',
'colors=red & blue',
];
run($params, ['Symfony\Component\Process\ProcessUtils', 'escapeArgument']);
run($params, ['Winbox\Args', 'escape']);
function run($params, $escaper)
{
$command = implode(' ', array_map($escaper, $params));
printf("%s command line:\n%s\n\n", $escaper[0], $command);
passthru($command);
echo PHP_EOL;
}
```
the expected output from `print_r($argv);` is:
```
Array
(
[0] => -
[1] => path=%PATH%
[2] => quote="
[3] => colors=red & blue
)
```
The actual output using `Winbox\Args` is as above, whereas the output using `ProcessUtils` is:
```
Array
(
[0] => -
[1] => path=C:\WINDOWS\system32 ... and the rest of the PATH variable
[2] => quote="
[3] => colors=red
)
'blue"' is not recognized as an internal or external command,
operable program or batch file.
```
The unexpected path-expansion is a simple logic error, but the argument splitting (`colors=red` followed by cmd.exe trying to run a program called `blue`) highlights a more serious problem:
- the escaped arguments are not totally self-contained.
What's happening here is that `quote="` is escaped as `"quote=\""` and while this will work fine on its own, the odd number of double-quotes may corrupt subsequent arguments. In this case the escaped `"colors=red & blue"` is interpreted by cmd.exe as an argument (`"colors=red`) followed by the special character `&` which signifies a separate command (`blue"`). See [How cmd.exe parses a command](https://github.com/johnstevenson/winbox-args/wiki/How-cmd.exe-parses-a-command) for more information.
The wiki at [winbox-args](https://github.com/johnstevenson/winbox-args) details the various hoops you have to go through to try and make this stuff work.
|
process
|
windows argument escaping is fragile it is quite easy to break processutils escapeargument on windows because it does not follow all the various argument parsing rules i discovered this while working on composer s which restarts the process and needs a robust way to escape its passed in arguments and after much research came up with a single function to handle this to illustrate the problems we can pass some contrived arguments to both escape functions and get php to print out its argv given this composer json json require symfony process winbox args and this script php php require dir vendor autoload php params php binary r print r argv path path quote colors red blue run params run params function run params escaper command implode array map escaper params printf s command line n s n n escaper command passthru command echo php eol the expected output from print r argv is array path path quote colors red blue the actual output using winbox args is as above whereas the output using processutils is array path c windows and the rest of the path variable quote colors red blue is not recognized as an internal or external command operable program or batch file the unexpected path expansion is a simple logic error but the argument splitting colors red followed by cmd exe trying to run a program called blue highlights a more serious problem the escaped arguments are not totally self contained what s happening here is that quote is escaped as quote and while this will work fine on its own the odd number of double quotes may corrupt subsequent arguments in this case the escaped colors red blue is interpreted by cmd exe as an argument colors red followed by the special character which signifies a separate command blue see for more information the wiki at details the various hoops you have to go through to try and make this stuff work
| 1
|
9,122
| 12,197,765,633
|
IssuesEvent
|
2020-04-29 21:23:37
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Explore-Opps PHAST - Disabled/Selected baseline value
|
Process Heating
|
Child issue of #3625
PHasT > Assessment > Explore-Opportunities
The user would like to see the baseline and modificaton values side by side
|
1.0
|
Explore-Opps PHAST - Disabled/Selected baseline value - Child issue of #3625
PHasT > Assessment > Explore-Opportunities
The user would like to see the baseline and modificaton values side by side
|
process
|
explore opps phast disabled selected baseline value child issue of phast assessment explore opportunities the user would like to see the baseline and modificaton values side by side
| 1
|
10,676
| 13,461,588,607
|
IssuesEvent
|
2020-09-09 14:59:37
|
nlpie/mtap
|
https://api.github.com/repos/nlpie/mtap
|
closed
|
Add "--workers" option for processing server on Java
|
area/framework/processing kind/enhancement lang/java
|
Java currently uses the default GRPC thread pool on java.
|
1.0
|
Add "--workers" option for processing server on Java - Java currently uses the default GRPC thread pool on java.
|
process
|
add workers option for processing server on java java currently uses the default grpc thread pool on java
| 1
|
556
| 3,020,031,615
|
IssuesEvent
|
2015-07-31 03:12:46
|
benjamingr/RegExp.escape
|
https://api.github.com/repos/benjamingr/RegExp.escape
|
closed
|
Prepare me for presenting this to TC39
|
process
|
I'd like a list of what should be presented to TC39. Maybe even a quick slid.es deck or something. Here's my initial set of questions that I have:
- What issues need TC39's input, and could feasibly be decided in a final manner at the meeting? Versus, which are still open for general discussion?
- Is the API final, or is it going to grow some second parameter?
- What stage do you think is appropriate to ask for? 0, 1, or maybe 2?
- Please do https://github.com/benjamingr/RegExp.escape/issues/31
|
1.0
|
Prepare me for presenting this to TC39 - I'd like a list of what should be presented to TC39. Maybe even a quick slid.es deck or something. Here's my initial set of questions that I have:
- What issues need TC39's input, and could feasibly be decided in a final manner at the meeting? Versus, which are still open for general discussion?
- Is the API final, or is it going to grow some second parameter?
- What stage do you think is appropriate to ask for? 0, 1, or maybe 2?
- Please do https://github.com/benjamingr/RegExp.escape/issues/31
|
process
|
prepare me for presenting this to i d like a list of what should be presented to maybe even a quick slid es deck or something here s my initial set of questions that i have what issues need s input and could feasibly be decided in a final manner at the meeting versus which are still open for general discussion is the api final or is it going to grow some second parameter what stage do you think is appropriate to ask for or maybe please do
| 1
|
10,824
| 13,609,366,558
|
IssuesEvent
|
2020-09-23 05:03:29
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Branch filters and internal links [DOT 2.x develop branch]
|
feature preprocess/branch-filtering priority/low
|
On the samples here:
http://www.oxygenxml.com/forum/files/branchFiltering2.zip
The "index.dita" has an xref like:
```
<xref href="someOtherTopic.dita#topic_rl4_rc2_ys"/>
```
When the second branch is created, there is an "index-1.html" which has a reference to "someOtherTopic.html" instead of "someOtherTopic-1.html" so that the references would be self-contained on one branch.
Related links seems to work fine though.
|
1.0
|
Branch filters and internal links [DOT 2.x develop branch] - On the samples here:
http://www.oxygenxml.com/forum/files/branchFiltering2.zip
The "index.dita" has an xref like:
```
<xref href="someOtherTopic.dita#topic_rl4_rc2_ys"/>
```
When the second branch is created, there is an "index-1.html" which has a reference to "someOtherTopic.html" instead of "someOtherTopic-1.html" so that the references would be self-contained on one branch.
Related links seems to work fine though.
|
process
|
branch filters and internal links on the samples here the index dita has an xref like when the second branch is created there is an index html which has a reference to someothertopic html instead of someothertopic html so that the references would be self contained on one branch related links seems to work fine though
| 1
|
731,816
| 25,231,971,236
|
IssuesEvent
|
2022-11-14 20:38:47
|
medic/cht-core
|
https://api.github.com/repos/medic/cht-core
|
closed
|
Remove references to "Travis" from code and readmes
|
Type: Technical issue Priority: 3 - Low
|
**Describe the issue**
We have a bunch of references to Travis throughout the code, like:
- readme references
- an entire script folder called Travis: https://github.com/medic/cht-core/tree/master/scripts/travis
- the extensive use of the `TRAVIS_BUILD_NUMBER` environment variable
- e2e tests conditional `IS_TRAVIS`
- [etc](https://github.com/medic/cht-core/search?q=travis)
**Describe the improvement you'd like**
Change references to Travis to a provider neutral name, like CI, `CI_BUILD_NUMBER`, `IS_CI` etc.
**Describe alternatives you've considered**
Leave them as is, but it's triggering me all the time :D
|
1.0
|
Remove references to "Travis" from code and readmes - **Describe the issue**
We have a bunch of references to Travis throughout the code, like:
- readme references
- an entire script folder called Travis: https://github.com/medic/cht-core/tree/master/scripts/travis
- the extensive use of the `TRAVIS_BUILD_NUMBER` environment variable
- e2e tests conditional `IS_TRAVIS`
- [etc](https://github.com/medic/cht-core/search?q=travis)
**Describe the improvement you'd like**
Change references to Travis to a provider neutral name, like CI, `CI_BUILD_NUMBER`, `IS_CI` etc.
**Describe alternatives you've considered**
Leave them as is, but it's triggering me all the time :D
|
non_process
|
remove references to travis from code and readmes describe the issue we have a bunch of references to travis throughout the code like readme references an entire script folder called travis the extensive use of the travis build number environment variable tests conditional is travis describe the improvement you d like change references to travis to a provider neutral name like ci ci build number is ci etc describe alternatives you ve considered leave them as is but it s triggering me all the time d
| 0
|
90,506
| 11,407,054,459
|
IssuesEvent
|
2020-01-31 15:29:12
|
patternfly/patternfly-org
|
https://api.github.com/repos/patternfly/patternfly-org
|
closed
|
Experimental features are not discoverable
|
PF4 Design PF4 website issue infrastructure
|
Since experimental components are only located in the experimental section, they are difficult to discover.
I have noticed that product team members did not know about experimental components that would have been useful for them. One solution would be to also display the experimental components in the main navigation with the other components, but clearly mark them as experimental.
|
1.0
|
Experimental features are not discoverable - Since experimental components are only located in the experimental section, they are difficult to discover.
I have noticed that product team members did not know about experimental components that would have been useful for them. One solution would be to also display the experimental components in the main navigation with the other components, but clearly mark them as experimental.
|
non_process
|
experimental features are not discoverable since experimental components are only located in the experimental section they are difficult to discover i have noticed that product team members did not know about experimental components that would have been useful for them one solution would be to also display the experimental components in the main navigation with the other components but clearly mark them as experimental
| 0
|
716,164
| 24,624,225,914
|
IssuesEvent
|
2022-10-16 09:55:26
|
Giving-Coupons/giving-coupons
|
https://api.github.com/repos/Giving-Coupons/giving-coupons
|
closed
|
Admin campaign list issue
|
priority: 3
|
> Btw @sivayogasubramanian is this same logic using `startOf('day')` and `endOf('day')` supposed to apply to the campaigns list too? Because otherwise it shows the wrong status if the campaign starts and end on the same day?
_Originally posted by @zognin in https://github.com/Giving-Coupons/giving-coupons/pull/158#discussion_r996275842_
|
1.0
|
Admin campaign list issue - > Btw @sivayogasubramanian is this same logic using `startOf('day')` and `endOf('day')` supposed to apply to the campaigns list too? Because otherwise it shows the wrong status if the campaign starts and end on the same day?
_Originally posted by @zognin in https://github.com/Giving-Coupons/giving-coupons/pull/158#discussion_r996275842_
|
non_process
|
admin campaign list issue btw sivayogasubramanian is this same logic using startof day and endof day supposed to apply to the campaigns list too because otherwise it shows the wrong status if the campaign starts and end on the same day originally posted by zognin in
| 0
|
15,801
| 19,987,638,443
|
IssuesEvent
|
2022-01-30 22:20:03
|
bisq-network/proposals
|
https://api.github.com/repos/bisq-network/proposals
|
closed
|
Proposal to analyze blockchain data of Bisq trades to report on failed trades, arbitration instances and long-term locked funds
|
was:approved a:proposal re:processes
|
> _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://bisq.wiki/Proposals)._
<!-- Please do not remove the text above. -->
This is a proposal for myself to analyze the blockchain data for 200 historic trades with fees paid with BSQ and 200 historic trades with fees trades paid with BTC to answer the following questions:
- What percentage of trades with trade fees paid with BSQ failed
- What percentage of trades with trade fees paid with BTC failed
- What percentage of trades with trade fees paid with BSQ went to arbitration
- What percentage of trades with trade fees paid with BTC went to arbitration
- What percentage of trades with trade fees paid with BSQ still have locked funds after 3 months
- What percentage of trades with trade fees paid with BTC still have locked funds after 3 months
- Number of trades entering arbitration as a percentage of total Bisq trades
- The number, hopefully zero, of any trades going to arbitration that are paying out to a different address than the burningmans BTC address 34VLFgtFKAtwTdZ5rengTT2g2zC99sWQLC
### Problem
Currently the following are unknown:
- How many of trades on Bisq fail. The mediators are not aware of all the trades that fail, and not all user with failed trades request reimbursements
- The number of trades entering arbitration as a percentage of Bisq trades, This should be simple enough to work out but I do not think this is being done.
- The number of trades that for one reason or another end up with locked funds after 3 months. This should not happen but if both traders go AWOL then it could. Also bugs could cause this.
- If there a significant difference between failed trades between fees fain in BTC or BSQ. I assume not, but would be good to check.
### Methodology
- For trades fees paid in BTC. I will look at the first 200 trades on Bisq from block height 703000 (October 2020) onwards where fees were paid to 38bZBj5peYS3Husdz7AH3gEUiUbYRD951t
- For trades fees paid in BSQ. I will look at the first 200 trades on Bisq from block height 705000, a later block height than above to not duplicate results, (October 2020) onwards using https://bisq.markets/blocks
- To look at trades ending in arbitration I will compare the number of deposits going to 34VLFgtFKAtwTdZ5rengTT2g2zC99sWQLC vs the total trades on the Bisq platform
### Outcome
I will provide a summary report under this proposal detailing answering the above questions and suggesting any relevant data. I will also include any observations / suggestions as necessary.
### Cost
I will request $500 USD worth of BSQ for completing the above.
I think the above would be good to implement. If anyone has any data they think I should also collect then please let me know. Also I plan to do this manually so if anyone thinks there is a way to do this quicker via a script etc, or for lower cost, please let me know.
|
1.0
|
Proposal to analyze blockchain data of Bisq trades to report on failed trades, arbitration instances and long-term locked funds - > _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://bisq.wiki/Proposals)._
<!-- Please do not remove the text above. -->
This is a proposal for myself to analyze the blockchain data for 200 historic trades with fees paid with BSQ and 200 historic trades with fees trades paid with BTC to answer the following questions:
- What percentage of trades with trade fees paid with BSQ failed
- What percentage of trades with trade fees paid with BTC failed
- What percentage of trades with trade fees paid with BSQ went to arbitration
- What percentage of trades with trade fees paid with BTC went to arbitration
- What percentage of trades with trade fees paid with BSQ still have locked funds after 3 months
- What percentage of trades with trade fees paid with BTC still have locked funds after 3 months
- Number of trades entering arbitration as a percentage of total Bisq trades
- The number, hopefully zero, of any trades going to arbitration that are paying out to a different address than the burningmans BTC address 34VLFgtFKAtwTdZ5rengTT2g2zC99sWQLC
### Problem
Currently the following are unknown:
- How many of trades on Bisq fail. The mediators are not aware of all the trades that fail, and not all user with failed trades request reimbursements
- The number of trades entering arbitration as a percentage of Bisq trades, This should be simple enough to work out but I do not think this is being done.
- The number of trades that for one reason or another end up with locked funds after 3 months. This should not happen but if both traders go AWOL then it could. Also bugs could cause this.
- If there a significant difference between failed trades between fees fain in BTC or BSQ. I assume not, but would be good to check.
### Methodology
- For trades fees paid in BTC. I will look at the first 200 trades on Bisq from block height 703000 (October 2020) onwards where fees were paid to 38bZBj5peYS3Husdz7AH3gEUiUbYRD951t
- For trades fees paid in BSQ. I will look at the first 200 trades on Bisq from block height 705000, a later block height than above to not duplicate results, (October 2020) onwards using https://bisq.markets/blocks
- To look at trades ending in arbitration I will compare the number of deposits going to 34VLFgtFKAtwTdZ5rengTT2g2zC99sWQLC vs the total trades on the Bisq platform
### Outcome
I will provide a summary report under this proposal detailing answering the above questions and suggesting any relevant data. I will also include any observations / suggestions as necessary.
### Cost
I will request $500 USD worth of BSQ for completing the above.
I think the above would be good to implement. If anyone has any data they think I should also collect then please let me know. Also I plan to do this manually so if anyone thinks there is a way to do this quicker via a script etc, or for lower cost, please let me know.
|
process
|
proposal to analyze blockchain data of bisq trades to report on failed trades arbitration instances and long term locked funds this is a bisq network proposal please familiarize yourself with the this is a proposal for myself to analyze the blockchain data for historic trades with fees paid with bsq and historic trades with fees trades paid with btc to answer the following questions what percentage of trades with trade fees paid with bsq failed what percentage of trades with trade fees paid with btc failed what percentage of trades with trade fees paid with bsq went to arbitration what percentage of trades with trade fees paid with btc went to arbitration what percentage of trades with trade fees paid with bsq still have locked funds after months what percentage of trades with trade fees paid with btc still have locked funds after months number of trades entering arbitration as a percentage of total bisq trades the number hopefully zero of any trades going to arbitration that are paying out to a different address than the burningmans btc address problem currently the following are unknown how many of trades on bisq fail the mediators are not aware of all the trades that fail and not all user with failed trades request reimbursements the number of trades entering arbitration as a percentage of bisq trades this should be simple enough to work out but i do not think this is being done the number of trades that for one reason or another end up with locked funds after months this should not happen but if both traders go awol then it could also bugs could cause this if there a significant difference between failed trades between fees fain in btc or bsq i assume not but would be good to check methodology for trades fees paid in btc i will look at the first trades on bisq from block height october onwards where fees were paid to for trades fees paid in bsq i will look at the first trades on bisq from block height a later block height than above to not duplicate results october onwards using to look at trades ending in arbitration i will compare the number of deposits going to vs the total trades on the bisq platform outcome i will provide a summary report under this proposal detailing answering the above questions and suggesting any relevant data i will also include any observations suggestions as necessary cost i will request usd worth of bsq for completing the above i think the above would be good to implement if anyone has any data they think i should also collect then please let me know also i plan to do this manually so if anyone thinks there is a way to do this quicker via a script etc or for lower cost please let me know
| 1
|
21,069
| 28,017,206,478
|
IssuesEvent
|
2023-03-28 00:19:35
|
nephio-project/sig-release
|
https://api.github.com/repos/nephio-project/sig-release
|
opened
|
Ensure Nephio project and repos are open to contributors
|
area/process-mgmt sig/release
|
Project is open for commits from all contributors, with OWNERs files and other review processes in place
|
1.0
|
Ensure Nephio project and repos are open to contributors - Project is open for commits from all contributors, with OWNERs files and other review processes in place
|
process
|
ensure nephio project and repos are open to contributors project is open for commits from all contributors with owners files and other review processes in place
| 1
|
383,557
| 11,357,849,819
|
IssuesEvent
|
2020-01-25 09:23:03
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Taints should be more verbose in errors
|
kind/feature priority/important-longterm sig/scheduling
|
When trying to see why pods don't start using either kubectl get events or kubectl describe pod , the base message is not detailed enough.
Warning FailedScheduling 5s (x4 over 65s) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
What would you like to be added:
Details of the taint and details of what it didn't tolerate
Why is this needed:
To get much much better debug information by default when trying to understand why pods can't launch.
|
1.0
|
Taints should be more verbose in errors - When trying to see why pods don't start using either kubectl get events or kubectl describe pod , the base message is not detailed enough.
Warning FailedScheduling 5s (x4 over 65s) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
What would you like to be added:
Details of the taint and details of what it didn't tolerate
Why is this needed:
To get much much better debug information by default when trying to understand why pods can't launch.
|
non_process
|
taints should be more verbose in errors when trying to see why pods don t start using either kubectl get events or kubectl describe pod the base message is not detailed enough warning failedscheduling over default scheduler nodes are available node s had taints that the pod didn t tolerate what would you like to be added details of the taint and details of what it didn t tolerate why is this needed to get much much better debug information by default when trying to understand why pods can t launch
| 0
|
800,265
| 28,359,150,533
|
IssuesEvent
|
2023-04-12 09:29:38
|
wasmerio/wasmer
|
https://api.github.com/repos/wasmerio/wasmer
|
closed
|
Segfault with `Memory::grow`/`MemoryView` interaction
|
bug priority-high
|
<!-- Thanks for the bug report! -->
### Describe the bug
<!--
A clear and concise description of what the bug is.
Copy and paste the result of executing the following in your shell, so we can know the version of wasmer, Rust (if available) and architecture of your environment.
-->
https://docs.rs/wasmer/3.1.0/wasmer/struct.MemoryView.html documents:
> After a memory is grown a view must not be used anymore.
However, it is possible to do this in safe code and instead of producing an error or panicking this results in a segfault. So there is a soundness issue with the interaction of `Memory::grow` and `Memory::view`.
`wasmer v3.1.0 | rustc 1.66.1 (90743e729 2023-01-10) | x86_64`
### Steps to reproduce
<!--
Include steps that will help us recreate the issue.
For example,
1. Go to '…'
2. Compile with '…'
3. Run '…'
4. See error
If applicable, add a link to a test case (as a zip file or link to a repository we can clone).
-->
```rs
use wasmer::{Memory, MemoryType, Store};
fn main() {
// The default configuration gives memory plenty of room to grow in-place.
let tunables = wasmer::BaseTunables {
static_memory_bound: 4.into(),
static_memory_offset_guard_size: 1024 * 64 * 4,
dynamic_memory_offset_guard_size: 1024 * 64 * 4,
};
let engine = wasmer::EngineBuilder::headless().engine();
let mut store = Store::new_with_tunables(engine, tunables);
let memory = Memory::new(&mut store, MemoryType::new(10, None, false)).unwrap();
let view = memory.view(&store);
memory.grow(&mut store, 1024).unwrap();
view.write_u8(0, 4u8).unwrap();
}
```
```shell
> cargo r
fish: Job 1, 'cargo r' terminated by signal SIGSEGV (Address boundary error)
```
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
One of the following:
* Producing an error, panicking, or aborting.
* An API that statically prevents this, e.g. by having the `Store` borrowed for the lifetime of `MemoryView`.
* An unsafe API.
### Actual behavior
<!--
A clear and concise description of what actually happened.
If applicable, add screenshots to help explain your problem.
-->
Access to freed memory in safe code. Segfault.
<!-- Add any other context about the problem here. -->
|
1.0
|
Segfault with `Memory::grow`/`MemoryView` interaction - <!-- Thanks for the bug report! -->
### Describe the bug
<!--
A clear and concise description of what the bug is.
Copy and paste the result of executing the following in your shell, so we can know the version of wasmer, Rust (if available) and architecture of your environment.
-->
https://docs.rs/wasmer/3.1.0/wasmer/struct.MemoryView.html documents:
> After a memory is grown a view must not be used anymore.
However, it is possible to do this in safe code and instead of producing an error or panicking this results in a segfault. So there is a soundness issue with the interaction of `Memory::grow` and `Memory::view`.
`wasmer v3.1.0 | rustc 1.66.1 (90743e729 2023-01-10) | x86_64`
### Steps to reproduce
<!--
Include steps that will help us recreate the issue.
For example,
1. Go to '…'
2. Compile with '…'
3. Run '…'
4. See error
If applicable, add a link to a test case (as a zip file or link to a repository we can clone).
-->
```rs
use wasmer::{Memory, MemoryType, Store};
fn main() {
// The default configuration gives memory plenty of room to grow in-place.
let tunables = wasmer::BaseTunables {
static_memory_bound: 4.into(),
static_memory_offset_guard_size: 1024 * 64 * 4,
dynamic_memory_offset_guard_size: 1024 * 64 * 4,
};
let engine = wasmer::EngineBuilder::headless().engine();
let mut store = Store::new_with_tunables(engine, tunables);
let memory = Memory::new(&mut store, MemoryType::new(10, None, false)).unwrap();
let view = memory.view(&store);
memory.grow(&mut store, 1024).unwrap();
view.write_u8(0, 4u8).unwrap();
}
```
```shell
> cargo r
fish: Job 1, 'cargo r' terminated by signal SIGSEGV (Address boundary error)
```
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
One of the following:
* Producing an error, panicking, or aborting.
* An API that statically prevents this, e.g. by having the `Store` borrowed for the lifetime of `MemoryView`.
* An unsafe API.
### Actual behavior
<!--
A clear and concise description of what actually happened.
If applicable, add screenshots to help explain your problem.
-->
Access to freed memory in safe code. Segfault.
<!-- Add any other context about the problem here. -->
|
non_process
|
segfault with memory grow memoryview interaction describe the bug a clear and concise description of what the bug is copy and paste the result of executing the following in your shell so we can know the version of wasmer rust if available and architecture of your environment documents after a memory is grown a view must not be used anymore however it is possible to do this in safe code and instead of producing an error or panicking this results in a segfault so there is a soundness issue with the interaction of memory grow and memory view wasmer rustc steps to reproduce include steps that will help us recreate the issue for example go to … compile with … run … see error if applicable add a link to a test case as a zip file or link to a repository we can clone rs use wasmer memory memorytype store fn main the default configuration gives memory plenty of room to grow in place let tunables wasmer basetunables static memory bound into static memory offset guard size dynamic memory offset guard size let engine wasmer enginebuilder headless engine let mut store store new with tunables engine tunables let memory memory new mut store memorytype new none false unwrap let view memory view store memory grow mut store unwrap view write unwrap shell cargo r fish job cargo r terminated by signal sigsegv address boundary error expected behavior one of the following producing an error panicking or aborting an api that statically prevents this e g by having the store borrowed for the lifetime of memoryview an unsafe api actual behavior a clear and concise description of what actually happened if applicable add screenshots to help explain your problem access to freed memory in safe code segfault
| 0
|
486,529
| 14,010,658,093
|
IssuesEvent
|
2020-10-29 05:42:19
|
SunstriderEmu/BugTracker
|
https://api.github.com/repos/SunstriderEmu/BugTracker
|
closed
|
The Lurker
|
confirmed core high priority
|
Tonight we did Lurker and it did a whirlwind while he was sprouting and killed some people. It was super strange because it happened so fast even for a whirl but it showed that he was going to whirl in the timer as it happened. The most strange thing of all is that it happened only once while sprout was rotating and it moved like 30-40 degrees only before the bug occured.
**Expected behavior**
I expected that sprout will rotate 360 degrees from when it started and then whirlwinds should trigger
**Screenshots/videos**
https://clips.twitch.tv/SmilingBitterBottleBleedPurple
|
1.0
|
The Lurker - Tonight we did Lurker and it did a whirlwind while he was sprouting and killed some people. It was super strange because it happened so fast even for a whirl but it showed that he was going to whirl in the timer as it happened. The most strange thing of all is that it happened only once while sprout was rotating and it moved like 30-40 degrees only before the bug occured.
**Expected behavior**
I expected that sprout will rotate 360 degrees from when it started and then whirlwinds should trigger
**Screenshots/videos**
https://clips.twitch.tv/SmilingBitterBottleBleedPurple
|
non_process
|
the lurker tonight we did lurker and it did a whirlwind while he was sprouting and killed some people it was super strange because it happened so fast even for a whirl but it showed that he was going to whirl in the timer as it happened the most strange thing of all is that it happened only once while sprout was rotating and it moved like degrees only before the bug occured expected behavior i expected that sprout will rotate degrees from when it started and then whirlwinds should trigger screenshots videos
| 0
|
15,071
| 18,766,003,178
|
IssuesEvent
|
2021-11-06 00:28:43
|
googleapis/java-translate
|
https://api.github.com/repos/googleapis/java-translate
|
closed
|
com.example.translate.DeleteGlossaryTests: testDeleteGlossary failed
|
priority: p2 type: process api: translate flakybot: issue flakybot: flaky
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 319552c6c29ae1c5033d9b3afeb9b8bfa65c6b54
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f84dd1dc-8ea0-4c4f-a2cc-c3f0eee75d4b), [Sponge](http://sponge2/f84dd1dc-8ea0-4c4f-a2cc-c3f0eee75d4b)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:588)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:567)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:92)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:66)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:127)
at com.example.translate.CreateGlossary.createGlossary(CreateGlossary.java:90)
at com.example.translate.DeleteGlossaryTests.setUp(DeleteGlossaryTests.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable.
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1133)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1277)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable.
at io.grpc.Status.asRuntimeException(Status.java:535)
... 13 more
</pre></details>
|
1.0
|
com.example.translate.DeleteGlossaryTests: testDeleteGlossary failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 319552c6c29ae1c5033d9b3afeb9b8bfa65c6b54
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f84dd1dc-8ea0-4c4f-a2cc-c3f0eee75d4b), [Sponge](http://sponge2/f84dd1dc-8ea0-4c4f-a2cc-c3f0eee75d4b)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:588)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:567)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:92)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:66)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:127)
at com.example.translate.CreateGlossary.createGlossary(CreateGlossary.java:90)
at com.example.translate.DeleteGlossaryTests.setUp(DeleteGlossaryTests.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable.
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1133)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1277)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: The service is currently unavailable.
at io.grpc.Status.asRuntimeException(Status.java:535)
... 13 more
</pre></details>
|
process
|
com example translate deleteglossarytests testdeleteglossary failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output java util concurrent executionexception com google api gax rpc unavailableexception io grpc statusruntimeexception unavailable the service is currently unavailable at com google common util concurrent abstractfuture getdonevalue abstractfuture java at com google common util concurrent abstractfuture get abstractfuture java at com google common util concurrent fluentfuture trustedfuture get fluentfuture java at com google common util concurrent forwardingfuture get forwardingfuture java at com google api gax longrunning operationfutureimpl get operationfutureimpl java at com example translate createglossary createglossary createglossary java at com example translate deleteglossarytests setup deleteglossarytests java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements runbefores invokemethod runbefores java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner 
java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire execute java at org apache maven surefire executewithrerun java at org apache maven surefire executetestset java at org apache maven surefire invoke java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by com google api gax rpc unavailableexception io grpc statusruntimeexception unavailable the service is currently unavailable at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onfailure grpcexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at io grpc stub clientcalls grpcfuture setexception clientcalls java at io grpc stub 
clientcalls unarystreamtofuture onclose clientcalls java at io grpc internal delayedclientcall delayedlistener run delayedclientcall java at io grpc internal delayedclientcall delayedlistener delayorexecute delayedclientcall java at io grpc internal delayedclientcall delayedlistener onclose delayedclientcall java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by io grpc statusruntimeexception unavailable the service is currently unavailable at io grpc status asruntimeexception status java more
| 1
|
148,670
| 5,694,421,187
|
IssuesEvent
|
2017-04-15 13:11:14
|
ScreepsOCS/screeps.behaviour-action-pattern
|
https://api.github.com/repos/ScreepsOCS/screeps.behaviour-action-pattern
|
closed
|
inactive storage is unusable
|
! Bug # Core High Priority
|
If you claim a room with storage already in it, but your RCL is too low for it to be active, it can cause bugs #1269 resolved issues for upgraders, but a more thorough check is needed to make sure our creeps only consider the storage when it's actually active.
|
1.0
|
inactive storage is unusable - If you claim a room with storage already in it, but your RCL is too low for it to be active, it can cause bugs #1269 resolved issues for upgraders, but a more thorough check is needed to make sure our creeps only consider the storage when it's actually active.
|
non_process
|
inactive storage is unusable if you claim a room with storage already in it but your rcl is too low for it to be active it can cause bugs resolved issues for upgraders but a more thorough check is needed to make sure our creeps only consider the storage when it s actually active
| 0
|
101,633
| 8,791,998,772
|
IssuesEvent
|
2018-12-21 14:42:46
|
eclipse/openj9
|
https://api.github.com/repos/eclipse/openj9
|
closed
|
OSX shared classes extended system test failures
|
test failure
|
https://ci.eclipse.org/openj9/job/Test-extended.system-JDK11-osx_x86-64_cmprssptrs/1
SharedClassesAPI_0
SharedClasses.SCM01.MultiThreadMultiCL_0
SharedClasses.SCM23.MultiCL_0
SharedClasses.SCM23.MultiThread_0
|
1.0
|
OSX shared classes extended system test failures - https://ci.eclipse.org/openj9/job/Test-extended.system-JDK11-osx_x86-64_cmprssptrs/1
SharedClassesAPI_0
SharedClasses.SCM01.MultiThreadMultiCL_0
SharedClasses.SCM23.MultiCL_0
SharedClasses.SCM23.MultiThread_0
|
non_process
|
osx shared classes extended system test failures sharedclassesapi sharedclasses multithreadmulticl sharedclasses multicl sharedclasses multithread
| 0
|
6,159
| 9,039,024,431
|
IssuesEvent
|
2019-02-10 01:27:39
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
closed
|
[BottomNavigation] MVP BottomNavigationController
|
[BottomNavigation] type:Process
|
This was filed as an internal issue. If you are a Googler, please visit [b/119189442](http://b/119189442) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/119189442](http://b/119189442)
- Blocked by: https://github.com/material-components/material-components-ios/issues/5659
- Blocked by: https://github.com/material-components/material-components-ios/issues/5660
- Blocked by: https://github.com/material-components/material-components-ios/issues/5661
|
1.0
|
[BottomNavigation] MVP BottomNavigationController - This was filed as an internal issue. If you are a Googler, please visit [b/119189442](http://b/119189442) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/119189442](http://b/119189442)
- Blocked by: https://github.com/material-components/material-components-ios/issues/5659
- Blocked by: https://github.com/material-components/material-components-ios/issues/5660
- Blocked by: https://github.com/material-components/material-components-ios/issues/5661
|
process
|
mvp bottomnavigationcontroller this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug blocked by blocked by blocked by
| 1
|
16,196
| 20,681,055,223
|
IssuesEvent
|
2022-03-10 13:57:29
|
hoprnet/hoprnet
|
https://api.github.com/repos/hoprnet/hoprnet
|
opened
|
Create devrel process
|
new issue processes
|
<!--- Please DO NOT remove the automatically added 'new issue' label -->
<!--- Provide a general summary of the issue in the Title above -->
- Matrix:
- Internal team discussion
- Github:
- Tracking issues, bugs, features, PR
- Forum:
- Main technical discussion space
- Discord:
- Have the community manager moderate questions and ping the tech team
- Direct technical topics to the forum if possible
- Have Andius instruct mr_vavillon to act as a community manager
- communicate the process with community team
|
1.0
|
Create devrel process - <!--- Please DO NOT remove the automatically added 'new issue' label -->
<!--- Provide a general summary of the issue in the Title above -->
- Matrix:
- Internal team discussion
- Github:
- Tracking issues, bugs, features, PR
- Forum:
- Main technical discussion space
- Discord:
- Have the community manager moderate questions and ping the tech team
- Direct technical topics to the forum if possible
- Have Andius instruct mr_vavillon to act as a community manager
- communicate the process with community team
|
process
|
create devrel process matrix internal team discussion github tracking issues bugs features pr forum main technical discussion space discord have the community manager moderate questions and ping the tech team direct technical topics to the forum if possible have andius instruct mr vavillon to act as a community manager communicate the process with community team
| 1
|
25,519
| 12,676,314,810
|
IssuesEvent
|
2020-06-19 04:46:27
|
rusefi/rusefi
|
https://api.github.com/repos/rusefi/rusefi
|
closed
|
[PERFORMANCE] hardware continues integration needs a high RPM high tooth count test case
|
Performance
|
CI on real hardware should assert 10K RPM at 60/2 with self-stimulation or smth :)
|
True
|
[PERFORMANCE] hardware continues integration needs a high RPM high tooth count test case - CI on real hardware should assert 10K RPM at 60/2 with self-stimulation or smth :)
|
non_process
|
hardware continues integration needs a high rpm high tooth count test case ci on real hardware should assert rpm at with self stimulation or smth
| 0
|
5,946
| 8,773,191,582
|
IssuesEvent
|
2018-12-18 16:15:44
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Firestore: support features blacklisted in conformance tests
|
api: firestore type: process
|
PR #6290 blacklisted a number of conformance tests because we do not currently support the usecases they support:
- `get-*` tests (because we use `BatchGetDocuments` API rather than the `GetDocument` API.
- `listen-*` tests exercise the "watch" features (since landed in PR #6191).
- `update_paths-*` tests (they've been excluded forever, with a note that Python lacked the support).
- Tests involving unimplemented / incorrectly implemented "transforms" (`DELETE`, `ARRAY_REMOVE`, `ARRAY_UNION`).
- `query-*` tests have been (inadvertently) skipped since being copied in from `google-cloud-common` in PR #5351).
This is a tracking issue for removing those skips / blacklist entries:
- [x] Update `Document.get` to use the `GetDocument` API, and re-enable the (one) `get-basic.textproto` test. (#6534)
- [x] Fix the support for the `DELETE` transform (one failing test) and enable those tests. (#6559)
- [x] Figure out how to support the `ARRAY_REMOVE` transform, and enable those tests. (#6651)
- [x] Figure out how to support the `ARRAY_UNION` transform, and enable those tests. (#6651)
- [x] Enable the `query-*` tests and make changes needed for them to pass. (#6839)
- [ ] Enable the `listen-*` tests and make changes needed for them to pass.
Not in scope:
~~- Figure how to support the "update with paths" use case, and enable those tests.~~
|
1.0
|
Firestore: support features blacklisted in conformance tests - PR #6290 blacklisted a number of conformance tests because we do not currently support the usecases they support:
- `get-*` tests (because we use `BatchGetDocuments` API rather than the `GetDocument` API.
- `listen-*` tests exercise the "watch" features (since landed in PR #6191).
- `update_paths-*` tests (they've been excluded forever, with a note that Python lacked the support).
- Tests involving unimplemented / incorrectly implemented "transforms" (`DELETE`, `ARRAY_REMOVE`, `ARRAY_UNION`).
- `query-*` tests have been (inadvertently) skipped since being copied in from `google-cloud-common` in PR #5351).
This is a tracking issue for removing those skips / blacklist entries:
- [x] Update `Document.get` to use the `GetDocument` API, and re-enable the (one) `get-basic.textproto` test. (#6534)
- [x] Fix the support for the `DELETE` transform (one failing test) and enable those tests. (#6559)
- [x] Figure out how to support the `ARRAY_REMOVE` transform, and enable those tests. (#6651)
- [x] Figure out how to support the `ARRAY_UNION` transform, and enable those tests. (#6651)
- [x] Enable the `query-*` tests and make changes needed for them to pass. (#6839)
- [ ] Enable the `listen-*` tests and make changes needed for them to pass.
Not in scope:
~~- Figure how to support the "update with paths" use case, and enable those tests.~~
|
process
|
firestore support features blacklisted in conformance tests pr blacklisted a number of conformance tests because we do not currently support the usecases they support get tests because we use batchgetdocuments api rather than the getdocument api listen tests exercise the watch features since landed in pr update paths tests they ve been excluded forever with a note that python lacked the support tests involving unimplemented incorrectly implemented transforms delete array remove array union query tests have been inadvertently skipped since being copied in from google cloud common in pr this is a tracking issue for removing those skips blacklist entries update document get to use the getdocument api and re enable the one get basic textproto test fix the support for the delete transform one failing test and enable those tests figure out how to support the array remove transform and enable those tests figure out how to support the array union transform and enable those tests enable the query tests and make changes needed for them to pass enable the listen tests and make changes needed for them to pass not in scope figure how to support the update with paths use case and enable those tests
| 1
|
19,570
| 3,226,833,175
|
IssuesEvent
|
2015-10-10 16:56:19
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
Dopri5 ODE integrator with step-size control
|
defect scipy.integrate
|
I am trying to solve a simple example with the `dopri5` integrator in `scipy.integrate.ode`. As the documentation states
> This is an explicit runge-kutta method of order (4)5 due to Dormand & Prince (with stepsize control and dense output).
this should work. So here is my example:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
def MassSpring_with_force(t, state, f):
""" Simple 1DOF dynamics model: m ddx(t) + k x(t) = f(t)"""
# unpack the state vector
x = state[0]
xd = state[1]
# these are our constants
k = 2.5 # Newtons per metre
m = 1.5 # Kilograms
# compute acceleration xdd
xdd = ( ( -k*x + f) / m )
# return the two state derivatives
return [xd, xdd]
def force(t):
""" Excitation force """
f0 = 1 # force amplitude [N]
freq = 20 # frequency[Hz]
omega = 2 * np.pi *freq # angular frequency [rad/s]
return f0 * np.sin(omega*t)
# Time range
t_start = 0
t_final = 1
# Main program
state_ode_f = ode(MassSpring_with_force)
state_ode_f.set_integrator('dopri5', rtol=1e-4, nsteps=500,
first_step=1e-6, max_step=1e-1, verbosity=True)
state2 = [0.0, 0.0] # initial conditions
state_ode_f.set_initial_value(state2, 0)
state_ode_f.set_f_params(force(0))
sol = np.array([[t_start, state2[0], state2[1]]], dtype=float)
print("Time\t\t Timestep\t dx\t\t ddx\t\t state_ode_f.successful()")
while state_ode_f.successful() and state_ode_f.t < (t_final):
state_ode_f.set_f_params(force(state_ode_f.t))
state_ode_f.integrate(t_final, step=True)
sol = np.append(sol, [[state_ode_f.t, state_ode_f.y[0], state_ode_f.y[1]]], axis=0)
print("{0:0.8f}\t {1:0.4e} \t{2:10.3e}\t {3:0.3e}\t {4}".format(
state_ode_f.t, sol[-1, 0]- sol[-2, 0], state_ode_f.y[0], state_ode_f.y[1], state_ode_f.successful()))
The result I get is:
Time Timestep dx ddx state_ode_f.successful()
1.00000000 1.0000e+00 0.000e+00 0.000e+00 True
Hence, only one time-step is computed which is obviously incorrect.
This works with `vode` and `zvode` integrators
|
1.0
|
Dopri5 ODE integrator with step-size control - I am trying to solve a simple example with the `dopri5` integrator in `scipy.integrate.ode`. As the documentation states
> This is an explicit runge-kutta method of order (4)5 due to Dormand & Prince (with stepsize control and dense output).
this should work. So here is my example:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
def MassSpring_with_force(t, state, f):
""" Simple 1DOF dynamics model: m ddx(t) + k x(t) = f(t)"""
# unpack the state vector
x = state[0]
xd = state[1]
# these are our constants
k = 2.5 # Newtons per metre
m = 1.5 # Kilograms
# compute acceleration xdd
xdd = ( ( -k*x + f) / m )
# return the two state derivatives
return [xd, xdd]
def force(t):
""" Excitation force """
f0 = 1 # force amplitude [N]
freq = 20 # frequency[Hz]
omega = 2 * np.pi *freq # angular frequency [rad/s]
return f0 * np.sin(omega*t)
# Time range
t_start = 0
t_final = 1
# Main program
state_ode_f = ode(MassSpring_with_force)
state_ode_f.set_integrator('dopri5', rtol=1e-4, nsteps=500,
first_step=1e-6, max_step=1e-1, verbosity=True)
state2 = [0.0, 0.0] # initial conditions
state_ode_f.set_initial_value(state2, 0)
state_ode_f.set_f_params(force(0))
sol = np.array([[t_start, state2[0], state2[1]]], dtype=float)
print("Time\t\t Timestep\t dx\t\t ddx\t\t state_ode_f.successful()")
while state_ode_f.successful() and state_ode_f.t < (t_final):
state_ode_f.set_f_params(force(state_ode_f.t))
state_ode_f.integrate(t_final, step=True)
sol = np.append(sol, [[state_ode_f.t, state_ode_f.y[0], state_ode_f.y[1]]], axis=0)
print("{0:0.8f}\t {1:0.4e} \t{2:10.3e}\t {3:0.3e}\t {4}".format(
state_ode_f.t, sol[-1, 0]- sol[-2, 0], state_ode_f.y[0], state_ode_f.y[1], state_ode_f.successful()))
The result I get is:
Time Timestep dx ddx state_ode_f.successful()
1.00000000 1.0000e+00 0.000e+00 0.000e+00 True
Hence, only one time-step is computed which is obviously incorrect.
This works with `vode` and `zvode` integrators
|
non_process
|
ode integrator with step size control i am trying to solve a simple example with the integrator in scipy integrate ode as the documentation states this is an explicit runge kutta method of order due to dormand prince with stepsize control and dense output this should work so here is my example import numpy as np from scipy integrate import ode import matplotlib pyplot as plt def massspring with force t state f simple dynamics model m ddx t k x t f t unpack the state vector x state xd state these are our constants k newtons per metre m kilograms compute acceleration xdd xdd k x f m return the two state derivatives return def force t excitation force force amplitude freq frequency omega np pi freq angular frequency return np sin omega t time range t start t final main program state ode f ode massspring with force state ode f set integrator rtol nsteps first step max step verbosity true initial conditions state ode f set initial value state ode f set f params force sol np array dtype float print time t t timestep t dx t t ddx t t state ode f successful while state ode f successful and state ode f t t final state ode f set f params force state ode f t state ode f integrate t final step true sol np append sol state ode f y axis print t t t t format state ode f t sol sol state ode f y state ode f y state ode f successful the result i get is time timestep dx ddx state ode f successful true hence only one time step is computed which is obviously incorrect this works with vode and zvode integrators
| 0
|
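Editorial note (not part of the dataset): the `dopri5` record above reports that `integrate(t_final, step=True)` returns a single jump to `t_final`. This is expected — scipy's `dopri5` backend does not support step mode — and the documented way to observe intermediate steps is the `set_solout` callback. A minimal sketch of that workaround, assuming the same spring-mass constants as the quoted issue (the forcing is evaluated inside the RHS here rather than frozen via `set_f_params`, which was itself a bug in the original snippet):

```python
import numpy as np
from scipy.integrate import ode

def rhs(t, state):
    # 1-DOF spring-mass with sinusoidal forcing, constants from the issue
    k, m = 2.5, 1.5
    x, xd = state
    f = np.sin(2 * np.pi * 20 * t)  # evaluate forcing at the solver's own t
    return [xd, (-k * x + f) / m]

solver = ode(rhs)
solver.set_integrator('dopri5', rtol=1e-4, nsteps=500,
                      first_step=1e-6, max_step=1e-1)

steps = []
# solout is invoked after every accepted internal step; register it
# before set_initial_value so the first step is captured as well
solver.set_solout(lambda t, y: steps.append((t, y[0], y[1])))
solver.set_initial_value([0.0, 0.0], 0.0)

solver.integrate(1.0)  # one call; intermediate steps land in `steps`
```

After this runs, `steps` holds every accepted step up to `t = 1.0`, recovering the per-step trace the issue's `while` loop was trying to produce.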
72,597
| 24,194,243,524
|
IssuesEvent
|
2022-09-23 21:11:42
|
jezzsantos/automate.plugin-rider
|
https://api.github.com/repos/jezzsantos/automate.plugin-rider
|
opened
|
Editing a DraftElement raises exception
|
defect-functional
|
When editing a DraftElement that has some null properties,
editing the element can actually result in new properties being added to the tree.
We need to raise both `treeNodesInserted` as well as `treeNodeChanged` events.
Otherwise, it throw when just raising `treeNodeChanged`
|
1.0
|
Editing a DraftElement raises exception - When editing a DraftElement that has some null properties,
editing the element can actually result in new properties being added to the tree.
We need to raise both `treeNodesInserted` as well as `treeNodeChanged` events.
Otherwise, it throw when just raising `treeNodeChanged`
|
non_process
|
editing a draftelement raises exception when editing a draftelement that has some null properties editing the element can actually result in new properties being added to the tree we need to raise both treenodesinserted as well as treenodechanged events otherwise it throw when just raising treenodechanged
| 0
|
1,411
| 3,972,048,697
|
IssuesEvent
|
2016-05-04 14:11:41
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
opened
|
Wrong processing link that was created in non-top window and added to top window.
|
AREA: client COMPLEXITY: easy SYSTEM: resource processing TYPE: bug
|
Files to reproduce:
### Index.js
```javascript
var http = require('http');
var fs = require('fs');
http.createServer(function (req, res) {
var content = '';
if (req.url === '/')
content = fs.readFileSync('index.html');
else if (req.url === '/iframe.html')
content = fs.readFileSync('iframe.html');
res.end(content);
}).listen(3000);
```
### Index.html
```javascript
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<h1>Main content</h1>
<iframe src="/iframe.html"></iframe>
</body>
</html>
```
### Iframe.html
```javascript
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<h1>Iframe content</h1>
<script>
var link = document.createElement('a');
link.href = '/url.html';
link.textContent = 'link';
window.top.document.body.appendChild(link);
</script>
</body>
</html>
```
|
1.0
|
Wrong processing link that was created in non-top window and added to top window. - Files to reproduce:
### Index.js
```javascript
var http = require('http');
var fs = require('fs');
http.createServer(function (req, res) {
var content = '';
if (req.url === '/')
content = fs.readFileSync('index.html');
else if (req.url === '/iframe.html')
content = fs.readFileSync('iframe.html');
res.end(content);
}).listen(3000);
```
### Index.html
```javascript
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<h1>Main content</h1>
<iframe src="/iframe.html"></iframe>
</body>
</html>
```
### Iframe.html
```javascript
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<h1>Iframe content</h1>
<script>
var link = document.createElement('a');
link.href = '/url.html';
link.textContent = 'link';
window.top.document.body.appendChild(link);
</script>
</body>
</html>
```
|
process
|
wrong processing link that was created in non top window and added to top window files to reproduce index js javascript var http require http var fs require fs http createserver function req res var content if req url content fs readfilesync index html else if req url iframe html content fs readfilesync iframe html res end content listen index html javascript title main content iframe html javascript title iframe content var link document createelement a link href url html link textcontent link window top document body appendchild link
| 1
|