Dataset columns:

| Column | Dtype | Values |
|---|---|---|
| Unnamed: 0 | int64 | 0–832k |
| id | float64 | 2.49B–32.1B |
| type | string | 1 class |
| created_at | string | 19 characters |
| repo | string | 7–112 characters |
| repo_url | string | 36–141 characters |
| action | string | 3 classes |
| title | string | 1–744 characters |
| labels | string | 4–574 characters |
| body | string | 9–211k characters |
| index | string | 10 classes |
| text_combine | string | 96–211k characters |
| label | string | 2 classes |
| text | string | 96–188k characters |
| binary_label | int64 | 0–1 |
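The derived columns can be reproduced with a short pandas sketch. The derivation rules are inferred from the sample rows, not documented in the dump: `text_combine` appears to be `title + " - " + body`, `text` a lower-cased copy with punctuation and digits stripped, and `binary_label` 1 when `label == "process"`. The sample row below is the first record in this dump.

```python
import pandas as pd

# Inferred reconstruction of the derived columns (an assumption based on
# the sample rows, not an official recipe for this dataset).
df = pd.DataFrame([{
    "title": "Document how to propose changes",
    "body": ("From @tcramer, it would be good to have documentation on how "
             "to easily create a branch, edit the markdown, and submit a PR "
             "to the site."),
    "label": "process",
}])

df["text_combine"] = df["title"] + " - " + df["body"]
df["text"] = (
    df["text_combine"]
    .str.lower()
    .str.replace(r"[^a-z\s]", " ", regex=True)  # drop punctuation and digits
    .str.replace(r"\s+", " ", regex=True)       # collapse runs of whitespace
    .str.strip()
)
df["binary_label"] = (df["label"] == "process").astype(int)
```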
---

**Unnamed: 0:** 3,629 · **id:** 6,665,332,497 · **type:** IssuesEvent · **created_at:** 2017-10-03 00:27:19
**repo:** IIIF/api · **repo_url:** https://api.github.com/repos/IIIF/api · **action:** closed
**title:** Document how to propose changes
**labels:** editorial process website
**body:**
From @tcramer, it would be good to have documentation on how to easily create a branch, edit the markdown, and submit a PR to the site.
This would also be easier if there were multiple repositories, with different committers.
**index:** 1.0
**text_combine:** title and body joined with " - " (duplicate of the two fields above)
**label:** process
**text:**
document how to propose changes from tcramer it would be good to have documentation on how to easily create a branch edit the markdown and submit a pr to the site this would also be easier if there were multiple repositories with different committers
**binary_label:** 1
---

**Unnamed: 0:** 507,681 · **id:** 14,680,157,028 · **type:** IssuesEvent · **created_at:** 2020-12-31 09:13:24
**repo:** k8smeetup/website-tasks · **repo_url:** https://api.github.com/repos/k8smeetup/website-tasks · **action:** opened
**title:** /docs/tasks/debug-application-cluster/resource-usage-monitoring.md
**labels:** lang/zh priority/P0 sync/update version/master welcome
**body:**
Source File: [/docs/tasks/debug-application-cluster/resource-usage-monitoring.md](https://github.com/kubernetes/website/blob/master/content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md)
Diff command reference:
```bash
# Show the update diff between the original document and the translated document
git diff --no-index -- content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md content/zh/docs/tasks/debug-application-cluster/resource-usage-monitoring.md
# Compare the original document's updates across branches
git diff release-1.19 master -- content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md
```
**index:** 1.0
**text_combine:** title and body joined with " - " (duplicate of the two fields above)
**label:** non_process
**text:** docs tasks debug application cluster resource usage monitoring md source file diff command reference bash show the update diff between the original document and the translated document git diff no index content en docs tasks debug application cluster resource usage monitoring md content zh docs tasks debug application cluster resource usage monitoring md compare the original document s updates across branches git diff release master content en docs tasks debug application cluster resource usage monitoring md
**binary_label:** 0
---

**Unnamed: 0:** 161,754 · **id:** 12,561,641,894 · **type:** IssuesEvent · **created_at:** 2020-06-08 02:01:41
**repo:** pandas-dev/pandas · **repo_url:** https://api.github.com/repos/pandas-dev/pandas · **action:** closed
**title:** TST: slow test_nearest_upsample_with_limit with tzlocal
**labels:** Testing
**body:**
Doing `pytest pandas/tests --skip-slow --durations=10` the top two entries are
```
71.76s call pandas/tests/resample/test_datetime_index.py::test_nearest_upsample_with_limit[tzlocal()-30S-Y]
59.86s call pandas/tests/resample/test_datetime_index.py::test_nearest_upsample_with_limit[tzlocal()-30S-10M]
```
Because this uses the tz_aware_fixture, there isn't a trivial way to add a pytest.mark.slow to this, but figuring something out may be worthwhile.
**index:** 1.0
**text_combine:** title and body joined with " - " (duplicate of the two fields above)
**label:** non_process
**text:**
tst slow test nearest upsample with limit with tzlocal doing pytest pandas tests skip slow durations the top two entries are call pandas tests resample test datetime index py test nearest upsample with limit call pandas tests resample test datetime index py test nearest upsample with limit because this uses the tz aware fixture there isn t a trivial way to add a pytest mark slow to this but figuring something out may be worthwhile
**binary_label:** 0
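One workaround worth noting for the marking problem described in the record above: pytest lets individual parameter sets carry marks via `pytest.param`. This tags parametrizations rather than fixture values, so it would not directly cover `tz_aware_fixture`, but it shows the mechanism. A minimal sketch with hypothetical test names and values, not pandas' actual tests:

```python
import pytest

# Sketch: mark only specific parameter combinations as slow.
# The test name and freq values here are hypothetical, not pandas' own.
@pytest.mark.parametrize(
    "freq",
    [
        "30S",
        pytest.param("10M", marks=pytest.mark.slow),  # only this case is tagged
    ],
)
def test_upsample_freqs(freq):
    assert freq in {"30S", "10M"}
```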
---

**Unnamed: 0:** 8,043 · **id:** 11,218,174,304 · **type:** IssuesEvent · **created_at:** 2020-01-07 10:51:24
**repo:** AmpersandTarski/Ampersand · **repo_url:** https://api.github.com/repos/AmpersandTarski/Ampersand · **action:** closed
**title:** Bug in Ampersand Dockerfile
**labels:** deployment priority:normal software process
**body:**
When I tried to test a new development version on branch `feature/fixAtlasComplete`, I was unable to produce an Ampersand image from the Dockerfile in the Ampersand repo.
#### Version of ampersand that was used
I encountered this issue while working in commit AmpersandTarski/Ampersand@1ed837d0895cb53be3e555f15687246f227f1df4.
My working directory contains a clone of the github repository AmpersandTarski/Ampersand.
```
% git status
On branch feature/fixAtlasComplete
Your branch is up to date with 'origin/feature/fixAtlasComplete'.
```
#### What I expected
I expected the command `docker build .` to execute without mistakes.
#### What happened instead
This is what I got (some intermediate lines are omitted in the following log)
```
sjo00577@BA92-C02T81JCGTDY Ampersand % git status
On branch feature/fixAtlasComplete
Your branch is up to date with 'origin/feature/fixAtlasComplete'.
nothing to commit, working tree clean
sjo00577@BA92-C02T81JCGTDY Ampersand % docker build -t docker.pkg.github.com/ampersandtarski/ampersand/ampersand:latest .
Sending build context to Docker daemon 3.406MB
Step 1/13 : FROM haskell:8.6.5 AS buildstage
---> bae585027ddb
Step 2/13 : RUN mkdir /opt/ampersand
---> Using cache
---> 14be82101078
Step 3/13 : WORKDIR /opt/ampersand
---> Using cache
---> 9fccfc7dcff7
Step 4/13 : COPY stack.yaml package.yaml /opt/ampersand/
---> Using cache
---> c88a75e48cc6
Step 5/13 : RUN stack build --dependencies-only
---> Running in 6483d2630284
Downloading lts-14.17 build plan ...
Downloaded lts-14.17 build plan.
Updating package index Hackage (mirrored at https://s3.amazonaws.com/hackage.fpcomplete.com/) ...
Selected mirror https://s3.amazonaws.com/hackage.fpcomplete.com/
Downloading root
Selected mirror https://s3.amazonaws.com/hackage.fpcomplete.com/
...
rio-0.1.12.0: copy/register
vector-algorithms-0.8.0.3: copy/register
graphviz-2999.20.0.3: copy/register
aeson-1.4.6.0: copy/register
JuicyPixels-3.3.4: copy/register
-- While building package Cabal-2.4.1.0 using:
/root/.stack/setup-exe-cache/x86_64-linux/Cabal-simple_mPHDZzAJ_2.4.0.1_ghc-8.6.5 --builddir=.stack-work/dist/x86_64-linux/Cabal-2.4.0.1 build --ghc-options " -ddump-hi -ddump-to-file"
Process exited with code: ExitFailure (-9) (THIS MAY INDICATE OUT OF MEMORY)
Logs have been written to: /opt/ampersand/.stack-work/logs/Cabal-2.4.1.0.log
Configuring Cabal-2.4.1.0...
Preprocessing library for Cabal-2.4.1.0..
Building library for Cabal-2.4.1.0..
[ 1 of 220] Compiling Distribution.Compat.Binary ( Distribution/Compat/Binary.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Binary.o )
[ 2 of 220] Compiling Distribution.Compat.Directory ( Distribution/Compat/Directory.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Directory.o )
[ 3 of 220] Compiling Distribution.Compat.Exception ( Distribution/Compat/Exception.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Exception.o )
[ 4 of 220] Compiling Distribution.Compat.Internal.TempFile ( Distribution/Compat/Internal/TempFile.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Internal/TempFile.o )
...
[167 of 220] Compiling Distribution.PackageDescription.FieldGrammar ( Distribution/PackageDescription/FieldGrammar.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/PackageDescription/FieldGrammar.o )
[168 of 220] Compiling Distribution.PackageDescription.PrettyPrint ( Distribution/PackageDescription/PrettyPrint.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/PackageDescription/PrettyPrint.o )
[169 of 220] Compiling Distribution.PackageDescription.Parsec ( Distribution/PackageDescription/Parsec.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/PackageDescription/Parsec.o )
[170 of 220] Compiling Distribution.FieldGrammar.FieldDescrs ( Distribution/FieldGrammar/FieldDescrs.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/FieldGrammar/FieldDescrs.o )
[171 of 220] Compiling Distribution.Types.InstalledPackageInfo.FieldGrammar ( Distribution/Types/InstalledPackageInfo/FieldGrammar.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Types/InstalledPackageInfo/FieldGrammar.o )
The command '/bin/sh -c stack build --dependencies-only' returned a non-zero code: 1
```
#### Steps to reproduce
1.
2.
3.
4.
#### Screenshot / Video
#### Context / Source of Dockerfile
```
# The purpose of this docker file is to produce a latest Ampersand-compiler in the form of a docker image.
FROM haskell:8.6.5 AS buildstage
RUN mkdir /opt/ampersand
WORKDIR /opt/ampersand
# Start with a docker-layer that contains build dependencies, to maximize the reuse of these dependencies by docker's cache mechanism.
# Only updates to the files stack.yaml package.yaml will rebuild this layer; all other changes use the cache.
# Expect stack to give warnings in this step, which you can ignore.
# Idea taken from https://medium.com/permutive/optimized-docker-builds-for-haskell-76a9808eb10b
COPY stack.yaml package.yaml /opt/ampersand/
RUN stack build --dependencies-only
# Copy the rest of the application
# See .dockerignore for files/folders that are not copied
COPY . /opt/ampersand
# These ARGs are available as ENVs in next RUN and are needed for compiling the Ampersand compiler to have the right versioning info
ARG GIT_SHA
ARG GIT_Branch
# Build Ampersand compiler and install in /root/.local/bin
RUN stack install
# Show the results of the build stage
RUN ls -al /root/.local/bin
# Create a light-weight image that has the Ampersand compiler available
FROM ubuntu
COPY --from=buildstage /root/.local/bin/ampersand /bin/
ENTRYPOINT ["/bin/ampersand"]
```
**index:** 1.0
**text_combine:** title and body joined with " - " (duplicate of the two fields above)
**label:** process
**text:**
bug in ampersand dockerfile when i tried to test a new development version on branch feature fixatlascomplete i was unable to produce an ampersand image from the dockerfile in the ampersand repo version of ampersand that was used i encountered this issue while working in commit ampersandtarski ampersand my working directory contains a clone of the github repository ampersandtarski ampersand git status on branch feature fixatlascomplete your branch is up to date with origin feature fixatlascomplete what i expected i expected the command docker build to execute without mistakes what happened instead this is what i got some intermediate lines are omitted in the following log ampersand git status on branch feature fixatlascomplete your branch is up to date with origin feature fixatlascomplete nothing to commit working tree clean ampersand docker build t docker pkg github com ampersandtarski ampersand ampersand latest sending build context to docker daemon step from haskell as buildstage step run mkdir opt ampersand using cache step workdir opt ampersand using cache step copy stack yaml package yaml opt ampersand using cache step run stack build dependencies only running in downloading lts build plan downloaded lts build plan updating package index hackage mirrored at selected mirror downloading root selected mirror rio copy register vector algorithms copy register graphviz copy register aeson copy register juicypixels copy register while building package cabal using root stack setup exe cache linux cabal simple mphdzzaj ghc builddir stack work dist linux cabal build ghc options ddump hi ddump to file process exited with code exitfailure this may indicate out of memory logs have been written to opt ampersand stack work logs cabal log configuring cabal preprocessing library for cabal building library for cabal compiling distribution compat binary distribution compat binary hs stack work dist linux cabal build distribution compat binary o compiling distribution compat 
directory distribution compat directory hs stack work dist linux cabal build distribution compat directory o compiling distribution compat exception distribution compat exception hs stack work dist linux cabal build distribution compat exception o compiling distribution compat internal tempfile distribution compat internal tempfile hs stack work dist linux cabal build distribution compat internal tempfile o compiling distribution packagedescription fieldgrammar distribution packagedescription fieldgrammar hs stack work dist linux cabal build distribution packagedescription fieldgrammar o compiling distribution packagedescription prettyprint distribution packagedescription prettyprint hs stack work dist linux cabal build distribution packagedescription prettyprint o compiling distribution packagedescription parsec distribution packagedescription parsec hs stack work dist linux cabal build distribution packagedescription parsec o compiling distribution fieldgrammar fielddescrs distribution fieldgrammar fielddescrs hs stack work dist linux cabal build distribution fieldgrammar fielddescrs o compiling distribution types installedpackageinfo fieldgrammar distribution types installedpackageinfo fieldgrammar hs stack work dist linux cabal build distribution types installedpackageinfo fieldgrammar o the command bin sh c stack build dependencies only returned a non zero code steps to reproduce screenshot video context source of dockerfile the purpose of this docker file is to produce a latest ampersand compiler in the form of a docker image from haskell as buildstage run mkdir opt ampersand workdir opt ampersand start with a docker layer that contains build dependencies to maximize the reuse of these dependencies by docker s cache mechanism only updates to the files stack yaml package yaml will rebuild this layer all other changes use the cache expect stack to give warnings in this step which you can ignore idea taken from copy stack yaml package yaml opt ampersand run 
stack build dependencies only copy the rest of the application see dockerignore for files folders that are not copied copy opt ampersand these args are available as envs in next run and are needed for compiling the ampersand compiler to have the right versioning info arg git sha arg git branch build ampersand compiler and install in root local bin run stack install show the results of the build stage run ls al root local bin create a light weight image that has the ampersand compiler available from ubuntu copy from buildstage root local bin ampersand bin entrypoint
**binary_label:** 1
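The `ExitFailure (-9)` in the log above is flagged as a possible out-of-memory kill. One common mitigation (an assumption on my part, not something proposed in the issue) is to cap GHC parallelism in the dependency-build layer so it fits in the Docker daemon's memory:

```dockerfile
# Hypothetical variant of the failing Dockerfile step: build dependencies
# with a single job to reduce peak memory use during `stack build`.
RUN stack build --dependencies-only -j1
```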
---

**Unnamed: 0:** 20,329 · **id:** 26,971,595,607 · **type:** IssuesEvent · **created_at:** 2023-02-09 05:38:02
**repo:** AvaloniaUI/Avalonia · **repo_url:** https://api.github.com/repos/AvaloniaUI/Avalonia · **action:** closed
**title:** TextBlock "Invalid size returned for measure" on headless platform
**labels:** bug area-textprocessing
**body:**
**Describe the bug**
Experimenting with moving my unit tests to the headless platform per the suggestion following the event-args public-constructor changes, I ran into an issue with TextBlock.
I've found [this](https://github.com/kekekeks/Avalonia-unit-testing-with-headless-platform), which helped with figuring out how to get the headless platform initialized (with a few tweaks for changes in 11.0). This simple test reproduces the problem:
```C#
[Fact]
public void TextBlockWorks()
{
    var wnd = new Window();
    wnd.Content = new TextBlock() { Text = "Text" };
    wnd.LayoutManager.ExecuteInitialLayoutPass();
}
```
The `TextLayout` within `TextBlock` is returning `NaN` for `Bounds.Height` in `MeasureOverride`, and it seems all of the metrics are set to `NaN` on `ShapedTextRun`. As far as I can tell, the font & glyph typeface initialize OK.

- OS: Windows
- Version 11-preview5
**index:** 1.0
**text_combine:** title and body joined with " - " (duplicate of the two fields above)
**label:** process
**text:**
textblock invalid size returned for measure on headless platform describe the bug experimenting moving my unit tests to the headless platform per the suggestion following the event args public constructor stuff and ran into an issue with textblock i ve found which helped with figuring out how to get the headless platform initialized with a few tweaks for changes in this simple test repros the problem c public void textblockworks var wnd new window wnd content new textblock text text wnd layoutmanager executeinitiallayoutpass the textlayout within textblock is returning nan for bounds height in measureoverride and it seems all of the metrics are set to nan on shapedtextrun as far as i can tell the font glyph typeface initialize ok os windows version
**binary_label:** 1
---

**Unnamed: 0:** 16,837 · **id:** 22,087,302,875 · **type:** IssuesEvent · **created_at:** 2022-06-01 01:01:45
**repo:** hashgraph/hedera-json-rpc-relay · **repo_url:** https://api.github.com/repos/hashgraph/hedera-json-rpc-relay · **action:** opened
**title:** Add acceptance test support for eth_getTransactionReceipt
**labels:** enhancement P2 process
**body:**
### Problem
The current acceptance tests implemented in #119 were not able to include `eth_getTransactionReceipt`.
### Solution
Add coverage for `eth_getTransactionReceipt`
Investigate why call produces
```
err: {
"type": "PrecheckStatusError",
"message": "transaction 0.0.2@1654029652.985219400 failed precheck with status INVALID_ACCOUNT_ID",
"stack":
StatusError: transaction 0.0.2@1654029652.985219400 failed precheck with status INVALID_ACCOUNT_ID
at new PrecheckStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/PrecheckStatusError.cjs:43:5)
at AccountInfoQuery._mapStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/Query.cjs:431:12)
at CostQuery._mapStatusError (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/CostQuery.cjs:155:24)
at CostQuery.execute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/Executable.cjs:519:22)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at AccountInfoQuery.getCost (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/account/AccountInfoQuery.cjs:144:16)
at AccountInfoQuery._beforeExecute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/query/Query.cjs:267:28)
at AccountInfoQuery.execute (.../hedera-json-rpc-relay/packages/relay/node_modules/@hashgraph/sdk/lib/Executable.cjs:411:5)
"name": "StatusError",
"status": {
"_code": 15
},
```
### Alternatives
_No response_
**index:** 1.0
**text_combine:** title and body joined with " - " (duplicate of the two fields above)
**label:** process
**text:**
add acceptance test support for eth gettransactionreceipt problem the current acceptance tests implemented in were not able to include eth gettransactionreceipt solution add coverage for eth gettransactionreceipt investigate why call produces err type precheckstatuserror message transaction failed precheck with status invalid account id stack statuserror transaction failed precheck with status invalid account id at new precheckstatuserror hedera json rpc relay packages relay node modules hashgraph sdk lib precheckstatuserror cjs at accountinfoquery mapstatuserror hedera json rpc relay packages relay node modules hashgraph sdk lib query query cjs at costquery mapstatuserror hedera json rpc relay packages relay node modules hashgraph sdk lib query costquery cjs at costquery execute hedera json rpc relay packages relay node modules hashgraph sdk lib executable cjs at processticksandrejections node internal process task queues at accountinfoquery getcost hedera json rpc relay packages relay node modules hashgraph sdk lib account accountinfoquery cjs at accountinfoquery beforeexecute hedera json rpc relay packages relay node modules hashgraph sdk lib query query cjs at accountinfoquery execute hedera json rpc relay packages relay node modules hashgraph sdk lib executable cjs name statuserror status code alternatives no response
**binary_label:** 1
---

**Unnamed: 0:** 532,632 · **id:** 15,560,498,807 · **type:** IssuesEvent · **created_at:** 2021-03-16 12:47:41
**repo:** unep-grid/map-x-mgl · **repo_url:** https://api.github.com/repos/unep-grid/map-x-mgl · **action:** closed
**title:** Publication of views using geoserver take too much time
**labels:** dependency related discussion priority 2
**body:**
After configuring the available OGC services of a source, updating a view that uses this source will trigger an update of the GeoServer database. This operation takes a very long time, which blocks the server-side app of MapX.
The more sources we have published, the more time this operation will take.
Possible solutions:
- Solve the issue upstream: avoid using GeoServer. Updating the layer list is too slow. No viable solution found. Use MapServer and a mapfile instead?
- Upgrade Shiny to enable the use of promises support.
If anyone has a better idea, please write a comment. :)
**index:** 1.0
**text_combine:** title and body joined with " - " (duplicate of the two fields above)
**label:** non_process
**text:**
publication of views using geoserver take too much time after configuring available ogc services of a source updating a view that use this source will trigger an update of geoserver database this operation will take a very long time which blocks the server side app of mapx the more source we have published the more time this operation will take possible solutions solve the issue upstream avoid using geoserver updating layer list is too slow no viable solution found using mapserver and mapfile instead upgrade shiny to enable to use promises support if anyone has a better idea please write a comment
**binary_label:** 0
---

**Unnamed: 0:** 12,694 · **id:** 15,075,332,130 · **type:** IssuesEvent · **created_at:** 2021-02-05 01:49:21
**repo:** LodestoneHQ/lodestone · **repo_url:** https://api.github.com/repos/LodestoneHQ/lodestone · **action:** closed
**title:** Rename lodestone-processor `master` branch to `main` branch
**labels:** area/processor type/maintenance
**body:**
New repositories are getting created with the default branch named `main`. For consistency's sake, we should rename the existing `master` branches to `main`.
We also need to update the DockerHub repo build config that builds the `latest` tag.
Simple instructions here: https://stevenmortimer.com/5-steps-to-change-github-default-branch-from-master-to-main/
**index:** 1.0
**text_combine:** title and body joined with " - " (duplicate of the two fields above)
**label:** process
**text:**
rename lodestone processor master branch to main branch new repositories are getting created with the default branch named main for consistency sake we should rename the existing master branches to main also need to update the dockerhub repo build config that builds the latest tag simple instructions here
| 1
|
6,017
| 8,822,539,231
|
IssuesEvent
|
2019-01-02 09:49:32
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
muliple select in search when selecting everything, it says that only one is chosen
|
2.0.6 Fixed Process bug
|
muliple select in search when selecting everything, it says that only one is chosen, and when you try to update something like a status it only updates it for the first entity that you uses to go into multiple selection with

|
1.0
|
muliple select in search when selecting everything, it says that only one is chosen - muliple select in search when selecting everything, it says that only one is chosen, and when you try to update something like a status it only updates it for the first entity that you uses to go into multiple selection with

|
process
|
muliple select in search when selecting everything it says that only one is chosen muliple select in search when selecting everything it says that only one is chosen and when you try to update something like a status it only updates it for the first entity that you uses to go into multiple selection with
| 1
|
193,161
| 15,369,266,565
|
IssuesEvent
|
2021-03-02 07:05:36
|
kaustubhgupta/PortfolioFy
|
https://api.github.com/repos/kaustubhgupta/PortfolioFy
|
opened
|
[DOCS] Missing information about some parameters
|
documentation
|
Some of the parameters were added to the action but their usage is missing. Needed to add them in README and Docs too
|
1.0
|
[DOCS] Missing information about some parameters - Some of the parameters were added to the action but their usage is missing. Needed to add them in README and Docs too
|
non_process
|
missing information about some parameters some of the parameters were added to the action but their usage is missing needed to add them in readme and docs too
| 0
|
495,913
| 14,289,979,659
|
IssuesEvent
|
2020-11-23 20:08:09
|
ac2cz/FoxTelem
|
https://api.github.com/repos/ac2cz/FoxTelem
|
closed
|
The values in the RT/MAX/MIN table should be converted
|
enhancement low priority wontfix
|
This was not in 1.08 because it is tricky. Need to work out how to flow the conversion through.
|
1.0
|
The values in the RT/MAX/MIN table should be converted - This was not in 1.08 because it is tricky. Need to work out how to flow the conversion through.
|
non_process
|
the values in the rt max min table should be converted this was not in because it is tricky need to work out how to flow the conversion through
| 0
|
263,154
| 28,022,894,305
|
IssuesEvent
|
2023-03-28 07:09:08
|
Sultan-QA/vsix-nuget-simple-project
|
https://api.github.com/repos/Sultan-QA/vsix-nuget-simple-project
|
opened
|
microsoft.visualstudio.sdk.16.0.206.nupkg: 1 vulnerabilities (highest severity is: 7.5)
|
Mend: dependency security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>microsoft.visualstudio.sdk.16.0.206.nupkg</b></p></summary>
<p></p>
<p>Path to dependency file: /Nuget_VSIX_Project/Nuget_VSIX_Project.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.net.http/4.3.3/system.net.http.4.3.3.nupkg</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Sultan-QA/vsix-nuget-simple-project/commit/56f8f636f0dc1b4c31231d35ac1ac2755b58ca6a">56f8f636f0dc1b4c31231d35ac1ac2755b58ca6a</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (microsoft.visualstudio.sdk.16.0.206.nupkg version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2018-8292](https://www.mend.io/vulnerability-database/CVE-2018-8292) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | system.net.http.4.3.3.nupkg | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2018-8292</summary>
### Vulnerable Library - <b>system.net.http.4.3.3.nupkg</b></p>
<p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.3.nupkg">https://api.nuget.org/packages/system.net.http.4.3.3.nupkg</a></p>
<p>Path to dependency file: /Nuget_VSIX_Project/Nuget_VSIX_Project.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.net.http/4.3.3/system.net.http.4.3.3.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.visualstudio.sdk.16.0.206.nupkg (Root Library)
- microsoft.visualstudio.package.languageservice.15.0.16.0.28729.nupkg
- microsoft.visualstudio.shell.15.0.16.0.28729.nupkg
- microsoft.visualstudio.shell.framework.16.0.28729.nupkg
- microsoft.visualstudio.utilities.16.0.28729.nupkg
- streamjsonrpc.1.5.43.nupkg
- :x: **system.net.http.4.3.3.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Sultan-QA/vsix-nuget-simple-project/commit/56f8f636f0dc1b4c31231d35ac1ac2755b58ca6a">56f8f636f0dc1b4c31231d35ac1ac2755b58ca6a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An information disclosure vulnerability exists in .NET Core when authentication information is inadvertently exposed in a redirect, aka ".NET Core Information Disclosure Vulnerability." This affects .NET Core 2.1, .NET Core 1.0, .NET Core 1.1, PowerShell Core 6.0.
<p>Publish Date: 2018-10-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-8292>CVE-2018-8292</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-10-10</p>
<p>Fix Resolution: System.Net.Http - 4.3.4;Microsoft.PowerShell.Commands.Utility - 6.1.0-rc.1</p>
</p>
<p></p>
</details>
|
True
|
microsoft.visualstudio.sdk.16.0.206.nupkg: 1 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>microsoft.visualstudio.sdk.16.0.206.nupkg</b></p></summary>
<p></p>
<p>Path to dependency file: /Nuget_VSIX_Project/Nuget_VSIX_Project.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.net.http/4.3.3/system.net.http.4.3.3.nupkg</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Sultan-QA/vsix-nuget-simple-project/commit/56f8f636f0dc1b4c31231d35ac1ac2755b58ca6a">56f8f636f0dc1b4c31231d35ac1ac2755b58ca6a</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (microsoft.visualstudio.sdk.16.0.206.nupkg version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2018-8292](https://www.mend.io/vulnerability-database/CVE-2018-8292) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | system.net.http.4.3.3.nupkg | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2018-8292</summary>
### Vulnerable Library - <b>system.net.http.4.3.3.nupkg</b></p>
<p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.3.nupkg">https://api.nuget.org/packages/system.net.http.4.3.3.nupkg</a></p>
<p>Path to dependency file: /Nuget_VSIX_Project/Nuget_VSIX_Project.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.net.http/4.3.3/system.net.http.4.3.3.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.visualstudio.sdk.16.0.206.nupkg (Root Library)
- microsoft.visualstudio.package.languageservice.15.0.16.0.28729.nupkg
- microsoft.visualstudio.shell.15.0.16.0.28729.nupkg
- microsoft.visualstudio.shell.framework.16.0.28729.nupkg
- microsoft.visualstudio.utilities.16.0.28729.nupkg
- streamjsonrpc.1.5.43.nupkg
- :x: **system.net.http.4.3.3.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Sultan-QA/vsix-nuget-simple-project/commit/56f8f636f0dc1b4c31231d35ac1ac2755b58ca6a">56f8f636f0dc1b4c31231d35ac1ac2755b58ca6a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An information disclosure vulnerability exists in .NET Core when authentication information is inadvertently exposed in a redirect, aka ".NET Core Information Disclosure Vulnerability." This affects .NET Core 2.1, .NET Core 1.0, .NET Core 1.1, PowerShell Core 6.0.
<p>Publish Date: 2018-10-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-8292>CVE-2018-8292</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-10-10</p>
<p>Fix Resolution: System.Net.Http - 4.3.4;Microsoft.PowerShell.Commands.Utility - 6.1.0-rc.1</p>
</p>
<p></p>
</details>
|
non_process
|
microsoft visualstudio sdk nupkg vulnerabilities highest severity is vulnerable library microsoft visualstudio sdk nupkg path to dependency file nuget vsix project nuget vsix project csproj path to vulnerable library home wss scanner nuget packages system net http system net http nupkg found in head commit a href vulnerabilities cve severity cvss dependency type fixed in microsoft visualstudio sdk nupkg version remediation available high system net http nupkg transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the details section below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library system net http nupkg provides a programming interface for modern http applications including http client components that library home page a href path to dependency file nuget vsix project nuget vsix project csproj path to vulnerable library home wss scanner nuget packages system net http system net http nupkg dependency hierarchy microsoft visualstudio sdk nupkg root library microsoft visualstudio package languageservice nupkg microsoft visualstudio shell nupkg microsoft visualstudio shell framework nupkg microsoft visualstudio utilities nupkg streamjsonrpc nupkg x system net http nupkg vulnerable library found in head commit a href found in base branch main vulnerability details an information disclosure vulnerability exists in net core when authentication information is inadvertently exposed in a redirect aka net core information disclosure vulnerability this affects net core net core net core powershell core publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution system net http microsoft powershell commands utility rc
| 0
|
5,539
| 8,392,423,521
|
IssuesEvent
|
2018-10-09 17:35:19
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
'BackoffFailed' in logging system tests.
|
api: logging flaky testing type: process
|
System tests for logging are failing like this currently:
```
_____________________________________________________________________________________________________ TestLogging.test_log_handler_sync ______________________________________________________________________________________________________
Traceback (most recent call last):
File "/Users/crwilcox/workspace/google-cloud-python/logging/tests/system/test_system.py", line 264, in test_log_handler_sync
entries = _list_entries(logger)
File "/Users/crwilcox/workspace/google-cloud-python/logging/tests/system/test_system.py", line 67, in _list_entries
return outer(logger)
File "/Users/crwilcox/workspace/google-cloud-python/logging/.nox/sys-3-6/lib/python3.6/site-packages/test_utils/retry.py", line 95, in wrapped_function
return to_wrap(*args, **kwargs)
File "/Users/crwilcox/workspace/google-cloud-python/logging/.nox/sys-3-6/lib/python3.6/site-packages/test_utils/retry.py", line 155, in wrapped_function
raise BackoffFailed()
test_utils.retry.BackoffFailed
```
|
1.0
|
'BackoffFailed' in logging system tests. - System tests for logging are failing like this currently:
```
_____________________________________________________________________________________________________ TestLogging.test_log_handler_sync ______________________________________________________________________________________________________
Traceback (most recent call last):
File "/Users/crwilcox/workspace/google-cloud-python/logging/tests/system/test_system.py", line 264, in test_log_handler_sync
entries = _list_entries(logger)
File "/Users/crwilcox/workspace/google-cloud-python/logging/tests/system/test_system.py", line 67, in _list_entries
return outer(logger)
File "/Users/crwilcox/workspace/google-cloud-python/logging/.nox/sys-3-6/lib/python3.6/site-packages/test_utils/retry.py", line 95, in wrapped_function
return to_wrap(*args, **kwargs)
File "/Users/crwilcox/workspace/google-cloud-python/logging/.nox/sys-3-6/lib/python3.6/site-packages/test_utils/retry.py", line 155, in wrapped_function
raise BackoffFailed()
test_utils.retry.BackoffFailed
```
|
process
|
backofffailed in logging system tests system tests for logging are failing like this currently testlogging test log handler sync traceback most recent call last file users crwilcox workspace google cloud python logging tests system test system py line in test log handler sync entries list entries logger file users crwilcox workspace google cloud python logging tests system test system py line in list entries return outer logger file users crwilcox workspace google cloud python logging nox sys lib site packages test utils retry py line in wrapped function return to wrap args kwargs file users crwilcox workspace google cloud python logging nox sys lib site packages test utils retry py line in wrapped function raise backofffailed test utils retry backofffailed
| 1
|
212,644
| 16,471,623,869
|
IssuesEvent
|
2021-05-23 14:30:55
|
IntellectualSites/PlotSquared
|
https://api.github.com/repos/IntellectualSites/PlotSquared
|
opened
|
Players that are denied from a plot when they are offline can rejoin to bypass
|
Requires Testing
|
### Server Implementation
Tuinity
### Server Version
1.16.5
### Describe the bug
This is a bug discovered by some players that I was able to reproduce, if you have two accounts online on the same plot (One account being the plot owner, the second account being a normal user) the normal user can leave the server and then the plot owner can use /plot deny on that user. When the normal user joins the server again they will be on the plot even though they got denied from the plot while they were offline:

I'm not sure about how this could be fixed but I'm hoping someone has an idea
### To Reproduce
1. Have two accounts online on the same plot, a plot owner and a normal user
2. Log out on the normal user and have the plot owner run /plot deny on the normal user
3. Log on to the server with the normal user and you will be on the plot even though you got denied
### Expected behaviour
The player should be sent to spawn
### Screenshots / Videos
_No response_
### Error log (if applicable)
_No response_
### Plot Debugpaste
https://athion.net/ISPaster/paste/view/3883ba7cd0764cf7bbb6202321f936ca
### PlotSquared Version
PlotSquared-Bukkit-5.13.11-Premium
### Checklist
- [X] I have included a Plot debugpaste.
- [X] I am using the newest build from https://www.spigotmc.org/resources/77506/ and the issue still persists.
### Anything else?
_No response_
|
1.0
|
Players that are denied from a plot when they are offline can rejoin to bypass - ### Server Implementation
Tuinity
### Server Version
1.16.5
### Describe the bug
This is a bug discovered by some players that I was able to reproduce, if you have two accounts online on the same plot (One account being the plot owner, the second account being a normal user) the normal user can leave the server and then the plot owner can use /plot deny on that user. When the normal user joins the server again they will be on the plot even though they got denied from the plot while they were offline:

I'm not sure about how this could be fixed but I'm hoping someone has an idea
### To Reproduce
1. Have two accounts online on the same plot, a plot owner and a normal user
2. Log out on the normal user and have the plot owner run /plot deny on the normal user
3. Log on to the server with the normal user and you will be on the plot even though you got denied
### Expected behaviour
The player should be sent to spawn
### Screenshots / Videos
_No response_
### Error log (if applicable)
_No response_
### Plot Debugpaste
https://athion.net/ISPaster/paste/view/3883ba7cd0764cf7bbb6202321f936ca
### PlotSquared Version
PlotSquared-Bukkit-5.13.11-Premium
### Checklist
- [X] I have included a Plot debugpaste.
- [X] I am using the newest build from https://www.spigotmc.org/resources/77506/ and the issue still persists.
### Anything else?
_No response_
|
non_process
|
players that are denied from a plot when they are offline can rejoin to bypass server implementation tuinity server version describe the bug this is a bug discovered by some players that i was able to reproduce if you have two accounts online on the same plot one account being the plot owner the second account being a normal user the normal user can leave the server and then the plot owner can use plot deny on that user when the normal user joins the server again they will be on the plot even though they got denied from the plot while they were offline i m not sure about how this could be fixed but i m hoping someone has an idea to reproduce have two accounts online on the same plot a plot owner and a normal user log out on the normal user and have the plot owner run plot deny on the normal user log on to the server with the normal user and you will be on the plot even though you got denied expected behaviour the player should be sent to spawn screenshots videos no response error log if applicable no response plot debugpaste plotsquared version plotsquared bukkit premium checklist i have included a plot debugpaste i am using the newest build from and the issue still persists anything else no response
| 0
|
153,886
| 13,529,782,676
|
IssuesEvent
|
2020-09-15 18:51:09
|
airctic/icedata
|
https://api.github.com/repos/airctic/icedata
|
closed
|
Add COCO, VOC, and Birds README
|
documentation enhancement good first issue
|
## 📓 Documentation Update
**What part of documentation was unclear or wrong?**
Add the COCO, VOC, and Birds README, and create their corresponding documentation
|
1.0
|
Add COCO, VOC, and Birds README - ## 📓 Documentation Update
**What part of documentation was unclear or wrong?**
Add the COCO, VOC, and Birds README, and create their corresponding documentation
|
non_process
|
add coco voc and birds readme 📓 documentation update what part of documentation was unclear or wrong add the coco voc and birds readme and create their corresponding documentation
| 0
|
3,879
| 6,817,714,247
|
IssuesEvent
|
2017-11-07 00:53:01
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Generated switch statements from makeClass
|
apps-makeClass status-inprocess type-enhancement
|
If a smart contract uses leading underbars (_), then all items in the switch statement check that first character. This is slower than it needs to be.
|
1.0
|
Generated switch statements from makeClass - If a smart contract uses leading underbars (_), then all items in the switch statement check that first character. This is slower than it needs to be.
|
process
|
generated switch statements from makeclass if a smart contract uses leading underbars then all items in the switch statement check that first character this is slower than it needs to be
| 1
|
78,573
| 22,308,394,872
|
IssuesEvent
|
2022-06-13 14:51:49
|
ramp4-pcar4/ramp4-pcar4
|
https://api.github.com/repos/ramp4-pcar4/ramp4-pcar4
|
closed
|
Critical Security Patch - Eventsource
|
effort: small flavour: build priority: must type: preventative needs: estimate
|
Looking like there is a 🚨 _critical_ 🚨 security thing going on with one of our dependencies
> When fetching an url with a link to an external site (Redirect), the users Cookies & Autorisation headers are leaked to the third party application. According to the same-origin-policy, the header should be "sanitized."
Library `eventsource` needs to be upgraded to `v1.1.1`.
Looking at our lock file, it appears this is imported via `webpack-dev-server`, (imports `sockjs-client` which imports `eventsource`).
So would attempt to update `webpack-dev-server` version. In fact do we even use this now that Vite is doing dev serves?
Alternately this [failing PR](https://github.com/ramp4-pcar4/ramp4-pcar4/pull/1103/files) just modifies the lock file, not sure if that is valid and if it would get undone the next `npm install`
|
1.0
|
Critical Security Patch - Eventsource - Looking like there is a 🚨 _critical_ 🚨 security thing going on with one of our dependencies
> When fetching an url with a link to an external site (Redirect), the users Cookies & Autorisation headers are leaked to the third party application. According to the same-origin-policy, the header should be "sanitized."
Library `eventsource` needs to be upgraded to `v1.1.1`.
Looking at our lock file, it appears this is imported via `webpack-dev-server`, (imports `sockjs-client` which imports `eventsource`).
So would attempt to update `webpack-dev-server` version. In fact do we even use this now that Vite is doing dev serves?
Alternately this [failing PR](https://github.com/ramp4-pcar4/ramp4-pcar4/pull/1103/files) just modifies the lock file, not sure if that is valid and if it would get undone the next `npm install`
|
non_process
|
critical security patch eventsource looking like there is a 🚨 critical 🚨 security thing going on with one of our dependencies when fetching an url with a link to an external site redirect the users cookies autorisation headers are leaked to the third party application according to the same origin policy the header should be sanitized library eventsource needs to be upgraded to looking at our lock file it appears this is imported via webpack dev server imports sockjs client which imports eventsource so would attempt to update webpack dev server version in fact do we even use this now that vite is doing dev serves alternately this just modifies the lock file not sure if that is valid and if it would get undone the next npm install
| 0
|
50,517
| 13,187,552,251
|
IssuesEvent
|
2020-08-13 03:47:11
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
Problem with fillratio and boost (Trac #865)
|
Migrated from Trac combo reconstruction defect
|
I checked out V04-10-00 of icerec. CMake ran successfully. Running make gave this error message:
```text
[ 22%] Building CXX object fill-ratio/CMakeFiles/fill-ratio.dir/private/fill-ratio/FastMinBall.cxx.o
/Users/steven/IceCube/icerec/src/fill-ratio/private/fill-ratio/FastMinBall.cxx:54:9:
error: use of undeclared identifier 'make_iterator_range'; did you mean
'boost::make_iterator_range'?
return make_iterator_range(dom_begin, dom_end);
^~~~~~~~~~~~~~~~~~~
boost::make_iterator_range
/usr/local/include/boost/range/iterator_range_core.hpp:745:9: note: 'boost
::make_iterator_range' declared here
make_iterator_range( IteratorT Begin, IteratorT End )
^
1 error generated.
make[2]: *** [fill-ratio/CMakeFiles/fill-ratio.dir/private/fill-ratio/FastMinBall.cxx.o] Error 1
make[1]: *** [fill-ratio/CMakeFiles/fill-ratio.dir/all] Error 2
make: *** [all] Error 2
```
Following the suggested change in the message fixed the problem
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/865
, reported by steven.wren and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-07-07T22:32:33",
"description": "I checked out V04-10-00 of icerec. CMake ran successfully. Running make gave this error message:\n{{{\n[ 22%] Building CXX object fill-ratio/CMakeFiles/fill-ratio.dir/private/fill-ratio/FastMinBall.cxx.o\n/Users/steven/IceCube/icerec/src/fill-ratio/private/fill-ratio/FastMinBall.cxx:54:9:\n error: use of undeclared identifier 'make_iterator_range'; did you mean\n 'boost::make_iterator_range'?\nreturn make_iterator_range(dom_begin, dom_end);\n^~~~~~~~~~~~~~~~~~~\nboost::make_iterator_range\n/usr/local/include/boost/range/iterator_range_core.hpp:745:9: note: 'boost\n ::make_iterator_range' declared here\nmake_iterator_range( IteratorT Begin, IteratorT End )\n^\n1 error generated.\nmake[2]: *** [fill-ratio/CMakeFiles/fill-ratio.dir/private/fill-ratio/FastMinBall.cxx.o] Error 1\nmake[1]: *** [fill-ratio/CMakeFiles/fill-ratio.dir/all] Error 2\nmake: *** [all] Error 2\n}}}\nFollowing the suggested change in the message fixed the problem",
"reporter": "steven.wren",
"cc": "",
"resolution": "fixed",
"_ts": "1436308353324715",
"component": "combo reconstruction",
"summary": "Problem with fillratio and boost",
"priority": "normal",
"keywords": "",
"time": "2015-02-03T14:41:04",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Problem with fillratio and boost (Trac #865) - I checked out V04-10-00 of icerec. CMake ran successfully. Running make gave this error message:
```text
[ 22%] Building CXX object fill-ratio/CMakeFiles/fill-ratio.dir/private/fill-ratio/FastMinBall.cxx.o
/Users/steven/IceCube/icerec/src/fill-ratio/private/fill-ratio/FastMinBall.cxx:54:9:
error: use of undeclared identifier 'make_iterator_range'; did you mean
'boost::make_iterator_range'?
return make_iterator_range(dom_begin, dom_end);
^~~~~~~~~~~~~~~~~~~
boost::make_iterator_range
/usr/local/include/boost/range/iterator_range_core.hpp:745:9: note: 'boost
::make_iterator_range' declared here
make_iterator_range( IteratorT Begin, IteratorT End )
^
1 error generated.
make[2]: *** [fill-ratio/CMakeFiles/fill-ratio.dir/private/fill-ratio/FastMinBall.cxx.o] Error 1
make[1]: *** [fill-ratio/CMakeFiles/fill-ratio.dir/all] Error 2
make: *** [all] Error 2
```
Following the suggested change in the message fixed the problem
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/865
, reported by steven.wren and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-07-07T22:32:33",
"description": "I checked out V04-10-00 of icerec. CMake ran successfully. Running make gave this error message:\n{{{\n[ 22%] Building CXX object fill-ratio/CMakeFiles/fill-ratio.dir/private/fill-ratio/FastMinBall.cxx.o\n/Users/steven/IceCube/icerec/src/fill-ratio/private/fill-ratio/FastMinBall.cxx:54:9:\n error: use of undeclared identifier 'make_iterator_range'; did you mean\n 'boost::make_iterator_range'?\nreturn make_iterator_range(dom_begin, dom_end);\n^~~~~~~~~~~~~~~~~~~\nboost::make_iterator_range\n/usr/local/include/boost/range/iterator_range_core.hpp:745:9: note: 'boost\n ::make_iterator_range' declared here\nmake_iterator_range( IteratorT Begin, IteratorT End )\n^\n1 error generated.\nmake[2]: *** [fill-ratio/CMakeFiles/fill-ratio.dir/private/fill-ratio/FastMinBall.cxx.o] Error 1\nmake[1]: *** [fill-ratio/CMakeFiles/fill-ratio.dir/all] Error 2\nmake: *** [all] Error 2\n}}}\nFollowing the suggested change in the message fixed the problem",
"reporter": "steven.wren",
"cc": "",
"resolution": "fixed",
"_ts": "1436308353324715",
"component": "combo reconstruction",
"summary": "Problem with fillratio and boost",
"priority": "normal",
"keywords": "",
"time": "2015-02-03T14:41:04",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
problem with fillratio and boost trac i checked out of icerec cmake ran successfully running make gave this error message text building cxx object fill ratio cmakefiles fill ratio dir private fill ratio fastminball cxx o users steven icecube icerec src fill ratio private fill ratio fastminball cxx error use of undeclared identifier make iterator range did you mean boost make iterator range return make iterator range dom begin dom end boost make iterator range usr local include boost range iterator range core hpp note boost make iterator range declared here make iterator range iteratort begin iteratort end error generated make error make error make error following the suggested change in the message fixed the problem migrated from reported by steven wren and owned by json status closed changetime description i checked out of icerec cmake ran successfully running make gave this error message n n building cxx object fill ratio cmakefiles fill ratio dir private fill ratio fastminball cxx o n users steven icecube icerec src fill ratio private fill ratio fastminball cxx n error use of undeclared identifier make iterator range did you mean n boost make iterator range nreturn make iterator range dom begin dom end n nboost make iterator range n usr local include boost range iterator range core hpp note boost n make iterator range declared here nmake iterator range iteratort begin iteratort end n error generated nmake error nmake error nmake error n nfollowing the suggested change in the message fixed the problem reporter steven wren cc resolution fixed ts component combo reconstruction summary problem with fillratio and boost priority normal keywords time milestone owner type defect
| 0
|
136,518
| 5,284,176,015
|
IssuesEvent
|
2017-02-07 23:25:47
|
imrogues/abstract
|
https://api.github.com/repos/imrogues/abstract
|
opened
|
Extending the Parser
|
[priority] high [status] accepted [type] feature
|
### Description
Refactor `parser tests` and implement more properties from the AST produced by parser.
---
### Issue Checklist
- [ ] Boolean Literals
- [ ] Grouped Expressions
- [ ] If Expressions
- [ ] Function Literals
- [ ] Call Expressions
All issues in milestone: [3 Parsing](https://github.com/imrogues/abstract/milestone/3)
---
### Assignees
- [ ] Final assign @imrogues
|
1.0
|
Extending the Parser - ### Description
Refactor `parser tests` and implement more properties from the AST produced by parser.
---
### Issue Checklist
- [ ] Boolean Literals
- [ ] Grouped Expressions
- [ ] If Expressions
- [ ] Function Literals
- [ ] Call Expressions
All issues in milestone: [3 Parsing](https://github.com/imrogues/abstract/milestone/3)
---
### Assignees
- [ ] Final assign @imrogues
|
non_process
|
extending the parser description refactor parser tests and implement more properties from the ast produced by parser issue checklist boolean literals grouped expressions if expressions function literals call expressions all issues in milestone assignees final assign imrogues
| 0
|
15,275
| 19,257,162,952
|
IssuesEvent
|
2021-12-09 12:35:38
|
km4ack/pi-build
|
https://api.github.com/repos/km4ack/pi-build
|
closed
|
JS8Call won't install on Bullseye
|
bug in process
|
As a quick workaround, we could install with apt for now as the latest version is in the repository. It would be better to build from source.
|
1.0
|
JS8Call won't install on Bullseye - As a quick workaround, we could install with apt for now as the latest version is in the repository. It would be better to build from source.
|
process
|
won t install on bullseye as a quick workaround we could install with apt for now as the latest version is in the repository it would be better to build from source
| 1
|
184,347
| 21,784,873,068
|
IssuesEvent
|
2022-05-14 01:38:02
|
onokatio/blog.katio.net
|
https://api.github.com/repos/onokatio/blog.katio.net
|
closed
|
WS-2021-0154 (Medium) detected in glob-parent-3.1.0.tgz - autoclosed
|
security vulnerability
|
## WS-2021-0154 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: blog.katio.net/package.json</p>
<p>Path to vulnerable library: blog.katio.net/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- cli-7.10.5.tgz (Root Library)
- chokidar-2.1.8.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/onokatio/blog.katio.net/commit/936580315e62ac99730c0ed7a501c46359f1c0ed">936580315e62ac99730c0ed7a501c46359f1c0ed</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) vulnerability was found in glob-parent before 5.1.2.
<p>Publish Date: 2021-01-27
<p>URL: <a href=https://github.com/gulpjs/glob-parent/commit/f9231168b0041fea3f8f954b3cceb56269fc6366>WS-2021-0154</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/gulpjs/glob-parent/releases/tag/v5.1.2">https://github.com/gulpjs/glob-parent/releases/tag/v5.1.2</a></p>
<p>Release Date: 2021-01-27</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2021-0154 (Medium) detected in glob-parent-3.1.0.tgz - autoclosed - ## WS-2021-0154 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: blog.katio.net/package.json</p>
<p>Path to vulnerable library: blog.katio.net/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- cli-7.10.5.tgz (Root Library)
- chokidar-2.1.8.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/onokatio/blog.katio.net/commit/936580315e62ac99730c0ed7a501c46359f1c0ed">936580315e62ac99730c0ed7a501c46359f1c0ed</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) vulnerability was found in glob-parent before 5.1.2.
<p>Publish Date: 2021-01-27
<p>URL: <a href=https://github.com/gulpjs/glob-parent/commit/f9231168b0041fea3f8f954b3cceb56269fc6366>WS-2021-0154</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/gulpjs/glob-parent/releases/tag/v5.1.2">https://github.com/gulpjs/glob-parent/releases/tag/v5.1.2</a></p>
<p>Release Date: 2021-01-27</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in glob parent tgz autoclosed ws medium severity vulnerability vulnerable library glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href path to dependency file blog katio net package json path to vulnerable library blog katio net node modules glob parent package json dependency hierarchy cli tgz root library chokidar tgz x glob parent tgz vulnerable library found in head commit a href found in base branch master vulnerability details regular expression denial of service redos vulnerability was found in glob parent before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent step up your open source security game with whitesource
| 0
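As an aside to the glob-parent record above: the library's job is to strip glob magic from a pattern and return the parent directory path. Purely as an illustration (this sketch and the `glob_parent` helper name are not taken from the package itself), a minimal linear-time version of that idea in Python:

```python
import re

# Characters that mark a path segment as "glob magic" (*, ?, [...], {...}).
_MAGIC = re.compile(r"[*?\[\]{}]")

def glob_parent(pattern: str) -> str:
    """Return the deepest directory prefix of `pattern` with no glob magic."""
    parts = pattern.split("/")
    plain = []
    for part in parts[:-1]:  # the basename itself is never part of the parent
        if _MAGIC.search(part):
            break  # stop at the first segment containing glob magic
        plain.append(part)
    return "/".join(plain) or "."

# glob_parent("src/**/*.js") -> "src"
# glob_parent("*.txt")       -> "."
```

Because it scans each path segment once with a fixed character class, this approach avoids the backtracking-prone regex that made the real package vulnerable to ReDoS.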
|
161,581
| 20,154,151,862
|
IssuesEvent
|
2022-02-09 15:03:30
|
kapseliboi/mimic
|
https://api.github.com/repos/kapseliboi/mimic
|
opened
|
CVE-2021-33623 (High) detected in trim-newlines-1.0.0.tgz
|
security vulnerability
|
## CVE-2021-33623 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>trim-newlines-1.0.0.tgz</b></p></summary>
<p>Trim newlines from the start and/or end of a string</p>
<p>Library home page: <a href="https://registry.npmjs.org/trim-newlines/-/trim-newlines-1.0.0.tgz">https://registry.npmjs.org/trim-newlines/-/trim-newlines-1.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/trim-newlines/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-2.5.0.tgz (Root Library)
- internal-ip-1.2.0.tgz
- meow-3.7.0.tgz
- :x: **trim-newlines-1.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/mimic/commit/6d4fe404335bf56c57080e4ab1425b65bbe3ac2f">6d4fe404335bf56c57080e4ab1425b65bbe3ac2f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33623>CVE-2021-33623</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution (trim-newlines): 3.0.1</p>
<p>Direct dependency fix Resolution (webpack-dev-server): 2.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-33623 (High) detected in trim-newlines-1.0.0.tgz - ## CVE-2021-33623 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>trim-newlines-1.0.0.tgz</b></p></summary>
<p>Trim newlines from the start and/or end of a string</p>
<p>Library home page: <a href="https://registry.npmjs.org/trim-newlines/-/trim-newlines-1.0.0.tgz">https://registry.npmjs.org/trim-newlines/-/trim-newlines-1.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/trim-newlines/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-2.5.0.tgz (Root Library)
- internal-ip-1.2.0.tgz
- meow-3.7.0.tgz
- :x: **trim-newlines-1.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/mimic/commit/6d4fe404335bf56c57080e4ab1425b65bbe3ac2f">6d4fe404335bf56c57080e4ab1425b65bbe3ac2f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33623>CVE-2021-33623</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution (trim-newlines): 3.0.1</p>
<p>Direct dependency fix Resolution (webpack-dev-server): 2.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in trim newlines tgz cve high severity vulnerability vulnerable library trim newlines tgz trim newlines from the start and or end of a string library home page a href path to dependency file package json path to vulnerable library node modules trim newlines package json dependency hierarchy webpack dev server tgz root library internal ip tgz meow tgz x trim newlines tgz vulnerable library found in head commit a href found in base branch master vulnerability details the trim newlines package before and x before for node js has an issue related to regular expression denial of service redos for the end method publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution trim newlines direct dependency fix resolution webpack dev server step up your open source security game with whitesource
| 0
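The CVE record above describes a regular-expression denial of service in trim-newlines' `.end()` method. As an illustrative sketch only (not code from the package, and the function name here is hypothetical), fixes for this class of bug typically replace the regex with a linear-time scan:

```python
def trim_newlines_end(s: str) -> str:
    """Strip trailing '\r' and '\n' characters in O(n), with no regex backtracking."""
    end = len(s)
    while end > 0 and s[end - 1] in "\r\n":
        end -= 1
    return s[:end]

# trim_newlines_end("abc\r\n\n") -> "abc"
```

A crafted input of many newlines followed by a non-newline character can make a naive `(\r|\n)+$`-style regex backtrack excessively; the slice-based loop above touches each character at most once regardless of input.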
|
9,704
| 12,703,131,020
|
IssuesEvent
|
2020-06-22 21:35:28
|
knative/serving
|
https://api.github.com/repos/knative/serving
|
closed
|
Switch to CRDs v1
|
area/API kind/feature kind/process
|
/area API
/kind process
## Describe the feature
Once https://github.com/knative/serving/issues/6582 happens, we should be able to adopt the `apiextensions.k8s.io/v1` `apiGroup`.
|
1.0
|
Switch to CRDs v1 - /area API
/kind process
## Describe the feature
Once https://github.com/knative/serving/issues/6582 happens, we should be able to adopt the `apiextensions.k8s.io/v1` `apiGroup`.
|
process
|
switch to crds area api kind process describe the feature once happens we should be able to adopt the apiextensions io apigroup
| 1
|
18,605
| 24,577,542,040
|
IssuesEvent
|
2022-10-13 13:26:31
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Study resources screen > Newly attached and published PDF is not getting displayed for the mobile participants
|
Bug P1 iOS Process: Fixed Process: Tested QA Process: Tested dev
|
Steps:
1. In SB, add resource pdf and publish
2. Go to mobile and check the added resource
3. SB > Edit resource
4. Remove the attached pdf and attach a new pdf
5. Publish the updates
6. In Mobile, go to the study resources screen and observe
AR: Old attached pdf is getting displayed for the mobile participants
ER: Newly attached and published PDF should get displayed for the mobile participants
|
3.0
|
[iOS] Study resources screen > Newly attached and published PDF is not getting displayed for the mobile participants - Steps:
1. In SB, add resource pdf and publish
2. Go to mobile and check the added resource
3. SB > Edit resource
4. Remove the attached pdf and attach a new pdf
5. Publish the updates
6. In Mobile, go to the study resources screen and observe
AR: Old attached pdf is getting displayed for the mobile participants
ER: Newly attached and published PDF should get displayed for the mobile participants
|
process
|
study resources screen newly attached and published pdf is not getting displayed for the mobile participants steps in sb add resource pdf and publish go to mobile and check the added resource sb edit resource remove the attached pdf and attach a new pdf publish the updates in mobile go to the study resources screen and observe ar old attached pdf is getting displayed for the mobile participants er newly attached and published pdf should get displayed for the mobile participants
| 1
|
375
| 2,516,854,734
|
IssuesEvent
|
2015-01-16 09:23:01
|
enioka/jqm
|
https://api.github.com/repos/enioka/jqm
|
closed
|
Missing documentation concerning Simple web APIs
|
documentation
|
Missing documentation on simple web APIs:
http://jqm.readthedocs.org/en/jqm-all-1.2.1/client/minimal.html
|
1.0
|
Missing documentation concerning Simple web APIs - Missing documentation on simple web APIs:
http://jqm.readthedocs.org/en/jqm-all-1.2.1/client/minimal.html
|
non_process
|
missing documentation concerning simple web apis missing documentation on simple web apis
| 0
|
113,117
| 11,788,859,132
|
IssuesEvent
|
2020-03-17 16:12:55
|
LEGOL2/weather-station
|
https://api.github.com/repos/LEGOL2/weather-station
|
closed
|
Contact the instructor + documentation
|
documentation
|
@tominkoooo, you will write the email about the parts, and then help me with the documentation.
|
1.0
|
Contact the instructor + documentation - @tominkoooo, you will write the email about the parts, and then help me with the documentation.
|
non_process
|
contact the instructor documentation tominkoooo you will write the email about the parts and then help me with the documentation
| 0
|
45,801
| 5,962,900,726
|
IssuesEvent
|
2017-05-30 01:36:45
|
redelivre/delibera
|
https://api.github.com/repos/redelivre/delibera
|
closed
|
Improve part of the results screen for validations
|
área: Design área: UX backlog complexidade:Baixa prioridade: Média tipo: Melhoria
|
One of the demands of public consultations is the validation of documents, meetings and the like; for this, delibera supports adding 2 stages, agenda proposal and result, but the layout of the result after validation leaves something to be desired, as in the example:

|
1.0
|
Improve part of the results screen for validations - One of the demands of public consultations is the validation of documents, meetings and the like; for this, delibera supports adding 2 stages, agenda proposal and result, but the layout of the result after validation leaves something to be desired, as in the example:

|
non_process
|
improve part of the results screen for validations one of the demands of public consultations is the validation of documents meetings and the like for this delibera supports adding stages agenda proposal and result but the layout of the result after validation leaves something to be desired as in the example
| 0
|
35,729
| 2,793,020,681
|
IssuesEvent
|
2015-05-11 08:04:20
|
handsontable/handsontable
|
https://api.github.com/repos/handsontable/handsontable
|
closed
|
Feature request: Add customizable tooltip support
|
Feature Priority: low
|
Title tooltip is not good enough. We want customizable tooltips to make them more beautiful. Thanks.
|
1.0
|
Feature request: Add customizable tooltip support - Title tooltip is not good enough. We want customizable tooltips to make them more beautiful. Thanks.
|
non_process
|
feature request add customizable tooltip support title tooltip is not good enough we want customizable tooltips to make them more beautiful thanks
| 0
|
137
| 2,574,669,962
|
IssuesEvent
|
2015-02-11 18:16:45
|
robotology/yarp
|
https://api.github.com/repos/robotology/yarp
|
opened
|
Carriers.h should be moved out from impl
|
Component: Carriers Component: YARP_OS Severity: Normal Type: Process
|
Any 3rd party project using yarp should not use impl files, but in order to create carrier plugins, the yarp::os::impl::Carriers::addCarrierPrototype() is called by the template.
We should move ``yarp/os/impl/Carriers.h`` into ``yarp/os/Carriers.h`` and the class ``yarp::os::impl::Carriers`` to ``yarp::os::Carriers`` in order to solve this issue.
It shouldn't be hard to do; the only requirement is to use PIMPL to hide the usage of ``yarp::os::impl::PlatformVector<Carrier *>``.
|
1.0
|
Carriers.h should be moved out from impl - Any 3rd party project using yarp should not use impl files, but in order to create carrier plugins, the yarp::os::impl::Carriers::addCarrierPrototype() is called by the template.
We should move ``yarp/os/impl/Carriers.h`` into ``yarp/os/Carriers.h`` and the class ``yarp::os::impl::Carriers`` to ``yarp::os::Carriers`` in order to solve this issue.
It shouldn't be hard to do; the only requirement is to use PIMPL to hide the usage of ``yarp::os::impl::PlatformVector<Carrier *>``.
|
process
|
carriers h should be moved out from impl any party project using yarp should not use impl files but in order to create carrier plugins the yarp os impl carriers addcarrierprototype is called by the template we should move yarp os impl carriers h into yarp os carriers h and the class yarp os impl carriers to yarp os carriers in order to solve this issue it shouldn t be hard to do the only requirements is to use pimpl to hide the usage of yarp os impl platformvector
| 1
|
51,004
| 10,577,211,616
|
IssuesEvent
|
2019-10-07 19:36:03
|
chatwoot/chatwoot
|
https://api.github.com/repos/chatwoot/chatwoot
|
closed
|
Fix "EmptyLineBetweenBlocks" issue in app/javascript/src/assets/scss/widgets/_conversation-view.scss
|
codeclimate hacktoberfest
|
Rule declaration should be followed by an empty line
https://codeclimate.com/github/chatwoot/chatwoot/app/javascript/src/assets/scss/widgets/_conversation-view.scss#issue_5d87481187cf1900010006ce
|
1.0
|
Fix "EmptyLineBetweenBlocks" issue in app/javascript/src/assets/scss/widgets/_conversation-view.scss - Rule declaration should be followed by an empty line
https://codeclimate.com/github/chatwoot/chatwoot/app/javascript/src/assets/scss/widgets/_conversation-view.scss#issue_5d87481187cf1900010006ce
|
non_process
|
fix emptylinebetweenblocks issue in app javascript src assets scss widgets conversation view scss rule declaration should be followed by an empty line
| 0
|
2,863
| 5,824,435,430
|
IssuesEvent
|
2017-05-07 13:02:28
|
QCoDeS/Qcodes
|
https://api.github.com/repos/QCoDeS/Qcodes
|
closed
|
Restarting a notebook does not close existing connections to instruments
|
bug mulitprocessing p1
|
When debugging the new instrument server I find that restarting the notebook does not allow me to reconnect to all instruments.
Restarting my notebook and running the initialization gives me the following error `pyvisa.errors.VisaIOError: ('VI_ERROR_RSRC_BUSY (-1073807246): The resource is valid, but VISA cannot currently access it.',` (see below for full log).
This is consistent with another connection still being open. Killing the entire Jupyter process and restarting solves the problem. However, this is not the way to do it, for two reasons.
1. Restarting everything is overkill and may force closing other notebooks that you do not want to kill
2. The fact that these processes remain open may indicate a more serious memory leak. Does this mean that restarting notebooks often will also leave multiple data servers and visa instruments open?
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-1-3b25c0dec222> in <module>()
9
10 # import as * puts all the imports and objects of the init in the global namespace
---> 11 from init.LaDucati import *
12 # this makes a widget in the corner of the window to show and control
13 # subprocesses and any output they would print to the terminal
D:\GitHubRepos\PycQED_py3\init\LaDucati.py in <module>()
58 AWG = tek.Tektronix_AWG5014(name='AWG', setup_folder=None,
59 address='TCPIP0::192.168.0.9')#, server_name=None)
---> 60 IVVI = iv.IVVI('IVVI', address='ASRL1', numdacs=16)
61 Dux = qdux.QuTech_Duplexer('Dux', address='TCPIP0::192.168.0.101')
62
d:\githubrepos\qcodes\qcodes\instrument\base.py in __new__(cls, server_name, *args, **kwargs)
67 else:
68 return RemoteInstrument(*args, instrument_class=cls,
---> 69 server_name=server_name, **kwargs)
70
71 def __init__(self, name, server_name=None, **kwargs):
d:\githubrepos\qcodes\qcodes\instrument\remote.py in __init__(self, instrument_class, server_name, *args, **kwargs)
25 manager = get_instrument_server(server_name, shared_kwargs)
26 # connect sets self.connection
---> 27 manager.connect(self, instrument_class, args, kwargs)
28
29 # bind all the different categories of actions we need
d:\githubrepos\qcodes\qcodes\instrument\server.py in connect(self, remote_instrument, instrument_class, args, kwargs)
82 conn = InstrumentConnection(
83 manager=self, instrument_class=instrument_class,
---> 84 new_id=new_id, args=args, kwargs=kwargs)
85
86 # save the information to recreate this instrument on the server
d:\githubrepos\qcodes\qcodes\instrument\server.py in __init__(self, manager, instrument_class, new_id, args, kwargs)
126 # (like visa timeout) to get back to us
127 info = manager.ask('new', instrument_class, new_id, args, kwargs,
--> 128 timeout=20)
129 for k, v in info.items():
130 setattr(self, k, v)
d:\githubrepos\qcodes\qcodes\utils\multiprocessing.py in ask(self, timeout, *query)
333 # only raise if we're not about to find a deeper error
334 raise e
--> 335 self._check_for_errors(self._expect_error)
336
337 return res
d:\githubrepos\qcodes\qcodes\utils\multiprocessing.py in _check_for_errors(self, expect_error)
298 err_type = RuntimeError
299
--> 300 raise err_type(errhead + '\n\n' + errstr)
301
302 def _check_response(self, timeout):
RuntimeError: *** error on SerialServer ***
Traceback (most recent call last):
File "d:\githubrepos\qcodes\qcodes\instrument\server.py", line 177, in __init__
self.process_query(query)
File "d:\githubrepos\qcodes\qcodes\instrument\server.py", line 184, in process_query
getattr(self, 'handle_' + query[0])(*(query[1:]))
File "d:\githubrepos\qcodes\qcodes\instrument\server.py", line 225, in handle_new
ins = instrument_class(*args, server_name=None, **kwargs)
File "d:\githubrepos\qcodes\qcodes\instrument_drivers\QuTech\IVVI.py", line 44, in __init__
super().__init__(name, address, **kwargs)
File "d:\githubrepos\qcodes\qcodes\instrument\visa.py", line 30, in __init__
self.set_address(address)
File "d:\githubrepos\qcodes\qcodes\instrument\visa.py", line 58, in set_address
self.visa_handle = resource_manager.open_resource(address)
File "C:\Anaconda3\lib\site-packages\pyvisa\highlevel.py", line 1644, in open_resource
res.open(access_mode, open_timeout)
File "C:\Anaconda3\lib\site-packages\pyvisa\resources\resource.py", line 203, in open
self.session, status = self._resource_manager.open_bare_resource(self._resource_name, access_mode, open_timeout)
File "C:\Anaconda3\lib\site-packages\pyvisa\highlevel.py", line 1601, in open_bare_resource
return self.visalib.open(self.session, resource_name, access_mode, open_timeout)
File "C:\Anaconda3\lib\site-packages\pyvisa\ctwrapper\functions.py", line 1211, in open
ret = library.viOpen(session, resource_name, access_mode, open_timeout, byref(out_session))
File "C:\Anaconda3\lib\site-packages\pyvisa\ctwrapper\highlevel.py", line 188, in _return_handler
raise errors.VisaIOError(ret_value)
pyvisa.errors.VisaIOError: ('VI_ERROR_RSRC_BUSY (-1073807246): The resource is valid, but VISA cannot currently access it.', "error processing query ('new', <class 'qcodes.instrument_drivers.QuTech.IVVI.IVVI'>, 0, ('IVVI',), {'address': 'ASRL1', 'numdacs': 16})")
```
|
1.0
|
Restarting a notebook does not close existing connections to instruments - When debugging the new instrument server I find that restarting the notebook does not allow me to reconnect to all instruments.
Restarting my notebook and running the initialization gives me the following error `pyvisa.errors.VisaIOError: ('VI_ERROR_RSRC_BUSY (-1073807246): The resource is valid, but VISA cannot currently access it.',` (see below for full log).
This is consistent with another connection still being open. Killing the entire Jupyter process and restarting solves the problem. However, this is not the way to do it, for two reasons.
1. Restarting everything is overkill and may force closing other notebooks that you do not want to kill
2. The fact that these processes remain open may indicate a more serious memory leak. Does this mean that restarting notebooks often will also leave multiple data servers and visa instruments open?
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-1-3b25c0dec222> in <module>()
9
10 # import as * puts all the imports and objects of the init in the global namespace
---> 11 from init.LaDucati import *
12 # this makes a widget in the corner of the window to show and control
13 # subprocesses and any output they would print to the terminal
D:\GitHubRepos\PycQED_py3\init\LaDucati.py in <module>()
58 AWG = tek.Tektronix_AWG5014(name='AWG', setup_folder=None,
59 address='TCPIP0::192.168.0.9')#, server_name=None)
---> 60 IVVI = iv.IVVI('IVVI', address='ASRL1', numdacs=16)
61 Dux = qdux.QuTech_Duplexer('Dux', address='TCPIP0::192.168.0.101')
62
d:\githubrepos\qcodes\qcodes\instrument\base.py in __new__(cls, server_name, *args, **kwargs)
67 else:
68 return RemoteInstrument(*args, instrument_class=cls,
---> 69 server_name=server_name, **kwargs)
70
71 def __init__(self, name, server_name=None, **kwargs):
d:\githubrepos\qcodes\qcodes\instrument\remote.py in __init__(self, instrument_class, server_name, *args, **kwargs)
25 manager = get_instrument_server(server_name, shared_kwargs)
26 # connect sets self.connection
---> 27 manager.connect(self, instrument_class, args, kwargs)
28
29 # bind all the different categories of actions we need
d:\githubrepos\qcodes\qcodes\instrument\server.py in connect(self, remote_instrument, instrument_class, args, kwargs)
82 conn = InstrumentConnection(
83 manager=self, instrument_class=instrument_class,
---> 84 new_id=new_id, args=args, kwargs=kwargs)
85
86 # save the information to recreate this instrument on the server
d:\githubrepos\qcodes\qcodes\instrument\server.py in __init__(self, manager, instrument_class, new_id, args, kwargs)
126 # (like visa timeout) to get back to us
127 info = manager.ask('new', instrument_class, new_id, args, kwargs,
--> 128 timeout=20)
129 for k, v in info.items():
130 setattr(self, k, v)
d:\githubrepos\qcodes\qcodes\utils\multiprocessing.py in ask(self, timeout, *query)
333 # only raise if we're not about to find a deeper error
334 raise e
--> 335 self._check_for_errors(self._expect_error)
336
337 return res
d:\githubrepos\qcodes\qcodes\utils\multiprocessing.py in _check_for_errors(self, expect_error)
298 err_type = RuntimeError
299
--> 300 raise err_type(errhead + '\n\n' + errstr)
301
302 def _check_response(self, timeout):
RuntimeError: *** error on SerialServer ***
Traceback (most recent call last):
File "d:\githubrepos\qcodes\qcodes\instrument\server.py", line 177, in __init__
self.process_query(query)
File "d:\githubrepos\qcodes\qcodes\instrument\server.py", line 184, in process_query
getattr(self, 'handle_' + query[0])(*(query[1:]))
File "d:\githubrepos\qcodes\qcodes\instrument\server.py", line 225, in handle_new
ins = instrument_class(*args, server_name=None, **kwargs)
File "d:\githubrepos\qcodes\qcodes\instrument_drivers\QuTech\IVVI.py", line 44, in __init__
super().__init__(name, address, **kwargs)
File "d:\githubrepos\qcodes\qcodes\instrument\visa.py", line 30, in __init__
self.set_address(address)
File "d:\githubrepos\qcodes\qcodes\instrument\visa.py", line 58, in set_address
self.visa_handle = resource_manager.open_resource(address)
File "C:\Anaconda3\lib\site-packages\pyvisa\highlevel.py", line 1644, in open_resource
res.open(access_mode, open_timeout)
File "C:\Anaconda3\lib\site-packages\pyvisa\resources\resource.py", line 203, in open
self.session, status = self._resource_manager.open_bare_resource(self._resource_name, access_mode, open_timeout)
File "C:\Anaconda3\lib\site-packages\pyvisa\highlevel.py", line 1601, in open_bare_resource
return self.visalib.open(self.session, resource_name, access_mode, open_timeout)
File "C:\Anaconda3\lib\site-packages\pyvisa\ctwrapper\functions.py", line 1211, in open
ret = library.viOpen(session, resource_name, access_mode, open_timeout, byref(out_session))
File "C:\Anaconda3\lib\site-packages\pyvisa\ctwrapper\highlevel.py", line 188, in _return_handler
raise errors.VisaIOError(ret_value)
pyvisa.errors.VisaIOError: ('VI_ERROR_RSRC_BUSY (-1073807246): The resource is valid, but VISA cannot currently access it.', "error processing query ('new', <class 'qcodes.instrument_drivers.QuTech.IVVI.IVVI'>, 0, ('IVVI',), {'address': 'ASRL1', 'numdacs': 16})")
```
|
process
|
restarting a notebook does not close existing connections to instruments when debugging the new instrument server i find that restarting the notebook does not allow me to reconnect to all instruments restarting my notebook and running the initialization gives me the following error pyvisa errors visaioerror vi error rsrc busy the resource is valid but visa cannot currently access it see below for full log this is consistent with another connection still being open killing the entire jupyter process and restarting solves the problem however this is not the way to do it for two reasons restarting everything is an overkill and may force closing other notebooks that you do not want to kill the fact that these processes remain open may indicate a more serious memory leak does this mean that restarting notebooks often will also leave multiple data servers and visa instruments open runtimeerror traceback most recent call last in import as puts all the imports and objects of the init in the global namespace from init laducati import this makes a widget in the corner of the window to show and control subprocesses and any output they would print to the terminal d githubrepos pycqed init laducati py in awg tek tektronix name awg setup folder none address server name none ivvi iv ivvi ivvi address numdacs dux qdux qutech duplexer dux address d githubrepos qcodes qcodes instrument base py in new cls server name args kwargs else return remoteinstrument args instrument class cls server name server name kwargs def init self name server name none kwargs d githubrepos qcodes qcodes instrument remote py in init self instrument class server name args kwargs manager get instrument server server name shared kwargs connect sets self connection manager connect self instrument class args kwargs bind all the different categories of actions we need d githubrepos qcodes qcodes instrument server py in connect self remote instrument instrument class args kwargs conn instrumentconnection manager 
self instrument class instrument class new id new id args args kwargs kwargs save the information to recreate this instrument on the server d githubrepos qcodes qcodes instrument server py in init self manager instrument class new id args kwargs like visa timeout to get back to us info manager ask new instrument class new id args kwargs timeout for k v in info items setattr self k v d githubrepos qcodes qcodes utils multiprocessing py in ask self timeout query only raise if we re not about to find a deeper error raise e self check for errors self expect error return res d githubrepos qcodes qcodes utils multiprocessing py in check for errors self expect error err type runtimeerror raise err type errhead n n errstr def check response self timeout runtimeerror error on serialserver traceback most recent call last file d githubrepos qcodes qcodes instrument server py line in init self process query query file d githubrepos qcodes qcodes instrument server py line in process query getattr self handle query query file d githubrepos qcodes qcodes instrument server py line in handle new ins instrument class args server name none kwargs file d githubrepos qcodes qcodes instrument drivers qutech ivvi py line in init super init name address kwargs file d githubrepos qcodes qcodes instrument visa py line in init self set address address file d githubrepos qcodes qcodes instrument visa py line in set address self visa handle resource manager open resource address file c lib site packages pyvisa highlevel py line in open resource res open access mode open timeout file c lib site packages pyvisa resources resource py line in open self session status self resource manager open bare resource self resource name access mode open timeout file c lib site packages pyvisa highlevel py line in open bare resource return self visalib open self session resource name access mode open timeout file c lib site packages pyvisa ctwrapper functions py line in open ret library viopen session 
resource name access mode open timeout byref out session file c lib site packages pyvisa ctwrapper highlevel py line in return handler raise errors visaioerror ret value pyvisa errors visaioerror vi error rsrc busy the resource is valid but visa cannot currently access it error processing query new ivvi address numdacs
| 1
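The qcodes record above traces a `VI_ERROR_RSRC_BUSY` to instrument connections that survive a notebook restart. One defensive pattern is to close any previously registered connection on the same address before opening a new one, and to release everything at interpreter exit. Below is a minimal Python sketch of that pattern; every name in it (`Instrument`, `_registry`, `close_all`) is an illustrative assumption, not the qcodes or pyvisa API.

```python
import atexit

class Instrument:
    """Hypothetical sketch of a close-before-reconnect pattern.

    Not the qcodes API: the class name, `_registry`, and `close_all`
    are assumptions made only to illustrate the idea.
    """
    _registry = {}  # address -> currently open instrument

    def __init__(self, address):
        # If an earlier session (e.g. a restarted notebook kernel) left a
        # connection open on this address, close it first instead of
        # failing with a "resource busy" error.
        stale = Instrument._registry.pop(address, None)
        if stale is not None:
            stale.close()
        self.address = address
        self.open = True
        Instrument._registry[address] = self

    def close(self):
        self.open = False
        Instrument._registry.pop(self.address, None)

def close_all():
    # Registered with atexit so a clean interpreter shutdown releases
    # every connection even if user code forgot to.
    for ins in list(Instrument._registry.values()):
        ins.close()

atexit.register(close_all)
```

Under this sketch, `Instrument("ASRL1")` followed by a second `Instrument("ASRL1")` closes the first handle instead of raising a resource-busy error.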
|
17,142
| 22,687,501,274
|
IssuesEvent
|
2022-07-04 15:27:28
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Add Grafana visualization for start process instance anywhere
|
area/observability team/process-automation
|
Metrics have been added to differentiate process instances started at the default start event from those started at a given element.
- https://github.com/camunda/zeebe/pull/9521
These metrics should be visualized in Grafana.
|
1.0
|
Add Grafana visualization for start process instance anywhere - Metrics have been added to differentiate process instances started at the default start event from those started at a given element.
- https://github.com/camunda/zeebe/pull/9521
These metrics should be visualized in Grafana.
|
process
|
add grafana visualization for start process instance anywhere metrics have been added to differentiate process instances started at the default start event from those started at a given element these metrics should be visualized in grafana
| 1
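The Zeebe record above asks for Grafana panels over metrics that distinguish process instances started at the default start event from those started at a given element. A hand-rolled Python sketch of the kind of labeled counter such a dashboard would query follows; the metric and label names are assumptions for illustration, not Zeebe's actual metric names.

```python
from collections import Counter

class LabeledCounter:
    """Minimal stand-in for a Prometheus-style counter with one label."""

    def __init__(self, name):
        self.name = name
        self._values = Counter()

    def inc(self, label, amount=1):
        self._values[label] += amount

    def collect(self):
        # Render in the text exposition format a Grafana datasource scrapes.
        return [f'{self.name}{{creation_mode="{k}"}} {v}'
                for k, v in sorted(self._values.items())]

# Hypothetical metric: one counter, split by where the instance started.
created = LabeledCounter("process_instance_created_total")
created.inc("at_default_start_event")
created.inc("at_given_element")
created.inc("at_default_start_event")
```

A Grafana panel would then graph each `creation_mode` series separately to visualize the split.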
|
43,238
| 17,469,898,704
|
IssuesEvent
|
2021-08-07 00:36:28
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Enhancement: Hydrate other project details tabs when any tab loads
|
Service: Dev Need: 3-Could Have Type: Enhancement Product: Moped Workgroup: ATD Project: Moped v1.0
|
As a Moped user, I want to quickly navigate between project details tabs instead of waiting for each tab to load when I click on it.
As a Moped user, I want to quickly navigate between project details tabs instead of waiting for each tab to load when I click on it.
|
1.0
|
Enhancement: Hydrate other project details tabs when any tab loads - As a Moped user, I want to quickly navigate between project details tabs instead of waiting for each tab to load when I click on it.
|
non_process
|
enhancement hydrate other project details tabs when any tab loads as a moped user want to quickly navigate between project details tabs instead of waiting for each tab to load when i click on it
| 0
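The Moped record above asks for all project-details tabs to be hydrated as soon as any one of them loads. A minimal asyncio sketch of that prefetch-and-cache pattern is below; `TabCache` and its loaders are hypothetical illustrations of the idea, not Moped's actual frontend code (which would live in its React client).

```python
import asyncio

class TabCache:
    """Hypothetical sketch of "hydrate every tab on first load".

    Opening any tab kicks off the fetches for *all* tabs concurrently
    and caches the resulting futures, so switching tabs later awaits
    already-finished work instead of starting a new request.
    """

    def __init__(self, loaders):
        self.loaders = loaders   # tab name -> async fetch function
        self.tasks = {}

    async def open(self, tab):
        if not self.tasks:
            # First open: start hydrating every tab at once.
            self.tasks = {name: asyncio.ensure_future(load())
                          for name, load in self.loaders.items()}
        return await self.tasks[tab]
```

The first `open()` pays for all tabs concurrently; every later `open()` resolves from the cache.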
|
4,850
| 7,740,103,741
|
IssuesEvent
|
2018-05-28 19:31:15
|
googlegenomics/gcp-variant-transforms
|
https://api.github.com/repos/googlegenomics/gcp-variant-transforms
|
closed
|
Add python type checking to all classes
|
P1 process
|
We have adopted PEP-484 for our type checking, but many classes are still missing them. We should add them to all existing classes.
|
1.0
|
Add python type checking to all classes - We have adopted PEP-484 for our type checking, but many classes are still missing them. We should add them to all existing classes.
|
process
|
add python type checking to all classes we have adopted pep for our type checking but many classes are still missing them we should add them to all existing classes
| 1
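The gcp-variant-transforms record above adopts PEP 484 type hints across all classes. A minimal sketch of what adding them looks like follows, so a checker such as mypy can verify call sites; `VariantMerger` is a made-up example class for illustration, not one of the project's actual classes.

```python
from typing import Dict, List, Optional

class VariantMerger:
    """Toy class showing PEP 484 annotations on parameters,
    return types, and attributes."""

    def __init__(self, max_depth: int = 3) -> None:
        self.max_depth: int = max_depth

    def merge(self, names: List[str]) -> Dict[str, int]:
        # Annotated container types let the checker catch misuse
        # (e.g. passing a str where List[str] is expected).
        return {name: len(name) for name in names}

    def find(self, names: List[str], target: str) -> Optional[int]:
        # Optional[int] documents that "not found" is a valid outcome.
        try:
            return names.index(target)
        except ValueError:
            return None
```

Running `mypy` over such a module then flags any caller that ignores these contracts.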
|
316,303
| 27,150,582,884
|
IssuesEvent
|
2023-02-17 00:40:38
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[C#] [T-SQL] [PCD] Desenvolvedor Software - PCD na [LARCO]
|
SALVADOR C# TESTES UNITARIOS MODELAGEM DE DADOS HELP WANTED VAGA PARA PCD T-SQL Stale
|
<!--
==================================================
PLEASE ONLY POST IF THE POSITION IS FOR SALVADOR AND NEIGHBORING CITIES!
Use: "Desenvolvedor Front-end" instead of
"Front-End Developer" \o/
Example: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [COMPANY NAME]`
==================================================
-->
## Job description
- Software Developer - PCD (position reserved for people with disabilities) at Larco
## Location
- Salvador
## Benefits
- Food allowance;
- Meal voucher;
- Health plan;
- Dental plan
## Requirements
**Mandatory:**
- Completed degree in Systems Analysis and Development, Computer Science, or Software Engineering;
- Experience programming systems of lower complexity;
- Maintenance of desktop or web systems using C#;
- Development of unit-test scenarios;
- Knowledge of system development platforms and methodologies;
- Programming logic;
- Database management systems (T-SQL);
- Data modeling;
- Technical English.
## Hiring terms
- to be agreed
## Our company
- Genuinely Bahian, Larco is the fastest-growing fuel distributor in the Northeast.
- Larco was born in 2000 out of the entrepreneurial vision of the traditional Grupo Evangelista, founded in 1956, the leader in urban and metropolitan transport in Bahia.
- An agile company that seeks to stand out through personalized service and the integrity of the fuels it offers through an excellent distribution structure, being synonymous with Solidity and Credibility in the market.
- Today we have storage bases strategically located across the Northeast and Southeast.
## How to apply
- Please send an email to vagas@larcopetroleo.com.br with your CV attached.
|
1.0
|
[C#] [T-SQL] [PCD] Desenvolvedor Software - PCD na [LARCO] - <!--
==================================================
PLEASE ONLY POST IF THE POSITION IS FOR SALVADOR AND NEIGHBORING CITIES!
Use: "Desenvolvedor Front-end" instead of
"Front-End Developer" \o/
Example: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [COMPANY NAME]`
==================================================
-->
## Job description
- Software Developer - PCD (position reserved for people with disabilities) at Larco
## Location
- Salvador
## Benefits
- Food allowance;
- Meal voucher;
- Health plan;
- Dental plan
## Requirements
**Mandatory:**
- Completed degree in Systems Analysis and Development, Computer Science, or Software Engineering;
- Experience programming systems of lower complexity;
- Maintenance of desktop or web systems using C#;
- Development of unit-test scenarios;
- Knowledge of system development platforms and methodologies;
- Programming logic;
- Database management systems (T-SQL);
- Data modeling;
- Technical English.
## Hiring terms
- to be agreed
## Our company
- Genuinely Bahian, Larco is the fastest-growing fuel distributor in the Northeast.
- Larco was born in 2000 out of the entrepreneurial vision of the traditional Grupo Evangelista, founded in 1956, the leader in urban and metropolitan transport in Bahia.
- An agile company that seeks to stand out through personalized service and the integrity of the fuels it offers through an excellent distribution structure, being synonymous with Solidity and Credibility in the market.
- Today we have storage bases strategically located across the Northeast and Southeast.
## How to apply
- Please send an email to vagas@larcopetroleo.com.br with your CV attached.
|
non_process
|
desenvolvedor software pcd na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na descrição da vaga desenvolvedor software pcd na larco local salvador benefícios ticket alimentação ticket refeição plano de saúde plano odontológico requisitos obrigatórios superior completo em análise e desenvolvimento de sistemas ciências da computação ou engenharia de software experiência em programação de sistemas de menor complexidade manutenção em sistemas desktops ou web utilizando linguagem c desenvolvimento de cenários de testes unitários conhecimento em plataformas e metodologias de desenvolvimento de sistemas lógica de programação sistemas gerenciadores de banco de dados t sql modelagem de dados inglês técnico contratação a combinar nossa empresa genuinamente baiana a larco é uma distribuidora de combustível que mais cresce no nordeste a larco nasce no ano fruto da visão empreendedora do tradicional grupo evangelista fundado líder em transporte urbano e metropolitano na bahia uma empresa ágil que busca se diferenciar pelo atendimento personalizado e pela integridade dos combustíveis oferecidos através de uma excelente estrutura de distribuição sendo sinônimo de solidez e credibilidade no mercado hoje contamos com bases de armazenagem localizadas estrategicamente pelo nordeste e sudeste como se candidatar por favor envie um email para vagas larcopetroleo com br com seu cv anexado
| 0
|
22,700
| 32,008,672,494
|
IssuesEvent
|
2023-09-21 16:24:37
|
googleapis/nodejs-pubsub
|
https://api.github.com/repos/googleapis/nodejs-pubsub
|
closed
|
Warning: a recent release failed
|
type: process api: pubsub
|
The following release PRs may have failed:
* #1820 - The release job failed -- check the build log.
|
1.0
|
Warning: a recent release failed - The following release PRs may have failed:
* #1820 - The release job failed -- check the build log.
|
process
|
warning a recent release failed the following release prs may have failed the release job failed check the build log
| 1
|
14,034
| 16,832,866,320
|
IssuesEvent
|
2021-06-18 08:04:43
|
Jeffail/benthos
|
https://api.github.com/repos/Jeffail/benthos
|
closed
|
Feature request: LZ4 support in compress/decompress
|
enhancement processors
|
Please add LZ4 support to the compress/decompress processors. It offers much higher decompression speed than snappy with a slightly higher compression ratio [1]. There is a pure-Go LZ4 implementation available [2] - it should be easier to add than C library bindings.
[1] https://www.percona.com/blog/2016/04/13/evaluating-database-compression-methods-update/
[2] https://github.com/pierrec/lz4
|
1.0
|
Feature request: LZ4 support in compress/decompress - Please add LZ4 support to the compress/decompress processors. It offers much higher decompression speed than snappy with a slightly higher compression ratio [1]. There is a pure-Go LZ4 implementation available [2] - it should be easier to add than C library bindings.
[1] https://www.percona.com/blog/2016/04/13/evaluating-database-compression-methods-update/
[2] https://github.com/pierrec/lz4
|
process
|
feature request support in compress decompress please add support in compress decompress processors it offers much higher decompression speed than snappy with slightly higher compression rate there is a pure go implementation available it should be easier to add it compare to c library bindings
| 1
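The Benthos record above asks for LZ4 alongside the existing compress/decompress algorithms. Below is a Python stdlib sketch of the pluggable codec-table shape such processors typically use; the registry and function names are illustrative assumptions (Benthos itself is written in Go), and the `lz4` entry is left as a commented placeholder because it would need a third-party package.

```python
import bz2
import gzip
import lzma

# Hypothetical codec registry for a compress/decompress processor.
# Adding a new algorithm is one entry mapping its name to a
# (compress, decompress) pair of bytes -> bytes functions.
CODECS = {
    "gzip": (gzip.compress, gzip.decompress),
    "bz2": (bz2.compress, bz2.decompress),
    "lzma": (lzma.compress, lzma.decompress),
    # "lz4": (lz4.frame.compress, lz4.frame.decompress),  # needs the `lz4` package
}

def compress(algorithm: str, data: bytes) -> bytes:
    enc, _ = CODECS[algorithm]
    return enc(data)

def decompress(algorithm: str, data: bytes) -> bytes:
    _, dec = CODECS[algorithm]
    return dec(data)
```

With this shape, supporting LZ4 is a one-line registry addition rather than a new code path.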
|
310,706
| 9,523,200,398
|
IssuesEvent
|
2019-04-27 15:21:15
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
AudioStreamPlayer causes unpredictable behaviour in idle process
|
bug high priority topic:audio topic:core
|
<!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
Godot 3.1-stable
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
<!-- What happened, and what was expected. -->
So, this is a weird one!
First I thought it was the timer node (https://github.com/godotengine/godot/issues/27437).
Then @Tugsav managed to find a possible fix to this issue, but @reduz isn't sure it actually makes sense (https://github.com/godotengine/godot/pull/27458).
Now I managed to create a reproduction project, but it isn't pretty, but it shows that there is an error!
I am pretty sure that the AudioStreamPlayer is the problem: it causes the idle process across the game to behave very erratically. It stops other things running in the idle process, such as Particles and Animations. In some cases I also experienced particles and animations being sped up.
**Steps to reproduce:**
Run the project I included and keep your eyes on two things!
1. See how the four animations and the particle system stops one at a time.
2. Check the log output and notice a few things.
2.1 ```ERROR: partitioner: bad comparison function; sorting will be broken
At: ./core/sort_array.h:189.```
2.2 The outputs should come every 0.1 seconds, but they will start to speed up significantly and print much faster.
There are a few things to do in the project that will result in different behaviours.
- The script "Main.gd" calls "audio.play()" and "audio.stop()" right after each other in a while loop with some other code; if one or both of these lines are commented out, this whole error doesn't occur.
- The node with the "Main.gd" script attached (Node) can be placed a few different places in the scene tree that will cause different behaviours. If it is placed as the top node, just below "Main" the while loop runs forever, but eventually all the animations and the particles will stop, and the idle process runs a lot faster (see log output being printed way faster than it should)

- If Node is placed as the last node in the tree the animations and the particles will continue working but the while loop will at some point straight up stop because the timer doesn't get the timeout signal correct and therefore hang forever.

- If Node is placed after the "ExtraScenes" node the behaviour is a mix of the two above. First the animations and the particles will stop, and when they all are stopped the while loop will stop as a result of the timer failing.

So, to the solution provided by @Tugsav
Our thought about this is that the AudioStreamPlayer runs in the idle process and somehow is able to mess it up! To further prove that this is happening, go ahead and set one or more of the animation players in the scene to process mode physics and play the game again.
This will result in the animation players working as intended and never stop.
So the PR I linked to in the beginning makes the change to the AudioStreamPlayer, and that PR fixes this problem! But it might not be the correct solution as @reduz isn't completely convinced about the PR.
The solution is to make the AudioStreamPlayer run in physics process instead, just like the AudioStreamPlayer2D and AudioStreamPlayer3D does.
I really hope that this time around, the issue is way more clear, and that the reproduction project is enough to both prove that there is an issue and to help find a fix as this is quite a bad bug!
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
[test.tar.gz](https://github.com/godotengine/godot/files/3032992/test.tar.gz)
|
1.0
|
AudioStreamPlayer causes unpredictable behaviour in idle process - <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if non-official. -->
Godot 3.1-stable
**OS/device including version:**
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
<!-- What happened, and what was expected. -->
So, this is a weird one!
First I thought it was the timer node (https://github.com/godotengine/godot/issues/27437).
Then @Tugsav managed to find a possible fix to this issue, but @reduz isn't sure it actually makes sense (https://github.com/godotengine/godot/pull/27458).
Now I managed to create a reproduction project, but it isn't pretty, but it shows that there is an error!
I am pretty sure that the AudioStreamPlayer is the problem: it causes the idle process across the game to behave very erratically. It stops other things running in the idle process, such as Particles and Animations. In some cases I also experienced particles and animations being sped up.
**Steps to reproduce:**
Run the project I included and keep your eyes on two things!
1. See how the four animations and the particle system stops one at a time.
2. Check the log output and notice a few things.
2.1 ```ERROR: partitioner: bad comparison function; sorting will be broken
At: ./core/sort_array.h:189.```
2.2 The outputs should come every 0.1 seconds, but they will start to speed up significantly and print much faster.
There are a few things to do in the project that will result in different behaviours.
- The script "Main.gd" calls "audio.play()" and "audio.stop()" right after each other in a while loop with some other code; if one or both of these lines are commented out, this whole error doesn't occur.
- The node with the "Main.gd" script attached (Node) can be placed a few different places in the scene tree that will cause different behaviours. If it is placed as the top node, just below "Main" the while loop runs forever, but eventually all the animations and the particles will stop, and the idle process runs a lot faster (see log output being printed way faster than it should)

- If Node is placed as the last node in the tree the animations and the particles will continue working but the while loop will at some point straight up stop because the timer doesn't get the timeout signal correct and therefore hang forever.

- If Node is placed after the "ExtraScenes" node the behaviour is a mix of the two above. First the animations and the particles will stop, and when they all are stopped the while loop will stop as a result of the timer failing.

So, to the solution provided by @Tugsav
Our thought about this is that the AudioStreamPlayer runs in the idle process and somehow is able to mess it up! To further prove that this is happening, go ahead and set one or more of the animation players in the scene to process mode physics and play the game again.
This will result in the animation players working as intended and never stop.
So the PR I linked to in the beginning makes the change to the AudioStreamPlayer, and that PR fixes this problem! But it might not be the correct solution as @reduz isn't completely convinced about the PR.
The solution is to make the AudioStreamPlayer run in physics process instead, just like the AudioStreamPlayer2D and AudioStreamPlayer3D does.
I really hope that this time around, the issue is way more clear, and that the reproduction project is enough to both prove that there is an issue and to help find a fix as this is quite a bad bug!
**Minimal reproduction project:**
<!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
[test.tar.gz](https://github.com/godotengine/godot/files/3032992/test.tar.gz)
|
non_process
|
audiostreamplayer causes unpredictable behaviour in idle process please search existing issues for potential duplicates before filing yours godot version godot stable os device including version issue description so this is a weird one first i thought it was the timer node then tugsav managed to find a possible fix to this issue but reduz isn t sure it actually makes sense now i managed to create a reproduction project but it isn t pretty but it shows that there is an error i am pretty sure that the audiostreamplayer is the problem and it causes the idle process across the game to behave very weird it stops other things running in the idle process such as particles and animations in some cases i also experienced particles and animations being sped up steps to reproduce run the project i included and keep your eyes on two things see how the four animations and the particle system stops one at a time check the log output and notice a few things error partitioner bad comparison function sorting will be broken at core sort array h the outputs should come every seconds but it will start to speed up significant and print way faster there are a few things to do in the project that will result in different behaviours the script main gd does audio play and audio stop right after eachother in a while loop with some other code if one of these lines or both are commented out this whole error doesn t occur the node with the main gd script attached node can be placed a few different places in the scene tree that will cause different behaviours if it is placed as the top node just below main the while loop runs forever but eventually all the animations and the particles will stop and the idle process runs a lot faster see log output being printed way faster than it should if node is placed as the last node in the tree the animations and the particles will continue working but the while loop will at some point straight up stop because the timer doesn t get the timeout signal 
correct and therefore hang forever if node is placed after the extrascenes node the behaviour is a mix of the two above first the animations and the particles will stop and when they all are stopped the while loop will stop as a result of the timer failing so to the solution provided by tugsav our thought about this is that the audiostreamplayer runs in the idle process and somehow is able to mess it up to further prove that this is happening go ahead and set one or more of the animation players in the scene to process mode physics and play the game again this will result in the animation players working as intended and never stop so the pr i linked to in the beginning makes the change to the audiostreamplayer and that pr fixes this problem but it might not be the correct solution as reduz isn t completely convinced about the pr the solution is to make the audiostreamplayer run in physics process instead just like the and does i really hope that this time around the issue is way more clear and that the reproduction project is enough to both prove that there is an issue and to help find a fix as this is quite a bad bug minimal reproduction project
| 0
|
111,096
| 4,461,503,622
|
IssuesEvent
|
2016-08-24 06:01:28
|
JuliaDocs/Documenter.jl
|
https://api.github.com/repos/JuliaDocs/Documenter.jl
|
closed
|
HTML contents not sorted correctly
|
Priority: High Type: Bug
|
I just tried out the HTML renderer on 0.3.0 and I found that the `@contents` blocks are not sorted as I wrote them, e.g.
```@contents
Pages = [
"algorithms.md",
"graph-types.md",
"graphs-builtin.md",
"interface.md",
]
Depth = 1
```
will show the links in the order provided with mkdocs, but it will show "graph-types", "graphs-builtin", "interface", "algorithms" with the HTML renderer.
BTW Is there any way to only show the contents from depth 2 for a page, i.e. excluding the page title as redundant?
|
1.0
|
HTML contents not sorted correctly - I just tried out the HTML renderer on 0.3.0 and I found that the `@contents` blocks are not sorted as I wrote them, e.g.
```@contents
Pages = [
"algorithms.md",
"graph-types.md",
"graphs-builtin.md",
"interface.md",
]
Depth = 1
```
will show the links in the order provided with mkdocs, but it will show "graph-types", "graphs-builtin", "interface", "algorithms" with the HTML renderer.
BTW Is there any way to only show the contents from depth 2 for a page, i.e. excluding the page title as redundant?
|
non_process
|
html contents not sorted correctly i just tried out the html renderer on and i found that the contents blocks are not sorted as i wrote them e g contents pages algorithms md graph types md graphs builtin md interface md depth will show the links in the order provided with mkdocs but it will show graph types graphs builtin interface algorithms with the html renderer btw is there any way to only show the contents from depth for a page i e excluding the page title as redundant
| 0
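The Documenter.jl record above reports `@contents` links ignoring the declared `Pages` order in the HTML renderer. The fix amounts to ranking discovered entries by their declared index rather than sorting them lexically; here is a small Python sketch of that ordering rule (illustrative only - Documenter itself is written in Julia).

```python
def order_contents(pages, discovered):
    """Return `discovered` entries in the order declared in `pages`.

    `pages` is the user's declared list (e.g. from an @contents block);
    `discovered` is whatever order the renderer happened to collect the
    files in. Entries not mentioned in `pages` keep their discovered
    order at the end, because Python's sort is stable.
    """
    rank = {name: i for i, name in enumerate(pages)}
    return sorted(discovered, key=lambda name: rank.get(name, len(pages)))
```

Sorting by declared rank instead of by filename reproduces exactly the order the user wrote.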
|
10,152
| 13,044,162,586
|
IssuesEvent
|
2020-07-29 03:47:33
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `ConnectionID` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `ConnectionID` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `ConnectionID` from TiDB -
## Description
Port the scalar function `ConnectionID` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function connectionid from tidb description port the scalar function connectionid from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
12,304
| 14,857,812,947
|
IssuesEvent
|
2021-01-18 15:55:21
|
spring-projects/spring-hateoas
|
https://api.github.com/repos/spring-projects/spring-hateoas
|
closed
|
Using @JsonValue in the type of the content of EntityModel is now broken
|
in: core process: waiting for feedback
|
Commit f3b9738b1e99af3ed1d2a97979951dc8ae2566b4 related to issue #1354 seems to have broken the handling of @JsonValue.
A very simple test to reproduce:
```
void serialisation() throws Exception {
var value = new Value("test");
var model = EntityModel.of(value);
var string = objectMapper.writeValueAsString(model);
System.out.println(string);
}
static class Value {
@JsonValue
String theValue;
Value(String value) {
this.theValue = value;
}
}
```
This will cause the following stacktrace:
```
com.fasterxml.jackson.core.JsonGenerationException: Can not write a string, expecting field name (context: Object)
at com.fasterxml.jackson.core.JsonGenerator._reportError(JsonGenerator.java:2151)
at com.fasterxml.jackson.core.json.JsonGeneratorImpl._reportCantWriteValueExpectName(JsonGeneratorImpl.java:233)
at com.fasterxml.jackson.core.json.JsonGeneratorImpl._verifyPrettyValueWrite(JsonGeneratorImpl.java:223)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator._verifyValueWrite(WriterBasedJsonGenerator.java:917)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator.writeString(WriterBasedJsonGenerator.java:400)
at com.fasterxml.jackson.databind.ser.std.StringSerializer.serialize(StringSerializer.java:41)
at com.fasterxml.jackson.databind.ser.std.JsonValueSerializer.serialize(JsonValueSerializer.java:181)
at org.springframework.hateoas.EntityModel$MapSuppressingUnwrappingSerializer.serialize(EntityModel.java:218)
at com.fasterxml.jackson.databind.ser.impl.UnwrappingBeanPropertyWriter.serializeAsField(UnwrappingBeanPropertyWriter.java:127)
at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:755)
at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:178)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:480)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:319)
at com.fasterxml.jackson.databind.ObjectMapper._writeValueAndClose(ObjectMapper.java:4407)
at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:3661)
...
```
|
1.0
|
Using @JsonValue in the type of the content of EntityModel is now broken - Commit f3b9738b1e99af3ed1d2a97979951dc8ae2566b4 related to issue #1354 seems to have broken the handling of @JsonValue.
A very simple test to reproduce:
```
void serialisation() throws Exception {
var value = new Value("test");
var model = EntityModel.of(value);
var string = objectMapper.writeValueAsString(model);
System.out.println(string);
}
static class Value {
@JsonValue
String theValue;
Value(String value) {
this.theValue = value;
}
}
```
This will cause the following stacktrace:
```
com.fasterxml.jackson.core.JsonGenerationException: Can not write a string, expecting field name (context: Object)
at com.fasterxml.jackson.core.JsonGenerator._reportError(JsonGenerator.java:2151)
at com.fasterxml.jackson.core.json.JsonGeneratorImpl._reportCantWriteValueExpectName(JsonGeneratorImpl.java:233)
at com.fasterxml.jackson.core.json.JsonGeneratorImpl._verifyPrettyValueWrite(JsonGeneratorImpl.java:223)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator._verifyValueWrite(WriterBasedJsonGenerator.java:917)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator.writeString(WriterBasedJsonGenerator.java:400)
at com.fasterxml.jackson.databind.ser.std.StringSerializer.serialize(StringSerializer.java:41)
at com.fasterxml.jackson.databind.ser.std.JsonValueSerializer.serialize(JsonValueSerializer.java:181)
at org.springframework.hateoas.EntityModel$MapSuppressingUnwrappingSerializer.serialize(EntityModel.java:218)
at com.fasterxml.jackson.databind.ser.impl.UnwrappingBeanPropertyWriter.serializeAsField(UnwrappingBeanPropertyWriter.java:127)
at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:755)
at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:178)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:480)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:319)
at com.fasterxml.jackson.databind.ObjectMapper._writeValueAndClose(ObjectMapper.java:4407)
at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:3661)
...
```
|
process
|
using jsonvalue in the type of the content of entitymodel is now broken commit related to issue seems to have broken the handling of jsonvalue a very simple test to reproduce void serialisation throws exception var value new value test var model entitymodel of value var string objectmapper writevalueasstring model system out println string static class value jsonvalue string thevalue value string value this thevalue value this will cause the following stacktrace com fasterxml jackson core jsongenerationexception can not write a string expecting field name context object at com fasterxml jackson core jsongenerator reporterror jsongenerator java at com fasterxml jackson core json jsongeneratorimpl reportcantwritevalueexpectname jsongeneratorimpl java at com fasterxml jackson core json jsongeneratorimpl verifyprettyvaluewrite jsongeneratorimpl java at com fasterxml jackson core json writerbasedjsongenerator verifyvaluewrite writerbasedjsongenerator java at com fasterxml jackson core json writerbasedjsongenerator writestring writerbasedjsongenerator java at com fasterxml jackson databind ser std stringserializer serialize stringserializer java at com fasterxml jackson databind ser std jsonvalueserializer serialize jsonvalueserializer java at org springframework hateoas entitymodel mapsuppressingunwrappingserializer serialize entitymodel java at com fasterxml jackson databind ser impl unwrappingbeanpropertywriter serializeasfield unwrappingbeanpropertywriter java at com fasterxml jackson databind ser std beanserializerbase serializefields beanserializerbase java at com fasterxml jackson databind ser beanserializer serialize beanserializer java at com fasterxml jackson databind ser defaultserializerprovider serialize defaultserializerprovider java at com fasterxml jackson databind ser defaultserializerprovider serializevalue defaultserializerprovider java at com fasterxml jackson databind objectmapper writevalueandclose objectmapper java at com fasterxml jackson databind 
objectmapper writevalueasstring objectmapper java
| 1
|
89,399
| 8,202,634,318
|
IssuesEvent
|
2018-09-02 12:02:19
|
humera987/HumTestData
|
https://api.github.com/repos/humera987/HumTestData
|
opened
|
Humera_Test_Proj : team_user_signup_invalid
|
Humera_Test_Proj
|
Project : Humera_Test_Proj
Job : Stg
Env : Stg
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 500
Headers : {}
Endpoint : http://13.57.51.56/users/team-sign-up
Request :
{
"name" : "fn1 ln",
"email" : "test@",
"password" : "12345678",
"company" : "123"
}
Response :
I/O error on POST request for "http://13.57.51.56/users/team-sign-up": Timeout waiting for connection from pool; nested exception is org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
Logs :
Assertion [@Response.errors == true] failed, expected value [true] but found []
--- FX Bot ---
|
1.0
|
Humera_Test_Proj : team_user_signup_invalid - Project : Humera_Test_Proj
Job : Stg
Env : Stg
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 500
Headers : {}
Endpoint : http://13.57.51.56/users/team-sign-up
Request :
{
"name" : "fn1 ln",
"email" : "test@",
"password" : "12345678",
"company" : "123"
}
Response :
I/O error on POST request for "http://13.57.51.56/users/team-sign-up": Timeout waiting for connection from pool; nested exception is org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
Logs :
Assertion [@Response.errors == true] failed, expected value [true] but found []
--- FX Bot ---
|
non_process
|
humera test proj team user signup invalid project humera test proj job stg env stg region fxlabs us west result fail status code headers endpoint request name ln email test password company response i o error on post request for timeout waiting for connection from pool nested exception is org apache http conn connectionpooltimeoutexception timeout waiting for connection from pool logs assertion failed expected value but found fx bot
| 0
|
663,862
| 22,209,266,396
|
IssuesEvent
|
2022-06-07 17:34:49
|
gorilla-devs/ferium
|
https://api.github.com/repos/gorilla-devs/ferium
|
closed
|
Dependencies should be properly specified to support cargo's auto update
|
bug priority
|
## Description
The installation using cargo fails.
## To Reproduce
Type `cargo install ferium`.
Wait for the compilation to fail.
## Software version
- OS Linux PopOS 22.04 x64 5.17.5-76051705-generic
- Ferium Version 4.1.1
- cargo 1.61.0 (a028ae4 2022-04-29)
- rustup 1.24.3 (ce5817a94 2021-05-31)
- rustc 1.61.0 (fe5b13d68 2022-05-18)
## Additional information
Log:
```
arthur@arthur-linux:~$ cargo install ferium
Updating crates.io index
Installing ferium v4.1.1
Compiling libc v0.2.126
Compiling proc-macro2 v1.0.39
Compiling unicode-ident v1.0.0
Compiling version_check v0.9.4
Compiling syn v1.0.96
Compiling heck v0.4.0
Compiling pkg-config v0.3.25
Compiling cfg-if v1.0.0
Compiling memchr v2.5.0
Compiling serde v1.0.137
Compiling smallvec v1.8.0
Compiling version-compare v0.1.0
Compiling once_cell v1.12.0
Compiling autocfg v1.1.0
Compiling log v0.4.17
Compiling pin-project-lite v0.2.9
Compiling futures-core v0.3.21
Compiling lazy_static v1.4.0
Compiling itoa v1.0.2
Compiling typenum v1.15.0
Compiling serde_derive v1.0.137
Compiling bytes v1.1.0
Compiling untrusted v0.7.1
Compiling spin v0.5.2
Compiling slab v0.4.6
Compiling pin-utils v0.1.0
Compiling hashbrown v0.11.2
Compiling fnv v1.0.7
Compiling futures-task v0.3.21
Compiling fastrand v1.7.0
Compiling futures-util v0.3.21
Compiling rustls v0.20.6
Compiling futures-channel v0.3.21
Compiling matches v0.1.9
Compiling percent-encoding v2.1.0
Compiling base64 v0.13.0
Compiling tinyvec_macros v0.1.0
Compiling httparse v1.7.1
Compiling futures-sink v0.3.21
Compiling httpdate v1.0.2
Compiling try-lock v0.2.3
Compiling subtle v2.4.1
Compiling cache-padded v1.2.0
Compiling serde_json v1.0.81
Compiling ryu v1.0.10
Compiling tower-service v0.3.1
Compiling encoding_rs v0.8.31
Compiling futures-io v0.3.21
Compiling parking v2.0.0
Compiling waker-fn v1.1.0
Compiling regex-syntax v0.6.26
Compiling unicode-bidi v0.3.8
Compiling adler v1.0.2
Compiling mime v0.3.16
Compiling cpufeatures v0.2.2
Compiling num_threads v0.1.6
Compiling event-listener v2.5.2
Compiling time-macros v0.2.4
Compiling async-task v4.2.0
Compiling gimli v0.26.1
Compiling zstd-safe v4.1.6+zstd.1.5.2
Compiling ipnet v2.5.0
Compiling doc-comment v0.3.3
Compiling crc32fast v1.3.2
Compiling base64ct v1.0.1
Compiling crossbeam-utils v0.8.8
Compiling async-trait v0.1.56
Compiling zeroize v1.5.5
Compiling os_str_bytes v6.1.0
Compiling hyperx v1.4.0
Compiling rand_core v0.6.3
Compiling unicode-width v0.1.9
Compiling rustc-demangle v0.1.21
Compiling atomic-waker v1.0.0
Compiling bitflags v1.3.2
Compiling termcolor v1.1.3
Compiling remove_dir_all v0.5.3
Compiling strsim v0.10.0
Compiling language-tags v0.3.2
Compiling opaque-debug v0.3.0
Compiling textwrap v0.15.0
Compiling cty v0.2.2
Compiling rfd v0.8.3
Compiling arc-swap v1.5.0
Compiling murmur2 v0.1.0
Compiling constant_time_eq v0.1.5
Compiling semver v1.0.9
Compiling byteorder v1.4.3
Compiling anyhow v1.0.57
Compiling home v0.5.3
Compiling either v1.6.1
Compiling number_prefix v0.4.0
Compiling urlencoding v2.1.0
Compiling fs_extra v1.2.0
Compiling value-bag v1.0.0-alpha.9
Compiling generic-array v0.14.5
Compiling proc-macro-error-attr v1.0.4
Compiling proc-macro-error v1.0.4
Compiling unicase v2.6.0
Compiling cfg-expr v0.10.3
Compiling tracing-core v0.1.26
Compiling indexmap v1.8.2
Compiling num-traits v0.2.15
Compiling num-integer v0.1.45
Compiling num-bigint v0.4.3
Compiling http v0.2.7
Compiling form_urlencoded v1.0.1
Compiling tinyvec v1.6.0
Compiling rustls-pemfile v0.3.0
Compiling pem v1.0.2
Compiling concurrent-queue v1.2.2
Compiling miniz_oxide v0.5.3
Compiling async-lock v2.5.0
Compiling secrecy v0.8.0
Compiling password-hash v0.3.2
Compiling clap_lex v0.2.0
Compiling addr2line v0.17.0
Compiling raw-window-handle v0.4.3
Compiling itertools v0.10.3
Compiling http-body v0.4.5
Compiling unicode-normalization v0.1.19
Compiling aho-corasick v0.7.18
Compiling object v0.28.4
Compiling socket2 v0.4.4
Compiling num_cpus v1.13.1
Compiling time v0.1.44
Compiling time v0.3.9
Compiling terminal_size v0.1.17
Compiling atty v0.2.14
Compiling tempfile v3.3.0
Compiling futures-lite v1.12.0
Compiling async-channel v1.6.1
Compiling jobserver v0.1.24
Compiling quote v1.0.18
Compiling flate2 v1.0.24
Compiling idna v0.2.3
Compiling regex v1.5.6
Compiling toml v0.5.9
Compiling colored v2.0.0
Compiling cc v1.0.73
Compiling blocking v1.2.0
Compiling async-executor v1.4.1
Compiling crypto-common v0.1.3
Compiling block-buffer v0.10.2
Compiling cipher v0.3.0
Compiling size v0.1.2
Compiling console v0.15.0
Compiling system-deps v6.0.2
Compiling ring v0.16.20
Compiling zstd-sys v1.6.3+zstd.1.5.2
Compiling backtrace v0.3.65
Compiling bzip2-sys v0.1.11+1.0.8
Compiling digest v0.10.3
Compiling aes v0.7.5
Compiling dialoguer v0.10.1
Compiling indicatif v0.16.2
Compiling glib-sys v0.15.10
Compiling gobject-sys v0.15.10
Compiling gio-sys v0.15.10
Compiling atk-sys v0.15.1
Compiling gdk-sys v0.15.1
Compiling cairo-sys-rs v0.15.1
Compiling gdk-pixbuf-sys v0.15.10
Compiling pango-sys v0.15.10
Compiling gtk-sys v0.15.3
Compiling hmac v0.12.1
Compiling sha2 v0.10.2
Compiling sha1 v0.10.1
Compiling ctor v0.1.22
Compiling tokio-macros v1.7.0
Compiling tracing-attributes v0.1.21
Compiling thiserror-impl v1.0.31
Compiling snafu-derive v0.7.1
Compiling lazy-regex-proc_macros v2.3.0
Compiling serde_repr v0.1.8
Compiling async-attributes v1.1.2
Compiling pbkdf2 v0.10.1
Compiling clap_derive v3.1.18
Compiling tracing v0.1.34
Compiling lazy-regex v2.3.0
Compiling thiserror v1.0.31
Compiling simple_asn1 v0.6.2
Compiling snafu v0.7.1
Compiling webpki v0.22.0
Compiling sct v0.7.0
Compiling mio v0.8.3
Compiling want v0.3.0
Compiling polling v2.2.0
Compiling kv-log-macro v1.0.7
Compiling webpki-roots v0.22.3
Compiling clap v3.1.18
Compiling async-io v1.7.0
Compiling tokio v1.19.0
Compiling async-global-executor v2.1.0
Compiling clap_complete v3.1.4
Compiling async-std v1.11.0
Compiling serde_urlencoded v0.7.1
Compiling url v2.2.2
Compiling chrono v0.4.19
Compiling serde_path_to_error v0.1.7
Compiling tokio-util v0.7.2
Compiling tokio-rustls v0.23.4
Compiling jsonwebtoken v8.1.0
Compiling online v3.0.1
Compiling h2 v0.3.13
Compiling hyper v0.14.19
Compiling hyper-rustls v0.23.0
Compiling reqwest v0.11.10
Compiling bzip2 v0.4.3
Compiling octocrab v0.16.0
Compiling furse v1.3.0
Compiling ferinth v2.2.2
Compiling zstd v0.10.2+zstd.1.5.2
Compiling zip v0.6.2
Compiling libium v1.18.0
error[E0308]: mismatched types
--> /home/arthur/.cargo/registry/src/github.com-1ecc6299db9ec823/libium-1.18.0/src/upgrade/mod.rs:55:24
|
55 | size: Some(file.file_length),
| ^^^^^^^^^^^^^^^^ expected `u64`, found `usize`
|
help: you can convert a `usize` to a `u64` and panic if the converted value doesn't fit
|
55 | size: Some(file.file_length.try_into().unwrap()),
| ++++++++++++++++++++
For more information about this error, try `rustc --explain E0308`.
error: could not compile `libium` due to previous error
warning: build failed, waiting for other jobs to finish...
error: failed to compile `ferium v4.1.1`, intermediate artifacts can be found at `/tmp/cargo-installig0oZu`
```
|
1.0
|
Dependencies should be properly specified to support cargo's auto update - ## Description
The installation using cargo fails.
## To Reproduce
Type `cargo install ferium`.
Wait for the compilation to fail.
## Software version
- OS Linux PopOS 22.04 x64 5.17.5-76051705-generic
- Ferium Version 4.1.1
- cargo 1.61.0 (a028ae4 2022-04-29)
- rustup 1.24.3 (ce5817a94 2021-05-31)
- rustc 1.61.0 (fe5b13d68 2022-05-18)
## Additional information
Log:
```
arthur@arthur-linux:~$ cargo install ferium
Updating crates.io index
Installing ferium v4.1.1
Compiling libc v0.2.126
Compiling proc-macro2 v1.0.39
Compiling unicode-ident v1.0.0
Compiling version_check v0.9.4
Compiling syn v1.0.96
Compiling heck v0.4.0
Compiling pkg-config v0.3.25
Compiling cfg-if v1.0.0
Compiling memchr v2.5.0
Compiling serde v1.0.137
Compiling smallvec v1.8.0
Compiling version-compare v0.1.0
Compiling once_cell v1.12.0
Compiling autocfg v1.1.0
Compiling log v0.4.17
Compiling pin-project-lite v0.2.9
Compiling futures-core v0.3.21
Compiling lazy_static v1.4.0
Compiling itoa v1.0.2
Compiling typenum v1.15.0
Compiling serde_derive v1.0.137
Compiling bytes v1.1.0
Compiling untrusted v0.7.1
Compiling spin v0.5.2
Compiling slab v0.4.6
Compiling pin-utils v0.1.0
Compiling hashbrown v0.11.2
Compiling fnv v1.0.7
Compiling futures-task v0.3.21
Compiling fastrand v1.7.0
Compiling futures-util v0.3.21
Compiling rustls v0.20.6
Compiling futures-channel v0.3.21
Compiling matches v0.1.9
Compiling percent-encoding v2.1.0
Compiling base64 v0.13.0
Compiling tinyvec_macros v0.1.0
Compiling httparse v1.7.1
Compiling futures-sink v0.3.21
Compiling httpdate v1.0.2
Compiling try-lock v0.2.3
Compiling subtle v2.4.1
Compiling cache-padded v1.2.0
Compiling serde_json v1.0.81
Compiling ryu v1.0.10
Compiling tower-service v0.3.1
Compiling encoding_rs v0.8.31
Compiling futures-io v0.3.21
Compiling parking v2.0.0
Compiling waker-fn v1.1.0
Compiling regex-syntax v0.6.26
Compiling unicode-bidi v0.3.8
Compiling adler v1.0.2
Compiling mime v0.3.16
Compiling cpufeatures v0.2.2
Compiling num_threads v0.1.6
Compiling event-listener v2.5.2
Compiling time-macros v0.2.4
Compiling async-task v4.2.0
Compiling gimli v0.26.1
Compiling zstd-safe v4.1.6+zstd.1.5.2
Compiling ipnet v2.5.0
Compiling doc-comment v0.3.3
Compiling crc32fast v1.3.2
Compiling base64ct v1.0.1
Compiling crossbeam-utils v0.8.8
Compiling async-trait v0.1.56
Compiling zeroize v1.5.5
Compiling os_str_bytes v6.1.0
Compiling hyperx v1.4.0
Compiling rand_core v0.6.3
Compiling unicode-width v0.1.9
Compiling rustc-demangle v0.1.21
Compiling atomic-waker v1.0.0
Compiling bitflags v1.3.2
Compiling termcolor v1.1.3
Compiling remove_dir_all v0.5.3
Compiling strsim v0.10.0
Compiling language-tags v0.3.2
Compiling opaque-debug v0.3.0
Compiling textwrap v0.15.0
Compiling cty v0.2.2
Compiling rfd v0.8.3
Compiling arc-swap v1.5.0
Compiling murmur2 v0.1.0
Compiling constant_time_eq v0.1.5
Compiling semver v1.0.9
Compiling byteorder v1.4.3
Compiling anyhow v1.0.57
Compiling home v0.5.3
Compiling either v1.6.1
Compiling number_prefix v0.4.0
Compiling urlencoding v2.1.0
Compiling fs_extra v1.2.0
Compiling value-bag v1.0.0-alpha.9
Compiling generic-array v0.14.5
Compiling proc-macro-error-attr v1.0.4
Compiling proc-macro-error v1.0.4
Compiling unicase v2.6.0
Compiling cfg-expr v0.10.3
Compiling tracing-core v0.1.26
Compiling indexmap v1.8.2
Compiling num-traits v0.2.15
Compiling num-integer v0.1.45
Compiling num-bigint v0.4.3
Compiling http v0.2.7
Compiling form_urlencoded v1.0.1
Compiling tinyvec v1.6.0
Compiling rustls-pemfile v0.3.0
Compiling pem v1.0.2
Compiling concurrent-queue v1.2.2
Compiling miniz_oxide v0.5.3
Compiling async-lock v2.5.0
Compiling secrecy v0.8.0
Compiling password-hash v0.3.2
Compiling clap_lex v0.2.0
Compiling addr2line v0.17.0
Compiling raw-window-handle v0.4.3
Compiling itertools v0.10.3
Compiling http-body v0.4.5
Compiling unicode-normalization v0.1.19
Compiling aho-corasick v0.7.18
Compiling object v0.28.4
Compiling socket2 v0.4.4
Compiling num_cpus v1.13.1
Compiling time v0.1.44
Compiling time v0.3.9
Compiling terminal_size v0.1.17
Compiling atty v0.2.14
Compiling tempfile v3.3.0
Compiling futures-lite v1.12.0
Compiling async-channel v1.6.1
Compiling jobserver v0.1.24
Compiling quote v1.0.18
Compiling flate2 v1.0.24
Compiling idna v0.2.3
Compiling regex v1.5.6
Compiling toml v0.5.9
Compiling colored v2.0.0
Compiling cc v1.0.73
Compiling blocking v1.2.0
Compiling async-executor v1.4.1
Compiling crypto-common v0.1.3
Compiling block-buffer v0.10.2
Compiling cipher v0.3.0
Compiling size v0.1.2
Compiling console v0.15.0
Compiling system-deps v6.0.2
Compiling ring v0.16.20
Compiling zstd-sys v1.6.3+zstd.1.5.2
Compiling backtrace v0.3.65
Compiling bzip2-sys v0.1.11+1.0.8
Compiling digest v0.10.3
Compiling aes v0.7.5
Compiling dialoguer v0.10.1
Compiling indicatif v0.16.2
Compiling glib-sys v0.15.10
Compiling gobject-sys v0.15.10
Compiling gio-sys v0.15.10
Compiling atk-sys v0.15.1
Compiling gdk-sys v0.15.1
Compiling cairo-sys-rs v0.15.1
Compiling gdk-pixbuf-sys v0.15.10
Compiling pango-sys v0.15.10
Compiling gtk-sys v0.15.3
Compiling hmac v0.12.1
Compiling sha2 v0.10.2
Compiling sha1 v0.10.1
Compiling ctor v0.1.22
Compiling tokio-macros v1.7.0
Compiling tracing-attributes v0.1.21
Compiling thiserror-impl v1.0.31
Compiling snafu-derive v0.7.1
Compiling lazy-regex-proc_macros v2.3.0
Compiling serde_repr v0.1.8
Compiling async-attributes v1.1.2
Compiling pbkdf2 v0.10.1
Compiling clap_derive v3.1.18
Compiling tracing v0.1.34
Compiling lazy-regex v2.3.0
Compiling thiserror v1.0.31
Compiling simple_asn1 v0.6.2
Compiling snafu v0.7.1
Compiling webpki v0.22.0
Compiling sct v0.7.0
Compiling mio v0.8.3
Compiling want v0.3.0
Compiling polling v2.2.0
Compiling kv-log-macro v1.0.7
Compiling webpki-roots v0.22.3
Compiling clap v3.1.18
Compiling async-io v1.7.0
Compiling tokio v1.19.0
Compiling async-global-executor v2.1.0
Compiling clap_complete v3.1.4
Compiling async-std v1.11.0
Compiling serde_urlencoded v0.7.1
Compiling url v2.2.2
Compiling chrono v0.4.19
Compiling serde_path_to_error v0.1.7
Compiling tokio-util v0.7.2
Compiling tokio-rustls v0.23.4
Compiling jsonwebtoken v8.1.0
Compiling online v3.0.1
Compiling h2 v0.3.13
Compiling hyper v0.14.19
Compiling hyper-rustls v0.23.0
Compiling reqwest v0.11.10
Compiling bzip2 v0.4.3
Compiling octocrab v0.16.0
Compiling furse v1.3.0
Compiling ferinth v2.2.2
Compiling zstd v0.10.2+zstd.1.5.2
Compiling zip v0.6.2
Compiling libium v1.18.0
error[E0308]: mismatched types
--> /home/arthur/.cargo/registry/src/github.com-1ecc6299db9ec823/libium-1.18.0/src/upgrade/mod.rs:55:24
|
55 | size: Some(file.file_length),
| ^^^^^^^^^^^^^^^^ expected `u64`, found `usize`
|
help: you can convert a `usize` to a `u64` and panic if the converted value doesn't fit
|
55 | size: Some(file.file_length.try_into().unwrap()),
| ++++++++++++++++++++
For more information about this error, try `rustc --explain E0308`.
error: could not compile `libium` due to previous error
warning: build failed, waiting for other jobs to finish...
error: failed to compile `ferium v4.1.1`, intermediate artifacts can be found at `/tmp/cargo-installig0oZu`
```
|
non_process
|
dependencies should be properly specified to support cargo s auto update description the installation using cargo fails to reproduce type cargo install ferium wait for the compilation to fail software version os linux popos generic ferium version cargo rustup rustc additional information log arthur arthur linux cargo install ferium updating crates io index installing ferium compiling libc compiling proc compiling unicode ident compiling version check compiling syn compiling heck compiling pkg config compiling cfg if compiling memchr compiling serde compiling smallvec compiling version compare compiling once cell compiling autocfg compiling log compiling pin project lite compiling futures core compiling lazy static compiling itoa compiling typenum compiling serde derive compiling bytes compiling untrusted compiling spin compiling slab compiling pin utils compiling hashbrown compiling fnv compiling futures task compiling fastrand compiling futures util compiling rustls compiling futures channel compiling matches compiling percent encoding compiling compiling tinyvec macros compiling httparse compiling futures sink compiling httpdate compiling try lock compiling subtle compiling cache padded compiling serde json compiling ryu compiling tower service compiling encoding rs compiling futures io compiling parking compiling waker fn compiling regex syntax compiling unicode bidi compiling adler compiling mime compiling cpufeatures compiling num threads compiling event listener compiling time macros compiling async task compiling gimli compiling zstd safe zstd compiling ipnet compiling doc comment compiling compiling compiling crossbeam utils compiling async trait compiling zeroize compiling os str bytes compiling hyperx compiling rand core compiling unicode width compiling rustc demangle compiling atomic waker compiling bitflags compiling termcolor compiling remove dir all compiling strsim compiling language tags compiling opaque debug compiling textwrap compiling cty 
compiling rfd compiling arc swap compiling compiling constant time eq compiling semver compiling byteorder compiling anyhow compiling home compiling either compiling number prefix compiling urlencoding compiling fs extra compiling value bag alpha compiling generic array compiling proc macro error attr compiling proc macro error compiling unicase compiling cfg expr compiling tracing core compiling indexmap compiling num traits compiling num integer compiling num bigint compiling http compiling form urlencoded compiling tinyvec compiling rustls pemfile compiling pem compiling concurrent queue compiling miniz oxide compiling async lock compiling secrecy compiling password hash compiling clap lex compiling compiling raw window handle compiling itertools compiling http body compiling unicode normalization compiling aho corasick compiling object compiling compiling num cpus compiling time compiling time compiling terminal size compiling atty compiling tempfile compiling futures lite compiling async channel compiling jobserver compiling quote compiling compiling idna compiling regex compiling toml compiling colored compiling cc compiling blocking compiling async executor compiling crypto common compiling block buffer compiling cipher compiling size compiling console compiling system deps compiling ring compiling zstd sys zstd compiling backtrace compiling sys compiling digest compiling aes compiling dialoguer compiling indicatif compiling glib sys compiling gobject sys compiling gio sys compiling atk sys compiling gdk sys compiling cairo sys rs compiling gdk pixbuf sys compiling pango sys compiling gtk sys compiling hmac compiling compiling compiling ctor compiling tokio macros compiling tracing attributes compiling thiserror impl compiling snafu derive compiling lazy regex proc macros compiling serde repr compiling async attributes compiling compiling clap derive compiling tracing compiling lazy regex compiling thiserror compiling simple compiling snafu compiling webpki 
compiling sct compiling mio compiling want compiling polling compiling kv log macro compiling webpki roots compiling clap compiling async io compiling tokio compiling async global executor compiling clap complete compiling async std compiling serde urlencoded compiling url compiling chrono compiling serde path to error compiling tokio util compiling tokio rustls compiling jsonwebtoken compiling online compiling compiling hyper compiling hyper rustls compiling reqwest compiling compiling octocrab compiling furse compiling ferinth compiling zstd zstd compiling zip compiling libium error mismatched types home arthur cargo registry src github com libium src upgrade mod rs size some file file length expected found usize help you can convert a usize to a and panic if the converted value doesn t fit size some file file length try into unwrap for more information about this error try rustc explain error could not compile libium due to previous error warning build failed waiting for other jobs to finish error failed to compile ferium intermediate artifacts can be found at tmp cargo
| 0
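The E0308 failure in the record above comes from assigning a `usize` field (`file.file_length`) to a slot typed `u64`, and the compiler log itself suggests the `try_into().unwrap()` fix. That conversion can be sketched in isolation; the `Downloadable` struct and field names here are stand-ins mirroring the log, not libium's actual types:

```rust
use std::convert::TryInto;

// Stand-in for the file metadata in the log above; not libium's real struct.
struct Downloadable {
    size: Option<u64>,
}

fn main() {
    let file_length: usize = 1024; // `usize` width depends on the target

    // `usize` does not coerce to `u64` implicitly. `try_into()` performs a
    // checked conversion; `unwrap()` panics only if the value does not fit,
    // which cannot happen for usize -> u64 on 32- or 64-bit targets.
    let d = Downloadable {
        size: Some(file_length.try_into().unwrap()),
    };

    assert_eq!(d.size, Some(1024u64));
    println!("{}", d.size.unwrap());
}
```

The alternative `file_length as u64` would also compile, but `try_into()` is what rustc proposes because it stays correct even on hypothetical targets where `usize` is wider than 64 bits.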
|
8,765
| 11,883,807,753
|
IssuesEvent
|
2020-03-27 16:33:38
|
MicrosoftDocs/vsts-docs
|
https://api.github.com/repos/MicrosoftDocs/vsts-docs
|
closed
|
Add description field to Parameters
|
Pri1 devops-cicd-process/tech devops/prod
|
It would be nice to have a description field in the parameters. Especially when using object type, adding a description with a sample format of the parameter would be useful to the user.
It could be implemented as an information mark, such as the one in the parameters when creating a variable group, for instance.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6724abea-bbdc-bf66-ed5e-3214fa6c3e66
* Version Independent ID: 4f8dab21-3f0e-da32-cc0e-1d85c13c0065
* Content: [Templates - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#parameters)
* Content Source: [docs/pipelines/process/templates.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/templates.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Add description field to Parameters - It would be nice to have a description field in the parameters. Especially when using object type, adding a description with a sample format of the parameter would be useful to the user.
It could be implemented as an information mark, such as the one in the parameters when creating a variable group, for instance.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6724abea-bbdc-bf66-ed5e-3214fa6c3e66
* Version Independent ID: 4f8dab21-3f0e-da32-cc0e-1d85c13c0065
* Content: [Templates - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#parameters)
* Content Source: [docs/pipelines/process/templates.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/templates.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
add description field to parameters it would be nice to have a description field in the parameters especially when using object type adding a description with a sample format of the parameter would be useful to the user it could be implemented as an information mark such as the one in the parameters when creating a variable group for instance document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id bbdc version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
9,792
| 12,806,272,032
|
IssuesEvent
|
2020-07-03 09:08:17
|
prisma/prisma-examples
|
https://api.github.com/repos/prisma/prisma-examples
|
opened
|
Projects from `/deployment-platforms` must also be available as normal, local projects that are fully tested
|
kind/improvement process/candidate
|
Currently the projects in `deployment-platforms` are kind of a blind spot as we do not have tests for them that are automatically run. Succeeding tests on PRs to these projects give a false impression then. We will try to fix this via https://github.com/prisma/prisma-examples/issues/1840, but that is a bigger undertaking.
What we can do until then is to at least test the _code_ we suggest to use on these platforms by having a _local_ version in javascript or typescript that uses the exact same project code.
Example: https://github.com/prisma/prisma-examples/tree/master/deployment-platforms/vercel-graphql currently uses Nexus and Nexus Prisma Plugin in a way no other project does, which could lead to a broken project without us knowing it:

|
1.0
|
Projects from `/deployment-platforms` must also be available as normal, local projects that are fully tested - Currently the projects in `deployment-platforms` are kind of a blind spot as we do not have tests for them that are automatically run. Succeeding tests on PRs to these projects give a false impression then. We will try to fix this via https://github.com/prisma/prisma-examples/issues/1840, but that is a bigger undertaking.
What we can do until then is to at least test the _code_ we suggest to use on these platforms by having a _local_ version in javascript or typescript that uses the exact same project code.
Example: https://github.com/prisma/prisma-examples/tree/master/deployment-platforms/vercel-graphql currently uses Nexus and Nexus Prisma Plugin in a way no other project does, which could lead to a broken project without us knowing it:

|
process
|
projects from deployment platforms must also be available as normal local projects that are fully tested currently the projects in deployment platforms are kind of a blind spot as we do not have tests for them that are automatically run succeeding tests on prs to these projects give a false impression then we will try to fix this via but that is a bigger undertaking what we can do until then is to at least test the code we suggest to use on these platforms by having a local version in javascript or typescript that uses the exact same project code example currently uses nexus and nexus prisma plugin in a way no other project does which could lead to a broken project without us knowing it
| 1
|
141,763
| 12,977,719,177
|
IssuesEvent
|
2020-07-21 21:12:24
|
DeepRegNet/DeepReg
|
https://api.github.com/repos/DeepRegNet/DeepReg
|
closed
|
Test + docs: model/network/ddf.py
|
documentation tests
|
# Issue description
Unit tests for `deepreg/model/network/ddf_dvf.py`, in pytest style.
- func `build_ddf_dvf_model`
## Type of Issue
Please delete options that are not relevant.
- [x] Documentation update
- [x] Test request
- [x] Linting request
#### What's the expected result?
Unit tests for func input outputs, lint and docs.
|
1.0
|
Test + docs: model/network/ddf.py - # Issue description
Unit tests for `deepreg/model/network/ddf_dvf.py`, in pytest style.
- func `build_ddf_dvf_model`
## Type of Issue
Please delete options that are not relevant.
- [x] Documentation update
- [x] Test request
- [x] Linting request
#### What's the expected result?
Unit tests for func input outputs, lint and docs.
|
non_process
|
test docs model network ddf py issue description unit tests for deepreg model network ddf dvf py in pytest style func build ddf dvf model type of issue please delete options that are not relevant documentation update test request linting request what s the expected result unit tests for func input outputs lint and docs
| 0
|
18,323
| 24,441,479,091
|
IssuesEvent
|
2022-10-06 14:51:34
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
[FALSE-POSITIVE?] js.pusher.com
|
whitelisting process
|
**Domains or links**
- `js.pusher.com`
**How did you discover your web site or domain was listed here?**
- Reported by another user.
**Have you requested removal from other sources?**
- No
|
1.0
|
[FALSE-POSITIVE?] js.pusher.com - **Domains or links**
- `js.pusher.com`
**How did you discover your web site or domain was listed here?**
- Reported by another user.
**Have you requested removal from other sources?**
- No
|
process
|
js pusher com domains or links js pusher com how did you discover your web site or domain was listed here reported by another user have you requested removal from other sources no
| 1
|
32,993
| 6,148,635,952
|
IssuesEvent
|
2017-06-27 18:16:30
|
vega/vega-lite
|
https://api.github.com/repos/vega/vega-lite
|
closed
|
Overhaul Docs
|
Documentation
|
Our docs css and markup have become riddled with tack-on solutions to individual problems. Now that we know what it looks like, and know how we want to extend it, I think it's worth rewriting following some guidelines.
I think these changes can be accomplished without losing any simplicity while making significant gains in manageability, debugging, and growth.
I suggest:
- SASS or postcss
- [BEM](http://csswizardry.com/2013/01/mindbemding-getting-your-head-round-bem-syntax/)
|
1.0
|
Overhaul Docs - Our docs css and markup have become riddled with tack-on solutions to individual problems. Now that we know what it looks like, and know how we want to extend it, I think it's worth rewriting following some guidelines.
I think these changes can be accomplished without losing any simplicity while making significant gains in manageability, debugging, and growth.
I suggest:
- SASS or postcss
- [BEM](http://csswizardry.com/2013/01/mindbemding-getting-your-head-round-bem-syntax/)
|
non_process
|
overhaul docs our docs css and markup has become riddled with tack on solutions to individual problems now that we know what it looks like and know how we want to extend it i think it s worth rewriting following some guidelines i think these changes can be accomplished without losing any simplicity while making significant gains in manageability debugging and growth i suggest sass or postcss
| 0
|
8,303
| 11,463,316,899
|
IssuesEvent
|
2020-02-07 15:47:11
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
opened
|
GRPC Pinger
|
P1 enhancement grpc process
|
**Problem**
We need a script that can ping the GRPC API to retrieve some messages and validate that it is working properly.
**Solution**
The script should:
- Take as input a topic ID and how many messages it should consume (default 3)
- Download grpcurl and install it to `/usr/local/bin/grpcurl` if not present
- Invoke subscribeTopic with epoch and limit
- Grep sequence number from output and validate it matches expected
- If unexpected, restart grpc and sleep a long period to ensure grpc started (120s?)
- If expected, sleep 5s
This script should be a systemd service and added to deploy.sh
**Alternatives**
- Add this logic to custom health check that runs when health endpoint invoked, but this wouldn't restart just alert
**Additional Context**
|
1.0
|
GRPC Pinger - **Problem**
We need a script that can ping the GRPC API to retrieve some messages and validate that it is working properly.
**Solution**
The script should:
- Take as input a topic ID and how many messages it should consume (default 3)
- Download grpcurl and install it to `/usr/local/bin/grpcurl` if not present
- Invoke subscribeTopic with epoch and limit
- Grep sequence number from output and validate it matches expected
- If unexpected, restart grpc and sleep a long period to ensure grpc started (120s?)
- If expected, sleep 5s
This script should be a systemd service and added to deploy.sh
**Alternatives**
- Add this logic to custom health check that runs when health endpoint invoked, but this wouldn't restart just alert
**Additional Context**
|
process
|
grpc pinger problem we need a script that can ping the grpc api to retrieve some messages and validate that it is working properly solution the script should take as input a topic id and how many messages it should consume default download grpcurl and install it to usr local bin grpcurl if not present invoke subscribetopic with epoch and limit grep sequence number from output and validate it matches expected if unexpected restart grpc and sleep a long period to ensure grpc started if expected sleep this script should be a systemd service and added to deploy sh alternatives add this logic to custom health check that runs when health endpoint invoked but this wouldn t restart just alert additional context
| 1
|
255,738
| 19,324,136,989
|
IssuesEvent
|
2021-12-14 09:32:54
|
lab-antwerp-1/home
|
https://api.github.com/repos/lab-antwerp-1/home
|
closed
|
vocabulary; linting
|
documentation deliverable vocabulary
|
# update vocabulary/javascript/js-vocabulary.md to include 'linting'
## Content Checklist
- [x] This contribution is related to the module's objectives.
- [x] I have looked through the current repo and open PRs
- [x] I am not duplicating an existing file
- [x] I am not duplicating another person's PR
- [x] I am sure this content is in the right place
|
1.0
|
vocabulary; linting - # update vocabulary/javascript/js-vocabulary.md to include 'linting'
## Content Checklist
- [x] This contribution is related to the module's objectives.
- [x] I have looked through the current repo and open PRs
- [x] I am not duplicating an existing file
- [x] I am not duplicating another person's PR
- [x] I am sure this content is in the right place
|
non_process
|
vocabulary linting update vocabulary javascript js vocabulary md to include linting content checklist this contribution is related to the module s objectives i have looked through the current repo and open prs i am not duplicating an existing file i am not duplicating another person s pr i am sure this content is in the right place
| 0
|
7,970
| 11,156,967,237
|
IssuesEvent
|
2019-12-25 09:58:43
|
bisq-network/bisq
|
https://api.github.com/repos/bisq-network/bisq
|
closed
|
Bisq Froze at Step 2 During Trades
|
an:investigation in:trade-process was:dropped
|
After updating to Bisq 1.1.6 I had two trades in locked status.
Neither trade was able to progress past the second step on my terminal, whilst the buyers were able to get to send their fiat.
I've attached my log file as suggested by the Arbitrator.
The two trades in question are... 81257 and WX92ZAFU
[Bisq Log.zip](https://github.com/bisq-network/bisq/files/3622458/Bisq.Log.zip)
|
1.0
|
Bisq Froze at Step 2 During Trades - After updating to Bisq 1.1.6 I had two trades in locked status.
Neither trade was able to progress past the second step on my terminal, whilst the buyers were able to get to send their fiat.
I've attached my log file as suggested by the Arbitrator.
The two trades in question are... 81257 and WX92ZAFU
[Bisq Log.zip](https://github.com/bisq-network/bisq/files/3622458/Bisq.Log.zip)
|
process
|
bisq froze at step during trades after updating to bisq i had two trades in locked status neither trade was able to progress passed the second step on my terminal whilst the buyers where able to get to send their fiat i ve attached my log file as suggested by the arbitrator the two trades in question are and
| 1
|
420,425
| 28,263,800,149
|
IssuesEvent
|
2023-04-07 03:40:32
|
AY2223S2-CS2113-W15-2/tp
|
https://api.github.com/repos/AY2223S2-CS2113-W15-2/tp
|
closed
|
Expected output for feature test not formatted
|
type.DocumentationBug
|
<img width="1150" alt="image" src="https://user-images.githubusercontent.com/60395825/228554938-cce98a18-96d6-4e90-9027-29aed2779cc4.png">
Consider using a screenshot instead
|
1.0
|
Expected output for feature test not formatted - <img width="1150" alt="image" src="https://user-images.githubusercontent.com/60395825/228554938-cce98a18-96d6-4e90-9027-29aed2779cc4.png">
Consider using a screenshot instead
|
non_process
|
expected output for feature test not formatted img width alt image src consider use a screenshot instead
| 0
|
7,376
| 10,514,545,675
|
IssuesEvent
|
2019-09-28 01:30:39
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
When converting SparkSQL questions to SQL timestamps are converted wrong
|
.Backend Database/Spark Priority:P3 Query Processor Type:Bug
|
This is the exact same issue as #11009 but for SparkSQL. In either 0.31 or 0.32 we improved the logic for converting a question to SQL so literal values are spliced in instead of leaving `?` parameter placeholders. However, the literal generated for `Timestamp`s was wrong.
|
1.0
|
When converting SparkSQL questions to SQL timestamps are converted wrong - This is the exact same issue as #11009 but for SparkSQL. In either 0.31 or 0.32 we improved the logic for converting a question to SQL so literal values are spliced in instead of leaving `?` parameter placeholders. However, the literal generated for `Timestamp`s was wrong.
|
process
|
when converting sparksql questions to sql timestamps are converted wrong this is the exact same issue as but for sparksql in either or we improved logic converting a question to sql so literal values are spliced in instead of leaving parameter placeholders however the literal generated for timestamp s was wrong
| 1
|
19,759
| 26,129,786,561
|
IssuesEvent
|
2022-12-29 02:02:17
|
hsmusic/hsmusic-data
|
https://api.github.com/repos/hsmusic/hsmusic-data
|
closed
|
Run an accessibility pass on all custom colors on the wiki
|
type: involved process what: albums & tracks what: art tags what: other metadata
|
Transferred from notabug#53, 17 Mar 2022:
---
There are a lot of custom colors on the wiki, which determine (most importantly) the color of associated link text (as well as general links on the colored thing's dedicated page). We should run a pass across all present colors to test for contrast against the (more or less) solid black background of wiki pages.
**Things with custom colors:**
- Album (and occasionally individual track groups)
- Group Category (e.g. "Solo Musicians", "Fan-Musician Groups")
- Art Tag
- Homepage Row
- Flash Act
- Wiki Info (i.e. the main primary color of the whole wiki)
Of these, albums and art tags are the main concern - the rest should already be pretty well suited for a black backdrop.
Yes, this includes choosing better (if non-canonical) colors than `#1313be` for [blue-bloods](https://hsmusic.wiki/tag/equius/).
|
1.0
|
Run an accessibility pass on all custom colors on the wiki - Transferred from notabug#53, 17 Mar 2022:
---
There are a lot of custom colors on the wiki, which determine (most importantly) the color of associated link text (as well as general links on the colored thing's dedicated page). We should run a pass across all present colors to test for contrast against the (more or less) solid black background of wiki pages.
**Things with custom colors:**
- Album (and occasionally individual track groups)
- Group Category (e.g. "Solo Musicians", "Fan-Musician Groups")
- Art Tag
- Homepage Row
- Flash Act
- Wiki Info (i.e. the main primary color of the whole wiki)
Of these, albums and art tags are the main concern - the rest should already be pretty well suited for a black backdrop.
Yes, this includes choosing better (if non-canonical) colors than `#1313be` for [blue-bloods](https://hsmusic.wiki/tag/equius/).
|
process
|
run an accessibility pass on all custom colors on the wiki transferred from notabug mar there are a lot of custom colors on the wiki which determine most importantly the color of associated link text as well as general links on the colored thing s dedicated page we should run a pass across all present colors to test for contrast against the more or less solid black background of wiki pages things with custom colors album and occasionally individual track groups group category e g solo musicians fan musician groups art tag homepage row flash act wiki info i e the main primary color of the whole wiki of these albums and art tags are the main concern the rest should already be pretty well suited for a black backdrop yes this includes choosing better if non canonical colors than for
| 1
|
13,133
| 15,553,007,438
|
IssuesEvent
|
2021-03-16 00:25:35
|
xatkit-bot-platform/xatkit-runtime
|
https://api.github.com/repos/xatkit-bot-platform/xatkit-runtime
|
opened
|
Remove @Ignore for Detoxify tests
|
Processors Testing
|
We have to ignore these tests for the moment because we don't have the infrastructure to run them in a CI/CD environment.
|
1.0
|
Remove @Ignore for Detoxify tests - We have to ignore these tests for the moment because we don't have the infrastructure to run them in a CI/CD environment.
|
process
|
remove ignore for detoxify tests we have to ignore these tests for the moment because we don t have the infrastructure to run them in a ci cd environment
| 1
|
21,474
| 29,508,701,928
|
IssuesEvent
|
2023-06-03 16:44:27
|
tesseract-ocr/tesseract
|
https://api.github.com/repos/tesseract-ocr/tesseract
|
closed
|
White font on dark background
|
image preprocessing
|
### Current Behavior
Missing lines with white text on dark background. Yes, the Arabic-looking part seems logical, because of the language being German.
Input:

Output:
Warum ziehen Lachse so beharrlich
flussaufwarts?
3-Flucht vor dem Salzwasser
4-Mangel an Lebensraum
d@bbuw (ilo! yujyam clonc
### Expected Behavior
Get all text from the image.
### Suggested Fix
I have no idea, I'm not a programmer. Sorry.
### tesseract -v
tesseract 5.3.1
leptonica-1.82.0
libgif 5.2.1 : libjpeg 8d (libjpeg-turbo 2.1.5.1) : libpng 1.6.39 : libtiff 4.5.0 : zlib 1.2.11 : libwebp 1.3.0 : libopenjp2 2.5.0
Found NEON
Found libarchive 3.6.2 zlib/1.2.11 liblzma/5.4.1 bz2lib/1.0.8 liblz4/1.9.4 libzstd/1.5.4
Found libcurl/7.88.1 SecureTransport (LibreSSL/3.3.6) zlib/1.2.11 nghttp2/1.51.0
### Operating System
macOS 13 Ventura
### Other Operating System
_No response_
### uname -a
Darwin 10.0.0.3 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:53:44 PDT 2023; root:xnu-8796.121.2~5/RELEASE_ARM64_T8103 arm64
### Compiler
was taken from brew
### CPU
M1 7GPU
### Virtualization / Containers
none
### Other Information
_No response_
|
1.0
|
White font on dark background - ### Current Behavior
Missing lines with white text on dark background. Yes, the Arabic-looking part seems logical, because of the language being German.
Input:

Output:
Warum ziehen Lachse so beharrlich
flussaufwarts?
3-Flucht vor dem Salzwasser
4-Mangel an Lebensraum
d@bbuw (ilo! yujyam clonc
### Expected Behavior
Get all text from the image.
### Suggested Fix
I have no idea, I'm not a programmer. Sorry.
### tesseract -v
tesseract 5.3.1
leptonica-1.82.0
libgif 5.2.1 : libjpeg 8d (libjpeg-turbo 2.1.5.1) : libpng 1.6.39 : libtiff 4.5.0 : zlib 1.2.11 : libwebp 1.3.0 : libopenjp2 2.5.0
Found NEON
Found libarchive 3.6.2 zlib/1.2.11 liblzma/5.4.1 bz2lib/1.0.8 liblz4/1.9.4 libzstd/1.5.4
Found libcurl/7.88.1 SecureTransport (LibreSSL/3.3.6) zlib/1.2.11 nghttp2/1.51.0
### Operating System
macOS 13 Ventura
### Other Operating System
_No response_
### uname -a
Darwin 10.0.0.3 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:53:44 PDT 2023; root:xnu-8796.121.2~5/RELEASE_ARM64_T8103 arm64
### Compiler
was taken from brew
### CPU
M1 7GPU
### Virtualization / Containers
none
### Other Information
_No response_
|
process
|
white font on dark background current behavior missing lines with white text on dark background yes the arabic part seems to be logic because of the language german input output warum ziehen lachse so beharrlich flussaufwarts flucht vor dem salzwasser mangel an lebensraum d bbuw ilo yujyam clonc expected behavior get all text from the image suggested fix i have no idea i m not a programmer sorry tesseract v tesseract leptonica libgif libjpeg libjpeg turbo libpng libtiff zlib libwebp found neon found libarchive zlib liblzma libzstd found libcurl securetransport libressl zlib operating system macos ventura other operating system no response uname a darwin darwin kernel version mon apr pdt root xnu release compiler was taken from brew cpu virtualization containers none other information no response
| 1
|
595,634
| 18,071,299,524
|
IssuesEvent
|
2021-09-21 03:28:08
|
hashicorp/terraform-cdk
|
https://api.github.com/repos/hashicorp/terraform-cdk
|
closed
|
Convert handles remote state incorrectly
|
bug priority/important-longterm feature/convert
|
<!--- Please keep this note for the community --->
### Community Note
- Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Affected Resource(s)
Terraform remote state data sources
### Expected Behavior
`TerraformRemoteState` from core lib used
### Actual Behavior
A `terraform` provider is added and the type `terraform.DataTerraformRemoteState` is attempted to be used
### Steps to Reproduce
Run convert / init from tf project with a remote state data source present
|
1.0
|
Convert handles remote state incorrectly - <!--- Please keep this note for the community --->
### Community Note
- Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Affected Resource(s)
Terraform remote state data sources
### Expected Behavior
`TerraformRemoteState` from core lib used
### Actual Behavior
A `terraform` provider is added and the type `terraform.DataTerraformRemoteState` is attempted to be used
### Steps to Reproduce
Run convert / init from tf project with a remote state data source present
|
non_process
|
convert handles remote state incorrectly community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment affected resource s terraform remote state data sources expected behavior terraformremotestate from core lib used actual behavior a terraform provider is added and the type terraform dataterraformremotestate is attempted to be used steps to reproduce run convert init from tf project with a remote state data source present
| 0
|
24,007
| 4,055,655,963
|
IssuesEvent
|
2016-05-24 16:05:40
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
circleci: failed tests: TestSequenceUpdate
|
Robot test-failure
|
The following test appears to have failed:
[#17041](https://circleci.com/gh/cockroachdb/cockroach/17041):
```
3 gossip/infostore.go:286
I160428 15:49:22.345428 stopper.go:352 draining; tasks left:
2 gossip/infostore.go:286
I160428 15:49:22.345478 stopper.go:352 draining; tasks left:
1 gossip/infostore.go:286
panic: test timed out after 1m10s
goroutine 652 [running]:
panic(0x13d0140, 0xc82025fda0)
/usr/local/go/src/runtime/panic.go:464 +0x3e6
testing.startAlarm.func1()
/usr/local/go/src/testing/testing.go:725 +0x14b
created by time.goFunc
/usr/local/go/src/time/sleep.go:129 +0x3a
goroutine 1 [chan receive, 1 minutes]:
testing.RunTests(0x1aebbf8, 0x2221fc0, 0x6f, 0x6f, 0xc820253a01)
/usr/local/go/src/testing/testing.go:583 +0x8d2
testing.(*M).Run(0xc820039f08, 0x5a2ac3)
/usr/local/go/src/testing/testing.go:515 +0x81
main.main()
github.com/cockroachdb/cockroach/kv/_test/_testmain.go:280 +0x117
goroutine 17 [syscall, 1 minutes, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1998 +0x1
goroutine 22 [chan receive]:
github.com/cockroachdb/cockroach/util/log.(*loggingT).flushDaemon(0x24f8e80)
/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:1011 +0x64
created by github.com/cockroachdb/cockroach/util/log.init.1
/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:598 +0x8a
goroutine 628 [semacquire, 1 minutes]:
sync.runtime_Semacquire(0xc8201b21e4)
/usr/local/go/src/runtime/sema.go:47 +0x26
sync.(*WaitGroup).Wait(0xc8201b21d8)
/usr/local/go/src/sync/waitgroup.go:127 +0xb4
github.com/cockroachdb/cockroach/util/stop.(*Stopper).Stop(0xc8201b21c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:302 +0x1dd
github.com/cockroachdb/cockroach/gossip/simulation.(*Network).Stop(0xc820014240)
/go/src/github.com/cockroachdb/cockroach/gossip/simulation/network.go:176 +0x9c
github.com/cockroachdb/cockroach/gossip/simulation.(*Network).Stop-fm()
/go/src/github.com/cockroachdb/cockroach/kv/dist_sender_test.go:86 +0x20
--
testing.tRunner(0xc82008e000, 0x2222170)
/usr/local/go/src/testing/testing.go:473 +0x98
created by testing.RunTests
/usr/local/go/src/testing/testing.go:582 +0x892
goroutine 644 [select]:
google.golang.org/grpc/transport.(*Stream).Header(0xc82011ab60, 0xc82011ab60, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:221 +0x242
google.golang.org/grpc.recvResponse(0x7f5408a51f30, 0x25219a0, 0x0, 0x0, 0x0, 0x0, 0x7f5408a537d8, 0xc820219440, 0x100, 0x0, ...)
/go/src/google.golang.org/grpc/call.go:54 +0x4e
google.golang.org/grpc.Invoke(0x7f5408a53918, 0xc8201b6de0, 0x1869720, 0x1d, 0x16efd80, 0xc820227540, 0x16e1d20, 0xc820245d60, 0xc8200948c0, 0x0, ...)
/go/src/google.golang.org/grpc/call.go:178 +0xae6
github.com/cockroachdb/cockroach/rpc.(*heartbeatClient).Ping(0xc82006c398, 0x7f5408a53918, 0xc8201b6de0, 0xc820227540, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/heartbeat.pb.go:115 +0xec
github.com/cockroachdb/cockroach/rpc.(*Context).heartbeat(0xc8201b22a0, 0x7f5408a538c0, 0xc82006c398, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:237 +0x150
--
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201b21c0, 0xc820190570)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 643 [select, 1 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc82011a380)
/go/src/google.golang.org/grpc/clientconn.go:510 +0x1d3
google.golang.org/grpc.NewConn.func1(0xc82011a380)
/go/src/google.golang.org/grpc/clientconn.go:321 +0x1b5
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:322 +0x4dd
goroutine 645 [select]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc82020a0f0)
/go/src/google.golang.org/grpc/transport/http2_client.go:835 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:194 +0x153b
goroutine 646 [IO wait, 1 minutes]:
net.runtime_pollWait(0x7f5408a52e18, 0x72, 0xc8201fa000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8201b3c60, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8201b3c60, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8201b3c00, 0xc8201fa000, 0x8000, 0x8000, 0x0, 0x7f5408a3d028, 0xc820060080)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc82006c3a0, 0xc8201fa000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
--
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc82020a0f0)
/go/src/google.golang.org/grpc/transport/http2_client.go:743 +0x42
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:200 +0x159a
FAIL github.com/cockroachdb/cockroach/kv 70.085s
=== RUN TestCombinable
--- PASS: TestCombinable (0.00s)
=== RUN TestMustSetInner
--- PASS: TestMustSetInner (0.00s)
=== RUN TestBatchSplit
--- PASS: TestBatchSplit (0.00s)
=== RUN TestBatchRequestGetArg
--- PASS: TestBatchRequestGetArg (0.00s)
=== RUN TestBatchRequestString
--- PASS: TestBatchRequestString (0.00s)
```
Please assign, take a look and update the issue accordingly.
|
1.0
|
circleci: failed tests: TestSequenceUpdate - The following test appears to have failed:
[#17041](https://circleci.com/gh/cockroachdb/cockroach/17041):
```
3 gossip/infostore.go:286
I160428 15:49:22.345428 stopper.go:352 draining; tasks left:
2 gossip/infostore.go:286
I160428 15:49:22.345478 stopper.go:352 draining; tasks left:
1 gossip/infostore.go:286
panic: test timed out after 1m10s
goroutine 652 [running]:
panic(0x13d0140, 0xc82025fda0)
/usr/local/go/src/runtime/panic.go:464 +0x3e6
testing.startAlarm.func1()
/usr/local/go/src/testing/testing.go:725 +0x14b
created by time.goFunc
/usr/local/go/src/time/sleep.go:129 +0x3a
goroutine 1 [chan receive, 1 minutes]:
testing.RunTests(0x1aebbf8, 0x2221fc0, 0x6f, 0x6f, 0xc820253a01)
/usr/local/go/src/testing/testing.go:583 +0x8d2
testing.(*M).Run(0xc820039f08, 0x5a2ac3)
/usr/local/go/src/testing/testing.go:515 +0x81
main.main()
github.com/cockroachdb/cockroach/kv/_test/_testmain.go:280 +0x117
goroutine 17 [syscall, 1 minutes, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1998 +0x1
goroutine 22 [chan receive]:
github.com/cockroachdb/cockroach/util/log.(*loggingT).flushDaemon(0x24f8e80)
/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:1011 +0x64
created by github.com/cockroachdb/cockroach/util/log.init.1
/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:598 +0x8a
goroutine 628 [semacquire, 1 minutes]:
sync.runtime_Semacquire(0xc8201b21e4)
/usr/local/go/src/runtime/sema.go:47 +0x26
sync.(*WaitGroup).Wait(0xc8201b21d8)
/usr/local/go/src/sync/waitgroup.go:127 +0xb4
github.com/cockroachdb/cockroach/util/stop.(*Stopper).Stop(0xc8201b21c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:302 +0x1dd
github.com/cockroachdb/cockroach/gossip/simulation.(*Network).Stop(0xc820014240)
/go/src/github.com/cockroachdb/cockroach/gossip/simulation/network.go:176 +0x9c
github.com/cockroachdb/cockroach/gossip/simulation.(*Network).Stop-fm()
/go/src/github.com/cockroachdb/cockroach/kv/dist_sender_test.go:86 +0x20
--
testing.tRunner(0xc82008e000, 0x2222170)
/usr/local/go/src/testing/testing.go:473 +0x98
created by testing.RunTests
/usr/local/go/src/testing/testing.go:582 +0x892
goroutine 644 [select]:
google.golang.org/grpc/transport.(*Stream).Header(0xc82011ab60, 0xc82011ab60, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:221 +0x242
google.golang.org/grpc.recvResponse(0x7f5408a51f30, 0x25219a0, 0x0, 0x0, 0x0, 0x0, 0x7f5408a537d8, 0xc820219440, 0x100, 0x0, ...)
/go/src/google.golang.org/grpc/call.go:54 +0x4e
google.golang.org/grpc.Invoke(0x7f5408a53918, 0xc8201b6de0, 0x1869720, 0x1d, 0x16efd80, 0xc820227540, 0x16e1d20, 0xc820245d60, 0xc8200948c0, 0x0, ...)
/go/src/google.golang.org/grpc/call.go:178 +0xae6
github.com/cockroachdb/cockroach/rpc.(*heartbeatClient).Ping(0xc82006c398, 0x7f5408a53918, 0xc8201b6de0, 0xc820227540, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/heartbeat.pb.go:115 +0xec
github.com/cockroachdb/cockroach/rpc.(*Context).heartbeat(0xc8201b22a0, 0x7f5408a538c0, 0xc82006c398, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:237 +0x150
--
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8201b21c0, 0xc820190570)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 643 [select, 1 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc82011a380)
/go/src/google.golang.org/grpc/clientconn.go:510 +0x1d3
google.golang.org/grpc.NewConn.func1(0xc82011a380)
/go/src/google.golang.org/grpc/clientconn.go:321 +0x1b5
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:322 +0x4dd
goroutine 645 [select]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc82020a0f0)
/go/src/google.golang.org/grpc/transport/http2_client.go:835 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:194 +0x153b
goroutine 646 [IO wait, 1 minutes]:
net.runtime_pollWait(0x7f5408a52e18, 0x72, 0xc8201fa000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8201b3c60, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8201b3c60, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8201b3c00, 0xc8201fa000, 0x8000, 0x8000, 0x0, 0x7f5408a3d028, 0xc820060080)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc82006c3a0, 0xc8201fa000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
--
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc82020a0f0)
/go/src/google.golang.org/grpc/transport/http2_client.go:743 +0x42
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:200 +0x159a
FAIL github.com/cockroachdb/cockroach/kv 70.085s
=== RUN TestCombinable
--- PASS: TestCombinable (0.00s)
=== RUN TestMustSetInner
--- PASS: TestMustSetInner (0.00s)
=== RUN TestBatchSplit
--- PASS: TestBatchSplit (0.00s)
=== RUN TestBatchRequestGetArg
--- PASS: TestBatchRequestGetArg (0.00s)
=== RUN TestBatchRequestString
--- PASS: TestBatchRequestString (0.00s)
```
Please assign, take a look and update the issue accordingly.
|
non_process
|
circleci failed tests testsequenceupdate the following test appears to have failed gossip infostore go stopper go draining tasks left gossip infostore go stopper go draining tasks left gossip infostore go panic test timed out after goroutine panic usr local go src runtime panic go testing startalarm usr local go src testing testing go created by time gofunc usr local go src time sleep go goroutine testing runtests usr local go src testing testing go testing m run usr local go src testing testing go main main github com cockroachdb cockroach kv test testmain go goroutine runtime goexit usr local go src runtime asm s goroutine github com cockroachdb cockroach util log loggingt flushdaemon go src github com cockroachdb cockroach util log clog go created by github com cockroachdb cockroach util log init go src github com cockroachdb cockroach util log clog go goroutine sync runtime semacquire usr local go src runtime sema go sync waitgroup wait usr local go src sync waitgroup go github com cockroachdb cockroach util stop stopper stop go src github com cockroachdb cockroach util stop stopper go github com cockroachdb cockroach gossip simulation network stop go src github com cockroachdb cockroach gossip simulation network go github com cockroachdb cockroach gossip simulation network stop fm go src github com cockroachdb cockroach kv dist sender test go testing trunner usr local go src testing testing go created by testing runtests usr local go src testing testing go goroutine google golang org grpc transport stream header go src google golang org grpc transport transport go google golang org grpc recvresponse go src google golang org grpc call go google golang org grpc invoke go src google golang org grpc call go github com cockroachdb cockroach rpc heartbeatclient ping go src github com cockroachdb cockroach rpc heartbeat pb go github com cockroachdb cockroach rpc context heartbeat go src github com cockroachdb cockroach rpc context go github com cockroachdb cockroach 
util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go created by github com cockroachdb cockroach util stop stopper runworker go src github com cockroachdb cockroach util stop stopper go goroutine google golang org grpc conn transportmonitor go src google golang org grpc clientconn go google golang org grpc newconn go src google golang org grpc clientconn go created by google golang org grpc newconn go src google golang org grpc clientconn go goroutine google golang org grpc transport controller go src google golang org grpc transport client go created by google golang org grpc transport go src google golang org grpc transport client go goroutine net runtime pollwait usr local go src runtime netpoll go net polldesc wait usr local go src net fd poll runtime go net polldesc waitread usr local go src net fd poll runtime go net netfd read usr local go src net fd unix go net conn read usr local go src net net go go src google golang org grpc transport http util go google golang org grpc transport reader go src google golang org grpc transport client go created by google golang org grpc transport go src google golang org grpc transport client go fail github com cockroachdb cockroach kv run testcombinable pass testcombinable run testmustsetinner pass testmustsetinner run testbatchsplit pass testbatchsplit run testbatchrequestgetarg pass testbatchrequestgetarg run testbatchrequeststring pass testbatchrequeststring please assign take a look and update the issue accordingly
| 0
|
584,049
| 17,404,739,644
|
IssuesEvent
|
2021-08-03 03:10:10
|
UWB-Biocomputing/Graphitti
|
https://api.github.com/repos/UWB-Biocomputing/Graphitti
|
closed
|
Fix random number generator seeding
|
good first issue lower priority
|
There are two (sets of) RNG: rng in Globals.cpp, which is used to randomize parameters during setup, and is seeded with a hard-coded constant, and rgNormrnd/GPU Mersenne twister RNG(s) (created in SingleThreadedSpikingModel.cpp or GPUSpikingModel.cu), used to generate noise during simulation. The latter is seeded from the parameter file; the former is not.
|
1.0
|
Fix random number generator seeding - There are two (sets of) RNG: rng in Globals.cpp, which is used to randomize parameters during setup, and is seeded with a hard-coded constant, and rgNormrnd/GPU Mersenne twister RNG(s) (created in SingleThreadedSpikingModel.cpp or GPUSpikingModel.cu), used to generate noise during simulation. The latter is seeded from the parameter file; the former is not.
|
non_process
|
fix random number generator seeding there are two sets of rng rng in globals cpp which is used to randomize parameters during setup and is seeded with a hard coded constant and rgnormrnd gpu mersenne twister rng s created in singlethreadedspikingmodel cpp or gpuspikingmodel cu used to generate noise during simulation the latter is seeded from the parameter file the former is not
| 0
|
15,196
| 18,987,792,901
|
IssuesEvent
|
2021-11-22 00:33:18
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
`stdio` of spawned `child_process` stops working after sometime
|
child_process stdio
|
### Version
17.1.0
### Platform
Microsoft Windows NT 10.0.19043.0 x64
### Subsystem
_No response_
### What steps will reproduce the bug?
Here are two pieces of code to reproduce the issue: -
Javascript (temp.js):
```
"use strict";
const childProcess = require('child_process');
let pythonProcess = childProcess.spawn("python", ["temp2.py"], {cwd: "./private/PythonScripts/Temp/"});
pythonProcess.stdin.setDefaultEncoding("utf-8");
pythonProcess.stdout.setEncoding("utf-8");
const writeString = "" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\n"; // 8x 128 Zeros
pythonProcess.stdout.on("error", (err) => {
console.log("Stdout Error");
if (err) {
console.log(err);
}
});
let count = 0;
setInterval(() => {
count++;
console.log(count);
pythonProcess.stdin.write(writeString, (err) => {
if (err) {
console.log("Error during writing");
console.log(err);
}
});
}, 100);
```
Python (3.10.x) (temp2.py):
```
import logging
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
handlers=[logging.FileHandler('temp.log', 'w+', 'utf-8'), logging.StreamHandler()],
level=logging.INFO)
tempLogger = logging.getLogger(__name__)
if __name__ == '__main__':
tempLogger.info("Started")
while True:
inp = input()
tempLogger.info(inp)
```
DIRECTORY STRUCTURE:
```
temp.js
private
|
|-> PythonScripts
|
|-> Temp
|
|-> temp2.py
|-> temp.log
```
<hr>
When we run this code, we will see that the javascript file successfully continues to write (I assume so because the err is always undefined), but the python child_process actually receives the input only 79 times (every time).
It seems as though the pipes/buffers are getting full. I have read the docs for both, node.js regarding child_process and python for std i/o. Neither of them mentions any relevant information about clearing pipes/buffer.
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior?
Maybe this behavior is intentional (where there is a buffer limit for child_processes), but it should be mentioned in docs if that is the case along with a solution to such scenarios.
If this is not intentional, then the expected behavior is that the python code continues to read the input without getting stuck.
### What do you see instead?
(Mentioned at the bottom of the steps to reproduce)
### Additional information
Running the small piece of code provided in steps to reproduce will give a complete understanding of the issue being faced.
|
1.0
|
`stdio` of spawned `child_process` stops working after sometime - ### Version
17.1.0
### Platform
Microsoft Windows NT 10.0.19043.0 x64
### Subsystem
_No response_
### What steps will reproduce the bug?
Here are two pieces of code to reproduce the issue: -
Javascript (temp.js):
```
"use strict";
const childProcess = require('child_process');
let pythonProcess = childProcess.spawn("python", ["temp2.py"], {cwd: "./private/PythonScripts/Temp/"});
pythonProcess.stdin.setDefaultEncoding("utf-8");
pythonProcess.stdout.setEncoding("utf-8");
const writeString = "" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\n"; // 8x 128 Zeros
pythonProcess.stdout.on("error", (err) => {
console.log("Stdout Error");
if (err) {
console.log(err);
}
});
let count = 0;
setInterval(() => {
count++;
console.log(count);
pythonProcess.stdin.write(writeString, (err) => {
if (err) {
console.log("Error during writing");
console.log(err);
}
});
}, 100);
```
Python (3.10.x) (temp2.py):
```
import logging
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
handlers=[logging.FileHandler('temp.log', 'w+', 'utf-8'), logging.StreamHandler()],
level=logging.INFO)
tempLogger = logging.getLogger(__name__)
if __name__ == '__main__':
tempLogger.info("Started")
while True:
inp = input()
tempLogger.info(inp)
```
DIRECTORY STRUCTURE:
```
temp.js
private
|
|-> PythonScripts
|
|-> Temp
|
|-> temp2.py
|-> temp.log
```
<hr>
When we run this code, we will see that the javascript file successfully continues to write (I assume so because the err is always undefined), but the python child_process actually receives the input only 79 times (every time).
It seems as though the pipes/buffers are getting full. I have read the docs for both, node.js regarding child_process and python for std i/o. Neither of them mentions any relevant information about clearing pipes/buffer.
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior?
Maybe this behavior is intentional (where there is a buffer limit for child_processes), but it should be mentioned in docs if that is the case along with a solution to such scenarios.
If this is not intentional, then the expected behavior is that the python code continues to read the input without getting stuck.
### What do you see instead?
(Mentioned at the bottom of the steps to reproduce)
### Additional information
Running the small piece of code provided in steps to reproduce will give a complete understanding of the issue being faced.
|
process
|
stdio of spawned child process stops working after sometime version platform microsoft windows nt subsystem no response what steps will reproduce the bug here are two pieces of code to reproduce the issue javascript temp js use strict const childprocess require child process let pythonprocess childprocess spawn python cwd private pythonscripts temp pythonprocess stdin setdefaultencoding utf pythonprocess stdout setencoding utf const writestring n zeros pythonprocess stdout on error err console log stdout error if err console log err let count setinterval count console log count pythonprocess stdin write writestring err if err console log error during writing console log err python x py import logging logging basicconfig format asctime s levelname s message s datefmt y m d h m s handlers level logging info templogger logging getlogger name if name main templogger info started while true inp input templogger info inp directory structure temp js private pythonscripts temp py temp log when we run this code we will see that the javascript file successfully continues to write i assume so because the err is always undefined but the python child process actually receives the input only times every time it seems as though the pipes buffers are getting full i have read the docs for both node js regarding child process and python for std i o neither of them mentions any relevant information about clearing pipes buffer how often does it reproduce is there a required condition always what is the expected behavior maybe this behavior is intentional where there is a buffer limit for child processes but it should be mentioned in docs if that is the case along with a solution to such scenarios if this is not intentional then the expected behavior is that the python code continues to read the input without getting stuck what do you see instead mentioned at the bottom of the steps to reproduce additional information running the small piece of code provided in steps to reproduce will 
give a complete understanding of the issue being faced
| 1
|
190,977
| 15,265,442,015
|
IssuesEvent
|
2021-02-22 07:19:21
|
misomazf88/git_flow_practice
|
https://api.github.com/repos/misomazf88/git_flow_practice
|
closed
|
Un commit que no sigue la convención de código o arreglo a realizar
|
documentation
|
L
El último commit tiene el siguiente mensaje:
`Update #titulo1, #imagen1, of file recetas/Colombia/ajiaco.html`
Este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado.
|
1.0
|
Un commit que no sigue la convención de código o arreglo a realizar - L
El último commit tiene el siguiente mensaje:
`Update #titulo1, #imagen1, of file recetas/Colombia/ajiaco.html`
Este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado.
|
non_process
|
un commit que no sigue la convención de código o arreglo a realizar l el último commit tiene el siguiente mensaje update of file recetas colombia ajiaco html este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado
| 0
|
127,507
| 27,058,476,828
|
IssuesEvent
|
2023-02-13 17:49:45
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
opened
|
[wasi] Native crash in `System.Collections.Tests` - `out of bounds memory access`
|
arch-wasm area-Codegen-Interpreter-mono os-wasi
|
[Rolling build](https://dev.azure.com/dnceng-public/public/_build/results?buildId=168698), and
[log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-main-68c77325243d4b088b/System.Collections.Tests/1/console.9065aa0e.log?helixlogtype=result):
```
[21:07:27] info: Using random seed for test cases: 1458992135
[21:07:27] info: Using random seed for collections: 1458992135
[21:07:27] info: Starting: managed/System.Collections.Tests.dll
[21:08:38] info: Error: failed to run main module `dotnet.wasm`
[21:08:38] info:
[21:08:38] info: Caused by:
[21:08:38] info: 0: failed to invoke command default
[21:08:38] info: 1: error while executing at wasm backtrace:
[21:08:38] info: 0: 0x9f74d - <unknown>!inflated_signature_equal
[21:08:38] info: 1: 0x12bd0d - <unknown>!monoeg_g_hash_table_lookup_extended
[21:08:38] info: 2: 0x12bd71 - <unknown>!monoeg_g_hash_table_lookup
[21:08:38] info: 3: 0x9f60f - <unknown>!mono_metadata_get_inflated_signature
[21:08:38] info: 4: 0x8caf8 - <unknown>!mono_inflate_generic_signature
[21:08:38] info: 5: 0x268b7 - <unknown>!interp_transform_call
[21:08:38] info: 6: 0x1ec3b - <unknown>!generate_code
[21:08:38] info: 7: 0x30c22 - <unknown>!interp_inline_method
[21:08:38] info: 8: 0x2713e - <unknown>!interp_transform_call
[21:08:38] info: 9: 0x1ec3b - <unknown>!generate_code
[21:08:38] info: 10: 0x2a54e - <unknown>!generate
[21:08:38] info: 11: 0x2a06b - <unknown>!mono_interp_transform_method
[21:08:38] info: 12: 0x31b7c - <unknown>!tier_up_method
[21:08:38] info: 13: 0x31c65 - <unknown>!mono_interp_tier_up_frame_patchpoint
[21:08:38] info: 14: 0x10d98 - <unknown>!mono_interp_exec_method
[21:08:38] info: 15: 0x4c45 - <unknown>!interp_runtime_invoke
[21:08:38] info: 16: 0x10674c - <unknown>!mono_jit_runtime_invoke
[21:08:38] info: 17: 0xa91fd - <unknown>!do_runtime_invoke
[21:08:38] info: 18: 0xa91b4 - <unknown>!mono_runtime_invoke_checked
[21:08:38] info: 19: 0xaf44c - <unknown>!mono_runtime_try_invoke_byrefs
[21:08:38] info: 20: 0x78c78 - <unknown>!ves_icall_InternalInvoke
[21:08:38] info: 21: 0x81296 - <unknown>!ves_icall_InternalInvoke_raw
[21:08:38] info: 22: 0x1604d - <unknown>!do_icall
[21:08:38] info: 23: 0x14cf8 - <unknown>!do_icall_wrapper
[21:08:38] info: 24: 0x5f1c - <unknown>!mono_interp_exec_method
[21:08:38] info: 25: 0x4c45 - <unknown>!interp_runtime_invoke
[21:08:38] info: 26: 0x10674c - <unknown>!mono_jit_runtime_invoke
[21:08:38] info: 27: 0xa91fd - <unknown>!do_runtime_invoke
[21:08:38] info: 28: 0xa9b1b - <unknown>!mono_runtime_try_invoke
[21:08:38] info: 29: 0xadec8 - <unknown>!do_try_exec_main
[21:08:38] info: 30: 0xada3a - <unknown>!mono_runtime_run_main
[21:08:38] info: 31: 0x2fb2 - <unknown>!main
[21:08:38] info: 32: 0x13b62b - <unknown>!__main_void
[21:08:38] info: 33: 0x2898 - <unknown>!_start
[21:08:38] info: 34: 0x14dda7 - <unknown>!_start.command_export
[21:08:38] info: note: using the `WASMTIME_BACKTRACE_DETAILS=1` environment variable to may show more debugging information
[21:08:38] info: 2: wasm trap: out of bounds memory access
[21:08:38] info: Process wasmtime.exe exited with 3
```
cc @vargaz @BrzVlad
|
1.0
|
[wasi] Native crash in `System.Collections.Tests` - `out of bounds memory access` - [Rolling build](https://dev.azure.com/dnceng-public/public/_build/results?buildId=168698), and
[log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-main-68c77325243d4b088b/System.Collections.Tests/1/console.9065aa0e.log?helixlogtype=result):
```
[21:07:27] info: Using random seed for test cases: 1458992135
[21:07:27] info: Using random seed for collections: 1458992135
[21:07:27] info: Starting: managed/System.Collections.Tests.dll
[21:08:38] info: Error: failed to run main module `dotnet.wasm`
[21:08:38] info:
[21:08:38] info: Caused by:
[21:08:38] info: 0: failed to invoke command default
[21:08:38] info: 1: error while executing at wasm backtrace:
[21:08:38] info: 0: 0x9f74d - <unknown>!inflated_signature_equal
[21:08:38] info: 1: 0x12bd0d - <unknown>!monoeg_g_hash_table_lookup_extended
[21:08:38] info: 2: 0x12bd71 - <unknown>!monoeg_g_hash_table_lookup
[21:08:38] info: 3: 0x9f60f - <unknown>!mono_metadata_get_inflated_signature
[21:08:38] info: 4: 0x8caf8 - <unknown>!mono_inflate_generic_signature
[21:08:38] info: 5: 0x268b7 - <unknown>!interp_transform_call
[21:08:38] info: 6: 0x1ec3b - <unknown>!generate_code
[21:08:38] info: 7: 0x30c22 - <unknown>!interp_inline_method
[21:08:38] info: 8: 0x2713e - <unknown>!interp_transform_call
[21:08:38] info: 9: 0x1ec3b - <unknown>!generate_code
[21:08:38] info: 10: 0x2a54e - <unknown>!generate
[21:08:38] info: 11: 0x2a06b - <unknown>!mono_interp_transform_method
[21:08:38] info: 12: 0x31b7c - <unknown>!tier_up_method
[21:08:38] info: 13: 0x31c65 - <unknown>!mono_interp_tier_up_frame_patchpoint
[21:08:38] info: 14: 0x10d98 - <unknown>!mono_interp_exec_method
[21:08:38] info: 15: 0x4c45 - <unknown>!interp_runtime_invoke
[21:08:38] info: 16: 0x10674c - <unknown>!mono_jit_runtime_invoke
[21:08:38] info: 17: 0xa91fd - <unknown>!do_runtime_invoke
[21:08:38] info: 18: 0xa91b4 - <unknown>!mono_runtime_invoke_checked
[21:08:38] info: 19: 0xaf44c - <unknown>!mono_runtime_try_invoke_byrefs
[21:08:38] info: 20: 0x78c78 - <unknown>!ves_icall_InternalInvoke
[21:08:38] info: 21: 0x81296 - <unknown>!ves_icall_InternalInvoke_raw
[21:08:38] info: 22: 0x1604d - <unknown>!do_icall
[21:08:38] info: 23: 0x14cf8 - <unknown>!do_icall_wrapper
[21:08:38] info: 24: 0x5f1c - <unknown>!mono_interp_exec_method
[21:08:38] info: 25: 0x4c45 - <unknown>!interp_runtime_invoke
[21:08:38] info: 26: 0x10674c - <unknown>!mono_jit_runtime_invoke
[21:08:38] info: 27: 0xa91fd - <unknown>!do_runtime_invoke
[21:08:38] info: 28: 0xa9b1b - <unknown>!mono_runtime_try_invoke
[21:08:38] info: 29: 0xadec8 - <unknown>!do_try_exec_main
[21:08:38] info: 30: 0xada3a - <unknown>!mono_runtime_run_main
[21:08:38] info: 31: 0x2fb2 - <unknown>!main
[21:08:38] info: 32: 0x13b62b - <unknown>!__main_void
[21:08:38] info: 33: 0x2898 - <unknown>!_start
[21:08:38] info: 34: 0x14dda7 - <unknown>!_start.command_export
[21:08:38] info: note: using the `WASMTIME_BACKTRACE_DETAILS=1` environment variable to may show more debugging information
[21:08:38] info: 2: wasm trap: out of bounds memory access
[21:08:38] info: Process wasmtime.exe exited with 3
```
cc @vargaz @BrzVlad
|
non_process
|
native crash in system collections tests out of bounds memory access and info using random seed for test cases info using random seed for collections info starting managed system collections tests dll info error failed to run main module dotnet wasm info info caused by info failed to invoke command default info error while executing at wasm backtrace info inflated signature equal info monoeg g hash table lookup extended info monoeg g hash table lookup info mono metadata get inflated signature info mono inflate generic signature info interp transform call info generate code info interp inline method info interp transform call info generate code info generate info mono interp transform method info tier up method info mono interp tier up frame patchpoint info mono interp exec method info interp runtime invoke info mono jit runtime invoke info do runtime invoke info mono runtime invoke checked info mono runtime try invoke byrefs info ves icall internalinvoke info ves icall internalinvoke raw info do icall info do icall wrapper info mono interp exec method info interp runtime invoke info mono jit runtime invoke info do runtime invoke info mono runtime try invoke info do try exec main info mono runtime run main info main info main void info start info start command export info note using the wasmtime backtrace details environment variable to may show more debugging information info wasm trap out of bounds memory access info process wasmtime exe exited with cc vargaz brzvlad
| 0
|
2,547
| 5,310,260,686
|
IssuesEvent
|
2017-02-12 18:32:38
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Can you please provide all the error codes with its description which we get in require('child_process').exec function?
|
child_process question
|
I am trying to start "**myService**" using following code
```
var exec = require('child_process').exec;
exec('net start myService',function(error) {
console.log(err);
});
```
Now **myService** is running and I have executed following code **again for starting the same service**.
exec('net start myService',function(error) {
console.log(err);
});
I am getting error as follows:
```
{
"stack": "Error: Command failed: The requested service has already been started.\r\n\r\nMore help is available by typing NET HELPMSG 2182.\r\n\r\n\n at ChildProcess.exithandler (child_process.js:637:15)\n at ChildProcess.EventEmitter.emit (events.js:98:17)\n at maybeClose (child_process.js:743:16)\n at Process.ChildProcess._handle.onexit (child_process.js:810:5)",
"message": "Command failed: The requested service has already been started.\r\n\r\nMore help is available by typing NET HELPMSG 2182.\r\n\r\n",
"killed": false,
"code": 2,
"signal": null
}
```
I have stopped "**myService**" using following code
```
var exec = require('child_process').exec;
exec('net stop myService',function(error) {
console.log(err);
});
```
It stopped successfully. Now again I have tried **to stop the same already stopped service**
```
exec('net stop myService',function(error) {
console.log(err);
});
```
getting following error:
```
{
"stack": "Error: Command failed: The myService service is not started.\r\n\r\nMore help is available by typing NET HELPMSG 3521.\r\n\r\n\n at ChildProcess.exithandler (child_process.js:637:15)\n at ChildProcess.EventEmitter.emit (events.js:98:17)\n at maybeClose (child_process.js:743:16)\n at Process.ChildProcess._handle.onexit (child_process.js:810:5)",
"message": "Command failed: The myService service is not started.\r\n\r\nMore help is available by typing NET HELPMSG 3521.\r\n\r\n",
"killed": false,
"code": 2,
"signal": null
}
```
In both the cases, I am getting `code` as `2` only. But for sometimes I am getting error without any message.
```
{
"stack": "Error: Command failed: \n at ChildProcess.exithandler (child_process.js:637:15)\n at ChildProcess.EventEmitter.emit (events.js:98:17)\n at maybeClose (child_process.js:743:16)\n at Process.ChildProcess._handle.onexit (child_process.js:810:5)",
"message": "Command failed: ",
"killed": false,
"code": 2,
"signal": null
}
```
Can you please provide me with proper error codes and messages of **require('child_process').exec**?
Thanks in advance.
|
1.0
|
Can you please provide all the error codes with its description which we get in require('child_process').exec function? - I am trying to start "**myService**" using following code
```
var exec = require('child_process').exec;
exec('net start myService',function(error) {
console.log(err);
});
```
Now **myService** is running and I have executed following code **again for starting the same service**.
exec('net start myService',function(error) {
console.log(err);
});
I am getting error as follows:
```
{
"stack": "Error: Command failed: The requested service has already been started.\r\n\r\nMore help is available by typing NET HELPMSG 2182.\r\n\r\n\n at ChildProcess.exithandler (child_process.js:637:15)\n at ChildProcess.EventEmitter.emit (events.js:98:17)\n at maybeClose (child_process.js:743:16)\n at Process.ChildProcess._handle.onexit (child_process.js:810:5)",
"message": "Command failed: The requested service has already been started.\r\n\r\nMore help is available by typing NET HELPMSG 2182.\r\n\r\n",
"killed": false,
"code": 2,
"signal": null
}
```
I have stopped "**myService**" using following code
```
var exec = require('child_process').exec;
exec('net stop myService',function(error) {
console.log(err);
});
```
It stopped successfully. Now again I have tried **to stop the same already stopped service**
```
exec('net stop myService',function(error) {
console.log(err);
});
```
getting following error:
```
{
"stack": "Error: Command failed: The myService service is not started.\r\n\r\nMore help is available by typing NET HELPMSG 3521.\r\n\r\n\n at ChildProcess.exithandler (child_process.js:637:15)\n at ChildProcess.EventEmitter.emit (events.js:98:17)\n at maybeClose (child_process.js:743:16)\n at Process.ChildProcess._handle.onexit (child_process.js:810:5)",
"message": "Command failed: The myService service is not started.\r\n\r\nMore help is available by typing NET HELPMSG 3521.\r\n\r\n",
"killed": false,
"code": 2,
"signal": null
}
```
In both the cases, I am getting `code` as `2` only. But for sometimes I am getting error without any message.
```
{
"stack": "Error: Command failed: \n at ChildProcess.exithandler (child_process.js:637:15)\n at ChildProcess.EventEmitter.emit (events.js:98:17)\n at maybeClose (child_process.js:743:16)\n at Process.ChildProcess._handle.onexit (child_process.js:810:5)",
"message": "Command failed: ",
"killed": false,
"code": 2,
"signal": null
}
```
Can you please provide me with proper error codes and messages of **require('child_process').exec**?
Thanks in advance.
|
process
|
can you please provide all the error codes with its description which we get in require child process exec function i am trying to start myservice using following code var exec require child process exec exec net start myservice function error console log err now myservice is running and i have executed following code again for starting the same service exec net start myservice function error console log err i am getting error as follows stack error command failed the requested service has already been started r n r nmore help is available by typing net helpmsg r n r n n at childprocess exithandler child process js n at childprocess eventemitter emit events js n at maybeclose child process js n at process childprocess handle onexit child process js message command failed the requested service has already been started r n r nmore help is available by typing net helpmsg r n r n killed false code signal null i have stopped myservice using following code var exec require child process exec exec net stop myservice function error console log err it stopped successfully now again i have tried to stop the same already stopped service exec net stop myservice function error console log err getting following error stack error command failed the myservice service is not started r n r nmore help is available by typing net helpmsg r n r n n at childprocess exithandler child process js n at childprocess eventemitter emit events js n at maybeclose child process js n at process childprocess handle onexit child process js message command failed the myservice service is not started r n r nmore help is available by typing net helpmsg r n r n killed false code signal null in both the cases i am getting code as only but for sometimes i am getting error without any message stack error command failed n at childprocess exithandler child process js n at childprocess eventemitter emit events js n at maybeclose child process js n at process childprocess handle onexit child process js message 
command failed killed false code signal null can you please provide me with proper error codes and messages of require child process exec thanks in advance
| 1
|
5,029
| 7,851,508,407
|
IssuesEvent
|
2018-06-20 11:59:26
|
rivine/rivine
|
https://api.github.com/repos/rivine/rivine
|
closed
|
Consensus synced property does not update in some setups
|
process_wontfix type_bug
|
In case a node is connected to many local peers (which might also not be synced), there are situations where a node's consensus set `synced` property never becomes `true`. As a result, these nodes won't create blocks.
This probably has to do with this check during the IBD:
https://github.com/rivine/rivine/blob/master/modules/consensus/synchronize.go#L573
|
1.0
|
Consensus synced property does not update in some setups - In case a node is connected to many local peers (which might also not be synced), there are situations where a node's consensus set `synced` property never becomes `true`. As a result, these nodes won't create blocks.
This probably has to do with this check during the IBD:
https://github.com/rivine/rivine/blob/master/modules/consensus/synchronize.go#L573
|
process
|
consensus synced property does not update in some setups in case a node is connected to many local peers which might also not be synced there are situations where a node s consensus set synced property never becomes true as a result these nodes won t create blocks this probably has to do with this check during the ibd
| 1
|
16,921
| 6,306,626,539
|
IssuesEvent
|
2017-07-21 21:33:35
|
PowerShell/PowerShell
|
https://api.github.com/repos/PowerShell/PowerShell
|
closed
|
running new-pssession in a newly built PowerShell should not cause the shell to exit
|
Area-Build Area-Remoting Issue-Bug OS-Linux
|
Steps to reproduce
------------------
clone
start-psbootstrap
start-psbuild
start-psester
*BOOM*
Expected behavior
-----------------
An error message or a new pssession
Actual behavior
---------------
```
Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object.
at System.Management.Automation.Remoting.PrioritySendDataCollection.Clear() in /home/testuser/src/github/PowerShell/src/System.Management.Automation/engine/remoting/fanin/PriorityCollection.cs:line 158
at System.Management.Automation.Remoting.Client.BaseClientTransportManager.CloseAsync() in /home/testuser/src/github/PowerShell/src/System.Management.Automation/engine/remoting/fanin/BaseTransportManager.cs:line 949
at System.Management.Automation.Remoting.Client.WSManClientSessionTransportManager.CloseAsync() in /home/testuser/src/github/PowerShell/src/System.Management.Automation/engine/remoting/fanin/WSManTransportManager.cs:line 1219
at System.Management.Automation.Remoting.Client.BaseClientTransportManager.Finalize() in /home/testuser/src/github/PowerShell/src/System.Management.Automation/engine/remoting/fanin/BaseTransportManager.cs:line 998
```
Environment data
----------------
<!-- provide the output of $PSVersionTable -->
```powershell
> $PSVersionTable
Name Value
---- -----
PSVersion 6.0.0-alpha
PSEdition Core
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 3.0.0.0
GitCommitId v6.0.0-alpha.17-67-g42f2e3ca820d7fd7f9dd57f2bbb90e5da2ab2de0
CLRVersion
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
PS /home/testuser/src/github/PowerShell> uname -a
Linux centospsct 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
```
|
1.0
|
running new-pssession in a newly built PowerShell should not cause the shell to exit - Steps to reproduce
------------------
clone
start-psbootstrap
start-psbuild
start-psester
*BOOM*
Expected behavior
-----------------
An error message or a new pssession
Actual behavior
---------------
```
Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object.
at System.Management.Automation.Remoting.PrioritySendDataCollection.Clear() in /home/testuser/src/github/PowerShell/src/System.Management.Automation/engine/remoting/fanin/PriorityCollection.cs:line 158
at System.Management.Automation.Remoting.Client.BaseClientTransportManager.CloseAsync() in /home/testuser/src/github/PowerShell/src/System.Management.Automation/engine/remoting/fanin/BaseTransportManager.cs:line 949
at System.Management.Automation.Remoting.Client.WSManClientSessionTransportManager.CloseAsync() in /home/testuser/src/github/PowerShell/src/System.Management.Automation/engine/remoting/fanin/WSManTransportManager.cs:line 1219
at System.Management.Automation.Remoting.Client.BaseClientTransportManager.Finalize() in /home/testuser/src/github/PowerShell/src/System.Management.Automation/engine/remoting/fanin/BaseTransportManager.cs:line 998
```
Environment data
----------------
<!-- provide the output of $PSVersionTable -->
```powershell
> $PSVersionTable
Name Value
---- -----
PSVersion 6.0.0-alpha
PSEdition Core
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 3.0.0.0
GitCommitId v6.0.0-alpha.17-67-g42f2e3ca820d7fd7f9dd57f2bbb90e5da2ab2de0
CLRVersion
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
PS /home/testuser/src/github/PowerShell> uname -a
Linux centospsct 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
```
|
non_process
|
running new pssession in a newly built powershell should not cause the shell to exit steps to reproduce clone start psbootstrap start psbuild start psester boom expected behavior an error message or a new pssession actual behavior unhandled exception system nullreferenceexception object reference not set to an instance of an object at system management automation remoting prioritysenddatacollection clear in home testuser src github powershell src system management automation engine remoting fanin prioritycollection cs line at system management automation remoting client baseclienttransportmanager closeasync in home testuser src github powershell src system management automation engine remoting fanin basetransportmanager cs line at system management automation remoting client wsmanclientsessiontransportmanager closeasync in home testuser src github powershell src system management automation engine remoting fanin wsmantransportmanager cs line at system management automation remoting client baseclienttransportmanager finalize in home testuser src github powershell src system management automation engine remoting fanin basetransportmanager cs line environment data powershell psversiontable name value psversion alpha psedition core pscompatibleversions buildversion gitcommitid alpha clrversion wsmanstackversion psremotingprotocolversion serializationversion ps home testuser src github powershell uname a linux centospsct smp tue dec utc gnu linux
| 0
|
312,993
| 23,450,307,104
|
IssuesEvent
|
2022-08-16 01:33:53
|
dozius/TwisterSister
|
https://api.github.com/repos/dozius/TwisterSister
|
closed
|
Question re: pinned indicator
|
documentation help wanted
|
First of all, thanks for this, it's amazing. I just got an MFT and was resigning myself to building something, this is way better than I would have done.
Question re: the pinned indicator. I couldn't find something explaining what exactly this does, but it seems to me that when something is pinned, it keeps the UI focus even though the MFT can still control other tracks/devices. Is that right? Happy to submit a PR to document this if that's correct.
Cheers
|
1.0
|
Question re: pinned indicator - First of all, thanks for this, it's amazing. I just got an MFT and was resigning myself to building something, this is way better than I would have done.
Question re: the pinned indicator. I couldn't find something explaining what exactly this does, but it seems to me that when something is pinned, it keeps the UI focus even though the MFT can still control other tracks/devices. Is that right? Happy to submit a PR to document this if that's correct.
Cheers
|
non_process
|
question re pinned indicator first of all thanks for this it s amazing i just got an mft and was resigning myself to building something this is way better than i would have done question re the pinned indicator i couldn t find something explaining what exactly this does but it seems to me that when something is pinned it keeps the ui focus even though the mft can still control other tracks devices is that right happy to submit a pr to document this if that s correct cheers
| 0
|
5,483
| 8,357,323,277
|
IssuesEvent
|
2018-10-02 21:11:43
|
WorksOnArm/cluster
|
https://api.github.com/repos/WorksOnArm/cluster
|
closed
|
WorksOnArm Proposal - Clinical Trial Image Curation and Enrollment with Edge AI
|
ai approved image-processing
|
## Clinical Trial Image Curation and Enrollment with Edge AI
### Contacts
Derek Merck PhD
Rhode Island Hospital and Brown University
Providence, RI
<derek_merck@brown.edu>
Numerous clinical and student collaborators at Rhode Island Hospital, University of Michigan Hospital, and Johns Hopkins
### Background
Radiological AI has recently emerged as an area of immense potential. However, exploration of the space is largely limited to "big players" with access to the massive amounts of patient data necessary to train sophisticated models. Although a huge amount of relevant imaging is collected through publicly funded clinical trials in the US, very little of it is properly organized or ultimately becomes available for public review and analysis. We have a unique opportunity to develop and test a system for automatic image data curation for large multi-site trials. We believe that implementing this system will result in more efficient clinical trial workflows, better use of federal healthcare research money, and the eventual publication of numerous well-organized, labelled image data sets suitable for citizen-science.
### Beneficiaries
- Image AI researchers
- Medical trialists and coordinators
- Citizen-scientists and patients someday, we hope!
### Open-source Commitment
The service stack and dockerized-components are all open-source or free, but the configuration is still very much in alpha. A secondary goal is to document and share a medical image data curation system that can be implemented inexpensively by anyone.
https://github.com/derekmerck/diana_plus
https://diana.readthedocs.io
### Infrastructure
- 1 amd64 small
- 1 armv8 large
- elastic storage, approx. 2-3TB yr 1
- 10-20 arm single-board computers
Intent is to conduct a 6 month to 1 year pilot with partial funding, followed by an application to [NIH NCATS](https://ncats.nih.gov) and other agencies for full funding to continue and expand the program.
Prefer an arm server with the ability to compile for arm32v7 depending on final choice of sbc (i.e., Pi-like vs. Jetson)
Because all image data collected is deidentified as part of the clinical trial enrollment process, we believe that no specific HIPAA certification is required beyond demonstrating best practices with respect to hardware security and system hardening.
### Description
There are 3 components of the proposed project:
1. Central Cloud Service - Provides web interfaces for direct upload, image review, and summary dashboards, manages data archive and index, updates AI models if gpu's are available
2. Remote In-Hospital Services - Monitor local radiology IS for relevant patient imaging, automatically collect images, determine patient eligibility with pre-trained AI models, deidentify data and forward for aggregation.
3. CI compilation, containerization, and testing using a staging environment that mocks interactions between the central and remote services as part of development and maintenance.
First two components are independent but synergistic.
### Contributions
I direct the Medical Image Analysis Research Program at Rhode Island Hospital. A major area of our work is related to organizing the huge amounts of patient data collected by hospitals into labelled corpuses so that they can be used efficiently for reproducible science. Previously I taught computer science at public institutions for 4 years and introduced hundreds of students to Linux and the open source community.
|
1.0
|
WorksOnArm Proposal - Clinical Trial Image Curation and Enrollment with Edge AI - ## Clinical Trial Image Curation and Enrollment with Edge AI
### Contacts
Derek Merck PhD
Rhode Island Hospital and Brown University
Providence, RI
<derek_merck@brown.edu>
Numerous clinical and student collaborators at Rhode Island Hospital, University of Michigan Hospital, and Johns Hopkins
### Background
Radiological AI has recently emerged as an area of immense potential. However, exploration of the space is largely limited to "big players" with access to the massive amounts of patient data necessary to train sophisticated models. Although a huge amount of relevant imaging is collected through publicly funded clinical trials in the US, very little of it is properly organized or ultimately becomes available for public review and analysis. We have a unique opportunity to develop and test a system for automatic image data curation for large multi-site trials. We believe that implementing this system will result in more efficient clinical trial workflows, better use of federal healthcare research money, and the eventual publication of numerous well-organized, labelled image data sets suitable for citizen-science.
### Beneficiaries
- Image AI researchers
- Medical trialists and coordinators
- Citizen-scientists and patients someday, we hope!
### Open-source Commitment
The service stack and dockerized-components are all open-source or free, but the configuration is still very much in alpha. A secondary goal is to document and share a medical image data curation system that can be implemented inexpensively by anyone.
https://github.com/derekmerck/diana_plus
https://diana.readthedocs.io
### Infrastructure
- 1 amd64 small
- 1 armv8 large
- elastic storage, approx. 2-3TB yr 1
- 10-20 arm single-board computers
Intent is to conduct a 6 month to 1 year pilot with partial funding, followed by an application to [NIH NCATS](https://ncats.nih.gov) and other agencies for full funding to continue and expand the program.
Prefer an arm server with the ability to compile for arm32v7 depending on final choice of sbc (i.e., Pi-like vs. Jetson)
Because all image data collected is deidentified as part of the clinical trial enrollment process, we believe that no specific HIPAA certification is required beyond demonstrating best practices with respect to hardware security and system hardening.
### Description
There are 3 components of the proposed project:
1. Central Cloud Service - Provides web interfaces for direct upload, image review, and summary dashboards, manages data archive and index, updates AI models if gpu's are available
2. Remote In-Hospital Services - Monitor local radiology IS for relevant patient imaging, automatically collect images, determine patient eligibility with pre-trained AI models, deidentify data and forward for aggregation.
3. CI compilation, containerization, and testing using a staging environment that mocks interactions between the central and remote services as part of development and maintenance.
First two components are independent but synergistic.
### Contributions
I direct the Medical Image Analysis Research Program at Rhode Island Hospital. A major area of our work is related to organizing the huge amounts of patient data collected by hospitals into labelled corpuses so that they can be used efficiently for reproducible science. Previously I taught computer science at public institutions for 4 years and introduced hundreds of students to Linux and the open source community.
|
process
|
worksonarm proposal clinical trial image curation and enrollment with edge ai clinical trial image curation and enrollment with edge ai contacts derek merck phd rhode island hospital and brown university providence ri numerous clinical and student collaborators at rhode island hospital university of michigan hospital and johns hopkins background radiological ai has recently emerged as an area of immense potential however exploration of the space is largely limited to big players with access to the massive amounts of patient data necessary to train sophisticated models although a huge amount of relevant imaging is collected through publicly funded clinical trials in the us very little of it is properly organized or ultimately becomes available for public review and analysis we have a unique opportunity to develop and test a system for automatic image data curation for large multi site trials we believe that implementing this system will result in more efficient clinical trial workflows better use of federal healthcare research money and the eventual publication of numerous well organized labelled image data sets suitable for citizen science beneficiaries image ai researchers medical trialists and coordinators citizen scientists and patients someday we hope open source commitment the service stack and dockerized components are all open source or free but the configuration is still very much in alpha a secondary goal is to document and share a medical image data curation system that can be implemented inexpensively by anyone infrastructure small large elastic storage approx yr arm single board computers intent is to conduct a month to year pilot with partial funding followed by an application to and other agencies for full funding to continue and expand the program prefer an arm server with the ability to compile for depending on final choice of sbc i e pi like vs jetson because all image data collected is deidentified as part of the clinical trial enrollment process we believe that no specific hipaa certification is required beyond demonstrating best practices with respect to hardware security and system hardening description there are components of the proposed project central cloud service provides web interfaces for direct upload image review and summary dashboards manages data archive and index updates ai models if gpu s are available remote in hospital services monitor local radiology is for relevant patient imaging automatically collect images determine patient eligibility with pre trained ai models deidentify data and forward for aggregation ci compilation containerization and testing using a staging environment that mocks interactions between the central and remote services as part of development and maintenance first two components are independent but synergistic contributions i direct the medical image analysis research program at rhode island hospital a major area of our work is related to organizing the huge amounts of patient data collected by hospitals into labelled corpuses so that they can be used efficiently for reproducible science previously i taught computer science at public institutions for years and introduced hundreds of students to linux and the open source community
| 1
|
64,802
| 14,682,421,286
|
IssuesEvent
|
2020-12-31 16:22:51
|
labsai/EDDI
|
https://api.github.com/repos/labsai/EDDI
|
opened
|
CVE-2019-8331 (Medium) detected in bootstrap-4.2.1.js, bootstrap-4.2.1.min.js
|
security vulnerability
|
## CVE-2019-8331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-4.2.1.js</b>, <b>bootstrap-4.2.1.min.js</b></p></summary>
<p>
<details><summary><b>bootstrap-4.2.1.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.2.1/js/bootstrap.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.2.1/js/bootstrap.js</a></p>
<p>Path to vulnerable library: EDDI/apiserver/src/main/resources/js/bootstrap.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-4.2.1.js** (Vulnerable Library)
</details>
<details><summary><b>bootstrap-4.2.1.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.2.1/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.2.1/js/bootstrap.min.js</a></p>
<p>Path to vulnerable library: EDDI/apiserver/src/main/resources/js/bootstrap.min.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-4.2.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/labsai/EDDI/commit/e141334e85f823e2e1a3c8e4ac2c90fe6a35c48c">e141334e85f823e2e1a3c8e4ac2c90fe6a35c48c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/28236">https://github.com/twbs/bootstrap/pull/28236</a></p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-8331 (Medium) detected in bootstrap-4.2.1.js, bootstrap-4.2.1.min.js - ## CVE-2019-8331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-4.2.1.js</b>, <b>bootstrap-4.2.1.min.js</b></p></summary>
<p>
<details><summary><b>bootstrap-4.2.1.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.2.1/js/bootstrap.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.2.1/js/bootstrap.js</a></p>
<p>Path to vulnerable library: EDDI/apiserver/src/main/resources/js/bootstrap.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-4.2.1.js** (Vulnerable Library)
</details>
<details><summary><b>bootstrap-4.2.1.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.2.1/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.2.1/js/bootstrap.min.js</a></p>
<p>Path to vulnerable library: EDDI/apiserver/src/main/resources/js/bootstrap.min.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-4.2.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/labsai/EDDI/commit/e141334e85f823e2e1a3c8e4ac2c90fe6a35c48c">e141334e85f823e2e1a3c8e4ac2c90fe6a35c48c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/28236">https://github.com/twbs/bootstrap/pull/28236</a></p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in bootstrap js bootstrap min js cve medium severity vulnerability vulnerable libraries bootstrap js bootstrap min js bootstrap js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library eddi apiserver src main resources js bootstrap js dependency hierarchy x bootstrap js vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library eddi apiserver src main resources js bootstrap min js dependency hierarchy x bootstrap min js vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap before and x before xss is possible in the tooltip or popover data template attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap bootstrap sass step up your open source security game with whitesource
| 0
|
6,096
| 8,958,117,051
|
IssuesEvent
|
2019-01-27 11:35:11
|
Fabmanager-Deauville/FabLab-Deauville
|
https://api.github.com/repos/Fabmanager-Deauville/FabLab-Deauville
|
opened
|
Combiner processing et makey makey
|
arduino processing
|
Sur processing, créer un code qui réagit avec les touches de clavier.
Interfacer makey makey, pour remplacer le bouton par des objets tangibles.
|
1.0
|
Combiner processing et makey makey - Sur processing, créer un code qui réagit avec les touches de clavier.
Interfacer makey makey, pour remplacer le bouton par des objets tangibles.
|
process
|
combiner processing et makey makey sur processing créer un code qui réagit avec les touches de clavier interfacer makey makey pour remplacer le bouton par des objets tangibles
| 1
|
314,723
| 27,019,534,241
|
IssuesEvent
|
2023-02-10 23:20:58
|
hashicorp/terraform-provider-google
|
https://api.github.com/repos/hashicorp/terraform-provider-google
|
opened
|
Failing test(s): TestAccDataSourceGoogleFirebaseAndroidApp
|
test failure
|
<!--- This is a template for reporting test failures on nightly builds. It should only be used by core contributors who have access to our CI/CD results. --->
### Failure rates
- 100% since 2023-02-03
### Impacted tests:
<!-- List all impacted tests for searchability. The title of the issue can instead list one or more groups of tests, or describe the overall root cause. -->
- TestAccDataSourceGoogleFirebaseAndroidApp
### Affected Resource(s)
<!--- List the affected resources and data sources. Use google_* if all resources or data sources are affected. --->
* google_firebase_android_app
### Nightly builds
<!-- Link to the nightly build(s), ideally with one impacted test opened -->
- https://ci-oss.hashicorp.engineering/buildConfiguration/GoogleCloudBeta_ProviderGoogleCloudBetaGoogleProject/374144?buildTab=tests&pager.currentPage=1&name=TestAccDataSourceGoogleFirebaseAndroidApp&expandedTest=95368672504330208
<!-- The error message that displays in the tests tab, for reference -->
### Message(s)
```
Error: Error creating AndroidApp: googleapi: Error 429: Quota exceeded for quota metric 'Provision requests' and limit 'Provision requests per minute' of service 'firebase.googleapis.com' for consumer 'project_number:XXXX'
```
|
1.0
|
Failing test(s): TestAccDataSourceGoogleFirebaseAndroidApp - <!--- This is a template for reporting test failures on nightly builds. It should only be used by core contributors who have access to our CI/CD results. --->
### Failure rates
- 100% since 2023-02-03
### Impacted tests:
<!-- List all impacted tests for searchability. The title of the issue can instead list one or more groups of tests, or describe the overall root cause. -->
- TestAccDataSourceGoogleFirebaseAndroidApp
### Affected Resource(s)
<!--- List the affected resources and data sources. Use google_* if all resources or data sources are affected. --->
* google_firebase_android_app
### Nightly builds
<!-- Link to the nightly build(s), ideally with one impacted test opened -->
- https://ci-oss.hashicorp.engineering/buildConfiguration/GoogleCloudBeta_ProviderGoogleCloudBetaGoogleProject/374144?buildTab=tests&pager.currentPage=1&name=TestAccDataSourceGoogleFirebaseAndroidApp&expandedTest=95368672504330208
<!-- The error message that displays in the tests tab, for reference -->
### Message(s)
```
Error: Error creating AndroidApp: googleapi: Error 429: Quota exceeded for quota metric 'Provision requests' and limit 'Provision requests per minute' of service 'firebase.googleapis.com' for consumer 'project_number:XXXX'
```
|
non_process
|
failing test s testaccdatasourcegooglefirebaseandroidapp failure rates since impacted tests testaccdatasourcegooglefirebaseandroidapp affected resource s google firebase android app nightly builds message s error error creating androidapp googleapi error quota exceeded for quota metric provision requests and limit provision requests per minute of service firebase googleapis com for consumer project number xxxx
| 0
|
74,633
| 20,258,072,512
|
IssuesEvent
|
2022-02-15 02:42:20
|
WormieCorp/Cake.Addin.Analyzer
|
https://api.github.com/repos/WormieCorp/Cake.Addin.Analyzer
|
closed
|
Remove target framework netcoreapp2.1 in the test project
|
Build
|
The .NET Core 2.1 target reference is out of date, and won't be receiving any updates in the future.
As such it should be removed from the frameworks we test against in the test project.
Since only the test project references this framework, it is not considered a breaking change, and only build related.
|
1.0
|
Remove target framework netcoreapp2.1 in the test project - The .NET Core 2.1 target reference is out of date, and won't be receiving any updates in the future.
As such it should be removed from the frameworks we test against in the test project.
Since only the test project references this framework, it is not considered a breaking change, and only build related.
|
non_process
|
remove target framework in the test project the net core target reference is out of date and won t be receiving any updates in the future as such it should be removed from the frameworks we test against in the test project since only the test project references this framework it is not considered a breaking change and only build related
| 0
|
126,267
| 10,416,355,076
|
IssuesEvent
|
2019-09-14 12:53:03
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
[Windows Logging] Init-rancher-logging-fluentd workload failed to start
|
[zube]: To Test area/logging kind/bug-qa team/cn
|
<!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
bug
**Steps to reproduce (least amount of steps as possible):**
1. Launch a cluster of 1 linux worker and 1 windows worker
2. Enable logging
3. After logging is enabled successfully, then disable logging
4. Enable logging again
**Result:**
`init-rancher-logging-fluentd` failed to start, error log:
```
mkdir : An item with the specified name C:\var\lib\rancher\fluentd\log already
exists.
At line:1 char:1
+ mkdir -p /var/lib/rancher/fluentd/log
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceExists: (C:\var\lib\rancher\fluentd\log:
String) [New-Item], IOException
+ FullyQualifiedErrorId : DirectoryExist,Microsoft.PowerShell.Commands.New
ItemCommand
```
If I manually delete the `C:\var\lib\rancher\fluentd\` directory in the windows node, the cluster starts successfully.
**Other details that may be helpful:**
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): rancher/rancher:master-head (9/6/2019)
- Installation option (single install/HA): single
|
1.0
|
[Windows Logging] Init-rancher-logging-fluentd workload failed to start - <!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
bug
**Steps to reproduce (least amount of steps as possible):**
1. Launch a cluster of 1 linux worker and 1 windows worker
2. Enable logging
3. After logging is enabled successfully, then disable logging
4. Enable logging again
**Result:**
`init-rancher-logging-fluentd` failed to start, error log:
```
mkdir : An item with the specified name C:\var\lib\rancher\fluentd\log already
exists.
At line:1 char:1
+ mkdir -p /var/lib/rancher/fluentd/log
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceExists: (C:\var\lib\rancher\fluentd\log:
String) [New-Item], IOException
+ FullyQualifiedErrorId : DirectoryExist,Microsoft.PowerShell.Commands.New
ItemCommand
```
If I manually delete the `C:\var\lib\rancher\fluentd\` directory in the windows node, the cluster starts successfully.
**Other details that may be helpful:**
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): rancher/rancher:master-head (9/6/2019)
- Installation option (single install/HA): single
|
non_process
|
init rancher logging fluentd workload failed to start please search for existing issues first then read to see what we expect in an issue for security issues please email security rancher com instead of posting a public issue in github you may but are not required to use the gpg key located on keybase what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible launch a cluster of linux worker and windows worker enable logging after logging is enabled successfully then disable logging enable logging again result init rancher logging fluentd failed to start error log mkdir an item with the specified name c var lib rancher fluentd log already exists at line char mkdir p var lib rancher fluentd log categoryinfo resourceexists c var lib rancher fluentd log string ioexception fullyqualifiederrorid directoryexist microsoft powershell commands new itemcommand if i manually delete the c var lib rancher fluentd directory in the windows node the cluster starts successfully other details that may be helpful environment information rancher version rancher rancher rancher server image tag or shown bottom left in the ui rancher rancher master head installation option single install ha single
| 0
|
14,616
| 17,758,776,463
|
IssuesEvent
|
2021-08-29 09:34:04
|
Blazebit/blaze-persistence
|
https://api.github.com/repos/Blazebit/blaze-persistence
|
closed
|
Annotation Processor generates invalid import statements with annotations like NotNull
|
kind: bug worth: high component: entity-view-annotation-processor
|
<!--- This template is for bugs. Remove it for other issues. -->
<!--- Choose an expressive title -->
### Description
<!--- Explain what you did and maybe show some code excerpts -->
Using javax.validation annotations with annotation processor generates invalid import statements for the view implementation and the view builder.
The use case was to use those annotations in order to validate incoming views from JAX-RS requests, like `NotNull`.
### Expected behavior
<!--- What outcome would you expect? -->
I expect the annotation processor to generate a valid view class implementation.
### Actual behavior
<!--- What happened that you didn't expect? -->
The import for `@Min(20)` is right before the import of the attribute type (label).
```
package com.example;
import @javax.validation.constraints.Min(20L) java.lang.Integer;
import com.blazebit.persistence.view.EntityViewManager;
import com.blazebit.persistence.view.SerializableEntityViewManager;
import com.blazebit.persistence.view.StaticImplementation;
import com.blazebit.persistence.view.spi.type.EntityViewProxy;
import java.util.Map;
import java.util.Objects;
import javax.annotation.Generated;
```
### Steps to reproduce
<!--- Give us enough details so we can create a testcase that reproduces this problem -->
<!--- Ideally you would create a test case based on the template project from: https://github.com/Blazebit/blaze-persistence-test-case-template -->
<!--- Either attach a ZIP containing the test case or create a pull request against the test-case-template repository and put the link to the PR here -->
See: https://github.com/ahofmeister/blaze-persistence-annotation-processor-reproducer
### Environment
<!--- Environment info like e.g. -->
<!--- Version: 1.2.0-Alpha1 -->
<!--- JPA-Provider: Hibernate 5.2.7.Final -->
<!--- DBMS: PostgreSQL 9.6.2 -->
<!--- Application Server: Java SE -->
Version:
JPA-Provider:
DBMS:
Application Server: Quarkus 2.0.2.Final
|
1.0
|
Annotation Processor generates invalid import statements with annotations like NotNull - <!--- This template is for bugs. Remove it for other issues. -->
<!--- Choose an expressive title -->
### Description
<!--- Explain what you did and maybe show some code excerpts -->
Using javax.validation annotations with annotation processor generates invalid import statements for the view implementation and the view builder.
The use case was to use those annotations in order to validate incoming views from JAX-RS requests, like `NotNull`.
### Expected behavior
<!--- What outcome would you expect? -->
I expect the annotation processor to generate a valid view class implementation.
### Actual behavior
<!--- What happened that you didn't expect? -->
The import for `@Min(20)` is right before the import of the attribute type (label).
```
package com.example;
import @javax.validation.constraints.Min(20L) java.lang.Integer;
import com.blazebit.persistence.view.EntityViewManager;
import com.blazebit.persistence.view.SerializableEntityViewManager;
import com.blazebit.persistence.view.StaticImplementation;
import com.blazebit.persistence.view.spi.type.EntityViewProxy;
import java.util.Map;
import java.util.Objects;
import javax.annotation.Generated;
```
### Steps to reproduce
<!--- Give us enough details so we can create a testcase that reproduces this problem -->
<!--- Ideally you would create a test case based on the template project from: https://github.com/Blazebit/blaze-persistence-test-case-template -->
<!--- Either attach a ZIP containing the test case or create a pull request against the test-case-template repository and put the link to the PR here -->
See: https://github.com/ahofmeister/blaze-persistence-annotation-processor-reproducer
### Environment
<!--- Environment info like e.g. -->
<!--- Version: 1.2.0-Alpha1 -->
<!--- JPA-Provider: Hibernate 5.2.7.Final -->
<!--- DBMS: PostgreSQL 9.6.2 -->
<!--- Application Server: Java SE -->
Version:
JPA-Provider:
DBMS:
Application Server: Quarkus 2.0.2.Final
|
process
|
annotation processor generates invalid import statements with annotations like notnull description using javax validation annotations with annotation processor generates invalid import statements for the view implementation and the view builder the use case was to use those annotations in order to validate incoming views from jax rs requests like notnull expected behavior i expect the annotation processor to generate a valid view class implementation actual behavior the import for min is right before the import of the attribute type label package com example import javax validation constraints min java lang integer import com blazebit persistence view entityviewmanager import com blazebit persistence view serializableentityviewmanager import com blazebit persistence view staticimplementation import com blazebit persistence view spi type entityviewproxy import java util map import java util objects import javax annotation generated steps to reproduce see environment version jpa provider dbms application server quarkus final
| 1
|
6,062
| 8,900,991,995
|
IssuesEvent
|
2019-01-17 00:15:45
|
edgi-govdata-archiving/web-monitoring
|
https://api.github.com/repos/edgi-govdata-archiving/web-monitoring
|
closed
|
Data analysis to look for patterns in insignificant changes
|
gsoc-wm processing stale
|
I have spent some time as a first attempt to do this and we now have some filters which can ID a few types of insignificant changes. The work and details can be found [here](https://github.com/edgi-govdata-archiving/web-monitoring-processing/issues/80).
This issue will be used to keep track of the future developments.
|
1.0
|
Data analysis to look for patterns in insignificant changes - I have spent some time as a first attempt to do this and we now have some filters which can ID a few types of insignificant changes. The work and details can be found [here](https://github.com/edgi-govdata-archiving/web-monitoring-processing/issues/80).
This issue will be used to keep track of the future developments.
|
process
|
data analysis to look for patterns in insignificant changes i have spent some time as a first attempt to do this and we now have some filters which can id a few types of insignificant changes the work and details can be found this issue will be used to keep track of the future developments
| 1
|
265,828
| 28,298,760,909
|
IssuesEvent
|
2023-04-10 02:38:16
|
nidhi7598/linux-4.19.72
|
https://api.github.com/repos/nidhi7598/linux-4.19.72
|
closed
|
CVE-2021-26930 (High) detected in linuxlinux-4.19.254 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2021-26930 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.254</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72/commit/10a8c99e4f60044163c159867bc6f5452c1c36e5">10a8c99e4f60044163c159867bc6f5452c1c36e5</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/xen-blkback/blkback.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/xen-blkback/blkback.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel 3.11 through 5.10.16, as used by Xen. To service requests to the PV backend, the driver maps grant references provided by the frontend. In this process, errors may be encountered. In one case, an error encountered earlier might be discarded by later processing, resulting in the caller assuming successful mapping, and hence subsequent operations trying to access space that wasn't mapped. In another case, internal state would be insufficiently updated, preventing safe recovery from the error. This affects drivers/block/xen-blkback/blkback.c.
<p>Publish Date: 2021-02-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-26930>CVE-2021-26930</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-26930">https://nvd.nist.gov/vuln/detail/CVE-2021-26930</a></p>
<p>Release Date: 2021-02-17</p>
<p>Fix Resolution: linux-libc-headers - 5.13;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-26930 (High) detected in linuxlinux-4.19.254 - autoclosed - ## CVE-2021-26930 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.254</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72/commit/10a8c99e4f60044163c159867bc6f5452c1c36e5">10a8c99e4f60044163c159867bc6f5452c1c36e5</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/xen-blkback/blkback.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/block/xen-blkback/blkback.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel 3.11 through 5.10.16, as used by Xen. To service requests to the PV backend, the driver maps grant references provided by the frontend. In this process, errors may be encountered. In one case, an error encountered earlier might be discarded by later processing, resulting in the caller assuming successful mapping, and hence subsequent operations trying to access space that wasn't mapped. In another case, internal state would be insufficiently updated, preventing safe recovery from the error. This affects drivers/block/xen-blkback/blkback.c.
<p>Publish Date: 2021-02-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-26930>CVE-2021-26930</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-26930">https://nvd.nist.gov/vuln/detail/CVE-2021-26930</a></p>
<p>Release Date: 2021-02-17</p>
<p>Fix Resolution: linux-libc-headers - 5.13;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers block xen blkback blkback c drivers block xen blkback blkback c vulnerability details an issue was discovered in the linux kernel through as used by xen to service requests to the pv backend the driver maps grant references provided by the frontend in this process errors may be encountered in one case an error encountered earlier might be discarded by later processing resulting in the caller assuming successful mapping and hence subsequent operations trying to access space that wasn t mapped in another case internal state would be insufficiently updated preventing safe recovery from the error this affects drivers block xen blkback blkback c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux libc headers linux yocto gitautoinc gitautoinc step up your open source security game with mend
| 0
|
107,695
| 9,221,103,238
|
IssuesEvent
|
2019-03-11 19:07:00
|
dtag-dev-sec/tpotce
|
https://api.github.com/repos/dtag-dev-sec/tpotce
|
closed
|
Unhandled exception (KeyError)
|
bug fix testing
|
### Basic support information
- What T-Pot version are you currently using?
- Are you running on an Intel NUC or a VM?
- direct hardware install to an I5 w/8 gig of RAM
- How long has your installation been running?
1 day
- Did you install any upgrades or packages?
No. I ran update.sh -y though.
- Did you modify any scripts?
No
- Have you turned persistence on/off?
not intentionally
- How much RAM is available (login via ssh and run `htop`)?
4.38G/7.52G
- How much stress are the CPUs under (login via ssh and run `htop`)?
5%
- How much swap space is being used (login via ssh and run `htop`)?
780k/7.63G
- How much free disk space is available (login via ssh and run `sudo df -h`)?
96% free
- What is the current container status (login via ssh and run `sudo dps.sh`)?
```
========| System |========
Date: Sat Mar 9 14:28:29 CST 2019
Uptime: 14:28:29 up 11:00, 1 user, load average: 0.16, 0.35, 0.44
CPU temp:
NAME STATUS PORTS
adbhoney Up 3 hours 0.0.0.0:5555->5555/tcp
ciscoasa Up 3 hours
conpot_guardian_ast Up 3 hours 0.0.0.0:10001->10001/tcp
conpot_iec104 Up 3 hours 0.0.0.0:161->161/tcp, 0.0.0.0:2404->2404/tcp
conpot_ipmi Up 3 hours 0.0.0.0:623->623/tcp
conpot_kamstrup_382 Up 3 hours 0.0.0.0:1025->1025/tcp, 0.0.0.0:50100->50100/tcp
cowrie Up 3 hours 0.0.0.0:22-23->22-23/tcp
cyberchef Up 3 hours (healthy) 127.0.0.1:64299->8000/tcp
dionaea Up 3 hours
elasticpot Up 3 hours 0.0.0.0:9200->9200/tcp
elasticsearch Up 3 hours (healthy) 127.0.0.1:64298->9200/tcp
head Up 3 hours (healthy) 127.0.0.1:64302->9100/tcp
heralding Up 3 hours 0.0.0.0:110->110/tcp, 0.0.0.0:143->143/tcp, 0.0.0.0:993->993/tcp, 0.0.0.0:995->995/tcp, 0.0.0.0:5432->5432/tcp, 0.0.0.0:5900->5900/tcp
honeytrap Up 3 hours
kibana Up 3 hours (healthy) 127.0.0.1:64296->5601/tcp
logstash Up 3 hours (healthy)
mailoney Up 3 hours 0.0.0.0:25->25/tcp
medpot Up 3 hours 0.0.0.0:2575->2575/tcp
nginx Up 3 hours
p0f Up 3 hours
rdpy Up 3 hours 0.0.0.0:3389->3389/tcp
snare Up 3 hours 0.0.0.0:80->80/tcp
spiderfoot Up 3 hours (healthy) 127.0.0.1:64303->8080/tcp
suricata Up 3 hours
tanner Up 3 hours
tanner_api Up 3 hours
tanner_phpox Up 3 hours
tanner_redis Up 3 hours 6379/tcp
tanner_web Up 3 hours
--
```
Here is the error message from spiderfoot:
```
--
Unhandled exception (KeyError) encountered during scan. Please report this as a bug: ['Traceback (most recent call last):\n', ' File "/home/spiderfoot/sfscan.py", line 265, in startScan\n psMod.notifyListeners(firstEvent)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_bingsearch.py", line 93, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 162, in handleEvent\n self.processHost(match[1], parentEvent, False)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 334, in processHost\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_bingsearch.py", line 93, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 162, in handleEvent\n self.processHost(match[1], parentEvent, False)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 338, in processHost\n self.processDomain(dom, evt)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 357, in processDomain\n self.notifyListeners(domevt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_dnsbrute.py", line 131, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 222, in handleEvent\n self.processHost(addr, parentEvent, False)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 334, in processHost\n 
self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_bingsharedip.py", line 133, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 162, in handleEvent\n self.processHost(match[1], parentEvent, False)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 334, in processHost\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_bingsearch.py", line 93, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_spider.py", line 262, in handleEvent\n return self.spiderFrom(spiderTarget)\n', ' File "/home/spiderfoot/modules/sfp_spider.py", line 286, in spiderFrom\n links = self.processUrl(startingPoint) # fetch first page\n', ' File "/home/spiderfoot/modules/sfp_spider.py", line 107, in processUrl\n self.contentNotify(url, fetched, self.urlEvents[url])\n', ' File "/home/spiderfoot/modules/sfp_spider.py", line 199, in contentNotify\n self.notifyListeners(event)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_email.py", line 98, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_fullcontact.py", line 151, in handleEvent\n e = SpiderFootEvent("HUMAN_NAME", data[\'fullName\'], self.__name__, event)\n', "KeyError: 'fullName'\n"]
--
```
spiderfoot is aware of the issue, and believes they have resolved it (https://github.com/smicallef/spiderfoot/issues/242). The solution they recommended is to obtain the latest version.
|
1.0
|
Unhandled exception (KeyError) - ### Basic support information
- What T-Pot version are you currently using?
- Are you running on an Intel NUC or a VM?
- direct hardware install to an I5 w/8 gig of RAM
- How long has your installation been running?
1 day
- Did you install any upgrades or packages?
No. I ran update.sh -y though.
- Did you modify any scripts?
No
- Have you turned persistence on/off?
not intentionally
- How much RAM is available (login via ssh and run `htop`)?
4.38G/7.52G
- How much stress are the CPUs under (login via ssh and run `htop`)?
5%
- How much swap space is being used (login via ssh and run `htop`)?
780k/7.63G
- How much free disk space is available (login via ssh and run `sudo df -h`)?
96% free
- What is the current container status (login via ssh and run `sudo dps.sh`)?
```
========| System |========
Date: Sat Mar 9 14:28:29 CST 2019
Uptime: 14:28:29 up 11:00, 1 user, load average: 0.16, 0.35, 0.44
CPU temp:
NAME STATUS PORTS
adbhoney Up 3 hours 0.0.0.0:5555->5555/tcp
ciscoasa Up 3 hours
conpot_guardian_ast Up 3 hours 0.0.0.0:10001->10001/tcp
conpot_iec104 Up 3 hours 0.0.0.0:161->161/tcp, 0.0.0.0:2404->2404/tcp
conpot_ipmi Up 3 hours 0.0.0.0:623->623/tcp
conpot_kamstrup_382 Up 3 hours 0.0.0.0:1025->1025/tcp, 0.0.0.0:50100->50100/tcp
cowrie Up 3 hours 0.0.0.0:22-23->22-23/tcp
cyberchef Up 3 hours (healthy) 127.0.0.1:64299->8000/tcp
dionaea Up 3 hours
elasticpot Up 3 hours 0.0.0.0:9200->9200/tcp
elasticsearch Up 3 hours (healthy) 127.0.0.1:64298->9200/tcp
head Up 3 hours (healthy) 127.0.0.1:64302->9100/tcp
heralding Up 3 hours 0.0.0.0:110->110/tcp, 0.0.0.0:143->143/tcp, 0.0.0.0:993->993/tcp, 0.0.0.0:995->995/tcp, 0.0.0.0:5432->5432/tcp, 0.0.0.0:5900->5900/tcp
honeytrap Up 3 hours
kibana Up 3 hours (healthy) 127.0.0.1:64296->5601/tcp
logstash Up 3 hours (healthy)
mailoney Up 3 hours 0.0.0.0:25->25/tcp
medpot Up 3 hours 0.0.0.0:2575->2575/tcp
nginx Up 3 hours
p0f Up 3 hours
rdpy Up 3 hours 0.0.0.0:3389->3389/tcp
snare Up 3 hours 0.0.0.0:80->80/tcp
spiderfoot Up 3 hours (healthy) 127.0.0.1:64303->8080/tcp
suricata Up 3 hours
tanner Up 3 hours
tanner_api Up 3 hours
tanner_phpox Up 3 hours
tanner_redis Up 3 hours 6379/tcp
tanner_web Up 3 hours
--
```
Here is the error message from spiderfoot:
```
--
Unhandled exception (KeyError) encountered during scan. Please report this as a bug: ['Traceback (most recent call last):\n', ' File "/home/spiderfoot/sfscan.py", line 265, in startScan\n psMod.notifyListeners(firstEvent)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_bingsearch.py", line 93, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 162, in handleEvent\n self.processHost(match[1], parentEvent, False)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 334, in processHost\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_bingsearch.py", line 93, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 162, in handleEvent\n self.processHost(match[1], parentEvent, False)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 338, in processHost\n self.processDomain(dom, evt)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 357, in processDomain\n self.notifyListeners(domevt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_dnsbrute.py", line 131, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 222, in handleEvent\n self.processHost(addr, parentEvent, False)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 334, in processHost\n 
self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_bingsharedip.py", line 133, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 162, in handleEvent\n self.processHost(match[1], parentEvent, False)\n', ' File "/home/spiderfoot/modules/sfp_dnsresolve.py", line 334, in processHost\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_bingsearch.py", line 93, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_spider.py", line 262, in handleEvent\n return self.spiderFrom(spiderTarget)\n', ' File "/home/spiderfoot/modules/sfp_spider.py", line 286, in spiderFrom\n links = self.processUrl(startingPoint) # fetch first page\n', ' File "/home/spiderfoot/modules/sfp_spider.py", line 107, in processUrl\n self.contentNotify(url, fetched, self.urlEvents[url])\n', ' File "/home/spiderfoot/modules/sfp_spider.py", line 199, in contentNotify\n self.notifyListeners(event)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_email.py", line 98, in handleEvent\n self.notifyListeners(evt)\n', ' File "/home/spiderfoot/sflib.py", line 1527, in notifyListeners\n listener.handleEvent(sfEvent)\n', ' File "/home/spiderfoot/modules/sfp_fullcontact.py", line 151, in handleEvent\n e = SpiderFootEvent("HUMAN_NAME", data[\'fullName\'], self.__name__, event)\n', "KeyError: 'fullName'\n"]
--
```
spiderfoot is aware of the issue, and believes they have resolved it (https://github.com/smicallef/spiderfoot/issues/242). The solution they recommended is to obtain the latest version.
|
non_process
|
unhandled exception keyerror basic support information what t pot version are you currently using are you running on a intel nuc or a vm direct hardware install to an w gig of ram how long has your installation been running day did you install any upgrades or packages no i ran update sh y though did you modify any scripts no have you turned persistence on off not intentionally how much ram is available login via ssh and run htop how much stress are the cpus under login via ssh and run htop how much swap space is being used login via ssh and run htop how much free disk space is available login via ssh and run sudo df h free what is the current container status login via ssh and run sudo dps sh system date sat mar cst uptime up user load average cpu temp name status ports adbhoney up hours tcp ciscoasa up hours conpot guardian ast up hours tcp conpot up hours tcp tcp conpot ipmi up hours tcp conpot kamstrup up hours tcp tcp cowrie up hours tcp cyberchef up hours healthy tcp dionaea up hours elasticpot up hours tcp elasticsearch up hours healthy tcp head up hours healthy tcp heralding up hours tcp tcp tcp tcp tcp tcp honeytrap up hours kibana up hours healthy tcp logstash up hours healthy mailoney up hours tcp medpot up hours tcp nginx up hours up hours rdpy up hours tcp snare up hours tcp spiderfoot up hours healthy tcp suricata up hours tanner up hours tanner api up hours tanner phpox up hours tanner redis up hours tcp tanner web up hours here is the error message from spiderfoot unhandled exception keyerror encountered during scan please report this as a bug parentevent false n file home spiderfoot modules sfp dnsresolve py line in processhost n self notifylisteners evt n file home spiderfoot sflib py line in notifylisteners n listener handleevent sfevent n file home spiderfoot modules sfp bingsearch py line in handleevent n self notifylisteners evt n file home spiderfoot sflib py line in notifylisteners n listener handleevent sfevent n file home spiderfoot modules 
sfp dnsresolve py line in handleevent n self processhost match parentevent false n file home spiderfoot modules sfp dnsresolve py line in processhost n self processdomain dom evt n file home spiderfoot modules sfp dnsresolve py line in processdomain n self notifylisteners domevt n file home spiderfoot sflib py line in notifylisteners n listener handleevent sfevent n file home spiderfoot modules sfp dnsbrute py line in handleevent n self notifylisteners evt n file home spiderfoot sflib py line in notifylisteners n listener handleevent sfevent n file home spiderfoot modules sfp dnsresolve py line in handleevent n self processhost addr parentevent false n file home spiderfoot modules sfp dnsresolve py line in processhost n self notifylisteners evt n file home spiderfoot sflib py line in notifylisteners n listener handleevent sfevent n file home spiderfoot modules sfp bingsharedip py line in handleevent n self notifylisteners evt n file home spiderfoot sflib py line in notifylisteners n listener handleevent sfevent n file home spiderfoot modules sfp dnsresolve py line in handleevent n self processhost match parentevent false n file home spiderfoot modules sfp dnsresolve py line in processhost n self notifylisteners evt n file home spiderfoot sflib py line in notifylisteners n listener handleevent sfevent n file home spiderfoot modules sfp bingsearch py line in handleevent n self notifylisteners evt n file home spiderfoot sflib py line in notifylisteners n listener handleevent sfevent n file home spiderfoot modules sfp spider py line in handleevent n return self spiderfrom spidertarget n file home spiderfoot modules sfp spider py line in spiderfrom n links self processurl startingpoint fetch first page n file home spiderfoot modules sfp spider py line in processurl n self contentnotify url fetched self urlevents n file home spiderfoot modules sfp spider py line in contentnotify n self notifylisteners event n file home spiderfoot sflib py line in notifylisteners n 
listener handleevent sfevent n file home spiderfoot modules sfp email py line in handleevent n self notifylisteners evt n file home spiderfoot sflib py line in notifylisteners n listener handleevent sfevent n file home spiderfoot modules sfp fullcontact py line in handleevent n e spiderfootevent human name data self name event n keyerror fullname n spiderfoot is aware of the issue and believes they have resolved it the solution they recommended is to obtain the latest version
| 0
|
11,938
| 14,707,088,834
|
IssuesEvent
|
2021-01-04 20:59:48
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Missing information on how to actually create process parameters
|
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
|
I'm trying to create new process parameters for a classic tfvc based build pipeline but i cannot find it in the UI and the link to the referenced information does not provide **any** information on how to do it.
It would be good that the referenced doc describes how to do it or at least provide additional links / information on where to find this.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 1ec7e5a9-e22e-a329-14cf-6e21e65cc85f
* Version Independent ID: c96aaf4e-f5ec-c42f-05cb-e2366167fbc6
* Content: [Process parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/parameters?view=azure-devops)
* Content Source: [docs/pipelines/process/parameters.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Missing information on how to actually create process parameters -
I'm trying to create new process parameters for a classic tfvc based build pipeline but i cannot find it in the UI and the link to the referenced information does not provide **any** information on how to do it.
It would be good that the referenced doc describes how to do it or at least provide additional links / information on where to find this.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 1ec7e5a9-e22e-a329-14cf-6e21e65cc85f
* Version Independent ID: c96aaf4e-f5ec-c42f-05cb-e2366167fbc6
* Content: [Process parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/parameters?view=azure-devops)
* Content Source: [docs/pipelines/process/parameters.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
missing information on how to actually create process parameters i m trying to create new process parameters for a classic tfvc based build pipeline but i cannot find it in the ui and the link to the referenced information does not provide any information on how to do it it would be good that the referenced doc describes how to do it or at least provide additional links information on where to find this document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
762,468
| 26,719,630,581
|
IssuesEvent
|
2023-01-29 00:40:11
|
FRCTeam3206/2023-Winter-Season
|
https://api.github.com/repos/FRCTeam3206/2023-Winter-Season
|
closed
|
Vision Processing
|
High Priority
|
Implement vision processing so we can use the April Tags in both auton and teleop. Drive code not necessary, but preferred
### Due Date: Start ASAP, Provide weekly progress reports
|
1.0
|
Vision Processing - Implement vision processing so we can use the April Tags in both auton and teleop. Drive code not necessary, but preferred
### Due Date: Start ASAP, Provide weekly progress reports
|
non_process
|
vision processing implement vision processing so we can use the april tags in both auton and teleop drive code not necessary but preferred due date start asap provide weekly progress reports
| 0
|
26,663
| 7,857,035,941
|
IssuesEvent
|
2018-06-21 09:29:52
|
fossasia/susi_skill_cms
|
https://api.github.com/repos/fossasia/susi_skill_cms
|
opened
|
Bot Name should be same as the Skill Name
|
Botbuilder enhancement
|
**Actual Behaviour**
Bot Name and Skill Name are separate fields.
<!-- Please state here what is currently happening. -->
**Expected Behaviour**
Bot Name should be same as the Skill Name. They should be a single field.
<!-- State here what the feature should enable the user to do. -->
**Would you like to work on the issue?**
Yes
<!-- Please let us know if you can work on it or the issue should be assigned to someone else. -->
|
1.0
|
Bot Name should be same as the Skill Name - **Actual Behaviour**
Bot Name and Skill Name are separate fields.
<!-- Please state here what is currently happening. -->
**Expected Behaviour**
Bot Name should be same as the Skill Name. They should be a single field.
<!-- State here what the feature should enable the user to do. -->
**Would you like to work on the issue?**
Yes
<!-- Please let us know if you can work on it or the issue should be assigned to someone else. -->
|
non_process
|
bot name should be same as the skill name actual behaviour bot name and skill name are separate fields expected behaviour bot name should be same as the skill name they should be a single field would you like to work on the issue yes
| 0
|
195,838
| 14,785,351,420
|
IssuesEvent
|
2021-01-12 02:31:59
|
google/knative-gcp
|
https://api.github.com/repos/google/knative-gcp
|
closed
|
E2E test for Channel
|
area/observability area/test-and-release lifecycle/stale priority/2 release/2
|
**Problem**
In order to ensure that the Channel generates metrics correctly, we need an E2E test.
**[Persona:](https://github.com/knative/eventing/blob/master/docs/personas.md)**
Contributor
**Exit Criteria**
A test in the e2e suite that runs before every PR merge. The test verifies that an event sent through a Channel sends at least `event_count` metrics.
|
1.0
|
E2E test for Channel - **Problem**
In order to ensure that the Channel generates metrics correctly, we need an E2E test.
**[Persona:](https://github.com/knative/eventing/blob/master/docs/personas.md)**
Contributor
**Exit Criteria**
A test in the e2e suite that runs before every PR merge. The test verifies that an event sent through a Channel sends at least `event_count` metrics.
|
non_process
|
test for channel problem in order to ensure that the channel generates metrics correctly we need an test contributor exit criteria a test in the suite that runs before every pr merge the test verifies that an event sent through a channel sends at least event count metrics
| 0
|
310,149
| 23,323,002,251
|
IssuesEvent
|
2022-08-08 18:14:26
|
microsoft/STL
|
https://api.github.com/repos/microsoft/STL
|
closed
|
Status Chart: Investigate Firefox dark mode
|
documentation info needed
|
Reported by @h-vetinari in https://github.com/microsoft/STL/pull/2975#issuecomment-1203799529.
I need to figure out why #2975 isn't working for Firefox. `check_css()` attempts to call `load_charts()` immediately (unlike the earlier #2967), so my tentative guess is that `regex.test(sheet.href)` is failing.
|
1.0
|
Status Chart: Investigate Firefox dark mode - Reported by @h-vetinari in https://github.com/microsoft/STL/pull/2975#issuecomment-1203799529.
I need to figure out why #2975 isn't working for Firefox. `check_css()` attempts to call `load_charts()` immediately (unlike the earlier #2967), so my tentative guess is that `regex.test(sheet.href)` is failing.
|
non_process
|
status chart investigate firefox dark mode reported by h vetinari in i need to figure out why isn t working for firefox check css attempts to call load charts immediately unlike the earlier so my tentative guess is that regex test sheet href is failing
| 0
|
14,188
| 17,091,086,158
|
IssuesEvent
|
2021-07-08 17:33:06
|
googleapis/google-auth-library-python
|
https://api.github.com/repos/googleapis/google-auth-library-python
|
opened
|
Kokoro 'docs-presubmit' jobs fail consistently
|
type: process
|
E.g., [this failure](https://source.cloud.google.com/results/invocations/480d5f2b-91f0-45d8-9b71-eb4b5be28fad/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fgoogle-auth-library-python%2Fdocs%2Fdocs-presubmit/log):
```
================================================================
2021-07-08T16:53:25Z: Running the tests in a Docker container.
================================================================
KOKORO_KEYSTORE_DIR=/secrets/keystore
KOKORO_GITHUB_PULL_REQUEST_NUMBER=794
KOKORO_GITHUB_PULL_REQUEST_URL=https://github.com/googleapis/google-auth-library-python/pull/794
KOKORO_JOB_NAME=cloud-devrel/client-libraries/python/googleapis/google-auth-library-python/docs/docs-presubmit
KOKORO_GIT_COMMIT=f2e2eae0c46e1e51ac5b1d3669854f9cb1ed757d
KOKORO_GITHUB_PULL_REQUEST_COMMIT=f2e2eae0c46e1e51ac5b1d3669854f9cb1ed757d
KOKORO_BUILD_NUMBER=311
KOKORO_BUILD_ID=480d5f2b-91f0-45d8-9b71-eb4b5be28fad
KOKORO_GFILE_DIR=/secrets/gfile
cat: /secrets/gfile/project-id.txt: No such file or directory
.kokoro/build.sh: line 37: gcloud: command not found
```
Seems like we need to update `.kokoro/docker/docs/Dockerfile` to get the SDK installed?
|
1.0
|
Kokoro 'docs-presubmit' jobs fail consistently - E.g., [this failure](https://source.cloud.google.com/results/invocations/480d5f2b-91f0-45d8-9b71-eb4b5be28fad/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fgoogle-auth-library-python%2Fdocs%2Fdocs-presubmit/log):
```
================================================================
2021-07-08T16:53:25Z: Running the tests in a Docker container.
================================================================
KOKORO_KEYSTORE_DIR=/secrets/keystore
KOKORO_GITHUB_PULL_REQUEST_NUMBER=794
KOKORO_GITHUB_PULL_REQUEST_URL=https://github.com/googleapis/google-auth-library-python/pull/794
KOKORO_JOB_NAME=cloud-devrel/client-libraries/python/googleapis/google-auth-library-python/docs/docs-presubmit
KOKORO_GIT_COMMIT=f2e2eae0c46e1e51ac5b1d3669854f9cb1ed757d
KOKORO_GITHUB_PULL_REQUEST_COMMIT=f2e2eae0c46e1e51ac5b1d3669854f9cb1ed757d
KOKORO_BUILD_NUMBER=311
KOKORO_BUILD_ID=480d5f2b-91f0-45d8-9b71-eb4b5be28fad
KOKORO_GFILE_DIR=/secrets/gfile
cat: /secrets/gfile/project-id.txt: No such file or directory
.kokoro/build.sh: line 37: gcloud: command not found
```
Seems like we need to update `.kokoro/docker/docs/Dockerfile` to get the SDK installed?
|
process
|
kokoro docs presubmit jobs fail consistently e g running the tests in a docker container kokoro keystore dir secrets keystore kokoro github pull request number kokoro github pull request url kokoro job name cloud devrel client libraries python googleapis google auth library python docs docs presubmit kokoro git commit kokoro github pull request commit kokoro build number kokoro build id kokoro gfile dir secrets gfile cat secrets gfile project id txt no such file or directory kokoro build sh line gcloud command not found seems like we need to update kokoro docker docs dockerfile to get the sdk installed
| 1
|
18,247
| 24,323,967,634
|
IssuesEvent
|
2022-09-30 13:18:40
|
km4ack/patmenu2
|
https://api.github.com/repos/km4ack/patmenu2
|
closed
|
Enhancement - add configured pat location to the menu gui front page
|
enhancement in process
|
pat menu at the bottom has "my call" current config.
can you add to the right of those... the current set location
possibly the last station list download age
|
1.0
|
Enhancement - add configured pat location to the menu gui front page - pat menu at the bottom has "my call" current config.
can you add to the right of those... the current set location
possibly the last station list download age
|
process
|
enhancement add configured pat location to the menu gui front page pat menu at the bottom has my call current config can you add to the right of those the current set location possibly the last station list download age
| 1
|
243,710
| 20,515,154,724
|
IssuesEvent
|
2022-03-01 10:57:13
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
sql/tests: TestClusterID failed
|
C-test-failure O-robot branch-master
|
sql/tests.TestClusterID [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4481587&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4481587&tab=artifacts#/) on master @ [bf13dc6061ab080c8c4306ebabb5065331877508](https://github.com/cockroachdb/cockroach/commits/bf13dc6061ab080c8c4306ebabb5065331877508):
```
=== RUN TestClusterID
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/b0a4fc4c280ad60b26568d64fcb64d2b/logTestClusterID1921273411
test_log_scope.go:80: use -show-logs to present logs inline
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestClusterID.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
sql/tests: TestClusterID failed - sql/tests.TestClusterID [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4481587&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4481587&tab=artifacts#/) on master @ [bf13dc6061ab080c8c4306ebabb5065331877508](https://github.com/cockroachdb/cockroach/commits/bf13dc6061ab080c8c4306ebabb5065331877508):
```
=== RUN TestClusterID
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/b0a4fc4c280ad60b26568d64fcb64d2b/logTestClusterID1921273411
test_log_scope.go:80: use -show-logs to present logs inline
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestClusterID.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_process
|
sql tests testclusterid failed sql tests testclusterid with on master run testclusterid test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline help see also parameters in this failure tags bazel gss cc cockroachdb sql queries
| 0
|
8,646
| 11,789,605,292
|
IssuesEvent
|
2020-03-17 17:26:32
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Viral 'movement' terms problems
|
multi-species process
|
Hello,
@sylvainpoux reported that the following terms now cause taxon violations when annotated to viruses:
* GO:0039675 exit of virus from host cell nucleus through nuclear pore
* GO:0046802 exit of virus from host cell nucleus by nuclear egress
* GO:0046765 viral budding from nuclear membrane
* GO:0075732 viral penetration into host nucleus
This is probably due to the ongoing work in this area; I'll look at all the terms and edit as needed.
Thanks, Pascale
|
1.0
|
Viral 'movement' terms problems - Hello,
@sylvainpoux reported that the following terms now cause taxon violations when annotated to viruses:
* GO:0039675 exit of virus from host cell nucleus through nuclear pore
* GO:0046802 exit of virus from host cell nucleus by nuclear egress
* GO:0046765 viral budding from nuclear membrane
* GO:0075732 viral penetration into host nucleus
This is probably due to the ongoing work in this area; I'll look at all the terms and edit as needed.
Thanks, Pascale
|
process
|
viral movement terms problems hello sylvainpoux reported that the following terms now cause taxon violations when annotated to viruses go exit of virus from host cell nucleus through nuclear pore go exit of virus from host cell nucleus by nuclear egress go viral budding from nuclear membrane go viral penetration into host nucleus this is probably due to the ongoing work in this area i ll look at all the terms and edit as needed thanks pascale
| 1
|
14,407
| 17,459,790,339
|
IssuesEvent
|
2021-08-06 08:49:48
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Responsive issue > Set up account screen > Customer logo is not getting displayed
|
Bug P2 Participant manager Process: Fixed Process: Tested dev Process: Reopened
|
Responsive issue
AR: Set up account screen > Customer logo is not getting displayed
ER: Set up account screen > Customer logo should be displayed

|
3.0
|
[PM] Responsive issue > Set up account screen > Customer logo is not getting displayed - Responsive issue
AR: Set up account screen > Customer logo is not getting displayed
ER: Set up account screen > Customer logo should be displayed

|
process
|
responsive issue set up account screen customer logo is not getting displayed responsive issue ar set up account screen customer logo is not getting displayed er set up account screen customer logo should be displayed
| 1
|
82,107
| 32,003,537,948
|
IssuesEvent
|
2023-09-21 13:38:24
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Avatar placeholder in the 'add existing space' modal is not centered
|
T-Defect X-Regression S-Tolerable A-Spaces A-Avatar O-Occasional
|
### Steps to reproduce
1. Go to a space that you manage
2. Try to add an existing space to it
### Outcome
#### What did you expect?
The letters used as avatar placeholders should be centered
#### What happened instead?
In this modal, it's not:

### Operating system
NixOS unstable
### Browser information
Firefox 117.0
### URL for webapp
develop.element.io
### Application version
Element version: 14746004b56a-react-b1f455eb2de1-js-c7827d971cc8 Olm version: 3.2.14
### Homeserver
Not relevant
### Will you send logs?
No
|
1.0
|
Avatar placeholder in the 'add existing space' modal is not centered - ### Steps to reproduce
1. Go to a space that you manage
2. Try to add an existing space to it
### Outcome
#### What did you expect?
The letters used as avatar placeholders should be centered
#### What happened instead?
In this modal, it's not:

### Operating system
NixOS unstable
### Browser information
Firefox 117.0
### URL for webapp
develop.element.io
### Application version
Element version: 14746004b56a-react-b1f455eb2de1-js-c7827d971cc8 Olm version: 3.2.14
### Homeserver
Not relevant
### Will you send logs?
No
|
non_process
|
avatar placeholder in the add existing space modal is not centered steps to reproduce go to a space that you manage try to add an existing space to it outcome what did you expect the letters used as avatar placeholders should be centered what happened instead in this modal it s not operating system nixos unstable browser information firefox url for webapp develop element io application version element version react js olm version homeserver not relevant will you send logs no
| 0
|
21,534
| 29,829,286,167
|
IssuesEvent
|
2023-06-18 03:51:40
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Fedora 38: failed build
|
type: bug type: support / not a bug (process) team-OSS
|
### Description of the bug:
failed build on Fedora 38 aarch
```
[root@sbc-stage-a0 SPECS]# cat /etc/os-release
NAME="Fedora Linux"
VERSION="38 (Container Image Prerelease)"
ID=fedora
VERSION_ID=38
VERSION_CODENAME=""
PLATFORM_ID="platform:f38"
PRETTY_NAME="Fedora Linux 38 (Container Image Prerelease)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:38"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=38
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=38
SUPPORT_END=2024-05-14
VARIANT="Container Image"
VARIANT_ID=container
```
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
I use RPM packaing and cannot provide minimal example.
### Which operating system are you running Bazel on?
Fedora 38
### What is the output of `bazel info release`?
building from scratch
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
NA
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
```text
I compiling 6.1.1 release
```
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
Build error message attached
[build-error.txt](https://github.com/bazelbuild/bazel/files/11145430/build-error.txt)
|
1.0
|
Fedora 38: failed build - ### Description of the bug:
failed build on Fedora 38 aarch
```
[root@sbc-stage-a0 SPECS]# cat /etc/os-release
NAME="Fedora Linux"
VERSION="38 (Container Image Prerelease)"
ID=fedora
VERSION_ID=38
VERSION_CODENAME=""
PLATFORM_ID="platform:f38"
PRETTY_NAME="Fedora Linux 38 (Container Image Prerelease)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:38"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=38
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=38
SUPPORT_END=2024-05-14
VARIANT="Container Image"
VARIANT_ID=container
```
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
I use RPM packaing and cannot provide minimal example.
### Which operating system are you running Bazel on?
Fedora 38
### What is the output of `bazel info release`?
building from scratch
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
NA
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
```text
I compiling 6.1.1 release
```
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
Build error message attached
[build-error.txt](https://github.com/bazelbuild/bazel/files/11145430/build-error.txt)
|
process
|
fedora failed build description of the bug failed build on fedora aarch cat etc os release name fedora linux version container image prerelease id fedora version id version codename platform id platform pretty name fedora linux container image prerelease ansi color logo fedora logo icon cpe name cpe o fedoraproject fedora default hostname fedora home url documentation url support url bug report url redhat bugzilla product fedora redhat bugzilla product version redhat support product fedora redhat support product version support end variant container image variant id container what s the simplest easiest way to reproduce this bug please provide a minimal example if possible i use rpm packaing and cannot provide minimal example which operating system are you running bazel on fedora what is the output of bazel info release building from scratch if bazel info release returns development version or non git tell us how you built bazel na what s the output of git remote get url origin git rev parse master git rev parse head text i compiling release have you found anything relevant by searching the web no response any other information logs or outputs that you want to share build error message attached
| 1
|
129,890
| 10,590,566,034
|
IssuesEvent
|
2019-10-09 09:02:01
|
Vachok/ftpplus
|
https://api.github.com/repos/Vachok/ftpplus
|
closed
|
testWork [D336]
|
Medium TestQuality bug resolution_Wont Do
|
Execute DBMessengerTest::testWork**testWork**
*DBMessengerTest*
*did not expect to find [true] but found [false]*
*java.lang.AssertionError*
|
1.0
|
testWork [D336] - Execute DBMessengerTest::testWork**testWork**
*DBMessengerTest*
*did not expect to find [true] but found [false]*
*java.lang.AssertionError*
|
non_process
|
testwork execute dbmessengertest testwork testwork dbmessengertest did not expect to find but found java lang assertionerror
| 0
|
15,700
| 19,848,290,193
|
IssuesEvent
|
2022-01-21 09:24:33
|
ooi-data/CE06ISSM-RID16-05-PCO2WB000-recovered_host-pco2w_abc_dcl_instrument_recovered
|
https://api.github.com/repos/ooi-data/CE06ISSM-RID16-05-PCO2WB000-recovered_host-pco2w_abc_dcl_instrument_recovered
|
opened
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:24:32.476429.
## Details
Flow name: `CE06ISSM-RID16-05-PCO2WB000-recovered_host-pco2w_abc_dcl_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:24:32.476429.
## Details
Flow name: `CE06ISSM-RID16-05-PCO2WB000-recovered_host-pco2w_abc_dcl_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered host abc dcl instrument recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask 
array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
| 1
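The ValueError at the bottom of this record's traceback comes from zarr's `_get_selection`, where `zip(*indexer)` is unpacked into three names; when the indexer yields no chunks at all, plain Python raises exactly this message. A minimal sketch of the mechanism (plain Python, no zarr required; the `unpack_indexer` helper name is mine, not zarr's):

```python
def unpack_indexer(indexer):
    # Mirrors the zarr line that raised in the traceback above:
    # lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
    lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
    return lchunk_coords, lchunk_selection, lout_selection

# A non-empty indexer entry (chunk coordinate, chunk slice, output slice)
# unpacks into the three expected tuples without issue.
ok = unpack_indexer([((0,), (slice(0, 5),), (slice(0, 5),))])

# An empty indexer (e.g. a zero-length append selection) reproduces the error,
# because zip(*[]) is an empty iterator and cannot fill three targets.
try:
    unpack_indexer([])
except ValueError as err:
    print(err)  # not enough values to unpack (expected 3, got 0)
```

This suggests the failing run handed zarr a selection covering zero chunks, e.g. appending an empty variable, rather than a corruption inside zarr itself.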
|
418,290
| 12,195,822,269
|
IssuesEvent
|
2020-04-29 18:01:17
|
fagiodarkie/exaltered-plusplus
|
https://api.github.com/repos/fagiodarkie/exaltered-plusplus
|
opened
|
Ammunitions management
|
low-priority
|
Create Ammunition projects, and make it editable the same way as weapons.
|
1.0
|
Ammunitions management - Create Ammunition projects, and make it editable the same way as weapons.
|
non_process
|
ammunitions management create ammunition projects and make it editable the same way as weapons
| 0
|
809,782
| 30,210,150,945
|
IssuesEvent
|
2023-07-05 12:16:23
|
latteart-org/latteart
|
https://api.github.com/repos/latteart-org/latteart
|
closed
|
The link wording in the download dialog is inconsistent
|
Type: Bug Priority: Could Effort: 1 Ver: develop
|
The wording of the links in the following download dialogs (and others) is inconsistent.
* When outputting screenshots
* The diff file when comparing replays
* When outputting test scripts
## Expected result
Unify all of them to "Download".
|
1.0
|
The link wording in the download dialog is inconsistent - The wording of the links in the following download dialogs (and others) is inconsistent.
* When outputting screenshots
* The diff file when comparing replays
* When outputting test scripts
## Expected result
Unify all of them to "Download".
|
non_process
|
the link wording in the download dialog is inconsistent the wording of the links in the following download dialogs and others is inconsistent when outputting screenshots the diff file when comparing replays when outputting test scripts expected result unify all of them to download
| 0
|
40,282
| 2,868,372,356
|
IssuesEvent
|
2015-06-05 18:27:07
|
GiraffaFS/giraffa
|
https://api.github.com/repos/GiraffaFS/giraffa
|
closed
|
Increase test case coverage for TestGiraffaFSContract.
|
enhancement Priority-Medium
|
A few tests in `TestGiraffaFSContract` are overridden as no-op to avoid the test failure. Some of them can in fact be fixed by adjusting the responses to RPC calls in Giraffa.
Eventually `TestGiraffaFSContract` should pass for Giraffa entirely.
|
1.0
|
Increase test case coverage for TestGiraffaFSContract. - A few tests in `TestGiraffaFSContract` are overridden as no-op to avoid the test failure. Some of them can in fact be fixed by adjusting the responses to RPC calls in Giraffa.
Eventually `TestGiraffaFSContract` should pass for Giraffa entirely.
|
non_process
|
increase test case coverage for testgiraffafscontract a few tests in testgiraffafscontract are overridden as no op to avoid the test failure some of them can in fact be fixed by adjusting the responses to rpc calls in giraffa eventually testgiraffafscontract should pass for giraffa entirely
| 0
|
75,959
| 14,542,440,322
|
IssuesEvent
|
2020-12-15 15:43:02
|
michaeljones/breathe
|
https://api.github.com/repos/michaeljones/breathe
|
closed
|
What is the best way to resolve the Warning re: multiple matches for a function?
|
bug code
|
`WARNING: doxygenfunction: Unable to resolve multiple matches for function`
|
1.0
|
What is the best way to resolve the Warning re: multiple matches for a function? - `WARNING: doxygenfunction: Unable to resolve multiple matches for function`
|
non_process
|
what is the best way to resolve the warning re multiple matches for a function warning doxygenfunction unable to resolve multiple matches for function
| 0
|
22,413
| 31,142,293,703
|
IssuesEvent
|
2023-08-16 01:44:55
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Restore confidence in CI
|
Epic process: flaky test topic: flake ❄️ stage: flake stale
|
### About
This epic serves to track the issues arising from the [Tech Brief: Restore Confidence in CI](https://github.com/cypress-io/prod-eng-docs/blob/main/cypress-app/restore-confidence-in-ci.md). Specifically, it will track the one-time action items as specified in the [proposal](https://github.com/cypress-io/prod-eng-docs/blob/main/cypress-app/restore-confidence-in-ci.md#proposal):
```
Take one-time action to:
- Disable and log issues for all flaky tests reported by CircleCI.
- Encapsulate all logged issues within a shared epic specifically created for this effort.
- Comment out flaky Percy snapshots in specs, linking to GH issue with a TODO to incrementally restore them, so that Percy is green unless UI changes are made.
```
### Examples
Please use the [Flaky test issue template](https://github.com/cypress-io/cypress/issues/new?assignees=&labels=topic%3A+flake+%E2%9D%84%EF%B8%8F%2Cstage%3A+fire+watch&template=4-flaky-test.yml&title=Flaky+test%3A+) for logging new issues
- Example of [issue logged for flaky test](https://github.com/cypress-io/cypress/issues/23203)
|
1.0
|
Restore confidence in CI - ### About
This epic serves to track the issues arising from the [Tech Brief: Restore Confidence in CI](https://github.com/cypress-io/prod-eng-docs/blob/main/cypress-app/restore-confidence-in-ci.md). Specifically, it will track the one-time action items as specified in the [proposal](https://github.com/cypress-io/prod-eng-docs/blob/main/cypress-app/restore-confidence-in-ci.md#proposal):
```
Take one-time action to:
- Disable and log issues for all flaky tests reported by CircleCI.
- Encapsulate all logged issues within a shared epic specifically created for this effort.
- Comment out flaky Percy snapshots in specs, linking to GH issue with a TODO to incrementally restore them, so that Percy is green unless UI changes are made.
```
### Examples
Please use the [Flaky test issue template](https://github.com/cypress-io/cypress/issues/new?assignees=&labels=topic%3A+flake+%E2%9D%84%EF%B8%8F%2Cstage%3A+fire+watch&template=4-flaky-test.yml&title=Flaky+test%3A+) for logging new issues
- Example of [issue logged for flaky test](https://github.com/cypress-io/cypress/issues/23203)
|
process
|
restore confidence in ci about this epic serves to track the issues arising from the specifically it will track the one time action items as specified in the take one time action to disable and log issues for all flaky tests reported by circleci encapsulate all logged issues within a shared epic specifically created for this effort comment out flaky percy snapshots in specs linking to gh issue with a todo to incrementally restore them so that percy is green unless ui changes are made examples please use the for logging new issues example of
| 1
|
18,894
| 24,833,469,025
|
IssuesEvent
|
2022-10-26 06:49:29
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
[Feature request]Add comment grammar to implement "src" file reference feature of ".mpx"
|
processing
|
For example, in order to reuse the common style rules without inflating the bundle size, in wxss file:
```css
/*
@mpx-import('path/to/file.less')
*/
```
which is equivalent to:
```vue
<style src="path/to/file.less"/>
```
|
1.0
|
[Feature request]Add comment grammar to implement "src" file reference feature of ".mpx" - For example, in order to reuse the common style rules without inflating the bundle size, in wxss file:
```css
/*
@mpx-import('path/to/file.less')
*/
```
which is equivalent to:
```vue
<style src="path/to/file.less"/>
```
|
process
|
add comment grammar to implement src file reference feature of mpx for example in order to reuse the common style rules without inflating the bundle size in wxss file css mpx import path to file less which is equivalent to vue
| 1
|
15,258
| 19,190,521,372
|
IssuesEvent
|
2021-12-05 22:38:58
|
km4ack/pi-build
|
https://api.github.com/repos/km4ack/pi-build
|
closed
|
TypeError: Couldn't find foreign struct converter for 'cairo.Context'
|
in process Bug-Minor
|
This error is seen when running xgps. To get rid of the error,
sudo apt-get install -y python-gi-cairo
Thanks to Dave, KD7UM, for pointing this out!
|
1.0
|
TypeError: Couldn't find foreign struct converter for 'cairo.Context' - This error is seen when running xgps. To get rid of the error,
sudo apt-get install -y python-gi-cairo
Thanks to Dave, KD7UM, for pointing this out!
|
process
|
typeerror couldn t find foreign struct converter for cairo context this error is seen when running xgps to get rid of the error sudo apt get install y python gi cairo thanks to dave for pointing this out
| 1
|
10,358
| 13,182,015,362
|
IssuesEvent
|
2020-08-12 15:08:26
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Better (detailed) description of custom conditions
|
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
|
I think that it is missing the section on available conditions and states of the task that can be used in condition. I'm looking for something that will resemble the built-in "Even if a previous task has failed, even if the build was canceled" condition, but instead of canceled part I need to test for "timeout". I think that section showing what task properties are available would help me to solve it e.g. `or(failed(), timeout())` :)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3f151218-9a11-0078-e038-f96198a76143
* Version Independent ID: 09c4d032-62f3-d97c-79d7-6fbfd89910e9
* Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=classic)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/conditions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Better (detailed) description of custom conditions - I think that it is missing the section on available conditions and states of the task that can be used in condition. I'm looking for something that will resemble the built-in "Even if a previous task has failed, even if the build was canceled" condition, but instead of canceled part I need to test for "timeout". I think that section showing what task properties are available would help me to solve it e.g. `or(failed(), timeout())` :)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3f151218-9a11-0078-e038-f96198a76143
* Version Independent ID: 09c4d032-62f3-d97c-79d7-6fbfd89910e9
* Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=classic)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/conditions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
better detailed description of custom conditions i think that it is missing the section on available conditions and states of the task that can be used in condition i m looking for something that will resemble the built in even if a previous task has failed even if the build was canceled condition but instead of canceled part i need to test for timeout i think that section showing what task properties are available would help me to solve it e g or failed timeout document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
7,513
| 10,594,588,833
|
IssuesEvent
|
2019-10-09 17:06:12
|
pelias/pelias
|
https://api.github.com/repos/pelias/pelias
|
closed
|
Minimize partial matches when exact matches are found
|
processed
|
We have an `autocomplete` acceptance test that needs to be fixed.
[/v1/autocomplete?text=victoria](http://pelias.github.io/compare/#/v1/autocomplete%3Ftext=victoria)

Having the partial matches in the top 5 results when there are exact matches in subsequent results seems incorrect.
|
1.0
|
Minimize partial matches when exact matches are found - We have an `autocomplete` acceptance test that needs to be fixed.
[/v1/autocomplete?text=victoria](http://pelias.github.io/compare/#/v1/autocomplete%3Ftext=victoria)

Having the partial matches in the top 5 results when there are exact matches in subsequent results seems incorrect.
|
process
|
minimize partial matches when exact matches are found we have an autocomplete acceptance test that needs to be fixed having the partial matches in the top results when there are exact matches in subsequent results seems incorrect
| 1
|
21,696
| 30,192,948,800
|
IssuesEvent
|
2023-07-04 17:10:45
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] [Bug] Error when changing aggregation column on previous stages
|
.Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
MLv2 throws an error when trying to update an aggregation column on previous stages:
```
Uncaught Error: :all is not ISeqable
at Object.cljs$core$seq [as seq] (core.cljs:1253:1)
at core.cljs:4447:1
at Object.sval (core.cljs:3462:1)
at Object.cljs$core$ISeqable$_seq$arity$1 (core.cljs:3519:1)
at Object.cljs$core$seq [as seq] (core.cljs:1236:1)
at core.cljs:3826:1
at Object.sval (core.cljs:3462:1)
at Object.cljs$core$ISeqable$_seq$arity$1 (core.cljs:3523:1)
at Object.cljs$core$seq [as seq] (core.cljs:1236:1)
at Object.cljs$core$INext$_next$arity$1 (core.cljs:3294:1)
```
### To Reproduce
Follow the steps from [this video](https://github.com/metabase/metabase/pull/31530#pullrequestreview-1482536703)
|
1.0
|
[MLv2] [Bug] Error when changing aggregation column on previous stages - MLv2 throws an error when trying to update an aggregation column on previous stages:
```
Uncaught Error: :all is not ISeqable
at Object.cljs$core$seq [as seq] (core.cljs:1253:1)
at core.cljs:4447:1
at Object.sval (core.cljs:3462:1)
at Object.cljs$core$ISeqable$_seq$arity$1 (core.cljs:3519:1)
at Object.cljs$core$seq [as seq] (core.cljs:1236:1)
at core.cljs:3826:1
at Object.sval (core.cljs:3462:1)
at Object.cljs$core$ISeqable$_seq$arity$1 (core.cljs:3523:1)
at Object.cljs$core$seq [as seq] (core.cljs:1236:1)
at Object.cljs$core$INext$_next$arity$1 (core.cljs:3294:1)
```
### To Reproduce
Follow the steps from [this video](https://github.com/metabase/metabase/pull/31530#pullrequestreview-1482536703)
|
process
|
error when changing aggregation column on previous stages throws an error when trying to update an aggregation column on previous stages uncaught error all is not iseqable at object cljs core seq core cljs at core cljs at object sval core cljs at object cljs core iseqable seq arity core cljs at object cljs core seq core cljs at core cljs at object sval core cljs at object cljs core iseqable seq arity core cljs at object cljs core seq core cljs at object cljs core inext next arity core cljs to reproduce follow the steps from
| 1
|