Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
169,044
| 13,111,653,104
|
IssuesEvent
|
2020-08-04 23:40:48
|
rust-lang/cargo
|
https://api.github.com/repos/rust-lang/cargo
|
closed
|
close_output test is randomly failing
|
A-testing-cargo-itself C-bug
|
~~TLDR: Should we run some flaky tests single-threaded?~~ (Nope)
The `build::close_output` test is randomly failing on CI. There were some fixes applied in #8286 in May 26, but there appears to be more recent failures:
https://github.com/rust-lang/rust/pull/74312#issuecomment-657964827
https://github.com/rust-lang/rust/pull/74408#issuecomment-659603027
https://github.com/rust-lang/rust/pull/74908#issuecomment-665912840
https://github.com/rust-lang/rust/pull/74923 (https://github.com/rust-lang-ci/rust/runs/924743383)
The failure is:
```
---- build::close_output stdout ----
thread 'build::close_output' panicked at 'assertion failed: !status.success()', src/tools/cargo/tests/testsuite/build.rs:5016:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
I am uncertain how this is possible, so maybe someone could double check that what I wrote makes sense. [The test](https://github.com/rust-lang/cargo/blob/974eb438da8ced6e3becda2bbf63d9b643eacdeb/tests/testsuite/build.rs#L4928-L5044) covers what happens when stdout or stderr is closed in the middle of the build. It uses a proc-macro as a sync point so that the test can know when compilation has started, and to emit data to stdout or stderr during the build. It should follow this sequence:
1. Starts a TCP server.
2. Starts the build.
3. The proc-macro starts building.
4. The proc-macro connects to the TCP server, and waits for the test to tell it it is OK to continue.
5. Test receives connection from proc-macro
6. Test **closes stdout**.
7. Test tells proc-macro to continue.
8. Proc-macro starts spewing stuff to stdout to cargo, which through the internal job queue ends up attempting to [write to stdout](https://github.com/rust-lang/cargo/blob/974eb438da8ced6e3becda2bbf63d9b643eacdeb/src/cargo/core/compiler/job_queue.rs#L501-L503). Since stdout was closed in step 6, this should fail.
9. Cargo should exit with an error after rustc is done.
For some reason, at step 8, it successfully writes to stdout, and step 9 returns success.
I've been doing a few tests, and it gets worse based on the number of concurrent tests running. ~~When run single threaded, I cannot get it to fail (even with the system under heavy load).~~
I'm feeling this is somewhat related to #7858. Is there still a race condition, even with atomic O_CLOEXEC? That is, AIUI, the file descriptors are still inherited across `fork`, and only closed when `exec` is called. If so, then there is a small window where the file descriptors have extra duplicates which prevent them from fully closing immediately.
~~I'm thinking a simple solution would be to isolate these tests into a separate test executable which runs with `--test-threads=1` (or maybe a simple no-harness test?). This should prevent concurrent tests from interfering with one another. The downside is that this makes it more cumbersome to run all of the test suite.~~ (Testing shows this probably won't fix this test.)
|
1.0
|
close_output test is randomly failing - ~~TLDR: Should we run some flaky tests single-threaded?~~ (Nope)
The `build::close_output` test is randomly failing on CI. There were some fixes applied in #8286 in May 26, but there appears to be more recent failures:
https://github.com/rust-lang/rust/pull/74312#issuecomment-657964827
https://github.com/rust-lang/rust/pull/74408#issuecomment-659603027
https://github.com/rust-lang/rust/pull/74908#issuecomment-665912840
https://github.com/rust-lang/rust/pull/74923 (https://github.com/rust-lang-ci/rust/runs/924743383)
The failure is:
```
---- build::close_output stdout ----
thread 'build::close_output' panicked at 'assertion failed: !status.success()', src/tools/cargo/tests/testsuite/build.rs:5016:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
I am uncertain how this is possible, so maybe someone could double check that what I wrote makes sense. [The test](https://github.com/rust-lang/cargo/blob/974eb438da8ced6e3becda2bbf63d9b643eacdeb/tests/testsuite/build.rs#L4928-L5044) covers what happens when stdout or stderr is closed in the middle of the build. It uses a proc-macro as a sync point so that the test can know when compilation has started, and to emit data to stdout or stderr during the build. It should follow this sequence:
1. Starts a TCP server.
2. Starts the build.
3. The proc-macro starts building.
4. The proc-macro connects to the TCP server, and waits for the test to tell it it is OK to continue.
5. Test receives connection from proc-macro
6. Test **closes stdout**.
7. Test tells proc-macro to continue.
8. Proc-macro starts spewing stuff to stdout to cargo, which through the internal job queue ends up attempting to [write to stdout](https://github.com/rust-lang/cargo/blob/974eb438da8ced6e3becda2bbf63d9b643eacdeb/src/cargo/core/compiler/job_queue.rs#L501-L503). Since stdout was closed in step 6, this should fail.
9. Cargo should exit with an error after rustc is done.
For some reason, at step 8, it successfully writes to stdout, and step 9 returns success.
I've been doing a few tests, and it gets worse based on the number of concurrent tests running. ~~When run single threaded, I cannot get it to fail (even with the system under heavy load).~~
I'm feeling this is somewhat related to #7858. Is there still a race condition, even with atomic O_CLOEXEC? That is, AIUI, the file descriptors are still inherited across `fork`, and only closed when `exec` is called. If so, then there is a small window where the file descriptors have extra duplicates which prevent them from fully closing immediately.
~~I'm thinking a simple solution would be to isolate these tests into a separate test executable which runs with `--test-threads=1` (or maybe a simple no-harness test?). This should prevent concurrent tests from interfering with one another. The downside is that this makes it more cumbersome to run all of the test suite.~~ (Testing shows this probably won't fix this test.)
|
non_process
|
close output test is randomly failing tldr should we run some flaky tests single threaded nope the build close output test is randomly failing on ci there were some fixes applied in in may but there appears to be more recent failures the failure is build close output stdout thread build close output panicked at assertion failed status success src tools cargo tests testsuite build rs note run with rust backtrace environment variable to display a backtrace i am uncertain how this is possible so maybe someone could double check that what i wrote makes sense covers what happens when stdout or stderr is closed in the middle of the build it uses a proc macro as a sync point so that the test can know when compilation has started and to emit data to stdout or stderr during the build it should follow this sequence starts a tcp server starts the build the proc macro starts building the proc macro connects to the tcp server and waits for the test to tell it it is ok to continue test receives connection from proc macro test closes stdout test tells proc macro to continue proc macro starts spewing stuff to stdout to cargo which through the internal job queue ends up attempting to since stdout was closed in step this should fail cargo should exit with an error after rustc is done for some reason at step it successfully writes to stdout and step returns success i ve been doing a few tests and it gets worse based on the number of concurrent tests running when run single threaded i cannot get it to fail even with the system under heavy load i m feeling this is somewhat related to is there still a race condition even with atomic o cloexec that is aiui the file descriptors are still inherited across fork and only closed when exec is called if so then there is a small window where the file descriptors have extra duplicates which prevent them from fully closing immediately i m thinking a simple solution would be to isolate these tests into a separate test executable which runs with test threads or maybe a simple no harness test this should prevent concurrent tests from interfering with one another the downside is that this makes it more cumbersome to run all of the test suite testing shows this probably won t fix this test
| 0
|
17,124
| 22,643,137,813
|
IssuesEvent
|
2022-07-01 05:42:05
|
pyanodon/pybugreports
|
https://api.github.com/repos/pyanodon/pybugreports
|
closed
|
[ShortPy] Missing localisation string - fluid-processing-machines-1
|
bug confirmed locale mod:pycoalprocessing mod:pyhightech
|
### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [X] pycoalprocessing
- [X] pyfusionenergy
- [ ] pyhightech
- [X] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
GNU/Linux
### What kind of issue is this?
- [ ] Compatibility
- [X] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [ ] Pypostprocessing failure
- [ ] Other
### What is the problem?
Unknown key: "technology-hame.fluid-processing-machines" 1
### Steps to reproduce
_No response_
### Additional context

Confirmed working with all Py mods active by another user, so only an issue with "Short" Py.

### Log file
[factorio-current.log](https://github.com/pyanodon/pybugreports/files/9025165/factorio-current.log)
_No response_
|
1.0
|
[ShortPy] Missing localisation string - fluid-processing-machines-1 - ### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [X] pycoalprocessing
- [X] pyfusionenergy
- [ ] pyhightech
- [X] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
GNU/Linux
### What kind of issue is this?
- [ ] Compatibility
- [X] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [ ] Pypostprocessing failure
- [ ] Other
### What is the problem?
Unknown key: "technology-hame.fluid-processing-machines" 1
### Steps to reproduce
_No response_
### Additional context

Confirmed working with all Py mods active by another user, so only an issue with "Short" Py.

### Log file
[factorio-current.log](https://github.com/pyanodon/pybugreports/files/9025165/factorio-current.log)
_No response_
|
process
|
missing localisation string fluid processing machines mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system gnu linux what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem unknown key technology hame fluid processing machines steps to reproduce no response additional context confirmed working with all py mods active by another user so only an issue with short py log file no response
| 1
|
71,526
| 8,663,573,624
|
IssuesEvent
|
2018-11-28 17:41:34
|
byucs340ta/Fall2018
|
https://api.github.com/repos/byucs340ta/Fall2018
|
closed
|
Cannot log back in after pressing back
|
P4: Aesthetic or Design Flaw Team 2
|
When you hit the back button it logs you out and you can never return until closing the app
|
1.0
|
Cannot log back in after pressing back - When you hit the back button it logs you out and you can never return until closing the app
|
non_process
|
cannot log back in after pressing back when you hit the back button it logs you out and you can never return until closing the app
| 0
|
310,215
| 23,326,261,803
|
IssuesEvent
|
2022-08-08 21:34:45
|
arviz-devs/arviz
|
https://api.github.com/repos/arviz-devs/arviz
|
closed
|
Remove deprecated arguments from `plot_pair()`
|
Beginner User Documentation
|
`plot_kwargs` is an argument of `plot_pair()` yet it is not described in the [documentation](https://python.arviz.org/en/latest/api/generated/arviz.plot_pair.html#arviz.plot_pair) for this function.
**To Reproduce**
Go to the link and search for the description of `plot_kwargs`
**Expected behavior**
A complete docstring were `plot_kwargs` usage and behaviour is described.
|
1.0
|
Remove deprecated arguments from `plot_pair()` - `plot_kwargs` is an argument of `plot_pair()` yet it is not described in the [documentation](https://python.arviz.org/en/latest/api/generated/arviz.plot_pair.html#arviz.plot_pair) for this function.
**To Reproduce**
Go to the link and search for the description of `plot_kwargs`
**Expected behavior**
A complete docstring were `plot_kwargs` usage and behaviour is described.
|
non_process
|
remove deprecated arguments from plot pair plot kwargs is an argument of plot pair yet it is not described in the for this function to reproduce go to the link and search for the description of plot kwargs expected behavior a complete docstring were plot kwargs usage and behaviour is described
| 0
|
63
| 2,522,117,576
|
IssuesEvent
|
2015-01-19 19:30:47
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
closed
|
Added CREW Test case for ...
|
process test-suite
|
```
gremlin> g.V().has('name','gremlin').inE('uses').
order().by('skill',incr).as('a').
outV().as('b').
select().by('skill').by('name') // rank the users of gremlin by their skill level
==>[a:3, b:matthias]
==>[a:4, b:marko]
==>[a:5, b:stephen]
==>[a:5, b:daniel]
```
|
1.0
|
Added CREW Test case for ... - ```
gremlin> g.V().has('name','gremlin').inE('uses').
order().by('skill',incr).as('a').
outV().as('b').
select().by('skill').by('name') // rank the users of gremlin by their skill level
==>[a:3, b:matthias]
==>[a:4, b:marko]
==>[a:5, b:stephen]
==>[a:5, b:daniel]
```
|
process
|
added crew test case for gremlin g v has name gremlin ine uses order by skill incr as a outv as b select by skill by name rank the users of gremlin by their skill level
| 1
|
11,377
| 7,525,618,820
|
IssuesEvent
|
2018-04-13 11:14:33
|
master-keying/mks
|
https://api.github.com/repos/master-keying/mks
|
closed
|
satpiler::init can be accidentally quadratic
|
performance bug question
|
The behaviour of data resizing in `satpiler::init`, shown in the code below, can be easily made accidentally quadratic.
```cpp
if (idx >= data.size())
data.resize(1+idx);
data[idx] = lit;
```
This is similar to what `suggest::key` did in issue #8
|
True
|
satpiler::init can be accidentally quadratic - The behaviour of data resizing in `satpiler::init`, shown in the code below, can be easily made accidentally quadratic.
```cpp
if (idx >= data.size())
data.resize(1+idx);
data[idx] = lit;
```
This is similar to what `suggest::key` did in issue #8
|
non_process
|
satpiler init can be accidentally quadratic the behaviour of data resizing in satpiler init shown in the code below can be easily made accidentally quadratic cpp if idx data size data resize idx data lit this is similar to what suggest key did in issue
| 0
|
50,097
| 21,012,470,426
|
IssuesEvent
|
2022-03-30 08:01:09
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
az aks install-cli fails
|
Service Attention question AKS needs-author-feedback no-recent-activity
|
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
az aks install-cli --install-location=~/.azure-kubectl/kubectl.exe
**Errors:**
```
[Errno 13] Permission denied: '/usr/local/bin/kubelogin'
Traceback (most recent call last):
az/lib/python3.6/shutil.py, ln 550, in move
os.rename(src, real_dst)
PermissionError: [Errno 13] Permission denied: '/tmp/tmpk4r6ywl2/bin/linux_amd64/kubelogin' -> '/usr/local/bin/kubelogin'
...
az/lib/python3.6/shutil.py, ln 121, in copyfile
with open(dst, 'wb') as fdst:
PermissionError: [Errno 13] Permission denied: '/usr/local/bin/kubelogin'
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az aks install-cli --install-location={}`
## Expected Behavior
## Environment Summary
```
Linux-4.15.0-1100-azure-x86_64-with-debian-10.2 (Cloud Shell)
Python 3.6.10
Installer: DEB
azure-cli 2.16.0
Extensions:
ai-examples 0.2.5
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
|
1.0
|
az aks install-cli fails -
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
az aks install-cli --install-location=~/.azure-kubectl/kubectl.exe
**Errors:**
```
[Errno 13] Permission denied: '/usr/local/bin/kubelogin'
Traceback (most recent call last):
az/lib/python3.6/shutil.py, ln 550, in move
os.rename(src, real_dst)
PermissionError: [Errno 13] Permission denied: '/tmp/tmpk4r6ywl2/bin/linux_amd64/kubelogin' -> '/usr/local/bin/kubelogin'
...
az/lib/python3.6/shutil.py, ln 121, in copyfile
with open(dst, 'wb') as fdst:
PermissionError: [Errno 13] Permission denied: '/usr/local/bin/kubelogin'
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az aks install-cli --install-location={}`
## Expected Behavior
## Environment Summary
```
Linux-4.15.0-1100-azure-x86_64-with-debian-10.2 (Cloud Shell)
Python 3.6.10
Installer: DEB
azure-cli 2.16.0
Extensions:
ai-examples 0.2.5
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
|
non_process
|
az aks install cli fails this is autogenerated please review and update as needed describe the bug command name az aks install cli install location azure kubectl kubectl exe errors permission denied usr local bin kubelogin traceback most recent call last az lib shutil py ln in move os rename src real dst permissionerror permission denied tmp bin linux kubelogin usr local bin kubelogin az lib shutil py ln in copyfile with open dst wb as fdst permissionerror permission denied usr local bin kubelogin to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information put any pre requisite steps here az aks install cli install location expected behavior environment summary linux azure with debian cloud shell python installer deb azure cli extensions ai examples additional context
| 0
|
304,905
| 26,345,270,708
|
IssuesEvent
|
2023-01-10 21:25:33
|
USEPA/haztrak
|
https://api.github.com/repos/USEPA/haztrak
|
closed
|
RI Site Permissions serializer
|
good first issue django test
|
# 🚀 Feature Request
depends on #294
Add a [DRF model serializer](https://www.django-rest-framework.org/api-guide/serializers/#modelserializer)
for the RCRAInfo site permissions model.
Should be relatively straight forward as I don't think there's DRF methods we need to override on this one.
PR should include new test(s) along with new fixtures as well. Ping me here or open a draft PR if you need assistance with this part.
|
1.0
|
RI Site Permissions serializer - # 🚀 Feature Request
depends on #294
Add a [DRF model serializer](https://www.django-rest-framework.org/api-guide/serializers/#modelserializer)
for the RCRAInfo site permissions model.
Should be relatively straight forward as I don't think there's DRF methods we need to override on this one.
PR should include new test(s) along with new fixtures as well. Ping me here or open a draft PR if you need assistance with this part.
|
non_process
|
ri site permissions serializer 🚀 feature request depends on add a for the rcrainfo site permissions model should be relatively straight forward as i don t think there s drf methods we need to override on this one pr should include new test s along with new fixtures as well ping me here or open a draft pr if you need assistance with this part
| 0
|
561,500
| 16,618,253,818
|
IssuesEvent
|
2021-06-02 19:48:27
|
internetarchive/openlibrary
|
https://api.github.com/repos/internetarchive/openlibrary
|
opened
|
Last deploy broke cron jobs
|
Affects: Admin/Maintenance Lead: @mekarpeles Priority: 1 Type: Bug
|
More investigation needed
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
@mekarpeles
|
1.0
|
Last deploy broke cron jobs - More investigation needed
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
@mekarpeles
|
non_process
|
last deploy broke cron jobs more investigation needed proposal constraints related files stakeholders mekarpeles
| 0
|
10,303
| 13,153,229,555
|
IssuesEvent
|
2020-08-10 02:29:00
|
kubeflow/testing
|
https://api.github.com/repos/kubeflow/testing
|
closed
|
Remove GKE permissions to kubeflow-ci from kubeflow-testing serving account
|
area/engprod kind/feature kind/process lifecycle/frozen lifecycle/stale priority/p1
|
Argo workflows for E2E test run in the project kubeflow-ci.
If E2E tests deploy GCP infrastructure (e.g. clusters) they should do this in project: kubeflow-ci-deployment.
However, some tests are still deploying in kubeflow-ci.
We should update the tests and lock down the permissions on the service account assigned to the kubeflow-ci project.
|
1.0
|
Remove GKE permissions to kubeflow-ci from kubeflow-testing serving account - Argo workflows for E2E test run in the project kubeflow-ci.
If E2E tests deploy GCP infrastructure (e.g. clusters) they should do this in project: kubeflow-ci-deployment.
However, some tests are still deploying in kubeflow-ci.
We should update the tests and lock down the permissions on the service account assigned to the kubeflow-ci project.
|
process
|
remove gke permissions to kubeflow ci from kubeflow testing serving account argo workflows for test run in the project kubeflow ci if tests deploy gcp infrastructure e g clusters they should do this in project kubeflow ci deployment however some tests are still deploying in kubeflow ci we should update the tests and lock down the permissions on the service account assigned to the kubeflow ci project
| 1
|
429,451
| 30,055,045,621
|
IssuesEvent
|
2023-06-28 05:57:02
|
PLAIF-dev/sw_synthetic_rospkg
|
https://api.github.com/repos/PLAIF-dev/sw_synthetic_rospkg
|
closed
|
github pakcage를 docker 레지스트리로 활용하기 연구
|
documentation
|
github containers package를 사용하면, private registry로 사용할 수 있는 것을 확인
참고문서 : https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry
1. Public Access Token (PAT) 만들기
-write package 옵션을 넣어주도록 합시다
2. ghcr.io 계정으로 도커 로그인
3. 레지스트리에 이미지 commit & push
|
1.0
|
github pakcage를 docker 레지스트리로 활용하기 연구 - github containers package를 사용하면, private registry로 사용할 수 있는 것을 확인
참고문서 : https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry
1. Public Access Token (PAT) 만들기
-write package 옵션을 넣어주도록 합시다
2. ghcr.io 계정으로 도커 로그인
3. 레지스트리에 이미지 commit & push
|
non_process
|
github pakcage를 docker 레지스트리로 활용하기 연구 github containers package를 사용하면 private registry로 사용할 수 있는 것을 확인 참고문서 public access token pat 만들기 write package 옵션을 넣어주도록 합시다 ghcr io 계정으로 도커 로그인 레지스트리에 이미지 commit push
| 0
|
63,366
| 6,844,531,416
|
IssuesEvent
|
2017-11-13 02:09:46
|
neovim/neovim
|
https://api.github.com/repos/neovim/neovim
|
closed
|
Test failures in 0.2.1 from Debian's builds
|
tests
|
0.2.1 has been uploaded to Debian and there are various [build/test failures](https://buildd.debian.org/status/logs.php?pkg=neovim&ver=0.2.1-2&suite=sid). As I have time, I'll perform triage on the available porterboxes I have access to, but don't let that stop you from offering ideas/fixes. :)
Here's a breakdown of what's happened for tracking purposes, along with potentially relevant information.
---
### [armel](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=armel&ver=0.2.1-2&stamp=1510204415&raw=0)
* LuaJIT build
```
test/functional/api/highlight_spec.lua[0m @ [36m35[0m: [1mhighlight api nvim_get_hl_by_id[0m
...W7Xi/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: Expected objects to be the same.
Passed in:
(string) 'Invalid highlight id: 7671724'
Expected:
(string) 'Invalid highlight id: 30000'
stack traceback:
...W7Xi/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: in function <...W7Xi/neovim-0.2.1/test/functional/api/highlight_spec.lua:35>
```
---
### [armhf](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=armhf&ver=0.2.1-2&stamp=1510204523&raw=0)
* LuaJIT build
```
test/functional/api/highlight_spec.lua[0m @ [36m35[0m: [1mhighlight api nvim_get_hl_by_id[0m
...SLtd/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: Expected objects to be the same.
Passed in:
(string) 'Invalid highlight id: 6389676'
Expected:
(string) 'Invalid highlight id: 30000'
stack traceback:
...SLtd/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: in function <...SLtd/neovim-0.2.1/test/functional/api/highlight_spec.lua:35>
```
---
### [mips](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=mips&ver=0.2.1-2&stamp=1510204511&raw=0)
* LuaJIT build
```
test/functional/api/highlight_spec.lua[0m @ [36m35[0m: [1mhighlight api nvim_get_hl_by_id[0m
...GU7g/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: Expected objects to be the same.
Passed in:
(string) 'Invalid highlight id: 53'
Expected:
(string) 'Invalid highlight id: 30000'
stack traceback:
...GU7g/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: in function <...GU7g/neovim-0.2.1/test/functional/api/highlight_spec.lua:35>
```
---
### [mips64el](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=mips64el&ver=0.2.1-2&stamp=1510204554&raw=0)
* LuaJIT build
* mips64el support is still pretty new for LuaJIT, so I may switch this build back to Lua and see if that helps
This caused _tons_ of test failures. The example below is just the first one. I'm ignoring other test failures in this build for now, since it's unclear if they're related to this issue.
```
/usr/share/lua/5.1/busted/block.lua:22: attempt to index field 'env' (a nil value)
stack traceback:
/usr/bin/busted:3: in main chunk
```
---
### [mipsel](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=mipsel&ver=0.2.1-2&stamp=1510205124&raw=0)
* LuaJIT build
```
test/functional/api/highlight_spec.lua[0m @ [36m35[0m: [1mhighlight api nvim_get_hl_by_id[0m
...uMp6/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: Expected objects to be the same.
Passed in:
(string) 'Invalid highlight id: 53'
Expected:
(string) 'Invalid highlight id: 30000'
stack traceback:
...uMp6/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: in function <...uMp6/neovim-0.2.1/test/functional/api/highlight_spec.lua:35>
```
```
test/functional/terminal/cursor_spec.lua[0m @ [36m62[0m: [1mterminal cursor with number column is positioned correctly when focused[0m
./test/functional/ui/screen.lua:302: Row 2 did not match.
Expected:
|{7: 1 }tty ready |
|*{7: 2 }{1: } |
|{7: 3 } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|*{7: 2 }rows: 6, cols: 46 |
|{7: 3 }{1: } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...p6/neovim-0.2.1/test/functional/terminal/cursor_spec.lua:65: in function <...p6/neovim-0.2.1/test/functional/terminal/cursor_spec.lua:62>
```
```
test/functional/terminal/window_spec.lua[0m @ [36m16[0m: [1mterminal window with 'number' wraps text[0m
./test/functional/ui/screen.lua:302: Row 4 did not match.
Expected:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZ abcdefghijklmnopqrstuvwxyzABCDEFGHIJ|
|{7: 5 }KLMNOPQRSTUVWXYZrows: 6, cols: 41 |
|{7: 6 }{1: } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZrows: 6, cols: 41 |
|{7: 5 } abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMN|
|{7: 6 }OPQRSTUVWXYZ{1: } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...p6/neovim-0.2.1/test/functional/terminal/window_spec.lua:47: in function <...p6/neovim-0.2.1/test/functional/terminal/window_spec.lua:16>
```
---
### [ppc64el](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=ppc64el&ver=0.2.1-2&stamp=1510203863&raw=0)
* Lua build
```
test/functional/terminal/tui_spec.lua[0m @ [36m408[0m: [1mtui 't_Co' (terminal colors) no TERM uses 8 colors[0m
...O/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: bad argument #2 to 'format' (string expected, got nil)
stack traceback:
...O/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: in function 'assert_term_colors'
...O/neovim-0.2.1/test/functional/terminal/tui_spec.lua:409: in function <...O/neovim-0.2.1/test/functional/terminal/tui_spec.lua:408>
```
---
### [alpha](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=alpha&ver=0.2.1-2&stamp=1510205067&raw=0)
* Lua build
```
test/functional/terminal/cursor_spec.lua[0m @ [36m62[0m: [1mterminal cursor with number column is positioned correctly when focused[0m
./test/functional/ui/screen.lua:302: Row 2 did not match.
Expected:
|{7: 1 }tty ready |
|*{7: 2 }{1: } |
|{7: 3 } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|*{7: 2 }rows: 6, cols: 46 |
|{7: 3 }{1: } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:65: in function <...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:62>
```
```
test/functional/terminal/tui_spec.lua[0m @ [36m408[0m: [1mtui 't_Co' (terminal colors) no TERM uses 8 colors[0m
...j/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: bad argument #2 to 'format' (string expected, got nil)
stack traceback:
...j/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: in function 'assert_term_colors'
...j/neovim-0.2.1/test/functional/terminal/tui_spec.lua:409: in function <...j/neovim-0.2.1/test/functional/terminal/tui_spec.lua:408>
```
---
### [hppa](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=hppa&ver=0.2.1-2&stamp=1510211184&raw=0)
* Lua build
```
test/functional/api/highlight_spec.lua[0m @ [36m35[0m: [1mhighlight api nvim_get_hl_by_id[0m
.../neovim-0.2.1/test/functional/api/highlight_spec.lua:46: Expected objects to be the same.
Passed in:
(string) 'Invalid highlight id: -120576164'
Expected:
(string) 'Invalid highlight id: 30000'
stack traceback:
(tail call): ?
.../neovim-0.2.1/test/functional/api/highlight_spec.lua:46: in function <.../neovim-0.2.1/test/functional/api/highlight_spec.lua:35>
```
```
test/functional/terminal/cursor_spec.lua[0m @ [36m62[0m: [1mterminal cursor with number column is positioned correctly when focused[0m
./test/functional/ui/screen.lua:302: Row 2 did not match.
Expected:
|{7: 1 }tty ready |
|*{7: 2 }{1: } |
|{7: 3 } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|*{7: 2 }rows: 6, cols: 46 |
|{7: 3 }{1: } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:65: in function <...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:62>
```
```
test/functional/terminal/tui_spec.lua[0m @ [36m408[0m: [1mtui 't_Co' (terminal colors) no TERM uses 8 colors[0m
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: bad argument #2 to 'format' (string expected, got nil)
stack traceback:
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: in function 'assert_term_colors'
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:409: in function <...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:408>
```
```
test/functional/terminal/window_spec.lua[0m @ [36m16[0m: [1mterminal window with 'number' wraps text[0m
./test/functional/ui/screen.lua:302: Row 4 did not match.
Expected:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZ abcdefghijklmnopqrstuvwxyzABCDEFGHIJ|
|{7: 5 }KLMNOPQRSTUVWXYZrows: 6, cols: 41 |
|{7: 6 }{1: } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZrows: 6, cols: 41 |
|{7: 5 } abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMN|
|{7: 6 }OPQRSTUVWXYZ{1: } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...eovim-0.2.1/test/functional/terminal/window_spec.lua:47: in function <...eovim-0.2.1/test/functional/terminal/window_spec.lua:16>
```
---
### [powerpc](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=powerpc&ver=0.2.1-2&stamp=1510204003&raw=0)
* LuaJIT build
```
test/functional/terminal/cursor_spec.lua[0m @ [36m62[0m: [1mterminal cursor with number column is positioned correctly when focused[0m
./test/functional/ui/screen.lua:302: Row 2 did not match.
Expected:
|{7: 1 }tty ready |
|*{7: 2 }{1: } |
|{7: 3 } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|*{7: 2 }rows: 6, cols: 46 |
|{7: 3 }{1: } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...xu/neovim-0.2.1/test/functional/terminal/cursor_spec.lua:65: in function <...xu/neovim-0.2.1/test/functional/terminal/cursor_spec.lua:62>
```
---
### [sparc64](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=sparc64&ver=0.2.1-2&stamp=1510205989&raw=0)
* Lua build
```
test/functional/terminal/cursor_spec.lua[0m @ [36m62[0m: [1mterminal cursor with number column is positioned correctly when focused[0m
./test/functional/ui/screen.lua:302: Row 2 did not match.
Expected:
|{7: 1 }tty ready |
|*{7: 2 }{1: } |
|{7: 3 } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|*{7: 2 }rows: 6, cols: 46 |
|{7: 3 }{1: } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:65: in function <...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:62>
```
```
test/functional/terminal/tui_spec.lua[0m @ [36m408[0m: [1mtui 't_Co' (terminal colors) no TERM uses 8 colors[0m
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: bad argument #2 to 'format' (string expected, got nil)
stack traceback:
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: in function 'assert_term_colors'
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:409: in function <...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:408>
```
```
test/functional/terminal/window_spec.lua[0m @ [36m16[0m: [1mterminal window with 'number' wraps text[0m
./test/functional/ui/screen.lua:302: Row 4 did not match.
Expected:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZ abcdefghijklmnopqrstuvwxyzABCDEFGHIJ|
|{7: 5 }KLMNOPQRSTUVWXYZrows: 6, cols: 41 |
|{7: 6 }{1: } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZrows: 6, cols: 41 |
|{7: 5 } abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMN|
|{7: 6 }OPQRSTUVWXYZ{1: } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...eovim-0.2.1/test/functional/terminal/window_spec.lua:47: in function <...eovim-0.2.1/test/functional/terminal/window_spec.lua:16>
```
|
1.0
|
Test failures in 0.2.1 from Debian's builds - 0.2.1 has been uploaded to Debian and there are various [build/test failures](https://buildd.debian.org/status/logs.php?pkg=neovim&ver=0.2.1-2&suite=sid). As I have time, I'll perform triage on the available porterboxes I have access to, but don't let that stop you from offering ideas/fixes. :)
Here's a breakdown of what's happened for tracking purposes, along with potentially relevant information.
---
### [armel](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=armel&ver=0.2.1-2&stamp=1510204415&raw=0)
* LuaJIT build
```
test/functional/api/highlight_spec.lua[0m @ [36m35[0m: [1mhighlight api nvim_get_hl_by_id[0m
...W7Xi/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: Expected objects to be the same.
Passed in:
(string) 'Invalid highlight id: 7671724'
Expected:
(string) 'Invalid highlight id: 30000'
stack traceback:
...W7Xi/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: in function <...W7Xi/neovim-0.2.1/test/functional/api/highlight_spec.lua:35>
```
---
### [armhf](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=armhf&ver=0.2.1-2&stamp=1510204523&raw=0)
* LuaJIT build
```
test/functional/api/highlight_spec.lua[0m @ [36m35[0m: [1mhighlight api nvim_get_hl_by_id[0m
...SLtd/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: Expected objects to be the same.
Passed in:
(string) 'Invalid highlight id: 6389676'
Expected:
(string) 'Invalid highlight id: 30000'
stack traceback:
...SLtd/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: in function <...SLtd/neovim-0.2.1/test/functional/api/highlight_spec.lua:35>
```
---
### [mips](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=mips&ver=0.2.1-2&stamp=1510204511&raw=0)
* LuaJIT build
```
test/functional/api/highlight_spec.lua[0m @ [36m35[0m: [1mhighlight api nvim_get_hl_by_id[0m
...GU7g/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: Expected objects to be the same.
Passed in:
(string) 'Invalid highlight id: 53'
Expected:
(string) 'Invalid highlight id: 30000'
stack traceback:
...GU7g/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: in function <...GU7g/neovim-0.2.1/test/functional/api/highlight_spec.lua:35>
```
---
### [mips64el](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=mips64el&ver=0.2.1-2&stamp=1510204554&raw=0)
* LuaJIT build
* mips64el support is still pretty new for LuaJIT, so I may switch this build back to Lua and see if that helps
This caused _tons_ of test failures. The example below is just the first one. I'm ignoring other test failures in this build for now, since it's unclear if they're related to this issue.
```
/usr/share/lua/5.1/busted/block.lua:22: attempt to index field 'env' (a nil value)
stack traceback:
/usr/bin/busted:3: in main chunk
```
---
### [mipsel](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=mipsel&ver=0.2.1-2&stamp=1510205124&raw=0)
* LuaJIT build
```
test/functional/api/highlight_spec.lua[0m @ [36m35[0m: [1mhighlight api nvim_get_hl_by_id[0m
...uMp6/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: Expected objects to be the same.
Passed in:
(string) 'Invalid highlight id: 53'
Expected:
(string) 'Invalid highlight id: 30000'
stack traceback:
...uMp6/neovim-0.2.1/test/functional/api/highlight_spec.lua:46: in function <...uMp6/neovim-0.2.1/test/functional/api/highlight_spec.lua:35>
```
```
test/functional/terminal/cursor_spec.lua[0m @ [36m62[0m: [1mterminal cursor with number column is positioned correctly when focused[0m
./test/functional/ui/screen.lua:302: Row 2 did not match.
Expected:
|{7: 1 }tty ready |
|*{7: 2 }{1: } |
|{7: 3 } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|*{7: 2 }rows: 6, cols: 46 |
|{7: 3 }{1: } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...p6/neovim-0.2.1/test/functional/terminal/cursor_spec.lua:65: in function <...p6/neovim-0.2.1/test/functional/terminal/cursor_spec.lua:62>
```
```
test/functional/terminal/window_spec.lua[0m @ [36m16[0m: [1mterminal window with 'number' wraps text[0m
./test/functional/ui/screen.lua:302: Row 4 did not match.
Expected:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZ abcdefghijklmnopqrstuvwxyzABCDEFGHIJ|
|{7: 5 }KLMNOPQRSTUVWXYZrows: 6, cols: 41 |
|{7: 6 }{1: } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZrows: 6, cols: 41 |
|{7: 5 } abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMN|
|{7: 6 }OPQRSTUVWXYZ{1: } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...p6/neovim-0.2.1/test/functional/terminal/window_spec.lua:47: in function <...p6/neovim-0.2.1/test/functional/terminal/window_spec.lua:16>
```
---
### [ppc64el](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=ppc64el&ver=0.2.1-2&stamp=1510203863&raw=0)
* Lua build
```
test/functional/terminal/tui_spec.lua[0m @ [36m408[0m: [1mtui 't_Co' (terminal colors) no TERM uses 8 colors[0m
...O/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: bad argument #2 to 'format' (string expected, got nil)
stack traceback:
...O/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: in function 'assert_term_colors'
...O/neovim-0.2.1/test/functional/terminal/tui_spec.lua:409: in function <...O/neovim-0.2.1/test/functional/terminal/tui_spec.lua:408>
```
---
### [alpha](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=alpha&ver=0.2.1-2&stamp=1510205067&raw=0)
* Lua build
```
test/functional/terminal/cursor_spec.lua[0m @ [36m62[0m: [1mterminal cursor with number column is positioned correctly when focused[0m
./test/functional/ui/screen.lua:302: Row 2 did not match.
Expected:
|{7: 1 }tty ready |
|*{7: 2 }{1: } |
|{7: 3 } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|*{7: 2 }rows: 6, cols: 46 |
|{7: 3 }{1: } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:65: in function <...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:62>
```
```
test/functional/terminal/tui_spec.lua[0m @ [36m408[0m: [1mtui 't_Co' (terminal colors) no TERM uses 8 colors[0m
...j/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: bad argument #2 to 'format' (string expected, got nil)
stack traceback:
...j/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: in function 'assert_term_colors'
...j/neovim-0.2.1/test/functional/terminal/tui_spec.lua:409: in function <...j/neovim-0.2.1/test/functional/terminal/tui_spec.lua:408>
```
---
### [hppa](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=hppa&ver=0.2.1-2&stamp=1510211184&raw=0)
* Lua build
```
test/functional/api/highlight_spec.lua[0m @ [36m35[0m: [1mhighlight api nvim_get_hl_by_id[0m
.../neovim-0.2.1/test/functional/api/highlight_spec.lua:46: Expected objects to be the same.
Passed in:
(string) 'Invalid highlight id: -120576164'
Expected:
(string) 'Invalid highlight id: 30000'
stack traceback:
(tail call): ?
.../neovim-0.2.1/test/functional/api/highlight_spec.lua:46: in function <.../neovim-0.2.1/test/functional/api/highlight_spec.lua:35>
```
```
test/functional/terminal/cursor_spec.lua[0m @ [36m62[0m: [1mterminal cursor with number column is positioned correctly when focused[0m
./test/functional/ui/screen.lua:302: Row 2 did not match.
Expected:
|{7: 1 }tty ready |
|*{7: 2 }{1: } |
|{7: 3 } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|*{7: 2 }rows: 6, cols: 46 |
|{7: 3 }{1: } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:65: in function <...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:62>
```
```
test/functional/terminal/tui_spec.lua[0m @ [36m408[0m: [1mtui 't_Co' (terminal colors) no TERM uses 8 colors[0m
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: bad argument #2 to 'format' (string expected, got nil)
stack traceback:
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: in function 'assert_term_colors'
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:409: in function <...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:408>
```
```
test/functional/terminal/window_spec.lua[0m @ [36m16[0m: [1mterminal window with 'number' wraps text[0m
./test/functional/ui/screen.lua:302: Row 4 did not match.
Expected:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZ abcdefghijklmnopqrstuvwxyzABCDEFGHIJ|
|{7: 5 }KLMNOPQRSTUVWXYZrows: 6, cols: 41 |
|{7: 6 }{1: } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZrows: 6, cols: 41 |
|{7: 5 } abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMN|
|{7: 6 }OPQRSTUVWXYZ{1: } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...eovim-0.2.1/test/functional/terminal/window_spec.lua:47: in function <...eovim-0.2.1/test/functional/terminal/window_spec.lua:16>
```
---
### [powerpc](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=powerpc&ver=0.2.1-2&stamp=1510204003&raw=0)
* LuaJIT build
```
test/functional/terminal/cursor_spec.lua[0m @ [36m62[0m: [1mterminal cursor with number column is positioned correctly when focused[0m
./test/functional/ui/screen.lua:302: Row 2 did not match.
Expected:
|{7: 1 }tty ready |
|*{7: 2 }{1: } |
|{7: 3 } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|*{7: 2 }rows: 6, cols: 46 |
|{7: 3 }{1: } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...xu/neovim-0.2.1/test/functional/terminal/cursor_spec.lua:65: in function <...xu/neovim-0.2.1/test/functional/terminal/cursor_spec.lua:62>
```
---
### [sparc64](https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=sparc64&ver=0.2.1-2&stamp=1510205989&raw=0)
* Lua build
```
test/functional/terminal/cursor_spec.lua[0m @ [36m62[0m: [1mterminal cursor with number column is positioned correctly when focused[0m
./test/functional/ui/screen.lua:302: Row 2 did not match.
Expected:
|{7: 1 }tty ready |
|*{7: 2 }{1: } |
|{7: 3 } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|*{7: 2 }rows: 6, cols: 46 |
|{7: 3 }{1: } |
|{7: 4 } |
|{7: 5 } |
|{7: 6 } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:65: in function <...eovim-0.2.1/test/functional/terminal/cursor_spec.lua:62>
```
```
test/functional/terminal/tui_spec.lua[0m @ [36m408[0m: [1mtui 't_Co' (terminal colors) no TERM uses 8 colors[0m
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: bad argument #2 to 'format' (string expected, got nil)
stack traceback:
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:381: in function 'assert_term_colors'
...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:409: in function <...s/neovim-0.2.1/test/functional/terminal/tui_spec.lua:408>
```
```
test/functional/terminal/window_spec.lua[0m @ [36m16[0m: [1mterminal window with 'number' wraps text[0m
./test/functional/ui/screen.lua:302: Row 4 did not match.
Expected:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZ abcdefghijklmnopqrstuvwxyzABCDEFGHIJ|
|{7: 5 }KLMNOPQRSTUVWXYZrows: 6, cols: 41 |
|{7: 6 }{1: } |
|{3:-- TERMINAL --} |
Actual:
|{7: 1 }tty ready |
|{7: 2 }rows: 6, cols: 48 |
|{7: 3 }abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNO|
|*{7: 4 }WXYZrows: 6, cols: 41 |
|{7: 5 } abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMN|
|{7: 6 }OPQRSTUVWXYZ{1: } |
|{3:-- TERMINAL --} |
To print the expect() call that would assert the current screen state, use
screen:snapshot_util(). In case of non-deterministic failures, use
screen:redraw_debug() to show all intermediate screen states.
stack traceback:
./test/functional/ui/screen.lua:302: in function 'wait'
./test/functional/ui/screen.lua:216: in function 'expect'
...eovim-0.2.1/test/functional/terminal/window_spec.lua:47: in function <...eovim-0.2.1/test/functional/terminal/window_spec.lua:16>
```
|
non_process
|
test failures in from debian s builds has been uploaded to debian and there are various as i have time i ll perform triage on the available porterboxes i have access to but don t let that stop you from offering ideas fixes here s a breakdown of what s happened for tracking purposes along with potentially relevant information luajit build test functional api highlight spec lua api nvim get hl by id neovim test functional api highlight spec lua expected objects to be the same passed in string invalid highlight id expected string invalid highlight id stack traceback neovim test functional api highlight spec lua in function luajit build test functional api highlight spec lua api nvim get hl by id sltd neovim test functional api highlight spec lua expected objects to be the same passed in string invalid highlight id expected string invalid highlight id stack traceback sltd neovim test functional api highlight spec lua in function luajit build test functional api highlight spec lua api nvim get hl by id neovim test functional api highlight spec lua expected objects to be the same passed in string invalid highlight id expected string invalid highlight id stack traceback neovim test functional api highlight spec lua in function luajit build support is still pretty new for luajit so i may switch this build back to lua and see if that helps this caused tons of test failures the example below is just the first one i m ignoring other test failures in this build for now since it s unclear if they re related to this issue usr share lua busted block lua attempt to index field env a nil value stack traceback usr bin busted in main chunk luajit build test functional api highlight spec lua api nvim get hl by id neovim test functional api highlight spec lua expected objects to be the same passed in string invalid highlight id expected string invalid highlight id stack traceback neovim test functional api highlight spec lua in function test functional terminal cursor spec lua cursor with number column is positioned correctly when focused test functional ui screen lua row did not match expected tty ready terminal actual tty ready rows cols terminal to print the expect call that would assert the current screen state use screen snapshot util in case of non deterministic failures use screen redraw debug to show all intermediate screen states stack traceback test functional ui screen lua in function wait test functional ui screen lua in function expect neovim test functional terminal cursor spec lua in function test functional terminal window spec lua window with number wraps text test functional ui screen lua row did not match expected tty ready rows cols abcdefghijklmnopqrstuvwxyzabcdefghijklmno wxyz abcdefghijklmnopqrstuvwxyzabcdefghij klmnopqrstuvwxyzrows cols terminal actual tty ready rows cols abcdefghijklmnopqrstuvwxyzabcdefghijklmno wxyzrows cols abcdefghijklmnopqrstuvwxyzabcdefghijklmn opqrstuvwxyz terminal to print the expect call that would assert the current screen state use screen snapshot util in case of non deterministic failures use screen redraw debug to show all intermediate screen states stack traceback test functional ui screen lua in function wait test functional ui screen lua in function expect neovim test functional terminal window spec lua in function lua build test functional terminal tui spec lua t co terminal colors no term uses colors o neovim test functional terminal tui spec lua bad argument to format string expected got nil stack traceback o neovim test functional terminal tui spec lua in function assert term colors o neovim test functional terminal tui spec lua in function lua build test functional terminal cursor spec lua cursor with number column is positioned correctly when focused test functional ui screen lua row did not match expected tty ready terminal actual tty ready rows cols terminal to print the expect call that would assert the current screen state use screen snapshot util in case of non deterministic failures use screen redraw debug to show all intermediate screen states stack traceback test functional ui screen lua in function wait test functional ui screen lua in function expect eovim test functional terminal cursor spec lua in function test functional terminal tui spec lua t co terminal colors no term uses colors j neovim test functional terminal tui spec lua bad argument to format string expected got nil stack traceback j neovim test functional terminal tui spec lua in function assert term colors j neovim test functional terminal tui spec lua in function lua build test functional api highlight spec lua api nvim get hl by id neovim test functional api highlight spec lua expected objects to be the same passed in string invalid highlight id expected string invalid highlight id stack traceback tail call neovim test functional api highlight spec lua in function test functional terminal cursor spec lua cursor with number column is positioned correctly when focused test functional ui screen lua row did not match expected tty ready terminal actual tty ready rows cols terminal to print the expect call that would assert the current screen state use screen snapshot util in case of non deterministic failures use screen redraw debug to show all intermediate screen states stack traceback test functional ui screen lua in function wait test functional ui screen lua in function expect eovim test functional terminal cursor spec lua in function test functional terminal tui spec lua t co terminal colors no term uses colors s neovim test functional terminal tui spec lua bad argument to format string expected got nil stack traceback s neovim test functional terminal tui spec lua in function assert term colors s neovim test functional terminal tui spec lua in function test functional terminal window spec lua window with number wraps text test functional ui screen lua row did not match expected tty ready rows cols abcdefghijklmnopqrstuvwxyzabcdefghijklmno wxyz abcdefghijklmnopqrstuvwxyzabcdefghij klmnopqrstuvwxyzrows cols terminal actual tty ready rows cols abcdefghijklmnopqrstuvwxyzabcdefghijklmno wxyzrows cols abcdefghijklmnopqrstuvwxyzabcdefghijklmn opqrstuvwxyz terminal to print the expect call that would assert the current screen state use screen snapshot util in case of non deterministic failures use screen redraw debug to show all intermediate screen states stack traceback test functional ui screen lua in function wait test functional ui screen lua in function expect eovim test functional terminal window spec lua in function luajit build test functional terminal cursor spec lua cursor with number column is positioned correctly when focused test functional ui screen lua row did not match expected tty ready terminal actual tty ready rows cols terminal to print the expect call that would assert the current screen state use screen snapshot util in case of non deterministic failures use screen redraw debug to show all intermediate screen states stack traceback test functional ui screen lua in function wait test functional ui screen lua in function expect xu neovim test functional terminal cursor spec lua in function lua build test functional terminal cursor spec lua cursor with number column is positioned correctly when focused test functional ui screen lua row did not match expected tty ready terminal actual tty ready rows cols terminal to print the expect call that would assert the current screen state use screen snapshot util in case of non deterministic failures use screen redraw debug to show all intermediate screen states stack traceback test functional ui screen lua in function wait test functional ui screen lua in function expect eovim test functional terminal cursor spec lua in function test functional terminal tui spec lua t co terminal colors no term uses colors s neovim test functional terminal tui spec lua bad argument to format string expected got nil stack traceback s neovim test functional terminal tui spec lua in function assert term colors s neovim test functional terminal tui spec lua in function test functional terminal window spec lua window with number wraps text test functional ui screen lua row did not match expected tty ready rows cols abcdefghijklmnopqrstuvwxyzabcdefghijklmno wxyz abcdefghijklmnopqrstuvwxyzabcdefghij klmnopqrstuvwxyzrows cols terminal actual tty ready rows cols abcdefghijklmnopqrstuvwxyzabcdefghijklmno wxyzrows cols abcdefghijklmnopqrstuvwxyzabcdefghijklmn opqrstuvwxyz terminal to print the expect call that would assert the current screen state use screen snapshot util in case of non deterministic failures use screen redraw debug to show all intermediate screen states stack traceback test functional ui screen lua in function wait test functional ui screen lua in function expect eovim test functional terminal window spec lua in function
| 0
|
11,295
| 14,101,399,912
|
IssuesEvent
|
2020-11-06 06:45:23
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
components/tidb_query_vec_expr/src/impl_cast.rs:979:5 fails
|
severity/Moderate sig/coprocessor
|
## Bug Report
<!-- Thanks for your bug report! Don't worry if you can't fill out all the sections. -->
### What version of TiKV are you using?
<!-- You can run `tikv-server --version` -->
15c15b512c95465ff78a990451e5bbe6084a6b3a
rustc-bin-9999 -V
rustc 1.47.0-nightly (2d8a3b918 2020-08-26)
### What operating system and CPU are you using?
<!-- If you're using Linux, you can run `cat /proc/cpuinfo` -->
### Steps to reproduce
<!-- If possible, provide a recipe for reproducing the error. A complete runnable program is good. -->
### What did you expect?
### What did happened?
```sh
error[E0425]: cannot find value `val` in this scope
--> components/tidb_query_vec_expr/src/impl_cast.rs:979:5
|
979 | val.into_inner().to_string().as_bytes()
| ^^^ not found in this scope
error[E0425]: cannot find value `val` in this scope
--> components/tidb_query_vec_expr/src/impl_cast.rs:981:53
|
981 | cast_as_duration!(BytesRef, cast_bytes_as_duration, val);
| ^^^ not found in this scope
error[E0425]: cannot find value `val` in this scope
--> components/tidb_query_vec_expr/src/impl_cast.rs:985:5
|
985 | val.to_string().as_bytes()
| ^^^ not found in this scope
error[E0425]: cannot find value `val` in this scope
--> components/tidb_query_vec_expr/src/impl_cast.rs:987:51
|
987 | cast_as_duration!(JsonRef, cast_json_as_duration, val.unquote()?.as_bytes());
| ^^^ not found in this scope
error: aborting due to 4 previous errors
For more information about this error, try `rustc --explain E0425`.
error: could not compile `tidb_query_vec_expr`.
To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed
```
|
1.0
|
components/tidb_query_vec_expr/src/impl_cast.rs:979:5 fails - ## Bug Report
<!-- Thanks for your bug report! Don't worry if you can't fill out all the sections. -->
### What version of TiKV are you using?
<!-- You can run `tikv-server --version` -->
15c15b512c95465ff78a990451e5bbe6084a6b3a
rustc-bin-9999 -V
rustc 1.47.0-nightly (2d8a3b918 2020-08-26)
### What operating system and CPU are you using?
<!-- If you're using Linux, you can run `cat /proc/cpuinfo` -->
### Steps to reproduce
<!-- If possible, provide a recipe for reproducing the error. A complete runnable program is good. -->
### What did you expect?
### What did happened?
```sh
error[E0425]: cannot find value `val` in this scope
--> components/tidb_query_vec_expr/src/impl_cast.rs:979:5
|
979 | val.into_inner().to_string().as_bytes()
| ^^^ not found in this scope
error[E0425]: cannot find value `val` in this scope
--> components/tidb_query_vec_expr/src/impl_cast.rs:981:53
|
981 | cast_as_duration!(BytesRef, cast_bytes_as_duration, val);
| ^^^ not found in this scope
error[E0425]: cannot find value `val` in this scope
--> components/tidb_query_vec_expr/src/impl_cast.rs:985:5
|
985 | val.to_string().as_bytes()
| ^^^ not found in this scope
error[E0425]: cannot find value `val` in this scope
--> components/tidb_query_vec_expr/src/impl_cast.rs:987:51
|
987 | cast_as_duration!(JsonRef, cast_json_as_duration, val.unquote()?.as_bytes());
| ^^^ not found in this scope
error: aborting due to 4 previous errors
For more information about this error, try `rustc --explain E0425`.
error: could not compile `tidb_query_vec_expr`.
To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed
```
|
process
|
components tidb query vec expr src impl cast rs fails bug report what version of tikv are you using rustc bin v rustc nightly what operating system and cpu are you using steps to reproduce what did you expect what did happened sh error cannot find value val in this scope components tidb query vec expr src impl cast rs val into inner to string as bytes not found in this scope error cannot find value val in this scope components tidb query vec expr src impl cast rs cast as duration bytesref cast bytes as duration val not found in this scope error cannot find value val in this scope components tidb query vec expr src impl cast rs val to string as bytes not found in this scope error cannot find value val in this scope components tidb query vec expr src impl cast rs cast as duration jsonref cast json as duration val unquote as bytes not found in this scope error aborting due to previous errors for more information about this error try rustc explain error could not compile tidb query vec expr to learn more run the command again with verbose warning build failed waiting for other jobs to finish error build failed
| 1
|
384
| 2,823,574,569
|
IssuesEvent
|
2015-05-21 09:39:53
|
austundag/testing
|
https://api.github.com/repos/austundag/testing
|
closed
|
Undo VM machine changes
|
enhancement in process
|
Now that local development is possible undo changes in 4afc23f146bcc06a6ff76dcc140e2c52b8dc5b8e and VHAINNOVATIONS/AccessAllergyInfo-ADK@6b8da4f494003ff24f55e60d92a511b7e0c626a4.
|
1.0
|
Undo VM machine changes - Now that local development is possible undo changes in 4afc23f146bcc06a6ff76dcc140e2c52b8dc5b8e and VHAINNOVATIONS/AccessAllergyInfo-ADK@6b8da4f494003ff24f55e60d92a511b7e0c626a4.
|
process
|
undo vm machine changes now that local development is possible undo changes in and vhainnovations accessallergyinfo adk
| 1
|
365,348
| 25,531,022,492
|
IssuesEvent
|
2022-11-29 08:25:51
|
tavi22/Reseller-Web
|
https://api.github.com/repos/tavi22/Reseller-Web
|
closed
|
Functional decomposition
|
documentation enhancement
|
Functional decomposition should help reduce complexity and uncertainty.
|
1.0
|
Functional decomposition - Functional decomposition should help reduce complexity and uncertainty.
|
non_process
|
functional decomposition functional decomposition should help reduce complexity and uncertainty
| 0
|
345,477
| 30,816,422,634
|
IssuesEvent
|
2023-08-01 13:44:55
|
transmission/transmission
|
https://api.github.com/repos/transmission/transmission
|
closed
|
MacOS option-click not working
|
scope:mac needs confirmation type:ui needs testers
|
### What is the issue?
I might be misremembering this function, but holding the option key would allow the user to click the start/play/reload button and start the remaining inactive torrents. This doesn't work at the moment.
An assumption, given that it was the last major change to the macOS client, but possibly related to #5147.
### Which application of Transmission?
macOS app
### Which version of Transmission?
4.1.0-dev (f758cb3597)
|
1.0
|
MacOS option-click not working - ### What is the issue?
I might be misremembering this function, but holding the option key would allow the user to click the start/play/reload button and start the remaining inactive torrents. This doesn't work at the moment.
An assumption, given that it was the last major change to the macOS client, but possibly related to #5147.
### Which application of Transmission?
macOS app
### Which version of Transmission?
4.1.0-dev (f758cb3597)
|
non_process
|
macos option click not working what is the issue i might be misremembering this function but holding the option key would allow the user to click the start play reload button and start the remaining inactive torrents this doesn t work at the moment an assumption given that it was the last major change to the macos client but possibly related to which application of transmission macos app which version of transmission dev
| 0
|
318,685
| 27,321,017,587
|
IssuesEvent
|
2023-02-24 19:50:53
|
peviitor-ro/ui-js
|
https://api.github.com/repos/peviitor-ro/ui-js
|
closed
|
[SERP] "Alătură-te" button's height is 20px
|
bug TestQuality Low
|
## Precondition
URL: https://beta.peviitor.ro/
Device: Samsung Galaxy S21 Ultra
Browser: Chrome
Platform: Android 12
## Steps to Reproduce:
### Step 1 <span style="color:#58b880"> **[Pass]** </span>
Open URL in browser
#### Expected Result
Website is loaded without any error
### Step 2 <span style="color:#58b880"> **[Pass]** </span>
Click on “Caută”
#### Expected Result
The user is redirected to SERP
### Step 3 <span style="color:#ff5538"> **[Fail]** </span>
Inspect "Alătură-te" text's height
#### Expected Result
Text height is 19px
#### Actual Result
"Alatura-te" text height in 20 px
|
1.0
|
[SERP] "Alătură-te" button's height is 20px - ## Precondition
URL: https://beta.peviitor.ro/
Device: Samsung Galaxy S21 Ultra
Browser: Chrome
Platform: Android 12
## Steps to Reproduce:
### Step 1 <span style="color:#58b880"> **[Pass]** </span>
Open URL in browser
#### Expected Result
Website is loaded without any error
### Step 2 <span style="color:#58b880"> **[Pass]** </span>
Click on “Caută”
#### Expected Result
The user is redirected to SERP
### Step 3 <span style="color:#ff5538"> **[Fail]** </span>
Inspect "Alătură-te" text's height
#### Expected Result
Text height is 19px
#### Actual Result
"Alatura-te" text height in 20 px
|
non_process
|
alătură te button s height is precondition url device samsung galaxy ultra browser chrome platform android steps to reproduce step open url in browser expected result website is loaded without any error step click on “caută” expected result the user is redirected to serp step inspect quot alătură te quot text s height expected result text height is actual result quot alatura te quot text height in px
| 0
|
21,794
| 30,300,834,341
|
IssuesEvent
|
2023-07-10 05:43:15
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Masks with reconstruct in LCh
|
reproduce: confirmed scope: image processing bug: pending
|
### Describe the bug
It seems to me that mask handling in the highlight reconstruction module does not work as expected when using the reconstruct in LCh mode.
I have an image with clipped highlights. Highlight reconstruction in the Filmic RGB module is switched of. I want to do highlight reconstruction with the highlight reconstruction module using masks.
When I select the "inpaint opposed" mode the module behaves as expected. The clipping threshold is set to default 1 and the area matches the raw-overexposed warning. The module operates on the whole image. Now I select the drawn mask button. No change in the image happens so far. Now I draw a mask. The result is as expected. The effect is now applied just in the mask and the rest of the image shows the image as if the module is not applied.
If I repeat the same procedure whith the "reconstruct in LCh" mode the behaviour is different. Before setting the mask the module applies the reconstruction on the whole image as expected. But when I select the drawn mask button the image immediately changes in a way as if the module is switched off. The histogram changes similar to the switched off module. I now can place a mask but this does not change anything.
My opinion is that similar to the "inpaint opposed" mode, just selecting the drawn mask button should not change the image.
### Steps to reproduce
1. Set the highlight reconstruction module to default value via the default icon
2. Change method in in the highlight reconstruction module to "reconstruct in LCh" -> The method gets applied to the whole image
3. Select the drawn mask button -> The image changes in a way as if the module is switched off. See description above
4. Draw a mask -> No change in the image inside or outside the mask
### Expected behavior
Correct mask handling for reconstruct in LCh mode
### Logfile | Screenshot | Screencast
Figure with highlight reconstruction module disabled:

Figure with highlight reconstruction module enabled and without mask enabled:

Figure with with highlight reconstruction module and mask enabled:

Figure with mask placed:

### Commit
_No response_
### Where did you install darktable from?
distro packaging
### darktable version
4.2.1-4
### What OS are you using?
Linux
### What is the version of your OS?
Kubuntu 23.04
### Describe your system?
I use X11 and Mesa Intel® UHD Graphics 770
### Are you using OpenCL GPU in darktable?
Yes
### If yes, what is the GPU card and driver?
Mesa Intel® UHD Graphics 770
### Please provide additional context if applicable. You can attach files too, but might need to rename to .txt or .zip
_No response_
|
1.0
|
Masks with reconstruct in LCh - ### Describe the bug
It seems to me that mask handling in the highlight reconstruction module does not work as expected when using the reconstruct in LCh mode.
I have an image with clipped highlights. Highlight reconstruction in the Filmic RGB module is switched of. I want to do highlight reconstruction with the highlight reconstruction module using masks.
When I select the "inpaint opposed" mode the module behaves as expected. The clipping threshold is set to default 1 and the area matches the raw-overexposed warning. The module operates on the whole image. Now I select the drawn mask button. No change in the image happens so far. Now I draw a mask. The result is as expected. The effect is now applied just in the mask and the rest of the image shows the image as if the module is not applied.
If I repeat the same procedure whith the "reconstruct in LCh" mode the behaviour is different. Before setting the mask the module applies the reconstruction on the whole image as expected. But when I select the drawn mask button the image immediately changes in a way as if the module is switched off. The histogram changes similar to the switched off module. I now can place a mask but this does not change anything.
My opinion is that similar to the "inpaint opposed" mode, just selecting the drawn mask button should not change the image.
### Steps to reproduce
1. Set the highlight reconstruction module to default value via the default icon
2. Change method in in the highlight reconstruction module to "reconstruct in LCh" -> The method gets applied to the whole image
3. Select the drawn mask button -> The image changes in a way as if the module is switched off. See description above
4. Draw a mask -> No change in the image inside or outside the mask
### Expected behavior
Correct mask handling for reconstruct in LCh mode
### Logfile | Screenshot | Screencast
Figure with highlight reconstruction module disabled:

Figure with highlight reconstruction module enabled and without mask enabled:

Figure with with highlight reconstruction module and mask enabled:

Figure with mask placed:

### Commit
_No response_
### Where did you install darktable from?
distro packaging
### darktable version
4.2.1-4
### What OS are you using?
Linux
### What is the version of your OS?
Kubuntu 23.04
### Describe your system?
I use X11 and Mesa Intel® UHD Graphics 770
### Are you using OpenCL GPU in darktable?
Yes
### If yes, what is the GPU card and driver?
Mesa Intel® UHD Graphics 770
### Please provide additional context if applicable. You can attach files too, but might need to rename to .txt or .zip
_No response_
|
process
|
masks with reconstruct in lch describe the bug it seems to me that mask handling in the highlight reconstruction module does not work as expected when using the reconstruct in lch mode i have an image with clipped highlights highlight reconstruction in the filmic rgb module is switched of i want to do highlight reconstruction with the highlight reconstruction module using masks when i select the inpaint opposed mode the module behaves as expected the clipping threshold is set to default and the area matches the raw overexposed warning the module operates on the whole image now i select the drawn mask button no change in the image happens so far now i draw a mask the result is as expected the effect is now applied just in the mask and the rest of the image shows the image as if the module is not applied if i repeat the same procedure whith the reconstruct in lch mode the behaviour is different before setting the mask the module applies the reconstruction on the whole image as expected but when i select the drawn mask button the image immediately changes in a way as if the module is switched off the histogram changes similar to the switched off module i now can place a mask but this does not change anything my opinion is that similar to the inpaint opposed mode just selecting the drawn mask button should not change the image steps to reproduce set the highlight reconstruction module to default value via the default icon change method in in the highlight reconstruction module to reconstruct in lch the method gets applied to the whole image select the drawn mask button the image changes in a way as if the module is switched off see description above draw a mask no change in the image inside or outside the mask expected behavior correct mask handling for reconstruct in lch mode logfile screenshot screencast figure with highlight reconstruction module disabled figure with highlight reconstruction module enabled and without mask enabled figure with with highlight reconstruction module and mask enabled figure with mask placed commit no response where did you install darktable from distro packaging darktable version what os are you using linux what is the version of your os kubuntu describe your system i use and mesa intel® uhd graphics are you using opencl gpu in darktable yes if yes what is the gpu card and driver mesa intel® uhd graphics please provide additional context if applicable you can attach files too but might need to rename to txt or zip no response
| 1
|
13,401
| 15,874,816,266
|
IssuesEvent
|
2021-04-09 05:53:39
|
googleapis/python-pubsub
|
https://api.github.com/repos/googleapis/python-pubsub
|
closed
|
Bump required unit test coverage to 100%
|
api: pubsub type: process
|
Currently the required test coverage is set to a magic number 99%. This is not ideal for at least two reasons:
- The missing "one percent" is still a bit wide, meaning that new code with less than 100% coverage can be submitted, but the coverage check might not catch that.
- One could legitimately refactor some code and removing some (currently covered) lines in the process. As a result, the relative coverage can _decrease_. If this reduction is just enough to drop under the current 99% threshold, that PR would see a CI failure, even though there's nothing wrong with it.
The generated code has had 100% coverage for quite some time now, meaning that we could clean up the tests a bit and bump the coverage to 100%, avoiding the abovementioned issues.
|
1.0
|
Bump required unit test coverage to 100% - Currently the required test coverage is set to a magic number 99%. This is not ideal for at least two reasons:
- The missing "one percent" is still a bit wide, meaning that new code with less than 100% coverage can be submitted, but the coverage check might not catch that.
- One could legitimately refactor some code and removing some (currently covered) lines in the process. As a result, the relative coverage can _decrease_. If this reduction is just enough to drop under the current 99% threshold, that PR would see a CI failure, even though there's nothing wrong with it.
The generated code has had 100% coverage for quite some time now, meaning that we could clean up the tests a bit and bump the coverage to 100%, avoiding the abovementioned issues.
|
process
|
bump required unit test coverage to currently the required test coverage is set to a magic number this is not ideal for at least two reasons the missing one percent is still a bit wide meaning that new code with less than coverage can be submitted but the coverage check might not catch that one could legitimately refactor some code and removing some currently covered lines in the process as a result the relative coverage can decrease if this reduction is just enough to drop under the current threshold that pr would see a ci failure even though there s nothing wrong with it the generated code has had coverage for quite some time now meaning that we could clean up the tests a bit and bump the coverage to avoiding the abovementioned issues
| 1
|
15,911
| 20,117,356,736
|
IssuesEvent
|
2022-02-07 21:04:00
|
googleapis/gapic-showcase
|
https://api.github.com/repos/googleapis/gapic-showcase
|
closed
|
migrate custom release job to release-please
|
type: process
|
The release notes, tag, and release creation should be handled by release-please, but the asset generation and upload can be done by the existing GitHub Action(s) with a little tweaking. This was done in gapic-generator-go (https://github.com/googleapis/gapic-generator-go/pull/839 and https://github.com/googleapis/gapic-generator-go/pull/863).
|
1.0
|
migrate custom release job to release-please - The release notes, tag, and release creation should be handled by release-please, but the asset generation and upload can be done by the existing GitHub Action(s) with a little tweaking. This was done in gapic-generator-go (https://github.com/googleapis/gapic-generator-go/pull/839 and https://github.com/googleapis/gapic-generator-go/pull/863).
|
process
|
migrate custom release job to release please the release notes tag and release creation should be handled by release please but the asset generation and upload can be done by the existing github action s with a little tweaking this was done in gapic generator go and
| 1
|
5,110
| 7,886,189,855
|
IssuesEvent
|
2018-06-27 14:35:41
|
gvwilson/teachtogether.tech
|
https://api.github.com/repos/gvwilson/teachtogether.tech
|
closed
|
Ch06 Tiffany Timbers
|
Ch06 Process
|
- I like "The rules" section. It is witty and spot on.
- In the table of contents initially I thought the "Challenges" parts of each chapter was a section where you described challenges with that topic, but then digging into Chapter 6 I see that they are like the SWC challenges. Even though I am familiar with SWC "challenges" and know they are exercises to be completed by the reader/learner in that context, my brain did not generalize that to this book format. So I wonder if it might be better to call these sections "Exercises" or "Putting into practice", or something else like that so it is clearer when glancing at the table of contents what these are.
- I see you have objectives at the beginning of every chapter, and I like that. However, I find it a bit abrupt to go straight from the title to the objectives. I therefore suggest that you add 1-2 sentences of gentle but brief intro before you hit the objectives (more than the title but less than the objectives).
Here are my comments on the chapter 6 materials:
- I would argue that knowing your audience/learner persona should be step 1 of backward design... For example, how else might you know what problems or misconceptions you expect to encounter...
- The teaching to the test section sounds very political, and after reading it aside from the presence of a test I am not sure what the difference is still? Is it also that there is more of a prescribed and thoughtful method to backward design?
- For Blooms taxonomy, you list Evaluating and Creating as categories in bold, but then in the following text paragraph you refer to these categories as Synthesis and Evaluation. Might want to use the same words as not to create confusion.
- On page 47/48 you have 11 points for an argument you are trying to make, thats a lot to remember... Maybe you can trim that down to the more meaningful ones (for example, it might be worth dropping the ones that say "well it works for Wikipedia so why not here...).
|
1.0
|
Ch06 Tiffany Timbers - - I like "The rules" section. It is witty and spot on.
- In the table of contents initially I thought the "Challenges" parts of each chapter was a section where you described challenges with that topic, but then digging into Chapter 6 I see that they are like the SWC challenges. Even though I am familiar with SWC "challenges" and know they are exercises to be completed by the reader/learner in that context, my brain did not generalize that to this book format. So I wonder if it might be better to call these sections "Exercises" or "Putting into practice", or something else like that so it is clearer when glancing at the table of contents what these are.
- I see you have objectives at the beginning of every chapter, and I like that. However, I find it a bit abrupt to go straight from the title to the objectives. I therefore suggest that you add 1-2 sentences of gentle but brief intro before you hit the objectives (more than the title but less than the objectives).
Here are my comments on the chapter 6 materials:
- I would argue that knowing your audience/learner persona should be step 1 of backward design... For example, how else might you know what problems or misconceptions you expect to encounter...
- The teaching to the test section sounds very political, and after reading it aside from the presence of a test I am not sure what the difference is still? Is it also that there is more of a prescribed and thoughtful method to backward design?
- For Blooms taxonomy, you list Evaluating and Creating as categories in bold, but then in the following text paragraph you refer to these categories as Synthesis and Evaluation. Might want to use the same words as not to create confusion.
- On page 47/48 you have 11 points for an argument you are trying to make, thats a lot to remember... Maybe you can trim that down to the more meaningful ones (for example, it might be worth dropping the ones that say "well it works for Wikipedia so why not here...).
|
process
|
tiffany timbers i like the rules section it is witty and spot on in the table of contents initially i thought the challenges parts of each chapter was a section where you described challenges with that topic but then digging into chapter i see that they are like the swc challenges even though i am familiar with swc challenges and know they are exercises to be completed by the reader learner in that context my brain did not generalize that to this book format so i wonder if it might be better to call these sections exercises or putting into practice or something else like that so it is clearer when glancing at the table of contents what these are i see you have objectives at the beginning of every chapter and i like that however i find it a bit abrupt to go straight from the title to the objectives i therefore suggest that you add sentences of gentle but brief intro before you hit the objectives more than the title but less than the objectives here are my comments on the chapter materials i would argue that knowing your audience learner persona should be step of backward design for example how else might you know what problems or misconceptions you expect to encounter the teaching to the test section sounds very political and after reading it aside from the presence of a test i am not sure what the difference is still is it also that there is more of a prescribed and thoughtful method to backward design for blooms taxonomy you list evaluating and creating as categories in bold but then in the following text paragraph you refer to these categories as synthesis and evaluation might want to use the same words as not to create confusion on page you have points for an argument you are trying to make thats a lot to remember maybe you can trim that down to the more meaningful ones for example it might be worth dropping the ones that say well it works for wikipedia so why not here
| 1
|
102,860
| 8,862,889,925
|
IssuesEvent
|
2019-01-10 07:57:28
|
chameleon-system/chameleon-system
|
https://api.github.com/repos/chameleon-system/chameleon-system
|
closed
|
Backend: replace icons with icon font
|
Status: Test Type: Feature
|
**Describe the solution you'd like**
Remove unused, deprecated icons, move necessary icons (massive BC break) to one directory.
Replace main menu icons with vector icon set (font) and map all old icons to new ones.
**Possible solution (if any)**
The main set of icons is located in CoreBundle/Resources/public/images/icons
The more than 1000 famfamfam icons are used in the main menu and for some button icons.
They need to replaced with an icon font to fit into the new backend theme design and make it look less like a candy shop.
There are some directories with icons, that are not used for years and should be removed without replacement.
- CoreBundle/Resources/public/images/box
- CoreBundle/Resources/public/images/breadcrumb
- CoreBundle/Resources/public/images/breadcrumb
- CoreBundle/Resources/public/images/nav_icons (where used in main menu blocks, now only the error image is used, remove or replace it there)
- CoreBundle/Resources/public/images/smileys (part of a special field type?)
- CoreBundle/Resources/public/images/social_bookmark_icons (where part of the blog bundle years ago)
Icons that we need:
- CoreBundle/Resources/public/images/filetype_icons (used in backend AND frontend if using core styling for download items), should not be replaced at this point
- CoreBundle/Resources/public/images/tree (if we replace the backend navigation tree, this directory may be deprecated, too)
- CoreBundle/Resources/public/images (the main directory has some images/icons that are used all over the place in the backend. We need to look into this what is deprecated here and what may be get deprecated by replacing the backend theme)
And then there is the theme directory in src/CoreBundle/Resources/public/themes/standard/images
Here we have box, breadcrumb, icons, nav_icons and the main directory again and i am sure we will find duplicates here. The theme directory has precedence.
But again, box, breadcrumb and nav_icons are deprecated for sure.
This issue should be done before we replace the theme, because i cleans up the icon/image mess first.
|
1.0
|
Backend: replace icons with icon font - **Describe the solution you'd like**
Remove unused, deprecated icons, move necessary icons (massive BC break) to one directory.
Replace main menu icons with vector icon set (font) and map all old icons to new ones.
**Possible solution (if any)**
The main set of icons is located in CoreBundle/Resources/public/images/icons
The more than 1000 famfamfam icons are used in the main menu and for some button icons.
They need to replaced with an icon font to fit into the new backend theme design and make it look less like a candy shop.
There are some directories with icons, that are not used for years and should be removed without replacement.
- CoreBundle/Resources/public/images/box
- CoreBundle/Resources/public/images/breadcrumb
- CoreBundle/Resources/public/images/breadcrumb
- CoreBundle/Resources/public/images/nav_icons (where used in main menu blocks, now only the error image is used, remove or replace it there)
- CoreBundle/Resources/public/images/smileys (part of a special field type?)
- CoreBundle/Resources/public/images/social_bookmark_icons (where part of the blog bundle years ago)
Icons that we need:
- CoreBundle/Resources/public/images/filetype_icons (used in backend AND frontend if using core styling for download items), should not be replaced at this point
- CoreBundle/Resources/public/images/tree (if we replace the backend navigation tree, this directory may be deprecated, too)
- CoreBundle/Resources/public/images (the main directory has some images/icons that are used all over the place in the backend. We need to look into this what is deprecated here and what may be get deprecated by replacing the backend theme)
And then there is the theme directory in src/CoreBundle/Resources/public/themes/standard/images
Here we have box, breadcrumb, icons, nav_icons and the main directory again and i am sure we will find duplicates here. The theme directory has precedence.
But again, box, breadcrumb and nav_icons are deprecated for sure.
This issue should be done before we replace the theme, because i cleans up the icon/image mess first.
|
non_process
|
backend replace icons with icon font describe the solution you d like remove unused deprecated icons move necessary icons massive bc break to one directory replace main menu icons with vector icon set font and map all old icons to new ones possible solution if any the main set of icons is located in corebundle resources public images icons the more than famfamfam icons are used in the main menu and for some button icons they need to replaced with an icon font to fit into the new backend theme design and make it look less like a candy shop there are some directories with icons that are not used for years and should be removed without replacement corebundle resources public images box corebundle resources public images breadcrumb corebundle resources public images breadcrumb corebundle resources public images nav icons where used in main menu blocks now only the error image is used remove or replace it there corebundle resources public images smileys part of a special field type corebundle resources public images social bookmark icons where part of the blog bundle years ago icons that we need corebundle resources public images filetype icons used in backend and frontend if using core styling for download items should not be replaced at this point corebundle resources public images tree if we replace the backend navigation tree this directory may be deprecated too corebundle resources public images the main directory has some images icons that are used all over the place in the backend we need to look into this what is deprecated here and what may be get deprecated by replacing the backend theme and then there is the theme directory in src corebundle resources public themes standard images here we have box breadcrumb icons nav icons and the main directory again and i am sure we will find duplicates here the theme directory has precedence but again box breadcrumb and nav icons are deprecated for sure this issue should be done before we replace the theme because i cleans up the icon image mess first
| 0
|
273,474
| 20,793,882,661
|
IssuesEvent
|
2022-03-17 07:01:41
|
AY2122S2-CS2103-F11-2/tp
|
https://api.github.com/repos/AY2122S2-CS2103-F11-2/tp
|
closed
|
Update DG with `schedule` command
|
type.Story type.Task priority.High Documentation
|
User should be able schedule interview time slots for TA candidates in TAlent Assistant™. This functionality should be updated in the following sections:
- [ ] User stories
- [ ] Use cases
|
1.0
|
Update DG with `schedule` command - User should be able schedule interview time slots for TA candidates in TAlent Assistant™. This functionality should be updated in the following sections:
- [ ] User stories
- [ ] Use cases
|
non_process
|
update dg with schedule command user should be able schedule interview time slots for ta candidates in talent assistant™ this functionality should be updated in the following sections user stories use cases
| 0
|
22,319
| 30,882,607,841
|
IssuesEvent
|
2023-08-03 18:51:17
|
openline-ai/openline-customer-os
|
https://api.github.com/repos/openline-ai/openline-customer-os
|
closed
|
Crashes in events-processing on dev env
|
app/events-processing-platform
|
Crashes in events-processing on dev env should log accordingly
|
1.0
|
Crashes in events-processing on dev env - Crashes in events-processing on dev env should log accordingly
|
process
|
crashes in events processing on dev env crashes in events processing on dev env should log accordingly
| 1
|
36,549
| 12,417,702,417
|
IssuesEvent
|
2020-05-22 21:25:57
|
wrbejar/Nova8HML
|
https://api.github.com/repos/wrbejar/Nova8HML
|
opened
|
CVE-2017-3523 (High) detected in mysql-connector-java-5.1.26.jar
|
security vulnerability
|
## CVE-2017-3523 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.26.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /tmp/ws-ua_20200522212458_OUIMQA/archiveExtraction_TASGJV/20200522212458/ws-scm_depth_0/Nova8HML/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml</p>
<p>Path to vulnerable library: epository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/Nova8HML/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,/Nova8HML/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,epository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/Nova8HML/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.26.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/wrbejar/Nova8HML/commit/59d70a6939762caf159f40fd62a37577fd9e1ced">59d70a6939762caf159f40fd62a37577fd9e1ced</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523>CVE-2017-3523</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html">https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.41</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.26","isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.26","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.41"}],"vulnerabilityIdentifier":"CVE-2017-3523","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523","cvss3Severity":"high","cvss3Score":"8.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-3523 (High) detected in mysql-connector-java-5.1.26.jar - ## CVE-2017-3523 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.26.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /tmp/ws-ua_20200522212458_OUIMQA/archiveExtraction_TASGJV/20200522212458/ws-scm_depth_0/Nova8HML/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml</p>
<p>Path to vulnerable library: epository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/Nova8HML/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,/Nova8HML/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,epository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/Nova8HML/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.26.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/wrbejar/Nova8HML/commit/59d70a6939762caf159f40fd62a37577fd9e1ced">59d70a6939762caf159f40fd62a37577fd9e1ced</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523>CVE-2017-3523</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html">https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.41</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.26","isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.26","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.41"}],"vulnerabilityIdentifier":"CVE-2017-3523","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523","cvss3Severity":"high","cvss3Score":"8.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in mysql connector java jar cve high severity vulnerability vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file tmp ws ua ouimqa archiveextraction tasgjv ws scm depth target javavulnerablelab meta inf maven org cysecurity javavulnerablelab pom xml path to vulnerable library epository mysql mysql connector java mysql connector java jar depth target javavulnerablelab meta inf maven org cysecurity javavulnerablelab target javavulnerablelab web inf lib mysql connector java jar target javavulnerablelab web inf lib mysql connector java jar epository mysql mysql connector java mysql connector java jar depth target javavulnerablelab web inf lib mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr l ui n s c c h i h a h publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr l ui n s c c h i h a h vulnerabilityurl
| 0
|
19,698
| 26,048,178,480
|
IssuesEvent
|
2022-12-22 16:06:24
|
MicrosoftDocs/windows-dev-docs
|
https://api.github.com/repos/MicrosoftDocs/windows-dev-docs
|
closed
|
URL referrer / source
|
uwp/prod processes-and-threading/tech Pri1
|
Do you have a referrer / source parameter that can be used to track the link in the partner dashboard?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a8f30bb9-d11a-1865-2d17-9c56cfc3e641
* Version Independent ID: ded22236-86fd-f4eb-f517-f3dd1005dc98
* Content: [Using ms-windows-store URIs - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/launch-store-app)
* Content Source: [windows-apps-src/launch-resume/launch-store-app.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/launch-store-app.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft**
|
1.0
|
URL referrer / source -
Do you have a referrer / source parameter that can be used to track the link in the partner dashboard?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a8f30bb9-d11a-1865-2d17-9c56cfc3e641
* Version Independent ID: ded22236-86fd-f4eb-f517-f3dd1005dc98
* Content: [Using ms-windows-store URIs - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/launch-store-app)
* Content Source: [windows-apps-src/launch-resume/launch-store-app.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/launch-store-app.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft**
|
process
|
url referrer source do you have a referrer source parameter that can be used to track the link in the partner dashboard document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product uwp technology processes and threading github login alvinashcraft microsoft alias aashcraft
| 1
|
20,094
| 26,624,968,371
|
IssuesEvent
|
2023-01-24 13:59:02
|
firebase/firebase-cpp-sdk
|
https://api.github.com/repos/firebase/firebase-cpp-sdk
|
reopened
|
[C++] Nightly Integration Testing Report for Firestore
|
type: process nightly-testing
|
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit 9c494f708c5918d1b82e9624460b7ab4bbaf8431
Last updated: Tue Jan 24 04:03 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3995022157)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 873dba22aabb9922d5574e91f626639eb364aaf7
Last updated: Mon Jan 23 10:23 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3986185922)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit 9c494f708c5918d1b82e9624460b7ab4bbaf8431
Last updated: Tue Jan 24 03:47 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3995489492)**
|
1.0
|
[C++] Nightly Integration Testing Report for Firestore -
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit 9c494f708c5918d1b82e9624460b7ab4bbaf8431
Last updated: Tue Jan 24 04:03 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3995022157)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 873dba22aabb9922d5574e91f626639eb364aaf7
Last updated: Mon Jan 23 10:23 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3986185922)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit 9c494f708c5918d1b82e9624460b7ab4bbaf8431
Last updated: Tue Jan 24 03:47 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3995489492)**
|
process
|
nightly integration testing report for firestore ✅ nbsp integration test succeeded requested by on commit last updated tue jan pst ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated mon jan pst ✅ nbsp integration test succeeded requested by on commit last updated tue jan pst
| 1
|
134,929
| 19,424,037,738
|
IssuesEvent
|
2021-12-21 01:27:40
|
yukiHaga/regex-hunting
|
https://api.github.com/repos/yukiHaga/regex-hunting
|
opened
|
Add: ゲーム画面を一旦完成させる。
|
Priority: high Type: design Type: improvement Type: new feature
|
## 概要
説明スライドの前に、まずはゲーム画面を一旦完成させる。
以下のようなゲーム画面を作成する。
<a href="https://gyazo.com/3854a825cc2613de929e87be2760dbde"><img src="https://i.gyazo.com/3854a825cc2613de929e87be2760dbde.png" alt="Image from Gyazo" width="534"/></a>
## やること
- [ ] ヘッダーを配置する。
- [ ] 背景を配置する。
- [ ] スライダーを配置する。
- [ ] モンスターとhpをセットで配置する。
- [ ] 問題ブロックを配置する。
- [ ] 解答ブロックを配置する。
- [ ] Timeゲージを配置する。(chart.js的なのReactにないか探す)
- [ ] HPゲージを配置する。(chart.js的なのReactにないか探す)
- [ ] スライドを見るいう項目が入ったフッターを配置する。
## 受け入れ条件
- [ ] 画面が正しく表示されている。
- [ ] 難易度ごとにモンスターの表示が変わっている。
## 懸念点
- モンスター, ゲーム, 問題のデータはバックエンドに格納されているので、それをapiコール関数でどうやって取ってくるか、今後検討する必要がある。現時点では画面を先に作成して、別issueとしてapiリクエストのことを考える。
## 参考記事
特になし。
|
1.0
|
Add: ゲーム画面を一旦完成させる。 - ## 概要
説明スライドの前に、まずはゲーム画面を一旦完成させる。
以下のようなゲーム画面を作成する。
<a href="https://gyazo.com/3854a825cc2613de929e87be2760dbde"><img src="https://i.gyazo.com/3854a825cc2613de929e87be2760dbde.png" alt="Image from Gyazo" width="534"/></a>
## やること
- [ ] ヘッダーを配置する。
- [ ] 背景を配置する。
- [ ] スライダーを配置する。
- [ ] モンスターとhpをセットで配置する。
- [ ] 問題ブロックを配置する。
- [ ] 解答ブロックを配置する。
- [ ] Timeゲージを配置する。(chart.js的なのReactにないか探す)
- [ ] HPゲージを配置する。(chart.js的なのReactにないか探す)
- [ ] スライドを見るいう項目が入ったフッターを配置する。
## 受け入れ条件
- [ ] 画面が正しく表示されている。
- [ ] 難易度ごとにモンスターの表示が変わっている。
## 懸念点
- モンスター, ゲーム, 問題のデータはバックエンドに格納されているので、それをapiコール関数でどうやって取ってくるか、今後検討する必要がある。現時点では画面を先に作成して、別issueとしてapiリクエストのことを考える。
## 参考記事
特になし。
|
non_process
|
add ゲーム画面を一旦完成させる。 概要 説明スライドの前に、まずはゲーム画面を一旦完成させる。 以下のようなゲーム画面を作成する。 やること ヘッダーを配置する。 背景を配置する。 スライダーを配置する。 モンスターとhpをセットで配置する。 問題ブロックを配置する。 解答ブロックを配置する。 timeゲージを配置する。 chart js的なのreactにないか探す hpゲージを配置する。 chart js的なのreactにないか探す スライドを見るいう項目が入ったフッターを配置する。 受け入れ条件 画面が正しく表示されている。 難易度ごとにモンスターの表示が変わっている。 懸念点 モンスター ゲーム 問題のデータはバックエンドに格納されているので、それをapiコール関数でどうやって取ってくるか、今後検討する必要がある。現時点では画面を先に作成して、別issueとしてapiリクエストのことを考える。 参考記事 特になし。
| 0
|
5,921
| 8,742,954,190
|
IssuesEvent
|
2018-12-12 17:44:47
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
DITA-OT assumes strong constraints rather than weak constraints
|
feature preprocess/conref priority/medium stale
|
[Section 2.5.5.4 of the DITA 1.3 Specification](http://docs.oasis-open.org/dita/dita/v1.3/os/part1-base/archSpec/base/constraints-strong-and-weak.html) states
> By default, constraints are weak unless they are explicitly designated as strong
Technically, the DITA-OT is in compliance with the spec because 2.5.5.4 also states
> Processors also can be configured to treat all constraints as strong
Strong constraints can be overly restrictive in many use cases. This is why the specification defaults to weak. For instance, strong constraint processing prevents a step in a task topic from having a conref to a step in a general task topic, despite the fact that step in both cases have identical content models and are unaffected by the strictTaskbody constraint that is preventing the conref from being resolved.
This issue is important to me because I use constraints extensively in the technical content DTDs. Having the DITA-OT default to strong constraints prevents easy interoperation between out-of-the-box technical content instances and the constrained technical content instances.
I understand that there may be use cases where strong constraint enforcement is essential. Perhaps the DITA-OT could be modified to accept an ant parameter that makes strong and weak constraint processing selectable.
|
1.0
|
DITA-OT assumes strong constraints rather than weak constraints - [Section 2.5.5.4 of the DITA 1.3 Specification](http://docs.oasis-open.org/dita/dita/v1.3/os/part1-base/archSpec/base/constraints-strong-and-weak.html) states
> By default, constraints are weak unless they are explicitly designated as strong
Technically, the DITA-OT is in compliance with the spec because 2.5.5.4 also states
> Processors also can be configured to treat all constraints as strong
Strong constraints can be overly restrictive in many use cases. This is why the specification defaults to weak. For instance, strong constraint processing prevents a step in a task topic from having a conref to a step in a general task topic, despite the fact that step in both cases have identical content models and are unaffected by the strictTaskbody constraint that is preventing the conref from being resolved.
This issue is important to me because I use constraints extensively in the technical content DTDs. Having the DITA-OT default to strong constraints prevents easy interoperation between out-of-the-box technical content instances and the constrained technical content instances.
I understand that there may be use cases where strong constraint enforcement is essential. Perhaps the DITA-OT could be modified to accept an ant parameter that makes strong and weak constraint processing selectable.
|
process
|
dita ot assumes strong constraints rather than weak constraints states by default constraints are weak unless they are explicitly designated as strong technically the dita ot is in compliance with the spec because also states processors also can be configured to treat all constraints as strong strong constraints can be overly restrictive in many use cases this is why the specification defaults to weak for instance strong constraint processing prevents a step in a task topic from having a conref to a step in a general task topic despite the fact that step in both cases have identical content models and are unaffected by the stricttaskbody constraint that is preventing the conref from being resolved this issue is important to me because i use constraints extensively in the technical content dtds having the dita ot default to strong constraints prevents easy interoperation between out of the box technical content instances and the constrained technical content instances i understand that there may be use cases where strong constraint enforcement is essential perhaps the dita ot could be modified to accept an ant parameter that makes strong and weak constraint processing selectable
| 1
|
21,631
| 30,034,121,851
|
IssuesEvent
|
2023-06-27 11:37:19
|
firebase/firebase-cpp-sdk
|
https://api.github.com/repos/firebase/firebase-cpp-sdk
|
closed
|
[C++] Nightly Integration Testing Report for Firestore
|
type: process nightly-testing
|
<hidden value="integration-test-status-comment"></hidden>
### [build against repo] Integration test with FLAKINESS (succeeded after retry)
Requested by @sunmou99 on commit 9723100f301952fdfdeb31a412dec9b27873683a
Last updated: Mon Jun 26 04:50 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5376812317)**
| Failures | Configs |
|----------|---------|
| firestore | [TEST] [FLAKINESS] [Android] [1/3 os: windows] [1/4 android_device: emulator_ftl_target]<details><summary>(1 failed tests)</summary> QueryTest.TestCanListenForTheSameQueryWithDifferentOptions</details> |
Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 9723100f301952fdfdeb31a412dec9b27873683a
Last updated: Mon Jun 26 07:40 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5378598907)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit 9723100f301952fdfdeb31a412dec9b27873683a
Last updated: Mon Jun 26 04:49 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5377330622)**
|
1.0
|
[C++] Nightly Integration Testing Report for Firestore -
<hidden value="integration-test-status-comment"></hidden>
### [build against repo] Integration test with FLAKINESS (succeeded after retry)
Requested by @sunmou99 on commit 9723100f301952fdfdeb31a412dec9b27873683a
Last updated: Mon Jun 26 04:50 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5376812317)**
| Failures | Configs |
|----------|---------|
| firestore | [TEST] [FLAKINESS] [Android] [1/3 os: windows] [1/4 android_device: emulator_ftl_target]<details><summary>(1 failed tests)</summary> QueryTest.TestCanListenForTheSameQueryWithDifferentOptions</details> |
Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 9723100f301952fdfdeb31a412dec9b27873683a
Last updated: Mon Jun 26 07:40 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5378598907)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit 9723100f301952fdfdeb31a412dec9b27873683a
Last updated: Mon Jun 26 04:49 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5377330622)**
|
process
|
nightly integration testing report for firestore integration test with flakiness succeeded after retry requested by on commit last updated mon jun pdt failures configs firestore failed tests nbsp nbsp querytest testcanlistenforthesamequerywithdifferentoptions add flaky tests to ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated mon jun pdt ✅ nbsp integration test succeeded requested by on commit last updated mon jun pdt
| 1
|
610,789
| 18,924,461,652
|
IssuesEvent
|
2021-11-17 07:56:45
|
MadsBalslev/P3
|
https://api.github.com/repos/MadsBalslev/P3
|
closed
|
Refactor ```frontend.Shared.Manager``` and assoiciated files
|
Type: Maintenance Priority: Medium Domain: Client
|
Limit this do a one person, one day assignment.
|
1.0
|
Refactor ```frontend.Shared.Manager``` and assoiciated files - Limit this do a one person, one day assignment.
|
non_process
|
refactor frontend shared manager and assoiciated files limit this do a one person one day assignment
| 0
|
9,977
| 13,021,867,280
|
IssuesEvent
|
2020-07-27 07:15:14
|
gthecrack/COVID-predictions
|
https://api.github.com/repos/gthecrack/COVID-predictions
|
opened
|
Iterate over model
|
data process
|
Iterate over the model and the features trying to obtain more significantly relevant results
|
1.0
|
Iterate over model - Iterate over the model and the features trying to obtain more significantly relevant results
|
process
|
iterate over model iterate over the model and the features trying to obtain more significantly relevant results
| 1
|
48,889
| 6,112,080,416
|
IssuesEvent
|
2017-06-21 18:30:23
|
Esri/solutions-webappbuilder-widgets
|
https://api.github.com/repos/Esri/solutions-webappbuilder-widgets
|
closed
|
warning labels not red or do not stand out
|
4 - Done B - As Designed B - Enhancement Distance and Direction G - Defense Team Military Tools
|
### Widget
DD
### Version of widget
6/14
### Bug or Enhancement
Enhancement
### Repo Steps or Enhancement details
The warnings for incorrect input are white and do not stand out as warning. See graphic below. In other themes the warning is red.

|
1.0
|
warning labels not red or do not stand out - ### Widget
DD
### Version of widget
6/14
### Bug or Enhancement
Enhancement
### Repo Steps or Enhancement details
The warnings for incorrect input are white and do not stand out as warning. See graphic below. In other themes the warning is red.

|
non_process
|
warning labels not red or do not stand out widget dd version of widget bug or enhancement enhancement repo steps or enhancement details the warnings for incorrect input are white and do not stand out as warning see graphic below in other themes the warning is red
| 0
|
17,415
| 23,231,285,086
|
IssuesEvent
|
2022-08-03 07:52:26
|
CA-G12/Quran-Application
|
https://api.github.com/repos/CA-G12/Quran-Application
|
closed
|
refactoring suruh file
|
in process
|
- remove commented lines
- remove repeated functions
- convert many functions into one renderAyahs() and call the function in dom.js
|
1.0
|
refactoring suruh file - - remove commented lines
- remove repeated functions
- convert many functions into one renderAyahs() and call the function in dom.js
|
process
|
refactoring suruh file remove commented lines remove repeated functions convert many functions into one renderayahs and call the function in dom js
| 1
|
21,752
| 30,271,103,013
|
IssuesEvent
|
2023-07-07 15:24:36
|
brave/brave-talk-gcalendar-extension
|
https://api.github.com/repos/brave/brave-talk-gcalendar-extension
|
opened
|
Tasks for Release of Multi-Cal Update
|
in-process
|
We're soon going to release a major update to the extension which adds support for multiple calendars. Here are a few associated tasks which need to be completed:
Update extension-related references/content on:
- [ ] https://talk.brave.com/
- [ ] https://brave.com/talk/
- [ ] https://brave.com/talk/
- [ ] https://support.brave.com/hc/en-us/categories/16067800578957-Brave-Talk
|
1.0
|
Tasks for Release of Multi-Cal Update - We're soon going to release a major update to the extension which adds support for multiple calendars. Here are a few associated tasks which need to be completed:
Update extension-related references/content on:
- [ ] https://talk.brave.com/
- [ ] https://brave.com/talk/
- [ ] https://brave.com/talk/
- [ ] https://support.brave.com/hc/en-us/categories/16067800578957-Brave-Talk
|
process
|
tasks for release of multi cal update we re soon going to release a major update to the extension which adds support for multiple calendars here are a few associated tasks which need to be completed update extension related references content on
| 1
|
14,595
| 17,703,562,370
|
IssuesEvent
|
2021-08-25 03:17:08
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - georeferenceVerificationStatus
|
Term - change Class - Occurrence Class - Location normative Process - complete
|
## Change term
* Submitter: John Wieczorek @tucotuco
* Justification (why is this change necessary?): Consistency and clarity
* Proponents (who needs this change): Everyone
Current Term definition: https://dwc.tdwg.org/terms/#dwc:gereferenceVerificationStatus
Proposed new attributes of the term:
* Term name (in lowerCamelCase): georeferenceVerificationStatus
* Organized in Class (e.g. Location, Taxon): **Occurrence**
* Definition of the term: A categorical description of the extent to which the georeference has been verified to represent the best possible spatial description **for the Location of the Occurrence**.
* Usage comments (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary.
* Examples: **`unable to georeference`, `requires georeference`**, `requires verification`, **`verified by data custodian`,
`verified by contributor`**
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/georeferenceVerificationStatus-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/Gathering/SiteCoordinateSets/SiteCoordinates/GeoreferenceVerificationStatus
Original comment:
The definition of the term dwc:georeferenceVerificationStatus is "A categorical description of the extent to which the georeference has been verified to represent the best possible spatial description." The definition is unclear on what the best possible spatial description is supposed to refer. As it stands, it could be interpreted to be the best possible spatial description for the textual description provided, that is, for the Location. However, the original intent (with the clue in the recommended value for the term "verified by collector") was that the verification would be for the best spatial description for the specific Occurrence. These two interpretations can be very different.
I propose to rectify the ambiguity by changing the definition to "The state of verification and the role of the agent who did the verification to determine if the the spatial description is the best possible for the Occurrence to which the georeference is applied." I also propose to change the tdwgUtility:organizedInClass value to "http://rs.tdwg.org/dwc/terms/Occurrence".
Since this would constitute a clarification in semantics, it should go through the term change process.
|
1.0
|
Change term - georeferenceVerificationStatus - ## Change term
* Submitter: John Wieczorek @tucotuco
* Justification (why is this change necessary?): Consistency and clarity
* Proponents (who needs this change): Everyone
Current Term definition: https://dwc.tdwg.org/terms/#dwc:gereferenceVerificationStatus
Proposed new attributes of the term:
* Term name (in lowerCamelCase): georeferenceVerificationStatus
* Organized in Class (e.g. Location, Taxon): **Occurrence**
* Definition of the term: A categorical description of the extent to which the georeference has been verified to represent the best possible spatial description **for the Location of the Occurrence**.
* Usage comments (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary.
* Examples: **`unable to georeference`, `requires georeference`**, `requires verification`, **`verified by data custodian`,
`verified by contributor`**
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/georeferenceVerificationStatus-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/Gathering/SiteCoordinateSets/SiteCoordinates/GeoreferenceVerificationStatus
Original comment:
The definition of the term dwc:georeferenceVerificationStatus is "A categorical description of the extent to which the georeference has been verified to represent the best possible spatial description." The definition is unclear on what the best possible spatial description is supposed to refer. As it stands, it could be interpreted to be the best possible spatial description for the textual description provided, that is, for the Location. However, the original intent (with the clue in the recommended value for the term "verified by collector") was that the verification would be for the best spatial description for the specific Occurrence. These two interpretations can be very different.
I propose to rectify the ambiguity by changing the definition to "The state of verification and the role of the agent who did the verification to determine if the the spatial description is the best possible for the Occurrence to which the georeference is applied." I also propose to change the tdwgUtility:organizedInClass value to "http://rs.tdwg.org/dwc/terms/Occurrence".
Since this would constitute a clarification in semantics, it should go through the term change process.
|
process
|
change term georeferenceverificationstatus change term submitter john wieczorek tucotuco justification why is this change necessary consistency and clarity proponents who needs this change everyone current term definition proposed new attributes of the term term name in lowercamelcase georeferenceverificationstatus organized in class e g location taxon occurrence definition of the term a categorical description of the extent to which the georeference has been verified to represent the best possible spatial description for the location of the occurrence usage comments recommendations regarding content etc recommended best practice is to use a controlled vocabulary examples unable to georeference requires georeference requires verification verified by data custodian verified by contributor refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable datasets dataset units unit gathering sitecoordinatesets sitecoordinates georeferenceverificationstatus original comment the definition of the term dwc georeferenceverificationstatus is a categorical description of the extent to which the georeference has been verified to represent the best possible spatial description the definition is unclear on what the best possible spatial description is supposed to refer as it stands it could be interpreted to be the best possible spatial description for the textual description provided that is for the location however the original intent with the clue in the recommended value for the term verified by collector was that the verification would be for the best spatial description for the specific occurrence these two interpretations can be very different i propose to rectify the ambiguity by changing the definition to the state of verification and the role of the agent who did the verification to determine if the the spatial description is the best possible for the occurrence to which the georeference is applied i also propose to change the tdwgutility organizedinclass value to since this would constitute a clarification in semantics it should go through the term change process
| 1
|
52,297
| 10,820,039,332
|
IssuesEvent
|
2019-11-08 15:35:25
|
pywbem/pywbem
|
https://api.github.com/repos/pywbem/pywbem
|
closed
|
Remove methodname init argument of CIMMethod
|
area: code resolution: fixed type: cleanup
|
Remove the `methodname` input argument of `CIMMethod()`. We renamed it to `name` but still allow `methodname`. Its use is deprecated right now.
Because this change is incompatible for users that still use methodname, this change cannot be rolled back before 1.0.0.
This issue was created from issue #635.
|
1.0
|
Remove methodname init argument of CIMMethod - Remove the `methodname` input argument of `CIMMethod()`. We renamed it to `name` but still allow `methodname`. Its use is deprecated right now.
Because this change is incompatible for users that still use methodname, this change cannot be rolled back before 1.0.0.
This issue was created from issue #635.
|
non_process
|
remove methodname init argument of cimmethod remove the methodname input argument of cimmethod we renamed it to name but still allow methodname its use is deprecated right now because this change is incompatible for users that still use methodname this change cannot be rolled back before this issue was created from issue
| 0
|
16,157
| 20,574,887,268
|
IssuesEvent
|
2022-03-04 02:46:42
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Union/Difference fails
|
Feedback stale Processing Bug
|
Author Name: **Massimiliano Moraca** (Massimiliano Moraca)
Original Redmine Issue: [21584](https://issues.qgis.org/issues/21584)
Affected QGIS version: 3.6.0
Redmine category:processing/qgis
---
I think I've found a bug in version 4.4. I'm trying to use Union with two vectors and QGIS show me this error:
*GEOS geoprocessing error: difference failed*
I do the same thing with 2.18.28 version and all work fine.
In the attachment there are two vectors and five screenshots
---
- [dataset.zip](https://issues.qgis.org/attachments/download/14588/dataset.zip) (Massimiliano Moraca)
|
1.0
|
Union/Difference fails - Author Name: **Massimiliano Moraca** (Massimiliano Moraca)
Original Redmine Issue: [21584](https://issues.qgis.org/issues/21584)
Affected QGIS version: 3.6.0
Redmine category:processing/qgis
---
I think I've found a bug in version 4.4. I'm trying to use Union with two vectors and QGIS show me this error:
*GEOS geoprocessing error: difference failed*
I do the same thing with 2.18.28 version and all work fine.
In the attachment there are two vectors and five screenshots
---
- [dataset.zip](https://issues.qgis.org/attachments/download/14588/dataset.zip) (Massimiliano Moraca)
|
process
|
union difference fails author name massimiliano moraca massimiliano moraca original redmine issue affected qgis version redmine category processing qgis i think i ve found a bug in version i m trying to use union with two vectors and qgis show me this error geos geoprocessing error difference failed i do the same thing with version and all work fine in the attachment there are two vectors and five screenshots massimiliano moraca
| 1
|
582
| 3,060,127,966
|
IssuesEvent
|
2015-08-14 18:50:41
|
Microsoft/poshtools
|
https://api.github.com/repos/Microsoft/poshtools
|
closed
|
Local Attach Should Default to PowerShell Only
|
Process Attaching task
|
Visual Studio should not identify processes we say are ok for PowerShell Tools to attach to as debuggable by both our debug engine and the managed code engine.
|
1.0
|
Local Attach Should Default to PowerShell Only - Visual Studio should not identify processes we say are ok for PowerShell Tools to attach to as debuggable by both our debug engine and the managed code engine.
|
process
|
local attach should default to powershell only visual studio should not identify processes we say are ok for powershell tools to attach to as debuggable by both our debug engine and the managed code engine
| 1
|
178,850
| 6,619,331,926
|
IssuesEvent
|
2017-09-21 11:48:38
|
TheScienceMuseum/collectionsonline
|
https://api.github.com/repos/TheScienceMuseum/collectionsonline
|
opened
|
Add 'related' ISAD / RUoD documents to Archive records
|
enhancement priority-4
|
Need to be added to index
http://collection.sciencemuseum.org.uk/documents/aa110000003/
<img width="724" alt="screen shot 2017-09-21 at 12 46 17" src="https://user-images.githubusercontent.com/91365/30694234-0b356458-9ecb-11e7-9e52-67d40bf6e5cc.png">
|
1.0
|
Add 'related' ISAD / RUoD documents to Archive records - Need to be added to index
http://collection.sciencemuseum.org.uk/documents/aa110000003/
<img width="724" alt="screen shot 2017-09-21 at 12 46 17" src="https://user-images.githubusercontent.com/91365/30694234-0b356458-9ecb-11e7-9e52-67d40bf6e5cc.png">
|
non_process
|
add related isad ruod documents to archive records need to be added to index img width alt screen shot at src
| 0
|
479,393
| 13,795,812,623
|
IssuesEvent
|
2020-10-09 18:41:28
|
xwikisas/application-antivirus
|
https://api.github.com/repos/xwikisas/application-antivirus
|
opened
|
Can't delete Incident log
|
Priority: Major Type: Bug
|
Preconditions: Have the Incidents log livetable populated.
Steps to reproduce:
1. Access Administer Wiki > Other > Antivirus
2. Observe the entries in the Incidents log
3. Click on Delete from the Actions column to remove an entry
Expected results: Each incident can be deleted individually.
Actual results: The entry isn't deleted. An error appears in the Browser's console.

Environment: Cloud demo instance - XWiki 11.10.5, Chrome 86, Windows 10
|
1.0
|
Can't delete Incident log - Preconditions: Have the Incidents log livetable populated.
Steps to reproduce:
1. Access Administer Wiki > Other > Antivirus
2. Observe the entries in the Incidents log
3. Click on Delete from the Actions column to remove an entry
Expected results: Each incident can be deleted individually.
Actual results: The entry isn't deleted. An error appears in the Browser's console.

Environment: Cloud demo instance - XWiki 11.10.5, Chrome 86, Windows 10
|
non_process
|
can t delete incident log preconditions have the incidents log livetable populated steps to reproduce access administer wiki other antivirus observe the entries in the incidents log click on delete from the actions column to remove an entry expected results each incident can be deleted individually actual results the entry isn t deleted an error appears in the browser s console environment cloud demo instance xwiki chrome windows
| 0
|
22,583
| 19,682,286,195
|
IssuesEvent
|
2022-01-11 17:57:57
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
[Integrations] Missing shipper label on integration overview page
|
usability Team:Fleet v8.1.0 Feature:Unified Integrations
|
**Kibana version:**
7.16
**Elasticsearch version:**
7.16
**Original install method (e.g. download page, yum, from source, etc.):**
Docker
**Describe the bug:**
There is no label indicating whether the integration a user is viewing is for Elastic Agent or Beats. This can be confusing because the tiles can switch back and forth between the shippers based on what filters and integrations the user has selected. Having a clear label at the top will provide more context on which one they are viewing.
**Steps to reproduce:**
1. Open any Elastic Agent or Beats integration
2. Notice that there is no label next to the title indicating the shipper
**Expected behavior:**
Our original designs called for adding label next to the integration title. However, this seems to be missing in the 7.16 release. Here is what our design called for:

It'd be helpful to show a similar label on the Beats tutorial pages as well.
**Screenshots (if relevant):**
Here is what I see in my 7.16 installation:

**Any additional context:**
See the "Unified Integrations 7.16 - UX Design" design doc for more context.
CC @dborodyansky
|
True
|
[Integrations] Missing shipper label on integration overview page - **Kibana version:**
7.16
**Elasticsearch version:**
7.16
**Original install method (e.g. download page, yum, from source, etc.):**
Docker
**Describe the bug:**
There is no label indicating whether the integration a user is viewing is for Elastic Agent or Beats. This can be confusing because the tiles can switch back and forth between the shippers based on what filters and integrations the user has selected. Having a clear label at the top will provide more context on which one they are viewing.
**Steps to reproduce:**
1. Open any Elastic Agent or Beats integration
2. Notice that there is no label next to the title indicating the shipper
**Expected behavior:**
Our original designs called for adding label next to the integration title. However, this seems to be missing in the 7.16 release. Here is what our design called for:

It'd be helpful to show a similar label on the Beats tutorial pages as well.
**Screenshots (if relevant):**
Here is what I see in my 7.16 installation:

**Any additional context:**
See the "Unified Integrations 7.16 - UX Design" design doc for more context.
CC @dborodyansky
|
non_process
|
missing shipper label on integration overview page kibana version elasticsearch version original install method e g download page yum from source etc docker describe the bug there is no label indicating whether the integration a user is viewing is for elastic agent or beats this can be confusing because the tiles can switch back and forth between the shippers based on what filters and integrations the user has selected having a clear label at the top will provide more context on which one they are viewing steps to reproduce open any elastic agent or beats integration notice that there is no label next to the title indicating the shipper expected behavior our original designs called for adding label next to the integration title however this seems to be missing in the release here is what our design called for it d be helpful to show a similar label on the beats tutorial pages as well screenshots if relevant here is what i see in my installation any additional context see the unified integrations ux design design doc for more context cc dborodyansky
| 0
|
17,122
| 22,638,802,397
|
IssuesEvent
|
2022-06-30 22:13:00
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/transform]: add replace_regex_match config setting to replace attributes/metrics based on regex pattern
|
priority:p2 comp: transformprocessor
|
**Is your feature request related to a problem? Please describe.**
This issue relates to transformprocessor. replace_match relies on wild cards to find a pattern to replace a string. In some scenarios where the location of the portion of the string to replace is not clear, a regex expression may be more appropriate. One such example would be a hostmetricsreceiver reporting command line parameters. Sensitive data like a password or apikey may be passed in any order.
**Describe the solution you'd like**
I would like to introduce a config setting called replace_regex_match which would operate much like replace_match but using regex patterns.
Note: approval of this feature may mean that we can deprecate replace_match. I am interested in opinions on this.
**Describe alternatives you've considered**
None
**Additional context**
None
I already have an implementation that I can submit for PR.
cc @TylerHelmuth
|
1.0
|
[processor/transform]: add replace_regex_match config setting to replace attributes/metrics based on regex pattern - **Is your feature request related to a problem? Please describe.**
This issue relates to transformprocessor. replace_match relies on wild cards to find a pattern to replace a string. In some scenarios where the location of the portion of the string to replace is not clear, a regex expression may be more appropriate. One such example would be a hostmetricsreceiver reporting command line parameters. Sensitive data like a password or apikey may be passed in any order.
**Describe the solution you'd like**
I would like to introduce a config setting called replace_regex_match which would operate much like replace_match but using regex patterns.
Note: approval of this feature may mean that we can deprecate replace_match. I am interested in opinions on this.
**Describe alternatives you've considered**
None
**Additional context**
None
I already have an implementation that I can submit for PR.
cc @TylerHelmuth
|
process
|
add replace regex match config setting to replace attributes metrics based on regex pattern is your feature request related to a problem please describe this issue relates to transformprocessor replace match relies on wild cards to find a pattern to replace a string in some scenarios where the location of the portion of the string to replace is not clear a regex expression may be more appropriate one such example would be a hostmetricsreceiver reporting command line parameters sensitive data like a password or apikey may be passed in any order describe the solution you d like i would like to introduce a config setting called replace regex match which would operate much like replace match but using regex patterns note approval of this feature may mean that we can deprecate replace match i am interested in opinions on this describe alternatives you ve considered none additional context none i already have an implementation that i can submit for pr cc tylerhelmuth
| 1
|
2,241
| 5,088,641,609
|
IssuesEvent
|
2016-12-31 23:50:34
|
sw4j-org/tool-jpa-processor
|
https://api.github.com/repos/sw4j-org/tool-jpa-processor
|
opened
|
Handle @ManyToOne Annotation
|
annotation processor task
|
Handle the `@ManyToOne` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.30 ManyToOne Annotation
|
1.0
|
Handle @ManyToOne Annotation - Handle the `@ManyToOne` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.30 ManyToOne Annotation
|
process
|
handle manytoone annotation handle the manytoone annotation for a property or field see manytoone annotation
| 1
|
15,897
| 20,102,575,482
|
IssuesEvent
|
2022-02-07 06:57:32
|
SAP/openui5-docs
|
https://api.github.com/repos/SAP/openui5-docs
|
closed
|
Missing Deprecation Info for sap.ui.layout.form.GridLayout
|
In Process
|
`GridLayout` is listed as layout option for the `SimpleFormLayout` without any note about its deprecation:
https://openui5.hana.ondemand.com/api/sap.ui.layout.form.SimpleFormLayout
|
1.0
|
Missing Deprecation Info for sap.ui.layout.form.GridLayout - `GridLayout` is listed as layout option for the `SimpleFormLayout` without any note about its deprecation:
https://openui5.hana.ondemand.com/api/sap.ui.layout.form.SimpleFormLayout
|
process
|
missing deprecation info for sap ui layout form gridlayout gridlayout is listed as layout option for the simpleformlayout without any note about its deprecation
| 1
|
43,559
| 7,050,511,547
|
IssuesEvent
|
2018-01-03 06:50:03
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
opened
|
Publish through multiple gateways document not available in 3.0.0 space
|
3.0.0 documentation-required Priority/Highest Severity/Blocker
|
In order to develop a geographical distributed wide API Management solution that needs api publishing through multiple gateways needs the respective document. Please include the documentation
|
1.0
|
Publish through multiple gateways document not available in 3.0.0 space - In order to develop a geographical distributed wide API Management solution that needs api publishing through multiple gateways needs the respective document. Please include the documentation
|
non_process
|
publish through multiple gateways document not available in space in order to develop a geographical distributed wide api management solution that needs api publishing through multiple gateways needs the respective document please include the documentation
| 0
|
14,464
| 17,569,981,961
|
IssuesEvent
|
2021-08-14 13:34:44
|
oasis-tcs/csaf
|
https://api.github.com/repos/oasis-tcs/csaf
|
opened
|
Acknowledgement table should be sorted by first name
|
csaf 2.0 editorial oasis_tc_process CSDPR01_feedback
|
# Situation
Probably an artifact of the publication pipeline, but the table for acknowledging the names of contributors is ordered by last name and not as it should by first name (given).
https://docs.oasis-open.org/csaf/csaf/v2.0/csd01/csaf-v2.0-csd01.html#appendix-a-acknowledgments
# Proposal:
Sort ascending by First name, Last name, and Organization in that order (like the subsequent table in the document)
|
1.0
|
Acknowledgement table should be sorted by first name - # Situation
Probably an artifact of the publication pipeline, but the table for acknowledging the names of contributors is ordered by last name and not as it should by first name (given).
https://docs.oasis-open.org/csaf/csaf/v2.0/csd01/csaf-v2.0-csd01.html#appendix-a-acknowledgments
# Proposal:
Sort ascending by First name, Last name, and Organization in that order (like the subsequent table in the document)
|
process
|
acknowledgement table should be sorted by first name situation probably an artifact of the publication pipeline but the table for acknowledging the names of contributors is ordered by last name and not as it should by first name given proposal sort ascending by first name last name and organization in that order like the subsequent table in the document
| 1
|
8,941
| 12,055,480,323
|
IssuesEvent
|
2020-04-15 13:03:52
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Cleanup of MoreAsserts#assertThrows
|
P3 category: misc > misc type: process
|
JUnit 4.13 will add a very helpful method called assertThrows. Until that is released, we can add this method to com.google.devtools.build.lib.testutil.MoreAsserts. After JUnit 4.13 is released, we should go back and change all of the callsites.
|
1.0
|
Cleanup of MoreAsserts#assertThrows - JUnit 4.13 will add a very helpful method called assertThrows. Until that is released, we can add this method to com.google.devtools.build.lib.testutil.MoreAsserts. After JUnit 4.13 is released, we should go back and change all of the callsites.
|
process
|
cleanup of moreasserts assertthrows junit will add a very helpful method called assertthrows until that is released we can add this method to com google devtools build lib testutil moreasserts after junit is released we should go back and change all of the callsites
| 1
|
9,606
| 12,545,693,624
|
IssuesEvent
|
2020-06-05 19:23:34
|
mendezc1/GenderMagRecordersAssistant
|
https://api.github.com/repos/mendezc1/GenderMagRecordersAssistant
|
closed
|
Add README Section on Downloading from the Chrome Store
|
Enhancement Good First Issue Information Processing Style Medium Priority
|
The README currently references downloading the tool via the chrome store instead of downloading from github but there is no information on how to do that.
README should include a section with info on how to do this or a link to another page that says how to do this.
|
1.0
|
Add README Section on Downloading from the Chrome Store - The README currently references downloading the tool via the chrome store instead of downloading from github but there is no information on how to do that.
README should include a section with info on how to do this or a link to another page that says how to do this.
|
process
|
add readme section on downloading from the chrome store the readme currently references downloading the tool via the chrome store instead of downloading from github but there is no information on how to do that readme should include a section with info on how to do this or a link to another page that says how to do this
| 1
|
7,868
| 11,043,790,376
|
IssuesEvent
|
2019-12-09 11:58:09
|
cetic/tsorage
|
https://api.github.com/repos/cetic/tsorage
|
closed
|
Consider Protobuf as an alternative representation format for published messages
|
enhancement ingestion processing
|
The current representation format for published messages is JSON.
While convenient for the human reader, Protobuf is supposed to be more performant, because of its binary representation.
- [ ] Implement an Protobuf encoder / decoder as an alternative to the JSON one.
- [X] Determine to which extend this representation is better than JSON.
|
1.0
|
Consider Protobuf as an alternative representation format for published messages - The current representation format for published messages is JSON.
While convenient for the human reader, Protobuf is supposed to be more performant, because of its binary representation.
- [ ] Implement an Protobuf encoder / decoder as an alternative to the JSON one.
- [X] Determine to which extend this representation is better than JSON.
|
process
|
consider protobuf as an alternative representation format for published messages the current representation format for published messages is json while convenient for the human reader protobuf is supposed to be more performant because of its binary representation implement an protobuf encoder decoder as an alternative to the json one determine to which extend this representation is better than json
| 1
|
14,169
| 3,236,803,574
|
IssuesEvent
|
2015-10-14 08:16:50
|
quantmind/lux
|
https://api.github.com/repos/quantmind/lux
|
opened
|
Add form relatedfield via a rest model
|
enhancement form requires design effort REST
|
Currently, when one needs to add a related model to a ``RestModel`` it also needs to modify the form so that it uses a ``RelatedField``.
I propose to add enhance the ``RestModel.add_related_column`` so that the ``RelatedField`` is added to the ``form`` and ``updateform`` unless specified otherwise.
|
1.0
|
Add form relatedfield via a rest model - Currently, when one needs to add a related model to a ``RestModel`` it also needs to modify the form so that it uses a ``RelatedField``.
I propose to add enhance the ``RestModel.add_related_column`` so that the ``RelatedField`` is added to the ``form`` and ``updateform`` unless specified otherwise.
|
non_process
|
add form relatedfield via a rest model currently when one needs to add a related model to a restmodel it also needs to modify the form so that it uses a relatedfield i propose to add enhance the restmodel add related column so that the relatedfield is added to the form and updateform unless specified otherwise
| 0
|
293,198
| 25,275,704,654
|
IssuesEvent
|
2022-11-16 12:28:17
|
epiphany-platform/epiphany
|
https://api.github.com/repos/epiphany-platform/epiphany
|
closed
|
[FEATURE REQUEST] [epicli test] Print total number of failed tests
|
status/grooming-needed area/testing
|
**Is your feature request related to a problem? Please describe.**
`epicli test` may produce a big number of xml files (in `spec_tests` sub-directory) and there is no information about total failed tests.
One has to check all xml files manually, for example using grep:
```
grep -Por 'failures="\d+"' /workspaces/epiphany/clusters/build/alma-test-big/spec_tests | grep -cv '"0"$'
1
```
**Describe the solution you'd like**
Print aggregated test results, for example:
`Total tests: 64 [failures: 1, errors: 0, skipped: 0]`
**Describe alternatives you've considered**
Configure rake to create single (common) output file for all groups and hosts.
**Additional context**
n/a
---
**DoD checklist**
- Changelog
- [ ] updated
- [ ] not needed
- COMPONENTS.md
- [ ] updated
- [ ] not needed
- Schema
- [ ] updated
- [ ] not needed
- Backport tasks
- [ ] created
- [ ] not needed
- Documentation
- [ ] added
- [ ] updated
- [ ] not needed
- [ ] Feature has automated tests
- [ ] Automated tests passed (QA pipelines)
- [ ] apply
- [ ] upgrade
- [ ] backup/restore
- [ ] Idempotency tested
- [ ] All conversations in PR resolved
- [ ] Solution meets requirements and is done according to design doc
- [ ] Usage compliant with license
|
1.0
|
[FEATURE REQUEST] [epicli test] Print total number of failed tests - **Is your feature request related to a problem? Please describe.**
`epicli test` may produce a big number of xml files (in `spec_tests` sub-directory) and there is no information about total failed tests.
One has to check all xml files manually, for example using grep:
```
grep -Por 'failures="\d+"' /workspaces/epiphany/clusters/build/alma-test-big/spec_tests | grep -cv '"0"$'
1
```
**Describe the solution you'd like**
Print aggregated test results, for example:
`Total tests: 64 [failures: 1, errors: 0, skipped: 0]`
**Describe alternatives you've considered**
Configure rake to create single (common) output file for all groups and hosts.
**Additional context**
n/a
---
**DoD checklist**
- Changelog
- [ ] updated
- [ ] not needed
- COMPONENTS.md
- [ ] updated
- [ ] not needed
- Schema
- [ ] updated
- [ ] not needed
- Backport tasks
- [ ] created
- [ ] not needed
- Documentation
- [ ] added
- [ ] updated
- [ ] not needed
- [ ] Feature has automated tests
- [ ] Automated tests passed (QA pipelines)
- [ ] apply
- [ ] upgrade
- [ ] backup/restore
- [ ] Idempotency tested
- [ ] All conversations in PR resolved
- [ ] Solution meets requirements and is done according to design doc
- [ ] Usage compliant with license
|
non_process
|
print total number of failed tests is your feature request related to a problem please describe epicli test may produce a big number of xml files in spec tests sub directory and there is no information about total failed tests one has to check all xml files manually for example using grep grep por failures d workspaces epiphany clusters build alma test big spec tests grep cv describe the solution you d like print aggregated test results for example total tests describe alternatives you ve considered configure rake to create single common output file for all groups and hosts additional context n a dod checklist changelog updated not needed components md updated not needed schema updated not needed backport tasks created not needed documentation added updated not needed feature has automated tests automated tests passed qa pipelines apply upgrade backup restore idempotency tested all conversations in pr resolved solution meets requirements and is done according to design doc usage compliant with license
| 0
|
46,252
| 11,806,570,910
|
IssuesEvent
|
2020-03-19 09:47:15
|
openmsupply/mobile
|
https://api.github.com/repos/openmsupply/mobile
|
opened
|
confirm button cut-off in the middle when it's on description mode while creating new transaction
|
Bug: development Build test: success Docs: not needed Effort: small Feature Priority: immediate UX
|
## Describe the bug
confirm button cut-off in the middle when it's on description mode while creating new transaction
<img width="813" alt="Screen Shot 2020-03-19 at 10 38 37 PM" src="https://user-images.githubusercontent.com/57238075/77053008-89bd1200-6a32-11ea-9585-3046d0b76a6e.png">
### To reproduce
Steps to reproduce the behavior:
1. Go to 'cash register'
2. Click on 'new transaction'
3. Scroll down to type in 'description'
4. See error
### Expected behaviour
A clear and concise description of what you expected to happen.
### Proposed Solution
N/A
### Version and device info
- App version: v4.0.2
- Tablet model: emulator
- OS version: macOS
### Additional context
Need more testing on this version due to some new behaviours appears.
|
1.0
|
confirm button cut-off in the middle when it's on description mode while creating new transaction - ## Describe the bug
confirm button cut-off in the middle when it's on description mode while creating new transaction
<img width="813" alt="Screen Shot 2020-03-19 at 10 38 37 PM" src="https://user-images.githubusercontent.com/57238075/77053008-89bd1200-6a32-11ea-9585-3046d0b76a6e.png">
### To reproduce
Steps to reproduce the behavior:
1. Go to 'cash register'
2. Click on 'new transaction'
3. Scroll down to type in 'description'
4. See error
### Expected behaviour
A clear and concise description of what you expected to happen.
### Proposed Solution
N/A
### Version and device info
- App version: v4.0.2
- Tablet model: emulator
- OS version: macOS
### Additional context
Need more testing on this version due to some new behaviours appears.
|
non_process
|
confirm button cut off in the middle when it s on description mode while creating new transaction describe the bug confirm button cut off in the middle when it s on description mode while creating new transaction img width alt screen shot at pm src to reproduce steps to reproduce the behavior go to cash register click on new transaction scroll down to type in description see error expected behaviour a clear and concise description of what you expected to happen proposed solution n a version and device info app version tablet model emulator os version macos additional context need more testing on this version due to some new behaviours appears
| 0
|
4,100
| 7,047,438,829
|
IssuesEvent
|
2018-01-02 13:32:56
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
Update freebsd port to 0.9.0 to fix bazel nightly
|
category: misc > bootstrap / installation P1 type: process
|
Since https://github.com/bazelbuild/bazel/commit/caceacd984a3f86b623ea726f4df36bd81998d25 bazel nightly on freebsd has been failing as "configuration_field" is not available in bazel 0.7.0. Now that configuration_field is available on bazel 0.9.0 now, can we update bazel version in freebsd? Or change freebsd ci scripts to use latest released bazel + bazel from HEAD like we do on other distributions?
Thanks!
|
1.0
|
Update freebsd port to 0.9.0 to fix bazel nightly - Since https://github.com/bazelbuild/bazel/commit/caceacd984a3f86b623ea726f4df36bd81998d25 bazel nightly on freebsd has been failing as "configuration_field" is not available in bazel 0.7.0. Now that configuration_field is available on bazel 0.9.0 now, can we update bazel version in freebsd? Or change freebsd ci scripts to use latest released bazel + bazel from HEAD like we do on other distributions?
Thanks!
|
process
|
update freebsd port to to fix bazel nightly since bazel nightly on freebsd has been failing as configuration field is not available in bazel now that configuration field is available on bazel now can we update bazel version in freebsd or change freebsd ci scripts to use latest released bazel bazel from head like we do on other distributions thanks
| 1
|
373,620
| 11,046,384,168
|
IssuesEvent
|
2019-12-09 16:45:51
|
webpack-contrib/compression-webpack-plugin
|
https://api.github.com/repos/webpack-contrib/compression-webpack-plugin
|
closed
|
Allow multiple file outputs one for each algorithm
|
priority: 5 (nice to have) semver: Minor severity: 4 (inconvenient) type: Feature
|
- Operating System: Windows / Mac / Linux
- Node Version: 8.9.0
- NPM Version: 6.9.0
- webpack Version: 4.3.0
- compression-webpack-plugin Version: 3.0.0
### Feature Proposal
Allow plugin to generate multiple file outputs for multiple algorithms. Maybe algorithms can take an optional array format to indicate more than one file (per algorithm) should be generated.
### Feature Use Case
Since this plugin now supports brotli algorithm it would be nice to be able to output both gzip *and* brotli so that my server can serve up the appropriate file to the client based on the accept headers.
|
1.0
|
Allow multiple file outputs one for each algorithm - - Operating System: Windows / Mac / Linux
- Node Version: 8.9.0
- NPM Version: 6.9.0
- webpack Version: 4.3.0
- compression-webpack-plugin Version: 3.0.0
### Feature Proposal
Allow plugin to generate multiple file outputs for multiple algorithms. Maybe algorithms can take an optional array format to indicate more than one file (per algorithm) should be generated.
### Feature Use Case
Since this plugin now supports brotli algorithm it would be nice to be able to output both gzip *and* brotli so that my server can serve up the appropriate file to the client based on the accept headers.
|
non_process
|
allow multiple file outputs one for each algorithm operating system windows mac linux node version npm version webpack version compression webpack plugin version feature proposal allow plugin to generate multiple file outputs for multiple algorithms maybe algorithms can take an optional array format to indicate more than one file per algorithm should be generated feature use case since this plugin now supports brotli algorithm it would be nice to be able to output both gzip and brotli so that my server can serve up the appropriate file to the client based on the accept headers
| 0
|
8,645
| 11,789,396,888
|
IssuesEvent
|
2020-03-17 17:04:47
|
rohanchandra30/Spectral-Trajectory-and-Behavior-Prediction
|
https://api.github.com/repos/rohanchandra30/Spectral-Trajectory-and-Behavior-Prediction
|
closed
|
Problems in data preparation steps
|
Data Processing enhancement
|
"Thanks for pointing that out! As you can see from the comment, that section is to generate multiple batches of data in case the raw data file is too large. But before that, "single" function will basically do similar thing and just put all data in one file. You can comment out that section and gave it a try.
Just let me know if you have more questions! :) "
_Originally posted by @rayguan97 in https://github.com/rohanchandra30/Spectral-Trajectory-Prediction/issues/4#issuecomment-579454136_
Hi, thank you for the share of codbase, I am also new to the trajectory prediction area and try to run the data preparation steps. But there are two problems confusing to me.
Question 1:
I tried to comment out that section as you suggested in https://github.com/rohanchandra30/Spectral-Trajectory-Prediction/issues/4
I ran the data_processing/generate_data.py with the following three commands:
python generate_data.py --set train
python generate_data.py --set val
python generate_data.py --set test
After that, I found that three files: trainSet.txt, valSet.txt, testSet.txt will be automatically created in resources/data/ARGO, but I noticed that there was no data in these three files, which means the code only created these three files but wrote nothing in them. I guess it was caused because I commented out that section. So I guess the line 165 to 185 is necessary. But in line 168 of data_processing/generate_data.py, the code seems to be not finished, could you help me to complete or improve the line 165 to 185 of data_processing/generate_data.py ?
Besides, in line 165-166 of data_processing/generate_data.py, could you explain the function of these two statements?
Question 2:
In line 24-30 of data_processing/data_stream.py, the DATA_DIR for APOL and LYFT both includes trainSet0.txt, but I only found the statements that can create trainSet.txt for ARGO in data_processing/generate_data.py.
I did not find any statements that can help to create trainSet.txt file in these two files:
1.data_processing/format_apolloscape.py
2. data_processing/format_lyft.py.
After I ran the above two files, I did not find trainSet.txt in resources/data/APOL/train and resources/data/LYFT/train, which were expected to be there.
I would be grateful if you could help me with it.
Thank you very much for your attention and help.
|
1.0
|
Problems in data preparation steps - "Thanks for pointing that out! As you can see from the comment, that section is to generate multiple batches of data in case the raw data file is too large. But before that, "single" function will basically do similar thing and just put all data in one file. You can comment out that section and gave it a try.
Just let me know if you have more questions! :) "
_Originally posted by @rayguan97 in https://github.com/rohanchandra30/Spectral-Trajectory-Prediction/issues/4#issuecomment-579454136_
Hi, thank you for the share of codbase, I am also new to the trajectory prediction area and try to run the data preparation steps. But there are two problems confusing to me.
Question 1:
I tried to comment out that section as you suggested in https://github.com/rohanchandra30/Spectral-Trajectory-Prediction/issues/4
I ran the data_processing/generate_data.py with the following three commands:
python generate_data.py --set train
python generate_data.py --set val
python generate_data.py --set test
After that, I found that three files: trainSet.txt, valSet.txt, testSet.txt will be automatically created in resources/data/ARGO, but I noticed that there was no data in these three files, which means the code only created these three files but wrote nothing in them. I guess it was caused because I commented out that section. So I guess the line 165 to 185 is necessary. But in line 168 of data_processing/generate_data.py, the code seems to be not finished, could you help me to complete or improve the line 165 to 185 of data_processing/generate_data.py ?
Besides, in line 165-166 of data_processing/generate_data.py, could you explain the function of these two statements?
Question 2:
In line 24-30 of data_processing/data_stream.py, the DATA_DIR for APOL and LYFT both includes trainSet0.txt, but I only found the statements that can create trainSet.txt for ARGO in data_processing/generate_data.py.
I did not find any statements that can help to create trainSet.txt file in these two files:
1.data_processing/format_apolloscape.py
2. data_processing/format_lyft.py.
After I ran the above two files, I did not find trainSet.txt in resources/data/APOL/train and resources/data/LYFT/train, which were expected to be there.
I would be grateful if you could help me with it.
Thank you very much for your attention and help.
|
process
|
problems in data preparation steps thanks for pointing that out as you can see from the comment that section is to generate multiple batches of data in case the raw data file is too large but before that single function will basically do similar thing and just put all data in one file you can comment out that section and gave it a try just let me know if you have more questions originally posted by in hi thank you for the share of codbase i am also new to the trajectory prediction area and try to run the data preparation steps but there are two problems confusing to me question i tried to comment out that section as you suggested in i ran the data processing generate data py with the following three commands python generate data py set train python generate data py set val python generate data py set test after that i found that three files trainset txt valset txt testset txt will be automatically created in resources data argo but i noticed that there was no data in these three files which means the code only created these three files but wrote nothing in them i guess it was caused because i commented out that section so i guess the line to is necessary but in line of data processing generate data py the code seems to be not finished could you help me to complete or improve the line to of data processing generate data py besides in line of data processing generate data py could you explain the function of these two statements question in line of data processing data stream py the data dir for apol and lyft both includes txt but i only found the statements that can create trainset txt for argo in data processing generate data py i did not find any statements that can help to create trainset txt file in these two files data processing format apolloscape py data processing format lyft py after i ran the above two files i did not find trainset txt in resources data apol train and resources data lyft train which were expected to be there i would be grateful if you could help me with it thank you very much for your attention and help
| 1
|
13,631
| 16,240,547,410
|
IssuesEvent
|
2021-05-07 09:00:35
|
w3c/mediacapture-image
|
https://api.github.com/repos/w3c/mediacapture-image
|
closed
|
WG CR review for mediacapture-image
|
TAG-review process/infra
|
This issue serves to track internal WG review (in the WGs making up the mediacapture TF) of the mediacapture-image document for CR publication.
If you have reviewed the document, please add a “thumbs up” reaction to the issue.
If you find issues that you think need addressing before CR publication, please file the issue in Github and mention it in the comments.
The review lasts until Thursday, October 12, 2016. At that time, the chairs will decide whether or not the review result warrants asking the wider community to review the document for CR; if we do ask the wider community, a similar issue wil be filed for tracking that review.
|
1.0
|
WG CR review for mediacapture-image - This issue serves to track internal WG review (in the WGs making up the mediacapture TF) of the mediacapture-image document for CR publication.
If you have reviewed the document, please add a “thumbs up” reaction to the issue.
If you find issues that you think need addressing before CR publication, please file the issue in Github and mention it in the comments.
The review lasts until Thursday, October 12, 2016. At that time, the chairs will decide whether or not the review result warrants asking the wider community to review the document for CR; if we do ask the wider community, a similar issue wil be filed for tracking that review.
|
process
|
wg cr review for mediacapture image this issue serves to track internal wg review in the wgs making up the mediacapture tf of the mediacapture image document for cr publication if you have reviewed the document please add a “thumbs up” reaction to the issue if you find issues that you think need addressing before cr publication please file the issue in github and mention it in the comments the review lasts until thursday october at that time the chairs will decide whether or not the review result warrants asking the wider community to review the document for cr if we do ask the wider community a similar issue wil be filed for tracking that review
| 1
|
4,512
| 6,664,387,087
|
IssuesEvent
|
2017-10-02 19:57:12
|
cedardevs/onestop
|
https://api.github.com/repos/cedardevs/onestop
|
closed
|
Migrate Envelope dateline wrapping bugfix to api-metadata
|
api bug EPIC: Microservices ready
|
#206 Bugfix was applied to the api module, and needs to be applied to the new api-metadata module as well.
See https://github.com/cedardevs/onestop/pull/207
|
1.0
|
Migrate Envelope dateline wrapping bugfix to api-metadata - #206 Bugfix was applied to the api module, and needs to be applied to the new api-metadata module as well.
See https://github.com/cedardevs/onestop/pull/207
|
non_process
|
migrate envelope dateline wrapping bugfix to api metadata bugfix was applied to the api module and needs to be applied to the new api metadata module as well see
| 0
|
9,768
| 11,816,710,321
|
IssuesEvent
|
2020-03-20 09:37:10
|
Leaflet/Leaflet
|
https://api.github.com/repos/Leaflet/Leaflet
|
closed
|
L.Browser.pointer value is wrong
|
compatibility
|
Here are both docs and code:
https://github.com/Leaflet/Leaflet/blob/d1a1e97b8290f642eb677284af53c2db64199a76/src/core/Browser.js#L94-L96
It is clearly seen that `pointer` value will be `false` in webkit despite docs statement.
The issue appeared after #6855.
I understand that PR purpose, but still insist that proposed fix was not proper.
1. `webkit` is not iOS-specific property.
`safari` is more close, but still, that was not proper place to make that fix.
2. Perhaps this place is better to check for `safari`:
https://github.com/Leaflet/Leaflet/blob/d1a1e97b8290f642eb677284af53c2db64199a76/src/map/handler/Map.Tap.js#L131-L136
At least change here would be more local.
3. Enabling `tap` handler indeed fixes issue with `contextmenu` event (though not completely: #6865), but it also has unwanted sideeffects.
Sample to reproduce issues involves `Leaflet.draw`, which is old, unsupported,and need to be fixed itself.
But if it is relevant - I can setup a fiddle.
|
True
|
L.Browser.pointer value is wrong - Here are both docs and code:
https://github.com/Leaflet/Leaflet/blob/d1a1e97b8290f642eb677284af53c2db64199a76/src/core/Browser.js#L94-L96
It is clearly seen that `pointer` value will be `false` in webkit despite docs statement.
The issue appeared after #6855.
I understand that PR purpose, but still insist that proposed fix was not proper.
1. `webkit` is not iOS-specific property.
`safari` is more close, but still, that was not proper place to make that fix.
2. Perhaps this place is better to check for `safari`:
https://github.com/Leaflet/Leaflet/blob/d1a1e97b8290f642eb677284af53c2db64199a76/src/map/handler/Map.Tap.js#L131-L136
At least change here would be more local.
3. Enabling `tap` handler indeed fixes issue with `contextmenu` event (though not completely: #6865), but it also has unwanted sideeffects.
Sample to reproduce issues involves `Leaflet.draw`, which is old, unsupported,and need to be fixed itself.
But if it is relevant - I can setup a fiddle.
|
non_process
|
l browser pointer value is wrong here are both docs and code it is clearly seen that pointer value will be false in webkit despite docs statement the issue appeared after i understand that pr purpose but still insist that proposed fix was not proper webkit is not ios specific property safari is more close but still that was not proper place to make that fix perhaps this place is better to check for safari at least change here would be more local enabling tap handler indeed fixes issue with contextmenu event though not completely but it also has unwanted sideeffects sample to reproduce issues involves leaflet draw which is old unsupported and need to be fixed itself but if it is relevant i can setup a fiddle
| 0
|
7,359
| 2,601,758,726
|
IssuesEvent
|
2015-02-24 00:34:08
|
chrsmith/bwapi
|
https://api.github.com/repos/chrsmith/bwapi
|
closed
|
Method to get bot-only APM
|
auto-migrated Component-Logic NewFeature Object-Game Priority-Low Type-Enhancement
|
```
A method to obtain the bot's APM and exclude all user-interact clicks.
Should this also include Selects? Or only mark Select+Command pairs?
int Game::getAPM()?
also
int Game::getActionCount()?
```
-----
Original issue reported on code.google.com by `AHeinerm` on 26 Nov 2010 at 5:13
|
1.0
|
Method to get bot-only APM - ```
A method to obtain the bot's APM and exclude all user-interact clicks.
Should this also include Selects? Or only mark Select+Command pairs?
int Game::getAPM()?
also
int Game::getActionCount()?
```
-----
Original issue reported on code.google.com by `AHeinerm` on 26 Nov 2010 at 5:13
|
non_process
|
method to get bot only apm a method to obtain the bot s apm and exclude all user interact clicks should this also include selects or only mark select command pairs int game getapm also int game getactioncount original issue reported on code google com by aheinerm on nov at
| 0
|
1,427
| 3,994,374,528
|
IssuesEvent
|
2016-05-10 12:09:26
|
thesgc/chembiohub_helpdesk
|
https://api.github.com/repos/thesgc/chembiohub_helpdesk
|
closed
|
Clarification in what external users can and can't do required.
|
Managing users priority: Low processed backlog suggestion
|
I can add an external user as an editor on a project via edit user roles. The External user does not have access to editing privileges despite being added as editor by the project owner. We need to be clear about what an external user can and can’t do here
|
1.0
|
Clarification in what external users can and can't do required. - I can add an external user as an editor on a project via edit user roles. The External user does not have access to editing privileges despite being added as editor by the project owner. We need to be clear about what an external user can and can’t do here
|
process
|
clarification in what external users can and can t do required i can add an external user as an editor on a project via edit user roles the external user does not have access to editing privileges despite being added as editor by the project owner we need to be clear about what an external user can and can’t do here
| 1
|
13,256
| 15,725,719,346
|
IssuesEvent
|
2021-03-29 10:20:13
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
amazon-import - improvement
|
enhancement post-processor/amazon-import remote-plugin/amazon
|
#### Feature Description
amazon-import supporting "import-snapshot" (ami import passing through snapshot.).
#### Use Case(s)
I know support from vendor is important, this case can be an exception. AWS is not declaring support for kernel version present in many new OS (CentOS 8, Fedora 31, ...) and stop vmimport task on ami import stage with error: " Unable to determine kernel version" (see #8302 from where we start to search for a solution/workaround).
AWS does not stop however import task as snapshot (instead ami directly). We succesfully reach a working CentOS8 custom AMI manually starting from an virtualbox ova e.g created with packer virtualbox-iso
* extracting VMDK inside ova (tar -xvf packer-centos8min-x86_64.ova)
* uploading vmdk (aws s3 cp ...)
* importing VMDK to aws as "snapshot" (aws ec2 import-snapshot...)
* Creating AMI from snapshot (aws ec2 register-image...)
ova was made using packer provisioned to set right iniramfs with needed modules to allow root device discovery on boot stages for xen-base es2 instances (xen-blkfront), nitro-based ec2 instances (nvme, ena).
It would be nice if amazon-import can support import-snapshot (using VMDK in ova) automating all image creation steps useful to make custom image of new OS release (when kernel version is not yet supported by AWS as often it is a long time lack).
|
1.0
|
amazon-import - improvement - #### Feature Description
amazon-import supporting "import-snapshot" (ami import passing through snapshot.).
#### Use Case(s)
I know support from vendor is important, this case can be an exception. AWS is not declaring support for kernel version present in many new OS (CentOS 8, Fedora 31, ...) and stop vmimport task on ami import stage with error: " Unable to determine kernel version" (see #8302 from where we start to search for a solution/workaround).
AWS does not stop however import task as snapshot (instead ami directly). We succesfully reach a working CentOS8 custom AMI manually starting from an virtualbox ova e.g created with packer virtualbox-iso
* extracting VMDK inside ova (tar -xvf packer-centos8min-x86_64.ova)
* uploading vmdk (aws s3 cp ...)
* importing VMDK to aws as "snapshot" (aws ec2 import-snapshot...)
* Creating AMI from snapshot (aws ec2 register-image...)
ova was made using packer provisioned to set right iniramfs with needed modules to allow root device discovery on boot stages for xen-base es2 instances (xen-blkfront), nitro-based ec2 instances (nvme, ena).
It would be nice if amazon-import can support import-snapshot (using VMDK in ova) automating all image creation steps useful to make custom image of new OS release (when kernel version is not yet supported by AWS as often it is a long time lack).
|
process
|
amazon import improvement feature description amazon import supporting import snapshot ami import passing through snapshot use case s i know support from vendor is important this case can be an exception aws is not declaring support for kernel version present in many new os centos fedora and stop vmimport task on ami import stage with error unable to determine kernel version see from where we start to search for a solution workaround aws does not stop however import task as snapshot instead ami directly we succesfully reach a working custom ami manually starting from an virtualbox ova e g created with packer virtualbox iso extracting vmdk inside ova tar xvf packer ova uploading vmdk aws cp importing vmdk to aws as snapshot aws import snapshot creating ami from snapshot aws register image ova was made using packer provisioned to set right iniramfs with needed modules to allow root device discovery on boot stages for xen base instances xen blkfront nitro based instances nvme ena it would be nice if amazon import can support import snapshot using vmdk in ova automating all image creation steps useful to make custom image of new os release when kernel version is not yet supported by aws as often it is a long time lack
| 1
|
270,960
| 20,616,481,169
|
IssuesEvent
|
2022-03-07 13:43:33
|
cockroachdb/cockroach-operator
|
https://api.github.com/repos/cockroachdb/cockroach-operator
|
closed
|
Operator on OpenShift Challenges
|
bug documentation
|
The Operator installation instructions are not working for me. I explored [gcp-install.md](https://github.com/cockroachdb/cockroach-operator/blob/master/docs/openshift/gcp-install.md) and [openshift-dev.md](https://github.com/cockroachdb/cockroach-operator/blob/master/docs/openshift/openshift-dev.md).
I assume the [gcp-install.md](https://github.com/cockroachdb/cockroach-operator/blob/master/docs/openshift/gcp-install.md) are the instructions that we are supposed to be using. Is that correct?
In the other file ([openshift-dev.md](https://github.com/cockroachdb/cockroach-operator/blob/master/docs/openshift/openshift-dev.md)), it instructs one to navigate to the OpenShift web console. Does that require a subscription.
In previous conversations, we talke about OpenShift having two listings:
- [Red Hat marketplace listing that requires a subscription](https://marketplace.redhat.com/en-us/products/cockroachdb-operator)
- [Red Hat Ecosystem Catalog listing](https://catalog.redhat.com/software/operators/detail/5e9872712989e6a90307acd6)
**Questions:**
- Are we still supporting both of these?
- Have we been developing/testing on both of these types of listings?
- With the Ecosystem Catalog Listing, how are we expecting users to access the OpenShift Web Console?
**Repro steps for gcp-install.md**
1. The [Execution creation step](https://github.com/cockroachdb/cockroach-operator/blob/master/docs/openshift/gcp-install.md#execute-the-creation-script) was unclear. I didn't have a DNS domain, so I attempted to create one. What is that for?
2. I ran this command `./openshift-gcp-create.sh -p cockroach-john-277713 -s pull-secret.txt -z cockroachjohn.com. -n cockroachjohn -r us-east1` and received the following error:
```
ERROR Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get "https://api.cockroachjohn-openshift.cockroachjohn.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.cockroachjohn-openshift.cockroachjohn.com on 192.168.1.1:53: no such host
DEBUG Fetching Bootstrap SSH Key Pair...
DEBUG Loading Bootstrap SSH Key Pair...
DEBUG Using Bootstrap SSH Key Pair loaded from state file
DEBUG Reusing previously-fetched Bootstrap SSH Key Pair
DEBUG Fetching Install Config...
DEBUG Loading Install Config...
DEBUG Loading SSH Key...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Cluster Name...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Networking...
DEBUG Loading Platform...
DEBUG Loading Pull Secret...
DEBUG Loading Platform...
DEBUG Using Install Config loaded from state file
DEBUG Reusing previously-fetched Install Config
INFO Pulling debug logs from the bootstrap machine
DEBUG Using SSH_AUTH_SOCK /private/tmp/com.apple.launchd.gvpQGE0eN7/Listeners to connect to an existing agent
ERROR Attempted to gather debug logs after installation failure: failed to create SSH client: failed to use pre-existing agent, make sure the appropriate keys exist in the agent for authentication: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
FATAL Bootstrap failed to complete: failed waiting for Kubernetes API: Get "https://api.cockroachjohn-openshift.cockroachjohn.com:6443/version?timeout=32s": dial tcp: lookup api.cockroachjohn-openshift.cockroachjohn.com on 192.168.1.1:53: no such host
Johns-MacBook-Pro:gcp-openshift johnkendall$
```
|
1.0
|
Operator on OpenShift Challenges - The Operator installation instructions are not working for me. I explored [gcp-install.md](https://github.com/cockroachdb/cockroach-operator/blob/master/docs/openshift/gcp-install.md) and [openshift-dev.md](https://github.com/cockroachdb/cockroach-operator/blob/master/docs/openshift/openshift-dev.md).
I assume the [gcp-install.md](https://github.com/cockroachdb/cockroach-operator/blob/master/docs/openshift/gcp-install.md) are the instructions that we are supposed to be using. Is that correct?
In the other file ([openshift-dev.md](https://github.com/cockroachdb/cockroach-operator/blob/master/docs/openshift/openshift-dev.md)), it instructs one to navigate to the OpenShift web console. Does that require a subscription.
In previous conversations, we talke about OpenShift having two listings:
- [Red Hat marketplace listing that requires a subscription](https://marketplace.redhat.com/en-us/products/cockroachdb-operator)
- [Red Hat Ecosystem Catalog listing](https://catalog.redhat.com/software/operators/detail/5e9872712989e6a90307acd6)
**Questions:**
- Are we still supporting both of these?
- Have we been developing/testing on both of these types of listings?
- With the Ecosystem Catalog Listing, how are we expecting users to access the OpenShift Web Console?
**Repro steps for gcp-install.md**
1. The [Execution creation step](https://github.com/cockroachdb/cockroach-operator/blob/master/docs/openshift/gcp-install.md#execute-the-creation-script) was unclear. I didn't have a DNS domain, so I attempted to create one. What is that for?
2. I ran this command `./openshift-gcp-create.sh -p cockroach-john-277713 -s pull-secret.txt -z cockroachjohn.com. -n cockroachjohn -r us-east1` and received the following error:
```
ERROR Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get "https://api.cockroachjohn-openshift.cockroachjohn.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.cockroachjohn-openshift.cockroachjohn.com on 192.168.1.1:53: no such host
DEBUG Fetching Bootstrap SSH Key Pair...
DEBUG Loading Bootstrap SSH Key Pair...
DEBUG Using Bootstrap SSH Key Pair loaded from state file
DEBUG Reusing previously-fetched Bootstrap SSH Key Pair
DEBUG Fetching Install Config...
DEBUG Loading Install Config...
DEBUG Loading SSH Key...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Cluster Name...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Networking...
DEBUG Loading Platform...
DEBUG Loading Pull Secret...
DEBUG Loading Platform...
DEBUG Using Install Config loaded from state file
DEBUG Reusing previously-fetched Install Config
INFO Pulling debug logs from the bootstrap machine
DEBUG Using SSH_AUTH_SOCK /private/tmp/com.apple.launchd.gvpQGE0eN7/Listeners to connect to an existing agent
ERROR Attempted to gather debug logs after installation failure: failed to create SSH client: failed to use pre-existing agent, make sure the appropriate keys exist in the agent for authentication: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
FATAL Bootstrap failed to complete: failed waiting for Kubernetes API: Get "https://api.cockroachjohn-openshift.cockroachjohn.com:6443/version?timeout=32s": dial tcp: lookup api.cockroachjohn-openshift.cockroachjohn.com on 192.168.1.1:53: no such host
Johns-MacBook-Pro:gcp-openshift johnkendall$
```
|
non_process
|
operator on openshift challenges the operator installation instructions are not working for me i explored and i assume the are the instructions that we are supposed to be using is that correct in the other file it instructs one to navigate to the openshift web console does that require a subscription in previous conversations we talke about openshift having two listings questions are we still supporting both of these have we been developing testing on both of these types of listings with the ecosystem catalog listing how are we expecting users to access the openshift web console repro steps for gcp install md the was unclear i didn t have a dns domain so i attempted to create one what is that for i ran this command openshift gcp create sh p cockroach john s pull secret txt z cockroachjohn com n cockroachjohn r us and received the following error error attempted to gather clusteroperator status after installation failure listing clusteroperator objects get dial tcp lookup api cockroachjohn openshift cockroachjohn com on no such host debug fetching bootstrap ssh key pair debug loading bootstrap ssh key pair debug using bootstrap ssh key pair loaded from state file debug reusing previously fetched bootstrap ssh key pair debug fetching install config debug loading install config debug loading ssh key debug loading base domain debug loading platform debug loading cluster name debug loading base domain debug loading platform debug loading networking debug loading platform debug loading pull secret debug loading platform debug using install config loaded from state file debug reusing previously fetched install config info pulling debug logs from the bootstrap machine debug using ssh auth sock private tmp com apple launchd listeners to connect to an existing agent error attempted to gather debug logs after installation failure failed to create ssh client failed to use pre existing agent make sure the appropriate keys exist in the agent for authentication ssh handshake failed ssh unable to authenticate attempted methods no supported methods remain fatal bootstrap failed to complete failed waiting for kubernetes api get dial tcp lookup api cockroachjohn openshift cockroachjohn com on no such host johns macbook pro gcp openshift johnkendall
| 0
|
17,654
| 23,472,401,282
|
IssuesEvent
|
2022-08-17 00:01:02
|
MPMG-DCC-UFMG/C01
|
https://api.github.com/repos/MPMG-DCC-UFMG/C01
|
closed
|
Adaptação da resolução do Playwright para configuração da página
|
[2] Baixa Prioridade [0] Desenvolvimento [1] Aprimoramento [3] Processamento Dinâmico
|
## Comportamento Esperado
Desejamos que seja possível configurar a resolução de execução do navegador na coleta dinâmica. O usuário terá a possibilidade de escolher a resolução desejada. Essa resolução pode ser escolhida com a escolha de uma largura e altura arbitrária, como também através de uma lista de resoluções usuais na indústria.
## Comportamento Atual
O navegador do Playwright é executado sempre na resolução de 1280 x720, que pode alterar a visibilidade de alguns elementos em páginas responsivas.
## Especificações da Coleta
Coletas dinâmicas em páginas recursivas
|
1.0
|
Adaptação da resolução do Playwright para configuração da página - ## Comportamento Esperado
Desejamos que seja possível configurar a resolução de execução do navegador na coleta dinâmica. O usuário terá a possibilidade de escolher a resolução desejada. Essa resolução pode ser escolhida com a escolha de uma largura e altura arbitrária, como também através de uma lista de resoluções usuais na indústria.
## Comportamento Atual
O navegador do Playwright é executado sempre na resolução de 1280 x720, que pode alterar a visibilidade de alguns elementos em páginas responsivas.
## Especificações da Coleta
Coletas dinâmicas em páginas recursivas
|
process
|
adaptação da resolução do playwright para configuração da página comportamento esperado desejamos que seja possível configurar a resolução de execução do navegador na coleta dinâmica o usuário terá a possibilidade de escolher a resolução desejada essa resolução pode ser escolhida com a escolha de uma largura e altura arbitrária como também através de uma lista de resoluções usuais na indústria comportamento atual o navegador do playwright é executado sempre na resolução de que pode alterar a visibilidade de alguns elementos em páginas responsivas especificações da coleta coletas dinâmicas em páginas recursivas
| 1
|
20,913
| 27,753,590,531
|
IssuesEvent
|
2023-03-15 23:19:26
|
dDevTech/tapas-top-frontend
|
https://api.github.com/repos/dDevTech/tapas-top-frontend
|
closed
|
Modificación ajustes perfil page 20/03/2023
|
pending in process require testing
|
Añadir a la página /account/settings la posibilidad de cambiar:
-Ubicación [categoría]
-Género [categoria]
-País [categoría]
-Foto text
-Descripción multitext
|
1.0
|
Modificación ajustes perfil page 20/03/2023 - Añadir a la página /account/settings la posibilidad de cambiar:
-Ubicación [categoría]
-Género [categoria]
-País [categoría]
-Foto text
-Descripción multitext
|
process
|
modificación ajustes perfil page añadir a la página account settings la posibilidad de cambiar ubicación género país foto text descripción multitext
| 1
|
246,754
| 18,853,154,133
|
IssuesEvent
|
2021-11-12 00:25:27
|
Interlisp/medley
|
https://api.github.com/repos/Interlisp/medley
|
reopened
|
Embed SUBR documentation in C code
|
documentation wontfix maiko testing
|
The SUBR (i.e. VM opcode) dispatch table seems to be in Maiko's [`src/subr.c`](https://github.com/Interlisp/maiko/blob/master/src/subr.c). Could we add a specially formatted comment into the C code to document each SUBR, and write a script to extract those comments and generate a nice HTML document from them?
If the description of each SUBR is right next to its implementation, that would be the easiest way to keep the two in sync.
(See also #28)
|
1.0
|
Embed SUBR documentation in C code - The SUBR (i.e. VM opcode) dispatch table seems to be in Maiko's [`src/subr.c`](https://github.com/Interlisp/maiko/blob/master/src/subr.c). Could we add a specially formatted comment into the C code to document each SUBR, and write a script to extract those comments and generate a nice HTML document from them?
If the description of each SUBR is right next to its implementation, that would be the easiest way to keep the two in sync.
(See also #28)
|
non_process
|
embed subr documentation in c code the subr i e vm opcode dispatch table seems to be in maiko s could we add a specially formatted comment into the c code to document each subr and write a script to extract those comments and generate a nice html document from them if the description of each subr is right next to its implementation that would be the easiest way to keep the two in sync see also
| 0
|
428,867
| 12,418,461,854
|
IssuesEvent
|
2020-05-23 00:25:19
|
eclipse-ee4j/glassfish
|
https://api.github.com/repos/eclipse-ee4j/glassfish
|
closed
|
Deployment Fails If EclipseLink Cache Coordination Is Configured
|
Component: entity-persistence ERR: Assignee Priority: Major Stale Type: Bug
|
Glassfish 3.1.2 introduced early validation of JPA persistence units on DAS.
For this reason DAS creates and entity manager during deployment which will fail with EclipseLink Cache Coordination configured because the required resources are missing on the DAS server.
For JMSTopicTransportManager this would be the TopicConnectionFactory and Topic.
For RMITransportManager this would be java system properties that I usually setup for each individual server instance and read in a session customizer to build the rmi url for the current server instance.
Workaround:
In both cases the problem can be worked around by checking whether the resources are available in a session customizer.
Cache Coordination should be switched off if the resources, e.g. the TopicConnectionFactory is not available.
public class JMSSessionCustomizerGF312 implements SessionCustomizer {
@Override
public void customize(Session session) throws Exception {
session.getLog().write("############################## SESSION CUSTOMIZER ##############################");
Server server = (Server) session;
RemoteCommandManager commandMgr = (RemoteCommandManager) server
.getCommandManager();
JMSTopicTransportManager transportManager = (JMSTopicTransportManager) commandMgr
.getTransportManager();
String connectionFactoryName = transportManager.getTopicConnectionFactoryName();
try
{ TopicConnectionFactory connectionFactory = (TopicConnectionFactory) new InitialContext().lookup(connectionFactoryName); }
catch(Exception ex)
{ session.getLog().write("Lookup of TopicConnectionFactory \"" + connectionFactoryName + "\" failed."); server.setCommandManager(null); server.setShouldPropagateChanges(false); }
session.getLog().flush();
if (server.isConnected())
{ server.getCommandManager().initialize(); }
else
{ server.login(); }
}
}
I have uploaded an example that can be used to reproduce the problem and it does also include the workaround.
To reproduce the problem configure the following in project.properties:
eclipselink.transport.protocol=jms-el22
To work around the problem configure the following in project.properties:
eclipselink.transport.protocol=jms-gf312
#### Affected Versions
[3.1.2]
|
1.0
|
Deployment Fails If EclipseLink Cache Coordination Is Configured - Glassfish 3.1.2 introduced early validation of JPA persistence units on DAS.
For this reason DAS creates and entity manager during deployment which will fail with EclipseLink Cache Coordination configured because the required resources are missing on the DAS server.
For JMSTopicTransportManager this would be the TopicConnectionFactory and Topic.
For RMITransportManager this would be java system properties that I usually setup for each individual server instance and read in a session customizer to build the rmi url for the current server instance.
Workaround:
In both cases the problem can be worked around by checking whether the resources are available in a session customizer.
Cache Coordination should be switched off if the resources, e.g. the TopicConnectionFactory is not available.
public class JMSSessionCustomizerGF312 implements SessionCustomizer {
@Override
public void customize(Session session) throws Exception {
session.getLog().write("############################## SESSION CUSTOMIZER ##############################");
Server server = (Server) session;
RemoteCommandManager commandMgr = (RemoteCommandManager) server
.getCommandManager();
JMSTopicTransportManager transportManager = (JMSTopicTransportManager) commandMgr
.getTransportManager();
String connectionFactoryName = transportManager.getTopicConnectionFactoryName();
try
{ TopicConnectionFactory connectionFactory = (TopicConnectionFactory) new InitialContext().lookup(connectionFactoryName); }
catch(Exception ex)
{ session.getLog().write("Lookup of TopicConnectionFactory \"" + connectionFactoryName + "\" failed."); server.setCommandManager(null); server.setShouldPropagateChanges(false); }
session.getLog().flush();
if (server.isConnected())
{ server.getCommandManager().initialize(); }
else
{ server.login(); }
}
}
I have uploaded an example that can be used to reproduce the problem and it does also include the workaround.
To reproduce the problem configure the following in project.properties:
eclipselink.transport.protocol=jms-el22
To work around the problem configure the following in project.properties:
eclipselink.transport.protocol=jms-gf312
#### Affected Versions
[3.1.2]
|
non_process
|
deployment fails if eclipselink cache coordination is configured glassfish introduced early validation of jpa persistence units on das for this reason das creates and entity manager during deployment which will fail with eclipselink cache coordination configured because the required resources are missing on the das server for jmstopictransportmanager this would be the topicconnectionfactory and topic for rmitransportmanager this would be java system properties that i usually setup for each individual server instance and read in a session customizer to build the rmi url for the current server instance workaround in both cases the problem can be worked around by checking whether the resources are available in a session customizer cache coordination should be switched off if the resources e g the topicconnectionfactory is not available public class implements sessioncustomizer override public void customize session session throws exception session getlog write session customizer server server server session remotecommandmanager commandmgr remotecommandmanager server getcommandmanager jmstopictransportmanager transportmanager jmstopictransportmanager commandmgr gettransportmanager string connectionfactoryname transportmanager gettopicconnectionfactoryname try topicconnectionfactory connectionfactory topicconnectionfactory new initialcontext lookup connectionfactoryname catch exception ex session getlog write lookup of topicconnectionfactory connectionfactoryname failed server setcommandmanager null server setshouldpropagatechanges false session getlog flush if server isconnected server getcommandmanager initialize else server login i have uploaded an example that can be used to reproduce the problem and it does also include the workaround to reproduce the problem configure the following in project properties eclipselink transport protocol jms to work around the problem configure the following in project properties eclipselink transport protocol jms affected versions
| 0
|
11,886
| 14,680,972,694
|
IssuesEvent
|
2020-12-31 11:46:06
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
Fields should not be editable
|
Bug P1 Participant manager Process: Tested QA Process: Tested dev
|
Fields should not be editable in the following
1. Edit user details page when user is deactivated
2. Edit Location details page when location is deactivated
|
2.0
|
Fields should not be editable - Fields should not be editable in the following
1. Edit user details page when user is deactivated
2. Edit Location details page when location is deactivated
|
process
|
fields should not be editable fields should not be editable in the following edit user details page when user is deactivated edit location details page when location is deactivated
| 1
|
52,562
| 22,294,379,937
|
IssuesEvent
|
2022-06-12 20:49:04
|
badges/shields
|
https://api.github.com/repos/badges/shields
|
opened
|
GitLab Pipeline (nested group) service test failing
|
keep-service-tests-green
|
:clock11: **When did the problem start?**
At least a week ago
<!-- Indicate when the problem started -->
:camera: **Live badge**
<!-- Provide a link to the live badge in plain text and markdown. -->
https://img.shields.io/gitlab/pipeline-status/gitlab-org/gitlab?branch=master

:wrench: **Is the live badge working?**
Yes, at least for non-nested group
<!-- Indicate whether or not the live badge is working. -->
:link: **CircleCI link**
https://app.circleci.com/pipelines/github/badges/daily-tests/1740/workflows/66939cf5-e47e-49dc-bbcc-be10961607f6/jobs/2642/tests#failed-test-11
<!-- Provide a link to the failing test in CircleCI. -->
:lady_beetle: **Stack trace**
```Gitlab Pipeline [live] Pipeline status (nested groups) [ GET /pipeline-status/megabyte-labs/dockerfile/ci-pipeline/ansible-lint.json?branch=master ]
[ GET /pipeline-status/megabyte-labs/dockerfile/ci-pipeline/ansible-lint.json?branch=master ]
/home/circleci/project/shields/core/service-test-runner/cli.js
ValidationError: message mismatch: "value" must be one of [fixed, passed, passing, succeeded, success, successful, partially succeeded, unstable, timeout, broken, error, errored, failed, failing, failure, infrastructure_failure, aborted, building, canceled, cancelled, created, expired, initiated, no builds, no tests, not built, not run, pending, processing, queued, running, scheduled, skipped, starting, stopped, testing, waiting]
at Object.exports.process (node_modules/joi/lib/errors.js:193:16)
at Object.internals.entry (node_modules/joi/lib/validator.js:153:26)
at Object.exports.entry (node_modules/joi/lib/validator.js:27:30)
at internals.Base.validate (node_modules/joi/lib/base.js:548:26)
at Object.internals.assert (node_modules/joi/lib/index.js:225:27)
at Object.attempt (node_modules/joi/lib/index.js:107:26)
at Function._expectField (file:///home/circleci/project/shields/core/service-test-runner/icedfrisby-shields.js:89:13)
at IcedFrisbyNock.<anonymous> (file:///home/circleci/project/shields/core/service-test-runner/icedfrisby-shields.js:70:26)
at IcedFrisbyNock.<anonymous> (node_modules/icedfrisby/lib/icedfrisby.js:954:10)
at invokeNextHook (node_modules/icedfrisby/lib/icedfrisby.js:1003:24)
at /home/circleci/project/shields/node_modules/icedfrisby/lib/icedfrisby.js:1017:7
at new Promise (<anonymous>)
at IcedFrisbyNock._runHooks (node_modules/icedfrisby/lib/icedfrisby.js:976:12)
at IcedFrisbyNock.run (node_modules/icedfrisby/lib/icedfrisby.js:1276:20)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async Context.<anonymous> (node_modules/icedfrisby/lib/icedfrisby.js:1348:9)
```
:bulb: **Possible solution**
Potentially just needs a new test target
<!--- Optional: only if you have suggestions on a fix/reason for the bug -->
<!-- Love Shields? Please consider donating $10 to sustain our activities:
👉 https://opencollective.com/shields -->
|
1.0
|
GitLab Pipeline (nested group) service test failing - :clock11: **When did the problem start?**
At least a week ago
<!-- Indicate when the problem started -->
:camera: **Live badge**
<!-- Provide a link to the live badge in plain text and markdown. -->
https://img.shields.io/gitlab/pipeline-status/gitlab-org/gitlab?branch=master

:wrench: **Is the live badge working?**
Yes, at least for non-nested group
<!-- Indicate whether or not the live badge is working. -->
:link: **CircleCI link**
https://app.circleci.com/pipelines/github/badges/daily-tests/1740/workflows/66939cf5-e47e-49dc-bbcc-be10961607f6/jobs/2642/tests#failed-test-11
<!-- Provide a link to the failing test in CircleCI. -->
:lady_beetle: **Stack trace**
```Gitlab Pipeline [live] Pipeline status (nested groups) [ GET /pipeline-status/megabyte-labs/dockerfile/ci-pipeline/ansible-lint.json?branch=master ]
[ GET /pipeline-status/megabyte-labs/dockerfile/ci-pipeline/ansible-lint.json?branch=master ]
/home/circleci/project/shields/core/service-test-runner/cli.js
ValidationError: message mismatch: "value" must be one of [fixed, passed, passing, succeeded, success, successful, partially succeeded, unstable, timeout, broken, error, errored, failed, failing, failure, infrastructure_failure, aborted, building, canceled, cancelled, created, expired, initiated, no builds, no tests, not built, not run, pending, processing, queued, running, scheduled, skipped, starting, stopped, testing, waiting]
at Object.exports.process (node_modules/joi/lib/errors.js:193:16)
at Object.internals.entry (node_modules/joi/lib/validator.js:153:26)
at Object.exports.entry (node_modules/joi/lib/validator.js:27:30)
at internals.Base.validate (node_modules/joi/lib/base.js:548:26)
at Object.internals.assert (node_modules/joi/lib/index.js:225:27)
at Object.attempt (node_modules/joi/lib/index.js:107:26)
at Function._expectField (file:///home/circleci/project/shields/core/service-test-runner/icedfrisby-shields.js:89:13)
at IcedFrisbyNock.<anonymous> (file:///home/circleci/project/shields/core/service-test-runner/icedfrisby-shields.js:70:26)
at IcedFrisbyNock.<anonymous> (node_modules/icedfrisby/lib/icedfrisby.js:954:10)
at invokeNextHook (node_modules/icedfrisby/lib/icedfrisby.js:1003:24)
at /home/circleci/project/shields/node_modules/icedfrisby/lib/icedfrisby.js:1017:7
at new Promise (<anonymous>)
at IcedFrisbyNock._runHooks (node_modules/icedfrisby/lib/icedfrisby.js:976:12)
at IcedFrisbyNock.run (node_modules/icedfrisby/lib/icedfrisby.js:1276:20)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async Context.<anonymous> (node_modules/icedfrisby/lib/icedfrisby.js:1348:9)
```
:bulb: **Possible solution**
Potentially just needs a new test target
<!--- Optional: only if you have suggestions on a fix/reason for the bug -->
<!-- Love Shields? Please consider donating $10 to sustain our activities:
👉 https://opencollective.com/shields -->
|
non_process
|
gitlab pipeline nested group service test failing when did the problem start at least a week ago camera live badge wrench is the live badge working yes at least for non nested group link circleci link lady beetle stack trace gitlab pipeline pipeline status nested groups home circleci project shields core service test runner cli js validationerror message mismatch value must be one of at object exports process node modules joi lib errors js at object internals entry node modules joi lib validator js at object exports entry node modules joi lib validator js at internals base validate node modules joi lib base js at object internals assert node modules joi lib index js at object attempt node modules joi lib index js at function expectfield file home circleci project shields core service test runner icedfrisby shields js at icedfrisbynock file home circleci project shields core service test runner icedfrisby shields js at icedfrisbynock node modules icedfrisby lib icedfrisby js at invokenexthook node modules icedfrisby lib icedfrisby js at home circleci project shields node modules icedfrisby lib icedfrisby js at new promise at icedfrisbynock runhooks node modules icedfrisby lib icedfrisby js at icedfrisbynock run node modules icedfrisby lib icedfrisby js at processticksandrejections internal process task queues js at async context node modules icedfrisby lib icedfrisby js bulb possible solution potentially just needs a new test target love shields please consider donating to sustain our activities 👉
| 0
|
17,376
| 23,200,154,458
|
IssuesEvent
|
2022-08-01 20:32:59
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
[FALSE-POSITIVE?]
|
whitelisting process
|
**Domains or links**
Please list any domains and links listed here which you believe are a false positive.
**More Information**
How did you discover your web site or domain was listed here?
Notified by another domain blocking company
**Have you requested removal from other sources?**
Yes, working on a whole list with false-positive reports. As example:
https://github.com/herndlm/hosts_merge/issues/8
https://github.com/jerryn70/GoodbyeAds/issues/296
**Additional context**
BitCanna is now marked in your list. BitCanna is not a company involved in spam, but focusses on solving issues in the cannabis industries using blockchain technology. Please remove our domain from your list (https://hosts.ubuntu101.co.za/domains.list). Thanks!
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
|
1.0
|
[FALSE-POSITIVE?] - **Domains or links**
Please list any domains and links listed here which you believe are a false positive.
**More Information**
How did you discover your web site or domain was listed here?
Notified by another domain blocking company
**Have you requested removal from other sources?**
Yes, working on a whole list with false-positive reports. As example:
https://github.com/herndlm/hosts_merge/issues/8
https://github.com/jerryn70/GoodbyeAds/issues/296
**Additional context**
BitCanna is now marked in your list. BitCanna is not a company involved in spam, but focusses on solving issues in the cannabis industries using blockchain technology. Please remove our domain from your list (https://hosts.ubuntu101.co.za/domains.list). Thanks!
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
|
process
|
domains or links please list any domains and links listed here which you believe are a false positive more information how did you discover your web site or domain was listed here notified by another domain blocking company have you requested removal from other sources yes working on a whole list with false positive reports as example additional context bitcanna is now marked in your list bitcanna is not a company involved in spam but focusses on solving issues in the cannabis industries using blockchain technology please remove our domain from your list thanks exclamation we understand being listed on a list like this can be frustrating and embarrassing for many web site owners the first step is to remain calm the second step is to rest assured one of our maintainers will address your issue as soon as possible please make sure you have provided as much information as possible to help speed up the process
| 1
|
161,180
| 12,533,338,289
|
IssuesEvent
|
2020-06-04 17:25:39
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
server: TestAdminAPINonTableStats failed
|
C-test-failure O-robot branch-master
|
[(server).TestAdminAPINonTableStats failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1952263&tab=buildLog) on [master@a0f5c15b3929c224a46e3ca2545171f349c233fa](https://github.com/cockroachdb/cockroach/commits/a0f5c15b3929c224a46e3ca2545171f349c233fa):
```
github.com/cockroachdb/cockroach/pkg/server.(*TestServer).Start()
/go/src/github.com/cockroachdb/cockroach/pkg/server/testserver.go:411 +0x32a
github.com/cockroachdb/cockroach/pkg/testutils/serverutils.StartServerRaw()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_server_shim.go:255 +0x199
github.com/cockroachdb/cockroach/pkg/testutils/serverutils.StartServer()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_server_shim.go:223 +0x73
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.(*TestCluster).doAddServer()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:349 +0x1d1
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.StartTestCluster()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:223 +0xd33
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.(*testClusterFactoryImpl).StartTestCluster()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:886 +0xcc
github.com/cockroachdb/cockroach/pkg/server.TestAdminAPINonTableStats()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_cluster_shim.go:141 +0x180
testing.tRunner()
/usr/local/go/src/testing/testing.go:909 +0x199
Goroutine 431 (running) created at:
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker()
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:191 +0xc3
github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*raftScheduler).Start()
/go/src/github.com/cockroachdb/cockroach/pkg/kv/kvserver/scheduler.go:165 +0x149
github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Store).processRaft()
/go/src/github.com/cockroachdb/cockroach/pkg/kv/kvserver/store_raft.go:573 +0xcb
github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Store).Start()
/go/src/github.com/cockroachdb/cockroach/pkg/kv/kvserver/store.go:1506 +0xc8c
github.com/cockroachdb/cockroach/pkg/server.(*Node).bootstrapStores()
/go/src/github.com/cockroachdb/cockroach/pkg/server/node.go:564 +0x60a
github.com/cockroachdb/cockroach/pkg/server.(*Node).start()
/go/src/github.com/cockroachdb/cockroach/pkg/server/node.go:452 +0x172e
github.com/cockroachdb/cockroach/pkg/server.(*Server).Start()
/go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:1356 +0x2ff6
github.com/cockroachdb/cockroach/pkg/server.(*TestServer).Start()
/go/src/github.com/cockroachdb/cockroach/pkg/server/testserver.go:411 +0x32a
github.com/cockroachdb/cockroach/pkg/testutils/serverutils.StartServerRaw()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_server_shim.go:255 +0x199
github.com/cockroachdb/cockroach/pkg/testutils/serverutils.StartServer()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_server_shim.go:223 +0x73
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.(*TestCluster).doAddServer()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:349 +0x1d1
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.StartTestCluster()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:223 +0xd33
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.(*testClusterFactoryImpl).StartTestCluster()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:886 +0xcc
github.com/cockroachdb/cockroach/pkg/server.TestAdminAPINonTableStats()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_cluster_shim.go:141 +0x180
testing.tRunner()
/usr/local/go/src/testing/testing.go:909 +0x199
==================
FAIL github.com/cockroachdb/cockroach/pkg/server 19.414s
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestAdminAPINonTableStats PKG=./pkg/server TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestAdminAPINonTableStats.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
1.0
|
server: TestAdminAPINonTableStats failed - [(server).TestAdminAPINonTableStats failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1952263&tab=buildLog) on [master@a0f5c15b3929c224a46e3ca2545171f349c233fa](https://github.com/cockroachdb/cockroach/commits/a0f5c15b3929c224a46e3ca2545171f349c233fa):
```
github.com/cockroachdb/cockroach/pkg/server.(*TestServer).Start()
/go/src/github.com/cockroachdb/cockroach/pkg/server/testserver.go:411 +0x32a
github.com/cockroachdb/cockroach/pkg/testutils/serverutils.StartServerRaw()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_server_shim.go:255 +0x199
github.com/cockroachdb/cockroach/pkg/testutils/serverutils.StartServer()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_server_shim.go:223 +0x73
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.(*TestCluster).doAddServer()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:349 +0x1d1
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.StartTestCluster()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:223 +0xd33
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.(*testClusterFactoryImpl).StartTestCluster()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:886 +0xcc
github.com/cockroachdb/cockroach/pkg/server.TestAdminAPINonTableStats()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_cluster_shim.go:141 +0x180
testing.tRunner()
/usr/local/go/src/testing/testing.go:909 +0x199
Goroutine 431 (running) created at:
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker()
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:191 +0xc3
github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*raftScheduler).Start()
/go/src/github.com/cockroachdb/cockroach/pkg/kv/kvserver/scheduler.go:165 +0x149
github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Store).processRaft()
/go/src/github.com/cockroachdb/cockroach/pkg/kv/kvserver/store_raft.go:573 +0xcb
github.com/cockroachdb/cockroach/pkg/kv/kvserver.(*Store).Start()
/go/src/github.com/cockroachdb/cockroach/pkg/kv/kvserver/store.go:1506 +0xc8c
github.com/cockroachdb/cockroach/pkg/server.(*Node).bootstrapStores()
/go/src/github.com/cockroachdb/cockroach/pkg/server/node.go:564 +0x60a
github.com/cockroachdb/cockroach/pkg/server.(*Node).start()
/go/src/github.com/cockroachdb/cockroach/pkg/server/node.go:452 +0x172e
github.com/cockroachdb/cockroach/pkg/server.(*Server).Start()
/go/src/github.com/cockroachdb/cockroach/pkg/server/server.go:1356 +0x2ff6
github.com/cockroachdb/cockroach/pkg/server.(*TestServer).Start()
/go/src/github.com/cockroachdb/cockroach/pkg/server/testserver.go:411 +0x32a
github.com/cockroachdb/cockroach/pkg/testutils/serverutils.StartServerRaw()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_server_shim.go:255 +0x199
github.com/cockroachdb/cockroach/pkg/testutils/serverutils.StartServer()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_server_shim.go:223 +0x73
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.(*TestCluster).doAddServer()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:349 +0x1d1
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.StartTestCluster()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:223 +0xd33
github.com/cockroachdb/cockroach/pkg/testutils/testcluster.(*testClusterFactoryImpl).StartTestCluster()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/testcluster/testcluster.go:886 +0xcc
github.com/cockroachdb/cockroach/pkg/server.TestAdminAPINonTableStats()
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/serverutils/test_cluster_shim.go:141 +0x180
testing.tRunner()
/usr/local/go/src/testing/testing.go:909 +0x199
==================
FAIL github.com/cockroachdb/cockroach/pkg/server 19.414s
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestAdminAPINonTableStats PKG=./pkg/server TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestAdminAPINonTableStats.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_process
|
server testadminapinontablestats failed on github com cockroachdb cockroach pkg server testserver start go src github com cockroachdb cockroach pkg server testserver go github com cockroachdb cockroach pkg testutils serverutils startserverraw go src github com cockroachdb cockroach pkg testutils serverutils test server shim go github com cockroachdb cockroach pkg testutils serverutils startserver go src github com cockroachdb cockroach pkg testutils serverutils test server shim go github com cockroachdb cockroach pkg testutils testcluster testcluster doaddserver go src github com cockroachdb cockroach pkg testutils testcluster testcluster go github com cockroachdb cockroach pkg testutils testcluster starttestcluster go src github com cockroachdb cockroach pkg testutils testcluster testcluster go github com cockroachdb cockroach pkg testutils testcluster testclusterfactoryimpl starttestcluster go src github com cockroachdb cockroach pkg testutils testcluster testcluster go github com cockroachdb cockroach pkg server testadminapinontablestats go src github com cockroachdb cockroach pkg testutils serverutils test cluster shim go testing trunner usr local go src testing testing go goroutine running created at github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go github com cockroachdb cockroach pkg kv kvserver raftscheduler start go src github com cockroachdb cockroach pkg kv kvserver scheduler go github com cockroachdb cockroach pkg kv kvserver store processraft go src github com cockroachdb cockroach pkg kv kvserver store raft go github com cockroachdb cockroach pkg kv kvserver store start go src github com cockroachdb cockroach pkg kv kvserver store go github com cockroachdb cockroach pkg server node bootstrapstores go src github com cockroachdb cockroach pkg server node go github com cockroachdb cockroach pkg server node start go src github com cockroachdb cockroach pkg server node go github com cockroachdb cockroach pkg server server start go src github com cockroachdb cockroach pkg server server go github com cockroachdb cockroach pkg server testserver start go src github com cockroachdb cockroach pkg server testserver go github com cockroachdb cockroach pkg testutils serverutils startserverraw go src github com cockroachdb cockroach pkg testutils serverutils test server shim go github com cockroachdb cockroach pkg testutils serverutils startserver go src github com cockroachdb cockroach pkg testutils serverutils test server shim go github com cockroachdb cockroach pkg testutils testcluster testcluster doaddserver go src github com cockroachdb cockroach pkg testutils testcluster testcluster go github com cockroachdb cockroach pkg testutils testcluster starttestcluster go src github com cockroachdb cockroach pkg testutils testcluster testcluster go github com cockroachdb cockroach pkg testutils testcluster testclusterfactoryimpl starttestcluster go src github com cockroachdb cockroach pkg testutils testcluster testcluster go github com cockroachdb cockroach pkg server testadminapinontablestats go src github com cockroachdb cockroach pkg testutils serverutils test cluster shim go testing trunner usr local go src testing testing go fail github com cockroachdb cockroach pkg server more parameters goflags json make stressrace tests testadminapinontablestats pkg pkg server testtimeout stressflags timeout powered by
| 0
|
15,577
| 19,703,689,923
|
IssuesEvent
|
2022-01-12 19:20:41
|
googleapis/google-cloud-php-compute
|
https://api.github.com/repos/googleapis/google-cloud-php-compute
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname field missing from .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname field missing from .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname field missing from repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
25,124
| 12,218,266,760
|
IssuesEvent
|
2020-05-01 18:57:52
|
sourcegraph/sourcegraph
|
https://api.github.com/repos/sourcegraph/sourcegraph
|
opened
|
Ensure that we are tracking Go test coverage correctly
|
planned/3.17 team/core-services testing
|
From @keegancsmith [on Slack](https://sourcegraph.slack.com/archives/CHPC7UX16/p1588235602278900?thread_ts=1588193714.276400&cid=CHPC7UX16)
> FYI the current measurement is much lower than the reality. Everytime we have tried to use `-coverpkg` it broke something (I think because of how it interacts with race? I can't remember).
But right now we only collect coverage inside the package being tested, and miss out on coverage we get across packages. Fixing this might blow the OKR out of the water :slightly_smiling_face: So I'd investigate that first.
cc @unknwon since you are focusing on testing. I am going to tentatively place this in the next milestone because it isn't urgent.
|
1.0
|
Ensure that we are tracking Go test coverage correctly - From @keegancsmith [on Slack](https://sourcegraph.slack.com/archives/CHPC7UX16/p1588235602278900?thread_ts=1588193714.276400&cid=CHPC7UX16)
> FYI the current measurement is much lower than the reality. Everytime we have tried to use `-coverpkg` it broke something (I think because of how it interacts with race? I can't remember).
But right now we only collect coverage inside the package being tested, and miss out on coverage we get across packages. Fixing this might blow the OKR out of the water :slightly_smiling_face: So I'd investigate that first.
cc @unknwon since you are focusing on testing. I am going to tentatively place this in the next milestone because it isn't urgent.
|
non_process
|
ensure that we are tracking go test coverage correctly from keegancsmith fyi the current measurement is much lower than the reality everytime we have tried to use coverpkg it broke something i think because of how it interacts with race i can t remember but right now we only collect coverage inside the package being tested and miss out on coverage we get across packages fixing this might blow the okr out of the water slightly smiling face so i d investigate that first cc unknwon since you are focusing on testing i am going to tentatively place this in the next milestone because it isn t urgent
| 0
|
175,940
| 28,001,072,684
|
IssuesEvent
|
2023-03-27 11:49:00
|
kodadot/nft-gallery
|
https://api.github.com/repos/kodadot/nft-gallery
|
closed
|
Collection Redesign
|
p2 collection redesign
|
Hey 5000th issue 🎂🥳
@exezbcz is author here,
@yangwao reserved special number.
Collection activity
---
- #5272
- #5286
Quick Brief
---
Here we go with collection
- Activity tab missing, will be finished parallelly with item view going live
- note that many parts are similar to to explore v0.9 - #4474
- we will use tag `?redesign=true` on beta again
Design
---
### [Figma design file](https://www.figma.com/file/7BWdGiowrvTucpSFLE0kRX/Collection-detail-handoff?node-id=0%3A1&t=rNneqkPiwAvLnES9-1)
Tasks
---
- [x] #5071
- [x] #5072
- [x] #5073
- [x] #5074
- [x] #5075
- [ ] #4911
- [x] #5076
---
<img src="https://media0.giphy.com/media/3ohhwzBss3LYGSuLPW/giphy.gif"/>
## Hero section
- big change: hero section is becoming one big banner, with the name, social buttons and profile picture on top.
- there should be also option to upload this banner, issue already created, I will try to add designs as soon as possible.
- #4983

- banner on desktop has 1440*560, it would be also recommended size for the customization - its also used on collection cards. It has quite close aspect ration
- over banner is applied black gradient with lowered opacity, see figma

## Hero Buttons:
- first section are buttons of the collection, mainly socials and website.
- Then there is share button and more options - they are divided to make clear that they are not associated with the specific collection
- Share button is pretty clear, more options button is in case visitor - report button, and in case of owner its going to be a customize and other functionalities we already offer.

## Under banner section
- Creator - blue link with address/identity
under which is collection description

right are stats, 2 columns, 3 rows
- see figma for exact margin and padding

## Collection Menubar - Controls

- looks really similar compared to explore, only difference here is the collection offer, which we haven´t implemented yet, but there is made place for the future :D
- Items and Activity switcher - with checkmark as on explore, activity tab tbd
- section on mobile looks like this:

## Burger menu/sidebar
- really similar to the explore page one.
- sticky to the top
- There are trait selectors
- the button itself is showing how many options are in the dropdown
- basic check and then number of items having this trait

## Breadcrumbs

- buy now is by default
- then showing how many items are there
- option to clear all
- margin is 24px top and 24px bottom (from the text)
## Items
- applied dynamic grid
- #4444
- otherwise basic cards used on explore or landing page

- we can maybe customize them later but I think we can keep the collection name in there.
## Mobile

- hero items still in the banner container
- socials are collapsed to max 3 icons
- under banner is description and stats
- collapsed control section - almost like on explore
- smaller cards - same as on explore
- burger menu will work like this

## Dark mode
- Using same system for dark mode like on other pages, that means that borders are usually changing from black to white and the dropshadow as well, if something is not clear feel free to ping.
## Others
- whole page works like infinity scroll
- burger menu is sticky to the top
- no footer
Q&A
---
If you have any questions, please let me know.
I appreciate feedback and pointing up items I may need to look into.
Thanks!
|
1.0
|
Collection Redesign - Hey 5000th issue 🎂🥳
@exezbcz is author here,
@yangwao reserved special number.
Collection activity
---
- #5272
- #5286
Quick Brief
---
Here we go with collection
- Activity tab missing, will be finished parallelly with item view going live
- note that many parts are similar to to explore v0.9 - #4474
- we will use tag `?redesign=true` on beta again
Design
---
### [Figma design file](https://www.figma.com/file/7BWdGiowrvTucpSFLE0kRX/Collection-detail-handoff?node-id=0%3A1&t=rNneqkPiwAvLnES9-1)
Tasks
---
- [x] #5071
- [x] #5072
- [x] #5073
- [x] #5074
- [x] #5075
- [ ] #4911
- [x] #5076
---
<img src="https://media0.giphy.com/media/3ohhwzBss3LYGSuLPW/giphy.gif"/>
## Hero section
- big change: hero section is becoming one big banner, with the name, social buttons and profile picture on top.
- there should be also option to upload this banner, issue already created, I will try to add designs as soon as possible.
- #4983

- banner on desktop has 1440*560, it would be also recommended size for the customization - its also used on collection cards. It has quite close aspect ration
- over banner is applied black gradient with lowered opacity, see figma

## Hero Buttons:
- first section are buttons of the collection, mainly socials and website.
- Then there is share button and more options - they are divided to make clear that they are not associated with the specific collection
- Share button is pretty clear, more options button is in case visitor - report button, and in case of owner its going to be a customize and other functionalities we already offer.

## Under banner section
- Creator - blue link with address/identity
under which is collection description

right are stats, 2 columns, 3 rows
- see figma for exact margin and padding

## Collection Menubar - Controls

- looks really similar compared to explore, only difference here is the collection offer, which we haven´t implemented yet, but there is made place for the future :D
- Items and Activity switcher - with checkmark as on explore, activity tab tbd
- section on mobile looks like this:

## Burger menu/sidebar
- really similar to the explore page one.
- sticky to the top
- There are trait selectors
- the button itself is showing how many options are in the dropdown
- basic check and then number of items having this trait

## Breadcrumbs

- buy now is by default
- then showing how many items are there
- option to clear all
- margin is 24px top and 24px bottom (from the text)
## Items
- applied dynamic grid
- #4444
- otherwise basic cards used on explore or landing page

- we can maybe customize them later but I think we can keep the collection name in there.
## Mobile

- hero items still in the banner container
- socials are collapsed to max 3 icons
- under banner is description and stats
- collapsed control section - almost like on explore
- smaller cards - same as on explore
- burger menu will work like this

## Dark mode
- Using same system for dark mode like on other pages, that means that borders are usually changing from black to white and the dropshadow as well, if something is not clear feel free to ping.
## Others
- whole page works like infinity scroll
- burger menu is sticky to the top
- no footer
Q&A
---
If you have any questions, please let me know.
I appreciate feedback and pointing up items I may need to look into.
Thanks!
|
non_process
|
collection redesign hey issue 🎂🥳 exezbcz is author here yangwao reserved special number collection activity quick brief here we go with collection activity tab missing will be finished parallelly with item view going live note that many parts are similar to to explore we will use tag redesign true on beta again design tasks img src hero section big change hero section is becoming one big banner with the name social buttons and profile picture on top there should be also option to upload this banner issue already created i will try to add designs as soon as possible banner on desktop has it would be also recommended size for the customization its also used on collection cards it has quite close aspect ration over banner is applied black gradient with lowered opacity see figma hero buttons first section are buttons of the collection mainly socials and website then there is share button and more options they are divided to make clear that they are not associated with the specific collection share button is pretty clear more options button is in case visitor report button and in case of owner its going to be a customize and other functionalities we already offer under banner section creator blue link with address identity under which is collection description right are stats columns rows see figma for exact margin and padding collection menubar controls looks really similar compared to explore only difference here is the collection offer which we haven´t implemented yet but there is made place for the future d items and activity switcher with checkmark as on explore activity tab tbd section on mobile looks like this burger menu sidebar really similar to the explore page one sticky to the top there are trait selectors the button itself is showing how many options are in the dropdown basic check and then number of items having this trait breadcrumbs buy now is by default then showing how many items are there option to clear all margin is top and bottom from the text items applied dynamic grid otherwise basic cards used on explore or landing page we can maybe customize them later but i think we can keep the collection name in there mobile hero items still in the banner container socials are collapsed to max icons under banner is description and stats collapsed control section almost like on explore smaller cards same as on explore burger menu will work like this dark mode using same system for dark mode like on other pages that means that borders are usually changing from black to white and the dropshadow as well if something is not clear feel free to ping others whole page works like infinity scroll burger menu is sticky to the top no footer q a if you have any questions please let me know i appreciate feedback and pointing up items i may need to look into thanks
| 0
|
435,856
| 30,523,367,861
|
IssuesEvent
|
2023-07-19 09:33:31
|
SciTools/iris
|
https://api.github.com/repos/SciTools/iris
|
closed
|
Iris incorrectly merging variables with the same STASH code from pp file
|
Type: Bug Type: Documentation Feature: Merge/Concatenate
|
## 🐛 Bug Report
I have pp files that have two temperature variables with the same STASH code, but two different pressure domain profiles. One has 19 pressure levels and the other has 27 pressure levels. When viewing the file using xconv I can see the two variables listed separately.
Some of the values of the pressure coordinate are the same between the two profiles. When loading with `iris.load()`, these two variables are incorrectly merged with iris taking only the unique values in the pressure profile, and giving two duplicate variables with 34 pressure levels each (34 is the number of unique pressure values), instead of one with 19 and one with 27.
## How To Reproduce
Steps to reproduce the behaviour:
1. load pp file with `iris.load()`:
```
constraint = iris.AttributeConstraint(STASH="m01s30i294")
cubelist = iris.load(ppfile, constraints=constraint)
```
2. Output shown below:

The second cube in the cubelist is a duplicate of the first. The 34 pressure levels are the unique values out of the total 46 (19+27) across the two variables.
3. If I load with the following method instead:
```
with iris.fileformats.um.structured_um_loading():
cubelist = iris.load(ppfile, constraints=constraint)
````
I get the following output:

where pressure is now an auxiliary coordinate with 46 (19+27) pressure levels. So, using the structured_loading context the pressure levels are not merged. The variable I want is the one with 19 levels, so as a workaround for now I can just select the first 19 pressure levels and reassign pressure as a normal coordinate. This seems like a nasty bug that could catch people out, however.
## Expected behaviour
Expected `iris.load()` to produce a cubelist with two cubes, one with 19 pressure levels and one with 27.
## Environment
- OS & Version: CentOS Linux 7 (Core)
- Iris Version: 3.4.1
|
1.0
|
Iris incorrectly merging variables with the same STASH code from pp file - ## 🐛 Bug Report
I have pp files that have two temperature variables with the same STASH code, but two different pressure domain profiles. One has 19 pressure levels and the other has 27 pressure levels. When viewing the file using xconv I can see the two variables listed separately.
Some of the values of the pressure coordinate are the same between the two profiles. When loading with `iris.load()`, these two variables are incorrectly merged with iris taking only the unique values in the pressure profile, and giving two duplicate variables with 34 pressure levels each (34 is the number of unique pressure values), instead of one with 19 and one with 27.
## How To Reproduce
Steps to reproduce the behaviour:
1. load pp file with `iris.load()`:
```
constraint = iris.AttributeConstraint(STASH="m01s30i294")
cubelist = iris.load(ppfile, constraints=constraint)
```
2. Output shown below:

The second cube in the cubelist is a duplicate of the first. The 34 pressure levels are the unique values out of the total 46 (19+27) across the two variables.
3. If I load with the following method instead:
```
with iris.fileformats.um.structured_um_loading():
cubelist = iris.load(ppfile, constraints=constraint)
````
I get the following output:

where pressure is now an auxiliary coordinate with 46 (19+27) pressure levels. So, using the structured_loading context the pressure levels are not merged. The variable I want is the one with 19 levels, so as a workaround for now I can just select the first 19 pressure levels and reassign pressure as a normal coordinate. This seems like a nasty bug that could catch people out, however.
## Expected behaviour
Expected `iris.load()` to produce a cubelist with two cubes, one with 19 pressure levels and one with 27.
## Environment
- OS & Version: CentOS Linux 7 (Core)
- Iris Version: 3.4.1
|
non_process
|
iris incorrectly merging variables with the same stash code from pp file 🐛 bug report i have pp files that have two temperature variables with the same stash code but two different pressure domain profiles one has pressure levels and the other has pressure levels when viewing the file using xconv i can see the two variables listed separately some of the values of the pressure coordinate are the same between the two profiles when loading with iris load these two variables are incorrectly merged with iris taking only the unique values in the pressure profile and giving two duplicate variables with pressure levels each is the number of unique pressure values instead of one with and one with how to reproduce steps to reproduce the behaviour load pp file with iris load constraint iris attributeconstraint stash cubelist iris load ppfile constraints constraint output shown below the second cube in the cubelist is a duplicate of the first the pressure levels are the unique values out of the total across the two variables if i load with the following method instead with iris fileformats um structured um loading cubelist iris load ppfile constraints constraint i get the following output where pressure is now an auxiliary coordinate with pressure levels so using the structured loading context the pressure levels are not merged the variable i want is the one with levels so as a workaround for now i can just select the first pressure levels and reassign pressure as a normal coordinate this seems like a nasty bug that could catch people out however expected behaviour expected iris load to produce a cubelist with two cubes one with pressure levels and one with environment os version centos linux core iris version
| 0
|
2,491
| 5,267,387,166
|
IssuesEvent
|
2017-02-04 21:58:02
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
opened
|
[subtitles] [eng] MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney
|
Language: English Process: [0] Awaiting subtitles
|
# Video title
MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney
# URL
https://www.youtube.com/watch?v=jO8TCOMU2i8
# Youtube subtitles language
Langue des sous-titres (Anglais)
# Duration
36:27
# Subtitles URL
https://www.youtube.com/timedtext_editor?ref=player&forceedit=captions&ui=hd&tab=captions&v=jO8TCOMU2i8&lang=en&action_mde_edit_form=1&bl=vmp
|
1.0
|
[subtitles] [eng] MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney - # Video title
MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney
# URL
https://www.youtube.com/watch?v=jO8TCOMU2i8
# Youtube subtitles language
Langue des sous-titres (Anglais)
# Duration
36:27
# Subtitles URL
https://www.youtube.com/timedtext_editor?ref=player&forceedit=captions&ui=hd&tab=captions&v=jO8TCOMU2i8&lang=en&action_mde_edit_form=1&bl=vmp
|
process
|
mélenchon discours sur l abolition de l esclavage à champagney video title mélenchon discours sur l abolition de l esclavage à champagney url youtube subtitles language langue des sous titres anglais duration subtitles url
| 1
|
84,158
| 10,478,966,279
|
IssuesEvent
|
2019-09-24 02:15:43
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
opened
|
Block Navigator: Add ability to insert blocks from inside navigator
|
Block Navigation Menu Needs Design [Type] Enhancement
|

It would be useful for the Navigation block if users could Block Navigator could insert blocks from within the Block Navigator.
The primary use case is for adding Menu Items to a Navigation block.
|
1.0
|
Block Navigator: Add ability to insert blocks from inside navigator - 
It would be useful for the Navigation block if users could Block Navigator could insert blocks from within the Block Navigator.
The primary use case is for adding Menu Items to a Navigation block.
|
non_process
|
block navigator add ability to insert blocks from inside navigator it would be useful for the navigation block if users could block navigator could insert blocks from within the block navigator the primary use case is for adding menu items to a navigation block
| 0
|
5,286
| 2,610,184,718
|
IssuesEvent
|
2015-02-26 18:58:42
|
chrsmith/quchuseban
|
https://api.github.com/repos/chrsmith/quchuseban
|
opened
|
转载遗传色斑怎么去掉
|
auto-migrated Priority-Medium Type-Defect
|
```
《摘要》
那一夜,坐着缓慢行驶的轿车,走过,曾经最美好的画面,��
�数曾经你给我的感动,一个挺拔的身影,一张帅气的娃娃脸�
��牵着我的手,漫步在星空下,灯光洒落,身后印下相依快乐
的你我......我已将它埋入心底,谢谢你,希望你要的幸福她��
�以给你。那一夜,我对于这满脸的色斑达到了难于明喻的痛�
��!遗传色斑怎么去掉,
《客户案例》
在我的生活当中,要问我对自己最不满意的地方是什么��
�想想,就是我的一脸黄褐斑了。我的皮肤属于比较白的那种�
��生,在很小的时候,鼻梁就有一点黄褐斑,但那时斑是若隐
若现,不多也不明显。后来毕业工作了,因为做的是经常对��
�电脑的文员工作,电脑辐射本身对皮肤伤害很大,我平常又�
��有特别注意清洁皮肤,再加上日常饮食不注意,时常还熬夜
看书……种种原因,我的黄褐斑加重了,鼻梁上的斑也开始��
�散到面部。平常喜欢照镜子穿衣打扮的我,开始害怕仔细看�
��子里的自己。爱美的我,一想到自己脸上布满黄褐斑的情景
,就不寒而栗。</br>
后来一次偶然的机会,再一次跟客服交流的时候他知道��
�我的苦恼,于是他就笑着对我说你怎么不早点碰到我呢,最�
��他说你用黛芙薇尔试试吧,效果很不错的,他以前就是用黛
芙薇尔告别的斑点。回到家中我详细的在官网上了解了这款��
�品,据黛芙薇尔专家讲,之</br>
所以会长斑是体内内分泌紊乱,毒素排泄不畅,长期形��
�的斑。而黛芙薇尔祛斑就是通过内调外养的方法,涂抹调理�
��从皮肤深层着手,是从体内根本上改变身体自洁系统,可以
使机体有效消解异常黑色素,有效抑制黑色素的形成,清除��
�内自由基,抵抗氧化,还原皮肤真皮层弹力,达到美白祛斑�
��作用!。最后通过深入的了解我决定购买两个周期的黛芙薇��
�祛斑,最后我通过客服的引导买了两个周期的黛芙薇尔。</br
>
我刚使用到一个周期的时候我发现效果真的很不错,于��
�我就有定购了三个周期的,想彻底祛除这脸上的黄褐斑,然�
��在使用黛芙薇尔三个周期后,如今,我脸上的色斑不仅完全
去除了,而且感觉很滋润,不用洗面奶也仍然能保持清爽,��
�会太油腻了。真没想到黛芙薇尔还有这么好的效果,嘿嘿,�
��斑的朋友们,建议你们也尝试一下黛芙薇尔,效果真棒!在��
�推荐给大家。
阅读了遗传色斑怎么去掉,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
遗传色斑怎么去掉,同时为您分享祛斑小方法
1、冬虫夏草:真菌类植物,它有人体必需的维生素、蛋白质�
��脂肪、矿物质、纤维素。补肺益肾、补虚益心,安神养心,
改善人体发育迟缓,改善皮肤松弛、面部皱纹。
2、苦参:味苦性寒,入脾、心、肾经,消热,燥湿。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 3:25
|
1.0
|
转载遗传色斑怎么去掉 - ```
《摘要》
那一夜,坐着缓慢行驶的轿车,走过,曾经最美好的画面,��
�数曾经你给我的感动,一个挺拔的身影,一张帅气的娃娃脸�
��牵着我的手,漫步在星空下,灯光洒落,身后印下相依快乐
的你我......我已将它埋入心底,谢谢你,希望你要的幸福她��
�以给你。那一夜,我对于这满脸的色斑达到了难于明喻的痛�
��!遗传色斑怎么去掉,
《客户案例》
在我的生活当中,要问我对自己最不满意的地方是什么��
�想想,就是我的一脸黄褐斑了。我的皮肤属于比较白的那种�
��生,在很小的时候,鼻梁就有一点黄褐斑,但那时斑是若隐
若现,不多也不明显。后来毕业工作了,因为做的是经常对��
�电脑的文员工作,电脑辐射本身对皮肤伤害很大,我平常又�
��有特别注意清洁皮肤,再加上日常饮食不注意,时常还熬夜
看书……种种原因,我的黄褐斑加重了,鼻梁上的斑也开始��
�散到面部。平常喜欢照镜子穿衣打扮的我,开始害怕仔细看�
��子里的自己。爱美的我,一想到自己脸上布满黄褐斑的情景
,就不寒而栗。</br>
后来一次偶然的机会,再一次跟客服交流的时候他知道��
�我的苦恼,于是他就笑着对我说你怎么不早点碰到我呢,最�
��他说你用黛芙薇尔试试吧,效果很不错的,他以前就是用黛
芙薇尔告别的斑点。回到家中我详细的在官网上了解了这款��
�品,据黛芙薇尔专家讲,之</br>
所以会长斑是体内内分泌紊乱,毒素排泄不畅,长期形��
�的斑。而黛芙薇尔祛斑就是通过内调外养的方法,涂抹调理�
��从皮肤深层着手,是从体内根本上改变身体自洁系统,可以
使机体有效消解异常黑色素,有效抑制黑色素的形成,清除��
�内自由基,抵抗氧化,还原皮肤真皮层弹力,达到美白祛斑�
��作用!。最后通过深入的了解我决定购买两个周期的黛芙薇��
�祛斑,最后我通过客服的引导买了两个周期的黛芙薇尔。</br
>
我刚使用到一个周期的时候我发现效果真的很不错,于��
�我就有定购了三个周期的,想彻底祛除这脸上的黄褐斑,然�
��在使用黛芙薇尔三个周期后,如今,我脸上的色斑不仅完全
去除了,而且感觉很滋润,不用洗面奶也仍然能保持清爽,��
�会太油腻了。真没想到黛芙薇尔还有这么好的效果,嘿嘿,�
��斑的朋友们,建议你们也尝试一下黛芙薇尔,效果真棒!在��
�推荐给大家。
阅读了遗传色斑怎么去掉,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
遗传色斑怎么去掉,同时为您分享祛斑小方法
1、冬虫夏草:真菌类植物,它有人体必需的维生素、蛋白质�
��脂肪、矿物质、纤维素。补肺益肾、补虚益心,安神养心,
改善人体发育迟缓,改善皮肤松弛、面部皱纹。
2、苦参:味苦性寒,入脾、心、肾经,消热,燥湿。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 3:25
|
non_process
|
转载遗传色斑怎么去掉 《摘要》 那一夜,坐着缓慢行驶的轿车,走过,曾经最美好的画面,�� �数曾经你给我的感动,一个挺拔的身影,一张帅气的娃娃脸� ��牵着我的手,漫步在星空下,灯光洒落,身后印下相依快乐 的你我 我已将它埋入心底,谢谢你,希望你要的幸福她�� �以给你。那一夜,我对于这满脸的色斑达到了难于明喻的痛� ��!遗传色斑怎么去掉, 《客户案例》 在我的生活当中,要问我对自己最不满意的地方是什么�� �想想,就是我的一脸黄褐斑了。我的皮肤属于比较白的那种� ��生,在很小的时候,鼻梁就有一点黄褐斑,但那时斑是若隐 若现,不多也不明显。后来毕业工作了,因为做的是经常对�� �电脑的文员工作,电脑辐射本身对皮肤伤害很大,我平常又� ��有特别注意清洁皮肤,再加上日常饮食不注意,时常还熬夜 看书……种种原因,我的黄褐斑加重了,鼻梁上的斑也开始�� �散到面部。平常喜欢照镜子穿衣打扮的我,开始害怕仔细看� ��子里的自己。爱美的我,一想到自己脸上布满黄褐斑的情景 ,就不寒而栗。 后来一次偶然的机会,再一次跟客服交流的时候他知道�� �我的苦恼,于是他就笑着对我说你怎么不早点碰到我呢,最� ��他说你用黛芙薇尔试试吧,效果很不错的,他以前就是用黛 芙薇尔告别的斑点。回到家中我详细的在官网上了解了这款�� �品,据黛芙薇尔专家讲,之 所以会长斑是体内内分泌紊乱,毒素排泄不畅,长期形�� �的斑。而黛芙薇尔祛斑就是通过内调外养的方法,涂抹调理� ��从皮肤深层着手,是从体内根本上改变身体自洁系统,可以 使机体有效消解异常黑色素,有效抑制黑色素的形成,清除�� �内自由基,抵抗氧化,还原皮肤真皮层弹力,达到美白祛斑� ��作用 。最后通过深入的了解我决定购买两个周期的黛芙薇�� �祛斑,最后我通过客服的引导买了两个周期的黛芙薇尔。 br 我刚使用到一个周期的时候我发现效果真的很不错,于�� �我就有定购了三个周期的,想彻底祛除这脸上的黄褐斑,然� ��在使用黛芙薇尔三个周期后,如今,我脸上的色斑不仅完全 去除了,而且感觉很滋润,不用洗面奶也仍然能保持清爽,�� �会太油腻了。真没想到黛芙薇尔还有这么好的效果,嘿嘿,� ��斑的朋友们,建议你们也尝试一下黛芙薇尔,效果真棒 在�� �推荐给大家。 阅读了遗传色斑怎么去掉,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 遗传色斑怎么去掉,同时为您分享祛斑小方法 、冬虫夏草:真菌类植物,它有人体必需的维生素、蛋白质� ��脂肪、矿物质、纤维素。补肺益肾、补虚益心,安神养心, 改善人体发育迟缓,改善皮肤松弛、面部皱纹。 、苦参:味苦性寒,入脾、心、肾经,消热,燥湿。 original issue reported on code google com by additive gmail com on jul at
| 0
|
1,864
| 4,691,155,573
|
IssuesEvent
|
2016-10-11 09:32:07
|
CERNDocumentServer/cds
|
https://api.github.com/repos/CERNDocumentServer/cds
|
closed
|
Generate frames ` ffmpeg` (?)
|
avc_processing review
|
We want to generate video frames (still images) each ``X`` seconds or percent of the video and should have ``cli`` for generating the frames manually.
One option is to use the wrapper ``PyFFmpeg``.
|
1.0
|
Generate frames ` ffmpeg` (?) - We want to generate video frames (still images) each ``X`` seconds or percent of the video and should have ``cli`` for generating the frames manually.
One option is to use the wrapper ``PyFFmpeg``.
|
process
|
generate frames ffmpeg we want to generate video frames still images each x seconds or percent of the video and should have cli for generating the frames manually one option is to use the wrapper pyffmpeg
| 1
|
22,455
| 31,233,673,030
|
IssuesEvent
|
2023-08-20 02:00:06
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Thu, 17 Aug 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### SYENet: A Simple Yet Effective Network for Multiple Low-Level Vision Tasks with Real-time Performance on Mobile Device
- **Authors:** Weiran Gou, Ziyao Yi, Yan Xiang, Shaoqing Li, Zibin Liu, Dehui Kong, Ke Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2308.08137
- **Pdf link:** https://arxiv.org/pdf/2308.08137
- **Abstract**
With the rapid development of AI hardware accelerators, applying deep learning-based algorithms to solve various low-level vision tasks on mobile devices has gradually become possible. However, two main problems still need to be solved: task-specific algorithms make it difficult to integrate them into a single neural network architecture, and large amounts of parameters make it difficult to achieve real-time inference. To tackle these problems, we propose a novel network, SYENet, with only $~$6K parameters, to handle multiple low-level vision tasks on mobile devices in a real-time manner. The SYENet consists of two asymmetrical branches with simple building blocks. To effectively connect the results by asymmetrical branches, a Quadratic Connection Unit(QCU) is proposed. Furthermore, to improve performance, a new Outlier-Aware Loss is proposed to process the image. The proposed method proves its superior performance with the best PSNR as compared with other networks in real-time applications such as Image Signal Processing(ISP), Low-Light Enhancement(LLE), and Super-Resolution(SR) with 2K60FPS throughput on Qualcomm 8 Gen 1 mobile SoC(System-on-Chip). Particularly, for ISP task, SYENet got the highest score in MAI 2022 Learned Smartphone ISP challenge.
### Computer vision-enriched discrete choice models, with an application to residential location choice
- **Authors:** Sander van Cranenburgh, Francisco Garrido-Valenzuela
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Econometrics (econ.EM)
- **Arxiv link:** https://arxiv.org/abs/2308.08276
- **Pdf link:** https://arxiv.org/pdf/2308.08276
- **Abstract**
Visual imagery is indispensable to many multi-attribute decision situations. Examples of such decision situations in travel behaviour research include residential location choices, vehicle choices, tourist destination choices, and various safety-related choices. However, current discrete choice models cannot handle image data and thus cannot incorporate information embedded in images into their representations of choice behaviour. This gap between discrete choice models' capabilities and the real-world behaviour it seeks to model leads to incomplete and, possibly, misleading outcomes. To solve this gap, this study proposes "Computer Vision-enriched Discrete Choice Models" (CV-DCMs). CV-DCMs can handle choice tasks involving numeric attributes and images by integrating computer vision and traditional discrete choice models. Moreover, because CV-DCMs are grounded in random utility maximisation principles, they maintain the solid behavioural foundation of traditional discrete choice models. We demonstrate the proposed CV-DCM by applying it to data obtained through a novel stated choice experiment involving residential location choices. In this experiment, respondents faced choice tasks with trade-offs between commute time, monthly housing cost and street-level conditions, presented using images. As such, this research contributes to the growing body of literature in the travel behaviour field that seeks to integrate discrete choice modelling and machine learning.
### High-Fidelity Lake Extraction via Two-Stage Prompt Enhancement: Establishing a Novel Baseline and Benchmark
- **Authors:** Ben Chen, Xuechao Zou, Kai Li, Yu Zhang, Junliang Xing, Pin Tao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08443
- **Pdf link:** https://arxiv.org/pdf/2308.08443
- **Abstract**
The extraction of lakes from remote sensing images is a complex challenge due to the varied lake shapes and data noise. Current methods rely on multispectral image datasets, making it challenging to learn lake features accurately from pixel arrangements. This, in turn, affects model learning and the creation of accurate segmentation masks. This paper introduces a unified prompt-based dataset construction approach that provides approximate lake locations using point, box, and mask prompts. We also propose a two-stage prompt enhancement framework, LEPrompter, which involves prompt-based and prompt-free stages during training. The prompt-based stage employs a prompt encoder to extract prior information, integrating prompt tokens and image embeddings through self- and cross-attention in the prompt decoder. Prompts are deactivated once the model is trained to ensure independence during inference, enabling automated lake extraction. Evaluations on Surface Water and Qinghai-Tibet Plateau Lake datasets show consistent performance improvements compared to the previous state-of-the-art method. LEPrompter achieves mIoU scores of 91.48% and 97.43% on the respective datasets without introducing additional parameters or GFLOPs. Supplementary materials provide the source code, pre-trained models, and detailed user studies.
## Keyword: image signal processing
### SYENet: A Simple Yet Effective Network for Multiple Low-Level Vision Tasks with Real-time Performance on Mobile Device
- **Authors:** Weiran Gou, Ziyao Yi, Yan Xiang, Shaoqing Li, Zibin Liu, Dehui Kong, Ke Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2308.08137
- **Pdf link:** https://arxiv.org/pdf/2308.08137
- **Abstract**
With the rapid development of AI hardware accelerators, applying deep learning-based algorithms to solve various low-level vision tasks on mobile devices has gradually become possible. However, two main problems still need to be solved: task-specific algorithms make it difficult to integrate them into a single neural network architecture, and large amounts of parameters make it difficult to achieve real-time inference. To tackle these problems, we propose a novel network, SYENet, with only $~$6K parameters, to handle multiple low-level vision tasks on mobile devices in a real-time manner. The SYENet consists of two asymmetrical branches with simple building blocks. To effectively connect the results by asymmetrical branches, a Quadratic Connection Unit(QCU) is proposed. Furthermore, to improve performance, a new Outlier-Aware Loss is proposed to process the image. The proposed method proves its superior performance with the best PSNR as compared with other networks in real-time applications such as Image Signal Processing(ISP), Low-Light Enhancement(LLE), and Super-Resolution(SR) with 2K60FPS throughput on Qualcomm 8 Gen 1 mobile SoC(System-on-Chip). Particularly, for ISP task, SYENet got the highest score in MAI 2022 Learned Smartphone ISP challenge.
## Keyword: image signal process
### SYENet: A Simple Yet Effective Network for Multiple Low-Level Vision Tasks with Real-time Performance on Mobile Device
- **Authors:** Weiran Gou, Ziyao Yi, Yan Xiang, Shaoqing Li, Zibin Liu, Dehui Kong, Ke Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2308.08137
- **Pdf link:** https://arxiv.org/pdf/2308.08137
- **Abstract**
With the rapid development of AI hardware accelerators, applying deep learning-based algorithms to solve various low-level vision tasks on mobile devices has gradually become possible. However, two main problems still need to be solved: task-specific algorithms make it difficult to integrate them into a single neural network architecture, and large amounts of parameters make it difficult to achieve real-time inference. To tackle these problems, we propose a novel network, SYENet, with only $~$6K parameters, to handle multiple low-level vision tasks on mobile devices in a real-time manner. The SYENet consists of two asymmetrical branches with simple building blocks. To effectively connect the results by asymmetrical branches, a Quadratic Connection Unit(QCU) is proposed. Furthermore, to improve performance, a new Outlier-Aware Loss is proposed to process the image. The proposed method proves its superior performance with the best PSNR as compared with other networks in real-time applications such as Image Signal Processing(ISP), Low-Light Enhancement(LLE), and Super-Resolution(SR) with 2K60FPS throughput on Qualcomm 8 Gen 1 mobile SoC(System-on-Chip). Particularly, for ISP task, SYENet got the highest score in MAI 2022 Learned Smartphone ISP challenge.
## Keyword: compression
### Shortcut-V2V: Compression Framework for Video-to-Video Translation based on Temporal Redundancy Reduction
- **Authors:** Chaeyeon Chung, Yeojeong Park, Seunghwan Choi, Munkhsoyol Ganbat, Jaegul Choo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08011
- **Pdf link:** https://arxiv.org/pdf/2308.08011
- **Abstract**
Video-to-video translation aims to generate video frames of a target domain from an input video. Despite its usefulness, the existing networks require enormous computations, necessitating their model compression for wide use. While there exist compression methods that improve computational efficiency in various image/video tasks, a generally-applicable compression method for video-to-video translation has not been studied much. In response, we present Shortcut-V2V, a general-purpose compression framework for video-to-video translation. Shourcut-V2V avoids full inference for every neighboring video frame by approximating the intermediate features of a current frame from those of the previous frame. Moreover, in our framework, a newly-proposed block called AdaBD adaptively blends and deforms features of neighboring frames, which makes more accurate predictions of the intermediate features possible. We conduct quantitative and qualitative evaluations using well-known video-to-video translation models on various tasks to demonstrate the general applicability of our framework. The results show that Shourcut-V2V achieves comparable performance compared to the original video-to-video translation model while saving 3.2-5.7x computational cost and 7.8-44x memory at test time.
## Keyword: RAW
### Unsupervised Domain Adaptive Detection with Network Stability Analysis
- **Authors:** Wenzhang Zhou, Heng Fan, Tiejian Luo, Libo Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08182
- **Pdf link:** https://arxiv.org/pdf/2308.08182
- **Abstract**
Domain adaptive detection aims to improve the generality of a detector, learned from the labeled source domain, on the unlabeled target domain. In this work, drawing inspiration from the concept of stability from the control theory that a robust system requires to remain consistent both externally and internally regardless of disturbances, we propose a novel framework that achieves unsupervised domain adaptive detection through stability analysis. In specific, we treat discrepancies between images and regions from different domains as disturbances, and introduce a novel simple but effective Network Stability Analysis (NSA) framework that considers various disturbances for domain adaptation. Particularly, we explore three types of perturbations including heavy and light image-level disturbances and instancelevel disturbance. For each type, NSA performs external consistency analysis on the outputs from raw and perturbed images and/or internal consistency analysis on their features, using teacher-student models. By integrating NSA into Faster R-CNN, we immediately achieve state-of-the-art results. In particular, we set a new record of 52.7% mAP on Cityscapes-to-FoggyCityscapes, showing the potential of NSA for domain adaptive detection. It is worth noticing, our NSA is designed for general purpose, and thus applicable to one-stage detection model (e.g., FCOS) besides the adopted one, as shown by experiments. https://github.com/tiankongzhang/NSA.
### Stable and Causal Inference for Discriminative Self-supervised Deep Visual Representations
- **Authors:** Yuewei Yang, Hai Li, Yiran Chen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08321
- **Pdf link:** https://arxiv.org/pdf/2308.08321
- **Abstract**
In recent years, discriminative self-supervised methods have made significant strides in advancing various visual tasks. The central idea of learning a data encoder that is robust to data distortions/augmentations is straightforward yet highly effective. Although many studies have demonstrated the empirical success of various learning methods, the resulting learned representations can exhibit instability and hinder downstream performance. In this study, we analyze discriminative self-supervised methods from a causal perspective to explain these unstable behaviors and propose solutions to overcome them. Our approach draws inspiration from prior works that empirically demonstrate the ability of discriminative self-supervised methods to demix ground truth causal sources to some extent. Unlike previous work on causality-empowered representation learning, we do not apply our solutions during the training process but rather during the inference process to improve time efficiency. Through experiments on both controlled image datasets and realistic image datasets, we show that our proposed solutions, which involve tempering a linear transformation with controlled synthetic data, are effective in addressing these issues.
### Visually-Aware Context Modeling for News Image Captioning
- **Authors:** Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08325
- **Pdf link:** https://arxiv.org/pdf/2308.08325
- **Abstract**
The goal of News Image Captioning is to generate an image caption according to the content of both a news article and an image. To leverage the visual information effectively, it is important to exploit the connection between the context in the articles/captions and the images. Psychological studies indicate that human faces in images draw higher attention priorities. On top of that, humans often play a central role in news stories, as also proven by the face-name co-occurrence pattern we discover in existing News Image Captioning datasets. Therefore, we design a face-naming module for faces in images and names in captions/articles to learn a better name embedding. Apart from names, which can be directly linked to an image area (faces), news image captions mostly contain context information that can only be found in the article. Humans typically address this by searching for relevant information from the article based on the image. To emulate this thought process, we design a retrieval strategy using CLIP to retrieve sentences that are semantically close to the image. We conduct extensive experiments to demonstrate the efficacy of our framework. Without using additional paired data, we establish the new state-of-the-art performance on two News Image Captioning datasets, exceeding the previous state-of-the-art by 5 CIDEr points. We will release code upon acceptance.
### AdaBrowse: Adaptive Video Browser for Efficient Continuous Sign Language Recognition
- **Authors:** Lianyu Hu, Liqing Gao, Zekang Liu, Chi-Man Pun, Wei Feng
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08327
- **Pdf link:** https://arxiv.org/pdf/2308.08327
- **Abstract**
Raw videos have been proven to own considerable feature redundancy where in many cases only a portion of frames can already meet the requirements for accurate recognition. In this paper, we are interested in whether such redundancy can be effectively leveraged to facilitate efficient inference in continuous sign language recognition (CSLR). We propose a novel adaptive model (AdaBrowse) to dynamically select a most informative subsequence from input video sequences by modelling this problem as a sequential decision task. In specific, we first utilize a lightweight network to quickly scan input videos to extract coarse features. Then these features are fed into a policy network to intelligently select a subsequence to process. The corresponding subsequence is finally inferred by a normal CSLR model for sentence prediction. As only a portion of frames are processed in this procedure, the total computations can be considerably saved. Besides temporal redundancy, we are also interested in whether the inherent spatial redundancy can be seamlessly integrated together to achieve further efficiency, i.e., dynamically selecting a lowest input resolution for each sample, whose model is referred to as AdaBrowse+. Extensive experimental results on four large-scale CSLR datasets, i.e., PHOENIX14, PHOENIX14-T, CSL-Daily and CSL, demonstrate the effectiveness of AdaBrowse and AdaBrowse+ by achieving comparable accuracy with state-of-the-art methods with 1.44$\times$ throughput and 2.12$\times$ fewer FLOPs. Comparisons with other commonly-used 2D CNNs and adaptive efficient methods verify the effectiveness of AdaBrowse. Code is available at \url{https://github.com/hulianyuyy/AdaBrowse}.
### ALIP: Adaptive Language-Image Pre-training with Synthetic Caption
- **Authors:** Kaicheng Yang, Jiankang Deng, Xiang An, Jiawei Li, Ziyong Feng, Jia Guo, Jing Yang, Tongliang Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08428
- **Pdf link:** https://arxiv.org/pdf/2308.08428
- **Abstract**
Contrastive Language-Image Pre-training (CLIP) has significantly boosted the performance of various vision-language tasks by scaling up the dataset with image-text pairs collected from the web. However, the presence of intrinsic noise and unmatched image-text pairs in web data can potentially affect the performance of representation learning. To address this issue, we first utilize the OFA model to generate synthetic captions that focus on the image content. The generated captions contain complementary information that is beneficial for pre-training. Then, we propose an Adaptive Language-Image Pre-training (ALIP), a bi-path model that integrates supervision from both raw text and synthetic caption. As the core components of ALIP, the Language Consistency Gate (LCG) and Description Consistency Gate (DCG) dynamically adjust the weights of samples and image-text/caption pairs during the training process. Meanwhile, the adaptive contrastive loss can effectively reduce the impact of noise data and enhances the efficiency of pre-training data. We validate ALIP with experiments on different scales of models and pre-training datasets. Experiments results show that ALIP achieves state-of-the-art performance on multiple downstream tasks including zero-shot image-text retrieval and linear probe. To facilitate future research, the code and pre-trained models are released at https://github.com/deepglint/ALIP.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Thu, 17 Aug 23 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### SYENet: A Simple Yet Effective Network for Multiple Low-Level Vision Tasks with Real-time Performance on Mobile Device
- **Authors:** Weiran Gou, Ziyao Yi, Yan Xiang, Shaoqing Li, Zibin Liu, Dehui Kong, Ke Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2308.08137
- **Pdf link:** https://arxiv.org/pdf/2308.08137
- **Abstract**
With the rapid development of AI hardware accelerators, applying deep learning-based algorithms to solve various low-level vision tasks on mobile devices has gradually become possible. However, two main problems still need to be solved: task-specific algorithms make it difficult to integrate them into a single neural network architecture, and large amounts of parameters make it difficult to achieve real-time inference. To tackle these problems, we propose a novel network, SYENet, with only $~$6K parameters, to handle multiple low-level vision tasks on mobile devices in a real-time manner. The SYENet consists of two asymmetrical branches with simple building blocks. To effectively connect the results by asymmetrical branches, a Quadratic Connection Unit(QCU) is proposed. Furthermore, to improve performance, a new Outlier-Aware Loss is proposed to process the image. The proposed method proves its superior performance with the best PSNR as compared with other networks in real-time applications such as Image Signal Processing(ISP), Low-Light Enhancement(LLE), and Super-Resolution(SR) with 2K60FPS throughput on Qualcomm 8 Gen 1 mobile SoC(System-on-Chip). Particularly, for ISP task, SYENet got the highest score in MAI 2022 Learned Smartphone ISP challenge.
### Computer vision-enriched discrete choice models, with an application to residential location choice
- **Authors:** Sander van Cranenburgh, Francisco Garrido-Valenzuela
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Econometrics (econ.EM)
- **Arxiv link:** https://arxiv.org/abs/2308.08276
- **Pdf link:** https://arxiv.org/pdf/2308.08276
- **Abstract**
Visual imagery is indispensable to many multi-attribute decision situations. Examples of such decision situations in travel behaviour research include residential location choices, vehicle choices, tourist destination choices, and various safety-related choices. However, current discrete choice models cannot handle image data and thus cannot incorporate information embedded in images into their representations of choice behaviour. This gap between discrete choice models' capabilities and the real-world behaviour it seeks to model leads to incomplete and, possibly, misleading outcomes. To solve this gap, this study proposes "Computer Vision-enriched Discrete Choice Models" (CV-DCMs). CV-DCMs can handle choice tasks involving numeric attributes and images by integrating computer vision and traditional discrete choice models. Moreover, because CV-DCMs are grounded in random utility maximisation principles, they maintain the solid behavioural foundation of traditional discrete choice models. We demonstrate the proposed CV-DCM by applying it to data obtained through a novel stated choice experiment involving residential location choices. In this experiment, respondents faced choice tasks with trade-offs between commute time, monthly housing cost and street-level conditions, presented using images. As such, this research contributes to the growing body of literature in the travel behaviour field that seeks to integrate discrete choice modelling and machine learning.
### High-Fidelity Lake Extraction via Two-Stage Prompt Enhancement: Establishing a Novel Baseline and Benchmark
- **Authors:** Ben Chen, Xuechao Zou, Kai Li, Yu Zhang, Junliang Xing, Pin Tao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08443
- **Pdf link:** https://arxiv.org/pdf/2308.08443
- **Abstract**
The extraction of lakes from remote sensing images is a complex challenge due to the varied lake shapes and data noise. Current methods rely on multispectral image datasets, making it challenging to learn lake features accurately from pixel arrangements. This, in turn, affects model learning and the creation of accurate segmentation masks. This paper introduces a unified prompt-based dataset construction approach that provides approximate lake locations using point, box, and mask prompts. We also propose a two-stage prompt enhancement framework, LEPrompter, which involves prompt-based and prompt-free stages during training. The prompt-based stage employs a prompt encoder to extract prior information, integrating prompt tokens and image embeddings through self- and cross-attention in the prompt decoder. Prompts are deactivated once the model is trained to ensure independence during inference, enabling automated lake extraction. Evaluations on Surface Water and Qinghai-Tibet Plateau Lake datasets show consistent performance improvements compared to the previous state-of-the-art method. LEPrompter achieves mIoU scores of 91.48% and 97.43% on the respective datasets without introducing additional parameters or GFLOPs. Supplementary materials provide the source code, pre-trained models, and detailed user studies.
## Keyword: image signal processing
### SYENet: A Simple Yet Effective Network for Multiple Low-Level Vision Tasks with Real-time Performance on Mobile Device
- **Authors:** Weiran Gou, Ziyao Yi, Yan Xiang, Shaoqing Li, Zibin Liu, Dehui Kong, Ke Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2308.08137
- **Pdf link:** https://arxiv.org/pdf/2308.08137
- **Abstract**
With the rapid development of AI hardware accelerators, applying deep learning-based algorithms to solve various low-level vision tasks on mobile devices has gradually become possible. However, two main problems still need to be solved: task-specific algorithms make it difficult to integrate them into a single neural network architecture, and large amounts of parameters make it difficult to achieve real-time inference. To tackle these problems, we propose a novel network, SYENet, with only $~$6K parameters, to handle multiple low-level vision tasks on mobile devices in a real-time manner. The SYENet consists of two asymmetrical branches with simple building blocks. To effectively connect the results by asymmetrical branches, a Quadratic Connection Unit(QCU) is proposed. Furthermore, to improve performance, a new Outlier-Aware Loss is proposed to process the image. The proposed method proves its superior performance with the best PSNR as compared with other networks in real-time applications such as Image Signal Processing(ISP), Low-Light Enhancement(LLE), and Super-Resolution(SR) with 2K60FPS throughput on Qualcomm 8 Gen 1 mobile SoC(System-on-Chip). Particularly, for ISP task, SYENet got the highest score in MAI 2022 Learned Smartphone ISP challenge.
## Keyword: image signal process
### SYENet: A Simple Yet Effective Network for Multiple Low-Level Vision Tasks with Real-time Performance on Mobile Device
- **Authors:** Weiran Gou, Ziyao Yi, Yan Xiang, Shaoqing Li, Zibin Liu, Dehui Kong, Ke Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2308.08137
- **Pdf link:** https://arxiv.org/pdf/2308.08137
- **Abstract**
With the rapid development of AI hardware accelerators, applying deep learning-based algorithms to solve various low-level vision tasks on mobile devices has gradually become possible. However, two main problems still need to be solved: task-specific algorithms make it difficult to integrate them into a single neural network architecture, and large amounts of parameters make it difficult to achieve real-time inference. To tackle these problems, we propose a novel network, SYENet, with only $~$6K parameters, to handle multiple low-level vision tasks on mobile devices in a real-time manner. The SYENet consists of two asymmetrical branches with simple building blocks. To effectively connect the results by asymmetrical branches, a Quadratic Connection Unit(QCU) is proposed. Furthermore, to improve performance, a new Outlier-Aware Loss is proposed to process the image. The proposed method proves its superior performance with the best PSNR as compared with other networks in real-time applications such as Image Signal Processing(ISP), Low-Light Enhancement(LLE), and Super-Resolution(SR) with 2K60FPS throughput on Qualcomm 8 Gen 1 mobile SoC(System-on-Chip). Particularly, for ISP task, SYENet got the highest score in MAI 2022 Learned Smartphone ISP challenge.
## Keyword: compression
### Shortcut-V2V: Compression Framework for Video-to-Video Translation based on Temporal Redundancy Reduction
- **Authors:** Chaeyeon Chung, Yeojeong Park, Seunghwan Choi, Munkhsoyol Ganbat, Jaegul Choo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08011
- **Pdf link:** https://arxiv.org/pdf/2308.08011
- **Abstract**
Video-to-video translation aims to generate video frames of a target domain from an input video. Despite its usefulness, the existing networks require enormous computations, necessitating their model compression for wide use. While there exist compression methods that improve computational efficiency in various image/video tasks, a generally-applicable compression method for video-to-video translation has not been studied much. In response, we present Shortcut-V2V, a general-purpose compression framework for video-to-video translation. Shourcut-V2V avoids full inference for every neighboring video frame by approximating the intermediate features of a current frame from those of the previous frame. Moreover, in our framework, a newly-proposed block called AdaBD adaptively blends and deforms features of neighboring frames, which makes more accurate predictions of the intermediate features possible. We conduct quantitative and qualitative evaluations using well-known video-to-video translation models on various tasks to demonstrate the general applicability of our framework. The results show that Shourcut-V2V achieves comparable performance compared to the original video-to-video translation model while saving 3.2-5.7x computational cost and 7.8-44x memory at test time.
## Keyword: RAW
### Unsupervised Domain Adaptive Detection with Network Stability Analysis
- **Authors:** Wenzhang Zhou, Heng Fan, Tiejian Luo, Libo Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08182
- **Pdf link:** https://arxiv.org/pdf/2308.08182
- **Abstract**
Domain adaptive detection aims to improve the generality of a detector, learned from the labeled source domain, on the unlabeled target domain. In this work, drawing inspiration from the concept of stability from the control theory that a robust system requires to remain consistent both externally and internally regardless of disturbances, we propose a novel framework that achieves unsupervised domain adaptive detection through stability analysis. In specific, we treat discrepancies between images and regions from different domains as disturbances, and introduce a novel simple but effective Network Stability Analysis (NSA) framework that considers various disturbances for domain adaptation. Particularly, we explore three types of perturbations including heavy and light image-level disturbances and instancelevel disturbance. For each type, NSA performs external consistency analysis on the outputs from raw and perturbed images and/or internal consistency analysis on their features, using teacher-student models. By integrating NSA into Faster R-CNN, we immediately achieve state-of-the-art results. In particular, we set a new record of 52.7% mAP on Cityscapes-to-FoggyCityscapes, showing the potential of NSA for domain adaptive detection. It is worth noticing, our NSA is designed for general purpose, and thus applicable to one-stage detection model (e.g., FCOS) besides the adopted one, as shown by experiments. https://github.com/tiankongzhang/NSA.
### Stable and Causal Inference for Discriminative Self-supervised Deep Visual Representations
- **Authors:** Yuewei Yang, Hai Li, Yiran Chen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08321
- **Pdf link:** https://arxiv.org/pdf/2308.08321
- **Abstract**
In recent years, discriminative self-supervised methods have made significant strides in advancing various visual tasks. The central idea of learning a data encoder that is robust to data distortions/augmentations is straightforward yet highly effective. Although many studies have demonstrated the empirical success of various learning methods, the resulting learned representations can exhibit instability and hinder downstream performance. In this study, we analyze discriminative self-supervised methods from a causal perspective to explain these unstable behaviors and propose solutions to overcome them. Our approach draws inspiration from prior works that empirically demonstrate the ability of discriminative self-supervised methods to demix ground truth causal sources to some extent. Unlike previous work on causality-empowered representation learning, we do not apply our solutions during the training process but rather during the inference process to improve time efficiency. Through experiments on both controlled image datasets and realistic image datasets, we show that our proposed solutions, which involve tempering a linear transformation with controlled synthetic data, are effective in addressing these issues.
### Visually-Aware Context Modeling for News Image Captioning
- **Authors:** Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08325
- **Pdf link:** https://arxiv.org/pdf/2308.08325
- **Abstract**
The goal of News Image Captioning is to generate an image caption according to the content of both a news article and an image. To leverage the visual information effectively, it is important to exploit the connection between the context in the articles/captions and the images. Psychological studies indicate that human faces in images draw higher attention priorities. On top of that, humans often play a central role in news stories, as also proven by the face-name co-occurrence pattern we discover in existing News Image Captioning datasets. Therefore, we design a face-naming module for faces in images and names in captions/articles to learn a better name embedding. Apart from names, which can be directly linked to an image area (faces), news image captions mostly contain context information that can only be found in the article. Humans typically address this by searching for relevant information from the article based on the image. To emulate this thought process, we design a retrieval strategy using CLIP to retrieve sentences that are semantically close to the image. We conduct extensive experiments to demonstrate the efficacy of our framework. Without using additional paired data, we establish the new state-of-the-art performance on two News Image Captioning datasets, exceeding the previous state-of-the-art by 5 CIDEr points. We will release code upon acceptance.
### AdaBrowse: Adaptive Video Browser for Efficient Continuous Sign Language Recognition
- **Authors:** Lianyu Hu, Liqing Gao, Zekang Liu, Chi-Man Pun, Wei Feng
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08327
- **Pdf link:** https://arxiv.org/pdf/2308.08327
- **Abstract**
Raw videos have been proven to own considerable feature redundancy where in many cases only a portion of frames can already meet the requirements for accurate recognition. In this paper, we are interested in whether such redundancy can be effectively leveraged to facilitate efficient inference in continuous sign language recognition (CSLR). We propose a novel adaptive model (AdaBrowse) to dynamically select a most informative subsequence from input video sequences by modelling this problem as a sequential decision task. In specific, we first utilize a lightweight network to quickly scan input videos to extract coarse features. Then these features are fed into a policy network to intelligently select a subsequence to process. The corresponding subsequence is finally inferred by a normal CSLR model for sentence prediction. As only a portion of frames are processed in this procedure, the total computations can be considerably saved. Besides temporal redundancy, we are also interested in whether the inherent spatial redundancy can be seamlessly integrated together to achieve further efficiency, i.e., dynamically selecting a lowest input resolution for each sample, whose model is referred to as AdaBrowse+. Extensive experimental results on four large-scale CSLR datasets, i.e., PHOENIX14, PHOENIX14-T, CSL-Daily and CSL, demonstrate the effectiveness of AdaBrowse and AdaBrowse+ by achieving comparable accuracy with state-of-the-art methods with 1.44$\times$ throughput and 2.12$\times$ fewer FLOPs. Comparisons with other commonly-used 2D CNNs and adaptive efficient methods verify the effectiveness of AdaBrowse. Code is available at \url{https://github.com/hulianyuyy/AdaBrowse}.
### ALIP: Adaptive Language-Image Pre-training with Synthetic Caption
- **Authors:** Kaicheng Yang, Jiankang Deng, Xiang An, Jiawei Li, Ziyong Feng, Jia Guo, Jing Yang, Tongliang Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.08428
- **Pdf link:** https://arxiv.org/pdf/2308.08428
- **Abstract**
Contrastive Language-Image Pre-training (CLIP) has significantly boosted the performance of various vision-language tasks by scaling up the dataset with image-text pairs collected from the web. However, the presence of intrinsic noise and unmatched image-text pairs in web data can potentially affect the performance of representation learning. To address this issue, we first utilize the OFA model to generate synthetic captions that focus on the image content. The generated captions contain complementary information that is beneficial for pre-training. Then, we propose an Adaptive Language-Image Pre-training (ALIP), a bi-path model that integrates supervision from both raw text and synthetic caption. As the core components of ALIP, the Language Consistency Gate (LCG) and Description Consistency Gate (DCG) dynamically adjust the weights of samples and image-text/caption pairs during the training process. Meanwhile, the adaptive contrastive loss can effectively reduce the impact of noise data and enhances the efficiency of pre-training data. We validate ALIP with experiments on different scales of models and pre-training datasets. Experiments results show that ALIP achieves state-of-the-art performance on multiple downstream tasks including zero-shot image-text retrieval and linear probe. To facilitate future research, the code and pre-trained models are released at https://github.com/deepglint/ALIP.
## Keyword: raw image
There is no result
|
process
|
new submissions for thu aug keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp syenet a simple yet effective network for multiple low level vision tasks with real time performance on mobile device authors weiran gou ziyao yi yan xiang shaoqing li zibin liu dehui kong ke xu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract with the rapid development of ai hardware accelerators applying deep learning based algorithms to solve various low level vision tasks on mobile devices has gradually become possible however two main problems still need to be solved task specific algorithms make it difficult to integrate them into a single neural network architecture and large amounts of parameters make it difficult to achieve real time inference to tackle these problems we propose a novel network syenet with only parameters to handle multiple low level vision tasks on mobile devices in a real time manner the syenet consists of two asymmetrical branches with simple building blocks to effectively connect the results by asymmetrical branches a quadratic connection unit qcu is proposed furthermore to improve performance a new outlier aware loss is proposed to process the image the proposed method proves its superior performance with the best psnr as compared with other networks in real time applications such as image signal processing isp low light enhancement lle and super resolution sr with throughput on qualcomm gen mobile soc system on chip particularly for isp task syenet got the highest score in mai learned smartphone isp challenge computer vision enriched discrete choice models with an application to residential location choice authors sander van cranenburgh francisco garrido valenzuela subjects computer vision and pattern recognition cs cv econometrics econ em arxiv link pdf link abstract visual imagery is indispensable to many multi attribute decision situations examples of such decision situations in travel behaviour research include residential location choices vehicle choices tourist destination choices and various safety related choices however current discrete choice models cannot handle image data and thus cannot incorporate information embedded in images into their representations of choice behaviour this gap between discrete choice models capabilities and the real world behaviour it seeks to model leads to incomplete and possibly misleading outcomes to solve this gap this study proposes computer vision enriched discrete choice models cv dcms cv dcms can handle choice tasks involving numeric attributes and images by integrating computer vision and traditional discrete choice models moreover because cv dcms are grounded in random utility maximisation principles they maintain the solid behavioural foundation of traditional discrete choice models we demonstrate the proposed cv dcm by applying it to data obtained through a novel stated choice experiment involving residential location choices in this experiment respondents faced choice tasks with trade offs between commute time monthly housing cost and street level conditions presented using images as such this research contributes to the growing body of literature in the travel behaviour field that seeks to integrate discrete choice modelling and machine learning high fidelity lake extraction via two stage prompt enhancement establishing a novel baseline and benchmark authors ben chen xuechao zou kai li yu zhang junliang xing pin tao subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the extraction of lakes from remote sensing images is a complex challenge due to the varied lake shapes and data noise current methods rely on multispectral image datasets making it challenging to learn lake features accurately from pixel arrangements this in turn affects model learning and the creation of accurate segmentation masks this paper introduces a unified prompt based dataset construction approach that provides approximate lake locations using point box and mask prompts we also propose a two stage prompt enhancement framework leprompter which involves prompt based and prompt free stages during training the prompt based stage employs a prompt encoder to extract prior information integrating prompt tokens and image embeddings through self and cross attention in the prompt decoder prompts are deactivated once the model is trained to ensure independence during inference enabling automated lake extraction evaluations on surface water and qinghai tibet plateau lake datasets show consistent performance improvements compared to the previous state of the art method leprompter achieves miou scores of and on the respective datasets without introducing additional parameters or gflops supplementary materials provide the source code pre trained models and detailed user studies keyword image signal processing syenet a simple yet effective network for multiple low level vision tasks with real time performance on mobile device authors weiran gou ziyao yi yan xiang shaoqing li zibin liu dehui kong ke xu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract with the rapid development of ai hardware accelerators applying deep learning based algorithms to solve various low level vision tasks on mobile devices has gradually become possible however two main problems still need to be solved task specific algorithms make it difficult to integrate them into a single neural network architecture and large amounts of parameters make it difficult to achieve real time inference to tackle these problems we propose a novel network syenet with only parameters to handle multiple low level vision tasks on mobile devices in a real time manner the syenet consists of two asymmetrical branches with simple building blocks to effectively connect the results by asymmetrical branches a quadratic connection unit qcu is proposed furthermore to improve performance a new outlier aware loss is proposed to process the image the proposed method proves its superior performance with the best psnr as compared with other networks in real time applications such as image signal processing isp low light enhancement lle and super resolution sr with throughput on qualcomm gen mobile soc system on chip particularly for isp task syenet got the highest score in mai learned smartphone isp challenge keyword image signal process syenet a simple yet effective network for multiple low level vision tasks with real time performance on mobile device authors weiran gou ziyao yi yan xiang shaoqing li zibin liu dehui kong ke xu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract with the rapid development of ai hardware accelerators applying deep learning based algorithms to solve various low level vision tasks on mobile devices has gradually become possible however two main problems still need to be solved task specific algorithms make it difficult to integrate them into a single neural network architecture and large amounts of parameters make it difficult to achieve real time inference to tackle these problems we propose a novel network syenet with only parameters to handle multiple low level vision tasks on mobile devices in a real time manner the syenet consists of two asymmetrical branches with simple building blocks to effectively connect the results by asymmetrical branches a quadratic connection unit qcu is proposed furthermore to improve performance a new outlier aware loss is proposed to process the image the proposed method proves its superior performance with the best psnr as compared with other networks in real time applications such as image signal processing isp low light enhancement lle and super resolution sr with throughput on qualcomm gen mobile soc system on chip particularly for isp task syenet got the highest score in mai learned smartphone isp challenge keyword compression shortcut compression framework for video to video translation based on temporal redundancy reduction authors chaeyeon chung yeojeong park seunghwan choi munkhsoyol ganbat jaegul choo subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract video to video translation aims to generate video frames of a target domain from an input video despite its usefulness the existing networks require enormous computations necessitating their model compression for wide use while there exist compression methods that improve computational efficiency in various image video tasks a generally applicable compression method for video to video translation has not been studied much in response we present shortcut a general purpose compression framework for video to video translation shourcut avoids full inference for every neighboring video frame by approximating the intermediate features of a current frame from those of the previous frame moreover in our framework a newly proposed block called adabd adaptively blends and deforms features of neighboring frames which makes more accurate predictions of the intermediate features possible we conduct quantitative and qualitative evaluations using well known video to video translation models on various tasks to demonstrate the general applicability of our framework the results show that shourcut achieves comparable performance compared to the original video to video translation model while saving computational cost and memory at test time keyword raw unsupervised domain adaptive detection with network stability analysis authors wenzhang zhou heng fan tiejian luo libo zhang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract domain adaptive detection aims to improve the generality of a detector learned from the labeled source domain on the unlabeled target domain in this work drawing inspiration from the concept of stability from the control theory that a robust system requires to remain consistent both externally and internally regardless of disturbances we propose a novel framework that achieves unsupervised domain adaptive detection through stability analysis in specific we treat discrepancies between images and regions from different domains as disturbances and introduce a novel simple but effective network stability analysis nsa framework that considers various disturbances for domain adaptation particularly we explore three types of perturbations including heavy and light image level disturbances and instancelevel disturbance for each type nsa performs external consistency analysis on the outputs from raw and perturbed images and or internal consistency analysis on their features using teacher student models by integrating nsa into faster r cnn we immediately achieve state of the art results in particular we set a new record of map on cityscapes to foggycityscapes showing the potential of nsa for domain adaptive detection it is worth noticing our nsa is designed for general purpose and thus applicable to one stage detection model e g fcos besides the adopted one as shown by experiments stable and causal inference for discriminative self supervised deep visual representations authors yuewei yang hai li yiran chen subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in recent years discriminative self supervised methods have made significant strides in advancing various visual tasks the central idea of learning a data encoder that is robust to data distortions augmentations is straightforward yet highly effective although many studies have demonstrated the empirical success of various learning methods the resulting learned representations can exhibit instability and hinder downstream performance in this study we analyze discriminative self supervised methods from a causal perspective to explain these unstable behaviors and propose solutions to overcome them our approach draws inspiration from prior works that empirically demonstrate the ability of discriminative self supervised methods to demix ground truth causal sources to some extent unlike previous work on causality empowered representation learning we do not apply our solutions during the training process but rather during the inference process to improve time efficiency through experiments on both controlled image datasets and realistic image datasets we show that our proposed solutions which involve tempering a linear transformation with controlled synthetic data are effective in addressing these issues visually aware context modeling for news image captioning authors tingyu qu tinne tuytelaars marie francine moens subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the goal of news image captioning is to generate an image caption according to the content of both a news article and an image to leverage the visual information effectively it is important to exploit the connection between the context in the articles captions and the images psychological studies indicate that human faces in images draw higher attention priorities on top of that humans often play a central role in news stories as also proven by the face name co occurrence pattern we discover in existing news image captioning datasets therefore we design a face naming module for faces in images and names in captions articles to learn a better name embedding apart from names which can be directly linked to an image area faces news image captions mostly contain context information that can only be found in the article humans typically address this by searching for relevant information from the article based on the image to emulate this thought process we design a retrieval strategy using clip to retrieve sentences that are semantically close to the image we conduct extensive experiments to demonstrate the efficacy of our framework without using additional paired data we establish the new state of the art performance on two news image captioning datasets exceeding the previous state of the art by cider points we will release code upon acceptance adabrowse adaptive video browser for efficient continuous sign language recognition authors lianyu hu liqing gao zekang liu chi man pun wei feng subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract raw videos have been proven to own considerable feature redundancy where in many cases only a portion of frames can already meet the requirements for accurate recognition in this paper we are interested in whether such redundancy can be effectively leveraged to facilitate efficient inference in continuous sign language recognition cslr we propose a novel adaptive model adabrowse to dynamically select a most informative subsequence from input video sequences by modelling this problem as a sequential decision task in specific we first utilize a lightweight network to quickly scan input videos to extract coarse features then these features are fed into a policy network to intelligently select a subsequence to process the corresponding subsequence is finally inferred by a normal cslr model for sentence prediction as only a portion of frames are processed in this procedure the total computations can be considerably saved besides temporal redundancy we are also interested in whether the inherent spatial redundancy can be seamlessly integrated together to achieve further efficiency i e dynamically selecting a lowest input resolution for each sample whose model is referred to as adabrowse extensive experimental results on four large scale cslr datasets i e t csl daily and csl demonstrate the effectiveness of adabrowse and adabrowse by achieving comparable accuracy with state of the art methods with times throughput and times fewer flops comparisons with other commonly used cnns and adaptive efficient methods verify the effectiveness of adabrowse code is available at url alip adaptive language image pre training with synthetic caption authors kaicheng yang jiankang deng xiang an jiawei li ziyong feng jia guo jing yang tongliang liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract contrastive language image pre training clip has significantly boosted the performance of various vision language tasks by scaling up the dataset with image text pairs collected from the web however the presence of intrinsic noise and unmatched image text pairs in web data can potentially affect the performance of representation learning to address this issue we first utilize the ofa model to generate synthetic captions that focus on the image content the generated captions contain complementary information that is beneficial for pre training then we propose an adaptive language image pre training alip a bi path model that integrates supervision from both raw text and synthetic caption as the core components of alip the language consistency gate lcg and description consistency gate dcg dynamically adjust the weights of samples and image text caption pairs during the training process meanwhile the adaptive contrastive loss can effectively reduce the impact of noise data and enhances the efficiency of pre training data we validate alip with experiments on different scales of models and pre training datasets experiments results show that alip achieves state of the art performance on multiple downstream tasks including zero shot image text retrieval and linear probe to facilitate future research the code and pre trained models are released at keyword raw image there is no result
| 1
|
1,946
| 4,770,502,155
|
IssuesEvent
|
2016-10-26 15:25:58
|
P0cL4bs/WiFi-Pumpkin
|
https://api.github.com/repos/P0cL4bs/WiFi-Pumpkin
|
closed
|
IOError: [Errno 2] No such file or directory
|
in process priority solved
|
Hello
I have had this error, but if you create the folder manually, and Reloading the program, the logs are generated normally and everything continues well. I guess some will be writing problem
/////////////////////
root@kali:/opt/Wifi-Pumpkin/WiFi-Pumpkin# sudo python wifi-pumpkin.py
Loading GUI...
WiFi-Pumpkin Running!
[*] Loading debugging mode
[*] Current Session::ID [NDMyNTk=]
[*] Configuring hostapd...
[*] enable forwarding in iptables...
[*] Configuring dhcpd...
[*] Sharing Internet Connections with NAT...
Traceback (most recent call last):
File "/opt/Wifi-Pumpkin/WiFi-Pumpkin/core/main.py", line 1219, in Start_PumpAP
thread.start()
File "/opt/Wifi-Pumpkin/WiFi-Pumpkin/core/utility/threads.py", line 198, in start
self.makeLogger()
File "/opt/Wifi-Pumpkin/WiFi-Pumpkin/core/utility/threads.py", line 206, in makeLogger
setup_logger('hostapd', './logs/AccessPoint/hostapd.log',self.session)
File "/opt/Wifi-Pumpkin/WiFi-Pumpkin/core/utils.py", line 90, in setup_logger
fileHandler = logging.FileHandler(log_file, mode='a')
File "/usr/lib/python2.7/logging/__init__.py", line 913, in __init__
StreamHandler.__init__(self, self._open())
File "/usr/lib/python2.7/logging/__init__.py", line 943, in _open
stream = open(self.baseFilename, self.mode)
IOError: [Errno 2] No such file or directory: '/opt/Wifi-Pumpkin/WiFi-Pumpkin/logs/AccessPoint/hostapd.log'
killing all threads...
-------------------------------
Thread::[hostapd] successfully stopped.
Thread::[DHCP] successfully stopped.
root@kali:/opt/Wifi-Pumpkin/WiFi-Pumpkin# sudo python wifi-pumpkin.py
Loading GUI...
////////
Thanks for developing
|
1.0
|
IOError: [Errno 2] No such file or directory - Hello
I have had this error, but if you create the folder manually, and Reloading the program, the logs are generated normally and everything continues well. I guess some will be writing problem
/////////////////////
root@kali:/opt/Wifi-Pumpkin/WiFi-Pumpkin# sudo python wifi-pumpkin.py
Loading GUI...
WiFi-Pumpkin Running!
[*] Loading debugging mode
[*] Current Session::ID [NDMyNTk=]
[*] Configuring hostapd...
[*] enable forwarding in iptables...
[*] Configuring dhcpd...
[*] Sharing Internet Connections with NAT...
Traceback (most recent call last):
File "/opt/Wifi-Pumpkin/WiFi-Pumpkin/core/main.py", line 1219, in Start_PumpAP
thread.start()
File "/opt/Wifi-Pumpkin/WiFi-Pumpkin/core/utility/threads.py", line 198, in start
self.makeLogger()
File "/opt/Wifi-Pumpkin/WiFi-Pumpkin/core/utility/threads.py", line 206, in makeLogger
setup_logger('hostapd', './logs/AccessPoint/hostapd.log',self.session)
File "/opt/Wifi-Pumpkin/WiFi-Pumpkin/core/utils.py", line 90, in setup_logger
fileHandler = logging.FileHandler(log_file, mode='a')
File "/usr/lib/python2.7/logging/__init__.py", line 913, in __init__
StreamHandler.__init__(self, self._open())
File "/usr/lib/python2.7/logging/__init__.py", line 943, in _open
stream = open(self.baseFilename, self.mode)
IOError: [Errno 2] No such file or directory: '/opt/Wifi-Pumpkin/WiFi-Pumpkin/logs/AccessPoint/hostapd.log'
killing all threads...
-------------------------------
Thread::[hostapd] successfully stopped.
Thread::[DHCP] successfully stopped.
root@kali:/opt/Wifi-Pumpkin/WiFi-Pumpkin# sudo python wifi-pumpkin.py
Loading GUI...
////////
Thanks for developing
|
process
|
ioerror no such file or directory hello i have had this error but if you create the folder manually and reloading the program the logs are generated normally and everything continues well i guess some will be writing problem root kali opt wifi pumpkin wifi pumpkin sudo python wifi pumpkin py loading gui wifi pumpkin running loading debugging mode current session id configuring hostapd enable forwarding in iptables configuring dhcpd sharing internet connections with nat traceback most recent call last file opt wifi pumpkin wifi pumpkin core main py line in start pumpap thread start file opt wifi pumpkin wifi pumpkin core utility threads py line in start self makelogger file opt wifi pumpkin wifi pumpkin core utility threads py line in makelogger setup logger hostapd logs accesspoint hostapd log self session file opt wifi pumpkin wifi pumpkin core utils py line in setup logger filehandler logging filehandler log file mode a file usr lib logging init py line in init streamhandler init self self open file usr lib logging init py line in open stream open self basefilename self mode ioerror no such file or directory opt wifi pumpkin wifi pumpkin logs accesspoint hostapd log killing all threads thread successfully stopped thread successfully stopped root kali opt wifi pumpkin wifi pumpkin sudo python wifi pumpkin py loading gui thanks for developing
| 1
|
728,372
| 25,076,402,734
|
IssuesEvent
|
2022-11-07 15:48:29
|
OpenTabletDriver/OpenTabletDriver
|
https://api.github.com/repos/OpenTabletDriver/OpenTabletDriver
|
closed
|
Aux Buttons should not have Pen Passthrough as an option
|
enhancement linux/gtk priority:low desktop
|
## Description
Aux buttons with pen passthrough does not make sense. I tried implementing it with stuff like BTN_0 etc, but applications do not seem to know what to do with this. Until we know it makes sense, it's probably smarter to hide it entirely.
## System Information:
<!-- Please fill out this information -->
| Name | Value |
| ---------------- | ----- |
| Operating System | Arch Linux
| OpenTabletDriver Version | 267a322
| Tablet | XP-Pen Star G960S Plus
|
1.0
|
Aux Buttons should not have Pen Passthrough as an option - ## Description
Aux buttons with pen passthrough does not make sense. I tried implementing it with stuff like BTN_0 etc, but applications do not seem to know what to do with this. Until we know it makes sense, it's probably smarter to hide it entirely.
## System Information:
<!-- Please fill out this information -->
| Name | Value |
| ---------------- | ----- |
| Operating System | Arch Linux
| OpenTabletDriver Version | 267a322
| Tablet | XP-Pen Star G960S Plus
|
non_process
|
aux buttons should not have pen passthrough as an option description aux buttons with pen passthrough does not make sense i tried implementing it with stuff like btn etc but applications do not seem to know what to do with this until we know it makes sense it s probably smarter to hide it entirely system information name value operating system arch linux opentabletdriver version tablet xp pen star plus
| 0
|
809,411
| 30,191,602,703
|
IssuesEvent
|
2023-07-04 15:51:10
|
Baystation12/Baystation12
|
https://api.github.com/repos/Baystation12/Baystation12
|
closed
|
Placeholder overmap detectable at any range on sensors
|
Priority: Low Could Reproduce
|
### Description of issue
Placeholders don't adhere to standard overmap icon behaviour.
### Difference between expected and actual behaviour
Placeholder icons should share the same behaviour of being detected- classified as contact- etc. And not be always visible.
### Steps to reproduce
1. Have sensors off
2. Have a placeholder icon anywhere
3. Be able to see it
### Specific information for locating
_No response_
### Client version, server revision, & game ID
Client Version: 514
Server Revision: [f7640bb849f2fe627cd55e7a483f26a9c1c4b41b](https://baystation.xyz/github/commit/f7640bb849f2fe627cd55e7a483f26a9c1c4b41b) - dev - 2023-06-23
Game ID: a90e88ac
Current map: SEV Torch
### Issue bingo
- [X] Issue could be reproduced at least once
- [X] Issue could be reproduced by different players
- [X] Issue could be reproduced in multiple rounds
- [X] Issue happened in a recent (less than 7 days ago) round
- [X] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
|
1.0
|
Placeholder overmap detectable at any range on sensors - ### Description of issue
Placeholders don't adhere to standard overmap icon behaviour.
### Difference between expected and actual behaviour
Placeholder icons should share the same behaviour of being detected- classified as contact- etc. And not be always visible.
### Steps to reproduce
1. Have sensors off
2. Have a placeholder icon anywhere
3. Be able to see it
### Specific information for locating
_No response_
### Client version, server revision, & game ID
Client Version: 514
Server Revision: [f7640bb849f2fe627cd55e7a483f26a9c1c4b41b](https://baystation.xyz/github/commit/f7640bb849f2fe627cd55e7a483f26a9c1c4b41b) - dev - 2023-06-23
Game ID: a90e88ac
Current map: SEV Torch
### Issue bingo
- [X] Issue could be reproduced at least once
- [X] Issue could be reproduced by different players
- [X] Issue could be reproduced in multiple rounds
- [X] Issue happened in a recent (less than 7 days ago) round
- [X] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
|
non_process
|
placeholder overmap detectable at any range on sensors description of issue placeholders don t adhere to standard overmap icon behaviour difference between expected and actual behaviour placeholder icons should share the same behaviour of being detected classified as contact etc and not be always visible steps to reproduce have sensors off have a placeholder icon anywhere be able to see it specific information for locating no response client version server revision game id client version server revision dev game id current map sev torch issue bingo issue could be reproduced at least once issue could be reproduced by different players issue could be reproduced in multiple rounds issue happened in a recent less than days ago round
| 0
|
6,911
| 10,061,039,623
|
IssuesEvent
|
2019-07-22 20:20:26
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Process.Start and File.Open not working
|
area-System.Diagnostics.Process question
|
Hi, I don't know if I set this issue in the correct place ;) but I have a problem with .net Core 3.0.
When I try to open file or start process I get an error:
```Log
Exception has occurred: CLR/System.ComponentModel.Win32Exception
An unhandled type exception has occurred „System.ComponentModel.Win32Exception” w System.Diagnostics.Process.dll: 'The specified file can not be found.'
in System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)
in System.Diagnostics.Process.Start()
in System.Diagnostics.Process.Start(ProcessStartInfo startInfo)
in System.Diagnostics.Process.Start(String fileName)
in simpleApp.Form1.count_Click(Object sender, EventArgs e) in c:\Repozytoria\Visual Studio\simpleApp\Form1.cs:line 17
in System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
in System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
in System.Windows.Forms.Control.WndProc(Message& m)
in System.Windows.Forms.ButtonBase.WndProc(Message& m)
in System.Windows.Forms.Button.WndProc(Message& m)
in System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
in System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
in System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData)
in System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
in System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
in simpleApp.Program.Main() w c:\Repozytoria\Visual Studio\simpleApp\Program.cs:line 19
```
**How to reproduce:**
Create those functions in .net Core 3.0 preview:
```C#
private void openSimple()
{
string pathToFile = "<pathToTXTFile>";
File.Open(pathToFile, FileMode.Open);
}
```
```C#
private void openWebPage()
{
System.Diagnostics.Process.Start("http://google.com);
}
```
**Version**
3.0.100-preview6-012264
|
1.0
|
Process.Start and File.Open not working - Hi, I don't know if I set this issue in the correct place ;) but I have a problem with .net Core 3.0.
When I try to open file or start process I get an error:
```Log
Exception has occurred: CLR/System.ComponentModel.Win32Exception
An unhandled type exception has occurred „System.ComponentModel.Win32Exception” w System.Diagnostics.Process.dll: 'The specified file can not be found.'
in System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)
in System.Diagnostics.Process.Start()
in System.Diagnostics.Process.Start(ProcessStartInfo startInfo)
in System.Diagnostics.Process.Start(String fileName)
in simpleApp.Form1.count_Click(Object sender, EventArgs e) in c:\Repozytoria\Visual Studio\simpleApp\Form1.cs:line 17
in System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
in System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
in System.Windows.Forms.Control.WndProc(Message& m)
in System.Windows.Forms.ButtonBase.WndProc(Message& m)
in System.Windows.Forms.Button.WndProc(Message& m)
in System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
in System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
in System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData)
in System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
in System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
in simpleApp.Program.Main() w c:\Repozytoria\Visual Studio\simpleApp\Program.cs:line 19
```
**How to reproduce:**
Create those functions in .net Core 3.0 preview:
```C#
private void openSimple()
{
string pathToFile = "<pathToTXTFile>";
File.Open(pathToFile, FileMode.Open);
}
```
```C#
private void openWebPage()
{
System.Diagnostics.Process.Start("http://google.com);
}
```
**Version**
3.0.100-preview6-012264
|
process
|
process start and file open not working hi i don t know if i set this issue in the correct place but i have a problem with net core when i try to open file or start process i get an error log exception has occurred clr system componentmodel an unhandled type exception has occurred „system componentmodel ” w system diagnostics process dll the specified file can not be found in system diagnostics process startwithcreateprocess processstartinfo startinfo in system diagnostics process start in system diagnostics process start processstartinfo startinfo in system diagnostics process start string filename in simpleapp count click object sender eventargs e in c repozytoria visual studio simpleapp cs line in system windows forms button onmouseup mouseeventargs mevent in system windows forms control wmmouseup message m mousebuttons button clicks in system windows forms control wndproc message m in system windows forms buttonbase wndproc message m in system windows forms button wndproc message m in system windows forms nativewindow debuggablecallback intptr hwnd msg intptr wparam intptr lparam in system windows forms unsafenativemethods dispatchmessagew msg msg in system windows forms application componentmanager system windows forms unsafenativemethods imsocomponentmanager fpushmessageloop intptr dwcomponentid reason pvloopdata in system windows forms application threadcontext runmessageloopinner reason applicationcontext context in system windows forms application threadcontext runmessageloop reason applicationcontext context in simpleapp program main w c repozytoria visual studio simpleapp program cs line how to reproduce create those functions in net core preview c private void opensimple string pathtofile file open pathtofile filemode open c private void openwebpage system diagnostics process start version
| 1
|
30,628
| 4,641,490,926
|
IssuesEvent
|
2016-09-30 05:13:20
|
JuliaLang/julia
|
https://api.github.com/repos/JuliaLang/julia
|
opened
|
Refactoring base's runtests harness for packages to use too
|
packages testsystem
|
Ref https://github.com/JuliaLang/julia/pull/17165#issuecomment-241830583, we should try to refactor what's currently in `test/runtests.jl` to instead be a default part of the `Base.Test` module so packages can also use it for pretty-printing, parallel execution, running of selective subsets of tests (related to https://github.com/JuliaLang/julia/issues/15404), and everything else that base's tests do but packages have to roll their own.
|
1.0
|
Refactoring base's runtests harness for packages to use too - Ref https://github.com/JuliaLang/julia/pull/17165#issuecomment-241830583, we should try to refactor what's currently in `test/runtests.jl` to instead be a default part of the `Base.Test` module so packages can also use it for pretty-printing, parallel execution, running of selective subsets of tests (related to https://github.com/JuliaLang/julia/issues/15404), and everything else that base's tests do but packages have to roll their own.
|
non_process
|
refactoring base s runtests harness for packages to use too ref we should try to refactor what s currently in test runtests jl to instead be a default part of the base test module so packages can also use it for pretty printing parallel execution running of selective subsets of tests related to and everything else that base s tests do but packages have to roll their own
| 0
|
126,030
| 4,971,651,320
|
IssuesEvent
|
2016-12-05 19:18:58
|
SIU-CS/BarGame-Production
|
https://api.github.com/repos/SIU-CS/BarGame-Production
|
closed
|
Leveling Up
|
Priority-Medium Product Backlog
|
I want to earn experience points by completing quests and winning duels which will lead to me leveling up. When I level up I want my stats to increase.
|
1.0
|
Leveling Up - I want to earn experience points by completing quests and winning duels which will lead to me leveling up. When I level up I want my stats to increase.
|
non_process
|
leveling up i want to earn experience points by completing quests and winning duels which will lead to me leveling up when i level up i want my stats to increase
| 0
|
191,646
| 6,835,912,465
|
IssuesEvent
|
2017-11-10 04:12:22
|
DMS-Aus/Roam
|
https://api.github.com/repos/DMS-Aus/Roam
|
closed
|
NameError: global name 'roam' is not defined
|
bug :( priority/mid
|
I'm getting a Roam Error message
> Traceback (most recent call last):
> File "roam\mainwindow.pyc", line 331, in openkeyboard
> File "roam\api\utils.pyc", line 41, in open_keyboard
> NameError: global name 'roam' is not defined
when trying to use **Text** control to a String Column.
I get no Error message if I pick any other control.
|
1.0
|
NameError: global name 'roam' is not defined - I'm getting a Roam Error message
> Traceback (most recent call last):
> File "roam\mainwindow.pyc", line 331, in openkeyboard
> File "roam\api\utils.pyc", line 41, in open_keyboard
> NameError: global name 'roam' is not defined
when trying to use **Text** control to a String Column.
I get no Error message if I pick any other control.
|
non_process
|
nameerror global name roam is not defined i m getting a roam error message traceback most recent call last file roam mainwindow pyc line in openkeyboard file roam api utils pyc line in open keyboard nameerror global name roam is not defined when trying to use text control to a string column i get no error message if i pick any other control
| 0
|
1,363
| 2,511,944,272
|
IssuesEvent
|
2015-01-14 12:50:22
|
transientskp/tkp
|
https://api.github.com/repos/transientskp/tkp
|
opened
|
Quality Control: Percentage of flagged data
|
enhancement priority low
|
In version 1, we will concentrate on the total percentage of flagged data in the measurement set. This test can be fairly easily implemented using the TAQL query:
`taql 'CALC sum([select ntrue(FLAG) from '+ $msname + '])' `
This gives the total number of flags. Changing 'ntrue' to 'nfalse' gives the total amount of unflagged data. Then simply set percent_flagged=(ntrue/nfalse)*100. The threshold for good/bad image needs to be a user entry for now, i.e. image fail if percent_flagged > X% (where X is a user entry float, default 10.0%) This method does not require the NDPPP parset file.
Input: measurement set used in imaging, parset file with the user defined threshold (it might be useful to have 1 parset file for all the pre-imaging quality control checks)
Output: Pass/Fail (When an image fails output the percentage of data flagged and the threshold used)
more discussion in the original issue
https://support.astron.nl/lofar_issuetracker/issues/3788
|
1.0
|
Quality Control: Percentage of flagged data - In version 1, we will concentrate on the total percentage of flagged data in the measurement set. This test can be fairly easily implemented using the TAQL query:
`taql 'CALC sum([select ntrue(FLAG) from '+ $msname + '])' `
This gives the total number of flags. Changing 'ntrue' to 'nfalse' gives the total amount of unflagged data. Then simply set percent_flagged=(ntrue/nfalse)*100. The threshold for good/bad image needs to be a user entry for now, i.e. image fail if percent_flagged > X% (where X is a user entry float, default 10.0%) This method does not require the NDPPP parset file.
Input: measurement set used in imaging, parset file with the user defined threshold (it might be useful to have 1 parset file for all the pre-imaging quality control checks)
Output: Pass/Fail (When an image fails output the percentage of data flagged and the threshold used)
more discussion in the original issue
https://support.astron.nl/lofar_issuetracker/issues/3788
|
non_process
|
quality control percentage of flagged data in version we will concentrate on the total percentage of flagged data in the measurement set this test can be fairly easily implemented using the taql query taql calc sum this gives the total number of flags changing ntrue to nfalse gives the total amount of unflagged data then simply set percent flagged ntrue nfalse the threshold for good bad image needs to be a user entry for now i e image fail if percent flagged x where x is a user entry float default this method does not require the ndppp parset file input measurement set used in imaging parset file with the user defined threshold it might be useful to have parset file for all the pre imaging quality control checks output pass fail when an image fails output the percentage of data flagged and the threshold used more discussion in the original issue
| 0
|
206,656
| 15,766,377,355
|
IssuesEvent
|
2021-03-31 15:00:41
|
pulumi/pulumi
|
https://api.github.com/repos/pulumi/pulumi
|
opened
|
Provide a built-in way to wait for a resource to be completely provisioned (for testing)
|
area/testing impact/usability kind/enhancement
|
To make it easier to do mocks in unit testing (at least in TypeScript, but likely in all Pulumi languages), I want the Pulumi framework to provide a built-in way to wait on any resources to be completely provisioned. This would enable me to write tests that assert over the state of a set of resources that I provisioned.
Possibly related to item no. 2 in #6113
|
1.0
|
Provide a built-in way to wait for a resource to be completely provisioned (for testing) - To make it easier to do mocks in unit testing (at least in TypeScript, but likely in all Pulumi languages), I want the Pulumi framework to provide a built-in way to wait on any resources to be completely provisioned. This would enable me to write tests that assert over the state of a set of resources that I provisioned.
Possibly related to item no. 2 in #6113
|
non_process
|
provide a built in way to wait for a resource to be completely provisioned for testing to make it easier to do mocks in unit testing at least in typescript but likely in all pulumi languages i want the pulumi framework to provide a built in way to wait on any resources to be completely provisioned this would enable me to write tests that assert over the state of a set of resources that i provisioned possibly related to item no in
| 0
|
14,710
| 17,910,319,710
|
IssuesEvent
|
2021-09-09 03:44:02
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
Support for 65c02 mnemonics
|
Type: Enhancement Feature: Processor/6502
|
**Is your feature request related to a problem? Please describe.**
Yes, it is not possible to fully disassembled 65C02 code currently.
**Describe the solution you'd like**
The 6502 module currently in Ghidra only supports the 6502 set of mnemonics. The 65C02 variant added several more mnemonics (e.g. STZ) which it would be nice to have added.
**Describe alternatives you've considered**
No other alternatives within Ghidra
**Additional context**
Reference to 65C02 mnemonics: https://www.zophar.net/fileuploads/2/10533qqcap/6502ref.html
|
1.0
|
Support for 65c02 mnemonics - **Is your feature request related to a problem? Please describe.**
Yes, it is not possible to fully disassembled 65C02 code currently.
**Describe the solution you'd like**
The 6502 module currently in Ghidra only supports the 6502 set of mnemonics. The 65C02 variant added several more mnemonics (e.g. STZ) which it would be nice to have added.
**Describe alternatives you've considered**
No other alternatives within Ghidra
**Additional context**
Reference to 65C02 mnemonics: https://www.zophar.net/fileuploads/2/10533qqcap/6502ref.html
|
process
|
support for mnemonics is your feature request related to a problem please describe yes it is not possible to fully disassembled code currently describe the solution you d like the module currently in ghidra only supports the set of mnemonics the variant added several more mnemonics e g stz which it would be nice to have added describe alternatives you ve considered no other alternatives within ghidra additional context reference to mnemonics
| 1
|
202,982
| 15,863,667,379
|
IssuesEvent
|
2021-04-08 13:02:30
|
gabygab159/foodPickupOrdering
|
https://api.github.com/repos/gabygab159/foodPickupOrdering
|
opened
|
code cleaning
|
cleaning documentation
|
- [ ] add comments to functions/routines
- [ ] remove console.log entries
- [ ] remove test/old code
- [ ] run eslint to sanitize the javascript code
- [ ]
|
1.0
|
code cleaning - - [ ] add comments to functions/routines
- [ ] remove console.log entries
- [ ] remove test/old code
- [ ] run eslint to sanitize the javascript code
- [ ]
|
non_process
|
code cleaning add comments to functions routines remove console log entries remove test old code run eslint to sanitize the javascript code
| 0
|
10,635
| 13,443,429,647
|
IssuesEvent
|
2020-09-08 08:21:03
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
closed
|
Hosted 3Bot deployment: succeeded but can't be shown
|
process_wontfix type_bug
|
### Description
I deployed a hosted 3Bot, and the chatflow said it all went well, but Google Chrome gives an error, saying it can't be shown due to 'scrambled credentials'.
<img width="1235" alt="Screenshot 2020-09-08 at 08 18 16" src="https://user-images.githubusercontent.com/30384423/92441009-b7973e00-f1ad-11ea-8a5a-9613d31e813a.png">
### Version information
* OS: MacOS
* Hosted 3Bot through deploy3bot-2.grid.tf
* Browser: Chrome 85
|
1.0
|
Hosted 3Bot deployment: succeeded but can't be shown - ### Description
I deployed a hosted 3Bot, and the chatflow said it all went well, but Google Chrome gives an error, saying it can't be shown due to 'scrambled credentials'.
<img width="1235" alt="Screenshot 2020-09-08 at 08 18 16" src="https://user-images.githubusercontent.com/30384423/92441009-b7973e00-f1ad-11ea-8a5a-9613d31e813a.png">
### Version information
* OS: MacOS
* Hosted 3Bot through deploy3bot-2.grid.tf
* Browser: Chrome 85
|
process
|
hosted deployment succeeded but can t be shown description i deployed a hosted and the chatflow said it all went well but google chrome gives an error saying it can t be shown due to scrambled credentials img width alt screenshot at src version information os macos hosted through grid tf browser chrome
| 1
|
15,329
| 19,440,789,318
|
IssuesEvent
|
2021-12-22 00:23:14
|
redwoodjs/redwood
|
https://api.github.com/repos/redwoodjs/redwood
|
closed
|
Support for pnpm, ability to choose package manager
|
triage/processing
|
First of all thank you for such a great tool🔥.
As I understand Redwood picks up best technologies so you do not need to make a choice. So why don't use pnpm instead of yarn? As I know I has lot of advantages over both npm and yarn. And if not, then why don't put it optionally, to be able choose package manager? For example there will be default package manager but if you want you can choose different. I really appreciate pnpm and would be nice to have such ability.
This is more like feature request and a question. So, put a like on this message if you also want Redwood to have such feature.
|
1.0
|
Support for pnpm, ability to choose package manager - First of all thank you for such a great tool🔥.
As I understand Redwood picks up best technologies so you do not need to make a choice. So why don't use pnpm instead of yarn? As I know I has lot of advantages over both npm and yarn. And if not, then why don't put it optionally, to be able choose package manager? For example there will be default package manager but if you want you can choose different. I really appreciate pnpm and would be nice to have such ability.
This is more like feature request and a question. So, put a like on this message if you also want Redwood to have such feature.
|
process
|
support for pnpm ability to choose package manager first of all thank you for such a great tool🔥 as i understand redwood picks up best technologies so you do not need to make a choice so why don t use pnpm instead of yarn as i know i has lot of advantages over both npm and yarn and if not then why don t put it optionally to be able choose package manager for example there will be default package manager but if you want you can choose different i really appreciate pnpm and would be nice to have such ability this is more like feature request and a question so put a like on this message if you also want redwood to have such feature
| 1
|
51,524
| 13,635,139,968
|
IssuesEvent
|
2020-09-25 02:00:50
|
nasifimtiazohi/openmrs-module-metadatamapping-1.3.4
|
https://api.github.com/repos/nasifimtiazohi/openmrs-module-metadatamapping-1.3.4
|
opened
|
CVE-2016-10518 (High) detected in ws-0.8.0.tgz
|
security vulnerability
|
## CVE-2016-10518 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-0.8.0.tgz</b></p></summary>
<p>simple to use, blazing fast and thoroughly tested websocket client, server and console for node.js, up-to-date against RFC-6455</p>
<p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-0.8.0.tgz">https://registry.npmjs.org/ws/-/ws-0.8.0.tgz</a></p>
<p>Path to dependency file: openmrs-module-metadatamapping-1.3.4/owa/package.json</p>
<p>Path to vulnerable library: openmrs-module-metadatamapping-1.3.4/owa/node_modules/ws/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.11.1.tgz (Root Library)
- socket.io-1.3.7.tgz
- engine.io-1.5.4.tgz
- :x: **ws-0.8.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nasifimtiazohi/openmrs-module-metadatamapping-1.3.4/commit/dbf14247c8c0a7b64ae301a8ab42df19cc87107e">dbf14247c8c0a7b64ae301a8ab42df19cc87107e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in the ping functionality of the ws module before 1.0.0 which allowed clients to allocate memory by sending a ping frame. The ping functionality by default responds with a pong frame and the previously given payload of the ping frame. This is exactly what you expect, but internally ws always transforms all data that we need to send to a Buffer instance and that is where the vulnerability existed. ws didn't do any checks for the type of data it was sending. With buffers in node when you allocate it when a number instead of a string it will allocate the amount of bytes.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10518>CVE-2016-10518</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-10518">https://nvd.nist.gov/vuln/detail/CVE-2016-10518</a></p>
<p>Release Date: 2018-05-31</p>
<p>Fix Resolution: 1.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2016-10518 (High) detected in ws-0.8.0.tgz - ## CVE-2016-10518 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-0.8.0.tgz</b></p></summary>
<p>simple to use, blazing fast and thoroughly tested websocket client, server and console for node.js, up-to-date against RFC-6455</p>
<p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-0.8.0.tgz">https://registry.npmjs.org/ws/-/ws-0.8.0.tgz</a></p>
<p>Path to dependency file: openmrs-module-metadatamapping-1.3.4/owa/package.json</p>
<p>Path to vulnerable library: openmrs-module-metadatamapping-1.3.4/owa/node_modules/ws/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.11.1.tgz (Root Library)
- socket.io-1.3.7.tgz
- engine.io-1.5.4.tgz
- :x: **ws-0.8.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nasifimtiazohi/openmrs-module-metadatamapping-1.3.4/commit/dbf14247c8c0a7b64ae301a8ab42df19cc87107e">dbf14247c8c0a7b64ae301a8ab42df19cc87107e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in the ping functionality of the ws module before 1.0.0 which allowed clients to allocate memory by sending a ping frame. The ping functionality by default responds with a pong frame and the previously given payload of the ping frame. This is exactly what you expect, but internally ws always transforms all data that we need to send to a Buffer instance and that is where the vulnerability existed. ws didn't do any checks for the type of data it was sending. With buffers in node when you allocate it when a number instead of a string it will allocate the amount of bytes.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10518>CVE-2016-10518</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-10518">https://nvd.nist.gov/vuln/detail/CVE-2016-10518</a></p>
<p>Release Date: 2018-05-31</p>
<p>Fix Resolution: 1.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in ws tgz cve high severity vulnerability vulnerable library ws tgz simple to use blazing fast and thoroughly tested websocket client server and console for node js up to date against rfc library home page a href path to dependency file openmrs module metadatamapping owa package json path to vulnerable library openmrs module metadatamapping owa node modules ws package json dependency hierarchy browser sync tgz root library socket io tgz engine io tgz x ws tgz vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability was found in the ping functionality of the ws module before which allowed clients to allocate memory by sending a ping frame the ping functionality by default responds with a pong frame and the previously given payload of the ping frame this is exactly what you expect but internally ws always transforms all data that we need to send to a buffer instance and that is where the vulnerability existed ws didn t do any checks for the type of data it was sending with buffers in node when you allocate it when a number instead of a string it will allocate the amount of bytes publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
7,210
| 10,343,381,260
|
IssuesEvent
|
2019-09-04 08:50:14
|
Hurence/logisland
|
https://api.github.com/repos/Hurence/logisland
|
closed
|
Support new processor EvaluteXPath
|
processor
|
# Expected behavior and actual behavior.
The need is to be able to apply xpath queries to an XML blob contained in a field of a record to generate new attributes in the record.
Therefore the processor should specify the name of the attribute containing the xml blob in the record to process; plus a set of xpath queries to evaluate against this blob
# Steps to reproduce the problem.
# Specifications like the version of the project, operating system, or hardware.
|
1.0
|
Support new processor EvaluteXPath - # Expected behavior and actual behavior.
The need is to be able to apply xpath queries to an XML blob contained in a field of a record to generate new attributes in the record.
Therefore the processor should specify the name of the attribute containing the xml blob in the record to process; plus a set of xpath queries to evaluate against this blob
# Steps to reproduce the problem.
# Specifications like the version of the project, operating system, or hardware.
|
process
|
support new processor evalutexpath expected behavior and actual behavior the need is to be able to apply xpath queries to an xml blob contained in a field of a record to generate new attributes in the record therefore the processor should specify the name of the attribute containing the xml blob in the record to process plus a set of xpath queries to evaluate against this blob steps to reproduce the problem specifications like the version of the project operating system or hardware
| 1
|
13,890
| 16,655,674,039
|
IssuesEvent
|
2021-06-05 13:31:29
|
paul-buerkner/brms
|
https://api.github.com/repos/paul-buerkner/brms
|
closed
|
extending pp_average to work with posterior_epred
|
feature post-processing
|
I am using `pp_average` to get the posterior predictive values across different models via Bayesian model stacking. But it seems that `pp_average` calls `posterior_predict` by default, which makes it hard to specify other dependent parameters in a distributional model.
It would be better to extend `pp_average` to work with `posterior_epred` as well, which will allow, for example, predictions of `sigma` to be averaged in a normal regression. Thanks!
|
1.0
|
extending pp_average to work with posterior_epred - I am using `pp_average` to get the posterior predictive values across different models via Bayesian model stacking. But it seems that `pp_average` calls `posterior_predict` by default, which makes it hard to specify other dependent parameters in a distributional model.
It would be better to extend `pp_average` to work with `posterior_epred` as well, which will allow, for example, predictions of `sigma` to be averaged in a normal regression. Thanks!
|
process
|
extending pp average to work with posterior epred i am using pp average to get the posterior predictive values across different models via bayesian model stacking but it seems that pp average calls posterior predict by default which makes it hard to specify other dependent parameters in a distributional model it would be better to extend pp average to work with posterior epred as well which will allow for example predictions of sigma to be averaged in a normal regression thanks
| 1
|
1,307
| 3,863,407,410
|
IssuesEvent
|
2016-04-08 09:17:06
|
PlagueHO/LabBuilder
|
https://api.github.com/repos/PlagueHO/LabBuilder
|
closed
|
Creating External Switches - define XML attrib for binding adapter
|
enhancement In Process
|
When creating an external switch, the first non-virtual adapter will be bound.
```
$null = New-VMSwitch `
-Name $SwitchName `
-NetAdapterName (`
Get-NetAdapter | `
Where-Object { $_.Status -eq 'Up' -and $_.InterfaceDescription -notlike "Hyper-V Virtual*" } | `
Select-Object -First 1 -ExpandProperty Name `
)
```
The Where-Object statement should be something like:
Where-Object { $_.Status -eq 'Up' -and $_.InterfaceDescription -notlike **"Hyper-V*"** }
Again, localization, my german adapters are named "Hyper-V-Adapter [...]" - but i guess with "Hyper-V*" you should get localizations.
But still not nice, if you have multiple active network adapters.
An XML Attrib like
```
<switch name="External" type="External">
<vmswitchbindingadapter description="XXXXX" mac="0000000000" />
</switch>
```
would be nice.
|
1.0
|
Creating External Switches - define XML attrib for binding adapter - When creating an external switch, the first non-virtual adapter will be bound.
```
$null = New-VMSwitch `
-Name $SwitchName `
-NetAdapterName (`
Get-NetAdapter | `
Where-Object { $_.Status -eq 'Up' -and $_.InterfaceDescription -notlike "Hyper-V Virtual*" } | `
Select-Object -First 1 -ExpandProperty Name `
)
```
The Where-Object statement should be something like:
Where-Object { $_.Status -eq 'Up' -and $_.InterfaceDescription -notlike **"Hyper-V*"** }
Again, localization, my german adapters are named "Hyper-V-Adapter [...]" - but i guess with "Hyper-V*" you should get localizations.
But still not nice, if you have multiple active network adapters.
An XML Attrib like
```
<switch name="External" type="External">
<vmswitchbindingadapter description="XXXXX" mac="0000000000" />
</switch>
```
would be nice.
|
process
|
creating external switches define xml attrib for binding adapter when creating an external switch the first non virtual adapter will be bound null new vmswitch name switchname netadaptername get netadapter where object status eq up and interfacedescription notlike hyper v virtual select object first expandproperty name the where object statement should be something like where object status eq up and interfacedescription notlike hyper v again localization my german adapters are named hyper v adapter but i guess with hyper v you should get localizations but still not nice if you have multiple active network adapters an xml attrib like would be nice
| 1
|
213,226
| 23,967,743,702
|
IssuesEvent
|
2022-09-13 03:52:09
|
elastic/integrations
|
https://api.github.com/repos/elastic/integrations
|
closed
|
[okta] API Key should be labelled as required.
|
Team:Security-External Integrations Integration:Okta
|
```
- name: api_key
type: text
title: API Key
multi: false
required: false
show_user: true
```
required should be `true` in manifest.yml
https://github.com/elastic/integrations/blob/6a3921602e9ed6cfd0bc7fdfc88832eb2c0bf46d/packages/okta/manifest.yml#L29-L34
|
True
|
[okta] API Key should be labelled as required. - ```
- name: api_key
type: text
title: API Key
multi: false
required: false
show_user: true
```
required should be `true` in manifest.yml
https://github.com/elastic/integrations/blob/6a3921602e9ed6cfd0bc7fdfc88832eb2c0bf46d/packages/okta/manifest.yml#L29-L34
|
non_process
|
api key should be labelled as required name api key type text title api key multi false required false show user true required should be true in manifest yml
| 0
|
192,091
| 22,215,897,353
|
IssuesEvent
|
2022-06-08 01:34:54
|
Nivaskumark/kernel_v4.1.15
|
https://api.github.com/repos/Nivaskumark/kernel_v4.1.15
|
reopened
|
WS-2021-0274 (High) detected in linuxlinux-4.6
|
security vulnerability
|
## WS-2021-0274 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/seq_buf.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/seq_buf.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux/Kernel in versions v5.13-rc1 to v5.13.2 is vulnerable to overflow in seq_buf_putmem_hex()
<p>Publish Date: 2021-05-31
<p>URL: <a href=https://github.com/gregkh/linux/commit/d57fcab190b60f43046d5836c3c56114b4f50080>WS-2021-0274</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1001222">https://osv.dev/vulnerability/UVI-2021-1001222</a></p>
<p>Release Date: 2021-05-31</p>
<p>Fix Resolution: v5.13.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2021-0274 (High) detected in linuxlinux-4.6 - ## WS-2021-0274 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/seq_buf.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/seq_buf.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux/Kernel in versions v5.13-rc1 to v5.13.2 is vulnerable to overflow in seq_buf_putmem_hex()
<p>Publish Date: 2021-05-31
<p>URL: <a href=https://github.com/gregkh/linux/commit/d57fcab190b60f43046d5836c3c56114b4f50080>WS-2021-0274</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1001222">https://osv.dev/vulnerability/UVI-2021-1001222</a></p>
<p>Release Date: 2021-05-31</p>
<p>Fix Resolution: v5.13.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws high detected in linuxlinux ws high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files lib seq buf c lib seq buf c vulnerability details linux kernel in versions to is vulnerable to overflow in seq buf putmem hex publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
114
| 2,546,367,691
|
IssuesEvent
|
2015-01-29 23:23:55
|
GsDevKit/GsDevKit
|
https://api.github.com/repos/GsDevKit/GsDevKit
|
opened
|
With ZipArchive in GemStone-Compression package, mcz based upgrade fails
|
in process
|
See [Pieter's post](https://groups.google.com/forum/#!topic/metacello/kejIJ0MooqQ) for details.
To fix this bug, we will need to extract the ZipArchive (and friends) dependencies from the Core package and move them into GemStone-Compression ... or take an entirely different approach.
|
1.0
|
With ZipArchive in GemStone-Compression package, mcz based upgrade fails - See [Pieter's post](https://groups.google.com/forum/#!topic/metacello/kejIJ0MooqQ) for details.
To fix this bug, we will need to extract the ZipArchive (and friends) dependencies from the Core package and move them into GemStone-Compression ... or take an entirely different approach.
|
process
|
with ziparchive in gemstone compression package mcz based upgrade fails see for details to fix this bug we will need to extract the ziparchive and friends dependencies from the core package and move them into gemstone compression or take an entirely different approach
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.