| column | dtype | stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 (fixed) |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |

Sample rows follow in this column order, one field per `|`-delimited block.
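A minimal loading sketch (editor's addition; the file name and storage format are assumptions, only the column names come from the summary above):

```python
import pandas as pd

# Hypothetical file name; the summary above only fixes the schema.
df = pd.read_csv("github_issues.csv")

# "label" (process / non_process) and "binary_label" (1 / 0) encode the
# same target, so the rows can be split on the integer column.
process = df[df["binary_label"] == 1]
non_process = df[df["binary_label"] == 0]
print(len(process), len(non_process))
```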
6,557
| 9,648,670,630
|
IssuesEvent
|
2019-05-17 16:54:30
|
googleapis/sloth
|
https://api.github.com/repos/googleapis/sloth
|
closed
|
chore(release): proposal for next release
|
release-candidate type: process
|
_:robot: Here's what the next release of **@justinbeckwith/sloth** would look like._
---
### [4.0.1](https://www.github.com/googleapis/sloth/compare/v4.0.0...v4.0.1) (2019-05-17)
### Bug Fixes
* remove spurious log ([#230](https://www.github.com/googleapis/sloth/issues/230)) ([cf361b0](https://www.github.com/googleapis/sloth/commit/cf361b0))
* use the drift prod endpoint ([#229](https://www.github.com/googleapis/sloth/issues/229)) ([e20e160](https://www.github.com/googleapis/sloth/commit/e20e160))
* **deps:** update dependency update-notifier to v3 ([#227](https://www.github.com/googleapis/sloth/issues/227)) ([34fb1a6](https://www.github.com/googleapis/sloth/commit/34fb1a6))
----------------
**release created at #232**
|
1.0
|
chore(release): proposal for next release - _:robot: Here's what the next release of **@justinbeckwith/sloth** would look like._
---
### [4.0.1](https://www.github.com/googleapis/sloth/compare/v4.0.0...v4.0.1) (2019-05-17)
### Bug Fixes
* remove spurious log ([#230](https://www.github.com/googleapis/sloth/issues/230)) ([cf361b0](https://www.github.com/googleapis/sloth/commit/cf361b0))
* use the drift prod endpoint ([#229](https://www.github.com/googleapis/sloth/issues/229)) ([e20e160](https://www.github.com/googleapis/sloth/commit/e20e160))
* **deps:** update dependency update-notifier to v3 ([#227](https://www.github.com/googleapis/sloth/issues/227)) ([34fb1a6](https://www.github.com/googleapis/sloth/commit/34fb1a6))
----------------
**release created at #232**
|
process
|
chore release proposal for next release robot here s what the next release of justinbeckwith sloth would look like bug fixes remove spurious log use the drift prod endpoint deps update dependency update notifier to release created at
| 1
|
8,817
| 11,935,721,490
|
IssuesEvent
|
2020-04-02 09:04:46
|
pwittchen/ReactiveNetwork
|
https://api.github.com/repos/pwittchen/ReactiveNetwork
|
closed
|
Release 3.0.8
|
release process
|
**release notes**:
- updated project dependencies
- update gradle version
- fixed bug #422 and #415 (changed port for default host for checking internet connectivity from https to http)
**things to do:**
- [x] update javadocs
- [x] update docs
- [x] bump version
- [x] release library
- [x] update changelog
- [x] create github release
|
1.0
|
Release 3.0.8 - **release notes**:
- updated project dependencies
- update gradle version
- fixed bug #422 and #415 (changed port for default host for checking internet connectivity from https to http)
**things to do:**
- [x] update javadocs
- [x] update docs
- [x] bump version
- [x] release library
- [x] update changelog
- [x] create github release
|
process
|
release release notes updated project dependencies update gradle version fixed bug and changed port for default host for checking internet connectivity from https to http things to do update javadocs update docs bump version release library update changelog create github release
| 1
|
3,347
| 6,486,519,983
|
IssuesEvent
|
2017-08-19 20:29:21
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
add synonym GO:0071630 ubiquitin-dependent catabolism of misfolded proteins by nucleus-associated proteasome
|
cellular processes editors-discussion
|
add
exact:
nuclear protein quality control by the ubiquitin-proteasome system
broad:protein quality control (PQC)
to
GO:0071630 ubiquitin-dependent catabolism of misfolded proteins by nucleus-associated proteasome
PMID: 21324894 pombe
PMID: 21211726 cerevisiae
I'm angling for a general "protein quality control term"
https://github.com/geneontology/go-ontology/issues/13944
https://github.com/geneontology/go-ontology/issues/13717
Will also need the equivalent synonyms on the cytoplasmic PQC ubiquitin system.
I can look up refs for this too.
|
1.0
|
add synonym GO:0071630 ubiquitin-dependent catabolism of misfolded proteins by nucleus-associated proteasome - add
exact:
nuclear protein quality control by the ubiquitin-proteasome system
broad:protein quality control (PQC)
to
GO:0071630 ubiquitin-dependent catabolism of misfolded proteins by nucleus-associated proteasome
PMID: 21324894 pombe
PMID: 21211726 cerevisiae
I'm angling for a general "protein quality control term"
https://github.com/geneontology/go-ontology/issues/13944
https://github.com/geneontology/go-ontology/issues/13717
Will also need the equivalent synonyms on the cytoplasmic PQC ubiquitin system.
I can look up refs for this too.
|
process
|
add synonym go ubiquitin dependent catabolism of misfolded proteins by nucleus associated proteasome add exact nuclear protein quality control by the ubiquitin proteasome system broad protein quality control pqc to go ubiquitin dependent catabolism of misfolded proteins by nucleus associated proteasome pmid pombe pmid cerevisiae i m angling for a general protein quality control term will also need the equivalent synonyms on the cytoplasmic pqc ubiquitin system i can look up refs for this too
| 1
|
196,234
| 6,926,097,659
|
IssuesEvent
|
2017-11-30 17:58:08
|
minio/minio
|
https://api.github.com/repos/minio/minio
|
closed
|
On Ubuntu 16.04, Minio service fails to start - tls: bad certificate error
|
priority: medium
|
<!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
After setting up Minio with a Let's Encrypt certificate, the service should start successfully.
## Current Behavior
Instead, Minio fails to start, giving out the following error:
```
level=error msg="Error in reading from new TLS connection local-ip:52096 at server remote-ip:9000" cause="remote error: tls: bad certificate" source="[listener.go:188:github.com/minio/minio/pkg/http.(*httpListener).start.func2()]"
```
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. install Minio
2. Correctly configured Minio as a systemd service
3. Here's my `/etc/default/minio` file
```
MINIO_ACCESS_KEY=TBBAWH1L
MINIO_SECRET_KEY=Ll7RnfS47ggv JG CtjSafuZv3xuY1X
MINIO_VOLUMES="/mnt/minio-m-nyc1-01/store-m"
MINIO_OPTS="-C /etc/minio --address ip-address:443"
```
4. Set up a domain with a Let's Encrypt certificate. The certificate files have been copied to `/etc/minio/certs` either as `public.crt` and `private.key` or `fullchain.pem` and `privkey.pem`. Same error in either case.
## Context
Trying to set up Minio so I could access it from a browser via HTTPS.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`): 2017-09-29T19:16:56Z
* Server type and version:
* Operating System and version (`uname -a`): Ubuntu 16.04
|
1.0
|
On Ubuntu 16.04, Minio service fails to start - tls: bad certificate error - <!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
After setting up Minio with a Let's Encrypt certificate, the service should start successfully.
## Current Behavior
Instead, Minio fails to start, giving out the following error:
```
level=error msg="Error in reading from new TLS connection local-ip:52096 at server remote-ip:9000" cause="remote error: tls: bad certificate" source="[listener.go:188:github.com/minio/minio/pkg/http.(*httpListener).start.func2()]"
```
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. install Minio
2. Correctly configured Minio as a systemd service
3. Here's my `/etc/default/minio` file
```
MINIO_ACCESS_KEY=TBBAWH1L
MINIO_SECRET_KEY=Ll7RnfS47ggv JG CtjSafuZv3xuY1X
MINIO_VOLUMES="/mnt/minio-m-nyc1-01/store-m"
MINIO_OPTS="-C /etc/minio --address ip-address:443"
```
4. Set up a domain with a Let's Encrypt certificate. The certificate files have been copied to `/etc/minio/certs` either as `public.crt` and `private.key` or `fullchain.pem` and `privkey.pem`. Same error in either case.
## Context
Trying to set up Minio so I could access it from a browser via HTTPS.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`): 2017-09-29T19:16:56Z
* Server type and version:
* Operating System and version (`uname -a`): Ubuntu 16.04
|
non_process
|
on ubuntu minio service fails to start tls bad certificate error expected behavior after setting up minio with a let s encrypt certificate the service should start successfully current behavior instead minio fails to start giving out the following error level error msg error in reading from new tls connection local ip at server remote ip cause remote error tls bad certificate source steps to reproduce for bugs install minio correctly configured minio as a systemd service here s my etc default minio file minio access key minio secret key jg minio volumes mnt minio m store m minio opts c etc minio address ip address set up a domain with a let s encrypt certificate the certificate files have been copied to etc minio certs either as public crt and private key or fullchain pem and privkey pem same error in either case context trying to set up minio so i could access it from a browser via https your environment version used minio version server type and version operating system and version uname a ubuntu
| 0
|
53,042
| 7,804,365,467
|
IssuesEvent
|
2018-06-11 07:09:07
|
minishift/minishift
|
https://api.github.com/repos/minishift/minishift
|
reopened
|
Minishift fails to start using xhyve if broken VBox files are on path
|
component/documentation priority/minor status/pinned
|
### General information
* Minishift version: 1.13.1+75352e5
* OS: macOS
* Hypervisor: xhyve
### Steps to reproduce
1. Install virtualbox
2. Remove virtualbox but leave its files in /usr/local/bin
3. Clean ~/.minishift and ~/.kube
4. Start minishift with xhyve as hypervisor
### Expected
Minishift starts up.
### Actual
-- Starting Minishift VM ..... FAIL E0326 14:14:31.265034 10792 start.go:368] Error starting the VM: Error creating the VM. Error with pre-create check: "Error detecting VBox version: exit status 126". Retrying.
Error starting the VM: Error creating the VM. Error with pre-create check: "Error detecting VBox version: exit status 126"
This seems to be a problem with xhyve driver as per:
https://github.com/machine-drivers/docker-machine-driver-xhyve/issues/134
https://github.com/kubernetes/minikube/issues/519
I understand an issue like this might not get fixed for some time, so meanwhile, can somebody document this?
|
1.0
|
Minishift fails to start using xhyve if broken VBox files are on path - ### General information
* Minishift version: 1.13.1+75352e5
* OS: macOS
* Hypervisor: xhyve
### Steps to reproduce
1. Install virtualbox
2. Remove virtualbox but leave its files in /usr/local/bin
3. Clean ~/.minishift and ~/.kube
4. Start minishift with xhyve as hypervisor
### Expected
Minishift starts up.
### Actual
-- Starting Minishift VM ..... FAIL E0326 14:14:31.265034 10792 start.go:368] Error starting the VM: Error creating the VM. Error with pre-create check: "Error detecting VBox version: exit status 126". Retrying.
Error starting the VM: Error creating the VM. Error with pre-create check: "Error detecting VBox version: exit status 126"
This seems to be a problem with xhyve driver as per:
https://github.com/machine-drivers/docker-machine-driver-xhyve/issues/134
https://github.com/kubernetes/minikube/issues/519
I understand an issue like this might not get fixed for some time, so meanwhile, can somebody document this?
|
non_process
|
minishift fails to start using xhyve if broken vbox files are on path general information minishift version os macos hypervisor xhyve steps to reproduce install virtualbox remove virtualbox but leave its files in usr local bin clean minishift and kube start minishift with xhyve as hypervisor expected minishift starts up actual starting minishift vm fail start go error starting the vm error creating the vm error with pre create check error detecting vbox version exit status retrying error starting the vm error creating the vm error with pre create check error detecting vbox version exit status this seems to be a problem with xhyve driver as per i understand an issue like this might not get fixed for some time so meanwhile can somebody document this
| 0
|
20,731
| 27,430,261,001
|
IssuesEvent
|
2023-03-02 00:23:02
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
new java_tools release required to fix @remote_java_tools on Windows
|
type: process team-Rules-Java
|
Compiling `@remote_java_tools//:ijar_cc_binary` fails on Windows since 9f2c62aa489394ae882e07fb97eefb0556075944:
```
undeclared inclusion(s) in rule '@remote_java_tools//:ijar_cc_binary':
this rule is missing dependency declarations for the following files included by 'java_tools/ijar/classfile.cc':
'external/bazel_tools/third_party/ijar/common.h'
```
9f2c62aa489394ae882e07fb97eefb0556075944 includes a workaround for this in `third_party/jdk/BUILD.java_tools` but there needs to be a new java_tools release to pick this `BUILD` file fix up.
|
1.0
|
new java_tools release required to fix @remote_java_tools on Windows - Compiling `@remote_java_tools//:ijar_cc_binary` fails on Windows since 9f2c62aa489394ae882e07fb97eefb0556075944:
```
undeclared inclusion(s) in rule '@remote_java_tools//:ijar_cc_binary':
this rule is missing dependency declarations for the following files included by 'java_tools/ijar/classfile.cc':
'external/bazel_tools/third_party/ijar/common.h'
```
9f2c62aa489394ae882e07fb97eefb0556075944 includes a workaround for this in `third_party/jdk/BUILD.java_tools` but there needs to be a new java_tools release to pick this `BUILD` file fix up.
|
process
|
new java tools release required to fix remote java tools on windows compiling remote java tools ijar cc binary fails on windows since undeclared inclusion s in rule remote java tools ijar cc binary this rule is missing dependency declarations for the following files included by java tools ijar classfile cc external bazel tools third party ijar common h includes a workaround for this in third party jdk build java tools but there needs to be a new java tools release to pick this build file fix up
| 1
|
12,198
| 14,742,473,457
|
IssuesEvent
|
2021-01-07 12:21:24
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
054- Fair Oaks PO # not going on invoice | Parent:896
|
anc-process anp-0.5 ant-bug has attachment
|
In GitLab by @kdjstudios on Apr 23, 2019, 08:34
**Submitted by:** "vanessa salamanca" <vanessa.salamanca@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-22-55412
**Second HD:** http://www.servicedesk.answernet.com/profiles/ticket/8823322
**Server:** Internal (All?)
**Client/Site:** 054
**Account:** XTC1259
**Issue:**
I have a client who requires each invoice to be sent to their AP department with PO # on the invoice.
When I look in SAB under the account XTC1259 (UCSF Cardiology), it shows that the PO # is in the correct place, but on the actual invoice where it says PO #, it is reflecting Net 30?
How can I fix this?


|
1.0
|
054- Fair Oaks PO # not going on invoice | Parent:896 - In GitLab by @kdjstudios on Apr 23, 2019, 08:34
**Submitted by:** "vanessa salamanca" <vanessa.salamanca@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-22-55412
**Second HD:** http://www.servicedesk.answernet.com/profiles/ticket/8823322
**Server:** Internal (All?)
**Client/Site:** 054
**Account:** XTC1259
**Issue:**
I have a client who requires each invoice to be sent to their AP department with PO # on the invoice.
When I look in SAB under the account XTC1259 (UCSF Cardiology), it shows that the PO # is in the correct place, but on the actual invoice where it says PO #, it is reflecting Net 30?
How can I fix this?


|
process
|
fair oaks po not going on invoice parent in gitlab by kdjstudios on apr submitted by vanessa salamanca helpdesk second hd server internal all client site account issue i have a client who requires each invoice to be sent to their ap department with po on the invoice when i look in sab under the account ucsf cardiology it shows that the po is in the correct place but on the actual invoice where it says po it is reflecting net how can i fix this uploads image png uploads image png
| 1
|
7,974
| 11,163,764,104
|
IssuesEvent
|
2019-12-27 00:56:11
|
LLNL/axom
|
https://api.github.com/repos/LLNL/axom
|
closed
|
Remove AXOM_USE_CXX11 compiler define
|
Software process
|
Axom requires a ``C++11`` compiler, so we should remove all compiler guards that provide fallbacks for cases where `C++11` was not available (e.g. `#ifdef AXOM_USE_CXX11`).
|
1.0
|
Remove AXOM_USE_CXX11 compiler define - Axom requires a ``C++11`` compiler, so we should remove all compiler guards that provide fallbacks for cases where `C++11` was not available (e.g. `#ifdef AXOM_USE_CXX11`).
|
process
|
remove axom use compiler define axom requires a c compiler so we should remove all compiler guards that provide fallbacks for cases where c was not available e g ifdef axom use
| 1
|
1,004
| 3,470,288,631
|
IssuesEvent
|
2015-12-23 06:54:47
|
t3kt/vjzual2
|
https://api.github.com/repos/t3kt/vjzual2
|
closed
|
simple redux module
|
enhancement video processing
|
no multiple masked layers, etc.
also, do it in a shader. it's not complicated.
|
1.0
|
simple redux module - no multiple masked layers, etc.
also, do it in a shader. it's not complicated.
|
process
|
simple redux module no multiple masked layers etc also do it in a shader it s not complicated
| 1
|
4,399
| 7,295,805,369
|
IssuesEvent
|
2018-02-26 08:35:35
|
UKHomeOffice/dq-aws-transition
|
https://api.github.com/repos/UKHomeOffice/dq-aws-transition
|
opened
|
Turn On S4 AWS Feed from WFTP01 on Sungard
|
Production S4 Processing
|
Turn On S4 AWS Feed from WFTP01 on Sungard.
- [ ] Verify Windows Ingest -> Greenplum jobs in-place and running
- [ ] Ticket Raised on Yellow Board for Sungard Change
- [ ] Change made and verified
|
1.0
|
Turn On S4 AWS Feed from WFTP01 on Sungard - Turn On S4 AWS Feed from WFTP01 on Sungard.
- [ ] Verify Windows Ingest -> Greenplum jobs in-place and running
- [ ] Ticket Raised on Yellow Board for Sungard Change
- [ ] Change made and verified
|
process
|
turn on aws feed from on sungard turn on aws feed from on sungard verify windows ingest greenplum jobs in place and running ticket raised on yellow board for sungard change change made and verified
| 1
|
93,202
| 19,100,987,623
|
IssuesEvent
|
2021-11-29 22:31:22
|
google/android-fhir
|
https://api.github.com/repos/google/android-fhir
|
opened
|
Use datastore 1.0.0 library instead of 1.0.0-rc02
|
code health
|
**Describe the Issue**
The latest version of datastore library is [1.0.0](https://developer.android.com/topic/libraries/architecture/datastore#datastore-typed). Let's use that instead of 1.0.0-rc02
**Would you like to work on the issue?**
Yes
|
1.0
|
Use datastore 1.0.0 library instead of 1.0.0-rc02 - **Describe the Issue**
The latest version of datastore library is [1.0.0](https://developer.android.com/topic/libraries/architecture/datastore#datastore-typed). Let's use that instead of 1.0.0-rc02
**Would you like to work on the issue?**
Yes
|
non_process
|
use datastore library instead of describe the issue the latest version of datastore library is let s use that instead of would you like to work on the issue yes
| 0
|
284,384
| 21,416,579,492
|
IssuesEvent
|
2022-04-22 11:27:58
|
opentelekomcloud/vault-plugin-secrets-openstack
|
https://api.github.com/repos/opentelekomcloud/vault-plugin-secrets-openstack
|
closed
|
README reference command inconsistency
|
documentation
|
I believe this
``` vault read /os/creds/example-role ```
should be:
``` vault read /openstack/creds/example-role ```
to be aligned with the previous commands.
https://github.com/opentelekomcloud/vault-plugin-secrets-openstack/blob/e56d944faeea6de3e601342395858182748e5c37/README.md?plain=1#L81
|
1.0
|
README reference command inconsistency - I believe this
``` vault read /os/creds/example-role ```
should be:
``` vault read /openstack/creds/example-role ```
to be aligned with the previous commands.
https://github.com/opentelekomcloud/vault-plugin-secrets-openstack/blob/e56d944faeea6de3e601342395858182748e5c37/README.md?plain=1#L81
|
non_process
|
readme reference command inconsistency i believe this vault read os creds example role should be vault read openstack creds example role to be aligned with the previous commands
| 0
|
718,018
| 24,700,515,432
|
IssuesEvent
|
2022-10-19 14:59:13
|
PrefectHQ/prefect
|
https://api.github.com/repos/PrefectHQ/prefect
|
closed
|
Tasks do not support callable objects
|
bug priority:medium status:in-progress
|
### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the Prefect documentation for this issue.
- [X] I checked that this issue is related to Prefect and not one of its dependencies.
### Bug summary
As the title states, callable objects (i.e. those that implement `__call__`) are not supported by tasks.
This happens because callable objects don't have two attributes: `__name__` and `__qualname__`.
The `__name__` restriction can be bypassed by defining the `name` attribute for the task - in which case, the error below is different.
Without the name, the task fails when calling `prefect.utilities.importtools.to_qualified_name`.
https://github.com/PrefectHQ/prefect/blob/8294cb1e9b64660deffb912c9c3c0c9246ba5af6/src/prefect/utilities/importtools.py#L18-L29
Lambdas work because both `__name__` and `__qualname__` are set to `<lambda>`.
### Reproduction
```python3
from typing import Any

import prefect

class CallableObj:
    def __init__(self) -> None:
        self.x = 10

    def __call__(self, *args: Any, **kwds: Any) -> Any:
        print(self.x)

@prefect.flow
def test_flow():
    t = prefect.Task(fn=CallableObj())
    t()
```
### Error
#### Without declaring the `name`
```
File "/Users/duarte/Documents/data/pdbt/dystematic/pdbt/flows/raw/epfr/materializations/test_flow.py", line 18, in test_flow
t = prefect.Task(fn=CallableObj())
File "/Users/duarte/Documents/data/pdbt/.venv/lib/python3.8/site-packages/prefect/context.py", line 163, in __register_init__
__init__(__self__, *args, **kwargs)
File "/Users/duarte/Documents/data/pdbt/.venv/lib/python3.8/site-packages/prefect/tasks.py", line 136, in __init__
self.name = name or self.fn.__name__
AttributeError: 'CallableObj' object has no attribute '__name__'
```
#### Declaring the `name`
```
File "/Users/duarte/Documents/data/pdbt/.venv/lib/python3.8/site-packages/prefect/context.py", line 163, in __register_init__
__init__(__self__, *args, **kwargs)
File "/Users/duarte/Documents/data/pdbt/.venv/lib/python3.8/site-packages/prefect/tasks.py", line 142, in __init__
self.task_key = to_qualified_name(self.fn)
File "/Users/duarte/Documents/data/pdbt/.venv/lib/python3.8/site-packages/prefect/utilities/importtools.py", line 29, in to_qualified_name
return obj.__module__ + "." + obj.__qualname__
AttributeError: 'CallableObj' object has no attribute '__qualname__'
```
### Versions
```Text
[11:58:43] Φ prefect version
Version: 2.5.0
API version: 0.8.2
Python version: 3.8.9
Git commit: eac37918
Built: Thu, Oct 6, 2022 12:41 PM
OS/Arch: darwin/x86_64
Profile: default
Server type: ephemeral
Server:
Database: sqlite
SQLite version: 3.32.3
```
### Additional context
_No response_
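A minimal workaround sketch (editor's addition, not from the issue; it only targets the two `AttributeError`s in the tracebacks above): setting `__name__` and `__qualname__` on the instance lets `prefect.Task` introspect it, since `__module__` already resolves through the class.

```python
from typing import Any

import prefect

class CallableObj:
    def __init__(self) -> None:
        self.x = 10
        # Supply the attributes the tracebacks show Prefect reading.
        self.__name__ = type(self).__name__
        self.__qualname__ = type(self).__qualname__

    def __call__(self, *args: Any, **kwds: Any) -> Any:
        print(self.x)

@prefect.flow
def test_flow():
    t = prefect.Task(fn=CallableObj())
    t()
```

Whether later Prefect releases accept callables natively is not checked here.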
|
1.0
|
Tasks do not support callable objects - ### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the Prefect documentation for this issue.
- [X] I checked that this issue is related to Prefect and not one of its dependencies.
### Bug summary
As the title states, callable objects (i.e. those that implement `__call__`) are not supported by tasks.
This happens because callable objects don't have two attributes: `__name__` and `__qualname__`.
The `__name__` restriction can be bypassed by defining the `name` attribute for the task - in which case, the error below is different.
Without the name, the task fails when calling `prefect.utilities.importtools.to_qualified_name`.
https://github.com/PrefectHQ/prefect/blob/8294cb1e9b64660deffb912c9c3c0c9246ba5af6/src/prefect/utilities/importtools.py#L18-L29
Lambdas work because both `__name__` and `__qualname__` are set to `<lambda>`.
### Reproduction
```python3
from typing import Any

import prefect

class CallableObj:
    def __init__(self) -> None:
        self.x = 10

    def __call__(self, *args: Any, **kwds: Any) -> Any:
        print(self.x)

@prefect.flow
def test_flow():
    t = prefect.Task(fn=CallableObj())
    t()
```
### Error
#### Without declaring the `name`
```
File "/Users/duarte/Documents/data/pdbt/dystematic/pdbt/flows/raw/epfr/materializations/test_flow.py", line 18, in test_flow
t = prefect.Task(fn=CallableObj())
File "/Users/duarte/Documents/data/pdbt/.venv/lib/python3.8/site-packages/prefect/context.py", line 163, in __register_init__
__init__(__self__, *args, **kwargs)
File "/Users/duarte/Documents/data/pdbt/.venv/lib/python3.8/site-packages/prefect/tasks.py", line 136, in __init__
self.name = name or self.fn.__name__
AttributeError: 'CallableObj' object has no attribute '__name__'
```
#### Declaring the `name`
```
File "/Users/duarte/Documents/data/pdbt/.venv/lib/python3.8/site-packages/prefect/context.py", line 163, in __register_init__
__init__(__self__, *args, **kwargs)
File "/Users/duarte/Documents/data/pdbt/.venv/lib/python3.8/site-packages/prefect/tasks.py", line 142, in __init__
self.task_key = to_qualified_name(self.fn)
File "/Users/duarte/Documents/data/pdbt/.venv/lib/python3.8/site-packages/prefect/utilities/importtools.py", line 29, in to_qualified_name
return obj.__module__ + "." + obj.__qualname__
AttributeError: 'CallableObj' object has no attribute '__qualname__'
```
### Versions
```Text
[11:58:43] Φ prefect version
Version: 2.5.0
API version: 0.8.2
Python version: 3.8.9
Git commit: eac37918
Built: Thu, Oct 6, 2022 12:41 PM
OS/Arch: darwin/x86_64
Profile: default
Server type: ephemeral
Server:
Database: sqlite
SQLite version: 3.32.3
```
### Additional context
_No response_
|
non_process
|
tasks do not support callable objects first check i added a descriptive title to this issue i used the github search to find a similar issue and didn t find it i searched the prefect documentation for this issue i checked that this issue is related to prefect and not one of its dependencies bug summary as the title states callable objects i e those that implement call are not supported by tasks this happens because callable objects don t have two attributes name and qualname the name restriction can be bypassed by defining the name attribute for the task in which case the error below is different without the name the task fails when calling prefect utilities importtools to qualified name lambdas work because both name and qualname are set to reproduction from typing import any import prefect class callableobj def init self none self x def call self args any kwds any any print self x prefect flow def test flow t prefect task fn callableobj t error without declaring the name file users duarte documents data pdbt dystematic pdbt flows raw epfr materializations test flow py line in test flow t prefect task fn callableobj file users duarte documents data pdbt venv lib site packages prefect context py line in register init init self args kwargs file users duarte documents data pdbt venv lib site packages prefect tasks py line in init self name name or self fn name attributeerror callableobj object has no attribute name declaring the name file users duarte documents data pdbt venv lib site packages prefect context py line in register init init self args kwargs file users duarte documents data pdbt venv lib site packages prefect tasks py line in init self task key to qualified name self fn file users duarte documents data pdbt venv lib site packages prefect utilities importtools py line in to qualified name return obj module obj qualname attributeerror callableobj object has no attribute qualname versions text φ prefect version version api version python version git commit built thu oct pm os arch darwin profile default server type ephemeral server database sqlite sqlite version additional context no response
| 0
|
7,729
| 10,852,889,237
|
IssuesEvent
|
2019-11-13 13:43:23
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
PowerPC VLE offset sometimes incorrect (e_add2i, e_add2is, e_mull2i, e_cmp16i, e_cmph16i)
|
Feature: Processor/PowerPC Type: Bug
|
**Description:**
Certain PowerPC VLE instructions may have an incorrect signed 16-bit immediate value (SIMM16) if one of the instruction fields (SIMM_0_10_VLE) is negative.
**To reproduce:**
1. Create binary file holding bytes "72 C6 96 7D"
2. Load binary as PowerPC PowerISA-VLE-64-32addr
3. Make sure vle bit is set to 1 in code browser processor options (right click in listing and select processor options from context menu)
4. Disassemble
**Expected behaviour:**
`00000000 72 c6 96 7d e_add2is r6,-0x4983`
Actual:
`00000000 72 c6 96 7d e_add2is r6,-0x183`
The SIMM16 value of -0x183 is incorrect. From the instruction info display the constant -0x183 comes from the following bit fields:
Op-Objects const:-0x183
Operand Mask 00000011 11100000 00000111 11111111
Masked Value 00000010 11000000 00000110 01111101
SIMM16 should be 10110 11001111101 (-0x4983)
SIMM16 actually is 11111 11001111101 (-0x183)
If we look in ppc_vle.sinc we find SIMM16 defined as:
`SIMM16: val is SIMM_0_10_VLE & SIMM_21_25_VLE [ val = (SIMM_21_25_VLE << 11) | SIMM_0_10_VLE ;] { export *[const]:2 val; }`
It appears that in our example SIMM_0_10_VLE (11001111101) was sign extended to a full 16 bits (11111 11001111101). This or'd 11111 over the SIMM_21_25_VLE bits (10110).
Using IMM_0_10_VLE in place of SIMM_0_10_VLE seems to fix the problem:
`SIMM16: val is IMM_0_10_VLE & SIMM_21_25_VLE [ val = (SIMM_21_25_VLE << 11) | IMM_0_10_VLE ;] { export *[const]:2 val; }`
**PowerPC VLE instructions affected:** e_add2i, e_add2is, e_mull2i, e_cmp16i, e_cmph16i
**Environment:** ghidra_9.1-BETA_DEV_20190923
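The field arithmetic above can be checked numerically; a small sketch (editor's addition; field names follow the sleigh snippet, the helper is mine):

```python
def to_signed16(v):
    """Interpret the low 16 bits of v as a two's-complement value."""
    v &= 0xFFFF
    return v - 0x10000 if v & 0x8000 else v

high5 = 0b10110        # SIMM_21_25_VLE bits from the e_add2is example
low11 = 0b11001111101  # the 11-bit low immediate field

# Correct: treat the low field as unsigned before OR-ing (IMM_0_10_VLE).
good = to_signed16((high5 << 11) | low11)

# Buggy: sign-extend the low field first (SIMM_0_10_VLE); the extension
# bits OR over the SIMM_21_25_VLE bits, as described above.
sext = (low11 | ~0x7FF) if (low11 & 0x400) else low11
bad = to_signed16((high5 << 11) | sext)

print(hex(good), hex(bad))  # -0x4983 -0x183
```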
|
1.0
|
PowerPC VLE offset sometimes incorrect (e_add2i, e_add2is, e_mull2i, e_cmp16i, e_cmph16i) - **Description:**
Certain PowerPC VLE instructions may have an incorrect signed 16-bit immediate value (SIMM16) if one of the instruction fields (SIMM_0_10_VLE) is negative.
**To reproduce:**
1. Create binary file holding bytes "72 C6 96 7D"
2. Load binary as PowerPC PowerISA-VLE-64-32addr
3. Make sure vle bit is set to 1 in code browser processor options (right click in listing and select processor options from context menu)
4. Disassemble
**Expected behaviour:**
`00000000 72 c6 96 7d e_add2is r6,-0x4983`
Actual:
`00000000 72 c6 96 7d e_add2is r6,-0x183`
The SIMM16 value of -0x183 is incorrect. From the instruction info display the constant -0x183 comes from the following bit fields:
Op-Objects const:-0x183
Operand Mask 00000011 11100000 00000111 11111111
Masked Value 00000010 11000000 00000110 01111101
SIMM16 should be 10110 11001111101 (-0x4983)
SIMM16 actually is 11111 11001111101 (-0x183)
If we look in ppc_vle.sinc we find SIMM16 defined as:
`SIMM16: val is SIMM_0_10_VLE & SIMM_21_25_VLE [ val = (SIMM_21_25_VLE << 11) | SIMM_0_10_VLE ;] { export *[const]:2 val; }`
It appears that in our example SIMM_0_10_VLE (11001111101) was sign extended to a full 16 bits (11111 11001111101). This or'd 11111 over the SIMM_21_25_VLE bits (10110).
Using IMM_0_10_VLE in place of SIMM_0_10_VLE seems to fix the problem:
`SIMM16: val is IMM_0_10_VLE & SIMM_21_25_VLE [ val = (SIMM_21_25_VLE << 11) | IMM_0_10_VLE ;] { export *[const]:2 val; }`
**PowerPC VLE instructions affected:** e_add2i, e_add2is, e_mull2i, e_cmp16i, e_cmph16i
**Environment:** ghidra_9.1-BETA_DEV_20190923
|
process
|
powerpc vle offset sometimes incorrect e e e e e description certain powerpc vle instructions may have an incorrect signed bit immediate value if one of the instruction fields simm vle is negative to reproduce create binary file holding bytes load binary as powerpc powerisa vle make sure vle bit is set to in code browser processor options right click in listing and select processor options from context menu disassemble expected behaviour e actual e the value of is incorrect from the instruction info display the constant comes from the following bit fields op objects const operand mask masked value should be actually is if we look in ppc vle sinc we find defined as val is simm vle simm vle export val it appears that in our example simm vle was sign extended to a full bits this or d over the simm vle bits using imm vle in place of simm vle seems to fix the problem val is imm vle simm vle export val powerpc vle instructions affected e e e e e environment ghidra beta dev
| 1
|
135,219
| 12,678,678,413
|
IssuesEvent
|
2020-06-19 10:12:56
|
kai687/sphinxawesome-theme
|
https://api.github.com/repos/kai687/sphinxawesome-theme
|
closed
|
Include instruction about sampdirective
|
documentation enhancement
|
If you want to use it in the documentation, you also need to add it to the extension list.
I could think of a way to automatically add it, since we will always install it anyway.
If import fails for some reason (someone uninstalled it, etc.) I can skip it but it should
work from within the theme.
|
1.0
|
Include instruction about sampdirective - If you want to use it in the documentation, you also need to add it to the extension list.
I could think of a way to automatically add it, since we will always install it anyway.
If import fails for some reason (someone uninstalled it, etc.) I can skip it but it should
work from within the theme.
|
non_process
|
include instruction about sampdirective if you want to use it in the documentation you also need to add it to the extension list i could think of a way to automatically add it since we will always install it anyway if import fails for some reason someone uninstalled it etc i can skip it but it should work from within the theme
| 0
|
10,422
| 13,214,171,003
|
IssuesEvent
|
2020-08-16 16:25:21
|
jyn514/saltwater
|
https://api.github.com/repos/jyn514/saltwater
|
closed
|
Missing predefined macros
|
enhancement preprocessor
|
[6.10.8.1 Mandatory macros](http://port70.net/~nsz/c/c11/n1570.html#6.10.8.1)
- [x] `__LINE__` - tricky, can't be passed through to `replace()` literally. This might be as simple as updating `self.definitions` whenever someone calls `line()`?
- [ ] `__COLUMN__` - tricky since we don't keep track of this explicitly and recalculate it. Probably needs yet more preprocessing before passing things through to `replace()`.
- [x] `__FILE__` - easy, but might need a refactor to put `PreProcessor` and `FileProcessor` back together.
- [x] `__DATE__` - easy, but needs a date-time dependency
- [x] `__TIME__` - same as `__DATE__`
[6.10.8.3 Conditional feature macros](http://port70.net/~nsz/c/c11/n1570.html#6.10.8.3)
- [ ] ` __STDC_IEC_559__ ` - trivial, good first issue. Code should go near https://github.com/jyn514/rcc/blob/180b27bb90bfcc02e2d4cba3afdd0a4dedccd662/src/lex/cpp.rs#L330.
Note that `__LINE__` should be the line at time of replacement, not within a definition. For example:
```c
$ clang -x c - -E -P
#define a __LINE__
a
2
```
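A toy sketch of the replacement-time semantics called out above (editor's addition; this is not how saltwater's preprocessor is implemented):

```python
# Resolve __LINE__ at the point of use rather than at the point of
# definition, matching the clang example above.
definitions = {"a": "__LINE__"}

def expand(token: str, line_no: int) -> str:
    body = definitions.get(token, token)
    return str(line_no) if body == "__LINE__" else body

print(expand("a", line_no=2))  # -> "2"
```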
|
1.0
|
Missing predefined macros - [6.10.8.1 Mandatory macros](http://port70.net/~nsz/c/c11/n1570.html#6.10.8.1)
- [x] `__LINE__` - tricky, can't be passed through to `replace()` literally. This might be as simple as updating `self.definitions` whenever someone calls `line()`?
- [ ] `__COLUMN__` - tricky since we don't keep track of this explicitly and recalculate it. Probably needs yet more preprocessing before passing things through to `replace()`.
- [x] `__FILE__` - easy, but might need a refactor to put `PreProcessor` and `FileProcessor` back together.
- [x] `__DATE__` - easy, but needs a date-time dependency
- [x] `__TIME__` - same as `__DATE__`
[6.10.8.3 Conditional feature macros](http://port70.net/~nsz/c/c11/n1570.html#6.10.8.3)
- [ ] ` __STDC_IEC_559__ ` - trivial, good first issue. Code should go near https://github.com/jyn514/rcc/blob/180b27bb90bfcc02e2d4cba3afdd0a4dedccd662/src/lex/cpp.rs#L330.
Note that `__LINE__` should be the line at time of replacement, not within a definition. For example:
```c
$ clang -x c - -E -P
#define a __LINE__
a
2
```
|
process
|
missing predefined macros line tricky can t be passed through to replace literally this might be as simple as updating self definitions whenever someone calls line column tricky since we don t keep track of this explicitly and recalculate it probably needs yet more preprocessing before passing things through to replace file easy but might need a refactor to put preprocessor and fileprocessor back together date easy but needs a date time dependency time same as date stdc iec trivial good first issue code should go near note that line should be the line at time of replacement not within a definition for example c clang x c e p define a line a
| 1
|
20,353
| 27,013,260,494
|
IssuesEvent
|
2023-02-10 17:02:35
|
NCAR/comp-pipeline
|
https://api.github.com/repos/NCAR/comp-pipeline
|
closed
|
First pass at reprocessing pre-December 2012
|
level 1 process
|
Need to get the raw data from the archive, reprocess, and do some analysis on the results to see if they are consistent with the rest of the mission.
The sub-tasks are:
- [x] download pre-December 2012 from Campaign Storage
- [x] untar
- [x] reprocess level 1 and 2
- [x] update flat plot
- [x] update velocity data file, i.e., run `comp_quick_invert_restwvl`
|
1.0
|
First pass at reprocessing pre-December 2012 - Need to get the raw data from the archive, reprocess, and do some analysis on the results to see if they are consistent with the rest of the mission.
The sub-tasks are:
- [x] download pre-December 2012 from Campaign Storage
- [x] untar
- [x] reprocess level 1 and 2
- [x] update flat plot
- [x] update velocity data file, i.e., run `comp_quick_invert_restwvl`
|
process
|
first pass at reprocessing pre december need to get the raw data from the archive reprocess and do some analysis on the results to see if they are consistent with the rest of the mission the sub tasks are download pre december from campaign storage untar reprocess level and update flat plot update velocity data file i e run comp quick invert restwvl
| 1
|
58,468
| 16,548,303,894
|
IssuesEvent
|
2021-05-28 04:38:11
|
Questie/Questie
|
https://api.github.com/repos/Questie/Questie
|
opened
|
[Classic ERA] minimap button can't be hidden
|
Type - Defect
|
## Bug description
It came back after each reload.
## Questie version
3.3.13
|
1.0
|
[Classic ERA] minimap button can't be hidden - ## Bug description
It came back after each reload.
## Questie version
3.3.13
|
non_process
|
minimap button can t be hidden bug description it came back after each reload questie version
| 0
|
133,829
| 5,215,367,937
|
IssuesEvent
|
2017-01-26 04:29:47
|
biocore/gneiss
|
https://api.github.com/repos/biocore/gneiss
|
opened
|
RegressionModel.fit() issues
|
bug high priority
|
- [ ] The RegressionModel object will add more variables if `fit` is called multiple times
- [ ] `fit` should return the updated RegressionModel object as output
|
1.0
|
RegressionModel.fit() issues - - [ ] The RegressionModel object will add more variables if `fit` is called multiple times
- [ ] `fit` should return the updated RegressionModel object as output
|
non_process
|
regressionmodel fit issues the regressionmodel object will add more variables if fit is called multiple times fit should return the updated regressionmodel object as output
| 0
|
8,063
| 11,223,812,678
|
IssuesEvent
|
2020-01-07 23:55:14
|
mendezc1/GenderMagRecordersAssistant
|
https://api.github.com/repos/mendezc1/GenderMagRecordersAssistant
|
closed
|
Change of facet name : "Willingness to Tinker" to "Learning: by Process vs. by Tinkering"
|
Good First Issue Information Processing Style Learning by Process vs. by Tinkering Medium Priority
|
* **What Operating System?**
System-wide issue
* **What steps will reproduce the issue?**
Start a GenderMag tool and you will find "Willingness to Tinker" under facets list
* **What would you expect to be the outcome?**
"Learning: by Process vs. by Tinkering" instead of "Willingness to Tinker" based on foundation document
* **All these details will help people to fix any potential bugs?**
|
2.0
|
Change of facet name : "Willingness to Tinker" to "Learning: by Process vs. by Tinkering" - * **What Operating System?**
System-wide issue
* **What steps will reproduce the issue?**
Start a GenderMag tool and you will find "Willingness to Tinker" under facets list
* **What would you expect to be the outcome?**
"Learning: by Process vs. by Tinkering" instead of "Willingness to Tinker" based on foundation document
* **All these details will help people to fix any potential bugs?**
|
process
|
change of facet name willingness to tinker to learning by process vs by tinkering what operating system system wide issue what steps will reproduce the issue start a gendermag tool and you will find willingness to tinker under facets list what would you expect to be the outcome learning by process vs by tinkering instead of willingness to tinker based on foundation document all these details will help people to fix any potential bugs
| 1
|
16,606
| 21,659,420,681
|
IssuesEvent
|
2022-05-06 17:29:32
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
Incorrect Disassembly of MIPS Function Prologue (Big Endian)
|
Type: Bug Feature: Processor/MIPS
|
**Describe the bug**
Ghidra disassembles the function prologue as follows:
```
.text:004fad64 lui gp,0x57
assume t9 = <UNKNOWN>
assume gp = <UNKNOWN>
.text:004fad68 addiu gp,gp,0xfcc
.text:004fad6c addu gp,gp,t9
.text:004fad70 addiu sp,sp,-0x48
.text:004fad74 sw ra,local_4(sp)
.text:004fad78 sw s4,local_8(sp)
.text:004fad7c sw s3,local_c(sp)
...
```
Ida's version:
```
.text:004FAD64 la $gp, loc_570FCC
.text:004FAD6C addu $gp, $t9
.text:004FAD70 addiu $sp, -0x48
.text:004FAD74 sw $ra, 0x48+var_4($sp)
.text:004FAD78 sw $s4, 0x48+var_8($sp)
.text:004FAD7C sw $s3, 0x48+var_C($sp)
```
Binary Ninja:
```
004fad64 3c1c0057… li $gp, 0x570fcc
004fad6c 0399e021 addu $gp, $gp, $t9
004fad70 27bdffb8 addiu $sp, $sp, -0x48
004fad74 afbf0044 sw $ra, 0x44($sp) {__saved_$ra}
004fad78 afb40040 sw $s4, 0x40($sp) {__saved_$s4}
004fad7c afb3003c sw $s3, 0x3c($sp) {__saved_$s3}
```
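All three listings agree on the $gp base produced by the lui/addiu pair; a quick arithmetic check (editor's sketch, not from the issue):

```python
# lui gp,0x57 loads the upper halfword; addiu adds 0xfcc (positive, so
# sign extension is a no-op here), giving IDA's loc_570FCC.
gp = (0x57 << 16) + 0xfcc
assert gp == 0x570fcc
```

The values agree; per the report, the problem surfaces in the `assume gp = <UNKNOWN>` that feeds the decompiler.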
As a result, Ghidra bases the decompilation on an incorrect $GP value, resulting in an incorrect decompilation:
```
void FUN_004fad64(undefined8 param_1,undefined8 param_2,longlong param_3)
{
undefined *puVar1;
int iVar2;
undefined4 uVar3;
code *pcVar4;
longlong lVar5;
ssize_t sVar8;
longlong lVar6;
undefined4 *puVar9;
ulonglong uVar7;
int iVar10;
sockaddr *local_24;
undefined *local_20;
socklen_t *in_stack_ffffffe4;
local_20 = &_gp;
FUN_00643480();
if (param_3 < 0) {
puVar1 = *(local_20 + -0x2a3c);
*puVar1 = 0;
lVar5 = (**(local_20 + -0x5858))(0,0xff03);
if (lVar5 != 0) {
(**(local_20 + -0x6dc4))(puVar1,0x400,0,0xff03);
(**(local_20 + -0x6c34))(*(local_20 + -0x6380),0xff03,0,puVar1);
}
lVar5 = (**(local_20 + -0x3540))(0,0xff03,0,0);
if (lVar5 != 0) {
if (**(local_20 + -0x2a3c) == '\0') {
(**(local_20 + -0x6dc4))(*(local_20 + -0x2a3c),0x400,0,0xff03);
}
(**(local_20 + -0x6c34))(*(local_20 + -0x5930),0xff03,0,*(local_20 + -0x2a3c));
}
}
....
```
**To Reproduce**
Steps to reproduce the behavior:
Function bytes:
```
3c 1c 00 57 27 9c 0f cc 03 99 e0 21 27 bd ff b8 af bf 00 44 af b4 00 40 af b3 00 3c af b2 00 38 af b1 00 34 af b0 00 30 af bc 00 28 8f 99 9f fc 03 20 f8 09 00 c0 80 21 06 01 00 44 8f bc 00 28 8f 91 d5 c4 a2 20 00 00 00 00 20 21 8f 99 a7 a8 03 20 f8 09 34 05 ff 03 10 40 00 19 8f bc 00 28 af a0 00 10 af a0 00 14 af a0 00 18 8f 82 80 60 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7e c0 af a2 00 20 02 20 20 21 24 05 04 00 00 00 30 21 8f 99 92 3c 03 20 f8 09 34 07 ff 03 8f bc 00 28 af b0 00 10 8f 84 9c 80 34 05 ff 03 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 20 38 21 8f bc 00 28 00 00 20 21 34 05 ff 03 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 02 3b 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af a0 00 18 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7e c0 af a2 00 20 24 05 04 00 00 00 30 21 8f 99 92 3c 03 20 f8 09 34 07 ff 03 8f bc 00 28 af b0 00 10 8f 84 a6 d0 34 05 ff 03 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 10 00 02 20 8f bc 00 28 8f 99 aa 5c 03 20 f8 09 24 04 06 72 10 40 01 cb 8f bc 00 28 00 40 90 21 00 40 20 21 00 00 28 21 8f 99 b9 30 03 20 f8 09 24 06 06 72 8f bc 00 28 af a0 00 10 af a0 00 14 02 00 20 21 02 40 28 21 24 06 06 72 8f 99 88 84 03 20 f8 09 00 00 38 21 8f bc 00 28 1c 40 00 4d 00 40 80 21 8f 83 d6 20 8c 62 01 3c 24 42 00 01 ac 62 01 3c 8c 62 01 38 24 42 00 01 ac 62 01 38 8f 90 d5 c4 a2 00 00 00 24 04 00 02 8f 99 a7 a8 03 20 f8 09 24 05 00 0d 10 40 00 18 8f bc 00 28 af a0 00 10 af a0 00 14 af a0 00 18 8f 82 80 60 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7e e8 af a2 00 20 02 00 20 21 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 9c 80 24 05 00 0d 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 00 38 21 8f bc 00 28 24 04 00 02 24 05 00 0d 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 1a 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af a0 00 18 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7e e8 af a2 00 20 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 a6 d0 24 05 00 0d 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 8f bc 00 28 8f 99 d4 10 03 20 f8 09 02 40 20 21 10 00 01 bd 8f bc 00 28 24 04 00 01 8f 99 a5 44 03 20 f8 09 02 40 28 21 8f bc 00 28 14 40 00 46 00 40 88 21 8f 90 d5 c4 a2 00 00 00 24 04 00 02 8f 99 a7 a8 03 20 f8 09 24 05 00 0d 10 40 00 18 8f bc 00 28 af a0 00 10 af a0 00 14 af a0 00 18 8f 82 80 60 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7f 0c af a2 00 20 02 00 20 21 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 9c 80 24 05 00 0d 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 00 38 21 8f bc 00 28 24 04 00 02 24 05 00 0d 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 1a 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af a0 00 18 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7f 0c af a2 00 20 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 a6 d0 24 05 00 0d 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 8f bc 00 28 8f 99 d4 10 03 20 f8 09 02 40 20 21 10 00 01 71 8f bc 00 28 ac 50 00 08 8f 82 d6 20 8c 52 0b 54 12 40 00 02 00 00 a0 21 8e 54 01 00 8e 33 00 00 8f 90 d5 c4 a2 00 00 00 24 04 00 03 8f 99 a7 a8 03 20 f8 09 24 05 00 01 10 40 00 18 8f bc 00 28 af a0 00 10 af a0 00 14 af b2 00 18 8f 82 80 60 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 3c af a2 00 20 02 00 20 21 24 05 04 00 24 06 00 03 8f 99 92 3c 03 20 f8 09 24 07 00 01 8f bc 00 28 8f 84 9c 80 24 05 00 01 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 00 38 21 8f bc 00 28 24 
04 00 03 24 05 00 01 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 1a 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af b2 00 18 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 3c af a2 00 20 24 05 04 00 24 06 00 03 8f 99 92 3c 03 20 f8 09 24 07 00 01 8f bc 00 28 8f 84 a6 d0 24 05 00 01 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 8f bc 00 28 8e 30 00 08 2e 02 00 e8 10 40 00 4f 26 23 00 28 8f 83 d6 20 8c 62 01 44 24 42 00 01 ac 62 01 44 8c 62 01 38 24 42 00 01 ac 62 01 38 8f 93 d5 c4 a2 60 00 00 24 04 00 02 8f 99 a7 a8 03 20 f8 09 24 05 00 01 10 40 00 1b 8f bc 00 28 af a0 00 10 af a0 00 14 af b2 00 18 8f 82 80 60 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 5c af a2 00 20 02 60 20 21 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 01 8f bc 00 28 af b0 00 10 24 02 00 e8 af a2 00 14 8f 84 9c 80 24 05 00 01 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 60 38 21 8f bc 00 28 24 04 00 02 24 05 00 01 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 94 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af b2 00 18 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 5c af a2 00 20 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 01 8f bc 00 28 af b0 00 10 24 02 00 e8 af a2 00 14 8f 84 a6 d0 24 05 00 01 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 10 00 00 77 8f bc 00 28 ae 30 00 24 ae 33 00 0c a2 20 00 28 24 02 00 02 a0 62 00 01 24 02 00 43 a4 62 00 02 8f 99 be 20 03 20 f8 09 02 20 20 21 12 40 00 03 8f bc 00 28 16 80 00 5d 02 80 20 21 8f 90 d5 c4 a2 00 00 00 24 04 00 02 8f 99 a7 a8 03 20 f8 09 24 05 00 0d 10 40 00 25 8f bc 00 28 af a0 00 10 af a0 00 14 af b2 00 18 8f 82 80 60 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 9c af a2 00 20 02 00 20 21 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 99 c0 30 03 20 f8 09 00 00 20 21 8f bc 00 28 af b2 00 10 af a2 00 14 af b4 00 18 8e 22 00 10 af a2 00 1c 8e 22 00 14 af a2 00 20 8e 22 00 24 af a2 00 24 8f 84 9c 80 24 05 00 0d 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 00 38 21 8f bc 00 28 24 04 00 02 24 05 00 0d 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 35 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 11 8f 99 c0 30 af a0 00 10 af a0 00 14 af b2 00 18 8f 82 80 60 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 9c af a2 00 20 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 99 c0 30 03 20 f8 09 00 00 20 21 8f bc 00 28 af b2 00 10 af a2 00 14 af b4 00 18 8e 22 00 10 af a2 00 1c 8e 22 00 14 af a2 00 20 8e 22 00 24 af a2 00 24 8f 84 a6 d0 24 05 00 0d 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 10 00 00 0d 8f bc 00 28 8f 99 a7 7c 03 20 f8 09 00 00 28 21 8f bc 00 28 a2 20 01 78 02 40 20 21 8f 99 cb 34 03 20 f8 09 02 20 28 21 30 42 00 ff 14 40 00 07 8f bc 00 28 02 20 20 21 8f 99 b9 98 03 20 f8 09 24 05 00 01 10 00 00 5c 8f bc 00 28 92 22 01 54 14 40 00 07 24 02 00 01 02 20 20 21 8f 99 b9 98 03 20 f8 09 24 05 00 01 10 00 00 53 8f bc 00 28 10 00 00 51 a2 22 01 55 8f 83 d6 20 8c 62 01 50 24 42 00 01 ac 62 01 50 8c 62 01 38 24 42 00 01 ac 62 01 38 af a0 00 10 af a0 00 14 02 00 20 21 00 00 28 21 00 00 30 21 8f 99 88 84 03 20 f8 09 00 00 38 21 8f bc 00 28 8f 90 d5 c4 a2 00 00 00 00 00 20 21 8f 99 a7 a8 03 20 f8 09 24 05 00 0d 10 40 00 18 8f bc 00 28 af a0 00 10 af a0 00 14 af a0 00 18 8f 82 80 60 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7f f4 af a2 00 20 02 00 20 21 24 05 04 00 00 00 30 21 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 9c 80 24 05 00 0d 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 00 38 21 8f bc 
00 28 00 00 20 21 24 05 00 0d 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 1a 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af a0 00 18 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7f f4 af a2 00 20 24 05 04 00 00 00 30 21 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 a6 d0 24 05 00 0d 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 8f bc 00 28 8f bf 00 44 8f b4 00 40 8f b3 00 3c 8f b2 00 38 8f b1 00 34 8f b0 00 30 03 e0 00 08 27 bd 00 48
```
**Environment (please complete the following information):**
- OS: Arch Linux
- Java Version:
```
$ java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
```
- Ghidra Version: Ghidra_9.1-BETA_DEV
|
1.0
|
Incorrect Disassembly of MIPS Function Prologue (Big Endian) - **Describe the bug**
Ghidra disassembles the function prologue as follows:
```
.text:004fad64 lui gp,0x57
assume t9 = <UNKNOWN>
assume gp = <UNKNOWN>
.text:004fad68 addiu gp,gp,0xfcc
.text:004fad6c addu gp,gp,t9
.text:004fad70 addiu sp,sp,-0x48
.text:004fad74 sw ra,local_4(sp)
.text:004fad78 sw s4,local_8(sp)
.text:004fad7c sw s3,local_c(sp)
...
```
Ida's version:
```
.text:004FAD64 la $gp, loc_570FCC
.text:004FAD6C addu $gp, $t9
.text:004FAD70 addiu $sp, -0x48
.text:004FAD74 sw $ra, 0x48+var_4($sp)
.text:004FAD78 sw $s4, 0x48+var_8($sp)
.text:004FAD7C sw $s3, 0x48+var_C($sp)
```
Binary Ninja:
```
004fad64 3c1c0057… li $gp, 0x570fcc
004fad6c 0399e021 addu $gp, $gp, $t9
004fad70 27bdffb8 addiu $sp, $sp, -0x48
004fad74 afbf0044 sw $ra, 0x44($sp) {__saved_$ra}
004fad78 afb40040 sw $s4, 0x40($sp) {__saved_$s4}
004fad7c afb3003c sw $s3, 0x3c($sp) {__saved_$s3}
```
As a result, Ghidra bases the decompilation on an incorrect $GP value, resulting in an incorrect decompilation:
```
void FUN_004fad64(undefined8 param_1,undefined8 param_2,longlong param_3)
{
undefined *puVar1;
int iVar2;
undefined4 uVar3;
code *pcVar4;
longlong lVar5;
ssize_t sVar8;
longlong lVar6;
undefined4 *puVar9;
ulonglong uVar7;
int iVar10;
sockaddr *local_24;
undefined *local_20;
socklen_t *in_stack_ffffffe4;
local_20 = &_gp;
FUN_00643480();
if (param_3 < 0) {
puVar1 = *(local_20 + -0x2a3c);
*puVar1 = 0;
lVar5 = (**(local_20 + -0x5858))(0,0xff03);
if (lVar5 != 0) {
(**(local_20 + -0x6dc4))(puVar1,0x400,0,0xff03);
(**(local_20 + -0x6c34))(*(local_20 + -0x6380),0xff03,0,puVar1);
}
lVar5 = (**(local_20 + -0x3540))(0,0xff03,0,0);
if (lVar5 != 0) {
if (**(local_20 + -0x2a3c) == '\0') {
(**(local_20 + -0x6dc4))(*(local_20 + -0x2a3c),0x400,0,0xff03);
}
(**(local_20 + -0x6c34))(*(local_20 + -0x5930),0xff03,0,*(local_20 + -0x2a3c));
}
}
....
```
**To Reproduce**
Steps to reproduce the behavior:
Function bytes:
```
3c 1c 00 57 27 9c 0f cc 03 99 e0 21 27 bd ff b8 af bf 00 44 af b4 00 40 af b3 00 3c af b2 00 38 af b1 00 34 af b0 00 30 af bc 00 28 8f 99 9f fc 03 20 f8 09 00 c0 80 21 06 01 00 44 8f bc 00 28 8f 91 d5 c4 a2 20 00 00 00 00 20 21 8f 99 a7 a8 03 20 f8 09 34 05 ff 03 10 40 00 19 8f bc 00 28 af a0 00 10 af a0 00 14 af a0 00 18 8f 82 80 60 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7e c0 af a2 00 20 02 20 20 21 24 05 04 00 00 00 30 21 8f 99 92 3c 03 20 f8 09 34 07 ff 03 8f bc 00 28 af b0 00 10 8f 84 9c 80 34 05 ff 03 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 20 38 21 8f bc 00 28 00 00 20 21 34 05 ff 03 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 02 3b 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af a0 00 18 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7e c0 af a2 00 20 24 05 04 00 00 00 30 21 8f 99 92 3c 03 20 f8 09 34 07 ff 03 8f bc 00 28 af b0 00 10 8f 84 a6 d0 34 05 ff 03 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 10 00 02 20 8f bc 00 28 8f 99 aa 5c 03 20 f8 09 24 04 06 72 10 40 01 cb 8f bc 00 28 00 40 90 21 00 40 20 21 00 00 28 21 8f 99 b9 30 03 20 f8 09 24 06 06 72 8f bc 00 28 af a0 00 10 af a0 00 14 02 00 20 21 02 40 28 21 24 06 06 72 8f 99 88 84 03 20 f8 09 00 00 38 21 8f bc 00 28 1c 40 00 4d 00 40 80 21 8f 83 d6 20 8c 62 01 3c 24 42 00 01 ac 62 01 3c 8c 62 01 38 24 42 00 01 ac 62 01 38 8f 90 d5 c4 a2 00 00 00 24 04 00 02 8f 99 a7 a8 03 20 f8 09 24 05 00 0d 10 40 00 18 8f bc 00 28 af a0 00 10 af a0 00 14 af a0 00 18 8f 82 80 60 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7e e8 af a2 00 20 02 00 20 21 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 9c 80 24 05 00 0d 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 00 38 21 8f bc 00 28 24 04 00 02 24 05 00 0d 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 1a 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af a0 00 18 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7e e8 af a2 00 20 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 a6 d0 24 05 00 0d 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 8f bc 00 28 8f 99 d4 10 03 20 f8 09 02 40 20 21 10 00 01 bd 8f bc 00 28 24 04 00 01 8f 99 a5 44 03 20 f8 09 02 40 28 21 8f bc 00 28 14 40 00 46 00 40 88 21 8f 90 d5 c4 a2 00 00 00 24 04 00 02 8f 99 a7 a8 03 20 f8 09 24 05 00 0d 10 40 00 18 8f bc 00 28 af a0 00 10 af a0 00 14 af a0 00 18 8f 82 80 60 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7f 0c af a2 00 20 02 00 20 21 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 9c 80 24 05 00 0d 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 00 38 21 8f bc 00 28 24 04 00 02 24 05 00 0d 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 1a 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af a0 00 18 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7f 0c af a2 00 20 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 a6 d0 24 05 00 0d 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 8f bc 00 28 8f 99 d4 10 03 20 f8 09 02 40 20 21 10 00 01 71 8f bc 00 28 ac 50 00 08 8f 82 d6 20 8c 52 0b 54 12 40 00 02 00 00 a0 21 8e 54 01 00 8e 33 00 00 8f 90 d5 c4 a2 00 00 00 24 04 00 03 8f 99 a7 a8 03 20 f8 09 24 05 00 01 10 40 00 18 8f bc 00 28 af a0 00 10 af a0 00 14 af b2 00 18 8f 82 80 60 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 3c af a2 00 20 02 00 20 21 24 05 04 00 24 06 00 03 8f 99 92 3c 03 20 f8 09 24 07 00 01 8f bc 00 28 8f 84 9c 80 24 05 00 01 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 00 38 21 8f bc 00 28 24 
04 00 03 24 05 00 01 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 1a 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af b2 00 18 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 3c af a2 00 20 24 05 04 00 24 06 00 03 8f 99 92 3c 03 20 f8 09 24 07 00 01 8f bc 00 28 8f 84 a6 d0 24 05 00 01 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 8f bc 00 28 8e 30 00 08 2e 02 00 e8 10 40 00 4f 26 23 00 28 8f 83 d6 20 8c 62 01 44 24 42 00 01 ac 62 01 44 8c 62 01 38 24 42 00 01 ac 62 01 38 8f 93 d5 c4 a2 60 00 00 24 04 00 02 8f 99 a7 a8 03 20 f8 09 24 05 00 01 10 40 00 1b 8f bc 00 28 af a0 00 10 af a0 00 14 af b2 00 18 8f 82 80 60 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 5c af a2 00 20 02 60 20 21 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 01 8f bc 00 28 af b0 00 10 24 02 00 e8 af a2 00 14 8f 84 9c 80 24 05 00 01 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 60 38 21 8f bc 00 28 24 04 00 02 24 05 00 01 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 94 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af b2 00 18 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 5c af a2 00 20 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 01 8f bc 00 28 af b0 00 10 24 02 00 e8 af a2 00 14 8f 84 a6 d0 24 05 00 01 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 10 00 00 77 8f bc 00 28 ae 30 00 24 ae 33 00 0c a2 20 00 28 24 02 00 02 a0 62 00 01 24 02 00 43 a4 62 00 02 8f 99 be 20 03 20 f8 09 02 20 20 21 12 40 00 03 8f bc 00 28 16 80 00 5d 02 80 20 21 8f 90 d5 c4 a2 00 00 00 24 04 00 02 8f 99 a7 a8 03 20 f8 09 24 05 00 0d 10 40 00 25 8f bc 00 28 af a0 00 10 af a0 00 14 af b2 00 18 8f 82 80 60 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 9c af a2 00 20 02 00 20 21 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 99 c0 30 03 20 f8 09 00 00 20 21 8f bc 00 28 af b2 00 10 af a2 00 14 af b4 00 18 8e 22 00 10 af a2 00 1c 8e 22 00 14 af a2 00 20 8e 22 00 24 af a2 00 24 8f 84 9c 80 24 05 00 0d 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 00 38 21 8f bc 00 28 24 04 00 02 24 05 00 0d 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 35 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 11 8f 99 c0 30 af a0 00 10 af a0 00 14 af b2 00 18 8f 82 80 60 24 42 90 90 af a2 00 1c 8f 82 80 48 24 42 7f 9c af a2 00 20 24 05 04 00 24 06 00 02 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 99 c0 30 03 20 f8 09 00 00 20 21 8f bc 00 28 af b2 00 10 af a2 00 14 af b4 00 18 8e 22 00 10 af a2 00 1c 8e 22 00 14 af a2 00 20 8e 22 00 24 af a2 00 24 8f 84 a6 d0 24 05 00 0d 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 10 00 00 0d 8f bc 00 28 8f 99 a7 7c 03 20 f8 09 00 00 28 21 8f bc 00 28 a2 20 01 78 02 40 20 21 8f 99 cb 34 03 20 f8 09 02 20 28 21 30 42 00 ff 14 40 00 07 8f bc 00 28 02 20 20 21 8f 99 b9 98 03 20 f8 09 24 05 00 01 10 00 00 5c 8f bc 00 28 92 22 01 54 14 40 00 07 24 02 00 01 02 20 20 21 8f 99 b9 98 03 20 f8 09 24 05 00 01 10 00 00 53 8f bc 00 28 10 00 00 51 a2 22 01 55 8f 83 d6 20 8c 62 01 50 24 42 00 01 ac 62 01 50 8c 62 01 38 24 42 00 01 ac 62 01 38 af a0 00 10 af a0 00 14 02 00 20 21 00 00 28 21 00 00 30 21 8f 99 88 84 03 20 f8 09 00 00 38 21 8f bc 00 28 8f 90 d5 c4 a2 00 00 00 00 00 20 21 8f 99 a7 a8 03 20 f8 09 24 05 00 0d 10 40 00 18 8f bc 00 28 af a0 00 10 af a0 00 14 af a0 00 18 8f 82 80 60 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7f f4 af a2 00 20 02 00 20 21 24 05 04 00 00 00 30 21 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 9c 80 24 05 00 0d 00 00 30 21 8f 99 93 cc 03 20 f8 09 02 00 38 21 8f bc 
00 28 00 00 20 21 24 05 00 0d 00 00 30 21 8f 99 ca c0 03 20 f8 09 00 00 38 21 10 40 00 1a 8f bc 00 28 8f 84 d5 c4 80 82 00 00 14 40 00 0f 8f 82 80 60 af a0 00 10 af a0 00 14 af a0 00 18 24 42 90 6c af a2 00 1c 8f 82 80 48 24 42 7f f4 af a2 00 20 24 05 04 00 00 00 30 21 8f 99 92 3c 03 20 f8 09 24 07 00 0d 8f bc 00 28 8f 84 a6 d0 24 05 00 0d 8f 87 d5 c4 8f 99 93 cc 03 20 f8 09 00 00 30 21 8f bc 00 28 8f bf 00 44 8f b4 00 40 8f b3 00 3c 8f b2 00 38 8f b1 00 34 8f b0 00 30 03 e0 00 08 27 bd 00 48
```
**Environment (please complete the following information):**
- OS: Arch Linux
- Java Version:
```
$ java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
```
- Ghidra Version: Ghidra_9.1-BETA_DEV
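For quick cross-checking (not part of the original report), here is a minimal Python sketch of how the PIC `$gp` value falls out of that prologue, `lui gp, HI` / `addiu gp, gp, LO` / `addu gp, gp, t9`, where `t9` holds the function entry address and `addiu` sign-extends its immediate, a classic source of wrong-`gp` bugs. The `0x400000` load address is purely hypothetical.
```python
import struct

def pic_gp_value(prologue, func_addr):
    # lui gp, HI / addiu gp, gp, LO / addu gp, gp, t9 (t9 = function entry)
    w0, w1 = struct.unpack(">2I", prologue[:8])  # two big-endian words
    hi = (w0 & 0xFFFF) << 16                     # lui immediate, shifted
    lo = w1 & 0xFFFF
    if lo & 0x8000:                              # addiu sign-extends its immediate
        lo -= 0x10000
    return (hi + lo + func_addr) & 0xFFFFFFFF

# First two words of the function bytes above: 3c1c0057 (lui), 279c0fcc (addiu).
print(hex(pic_gp_value(bytes.fromhex("3c1c0057279c0fcc"), 0x400000)))
```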
|
process
|
incorrect dissassembly of mips function prologue big endian describe the bug ghidra disassembles the function prologue as follows text lui gp assume assume gp text addiu gp gp text addu gp gp text addiu sp sp text sw ra local sp text sw local sp text sw local c sp ida s version text la gp loc text addu gp text addiu sp text sw ra var sp text sw var sp text sw var c sp binary ninja … li gp addu gp gp addiu sp sp sw ra sp saved ra sw sp saved sw sp saved as a result ghidra bases the decompilation off of an incorrect gp value resulting in an incorrect decompilation void fun param param longlong param undefined int code longlong ssize t longlong ulonglong int sockaddr local undefined local socklen t in stack local gp fun if param local local if local local local local if if local local local local local local to reproduce steps to reproduce the behavior function bytes cc bd ff af bf af af af af af af bc fc bc ff bc af af af af af ff bc af ff cc bc ff ca bc af af af af af ff bc af ff cc bc aa cb bc bc af af bc ac ac bc af af af af af bc cc bc ca bc af af af af af bc cc bc bd bc bc bc af af af af af bc cc bc ca bc af af af af af bc cc bc bc ac bc af af af af af bc cc bc ca bc af af af af af bc cc bc ac ac bc af af af af af bc af af cc bc ca bc af af af af af bc af af cc bc ae ae be bc bc af af af af af bc bc af af af af af af cc bc ca bc af af af af af bc bc af af af af af af cc bc bc cb ff bc bc bc ac ac af af bc bc af af af af af bc cc bc ca bc af af af af af bc cc bc bf bd environment please complete the following information os arch linux java version java version openjdk version openjdk runtime environment build openjdk bit server vm build mixed mode ghidra version ghidra beta dev
| 1
|
642,484
| 20,905,180,466
|
IssuesEvent
|
2022-03-24 00:48:37
|
JustArchiNET/ArchiSteamFarm
|
https://api.github.com/repos/JustArchiNET/ArchiSteamFarm
|
closed
|
zh-CN global setting not fully effective
|
🐛 Bug ✔️ Confirmed 🟢 Low priority
|
### Checklist
- [X] I read and understood ASF's **[Contributing guidelines](https://github.com/JustArchiNET/ArchiSteamFarm/blob/main/.github/CONTRIBUTING.md)**
- [X] I also read **[Setting-up](https://github.com/JustArchiNET/ArchiSteamFarm/wiki/Setting-up)** and **[FAQ](https://github.com/JustArchiNET/ArchiSteamFarm/wiki/FAQ)**, I don't need **[help](https://github.com/JustArchiNET/ArchiSteamFarm/blob/main/.github/SUPPORT.md)**, this is a bug report
- [X] I don't own more than **[10 accounts in total](https://github.com/JustArchiNET/ArchiSteamFarm/wiki/FAQ#how-many-bots-can-i-run-with-asf)**
- [X] I'm not using **[custom plugins](https://github.com/JustArchiNET/ArchiSteamFarm/wiki/Plugins)**
- [X] This is not a **[question](https://github.com/JustArchiNET/ArchiSteamFarm/discussions)**
- [X] This is not a **[technical issue](https://github.com/JustArchiNET/ArchiSteamFarm/discussions)**
- [X] This is not **[ASF-ui problem](https://github.com/JustArchiNET/ASF-ui/issues/new/choose)**
### ASF version
Latest stable release
### ASF variant
docker-linux/amd64
### Bug description
When I use the zh-CN global setting, everything works fine at startup: if you add an enable: true bot, the log displays Chinese normally, but when I start that bot a second time, it starts to show English. In fact, I started encountering this problem on version 5.2.2.5, and it remains in 5.2.3.7.
### Expected behavior
Chinese is displayed all the time
### Actual behavior
At first the log showed Chinese, but when I started a bot, it showed English.
### Steps to reproduce
1. Start an asf docker container, using the "CurrentCulture": "zh-CN" global setting
2. Add a bot set to enable
3. All of the above will show Chinese normally
4. But when I stop the bot and start it again, the logs start to show up in English
### Possible reason/solution
_No response_
### Can you help us with this bug report?
Somehow, I can test and offer feedback, but can't code
### Full log.txt recorded during reproducing the problem
```text
Session terminated, killing shell...]0;ArchiSteamFarm V5.2.3.7 (linux-x64/4fe6586f-d76c-4b9d-85d1-8572770bfc1f | .NET 6.0.2; debian.11-x64; Linux 4.18.0-193.14.2.el8_2.x86_64 #1 SMP Sun Jul 26 03:54:29 UTC 2020)2022-03-22 13:52:03.9255|INFO|ASF|InitCore() ArchiSteamFarm V5.2.3.7 (linux-x64/4fe6586f-d76c-4b9d-85d1-8572770bfc1f | .NET 6.0.2; debian.11-x64; Linux 4.18.0-193.14.2.el8_2.x86_64 #1 SMP Sun Jul 26 03:54:29 UTC 2020)
2022-03-22 13:52:03.9852|INFO|ASF|InitCore() Copyright © 2015-2022 JustArchiNET
2022-03-22 13:52:04.3888|WARN|ASF|InitGlobalConfigAndLanguage() 配置文件 config/ASF.json 将会迁移到最新格式……
2022-03-22 13:52:04.4195|ERROR|ASF|Write()
2022-03-22 13:52:04.4201|INFO|ASF|InitGlobalConfigAndLanguage() 完成!
2022-03-22 13:52:14.6204|INFO|ASF|Start() 正在启动 IPC 服务……
2022-03-22 13:52:15.1381|INFO|ASF|Start() IPC 服务已就绪!
2022-03-22 13:52:15.2248|WARN|ASF|Load() 您 'luyu121' 的 Steam 密码似乎很弱。请考虑选择更强的密码来增强安全性,详情 :Add another word or two. Uncommon words are better.
2022-03-22 13:52:15.2692|INFO|luyu121|InitStart() 您已在配置文件中禁用此机器人,该实例将不会启动!
2022-03-22 13:52:33.6203|INFO|luyu121|Start() Starting...
2022-03-22 13:52:33.6247|INFO|luyu121|Connect() Connecting...
2022-03-22 13:52:33.9711|INFO|luyu121|OnConnected() Connected to Steam!
2022-03-22 13:52:33.9846|INFO|luyu121|OnConnected() Logging in...
2022-03-22 13:52:34.7218|WARN|luyu121|OnLoggedOn() Unable to login to Steam: RateLimitExceeded/RateLimitExceeded
2022-03-22 13:52:34.7342|INFO|luyu121|OnDisconnected() Disconnected from Steam!
2022-03-22 13:52:34.7555|INFO|luyu121|OnDisconnected() Rate limit exceeded, we will retry after 25 minutes of cooldown...
```
### Global ASF.json config file
```json
{
"CurrentCulture": "zh-CN",
"Headless": true,
"IPCPassword": "***",
"LoginLimiterDelay": 3,
"SteamOwnerID": 76561198395619669,
"UpdateChannel": 0
}
```
### BotName.json config of all affected bot instances
_No response_
### Additional info
_No response_
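To make step 1 above concrete, a hedged repro sketch using the Python docker SDK (`pip install docker`); the `justarchi/archisteamfarm` image and the `/app/config` mount point follow ASF's documented Docker setup as I understand it, and the host path is a placeholder:
```python
import docker

client = docker.from_env()
client.containers.run(
    "justarchi/archisteamfarm",              # official image, per the ASF wiki
    name="asf-zhcn-repro",
    volumes={"/home/me/asf-config": {"bind": "/app/config", "mode": "rw"}},
    detach=True,
)
```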
|
1.0
|
zh-CN global setting not fully effective - ### Checklist
- [X] I read and understood ASF's **[Contributing guidelines](https://github.com/JustArchiNET/ArchiSteamFarm/blob/main/.github/CONTRIBUTING.md)**
- [X] I also read **[Setting-up](https://github.com/JustArchiNET/ArchiSteamFarm/wiki/Setting-up)** and **[FAQ](https://github.com/JustArchiNET/ArchiSteamFarm/wiki/FAQ)**, I don't need **[help](https://github.com/JustArchiNET/ArchiSteamFarm/blob/main/.github/SUPPORT.md)**, this is a bug report
- [X] I don't own more than **[10 accounts in total](https://github.com/JustArchiNET/ArchiSteamFarm/wiki/FAQ#how-many-bots-can-i-run-with-asf)**
- [X] I'm not using **[custom plugins](https://github.com/JustArchiNET/ArchiSteamFarm/wiki/Plugins)**
- [X] This is not a **[question](https://github.com/JustArchiNET/ArchiSteamFarm/discussions)**
- [X] This is not a **[technical issue](https://github.com/JustArchiNET/ArchiSteamFarm/discussions)**
- [X] This is not **[ASF-ui problem](https://github.com/JustArchiNET/ASF-ui/issues/new/choose)**
### ASF version
Latest stable release
### ASF variant
docker-linux/amd64
### Bug description
When I use the zh-CN global setting, everything works fine at startup: if you add an enable: true bot, the log displays Chinese normally, but when I start that bot a second time, it starts to show English. In fact, I started encountering this problem on version 5.2.2.5, and it remains in 5.2.3.7.
### Expected behavior
Chinese is displayed all the time
### Actual behavior
At first the log showed Chinese, but when I started a bot, it showed English.
### Steps to reproduce
1. Start an asf docker container, using the "CurrentCulture": "zh-CN" global setting
2. Add a bot set to enable
3. All of the above will show Chinese normally
4. But when I stop the bot and start it again, the logs start to show up in English
### Possible reason/solution
_No response_
### Can you help us with this bug report?
Somehow, I can test and offer feedback, but can't code
### Full log.txt recorded during reproducing the problem
```text
Session terminated, killing shell...]0;ArchiSteamFarm V5.2.3.7 (linux-x64/4fe6586f-d76c-4b9d-85d1-8572770bfc1f | .NET 6.0.2; debian.11-x64; Linux 4.18.0-193.14.2.el8_2.x86_64 #1 SMP Sun Jul 26 03:54:29 UTC 2020)2022-03-22 13:52:03.9255|INFO|ASF|InitCore() ArchiSteamFarm V5.2.3.7 (linux-x64/4fe6586f-d76c-4b9d-85d1-8572770bfc1f | .NET 6.0.2; debian.11-x64; Linux 4.18.0-193.14.2.el8_2.x86_64 #1 SMP Sun Jul 26 03:54:29 UTC 2020)
2022-03-22 13:52:03.9852|INFO|ASF|InitCore() Copyright © 2015-2022 JustArchiNET
2022-03-22 13:52:04.3888|WARN|ASF|InitGlobalConfigAndLanguage() 配置文件 config/ASF.json 将会迁移到最新格式……
2022-03-22 13:52:04.4195|ERROR|ASF|Write()
2022-03-22 13:52:04.4201|INFO|ASF|InitGlobalConfigAndLanguage() 完成!
2022-03-22 13:52:14.6204|INFO|ASF|Start() 正在启动 IPC 服务……
2022-03-22 13:52:15.1381|INFO|ASF|Start() IPC 服务已就绪!
2022-03-22 13:52:15.2248|WARN|ASF|Load() 您 'luyu121' 的 Steam 密码似乎很弱。请考虑选择更强的密码来增强安全性,详情 :Add another word or two. Uncommon words are better.
2022-03-22 13:52:15.2692|INFO|luyu121|InitStart() 您已在配置文件中禁用此机器人,该实例将不会启动!
2022-03-22 13:52:33.6203|INFO|luyu121|Start() Starting...
2022-03-22 13:52:33.6247|INFO|luyu121|Connect() Connecting...
2022-03-22 13:52:33.9711|INFO|luyu121|OnConnected() Connected to Steam!
2022-03-22 13:52:33.9846|INFO|luyu121|OnConnected() Logging in...
2022-03-22 13:52:34.7218|WARN|luyu121|OnLoggedOn() Unable to login to Steam: RateLimitExceeded/RateLimitExceeded
2022-03-22 13:52:34.7342|INFO|luyu121|OnDisconnected() Disconnected from Steam!
2022-03-22 13:52:34.7555|INFO|luyu121|OnDisconnected() Rate limit exceeded, we will retry after 25 minutes of cooldown...
```
### Global ASF.json config file
```json
{
"CurrentCulture": "zh-CN",
"Headless": true,
"IPCPassword": "***",
"LoginLimiterDelay": 3,
"SteamOwnerID": 76561198395619669,
"UpdateChannel": 0
}
```
### BotName.json config of all affected bot instances
_No response_
### Additional info
_No response_
|
non_process
|
zh ch global setting not fully effective checklist i read and understood asf s i also read and i don t need this is a bug report i don t own more than i m not using this is not a this is not a this is not asf version latest stable release asf variant docker linux bug description when i use the zh ch global setting everything works fine at the beginning of startup if you add an enable true bot the log will normally display chinese but when i start this bot for the second time it will start to show english in fact i started to encounter this problem on version but this problem remains in expected behavior chinese is displayed all the time actual behavior at first the log showed chinese but when i started a robot it showed english steps to reproduce start an asf docker container using the currentculture zh cn global setting add a robot set to enable all of the above will show chinese normally but when i stop the bot and start it again the logs start to show up in english possible reason solution no response can you help us with this bug report somehow i can test and offer feedback but can t code full log txt recorded during reproducing the problem text session terminated killing shell archisteamfarm linux net debian linux smp sun jul utc info asf initcore archisteamfarm linux net debian linux smp sun jul utc info asf initcore copyright © justarchinet warn asf initglobalconfigandlanguage 配置文件 config asf json 将会迁移到最新格式…… error asf write info asf initglobalconfigandlanguage 完成! info asf start 正在启动 ipc 服务…… info asf start ipc 服务已就绪! warn asf load 您 的 steam 密码似乎很弱。请考虑选择更强的密码来增强安全性,详情 :add another word or two uncommon words are better info initstart 您已在配置文件中禁用此机器人,该实例将不会启动! info start starting info connect connecting info onconnected connected to steam info onconnected logging in warn onloggedon unable to login to steam ratelimitexceeded ratelimitexceeded info ondisconnected disconnected from steam info ondisconnected rate limit exceeded we will retry after minutes of cooldown global asf json config file json currentculture zh cn headless true ipcpassword loginlimiterdelay steamownerid updatechannel botname json config of all affected bot instances no response additional info no response
| 0
|
277,992
| 24,116,863,570
|
IssuesEvent
|
2022-09-20 15:20:07
|
redpanda-data/redpanda
|
https://api.github.com/repos/redpanda-data/redpanda
|
closed
|
Failure in `EndToEndShadowIndexingTestWithDisruptions.test_write_with_node_failures` (Failed to consume up to offsets)
|
kind/bug area/tests ci-failure area/shadow-indexing ci-disabled-test
|
https://buildkite.com/redpanda/redpanda/builds/9910#0ad95b8f-ad64-45eb-bd6d-081ecbbd9f81
```
test_id: rptest.tests.e2e_shadow_indexing_test.EndToEndShadowIndexingTestWithDisruptions.test_write_with_node_failures
status: FAIL
run time: 2 minutes 47.183 seconds
TimeoutError("Consumer failed to consume up to offsets {TopicPartition(topic='panda-topic', partition=0): 36217} after waiting 30s.")
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 135, in run
data = self.run_test()
File "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 227, in run_test
return self.test_context.function(self.test)
File "/root/tests/rptest/services/cluster.py", line 35, in wrapped
r = f(self, *args, **kwargs)
File "/root/tests/rptest/tests/e2e_shadow_indexing_test.py", line 137, in test_write_with_node_failures
self.run_validation()
File "/root/tests/rptest/tests/end_to_end.py", line 188, in run_validation
self.await_consumed_offsets(self.producer.last_acked_offsets,
File "/root/tests/rptest/tests/end_to_end.py", line 154, in await_consumed_offsets
wait_until(has_finished_consuming,
File "/usr/local/lib/python3.9/dist-packages/ducktape/utils/util.py", line 58, in wait_until
raise TimeoutError(err_msg() if callable(err_msg) else err_msg) from last_exception
ducktape.errors.TimeoutError: Consumer failed to consume up to offsets {TopicPartition(topic='panda-topic', partition=0): 36217} after waiting 30s.
```
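One quick way to probe whether this is only a tight deadline is to re-run the validation with a larger timeout. A rough sketch against ducktape's real `wait_until` helper (a callable `err_msg` is supported, as the traceback shows); `last_consumed_offsets()` is a hypothetical accessor standing in for whatever the verifiable consumer exposes:
```python
from ducktape.utils.util import wait_until

def await_consumed(consumer, last_acked_offsets, timeout_sec=90):
    def caught_up():
        seen = consumer.last_consumed_offsets()  # hypothetical accessor
        return all(seen.get(tp, -1) >= off
                   for tp, off in last_acked_offsets.items())
    wait_until(caught_up, timeout_sec=timeout_sec, backoff_sec=1,
               err_msg=lambda: f"Consumer failed to reach {last_acked_offsets}")
```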
|
2.0
|
Failure in `EndToEndShadowIndexingTestWithDisruptions.test_write_with_node_failures` (Failed to consume up to offsets) - https://buildkite.com/redpanda/redpanda/builds/9910#0ad95b8f-ad64-45eb-bd6d-081ecbbd9f81
```
test_id: rptest.tests.e2e_shadow_indexing_test.EndToEndShadowIndexingTestWithDisruptions.test_write_with_node_failures
status: FAIL
run time: 2 minutes 47.183 seconds
TimeoutError("Consumer failed to consume up to offsets {TopicPartition(topic='panda-topic', partition=0): 36217} after waiting 30s.")
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 135, in run
data = self.run_test()
File "/usr/local/lib/python3.9/dist-packages/ducktape/tests/runner_client.py", line 227, in run_test
return self.test_context.function(self.test)
File "/root/tests/rptest/services/cluster.py", line 35, in wrapped
r = f(self, *args, **kwargs)
File "/root/tests/rptest/tests/e2e_shadow_indexing_test.py", line 137, in test_write_with_node_failures
self.run_validation()
File "/root/tests/rptest/tests/end_to_end.py", line 188, in run_validation
self.await_consumed_offsets(self.producer.last_acked_offsets,
File "/root/tests/rptest/tests/end_to_end.py", line 154, in await_consumed_offsets
wait_until(has_finished_consuming,
File "/usr/local/lib/python3.9/dist-packages/ducktape/utils/util.py", line 58, in wait_until
raise TimeoutError(err_msg() if callable(err_msg) else err_msg) from last_exception
ducktape.errors.TimeoutError: Consumer failed to consume up to offsets {TopicPartition(topic='panda-topic', partition=0): 36217} after waiting 30s.
```
|
non_process
|
failure in endtoendshadowindexingtestwithdisruptions test write with node failures failed to consume up to offsets test id rptest tests shadow indexing test endtoendshadowindexingtestwithdisruptions test write with node failures status fail run time minutes seconds timeouterror consumer failed to consume up to offsets topicpartition topic panda topic partition after waiting traceback most recent call last file usr local lib dist packages ducktape tests runner client py line in run data self run test file usr local lib dist packages ducktape tests runner client py line in run test return self test context function self test file root tests rptest services cluster py line in wrapped r f self args kwargs file root tests rptest tests shadow indexing test py line in test write with node failures self run validation file root tests rptest tests end to end py line in run validation self await consumed offsets self producer last acked offsets file root tests rptest tests end to end py line in await consumed offsets wait until has finished consuming file usr local lib dist packages ducktape utils util py line in wait until raise timeouterror err msg if callable err msg else err msg from last exception ducktape errors timeouterror consumer failed to consume up to offsets topicpartition topic panda topic partition after waiting
| 0
|
1,566
| 4,164,978,824
|
IssuesEvent
|
2016-06-19 06:20:05
|
sysown/proxysql
|
https://api.github.com/repos/sysown/proxysql
|
opened
|
Set custom wait_timeout
|
ADMIN CONNECTION POOL MYSQL PROTOCOL QUERY PROCESSOR
|
When ProxySQL connects to a backend it should define a custom wait_timeout.
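For context, current ProxySQL versions expose a `mysql-wait_timeout` global (in milliseconds) on the admin interface, which is one plausible home for this setting. A hedged sketch of adjusting it from Python with `pymysql`; the default admin credentials and port 6032 are assumptions:
```python
import pymysql

admin = pymysql.connect(host="127.0.0.1", port=6032, user="admin", password="admin")
with admin.cursor() as cur:
    # 8 hours, in milliseconds, applied to backend connections once loaded.
    cur.execute("UPDATE global_variables SET variable_value='28800000' "
                "WHERE variable_name='mysql-wait_timeout'")
    cur.execute("LOAD MYSQL VARIABLES TO RUNTIME")
    cur.execute("SAVE MYSQL VARIABLES TO DISK")
admin.close()
```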
|
1.0
|
Set custom wait_timeout - When ProxySQL connects to a backend it should define a custom wait_timeout.
|
process
|
set custom wait timeout when proxysql connects to a backend it should define a custom wait timeout
| 1
|
15,456
| 19,669,069,241
|
IssuesEvent
|
2022-01-11 03:57:13
|
q191201771/lal
|
https://api.github.com/repos/q191201771/lal
|
closed
|
Pushing an RTMP stream to lal may panic
|
#Bug *In process
|
Pushing an RTMP stream to lal panics when the source is as follows:
ffmpeg.exe -i http://:8800/hls/0/index.m3u8 -c copy -f flv rtmp://127.0.0.1:1935/live/test110
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x1 pc=0x13ea02a]
goroutine 34903 [running]:
github.com/q191201771/lal/pkg/aac.(*AscContext).GetSamplingFrequency(...)
/Volumes/extssd/chef_git/lal/pkg/aac/aac.go:160
github.com/q191201771/lal/pkg/remux.(*Rtmp2RtspRemuxer).getAudioPacker(0xc000360000, 0x6f)
/Volumes/extssd/chef_git/lal/pkg/remux/rtmp2rtsp.go:189 +0x10a
github.com/q191201771/lal/pkg/remux.(*Rtmp2RtspRemuxer).remux(0xc000360000, 0x4, 0x800000099, 0x1, 0x0, 0xc000362000, 0x99, 0x1000)
/Volumes/extssd/chef_git/lal/pkg/remux/rtmp2rtsp.go:161 +0x205
github.com/q191201771/lal/pkg/remux.(*Rtmp2RtspRemuxer).FeedRtmpMsg(0xc000360000, 0x4, 0x800000099, 0x1, 0x0, 0xc000362000, 0x99, 0x1000)
/Volumes/extssd/chef_git/lal/pkg/remux/rtmp2rtsp.go:109 +0x1d1
github.com/q191201771/lal/pkg/logic.(*Group).broadcastByRtmpMsg(0xc000152000, 0x4, 0x800000099, 0x1, 0x0, 0xc000362000, 0x99, 0x1000)
/Volumes/extssd/chef_git/lal/pkg/logic/group.go:1002 +0x1e5c
github.com/q191201771/lal/pkg/logic.(*Group).OnReadRtmpAvMsg(0xc000152000, 0x4, 0x800000099, 0x1, 0x0, 0xc000362000, 0x99, 0x1000)
/Volumes/extssd/chef_git/lal/pkg/logic/group.go:918 +0xa5
github.com/q191201771/lal/pkg/rtmp.(*ServerSession).doMsg(0xc00032a000, 0xc00023a280, 0x99, 0x0)
/Volumes/extssd/chef_git/lal/pkg/rtmp/server_session.go:227 +0x124
github.com/q191201771/lal/pkg/rtmp.(*ChunkComposer).RunLoop(0xc0001840a0, 0x2094c02db58, 0xc00011e2c0, 0xc00033ff00, 0xc00011e2c0, 0x0)
/Volumes/extssd/chef_git/lal/pkg/rtmp/chunk_composer.go:240 +0x858
github.com/q191201771/lal/pkg/rtmp.(*ServerSession).runReadLoop(0xc00032a000, 0x0, 0x0)
/Volumes/extssd/chef_git/lal/pkg/rtmp/server_session.go:185 +0xa6
github.com/q191201771/lal/pkg/rtmp.(*ServerSession).RunLoop(0xc00032a000, 0xc000181470, 0x161e060)
```
|
1.0
|
Pushing an RTMP stream to lal may panic - Pushing an RTMP stream to lal panics when the source is as follows:
ffmpeg.exe -i http://:8800/hls/0/index.m3u8 -c copy -f flv rtmp://127.0.0.1:1935/live/test110
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x1 pc=0x13ea02a]
goroutine 34903 [running]:
github.com/q191201771/lal/pkg/aac.(*AscContext).GetSamplingFrequency(...)
/Volumes/extssd/chef_git/lal/pkg/aac/aac.go:160
github.com/q191201771/lal/pkg/remux.(*Rtmp2RtspRemuxer).getAudioPacker(0xc000360000, 0x6f)
/Volumes/extssd/chef_git/lal/pkg/remux/rtmp2rtsp.go:189 +0x10a
github.com/q191201771/lal/pkg/remux.(*Rtmp2RtspRemuxer).remux(0xc000360000, 0x4, 0x800000099, 0x1, 0x0, 0xc000362000, 0x99, 0x1000)
/Volumes/extssd/chef_git/lal/pkg/remux/rtmp2rtsp.go:161 +0x205
github.com/q191201771/lal/pkg/remux.(*Rtmp2RtspRemuxer).FeedRtmpMsg(0xc000360000, 0x4, 0x800000099, 0x1, 0x0, 0xc000362000, 0x99, 0x1000)
/Volumes/extssd/chef_git/lal/pkg/remux/rtmp2rtsp.go:109 +0x1d1
github.com/q191201771/lal/pkg/logic.(*Group).broadcastByRtmpMsg(0xc000152000, 0x4, 0x800000099, 0x1, 0x0, 0xc000362000, 0x99, 0x1000)
/Volumes/extssd/chef_git/lal/pkg/logic/group.go:1002 +0x1e5c
github.com/q191201771/lal/pkg/logic.(*Group).OnReadRtmpAvMsg(0xc000152000, 0x4, 0x800000099, 0x1, 0x0, 0xc000362000, 0x99, 0x1000)
/Volumes/extssd/chef_git/lal/pkg/logic/group.go:918 +0xa5
github.com/q191201771/lal/pkg/rtmp.(*ServerSession).doMsg(0xc00032a000, 0xc00023a280, 0x99, 0x0)
/Volumes/extssd/chef_git/lal/pkg/rtmp/server_session.go:227 +0x124
github.com/q191201771/lal/pkg/rtmp.(*ChunkComposer).RunLoop(0xc0001840a0, 0x2094c02db58, 0xc00011e2c0, 0xc00033ff00, 0xc00011e2c0, 0x0)
/Volumes/extssd/chef_git/lal/pkg/rtmp/chunk_composer.go:240 +0x858
github.com/q191201771/lal/pkg/rtmp.(*ServerSession).runReadLoop(0xc00032a000, 0x0, 0x0)
/Volumes/extssd/chef_git/lal/pkg/rtmp/server_session.go:185 +0xa6
github.com/q191201771/lal/pkg/rtmp.(*ServerSession).RunLoop(0xc00032a000, 0xc000181470, 0x161e060)
```
|
process
|
pushing an rtmp stream to lal may panic pushing an rtmp stream to lal panics when the source is as follows ffmpeg exe i c copy f flv rtmp live panic runtime error invalid memory address or nil pointer dereference goroutine github com lal pkg aac asccontext getsamplingfrequency volumes extssd chef git lal pkg aac aac go github com lal pkg remux getaudiopacker volumes extssd chef git lal pkg remux go github com lal pkg remux remux volumes extssd chef git lal pkg remux go github com lal pkg remux feedrtmpmsg volumes extssd chef git lal pkg remux go github com lal pkg logic group broadcastbyrtmpmsg volumes extssd chef git lal pkg logic group go github com lal pkg logic group onreadrtmpavmsg volumes extssd chef git lal pkg logic group go github com lal pkg rtmp serversession domsg volumes extssd chef git lal pkg rtmp server session go github com lal pkg rtmp chunkcomposer runloop volumes extssd chef git lal pkg rtmp chunk composer go github com lal pkg rtmp serversession runreadloop volumes extssd chef git lal pkg rtmp server session go github com lal pkg rtmp serversession runloop
| 1
|
636
| 3,092,139,715
|
IssuesEvent
|
2015-08-26 16:19:58
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
opened
|
ASC of Ternopil - Approval of the operating hours of trade and consumer-service establishments in the city of Ternopil
|
in process of creating
|
Description
https://drive.google.com/file/d/0B-TXzbaEvbw9MFUxQUlaMzVaZVE/view?usp=sharing
|
1.0
|
ASC of Ternopil - Approval of the operating hours of trade and consumer-service establishments in the city of Ternopil - Description
https://drive.google.com/file/d/0B-TXzbaEvbw9MFUxQUlaMzVaZVE/view?usp=sharing
|
process
|
asc of ternopil approval of the operating hours of trade and consumer service establishments in the city of ternopil description
| 1
|
231,767
| 7,643,117,001
|
IssuesEvent
|
2018-05-08 11:36:15
|
containous/traefik
|
https://api.github.com/repos/containous/traefik
|
closed
|
Keep AccessLog entries based on retry attempts
|
area/logs kind/enhancement priority/P2
|
### Do you want to request a *feature* or report a *bug*?
*feature*
### Description
We want to keep all access logs that either responded with a 5xx or where at least one retry attempt happened. While the former is perfectly possible now with the access log filters for status codes, we can't achieve the second, yet.
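Until a native filter exists, a post-processing sketch over JSON-formatted access logs; the `DownstreamStatus` and `RetryAttempts` field names match Traefik's JSON access-log schema to the best of my knowledge, but verify them against your own output:
```python
import json
import sys

# Keep entries that either answered 5xx or went through at least one retry.
for line in sys.stdin:
    try:
        entry = json.loads(line)
    except json.JSONDecodeError:
        continue
    if entry.get("DownstreamStatus", 0) >= 500 or entry.get("RetryAttempts", 0) >= 1:
        sys.stdout.write(line)
```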
|
1.0
|
Keep AccessLog entries based on retry attempts - ### Do you want to request a *feature* or report a *bug*?
*feature*
### Description
We want to keep all access logs that either responded with a 5xx or where at least one retry attempt happened. While the former is perfectly possible now with the access log filters for status codes, we can't achieve the second, yet.
|
non_process
|
keep accesslog entries based on retry attempts do you want to request a feature or report a bug feature description we want to keep all access logs that either responded with a or where at least one retry attempt happened while the former is perfectly possible now with the access log filters for status codes we can t achieve the second yet
| 0
|
20,580
| 27,242,212,981
|
IssuesEvent
|
2023-02-21 21:34:33
|
biocodellc/localcontexts_db
|
https://api.github.com/repos/biocodellc/localcontexts_db
|
opened
|
Login page: add a 'forgot username?' function
|
registration process
|
Users have been asking for their usernames to log in, so we need to add functionality that would help a user remember their username.
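A minimal Django-flavoured sketch of what such a flow could look like, assuming the project uses Django's standard auth user model; view wiring, templates, and rate limiting are deliberately omitted:
```python
from django.contrib.auth import get_user_model
from django.core.mail import send_mail

def send_username_reminder(email_address):
    # Mail every username registered under this address.
    User = get_user_model()
    usernames = list(User.objects.filter(email__iexact=email_address)
                     .values_list("username", flat=True))
    if usernames:
        send_mail(
            subject="Your username",
            message="Username(s) on file: " + ", ".join(usernames),
            from_email=None,  # falls back to DEFAULT_FROM_EMAIL
            recipient_list=[email_address],
        )
```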
|
1.0
|
Login page: add a 'forgot username?' function - Users have been asking for their usernames to log in, so we need to add functionality that would help a user remember their username.
|
process
|
login page add a forgot username function users have been asking their usernames to login so need to add functionality that would help a user remember their username
| 1
|
99,345
| 8,698,043,613
|
IssuesEvent
|
2018-12-04 22:03:56
|
Microsoft/vscode
|
https://api.github.com/repos/Microsoft/vscode
|
closed
|
Test: storage (global)
|
testplan-item
|
Refs: https://github.com/Microsoft/vscode/issues/58957
Complexity: 5
- [x] macOS: @RMacfarlane
- [x] Windows: @alexr00
- [x] Linux: @Tyriar
**Setup**
You need to set an environment variable (best from the console) to be able to test this item because while the code is in, it will not be enabled by default in the next stable release:
* macOS/Linux: `export VSCODE_TEST_STORAGE_MIGRATION=1`
* Windows: `set VSCODE_TEST_STORAGE_MIGRATION=1`
**Background**
Global storage is data that gets persisted for all windows and thus is shared (any user of `IStorageService` with `StorageScope.GLOBAL`). The new SQLite backend (`<user data home>/Code Insiders/User/globalStorage/temp.vscdb` - will be renamed later) runs from the main process and as such any changes to global storage first travel from renderer to the main process and is then emitted to all windows via an event to signal the change.
Since there is existing data in `window.localStorage` a migration takes place the very first time Code is started. The migration runs from the main process and includes:
* copying the Chrome SQLite DB to a different name to avoid locking issues (very early on startup)
* using SQLite NPM module to access the data
* write the data into our own SQLite DB with some filter (only support keys that are still in use)
**Testing**
The easiest way to test the migration is by running Code Stable on a user data dir (using `--user-data-dir`) and then opening Code Insiders on that same directory (remember to set the environment variable!). Alternatively you can delete the folder `User/globalStorage` in your user data directory. This is the place where the SQLite DB is stored.
Some things that we store in global storage that we can be used to check on:
* list of recently used commands in command palette
* width of the sidebar or height of the panel
* visibility and order of viewlets or panels
* [and more...](https://github.com/Microsoft/vscode/blob/a6defd0/src/vs/platform/storage/node/storageMainService.ts#L187)
**Note:** check for a message `[storage] migrating global storage from localStorage into SQLite` in the main logs to verify that the migration took place.
**Note:** if you do not shutdown VSCode normally, some data might not get persisted. The reason is simply that some data is only persisted when the window closes and not periodically. That behaviour was there before with the old backend already.
**Note:** You can use https://sqlitebrowser.org/ to read the DB with an external tool if needed.
**Verify:**
* you can monitor all storage activity by running with `--verbose` (check for entries in the main log!)
* you can print the contents of global and workspace storage with the "Log Storage" action from the command palette
* stress test global storage by working with multiple windows and configuring `window.restoreWindows:all` and ensuring global storage is persisted and synchronized
* e.g. change the order/visibility of viewlets in the sidebar and open a new window and verify that order/visibility is used
* verify global storage is persisted on shutdown and restored when you start again
* try to crash the Code process to get the SQLite DB into a broken state if possible. Data might get lost but the DB itself should never get corrupted. If it does we try to restore the data from a previous backup. If the backup is corrupt we start from scratch.
* verify we start even if a corrupt DB is present (you can turn a DB corrupt by changing its contents with a normal text editor)
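As an extra cross-check alongside the "Log Storage" action, the DB can be read directly with Python's stdlib; the `ItemTable` key/value schema is an assumption about how the storage DB is laid out, so adjust if it differs:
```python
import sqlite3

# Path as described above; the file is slated to be renamed, so check the folder.
db = sqlite3.connect("User/globalStorage/temp.vscdb")
for key, value in db.execute("SELECT key, value FROM ItemTable LIMIT 20"):
    print(key, str(value)[:80])
db.close()
```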
|
1.0
|
Test: storage (global) - Refs: https://github.com/Microsoft/vscode/issues/58957
Complexity: 5
- [x] macOS: @RMacfarlane
- [x] Windows: @alexr00
- [x] Linux: @Tyriar
**Setup**
You need to set an environment variable (best from the console) to be able to test this item because while the code is in, it will not be enabled by default in the next stable release:
* macOS/Linux: `export VSCODE_TEST_STORAGE_MIGRATION=1`
* Windows: `set VSCODE_TEST_STORAGE_MIGRATION=1`
**Background**
Global storage is data that gets persisted for all windows and thus is shared (any user of `IStorageService` with `StorageScope.GLOBAL`). The new SQLite backend (`<user data home>/Code Insiders/User/globalStorage/temp.vscdb` - will be renamed later) runs from the main process and as such any changes to global storage first travel from renderer to the main process and is then emitted to all windows via an event to signal the change.
Since there is existing data in `window.localStorage` a migration takes place the very first time Code is started. The migration runs from the main process and includes:
* copying the Chrome SQLite DB to a different name to avoid locking issues (very early on startup)
* using SQLite NPM module to access the data
* write the data into our own SQLite DB with some filter (only support keys that are still in use)
**Testing**
The easiest way to test the migration is by running Code Stable on a user data dir (using `--user-data-dir`) and then opening Code Insiders on that same directory (remember to set the environment variable!). Alternatively you can delete the folder `User/globalStorage` in your user data directory. This is the place where the SQLite DB is stored.
Some things that we store in global storage that we can be used to check on:
* list of recently used commands in command palette
* width of the sidebar or height of the panel
* visibility and order of viewlets or panels
* [and more...](https://github.com/Microsoft/vscode/blob/a6defd0/src/vs/platform/storage/node/storageMainService.ts#L187)
**Note:** check for a message `[storage] migrating global storage from localStorage into SQLite` in the main logs to verify that the migration took place.
**Note:** if you do not shutdown VSCode normally, some data might not get persisted. The reason is simply that some data is only persisted when the window closes and not periodically. That behaviour was there before with the old backend already.
**Note:** You can use https://sqlitebrowser.org/ to read the DB with an external tool if needed.
**Verify:**
* you can monitor all storage activity by running with `--verbose` (check for entries in the main log!)
* you can print the contents of global and workspace storage with the "Log Storage" action from the command palette
* stress test global storage by working with multiple windows and configuring `window.restoreWindows:all` and ensuring global storage is persisted and synchronized
* e.g. change the order/visibility of viewlets in the sidebar and open a new window and verify that order/visibility is used
* verify global storage is persisted on shutdown and restored when you start again
* try to crash the Code process to get the SQLite DB into a broken state if possible. Data might get lost but the DB itself should never get corrupted. If it does we try to restore the data from a previous backup. If the backup is corrupt we start from scratch.
* verify we start even if a corrupt DB is present (you can turn a DB corrupt by changing its contents with a normal text editor)
|
non_process
|
test storage global refs complexity macos rmacfarlane windows linux tyriar setup you need to set an environment variable best from the console to be able to test this item because while the code is in it will not be enabled by default in the next stable release macos linux export vscode test storage migration windows set vscode test storage migration background global storage is data that gets persisted for all windows and thus is shared any user of istorageservice with storagescope global the new sqlite backend code insiders user globalstorage temp vscdb will be renamed later runs from the main process and as such any changes to global storage first travel from renderer to the main process and is then emitted to all windows via an event to signal the change since there is existing data in window localstorage a migration takes place the very first time code is started the migration runs from the main process and includes copying the chrome sqlite db to a different name to avoid locking issues very early on startup using sqlite npm module to access the data write the data into our own sqlite db with some filter only support keys that are still in use testing the easiest to test the migration is by running code stable on a user data dir using user data dir and then opening code insiders on that same directory remember to set the environment variable alternatively you can delete the folder user globalstorage in our user home directory this is the place where the sqlite db is stored some things that we store in global storage that we can be used to check on list of recently used commands in command palette width of the sidebar or height of the panel visibility and order of viewlets or panels note check for a message migrating global storage from localstorage into sqlite in the main logs to verify that the migration took place note if you do not shutdown vscode normally some data might not get persisted the reason is simply that some data is only persisted when the window closes and not periodically that behaviour was there before with the old backend already note you can use to read the db with an external tool if needed verify you can monitor all storage activity by running with verbose check for entries in the main log you can print the contents of global and workspace storage with the log storage action from the command palette stress test global storage by working with multiple windows and configuring window restorewindows all and ensuring global storage is persisted and synchronized e g change the order visibility of viewlets in the sidebar and open a new window and verify that order visibility is used verify global storage is persisted on shutdown and restored when you start again try to crash the code process to get the sqlite db into a broken state if possible data might get lost but the db itself should never get corrupted if it does we try to restore the data from a previous backup if the backup is corrupt we start from scratch verify we start even if a corrupt db is present you can turn a db corrupt by changing its contents with a normal text editor
| 0
|
5,861
| 8,681,910,613
|
IssuesEvent
|
2018-12-02 01:21:21
|
lightningWhite/weatherLearning
|
https://api.github.com/repos/lightningWhite/weatherLearning
|
closed
|
Consolidate the data
|
dataProcessing to do
|
Since we will probably be presenting data to the net that we extrapolated from the original data (instead of actually using the data), it might be easier to create a new file containing everything we want with all of the cities' datas consolidated into one file.
Currently, there is a separate file for each attribute, and each attribute is divided into a separate column for each city.
Once Issue 4 is completed, we need to put together each difference column's attribute while making sure each attribute is associated with its correct date, hour, and weather condition prediction.
Our final file might be in the following format:
Columns:
humidity_diff_12hrs
pressure_diff_12hrs
temperature_diff_12hrs
wind_dir_diff_ 12hrs
wind_speed_diff_12hrs
weather_desc_future_12hrs
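A pandas sketch of that consolidation; the per-attribute file names, the `datetime` column, and the single-city simplification are placeholders for whatever Issue 4 actually produces:
```python
from functools import reduce

import pandas as pd

ATTRS = ["humidity", "pressure", "temperature", "wind_dir", "wind_speed"]

frames = []
for attr in ATTRS:
    df = pd.read_csv(f"{attr}.csv", parse_dates=["datetime"])  # hypothetical inputs
    city_col = df.columns[1]            # assume a single city column for brevity
    col = f"{attr}_diff_12hrs"
    df[col] = df[city_col].diff(periods=12)  # hourly rows -> 12-hour difference
    frames.append(df[["datetime", col]])

merged = reduce(lambda a, b: a.merge(b, on="datetime"), frames)

labels = pd.read_csv("weather_desc.csv", parse_dates=["datetime"])  # hypothetical
labels["weather_desc_future_12hrs"] = labels["weather_desc"].shift(-12)
merged = merged.merge(labels[["datetime", "weather_desc_future_12hrs"]], on="datetime")
merged.to_csv("consolidated.csv", index=False)
```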
|
1.0
|
Consolidate the data - Since we will probably be presenting data to the net that we extrapolated from the original data (instead of actually using the data), it might be easier to create a new file containing everything we want with all of the cities' datas consolidated into one file.
Currently, there is a separate file for each attribute, and each attribute is divided into a separate column for each city.
Once Issue 4 is completed, we need to put together each difference column's attribute while making sure each attribute is associated with its correct date, hour, and weather condition prediction.
Our final file might be in the following format:
Columns:
humidity_diff_12hrs
pressure_diff_12hrs
temperature_diff_12hrs
wind_dir_diff_ 12hrs
wind_speed_diff_12hrs
weather_desc_future_12hrs
|
process
|
consolidate the data since we will probably be presenting data to the net that we extrapolated from the original data instead of actually using the data it might be easier to create a new file containing everything we want with all of the cities datas consolidated into one file currently there is a separate file for each attribute and each attribute is divided into a separate column for each city once issue is completed we need to put together each difference column s attribute while making sure each attribute is associated with its correct date hour and weather condition prediction our final file might be in the following format columns humidity diff pressure diff temperature diff wind dir diff wind speed diff weather desc future
| 1
|
149,166
| 23,440,267,444
|
IssuesEvent
|
2022-08-15 14:15:47
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
opened
|
Events near me: Build test-ready prototypes
|
Design Needs refining ⭐️ Public Websites
|
## Description
Using sketched concepts, build out prototypes that are ready for testing with editors.
## Acceptance Criteria
- [ ] At least 2 prototypes are built and ready to test
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [x] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
1.0
|
Events near me: Build test-ready prototypes - ## Description
Using sketched concepts, build out prototypes that are ready for testing with editors.
## Acceptance Criteria
- [ ] At least 2 prototypes are built and ready to test
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [x] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
non_process
|
events near me build test ready prototypes description using sketched concepts build out prototypes that are ready for testing with editors acceptance criteria at least prototypes are built and ready to test cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support
| 0
|
86,833
| 17,089,840,401
|
IssuesEvent
|
2021-07-08 15:59:33
|
eclipse/eclipse.jdt.ls
|
https://api.github.com/repos/eclipse/eclipse.jdt.ls
|
closed
|
'Create method' code action for method reference
|
code-actions enhancement upstream
|
In a code block like
```
list.map(this::cutPrefix);
```
it would be nice to have a suggested 'Create method cutPrefix' code action when the method is absent
|
1.0
|
'Create method' code action for method reference - In a code block like
```
list.map(this::cutPrefix);
```
it would be nice to have a suggested 'Create method cutPrefix' code action when the method is absent
|
non_process
|
create method code action for method reference in code block list map this cutprefix would be nice to have suggested code action create method cutprefix if method is absent
| 0
|
161,232
| 12,534,357,851
|
IssuesEvent
|
2020-06-04 19:16:55
|
Azure-Samples/Cognitive-Services-Voice-Assistant
|
https://api.github.com/repos/Azure-Samples/Cognitive-Services-Voice-Assistant
|
closed
|
Use common naming convention for variable across samples
|
Devices Console Client (C++) UWP Voice Assistant (C# UWP) Voice Assistant Test Tool (C# .NET Core) Windows Voice Assistant Client (C# WPF)
|
We should use a common convention for variables across all samples. We could use Win32: https://docs.microsoft.com/en-us/windows/win32/stg/coding-style-conventions
or come up with our own, but we should be consistent.
I imagine this will result in an issue for each client to follow the decided convention.
### This issue is for a: (mark with an `x`)
```
- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
```
|
1.0
|
Use common naming convention for variable across samples - We should use a common convention for variables across all samples. We could use Win32: https://docs.microsoft.com/en-us/windows/win32/stg/coding-style-conventions
or come up with our own, but we should be consistent.
I imagine this will result in an issue for each client to follow the decided convention.
### This issue is for a: (mark with an `x`)
```
- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
```
|
non_process
|
use common naming convention for variable across samples we should use a common convention for variables across all samples we could use or come up with our own but we should be consistent i imagine this will result in an issue for each client to follow the decided convention this issue is for a mark with an x bug report please search issues before submitting feature request documentation issue or request regression a behavior that used to work and stopped in a new release
| 0
|
285,272
| 8,756,769,632
|
IssuesEvent
|
2018-12-14 18:52:11
|
kotekan/kotekan
|
https://api.github.com/repos/kotekan/kotekan
|
opened
|
Double counting RFI data sample loss (over normalization)
|
bug high priority
|
In the `2018.12` deployment it was discovered that when packets are lost due to network/system load levels, and there was RFI flagged on the same time period in which the packet loss happened, then the number of lost samples is double counted. This means the amount of renormalization applied ends up being too high.
The fix should be fairly easy, but because of when it was discovered in the `2018.12` deployment, it didn't make it into that release. Instead for `2018.12` the RFI zeroing is disabled by default.
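A toy illustration of the accounting error (not the actual kotekan code): the lost-sample count should be the union of the packet-loss and RFI flags, not their sum, otherwise samples flagged by both are counted twice and the renormalization overshoots.
```python
import numpy as np

packet_lost = np.array([True, True, False, False, True])
rfi_flagged = np.array([True, False, True, False, True])

double_counted = packet_lost.sum() + rfi_flagged.sum()   # 5: overlap counted twice
correct = np.logical_or(packet_lost, rfi_flagged).sum()  # 4: each sample once
print(double_counted, correct)
```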
|
1.0
|
Double counting RFI data sample loss (over normalization) - In the `2018.12` deployment it was discovered that when packets are lost due to network/system load levels, and there was RFI flagged on the same time period in which the packet loss happened, then the number of lost samples is double counted. This means the amount of renormalization applied ends up being too high.
The fix should be fairly easy, but because of when it was discovered in the `2018.12` deployment, it didn't make it into that release. Instead for `2018.12` the RFI zeroing is disabled by default.
|
non_process
|
double counting rfi data sample loss over normalization in the deployment it was discovered that when packets are lost due to network system load levels and there was rfi flagged on the same time period in which the packet loss happened then the number of lost samples is double counted this means the amount of renormalization applied ends up being too high the fix should be fairly easy but because of when it was discovered in the deployment it didn t make it into that release instead for the rfi zeroing is disabled by default
| 0
|
120,648
| 25,836,680,307
|
IssuesEvent
|
2022-12-12 20:14:30
|
Clueless-Community/fintech-api
|
https://api.github.com/repos/Clueless-Community/fintech-api
|
closed
|
Create an endpoint to calculate Purchasing Power
|
issue:3 codepeak 22
|
### What bug or feature do you want to report?
Create an endpoint to calculate Purchasing Power of an entity.
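A hedged FastAPI sketch of what the endpoint could look like; the route name, parameter names, and the inflation-discount formula A / (1 + r)^n are my assumptions rather than the repo's agreed design:
```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/purchasing_power")
def purchasing_power(present_amount: float, annual_inflation_pct: float, years: int):
    # Real value of `present_amount` after `years` of inflation at the given rate.
    adjusted = present_amount / ((1 + annual_inflation_pct / 100) ** years)
    return {"purchasing_power": round(adjusted, 2)}
```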
|
1.0
|
Create an endpoint to calculate Purchasing Power - ### What bug or feature do you want to report?
Create an endpoint to calculate Purchasing Power of an entity.
|
non_process
|
create an endpoint to calculate purchasing power what bug or feature you wants to report create an endpoint to calculate purchasing power of an entity
| 0
|
12,683
| 15,048,569,885
|
IssuesEvent
|
2021-02-03 10:21:13
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[processing][feature] New algorithm "Flatten Relationship"
|
3.16 Automatic new feature Processing Alg
|
Original commit: https://github.com/qgis/QGIS/commit/d45cf980d8bfa35ea902444d6eec34043b337d33 by nyalldawson
This algorithm flattens all relationships for a vector layer,
exporting a single layer containing one master feature per
related feature. This master feature contains all the
attributes for the related features.
It's designed as a quick way to de-normalize a relation from
a project, e.g. to allow exporting to CSV
Sponsored by SMEC/SJ
|
1.0
|
[processing][feature] New algorithm "Flatten Relationship" - Original commit: https://github.com/qgis/QGIS/commit/d45cf980d8bfa35ea902444d6eec34043b337d33 by nyalldawson
This algorithm flattens all relationships for a vector layer,
exporting a single layer containing one master feature per
related feature. This master feature contains all the
attributes for the related features.
It's designed as a quick way to de-normalize a relation from
a project, e.g. to allow exporting to CSV
Sponsored by SMEC/SJ
|
process
|
new algorithm flatten relationship original commit by nyalldawson this algorithm flattens all relationships for a vector layer exporting a single layer containing one master feature per related feature this master feature contains all the attributes for the related features it s designed as a quick way to de normalize a relation from a project e g to allow exporting to csv sponsored by smec sj
| 1
|
18,241
| 24,313,060,962
|
IssuesEvent
|
2022-09-30 01:47:27
|
benthosdev/benthos
|
https://api.github.com/repos/benthosdev/benthos
|
closed
|
How to execute sql statement in a loop?
|
question processors
|
I want to loop over an array of SQL statements and run each against the database, like this:
```yaml
input:
generate:
mapping: |
root = {"sqldata":["delete from b.tax where month_of_salary='202208'","delete from a.tax where month_of_salary='202208'"]}
interval: 0s
count: 1
pipeline:
processors:
- while:
at_least_once: false
max_loops: 0
check: this.sqldata.length() > 0
processors:
- sql_raw:
driver: clickhouse
dsn: clickhouse://[username[:password]@][netloc][:port]/dbname[?param1=value1&...&paramN=valueN]
query: this.sqldata[i]
output:
drop: {}
```
the while processor doesn't seem appropriate here, and I can't get the index of the array. How can I loop through the array and execute each SQL statement?
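In Benthos itself, the more idiomatic route is usually to split the array into one message per element (for example with an `unarchive` processor in `json_array` mode) and feed those through a single `sql_raw`. As a plain-Python cross-check of the intent, iterating the array with the `clickhouse-driver` package looks like this; host and credentials are placeholders:
```python
from clickhouse_driver import Client

sqldata = [
    "delete from b.tax where month_of_salary='202208'",
    "delete from a.tax where month_of_salary='202208'",
]

client = Client(host="localhost")  # fill in port/credentials for your cluster
for stmt in sqldata:
    client.execute(stmt)
```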
|
1.0
|
How to execute sql statement in a loop? - I want to loop over an array of SQL statements and run each against the database, like this:
```yaml
input:
generate:
mapping: |
root = {"sqldata":["delete from b.tax where month_of_salary='202208'","delete from a.tax where month_of_salary='202208'"]}
interval: 0s
count: 1
pipeline:
processors:
- while:
at_least_once: false
max_loops: 0
check: this.sqldata.length() > 0
processors:
- sql_raw:
driver: clickhouse
dsn: clickhouse://[username[:password]@][netloc][:port]/dbname[?param1=value1&...&paramN=valueN]
query: this.sqldata[i]
output:
drop: {}
```
the while processor doesn't seem appropriate here, and I can't get the index of the array. How can I loop through the array and execute each SQL statement?
|
process
|
how to execute sql statement in a loop i want to loop insert data from array into database just like this yaml input generate mapping root sqldata interval count pipeline processors while at least once false max loops check this sqldata length processors sql raw driver clickhouse dsn clickhouse dbname query this sqldata output drop the processor of while is seem not appropriate and i can t get index of array how to loop through array and execute sql statement
| 1
|
21,189
| 28,180,676,806
|
IssuesEvent
|
2023-04-04 02:00:10
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Tue, 4 Apr 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### LivePose: Online 3D Reconstruction from Monocular Video with Dynamic Camera Poses
- **Authors:** Noah Stier, Baptiste Angles, Liang Yang, Yajie Yan, Alex Colburn, Ming Chuang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00054
- **Pdf link:** https://arxiv.org/pdf/2304.00054
- **Abstract**
Dense 3D reconstruction from RGB images traditionally assumes static camera pose estimates. This assumption has endured, even as recent works have increasingly focused on real-time methods for mobile devices. However, the assumption of one pose per image does not hold for online execution: poses from real-time SLAM are dynamic and may be updated following events such as bundle adjustment and loop closure. This has been addressed in the RGB-D setting, by de-integrating past views and re-integrating them with updated poses, but it remains largely untreated in the RGB-only setting. We formalize this problem to define the new task of online reconstruction from dynamically-posed images. To support further research, we introduce a dataset called LivePose containing the dynamic poses from a SLAM system running on ScanNet. We select three recent reconstruction systems and apply a framework based on de-integration to adapt each one to the dynamic-pose setting. In addition, we propose a novel, non-linear de-integration module that learns to remove stale scene content. We show that responding to pose updates is critical for high-quality reconstruction, and that our de-integration framework is an effective solution.
### Improving extreme weather events detection with light-weight neural networks
- **Authors:** Romain Lacombe (1,2), Hannah Grossman (1), Lucas Hendren (1), David Lüdeke (1) ((1) Stanford University, (2) Plume Labs)
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.00176
- **Pdf link:** https://arxiv.org/pdf/2304.00176
- **Abstract**
To advance automated detection of extreme weather events, which are increasing in frequency and intensity with climate change, we explore modifications to a novel light-weight Context Guided convolutional neural network architecture trained for semantic segmentation of tropical cyclones and atmospheric rivers in climate data. Our primary focus is on tropical cyclones, the most destructive weather events, for which current models show limited performance. We investigate feature engineering, data augmentation, learning rate modifications, alternative loss functions, and architectural changes. In contrast to previous approaches optimizing for intersection over union, we specifically seek to improve recall to penalize under-counting and prioritize identification of tropical cyclones. We report success through the use of weighted loss functions to counter class imbalance for these rare events. We conclude with directions for future research on extreme weather events detection, a crucial task for prediction, mitigation, and equitable adaptation to the impacts of climate change.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Ranking Regularization for Critical Rare Classes: Minimizing False Positives at a High True Positive Rate
- **Authors:** Mohammadi Kiarash, Zhao He, Mengyao Zhai, Frederick Tung
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2304.00049
- **Pdf link:** https://arxiv.org/pdf/2304.00049
- **Abstract**
In many real-world settings, the critical class is rare and a missed detection carries a disproportionately high cost. For example, tumors are rare and a false negative diagnosis could have severe consequences on treatment outcomes; fraudulent banking transactions are rare and an undetected occurrence could result in significant losses or legal penalties. In such contexts, systems are often operated at a high true positive rate, which may require tolerating high false positives. In this paper, we present a novel approach to address the challenge of minimizing false positives for systems that need to operate at a high true positive rate. We propose a ranking-based regularization (RankReg) approach that is easy to implement, and show empirically that it not only effectively reduces false positives, but also complements conventional imbalanced learning losses. With this novel technique in hand, we conduct a series of experiments on three broadly explored datasets (CIFAR-10&100 and Melanoma) and show that our approach lifts the previous state-of-the-art performance by notable margins.
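One plausible reading of the "ranking-based regularization" described above is a pairwise hinge penalty added to an ordinary classification loss; the sketch below follows that reading, and the exact RankReg formulation in the paper may differ.
```python
# Hedged sketch: penalize negatives whose scores rank too close to positives.
# Margin, weighting, and pairing scheme are assumptions, not RankReg itself.
import torch
import torch.nn.functional as F

def ranking_regularizer(scores: torch.Tensor, targets: torch.Tensor,
                        margin: float = 0.5) -> torch.Tensor:
    """Hinge penalty whenever a negative scores within `margin` of a positive."""
    pos = scores[targets == 1]
    neg = scores[targets == 0]
    if pos.numel() == 0 or neg.numel() == 0:
        return scores.new_zeros(())
    diffs = neg.unsqueeze(0) - pos.unsqueeze(1) + margin  # all pos/neg pairs
    return F.relu(diffs).mean()

scores = torch.randn(32)                   # model logits for one batch
targets = (torch.rand(32) < 0.1).float()   # rare critical class (~10% positives)
loss = F.binary_cross_entropy_with_logits(scores, targets) \
       + 0.1 * ranking_regularizer(torch.sigmoid(scores), targets)
```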
### kNN-Res: Residual Neural Network with kNN-Graph coherence for point cloud registration
- **Authors:** Muhammad S. Battikh, Dillon Hammill, Matthew Cook, Artem Lensky
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00050
- **Pdf link:** https://arxiv.org/pdf/2304.00050
- **Abstract**
In this paper, we present a residual neural network-based method for point set registration. Given a target and a reference point cloud, the goal is to learn a minimal transformation that aligns the target to the reference under the constraint that the topological structure of the target point cloud is preserved. Similar to coherent point drift (CPD), the registration (alignment) problem is viewed as the movement of data points sampled from a target distribution along a regularized displacement vector field. While the coherence constraint in CPD is stated in terms of local motion coherence, the proposed regularization term relies on a global smoothness constraint as a proxy for preserving local topology. This makes CPD less flexible when the deformation is locally rigid but globally non-rigid as in the case of multiple objects and articulate pose registration. To mitigate these issues, a Jacobian-based cost function along with geometric-aware statistical distances is proposed. The latter allows for measuring misalignment between the target and the reference. The justification for the kNN-graph preservation of target data, when the Jacobian cost is used, is also provided. Further, a stochastic approximation for high dimensional registration is introduced to make a high-dimensional alignment feasible. The proposed method is tested on high-dimensional Flow Cytometry to align two data distributions whilst preserving the kNN-graph of the data. The implementation of the proposed approach is available at https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/ under the MIT license.
### Learning the Distribution of Errors in Stereo Matching for Joint Disparity and Uncertainty Estimation
- **Authors:** Liyan Chen, Weihan Wang, Philippos Mordohai
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00152
- **Pdf link:** https://arxiv.org/pdf/2304.00152
- **Abstract**
We present a new loss function for joint disparity and uncertainty estimation in deep stereo matching. Our work is motivated by the need for precise uncertainty estimates and the observation that multi-task learning often leads to improved performance in all tasks. We show that this can be achieved by requiring the distribution of uncertainty to match the distribution of disparity errors via a KL divergence term in the network's loss function. A differentiable soft-histogramming technique is used to approximate the distributions so that they can be used in the loss. We experimentally assess the effectiveness of our approach and observe significant improvements in both disparity and uncertainty prediction on large datasets.
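The distribution-matching step can be sketched with Gaussian-kernel soft histograms and a KL term; the bin range, kernel width, and KL direction below are assumptions, not the paper's exact choices.
```python
# Hedged sketch: differentiable soft histograms + KL between the disparity-error
# distribution and the predicted-uncertainty distribution.
import torch

def soft_histogram(x: torch.Tensor, bins: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Differentiable histogram: Gaussian kernel mass around each bin center."""
    weights = torch.exp(-0.5 * ((x.unsqueeze(-1) - bins) / sigma) ** 2)
    hist = weights.sum(dim=0)
    return hist / hist.sum().clamp(min=1e-8)

bins = torch.linspace(0.0, 5.0, steps=32)
disparity_error = torch.rand(1000) * 5.0                 # |pred - gt| per pixel
uncertainty = (torch.rand(1000) * 5.0).requires_grad_()  # uncertainty head output

p = soft_histogram(disparity_error, bins)  # target distribution
q = soft_histogram(uncertainty, bins)      # predicted distribution
kl = torch.sum(p * (torch.log(p + 1e-8) - torch.log(q + 1e-8)))
kl.backward()  # gradients flow back into the uncertainty estimates
```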
### Learning Anchor Transformations for 3D Garment Animation
- **Authors:** Fang Zhao, Zekun Li, Shaoli Huang, Junwu Weng, Tianfei Zhou, Guo-Sen Xie, Jue Wang, Ying Shan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00761
- **Pdf link:** https://arxiv.org/pdf/2304.00761
- **Abstract**
This paper proposes an anchor-based deformation model, namely AnchorDEF, to predict 3D garment animation from a body motion sequence. It deforms a garment mesh template by a mixture of rigid transformations with extra nonlinear displacements. A set of anchors around the mesh surface is introduced to guide the learning of rigid transformation matrices. Once the anchor transformations are found, per-vertex nonlinear displacements of the garment template can be regressed in a canonical space, which reduces the complexity of deformation space learning. By explicitly constraining the transformed anchors to satisfy the consistencies of position, normal and direction, the physical meaning of learned anchor transformations in space is guaranteed for better generalization. Furthermore, an adaptive anchor updating is proposed to optimize the anchor position by being aware of local mesh topology for learning representative anchor transformations. Qualitative and quantitative experiments on different types of garments demonstrate that AnchorDEF achieves the state-of-the-art performance on 3D garment deformation prediction in motion, especially for loose-fitting garments.
### ViT-DAE: Transformer-driven Diffusion Autoencoder for Histopathology Image Analysis
- **Authors:** Xuan Xu, Saarthak Kapse, Rajarsi Gupta, Prateek Prasanna
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.01053
- **Pdf link:** https://arxiv.org/pdf/2304.01053
- **Abstract**
Generative AI has received substantial attention in recent years due to its ability to synthesize data that closely resembles the original data source. While Generative Adversarial Networks (GANs) have provided innovative approaches for histopathological image analysis, they suffer from limitations such as mode collapse and overfitting in the discriminator. Recently, Denoising Diffusion models have demonstrated promising results in computer vision. These models exhibit superior stability during training, better distribution coverage, and produce high-quality diverse images. Additionally, they display a high degree of resilience to noise and perturbations, making them well-suited for use in digital pathology, where images commonly contain artifacts and exhibit significant variations in staining. In this paper, we present a novel approach, namely ViT-DAE, which integrates vision transformers (ViT) and diffusion autoencoders for high-quality histopathology image synthesis. This marks the first time that ViT has been introduced to diffusion autoencoders in computational pathology, allowing the model to better capture the complex and intricate details of histopathology images. We demonstrate the effectiveness of ViT-DAE on three publicly available datasets. Our approach outperforms recent GAN-based and vanilla DAE methods in generating realistic images.
### Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement
- **Authors:** Xiangyang Zhu, Renrui Zhang, Bowei He, Aojun Zhou, Dong Wang, Bin Zhao, Peng Gao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2304.01195
- **Pdf link:** https://arxiv.org/pdf/2304.01195
- **Abstract**
The popularity of Contrastive Language-Image Pre-training (CLIP) has propelled its application to diverse downstream vision tasks. To improve its capacity on downstream tasks, few-shot learning has become a widely-adopted technique. However, existing methods either exhibit limited performance or suffer from excessive learnable parameters. In this paper, we propose APE, an Adaptive Prior rEfinement method for CLIP's pre-trained knowledge, which achieves superior accuracy with high computational efficiency. Via a prior refinement module, we analyze the inter-class disparity in the downstream data and decouple the domain-specific knowledge from the CLIP-extracted cache model. On top of that, we introduce two model variants, a training-free APE and a training-required APE-T. We explore the trilateral affinities between the test image, prior cache model, and textual representations, and only enable a lightweight category-residual module to be trained. For the average accuracy over 11 benchmarks, both APE and APE-T attain state-of-the-art and respectively outperform the second-best by +1.59% and +1.99% under 16 shots with x30 less learnable parameters.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Progressive Channel-Shrinking Network
- **Authors:** Jianhong Pan, Siyuan Yang, Lin Geng Foo, Qiuhong Ke, Hossein Rahmani, Zhipeng Fan, Jun Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00280
- **Pdf link:** https://arxiv.org/pdf/2304.00280
- **Abstract**
Currently, salience-based channel pruning makes continuous breakthroughs in network compression. In the realization, the salience mechanism is used as a metric of channel salience to guide pruning. Therefore, salience-based channel pruning can dynamically adjust the channel width at run-time, which provides a flexible pruning scheme. However, there are two problems emerging: a gating function is often needed to truncate the specific salience entries to zero, which destabilizes the forward propagation; dynamic architecture brings more cost for indexing in inference which bottlenecks the inference speed. In this paper, we propose a Progressive Channel-Shrinking (PCS) method to compress the selected salience entries at run-time instead of roughly approximating them to zero. We also propose a Running Shrinking Policy to provide a testing-static pruning scheme that can reduce the memory access cost for filter indexing. We evaluate our method on ImageNet and CIFAR10 datasets over two prevalent networks: ResNet and VGG, and demonstrate that our PCS outperforms all baselines and achieves state-of-the-art in terms of compression-performance tradeoff. Moreover, we observe a significant and practical acceleration of inference.
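A toy contrast between conventional hard gating and the progressive shrinking idea; the decay schedule and shrinking operator here are illustrative and do not reproduce the paper's PCS definition.
```python
# Hedged sketch: shrink low-salience channel scales instead of zeroing them.
import torch

def hard_gate(salience: torch.Tensor, threshold: float) -> torch.Tensor:
    """Conventional gating: truncate low-salience entries to exactly zero."""
    return torch.where(salience >= threshold, salience, torch.zeros_like(salience))

def progressive_shrink(salience: torch.Tensor, threshold: float,
                       factor: float) -> torch.Tensor:
    """Compress, rather than zero, the low-salience entries; as `factor`
    decays toward 0 over training this approaches the hard gate smoothly."""
    return torch.where(salience >= threshold, salience, salience * factor)

salience = torch.rand(16)  # per-channel salience scores
for factor in (1.0, 0.5, 0.25, 0.0):
    shrunk = progressive_shrink(salience, threshold=0.5, factor=factor)
    print(factor, int((shrunk == 0).sum()), "channels fully off")
```
The smooth decay avoids the abrupt truncation to zero that, per the abstract, destabilizes forward propagation.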
### Accuracy Improvement of Object Detection in VVC Coded Video Using YOLO-v7 Features
- **Authors:** Takahiro Shindo, Taiju Watanabe, Kein Yamada, Hiroshi Watanabe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2304.00689
- **Pdf link:** https://arxiv.org/pdf/2304.00689
- **Abstract**
With advances in image recognition technology based on deep learning, automatic video analysis by Artificial Intelligence is becoming more widespread. As the amount of video used for image recognition increases, efficient compression methods for such video data are necessary. In general, when the image quality deteriorates due to image encoding, the image recognition accuracy also falls. Therefore, in this paper, we propose a neural-network-based approach to improve image recognition accuracy, especially the object detection accuracy by applying post-processing to the encoded video. Versatile Video Coding (VVC) will be used for the video compression method, since it is the latest video coding method with the best encoding performance. The neural network is trained using the features of YOLO-v7, the latest object detection model. By using VVC as the video coding method and YOLO-v7 as the detection model, high object detection accuracy is achieved even at low bit rates. Experimental results show that the combination of the proposed method and VVC achieves better coding performance than regular VVC in object detection accuracy.
## Keyword: RAW
### Vision Transformers with Mixed-Resolution Tokenization
- **Authors:** Tomer Ronen, Omer Levy, Avram Golbert
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00287
- **Pdf link:** https://arxiv.org/pdf/2304.00287
- **Abstract**
Vision Transformer models process input images by dividing them into a spatially regular grid of equal-size patches. Conversely, Transformers were originally introduced over natural language sequences, where each token represents a subword - a chunk of raw data of arbitrary size. In this work, we apply this approach to Vision Transformers by introducing a novel image tokenization scheme, replacing the standard uniform grid with a mixed-resolution sequence of tokens, where each token represents a patch of arbitrary size. Using the Quadtree algorithm and a novel saliency scorer, we construct a patch mosaic where low-saliency areas of the image are processed in low resolution, routing more of the model's capacity to important image regions. Using the same architecture as vanilla ViTs, our Quadformer models achieve substantial accuracy gains on image classification when controlling for the computational budget. Code and models are publicly available at https://github.com/TomerRonen34/mixed-resolution-vit .
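A toy version of the Quadtree tokenization step, using plain pixel variance as a stand-in for the paper's learned saliency scorer; patch sizes and the threshold are arbitrary.
```python
# Hedged sketch: recursively split 'salient' patches so busy image regions
# end up with finer (more) tokens than flat regions.
import numpy as np

def quadtree_patches(img: np.ndarray, x: int, y: int, size: int,
                     min_size: int, thresh: float, out: list) -> None:
    """Emit (x, y, size) tokens; split a patch in four while it is salient."""
    patch = img[y:y + size, x:x + size]
    if size > min_size and patch.var() > thresh:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                quadtree_patches(img, x + dx, y + dy, half, min_size, thresh, out)
    else:
        out.append((x, y, size))  # one token, whatever its resolution

img = np.random.rand(64, 64)
tokens: list = []
quadtree_patches(img, 0, 0, 64, min_size=8, thresh=0.08, out=tokens)
print(len(tokens), "mixed-resolution tokens")
```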
### OTS: A One-shot Learning Approach for Text Spotting in Historical Manuscripts
- **Authors:** Wen-Bo Hu, Hong-Jian Zhan, Cong Liu, Bing Yin, Yue Lu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.00746
- **Pdf link:** https://arxiv.org/pdf/2304.00746
- **Abstract**
Historical manuscript processing poses challenges like limited annotated training data and novel class emergence. To address this, we propose a novel One-shot learning-based Text Spotting (OTS) approach that accurately and reliably spots novel characters with just one annotated support sample. Drawing inspiration from cognitive research, we introduce a spatial alignment module that finds, focuses on, and learns the most discriminative spatial regions in the query image based on one support image. Especially, since the low-resource spotting task often faces the problem of example imbalance, we propose a novel loss function called torus loss which can make the embedding space of distance metric more discriminative. Our approach is highly efficient and requires only a few training samples while exhibiting the remarkable ability to handle novel characters, and symbols. To enhance dataset diversity, a new manuscript dataset that contains the ancient Dongba hieroglyphics (DBH) is created. We conduct experiments on publicly available VML-HD, TKH, NC datasets, and the new proposed DBH dataset. The experimental results demonstrate that OTS outperforms the state-of-the-art methods in one-shot text spotting. Overall, our proposed method offers promising applications in the field of text spotting in historical manuscripts.
### Semi-Automated Computer Vision based Tracking of Multiple Industrial Entities -- A Framework and Dataset Creation Approach
- **Authors:** Jérôme Rutinowski, Hazem Youssef, Sven Franke, Irfan Fachrudin Priyanta, Frederik Polachowski, Moritz Roidl, Christopher Reining
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00950
- **Pdf link:** https://arxiv.org/pdf/2304.00950
- **Abstract**
This contribution presents the TOMIE framework (Tracking Of Multiple Industrial Entities), a framework for the continuous tracking of industrial entities (e.g., pallets, crates, barrels) over a network of, in this example, six RGB cameras. This framework makes use of multiple sensors, data pipelines and data annotation procedures, and is described in detail in this contribution. With the vision of a fully automated tracking system for industrial entities in mind, it enables researchers to efficiently capture high-quality data in an industrial setting. Using this framework, an image dataset, the TOMIE dataset, is created, which at the same time is used to gauge the framework's validity. This dataset contains annotation files for 112,860 frames and 640,936 entity instances that are captured from a set of six cameras that perceive a large indoor space. This dataset out-scales comparable datasets by a factor of four and is made up of scenarios drawn from industrial applications from the sector of warehousing. Three tracking algorithms, namely ByteTrack, Bot-Sort and SiamMOT, are applied to this dataset, serving as a proof-of-concept and providing tracking results that are comparable to the state of the art.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Tue, 4 Apr 23 - ## Keyword: events
### LivePose: Online 3D Reconstruction from Monocular Video with Dynamic Camera Poses
- **Authors:** Noah Stier, Baptiste Angles, Liang Yang, Yajie Yan, Alex Colburn, Ming Chuang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00054
- **Pdf link:** https://arxiv.org/pdf/2304.00054
- **Abstract**
Dense 3D reconstruction from RGB images traditionally assumes static camera pose estimates. This assumption has endured, even as recent works have increasingly focused on real-time methods for mobile devices. However, the assumption of one pose per image does not hold for online execution: poses from real-time SLAM are dynamic and may be updated following events such as bundle adjustment and loop closure. This has been addressed in the RGB-D setting, by de-integrating past views and re-integrating them with updated poses, but it remains largely untreated in the RGB-only setting. We formalize this problem to define the new task of online reconstruction from dynamically-posed images. To support further research, we introduce a dataset called LivePose containing the dynamic poses from a SLAM system running on ScanNet. We select three recent reconstruction systems and apply a framework based on de-integration to adapt each one to the dynamic-pose setting. In addition, we propose a novel, non-linear de-integration module that learns to remove stale scene content. We show that responding to pose updates is critical for high-quality reconstruction, and that our de-integration framework is an effective solution.
### Improving extreme weather events detection with light-weight neural networks
- **Authors:** Romain Lacombe (1,2), Hannah Grossman (1), Lucas Hendren (1), David Lüdeke (1) ((1) Stanford University, (2) Plume Labs)
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.00176
- **Pdf link:** https://arxiv.org/pdf/2304.00176
- **Abstract**
To advance automated detection of extreme weather events, which are increasing in frequency and intensity with climate change, we explore modifications to a novel light-weight Context Guided convolutional neural network architecture trained for semantic segmentation of tropical cyclones and atmospheric rivers in climate data. Our primary focus is on tropical cyclones, the most destructive weather events, for which current models show limited performance. We investigate feature engineering, data augmentation, learning rate modifications, alternative loss functions, and architectural changes. In contrast to previous approaches optimizing for intersection over union, we specifically seek to improve recall to penalize under-counting and prioritize identification of tropical cyclones. We report success through the use of weighted loss functions to counter class imbalance for these rare events. We conclude with directions for future research on extreme weather events detection, a crucial task for prediction, mitigation, and equitable adaptation to the impacts of climate change.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Ranking Regularization for Critical Rare Classes: Minimizing False Positives at a High True Positive Rate
- **Authors:** Mohammadi Kiarash, Zhao He, Mengyao Zhai, Frederick Tung
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2304.00049
- **Pdf link:** https://arxiv.org/pdf/2304.00049
- **Abstract**
In many real-world settings, the critical class is rare and a missed detection carries a disproportionately high cost. For example, tumors are rare and a false negative diagnosis could have severe consequences on treatment outcomes; fraudulent banking transactions are rare and an undetected occurrence could result in significant losses or legal penalties. In such contexts, systems are often operated at a high true positive rate, which may require tolerating high false positives. In this paper, we present a novel approach to address the challenge of minimizing false positives for systems that need to operate at a high true positive rate. We propose a ranking-based regularization (RankReg) approach that is easy to implement, and show empirically that it not only effectively reduces false positives, but also complements conventional imbalanced learning losses. With this novel technique in hand, we conduct a series of experiments on three broadly explored datasets (CIFAR-10&100 and Melanoma) and show that our approach lifts the previous state-of-the-art performance by notable margins.
### kNN-Res: Residual Neural Network with kNN-Graph coherence for point cloud registration
- **Authors:** Muhammad S. Battikh, Dillon Hammill, Matthew Cook, Artem Lensky
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00050
- **Pdf link:** https://arxiv.org/pdf/2304.00050
- **Abstract**
In this paper, we present a residual neural network-based method for point set registration. Given a target and a reference point cloud, the goal is to learn a minimal transformation that aligns the target to the reference under the constraint that the topological structure of the target point cloud is preserved. Similar to coherent point drift (CPD), the registration (alignment) problem is viewed as the movement of data points sampled from a target distribution along a regularized displacement vector field. While the coherence constraint in CPD is stated in terms of local motion coherence, the proposed regularization term relies on a global smoothness constraint as a proxy for preserving local topology. This makes CPD less flexible when the deformation is locally rigid but globally non-rigid as in the case of multiple objects and articulate pose registration. To mitigate these issues, a Jacobian-based cost function along with geometric-aware statistical distances is proposed. The latter allows for measuring misalignment between the target and the reference. The justification for the kNN-graph preservation of target data, when the Jacobian cost is used, is also provided. Further, a stochastic approximation for high dimensional registration is introduced to make a high-dimensional alignment feasible. The proposed method is tested on high-dimensional Flow Cytometry to align two data distributions whilst preserving the kNN-graph of the data. The implementation of the proposed approach is available at https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/ under the MIT license.
### Learning the Distribution of Errors in Stereo Matching for Joint Disparity and Uncertainty Estimation
- **Authors:** Liyan Chen, Weihan Wang, Philippos Mordohai
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00152
- **Pdf link:** https://arxiv.org/pdf/2304.00152
- **Abstract**
We present a new loss function for joint disparity and uncertainty estimation in deep stereo matching. Our work is motivated by the need for precise uncertainty estimates and the observation that multi-task learning often leads to improved performance in all tasks. We show that this can be achieved by requiring the distribution of uncertainty to match the distribution of disparity errors via a KL divergence term in the network's loss function. A differentiable soft-histogramming technique is used to approximate the distributions so that they can be used in the loss. We experimentally assess the effectiveness of our approach and observe significant improvements in both disparity and uncertainty prediction on large datasets.
### Learning Anchor Transformations for 3D Garment Animation
- **Authors:** Fang Zhao, Zekun Li, Shaoli Huang, Junwu Weng, Tianfei Zhou, Guo-Sen Xie, Jue Wang, Ying Shan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00761
- **Pdf link:** https://arxiv.org/pdf/2304.00761
- **Abstract**
This paper proposes an anchor-based deformation model, namely AnchorDEF, to predict 3D garment animation from a body motion sequence. It deforms a garment mesh template by a mixture of rigid transformations with extra nonlinear displacements. A set of anchors around the mesh surface is introduced to guide the learning of rigid transformation matrices. Once the anchor transformations are found, per-vertex nonlinear displacements of the garment template can be regressed in a canonical space, which reduces the complexity of deformation space learning. By explicitly constraining the transformed anchors to satisfy the consistencies of position, normal and direction, the physical meaning of learned anchor transformations in space is guaranteed for better generalization. Furthermore, an adaptive anchor updating is proposed to optimize the anchor position by being aware of local mesh topology for learning representative anchor transformations. Qualitative and quantitative experiments on different types of garments demonstrate that AnchorDEF achieves the state-of-the-art performance on 3D garment deformation prediction in motion, especially for loose-fitting garments.
### ViT-DAE: Transformer-driven Diffusion Autoencoder for Histopathology Image Analysis
- **Authors:** Xuan Xu, Saarthak Kapse, Rajarsi Gupta, Prateek Prasanna
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.01053
- **Pdf link:** https://arxiv.org/pdf/2304.01053
- **Abstract**
Generative AI has received substantial attention in recent years due to its ability to synthesize data that closely resembles the original data source. While Generative Adversarial Networks (GANs) have provided innovative approaches for histopathological image analysis, they suffer from limitations such as mode collapse and overfitting in the discriminator. Recently, Denoising Diffusion models have demonstrated promising results in computer vision. These models exhibit superior stability during training, better distribution coverage, and produce high-quality diverse images. Additionally, they display a high degree of resilience to noise and perturbations, making them well-suited for use in digital pathology, where images commonly contain artifacts and exhibit significant variations in staining. In this paper, we present a novel approach, namely ViT-DAE, which integrates vision transformers (ViT) and diffusion autoencoders for high-quality histopathology image synthesis. This marks the first time that ViT has been introduced to diffusion autoencoders in computational pathology, allowing the model to better capture the complex and intricate details of histopathology images. We demonstrate the effectiveness of ViT-DAE on three publicly available datasets. Our approach outperforms recent GAN-based and vanilla DAE methods in generating realistic images.
### Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement
- **Authors:** Xiangyang Zhu, Renrui Zhang, Bowei He, Aojun Zhou, Dong Wang, Bin Zhao, Peng Gao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2304.01195
- **Pdf link:** https://arxiv.org/pdf/2304.01195
- **Abstract**
The popularity of Contrastive Language-Image Pre-training (CLIP) has propelled its application to diverse downstream vision tasks. To improve its capacity on downstream tasks, few-shot learning has become a widely-adopted technique. However, existing methods either exhibit limited performance or suffer from excessive learnable parameters. In this paper, we propose APE, an Adaptive Prior rEfinement method for CLIP's pre-trained knowledge, which achieves superior accuracy with high computational efficiency. Via a prior refinement module, we analyze the inter-class disparity in the downstream data and decouple the domain-specific knowledge from the CLIP-extracted cache model. On top of that, we introduce two model variants, a training-free APE and a training-required APE-T. We explore the trilateral affinities between the test image, prior cache model, and textual representations, and only enable a lightweight category-residual module to be trained. For the average accuracy over 11 benchmarks, both APE and APE-T attain state-of-the-art and respectively outperform the second-best by +1.59% and +1.99% under 16 shots with x30 less learnable parameters.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Progressive Channel-Shrinking Network
- **Authors:** Jianhong Pan, Siyuan Yang, Lin Geng Foo, Qiuhong Ke, Hossein Rahmani, Zhipeng Fan, Jun Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00280
- **Pdf link:** https://arxiv.org/pdf/2304.00280
- **Abstract**
Currently, salience-based channel pruning makes continuous breakthroughs in network compression. In the realization, the salience mechanism is used as a metric of channel salience to guide pruning. Therefore, salience-based channel pruning can dynamically adjust the channel width at run-time, which provides a flexible pruning scheme. However, there are two problems emerging: a gating function is often needed to truncate the specific salience entries to zero, which destabilizes the forward propagation; dynamic architecture brings more cost for indexing in inference which bottlenecks the inference speed. In this paper, we propose a Progressive Channel-Shrinking (PCS) method to compress the selected salience entries at run-time instead of roughly approximating them to zero. We also propose a Running Shrinking Policy to provide a testing-static pruning scheme that can reduce the memory access cost for filter indexing. We evaluate our method on ImageNet and CIFAR10 datasets over two prevalent networks: ResNet and VGG, and demonstrate that our PCS outperforms all baselines and achieves state-of-the-art in terms of compression-performance tradeoff. Moreover, we observe a significant and practical acceleration of inference.
### Accuracy Improvement of Object Detection in VVC Coded Video Using YOLO-v7 Features
- **Authors:** Takahiro Shindo, Taiju Watanabe, Kein Yamada, Hiroshi Watanabe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2304.00689
- **Pdf link:** https://arxiv.org/pdf/2304.00689
- **Abstract**
With advances in image recognition technology based on deep learning, automatic video analysis by Artificial Intelligence is becoming more widespread. As the amount of video used for image recognition increases, efficient compression methods for such video data are necessary. In general, when the image quality deteriorates due to image encoding, the image recognition accuracy also falls. Therefore, in this paper, we propose a neural-network-based approach to improve image recognition accuracy, especially the object detection accuracy by applying post-processing to the encoded video. Versatile Video Coding (VVC) will be used for the video compression method, since it is the latest video coding method with the best encoding performance. The neural network is trained using the features of YOLO-v7, the latest object detection model. By using VVC as the video coding method and YOLO-v7 as the detection model, high object detection accuracy is achieved even at low bit rates. Experimental results show that the combination of the proposed method and VVC achieves better coding performance than regular VVC in object detection accuracy.
## Keyword: RAW
### Vision Transformers with Mixed-Resolution Tokenization
- **Authors:** Tomer Ronen, Omer Levy, Avram Golbert
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00287
- **Pdf link:** https://arxiv.org/pdf/2304.00287
- **Abstract**
Vision Transformer models process input images by dividing them into a spatially regular grid of equal-size patches. Conversely, Transformers were originally introduced over natural language sequences, where each token represents a subword - a chunk of raw data of arbitrary size. In this work, we apply this approach to Vision Transformers by introducing a novel image tokenization scheme, replacing the standard uniform grid with a mixed-resolution sequence of tokens, where each token represents a patch of arbitrary size. Using the Quadtree algorithm and a novel saliency scorer, we construct a patch mosaic where low-saliency areas of the image are processed in low resolution, routing more of the model's capacity to important image regions. Using the same architecture as vanilla ViTs, our Quadformer models achieve substantial accuracy gains on image classification when controlling for the computational budget. Code and models are publicly available at https://github.com/TomerRonen34/mixed-resolution-vit .
### OTS: A One-shot Learning Approach for Text Spotting in Historical Manuscripts
- **Authors:** Wen-Bo Hu, Hong-Jian Zhan, Cong Liu, Bing Yin, Yue Lu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.00746
- **Pdf link:** https://arxiv.org/pdf/2304.00746
- **Abstract**
Historical manuscript processing poses challenges like limited annotated training data and novel class emergence. To address this, we propose a novel One-shot learning-based Text Spotting (OTS) approach that accurately and reliably spots novel characters with just one annotated support sample. Drawing inspiration from cognitive research, we introduce a spatial alignment module that finds, focuses on, and learns the most discriminative spatial regions in the query image based on one support image. Especially, since the low-resource spotting task often faces the problem of example imbalance, we propose a novel loss function called torus loss which can make the embedding space of distance metric more discriminative. Our approach is highly efficient and requires only a few training samples while exhibiting the remarkable ability to handle novel characters, and symbols. To enhance dataset diversity, a new manuscript dataset that contains the ancient Dongba hieroglyphics (DBH) is created. We conduct experiments on publicly available VML-HD, TKH, NC datasets, and the new proposed DBH dataset. The experimental results demonstrate that OTS outperforms the state-of-the-art methods in one-shot text spotting. Overall, our proposed method offers promising applications in the field of text spotting in historical manuscripts.
### Semi-Automated Computer Vision based Tracking of Multiple Industrial Entities -- A Framework and Dataset Creation Approach
- **Authors:** Jérôme Rutinowski, Hazem Youssef, Sven Franke, Irfan Fachrudin Priyanta, Frederik Polachowski, Moritz Roidl, Christopher Reining
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.00950
- **Pdf link:** https://arxiv.org/pdf/2304.00950
- **Abstract**
This contribution presents the TOMIE framework (Tracking Of Multiple Industrial Entities), a framework for the continuous tracking of industrial entities (e.g., pallets, crates, barrels) over a network of, in this example, six RGB cameras. This framework makes use of multiple sensors, data pipelines and data annotation procedures, and is described in detail in this contribution. With the vision of a fully automated tracking system for industrial entities in mind, it enables researchers to efficiently capture high-quality data in an industrial setting. Using this framework, an image dataset, the TOMIE dataset, is created, which at the same time is used to gauge the framework's validity. This dataset contains annotation files for 112,860 frames and 640,936 entity instances that are captured from a set of six cameras that perceive a large indoor space. This dataset out-scales comparable datasets by a factor of four and is made up of scenarios drawn from industrial applications from the sector of warehousing. Three tracking algorithms, namely ByteTrack, Bot-Sort and SiamMOT, are applied to this dataset, serving as a proof-of-concept and providing tracking results that are comparable to the state of the art.
## Keyword: raw image
There is no result
|
process
|
new submissions for tue apr keyword events livepose online reconstruction from monocular video with dynamic camera poses authors noah stier baptiste angles liang yang yajie yan alex colburn ming chuang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract dense reconstruction from rgb images traditionally assumes static camera pose estimates this assumption has endured even as recent works have increasingly focused on real time methods for mobile devices however the assumption of one pose per image does not hold for online execution poses from real time slam are dynamic and may be updated following events such as bundle adjustment and loop closure this has been addressed in the rgb d setting by de integrating past views and re integrating them with updated poses but it remains largely untreated in the rgb only setting we formalize this problem to define the new task of online reconstruction from dynamically posed images to support further research we introduce a dataset called livepose containing the dynamic poses from a slam system running on scannet we select three recent reconstruction systems and apply a framework based on de integration to adapt each one to the dynamic pose setting in addition we propose a novel non linear de integration module that learns to remove stale scene content we show that responding to pose updates is critical for high quality reconstruction and that our de integration framework is an effective solution improving extreme weather events detection with light weight neural networks authors romain lacombe hannah grossman lucas hendren david lüdeke stanford university plume labs subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract to advance automated detection of extreme weather events which are increasing in frequency and intensity with climate change we explore modifications to a novel light weight context guided convolutional neural network architecture trained for semantic segmentation of tropical cyclones and atmospheric rivers in climate data our primary focus is on tropical cyclones the most destructive weather events for which current models show limited performance we investigate feature engineering data augmentation learning rate modifications alternative loss functions and architectural changes in contrast to previous approaches optimizing for intersection over union we specifically seek to improve recall to penalize under counting and prioritize identification of tropical cyclones we report success through the use of weighted loss functions to counter class imbalance for these rare events we conclude with directions for future research on extreme weather events detection a crucial task for prediction mitigation and equitable adaptation to the impacts of climate change keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp ranking regularization for critical rare classes minimizing false positives at a high true positive rate authors mohammadi kiarash zhao he mengyao zhai frederick tung subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract in many real world settings the critical class is rare and a missed detection carries a disproportionately high cost for example tumors are rare and a false negative diagnosis could have severe consequences on treatment 
outcomes fraudulent banking transactions are rare and an undetected occurrence could result in significant losses or legal penalties in such contexts systems are often operated at a high true positive rate which may require tolerating high false positives in this paper we present a novel approach to address the challenge of minimizing false positives for systems that need to operate at a high true positive rate we propose a ranking based regularization rankreg approach that is easy to implement and show empirically that it not only effectively reduces false positives but also complements conventional imbalanced learning losses with this novel technique in hand we conduct a series of experiments on three broadly explored datasets cifar and melanoma and show that our approach lifts the previous state of the art performance by notable margins knn res residual neural network with knn graph coherence for point cloud registration authors muhammad s battikh dillon hammill matthew cook artem lensky subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in this paper we present a residual neural network based method for point set registration given a target and a reference point cloud the goal is to learn a minimal transformation that aligns the target to the reference under the constraint that the topological structure of the target point cloud is preserved similar to coherent point drift cpd the registration alignment problem is viewed as the movement of data points sampled from a target distribution along a regularized displacement vector field while the coherence constraint in cpd is stated in terms of local motion coherence the proposed regularization term relies on a global smoothness constraint as a proxy for preserving local topology this makes cpd less flexible when the deformation is locally rigid but globally non rigid as in the case of multiple objects and articulate pose registration to mitigate these issues a jacobian based cost function along with geometric aware statistical distances is proposed the latter allows for measuring misalignment between the target and the reference the justification for the knn graph preservation of target data when the jacobian cost is used is also provided further a stochastic approximation for high dimensional registration is introduced to make a high dimensional alignment feasible the proposed method is tested on high dimensional flow cytometry to align two data distributions whilst preserving the knn graph of the data the implementation of the proposed approach is available at under the mit license learning the distribution of errors in stereo matching for joint disparity and uncertainty estimation authors liyan chen weihan wang philippos mordohai subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we present a new loss function for joint disparity and uncertainty estimation in deep stereo matching our work is motivated by the need for precise uncertainty estimates and the observation that multi task learning often leads to improved performance in all tasks we show that this can be achieved by requiring the distribution of uncertainty to match the distribution of disparity errors via a kl divergence term in the network s loss function a differentiable soft histogramming technique is used to approximate the distributions so that they can be used in the loss we experimentally assess the effectiveness of our approach and observe significant improvements in both disparity and uncertainty 
prediction on large datasets learning anchor transformations for garment animation authors fang zhao zekun li shaoli huang junwu weng tianfei zhou guo sen xie jue wang ying shan subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this paper proposes an anchor based deformation model namely anchordef to predict garment animation from a body motion sequence it deforms a garment mesh template by a mixture of rigid transformations with extra nonlinear displacements a set of anchors around the mesh surface is introduced to guide the learning of rigid transformation matrices once the anchor transformations are found per vertex nonlinear displacements of the garment template can be regressed in a canonical space which reduces the complexity of deformation space learning by explicitly constraining the transformed anchors to satisfy the consistencies of position normal and direction the physical meaning of learned anchor transformations in space is guaranteed for better generalization furthermore an adaptive anchor updating is proposed to optimize the anchor position by being aware of local mesh topology for learning representative anchor transformations qualitative and quantitative experiments on different types of garments demonstrate that anchordef achieves the state of the art performance on garment deformation prediction in motion especially for loose fitting garments vit dae transformer driven diffusion autoencoder for histopathology image analysis authors xuan xu saarthak kapse rajarsi gupta prateek prasanna subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract generative ai has received substantial attention in recent years due to its ability to synthesize data that closely resembles the original data source while generative adversarial networks gans have provided innovative approaches for histopathological image analysis they suffer from limitations such as mode collapse and overfitting in discriminator recently denoising diffusion models have demonstrated promising results in computer vision these models exhibit superior stability during training better distribution coverage and produce high quality diverse images additionally they display a high degree of resilience to noise and perturbations making them well suited for use in digital pathology where images commonly contain artifacts and exhibit significant variations in staining in this paper we present a novel approach namely vit dae which integrates vision transformers vit and diffusion autoencoders for high quality histopathology image synthesis this marks the first time that vit has been introduced to diffusion autoencoders in computational pathology allowing the model to better capture the complex and intricate details of histopathology images we demonstrate the effectiveness of vit dae on three publicly available datasets our approach outperforms recent gan based and vanilla dae methods in generating realistic images not all features matter enhancing few shot clip with adaptive prior refinement authors xiangyang zhu renrui zhang bowei he aojun zhou dong wang bin zhao peng gao subjects computer vision and pattern recognition cs cv artificial intelligence cs ai multimedia cs mm arxiv link pdf link abstract the popularity of contrastive language image pre training clip has propelled its application to diverse downstream vision tasks to improve its capacity on downstream tasks few shot learning has become a widely adopted technique however existing methods either exhibit 
limited performance or suffer from excessive learnable parameters in this paper we propose ape an adaptive prior refinement method for clip s pre trained knowledge which achieves superior accuracy with high computational efficiency via a prior refinement module we analyze the inter class disparity in the downstream data and decouple the domain specific knowledge from the clip extracted cache model on top of that we introduce two model variants a training free ape and a training required ape t we explore the trilateral affinities between the test image prior cache model and textual representations and only enable a lightweight category residual module to be trained for the average accuracy over benchmarks both ape and ape t attain state of the art and respectively outperform the second best by and under shots with less learnable parameters keyword image signal processing there is no result keyword image signal process there is no result keyword compression progressive channel shrinking network authors jianhong pan siyuan yang lin geng foo qiuhong ke hossein rahmani zhipeng fan jun liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract currently salience based channel pruning makes continuous breakthroughs in network compression in the realization the salience mechanism is used as a metric of channel salience to guide pruning therefore salience based channel pruning can dynamically adjust the channel width at run time which provides a flexible pruning scheme however there are two problems emerging a gating function is often needed to truncate the specific salience entries to zero which destabilizes the forward propagation dynamic architecture brings more cost for indexing in inference which bottlenecks the inference speed in this paper we propose a progressive channel shrinking pcs method to compress the selected salience entries at run time instead of roughly approximating them to zero we also propose a running shrinking policy to provide a testing static pruning scheme that can reduce the memory access cost for filter indexing we evaluate our method on imagenet and datasets over two prevalent networks resnet and vgg and demonstrate that our pcs outperforms all baselines and achieves state of the art in terms of compression performance tradeoff moreover we observe a significant and practical acceleration of inference accuracy improvement of object detection in vvc coded video using yolo features authors takahiro shindo taiju watanabe kein yamada hiroshi watanabe subjects computer vision and pattern recognition cs cv multimedia cs mm image and video processing eess iv arxiv link pdf link abstract with advances in image recognition technology based on deep learning automatic video analysis by artificial intelligence is becoming more widespread as the amount of video used for image recognition increases efficient compression methods for such video data are necessary in general when the image quality deteriorates due to image encoding the image recognition accuracy also falls therefore in this paper we propose a neural network based approach to improve image recognition accuracy especially the object detection accuracy by applying post processing to the encoded video versatile video coding vvc will be used for the video compression method since it is the latest video coding method with the best encoding performance the neural network is trained using the features of yolo the latest object detection model by using vvc as the video coding method and yolo as the 
detection model high object detection accuracy is achieved even at low bit rates experimental results show that the combination of the proposed method and vvc achieves better coding performance than regular vvc in object detection accuracy keyword raw vision transformers with mixed resolution tokenization authors tomer ronen omer levy avram golbert subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract vision transformer models process input images by dividing them into a spatially regular grid of equal size patches conversely transformers were originally introduced over natural language sequences where each token represents a subword a chunk of raw data of arbitrary size in this work we apply this approach to vision transformers by introducing a novel image tokenization scheme replacing the standard uniform grid with a mixed resolution sequence of tokens where each token represents a patch of arbitrary size using the quadtree algorithm and a novel saliency scorer we construct a patch mosaic where low saliency areas of the image are processed in low resolution routing more of the model s capacity to important image regions using the same architecture as vanilla vits our quadformer models achieve substantial accuracy gains on image classification when controlling for the computational budget code and models are publicly available at ots a one shot learning approach for text spotting in historical manuscripts authors wen bo hu hong jian zhan cong liu bing yin yue lu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract historical manuscript processing poses challenges like limited annotated training data and novel class emergence to address this we propose a novel one shot learning based text spotting ots approach that accurately and reliably spots novel characters with just one annotated support sample drawing inspiration from cognitive research we introduce a spatial alignment module that finds focuses on and learns the most discriminative spatial regions in the query image based on one support image especially since the low resource spotting task often faces the problem of example imbalance we propose a novel loss function called torus loss which can make the embedding space of distance metric more discriminative our approach is highly efficient and requires only a few training samples while exhibiting the remarkable ability to handle novel characters and symbols to enhance dataset diversity a new manuscript dataset that contains the ancient dongba hieroglyphics dbh is created we conduct experiments on publicly available vml hd tkh nc datasets and the new proposed dbh dataset the experimental results demonstrate that ots outperforms the state of the art methods in one shot text spotting overall our proposed method offers promising applications in the field of text spotting in historical manuscripts semi automated computer vision based tracking of multiple industrial entities a framework and dataset creation approach authors jérôme rutinowski hazem youssef sven franke irfan fachrudin priyanta frederik polachowski moritz roidl christopher reining subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this contribution presents the tomie framework tracking of multiple industrial entities a framework for the continuous tracking of industrial entities e g pallets crates barrels over a network of in this example six rgb cameras this framework makes use of multiple sensors data 
pipelines and data annotation procedures and is described in detail in this contribution with the vision of a fully automated tracking system for industrial entities in mind it enables researchers to efficiently capture high quality data in an industrial setting using this framework an image dataset the tomie dataset is created which at the same time is used to gauge the framework s validity this dataset contains annotation files for frames and entity instances that are captured from a set of six cameras that perceive a large indoor space this dataset out scales comparable datasets by a factor of four and is made up of scenarios drawn from industrial applications from the sector of warehousing three tracking algorithms namely bytetrack bot sort and siammot are applied to this dataset serving as a proof of concept and providing tracking results that are comparable to the state of the art keyword raw image there is no result
| 1
|
1,320
| 3,870,871,039
|
IssuesEvent
|
2016-04-11 07:12:26
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
closed
|
Migrate collectors, processors to Docker Cloud and PaperTrail
|
Collectors enhancement Processors
|
@roll
It looks like the same old Tutum. I suppose it's just a matter of changing a few hosts.
https://support.tutum.co/support/solutions/articles/13000003644-tutum-to-docker-cloud-migration-guide
# Tasks
* [x] Migrate to Docker Cloud
* [x] Drain logs to PaperTrail (@pwalsh has creds; a quick sanity-check sketch below)
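A quick way to sanity-check the PaperTrail drain from inside any of the containers is Python's stdlib syslog handler (a sketch; the host and port are placeholders for the account-specific endpoint from the creds above):
```python
import logging
import socket
from logging.handlers import SysLogHandler

# Placeholder endpoint: the real logsN host and port come from the
# PaperTrail account settings.
handler = SysLogHandler(address=("logsN.papertrailapp.com", 12345))
handler.setFormatter(logging.Formatter(
    "%(asctime)s collectors: %(levelname)s %(message)s",
    datefmt="%b %d %H:%M:%S"))

logger = logging.getLogger("drain-check")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("papertrail drain test from %s", socket.gethostname())
```
If the line shows up in the PaperTrail event viewer, the network path out of Docker Cloud works and the rest is wiring each service's output into the drain.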
|
1.0
|
Migrate collectors, processors to Docker Cloud and PaperTrail - @roll
It looks like the same old Tutum. I suppose it's just a matter of changing a few hosts.
https://support.tutum.co/support/solutions/articles/13000003644-tutum-to-docker-cloud-migration-guide
# Tasks
* [x] Migrate to Docker Cloud
* [x] Drain logs to PaperTrail (@pwalsh has creds)
|
process
|
migrate collectors processors to docker cloud and papertrail roll it looks like the same old tutum i suppose it s just like change few hosts tasks migrate to docker cloud drain logs to papertrail pwalsh has creds
| 1
|
17,940
| 23,937,356,195
|
IssuesEvent
|
2022-09-11 12:26:59
|
OpenDataScotland/the_od_bods
|
https://api.github.com/repos/OpenDataScotland/the_od_bods
|
closed
|
Add Spatial Hub as a data source
|
research data processing
|
Add https://data.spatialhub.scot/dataset/ as a source
We think this might be possible through the existing CKAN API (a minimal sketch at the end of this issue).
Some datasets on the Spatial Hub are already published by local authorities in their individual open data portals, which has the potential to cause duplicates. For example, Angus Council’s polling districts dataset is listed both on their [CKAN instance](http://opendata.angus.gov.uk/dataset/angus-council-polling-districts) and on [IS’s Spatial Hub](https://data.spatialhub.scot/dataset/polling_districts-an). We need to have a discussion around how we tackle these instances.
- Do we consider them duplicates?
- Do we add both records to the site or do we just add one of them?
- If the latter option, which ones goes on the site?
Out of the 138 datasets on the Spatial Hub, 42 are licensed as “not open”, meaning it would potentially be counterproductive to list them on opendata.scot as they would be inaccessible to the vast majority of people. We need to have a discussion around this and decide whether we list these datasets regardless of license or filter down to only datasets with an open license. If the latter option were chosen, we would need to do some work on filtering out these non-open datasets by adding a new step to our pipeline.
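As a starting point for the CKAN route, here is a minimal sketch (the licence filter is a guess at how “not open” is encoded on the Spatial Hub and needs verifying against real records):
```python
import requests

# Pull dataset metadata from the Spatial Hub's standard CKAN action API.
BASE = "https://data.spatialhub.scot/api/3/action/package_search"

resp = requests.get(BASE, params={"rows": 200}, timeout=30)
resp.raise_for_status()
packages = resp.json()["result"]["results"]

# Keep only datasets whose licence title does not look closed.
open_datasets = [
    p for p in packages
    if "not open" not in (p.get("license_title") or "").lower()
]
print(f"{len(open_datasets)} of {len(packages)} datasets look openly licensed")
```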
|
1.0
|
Add Spatial Hub as a data source - Add https://data.spatialhub.scot/dataset/ as a source
We think this might be possible through the existing CKAN API.
Some datasets on the Spatial Hub are already published by local authorities in their individual open data portals, which has the potential to cause duplicates. For example, Angus Council’s polling districts dataset is listed both on their [CKAN instance](http://opendata.angus.gov.uk/dataset/angus-council-polling-districts) and on [IS’s Spatial Hub](https://data.spatialhub.scot/dataset/polling_districts-an). We need to have a discussion around how we tackle these instances.
- Do we consider them duplicates?
- Do we add both records to the site or do we just add one of them?
- If the latter option, which ones goes on the site?
Out of the 138 datasets on the Spatial Hub, 42 are licensed as “not open”, meaning it would potentially be counterproductive to list them on opendata.scot as they would be inaccessible to the vast majority of people. We need to have a discussion around this and decide whether we list these datasets regardless of license or filter down to only datasets with an open license. If the latter option were chosen, we would need to do some work on filtering out these non-open datasets by adding a new step to our pipeline.
|
process
|
add spatial hub as a data source add as a source we think this might be possibly through the existing ckan api some datasets on the spatial hub are already published by local authorities in their individual open data portals which has the potential to cause duplicates for example angus council’s polling districts is listed on both their and we need to have a discussion around how we tackle these instances do we consider them duplicates do we add both records to the site or do we just add one of them if the latter option which ones goes on the site out of the datasets on the spatial hub of them are licensed as “not open” meaning it would potentially be counterproductive to list them on opendata scot as they would be inaccessible to the vast majority of people we need to have a discussion around this and decide whether we list these datasets regardless of license or if we filter down to datasets only with an open license if the latter option was chosen then we would need to do some work on filtering these non open datasets by adding a new step to our pipeline
| 1
|
207,791
| 23,495,761,191
|
IssuesEvent
|
2022-08-18 01:04:26
|
LingalaShalini/openjpeg-2.3.0_before_fix
|
https://api.github.com/repos/LingalaShalini/openjpeg-2.3.0_before_fix
|
closed
|
CVE-2022-0908 (Medium) detected in openjpegv2.3.0 - autoclosed
|
security vulnerability
|
## CVE-2022-0908 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>openjpegv2.3.0</b></p></summary>
<p>
<p>Official repository of the OpenJPEG project</p>
<p>Library home page: <a href=https://github.com/uclouvain/openjpeg.git>https://github.com/uclouvain/openjpeg.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/thirdparty/libtiff/tif_dirread.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Null source pointer passed as an argument to memcpy() function within TIFFFetchNormalTag () in tif_dirread.c in libtiff versions up to 4.3.0 could lead to Denial of Service via crafted TIFF file.
<p>Publish Date: 2022-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0908>CVE-2022-0908</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-0908">https://nvd.nist.gov/vuln/detail/CVE-2022-0908</a></p>
<p>Release Date: 2022-03-11</p>
<p>Fix Resolution: libtiff-tools - 4.2.0-1+deb11u1,4.1.0+git191117-2~deb10u4,4.3.0-6;libtiffxx5 - 4.3.0-6,4.2.0-1+deb11u1,4.1.0+git191117-2~deb10u4;libtiff4 - 4.1.0+git191117-2~deb10u4,4.2.0-1+deb11u1,4.3.0-6;libtiff5 - 4.2.0-1+deb11u1,4.1.0+git191117-2~deb10u4,4.3.0-6;libtiff-opengl - 4.1.0+git191117-2~deb10u4,4.3.0-6,4.2.0-1+deb11u1;libtiffxx0c2 - 4.2.0-1+deb11u1,4.1.0+git191117-2~deb10u4,4.3.0-6;libtiff4-dev - 4.2.0-1+deb11u1,4.1.0+git191117-2~deb10u4,4.3.0-6;libtiff5-dev - 4.1.0+git191117-2~deb10u4,4.3.0-6,4.2.0-1+deb11u1;libtiff-dev - 4.1.0+git191117-2~deb10u4,4.3.0-6,4.2.0-1+deb11u1;libtiff-doc - 4.1.0+git191117-2~deb10u4,4.3.0-6,4.2.0-1+deb11u1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-0908 (Medium) detected in openjpegv2.3.0 - autoclosed - ## CVE-2022-0908 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>openjpegv2.3.0</b></p></summary>
<p>
<p>Official repository of the OpenJPEG project</p>
<p>Library home page: <a href=https://github.com/uclouvain/openjpeg.git>https://github.com/uclouvain/openjpeg.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/thirdparty/libtiff/tif_dirread.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Null source pointer passed as an argument to memcpy() function within TIFFFetchNormalTag () in tif_dirread.c in libtiff versions up to 4.3.0 could lead to Denial of Service via crafted TIFF file.
<p>Publish Date: 2022-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0908>CVE-2022-0908</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-0908">https://nvd.nist.gov/vuln/detail/CVE-2022-0908</a></p>
<p>Release Date: 2022-03-11</p>
<p>Fix Resolution: libtiff-tools - 4.2.0-1+deb11u1,4.1.0+git191117-2~deb10u4,4.3.0-6;libtiffxx5 - 4.3.0-6,4.2.0-1+deb11u1,4.1.0+git191117-2~deb10u4;libtiff4 - 4.1.0+git191117-2~deb10u4,4.2.0-1+deb11u1,4.3.0-6;libtiff5 - 4.2.0-1+deb11u1,4.1.0+git191117-2~deb10u4,4.3.0-6;libtiff-opengl - 4.1.0+git191117-2~deb10u4,4.3.0-6,4.2.0-1+deb11u1;libtiffxx0c2 - 4.2.0-1+deb11u1,4.1.0+git191117-2~deb10u4,4.3.0-6;libtiff4-dev - 4.2.0-1+deb11u1,4.1.0+git191117-2~deb10u4,4.3.0-6;libtiff5-dev - 4.1.0+git191117-2~deb10u4,4.3.0-6,4.2.0-1+deb11u1;libtiff-dev - 4.1.0+git191117-2~deb10u4,4.3.0-6,4.2.0-1+deb11u1;libtiff-doc - 4.1.0+git191117-2~deb10u4,4.3.0-6,4.2.0-1+deb11u1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in autoclosed cve medium severity vulnerability vulnerable library official repository of the openjpeg project library home page a href found in base branch master vulnerable source files thirdparty libtiff tif dirread c vulnerability details null source pointer passed as an argument to memcpy function within tifffetchnormaltag in tif dirread c in libtiff versions up to could lead to denial of service via crafted tiff file publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libtiff tools libtiff opengl dev dev libtiff dev libtiff doc step up your open source security game with mend
| 0
|
21,728
| 30,240,655,413
|
IssuesEvent
|
2023-07-06 13:19:31
|
EBIvariation/eva-opentargets
|
https://api.github.com/repos/EBIvariation/eva-opentargets
|
closed
|
Manual curation for 2023.09 release
|
Processing
|
Should be done after #379 but before 30 June to ensure handover goes smoothly and SOPs are up to date.
Refer to [documentation](https://github.com/EBIvariation/eva-opentargets/tree/master/docs/manual-curation) for full description of steps.
**Checklist:**
- [x] Step 1 — Process
- [x] Step 2 — Curate
- [x] Curation
- [x] Review 1
- [x] Review 2
- [x] Step 3 — Export
- [x] Step 4 — EFO feedback
|
1.0
|
Manual curation for 2023.09 release - Should be done after #379 but before 30 June to ensure handover goes smoothly and SOPs are up to date.
Refer to [documentation](https://github.com/EBIvariation/eva-opentargets/tree/master/docs/manual-curation) for full description of steps.
**Checklist:**
- [x] Step 1 — Process
- [x] Step 2 — Curate
- [x] Curation
- [x] Review 1
- [x] Review 2
- [x] Step 3 — Export
- [x] Step 4 — EFO feedback
|
process
|
manual curation for release should be done after but before june to ensure handover goes smoothly and sops are up to date refer to for full description of steps checklist step — process step — curate curation review review step — export step — efo feedback
| 1
|
7,941
| 11,137,385,066
|
IssuesEvent
|
2019-12-20 19:11:34
|
kubeflow/testing
|
https://api.github.com/repos/kubeflow/testing
|
closed
|
install tektoncd/pipeline in our test clusters
|
area/testing kind/process priority/p1
|
It might be valuable to support [tektoncd/pipeline](https://github.com/tektoncd/pipeline/blob/master/docs/install.md) for writing E2E tests.
One of the nice things about pipelines is that pipelines are split into reusable tasks. So we could have common tasks for things like deploying kubeflow and then combine these tasks into E2E tests.
Installing it should just be a matter of running kubectl apply in the test cluster.
https://github.com/tektoncd/pipeline/blob/master/docs/install.md
Related to: kubeflow/kubeflow#3035
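Since installing is expected to be just a `kubectl apply`, the test-infra bootstrap could wrap it like this (a sketch; the release manifest URL is the publicly documented one for the latest Tekton Pipelines release, but it should be verified against the install docs):
```python
import subprocess

# Documented location of the latest Tekton Pipelines release manifest;
# verify against the install docs before relying on it.
RELEASE = "https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml"

# Equivalent to running `kubectl apply -f <manifest>` against the test cluster.
subprocess.run(["kubectl", "apply", "--filename", RELEASE], check=True)
```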
|
1.0
|
install tektoncd/pipeline in our test clusters - It might be valuable to support [tektoncd/pipeline](https://github.com/tektoncd/pipeline/blob/master/docs/install.md) for writing E2E tests.
One of the nice things about pipelines is that pipelines are split into reusable tasks. So we could have common tasks for things like deploying kubeflow and then combine these tasks into E2E tests.
Installing it should just be a matter of running kubectl apply in the test cluster.
https://github.com/tektoncd/pipeline/blob/master/docs/install.md
Related to: kubeflow/kubeflow#3035
|
process
|
install tektoncd pipeline in our test clusters it might be valuable to support for writing tests one of the nice things about pipelines is that pipelines are split into reusable tasks so we could have common tasks for things like deploying kubeflow and then combine these tasks into tests installing it should just be a matter of running kubectl apply in the test cluster related to kubeflow kubeflow
| 1
|
400,748
| 27,298,388,288
|
IssuesEvent
|
2023-02-23 22:35:11
|
Azure/azure-functions-java-worker
|
https://api.github.com/repos/Azure/azure-functions-java-worker
|
closed
|
Spring Cloud Functions in Azure Functions
|
Documentation
|
Quick start and developer guide
Sample code for top triggers and bindings
|
1.0
|
Spring Cloud Functions in Azure Functions - Quick start and developer guide
Sample code for top triggers and bindings
|
non_process
|
spring cloud functions in azure functions quick start and developer guide sample code for top triggers and bindings
| 0
|
17,436
| 23,255,647,979
|
IssuesEvent
|
2022-08-04 09:01:45
|
goblint/analyzer
|
https://api.github.com/repos/goblint/analyzer
|
opened
|
Polish `__goblint_check`
|
cleanup testing preprocessing
|
In #278 all the `assert`s in tests were changed to `__goblint_check`, which is declared by a `goblint.h` header that we forcefully include into every analyzed file via `-include`. This makes the programs uncompilable on their own, and most of them still include `assert.h`, which is now redundant.
This situation should be improved:
- [ ] Add explicit `#include <goblint.h>` to all tests using `__goblint_check`.
- [ ] Remove excessive `#include <assert.h>` from such tests.
- [ ] Provide a dummy implementation for `goblint.h`, which wouldn't be used for analysis (they're handled as specials), but could be used for compiling the programs. It would also allow using Goblint function annotations in actual projects/stories.
- [ ] Document `__goblint_check`, etc as function annotations like `__goblint_assume_join` is documented in #724.
- [ ] Design a better structure for our `includes` directory: right now it contains a mix of various things and cannot be added to include paths of actual projects:
1. Goblint-specific headers: `goblint.h`, `linux/goblint_preconf.h`, `linuxlight.h`?
2. Goblint overrides of standard headers: `assert.h`.
3. Goblint stubs for standard functions: `stdlib.c`, `pthread.c`.
4. Goblint stubs for non-standard functions: `sv-comp.c`.
5. And eventually general stubs for Goblint-specific functions: `goblint.c`.
They should be split by these various types. Moreover, each of them should have standard structure (`include`, `src`, etc).
- [ ] Rename `includes` to something better since it also contains various stub implementations.
|
1.0
|
Polish `__goblint_check` - In #278 all the `assert`s in tests were changed to `__goblint_check`, which is declared by a `goblint.h` header that we forcefully include into every analyzed file via `-include`. This makes the programs uncompilable on their own, and most of them still include `assert.h`, which is now redundant.
This situation should be improved:
- [ ] Add explicit `#include <goblint.h>` to all tests using `__goblint_check`.
- [ ] Remove excessive `#include <assert.h>` from such tests.
- [ ] Provide a dummy implementation for `goblint.h`, which wouldn't be used for analysis (they're handled as specials), but could be used for compiling the programs. It would also allow using Goblint function annotations in actual projects/stories.
- [ ] Document `__goblint_check`, etc as function annotations like `__goblint_assume_join` is documented in #724.
- [ ] Design a better structure for our `includes` directory: right now it contains a mix of various things and cannot be added to include paths of actual projects:
1. Goblint-specific headers: `goblint.h`, `linux/goblint_preconf.h`, `linuxlight.h`?
2. Goblint overrides of standard headers: `assert.h`.
3. Goblint stubs for standard functions: `stdlib.c`, `pthread.c`.
4. Goblint stubs for non-standard functions: `sv-comp.c`.
5. And eventually general stubs for Goblint-specific functions: `goblint.c`.
They should be split by these various types. Moreover, each of them should have standard structure (`include`, `src`, etc).
- [ ] Rename `includes` to something better since it also contains various stub implementations.
|
process
|
polish goblint check in all the assert s in tests were changed to goblint check which is declared by a goblint h header that we forcefully include into every analyzed file via include this makes the programs normally uncompilable and most of them still include assert h which now is excessive this situation should be improved add explicit include to all tests using goblint check remove excessive include from such tests provide a dummy implementation for goblint h which wouldn t be used for analysis they re handled as specials but could be used for compiling the programs it would also allow using goblint function annotations in actual projects stories document goblint check etc as function annotations like goblint assume join is documented in design a better structure for our includes directory right now it contains a mix of various things and cannot be added to include paths of actual projects goblint specific headers goblint h linux goblint preconf h linuxlight h goblint overrides of standard headers assert h goblint stubs for standard functions stdlib c pthread c goblint stubs for non standard functions sv comp c and eventually general stubs for goblint specific functions goblint c they should be split by these various types moreover each of them should have standard structure include src etc rename includes to something better since it also contains various stub implementations
| 1
|
9,186
| 12,228,624,864
|
IssuesEvent
|
2020-05-03 20:14:03
|
chfor183/data_science_articles
|
https://api.github.com/repos/chfor183/data_science_articles
|
opened
|
Data Preprocessing Concepts
|
Data Preprocessing
|
## TL;DR
1. Load data
2. Look at the data shape and type
3. Drop duplicates
4. Look at the few first and last rows
5. Remove special characters from numeric columns
6. Remove special characters from categorical columns
7. Handling missing values
8. Look at the data distribution
9. Handling of outliers
10. Dealing with skewed data
11. Transformations if we use predictive modeling
a. Normalization
b. Scaling
c. Standardization
d. Balancing (oversampling / undersampling)
e. Binning
f. Categorical encoding (a pandas sketch of several of these steps follows below)
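A minimal pandas sketch of a few of these steps (the file name, column names, and thresholds are made up for illustration):
```python
import numpy as np
import pandas as pd

# 1-4: load, inspect shape/types, drop duplicates, peek at head/tail
df = pd.read_csv("data.csv")  # hypothetical input file
print(df.shape, df.dtypes)
df = df.drop_duplicates()
print(df.head(), df.tail())

# 5: strip special characters from a numeric column stored as text
df["price"] = (df["price"].astype(str)
               .str.replace(r"[^0-9.\-]", "", regex=True)
               .astype(float))

# 7: fill missing values with the median
df["price"] = df["price"].fillna(df["price"].median())

# 9: clip outliers to the 1st/99th percentiles
lo, hi = df["price"].quantile([0.01, 0.99])
df["price"] = df["price"].clip(lo, hi)

# 10: tame right-skewed data with a log transform
df["price_log"] = np.log1p(df["price"])

# 11f: one-hot encode a categorical column
df = pd.get_dummies(df, columns=["category"])
```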
## Key Takeaways
- 1
- 2
## Useful Code Snippets
```
function test() {
console.log("notice the blank line before this function?");
}
```
## Articles/Ressources
https://towardsdatascience.com/data-preprocessing-concepts-fa946d11c825
|
1.0
|
Data Preprocessing Concepts - ## TL;DR
1. Load data
2. Look at the data shape and type
3. Drop duplicates
4. Look at the few first and last rows
5. Remove special characters from numeric columns
6. Remove special characters from categorical columns
7. Handling missing values
8. Look at the data distribution
9. Handling of outliers
10. Dealing with skewed data
11. Transformations if we use predictive modeling
a. Normalization
b. Scaling
c. Standardization
d. Balancing (oversampling / undersampling)
e. Binning
f. Categorical encoding
## Key Takeaways
- 1
- 2
## Useful Code Snippets
```
function test() {
console.log("notice the blank line before this function?");
}
```
## Articles/Ressources
https://towardsdatascience.com/data-preprocessing-concepts-fa946d11c825
|
process
|
data preprocessing concepts tl dr load data look at the data shape and type drop duplicates look at the few first and last rows remove special characters from numeric columns remove special characters from categorical columns handling missing values look at the data distribution handling of outliers dealing with skewed data transformations if we use predictive modeling a normalization b scaling c standardization d balancing oversampling undersampling e binning f categorical encoding key takeaways useful code snippets function test console log notice the blank line before this function articles ressources
| 1
|
7,430
| 10,548,713,310
|
IssuesEvent
|
2019-10-03 06:48:17
|
Altinn/altinn-studio
|
https://api.github.com/repos/Altinn/altinn-studio
|
closed
|
Create a workflow with conditional steps
|
altinn-studio process
|
As a service developer I should be able to create a workflow with conditional steps
|
1.0
|
Create a workflow with conditional steps - As a service developer I should be able to create a workflow with conditional steps
|
process
|
create a workflow with conditional steps as a service developer i should be able to create a workflow with conditional steps
| 1
|
2,676
| 5,495,567,821
|
IssuesEvent
|
2017-03-15 05:02:04
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
False positive of "extraneous input" due to RaiseEvent
|
bug critical parse-tree-processing
|
I have a complex procedure that is causing a parser error.
The offending line is simply this:
mFrmQueue.RecordSource = strSQL
and it reports `extraneous input 'strSQL' expecting {')', WS, LINE_CONTINUATION}`.
Few things that raise red flags:
It reports the error as being located at line 313, column 52. This is the correct line, but the end of the line (after the `strSQL`) is at column 40. Hmm....
Commenting out this line _still_ results in a parse error
Deleting the lines and pasting it back -> parse error
Typing the whole line by hand -> parse error
Commenting the entire body of the procedure -> no more parse error
Copy'n'paste the entire body of the procedure -> error parse again. (I used Immediate windows which normally would strip out any unprintable characters).
The procedure does not contain any hidden instructions like `VB_Attribute` or anything of that sort.
While writing the bug report, I found a piece of clue -- if I comment the `RaiseEvent` line, no more parse error.... So it seems to be related to the raiseevent somehow.
```vba
'Several lines of code...
RaiseEvent FilterQueue(bolDoNotContinue, ByVal strSQL, colSpecials)
If bolDoNotContinue = False Then
    If Len(strSQL) Then
        strSQL = "WHERE " & strSQL
    End If
    strOrigSQL = mFrmQueue.RecordSource
    strSQL = ReplaceWhereClause(strOrigSQL, strSQL)
    mFrmQueue.RecordSource = strSQL
    On Error Resume Next
    If Err.Number Then
        lngError = Err.Number
        strError = Err.Description & ", (" & Err.Source & ")"
        mFrmQueue.RecordSource = strOrigSQL
    End If
    On Error GoTo 0
    If lngError Then
        Err.Raise lngError, mconstrModuleName, "Cannot perform ApplyFilters due to error: " & vbNewLine & strError
    End If
End If
```
I verified that even if I comment everything else out, leaving only the `RaiseEvent` in the code, I get a parse error. If I comment out only the `RaiseEvent`, it parses OK. So it looks like RaiseEvent is causing this problem somehow, and it cascades to the subsequent lines.
In case it matters... the definition of event is...
```vba
Public Event FilterQueue(ByRef Cancel As Boolean, ByVal SQLWhereClause As String, ByRef UnprocessedControls As VBA.Collection)
```
|
1.0
|
False positive of "extraneous input" due to RaiseEvent - I have a complex procedure that is causing a parser error.
The offending line is simply this:
mFrmQueue.RecordSource = strSQL
and it reports `extraneous input 'strSQL' expecting {')', WS, LINE_CONTINUATION}`.
Few things that raise red flags:
It reports the error as being located at line 313, column 52. This is the correct line, but the end of the line (after the `strSQL`) is at column 40. Hmm....
Commenting out this line _still_ results in a parse error
Deleting the lines and pasting it back -> parse error
Typing the whole line by hand -> parse error
Commenting the entire body of the procedure -> no more parse error
Copy'n'paste the entire body of the procedure -> error parse again. (I used Immediate windows which normally would strip out any unprintable characters).
The procedure does not contain any hidden instructions like `VB_Attribute` or anything of that sort.
While writing the bug report, I found a piece of clue -- if I comment the `RaiseEvent` line, no more parse error.... So it seems to be related to the raiseevent somehow.
```vba
'Several lines of code...
RaiseEvent FilterQueue(bolDoNotContinue, ByVal strSQL, colSpecials)
If bolDoNotContinue = False Then
    If Len(strSQL) Then
        strSQL = "WHERE " & strSQL
    End If
    strOrigSQL = mFrmQueue.RecordSource
    strSQL = ReplaceWhereClause(strOrigSQL, strSQL)
    mFrmQueue.RecordSource = strSQL
    On Error Resume Next
    If Err.Number Then
        lngError = Err.Number
        strError = Err.Description & ", (" & Err.Source & ")"
        mFrmQueue.RecordSource = strOrigSQL
    End If
    On Error GoTo 0
    If lngError Then
        Err.Raise lngError, mconstrModuleName, "Cannot perform ApplyFilters due to error: " & vbNewLine & strError
    End If
End If
```
I verified that even if I comment everything else out, leaving only the `RaiseEvent` in the code, I get a parse error. If I comment out only the `RaiseEvent`, it parses OK. So it looks like RaiseEvent is causing this problem somehow, and it cascades to the subsequent lines.
In case it matters... the definition of event is...
```vba
Public Event FilterQueue(ByRef Cancel As Boolean, ByVal SQLWhereClause As String, ByRef UnprocessedControls As VBA.Collection)
```
|
process
|
false positive of extraneous input due to raiseevent i have a complex procedure that is causing a parser error the offending line is simply this mfrmqueue recordsource strsql and it reports extraneous input strsql expecting ws line continuation few things that raise red flags it reports the error being located at the line on column this is the correct line but the end of the line after the strsql is at column hmm commenting out this line stills results in a parse error deleting the lines and pasting it back parse error typing the whole line by hand parse error commenting the entire body of the procedure no more parse error copy n paste the entire body of the procedure error parse again i used immediate windows which normally would strip out any unprintable characters the procedure does not contain any hidden instructions like vb attribute or anything of that sort while writing the bug report i found a piece of clue if i comment the raiseevent line no more parse error so it seems to be related to the raiseevent somehow several lines of code raiseevent filterqueue boldonotcontinue byval strsql colspecials if boldonotcontinue false then if len strsql then strsql where strsql end if strorigsql mfrmqueue recordsource strsql replacewhereclause strorigsql strsql mfrmqueue recordsource strsql on error resume next if err number then lngerror err number strerror err description err source mfrmqueue recordsource strorigsql end if on error goto if lngerror then err raise lngerror mconstrmodulename cannot perform applyfilters due to error vbnewline strerror end if end if i verified that even if i commented everything else leaving only the raiseevent in the code i get an parse error comment out only the raiseevent it parses ok so it looks like raiseevent is causing this problem somehow and this cascades to subsequent lines in case it matters the definition of event is public event filterqueue byref cancel as boolean byval sqlwhereclause as string byref unprocessedcontrols as vba collection
| 1
|
30,093
| 5,726,222,892
|
IssuesEvent
|
2017-04-20 18:26:20
|
stormpath/stormpath-sdk-dotnet
|
https://api.github.com/repos/stormpath/stormpath-sdk-dotnet
|
closed
|
Add examples of token creation code
|
documentation wontfix
|
This page currently lacks examples of code for each type of token generation strategy: https://docs.stormpath.com/csharp/product-guide/latest/auth_n.html#generating-an-oauth-2-0-access-token
|
1.0
|
Add examples of token creation code - This page currently lacks examples of code for each type of token generation strategy: https://docs.stormpath.com/csharp/product-guide/latest/auth_n.html#generating-an-oauth-2-0-access-token
|
non_process
|
add examples of token creation code this page currently lacks examples of code for each type of token generation strategy
| 0
|
10,311
| 13,156,649,694
|
IssuesEvent
|
2020-08-10 11:13:15
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
vsphere-template post-processor -> Failed - the task was cancelled by a user
|
post-processor/vsphere-template regression waiting-reply
|
Here's our post-processors chain:
> "post-processors": [
> [
> {
> "cluster": "{{ user `vsphere_cluster` }}",
> "datacenter": "{{ user `vsphere_datacenter` }}",
> "datastore": "{{ user `vsphere_datastore` }}",
> "disk_mode": "{{ user `vsphere_disk_mode` }}",
> "host": "{{ user `vsphere_host` }}",
> "insecure": "{{ user `vsphere_insecure` }}",
> "only": [
> "vmware-iso"
> ],
> "options": [
> "--memorySize:{{ user `name` }}-{{ user `operating_system` }}-{{ user `uuid` }}={{ user `vsphere_options_memory` }}",
> "--numberOfCpus:{{ user `name` }}-{{ user `operating_system` }}-{{ user `uuid` }}={{ user `vsphere_options_cpus` }}"
> ],
> "password": "{{ user `vsphere_password` }}",
> "type": "vsphere",
> "username": "{{ user `vsphere_username` }}",
> "vm_folder": "{{ user `vsphere_vm_folder` }}",
> "vm_name": "{{ user `name` }}-{{ user `operating_system` }}-{{ user `uuid` }}",
> "vm_network": "{{ user `vsphere_vm_network` }}"
> },
> {
> "datacenter": "{{ user `vsphere_datacenter` }}",
> "folder": "{{ user `vsphere_folder` }}",
> "host": "{{ user `vsphere_host` }}",
> "insecure": "{{ user `vsphere_insecure` }}",
> "only": [
> "vmware-iso"
> ],
> "password": "{{ user `vsphere_password` }}",
> "type": "vsphere-template",
> "username": "{{ user `vsphere_username` }}"
> }
> ],
> {
> "type": "manifest"
> }
> ]
This worked until recently. Now the vsphere-template post-processor won't save the template. It starts and fails, with the only error/message on vSphere saying
> Export VM Failed - the task was cancelled by a user
There is nothing conclusive in the ESXi logs or the Packer debug log. Any clues?
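One way to narrow it down is to try the same template conversion by hand against vCenter, outside Packer, e.g. with pyVmomi (a sketch; host, credentials, and the VM name are placeholders, and this covers only the mark-as-template step, not everything the post-processor does):
```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders: point these at the real vCenter and the uploaded VM.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="user", pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if == "name-os-uuid")
    vm.MarkAsTemplate()  # fails if the VM is powered on or mid-task
finally:
    Disconnect(si)
```
If the manual call succeeds, the cancellation is more likely coming from the post-processor's session handling than from vCenter rejecting the operation.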
|
1.0
|
vsphere-template post-processor -> Failed - the task was cancelled by a user - Here's our post-processors chain:
> "post-processors": [
> [
> {
> "cluster": "{{ user `vsphere_cluster` }}",
> "datacenter": "{{ user `vsphere_datacenter` }}",
> "datastore": "{{ user `vsphere_datastore` }}",
> "disk_mode": "{{ user `vsphere_disk_mode` }}",
> "host": "{{ user `vsphere_host` }}",
> "insecure": "{{ user `vsphere_insecure` }}",
> "only": [
> "vmware-iso"
> ],
> "options": [
> "--memorySize:{{ user `name` }}-{{ user `operating_system` }}-{{ user `uuid` }}={{ user `vsphere_options_memory` }}",
> "--numberOfCpus:{{ user `name` }}-{{ user `operating_system` }}-{{ user `uuid` }}={{ user `vsphere_options_cpus` }}"
> ],
> "password": "{{ user `vsphere_password` }}",
> "type": "vsphere",
> "username": "{{ user `vsphere_username` }}",
> "vm_folder": "{{ user `vsphere_vm_folder` }}",
> "vm_name": "{{ user `name` }}-{{ user `operating_system` }}-{{ user `uuid` }}",
> "vm_network": "{{ user `vsphere_vm_network` }}"
> },
> {
> "datacenter": "{{ user `vsphere_datacenter` }}",
> "folder": "{{ user `vsphere_folder` }}",
> "host": "{{ user `vsphere_host` }}",
> "insecure": "{{ user `vsphere_insecure` }}",
> "only": [
> "vmware-iso"
> ],
> "password": "{{ user `vsphere_password` }}",
> "type": "vsphere-template",
> "username": "{{ user `vsphere_username` }}"
> }
> ],
> {
> "type": "manifest"
> }
> ]
This worked until recently. Now the vsphere-template post-processor won't save the template. It starts and fails, with the only error/message on vSphere saying
> Export VM Failed - the task was cancelled by a user
There is nothing conclusive in the ESXi logs or the Packer debug log. Any clues?
|
process
|
vsphere template post processor failed the task was cancelled by a user here s our post processors chain post processors cluster user vsphere cluster datacenter user vsphere datacenter datastore user vsphere datastore disk mode user vsphere disk mode host user vsphere host insecure user vsphere insecure only vmware iso options memorysize user name user operating system user uuid user vsphere options memory numberofcpus user name user operating system user uuid user vsphere options cpus password user vsphere password type vsphere username user vsphere username vm folder user vsphere vm folder vm name user name user operating system user uuid vm network user vsphere vm network datacenter user vsphere datacenter folder user vsphere folder host user vsphere host insecure user vsphere insecure only vmware iso password user vsphere password type vsphere template username user vsphere username type manifest this worked till recently now vsphere template pp won t save template it starts and fails with only error message on vsphere saying export vm failed the task was cancelled by a user there is nothing conclusive in esxi logs or packer debug log any clues
| 1
|
19,886
| 26,330,435,258
|
IssuesEvent
|
2023-01-10 10:22:52
|
deepset-ai/haystack
|
https://api.github.com/repos/deepset-ai/haystack
|
closed
|
`PreProcessor` `FileExistsError` with parallel pipeline instantiation on same machine (noisy neighbor)
|
type:bug topic:preprocessing
|
**Describe the bug**
When a `PreProcessor` is initialized, it downloads the `punkt` sentence tokenizer ([see code here](https://github.com/deepset-ai/haystack/blob/fc077992067263a1a2c2d8bdf3ccfad02ea87ee6/haystack/nodes/preprocessor/preprocessor.py#L99)). When this is done by multiple processes in parallel, it can happen that neither finds the model data locally; both then start downloading it, and one of the processes fails because the downloaded files already exist.
**Error message**
Stack trace:
```bash
File "/usr/local/lib/python3.8/dist-packages/nltk/data.py", line 583, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt
Searched in:
- '/root/nltk_data'
- '/usr/nltk_data'
- '/usr/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
**********************************************************************
During handling of the above exception, another exception occurred:
ray::RayServeWrappedReplica.reconfigure() (pid=76, ip=xy, repr=xy)
File "/usr/local/lib/python3.8/dist-packages/haystack/pipelines/base.py", line 1958, in _load_or_get_component
component_instance = BaseComponent._create_instance(
File "/usr/local/lib/python3.8/dist-packages/haystack/nodes/base.py", line 158, in _create_instance
instance = subclass(**component_params)
File "/usr/local/lib/python3.8/dist-packages/haystack/nodes/base.py", line 48, in wrapper_exportable_to_yaml
init_func(self, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/haystack/nodes/preprocessor/preprocessor.py", line 108, in __init__
nltk.download("punkt")
File "/usr/local/lib/python3.8/dist-packages/nltk/downloader.py", line 777, in download
for msg in self.incr_download(info_or_id, download_dir, force):
File "/usr/local/lib/python3.8/dist-packages/nltk/downloader.py", line 642, in incr_download
yield from self._download_package(info, download_dir, force)
File "/usr/local/lib/python3.8/dist-packages/nltk/downloader.py", line 735, in _download_package
for msg in _unzip_iter(filepath, zipdir, verbose=False):
File "/usr/local/lib/python3.8/dist-packages/nltk/downloader.py", line 2250, in _unzip_iter
zf.extractall(root)
File "/usr/lib/python3.8/zipfile.py", line 1647, in extractall
self._extract_member(zipinfo, path, pwd)
File "/usr/lib/python3.8/zipfile.py", line 1697, in _extract_member
os.mkdir(targetpath)
FileExistsError: [Errno 17] File exists: '/root/nltk_data/tokenizers/punkt'
```
Pipeline YAML (happy to share this upon request within a DM)
**Expected behavior**
Component should be multiprocessing safe.
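One possible direction (a sketch of a workaround, not necessarily the fix that should ship): probe for the model first and tolerate losing the extraction race.
```python
import nltk

def ensure_punkt() -> None:
    """Best-effort, multiprocess-tolerant punkt download."""
    try:
        nltk.data.find("tokenizers/punkt")
        return  # already on disk, nothing to do
    except LookupError:
        pass
    try:
        nltk.download("punkt")
    except FileExistsError:
        # Another process won the race and is extracting the same
        # archive; its copy is (or will shortly be) usable.
        pass
```
A file lock around the download would close the remaining window where one process reads a half-extracted directory.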
**Additional context**
Add any other context about the problem here, like document types / preprocessing steps / settings of reader etc.
**To Reproduce**
It's a race condition, so it's a bit hard to replicate:
1. Initialize 2 pipelines with a `Preprocessor` at the same time while not having the `punkt` sentence tokenizer on your local disk
2. The above error should happen
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://haystack.deepset.ai/overview/faq)?
**System:**
- OS:
- GPU/CPU:
- Haystack version (commit or version number): 1.10
- DocumentStore:
- Reader:
- Retriever:
|
1.0
|
`PreProcessor` `FileExistsError` with parallel pipeline instantiation on same machine (noisy neighbor) - **Describe the bug**
When a `PreProcessor` is initialized, it downloads the `punkt` sentence tokenizer ([see code here](https://github.com/deepset-ai/haystack/blob/fc077992067263a1a2c2d8bdf3ccfad02ea87ee6/haystack/nodes/preprocessor/preprocessor.py#L99)). When this is done by multiple processes in parallel, it can happen that neither finds the model data locally; both then start downloading it, and one of the processes fails because the downloaded files already exist.
**Error message**
Stack trace:
```bash
File "/usr/local/lib/python3.8/dist-packages/nltk/data.py", line 583, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt
Searched in:
- '/root/nltk_data'
- '/usr/nltk_data'
- '/usr/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
**********************************************************************
During handling of the above exception, another exception occurred:
ray::RayServeWrappedReplica.reconfigure() (pid=76, ip=xy, repr=xy)
File "/usr/local/lib/python3.8/dist-packages/haystack/pipelines/base.py", line 1958, in _load_or_get_component
component_instance = BaseComponent._create_instance(
File "/usr/local/lib/python3.8/dist-packages/haystack/nodes/base.py", line 158, in _create_instance
instance = subclass(**component_params)
File "/usr/local/lib/python3.8/dist-packages/haystack/nodes/base.py", line 48, in wrapper_exportable_to_yaml
init_func(self, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/haystack/nodes/preprocessor/preprocessor.py", line 108, in __init__
nltk.download("punkt")
File "/usr/local/lib/python3.8/dist-packages/nltk/downloader.py", line 777, in download
for msg in self.incr_download(info_or_id, download_dir, force):
File "/usr/local/lib/python3.8/dist-packages/nltk/downloader.py", line 642, in incr_download
yield from self._download_package(info, download_dir, force)
File "/usr/local/lib/python3.8/dist-packages/nltk/downloader.py", line 735, in _download_package
for msg in _unzip_iter(filepath, zipdir, verbose=False):
File "/usr/local/lib/python3.8/dist-packages/nltk/downloader.py", line 2250, in _unzip_iter
zf.extractall(root)
File "/usr/lib/python3.8/zipfile.py", line 1647, in extractall
self._extract_member(zipinfo, path, pwd)
File "/usr/lib/python3.8/zipfile.py", line 1697, in _extract_member
os.mkdir(targetpath)
FileExistsError: [Errno 17] File exists: '/root/nltk_data/tokenizers/punkt'
```
Pipeline YAML (happy to share this upon request within a DM)
**Expected behavior**
Component should be multiprocessing safe.
**Additional context**
Add any other context about the problem here, like document types / preprocessing steps / settings of reader etc.
**To Reproduce**
It's a race condition, so it's a bit hard to replicate:
1. Initialize 2 pipelines with a `Preprocessor` at the same time while not having the `punkt` sentence tokenizer on your local disk
2. The above error should happen
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://haystack.deepset.ai/overview/faq)?
**System:**
- OS:
- GPU/CPU:
- Haystack version (commit or version number): 1.10
- DocumentStore:
- Reader:
- Retriever:
|
process
|
preprocessor fileexistserror with parallel pipeline instantiation on same machine noisy neighbor describe the bug when a preprocessor is initialized it downloads the punkt sentence tokenizer when this is done by multiple processes in parallel then it can happen that both don t find the model data locally and hence start to download the model and then one of the processes fails as the downloaded files already exist error message stack trace bash file usr local lib dist packages nltk data py line in find raise lookuperror resource not found lookuperror resource punkt not found please use the nltk downloader to obtain the resource import nltk nltk download punkt for more information see attempted to load tokenizers punkt searched in root nltk data usr nltk data usr share nltk data usr lib nltk data usr share nltk data usr local share nltk data usr lib nltk data usr local lib nltk data during handling of the above exception another exception occurred ray rayservewrappedreplica reconfigure pid ip xy repr xy file usr local lib dist packages haystack pipelines base py line in load or get component component instance basecomponent create instance file usr local lib dist packages haystack nodes base py line in create instance instance subclass component params file usr local lib dist packages haystack nodes base py line in wrapper exportable to yaml init func self args kwargs file usr local lib dist packages haystack nodes preprocessor preprocessor py line in init nltk download punkt file usr local lib dist packages nltk downloader py line in download for msg in self incr download info or id download dir force file usr local lib dist packages nltk downloader py line in incr download yield from self download package info download dir force file usr local lib dist packages nltk downloader py line in download package for msg in unzip iter filepath zipdir verbose false file usr local lib dist packages nltk downloader py line in unzip iter zf extractall root file usr lib zipfile py line in extractall self extract member zipinfo path pwd file usr lib zipfile py line in extract member os mkdir targetpath fileexistserror file exists root nltk data tokenizers punkt pipeline yaml happy to share this upon request within a dm expected behavior component should be multiprocessing safe additional context add any other context about the problem here like document types preprocessing steps settings of reader etc to reproduce it s a race condition so a bit hard to replicate initialize pipelines with a preprocessor at the same time while not having the punkt sentence tokenizer on your local disk the above error should happen faq check have you had a look at system os gpu cpu haystack version commit or version number documentstore reader retriever
| 1
|
46,972
| 6,035,117,521
|
IssuesEvent
|
2017-06-09 13:07:37
|
geetsisbac/WK2XQXCBCXIVMLBGXSPVU5EB
|
https://api.github.com/repos/geetsisbac/WK2XQXCBCXIVMLBGXSPVU5EB
|
reopened
|
aM3WrkmVCjZRWlNDoR92bv6UcoTRvywEAK+V6gANgqRDDcICwpIcCCE2nHK420T/Ycnv2tijxWAoDyXJXSAnU5MUXVlk9yFf91miza+U/dkE6q5JaJlxNl7XsZldMgca6OOqBdXqnn8whgzdeCaie4Q3/F/fjdhpP128xe5nGl0=
|
design
|
y50q32r6Y2e0mDbJ7oz5wbFQ31DXp+o8QTm/eSfvVdBfNKBIhtb8D/Bf/cFg8K+NCQLdp3zhZjYjxvWP6IR2UaQ7XZOWjQbj+BwYz8hlxeJtf3cPydKNrGl5X8ZKRPUMEdCGj9RYDKZvxXVTftXU1y8+n+XuaJW66hyNgdA/W3yxuNx7KINBYkrgv7zJpemAWnree1jDNbBExXDf3K7M7xE1J2Iea7gwPx/HWGj8QYOH00ULtTfhfWe4yuYHE/m3xFF22qzOn8o80CEiKE4Rcp6mxHaTrko/9P2dljFWM++ct167VCuNhP3Uta+hKcftf8U1B/b/rWGV25n9RVwKtLACtT4eOWPzpkcEjFo788FBGwF4ZB5aPu413Vk/phKjMBhJqBS5FsrJWpUzUOqBqiTYiu5ytzsNZVCppuEj4Or3MtwhesrpXUL29xCbNBeM+p/v1AaMZu+WaFdnjud9l/nmJn/AaHZADUW5LUULzhPi8sxO2JLXJFkvuv10B75TaoveJvmOSLeXWAeUnBwXA5uDDUfe9WXBaSPwAuYnmSnFEo35iNuHZKDsXiwWr91uQjS/eQX/77nqBUyHrt3+Bu0AkFlEIe7P0U/OgW8AKCmPhVwzJ7L/cyrvUSWCWCCQCnGJK+kt78twixkPTYlR1O0fIMiDg8pyOIVWT8KF+hk=
|
1.0
|
aM3WrkmVCjZRWlNDoR92bv6UcoTRvywEAK+V6gANgqRDDcICwpIcCCE2nHK420T/Ycnv2tijxWAoDyXJXSAnU5MUXVlk9yFf91miza+U/dkE6q5JaJlxNl7XsZldMgca6OOqBdXqnn8whgzdeCaie4Q3/F/fjdhpP128xe5nGl0= - y50q32r6Y2e0mDbJ7oz5wbFQ31DXp+o8QTm/eSfvVdBfNKBIhtb8D/Bf/cFg8K+NCQLdp3zhZjYjxvWP6IR2UaQ7XZOWjQbj+BwYz8hlxeJtf3cPydKNrGl5X8ZKRPUMEdCGj9RYDKZvxXVTftXU1y8+n+XuaJW66hyNgdA/W3yxuNx7KINBYkrgv7zJpemAWnree1jDNbBExXDf3K7M7xE1J2Iea7gwPx/HWGj8QYOH00ULtTfhfWe4yuYHE/m3xFF22qzOn8o80CEiKE4Rcp6mxHaTrko/9P2dljFWM++ct167VCuNhP3Uta+hKcftf8U1B/b/rWGV25n9RVwKtLACtT4eOWPzpkcEjFo788FBGwF4ZB5aPu413Vk/phKjMBhJqBS5FsrJWpUzUOqBqiTYiu5ytzsNZVCppuEj4Or3MtwhesrpXUL29xCbNBeM+p/v1AaMZu+WaFdnjud9l/nmJn/AaHZADUW5LUULzhPi8sxO2JLXJFkvuv10B75TaoveJvmOSLeXWAeUnBwXA5uDDUfe9WXBaSPwAuYnmSnFEo35iNuHZKDsXiwWr91uQjS/eQX/77nqBUyHrt3+Bu0AkFlEIe7P0U/OgW8AKCmPhVwzJ7L/cyrvUSWCWCCQCnGJK+kt78twixkPTYlR1O0fIMiDg8pyOIVWT8KF+hk=
|
non_process
|
u f bf n b p nmjn eqx cyrvuswcwccqcngjk hk
| 0
|
19,492
| 25,801,009,632
|
IssuesEvent
|
2022-12-11 01:26:05
|
LLazyEmail/nomoretogo_email_template
|
https://api.github.com/repos/LLazyEmail/nomoretogo_email_template
|
closed
|
Think about how to add multiple text blocks
|
todo in process
| ERROR: type should be string, got "https://github.com/LLazyEmail/nomoretogo_email_template/blob/16a0b7def2f80d35a05e53e7c2d257aeabc84b8c/src/components/instructionComponent.js#L13\n\n```javascript\n\n// Create instruction component\nconst INSTRUCTION_COMPONENT_ERROR = (variable) =>\n `Empty ${variable} in instructionComponent`;\n\nconst createTitle = (title) => {\n return `<p style=\"margin-top: 0px; margin-bottom: 10px; line-height: 150%;\"><strong>${title}</strong></p>`;\n};\n\nconst createText = (text) => {\n return `<p style=\"margin-top: 0px; margin-bottom: 10px; line-height: 150%;\">${text}</p>`;\n};\n\n// TODO : нужно подумать как добавлять множество text\nconst mainBlock = (params) => {\n var { title, text, title2, text2 } = params;\n return `<table align=\"center\" border=\"0\" bgcolor=\"#ffffff\" class=\"mlContentTable mlContentTableDefault\" cellpadding=\"0\" cellspacing=\"0\" width=\"640\">\n <tbody><tr>\n <td class=\"mlContentTableCardTd\">\n <table align=\"center\" bgcolor=\"#ffffff\" border=\"0\" cellpadding=\"0\" cellspacing=\"0\" class=\"mlContentTable ml-default\" style=\"width: 640px; min-width: 640px;\" width=\"640\">\n <tbody><tr>\n <td>\n <table role=\"presentation\" cellpadding=\"0\" cellspacing=\"0\" border=\"0\" align=\"center\" width=\"640\" style=\"width: 640px; min-width: 640px;\" class=\"mlContentTable\">\n <tbody><tr>\n <td height=\"20\" class=\"spacingHeight-20\" style=\"line-height: 20px; min-height: 20px;\"></td>\n </tr>\n </tbody></table>\n <table role=\"presentation\" cellpadding=\"0\" cellspacing=\"0\" border=\"0\" align=\"center\" width=\"640\" style=\"width: 640px; min-width: 640px;\" class=\"mlContentTable\">\n <tbody><tr>\n <td align=\"center\" style=\"padding: 0px 40px;\" class=\"mlContentOuter\">\n <table role=\"presentation\" cellpadding=\"0\" cellspacing=\"0\" border=\"0\" align=\"center\" width=\"100%\">\n <tbody><tr>\n <td class=\"bodyTitle\" id=\"bodyText-34\" style=\"font-family: 'Poppins', sans-serif; font-size: 14px; line-height: 150%; color: #6f6f6f;\">\n <p style=\"margin-top: 0px; margin-bottom: 10px; line-height: 150%; text-align: center;\"></p>\n <p style=\"margin-top: 0px; margin-bottom: 10px; line-height: 150%;\"><strong></strong></p>\n ${createTitle(title)}\n ${createText(text)}\n ${createTitle(title2)}\n ${createText(text2)}\n <p style=\"margin-top: 0px; margin-bottom: 10px; line-height: 150%;\">Slice and Dice: Cut the vegetables and store in zippered bags or divided containers.</p>\n <p style=\"margin-top: 0px; margin-bottom: 0px; line-height: 150%;\">Make Ahead and Refrigerate: Make the sauce; Cook the noodles; Make the dressing; Make the spaetzle; Cook the rice.<br><br><br><br><strong></strong><br><strong></strong><strong></strong></p>\n </td>\n </tr>\n </tbody></table>\n </td>\n </tr>\n </tbody></table>\n <table role=\"presentation\" cellpadding=\"0\" cellspacing=\"0\" border=\"0\" align=\"center\" width=\"640\" style=\"width: 640px; min-width: 640px;\" class=\"mlContentTable\">\n <tbody><tr>\n <td height=\"20\" class=\"spacingHeight-20\" style=\"line-height: 20px; min-height: 20px;\"></td>\n </tr>\n </tbody></table>\n </td>\n </tr>\n </tbody></table>\n </td>\n </tr>\n </tbody></table>`;\n};\n\n// we are throwing an error with the same constant 10 times.\nfunction searchForErrors(params) {\n var { title, text, title2, text2 } = params;\n\n if (title == '') {\n throw new Error(INSTRUCTION_COMPONENT_ERROR('title'));\n }\n if (text == '') {\n throw new Error(INSTRUCTION_COMPONENT_ERROR('text'));\n }\n if (title2 == '') {\n throw 
new Error(INSTRUCTION_COMPONENT_ERROR('title2'));\n }\n if (text2 == '') {\n throw new Error(INSTRUCTION_COMPONENT_ERROR('text2'));\n }\n}\n\nexport default function (data) {\n searchForErrors(data);\n return mainBlock(data);\n}\n\n```"
|
1.0
|
Think about how to add multiple text blocks - https://github.com/LLazyEmail/nomoretogo_email_template/blob/16a0b7def2f80d35a05e53e7c2d257aeabc84b8c/src/components/instructionComponent.js#L13
```javascript
// Create instruction component
const INSTRUCTION_COMPONENT_ERROR = (variable) =>
`Empty ${variable} in instructionComponent`;
const createTitle = (title) => {
return `<p style="margin-top: 0px; margin-bottom: 10px; line-height: 150%;"><strong>${title}</strong></p>`;
};
const createText = (text) => {
return `<p style="margin-top: 0px; margin-bottom: 10px; line-height: 150%;">${text}</p>`;
};
// TODO: need to think about how to add multiple text blocks
const mainBlock = (params) => {
var { title, text, title2, text2 } = params;
return `<table align="center" border="0" bgcolor="#ffffff" class="mlContentTable mlContentTableDefault" cellpadding="0" cellspacing="0" width="640">
<tbody><tr>
<td class="mlContentTableCardTd">
<table align="center" bgcolor="#ffffff" border="0" cellpadding="0" cellspacing="0" class="mlContentTable ml-default" style="width: 640px; min-width: 640px;" width="640">
<tbody><tr>
<td>
<table role="presentation" cellpadding="0" cellspacing="0" border="0" align="center" width="640" style="width: 640px; min-width: 640px;" class="mlContentTable">
<tbody><tr>
<td height="20" class="spacingHeight-20" style="line-height: 20px; min-height: 20px;"></td>
</tr>
</tbody></table>
<table role="presentation" cellpadding="0" cellspacing="0" border="0" align="center" width="640" style="width: 640px; min-width: 640px;" class="mlContentTable">
<tbody><tr>
<td align="center" style="padding: 0px 40px;" class="mlContentOuter">
<table role="presentation" cellpadding="0" cellspacing="0" border="0" align="center" width="100%">
<tbody><tr>
<td class="bodyTitle" id="bodyText-34" style="font-family: 'Poppins', sans-serif; font-size: 14px; line-height: 150%; color: #6f6f6f;">
<p style="margin-top: 0px; margin-bottom: 10px; line-height: 150%; text-align: center;"></p>
<p style="margin-top: 0px; margin-bottom: 10px; line-height: 150%;"><strong></strong></p>
${createTitle(title)}
${createText(text)}
${createTitle(title2)}
${createText(text2)}
<p style="margin-top: 0px; margin-bottom: 10px; line-height: 150%;">Slice and Dice: Cut the vegetables and store in zippered bags or divided containers.</p>
<p style="margin-top: 0px; margin-bottom: 0px; line-height: 150%;">Make Ahead and Refrigerate: Make the sauce; Cook the noodles; Make the dressing; Make the spaetzle; Cook the rice.<br><br><br><br><strong></strong><br><strong></strong><strong></strong></p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</tbody></table>
<table role="presentation" cellpadding="0" cellspacing="0" border="0" align="center" width="640" style="width: 640px; min-width: 640px;" class="mlContentTable">
<tbody><tr>
<td height="20" class="spacingHeight-20" style="line-height: 20px; min-height: 20px;"></td>
</tr>
</tbody></table>
</td>
</tr>
</tbody></table>
</td>
</tr>
</tbody></table>`;
};
// we are throwing an error with the same constant 10 times.
function searchForErrors(params) {
var { title, text, title2, text2 } = params;
if (title == '') {
throw new Error(INSTRUCTION_COMPONENT_ERROR('title'));
}
if (text == '') {
throw new Error(INSTRUCTION_COMPONENT_ERROR('text'));
}
if (title2 == '') {
throw new Error(INSTRUCTION_COMPONENT_ERROR('title2'));
}
if (text2 == '') {
throw new Error(INSTRUCTION_COMPONENT_ERROR('text2'));
}
}
export default function (data) {
searchForErrors(data);
return mainBlock(data);
}
```
|
process
|
подумать как добавлять множество text javascript create instruction component const instruction component error variable empty variable in instructioncomponent const createtitle title return title const createtext text return text todo нужно подумать как добавлять множество text const mainblock params var title text params return createtitle title createtext text createtitle createtext slice and dice cut the vegetables and store in zippered bags or divided containers make ahead and refrigerate make the sauce cook the noodles make the dressing make the spaetzle cook the rice we are throwing an error with the same constant times function searchforerrors params var title text params if title throw new error instruction component error title if text throw new error instruction component error text if throw new error instruction component error if throw new error instruction component error export default function data searchforerrors data return mainblock data
| 1
|
611,284
| 18,950,927,218
|
IssuesEvent
|
2021-11-18 15:06:39
|
ARMmbed/mbed-os
|
https://api.github.com/repos/ARMmbed/mbed-os
|
closed
|
Export to uvision failing with missing context fault handler
|
priority: untriaged component: untriaged
|
<!--
************************************** WARNING **************************************
The ciarcom bot parses this header automatically. Any deviation from the
template may cause the bot to automatically correct this header or may result in a
warning message, requesting updates.
Please ensure all sections of the template below are filled in and no changes
are made to the template format. Only bugs should be raised here as issues.
Questions or enhancements should instead be raised on our forums:
https://forums.mbed.com/ .
*************************************************************************************
-->
### Description of defect
When exporting a uVision6 project for ARMC6 toolchain on NUCLEO_F746ZG target, the build fails with the following error.
```
linking...
.\BUILD\mbed-os-example-blinky.axf: Warning: L3912W: Option 'legacyalign' is deprecated.
.\BUILD\mbed-os-example-blinky.axf: Error: L6218E: Undefined symbol mbed_fault_context (referred from .\build\except.o).
```
#### Target(s) affected by this defect ?
Only tried to export to NUCLEO_F746ZG target, but probably others as well.
#### Toolchain(s) (name and version) displaying this defect ?
uVision6 / ARMClang6
#### What version of Mbed-os are you using (tag or sha) ?
tag: mbed-os-6.15.0
sha: 4cfbea43cabe86bc3ed7a5287cd464be7a218938
#### What version(s) of tools are you using. List all that apply (E.g. mbed-cli)
mbed-cli
#### How is this defect reproduced ?
1) Create a new project or take an existing one (e.g. blinky).
2) Export the project for uVision6 by using
````
mbed export -i uvision6 -m NUCLEO_F746ZG --source .
````
3) Build the project
|
1.0
|
Export to uvision failing with missing context fault handler - <!--
************************************** WARNING **************************************
The ciarcom bot parses this header automatically. Any deviation from the
template may cause the bot to automatically correct this header or may result in a
warning message, requesting updates.
Please ensure all sections of the template below are filled in and no changes
are made to the template format. Only bugs should be raised here as issues.
Questions or enhancements should instead be raised on our forums:
https://forums.mbed.com/ .
*************************************************************************************
-->
### Description of defect
When exporting a uVision6 project for ARMC6 toolchain on NUCLEO_F746ZG target, the build fails with the following error.
```
linking...
.\BUILD\mbed-os-example-blinky.axf: Warning: L3912W: Option 'legacyalign' is deprecated.
.\BUILD\mbed-os-example-blinky.axf: Error: L6218E: Undefined symbol mbed_fault_context (referred from .\build\except.o).
```
#### Target(s) affected by this defect ?
Only tried to export to NUCLEO_F746ZG target, but probably others as well.
#### Toolchain(s) (name and version) displaying this defect ?
uVision6 / ARMClang6
#### What version of Mbed-os are you using (tag or sha) ?
tag: mbed-os-6.15.0
sha: 4cfbea43cabe86bc3ed7a5287cd464be7a218938
#### What version(s) of tools are you using. List all that apply (E.g. mbed-cli)
mbed-cli
#### How is this defect reproduced ?
1) Create a new project or take an existing one (e.g. blinky).
2) Export the project for uVision6 by using
````
mbed export -i uvision6 -m NUCLEO_F746ZG --source .
````
3) Build the project
|
non_process
|
export to uvision failing with missing context fault handler warning the ciarcom bot parses this header automatically any deviation from the template may cause the bot to automatically correct this header or may result in a warning message requesting updates please ensure all sections of the template below are filled in and no changes are made to the template format only bugs should be raised here as issues questions or enhancements should instead be raised on our forums description of defect when exporting a project for toolchain on nucleo target the build fails with the following error linking build mbed os example blinky axf warning option legacyalign is deprecated build mbed os example blinky axf error undefined symbol mbed fault context referred from build except o target s affected by this defect only tried to export to nucleo target but probably others as well toolchain s name and version displaying this defect what version of mbed os are you using tag or sha tag mbed os sha what version s of tools are you using list all that apply e g mbed cli mbed cli how is this defect reproduced create a new project or take an existing one e g blinky export the project for by using mbed export i m nucleo source build the project
| 0
|
1,944
| 4,769,527,120
|
IssuesEvent
|
2016-10-26 12:53:03
|
Lever-age/leverage
|
https://api.github.com/repos/Lever-age/leverage
|
closed
|
Separate repos for data pipeline and analysis?
|
process/administration question ready for review
|
The data pipeline will scrape donation records from the BoE, clean it, and load it into the database. The analysis subproject looks at new ways of classifying the donations in the DB. People who are working on the public-facing application probably don't need to clone all of this code.
|
1.0
|
Separate repos for data pipeline and analysis? - The data pipeline will scrape donation records from the BoE, clean it, and load it into the database. The analysis subproject looks at new ways of classifying the donations in the DB. People who are working on the public-facing application probably don't need to clone all of this code.
|
process
|
separate repos for data pipeline and analysis the data pipeline will scrape donation records from the boe clean it and load it into the database the analysis subproject looks at new ways of classifying the donations in the db people who are working on the public facing application probably don t need to clone all of this code
| 1
|
18,641
| 24,580,825,857
|
IssuesEvent
|
2022-10-13 15:29:17
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Consent API] Cloud storage > Unsigned document > Issue related to consent version in the naming convention of the unsigned document
|
Bug P2 Process: Fixed Process: Tested dev
|
Publish the study for the fourth time in the study builder and Verify the naming convention of unsigned document in the cloud storage.
**AR:** Consent version is getting displayed as "1.3000001" in cloud storage
**ER:** Consent version should be displayed as "1.3" in cloud storage
**Note:**
1. Issue is observed only when published the study for fourth time.
2. Issue is not observed for other published version.
**Screenshot of the issue:**

|
2.0
|
[Consent API] Cloud storage > Unsigned document > Issue related to consent version in the naming convention of the unsigned document - Publish the study for the fourth time in the study builder and Verify the naming convention of unsigned document in the cloud storage.
**AR:** Consent version is getting displayed as "1.3000001" in cloud storage
**ER:** Consent version should be displayed as "1.3" in cloud storage
**Note:**
1. Issue is observed only when published the study for fourth time.
2. Issue is not observed for other published version.
**Screenshot of the issue:**

|
process
|
cloud storage unsigned document issue related to consent version in the naming convention of the unsigned document publish the study for the fourth time in the study builder and verify the naming convention of unsigned document in the cloud storage ar consent version is getting displayed as in cloud storage er consent version should be displayed as in cloud storage note issue is observed only when published the study for fourth time issue is not observed for other published version screenshot of the issue
| 1
|
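The "1.3000001" in the record above is the classic signature of accumulating a version number in binary floating point; the sketch below reproduces the symptom, and the cause it suggests is an assumption, since the issue itself doesn't identify one.

```javascript
// Bumping a minor version by adding 0.1 per publish drifts, because 0.1
// has no exact binary representation.
let version = 1.0;
for (let i = 0; i < 3; i++) version += 0.1;
console.log(version);            // 1.3000000000000003 (64-bit float)
console.log(version.toFixed(1)); // "1.3" — format only for display

// The reported value matches the same drift at 32-bit precision, which
// Math.fround can simulate:
let v32 = 1.0;
for (let i = 0; i < 3; i++) v32 = Math.fround(v32 + Math.fround(0.1));
console.log(v32.toFixed(7));     // "1.3000001"
```

Storing the version as two integers (major, minor) and joining them with a dot for display sidesteps the drift entirely.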
411,137
| 12,015,007,798
|
IssuesEvent
|
2020-04-10 12:59:49
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Tipping banner does not scale properly based on viewport size
|
feature/rewards priority/P5
|
**Issue:** Tipping banner does not scale properly based on viewport size. If viewport size is smaller than usual because of, e.g., docked dev tools, the tipping banner will appear shorter horizontally, and users will have to scroll side to side inside it to use it.
Even when the developer tools are later closed, the tipping banner does _not_ scale back to proper size, but remains horizontally short.
_UX symptom:_ Users cannot click outside the banner to close it, but instead need to press the X button. But the X button is outside view, so users have to know to scroll sideways to get to it.
**Reproduce:**
1. Open a website you'd like to tip
2. Open dev tools (docked on right side)
3. Open tipping banner

|
1.0
|
Tipping banner does not scale properly based on viewport size - **Issue:** Tipping banner does not scale properly based on viewport size. If viewport size is smaller than usual because of, e.g., docked dev tools, the tipping banner will appear shorter horizontally, and users will have to scroll side to side inside it to use it.
Even when the developer tools are later closed, the tipping banner does _not_ scale back to proper size, but remains horizontally short.
_UX symptom:_ Users cannot click outside the banner to close it, but instead need to press the X button. But the X button is outside view, so users have to know to scroll sideways to get to it.
**Reproduce:**
1. Open a website you'd like to tip
2. Open dev tools (docked on right side)
3. Open tipping banner

|
non_process
|
tipping banner does not scale properly based on viewport size issue tipping banner does not scale properly based on viewport size if viewport size is smaller than usual because of e g docked dev tools the tipping banner will appear shorter horizontally and users will have to scroll side to side inside it to use it even when the developer tools are later closed the tipping banner does not scale back to proper size but remains horizontally short ux symptom users cannot click outside the banner to close it but instead need to press the x button but the x button is outside view so users have to know to scroll sideways to get to it reproduce open a website you d like to tip open dev tools docked on right side open tipping banner
| 0
|
22,165
| 30,712,785,490
|
IssuesEvent
|
2023-07-27 10:58:30
|
ipfs/kubo
|
https://api.github.com/repos/ipfs/kubo
|
closed
|
Patch Releases
|
topic/process
|
Currently, all go-ipfs releases are "patch" releases. IIRC, this was because we decided to bump to 0.5.0 when we hit beta. Unfortunately, this means we can't issue _actual_ patch releases and means that issues like https://github.com/ipfs/go-ipfs/issues/6254 can't be addressed quickly.
Proposal:
1. Start cutting 0.x.0 releases starting with 0.5.0.
2. Create patch releases as-needed that cherry-pick specific changes/commits.
These patch releases:
* Would _not_ go through the full release process.
* Would only contain specific fixes for specific _regressions_.
* Would not contain complicated fixes for complicated issues.
* Would be issued on an as-need basis based on impact
cc @ipfs/wg-go-core
|
1.0
|
Patch Releases - Currently, all go-ipfs releases are "patch" releases. IIRC, this was because we decided to bump to 0.5.0 when we hit beta. Unfortunately, this means we can't issue _actual_ patch releases and means that issues like https://github.com/ipfs/go-ipfs/issues/6254 can't be addressed quickly.
Proposal:
1. Start cutting 0.x.0 releases starting with 0.5.0.
2. Create patch releases as-needed that cherry-pick specific changes/commits.
These patch releases:
* Would _not_ go through the full release process.
* Would only contain specific fixes for specific _regressions_.
* Would not contain complicated fixes for complicated issues.
* Would be issued on an as-need basis based on impact
cc @ipfs/wg-go-core
|
process
|
patch releases currently all go ipfs releases are patch releases iirc this was because we decided to bump to when we hit beta unfortunately this means we can t issue actual patch releases and means that issues like these can t be addressed quickly proposal start cutting x releases starting with create patch releases as needed that cherry pick specific changes commits these patch releases would not go through the full release process would only contain specific fixes for specific regressions would not contain complicated fixes for complicated issues would be issued on an as need basis based on impact cc ipfs wg go core
| 1
|
108,465
| 9,308,371,826
|
IssuesEvent
|
2019-03-25 14:26:12
|
xcat2/xcat-core
|
https://api.github.com/repos/xcat2/xcat-core
|
closed
|
synclist: EXECUTEALWAYS error when another line in the synclist references the same destination node
|
status:pending test:testcase_requested
|
Hi,
This one is a bit involved, but it's still a perfectly reproducible bug :)
### Description
When an `EXECUTEALWAYS` directive in a `syncfile` refers to a destination node that is already referenced in a previous file copy directive, `updatenode -F` returns a `Not an ARRAY reference` error
### Steps to reproduce
Given the following `synclist` file:
```
/etc/hosts -> (sh-06-34) /tmp/h
/tmp/script.single.sh -> (sh-06-34) /tmp/script.single.sh
EXECUTEALWAYS:
/tmp/script.single.sh
```
with the following `/tmp/script.single.sh`:
```
#!/bin/bash
echo "script.single running on $(hostname -s)"
```
running `updatenode sh-06-34 -F` fails with the following error:
```
# XCATBYPASS=1 updatenode sh-06-34 -F
Not an ARRAY reference at /opt/xcat/lib/perl/xCAT/DSHCLI.pm line 6320.
```
### Additional information
* if `EXECUTE` is used instead of `EXECUTEALWAYS`, the error doesn't happen, even for the first execution (ie. when the destination file doesn't already exist)
* if the first line in the `syncfile` (`/etc/hosts -> (sh-06-34) /tmp/h`) is commented or removed, `updatenode -F` works normally
#### Verbose mode output
```
# XCATBYPASS=1 updatenode sh-06-34 -FV
Running command on sh-hn01.SUNet: ip -4 --oneline addr show |awk -F ' ' '{print $4}'|awk -F '/' '{print $1}' 2>&1
Running command on sh-hn01.SUNet: chmod -R a+r /install/postscripts 2>&1
Running command on sh-hn01.SUNet: cat /install/postscripts/mypostscript.tmpl | grep ZONENAME 2>&1
sh-hn01.SUNet: Internal call command: xdcp sh-06-34 --nodestatus -F /install/custom/sherlock/lists/_common/synclist.test -T
Running internal xCAT command: xdcp ...
Running command on sh-hn01.SUNet: ip -4 --oneline addr show |awk -F ' ' '{print $4}'|awk -F '/' '{print $1}' 2>&1
DSH:DCP_DEVICE_OPTS=
DSH:DCP_DEVICE_RCP=
DSH:DCP_NODE_OPTS=
DSH:DCP_NODE_RCP=
DSH:DSH_CONTEXT=
DSH:DSH_DEVICE_LIST=
DSH:DSH_DEVICE_OPTS=
DSH:DSH_DEVICE_RCP=
DSH:DSH_DEVICE_RSH=
DSH:DSH_ENVIRONMENT=
DSH:DSH_FANOUT=
DSH:DSH_LOG=
DSH:DSH_NODEGROUP_PATH=
DSH:DSH_NODE_LIST=
DSH:DSH_NODE_OPTS=
DSH:DSH_NODE_RCP=
DSH:DSH_NODE_RSH=
DSH:DSH_OUTPUT=
DSH:DSH_PATH=
DSH:DSH_REPORT=
DSH:DSH_SYNTAX=
DSH:DSH_TIMEOUT=
DSH:DSH_VERIFY=
DSH:RSYNC_RSH=
XCAT:RemoteCopyCmd=/usr/bin/scp
XCAT:RemoteShell=/usr/bin/ssh
Not an ARRAY reference at /opt/xcat/lib/perl/xCAT/DSHCLI.pm line 6320.
```
|
2.0
|
synclist: EXECUTEALWAYS error when another line in the synclist references the same destination node - Hi,
This one is a bit involved, but it's still a perfectly reproducible bug :)
### Description
When an `EXECUTEALWAYS` directive in a `syncfile` refers to a destination node that is already referenced in a previous file copy directive, `updatenode -F` returns a `Not an ARRAY reference` error
### Steps to reproduce
Given the following `synclist` file:
```
/etc/hosts -> (sh-06-34) /tmp/h
/tmp/script.single.sh -> (sh-06-34) /tmp/script.single.sh
EXECUTEALWAYS:
/tmp/script.single.sh
```
with the following `/tmp/script.single.sh`:
```
#!/bin/bash
echo "script.single running on $(hostname -s)"
```
running `updatenode sh-06-34 -F` fails with the following error:
```
# XCATBYPASS=1 updatenode sh-06-34 -F
Not an ARRAY reference at /opt/xcat/lib/perl/xCAT/DSHCLI.pm line 6320.
```
### Additional information
* if `EXECUTE` is used instead of `EXECUTEALWAYS`, the error doesn't happen, even for the first execution (ie. when the destination file doesn't already exist)
* if the first line in the `syncfile` (`/etc/hosts -> (sh-06-34) /tmp/h`) is commented or removed, `updatenode -F` works normally
#### Verbose mode output
```
# XCATBYPASS=1 updatenode sh-06-34 -FV
Running command on sh-hn01.SUNet: ip -4 --oneline addr show |awk -F ' ' '{print $4}'|awk -F '/' '{print $1}' 2>&1
Running command on sh-hn01.SUNet: chmod -R a+r /install/postscripts 2>&1
Running command on sh-hn01.SUNet: cat /install/postscripts/mypostscript.tmpl | grep ZONENAME 2>&1
sh-hn01.SUNet: Internal call command: xdcp sh-06-34 --nodestatus -F /install/custom/sherlock/lists/_common/synclist.test -T
Running internal xCAT command: xdcp ...
Running command on sh-hn01.SUNet: ip -4 --oneline addr show |awk -F ' ' '{print $4}'|awk -F '/' '{print $1}' 2>&1
DSH:DCP_DEVICE_OPTS=
DSH:DCP_DEVICE_RCP=
DSH:DCP_NODE_OPTS=
DSH:DCP_NODE_RCP=
DSH:DSH_CONTEXT=
DSH:DSH_DEVICE_LIST=
DSH:DSH_DEVICE_OPTS=
DSH:DSH_DEVICE_RCP=
DSH:DSH_DEVICE_RSH=
DSH:DSH_ENVIRONMENT=
DSH:DSH_FANOUT=
DSH:DSH_LOG=
DSH:DSH_NODEGROUP_PATH=
DSH:DSH_NODE_LIST=
DSH:DSH_NODE_OPTS=
DSH:DSH_NODE_RCP=
DSH:DSH_NODE_RSH=
DSH:DSH_OUTPUT=
DSH:DSH_PATH=
DSH:DSH_REPORT=
DSH:DSH_SYNTAX=
DSH:DSH_TIMEOUT=
DSH:DSH_VERIFY=
DSH:RSYNC_RSH=
XCAT:RemoteCopyCmd=/usr/bin/scp
XCAT:RemoteShell=/usr/bin/ssh
Not an ARRAY reference at /opt/xcat/lib/perl/xCAT/DSHCLI.pm line 6320.
```
|
non_process
|
synclist executealways error when another line in the synclist references the same destination node hi this one is a bit involved but it s still a perfectly reproducible bug description when an executealways directive in a syncfile refers to a destination node that is already referenced in a previous file copy directive updatenode f returns a not an array reference error steps to reproduce given the following synclist file etc hosts sh tmp h tmp script single sh sh tmp script single sh executealways tmp script single sh with the following tmp script single sh bin bash echo script single running on hostname s running updatenode sh f fails with the following error xcatbypass updatenode sh f not an array reference at opt xcat lib perl xcat dshcli pm line additional information if execute is used instead of executealways the error doesn t happen even for the first execution ie when the destination file doesn t already exist if the first line in the syncfile etc hosts sh tmp h is commented or removed updatenode f works normally verbose mode output xcatbypass updatenode sh fv running command on sh sunet ip oneline addr show awk f print awk f print running command on sh sunet chmod r a r install postscripts running command on sh sunet cat install postscripts mypostscript tmpl grep zonename sh sunet internal call command xdcp sh nodestatus f install custom sherlock lists common synclist test t running internal xcat command xdcp running command on sh sunet ip oneline addr show awk f print awk f print dsh dcp device opts dsh dcp device rcp dsh dcp node opts dsh dcp node rcp dsh dsh context dsh dsh device list dsh dsh device opts dsh dsh device rcp dsh dsh device rsh dsh dsh environment dsh dsh fanout dsh dsh log dsh dsh nodegroup path dsh dsh node list dsh dsh node opts dsh dsh node rcp dsh dsh node rsh dsh dsh output dsh dsh path dsh dsh report dsh dsh syntax dsh dsh timeout dsh dsh verify dsh rsync rsh xcat remotecopycmd usr bin scp xcat remoteshell usr bin ssh not an array reference at opt xcat lib perl xcat dshcli pm line
| 0
|
5,983
| 8,799,683,750
|
IssuesEvent
|
2018-12-24 15:59:33
|
syauqiahmd/project_e-commerce
|
https://api.github.com/repos/syauqiahmd/project_e-commerce
|
closed
|
Report
|
on process
|
- [x] Sales transaction report per period (offline/online)
- [x] Report of items sold per period (sold via offline/online)
- [x] brand master
- [x] item master
- [x] user master (public/admin)
|
1.0
|
Report - - [x] Sales transaction report per period (offline/online)
- [x] Report of items sold per period (sold via offline/online)
- [x] brand master
- [x] item master
- [x] user master (public/admin)
|
process
|
report sales transaction report per period offline online report of items sold per period sold via offline online brand master item master user master public admin
| 1
|
366,032
| 10,807,898,087
|
IssuesEvent
|
2019-11-07 09:26:56
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Review public API's (jBallerina runtime value classes)
|
Area/jBallerina Points/2 Priority/High Type/Task
|
**Description:**
Do an API review on the user-facing public methods from jBallerina runtime value classes.
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
1.0
|
Review public API's (jBallerina runtime value classes) - **Description:**
Do an API review on the user-facing public methods from jBallerina runtime value classes.
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
non_process
|
review public api s jballerina runtime value classes description do an api review on the user facing public methods from jballerina runtime value classes steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional
| 0
|
340,007
| 10,265,622,775
|
IssuesEvent
|
2019-08-22 19:19:13
|
deep-learning-indaba/Baobab
|
https://api.github.com/repos/deep-learning-indaba/Baobab
|
closed
|
Attendance Admin: Unconfirmed Registration for Industry Professionals to Appear on Attendance list with an Alert
|
High Priority back-end front-end
|
Industry professionals with unconfirmed registration should appear on the attendance list, but can't be marked as attending.
The interface must display a message/alert to go pay the fee at the special circumstances desk. The special circumstances desk can then confirm the registration, and the individual will then appear on the attendance list and can be marked off as usual.
**Backend**
- Backend needs to send an additional field: registration_confirmed, which is true if the registration has been confirmed.
**Frontend**
- Needs to show the alert if registration_confirmed is false and not allow attendance confirmation to proceed.
|
1.0
|
Attendance Admin: Unconfirmed Registration for Industry Professionals to Appear on Attendance list with an Alert - Industry professionals with unconfirmed registration should appear on the attendance list, but can't be marked as attending.
The interface must display a message/alert to go pay the fee at the special circumstances desk. The special circumstances desk can then confirm the registration, and the individual will then appear on the attendance list and can be marked off as usual.
**Backend**
- Backend needs to send an additional field: registration_confirmed, which is true if the registration has been confirmed.
**Frontend**
- Needs to show the alert if registration_confirmed is false and not allow attendance confirmation to proceed.
|
non_process
|
attendance admin unconfirmed registration for industry professionals to appear on attendance list with an alert industry professionals with unconfirmed registration should appear on the attendance list but can t be marked as attending the interface must display a message alert to go pay the fee at the special circumstances desk the special circumstances desk can then confirm the registration and the individual will then appear on the attendance list and can be marked off as usual backend backend needs to send an additional field registration confirmed which is true if the registration has been confirmed frontend needs to show the alert if registration confirmed is false and not allow attendance confirmation to proceed
| 0
|
12,181
| 14,742,019,987
|
IssuesEvent
|
2021-01-07 11:33:34
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
054-Fair Oaks Issue account XTC7995
|
anc-process anp-0.5 ant-support
|
In GitLab by @kdjstudios on Mar 6, 2019, 09:05
**Submitted by:** "Vanessa Salamanca" <vanessa.salamanca@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/7360476
**Server:** Internal
**Client/Site:** Fair Oaks
**Account:** XTC7995
**Issue:**
Terminated account is throwing out a charge of $100 on a terminated account from 2017.
They have no usage and haven’t had any since July 2017.
Where is this charge coming from?
Account XTC7995
|
1.0
|
054-Fair Oaks Issue account XTC7995 - In GitLab by @kdjstudios on Mar 6, 2019, 09:05
**Submitted by:** "Vanessa Salamanca" <vanessa.salamanca@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/7360476
**Server:** Internal
**Client/Site:** Fair Oaks
**Account:** XTC7995
**Issue:**
Terminated account is throwing out a charge of $100 on a terminated account from 2017.
They have no usage and haven’t had any since July 2017.
Where is this charge coming from?
Account XTC7995
|
process
|
fair oaks issue account in gitlab by kdjstudios on mar submitted by vanessa salamanca helpdesk server internal client site fair oaks account issue terminated account is throwing out a charge of on a terminated account from they have no usage and haven’t had any since july where is this charge coming from account
| 1
|
43,468
| 23,252,264,413
|
IssuesEvent
|
2022-08-04 05:44:12
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
[Perf] Regressions in System.Buffers.Tests.RentReturnArrayPoolTests<Byte>
|
area-System.Threading tenet-performance tenet-performance-benchmarks refs/heads/main RunKind=micro Regression CoreClr arm64 ubuntu 20.04
|
### Run Information
Architecture | arm64
-- | --
OS | ubuntu 20.04
Baseline | [3d74b00659fec817506e2888f87936518556e01c](https://github.com/dotnet/runtime/commit/3d74b00659fec817506e2888f87936518556e01c)
Compare | [0c3d5ad05754be529e470d7b0399f40a2bc8087d](https://github.com/dotnet/runtime/commit/0c3d5ad05754be529e470d7b0399f40a2bc8087d)
Diff | [Diff](https://github.com/dotnet/runtime/compare/3d74b00659fec817506e2888f87936518556e01c...0c3d5ad05754be529e470d7b0399f40a2bc8087d)
### Regressions in System.Buffers.Tests.RentReturnArrayPoolTests<Byte>
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[ProducerConsumer - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_ubuntu 20.04/System.Buffers.Tests.RentReturnArrayPoolTests(Byte).ProducerConsumer(RentalSize%3a%204096%2c%20ManipulateArray%3a%20False%2c%20Async%3a%20False%2c%20UseSharedPool%3a%20False).html>) | 2.04 μs | 2.22 μs | 1.09 | 0.47 | False | | |
[ProducerConsumer - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_ubuntu 20.04/System.Buffers.Tests.RentReturnArrayPoolTests(Byte).ProducerConsumer(RentalSize%3a%204096%2c%20ManipulateArray%3a%20False%2c%20Async%3a%20False%2c%20UseSharedPool%3a%20True).html>) | 2.15 μs | 2.70 μs | 1.26 | 0.47 | False | | |
[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/06_16_2022/refs/heads/main_arm64_ubuntu%2020.04_Regression/System.Buffers.Tests.RentReturnArrayPoolTests(Byte).html>)
### Repro
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net6.0 --filter 'System.Buffers.Tests.RentReturnArrayPoolTests<Byte>*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-2038dd61-4fdb-4173-a91c-abf8d4a8dfd4698421366684ddbb3/8ee8c6a5-1aa5-4205-84c3-5bf423e16dc6.zip?sv=2019-07-07&se=2022-07-09T15%3A49%3A05Z&sr=c&sp=rl&sig=E2%2FoxOfoYHdNAyfVAl3NuUHrWELHWjkn6jfWllal8bI%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-47af6e6a-5790-4474-89e4-ba351042eb59a6fd880c5cf4de493/2e84eba5-66b0-4ae6-8013-ea9ecfa776bb.zip?sv=2019-07-07&se=2022-07-10T07%3A30%3A11Z&sr=c&sp=rl&sig=XvBNhqp7cD1V57jNnrz9sF8EfZ0z3BVSl0suwxl84JA%3D>)
### Histogram
#### System.Buffers.Tests.RentReturnArrayPoolTests<Byte>.ProducerConsumer(RentalSize: 4096, ManipulateArray: False, Async: False, UseSharedPool: False)
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 2.222304787368421 > 2.1041162312624997.
IsChangePoint: Marked as a change because one of 6/7/2022 9:37:06 AM, 6/9/2022 7:18:42 PM, 6/15/2022 7:24:26 PM falls between 6/7/2022 5:42:36 AM and 6/15/2022 7:24:26 PM.
IsRegressionStdDev: Marked as regression because -7.940322094716294 (T) = (0 -2265.3246321699557) / Math.Sqrt((99975.27738342408 / (22)) + (12748.520185748039 / (24))) is less than -2.0153675744421933 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (22) + (24) - 2, .025) and -0.33283020571515504 = (1699.634824043062 - 2265.3246321699557) / 1699.634824043062 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
#### System.Buffers.Tests.RentReturnArrayPoolTests<Byte>.ProducerConsumer(RentalSize: 4096, ManipulateArray: False, Async: False, UseSharedPool: True)
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 2.701941663500001 > 2.3429245986250007.
IsChangePoint: Marked as a change because one of 6/7/2022 9:37:06 AM, 6/15/2022 7:24:26 PM falls between 6/7/2022 5:42:36 AM and 6/15/2022 7:24:26 PM.
IsRegressionStdDev: Marked as regression because -4.377755390345438 (T) = (0 -2646.81070288735) / Math.Sqrt((41494.895751718395 / (12)) + (30130.630066296122 / (34))) is less than -2.0153675744421933 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (12) + (34) - 2, .025) and -0.12235094938828425 = (2358.273679306765 - 2646.81070288735) / 2358.273679306765 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
### Run Information
Architecture | arm64
-- | --
OS | ubuntu 20.04
Baseline | [3d74b00659fec817506e2888f87936518556e01c](https://github.com/dotnet/runtime/commit/3d74b00659fec817506e2888f87936518556e01c)
Compare | [0c3d5ad05754be529e470d7b0399f40a2bc8087d](https://github.com/dotnet/runtime/commit/0c3d5ad05754be529e470d7b0399f40a2bc8087d)
Diff | [Diff](https://github.com/dotnet/runtime/compare/3d74b00659fec817506e2888f87936518556e01c...0c3d5ad05754be529e470d7b0399f40a2bc8087d)
### Regressions in System.Buffers.Tests.RentReturnArrayPoolTests<Object>
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[ProducerConsumer - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_ubuntu 20.04/System.Buffers.Tests.RentReturnArrayPoolTests(Object).ProducerConsumer(RentalSize%3a%204096%2c%20ManipulateArray%3a%20False%2c%20Async%3a%20False%2c%20UseSharedPool%3a%20False).html>) | 2.06 μs | 2.30 μs | 1.12 | 0.45 | False | | |
[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/06_16_2022/refs/heads/main_arm64_ubuntu%2020.04_Regression/System.Buffers.Tests.RentReturnArrayPoolTests(Object).html>)
### Repro
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net6.0 --filter 'System.Buffers.Tests.RentReturnArrayPoolTests<Object>*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-2038dd61-4fdb-4173-a91c-abf8d4a8dfd4698421366684ddbb3/8ee8c6a5-1aa5-4205-84c3-5bf423e16dc6.zip?sv=2019-07-07&se=2022-07-09T15%3A49%3A05Z&sr=c&sp=rl&sig=E2%2FoxOfoYHdNAyfVAl3NuUHrWELHWjkn6jfWllal8bI%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-47af6e6a-5790-4474-89e4-ba351042eb59a6fd880c5cf4de493/2e84eba5-66b0-4ae6-8013-ea9ecfa776bb.zip?sv=2019-07-07&se=2022-07-10T07%3A30%3A11Z&sr=c&sp=rl&sig=XvBNhqp7cD1V57jNnrz9sF8EfZ0z3BVSl0suwxl84JA%3D>)
### Histogram
#### System.Buffers.Tests.RentReturnArrayPoolTests<Object>.ProducerConsumer(RentalSize: 4096, ManipulateArray: False, Async: False, UseSharedPool: False)
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 2.302442016 > 2.0439055712250003.
IsChangePoint: Marked as a change because one of 6/7/2022 9:37:06 AM, 6/9/2022 7:18:42 PM, 6/15/2022 7:24:26 PM falls between 6/7/2022 5:42:36 AM and 6/15/2022 7:24:26 PM.
IsRegressionStdDev: Marked as regression because -7.992647096979788 (T) = (0 -2299.1375774301177) / Math.Sqrt((80590.29858937715 / (22)) + (17488.730522057453 / (24))) is less than -2.0153675744421933 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (22) + (24) - 2, .025) and -0.29934858212836735 = (1769.454024157297 - 2299.1375774301177) / 1769.454024157297 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
|
True
|
[Perf] Regressions in System.Buffers.Tests.RentReturnArrayPoolTests<Byte> - ### Run Information
Architecture | arm64
-- | --
OS | ubuntu 20.04
Baseline | [3d74b00659fec817506e2888f87936518556e01c](https://github.com/dotnet/runtime/commit/3d74b00659fec817506e2888f87936518556e01c)
Compare | [0c3d5ad05754be529e470d7b0399f40a2bc8087d](https://github.com/dotnet/runtime/commit/0c3d5ad05754be529e470d7b0399f40a2bc8087d)
Diff | [Diff](https://github.com/dotnet/runtime/compare/3d74b00659fec817506e2888f87936518556e01c...0c3d5ad05754be529e470d7b0399f40a2bc8087d)
### Regressions in System.Buffers.Tests.RentReturnArrayPoolTests<Byte>
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[ProducerConsumer - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_ubuntu 20.04/System.Buffers.Tests.RentReturnArrayPoolTests(Byte).ProducerConsumer(RentalSize%3a%204096%2c%20ManipulateArray%3a%20False%2c%20Async%3a%20False%2c%20UseSharedPool%3a%20False).html>) | 2.04 μs | 2.22 μs | 1.09 | 0.47 | False | | |
[ProducerConsumer - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_ubuntu 20.04/System.Buffers.Tests.RentReturnArrayPoolTests(Byte).ProducerConsumer(RentalSize%3a%204096%2c%20ManipulateArray%3a%20False%2c%20Async%3a%20False%2c%20UseSharedPool%3a%20True).html>) | 2.15 μs | 2.70 μs | 1.26 | 0.47 | False | | |
[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/06_16_2022/refs/heads/main_arm64_ubuntu%2020.04_Regression/System.Buffers.Tests.RentReturnArrayPoolTests(Byte).html>)
### Repro
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net6.0 --filter 'System.Buffers.Tests.RentReturnArrayPoolTests<Byte>*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-2038dd61-4fdb-4173-a91c-abf8d4a8dfd4698421366684ddbb3/8ee8c6a5-1aa5-4205-84c3-5bf423e16dc6.zip?sv=2019-07-07&se=2022-07-09T15%3A49%3A05Z&sr=c&sp=rl&sig=E2%2FoxOfoYHdNAyfVAl3NuUHrWELHWjkn6jfWllal8bI%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-47af6e6a-5790-4474-89e4-ba351042eb59a6fd880c5cf4de493/2e84eba5-66b0-4ae6-8013-ea9ecfa776bb.zip?sv=2019-07-07&se=2022-07-10T07%3A30%3A11Z&sr=c&sp=rl&sig=XvBNhqp7cD1V57jNnrz9sF8EfZ0z3BVSl0suwxl84JA%3D>)
### Histogram
#### System.Buffers.Tests.RentReturnArrayPoolTests<Byte>.ProducerConsumer(RentalSize: 4096, ManipulateArray: False, Async: False, UseSharedPool: False)
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 2.222304787368421 > 2.1041162312624997.
IsChangePoint: Marked as a change because one of 6/7/2022 9:37:06 AM, 6/9/2022 7:18:42 PM, 6/15/2022 7:24:26 PM falls between 6/7/2022 5:42:36 AM and 6/15/2022 7:24:26 PM.
IsRegressionStdDev: Marked as regression because -7.940322094716294 (T) = (0 -2265.3246321699557) / Math.Sqrt((99975.27738342408 / (22)) + (12748.520185748039 / (24))) is less than -2.0153675744421933 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (22) + (24) - 2, .025) and -0.33283020571515504 = (1699.634824043062 - 2265.3246321699557) / 1699.634824043062 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
#### System.Buffers.Tests.RentReturnArrayPoolTests<Byte>.ProducerConsumer(RentalSize: 4096, ManipulateArray: False, Async: False, UseSharedPool: True)
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 2.701941663500001 > 2.3429245986250007.
IsChangePoint: Marked as a change because one of 6/7/2022 9:37:06 AM, 6/15/2022 7:24:26 PM falls between 6/7/2022 5:42:36 AM and 6/15/2022 7:24:26 PM.
IsRegressionStdDev: Marked as regression because -4.377755390345438 (T) = (0 -2646.81070288735) / Math.Sqrt((41494.895751718395 / (12)) + (30130.630066296122 / (34))) is less than -2.0153675744421933 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (12) + (34) - 2, .025) and -0.12235094938828425 = (2358.273679306765 - 2646.81070288735) / 2358.273679306765 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
### Run Information
Architecture | arm64
-- | --
OS | ubuntu 20.04
Baseline | [3d74b00659fec817506e2888f87936518556e01c](https://github.com/dotnet/runtime/commit/3d74b00659fec817506e2888f87936518556e01c)
Compare | [0c3d5ad05754be529e470d7b0399f40a2bc8087d](https://github.com/dotnet/runtime/commit/0c3d5ad05754be529e470d7b0399f40a2bc8087d)
Diff | [Diff](https://github.com/dotnet/runtime/compare/3d74b00659fec817506e2888f87936518556e01c...0c3d5ad05754be529e470d7b0399f40a2bc8087d)
### Regressions in System.Buffers.Tests.RentReturnArrayPoolTests<Object>
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[ProducerConsumer - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_ubuntu 20.04/System.Buffers.Tests.RentReturnArrayPoolTests(Object).ProducerConsumer(RentalSize%3a%204096%2c%20ManipulateArray%3a%20False%2c%20Async%3a%20False%2c%20UseSharedPool%3a%20False).html>) | 2.06 μs | 2.30 μs | 1.12 | 0.45 | False | | |
[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/06_16_2022/refs/heads/main_arm64_ubuntu%2020.04_Regression/System.Buffers.Tests.RentReturnArrayPoolTests(Object).html>)
### Repro
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net6.0 --filter 'System.Buffers.Tests.RentReturnArrayPoolTests<Object>*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-2038dd61-4fdb-4173-a91c-abf8d4a8dfd4698421366684ddbb3/8ee8c6a5-1aa5-4205-84c3-5bf423e16dc6.zip?sv=2019-07-07&se=2022-07-09T15%3A49%3A05Z&sr=c&sp=rl&sig=E2%2FoxOfoYHdNAyfVAl3NuUHrWELHWjkn6jfWllal8bI%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-47af6e6a-5790-4474-89e4-ba351042eb59a6fd880c5cf4de493/2e84eba5-66b0-4ae6-8013-ea9ecfa776bb.zip?sv=2019-07-07&se=2022-07-10T07%3A30%3A11Z&sr=c&sp=rl&sig=XvBNhqp7cD1V57jNnrz9sF8EfZ0z3BVSl0suwxl84JA%3D>)
### Histogram
#### System.Buffers.Tests.RentReturnArrayPoolTests<Object>.ProducerConsumer(RentalSize: 4096, ManipulateArray: False, Async: False, UseSharedPool: False)
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 2.302442016 > 2.0439055712250003.
IsChangePoint: Marked as a change because one of 6/7/2022 9:37:06 AM, 6/9/2022 7:18:42 PM, 6/15/2022 7:24:26 PM falls between 6/7/2022 5:42:36 AM and 6/15/2022 7:24:26 PM.
IsRegressionStdDev: Marked as regression because -7.992647096979788 (T) = (0 -2299.1375774301177) / Math.Sqrt((80590.29858937715 / (22)) + (17488.730522057453 / (24))) is less than -2.0153675744421933 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (22) + (24) - 2, .025) and -0.29934858212836735 = (1769.454024157297 - 2299.1375774301177) / 1769.454024157297 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
|
non_process
|
regressions in system buffers tests rentreturnarraypooltests run information architecture os ubuntu baseline compare diff regressions in system buffers tests rentreturnarraypooltests lt byte gt benchmark baseline test test base test quality edge detector baseline ir compare ir ir ratio baseline etl compare etl μs μs false μs μs false repro cmd git clone py performance scripts benchmarks ci py f filter system buffers tests rentreturnarraypooltests lt byte gt payloads histogram system buffers tests rentreturnarraypooltests lt byte gt producerconsumer rentalsize manipulatearray false async false usesharedpool false log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of am pm pm falls between am and pm isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so system buffers tests rentreturnarraypooltests lt byte gt producerconsumer rentalsize manipulatearray false async false usesharedpool true log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of am pm falls between am and pm isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so docs run information architecture os ubuntu baseline compare diff regressions in system buffers tests rentreturnarraypooltests lt object gt benchmark baseline test test base test quality edge detector baseline ir compare ir ir ratio baseline etl compare etl μs μs false repro cmd git clone py performance scripts benchmarks ci py f filter system buffers tests rentreturnarraypooltests lt object gt payloads histogram system buffers tests rentreturnarraypooltests lt object gt producerconsumer rentalsize manipulatearray false async false usesharedpool false log description of detection logic isregressionbase marked as 
regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of am pm pm falls between am and pm isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small docs
| 0
|
158,363
| 20,024,301,124
|
IssuesEvent
|
2022-02-01 19:30:47
|
Recidiviz/supervision-success-component
|
https://api.github.com/repos/Recidiviz/supervision-success-component
|
closed
|
Security Alert - Package: color-string; Severity: MODERATE
|
Subject: Security Subject: Vulnerability Severity: MODERATE
|
---
due: 2022-03-26
---
Affected package: color-string
Ecosystem: NPM
Affected version range: < 1.5.5
Summary: Regular Expression Denial of Service (ReDOS)
Description: In the npm package `color-string`, there is a ReDos (Regular Expression Denial of Service) vulnerability regarding an exponential time complexity for
linearly increasing input lengths for `hwb()` color strings.
Strings reaching more than 5000 characters would see several
milliseconds of processing time; strings reaching more than
50,000 characters began seeing 1500ms (1.5s) of processing time.
The cause was due to the regular expression that parses
hwb() strings - specifically, the hue value - where
the integer portion of the hue value used a 0-or-more quantifier
shortly thereafter followed by a 1-or-more quantifier.
This caused excessive backtracking and a cartesian scan,
resulting in exponential time complexity given a linear
increase in input length.
identifiers: [{'type': 'GHSA', 'value': 'GHSA-257v-vj4p-3w2h'}, {'type': 'CVE', 'value': 'CVE-2021-29060'}]
Fixed Version: 1.5.5
Created Date = January 18, 2022
---
|
True
|
Security Alert - Package: color-string; Severity: MODERATE -
---
due: 2022-03-26
---
Affected package: color-string
Ecosystem: NPM
Affected version range: < 1.5.5
Summary: Regular Expression Denial of Service (ReDOS)
Description: In the npm package `color-string`, there is a ReDos (Regular Expression Denial of Service) vulnerability regarding an exponential time complexity for
linearly increasing input lengths for `hwb()` color strings.
Strings reaching more than 5000 characters would see several
milliseconds of processing time; strings reaching more than
50,000 characters began seeing 1500ms (1.5s) of processing time.
The cause was due to the regular expression that parses
hwb() strings - specifically, the hue value - where
the integer portion of the hue value used a 0-or-more quantifier
shortly thereafter followed by a 1-or-more quantifier.
This caused excessive backtracking and a cartesian scan,
resulting in exponential time complexity given a linear
increase in input length.
identifiers: [{'type': 'GHSA', 'value': 'GHSA-257v-vj4p-3w2h'}, {'type': 'CVE', 'value': 'CVE-2021-29060'}]
Fixed Version: 1.5.5
Created Date = January 18, 2022
---
|
non_process
|
security alert package color string severity moderate due affected package color string ecosystem npm affected version range summary regular expression denial of service redos description in the npm package color string there is a redos regular expression denial of service vulnerability regarding an exponential time complexity for linearly increasing input lengths for hwb color strings strings reaching more than characters would see several milliseconds of processing time strings reaching more than characters began seeing of processing time the cause was due to the regular expression that parses hwb strings specifically the hue value where the integer portion of the hue value used a or more quantifier shortly thereafter followed by a or more quantifier this caused excessive backtracking and a cartesian scan resulting in exponential time complexity given a linear increase in input length identifiers fixed version created date january
| 0
|
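The mechanics described in the record above (a 0-or-more quantifier feeding a 1-or-more quantifier) can be reproduced with a deliberately simplified pattern; the regex below is a generic stand-in for illustration, not color-string's actual `hwb()` parser.

```javascript
// Classic catastrophic backtracking: an inner 1-or-more group wrapped in
// an outer repetition lets the engine try exponentially many ways to
// split the input before it can conclude there is no match.
const vulnerable = /^(a+)+b$/;

for (const n of [20, 22, 24, 26]) {
  const input = 'a'.repeat(n); // no trailing 'b', so every attempt fails
  const start = Date.now();
  vulnerable.test(input);
  console.log(`n=${n}: ${Date.now() - start} ms`); // roughly doubles per step
}
```

Mitigations typically bound the quantifiers (e.g. `\d{1,3}` instead of `\d*`) or restructure the pattern so there is only one way to partition the input.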
9,313
| 12,324,035,775
|
IssuesEvent
|
2020-05-13 13:08:31
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `TruncateInt` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `TruncateInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `TruncateInt` from TiDB -
## Description
Port the scalar function `TruncateInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function truncateint from tidb description port the scalar function truncateint from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
147,821
| 23,278,970,456
|
IssuesEvent
|
2022-08-05 10:02:51
|
nymtech/team-product
|
https://api.github.com/repos/nymtech/team-product
|
closed
|
Wallet - Menu, highlights, global background colors
|
desktop wallet design
|
- We are using the wrong background color in the light version of the wallet

- Spacing in leftside menu
So between each position in the menu we have 32px of spacing, but each position is in a block/group whose height is 20px. Because we don't have this spacing, the menu still looks cluttered

- Hovers in leftside menu, light and dark mode
https://user-images.githubusercontent.com/95295201/180409005-33ccce48-a5e8-4ca0-8693-34cb76bdc07e.mp4
https://user-images.githubusercontent.com/95295201/180409048-8495208a-116c-43e5-b300-0730669a8dab.mp4
Figma file:

https://www.figma.com/file/KiYKZnWeefSALAe0j2dAPI/Nym-Wallet?node-id=2646%3A43681
- [x] design
- [x] development
|
1.0
|
Wallet - Menu, highlights, global background colors - - We are using the wrong background color in the light version of the wallet

- Spacing in leftside menu
So between each position in the menu we have 32px of spacing, but each position is in a block/group whose height is 20px. Because we don't have this spacing, the menu still looks cluttered

- Hovers in leftside menu, light and dark mode
https://user-images.githubusercontent.com/95295201/180409005-33ccce48-a5e8-4ca0-8693-34cb76bdc07e.mp4
https://user-images.githubusercontent.com/95295201/180409048-8495208a-116c-43e5-b300-0730669a8dab.mp4
Figma file:

https://www.figma.com/file/KiYKZnWeefSALAe0j2dAPI/Nym-Wallet?node-id=2646%3A43681
- [x] design
- [x] development
|
non_process
|
wallet menu highlights global background colors we are using the wrong background color in the light version of the wallet spacing in leftside menu so between each position in the menu we have of spacing but each position is in a block group whose height is because we don t have this spacing the menu still looks cluttered hovers in leftside menu light and dark mode figma file design development
| 0
|
46,598
| 24,618,132,531
|
IssuesEvent
|
2022-10-15 15:22:10
|
gtadigital/grp-userfeedback
|
https://api.github.com/repos/gtadigital/grp-userfeedback
|
opened
|
Load Time
|
performance
|
[Feedback Gregorio]
The gta Digital page takes a very long time to load (but perhaps this is inevitable, given the complexity and richness of the results).
|
True
|
Load Time - [Feedback Gregorio]
The gta Digital page takes a very long time to load (but perhaps this is inevitable, given the complexity and richness of the results).
|
non_process
|
load time the gta digital page takes a very long time to load but perhaps this is inevitable given the complexity and richness of the results
| 0
|
6,282
| 9,260,474,808
|
IssuesEvent
|
2019-03-18 05:51:42
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
child_process, process: possibly confusing error message
|
child_process errors process
|
* **Version**: all?
* **Platform**: all?
* **Subsystem**: child_process, process
`parent.js`:
```js
'use strict';
const subprocess = require('child_process').fork('subprocess.js');
try {
subprocess.send(Symbol());
} catch (err) {
console.error('PARENT error:', err);
}
```
`subprocess.js`:
```js
'use strict';
process.on('uncaughtException', (err) => {
console.log('SUBPROCESS error:', err);
});
process.on('message', (msg) => {
console.log('SUBPROCESS got message:', msg);
});
```
Output:
```sh
SUBPROCESS error: SyntaxError: Unexpected token u in JSON at position 0
at JSON.parse (<anonymous>)
at Pipe.channel.onread (internal/child_process.js:492:28)
```
1. Should we intercept unserializable values at the sending side?
2. If not, should we make the error message on the receiving side clearer?
|
2.0
|
child_process, process: possibly confusing error message - * **Version**: all?
* **Platform**: all?
* **Subsystem**: child_process, process
`parent.js`:
```js
'use strict';
const subprocess = require('child_process').fork('subprocess.js');
try {
subprocess.send(Symbol());
} catch (err) {
console.error('PARENT error:', err);
}
```
`subprocess.js`:
```js
'use strict';
process.on('uncaughtException', (err) => {
console.log('SUBPROCESS error:', err);
});
process.on('message', (msg) => {
console.log('SUBPROCESS got message:', msg);
});
```
Output:
```sh
SUBPROCESS error: SyntaxError: Unexpected token u in JSON at position 0
at JSON.parse (<anonymous>)
at Pipe.channel.onread (internal/child_process.js:492:28)
```
1. Should we intercept unserializable values at the sending side?
2. If not, should we make the error message on the receiving side clearer?
|
process
|
child process process possibly confusing error message version all platform all subsystem child process process parent js js use strict const subprocess require child process fork subprocess js try subprocess send symbol catch err console error parent error err subprocess js js use strict process on uncaughtexception err console log subprocess error err process on message msg console log subprocess got message msg output sh subprocess error syntaxerror unexpected token u in json at position at json parse at pipe channel onread internal child process js should we intercept unserializable values at the sending side if not should we make the error message on receiving side more clear
| 1
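Option 1 from the questions above can be sketched as a send-side guard. The Python snippet below is a minimal stand-in for the IPC channel, assuming the JSON framing visible in the stack trace; `safe_send` and `channel_write` are hypothetical names, not Node's API.

```python
import json

def safe_send(channel_write, message):
    # Validate serializability on the sending side so the failure surfaces
    # where the bad value originates, instead of as a cryptic JSON parse
    # error in the receiving process. `channel_write` stands in for the
    # IPC channel's write function (hypothetical, not Node's API).
    try:
        payload = json.dumps(message)
    except TypeError as err:
        raise TypeError(f'IPC message is not JSON-serializable: {err}') from err
    channel_write(payload + '\n')  # newline-delimited JSON framing

# A set() is not JSON-serializable, so this raises at the sender:
# safe_send(print, {'ready': {1, 2, 3}})
```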
|
140,721
| 18,908,736,410
|
IssuesEvent
|
2021-11-16 11:55:09
|
lyubov888L/long-term-system
|
https://api.github.com/repos/lyubov888L/long-term-system
|
opened
|
CVE-2021-23343 (High) detected in path-parse-1.0.6.tgz
|
security vulnerability
|
## CVE-2021-23343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>path-parse-1.0.6.tgz</b></p></summary>
<p>Node.js path.parse() ponyfill</p>
<p>Library home page: <a href="https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz">https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz</a></p>
<p>Path to dependency file: long-term-system/package.json</p>
<p>Path to vulnerable library: long-term-system/node_modules/path-parse/package.json</p>
<p>
Dependency Hierarchy:
- gulp-4.0.2.tgz (Root Library)
- gulp-cli-2.2.0.tgz
- liftoff-3.1.0.tgz
- resolve-1.15.1.tgz
- :x: **path-parse-1.0.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lyubov888L/long-term-system/commit/70ceacee09bde0d9b6f809a77010ad55db64e593">70ceacee09bde0d9b6f809a77010ad55db64e593</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343>CVE-2021-23343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jbgutierrez/path-parse/issues/8">https://github.com/jbgutierrez/path-parse/issues/8</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: path-parse - 1.0.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23343 (High) detected in path-parse-1.0.6.tgz - ## CVE-2021-23343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>path-parse-1.0.6.tgz</b></p></summary>
<p>Node.js path.parse() ponyfill</p>
<p>Library home page: <a href="https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz">https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz</a></p>
<p>Path to dependency file: long-term-system/package.json</p>
<p>Path to vulnerable library: long-term-system/node_modules/path-parse/package.json</p>
<p>
Dependency Hierarchy:
- gulp-4.0.2.tgz (Root Library)
- gulp-cli-2.2.0.tgz
- liftoff-3.1.0.tgz
- resolve-1.15.1.tgz
- :x: **path-parse-1.0.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lyubov888L/long-term-system/commit/70ceacee09bde0d9b6f809a77010ad55db64e593">70ceacee09bde0d9b6f809a77010ad55db64e593</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343>CVE-2021-23343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jbgutierrez/path-parse/issues/8">https://github.com/jbgutierrez/path-parse/issues/8</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: path-parse - 1.0.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in path parse tgz cve high severity vulnerability vulnerable library path parse tgz node js path parse ponyfill library home page a href path to dependency file long term system package json path to vulnerable library long term system node modules path parse package json dependency hierarchy gulp tgz root library gulp cli tgz liftoff tgz resolve tgz x path parse tgz vulnerable library found in head commit a href found in base branch main vulnerability details all versions of package path parse are vulnerable to regular expression denial of service redos via splitdevicere splittailre and splitpathre regular expressions redos exhibits polynomial worst case time complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution path parse step up your open source security game with whitesource
| 0
|
640,246
| 20,777,567,508
|
IssuesEvent
|
2022-03-16 11:58:31
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
IllegalArgumentException when hover is expr inside if condition context
|
Type/Bug Priority/High Area/Compiler Team/CompilerFETools Team/CompilerFE Points/3 Area/SemanticAPI
|
**Description:**
The sample below causes an `IllegalArgumentException` when I hover over the result var ref in the RHS of the logical expression.
```bal
function testUnreachabilityWithIfStmtWithUnaryNot() {
1|2 result = 2;
if result is 1 || result is 2 {
}
}
```
<details>
<summary>Stack trace.(Click to expand)</summary>
```
[Error - 2:54:14 PM] Operation 'text/hover' failed! {uri: '/home/dulmina/Documents/test2.bal', [4:23], error: 'Symbol is 'null''}
java.lang.IllegalArgumentException: Symbol is 'null'
at io.ballerina.compiler.api.impl.SymbolFactory.getBCompiledSymbol(SymbolFactory.java:129)
at io.ballerina.compiler.api.impl.symbols.BallerinaTypeReferenceTypeSymbol.definition(BallerinaTypeReferenceTypeSymbol.java:104)
at io.ballerina.compiler.api.impl.symbols.BallerinaTypeReferenceTypeSymbol.getModule(BallerinaTypeReferenceTypeSymbol.java:122)
at io.ballerina.compiler.api.impl.symbols.BallerinaTypeReferenceTypeSymbol.signature(BallerinaTypeReferenceTypeSymbol.java:157)
at org.ballerinalang.langserver.hover.HoverUtil.getVariableHoverMarkupContent(HoverUtil.java:422)
at org.ballerinalang.langserver.hover.HoverUtil.getHoverForSymbol(HoverUtil.java:127)
at org.ballerinalang.langserver.hover.HoverUtil.getHover(HoverUtil.java:103)
at org.ballerinalang.langserver.BallerinaTextDocumentService.lambda$hover$1(BallerinaTextDocumentService.java:170)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
at java.base/java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:479)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
```
</details>
|
1.0
|
IllegalArgumentException when hover is expr inside if condition context - **Description:**
The sample below causes an `IllegalArgumentException` when I hover over the result var ref in the RHS of the logical expression.
```bal
function testUnreachabilityWithIfStmtWithUnaryNot() {
1|2 result = 2;
if result is 1 || result is 2 {
}
}
```
<details>
<summary>Stack trace.(Click to expand)</summary>
```
[Error - 2:54:14 PM] Operation 'text/hover' failed! {uri: '/home/dulmina/Documents/test2.bal', [4:23], error: 'Symbol is 'null''}
java.lang.IllegalArgumentException: Symbol is 'null'
at io.ballerina.compiler.api.impl.SymbolFactory.getBCompiledSymbol(SymbolFactory.java:129)
at io.ballerina.compiler.api.impl.symbols.BallerinaTypeReferenceTypeSymbol.definition(BallerinaTypeReferenceTypeSymbol.java:104)
at io.ballerina.compiler.api.impl.symbols.BallerinaTypeReferenceTypeSymbol.getModule(BallerinaTypeReferenceTypeSymbol.java:122)
at io.ballerina.compiler.api.impl.symbols.BallerinaTypeReferenceTypeSymbol.signature(BallerinaTypeReferenceTypeSymbol.java:157)
at org.ballerinalang.langserver.hover.HoverUtil.getVariableHoverMarkupContent(HoverUtil.java:422)
at org.ballerinalang.langserver.hover.HoverUtil.getHoverForSymbol(HoverUtil.java:127)
at org.ballerinalang.langserver.hover.HoverUtil.getHover(HoverUtil.java:103)
at org.ballerinalang.langserver.BallerinaTextDocumentService.lambda$hover$1(BallerinaTextDocumentService.java:170)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
at java.base/java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:479)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
```
</details>
|
non_process
|
illegalargumentexception when hover is expr inside if condition context description below sample causes illegalargumentexception when i hover result var ref in rhs of the logical expression bal function testunreachabilitywithifstmtwithunarynot result if result is result is stack trace click to expand operation text hover failed uri home dulmina documents bal error symbol is null java lang illegalargumentexception symbol is null at io ballerina compiler api impl symbolfactory getbcompiledsymbol symbolfactory java at io ballerina compiler api impl symbols ballerinatypereferencetypesymbol definition ballerinatypereferencetypesymbol java at io ballerina compiler api impl symbols ballerinatypereferencetypesymbol getmodule ballerinatypereferencetypesymbol java at io ballerina compiler api impl symbols ballerinatypereferencetypesymbol signature ballerinatypereferencetypesymbol java at org ballerinalang langserver hover hoverutil getvariablehovermarkupcontent hoverutil java at org ballerinalang langserver hover hoverutil gethoverforsymbol hoverutil java at org ballerinalang langserver hover hoverutil gethover hoverutil java at org ballerinalang langserver ballerinatextdocumentservice lambda hover ballerinatextdocumentservice java at java base java util concurrent completablefuture uniapply tryfire completablefuture java at java base java util concurrent completablefuture completion exec completablefuture java at java base java util concurrent forkjointask doexec forkjointask java at java base java util concurrent forkjoinpool workqueue toplevelexec forkjoinpool java at java base java util concurrent forkjoinpool scan forkjoinpool java at java base java util concurrent forkjoinpool runworker forkjoinpool java at java base java util concurrent forkjoinworkerthread run forkjoinworkerthread java
| 0
|
276
| 2,707,671,925
|
IssuesEvent
|
2015-04-08 00:40:56
|
sysown/proxysql-0.2
|
https://api.github.com/repos/sysown/proxysql-0.2
|
closed
|
Add variable mysql-poll_timeout_on_failure in global_variables
|
CONNECTION POOL enhancement MYSQL PROTOCOL QUERY PROCESSOR ROUTING
|
When a connection is not returned by the connection pool for any reason (backends not available or all saturated), MySQL_Thread should try again shortly after
|
1.0
|
Add variable mysql-poll_timeout_on_failure in global_variables - When a connection is not returned by the connection pool for any reason (backends not available or all saturated), MySQL_Thread should try again shortly after
|
process
|
add variable mysql poll timeout on failure in global variables when a connection is not returned by the connection pool for any reason backends not available or all saturated mysql thread should try again shortly after
| 1
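The requested behavior amounts to a bounded poll-and-retry loop around the pool. A minimal sketch follows, assuming a hypothetical pool API; this is not ProxySQL's MySQL_Thread code.

```python
import time

def get_connection_with_retry(pool, poll_timeout_on_failure_ms=100,
                              max_attempts=5):
    # If the pool returns no connection (backends unavailable or all
    # saturated), wait poll_timeout_on_failure_ms and poll again rather
    # than failing immediately. `pool.try_get()` is a hypothetical API.
    for _ in range(max_attempts):
        conn = pool.try_get()
        if conn is not None:
            return conn
        time.sleep(poll_timeout_on_failure_ms / 1000.0)
    return None  # caller decides how to surface the failure
```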
|
672,801
| 22,840,771,093
|
IssuesEvent
|
2022-07-12 21:34:52
|
codeforbtv/green-up-app
|
https://api.github.com/repos/codeforbtv/green-up-app
|
closed
|
Editing Team Details doesn't close on Save
|
Type: Bug Priority: High Usability Team Screen
|
**Describe the bug**
When you click Save you expect the page to close
**To Reproduce**
Steps to reproduce the behavior:
1. Tap on an existing team.
2. Tap Details
3. Make some changes.
4. Tap Save
5. Page doesn't close
**Expected behavior**
After tapping Save, return to the Main Screen.
**Smartphone (please complete the following information):**
- Device: Samsung S9
- Version 10
|
1.0
|
Editing Team Details doesn't close on Save - **Describe the bug**
When you click Save you expect the page to close
**To Reproduce**
Steps to reproduce the behavior:
1. Tap on an existing team.
2. Tap Details
3. Make some changes.
4. Tap Save
5. Page doesn't close
**Expected behavior**
After tapping Save, return to the Main Screen.
**Smartphone (please complete the following information):**
- Device: Samsung S9
- Version 10
|
non_process
|
editing team details doesn t close on save describe the bug when you click save you expect the page to close to reproduce steps to reproduce the behavior tap on an existing team tap details make some changes tap save page doesn t close expected behavior after tapping save return to the main screen smartphone please complete the following information device samsung version
| 0
|
63,552
| 7,725,533,285
|
IssuesEvent
|
2018-05-24 18:16:38
|
Opentrons/opentrons
|
https://api.github.com/repos/Opentrons/opentrons
|
closed
|
PD: Update tiprack + trash-box images
|
protocol designer small
|
## overview
PD still uses blue images from the Opentrons website via S3; this is an artifact of the old prototype PD version. Morgan made new designs, and PD should use these instead.
## behavior

|
1.0
|
PD: Update tiprack + trash-box images - ## overview
PD still uses blue images from the Opentrons website via S3; this is an artifact of the old prototype PD version. Morgan made new designs, and PD should use these instead.
## behavior

|
non_process
|
pd update tiprack trash box images overview pd still uses blue images from opentrons website via this is an artifact from the old prototype pd version morgan made new designs pd should use these instead behavior
| 0
|
15,619
| 19,761,888,402
|
IssuesEvent
|
2022-01-16 14:53:40
|
ForNeVeR/Cesium
|
https://api.github.com/repos/ForNeVeR/Cesium
|
opened
|
Preprocessor #error support
|
status:help-wanted area:preprocessor
|
Our preprocessor should support the `#error` directive from the C standard.
|
1.0
|
Preprocessor #error support - Our preprocessor should support the `#error` directive from the C standard.
|
process
|
preprocessor error support our preprocessor should support the error directive from the c standard
| 1
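Per the C standard, reaching an `#error` directive makes the implementation emit the directive's tokens as a diagnostic and fail the translation. The toy line-based sketch below illustrates that contract only; it ignores conditional groups and is not Cesium's actual preprocessor.

```python
class PreprocessorError(Exception):
    pass

def run_directives(lines):
    # Toy model of the standard's contract for #error: report the rest of
    # the directive line as a diagnostic and abort translation. It does not
    # handle #if/#endif groups and is not Cesium's implementation.
    for lineno, line in enumerate(lines, start=1):
        stripped = line.lstrip()
        if stripped.startswith('#error'):
            message = stripped[len('#error'):].strip()
            raise PreprocessorError(f'line {lineno}: #error {message}')
        yield line

# Example: consuming the generator raises on the directive line.
# list(run_directives(['int x;', '#error not implemented yet']))
```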
|
735,654
| 25,408,570,345
|
IssuesEvent
|
2022-11-22 17:01:33
|
googleapis/java-bigquery
|
https://api.github.com/repos/googleapis/java-bigquery
|
closed
|
bigquery.it.ITBigQueryTest: testCreateAndGetJobWithSelectedFields failed
|
type: bug priority: p1 api: bigquery flakybot: issue flakybot: flaky
|
Note: #2343 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 2e1047c8115e294a4454cf34abb3172b3fb1ff69
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/056084d5-6832-443d-b1d2-241ef1a3d491), [Sponge](http://sponge2/056084d5-6832-443d-b1d2-241ef1a3d491)
status: failed
<details><summary>Test output</summary><br><pre>com.google.cloud.bigquery.BigQueryException: Not found: Table gcloud-devel:gcloud_test_dataset_temp_21aa6b7f_59da_410f_99ed_c147b70e877c.test_create_and_get_job_with_selected_fields_source_table
at com.google.cloud.bigquery.Job.reload(Job.java:419)
at com.google.cloud.bigquery.Job.waitFor(Job.java:252)
at com.google.cloud.bigquery.it.ITBigQueryTest.testCreateAndGetJobWithSelectedFields(ITBigQueryTest.java:4127)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:829)
</pre></details>
|
1.0
|
bigquery.it.ITBigQueryTest: testCreateAndGetJobWithSelectedFields failed - Note: #2343 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 2e1047c8115e294a4454cf34abb3172b3fb1ff69
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/056084d5-6832-443d-b1d2-241ef1a3d491), [Sponge](http://sponge2/056084d5-6832-443d-b1d2-241ef1a3d491)
status: failed
<details><summary>Test output</summary><br><pre>com.google.cloud.bigquery.BigQueryException: Not found: Table gcloud-devel:gcloud_test_dataset_temp_21aa6b7f_59da_410f_99ed_c147b70e877c.test_create_and_get_job_with_selected_fields_source_table
at com.google.cloud.bigquery.Job.reload(Job.java:419)
at com.google.cloud.bigquery.Job.waitFor(Job.java:252)
at com.google.cloud.bigquery.it.ITBigQueryTest.testCreateAndGetJobWithSelectedFields(ITBigQueryTest.java:4127)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:829)
</pre></details>
|
non_process
|
bigquery it itbigquerytest testcreateandgetjobwithselectedfields failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output com google cloud bigquery bigqueryexception not found table gcloud devel gcloud test dataset temp test create and get job with selected fields source table at com google cloud bigquery job reload job java at com google cloud bigquery job waitfor job java at com google cloud bigquery it itbigquerytest testcreateandgetjobwithselectedfields itbigquerytest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements failontimeout callablestatement call failontimeout java at org junit internal runners statements failontimeout callablestatement call failontimeout java at java base java util concurrent futuretask run futuretask java at java base java lang thread run thread java
| 0
|
126,188
| 4,973,799,970
|
IssuesEvent
|
2016-12-06 02:47:47
|
yairodriguez/yairodriguez.github.io
|
https://api.github.com/repos/yairodriguez/yairodriguez.github.io
|
opened
|
Enable CI
|
[priority] high [status] accepted [type] feature
|
### Description
Enable Continuous Integration.
---
### Issue Checklist
- [ ] Enable project to implement Continuous Integration.
- [ ] Configure `Travis CI`.
---
### Assignees
- [ ] Final assign @yairodriguez
|
1.0
|
Enable CI - ### Description
Enable Continuous Integration.
---
### Issue Checklist
- [ ] Enable project to implement Continuous Integration.
- [ ] Configure `Travis CI`.
---
### Assignees
- [ ] Final assign @yairodriguez
|
non_process
|
enable ci description enable continuous integration issue checklist enable project to implement continuous integration configure travis ci assignees final assign yairodriguez
| 0
|
144,173
| 5,537,069,082
|
IssuesEvent
|
2017-03-21 21:09:50
|
CraftAcademy/ca_course
|
https://api.github.com/repos/CraftAcademy/ca_course
|
closed
|
Update Chartjs installation instructions in cooper challenge
|
ca-course course material high priority ready
|
http://class.craftacademy.se/courses/course-v1:CraftAcademy+CA-CAMP+2016_3/courseware/96bf29b196214229a1f5b420c670ac7f/
Support for Bower has been dropped as of `2.2.0`, so we need to make use of `bower-npm-resolver` to install Chart.js;
see instructions [here](http://www.chartjs.org/docs/#getting-started-installation).
Or, the easy way: just use npm (`npm install chart.js --save`) and require it in `index.html`:
``` html
<script src="../node_modules/chart.js/dist/Chart.js"></script>
```
instead of
``` html
<script src="lib/Chart.js/dist/Chart.js"></script>
```
|
1.0
|
Update Chartjs installation instructions in cooper challenge - http://class.craftacademy.se/courses/course-v1:CraftAcademy+CA-CAMP+2016_3/courseware/96bf29b196214229a1f5b420c670ac7f/
Support for Bower has been dropped as of `2.2.0`, so we need to make use of `bower-npm-resolver` to install Chart.js;
see instructions [here](http://www.chartjs.org/docs/#getting-started-installation).
Or, the easy way: just use npm (`npm install chart.js --save`) and require it in `index.html`:
``` html
<script src="../node_modules/chart.js/dist/Chart.js"></script>
```
instead of
``` html
<script src="lib/Chart.js/dist/Chart.js"></script>
```
|
non_process
|
update chartjs installation instructions in cooper challenge support for bower has been dropped as of so we need to make use of bower npm resolver to install chartjs see instructions or easy way just use npm npm install chart js save and require it in index html html instead of html
| 0
|
6,015
| 8,822,172,525
|
IssuesEvent
|
2019-01-02 08:04:39
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
Main menu ICU button bug
|
2.0.6 Fixed Process bug
|
When entering the app for the first time, or when moving to another tab and pressing ICU once, instead of loading this screen:

it loads the first tasks in the My Tasks tab, until you press the ICU main menu button again:

|
1.0
|
Main menu ICU button bug - When entering the app for the first time, or when moving to another tab and pressing ICU once, instead of loading this screen:

it loads the first tasks in the My Tasks tab, until you press the ICU main menu button again:

|
process
|
main menu icu button bug when entering the app for the first time or when moving to another tab and pressing icu once instead of loading this screen it loads the first tasks in the my tasks tab until you press the icu main menu button again
| 1
|
183,925
| 21,784,743,966
|
IssuesEvent
|
2022-05-14 01:10:13
|
RG4421/HackShack-Session-Landing-Page
|
https://api.github.com/repos/RG4421/HackShack-Session-Landing-Page
|
closed
|
CVE-2012-6708 (Medium) detected in jquery-1.7.1.min.js - autoclosed
|
security vulnerability
|
## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: HackShack-Session-Landing-Page/node_modules/sockjs/examples/hapi/html/index.html</p>
<p>Path to vulnerable library: /node_modules/sockjs/examples/hapi/html/index.html,/node_modules/sockjs/examples/multiplex/index.html,/node_modules/sockjs/examples/express-3.x/index.html,/node_modules/sockjs/examples/echo/index.html,/node_modules/sockjs/examples/express/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/HackShack-Session-Landing-Page/commit/07bd1498ae0f65f0e53050b22c0e2348289e620e">07bd1498ae0f65f0e53050b22c0e2348289e620e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2012-6708 (Medium) detected in jquery-1.7.1.min.js - autoclosed - ## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: HackShack-Session-Landing-Page/node_modules/sockjs/examples/hapi/html/index.html</p>
<p>Path to vulnerable library: /node_modules/sockjs/examples/hapi/html/index.html,/node_modules/sockjs/examples/multiplex/index.html,/node_modules/sockjs/examples/express-3.x/index.html,/node_modules/sockjs/examples/echo/index.html,/node_modules/sockjs/examples/express/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/HackShack-Session-Landing-Page/commit/07bd1498ae0f65f0e53050b22c0e2348289e620e">07bd1498ae0f65f0e53050b22c0e2348289e620e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in jquery min js autoclosed cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file hackshack session landing page node modules sockjs examples hapi html index html path to vulnerable library node modules sockjs examples hapi html index html node modules sockjs examples multiplex index html node modules sockjs examples express x index html node modules sockjs examples echo index html node modules sockjs examples express index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery
| 0
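The fix described in the advisory is a stricter input-classification heuristic. Below is a minimal sketch of that heuristic, simplified and not jQuery's actual source.

```python
def input_is_html(s: str) -> bool:
    # Post-1.9.0 heuristic as described above: treat input as HTML only
    # when it explicitly starts with '<', not whenever '<' appears
    # anywhere. Simplified sketch, not jQuery's real parsing code.
    return s.startswith('<')

assert input_is_html('<img src=x onerror=alert(1)>')
assert not input_is_html('#menu a[title="<b>"]')  # stays a selector
```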
|
712,455
| 24,495,793,420
|
IssuesEvent
|
2022-10-10 08:33:46
|
Denloob/ItGrowsGame
|
https://api.github.com/repos/Denloob/ItGrowsGame
|
closed
|
Hitboxes aren't centered
|
priority-low stale bugfix
|
You can see it when looking at the hitboxes of objects smaller than the grid.
Example:

|
1.0
|
Hitboxes aren't centered - You can see it when looking at the hitboxes of objects smaller than the grid.
Example:

|
non_process
|
hitboxes aren t centered you can see it when looking at the hitboxes of objects smaller than the grid example
| 0
|
5,564
| 8,404,351,532
|
IssuesEvent
|
2018-10-11 12:36:15
|
OpenSourcePolitics/decidim
|
https://api.github.com/repos/OpenSourcePolitics/decidim
|
opened
|
Hide alert message on Orsay's private process
|
0.12-stable space: processes
|
When connecting to the private process where parents will fill out the survey, an alert message appears:
<img width="1168" alt="capture d ecran 2018-10-11 a 14 31 49" src="https://user-images.githubusercontent.com/10398564/46804393-c62b0f80-cd62-11e8-89e5-069bf540d2ec.png">
It may be confusing and should be hidden. It would be best to do this before the weekend :)
|
1.0
|
Hide alert message on Orsay's private process - When connecting to the private process where parents will fill out the survey, an alert message appears:
<img width="1168" alt="capture d ecran 2018-10-11 a 14 31 49" src="https://user-images.githubusercontent.com/10398564/46804393-c62b0f80-cd62-11e8-89e5-069bf540d2ec.png">
It may be confusing and should be hidden. It would be best to do this before the weekend :)
|
process
|
hide alert message on orsay s private process when connecting to the private process where parents will fill the survey an alert message appears img width alt capture d ecran a src it may be confusing and should be hidden best would be to do it before this weekend
| 1
|
662,147
| 22,103,048,819
|
IssuesEvent
|
2022-06-01 15:00:50
|
dhowe/ramble
|
https://api.github.com/repos/dhowe/ramble
|
reopened
|
Add 'about' dialog, opened from legend button
|
high-priority
|
The idea here is to add a 5th 'about' button (nicely designed) to the legend, which will pop up something like the following:
<img width="400" alt="image" src="https://user-images.githubusercontent.com/737638/169821646-b56aa9c8-367f-4cec-9dd8-32a3945f2e55.png">
<img width="500" alt="image" src="https://user-images.githubusercontent.com/737638/169820556-8440e72e-b267-4fb6-9c6b-6b045dadeb24.png">
|
1.0
|
Add 'about' dialog, opened from legend button - The idea here is to add a 5th 'about' button (nicely designed) to the legend, which will pop up something like the following:
<img width="400" alt="image" src="https://user-images.githubusercontent.com/737638/169821646-b56aa9c8-367f-4cec-9dd8-32a3945f2e55.png">
<img width="500" alt="image" src="https://user-images.githubusercontent.com/737638/169820556-8440e72e-b267-4fb6-9c6b-6b045dadeb24.png">
|
non_process
|
add about dialog opened from legend button idea here is to add a about button nicely designed to the legend which will pop up something like below img width alt image src img width alt image src
| 0
|
22,366
| 31,080,414,442
|
IssuesEvent
|
2023-08-13 02:00:09
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Fri, 11 Aug 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### A Unified Interactive Model Evaluation for Classification, Object Detection, and Instance Segmentation in Computer Vision
- **Authors:** Changjian Chen, Yukai Guo, Fengyuan Tian, Shilong Liu, Weikai Yang, Zhaowei Wang, Jing Wu, Hang Su, Hanspeter Pfister, Shixia Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC)
- **Arxiv link:** https://arxiv.org/abs/2308.05168
- **Pdf link:** https://arxiv.org/pdf/2308.05168
- **Abstract**
Existing model evaluation tools mainly focus on evaluating classification models, leaving a gap in evaluating more complex models, such as object detection. In this paper, we develop an open-source visual analysis tool, Uni-Evaluator, to support a unified model evaluation for classification, object detection, and instance segmentation in computer vision. The key idea behind our method is to formulate both discrete and continuous predictions in different tasks as unified probability distributions. Based on these distributions, we develop 1) a matrix-based visualization to provide an overview of model performance; 2) a table visualization to identify the problematic data subsets where the model performs poorly; 3) a grid visualization to display the samples of interest. These visualizations work together to facilitate the model evaluation from a global overview to individual samples. Two case studies demonstrate the effectiveness of Uni-Evaluator in evaluating model performance and making informed improvements.
### Product Review Image Ranking for Fashion E-commerce
- **Authors:** Sangeet Jaiswal, Dhruv Patel, Sreekanth Vempati, Konduru Saiswaroop
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Information Retrieval (cs.IR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2308.05390
- **Pdf link:** https://arxiv.org/pdf/2308.05390
- **Abstract**
In a fashion e-commerce platform where customers can't physically examine the products on their own, being able to see other customers' text and image reviews of the product is critical while making purchase decisions. Given the high reliance on these reviews, over the years we have observed customers proactively sharing their reviews. With an increase in the coverage of User Generated Content (UGC), there has been a corresponding increase in the number of customer images. It is thus imperative to display the most relevant images on top as it may influence users' online shopping choices and behavior. In this paper, we propose a simple yet effective training procedure for ranking customer images. We created a dataset consisting of Myntra (A Major Indian Fashion e-commerce company) studio posts and highly engaged (upvotes/downvotes) UGC images as our starting point and used selected distortion techniques on the images of the above dataset to bring their quality at par with those of bad UGC images. We train our network to rank bad-quality images lower than high-quality ones. Our proposed method outperforms the baseline models on two metrics, namely correlation coefficient, and accuracy, by substantial margins.
### Speech-Driven 3D Face Animation with Composite and Regional Facial Movements
- **Authors:** Haozhe Wu, Songtao Zhou, Jia Jia, Junliang Xing, Qi Wen, Xiang Wen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2308.05428
- **Pdf link:** https://arxiv.org/pdf/2308.05428
- **Abstract**
Speech-driven 3D face animation poses significant challenges due to the intricacy and variability inherent in human facial movements. This paper emphasizes the importance of considering both the composite and regional natures of facial movements in speech-driven 3D face animation. The composite nature pertains to how speech-independent factors globally modulate speech-driven facial movements along the temporal dimension. Meanwhile, the regional nature alludes to the notion that facial movements are not globally correlated but are actuated by local musculature along the spatial dimension. It is thus indispensable to incorporate both natures for engendering vivid animation. To address the composite nature, we introduce an adaptive modulation module that employs arbitrary facial movements to dynamically adjust speech-driven facial movements across frames on a global scale. To accommodate the regional nature, our approach ensures that each constituent of the facial features for every frame focuses on the local spatial movements of 3D faces. Moreover, we present a non-autoregressive backbone for translating audio to 3D facial movements, which maintains high-frequency nuances of facial movements and facilitates efficient inference. Comprehensive experiments and user studies demonstrate that our method surpasses contemporary state-of-the-art approaches both qualitatively and quantitatively.
### Look at the Neighbor: Distortion-aware Unsupervised Domain Adaptation for Panoramic Semantic Segmentation
- **Authors:** Xu Zheng, Tianbo Pan, Yunhao Luo, Lin Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.05493
- **Pdf link:** https://arxiv.org/pdf/2308.05493
- **Abstract**
Endeavors have been recently made to transfer knowledge from the labeled pinhole image domain to the unlabeled panoramic image domain via Unsupervised Domain Adaptation (UDA). The aim is to tackle the domain gaps caused by the style disparities and distortion problem from the non-uniformly distributed pixels of equirectangular projection (ERP). Previous works typically focus on transferring knowledge based on geometric priors with specially designed multi-branch network architectures. As a result, considerable computational costs are induced, and meanwhile, their generalization abilities are profoundly hindered by the variation of distortion among pixels. In this paper, we find that the pixels' neighborhood regions of the ERP indeed introduce less distortion. Intuitively, we propose a novel UDA framework that can effectively address the distortion problems for panoramic semantic segmentation. In comparison, our method is simpler, easier to implement, and more computationally efficient. Specifically, we propose distortion-aware attention (DA) capturing the neighboring pixel distribution without using any geometric constraints. Moreover, we propose a class-wise feature aggregation (CFA) module to iteratively update the feature representations with a memory bank. As such, the feature similarity between two domains can be consistently optimized. Extensive experiments show that our method achieves new state-of-the-art performance while remarkably reducing 80% parameters.
### MapTRv2: An End-to-End Framework for Online Vectorized HD Map Construction
- **Authors:** Bencheng Liao, Shaoyu Chen, Yunchi Zhang, Bo Jiang, Qian Zhang, Wenyu Liu, Chang Huang, Xinggang Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2308.05736
- **Pdf link:** https://arxiv.org/pdf/2308.05736
- **Abstract**
High-definition (HD) map provides abundant and precise static environmental information of the driving scene, serving as a fundamental and indispensable component for planning in autonomous driving system. In this paper, we present MapTR (Map TRansformer), an end-to-end framework for online vectorized HD map construction. We propose a unified permutation-equivalent modeling approach, i.e., modeling map element as a point set with a group of equivalent permutations, which accurately describes the shape of map element and stabilizes the learning process. We design a hierarchical query embedding scheme to flexibly encode structured map information and perform hierarchical bipartite matching for map element learning. To speed up convergence, we further introduce auxiliary one-to-many matching and dense supervision. The proposed method well copes with various map elements with arbitrary shapes. It runs at real-time inference speed and achieves state-of-the-art performance on both nuScenes and Argoverse2 datasets. Abundant qualitative results show stable and robust map construction quality in complex and various driving scenes. Code and more demos are available at https://github.com/hustvl/MapTR for facilitating further studies and applications.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Leveraging the Edge and Cloud for V2X-Based Real-Time Object Detection in Autonomous Driving
- **Authors:** Faisal Hawlader, François Robinet, Raphaël Frank
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG); Networking and Internet Architecture (cs.NI)
- **Arxiv link:** https://arxiv.org/abs/2308.05234
- **Pdf link:** https://arxiv.org/pdf/2308.05234
- **Abstract**
Environmental perception is a key element of autonomous driving because the information received from the perception module influences core driving decisions. An outstanding challenge in real-time perception for autonomous driving lies in finding the best trade-off between detection quality and latency. Major constraints on both computation and power have to be taken into account for real-time perception in autonomous vehicles. Larger object detection models tend to produce the best results, but are also slower at runtime. Since the most accurate detectors cannot run in real-time locally, we investigate the possibility of offloading computation to edge and cloud platforms, which are less resource-constrained. We create a synthetic dataset to train object detection models and evaluate different offloading strategies. Using real hardware and network simulations, we compare different trade-offs between prediction quality and end-to-end delay. Since sending raw frames over the network implies additional transmission delays, we also explore the use of JPEG and H.265 compression at varying qualities and measure their impact on prediction metrics. We show that models with adequate compression can be run in real-time on the cloud while outperforming local detection performance.
### Neural Progressive Meshes
- **Authors:** Yun-Chun Chen, Vladimir G. Kim, Noam Aigerman, Alec Jacobson
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Graphics (cs.GR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2308.05741
- **Pdf link:** https://arxiv.org/pdf/2308.05741
- **Abstract**
The recent proliferation of 3D content that can be consumed on hand-held devices necessitates efficient tools for transmitting large geometric data, e.g., 3D meshes, over the Internet. Detailed high-resolution assets can pose a challenge to storage as well as transmission bandwidth, and level-of-detail techniques are often used to transmit an asset using an appropriate bandwidth budget. It is especially desirable for these methods to transmit data progressively, improving the quality of the geometry with more data. Our key insight is that the geometric details of 3D meshes often exhibit similar local patterns even across different shapes, and thus can be effectively represented with a shared learned generative space. We learn this space using a subdivision-based encoder-decoder architecture trained in advance on a large collection of surfaces. We further observe that additional residual features can be transmitted progressively between intermediate levels of subdivision that enable the client to control the tradeoff between bandwidth cost and quality of reconstruction, providing a neural progressive mesh representation. We evaluate our method on a diverse set of complex 3D shapes and demonstrate that it outperforms baselines in terms of compression ratio and reconstruction quality.
## Keyword: RAW
### Leveraging the Edge and Cloud for V2X-Based Real-Time Object Detection in Autonomous Driving
- **Authors:** Faisal Hawlader, François Robinet, Raphaël Frank
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG); Networking and Internet Architecture (cs.NI)
- **Arxiv link:** https://arxiv.org/abs/2308.05234
- **Pdf link:** https://arxiv.org/pdf/2308.05234
- **Abstract**
Environmental perception is a key element of autonomous driving because the information received from the perception module influences core driving decisions. An outstanding challenge in real-time perception for autonomous driving lies in finding the best trade-off between detection quality and latency. Major constraints on both computation and power have to be taken into account for real-time perception in autonomous vehicles. Larger object detection models tend to produce the best results, but are also slower at runtime. Since the most accurate detectors cannot run in real-time locally, we investigate the possibility of offloading computation to edge and cloud platforms, which are less resource-constrained. We create a synthetic dataset to train object detection models and evaluate different offloading strategies. Using real hardware and network simulations, we compare different trade-offs between prediction quality and end-to-end delay. Since sending raw frames over the network implies additional transmission delays, we also explore the use of JPEG and H.265 compression at varying qualities and measure their impact on prediction metrics. We show that models with adequate compression can be run in real-time on the cloud while outperforming local detection performance.
### PlankAssembly: Robust 3D Reconstruction from Three Orthographic Views with Learnt Shape Programs
- **Authors:** Wentao Hu, Jia Zheng, Zixin Zhang, Xiaojun Yuan, Jian Yin, Zihan Zhou
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2308.05744
- **Pdf link:** https://arxiv.org/pdf/2308.05744
- **Abstract**
In this paper, we develop a new method to automatically convert 2D line drawings from three orthographic views into 3D CAD models. Existing methods for this problem reconstruct 3D models by back-projecting the 2D observations into 3D space while maintaining explicit correspondence between the input and output. Such methods are sensitive to errors and noises in the input, thus often fail in practice where the input drawings created by human designers are imperfect. To overcome this difficulty, we leverage the attention mechanism in a Transformer-based sequence generation model to learn flexible mappings between the input and output. Further, we design shape programs which are suitable for generating the objects of interest to boost the reconstruction accuracy and facilitate CAD modeling applications. Experiments on a new benchmark dataset show that our method significantly outperforms existing ones when the inputs are noisy or incomplete.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Fri, 11 Aug 23 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### A Unified Interactive Model Evaluation for Classification, Object Detection, and Instance Segmentation in Computer Vision
- **Authors:** Changjian Chen, Yukai Guo, Fengyuan Tian, Shilong Liu, Weikai Yang, Zhaowei Wang, Jing Wu, Hang Su, Hanspeter Pfister, Shixia Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC)
- **Arxiv link:** https://arxiv.org/abs/2308.05168
- **Pdf link:** https://arxiv.org/pdf/2308.05168
- **Abstract**
Existing model evaluation tools mainly focus on evaluating classification models, leaving a gap in evaluating more complex models, such as object detection. In this paper, we develop an open-source visual analysis tool, Uni-Evaluator, to support a unified model evaluation for classification, object detection, and instance segmentation in computer vision. The key idea behind our method is to formulate both discrete and continuous predictions in different tasks as unified probability distributions. Based on these distributions, we develop 1) a matrix-based visualization to provide an overview of model performance; 2) a table visualization to identify the problematic data subsets where the model performs poorly; 3) a grid visualization to display the samples of interest. These visualizations work together to facilitate the model evaluation from a global overview to individual samples. Two case studies demonstrate the effectiveness of Uni-Evaluator in evaluating model performance and making informed improvements.
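The unification step, casting heterogeneous task outputs as probability distributions, can be made concrete with a toy sketch; the numbers are invented and this is not Uni-Evaluator's code.
```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()

cls_logits = np.array([2.1, 0.3, -1.0])    # a classification head's raw output
det_scores = np.array([0.70, 0.20, 0.05])  # one detection's per-class scores

cls_dist = softmax(cls_logits)             # discrete prediction -> distribution
det_dist = det_scores / det_scores.sum()   # renormalize scores to a distribution
print(cls_dist.round(3), det_dist.round(3))  # both now live in the same space
```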
### Product Review Image Ranking for Fashion E-commerce
- **Authors:** Sangeet Jaiswal, Dhruv Patel, Sreekanth Vempati, Konduru Saiswaroop
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Information Retrieval (cs.IR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2308.05390
- **Pdf link:** https://arxiv.org/pdf/2308.05390
- **Abstract**
In a fashion e-commerce platform where customers can't physically examine the products on their own, being able to see other customers' text and image reviews of the product is critical while making purchase decisions. Given the high reliance on these reviews, over the years we have observed customers proactively sharing their reviews. With an increase in the coverage of User Generated Content (UGC), there has been a corresponding increase in the number of customer images. It is thus imperative to display the most relevant images on top as it may influence users' online shopping choices and behavior. In this paper, we propose a simple yet effective training procedure for ranking customer images. We created a dataset consisting of Myntra (A Major Indian Fashion e-commerce company) studio posts and highly engaged (upvotes/downvotes) UGC images as our starting point and used selected distortion techniques on the images of the above dataset to bring their quality at par with those of bad UGC images. We train our network to rank bad-quality images lower than high-quality ones. Our proposed method outperforms the baseline models on two metrics, namely correlation coefficient, and accuracy, by substantial margins.
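A pairwise ranking objective is one common way to train a network to "rank bad-quality images lower than high-quality ones". The sketch below uses PyTorch's margin ranking loss with a placeholder scoring network; the loss choice and the random tensors are assumptions for illustration, not necessarily the paper's exact setup.
```python
import torch
import torch.nn as nn

score_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
loss_fn = nn.MarginRankingLoss(margin=1.0)

good = torch.randn(8, 3, 64, 64)  # stand-ins for studio / high-engagement images
bad = torch.randn(8, 3, 64, 64)   # stand-ins for their distorted counterparts

s_good = score_model(good).squeeze(1)
s_bad = score_model(bad).squeeze(1)
target = torch.ones_like(s_good)  # +1 means: first input should score higher
loss = loss_fn(s_good, s_bad, target)
loss.backward()  # gradients push each good image's score above its bad pair
```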
### Speech-Driven 3D Face Animation with Composite and Regional Facial Movements
- **Authors:** Haozhe Wu, Songtao Zhou, Jia Jia, Junliang Xing, Qi Wen, Xiang Wen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2308.05428
- **Pdf link:** https://arxiv.org/pdf/2308.05428
- **Abstract**
Speech-driven 3D face animation poses significant challenges due to the intricacy and variability inherent in human facial movements. This paper emphasizes the importance of considering both the composite and regional natures of facial movements in speech-driven 3D face animation. The composite nature pertains to how speech-independent factors globally modulate speech-driven facial movements along the temporal dimension. Meanwhile, the regional nature alludes to the notion that facial movements are not globally correlated but are actuated by local musculature along the spatial dimension. It is thus indispensable to incorporate both natures for engendering vivid animation. To address the composite nature, we introduce an adaptive modulation module that employs arbitrary facial movements to dynamically adjust speech-driven facial movements across frames on a global scale. To accommodate the regional nature, our approach ensures that each constituent of the facial features for every frame focuses on the local spatial movements of 3D faces. Moreover, we present a non-autoregressive backbone for translating audio to 3D facial movements, which maintains high-frequency nuances of facial movements and facilitates efficient inference. Comprehensive experiments and user studies demonstrate that our method surpasses contemporary state-of-the-art approaches both qualitatively and quantitatively.
### Look at the Neighbor: Distortion-aware Unsupervised Domain Adaptation for Panoramic Semantic Segmentation
- **Authors:** Xu Zheng, Tianbo Pan, Yunhao Luo, Lin Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.05493
- **Pdf link:** https://arxiv.org/pdf/2308.05493
- **Abstract**
Endeavors have been recently made to transfer knowledge from the labeled pinhole image domain to the unlabeled panoramic image domain via Unsupervised Domain Adaptation (UDA). The aim is to tackle the domain gaps caused by the style disparities and distortion problem from the non-uniformly distributed pixels of equirectangular projection (ERP). Previous works typically focus on transferring knowledge based on geometric priors with specially designed multi-branch network architectures. As a result, considerable computational costs are induced, and meanwhile, their generalization abilities are profoundly hindered by the variation of distortion among pixels. In this paper, we find that the pixels' neighborhood regions of the ERP indeed introduce less distortion. Intuitively, we propose a novel UDA framework that can effectively address the distortion problems for panoramic semantic segmentation. In comparison, our method is simpler, easier to implement, and more computationally efficient. Specifically, we propose distortion-aware attention (DA) capturing the neighboring pixel distribution without using any geometric constraints. Moreover, we propose a class-wise feature aggregation (CFA) module to iteratively update the feature representations with a memory bank. As such, the feature similarity between two domains can be consistently optimized. Extensive experiments show that our method achieves new state-of-the-art performance while remarkably reducing 80% parameters.
### MapTRv2: An End-to-End Framework for Online Vectorized HD Map Construction
- **Authors:** Bencheng Liao, Shaoyu Chen, Yunchi Zhang, Bo Jiang, Qian Zhang, Wenyu Liu, Chang Huang, Xinggang Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2308.05736
- **Pdf link:** https://arxiv.org/pdf/2308.05736
- **Abstract**
High-definition (HD) map provides abundant and precise static environmental information of the driving scene, serving as a fundamental and indispensable component for planning in autonomous driving system. In this paper, we present \textbf{Map} \textbf{TR}ansformer, an end-to-end framework for online vectorized HD map construction. We propose a unified permutation-equivalent modeling approach, \ie, modeling map element as a point set with a group of equivalent permutations, which accurately describes the shape of map element and stabilizes the learning process. We design a hierarchical query embedding scheme to flexibly encode structured map information and perform hierarchical bipartite matching for map element learning. To speed up convergence, we further introduce auxiliary one-to-many matching and dense supervision. The proposed method well copes with various map elements with arbitrary shapes. It runs at real-time inference speed and achieves state-of-the-art performance on both nuScenes and Argoverse2 datasets. Abundant qualitative results show stable and robust map construction quality in complex and various driving scenes. Code and more demos are available at \url{https://github.com/hustvl/MapTR} for facilitating further studies and applications.
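The permutation-equivalent treatment of a map element can be illustrated with a toy matching cost that is invariant to the equivalent orderings of a closed polyline (cyclic shifts in either direction); this is a sketch of the idea, not MapTR's implementation.
```python
import numpy as np

def equivalent_cost(pred, gt):
    """Min mean-L1 distance over cyclic shifts and reversal of the GT points."""
    candidates = [np.roll(gt, k, axis=0) for k in range(len(gt))]
    candidates += [np.roll(gt[::-1], k, axis=0) for k in range(len(gt))]
    return min(np.abs(pred - c).mean() for c in candidates)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
shifted = np.roll(square, 2, axis=0)     # same shape, different start point
print(equivalent_cost(square, shifted))  # 0.0: the orderings are equivalent
```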
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Leveraging the Edge and Cloud for V2X-Based Real-Time Object Detection in Autonomous Driving
- **Authors:** Faisal Hawlader, François Robinet, Raphaël Frank
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG); Networking and Internet Architecture (cs.NI)
- **Arxiv link:** https://arxiv.org/abs/2308.05234
- **Pdf link:** https://arxiv.org/pdf/2308.05234
- **Abstract**
Environmental perception is a key element of autonomous driving because the information received from the perception module influences core driving decisions. An outstanding challenge in real-time perception for autonomous driving lies in finding the best trade-off between detection quality and latency. Major constraints on both computation and power have to be taken into account for real-time perception in autonomous vehicles. Larger object detection models tend to produce the best results, but are also slower at runtime. Since the most accurate detectors cannot run in real-time locally, we investigate the possibility of offloading computation to edge and cloud platforms, which are less resource-constrained. We create a synthetic dataset to train object detection models and evaluate different offloading strategies. Using real hardware and network simulations, we compare different trade-offs between prediction quality and end-to-end delay. Since sending raw frames over the network implies additional transmission delays, we also explore the use of JPEG and H.265 compression at varying qualities and measure their impact on prediction metrics. We show that models with adequate compression can be run in real-time on the cloud while outperforming local detection performance.
### Neural Progressive Meshes
- **Authors:** Yun-Chun Chen, Vladimir G. Kim, Noam Aigerman, Alec Jacobson
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Graphics (cs.GR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2308.05741
- **Pdf link:** https://arxiv.org/pdf/2308.05741
- **Abstract**
The recent proliferation of 3D content that can be consumed on hand-held devices necessitates efficient tools for transmitting large geometric data, e.g., 3D meshes, over the Internet. Detailed high-resolution assets can pose a challenge to storage as well as transmission bandwidth, and level-of-detail techniques are often used to transmit an asset using an appropriate bandwidth budget. It is especially desirable for these methods to transmit data progressively, improving the quality of the geometry with more data. Our key insight is that the geometric details of 3D meshes often exhibit similar local patterns even across different shapes, and thus can be effectively represented with a shared learned generative space. We learn this space using a subdivision-based encoder-decoder architecture trained in advance on a large collection of surfaces. We further observe that additional residual features can be transmitted progressively between intermediate levels of subdivision that enable the client to control the tradeoff between bandwidth cost and quality of reconstruction, providing a neural progressive mesh representation. We evaluate our method on a diverse set of complex 3D shapes and demonstrate that it outperforms baselines in terms of compression ratio and reconstruction quality.
## Keyword: RAW
### Leveraging the Edge and Cloud for V2X-Based Real-Time Object Detection in Autonomous Driving
- **Authors:** Faisal Hawlader, François Robinet, Raphaël Frank
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG); Networking and Internet Architecture (cs.NI)
- **Arxiv link:** https://arxiv.org/abs/2308.05234
- **Pdf link:** https://arxiv.org/pdf/2308.05234
- **Abstract**
Environmental perception is a key element of autonomous driving because the information received from the perception module influences core driving decisions. An outstanding challenge in real-time perception for autonomous driving lies in finding the best trade-off between detection quality and latency. Major constraints on both computation and power have to be taken into account for real-time perception in autonomous vehicles. Larger object detection models tend to produce the best results, but are also slower at runtime. Since the most accurate detectors cannot run in real-time locally, we investigate the possibility of offloading computation to edge and cloud platforms, which are less resource-constrained. We create a synthetic dataset to train object detection models and evaluate different offloading strategies. Using real hardware and network simulations, we compare different trade-offs between prediction quality and end-to-end delay. Since sending raw frames over the network implies additional transmission delays, we also explore the use of JPEG and H.265 compression at varying qualities and measure their impact on prediction metrics. We show that models with adequate compression can be run in real-time on the cloud while outperforming local detection performance.
### PlankAssembly: Robust 3D Reconstruction from Three Orthographic Views with Learnt Shape Programs
- **Authors:** Wentao Hu, Jia Zheng, Zixin Zhang, Xiaojun Yuan, Jian Yin, Zihan Zhou
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2308.05744
- **Pdf link:** https://arxiv.org/pdf/2308.05744
- **Abstract**
In this paper, we develop a new method to automatically convert 2D line drawings from three orthographic views into 3D CAD models. Existing methods for this problem reconstruct 3D models by back-projecting the 2D observations into 3D space while maintaining explicit correspondence between the input and output. Such methods are sensitive to errors and noises in the input, thus often fail in practice where the input drawings created by human designers are imperfect. To overcome this difficulty, we leverage the attention mechanism in a Transformer-based sequence generation model to learn flexible mappings between the input and output. Further, we design shape programs which are suitable for generating the objects of interest to boost the reconstruction accuracy and facilitate CAD modeling applications. Experiments on a new benchmark dataset show that our method significantly outperforms existing ones when the inputs are noisy or incomplete.
## Keyword: raw image
There is no result
|
process
|
new submissions for fri aug keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp a unified interactive model evaluation for classification object detection and instance segmentation in computer vision authors changjian chen yukai guo fengyuan tian shilong liu weikai yang zhaowei wang jing wu hang su hanspeter pfister shixia liu subjects computer vision and pattern recognition cs cv human computer interaction cs hc arxiv link pdf link abstract existing model evaluation tools mainly focus on evaluating classification models leaving a gap in evaluating more complex models such as object detection in this paper we develop an open source visual analysis tool uni evaluator to support a unified model evaluation for classification object detection and instance segmentation in computer vision the key idea behind our method is to formulate both discrete and continuous predictions in different tasks as unified probability distributions based on these distributions we develop a matrix based visualization to provide an overview of model performance a table visualization to identify the problematic data subsets where the model performs poorly a grid visualization to display the samples of interest these visualizations work together to facilitate the model evaluation from a global overview to individual samples two case studies demonstrate the effectiveness of uni evaluator in evaluating model performance and making informed improvements product review image ranking for fashion e commerce authors sangeet jaiswal dhruv patel sreekanth vempati konduru saiswaroop subjects computer vision and pattern recognition cs cv information retrieval cs ir machine learning cs lg arxiv link pdf link abstract in a fashion e commerce platform where customers can t physically examine the products on their own being able to see other customers text and image reviews of the product is critical while making purchase decisions given the high reliance on these reviews over the years we have observed customers proactively sharing their reviews with an increase in the coverage of user generated content ugc there has been a corresponding increase in the number of customer images it is thus imperative to display the most relevant images on top as it may influence users online shopping choices and behavior in this paper we propose a simple yet effective training procedure for ranking customer images we created a dataset consisting of myntra a major indian fashion e commerce company studio posts and highly engaged upvotes downvotes ugc images as our starting point and used selected distortion techniques on the images of the above dataset to bring their quality at par with those of bad ugc images we train our network to rank bad quality images lower than high quality ones our proposed method outperforms the baseline models on two metrics namely correlation coefficient and accuracy by substantial margins speech driven face animation with composite and regional facial movements authors haozhe wu songtao zhou jia jia junliang xing qi wen xiang wen subjects computer vision and pattern recognition cs cv multimedia cs mm arxiv link pdf link abstract speech driven face animation poses significant challenges due to the intricacy and variability inherent in human facial movements this paper emphasizes the importance of considering both the composite and regional natures of facial movements in speech driven face animation the composite nature pertains to how speech independent factors globally modulate speech driven facial movements along the temporal dimension meanwhile the regional nature alludes to the notion that facial movements are not globally correlated but are actuated by local musculature along the spatial dimension it is thus indispensable to incorporate both natures for engendering vivid animation to address the composite nature we introduce an adaptive modulation module that employs arbitrary facial movements to dynamically adjust speech driven facial movements across frames on a global scale to accommodate the regional nature our approach ensures that each constituent of the facial features for every frame focuses on the local spatial movements of faces moreover we present a non autoregressive backbone for translating audio to facial movements which maintains high frequency nuances of facial movements and facilitates efficient inference comprehensive experiments and user studies demonstrate that our method surpasses contemporary state of the art approaches both qualitatively and quantitatively look at the neighbor distortion aware unsupervised domain adaptation for panoramic semantic segmentation authors xu zheng tianbo pan yunhao luo lin wang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract endeavors have been recently made to transfer knowledge from the labeled pinhole image domain to the unlabeled panoramic image domain via unsupervised domain adaptation uda the aim is to tackle the domain gaps caused by the style disparities and distortion problem from the non uniformly distributed pixels of equirectangular projection erp previous works typically focus on transferring knowledge based on geometric priors with specially designed multi branch network architectures as a result considerable computational costs are induced and meanwhile their generalization abilities are profoundly hindered by the variation of distortion among pixels in this paper we find that the pixels neighborhood regions of the erp indeed introduce less distortion intuitively we propose a novel uda framework that can effectively address the distortion problems for panoramic semantic segmentation in comparison our method is simpler easier to implement and more computationally efficient specifically we propose distortion aware attention da capturing the neighboring pixel distribution without using any geometric constraints moreover we propose a class wise feature aggregation cfa module to iteratively update the feature representations with a memory bank as such the feature similarity between two domains can be consistently optimized extensive experiments show that our method achieves new state of the art performance while remarkably reducing parameters an end to end framework for online vectorized hd map construction authors bencheng liao shaoyu chen yunchi zhang bo jiang qian zhang wenyu liu chang huang xinggang wang subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract high definition hd map provides abundant and precise static environmental information of the driving scene serving as a fundamental and indispensable component for planning in autonomous driving system in this paper we present textbf map textbf tr ansformer an end to end framework for online vectorized hd map construction we propose a unified permutation equivalent modeling approach ie modeling map element as a point set with a group of equivalent permutations which accurately describes the shape of map element and stabilizes the learning process we design a hierarchical query embedding scheme to flexibly encode structured map information and perform hierarchical bipartite matching for map element learning to speed up convergence we further introduce auxiliary one to many matching and dense supervision the proposed method well copes with various map elements with arbitrary shapes it runs at real time inference speed and achieves state of the art performance on both nuscenes and datasets abundant qualitative results show stable and robust map construction quality in complex and various driving scenes code and more demos are available at url for facilitating further studies and applications keyword image signal processing there is no result keyword image signal process there is no result keyword compression leveraging the edge and cloud for based real time object detection in autonomous driving authors faisal hawlader françois robinet raphaël frank subjects computer vision and pattern recognition cs cv artificial intelligence cs ai distributed parallel and cluster computing cs dc machine learning cs lg networking and internet architecture cs ni arxiv link pdf link abstract environmental perception is a key element of autonomous driving because the information received from the perception module influences core driving decisions an outstanding challenge in real time perception for autonomous driving lies in finding the best trade off between detection quality and latency major constraints on both computation and power have to be taken into account for real time perception in autonomous vehicles larger object detection models tend to produce the best results but are also slower at runtime since the most accurate detectors cannot run in real time locally we investigate the possibility of offloading computation to edge and cloud platforms which are less resource constrained we create a synthetic dataset to train object detection models and evaluate different offloading strategies using real hardware and network simulations we compare different trade offs between prediction quality and end to end delay since sending raw frames over the network implies additional transmission delays we also explore the use of jpeg and h compression at varying qualities and measure their impact on prediction metrics we show that models with adequate compression can be run in real time on the cloud while outperforming local detection performance neural progressive meshes authors yun chun chen vladimir g kim noam aigerman alec jacobson subjects computer vision and pattern recognition cs cv artificial intelligence cs ai graphics cs gr machine learning cs lg arxiv link pdf link abstract the recent proliferation of content that can be consumed on hand held devices necessitates efficient tools for transmitting large geometric data e g meshes over the internet detailed high resolution assets can pose a challenge to storage as well as transmission bandwidth and level of detail techniques are often used to transmit an asset using an appropriate bandwidth budget it is especially desirable for these methods to transmit data progressively improving the quality of the geometry with more data our key insight is that the geometric details of meshes often exhibit similar local patterns even across different shapes and thus can be effectively represented with a shared learned generative space we learn this space using a subdivision based encoder decoder architecture trained in advance on a large collection of surfaces we further observe that additional residual features can be transmitted progressively between intermediate levels of subdivision that enable the client to control the tradeoff between bandwidth cost and quality of reconstruction providing a neural progressive mesh representation we evaluate our method on a diverse set of complex shapes and demonstrate that it outperforms baselines in terms of compression ratio and reconstruction quality keyword raw leveraging the edge and cloud for based real time object detection in autonomous driving authors faisal hawlader françois robinet raphaël frank subjects computer vision and pattern recognition cs cv artificial intelligence cs ai distributed parallel and cluster computing cs dc machine learning cs lg networking and internet architecture cs ni arxiv link pdf link abstract environmental perception is a key element of autonomous driving because the information received from the perception module influences core driving decisions an outstanding challenge in real time perception for autonomous driving lies in finding the best trade off between detection quality and latency major constraints on both computation and power have to be taken into account for real time perception in autonomous vehicles larger object detection models tend to produce the best results but are also slower at runtime since the most accurate detectors cannot run in real time locally we investigate the possibility of offloading computation to edge and cloud platforms which are less resource constrained we create a synthetic dataset to train object detection models and evaluate different offloading strategies using real hardware and network simulations we compare different trade offs between prediction quality and end to end delay since sending raw frames over the network implies additional transmission delays we also explore the use of jpeg and h compression at varying qualities and measure their impact on prediction metrics we show that models with adequate compression can be run in real time on the cloud while outperforming local detection performance plankassembly robust reconstruction from three orthographic views with learnt shape programs authors wentao hu jia zheng zixin zhang xiaojun yuan jian yin zihan zhou subjects computer vision and pattern recognition cs cv graphics cs gr arxiv link pdf link abstract in this paper we develop a new method to automatically convert line drawings from three orthographic views into cad models existing methods for this problem reconstruct models by back projecting the observations into space while maintaining explicit correspondence between the input and output such methods are sensitive to errors and noises in the input thus often fail in practice where the input drawings created by human designers are imperfect to overcome this difficulty we leverage the attention mechanism in a transformer based sequence generation model to learn flexible mappings between the input and output further we design shape programs which are suitable for generating the objects of interest to boost the reconstruction accuracy and facilitate cad modeling applications experiments on a new benchmark dataset show that our method significantly outperforms existing ones when the inputs are noisy or incomplete keyword raw image there is no result
| 1
|
11,170
| 13,188,622,270
|
IssuesEvent
|
2020-08-13 06:53:08
|
mmikkel/Reasons-Craft3
|
https://api.github.com/repos/mmikkel/Reasons-Craft3
|
closed
|
Conditional fields don't show after updating to 3.5.3?
|
bug compatibility
|
I updated to 3.5.3 today and it looks to have broken Reasons. I'm able to access the Conditional options when editing the Entry Type, but the conditional field doesn't show at all after adding criteria. In the second screenshot, the field should show to the right of 'Child Layout'.


|
True
|
Conditional fields don't show after updating to 3.5.3? - I updated to 3.5.3 today and it looks to have broken Reasons. I'm able to access the Conditional options when editing the Entry Type, but the conditional field doesn't show at all after adding criteria. In the second screenshot, the field should show to the right of 'Child Layout'.


|
non_process
|
conditional fields don t show after updating to i updated to today and it looks to have broken reasons i m able to access the conditional options when editing the entry type but the conditional field doesn t show at all after adding criteria in the second screenshot the field should show to the right of child layout
| 0
|
21,217
| 28,299,030,432
|
IssuesEvent
|
2023-04-10 03:04:22
|
vnphanquang/svelte-put
|
https://api.github.com/repos/vnphanquang/svelte-put
|
closed
|
[preprocess-inline-svg] Experimental Typing Extraction
|
priority:medium scope:preprocess-inline-svg type:enhance
|
## Context
Following [@svelte-put/preprocess-inline-svg@1.2.0](https://github.com/vnphanquang/svelte-put/releases/tag/%40svelte-put%2Fpreprocess-inline-svg%401.2.0) in which an experimental typing generator is introduced, this issue is to track the next steps for completing this feature, or whether it should be removed.
|
1.0
|
[preprocess-inline-svg] Experimental Typing Extraction - ## Context
Following [@svelte-put/preprocess-inline-svg@1.2.0](https://github.com/vnphanquang/svelte-put/releases/tag/%40svelte-put%2Fpreprocess-inline-svg%401.2.0) in which an experimental typing generator is introduced, this issue is to track the next steps for completing this feature, or whether it should be removed.
|
process
|
experimental typing extraction context following in which an experimental typing generator is introduced this issue is to track the next steps for completing this feature or whether it should be removed
| 1
|
382,494
| 11,307,265,295
|
IssuesEvent
|
2020-01-18 19:42:42
|
Thorium-Sim/thorium
|
https://api.github.com/repos/Thorium-Sim/thorium
|
closed
|
Card Memory Foam
|
priority/high type/bug
|
### Requested By: Natalie Anderson
### Priority: High
### Version: 2.2.0
You know how memory foam mattresses remember your shape after a while? Well currently the software is like a memory foam mattress and it remembers the card it was last on even when you make a new flight or reset the old one. This makes it tricky for our training since we have them start on the first screen. We would have to rerecord all our trainings...again. I just did that and I'd rather not go through that again.
### Steps to Reproduce
1. Have a flight
2. Make sure that all the screens are not on the first flight (so you can see this happen)
3. Reset your flight or delete it and make a new one.
4. Log in to your stations and see that they are on the same screen you left them on before (not the first screen)
5. Do a dance because you found the bug.
|
1.0
|
Card Memory Foam - ### Requested By: Natalie Anderson
### Priority: High
### Version: 2.2.0
You know how memory foam mattresses remember your shape after a while? Well currently the software is like a memory foam mattress and it remembers the card it was last on even when you make a new flight or reset the old one. This makes it tricky for our training since we have them start on the first screen. We would have to rerecord all our trainings...again. I just did that and I'd rather not go through that again.
### Steps to Reproduce
1. Have a flight
2. Make sure that all the screens are not on the first flight (so you can see this happen)
3. Reset your flight or delete it and make a new one.
4. Log in to your stations and see that they are on the same screen you left them on before (not the first screen)
5. Do a dance because you found the bug.
|
non_process
|
card memory foam requested by natalie anderson priority high version you know how memory foam mattresses remember your shape after a while well currently the software is like a memory foam mattress and it remembers the card it was last on even when you make a new flight or reset the old one this makes it tricky for our training since we have them start on the first screen we would have to rerecord all our trainings again i just did that and i d rather not go through that again steps to reproduce have a flight make sure that all the screens are not on the first flight so you can see this happen reset your flight or delete it and make a new one log in to your stations and see that they are on the same screen you left them on before not the first screen do a dance because you found the bug
| 0
|
8,299
| 11,462,666,571
|
IssuesEvent
|
2020-02-07 14:34:41
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
connecting GO:0034053 and GO:0080185
|
multi-species process
|
GO:0034053
Definition
modulation by symbiont of host defense-related programmed cell death
Any process in which a symbiont modulates the frequency, rate or extent of defense-related programmed cell death in the host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
modulation by symbiont of host HR | narrow
modulation by symbiont of host hypersensitive response | narrow
but these narrow synonyms have their own terms (currently unrelated)
GO:0080185 effector-mediated induction of plant hypersensitive response by symbiont
Definition
A symbiont process whereby a molecule secreted by the symbiont activates plant effector-triggered immunity (ETI) signalling and the subsequent activation of a plant hypersensitive response to induce necrosis. In the plant, ETI involves the direct or indirect recognition of an effector protein by the host, for example through plant resistance receptor or R proteins. PMID:16497589 PMID:22241993 PMID:23411798 PMID:27641772
There are 9 annotations to
GO:0034053 modulation by symbiont of host defense-related programmed cell death
Some are effectors of HR in plants and should move down to GO:0080185
Some are effectors from human pathogens, so this term is required as a parent to the plant term.
|
1.0
|
connecting GO:0034053 and GO:0080185 -
GO:0034053
Definition
modulation by symbiont of host defense-related programmed cell death
Any process in which a symbiont modulates the frequency, rate or extent of defense-related programmed cell death in the host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
modulation by symbiont of host HR | narrow
modulation by symbiont of host hypersensitive response | narrow
but these narrow synonyms have their own terms (currently unrelated)
GO:0080185 effector-mediated induction of plant hypersensitive response by symbiont
Definition
A symbiont process whereby a molecule secreted by the symbiont activates plant effector-triggered immunity (ETI) signalling and the subsequent activation of a plant hypersensitive response to induce necrosis. In the plant, ETI involves the direct or indirect recognition of an effector protein by the host, for example through plant resistance receptor or R proteins. PMID:16497589 PMID:22241993 PMID:23411798 PMID:27641772
There are 9 annotations to
GO:0034053 modulation by symbiont of host defense-related programmed cell death
Some are effectors of HR in plants and should move down to GO:0080185
Some are effectors from human pathogens, so this term is required as a parent to the plant term.
|
process
|
connecting go and go go definition modulation by symbiont of host defense related programmed cell death any process in which a symbiont modulates the frequency rate or extent of defense related programmed cell death in the host organism the host is defined as the larger of the organisms involved in a symbiotic interaction modulation by symbiont of host hr narrow modulation by symbiont of host hypersensitive response narrow but these narrow synonyms have their own terms currently unrelated go effector mediated induction of plant hypersensitive response by symbiont definition a symbiont process whereby a molecule secreted by the symbiont activates plant effector triggered immunity eti signalling and the subsequent activation of a plant hypersensitive response to induce necrosis in the plant eti involves the direct or indirect recognition of an effector protein by the host for example through plant resistance receptor or r proteins pmid pmid pmid pmid there are annotations to go modulation by symbiont of host defense related programmed cell death some are effectors of hr in plants and should move down to go some are effectors from human pathogens so this term is required as a parent to the plant term
| 1
|
8,955
| 12,060,679,455
|
IssuesEvent
|
2020-04-15 21:43:58
|
googleapis/python-spanner
|
https://api.github.com/repos/googleapis/python-spanner
|
closed
|
Investigate new Sphinx release changes
|
api: spanner priority: p2 type: process
|
Sphinx has a new release: [3.0.0](https://www.sphinx-doc.org/en/master/changes.html#release-3-0-0-released-apr-06-2020)
This release has caused the docs generation to fail due to issues in CHANGELOG.md. The root cause in the CHANGELOG should be found and fixed so the library can continue to rely on the most recent update.
If this proves difficult, the version can be temporarily pinned to 2.2.4 in the interim.
|
1.0
|
Investigate new Sphinx release changes - Sphinx has a new release: [3.0.0](https://www.sphinx-doc.org/en/master/changes.html#release-3-0-0-released-apr-06-2020)
This release has caused the docs generation to fail due to issues in CHANGELOG.md. The root cause in the CHANGELOG should be found and fixed so the library can continue to rely on the most recent update.
If this proves difficult, the version can be temporarily pinned to 2.2.4 in the interim.
|
process
|
investigate new sphinx release changes sphinx has a new release this release has caused the docs generation to fail due to issues in changelog md the root cause in the changelog should be found and fixed so the library can continue to rely on the most recent update if this proves difficult the version can be temporarily pinned to in the interim
| 1
|
143,551
| 5,520,103,112
|
IssuesEvent
|
2017-03-19 00:49:53
|
GoogleCloudPlatform/google-cloud-eclipse
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-eclipse
|
closed
|
Add authUser to Cloud Console URLs
|
enhancement high priority
|
Cloud Console URLs support the `authuser=foo@gmail.com` parameter to specify which user should be selected when the URL is opened in the browser.
`ProjectSelectorSelectionChangedListener.java`
`StandardDeployPreferencesPanel.java (2 matches)`
|
1.0
|
Add authUser to Cloud Console URLs - Cloud Console URLs support the `authuser=foo@gmail.com` parameter to specify which user should be selected when the URL is opened in the browser.
`ProjectSelectorSelectionChangedListener.java`
`StandardDeployPreferencesPanel.java (2 matches)`
|
non_process
|
add authuser to cloud console urls cloud console urls support the authuser foo gmail com parameter to specify which user should be selected when the url is opened in the browser projectselectorselectionchangedlistener java standarddeploypreferencespanel java matches
| 0
|
11,721
| 14,548,465,800
|
IssuesEvent
|
2020-12-16 01:20:11
|
googleapis/doc-pipeline
|
https://api.github.com/repos/googleapis/doc-pipeline
|
closed
|
Request user to delete tmp directory when running tests
|
type: process
|
Running the test with an existing tmp directory in the doc-pipeline directory can cause flakiness and unknown behaviors.
Instead of potentially prematurely deleting the tmp folder, the test should ask the user to get rid of it before running any tests.
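One possible shape for that guard, sketched under assumptions (the `tmp` location and the message wording are hypothetical, not the repo's actual code):
```python
import pathlib
import sys

tmp = pathlib.Path(__file__).parent / "tmp"  # assumed location of the leftover dir
if tmp.exists():
    # refuse to run rather than silently deleting the user's directory
    sys.exit(
        f"Found existing {tmp}; please delete it before running the tests "
        "to avoid flaky, stateful behavior."
    )
```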
|
1.0
|
Request user to delete tmp directory when running tests - Running the test with an existing tmp directory in the doc-pipeline directory can cause flakiness and unknown behaviors.
Instead of potentially prematurely deleting the tmp folder, the test should ask the user to get rid of it before running any tests.
|
process
|
request user to delete tmp directory when running tests running the test with an existing tmp directory in the doc pipeline directory can cause flakiness and unknown behaviors instead of potentially prematurely deleting the tmp folder the test should ask the user to get rid of it before running any tests
| 1
|
151,468
| 12,037,233,060
|
IssuesEvent
|
2020-04-13 21:23:30
|
rancher/dashboard
|
https://api.github.com/repos/rancher/dashboard
|
closed
|
Can't create a secret
|
[zube]: To Test area/secret kind/bug status/to-test
|
Steps:
1. Try to create a secret
2. Enter something in the Key/Value fields
Results: Error "Cannot read property 'linkFor' of undefined"

|
2.0
|
Can't create a secret - Steps:
1. Try to create a secret
2. Enter something in the Key/Value fields
Results: Error "Cannot read property 'linkFor' of undefined"

|
non_process
|
can t create a secret steps try to create a secret enter something in the key value fields results error cannot read property linkfor of undefined
| 0
|
15,405
| 19,596,630,274
|
IssuesEvent
|
2022-01-05 18:40:28
|
2i2c-org/team-compass
|
https://api.github.com/repos/2i2c-org/team-compass
|
closed
|
Create a "team roles calendar"
|
type: enhancement :label: team-process
|
### Description
We currently use [a team roles meta-issue](https://github.com/2i2c-org/team-compass/issues/294) to track which team members are in each role. However, this is a bit clunky to update, and also doesn't make it easy to see when team members will take on a role in the *future*.
Instead, we can create a Google Calendar that tracks all of our team roles. We can update this instead of a GitHub issue
### Value / benefit
A few benefits:
- Calendars are easier to track into the future
- We can compare this calendar with our other team calendars, most importantly the **Team Leave** calendar - this could help us spot when people are planning to be away while they are in an important role
- It feels more natural to track dates in a calendar, than in an issue, so it may be more findable.
### Implementation details
_No response_
### Tasks to complete
_No response_
### Updates
_No response_
|
1.0
|
Create a "team roles calendar" - ### Description
We currently use [a team roles meta-issue](https://github.com/2i2c-org/team-compass/issues/294) to track which team members are in each role. However, this is a bit clunky to update, and also doesn't make it easy to see when team members will take on a role in the *future*.
Instead, we can create a Google Calendar that tracks all of our team roles. We can update this instead of a GitHub issue
### Value / benefit
A few benefits:
- Calendars are easier to track into the future
- We can compare this calendar with our other team calendars, most importantly the **Team Leave** calendar - this could help us spot when people are planning to be away while they are in an important role
- It feels more natural to track dates in a calendar, than in an issue, so it may be more findable.
### Implementation details
_No response_
### Tasks to complete
_No response_
### Updates
_No response_
|
process
|
create a team roles calendar description we currently use to track which team members are in each role however this is a bit clunky to update and also doesn t make it easy to see when team members will take on a role in the future instead we can create a google calendar that tracks all of our team roles we can update this instead of a github issue value benefit a few benefits calendars are easier to track into the future we can compare this calendar with our other team calendars most importantly the team leave calendar this could help us spot when people are planning to be away while they are in an important role it feels more natural to track dates in a calendar than in an issue so it may be more findable implementation details no response tasks to complete no response updates no response
| 1
|
7,316
| 10,452,792,990
|
IssuesEvent
|
2019-09-19 15:18:29
|
prisma/lift
|
https://api.github.com/repos/prisma/lift
|
closed
|
`Unexpected token.` when running `prisma2 lift save` after adding custom type definition
|
kind/docs process/next-milestone
|
Since `Long` integers (64-bit) are not implemented yet (see https://github.com/prisma/photonjs/issues/170), I've tried adding a custom type definition to my `schema.prisma` file:
```diff
datasource pg {
provider = "postgresql"
url = env("POSTGRES_URL")
}
generator photonjs {
provider = "photonjs"
}
+ type Long Int @pg.bigint
model User {
id String @id @default(cuid())
name String?
email String @unique
+ likes Long
posts Post[]
}
model Post {
id String @id @default(cuid())
title String
published Boolean @default(false)
author User?
}
```
Then, when running `prisma2 lift save`, I ran into the following error:
# Failed listMigrations at 2019-08-02T19:26:14.924Z
## RPC Input One Line
```json
{"id":1,"jsonrpc":"2.0","method":"listMigrations","params":{"projectInfo":"","sourceConfig":"datasource pg {\n provider = \"postgresql\"\n url = env(\"POSTGRES_URL\")\n}\n\ngenerator photonjs {\n provider = \"photonjs\"\n}\n\ntype Long Int @pg.bigint\n\nmodel User {\n id String @id @default(cuid())\n name String?\n email String @unique\n likes Long\n posts Post[]\n}\n\nmodel Post {\n id String @id @default(cuid())\n title String\n published Boolean @default(false)\n author User?\n}"}}
```
## RPC Input Readable
```json
{
"id": 1,
"jsonrpc": "2.0",
"method": "listMigrations",
"params": {
"projectInfo": "",
"sourceConfig": "datasource pg {\n provider = \"postgresql\"\n url = env(\"POSTGRES_URL\")\n}\n\ngenerator photonjs {\n provider = \"photonjs\"\n}\n\ntype Long Int @pg.bigint\n\nmodel User {\n id String @id @default(cuid())\n name String?\n email String @unique\n likes Long\n posts Post[]\n}\n\nmodel Post {\n id String @id @default(cuid())\n title String\n published Boolean @default(false)\n author User?\n}"
}
}
```
## RPC Response
```
null
```
## Stack Trace
```bash
thread 'main' panicked at 'loading the connector failed.: DataModelErrors { code: 1001, errors: ["Unexpected token. Expected one of: Start of block (\"{\")."] }', src/libcore/result.rs:997:5
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
1: std::sys_common::backtrace::_print
2: std::panicking::default_hook::{{closure}}
3: std::panicking::default_hook
4: std::panicking::rust_panic_with_hook
5: std::panicking::continue_panic_fmt
6: rust_begin_unwind
7: core::panicking::panic_fmt
8: core::result::unwrap_failed
9: migration_core::migration_engine::MigrationEngine::init
10: <F as jsonrpc_core::calls::RpcMethodSimple>::call
11: <F as jsonrpc_core::calls::RpcMethod<T>>::call
12: <futures::future::lazy::Lazy<F,R> as futures::future::Future>::poll
13: <futures::future::then::Then<A,B,F> as futures::future::Future>::poll
14: <futures::future::map::Map<A,F> as futures::future::Future>::poll
15: <futures::future::either::Either<A,B> as futures::future::Future>::poll
16: futures::task_impl::std::set
17: std::thread::local::LocalKey<T>::with
18: futures::future::Future::wait
19: jsonrpc_core::io::IoHandler<M>::handle_request_sync
20: migration_core::rpc_api::RpcApi::handle
21: migration_engine::main
22: std::rt::lang_start::{{closure}}
23: std::panicking::try::do_call
24: __rust_maybe_catch_panic
25: std::rt::lang_start_internal
26: main
```
|
1.0
|
`Unexpected token.` when running `prisma2 lift save` after adding custom type definition - Since `Long` integers (64-bit) are not implemented yet (see https://github.com/prisma/photonjs/issues/170), I've tried adding a custom type definition to my `schema.prisma` file:
```diff
datasource pg {
provider = "postgresql"
url = env("POSTGRES_URL")
}
generator photonjs {
provider = "photonjs"
}
+ type Long Int @pg.bigint
model User {
id String @id @default(cuid())
name String?
email String @unique
+ likes Long
posts Post[]
}
model Post {
id String @id @default(cuid())
title String
published Boolean @default(false)
author User?
}
```
Then, when running `prisma2 lift save`, I ran into the following error:
# Failed listMigrations at 2019-08-02T19:26:14.924Z
## RPC Input One Line
```json
{"id":1,"jsonrpc":"2.0","method":"listMigrations","params":{"projectInfo":"","sourceConfig":"datasource pg {\n provider = \"postgresql\"\n url = env(\"POSTGRES_URL\")\n}\n\ngenerator photonjs {\n provider = \"photonjs\"\n}\n\ntype Long Int @pg.bigint\n\nmodel User {\n id String @id @default(cuid())\n name String?\n email String @unique\n likes Long\n posts Post[]\n}\n\nmodel Post {\n id String @id @default(cuid())\n title String\n published Boolean @default(false)\n author User?\n}"}}
```
## RPC Input Readable
```json
{
"id": 1,
"jsonrpc": "2.0",
"method": "listMigrations",
"params": {
"projectInfo": "",
"sourceConfig": "datasource pg {\n provider = \"postgresql\"\n url = env(\"POSTGRES_URL\")\n}\n\ngenerator photonjs {\n provider = \"photonjs\"\n}\n\ntype Long Int @pg.bigint\n\nmodel User {\n id String @id @default(cuid())\n name String?\n email String @unique\n likes Long\n posts Post[]\n}\n\nmodel Post {\n id String @id @default(cuid())\n title String\n published Boolean @default(false)\n author User?\n}"
}
}
```
## RPC Response
```
null
```
## Stack Trace
```bash
thread 'main' panicked at 'loading the connector failed.: DataModelErrors { code: 1001, errors: ["Unexpected token. Expected one of: Start of block (\"{\")."] }', src/libcore/result.rs:997:5
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
1: std::sys_common::backtrace::_print
2: std::panicking::default_hook::{{closure}}
3: std::panicking::default_hook
4: std::panicking::rust_panic_with_hook
5: std::panicking::continue_panic_fmt
6: rust_begin_unwind
7: core::panicking::panic_fmt
8: core::result::unwrap_failed
9: migration_core::migration_engine::MigrationEngine::init
10: <F as jsonrpc_core::calls::RpcMethodSimple>::call
11: <F as jsonrpc_core::calls::RpcMethod<T>>::call
12: <futures::future::lazy::Lazy<F,R> as futures::future::Future>::poll
13: <futures::future::then::Then<A,B,F> as futures::future::Future>::poll
14: <futures::future::map::Map<A,F> as futures::future::Future>::poll
15: <futures::future::either::Either<A,B> as futures::future::Future>::poll
16: futures::task_impl::std::set
17: std::thread::local::LocalKey<T>::with
18: futures::future::Future::wait
19: jsonrpc_core::io::IoHandler<M>::handle_request_sync
20: migration_core::rpc_api::RpcApi::handle
21: migration_engine::main
22: std::rt::lang_start::{{closure}}
23: std::panicking::try::do_call
24: __rust_maybe_catch_panic
25: std::rt::lang_start_internal
26: main
```
|
process
|
unexpected token when running lift save after adding custom type definition since long integers bit are not implemented yet see i ve tried adding a custom type definition to my schema prisma file diff datasource pg provider postgresql url env postgres url generator photonjs provider photonjs type long int pg bigint model user id string id default cuid name string email string unique likes long posts post model post id string id default cuid title string published boolean default false author user when then running lift save i ran into the following error failed listmigrations at rpc input one line json id jsonrpc method listmigrations params projectinfo sourceconfig datasource pg n provider postgresql n url env postgres url n n ngenerator photonjs n provider photonjs n n ntype long int pg bigint n nmodel user n id string id default cuid n name string n email string unique n likes long n posts post n n nmodel post n id string id default cuid n title string n published boolean default false n author user n rpc input readable json id jsonrpc method listmigrations params projectinfo sourceconfig datasource pg n provider postgresql n url env postgres url n n ngenerator photonjs n provider photonjs n n ntype long int pg bigint n nmodel user n id string id default cuid n name string n email string unique n likes long n posts post n n nmodel post n id string id default cuid n title string n published boolean default false n author user n rpc response null stack trace bash thread main panicked at loading the connector failed datamodelerrors code errors src libcore result rs stack backtrace std sys unix backtrace tracing imp unwind backtrace std sys common backtrace print std panicking default hook closure std panicking default hook std panicking rust panic with hook std panicking continue panic fmt rust begin unwind core panicking panic fmt core result unwrap failed migration core migration engine migrationengine init call call as futures future future poll as futures future future poll as futures future future poll as futures future future poll futures task impl std set std thread local localkey with futures future future wait jsonrpc core io iohandler handle request sync migration core rpc api rpcapi handle migration engine main std rt lang start closure std panicking try do call rust maybe catch panic std rt lang start internal main
| 1
|
195,665
| 6,916,691,193
|
IssuesEvent
|
2017-11-29 04:08:56
|
chef/chef
|
https://api.github.com/repos/chef/chef
|
closed
|
Windows: env LWRP with action :modify overwrites env variables
|
Area: Windows Priority: Medium Type: Bug
|
I've been trying to provision some Windows VMs, and I'm trying to add a folder to the `PATH` environment variable. So far, this has not worked so well. In fact, it seems that chef completely overwrites the environment variables even when I use `action :modify`.
Here is the block I use:
``` ruby
env "Add java binaries to PATH" do
action :modify
key_name "PATH"
value "#{java_home}\\bin"
end
```
Here is how PATH looks before the block runs
```
PS C:\Windows\system32> $Env:PATH
C:\Windows\system32\WindowsPowerShell\v1.0\;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\opscode\chef\bin
```
and here is what it looks like after the block
```
> $Env:PATH
C:\Windows\system32\WindowsPowerShell\v1.0\;C:\softwares\extern\jvm\jdk1.8.0_45\bin
```
It seems that PowerShell always prepends itself to `PATH`, so that's why it's still there.
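A hedged workaround sketch: the `env` resource documents a `delim` property for multi-valued variables, and with it `:modify` should merge the value into the existing one rather than replace it (behavior assumed from the docs, not verified here):
``` ruby
env "Add java binaries to PATH" do
  action :modify
  key_name "PATH"
  # With a delimiter set, :modify appends/merges the value into the
  # existing variable instead of overwriting it (documented delim behavior).
  delim ";"
  value "#{java_home}\\bin"
end
```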
|
1.0
|
Windows: env LWRP with action :modify overwrites env variables - I've been trying to provision some Windows VMs, and I'm trying to add a folder to the `PATH` environment variable. So far, this has not worked so well. In fact, it seems that Chef completely overwrites the environment variables even when I use `action :modify`.
Here is the block I use:
``` ruby
env "Add java binaries to PATH" do
action :modify
key_name "PATH"
value "#{java_home}\\bin"
end
```
Here is what PATH looks like before the block runs
```
PS C:\Windows\system32> $Env:PATH
C:\Windows\system32\WindowsPowerShell\v1.0\;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\opscode\chef\bin
```
and here is what it looks like after the block
```
> $Env:PATH
C:\Windows\system32\WindowsPowerShell\v1.0\;C:\softwares\extern\jvm\jdk1.8.0_45\bin
```
It seems that PowerShell always prepends itself to `PATH`, so that's why it's still there.
|
non_process
|
windows env lwrp with action modify overwrites env variables i ve been trying to provision some windows vms and i m trying to add a folder to the path environment variable so far this has not worked so well in fact it seems that chef completely ovewrites the environment variables even when i use action modify here is the block i use ruby env add java binaries to path do action modify key name path value java home bin end here is how path looks like before the block runs ps c windows env path c windows windowspowershell c programdata oracle java javapath c windows c windows c windows wbem c windows windowspowershell c opscode chef bin and here is how it looks like after the block env path c windows windowspowershell c softwares extern jvm bin it seems that powershell always prepends itself to path so that s why it s still there
| 0
|
21,430
| 29,359,594,885
|
IssuesEvent
|
2023-05-28 00:37:31
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Remote] Senior DevOps Engineer at Coodesh
|
SALVADOR INFRAESTRUTURA PYTHON JIRA SENIOR STARTUP DOCKER KUBERNETES DEVOPS AWS REQUISITOS REMOTO PROCESSOS INOVAÇÃO GITLAB GITHUB SHELL CI CD AZURE SEGURANÇA UMA C R LIDERANÇA ECS VIRTUALIZAÇÃO TERRAFORM MANUTENÇÃO CONFLUENCE GRPC INTELIGÊNCIA ARTIFICIAL PIPELINE BITBUCKET SUPORTE Stale
|
## Job description:
This is an opening from a partner of the Coodesh platform; by applying, you will get full access to the information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/jobs/engenheira-devops-senior-203049581?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p>Kor is looking for a Senior DevOps Engineer to join its team!</p>
<p>We are a LawTech startup growing at a breakneck pace, working on the resolution of judicial and extrajudicial disputes between companies and their consumers, as well as on credit recovery! We always focus on retaining clients and preserving the company's image, and we drive fast, automated negotiation between the parties via Artificial Intelligence: that perfect combination of technological innovation, services, and integration with the business.</p>
<p>Your big mission:</p>
<ul>
<li> Configure and manage the AWS infrastructures for the applications;</li>
<li>Create and maintain infrastructure as code with Terraform;</li>
<li>Monitor the performance and availability of the applications in the AWS cloud;</li>
<li>Build labs, POV (proof of value), and POC (proof of concept);</li>
<li>Manage the infrastructure and monitor the applications, creating continuous integration processes within KOR, being responsible for safeguarding and looking after KOR's AWS cloud architecture;</li>
<li>Work closely with the development team to ensure the integration of the infrastructure and the applications;</li>
<li>Provide technical support to the IT team on matters related to the AWS platform; and</li>
<li>Survey the current scenario (as is).</li>
</ul>
<p>Here you will find:</p>
<ul>
<li>The opportunity to build your story at a HUGE company;</li>
<li>A fantastic environment with no red tape;</li>
<li>HUMANIZED leadership;</li>
<li>Merit-based and humanized growth;</li>
<li>A 100% remote setup;</li>
<li>Working hours from 9:00 am to 6:00 pm.</li>
</ul>
<p>All applications to openings at KOR are considered without distinction of gender, sexual orientation, ethnicity, culture, origin, religion, disability, age, etc. </p>
## KOR Solutions:
<p>KOR helps companies in every industry negotiate countless lawsuits and disputes with their clients, doing so in a convenient, secure, and legally valid way. Once the settlement proposal has been accepted by the opposing party that sued the company, the system automatically generates the settlement draft and the contract is executed. Both parties' signatures can be physical or digital, facilitating the settlement as convenient. All of this can be completed on the same day! Beyond automated negotiation, we offer several other automation, intelligence, and analytics solutions for the remaining stages and processes of companies' legal departments.</p>
</p>
## Skills:
- CI/CD
- AWS
- Docker
- Kubernetes
## Location:
100% Remote
## Requirements:
- Completed degree in Information Systems or Computer Engineering;
- Solid knowledge of deploying and managing AWS infrastructures;
- Advanced knowledge of automation, virtualization, and information security;
- Strong communication and teamwork skills;
- Experience with integrations (gateways and tools, APIs, queues, gRPC);
- Experience with CI/CD (GitHub Actions, Azure DevOps, CircleCI, GitLab, Bitbucket Pipelines, etc.);
- Experience with microservice architectures;
- Experience with containers (Docker, ECS, Kubernetes)
- Experience with code versioning;
- Experience with agile tools (Jira, Confluence, Azure DevOps); and
- Experience with FaaS (AWS Lambda).
## Nice to have:
- Experience developing scripts (Shell, Python, etc.).
## Benefits:
- Gympass.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Senior DevOps Engineer at KOR Solutions](https://coodesh.com/jobs/engenheira-devops-senior-203049581?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow and receive every interaction of the process there. Use the **Request Feedback** option between one stage and the next of the opening you applied to. This will notify the **Recruiter** responsible for the company's hiring process.
## Labels
#### Work arrangement
Remote
#### Category
DevOps
|
1.0
|
[Remote] Senior DevOps Engineer at Coodesh - ## Job description:
This is an opening from a partner of the Coodesh platform; by applying, you will get full access to the information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/jobs/engenheira-devops-senior-203049581?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p>Kor is looking for a Senior DevOps Engineer to join its team!</p>
<p>We are a LawTech startup growing at a breakneck pace, working on the resolution of judicial and extrajudicial disputes between companies and their consumers, as well as on credit recovery! We always focus on retaining clients and preserving the company's image, and we drive fast, automated negotiation between the parties via Artificial Intelligence: that perfect combination of technological innovation, services, and integration with the business.</p>
<p>Your big mission:</p>
<ul>
<li> Configure and manage the AWS infrastructures for the applications;</li>
<li>Create and maintain infrastructure as code with Terraform;</li>
<li>Monitor the performance and availability of the applications in the AWS cloud;</li>
<li>Build labs, POV (proof of value), and POC (proof of concept);</li>
<li>Manage the infrastructure and monitor the applications, creating continuous integration processes within KOR, being responsible for safeguarding and looking after KOR's AWS cloud architecture;</li>
<li>Work closely with the development team to ensure the integration of the infrastructure and the applications;</li>
<li>Provide technical support to the IT team on matters related to the AWS platform; and</li>
<li>Survey the current scenario (as is).</li>
</ul>
<p>Here you will find:</p>
<ul>
<li>The opportunity to build your story at a HUGE company;</li>
<li>A fantastic environment with no red tape;</li>
<li>HUMANIZED leadership;</li>
<li>Merit-based and humanized growth;</li>
<li>A 100% remote setup;</li>
<li>Working hours from 9:00 am to 6:00 pm.</li>
</ul>
<p>All applications to openings at KOR are considered without distinction of gender, sexual orientation, ethnicity, culture, origin, religion, disability, age, etc. </p>
## KOR Solutions:
<p>KOR helps companies in every industry negotiate countless lawsuits and disputes with their clients, doing so in a convenient, secure, and legally valid way. Once the settlement proposal has been accepted by the opposing party that sued the company, the system automatically generates the settlement draft and the contract is executed. Both parties' signatures can be physical or digital, facilitating the settlement as convenient. All of this can be completed on the same day! Beyond automated negotiation, we offer several other automation, intelligence, and analytics solutions for the remaining stages and processes of companies' legal departments.</p>
</p>
## Skills:
- CI/CD
- AWS
- Docker
- Kubernetes
## Location:
100% Remote
## Requirements:
- Completed degree in Information Systems or Computer Engineering;
- Solid knowledge of deploying and managing AWS infrastructures;
- Advanced knowledge of automation, virtualization, and information security;
- Strong communication and teamwork skills;
- Experience with integrations (gateways and tools, APIs, queues, gRPC);
- Experience with CI/CD (GitHub Actions, Azure DevOps, CircleCI, GitLab, Bitbucket Pipelines, etc.);
- Experience with microservice architectures;
- Experience with containers (Docker, ECS, Kubernetes)
- Experience with code versioning;
- Experience with agile tools (Jira, Confluence, Azure DevOps); and
- Experience with FaaS (AWS Lambda).
## Nice to have:
- Experience developing scripts (Shell, Python, etc.).
## Benefits:
- Gympass.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Senior DevOps Engineer at KOR Solutions](https://coodesh.com/jobs/engenheira-devops-senior-203049581?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow and receive every interaction of the process there. Use the **Request Feedback** option between one stage and the next of the opening you applied to. This will notify the **Recruiter** responsible for the company's hiring process.
## Labels
#### Work arrangement
Remote
#### Category
DevOps
|
process
|
devops engineer sênior na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a kor está em busca de devops engineer sênior para compor seu time somos uma startup lawtech em crescimento super acelerado que atua nas resoluções de conflitos judiciais e extrajudiciais entre as empresas e seus consumidores além da recuperação de crédito pensamos sempre na manutenção dos clientes e na preservação da imagem da empresa e promovemos uma rápida negociação automatizada entre as partes via inteligência artificial aquela combinação perfeita entre inovação tecnológica serviços e integração com o negócio sua gigante missão nbsp configurar e gerenciar as infraestruturas aws para as aplicações criar e manter infraestrutura como código terraform monitorar o desempenho e a disponibilidade das aplicações na nuvem aws criar laboratórios pov prova de valor e poc prova de conceito gerenciar a infraestrutura e monitorar as aplicações criando processos de integração continua dentro da kor responsável por guardar e cuidar da arquitetura cloud aws da kor trabalhar em estreita colaboração com o time de desenvolvimento para garantir a integração da infraestrutura e das aplicações oferecer suporte técnico à equipe de ti em questões relacionadas à plataforma aws e realizar levantamento do cenário atual as is por aqui temos oportunidade de construir a sua história em uma empresa gigante ambiente sensacional e sem burocras liderança humanizada crescimento meritocrático e humanizado formato remoto horário das às todas as aplicações de vagas na kor são consideradas sem distinção de gênero orientação sexual etnia cultura origem religião deficiência idade etc nbsp kor solutions a kor auxilia empresas de todos os ramos a negociar incontáveis processos e disputas de seus clientes fazendo isso de forma conveniente segura e com validade jurídica uma vez que a proposta de acordo tenha sido aceita pelo parte contrária que processou a empresa o sistema gera automaticamente a minuta do acordo e o contrato é executado a assinatura de ambas as partes podem ser físicas ou digitais facilitando o acordo conforme conveniência tudo isso pode ser concluído no mesmo dia além da negociação automatizada oferecemos diversas outras soluções de automação inteligência e análise nas demais etapas e processos do setor jurídico das empresas habilidades ci cd aws docker kubernetes local remoto requisitos graduação completa em sistemas da informação ou engenharia da computação conhecimentos sólidos na implantação e gerenciamento de infraestruturas aws conhecimentos avançados em automação virtualização e segurança da informação forte habilidade de comunicação e colaboração em equipe experiência em integrações gateways e ferramentas api filas grpc experiência em ci cd github action azure devops circleci gitlab bitbucket pipeline etc experiência em arquitetura de micro serviços experiência em containers docker ecs kubernetes experiência em versionamento de código experiência em ferramentas ágeis jira confluence azure devops e experiência em faas aws lambda diferenciais experiência em desenvolvimento de scripts shell python etc benefícios gympass como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto categoria devops
| 1
|
130,908
| 10,675,560,789
|
IssuesEvent
|
2019-10-21 11:59:08
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Test failure: ThreadPoolBoundHandleTests/BindHandle_DisposedSyncHandleAsHandle_ThrowsArgumentException
|
area-System.Threading os-windows-uwp test-run-uwp-coreclr
|
Opened on behalf of @Sunny-pu
The test `ThreadPoolBoundHandleTests/BindHandle_DisposedSyncHandleAsHandle_ThrowsArgumentException` has failed.
System.IO.IOException : CreateFile or CreateFile2 failed (error code 5): Access is denied.\r
File name: Overlapped.tmp\r
File path: C:\\dotnetbuild\\work\\4a41c8dd-7ec4-4a82-a405-52940333990d\\Work\\61612780-30ca-4f1a-a0ee-3e3eeff8b096\\Unzip\\Overlapped.tmp\r
Failed to write to the file: System.UnauthorizedAccessException: Access to the path 'C:\\dotnetbuild\\work\\4a41c8dd-7ec4-4a82-a405-52940333990d\\Work\\61612780-30ca-4f1a-a0ee-3e3eeff8b096\\Unzip\\Overlapped.tmp' is denied.\r
at System.IO.FileStream.ValidateFileHandle(SafeFileHandle fileHandle)\r
at System.IO.FileStream.CreateFileOpenHandle(FileMode mode, FileShare share, FileOptions options)\r
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)\r
at System.IO.StreamWriter..ctor(String path, Boolean append, Encoding encoding, Int32 bufferSize)\r
at System.IO.File.WriteAllText(String path, String contents)\r
at HandleFactory.CreateHandle(Boolean async, String fileName) in E:\\A\\_work\\4\\s\\corefx\\src\\System.Threading.Overlapped\\tests\\HandleFactory.cs:line 74
Stack Trace:
at HandleFactory.CreateHandle(Boolean async, String fileName) in E:\A\_work\4\s\corefx\src\System.Threading.Overlapped\tests\HandleFactory.cs:line 63
at ThreadPoolBoundHandleTests.BindHandle_DisposedSyncHandleAsHandle_ThrowsArgumentException()
Build : Master - 20180620.01 (UWP F5 Tests)
Failing configurations:
- Windows.10.Amd64.ClientRS4-x64
- Release
- Windows.10.Arm64-arm
- Release
- Windows.10.Amd64.ClientRS4-x86
- Release
Details: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fuwp~2F/build/20180620.01/workItem/System.Threading.Overlapped.Tests/analysis/xunit/ThreadPoolBoundHandleTests~2FBindHandle_DisposedSyncHandleAsHandle_ThrowsArgumentException
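A hedged reading, not confirmed by this report: `HandleFactory.CreateHandle` writes `Overlapped.tmp` into the test's working directory, which the UWP sandbox can deny. A sketch of the obvious mitigation, with hypothetical naming, writing under the app-writable temp path instead:
```csharp
using System.IO;

static class HandleFactorySketch
{
    // Hypothetical helper: place the scratch file under the temp path,
    // which is writable inside the UWP sandbox, instead of the current
    // working directory that produced the access-denied error above.
    public static string CreateScratchFile()
    {
        string path = Path.Combine(Path.GetTempPath(), "Overlapped.tmp");
        File.WriteAllText(path, "some text");
        return path;
    }
}
```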
|
1.0
|
Test failure: ThreadPoolBoundHandleTests/BindHandle_DisposedSyncHandleAsHandle_ThrowsArgumentException - Opened on behalf of @Sunny-pu
The test `ThreadPoolBoundHandleTests/BindHandle_DisposedSyncHandleAsHandle_ThrowsArgumentException` has failed.
System.IO.IOException : CreateFile or CreateFile2 failed (error code 5): Access is denied.\r
File name: Overlapped.tmp\r
File path: C:\\dotnetbuild\\work\\4a41c8dd-7ec4-4a82-a405-52940333990d\\Work\\61612780-30ca-4f1a-a0ee-3e3eeff8b096\\Unzip\\Overlapped.tmp\r
Failed to write to the file: System.UnauthorizedAccessException: Access to the path 'C:\\dotnetbuild\\work\\4a41c8dd-7ec4-4a82-a405-52940333990d\\Work\\61612780-30ca-4f1a-a0ee-3e3eeff8b096\\Unzip\\Overlapped.tmp' is denied.\r
at System.IO.FileStream.ValidateFileHandle(SafeFileHandle fileHandle)\r
at System.IO.FileStream.CreateFileOpenHandle(FileMode mode, FileShare share, FileOptions options)\r
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)\r
at System.IO.StreamWriter..ctor(String path, Boolean append, Encoding encoding, Int32 bufferSize)\r
at System.IO.File.WriteAllText(String path, String contents)\r
at HandleFactory.CreateHandle(Boolean async, String fileName) in E:\\A\\_work\\4\\s\\corefx\\src\\System.Threading.Overlapped\\tests\\HandleFactory.cs:line 74
Stack Trace:
at HandleFactory.CreateHandle(Boolean async, String fileName) in E:\A\_work\4\s\corefx\src\System.Threading.Overlapped\tests\HandleFactory.cs:line 63
at ThreadPoolBoundHandleTests.BindHandle_DisposedSyncHandleAsHandle_ThrowsArgumentException()
Build : Master - 20180620.01 (UWP F5 Tests)
Failing configurations:
- Windows.10.Amd64.ClientRS4-x64
- Release
- Windows.10.Arm64-arm
- Release
- Windows.10.Amd64.ClientRS4-x86
- Release
Details: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fuwp~2F/build/20180620.01/workItem/System.Threading.Overlapped.Tests/analysis/xunit/ThreadPoolBoundHandleTests~2FBindHandle_DisposedSyncHandleAsHandle_ThrowsArgumentException
|
non_process
|
test failure threadpoolboundhandletests bindhandle disposedsynchandleashandle throwsargumentexception opened on behalf of sunny pu the test threadpoolboundhandletests bindhandle disposedsynchandleashandle throwsargumentexception has failed system io ioexception createfile or failed error code access is denied r file name overlapped tmp r file path c dotnetbuild work work unzip overlapped tmp r failed to write to the file system unauthorizedaccessexception access to the path c dotnetbuild work work unzip overlapped tmp is denied r at system io filestream validatefilehandle safefilehandle filehandle r at system io filestream createfileopenhandle filemode mode fileshare share fileoptions options r at system io filestream ctor string path filemode mode fileaccess access fileshare share buffersize fileoptions options r at system io streamwriter ctor string path boolean append encoding encoding buffersize r at system io file writealltext string path string contents r at handlefactory createhandle boolean async string filename in e a work s corefx src system threading overlapped tests handlefactory cs line stack trace at handlefactory createhandle boolean async string filename in e a work s corefx src system threading overlapped tests handlefactory cs line at threadpoolboundhandletests bindhandle disposedsynchandleashandle throwsargumentexception build master uwp tests failing configurations windows release windows arm release windows release details
| 0
|
70,623
| 30,699,398,734
|
IssuesEvent
|
2023-07-26 21:30:18
|
hashicorp/terraform-provider-azurerm
|
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
|
closed
|
terraform detects change in sql_filter of azurerm_servicebus_subscription_rule on every run
|
bug service/service-bus
|
<!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_servicebus_subscription_rule`
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_servicebus_subscription_rule" "sbsr11" {
name = "rulename"
resource_group_name = azurerm_resource_group.rg.name
namespace_name = azurerm_servicebus_namespace.sb.name
topic_name = azurerm_servicebus_topic.sbt.name
subscription_name = azurerm_servicebus_subscription.sbs.name
filter_type = "SqlFilter"
sql_filter = <<-EOT
[NServiceBus.EnclosedMessageTypes] IN (
'Events.Something.Updated'
)
EOT
}
```
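For what it's worth, a hedged workaround sketch, assuming the cause is that the Service Bus API normalizes whitespace in the stored filter so the heredoc never matches what Azure returns: keep the expression on a single line.
```hcl
resource "azurerm_servicebus_subscription_rule" "sbsr11" {
  name                = "rulename"
  resource_group_name = azurerm_resource_group.rg.name
  namespace_name      = azurerm_servicebus_namespace.sb.name
  topic_name          = azurerm_servicebus_topic.sbt.name
  subscription_name   = azurerm_servicebus_subscription.sbs.name
  filter_type         = "SqlFilter"
  # Single-line filter: avoids whitespace differences against the
  # normalized form the API returns (assumed cause, not verified).
  sql_filter = "[NServiceBus.EnclosedMessageTypes] IN ('Events.Something.Updated')"
}
```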
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behaviour
<!--- What should have happened? --->
Correctly detect no change in `sql_filter`.
### Actual Behaviour
<!--- What actually happened? --->
Terraform detects a change to `sql_filter` whereas there is none.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in a Azure China/Germany/Government? --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation?
--->
* #0000
|
2.0
|
terraform detects change in sql_filter of azurerm_servicebus_subscription_rule on every run - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_servicebus_subscription_rule`
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_servicebus_subscription_rule" "sbsr11" {
name = "rulename"
resource_group_name = azurerm_resource_group.rg.name
namespace_name = azurerm_servicebus_namespace.sb.name
topic_name = azurerm_servicebus_topic.sbt.name
subscription_name = azurerm_servicebus_subscription.sbs.name
filter_type = "SqlFilter"
sql_filter = <<-EOT
[NServiceBus.EnclosedMessageTypes] IN (
'Events.Something.Updated'
)
EOT
}
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behaviour
<!--- What should have happened? --->
Correctly detect no change in `sql_filter`.
### Actual Behaviour
<!--- What actually happened? --->
Terraform detects a change to `sql_filter` whereas there is none.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in a Azure China/Germany/Government? --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation?
--->
* #0000
|
non_process
|
terraform detects change in sql filter of azurerm servicebus subscription rule on every run please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version affected resource s azurerm servicebus subscription rule terraform configuration files hcl resource azurerm servicebus subscription rule name rulename resource group name azurerm resource group rg name namespace name azurerm servicebus namespace sb name topic name azurerm servicebus topic sbt name subscription name azurerm servicebus subscription sbs name filter type sqlfilter sql filter eot in events something updated eot debug output please provide a link to a github gist containing the complete debug output please do not paste the debug output in the issue just paste a link to the gist to obtain the debug output see the panic output expected behaviour correctly detect no change in sql filter actual behaviour terraform detects a change to sql filter whereas there is none steps to reproduce terraform apply important factoids references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here such as vendor documentation
| 0
|
256,714
| 27,561,711,460
|
IssuesEvent
|
2023-03-07 22:41:35
|
samqws-marketing/pinterest_orion
|
https://api.github.com/repos/samqws-marketing/pinterest_orion
|
closed
|
CVE-2021-23368 (Medium) detected in postcss-7.0.21.tgz - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2021-23368 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-7.0.21.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p>
<p>Path to dependency file: /orion-server/src/main/resources/webapp/package.json</p>
<p>Path to vulnerable library: /orion-server/src/main/resources/webapp/node_modules/resolve-url-loader/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.4.tgz (Root Library)
- resolve-url-loader-3.1.2.tgz
- :x: **postcss-7.0.21.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/pinterest_orion/commit/f713a1acc7accd46b2232cbbabae1990941bc416">f713a1acc7accd46b2232cbbabae1990941bc416</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The postcss package from version 7.0.0 and before 8.2.10 is vulnerable to Regular Expression Denial of Service (ReDoS) during source map parsing.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23368>CVE-2021-23368</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution (postcss): 7.0.36</p>
<p>Direct dependency fix Resolution (react-scripts): 4.0.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
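Until react-scripts 4.0.0 can be adopted, a hedged interim pin (assuming Yarn, which honors a `resolutions` field in package.json) would force the patched postcss for the transitive dependency:
```json
{
  "resolutions": {
    "**/postcss": "7.0.36"
  }
}
```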
|
True
|
CVE-2021-23368 (Medium) detected in postcss-7.0.21.tgz - autoclosed - ## CVE-2021-23368 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-7.0.21.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p>
<p>Path to dependency file: /orion-server/src/main/resources/webapp/package.json</p>
<p>Path to vulnerable library: /orion-server/src/main/resources/webapp/node_modules/resolve-url-loader/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.4.tgz (Root Library)
- resolve-url-loader-3.1.2.tgz
- :x: **postcss-7.0.21.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/pinterest_orion/commit/f713a1acc7accd46b2232cbbabae1990941bc416">f713a1acc7accd46b2232cbbabae1990941bc416</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The postcss package from version 7.0.0 and before 8.2.10 is vulnerable to Regular Expression Denial of Service (ReDoS) during source map parsing.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23368>CVE-2021-23368</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution (postcss): 7.0.36</p>
<p>Direct dependency fix Resolution (react-scripts): 4.0.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
non_process
|
cve medium detected in postcss tgz autoclosed cve medium severity vulnerability vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file orion server src main resources webapp package json path to vulnerable library orion server src main resources webapp node modules resolve url loader node modules postcss package json dependency hierarchy react scripts tgz root library resolve url loader tgz x postcss tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package postcss from and before are vulnerable to regular expression denial of service redos during source map parsing publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss direct dependency fix resolution react scripts check this box to open an automated fix pr
| 0
|
16,133
| 20,381,985,051
|
IssuesEvent
|
2022-02-21 23:33:18
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Autocomplete shows invalid suggestions for composite types
|
kind/bug process/candidate team/migrations topic: mongodb
|
### Bug description
Auto-complete suggests that you can use `@id` and `@relation` on composite types.
### How to reproduce
<img width="601" alt="CleanShot 2022-02-21 at 17 31 45@2x" src="https://user-images.githubusercontent.com/170299/155038311-c8e84d38-2629-4464-bb29-069b312b2bdc.png">
We do correctly disallow it once you run `prisma format`
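A minimal schema sketch (the type and field names here are hypothetical) of what the completion wrongly offers:
```prisma
type Address {
  street String
  // Autocomplete suggests attributes such as:
  //   id String @id
  // but `prisma format` rejects @id and @relation inside composite types.
}
```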
### Expected behavior
_No response_
### Prisma information
<!-- Do not include your database credentials when sharing your Prisma schema! -->
### Environment & setup
- OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]-->
- Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]-->
- Node.js version: <!--[Run `node -v` to see your Node.js version]-->
### Prisma Version
```
Environment variables loaded from .env
prisma : 3.10.0-integration-composites.1
@prisma/client : 3.10.0-integration-composites.1
Current platform : darwin
Query Engine (Node-API) : libquery-engine ddc7068ea0a6b8c1ac10c85e13e5e396421b5c6a (at node_modules/@prisma/engines/libquery_engine-darwin.dylib.node)
Migration Engine : migration-engine-cli ddc7068ea0a6b8c1ac10c85e13e5e396421b5c6a (at node_modules/@prisma/engines/migration-engine-darwin)
Introspection Engine : introspection-core ddc7068ea0a6b8c1ac10c85e13e5e396421b5c6a (at node_modules/@prisma/engines/introspection-engine-darwin)
Format Binary : prisma-fmt ddc7068ea0a6b8c1ac10c85e13e5e396421b5c6a (at node_modules/@prisma/engines/prisma-fmt-darwin)
Default Engines Hash : query-engine-composite-basic-api-ddc7068ea0a6b8c1ac10c85e13e5e396421b5c6a
Studio : 0.458.0
Preview Features : mongoDb
```
|
1.0
|
Autocomplete shows invalid suggestions for composite types - ### Bug description
Auto-complete suggests that you can use `@id` and `@relation` on composite types.
### How to reproduce
<img width="601" alt="CleanShot 2022-02-21 at 17 31 45@2x" src="https://user-images.githubusercontent.com/170299/155038311-c8e84d38-2629-4464-bb29-069b312b2bdc.png">
We do correctly disallow it once you run `prisma format`
### Expected behavior
_No response_
### Prisma information
<!-- Do not include your database credentials when sharing your Prisma schema! -->
### Environment & setup
- OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]-->
- Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]-->
- Node.js version: <!--[Run `node -v` to see your Node.js version]-->
### Prisma Version
```
Environment variables loaded from .env
prisma : 3.10.0-integration-composites.1
@prisma/client : 3.10.0-integration-composites.1
Current platform : darwin
Query Engine (Node-API) : libquery-engine ddc7068ea0a6b8c1ac10c85e13e5e396421b5c6a (at node_modules/@prisma/engines/libquery_engine-darwin.dylib.node)
Migration Engine : migration-engine-cli ddc7068ea0a6b8c1ac10c85e13e5e396421b5c6a (at node_modules/@prisma/engines/migration-engine-darwin)
Introspection Engine : introspection-core ddc7068ea0a6b8c1ac10c85e13e5e396421b5c6a (at node_modules/@prisma/engines/introspection-engine-darwin)
Format Binary : prisma-fmt ddc7068ea0a6b8c1ac10c85e13e5e396421b5c6a (at node_modules/@prisma/engines/prisma-fmt-darwin)
Default Engines Hash : query-engine-composite-basic-api-ddc7068ea0a6b8c1ac10c85e13e5e396421b5c6a
Studio : 0.458.0
Preview Features : mongoDb
```
|
process
|
autocomplete shows invalid suggestions for composite types bug description auto complete suggests that you can use id and relation on composite types how to reproduce img width alt cleanshot at src we do correctly disallow it once you run prisma format expected behavior no response prisma information environment setup os database node js version prisma version environment variables loaded from env prisma integration composites prisma client integration composites current platform darwin query engine node api libquery engine at node modules prisma engines libquery engine darwin dylib node migration engine migration engine cli at node modules prisma engines migration engine darwin introspection engine introspection core at node modules prisma engines introspection engine darwin format binary prisma fmt at node modules prisma engines prisma fmt darwin default engines hash query engine composite basic api studio preview features mongodb
| 1