Dataset schema (per-column dtype with value/length statistics):

| Column | Dtype / Kind | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |

Sample rows:
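The `stringclasses` / `stringlengths` figures in the schema are the kinds of summaries pandas can compute directly. A minimal sketch, using a hypothetical two-row frame (values borrowed from the sample rows, not the full dataset):

```python
import pandas as pd

# Tiny stand-in frame with a few of the schema's columns; the values are
# illustrative only, taken from the sample rows shown below.
df = pd.DataFrame({
    "type": ["IssuesEvent", "IssuesEvent"],
    "title": ["Typos in example docs", "Web app sessions don't persist"],
    "binary_label": [1, 0],
})

# "N classes" = number of distinct values in a string column.
n_classes = df["type"].nunique()

# "lengths a to b" = min and max string length in the column.
lo, hi = df["title"].str.len().min(), df["title"].str.len().max()

print(n_classes, lo, hi)
```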
Row: 8,961 · id: 12,069,013,267 · type: IssuesEvent · created_at: 2020-04-16 15:31:23
repo: MicrosoftDocs/azure-docs
repo_url: https://api.github.com/repos/MicrosoftDocs/azure-docs
action: closed
title: Typos in example docs
labels: Pri2 cxp doc-bug machine-learning/svc team-data-science-process/subsvc triaged
body:
1. All of the steps are `1.` instead of 1,2,3,...
2. The variable `LOCALFILE` should be `LOCALFILENAME`
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 2f45a6b5-0fea-7fbb-5d4d-37e0e2583fd7
* Version Independent ID: 7be6f792-09f8-c22f-86b6-d0f690e9b3a4
* Content: [Explore data in Azure blob storage with pandas - Team Data Science Process](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/explore-data-blob)
* Content Source: [articles/machine-learning/team-data-science-process/explore-data-blob.md](https://github.com/Microsoft/azure-docs/blob/master/articles/machine-learning/team-data-science-process/explore-data-blob.md)
* Service: **machine-learning**
* Sub-service: **team-data-science-process**
* GitHub Login: @marktab
* Microsoft Alias: **tdsp**
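The second typo in this issue is a variable rename. A minimal runnable sketch of the pattern the doc page describes (download a blob to a local file, then read it with pandas), with the corrected name; the blob-download call is stubbed out with a small stand-in CSV so the sketch runs end to end:

```python
import pandas as pd

# The doc's flow: download the blob to LOCALFILENAME (the issue's fix:
# LOCALFILENAME, not LOCALFILE), then load it with pandas. The actual
# blob download (get_blob_to_path) is replaced here by writing a tiny
# CSV, purely so the example is self-contained.
LOCALFILENAME = "sample_blob.csv"
with open(LOCALFILENAME, "w") as f:
    f.write("a,b\n1,2\n3,4\n")

dataframe_blobdata = pd.read_csv(LOCALFILENAME)
print(dataframe_blobdata.shape)
```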
index: 1.0
text_combine: title + body (verbatim duplicate of the title and body above)
label: process
text:
typos in example docs all of the steps are instead of the variable localfile should be localfilename document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service machine learning sub service team data science process github login marktab microsoft alias tdsp
binary_label: 1
Row: 42,112 · id: 6,964,248,007 · type: IssuesEvent · created_at: 2017-12-08 20:49:24
repo: 18F/federalist
repo_url: https://api.github.com/repos/18F/federalist
action: closed
title: Add reference to Federalist blog posts on doc site
labels: documentation
body:
- https://18f.gsa.gov/2016/07/11/conversation-about-static-dynamic-websites/
- https://18f.gsa.gov/2016/05/18/why-were-moving-18f-gsa-gov-to-federalist/
- https://18f.gsa.gov/2015/09/15/federalist-platform-launch/ (need to edit out the content editor piece here)
index: 1.0
text_combine: title + body (verbatim duplicate of the title and body above)
label: non_process
text: add reference to federalist blog posts on doc site need to edit out the content editor piece here
binary_label: 0
|
573,355
| 17,023,635,780
|
IssuesEvent
|
2021-07-03 03:02:15
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Vorlagen religion=* und denomination=*
|
Component: potlatch (flash editor) Priority: minor Resolution: fixed Type: enhancement
|
**[Submitted to the original trac issue database at 2.09pm, Monday, 20th September 2010]**
The Potlatch presets for amenity=place_of_worship do not include presets for religion=* and denomination=*. Potlatch does not seem to know these tags at all and does not suggest them during auto-completion.
As a result, some mappers do not enter these important supplementary tags (especially religion).
index: 1.0
text_combine: title + body (verbatim duplicate of the title and body above)
label: non_process
text:
vorlagen religion und denomination in den potlatch vorlagen werden fr amenity place of worship keine vorlagen fr religion und denomination mitgeliefert potlatch scheint diese tags berhaupt nicht zu kennen und schlgt sie beim automatischen vervollstndigen nicht vor das fhrt dazu dass manche mapper diese wichtigen v a religion zusatz tags nicht eintragen
binary_label: 0
Row: 15,660 · id: 19,846,977,687 · type: IssuesEvent · created_at: 2022-01-21 07:54:25
repo: ooi-data/RS01SBPS-PC01A-4A-DOSTAD103-streamed-do_stable_sample
repo_url: https://api.github.com/repos/ooi-data/RS01SBPS-PC01A-4A-DOSTAD103-streamed-do_stable_sample
action: opened
title: 🛑 Processing failed: ValueError
labels: process
body:
## Overview
`ValueError` raised in the `processing_task` task during a run that ended on 2022-01-21T07:54:23.731352.
## Details
Flow name: `RS01SBPS-PC01A-4A-DOSTAD103-streamed-do_stable_sample`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
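The final frame of the traceback can be reproduced in isolation: `zip(*indexer)` over an empty indexer (a selection that matched zero chunks) yields nothing to unpack into the three expected names. A minimal sketch of that failure pattern:

```python
# Minimal reproduction of the failing pattern in zarr's _get_selection:
# an empty indexer gives zip() nothing to produce, so the 3-way
# unpacking raises the ValueError seen in the traceback above.
indexer = []  # e.g. a zero-length append selection (assumed, for illustration)
try:
    lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
except ValueError as err:
    message = str(err)

print(message)
```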
index: 1.0
text_combine: title + body (verbatim duplicate of the title, details, and full traceback above)
label: process
text:
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name streamed do stable sample task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line 
in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
binary_label: 1
Row: 718,035 · id: 24,701,459,547 · type: IssuesEvent · created_at: 2022-10-19 15:34:11
repo: authelia/authelia
repo_url: https://api.github.com/repos/authelia/authelia
action: opened
title: Web app sessions don't persist
labels: priority/4/normal · type/bug/unconfirmed · status/needs-triage
body:
### Version
v4.36.9
### Deployment Method
Bare-metal
### Reverse Proxy
NGINX
### Reverse Proxy Version
1.18.0
### Description
Hi there,
I'm running a Java webapp which uses Apache Shiro for auth. I'm fronting it with nginx and Authelia, and utilising trusted headers to instruct Shiro. The problem is that the Java session appears not to persist between HTTP calls. If I disable Authelia by commenting out the proxy redirect in the nginx site conf it works fine; however, if requests are authed by Authelia the Java session is blank for each request.
### Reproduction
This minimal example also doesn't work, though possibly with a slightly different failure mode to mine? When attempting to log in to the Shiro app it hangs and eventually issues a 500. I'm hoping they're related!
1. git clone https://github.com/apache/shiro.git
2. cd shiro/samples/web
3. mvn jetty:run &
4. Configure /etc/hosts and an nginx SSL virtual host for Shiro, e.g. shiro.example.com
5. Configure user access for the Shiro virtual host in Authelia
6. Navigate to https://shiro.example.com/login.jsp
7. Enter Authelia creds
8. It hangs with a 500 server error
### Expectations
The application should be able to set cookies for its own auth.
### Logs
```shell
Oct 19 14:03:35 test2 authelia[25714]: time="2022-10-19T14:03:35Z" level=warning msg="Session destroyed for user 'simon' after exceeding configured session inactivity and not being marked as remembered" method=GET path=/api/verify remote_ip=10.88.142.1
Oct 19 14:03:35 test2 authelia[25714]: time="2022-10-19T14:03:35Z" level=debug msg="Check authorization of subject username= groups= ip=10.88.142.1 and object https://app.example.com/ (method GET)."
Oct 19 14:03:35 test2 authelia[25714]: time="2022-10-19T14:03:35Z" level=info msg="Access to https://app.example.com/ (method GET) is not authorized to user <anonymous>, responding with status code 401" method=GET path=/api/verify remote_ip=10.88.142.1
Oct 19 14:03:47 test2 authelia[25714]: time="2022-10-19T14:03:47Z" level=debug msg="Mark 1FA authentication attempt made by user 'simon'" method=POST path=/api/firstfactor remote_ip=10.88.142.1
Oct 19 14:03:47 test2 authelia[25714]: time="2022-10-19T14:03:47Z" level=debug msg="Successful 1FA authentication attempt made by user 'simon'" method=POST path=/api/firstfactor remote_ip=10.88.142.1
Oct 19 14:03:47 test2 authelia[25714]: time="2022-10-19T14:03:47Z" level=debug msg="Check authorization of subject username=simon groups=admins,dev ip=10.88.142.1 and object https://app.example.com/ (method )."
Oct 19 14:03:47 test2 authelia[25714]: time="2022-10-19T14:03:47Z" level=debug msg="Required level for the URL https://app.example.com/ is 1" method=POST path=/api/firstfactor remote_ip=10.88.142.1
Oct 19 14:03:47 test2 authelia[25714]: time="2022-10-19T14:03:47Z" level=debug msg="Redirection URL https://app.example.com/ is safe" method=POST path=/api/firstfactor remote_ip=10.88.142.1
Oct 19 14:03:48 test2 authelia[25714]: time="2022-10-19T14:03:48Z" level=debug msg="Check authorization of subject username=simon groups=admins,dev ip=10.88.142.1 and object https://app.example.com/ (method GET)."
Oct 19 14:03:50 test2 authelia[25714]: time="2022-10-19T14:03:50Z" level=debug msg="Check authorization of subject username=simon groups=admins,dev ip=10.88.142.1 and object https://app.example.com/static/bootstrap/css/bootstrap.min.css (method GET)."
Oct 19 14:03:50 test2 authelia[25714]: time="2022-10-19T14:03:50Z" level=debug msg="Check authorization of subject username=simon groups=admins,dev ip=10.88.142.1 and object https://app.example.com/static/bootstrap-ladda/ladda-themeless.min.css (method GET)."
Oct 19 14:03:50 test2 authelia[25714]: time="2022-10-19T14:03:50Z" level=debug msg="Check authorization of subject username=simon groups=admins,dev ip=10.88.142.1 and object https://app.example.com/static/css/font-awesome.min.css (method GET)."
Oct 19 14:03:50 test2 authelia[25714]: time="2022-10-19T14:03:50Z" level=debug msg="Check authorization of subject username=simon groups=admins,dev ip=10.88.142.1 and object https://app.example.com/static/css/ng-table.css (method GET)."
```
### Configuration
```yaml
# yamllint disable rule:comments-indentation
---
theme: light
jwt_secret: a_very_important_secret
default_2fa_method: ""
server:
  host: 0.0.0.0
  port: 9091
  path: ""
  enable_pprof: false
  enable_expvars: false
  disable_healthcheck: false
  headers:
    csp_template: ""
log:
  level: debug
telemetry:
  metrics:
    enabled: false
    address: tcp://0.0.0.0:9959
totp:
  disable: false
  issuer: authelia.com
  algorithm: sha1
  digits: 6
  period: 30
  skew: 1
  secret_size: 32
webauthn:
  disable: false
  timeout: 60s
  display_name: Authelia
  user_verification: preferred
duo_api:
  disable: false
  hostname: api-123456789.example.com
  integration_key: ABCDEF
  secret_key: 1234567890abcdefghifjkl
  enable_self_enrollment: false
ntp:
  address: "time.cloudflare.com:123"
  version: 4
  max_desync: 3s
  disable_startup_check: false
  disable_failure: false
authentication_backend:
  password_reset:
    custom_url: ""
  refresh_interval: 5m
  file:
    path: /etc/authelia/user_database.yml
    password:
      algorithm: argon2id
      iterations: 1
      key_length: 32
      salt_length: 16
      memory: 64
      parallelism: 8
password_policy:
  standard:
    enabled: false
    min_length: 8
    max_length: 0
    require_uppercase: true
    require_lowercase: true
    require_number: true
    require_special: true
access_control:
  default_policy: deny
  rules:
    - domain: 'shiro.example.com'
      policy: one_factor
session:
  name: authelia_session
  domain: example.com
  same_site: lax
  secret: insecure_session_secret123123
  expiration: 1h
  inactivity: 5m
  remember_me_duration: 1M
regulation:
  max_retries: 3
  find_time: 2m
  ban_time: 5m
storage:
  encryption_key: you_must_generate_a_random_string_of_more_than_twenty_chars_and_configure_this
  postgres:
    host: 127.0.0.1
    port: 5432
    database: authelia
    schema: public
    username: authelia
    ## Password can also be set using a secret: https://www.authelia.com/c/secrets
    password: password
    timeout: 5s
    ssl:
      mode: disable
      root_certificate: disable
      certificate: disable
      key: disable
notifier:
  filesystem:
    filename: /etc/authelia/notification.txt
...
```
### Documentation
_No response_
index: 1.0
text_combine: title + body (verbatim duplicate of the title, description, logs, and configuration above)
label: non_process
text:
web app sessions don t persist version deployment method bare metal reverse proxy nginx reverse proxy version description hi there i m running a java webapp which uses apache shiro for auth i m fronting it with nginx and authelia and untilising trusted headers to instruct shiro the problem is that the java session appears to not persist between http calls if i disable authelia by commenting the proxy redirect in the nginx site conf it works fine however if requests are authed by authelia the java session is blank for each request reproduction this minimal example also doesn t work but possibly with a slightly different failure more to mine when attempting to log in to the shiro app it hangs and eventual issues a i m hoping they re related git clone cd shiro samples web mvn jetty run configure etc hosts nginx ssl virtual host for shiro e g shiro example com configure user access for shiro virtual host in authelia navigate enter authelia creds hangs with server error expectations the application should be able to set cookies for it s own auth logs shell oct authelia time level warning msg session destroyed for user simon after exceeding configured session inactivity and not being marked as remembered method get path api verify remote ip oct authelia time level debug msg check authorization of subject username groups ip and object method get oct authelia time level info msg access to method get is not authorized to user responding with status code method get path api verify remote ip oct authelia time level debug msg mark authentication attempt made by user simon method post path api firstfactor remote ip oct authelia time level debug msg successful authentication attempt made by user simon method post path api firstfactor remote ip oct authelia time level debug msg check authorization of subject username simon groups admins dev ip and object method oct authelia time level debug msg required level for the url is method post path api firstfactor remote ip oct authelia 
time level debug msg redirection url is safe method post path api firstfactor remote ip oct authelia time level debug msg check authorization of subject username simon groups admins dev ip and object method get oct authelia time level debug msg check authorization of subject username simon groups admins dev ip and object method get oct authelia time level debug msg check authorization of subject username simon groups admins dev ip and object method get oct authelia time level debug msg check authorization of subject username simon groups admins dev ip and object method get oct authelia time level debug msg check authorization of subject username simon groups admins dev ip and object method get configuration yaml yamllint disable rule comments indentation theme light jwt secret a very important secret default method server host port path enable pprof false enable expvars false disable healthcheck false headers csp template log level debug telemetry metrics enabled false address tcp totp disable false issuer authelia com algorithm digits period skew secret size webauthn disable false timeout display name authelia user verification preferred duo api disable false hostname api example com integration key abcdef secret key enable self enrollment false ntp address time cloudflare com version max desync disable startup check false disable failure false authentication backend password reset custom url refresh interval file path etc authelia user database yml password algorithm iterations key length salt length memory parallelism password policy standard enabled false min length max length require uppercase true require lowercase true require number true require special true access control default policy deny rules domain shiro example com policy one factor session name authelia session domain example com same site lax secret insecure session expiration inactivity remember me duration regulation max retries find time ban time storage encryption key you must generate a 
random string of more than twenty chars and configure this postgres host port database authelia schema public username authelia password can also be set using a secret password password timeout ssl mode disable root certificate disable certificate disable key disable notifier filesystem filename etc authelia notification txt documentation no response
| 0
|
5,276
| 8,065,886,986
|
IssuesEvent
|
2018-08-04 08:11:34
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
NPE when adding a keyref on a DITA element which does not allow one
|
P3 bug preprocess/keyref
|
This is also a question related to the DITA 1.3 specs, so maybe @robander can help a little bit here.
Let's say someone creates a DITA DTD specialization and adds a new element which extends the DITA <p> element. But their new element is allowed to have the keyref attribute on it. The DITA OT processing only expects the keyref attribute to be defined on specific DITA elements, see "org.dita.dost.writer.KeyrefPaser.keyrefInfos". But I do not see anything in the DITA 1.3 specification stating that only certain element types can have the keyref attribute specified on them.
So right now the publishing breaks with a NullPointerException when you have a DITA element with an unexpected keyref set on it.
As a quick experiment, let's say that inside a DITA topic somewhere in a paragraph I add this non-existing DITA element:
<booboo class="- topic/p topic/booboo " keyref="bulb"/>
and then I publish to HTML5 setting the "validate=false" parameter, I will get a NullPointerException reported with DITA OT 3.1 (but also with older DITA OTs):
org.dita.base\build_preprocess.xml:280: java.lang.NullPointerException
at org.dita.dost.writer.KeyrefPaser.processElement(KeyrefPaser.java:478)
at org.dita.dost.writer.KeyrefPaser.startElement(KeyrefPaser.java:468)
at org.xml.sax.helpers.XMLFilterImpl.startElement(Unknown Source)
at org.dita.dost.writer.TopicFragmentFilter.startElement(TopicFragmentFilter.java:79)
at org.dita.dost.writer.ConkeyrefFilter.startElement(ConkeyrefFilter.java:83)
at org.apache.xerces.parsers.AbstractSAXParser.startElement(Unknown Source)
|
1.0
|
NPE when adding a keyref on a DITA element which does not allow one - This is also a question related to the DITA 1.3 specs, so maybe @robander can help a little bit here.
Let's say someone creates a DITA DTD specialization and adds a new element which extends the DITA <p> element. But their new element is allowed to have the keyref attribute on it. The DITA OT processing only expects the keyref attribute to be defined on specific DITA elements, see "org.dita.dost.writer.KeyrefPaser.keyrefInfos". But I do not see anything in the DITA 1.3 specification stating that only certain element types can have the keyref attribute specified on them.
So right now the publishing breaks with a NullPointerException when you have a DITA element with an unexpected keyref set on it.
As a quick experiment, let's say that inside a DITA topic somewhere in a paragraph I add this non-existing DITA element:
<booboo class="- topic/p topic/booboo " keyref="bulb"/>
and then I publish to HTML5 setting the "validate=false" parameter, I will get a NullPointerException reported with DITA OT 3.1 (but also with older DITA OTs):
org.dita.base\build_preprocess.xml:280: java.lang.NullPointerException
at org.dita.dost.writer.KeyrefPaser.processElement(KeyrefPaser.java:478)
at org.dita.dost.writer.KeyrefPaser.startElement(KeyrefPaser.java:468)
at org.xml.sax.helpers.XMLFilterImpl.startElement(Unknown Source)
at org.dita.dost.writer.TopicFragmentFilter.startElement(TopicFragmentFilter.java:79)
at org.dita.dost.writer.ConkeyrefFilter.startElement(ConkeyrefFilter.java:83)
at org.apache.xerces.parsers.AbstractSAXParser.startElement(Unknown Source)
|
process
|
npe when adding a keyref on a dita element which does not allow one this is also a question related to the dita specs so maybe robander can help a little bit here let s say someone creates a dita dtd specialization and adds a new element which extends the dita element but their new element is allowed to have the keyref attribute on it the dita ot processing only expects the keyref attribute to be defined on specific dita elements see org dita dost writer keyrefpaser keyrefinfos but i do not see anything in the dita specification stating that only certain element types can have the keyref attribute specified on them so right now the publishing breaks with a nullpointerexception when you have a dita element with an unexpected keyref set on it as a quick experiment let s say that inside a dita topic somewhere in a paragraph i add this non existing dita element and then i publish to setting the validate false parameter i will get a nullpointerexception reported with dita ot but also with older dita ots org dita base build preprocess xml java lang nullpointerexception at org dita dost writer keyrefpaser processelement keyrefpaser java at org dita dost writer keyrefpaser startelement keyrefpaser java at org xml sax helpers xmlfilterimpl startelement unknown source at org dita dost writer topicfragmentfilter startelement topicfragmentfilter java at org dita dost writer conkeyreffilter startelement conkeyreffilter java at org apache xerces parsers abstractsaxparser startelement unknown source
| 1
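The keyref/NPE record above can be illustrated with a small sketch (hypothetical names, not the actual DITA-OT implementation): per-element keyref metadata lives in a map keyed by DITA class, so an element carrying `keyref` whose class has no entry makes an unguarded lookup fail, while a guarded lookup can fall back to a default.

```python
# Hypothetical sketch of the KeyrefPaser failure mode; the map below is an
# assumption for illustration, not the real keyrefInfos contents.
KEYREF_INFOS = {
    "topic/xref": {"ref_attr": "href"},
    "topic/link": {"ref_attr": "href"},
}

def process_element_unguarded(dita_class: str) -> str:
    # mirrors the NPE: indexing raises for a class with no registered entry
    return KEYREF_INFOS[dita_class]["ref_attr"]

def process_element_guarded(dita_class: str) -> str:
    # defensive variant: fall back to a sensible default instead of failing
    info = KEYREF_INFOS.get(dita_class)
    return info["ref_attr"] if info is not None else "href"
```

A guarded lookup like this would let a specialized element such as `booboo` (class `- topic/p topic/booboo `) be processed with default behavior rather than aborting the build.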
|
4,557
| 7,389,343,361
|
IssuesEvent
|
2018-03-16 08:17:57
|
KetchPartners/kmsprint2
|
https://api.github.com/repos/KetchPartners/kmsprint2
|
opened
|
Create Trigger
|
Axure Prototype Process enhancement
|
# Baseline- Create tasks to initiate activities with key fields.
<img width="363" alt="base4" src="https://user-images.githubusercontent.com/29525920/37510264-eccf1b4a-28d0-11e8-8d0c-fb6c5c8f5db1.png">
|
1.0
|
Create Trigger - # Baseline- Create tasks to initiate activities with key fields.
<img width="363" alt="base4" src="https://user-images.githubusercontent.com/29525920/37510264-eccf1b4a-28d0-11e8-8d0c-fb6c5c8f5db1.png">
|
process
|
create trigger baseline create tasks to initiate activities with key fields img width alt src
| 1
|
266,933
| 20,171,874,432
|
IssuesEvent
|
2022-02-10 11:08:14
|
dianna-ai/dianna
|
https://api.github.com/repos/dianna-ai/dianna
|
closed
|
Make sure all tutorials show up correctly in the docs
|
documentation
|
Add the links to tutorials at the right place
- [x] create links under docs/tutorials
- [x] make sure the header levels are correct
|
1.0
|
Make sure all tutorials show up correctly in the docs - Add the links to tutorials at the right place
- [x] create links under docs/tutorials
- [x] make sure the header levels are correct
|
non_process
|
make sure all tutorials show up correctly in the docs add the links to tutorials at the right place create links under docs tutorials make sure the header levels are correct
| 0
|
536,466
| 15,709,447,106
|
IssuesEvent
|
2021-03-26 22:33:37
|
argoproj/argo-cd
|
https://api.github.com/repos/argoproj/argo-cd
|
closed
|
Create App fails if repo is Helm OCI and revision is a hash
|
bug bug/priority:medium bug/severity:minor component:config-management
|
Checklist:
* [x ] I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
* [x ] I've included steps to reproduce the bug.
* [x ] I've pasted the output of `argocd version`.
**Describe the bug**
Using a hash as the revision number when creating an app from a chart hosted on an OCI registry fails (the error seems to suggest it's interpreting it as a semver range). As you can see below, it works with the same parameters if I provide a semver number.
**To Reproduce**


**Expected behavior**
Should create an app since a hash should suffice as a unique identifier
**Version**

|
1.0
|
Create App fails if repo is Helm OCI and revision is a hash - Checklist:
* [x ] I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
* [x ] I've included steps to reproduce the bug.
* [x ] I've pasted the output of `argocd version`.
**Describe the bug**
Using a hash as the revision number when creating an app from a chart hosted on an OCI registry fails (the error seems to suggest it's interpreting it as a semver range). As you can see below, it works with the same parameters if I provide a semver number.
**To Reproduce**


**Expected behavior**
Should create an app since a hash should suffice as a unique identifier
**Version**

|
non_process
|
create app fails if repo is helm oci and revision is a hash checklist i ve searched in the docs and faq for my answer i ve included steps to reproduce the bug i ve pasted the output of argocd version describe the bug using a hash as the revision number when creating an app from a chart hosted on an oci registry fails the error seems to suggest it s interpreting it as a semver range as you can see below it works with the same parameters if i provide a semver number to reproduce expected behavior should create an app since a hash should suffice as a unique identifier version
| 0
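A plausible reading of the Helm OCI record above is that the revision string is tested against a semver pattern before being considered as a digest. A hedged sketch (not Argo CD's actual revision parsing) of why a hex hash falls through to constraint handling:

```python
import re

# Hypothetical classifier for illustration only: a strict semver pattern
# rejects a commit/chart digest, so a hash-only revision would fall through
# to "constraint" handling and fail resolution against the registry.
SEMVER = re.compile(r"^v?\d+\.\d+\.\d+(?:-[0-9A-Za-z.-]+)?(?:\+[0-9A-Za-z.-]+)?$")
HEX_DIGEST = re.compile(r"^[0-9a-f]{7,64}$")

def classify_revision(rev: str) -> str:
    if SEMVER.match(rev):
        return "semver"
    if HEX_DIGEST.match(rev):
        return "digest"
    return "constraint-or-other"
```

Checking for a digest before applying range semantics would match the reporter's expectation that a hash uniquely identifies a revision.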
|
832,049
| 32,070,660,523
|
IssuesEvent
|
2023-09-25 07:47:32
|
steedos/steedos-platform
|
https://api.github.com/repos/steedos/steedos-platform
|
opened
|
[Bug]: After configuring unpkg with a relative path, the mobile client shows a blank screen
|
bug priority: High
|
### Description

### Steps To Reproduce
1. STEEDOS_UNPKG_URL=/unpkg/
2. Log in with the mobile client
### Version
2.5
|
1.0
|
[Bug]: After configuring unpkg with a relative path, the mobile client shows a blank screen - ### Description

### Steps To Reproduce
1. STEEDOS_UNPKG_URL=/unpkg/
2. Log in with the mobile client
### Version
2.5
|
non_process
|
after configuring unpkg with a relative path the mobile client shows a blank screen description steps to reproduce steedos unpkg url unpkg log in with the mobile client version
| 0
|
19,292
| 25,466,362,809
|
IssuesEvent
|
2022-11-25 05:05:01
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[IDP] [PM] Admin > Edit admin > Edit admin details screen > UI issue
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Login to PM
2. Click on 'Admins' tab
3. Edit admin in the list
4. Enter the phone number (Eg: +919999999999999)
5. Click on 'Save' button and Verify
**AR:** UI issue is observed as attached in the below screenshot
**ER:** Status of the admin should get displayed

|
3.0
|
[IDP] [PM] Admin > Edit admin > Edit admin details screen > UI issue - **Steps:**
1. Login to PM
2. Click on 'Admins' tab
3. Edit admin in the list
4. Enter the phone number (Eg: +919999999999999)
5. Click on 'Save' button and Verify
**AR:** UI issue is observed as attached in the below screenshot
**ER:** Status of the admin should get displayed

|
process
|
admin edit admin edit admin details screen ui issue steps login to pm click on admins tab edit admin in the list enter the phone number eg click on save button and verify ar ui issue is observed as attached in the below screenshot er status of the admin should get displayed
| 1
|
281,004
| 8,689,384,720
|
IssuesEvent
|
2018-12-03 18:33:30
|
googleapis/google-api-python-client
|
https://api.github.com/repos/googleapis/google-api-python-client
|
closed
|
Unable to grant owner access to User members
|
priority: p2 status: blocked type: question
|
I wasn't able to add a new member in GCP (IAM) with the Owner role using the gcloud command. The below command fails:
`gcloud projects add-iam-policy-binding linuxacademy-3 --member user:rohithmn3@gmail.com --role roles/owner`
With the below Error:
```
ERROR: (gcloud.projects.add-iam-policy-binding) INVALID_ARGUMENT: Request contains an invalid argument.
- '@type': type.googleapis.com/google.cloudresourcemanager.v1.ProjectIamPolicyError
member: user:rohithmn3@gmail.com
role: roles/owner
type: SOLO_MUST_INVITE_OWNERS
```
But, the same command works well for other roles like: viewer, browser...! It just doesn't work for "owner". Is there any alternative for this; if yes, How to add this in my Python Code. Please help me here..!
Thank you!
Regards,
Rohith
|
1.0
|
Unable to grant owner access to User members - I wasn't able to add a new member in GCP (IAM) with the Owner role using the gcloud command. The below command fails:
`gcloud projects add-iam-policy-binding linuxacademy-3 --member user:rohithmn3@gmail.com --role roles/owner`
With the below Error:
```
ERROR: (gcloud.projects.add-iam-policy-binding) INVALID_ARGUMENT: Request contains an invalid argument.
- '@type': type.googleapis.com/google.cloudresourcemanager.v1.ProjectIamPolicyError
member: user:rohithmn3@gmail.com
role: roles/owner
type: SOLO_MUST_INVITE_OWNERS
```
But, the same command works well for other roles like: viewer, browser...! It just doesn't work for "owner". Is there any alternative for this; if yes, How to add this in my Python Code. Please help me here..!
Thank you!
Regards,
Rohith
|
non_process
|
unable to grant owner access to user members i wasn t able to add a new member in gcp iam with the owner role using the gcloud command the below command fails gcloud projects add iam policy binding linuxacademy member user gmail com role roles owner with the below error error gcloud projects add iam policy binding invalid argument request contains an invalid argument type type googleapis com google cloudresourcemanager projectiampolicyerror member user gmail com role roles owner type solo must invite owners but the same command works well for other roles like viewer browser it just doesn t work for owner is there any alternative for this if yes how to add this in my python code please help me here thank you regards rohith
| 0
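For the Python side of the IAM question above, the usual pattern with the Cloud Resource Manager API is a read-modify-write of the project policy: fetch it with `projects().getIamPolicy(...)`, merge the binding, then push it back with `projects().setIamPolicy(...)`. The merge step can be sketched as a pure function (hedged: granting `roles/owner` to an out-of-organization account still requires the Console invitation flow, which is what `SOLO_MUST_INVITE_OWNERS` indicates, so this only helps for roles such as `roles/viewer`):

```python
# Hedged sketch of the policy-merge step; the surrounding getIamPolicy /
# setIamPolicy API calls are intentionally omitted.
def add_binding(policy: dict, role: str, member: str) -> dict:
    """Add member to role's binding in an IAM policy dict, creating it if needed."""
    bindings = policy.setdefault("bindings", [])
    binding = next((b for b in bindings if b["role"] == role), None)
    if binding is None:
        binding = {"role": role, "members": []}
        bindings.append(binding)
    if member not in binding["members"]:
        binding["members"].append(member)
    return policy
```

With google-api-python-client this would sit between `getIamPolicy` and `setIamPolicy` on the `cloudresourcemanager` v1 service; the same `SOLO_MUST_INVITE_OWNERS` error would still be returned by the API for an Owner grant to an external Gmail account.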
|
3,710
| 6,732,524,795
|
IssuesEvent
|
2017-10-18 11:52:05
|
lockedata/rcms
|
https://api.github.com/repos/lockedata/rcms
|
opened
|
Manage speaker submission
|
conference team osem processes
|
## Detailed task
- Review submissions
- Accept/reject sessions
- Email speakers
## Assessing the task
Try to perform the task. Use google and the system documentation to help - part of what we're trying to assess how easy it is for people to work out how to do tasks.
Use a 👍 (`:+1:`) reaction to this task if you were able to perform the task. Use a 👎 (`:-1:`) reaction to the task if you could not complete it. Add a reply with any comments or feedback.
## Extra Info
- Site: [osem](https://intense-shore-93790.herokuapp.com/)
- System documentation: [osem docs](http://osem.io/)
- Role: Conference team
- Area: Processes
|
1.0
|
Manage speaker submission - ## Detailed task
- Review submissions
- Accept/reject sessions
- Email speakers
## Assessing the task
Try to perform the task. Use google and the system documentation to help - part of what we're trying to assess how easy it is for people to work out how to do tasks.
Use a 👍 (`:+1:`) reaction to this task if you were able to perform the task. Use a 👎 (`:-1:`) reaction to the task if you could not complete it. Add a reply with any comments or feedback.
## Extra Info
- Site: [osem](https://intense-shore-93790.herokuapp.com/)
- System documentation: [osem docs](http://osem.io/)
- Role: Conference team
- Area: Processes
|
process
|
manage speaker submission detailed task review submissions accept reject sessions email speakers assessing the task try to perform the task use google and the system documentation to help part of what we re trying to assess how easy it is for people to work out how to do tasks use a 👍 reaction to this task if you were able to perform the task use a 👎 reaction to the task if you could not complete it add a reply with any comments or feedback extra info site system documentation role conference team area processes
| 1
|
274,377
| 30,015,706,850
|
IssuesEvent
|
2023-06-26 18:32:15
|
DevSecOpsTrainingAz/epshoppublic
|
https://api.github.com/repos/DevSecOpsTrainingAz/epshoppublic
|
opened
|
UnitTests-1.0.0: 5 vulnerabilities (highest severity is: 7.8)
|
Mend: dependency security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>UnitTests-1.0.0</b></p></summary>
<p></p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/nuget.common/6.2.1/nuget.common.6.2.1.nupkg</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (UnitTests version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-41032](https://www.mend.io/vulnerability-database/CVE-2022-41032) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.8 | nuget.protocol.5.11.0.nupkg | Transitive | N/A* | ❌ |
| [CVE-2018-8292](https://www.mend.io/vulnerability-database/CVE-2018-8292) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | system.net.http.4.3.0.nupkg | Transitive | N/A* | ❌ |
| [CVE-2019-0820](https://www.mend.io/vulnerability-database/CVE-2019-0820) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | system.text.regularexpressions.4.3.0.nupkg | Transitive | N/A* | ❌ |
| [CVE-2023-29337](https://www.mend.io/vulnerability-database/CVE-2023-29337) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.1 | detected in multiple dependencies | Transitive | N/A* | ❌ |
| [CVE-2022-34716](https://www.mend.io/vulnerability-database/CVE-2022-34716) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.9 | system.security.cryptography.xml.4.5.0.nupkg | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-41032</summary>
### Vulnerable Library - <b>nuget.protocol.5.11.0.nupkg</b></p>
<p>NuGet's implementation for interacting with feeds. Contains functionality for all feed types.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg">https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg</a></p>
<p>Path to dependency file: /src/PublicApi/PublicApi.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/nuget.protocol/5.11.0/nuget.protocol.5.11.0.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- Web-1.0.0
- microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg
- microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.entityframeworkcore.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.core.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.templating.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.utils.6.0.7.nupkg
- microsoft.dotnet.scaffolding.shared.6.0.7.nupkg
- nuget.projectmodel.5.11.0.nupkg
- nuget.dependencyresolver.core.5.11.0.nupkg
- :x: **nuget.protocol.5.11.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
NuGet Client Elevation of Privilege Vulnerability.
<p>Publish Date: 2022-10-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41032>CVE-2022-41032</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-11</p>
<p>Fix Resolution: NuGet.CommandLine - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1;NuGet.Commands - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1;NuGet.Protocol - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1
</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2018-8292</summary>
### Vulnerable Library - <b>system.net.http.4.3.0.nupkg</b></p>
<p>Provides a programming interface for modern HTTP applications, including HTTP client components that allow applications to consume web services over HTTP and HTTP components that can be used by both clients and servers for parsing HTTP headers.
</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p>
<p>Path to dependency file: /src/PublicApi/PublicApi.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- microsoft.aspnetcore.mvc.2.2.0.nupkg
- microsoft.aspnetcore.mvc.taghelpers.2.2.0.nupkg
- microsoft.aspnetcore.mvc.razor.2.2.0.nupkg
- microsoft.aspnetcore.mvc.viewfeatures.2.2.0.nupkg
- newtonsoft.json.bson.1.0.1.nupkg
- netstandard.library.1.6.1.nupkg
- :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An information disclosure vulnerability exists in .NET Core when authentication information is inadvertently exposed in a redirect, aka ".NET Core Information Disclosure Vulnerability." This affects .NET Core 2.1, .NET Core 1.0, .NET Core 1.1, PowerShell Core 6.0.
<p>Publish Date: 2018-10-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-8292>CVE-2018-8292</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-10-10</p>
<p>Fix Resolution: System.Net.Http - 4.3.4;Microsoft.PowerShell.Commands.Utility - 6.1.0-rc.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2019-0820</summary>
### Vulnerable Library - <b>system.text.regularexpressions.4.3.0.nupkg</b></p>
<p>Provides the System.Text.RegularExpressions.Regex class, an implementation of a regular expression e...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p>
<p>Path to dependency file: /src/PublicApi/PublicApi.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- microsoft.aspnetcore.mvc.2.2.0.nupkg
- microsoft.aspnetcore.mvc.taghelpers.2.2.0.nupkg
- microsoft.aspnetcore.mvc.razor.2.2.0.nupkg
- microsoft.aspnetcore.mvc.viewfeatures.2.2.0.nupkg
- newtonsoft.json.bson.1.0.1.nupkg
- netstandard.library.1.6.1.nupkg
- system.xml.xdocument.4.3.0.nupkg
- system.xml.readerwriter.4.3.0.nupkg
- :x: **system.text.regularexpressions.4.3.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A denial of service vulnerability exists when .NET Framework and .NET Core improperly process RegEx strings, aka '.NET Framework and .NET Core Denial of Service Vulnerability'. This CVE ID is unique from CVE-2019-0980, CVE-2019-0981.
Mend Note: After conducting further research, Mend has determined that CVE-2019-0820 only affects environments with versions 4.3.0 and 4.3.1 only on netcore50 environment of system.text.regularexpressions.nupkg.
<p>Publish Date: 2019-05-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-0820>CVE-2019-0820</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cmhx-cq75-c4mj">https://github.com/advisories/GHSA-cmhx-cq75-c4mj</a></p>
<p>Release Date: 2019-05-16</p>
<p>Fix Resolution: System.Text.RegularExpressions - 4.3.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2023-29337</summary>
### Vulnerable Libraries - <b>nuget.protocol.5.11.0.nupkg</b>, <b>nuget.common.6.2.1.nupkg</b></p>
<p>
### <b>nuget.protocol.5.11.0.nupkg</b></p>
<p>NuGet's implementation for interacting with feeds. Contains functionality for all feed types.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg">https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg</a></p>
<p>Path to dependency file: /src/PublicApi/PublicApi.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/nuget.protocol/5.11.0/nuget.protocol.5.11.0.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- Web-1.0.0
- microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg
- microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.entityframeworkcore.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.core.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.templating.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.utils.6.0.7.nupkg
- microsoft.dotnet.scaffolding.shared.6.0.7.nupkg
- nuget.projectmodel.5.11.0.nupkg
- nuget.dependencyresolver.core.5.11.0.nupkg
- :x: **nuget.protocol.5.11.0.nupkg** (Vulnerable Library)
### <b>nuget.common.6.2.1.nupkg</b></p>
<p>Common utilities and interfaces for all NuGet libraries.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/nuget.common.6.2.1.nupkg">https://api.nuget.org/packages/nuget.common.6.2.1.nupkg</a></p>
<p>Path to dependency file: /src/PublicApi/PublicApi.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/nuget.common/6.2.1/nuget.common.6.2.1.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- Web-1.0.0
- microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg
- microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg
- nuget.packaging.6.2.1.nupkg
- nuget.configuration.6.2.1.nupkg
- :x: **nuget.common.6.2.1.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
NuGet Client Remote Code Execution Vulnerability
<p>Publish Date: 2023-06-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-29337>CVE-2023-29337</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-6qmf-mmc7-6c2p">https://github.com/advisories/GHSA-6qmf-mmc7-6c2p</a></p>
<p>Release Date: 2023-06-14</p>
<p>Fix Resolution: NuGet.CommandLine - 6.0.5,6.2.4,6.3.3,6.4.2,6.5.1,6.6.1, NuGet.Commands - 6.0.5,6.2.4,6.3.3,6.4.2,6.5.1,6.6.1, NuGet.Common - 6.0.5,6.2.4,6.3.3,6.4.2,6.5.1,6.6.1, NuGet.PackageManagement - 6.0.5,6.2.4,6.3.3,6.4.2,6.5.1,6.6.1, NuGet.Protocol - 6.0.5,6.2.4,6.3.3,6.4.2,6.5.1,6.6.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2022-34716</summary>
### Vulnerable Library - <b>system.security.cryptography.xml.4.5.0.nupkg</b></p>
<p>Provides classes to support the creation and validation of XML digital signatures. The classes in th...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.security.cryptography.xml.4.5.0.nupkg">https://api.nuget.org/packages/system.security.cryptography.xml.4.5.0.nupkg</a></p>
<p>Path to dependency file: /tests/UnitTests/UnitTests.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.security.cryptography.xml/4.5.0/system.security.cryptography.xml.4.5.0.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- microsoft.aspnetcore.mvc.2.2.0.nupkg
- microsoft.aspnetcore.mvc.taghelpers.2.2.0.nupkg
- microsoft.aspnetcore.mvc.razor.2.2.0.nupkg
- microsoft.aspnetcore.mvc.viewfeatures.2.2.0.nupkg
- microsoft.aspnetcore.antiforgery.2.2.0.nupkg
- microsoft.aspnetcore.dataprotection.2.2.0.nupkg
- :x: **system.security.cryptography.xml.4.5.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Microsoft is releasing this security advisory to provide information about a vulnerability in .NET Core 3.1 and .NET 6.0. An information disclosure vulnerability exists in .NET Core 3.1 and .NET 6.0 that could lead to unauthorized access of privileged information.
## Affected software
* Any .NET 6.0 application running on .NET 6.0.7 or earlier.
* Any .NET Core 3.1 application running on .NET Core 3.1.27 or earlier.
## Patches
* If you're using .NET 6.0, you should download and install Runtime 6.0.8 or SDK 6.0.108 (for Visual Studio 2022 v17.1) from https://dotnet.microsoft.com/download/dotnet-core/6.0.
* If you're using .NET Core 3.1, you should download and install Runtime 3.1.28 (for Visual Studio 2019 v16.9) from https://dotnet.microsoft.com/download/dotnet-core/3.1.
<p>Publish Date: 2022-08-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-34716>CVE-2022-34716</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-2m65-m22p-9wjw">https://github.com/advisories/GHSA-2m65-m22p-9wjw</a></p>
<p>Release Date: 2022-08-09</p>
<p>Fix Resolution: Microsoft.AspNetCore.App.Runtime.linux-arm - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.linux-arm64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.linux-musl-arm - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.linux-musl-arm64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.linux-musl-x64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.linux-x64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.osx-x64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.win-arm - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.win-arm64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.win-x64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.win-x86 - 3.1.28,6.0.8;System.Security.Cryptography.Xml - 4.7.1,6.0.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
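The "Fix Resolution" lines in the report above pack several package/version pairs into one string. A small Python sketch for splitting such a line into a package-to-versions mapping — the format assumptions here are inferred from the report text, not from any documented Mend output schema:

```python
def parse_fix_resolution(line):
    """Parse a Mend 'Fix Resolution' string into {package: [fixed versions]}.

    Assumes the ';'-separated 'Name - v1,v2,...' format that most entries in
    this report use; the comma-separated variant ('..., NuGet.Commands - ...')
    is NOT handled by this sketch.
    """
    result = {}
    for entry in line.split(";"):
        entry = entry.strip()
        if " - " not in entry:
            continue  # skip malformed or empty fragments
        name, versions = entry.split(" - ", 1)
        result[name.strip()] = [v.strip() for v in versions.split(",")]
    return result
```

For example, `parse_fix_resolution("System.Net.Http - 4.3.4;Microsoft.PowerShell.Commands.Utility - 6.1.0-rc.1")` maps each package name to its list of fixed versions.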
|
True
|
UnitTests-1.0.0: 5 vulnerabilities (highest severity is: 7.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>UnitTests-1.0.0</b></p></summary>
<p></p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/nuget.common/6.2.1/nuget.common.6.2.1.nupkg</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (UnitTests version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-41032](https://www.mend.io/vulnerability-database/CVE-2022-41032) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.8 | nuget.protocol.5.11.0.nupkg | Transitive | N/A* | ❌ |
| [CVE-2018-8292](https://www.mend.io/vulnerability-database/CVE-2018-8292) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | system.net.http.4.3.0.nupkg | Transitive | N/A* | ❌ |
| [CVE-2019-0820](https://www.mend.io/vulnerability-database/CVE-2019-0820) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | system.text.regularexpressions.4.3.0.nupkg | Transitive | N/A* | ❌ |
| [CVE-2023-29337](https://www.mend.io/vulnerability-database/CVE-2023-29337) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.1 | detected in multiple dependencies | Transitive | N/A* | ❌ |
| [CVE-2022-34716](https://www.mend.io/vulnerability-database/CVE-2022-34716) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.9 | system.security.cryptography.xml.4.5.0.nupkg | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-41032</summary>
### Vulnerable Library - <b>nuget.protocol.5.11.0.nupkg</b></p>
<p>NuGet's implementation for interacting with feeds. Contains functionality for all feed types.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg">https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg</a></p>
<p>Path to dependency file: /src/PublicApi/PublicApi.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/nuget.protocol/5.11.0/nuget.protocol.5.11.0.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- Web-1.0.0
- microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg
- microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.entityframeworkcore.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.core.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.templating.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.utils.6.0.7.nupkg
- microsoft.dotnet.scaffolding.shared.6.0.7.nupkg
- nuget.projectmodel.5.11.0.nupkg
- nuget.dependencyresolver.core.5.11.0.nupkg
- :x: **nuget.protocol.5.11.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
NuGet Client Elevation of Privilege Vulnerability.
<p>Publish Date: 2022-10-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41032>CVE-2022-41032</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
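The metric breakdowns in these reports map to a CVSS v3.0 base score via the FIRST formula. A hedged Python sketch of that calculation, covering only the Scope: Unchanged case used by every vector in this report (weights taken from the public specification):

```python
import math

# CVSS v3.0 base-metric weights (subset; values from the FIRST specification).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (Scope: Unchanged)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}               # Confidentiality / Integrity / Availability

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.0 base score for Scope: Unchanged vectors only."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # "Round up" to one decimal place, as the specification requires.
    return math.ceil(min(impact + exploitability, 10.0) * 10) / 10
```

For CVE-2022-41032 (AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H) this yields 7.8, matching the score reported above.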
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-11</p>
<p>Fix Resolution: NuGet.CommandLine - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1;NuGet.Commands - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1;NuGet.Protocol - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1
</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2018-8292</summary>
### Vulnerable Library - <b>system.net.http.4.3.0.nupkg</b></p>
<p>Provides a programming interface for modern HTTP applications, including HTTP client components that allow applications to consume web services over HTTP and HTTP components that can be used by both clients and servers for parsing HTTP headers.
</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p>
<p>Path to dependency file: /src/PublicApi/PublicApi.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- microsoft.aspnetcore.mvc.2.2.0.nupkg
- microsoft.aspnetcore.mvc.taghelpers.2.2.0.nupkg
- microsoft.aspnetcore.mvc.razor.2.2.0.nupkg
- microsoft.aspnetcore.mvc.viewfeatures.2.2.0.nupkg
- newtonsoft.json.bson.1.0.1.nupkg
- netstandard.library.1.6.1.nupkg
- :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An information disclosure vulnerability exists in .NET Core when authentication information is inadvertently exposed in a redirect, aka ".NET Core Information Disclosure Vulnerability." This affects .NET Core 2.1, .NET Core 1.0, .NET Core 1.1, PowerShell Core 6.0.
<p>Publish Date: 2018-10-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-8292>CVE-2018-8292</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-10-10</p>
<p>Fix Resolution: System.Net.Http - 4.3.4;Microsoft.PowerShell.Commands.Utility - 6.1.0-rc.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2019-0820</summary>
### Vulnerable Library - <b>system.text.regularexpressions.4.3.0.nupkg</b></p>
<p>Provides the System.Text.RegularExpressions.Regex class, an implementation of a regular expression e...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p>
<p>Path to dependency file: /src/PublicApi/PublicApi.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- microsoft.aspnetcore.mvc.2.2.0.nupkg
- microsoft.aspnetcore.mvc.taghelpers.2.2.0.nupkg
- microsoft.aspnetcore.mvc.razor.2.2.0.nupkg
- microsoft.aspnetcore.mvc.viewfeatures.2.2.0.nupkg
- newtonsoft.json.bson.1.0.1.nupkg
- netstandard.library.1.6.1.nupkg
- system.xml.xdocument.4.3.0.nupkg
- system.xml.readerwriter.4.3.0.nupkg
- :x: **system.text.regularexpressions.4.3.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A denial of service vulnerability exists when .NET Framework and .NET Core improperly process RegEx strings, aka '.NET Framework and .NET Core Denial of Service Vulnerability'. This CVE ID is unique from CVE-2019-0980, CVE-2019-0981.
Mend Note: After conducting further research, Mend has determined that CVE-2019-0820 only affects environments with versions 4.3.0 and 4.3.1 only on netcore50 environment of system.text.regularexpressions.nupkg.
<p>Publish Date: 2019-05-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-0820>CVE-2019-0820</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cmhx-cq75-c4mj">https://github.com/advisories/GHSA-cmhx-cq75-c4mj</a></p>
<p>Release Date: 2019-05-16</p>
<p>Fix Resolution: System.Text.RegularExpressions - 4.3.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2023-29337</summary>
### Vulnerable Libraries - <b>nuget.protocol.5.11.0.nupkg</b>, <b>nuget.common.6.2.1.nupkg</b></p>
<p>
### <b>nuget.protocol.5.11.0.nupkg</b></p>
<p>NuGet's implementation for interacting with feeds. Contains functionality for all feed types.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg">https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg</a></p>
<p>Path to dependency file: /src/PublicApi/PublicApi.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/nuget.protocol/5.11.0/nuget.protocol.5.11.0.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- Web-1.0.0
- microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg
- microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.entityframeworkcore.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.core.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.templating.6.0.7.nupkg
- microsoft.visualstudio.web.codegeneration.utils.6.0.7.nupkg
- microsoft.dotnet.scaffolding.shared.6.0.7.nupkg
- nuget.projectmodel.5.11.0.nupkg
- nuget.dependencyresolver.core.5.11.0.nupkg
- :x: **nuget.protocol.5.11.0.nupkg** (Vulnerable Library)
### <b>nuget.common.6.2.1.nupkg</b></p>
<p>Common utilities and interfaces for all NuGet libraries.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/nuget.common.6.2.1.nupkg">https://api.nuget.org/packages/nuget.common.6.2.1.nupkg</a></p>
<p>Path to dependency file: /src/PublicApi/PublicApi.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/nuget.common/6.2.1/nuget.common.6.2.1.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- Web-1.0.0
- microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg
- microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg
- nuget.packaging.6.2.1.nupkg
- nuget.configuration.6.2.1.nupkg
- :x: **nuget.common.6.2.1.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
NuGet Client Remote Code Execution Vulnerability
<p>Publish Date: 2023-06-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-29337>CVE-2023-29337</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-6qmf-mmc7-6c2p">https://github.com/advisories/GHSA-6qmf-mmc7-6c2p</a></p>
<p>Release Date: 2023-06-14</p>
<p>Fix Resolution: NuGet.CommandLine - 6.0.5,6.2.4,6.3.3,6.4.2,6.5.1,6.6.1, NuGet.Commands - 6.0.5,6.2.4,6.3.3,6.4.2,6.5.1,6.6.1, NuGet.Common - 6.0.5,6.2.4,6.3.3,6.4.2,6.5.1,6.6.1, NuGet.PackageManagement - 6.0.5,6.2.4,6.3.3,6.4.2,6.5.1,6.6.1, NuGet.Protocol - 6.0.5,6.2.4,6.3.3,6.4.2,6.5.1,6.6.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2022-34716</summary>
### Vulnerable Library - <b>system.security.cryptography.xml.4.5.0.nupkg</b></p>
<p>Provides classes to support the creation and validation of XML digital signatures. The classes in th...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.security.cryptography.xml.4.5.0.nupkg">https://api.nuget.org/packages/system.security.cryptography.xml.4.5.0.nupkg</a></p>
<p>Path to dependency file: /tests/UnitTests/UnitTests.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.security.cryptography.xml/4.5.0/system.security.cryptography.xml.4.5.0.nupkg</p>
<p>
Dependency Hierarchy:
- UnitTests-1.0.0 (Root Library)
- microsoft.aspnetcore.mvc.2.2.0.nupkg
- microsoft.aspnetcore.mvc.taghelpers.2.2.0.nupkg
- microsoft.aspnetcore.mvc.razor.2.2.0.nupkg
- microsoft.aspnetcore.mvc.viewfeatures.2.2.0.nupkg
- microsoft.aspnetcore.antiforgery.2.2.0.nupkg
- microsoft.aspnetcore.dataprotection.2.2.0.nupkg
- :x: **system.security.cryptography.xml.4.5.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DevSecOpsTrainingAz/epshoppublic/commit/87e4ac53883973c1d1a552462c45e3a8f0a183a2">87e4ac53883973c1d1a552462c45e3a8f0a183a2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Microsoft is releasing this security advisory to provide information about a vulnerability in .NET Core 3.1 and .NET 6.0. An information disclosure vulnerability exists in .NET Core 3.1 and .NET 6.0 that could lead to unauthorized access of privileged information.
## Affected software
* Any .NET 6.0 application running on .NET 6.0.7 or earlier.
* Any .NET Core 3.1 application running on .NET Core 3.1.27 or earlier.
## Patches
* If you're using .NET 6.0, you should download and install Runtime 6.0.8 or SDK 6.0.108 (for Visual Studio 2022 v17.1) from https://dotnet.microsoft.com/download/dotnet-core/6.0.
* If you're using .NET Core 3.1, you should download and install Runtime 3.1.28 (for Visual Studio 2019 v16.9) from https://dotnet.microsoft.com/download/dotnet-core/3.1.
<p>Publish Date: 2022-08-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-34716>CVE-2022-34716</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-2m65-m22p-9wjw">https://github.com/advisories/GHSA-2m65-m22p-9wjw</a></p>
<p>Release Date: 2022-08-09</p>
<p>Fix Resolution: Microsoft.AspNetCore.App.Runtime.linux-arm - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.linux-arm64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.linux-musl-arm - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.linux-musl-arm64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.linux-musl-x64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.linux-x64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.osx-x64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.win-arm - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.win-arm64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.win-x64 - 3.1.28,6.0.8;Microsoft.AspNetCore.App.Runtime.win-x86 - 3.1.28,6.0.8;System.Security.Cryptography.Xml - 4.7.1,6.0.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
non_process
|
unittests vulnerabilities highest severity is vulnerable library unittests path to vulnerable library home wss scanner nuget packages nuget common nuget common nupkg found in head commit a href vulnerabilities cve severity cvss dependency type fixed in unittests version remediation available high nuget protocol nupkg transitive n a high system net http nupkg transitive n a high system text regularexpressions nupkg transitive n a high detected in multiple dependencies transitive n a medium system security cryptography xml nupkg transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the details section below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library nuget protocol nupkg nuget s implementation for interacting with feeds contains functionality for all feed types library home page a href path to dependency file src publicapi publicapi csproj path to vulnerable library home wss scanner nuget packages nuget protocol nuget protocol nupkg dependency hierarchy unittests root library web microsoft visualstudio web codegeneration design nupkg microsoft visualstudio web codegenerators mvc nupkg microsoft visualstudio web codegeneration nupkg microsoft visualstudio web codegeneration entityframeworkcore nupkg microsoft visualstudio web codegeneration core nupkg microsoft visualstudio web codegeneration templating nupkg microsoft visualstudio web codegeneration utils nupkg microsoft dotnet scaffolding shared nupkg nuget projectmodel nupkg nuget dependencyresolver core nupkg x nuget protocol nupkg vulnerable library found in head commit a href found in base branch main vulnerability details nuget client elevation of privilege vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality 
impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution nuget commandline nuget commands nuget protocol step up your open source security game with mend cve vulnerable library system net http nupkg provides a programming interface for modern http applications including http client components that allow applications to consume web services over http and http components that can be used by both clients and servers for parsing http headers library home page a href path to dependency file src publicapi publicapi csproj path to vulnerable library home wss scanner nuget packages system net http system net http nupkg dependency hierarchy unittests root library microsoft aspnetcore mvc nupkg microsoft aspnetcore mvc taghelpers nupkg microsoft aspnetcore mvc razor nupkg microsoft aspnetcore mvc viewfeatures nupkg newtonsoft json bson nupkg netstandard library nupkg x system net http nupkg vulnerable library found in head commit a href found in base branch main vulnerability details an information disclosure vulnerability exists in net core when authentication information is inadvertently exposed in a redirect aka net core information disclosure vulnerability this affects net core net core net core powershell core publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution system net http microsoft powershell commands utility rc step up your open source security game with mend cve vulnerable library system text regularexpressions nupkg provides the system text regularexpressions regex class an implementation of a regular expression e library 
home page a href path to dependency file src publicapi publicapi csproj path to vulnerable library home wss scanner nuget packages system text regularexpressions system text regularexpressions nupkg dependency hierarchy unittests root library microsoft aspnetcore mvc nupkg microsoft aspnetcore mvc taghelpers nupkg microsoft aspnetcore mvc razor nupkg microsoft aspnetcore mvc viewfeatures nupkg newtonsoft json bson nupkg netstandard library nupkg system xml xdocument nupkg system xml readerwriter nupkg x system text regularexpressions nupkg vulnerable library found in head commit a href found in base branch main vulnerability details a denial of service vulnerability exists when net framework and net core improperly process regex strings aka net framework and net core denial of service vulnerability this cve id is unique from cve cve mend note after conducting further research mend has determined that cve only affects environments with versions and only on environment of system text regularexpressions nupkg publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution system text regularexpressions step up your open source security game with mend cve vulnerable libraries nuget protocol nupkg nuget common nupkg nuget protocol nupkg nuget s implementation for interacting with feeds contains functionality for all feed types library home page a href path to dependency file src publicapi publicapi csproj path to vulnerable library home wss scanner nuget packages nuget protocol nuget protocol nupkg dependency hierarchy unittests root library web microsoft visualstudio web codegeneration design nupkg microsoft visualstudio web 
codegenerators mvc nupkg microsoft visualstudio web codegeneration nupkg microsoft visualstudio web codegeneration entityframeworkcore nupkg microsoft visualstudio web codegeneration core nupkg microsoft visualstudio web codegeneration templating nupkg microsoft visualstudio web codegeneration utils nupkg microsoft dotnet scaffolding shared nupkg nuget projectmodel nupkg nuget dependencyresolver core nupkg x nuget protocol nupkg vulnerable library nuget common nupkg common utilities and interfaces for all nuget libraries library home page a href path to dependency file src publicapi publicapi csproj path to vulnerable library home wss scanner nuget packages nuget common nuget common nupkg dependency hierarchy unittests root library web microsoft visualstudio web codegeneration design nupkg microsoft visualstudio web codegenerators mvc nupkg nuget packaging nupkg nuget configuration nupkg x nuget common nupkg vulnerable library found in head commit a href found in base branch main vulnerability details nuget client remote code execution vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution nuget commandline nuget commands nuget common nuget packagemanagement nuget protocol step up your open source security game with mend cve vulnerable library system security cryptography xml nupkg provides classes to support the creation and validation of xml digital signatures the classes in th library home page a href path to dependency file tests unittests unittests csproj path to vulnerable library home wss scanner nuget packages system security cryptography xml system security cryptography xml nupkg dependency 
hierarchy unittests root library microsoft aspnetcore mvc nupkg microsoft aspnetcore mvc taghelpers nupkg microsoft aspnetcore mvc razor nupkg microsoft aspnetcore mvc viewfeatures nupkg microsoft aspnetcore antiforgery nupkg microsoft aspnetcore dataprotection nupkg x system security cryptography xml nupkg vulnerable library found in head commit a href found in base branch main vulnerability details microsoft is releasing this security advisory to provide information about a vulnerability in net core and net an information disclosure vulnerability exists in net core and net that could lead to unauthorized access of privileged information affected software any net application running on net or earlier any net core applicaiton running on net core or earlier patches if you re using net you should download and install runtime or sdk for visual studio from if you re using net core you should download and install runtime for visual studio from publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution microsoft aspnetcore app runtime linux arm microsoft aspnetcore app runtime linux microsoft aspnetcore app runtime linux musl arm microsoft aspnetcore app runtime linux musl microsoft aspnetcore app runtime linux musl microsoft aspnetcore app runtime linux microsoft aspnetcore app runtime osx microsoft aspnetcore app runtime win arm microsoft aspnetcore app runtime win microsoft aspnetcore app runtime win microsoft aspnetcore app runtime win system security cryptography xml step up your open source security game with mend
| 0
|
6,249
| 9,210,202,980
|
IssuesEvent
|
2019-03-09 02:35:24
|
Remosy/DropTheGame
|
https://api.github.com/repos/Remosy/DropTheGame
|
opened
|
Questions for week1-2
|
Inprocessing meeting
|
- What does the IH exploration space look like?
- How to use RL in the IH exploration space?
- How to control 2 agents?
- How to use an RL agent to play the game?
- How can image classification help with RL?
- Do I really need to use OpenCV?
- Which labels do I need (agents' movementPath/AllPic/actions)?
- What are the differences between RL and imitation learning (Inverse RL)?
|
1.0
|
Questions for week1-2 - - What does the IH exploration space look like?
- How to use RL in the IH exploration space?
- How to control 2 agents?
- How to use an RL agent to play the game?
- How can image classification help with RL?
- Do I really need to use OpenCV?
- Which labels do I need (agents' movementPath/AllPic/actions)?
- What are the differences between RL and imitation learning (Inverse RL)?
|
process
|
questions for what does the ih exploration space look like how to use rl in the ih exploration space how to control agents how to use an rl agent to play the game how can image classification help with rl do i really need to use opencv which labels do i need agents movementpath allpic actions what are the differences between rl and imitation learning inverse rl
| 1
|
28,761
| 11,688,249,052
|
IssuesEvent
|
2020-03-05 14:14:47
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
RememberMe token should be hashed in the database
|
Feature Security help wanted
|
**Symfony version(s) affected**: any (since 2.0)
**Description**
Symfony security supports two different types of rememberme cookies. Persistent rememberme tokens are stored in the database to prevent identity theft. They store a _series_ (sort of like a username for your token) and the _token_ (sort of your password). For each series/username, the token is changed on every successful rememberme login.
If someone gets access to a database dump for any reason, he can possibly manually add a rememberme cookie to his browser and be authenticated by Symfony automatically. To prevent such an attack, it is recommended to hash the token value in the database (see [How to Secure Long-Term Authentication](https://paragonie.com/blog/2015/04/secure-authentication-php-with-long-term-persistence#secure-remember-me-cookies) article by Paragon IE).
This is currently not done in Symfony, and adding it would generally improve security.
**How to reproduce**
This is more of a theoretical issue in the security implementation, reproducing it would require manipulating your browser cookies.
**Possible Solution**
The `PersistentTokenBasedRememberMeServices` implementation should be adjusted to store hashed passwords and authenticate for them.
**Additional context**
I'm very much willing to create a PR for this if the Symfony security experts agree that things should be changed (and for which version). We also need to decide on whether all existing rememberme cookies will become invalid after this implementation or whether we should continue to support unencrypted database values (until the user has logged in once and the token is updated).
|
True
|
RememberMe token should be hashed in the database - **Symfony version(s) affected**: any (since 2.0)
**Description**
Symfony security supports two different types of rememberme cookies. Persistent rememberme tokens are stored in the database to prevent identity theft. They store a _series_ (sort of like a username for your token) and the _token_ (sort of your password). For each series/username, the token is changed on every successful rememberme login.
If someone gets access to a database dump for any reason, he can possibly manually add a rememberme cookie to his browser and be authenticated by Symfony automatically. To prevent such an attack, it is recommended to hash the token value in the database (see [How to Secure Long-Term Authentication](https://paragonie.com/blog/2015/04/secure-authentication-php-with-long-term-persistence#secure-remember-me-cookies) article by Paragon IE).
This is currently not done in Symfony, and adding it would generally improve security.
**How to reproduce**
This is more of a theoretical issue in the security implementation, reproducing it would require manipulating your browser cookies.
**Possible Solution**
The `PersistentTokenBasedRememberMeServices` implementation should be adjusted to store hashed passwords and authenticate for them.
**Additional context**
I'm very much willing to create a PR for this if the Symfony security experts agree that things should be changed (and for which version). We also need to decide on whether all existing rememberme cookies will become invalid after this implementation or whether we should continue to support unencrypted database values (until the user has logged in once and the token is updated).
|
non_process
|
rememberme token should be hashed in the database symfony version s affected any since description symfony security supports two different types of rememberme cookies persistent rememberme tokens are stored in the database to prevent identity theft they store a series sort of like a username for your token and the token sort of your password for each series username the token is changed on every successful rememberme login if someone gets access to a database dump for any reason he can possibly manually add a rememberme cookie to his browser and be authenticated by symfony automatically to prevent such an attack it is recommended to hash the token value in the database see article by paragon ie this is currently not done in symfony and adding it would generally improve security how to reproduce this is more of a theoretical issue in the security implementation reproducing it would require manipulating your browser cookies possible solution the persistenttokenbasedremembermeservices implementation should be adjusted to store hashed passwords and authenticate for them additional context i m very much willing to create a pr for this if the symfony security experts agree that things should be changed and for which version we also need to decide on whether all existing rememberme cookies will become invalid after this implementation or we should continue to support unencrypted database values until the user has logged in once and the token is updated
| 0
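The token-hashing scheme proposed in the RememberMe record above can be sketched in Python. The function names and the series/token layout here are illustrative assumptions, not Symfony's actual `PersistentTokenBasedRememberMeServices` API:

```python
import hashlib
import hmac
import secrets

def issue_token():
    """Create a series/token pair; only the token's hash is persisted."""
    series = secrets.token_urlsafe(16)   # public identifier, stored as-is
    token = secrets.token_urlsafe(32)    # secret, sent to the browser in the cookie
    stored_hash = hashlib.sha256(token.encode()).hexdigest()
    return series, token, stored_hash    # persist (series, stored_hash) in the DB

def verify_token(presented_token, stored_hash):
    """Hash the cookie value and compare in constant time."""
    candidate = hashlib.sha256(presented_token.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_hash)
```

With this layout a database dump exposes only digests, so an attacker cannot reconstruct a valid rememberme cookie from the dump alone.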
|
394
| 2,842,097,041
|
IssuesEvent
|
2015-05-28 07:01:23
|
ChelseaStats/issues
|
https://api.github.com/repos/ChelseaStats/issues
|
closed
|
Managerial merry-go round
|
to process ★ priority-medium
|
2015-05-26 Di Matteo resigned from Schalke
2015-05-26 Jokanovic sacked by Watford
2015-05-25 Ancelotti sacked by Real Madrid
2015-05-25 Paul Clement sacked by Real Madrid
|
1.0
|
Managerial merry-go round - 2015-05-26 Di Matteo resigned from Schalke
2015-05-26 Jokanovic sacked by Watford
2015-05-25 Ancelotti sacked by Real Madrid
2015-05-25 Paul Clement sacked by Real Madrid
|
process
|
managerial merry go round di matteo resigned from schalke jokanovic sacked by watford ancelotti sacked by real madrid paul clement sacked by real madrid
| 1
|
282,240
| 21,315,474,432
|
IssuesEvent
|
2022-04-16 07:35:39
|
zzkzzzz/pe
|
https://api.github.com/repos/zzkzzzz/pe
|
opened
|
Not enough visuals for export and import
|
severity.Medium type.DocumentationBug
|
The import and export features require additional knowledge of the computer system. The user might not know what `FILEPATH`, `Relative filepath`, or `Absolute filepath` mean, and so might not understand how to use them.
<!--session: 1650042395412-d97f8b6b-a194-4f39-b260-a68fef853117-->
<!--Version: Web v3.4.2-->
|
1.0
|
Not enough visuals for export and import - The import and export features require additional knowledge of the computer system. The user might not know what `FILEPATH`, `Relative filepath`, or `Absolute filepath` mean, and so might not understand how to use them.
<!--session: 1650042395412-d97f8b6b-a194-4f39-b260-a68fef853117-->
<!--Version: Web v3.4.2-->
|
non_process
|
not enough visuals for export and import the import and export features require additional knowledge of the computer system the user might not know what filepath or relative filepath or absolute filepath mean and so might not understand how to use them
| 0
|
218,429
| 24,369,597,369
|
IssuesEvent
|
2022-10-03 18:03:47
|
inmar/twine.js
|
https://api.github.com/repos/inmar/twine.js
|
closed
|
CVE-2022-0722 (High) detected in parse-url-6.0.0.tgz - autoclosed
|
security vulnerability
|
## CVE-2022-0722 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-6.0.0.tgz</b></p></summary>
<p>An advanced url parser supporting git urls too.</p>
<p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz">https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/parse-url/package.json</p>
<p>
Dependency Hierarchy:
- lerna-4.0.0.tgz (Root Library)
- version-4.0.0.tgz
- github-client-4.0.0.tgz
- git-url-parse-11.5.0.tgz
- git-up-4.0.5.tgz
- :x: **parse-url-6.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/inmar/twine.js/commit/fd7dc082fc728a62fc1459d0d1ba2fb8410cdfc6">fd7dc082fc728a62fc1459d0d1ba2fb8410cdfc6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository ionicabizau/parse-url prior to 7.0.0.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0722>CVE-2022-0722</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/2490ef6d-5577-4714-a4dd-9608251b4226">https://huntr.dev/bounties/2490ef6d-5577-4714-a4dd-9608251b4226</a></p>
<p>Release Date: 2022-06-27</p>
<p>Fix Resolution (parse-url): 6.0.3</p>
<p>Direct dependency fix Resolution (lerna): 5.0.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
True
|
CVE-2022-0722 (High) detected in parse-url-6.0.0.tgz - autoclosed - ## CVE-2022-0722 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-6.0.0.tgz</b></p></summary>
<p>An advanced url parser supporting git urls too.</p>
<p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz">https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/parse-url/package.json</p>
<p>
Dependency Hierarchy:
- lerna-4.0.0.tgz (Root Library)
- version-4.0.0.tgz
- github-client-4.0.0.tgz
- git-url-parse-11.5.0.tgz
- git-up-4.0.5.tgz
- :x: **parse-url-6.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/inmar/twine.js/commit/fd7dc082fc728a62fc1459d0d1ba2fb8410cdfc6">fd7dc082fc728a62fc1459d0d1ba2fb8410cdfc6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository ionicabizau/parse-url prior to 7.0.0.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0722>CVE-2022-0722</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/2490ef6d-5577-4714-a4dd-9608251b4226">https://huntr.dev/bounties/2490ef6d-5577-4714-a4dd-9608251b4226</a></p>
<p>Release Date: 2022-06-27</p>
<p>Fix Resolution (parse-url): 6.0.3</p>
<p>Direct dependency fix Resolution (lerna): 5.0.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_process
|
cve high detected in parse url tgz autoclosed cve high severity vulnerability vulnerable library parse url tgz an advanced url parser supporting git urls too library home page a href path to dependency file package json path to vulnerable library node modules parse url package json dependency hierarchy lerna tgz root library version tgz github client tgz git url parse tgz git up tgz x parse url tgz vulnerable library found in head commit a href found in base branch master vulnerability details exposure of sensitive information to an unauthorized actor in github repository ionicabizau parse url prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution parse url direct dependency fix resolution lerna rescue worker helmet automatic remediation is available for this issue
| 0
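The 7.5 base score listed in the record above can be reproduced from its metrics with the CVSS 3.1 equations. The numeric weights below are the published values for the unchanged-scope vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N; this is a sketch for that one case, not a general scorer:

```python
import math

# CVSS 3.1 weights for the metrics listed in the record (scope unchanged)
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85  # Network / Low / None / None
C, I, A = 0.56, 0.0, 0.0                 # Confidentiality High, others None

def roundup(value):
    """CVSS 3.1 Roundup: smallest number with one decimal place >= value."""
    scaled = round(value * 100000)
    if scaled % 10000 == 0:
        return scaled / 100000.0
    return (math.floor(scaled / 10000) + 1) / 10.0

iss = 1 - (1 - C) * (1 - I) * (1 - A)        # impact sub-score seed
impact = 6.42 * iss                           # unchanged-scope impact
exploitability = 8.22 * AV * AC * PR * UI
base_score = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
# base_score → 7.5, matching the record
```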
|
11,318
| 14,137,149,347
|
IssuesEvent
|
2020-11-10 06:04:04
|
googleapis/java-logging-logback
|
https://api.github.com/repos/googleapis/java-logging-logback
|
closed
|
readme regeneration CI is failing continuously since 22nd of October
|
api: logging priority: p1 type: process
|
cloud-devrel/client-libraries/java/java-logging-logback/continuous/readme CI is failing
|
1.0
|
readme regeneration CI is failing continuously since 22nd of October - cloud-devrel/client-libraries/java/java-logging-logback/continuous/readme CI is failing
|
process
|
readme regeneration ci is failing continuously since of october cloud devrel client libraries java java logging logback continuous readme ci is failing
| 1
|
2,495
| 5,268,268,886
|
IssuesEvent
|
2017-02-05 09:16:40
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[subtitles] [eng] Les 10 premières mesures de Mélenchon à l'Élysée
|
Language: English Process: [6] Approved
|
# Video title
Les 10 premières mesures de Mélenchon à l'Élysée
# URL
https://www.youtube.com/watch?v=brJu7qMJdVc
# Youtube subtitles language
Anglais
# Duration
6:48
# Subtitles URL
https://www.youtube.com/timedtext_editor?bl=vmp&tab=captions&lang=en&v=brJu7qMJdVc&ref=player&action_mde_edit_form=1
|
1.0
|
[subtitles] [eng] Les 10 premières mesures de Mélenchon à l'Élysée - # Video title
Les 10 premières mesures de Mélenchon à l'Élysée
# URL
https://www.youtube.com/watch?v=brJu7qMJdVc
# Youtube subtitles language
Anglais
# Duration
6:48
# Subtitles URL
https://www.youtube.com/timedtext_editor?bl=vmp&tab=captions&lang=en&v=brJu7qMJdVc&ref=player&action_mde_edit_form=1
|
process
|
les premières mesures de mélenchon à l élysée video title les premières mesures de mélenchon à l élysée url youtube subtitles language anglais duration subtitles url
| 1
|
363,397
| 25,448,911,158
|
IssuesEvent
|
2022-11-24 08:56:26
|
cadburry6969/qb-cooldown
|
https://api.github.com/repos/cadburry6969/qb-cooldown
|
closed
|
Documentation
|
documentation
|
Is there any way to get more documentation on this? I want to implement this across store robbery and jewelry robbery, etc.
|
1.0
|
Documentation - Is there any way to get more documentation on this? I want to implement this across store robbery and jewelry robbery, etc.
|
non_process
|
documentation is there any way to get more documentation on this i want to implement this across store robbery and jewelry robbery etc
| 0
|
171,042
| 27,052,872,911
|
IssuesEvent
|
2023-02-13 14:23:19
|
CMPUT301W23T47/Canary
|
https://api.github.com/repos/CMPUT301W23T47/Canary
|
closed
|
UI: Lo-Fi Mockup for viewing Other Player's profile
|
design
|
Design a Lo-Fi Mockup for viewing Other Player's profile
Add a view for searching Other Player's profile using Username
|
1.0
|
UI: Lo-Fi Mockup for viewing Other Player's profile - Design a Lo-Fi Mockup for viewing Other Player's profile
Add a view for searching Other Player's profile using Username
|
non_process
|
ui lo fi mockup for viewing other player s profile design a lo fi mockup for viewing other player s profile add view for searching other player s profile using username
| 0
|
7,439
| 10,551,287,212
|
IssuesEvent
|
2019-10-03 13:02:53
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
search yields a deleted task result
|
2.0.7 Process bug Search
|
go to tasks
create a task, name it and then delete it
search for it
it still shows up in search after being deleted

|
1.0
|
search yields a deleted task result - go to tasks
create a task, name it and then delete it
search for it
it still shows up in search after being deleted

|
process
|
search yields a deleted task result go to tasks create a task name it and then delete it search for it it still shows up in search after being deleted
| 1
|
17,213
| 22,820,643,527
|
IssuesEvent
|
2022-07-12 01:42:11
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Algorithm "Difference" failing to execute properly
|
Processing Bug
|
### What is the bug or the crash?
The algorithm "Difference", used to extract features from an input layer that fall outside of features in the overlay layer has not been working properly since at least v3.16.4 of QGIS. I've attached a representative log from QGIS v3.26.0 of a failed attempt at extracting features of a point layer that fall outside of a polygon layer.
* macOS Big Sur v11.6.7
* Macbook Pro 16" 2019

### Steps to reproduce the issue
1. Vector... Geoprocessing Tools... Difference
2. Choose point vector input layer (EPSG:26916)
3. Choose polygon vector overlay layer (EPSG:26916)
4. Click "run"
### Versions
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css">
p, li { white-space: pre-wrap; }
</style></head><body>
QGIS version | 3.26.0-Buenos Aires | QGIS code revision | 0aece2818e
-- | -- | -- | --
Qt version | 5.15.2
Python version | 3.9.5
GDAL/OGR version | 3.3.2
PROJ version | 8.1.1
EPSG Registry database version | v10.028 (2021-07-07)
GEOS version | 3.9.1-CAPI-1.14.2
SQLite version | 3.35.2
PDAL version | 2.3.0
PostgreSQL client version | unknown
SpatiaLite version | 5.0.1
QWT version | 6.1.6
QScintilla2 version | 2.11.5
OS version | macOS 11.6
| | |
Active Python plugins
Multi_Ring_Buffer | 1.1
mmqgis | 2020.1.16
processing | 2.12.99
sagaprovider | 2.12.99
grassprovider | 2.12.99
db_manager | 0.1.20
MetaSearch | 0.3.6
</body></html>QGIS version
3.26.0-Buenos Aires
QGIS code revision
[0aece2818e](https://github.com/qgis/QGIS/commit/0aece2818e)
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.3.2
PROJ version
8.1.1
EPSG Registry database version
v10.028 (2021-07-07)
GEOS version
3.9.1-CAPI-1.14.2
SQLite version
3.35.2
PDAL version
2.3.0
PostgreSQL client version
unknown
SpatiaLite version
5.0.1
QWT version
6.1.6
QScintilla2 version
2.11.5
OS version
macOS 11.6
Active Python plugins
Multi_Ring_Buffer
1.1
mmqgis
2020.1.16
processing
2.12.99
sagaprovider
2.12.99
grassprovider
2.12.99
db_manager
0.1.20
MetaSearch
0.3.6
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
The problem appeared in all versions subsequent to v3.10.6.
|
1.0
|
Algorithm "Difference" failing to execute properly - ### What is the bug or the crash?
The algorithm "Difference", used to extract features from an input layer that fall outside of features in the overlay layer has not been working properly since at least v3.16.4 of QGIS. I've attached a representative log from QGIS v3.26.0 of a failed attempt at extracting features of a point layer that fall outside of a polygon layer.
* macOS Big Sur v11.6.7
* Macbook Pro 16" 2019

### Steps to reproduce the issue
1. Vector... Geoprocessing Tools... Difference
2. Choose point vector input layer (EPSG:26916)
3. Choose polygon vector overlay layer (EPSG:26916)
4. Click "run"
### Versions
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css">
p, li { white-space: pre-wrap; }
</style></head><body>
QGIS version | 3.26.0-Buenos Aires | QGIS code revision | 0aece2818e
-- | -- | -- | --
Qt version | 5.15.2
Python version | 3.9.5
GDAL/OGR version | 3.3.2
PROJ version | 8.1.1
EPSG Registry database version | v10.028 (2021-07-07)
GEOS version | 3.9.1-CAPI-1.14.2
SQLite version | 3.35.2
PDAL version | 2.3.0
PostgreSQL client version | unknown
SpatiaLite version | 5.0.1
QWT version | 6.1.6
QScintilla2 version | 2.11.5
OS version | macOS 11.6
| | |
Active Python plugins
Multi_Ring_Buffer | 1.1
mmqgis | 2020.1.16
processing | 2.12.99
sagaprovider | 2.12.99
grassprovider | 2.12.99
db_manager | 0.1.20
MetaSearch | 0.3.6
</body></html>QGIS version
3.26.0-Buenos Aires
QGIS code revision
[0aece2818e](https://github.com/qgis/QGIS/commit/0aece2818e)
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.3.2
PROJ version
8.1.1
EPSG Registry database version
v10.028 (2021-07-07)
GEOS version
3.9.1-CAPI-1.14.2
SQLite version
3.35.2
PDAL version
2.3.0
PostgreSQL client version
unknown
SpatiaLite version
5.0.1
QWT version
6.1.6
QScintilla2 version
2.11.5
OS version
macOS 11.6
Active Python plugins
Multi_Ring_Buffer
1.1
mmqgis
2020.1.16
processing
2.12.99
sagaprovider
2.12.99
grassprovider
2.12.99
db_manager
0.1.20
MetaSearch
0.3.6
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
The problem appeared in all versions subsequent to v3.10.6.
|
process
|
algorithm difference failing to execute properly what is the bug or the crash the algorithm difference used to extract features from an input layer that fall outside of features in the overlay layer has not been working properly since at least of qgis i ve attached a representative log from qgis of a failed attempt at extracting features of a point layer that fall outside of a polygon layer macos big sur macbook pro steps to reproduce the issue vector geoprocessing tools difference choose point vector input layer epsg choose polygon vector overlay layer epsg click run versions doctype html public dtd html en p li white space pre wrap qgis version buenos aires qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version unknown spatialite version qwt version version os version macos active python plugins multi ring buffer mmqgis processing sagaprovider grassprovider db manager metasearch qgis version buenos aires qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version unknown spatialite version qwt version version os version macos active python plugins multi ring buffer mmqgis processing sagaprovider grassprovider db manager metasearch supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context the problem appeared in all versions subsequent to
| 1
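What the "Difference" algorithm in the QGIS report above should produce for a point input layer can be sketched in pure Python: keep only the points that fall outside the overlay polygon. The ray-casting test and the sample square below are illustrative, not QGIS's implementation:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: count edge crossings of a ray cast from pt to the right."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal line through pt
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def difference(points, polygon):
    """Mimic vector Difference for a point layer: drop points covered by the overlay."""
    return [p for p in points if not point_in_polygon(p, polygon)]
```

In QGIS itself this corresponds roughly to running the `native:difference` processing algorithm on the point layer with the polygon layer as overlay.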
|
69,392
| 14,988,507,057
|
IssuesEvent
|
2021-01-29 01:25:18
|
Omni3Tech/corda
|
https://api.github.com/repos/Omni3Tech/corda
|
opened
|
CVE-2020-9546 (High) detected in jackson-databind-2.9.7.jar, jackson-databind-2.8.4.jar
|
security vulnerability
|
## CVE-2020-9546 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.7.jar</b>, <b>jackson-databind-2.8.4.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.9.7.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: corda/tools/demobench/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.7/e6faad47abd3179666e89068485a1b88a195ceb7/jackson-databind-2.9.7.jar,canner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.7/e6faad47abd3179666e89068485a1b88a195ceb7/jackson-databind-2.9.7.jar</p>
<p>
Dependency Hierarchy:
- jersey-media-json-jackson-2.25.jar (Root Library)
- jackson-jaxrs-json-provider-2.8.4.jar
- :x: **jackson-databind-2.9.7.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: corda/testing/testserver/build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20210129003010_HURYRX/downloadResource_DWADGL/20210129011449/jackson-databind-2.8.4.jar</p>
<p>
Dependency Hierarchy:
- jersey-media-json-jackson-2.25.jar (Root Library)
- jackson-jaxrs-json-provider-2.8.4.jar
- :x: **jackson-databind-2.8.4.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Omni3Tech/corda/commit/29c33d3b0ae2ca5fdb1be95ae420943d69013d34">29c33d3b0ae2ca5fdb1be95ae420943d69013d34</a></p>
<p>Found in base branch: <b>release/os/4.8</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.hadoop.shaded.com.zaxxer.hikari.HikariConfig (aka shaded hikari-config).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9546>CVE-2020-9546</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-9546 (High) detected in jackson-databind-2.9.7.jar, jackson-databind-2.8.4.jar - ## CVE-2020-9546 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.7.jar</b>, <b>jackson-databind-2.8.4.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.9.7.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: corda/tools/demobench/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.7/e6faad47abd3179666e89068485a1b88a195ceb7/jackson-databind-2.9.7.jar,canner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.7/e6faad47abd3179666e89068485a1b88a195ceb7/jackson-databind-2.9.7.jar</p>
<p>
Dependency Hierarchy:
- jersey-media-json-jackson-2.25.jar (Root Library)
- jackson-jaxrs-json-provider-2.8.4.jar
- :x: **jackson-databind-2.9.7.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: corda/testing/testserver/build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20210129003010_HURYRX/downloadResource_DWADGL/20210129011449/jackson-databind-2.8.4.jar</p>
<p>
Dependency Hierarchy:
- jersey-media-json-jackson-2.25.jar (Root Library)
- jackson-jaxrs-json-provider-2.8.4.jar
- :x: **jackson-databind-2.8.4.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Omni3Tech/corda/commit/29c33d3b0ae2ca5fdb1be95ae420943d69013d34">29c33d3b0ae2ca5fdb1be95ae420943d69013d34</a></p>
<p>Found in base branch: <b>release/os/4.8</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.hadoop.shaded.com.zaxxer.hikari.HikariConfig (aka shaded hikari-config).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9546>CVE-2020-9546</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar jackson databind jar cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file corda tools demobench build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar canner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy jersey media json jackson jar root library jackson jaxrs json provider jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file corda testing testserver build gradle path to vulnerable library tmp ws ua huryrx downloadresource dwadgl jackson databind jar dependency hierarchy jersey media json jackson jar root library jackson jaxrs json provider jar x jackson databind jar vulnerable library found in head commit a href found in base branch release os vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache hadoop shaded com zaxxer hikari hikariconfig aka shaded hikari config publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
18,695
| 24,595,369,027
|
IssuesEvent
|
2022-10-14 07:51:45
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[FHIR > Questionnaire response > Multiple records are getting created for one activity for the same run
|
Bug P1 Response datastore Process: Fixed Process: Tested dev
|
AR: Questionnaire response > Multiple records are getting created for one activity for the same run
ER: Questionnaire response > Multiple records should not get created for one activity for the same run

|
2.0
|
[FHIR > Questionnaire response > Multiple records are getting created for one activity for the same run - AR: Questionnaire response > Multiple records are getting created for one activity for the same run
ER: Questionnaire response > Multiple records should not get created for one activity for the same run

|
process
|
fhir questionnaire response multiple records are getting created for one activity for the same run ar questionnaire response multiple records are getting created for one activity for the same run er questionnaire response multiple records should not get created for one activity for the same run
| 1
|
22,002
| 30,504,730,254
|
IssuesEvent
|
2023-07-18 16:03:17
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
opened
|
Release Checklist 0.85
|
enhancement process
|
### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.85.0)
- [x] GitHub checks for branch are passing
- [x] No pre-release or snapshot dependencies present in build files
- [x] Automated Kubernetes deployment successful
- [ ] Tag release
- [ ] Upload release artifacts
- [ ] Manual Submission for GCP Marketplace verification by google
- [ ] Publish marketplace release
- [ ] Publish release
## Performance
- [ ] Deployed
- [ ] gRPC API performance tests
- [ ] Importer performance tests
- [ ] REST API performance tests
## Previewnet
- [ ] Deployed
## Staging
- [ ] Deployed
## Testnet
- [ ] Deployed
## Mainnet
- [ ] Deployed to public
- [ ] Deployed to private
### Alternatives
_No response_
|
1.0
|
Release Checklist 0.85 - ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.85.0)
- [x] GitHub checks for branch are passing
- [x] No pre-release or snapshot dependencies present in build files
- [x] Automated Kubernetes deployment successful
- [ ] Tag release
- [ ] Upload release artifacts
- [ ] Manual Submission for GCP Marketplace verification by google
- [ ] Publish marketplace release
- [ ] Publish release
## Performance
- [ ] Deployed
- [ ] gRPC API performance tests
- [ ] Importer performance tests
- [ ] REST API performance tests
## Previewnet
- [ ] Deployed
## Staging
- [ ] Deployed
## Testnet
- [ ] Deployed
## Mainnet
- [ ] Deployed to public
- [ ] Deployed to private
### Alternatives
_No response_
|
process
|
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on relevant nothing open for github checks for branch are passing no pre release or snapshot dependencies present in build files automated kubernetes deployment successful tag release upload release artifacts manual submission for gcp marketplace verification by google publish marketplace release publish release performance deployed grpc api performance tests importer performance tests rest api performance tests previewnet deployed staging deployed testnet deployed mainnet deployed to public deployed to private alternatives no response
| 1
|
8,505
| 11,686,156,911
|
IssuesEvent
|
2020-03-05 10:22:26
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
opened
|
Put `@prisma/sdk` version in lockstep with `prisma2`
|
kind/improvement process/candidate
|
And at the same time lock down the version of the binaries delivered with that package, the same way we do it in `prisma2`. The reason is very simple: Tools that depend on `@prisma/sdk` right now need a complex setup to pin the binary version.
|
1.0
|
Put `@prisma/sdk` version in lockstep with `prisma2` - And at the same time lock down the version of the binaries delivered with that package, the same way we do it in `prisma2`. The reason is very simple: Tools that depend on `@prisma/sdk` right now need a complex setup to pin the binary version.
|
process
|
put prisma sdk version in lockstep with and at the same time lock down the version of the binaries delivered with that package the same way we do it in the reason is very simple tools that depend on prisma sdk right now need a complex setup to pin the binary version
| 1
|
8,140
| 11,348,661,991
|
IssuesEvent
|
2020-01-24 01:15:38
|
AcademySoftwareFoundation/OpenCue
|
https://api.github.com/repos/AcademySoftwareFoundation/OpenCue
|
opened
|
Achieve "passing" level of CII badge
|
process
|
OpenCue CII badge is located at https://bestpractices.coreinfrastructure.org/en/projects/2837.
We should achieve a "passing" grade for our badge, for general project health and to satisfy the ASWF graduation requirement.
|
1.0
|
Achieve "passing" level of CII badge - OpenCue CII badge is located at https://bestpractices.coreinfrastructure.org/en/projects/2837.
We should achieve a "passing" grade for our badge, for general project health and to satisfy the ASWF graduation requirement.
|
process
|
achieve passing level of cii badge opencue cii badge is located at we should achieve a passing grade for our badge for general project health and to satisfy the aswf graduation requirement
| 1
|
139,579
| 18,853,669,392
|
IssuesEvent
|
2021-11-12 01:28:43
|
jiw065/Springboot-demo
|
https://api.github.com/repos/jiw065/Springboot-demo
|
opened
|
CVE-2020-11112 (High) detected in jackson-databind-2.9.8.jar
|
security vulnerability
|
## CVE-2020-11112 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /Springboot-demo/spring-boot-demo/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.proxy.provider.remoting.RmiProvider (aka apache/commons-proxy).
<p>Publish Date: 2020-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11112>CVE-2020-11112</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112</a></p>
<p>Release Date: 2020-03-31</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-11112 (High) detected in jackson-databind-2.9.8.jar - ## CVE-2020-11112 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /Springboot-demo/spring-boot-demo/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.proxy.provider.remoting.RmiProvider (aka apache/commons-proxy).
<p>Publish Date: 2020-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11112>CVE-2020-11112</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112</a></p>
<p>Release Date: 2020-03-31</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file springboot demo spring boot demo pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons proxy provider remoting rmiprovider aka apache commons proxy publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
381,442
| 11,276,542,132
|
IssuesEvent
|
2020-01-14 23:34:01
|
googleapis/google-api-java-client-services
|
https://api.github.com/repos/googleapis/google-api-java-client-services
|
closed
|
Synthesis failed for cloudiot
|
autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate cloudiot. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Checking out files: 24% (15877/65361)
Checking out files: 25% (16341/65361)
Checking out files: 26% (16994/65361)
Checking out files: 27% (17648/65361)
Checking out files: 28% (18302/65361)
Checking out files: 29% (18955/65361)
Checking out files: 30% (19609/65361)
Checking out files: 31% (20262/65361)
Checking out files: 32% (20916/65361)
Checking out files: 33% (21570/65361)
Checking out files: 34% (22223/65361)
Checking out files: 35% (22877/65361)
Checking out files: 36% (23530/65361)
Checking out files: 37% (24184/65361)
Checking out files: 38% (24838/65361)
Checking out files: 39% (25491/65361)
Checking out files: 40% (26145/65361)
Checking out files: 41% (26799/65361)
Checking out files: 42% (27452/65361)
Checking out files: 43% (28106/65361)
Checking out files: 44% (28759/65361)
Checking out files: 45% (29413/65361)
Checking out files: 46% (30067/65361)
Checking out files: 47% (30720/65361)
Checking out files: 48% (31374/65361)
Checking out files: 49% (32027/65361)
Checking out files: 50% (32681/65361)
Checking out files: 51% (33335/65361)
Checking out files: 52% (33988/65361)
Checking out files: 53% (34642/65361)
Checking out files: 53% (34761/65361)
Checking out files: 54% (35295/65361)
Checking out files: 55% (35949/65361)
Checking out files: 56% (36603/65361)
Checking out files: 57% (37256/65361)
Checking out files: 58% (37910/65361)
Checking out files: 59% (38563/65361)
Checking out files: 60% (39217/65361)
Checking out files: 61% (39871/65361)
Checking out files: 62% (40524/65361)
Checking out files: 63% (41178/65361)
Checking out files: 64% (41832/65361)
Checking out files: 65% (42485/65361)
Checking out files: 66% (43139/65361)
Checking out files: 67% (43792/65361)
Checking out files: 68% (44446/65361)
Checking out files: 69% (45100/65361)
Checking out files: 70% (45753/65361)
Checking out files: 71% (46407/65361)
Checking out files: 72% (47060/65361)
Checking out files: 73% (47714/65361)
Checking out files: 74% (48368/65361)
Checking out files: 75% (49021/65361)
Checking out files: 76% (49675/65361)
Checking out files: 77% (50328/65361)
Checking out files: 78% (50982/65361)
Checking out files: 79% (51636/65361)
Checking out files: 80% (52289/65361)
Checking out files: 81% (52943/65361)
Checking out files: 81% (53273/65361)
Checking out files: 82% (53597/65361)
Checking out files: 83% (54250/65361)
Checking out files: 84% (54904/65361)
Checking out files: 85% (55557/65361)
Checking out files: 86% (56211/65361)
Checking out files: 87% (56865/65361)
Checking out files: 88% (57518/65361)
Checking out files: 89% (58172/65361)
Checking out files: 90% (58825/65361)
Checking out files: 91% (59479/65361)
Checking out files: 92% (60133/65361)
Checking out files: 93% (60786/65361)
Checking out files: 94% (61440/65361)
Checking out files: 95% (62093/65361)
Checking out files: 96% (62747/65361)
Checking out files: 97% (63401/65361)
Checking out files: 98% (64054/65361)
Checking out files: 99% (64708/65361)
Checking out files: 100% (65361/65361)
Checking out files: 100% (65361/65361), done.
Switched to branch 'autosynth-cloudiot'
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 256, in <module>
main()
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 196, in main
last_synth_commit_hash = get_last_metadata_commit(args.metadata_path)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 149, in get_last_metadata_commit
text=True,
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run
with Popen(*popenargs, **kwargs) as process:
TypeError: __init__() got an unexpected keyword argument 'text'
```
Google internal developers can see the full log [here](https://sponge/40f694d4-43de-41f0-b993-f4694e4a45de).
|
1.0
|
Synthesis failed for cloudiot - Hello! Autosynth couldn't regenerate cloudiot. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Checking out files: 24% (15877/65361)
Checking out files: 25% (16341/65361)
Checking out files: 26% (16994/65361)
Checking out files: 27% (17648/65361)
Checking out files: 28% (18302/65361)
Checking out files: 29% (18955/65361)
Checking out files: 30% (19609/65361)
Checking out files: 31% (20262/65361)
Checking out files: 32% (20916/65361)
Checking out files: 33% (21570/65361)
Checking out files: 34% (22223/65361)
Checking out files: 35% (22877/65361)
Checking out files: 36% (23530/65361)
Checking out files: 37% (24184/65361)
Checking out files: 38% (24838/65361)
Checking out files: 39% (25491/65361)
Checking out files: 40% (26145/65361)
Checking out files: 41% (26799/65361)
Checking out files: 42% (27452/65361)
Checking out files: 43% (28106/65361)
Checking out files: 44% (28759/65361)
Checking out files: 45% (29413/65361)
Checking out files: 46% (30067/65361)
Checking out files: 47% (30720/65361)
Checking out files: 48% (31374/65361)
Checking out files: 49% (32027/65361)
Checking out files: 50% (32681/65361)
Checking out files: 51% (33335/65361)
Checking out files: 52% (33988/65361)
Checking out files: 53% (34642/65361)
Checking out files: 53% (34761/65361)
Checking out files: 54% (35295/65361)
Checking out files: 55% (35949/65361)
Checking out files: 56% (36603/65361)
Checking out files: 57% (37256/65361)
Checking out files: 58% (37910/65361)
Checking out files: 59% (38563/65361)
Checking out files: 60% (39217/65361)
Checking out files: 61% (39871/65361)
Checking out files: 62% (40524/65361)
Checking out files: 63% (41178/65361)
Checking out files: 64% (41832/65361)
Checking out files: 65% (42485/65361)
Checking out files: 66% (43139/65361)
Checking out files: 67% (43792/65361)
Checking out files: 68% (44446/65361)
Checking out files: 69% (45100/65361)
Checking out files: 70% (45753/65361)
Checking out files: 71% (46407/65361)
Checking out files: 72% (47060/65361)
Checking out files: 73% (47714/65361)
Checking out files: 74% (48368/65361)
Checking out files: 75% (49021/65361)
Checking out files: 76% (49675/65361)
Checking out files: 77% (50328/65361)
Checking out files: 78% (50982/65361)
Checking out files: 79% (51636/65361)
Checking out files: 80% (52289/65361)
Checking out files: 81% (52943/65361)
Checking out files: 81% (53273/65361)
Checking out files: 82% (53597/65361)
Checking out files: 83% (54250/65361)
Checking out files: 84% (54904/65361)
Checking out files: 85% (55557/65361)
Checking out files: 86% (56211/65361)
Checking out files: 87% (56865/65361)
Checking out files: 88% (57518/65361)
Checking out files: 89% (58172/65361)
Checking out files: 90% (58825/65361)
Checking out files: 91% (59479/65361)
Checking out files: 92% (60133/65361)
Checking out files: 93% (60786/65361)
Checking out files: 94% (61440/65361)
Checking out files: 95% (62093/65361)
Checking out files: 96% (62747/65361)
Checking out files: 97% (63401/65361)
Checking out files: 98% (64054/65361)
Checking out files: 99% (64708/65361)
Checking out files: 100% (65361/65361)
Checking out files: 100% (65361/65361), done.
Switched to branch 'autosynth-cloudiot'
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 256, in <module>
main()
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 196, in main
last_synth_commit_hash = get_last_metadata_commit(args.metadata_path)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 149, in get_last_metadata_commit
text=True,
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run
with Popen(*popenargs, **kwargs) as process:
TypeError: __init__() got an unexpected keyword argument 'text'
```
Google internal developers can see the full log [here](https://sponge/40f694d4-43de-41f0-b993-f4694e4a45de).
|
non_process
|
synthesis failed for cloudiot hello autosynth couldn t regenerate cloudiot broken heart here s the output from running synth py cloning into working repo checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files done switched to branch autosynth cloudiot traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth autosynth synth py line in main file tmpfs src 
git autosynth autosynth synth py line in main last synth commit hash get last metadata commit args metadata path file tmpfs src git autosynth autosynth synth py line in get last metadata commit text true file home kbuilder pyenv versions lib subprocess py line in run with popen popenargs kwargs as process typeerror init got an unexpected keyword argument text google internal developers can see the full log
| 0
|
19,402
| 25,543,628,865
|
IssuesEvent
|
2022-11-29 17:00:21
|
pycaret/pycaret
|
https://api.github.com/repos/pycaret/pycaret
|
closed
|
ValueError: The Box-Cox transformation can only be applied to strictly positive data[BUG]
|
bug preprocessing
|
Hi, I am using pycaret 2.3 for regression analysis, and get this error below: Can you please help? Thanks.
<img width="1201" alt="Screen Shot 2021-07-01 at 1 01 06 AM" src="https://user-images.githubusercontent.com/78435087/124068010-81f76600-da08-11eb-80c2-edbef73df681.png">
**Describe the bug**
<!--
Hi, I am using pycaret 2.3 for regression analysis, and get this error below: Can you please help? Thanks.
-->
**To Reproduce**
<!--
Hi, I am using pycaret 2.3 for regression analysis, and get this error below: Can you please help? Thanks.
<img width="1201" alt="Screen Shot 2021-07-01 at 1 01 06 AM" src="https://user-images.githubusercontent.com/78435087/124067569-655b2e00-da08-11eb-9264-061dd5d9a41d.png">
-->
```python
lr = create_model('lr')
ValueError: The Box-Cox transformation can only be applied to strictly positive data
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Hi, I am using pycaret 2.3 for regression analysis, and get this error below: Can you please help? Thanks.
-->
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
import pycaret
pycaret.__version__
-->
</details>
<!-- Thanks for contributing! -->
|
1.0
|
ValueError: The Box-Cox transformation can only be applied to strictly positive data[BUG] - Hi, I am using pycaret 2.3 for regression analysis, and get this error below: Can you please help? Thanks.
<img width="1201" alt="Screen Shot 2021-07-01 at 1 01 06 AM" src="https://user-images.githubusercontent.com/78435087/124068010-81f76600-da08-11eb-80c2-edbef73df681.png">
**Describe the bug**
<!--
Hi, I am using pycaret 2.3 for regression analysis, and get this error below: Can you please help? Thanks.
-->
**To Reproduce**
<!--
Hi, I am using pycaret 2.3 for regression analysis, and get this error below: Can you please help? Thanks.
<img width="1201" alt="Screen Shot 2021-07-01 at 1 01 06 AM" src="https://user-images.githubusercontent.com/78435087/124067569-655b2e00-da08-11eb-9264-061dd5d9a41d.png">
-->
```python
lr = create_model('lr')
ValueError: The Box-Cox transformation can only be applied to strictly positive data
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Hi, I am using pycaret 2.3 for regression analysis, and get this error below: Can you please help? Thanks.
-->
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
import pycaret
pycaret.__version__
-->
</details>
<!-- Thanks for contributing! -->
|
process
|
valueerror the box cox transformation can only be applied to strictly positive data hi i am using pycaret for regression analysis and get this error below can you please help thanks img width alt screen shot at am src describe the bug hi i am using pycaret for regression analysis and get this error below can you please help thanks to reproduce hi i am using pycaret for regression analysis and get this error below can you please help thanks img width alt screen shot at am src python lr create model lr valueerror the box cox transformation can only be applied to strictly positive data expected behavior a clear and concise description of what you expected to happen additional context hi i am using pycaret for regression analysis and get this error below can you please help thanks versions please run the following code snippet and paste the output here import pycaret pycaret version
| 1
|
282,533
| 30,889,358,512
|
IssuesEvent
|
2023-08-04 02:36:23
|
madhans23/linux-4.1.15
|
https://api.github.com/repos/madhans23/linux-4.1.15
|
reopened
|
CVE-2015-8962 (High) detected in linux-stable-rtv4.1.33
|
Mend: dependency security vulnerability
|
## CVE-2015-8962 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Double free vulnerability in the sg_common_write function in drivers/scsi/sg.c in the Linux kernel before 4.4 allows local users to gain privileges or cause a denial of service (memory corruption and system crash) by detaching a device during an SG_IO ioctl call.
<p>Publish Date: 2016-11-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8962>CVE-2015-8962</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-8962">https://nvd.nist.gov/vuln/detail/CVE-2015-8962</a></p>
<p>Release Date: 2016-11-16</p>
<p>Fix Resolution: 4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-8962 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2015-8962 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/sg.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Double free vulnerability in the sg_common_write function in drivers/scsi/sg.c in the Linux kernel before 4.4 allows local users to gain privileges or cause a denial of service (memory corruption and system crash) by detaching a device during an SG_IO ioctl call.
<p>Publish Date: 2016-11-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8962>CVE-2015-8962</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-8962">https://nvd.nist.gov/vuln/detail/CVE-2015-8962</a></p>
<p>Release Date: 2016-11-16</p>
<p>Fix Resolution: 4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files drivers scsi sg c drivers scsi sg c vulnerability details double free vulnerability in the sg common write function in drivers scsi sg c in the linux kernel before allows local users to gain privileges or cause a denial of service memory corruption and system crash by detaching a device during an sg io ioctl call publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
15,076
| 18,774,373,232
|
IssuesEvent
|
2021-11-07 12:11:12
|
streamnative/pulsar-flink
|
https://api.github.com/repos/streamnative/pulsar-flink
|
closed
|
[BUG] job canceled,Failed to cancel the pulsar, Failed to remove cursor or TopicRange
|
type/bug platform/data-processing
|
**Describe the bug**
flink version 1.13.1
pulsar version 2.5.0
pulsar-flink-connector_2.11 version 1.13.1.0
run with flink standalone cluster
The flink job is a source connector job. When one of the taskmanagers restarted due to an OOM, the job was canceled, but it threw this error:
```
ERROR org.apache.flink.streaming.connectors.pulsar.FlinkPulsarSource [] - Failed to cancel the pulsar Fetcher java.lang.RuntimeException: Failed to remove cursor or TopicRange[topic=persistent://xxxxx/xxxx/xxxx-partition-2, key-range=SerializableRange{range=[0,65535]}]
at org.apache.flink.steraming.connectors.pulsar.inernal.PulsarMetadataReader.removeCursor(PulsarMetadataReader.java:278)
......
Caused by: org.apache.pulsar.client.admin.PUlsarAdminException:java.lang.InterruptedException
......6 more
```
When I started the flink job again, I got this:
```
org.apache.pulsar.clint.api.PulsarClientException$ConsumerBusyException: Exclusive consumer is already connected
```
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
|
1.0
|
[BUG] job canceled,Failed to cancel the pulsar, Failed to remove cursor or TopicRange - **Describe the bug**
flink version 1.13.1
pulsar version 2.5.0
pulsar-flink-connector_2.11 version 1.13.1.0
run with flink standalone cluster
The flink job is a source connector job. When one of the taskmanagers restarted due to an OOM, the job was canceled, but it threw this error:
```
ERROR org.apache.flink.streaming.connectors.pulsar.FlinkPulsarSource [] - Failed to cancel the pulsar Fetcher java.lang.RuntimeException: Failed to remove cursor or TopicRange[topic=persistent://xxxxx/xxxx/xxxx-partition-2, key-range=SerializableRange{range=[0,65535]}]
at org.apache.flink.steraming.connectors.pulsar.inernal.PulsarMetadataReader.removeCursor(PulsarMetadataReader.java:278)
......
Caused by: org.apache.pulsar.client.admin.PUlsarAdminException:java.lang.InterruptedException
......6 more
```
When I started the flink job again, I got this:
```
org.apache.pulsar.clint.api.PulsarClientException$ConsumerBusyException: Exclusive consumer is already connected
```
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
|
process
|
job canceled failed to cancel the pulsar failed to remove cursor or topicrange describe the bug flink version pulsar version pulsar flink connector version run with flink standalone cluster flink job is a source connector job when one of taskmanager restart cause oom the job canceled but throw one error error org apache flink streaming connectors pulsar flinkpulsarsource failed to cancel the pulsar fetcher java lang runtimeexception failed to remove cursor or topicrange at org apache flink steraming connectors pulsar inernal pulsarmetadatareader removecursor pulsarmetadatareader java caused by org apache pulsar client admin pulsaradminexception java lang interruptedexception more when start the flink job again i got this org apache pulsar clint api pulsarclientexception consumerbusyexception exclusive consumer is already connected to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem additional context add any other context about the problem here
| 1
|
437,954
| 30,616,184,908
|
IssuesEvent
|
2023-07-24 03:30:15
|
abraham/twitteroauth
|
https://api.github.com/repos/abraham/twitteroauth
|
closed
|
Improve error handling in demo
|
Documentation Available
|
```
2015-02-20T05:40:01.339495+00:00 app[web.1]: [20-Feb-2015 05:40:01 UTC] PHP Fatal error: Uncaught exception 'Abraham\TwitterOAuth\TwitterOAuthException' with message 'Request timed out.' in /app/vendor/abraham/twitteroauth/src/TwitterOAuth.php:311
2015-02-20T05:40:01.549171+00:00 app[web.1]: 10.101.131.90 - - [20/Feb/2015:05:40:01 +0000] "GET /favicon.ico HTTP/1.1" 404 209 "-" "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.53 Safari/537.36"
2015-02-20T05:40:01.339557+00:00 app[web.1]: Stack trace:
2015-02-20T05:40:01.339816+00:00 app[web.1]: #0 /app/vendor/abraham/twitteroauth/src/TwitterOAuth.php(251): Abraham\TwitterOAuth\TwitterOAuth->request('https://api.twi...', 'POST', 'Authorization: ...', Array)
2015-02-20T05:40:01.340033+00:00 app[web.1]: #1 /app/vendor/abraham/twitteroauth/src/TwitterOAuth.php(134): Abraham\TwitterOAuth\TwitterOAuth->oAuthRequest('https://api.twi...', 'POST', Array)
2015-02-20T05:40:01.340316+00:00 app[web.1]: thrown in /app/vendor/abraham/twitteroauth/src/TwitterOAuth.php on line 311
2015-02-20T05:40:01.340180+00:00 app[web.1]: #2 /app/callback.php(27): Abraham\TwitterOAuth\TwitterOAuth->oauth('oauth/access_to...', Array)
2015-02-20T05:40:01.340198+00:00 app[web.1]: #3 {main}
2015-02-20T05:40:01.343373+00:00 app[web.1]: 10.101.131.90 - - [20/Feb/2015:05:39:55 +0000] "GET /callback.php?oauth_token=avaIQtW34x1epqFs67pPAd3MJBuoZnYP&oauth_verifier=2CI8qcn9IIRjKP9nl8CJUJsq7IvQOYmd HTTP/1.1" 500 - "https://api.twitter.com/oauth/authorize" "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.53 Safari/537.36"
```
|
1.0
|
Improve error handling in demo - ```
2015-02-20T05:40:01.339495+00:00 app[web.1]: [20-Feb-2015 05:40:01 UTC] PHP Fatal error: Uncaught exception 'Abraham\TwitterOAuth\TwitterOAuthException' with message 'Request timed out.' in /app/vendor/abraham/twitteroauth/src/TwitterOAuth.php:311
2015-02-20T05:40:01.549171+00:00 app[web.1]: 10.101.131.90 - - [20/Feb/2015:05:40:01 +0000] "GET /favicon.ico HTTP/1.1" 404 209 "-" "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.53 Safari/537.36"
2015-02-20T05:40:01.339557+00:00 app[web.1]: Stack trace:
2015-02-20T05:40:01.339816+00:00 app[web.1]: #0 /app/vendor/abraham/twitteroauth/src/TwitterOAuth.php(251): Abraham\TwitterOAuth\TwitterOAuth->request('https://api.twi...', 'POST', 'Authorization: ...', Array)
2015-02-20T05:40:01.340033+00:00 app[web.1]: #1 /app/vendor/abraham/twitteroauth/src/TwitterOAuth.php(134): Abraham\TwitterOAuth\TwitterOAuth->oAuthRequest('https://api.twi...', 'POST', Array)
2015-02-20T05:40:01.340316+00:00 app[web.1]: thrown in /app/vendor/abraham/twitteroauth/src/TwitterOAuth.php on line 311
2015-02-20T05:40:01.340180+00:00 app[web.1]: #2 /app/callback.php(27): Abraham\TwitterOAuth\TwitterOAuth->oauth('oauth/access_to...', Array)
2015-02-20T05:40:01.340198+00:00 app[web.1]: #3 {main}
2015-02-20T05:40:01.343373+00:00 app[web.1]: 10.101.131.90 - - [20/Feb/2015:05:39:55 +0000] "GET /callback.php?oauth_token=avaIQtW34x1epqFs67pPAd3MJBuoZnYP&oauth_verifier=2CI8qcn9IIRjKP9nl8CJUJsq7IvQOYmd HTTP/1.1" 500 - "https://api.twitter.com/oauth/authorize" "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.53 Safari/537.36"
```
|
non_process
|
improve error handling in demo app php fatal error uncaught exception abraham twitteroauth twitteroauthexception with message request timed out in app vendor abraham twitteroauth src twitteroauth php app get favicon ico http mozilla windows nt applewebkit khtml like gecko chrome safari app stack trace app app vendor abraham twitteroauth src twitteroauth php abraham twitteroauth twitteroauth request post authorization array app app vendor abraham twitteroauth src twitteroauth php abraham twitteroauth twitteroauth oauthrequest post array app thrown in app vendor abraham twitteroauth src twitteroauth php on line app app callback php abraham twitteroauth twitteroauth oauth oauth access to array app main app get callback php oauth token oauth verifier http mozilla windows nt applewebkit khtml like gecko chrome safari
| 0
|
15,106
| 18,844,453,162
|
IssuesEvent
|
2021-11-11 13:28:34
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[SLOW Query] API endpoint for query are too slow vs the query
|
Type:Performance Querying/Processor
|
### Bug
- Your browser and the version: Chrome 67.0.3396.62 (64-bits)
- Your operating system: Linux/Ubuntu
- Your databases: Postgres
- Metabase version: v0.29.3
- Metabase hosting environment: Linux/Debian
- Metabase internal database: Postgres
I run the same query over `/api/card/320/query` and at my `SQL client` (I use [SQLWorkbench](https://www.sql-workbench.eu/), which uses the same JDBC connector). It takes 5 seconds at the endpoint but 200 milliseconds at my client; why is it so slow at this point?
:arrow_down: Please click the :+1: reaction instead of leaving a `+1` or `update?` comment
|
1.0
|
[SLOW Query] API endpoint for query are too slow vs the query - ### Bug
- Your browser and the version: Chrome 67.0.3396.62 (64-bits)
- Your operating system: Linux/Ubuntu
- Your databases: Postgres
- Metabase version: v0.29.3
- Metabase hosting environment: Linux/Debian
- Metabase internal database: Postgres
I run the same query over `/api/card/320/query` and at my `SQL client` (I use [SQLWorkbench](https://www.sql-workbench.eu/), which uses the same JDBC connector). It takes 5 seconds at the endpoint but 200 milliseconds at my client; why is it so slow at this point?
:arrow_down: Please click the :+1: reaction instead of leaving a `+1` or `update?` comment
|
process
|
api endpoint for query are too slow vs the query bug your browser and the version chrome bits your operating system linux ubuntu your databases postgres metabase version metabase hosting environment linux debian metabase internal database postgres i run the same query over api card query and at my sql client i use that uses the same jdbc connector did it take seconds at the endpoint and milliseconds and my client why this the parse slow at this point arrow down please click the reaction instead of leaving a or update comment
| 1
|
306,701
| 26,491,516,995
|
IssuesEvent
|
2023-01-17 23:16:18
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
k3s exits during `make test` due to cgroups v2 issue
|
[zube]: To Test status/dev-validate team/area2
|
**Describe the bug**
<!--A clear and concise description of what the bug is.-->
When running the rancher process inside of dapper, rancher needs to be moved to the init process group if running on a machine using cgroups v2 (for me, this is Ubuntu 22.10, but IIRC it first occurred on 20.04). Otherwise, the [embedded k3s created by norman](https://github.com/rancher/norman/blob/32ef2e185b999ee40e041406080fcefffe045f22/pkg/kwrapper/k8s/k3s_linux.go#L38) will continuously exit.
**To Reproduce**
<!--Steps to reproduce the behavior-->
On an Ubuntu 22.04 machine, checkout rancher and run `make test`.
**Result**
```
K3s logs were:
...
I1207 00:10:12.640790 4031 kubelet.go:1986] "Starting kubelet main sync loop"
E1207 00:10:12.640837 4031 kubelet.go:2010] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
E1207 00:10:12.660521 4031 node_container_manager_linux.go:61] "Failed to create cgroup" err="cannot enter cgroupv2 \"/sys/fs/cgroup/kubepods\" with domain controllers -- it is in an invalid state" cgroupName=[kubepods]
E1207 00:10:12.660532 4031 kubelet.go:1378] "Failed to start ContainerManager" err="cannot enter cgroupv2 \"/sys/fs/cgroup/kubepods\" with domain controllers -- it is in an invalid state"
W1207 00:10:12.660530 4031 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods: no such file or directory
Attempting to kill K3s
Starting rancher server using run
Rancher died
Rancher logs were
2022/12/07 00:10:40 [INFO] Applying CRD preferences.management.cattle.io
2022/12/07 00:10:40 [INFO] Applying CRD features.management.cattle.io
...
2022/12/07 00:10:40 [INFO] Applying CRD machinesets.cluster.x-k8s.io
2022/12/07 00:10:40 [INFO] Waiting for CRD machinesets.cluster.x-k8s.io to become available
2022/12/07 00:10:40 [FATAL] k3s exited with: exit status 1
```
**Expected Result**
<!--A clear and concise description of what you expected to happen.-->
```
Starting rancher server for test
Sleeping for 5 seconds before checking Rancher health
Starting rancher server using run
pongError from server (NotFound): deployments.apps "rancher-webhook" not found
```
**Additional context**
<!--Add any other context about the problem here.-->
This isn't a bug in rancher itself, but rather our build process. I am creating the issue for tracking purposes.
|
1.0
|
k3s exits during `make test` due to cgroups v2 issue - **Describe the bug**
<!--A clear and concise description of what the bug is.-->
When running the rancher process inside of dapper, rancher needs to be moved to the init process group if running on a machine using cgroups v2 (for me, this is Ubuntu 22.10, but IIRC it first occurred on 20.04). Otherwise, the [embedded k3s created by norman](https://github.com/rancher/norman/blob/32ef2e185b999ee40e041406080fcefffe045f22/pkg/kwrapper/k8s/k3s_linux.go#L38) will continuously exit.
**To Reproduce**
<!--Steps to reproduce the behavior-->
On an Ubuntu 22.04 machine, checkout rancher and run `make test`.
**Result**
```
K3s logs were:
...
I1207 00:10:12.640790 4031 kubelet.go:1986] "Starting kubelet main sync loop"
E1207 00:10:12.640837 4031 kubelet.go:2010] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
E1207 00:10:12.660521 4031 node_container_manager_linux.go:61] "Failed to create cgroup" err="cannot enter cgroupv2 \"/sys/fs/cgroup/kubepods\" with domain controllers -- it is in an invalid state" cgroupName=[kubepods]
E1207 00:10:12.660532 4031 kubelet.go:1378] "Failed to start ContainerManager" err="cannot enter cgroupv2 \"/sys/fs/cgroup/kubepods\" with domain controllers -- it is in an invalid state"
W1207 00:10:12.660530 4031 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods: no such file or directory
Attempting to kill K3s
Starting rancher server using run
Rancher died
Rancher logs were
2022/12/07 00:10:40 [INFO] Applying CRD preferences.management.cattle.io
2022/12/07 00:10:40 [INFO] Applying CRD features.management.cattle.io
...
2022/12/07 00:10:40 [INFO] Applying CRD machinesets.cluster.x-k8s.io
2022/12/07 00:10:40 [INFO] Waiting for CRD machinesets.cluster.x-k8s.io to become available
2022/12/07 00:10:40 [FATAL] k3s exited with: exit status 1
```
**Expected Result**
<!--A clear and concise description of what you expected to happen.-->
```
Starting rancher server for test
Sleeping for 5 seconds before checking Rancher health
Starting rancher server using run
pongError from server (NotFound): deployments.apps "rancher-webhook" not found
```
**Additional context**
<!--Add any other context about the problem here.-->
This isn't a bug in rancher itself, but rather our build process. I am creating the issue for tracking purposes.
|
non_process
|
exits during make test due to cgroups issue describe the bug when running the rancher process inside of dapper rancher needs to be moved to the init process group if running on a machine using cgroups for me this is ubuntu but iirc first occured on otherwise the will continuously exit to reproduce on an ubuntu machine checkout rancher and run make test result logs were kubelet go starting kubelet main sync loop kubelet go skipping pod synchronization err node container manager linux go failed to create cgroup err cannot enter sys fs cgroup kubepods with domain controllers it is in an invalid state cgroupname kubelet go failed to start containermanager err cannot enter sys fs cgroup kubepods with domain controllers it is in an invalid state watcher go error while processing event sys fs cgroup kubepods in create in isdir inotify add watch sys fs cgroup kubepods no such file or directory attempting to kill starting rancher server using run rancher died rancher logs were applying crd preferences management cattle io applying crd features management cattle io applying crd machinesets cluster x io waiting for crd machinesets cluster x io to become available exited with exit status expected result starting rancher server for test sleeping for seconds before checking rancher health starting rancher server using run pongerror from server notfound deployments apps rancher webhook not found additional context this isn t a bug in rancher itself but rather our build process i am creating the issue for tracking purposes
| 0
|
4,008
| 6,937,520,656
|
IssuesEvent
|
2017-12-04 05:34:12
|
badgerloop-software/pod3_gcc
|
https://api.github.com/repos/badgerloop-software/pod3_gcc
|
closed
|
Add support for target STM32L432 (Nucleo)
|
new processor toolchain
|
Populate `proc/stm32L4xx` with necessary infrastructure to compile the source and flash this device.
Some of this will likely need to be aggregated from examples.
|
1.0
|
Add support for target STM32L432 (Nucleo) - Populate `proc/stm32L4xx` with necessary infrastructure to compile the source and flash this device.
Some of this will likely need to be aggregated from examples.
|
process
|
add support for target nucleo populate proc with necessary infrastructure to compile the source and flash this device some of this will likely need to be aggregated from examples
| 1
|
21,237
| 28,357,953,927
|
IssuesEvent
|
2023-04-12 08:43:35
|
Graylog2/graylog2-server
|
https://api.github.com/repos/Graylog2/graylog2-server
|
closed
|
Allow remove_field() function to take a regular expression or glob
|
processing feature triaged
|
The `remove_field()` pipeline function currently only allows to remove fields that match an exact name. This works if I know which fields my message will have, but not for more dynamic data, like in DNS responses that can contain multiple answers to a request.
## Expected Behavior
I'd like to be able to remove all fields of a message that match a regular expression or glob pattern.
## Current Behavior
Currently, I can only remove fields if I know the name of each field.
## Context

In this example, I'd like to call something like `remove_fields('dns_authorities_*')`.
|
1.0
|
Allow remove_field() function to take a regular expression or glob - The `remove_field()` pipeline function currently only allows to remove fields that match an exact name. This works if I know which fields my message will have, but not for more dynamic data, like in DNS responses that can contain multiple answers to a request.
## Expected Behavior
I'd like to be able to remove all fields of a message that match a regular expression or glob pattern.
## Current Behavior
Currently, I can only remove fields if I know the name of each field.
## Context

In this example, I'd like to call something like `remove_fields('dns_authorities_*')`.
|
process
|
allow remove field function to take a regular expression or glob the remove field pipeline function currently only allows to remove fields that match an exact name this works if i know which fields my message will have but not for more dynamic data like in dns responses that can contain multiple answers to a request expected behavior i d like to be able to remove all fields of a message that match a regular expression or glob pattern current behavior currently i can only remove fields if i know the name of each field context in this example i d like to call something like remove fields dns authorities
| 1
|
12,774
| 15,159,451,504
|
IssuesEvent
|
2021-02-12 04:19:45
|
googlefonts/noto-fonts
|
https://api.github.com/repos/googlefonts/noto-fonts
|
closed
|
name ID 4 and name ID 6 have 'Regular' in all variation fonts
|
Noto-Process-Issue
|
Name ID 4 (full name) and ID 6 (postscript name) have 'Regular' in variation fonts. Shouldn't it be dropped for VF ?
For instance, Noto Sans Thai VF has 'Noto Sans Thai Regular' (id 4) and 'NotoSansThai-Regular' (id 6), but I think they'd better be 'Noto Sans Thai' and 'NotoSansThai'.
https://docs.microsoft.com/en-us/typography/opentype/spec/name does not have much to say about this other than ID 25. Related issue is #869. If there were ID 25 or ID 16, we would not have to worry about ID 6 per [Adobe technical note 5902](https://wwwimages2.adobe.com/content/dam/acom/en/devnet/font/pdfs/5902.AdobePSNameGeneration.pdf), but we don't.
Perhaps this is a fontconfig issue. When constructing 'FullName' for named instances, fontconfig just uses name ID 4, but it would be better to construct the full name by concatenating ID 1 and the subfamily ID from fvar.
I'll raise an issue in fontconfig.
|
1.0
|
name ID 4 and name ID 6 have 'Regular' in all variation fonts - Name ID 4 (full name) and ID 6 (postscript name) have 'Regular' in variation fonts. Shouldn't it be dropped for VF ?
For instance, Noto Sans Thai VF has 'Noto Sans Thai Regular' (id 4) and 'NotoSansThai-Regular' (id 6), but I think they'd better be 'Noto Sans Thai' and 'NotoSansThai'.
https://docs.microsoft.com/en-us/typography/opentype/spec/name does not have much to say about this other than ID 25. Related issue is #869. If there were ID 25 or ID 16, we would not have to worry about ID 6 per [Adobe technical note 5902](https://wwwimages2.adobe.com/content/dam/acom/en/devnet/font/pdfs/5902.AdobePSNameGeneration.pdf), but we don't.
Perhaps this is a fontconfig issue. When constructing 'FullName' for named instances, fontconfig just uses name ID 4, but it would be better to construct the full name by concatenating ID 1 and the subfamily ID from fvar.
I'll raise an issue in fontconfig.
|
process
|
name id and name id have regular in all variation fonts name id full name and id postscript name have regular in variation fonts shouldn t it be dropped for vf for instance noto sans thai vf has noto sans thai regular id and notosansthai regular id but i think they d better be noto sans thai and notosansthai does not have much to say about this other than id related issue is if there were id or id we would not have to worry about id per but we don t perhaps this is a fonconfig issue when constructing fullname for named instances fontconfig just uses name id but it d better construct fullname by concatenating id and subfamilyid from fvar i ll raise an issue in fontconfig
| 1
|
1,624
| 4,238,196,740
|
IssuesEvent
|
2016-07-06 01:54:34
|
BriceChou/WeiboClient
|
https://api.github.com/repos/BriceChou/WeiboClient
|
closed
|
weibo detail page BUG.
|
Highest In processing
|
1. Select a weibo with long text and a long image, then click it.
2. Enter the weibo detail page.
3. The full page is not shown and the page cannot be scrolled down.
|
1.0
|
weibo detail page BUG. - 1. Select a weibo with long text and a long image, then click it.
2. Enter the weibo detail page.
3. The full page is not shown and the page cannot be scrolled down.
|
process
|
weibo detail page bug select a weibo with long text and long image then click it enter the weibo detail page can t show full page and can t scroll down this page
| 1
|
3,708
| 6,731,556,903
|
IssuesEvent
|
2017-10-18 08:07:40
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
Process and encapsulated quotes in parameters
|
Bug Process Status: Waiting feedback
|
I'm trying to run a mysql command using the process package, but I'm having trouble with the double quotes.
My process builder looks like this:
```
$query = "LOAD DATA INFILE 'file.csv' IGNORE INTO TABLE `table` FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n' (col1, col2, col3)";
$builder = new ProcessBuilder([
'mysql',
'-u',
$user,
$pass ? '-p'.$pass:null,
$db,
'-e',
$query
]);
```
Process generates the following command line:
`cmd /V:ON /E:ON /D /C "("mysql" "-u" "root" "database" "-e" "LOAD DATA INFILE 'file.csv' IGNORE INTO TABLE `table` FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n' (col1, col2, col3)") 1>"...\AppData\Local\Temp\sf_proc_00.out" 2>"...\AppData\Local\Temp\sf_proc_00.err""`
And it fails due to the quote between "ENCLOSED BY '" and "' LINES TERMINATED BY". Without this quote it works, except that MySQL can't parse the file properly. How am I supposed to pass the SQL query parameter?
|
1.0
|
Process and encapsulated quotes in parameters - I'm trying to run a mysql command using the process package, but I'm having trouble with the double quotes.
My process builder looks like this:
```
$query = "LOAD DATA INFILE 'file.csv' IGNORE INTO TABLE `table` FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n' (col1, col2, col3)";
$builder = new ProcessBuilder([
'mysql',
'-u',
$user,
$pass ? '-p'.$pass:null,
$db,
'-e',
$query
]);
```
Process generates the following command line:
`cmd /V:ON /E:ON /D /C "("mysql" "-u" "root" "database" "-e" "LOAD DATA INFILE 'file.csv' IGNORE INTO TABLE `table` FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n' (col1, col2, col3)") 1>"...\AppData\Local\Temp\sf_proc_00.out" 2>"...\AppData\Local\Temp\sf_proc_00.err""`
And it fails due to the quote between "ENCLOSED BY '" and "' LINES TERMINATED BY". Without this quote it works, except that MySQL can't parse the file properly. How am I supposed to pass the SQL query parameter?
|
process
|
process and encapsulated quotes in parameters i m trying to run a mysql command using the process package but i m having troubles with the double quotes my process builder looks like this query load data infile file csv ignore into table table fields terminated by enclosed by lines terminated by n builder new processbuilder mysql u user pass p pass null db e query process generates the following command line cmd v on e on d c mysql u root database e load data infile file csv ignore into table table fields terminated by enclosed by lines terminated by n appdata local temp sf proc out appdata local temp sf proc err and it fails due to the quote between enlosed by and lines terminated by without this quote it works except that mysql can t parse the file properly how am i supposed to pass the sql query parameter
| 1
|
158,571
| 13,736,675,600
|
IssuesEvent
|
2020-10-05 12:07:42
|
Interacao-Humano-Computador/2020.1-BCE
|
https://api.github.com/repos/Interacao-Humano-Computador/2020.1-BCE
|
closed
|
Planejamento e avaliação - usuários
|
documentation
|
## Planejamento e avaliação de perfil de usuário
### Descrição:
Criar um planejamento para avaliação de perfil de usuários e, posteriormente, realizar e documentar a avaliação de perfil.
### Critérios de aceitação:
- Este documento deve conter referências bibliográficas
- Este documento deve conter tabela de versionamento
- Deve ser realizado um questionário (forms)
### Tarefas:
- [x] Criar documento de planejamento (@isabellacgmsa, @geraldovictor)
- [x] Criar um questionário (@RafaellaJunqueira, @durvalcarvalho, @geraldovictor, @joao15victor08)
- [x] Divulgar o questionário (Todos)
- [x] Analisar resultados (@durvalcarvalho @RafaellaJunqueira )
- [x] Atualizar os documentos (perfil_de_usuario.md) (@durvalcarvalho)
- [x] Atualizar os slides (@durvalcarvalho @geraldovictor)
- [x] Gravar a apresentação (@geraldovictor)
|
1.0
|
Planejamento e avaliação - usuários - ## Planejamento e avaliação de perfil de usuário
### Descrição:
Criar um planejamento para avaliação de perfil de usuários e, posteriormente, realizar e documentar a avaliação de perfil.
### Critérios de aceitação:
- Este documento deve conter referências bibliográficas
- Este documento deve conter tabela de versionamento
- Deve ser realizado um questionário (forms)
### Tarefas:
- [x] Criar documento de planejamento (@isabellacgmsa, @geraldovictor)
- [x] Criar um questionário (@RafaellaJunqueira, @durvalcarvalho, @geraldovictor, @joao15victor08)
- [x] Divulgar o questionário (Todos)
- [x] Analisar resultados (@durvalcarvalho @RafaellaJunqueira )
- [x] Atualizar os documentos (perfil_de_usuario.md) (@durvalcarvalho)
- [x] Atualizar os slides (@durvalcarvalho @geraldovictor)
- [x] Gravar a apresentação (@geraldovictor)
|
non_process
|
planejamento e avaliação usuários planejamento e avaliação de perfil de usuário descrição criar um planejamento para avaliação de perfil de usuários e posteriormente realizar e documentar a avaliação de perfil critérios de aceitação este documento deve conter referências bibliográficas este documento deve conter tabela de versionamento deve ser realizado um questionário forms tarefas criar documento de planejamento isabellacgmsa geraldovictor criar um questionário rafaellajunqueira durvalcarvalho geraldovictor divulgar o questionário todos analisar resultados durvalcarvalho rafaellajunqueira atualizar os documentos perfil de usuario md durvalcarvalho atualizar os slides durvalcarvalho geraldovictor gravar a apresentação geraldovictor
| 0
|
458,936
| 13,184,306,768
|
IssuesEvent
|
2020-08-12 19:09:10
|
TheTypingMatch/lecashbot
|
https://api.github.com/repos/TheTypingMatch/lecashbot
|
closed
|
LeCashBot is typing...
|
commands priority: low type: feat
|
Have LeCashBot start "typing" before a command is run and stop the typing after the command is complete.
|
1.0
|
LeCashBot is typing... - Have LeCashBot start "typing" before a command is run and stop the typing after the command is complete.
|
non_process
|
lecashbot is typing have lecashbot start typing before a command is run and stop the typing after the command is complete
| 0
|
5,282
| 2,573,705,265
|
IssuesEvent
|
2015-02-11 12:09:41
|
WarEmu/WarBugs
|
https://api.github.com/repos/WarEmu/WarBugs
|
closed
|
Engineer unable to deploy turret
|
Ability Emulator Low Priority
|
The engineer fails to deploy the turret when ability is activated. There is a visual animation bug as a result of it.

|
1.0
|
Engineer unable to deploy turret - The engineer fails to deploy the turret when ability is activated. There is a visual animation bug as a result of it.

|
non_process
|
engineer unable to deploy turret the engineer fails to deploy the turret when ability is activated there is a visual animation bug as a result of it
| 0
|
17,537
| 23,345,637,404
|
IssuesEvent
|
2022-08-09 17:40:57
|
AdyanRios-NOAA/SEFSC-MH-Processing
|
https://api.github.com/repos/AdyanRios-NOAA/SEFSC-MH-Processing
|
closed
|
Review code for filling in dates
|
Processing
|
Change date logic may not be working as intended
- Example cluster 14 where diff days = -1 and two records have the same ineffective date
- Two records with same effective date because of a change in species aggregates
- Should that record where bag limit flagged for change in a species aggregate be captured? By including this record it is giving us an ineffective date that produces a gap in our timeline.
- What is happening with other clusters where diff_days = -1? Is it a similar issue with species agg/groups changing?
|
1.0
|
Review code for filling in dates - Change date logic may not be working as intended
- Example cluster 14 where diff days = -1 and two records have the same ineffective date
- Two records with same effective date because of a change in species aggregates
- Should that record where bag limit flagged for change in a species aggregate be captured? By including this record it is giving us an ineffective date that produces a gap in our timeline.
- What is happening with other clusters where diff_days = -1? Is it a similar issue with species agg/groups changing?
|
process
|
review code for filling in dates change date logic may not be working as intended example cluster where diff days and two records have the same ineffective date two records with same effective date because of a change in species aggregates should that record where bag limit flagged for change in a species aggregate be captured by including this record it is giving us an ineffective date that produces a gap in our timeline what is happening with other clusters where diff days is it a similar issue with species agg groups changing
| 1
|
23,047
| 15,781,593,968
|
IssuesEvent
|
2021-04-01 11:36:46
|
RasaHQ/rasa
|
https://api.github.com/repos/RasaHQ/rasa
|
closed
|
Reuse trained models for tests in `tests/nlu/test_diet_classifier.py`
|
area:rasa-oss :ferris_wheel: area:rasa-oss/infrastructure :bullettrain_front: effort:enable-squad/2 feature:speed-up-ci :zap: type:maintenance :wrench:
|
**Rasa version**: 2.3.1 (used `main` branch, commit hash: `2574c46e9576607a7b8de39f823ff3e5c01a475c`)
**Rasa SDK version** (if used & relevant): 2.3.1
**Rasa X version** (if used & relevant):
**Python version**: 3.7 / 3.8
**Operating system** (windows, osx, ...): MacOS / Windows
**Issue**:
Some of the tests in `tests/nlu/test_diet_classifier.py` do training but they can instead reuse the trained model (for example, `trained_simple_rasa_model` from `tests/conftest.py`) to be faster.
See more details about time measurements: https://docs.google.com/spreadsheets/d/1tDRD0vWaLW91W1wPj5h__dwXuXioRt9aBPYyF1n4MjI/edit#gid=57091041
**Command or request that led to error**:
```
pytest tests/nlu/test_diet_classifier.py
```
|
1.0
|
Reuse trained models for tests in `tests/nlu/test_diet_classifier.py` - **Rasa version**: 2.3.1 (used `main` branch, commit hash: `2574c46e9576607a7b8de39f823ff3e5c01a475c`)
**Rasa SDK version** (if used & relevant): 2.3.1
**Rasa X version** (if used & relevant):
**Python version**: 3.7 / 3.8
**Operating system** (windows, osx, ...): MacOS / Windows
**Issue**:
Some of the tests in `tests/nlu/test_diet_classifier.py` do training but they can instead reuse the trained model (for example, `trained_simple_rasa_model` from `tests/conftest.py`) to be faster.
See more details about time measurements: https://docs.google.com/spreadsheets/d/1tDRD0vWaLW91W1wPj5h__dwXuXioRt9aBPYyF1n4MjI/edit#gid=57091041
**Command or request that led to error**:
```
pytest tests/nlu/test_diet_classifier.py
```
|
non_process
|
reuse trained models for tests in tests nlu test diet classifier py rasa version used main branch commit hash rasa sdk version if used relevant rasa x version if used relevant python version operating system windows osx macos windows issue some of the tests in tests nlu test diet classifier py do training but they can instead reuse the trained model for example trained simple rasa model from tests conftest py to be faster see more details about time measurements command or request that led to error pytest tests nlu test diet classifier py
| 0
|
19,453
| 25,736,575,403
|
IssuesEvent
|
2022-12-08 01:21:57
|
mdsreq-fga-unb/2022.2-Dubium
|
https://api.github.com/repos/mdsreq-fga-unb/2022.2-Dubium
|
closed
|
Atividades e Ciclo de Vida
|
processo
|
Existem várias informações soltas que não se comunicam.
- [x] É preciso posicionar as atividades dentro do ciclo de vida que será utilizado pelo projeto.
- [x] As atividades de ANÁLISE DE REQUISITOS, DOCUMENTAÇÃO DE REQUISITOS, VERIFICAÇÃO E VALIDAÇÃO DE REQUISITOS, GERENCIAMENTO DE REQUISITOS não possuem nenhuma descrição.
- [x] não há nenhuma descrição das atividades de implementação: codificação - modelagem de banco de dados, programação back-end e front-end, criação de cenários de testes
- [x] Backlog do Produto e da Sprint, são artefatos. Em que atividades serão feitos?
- [x] Review, qual o resultado dessa atividade?
- [x] Retrospectiva, qual o resultado dessa atividade?
- [x] Entrega, em qual atividade será feita?
|
1.0
|
Atividades e Ciclo de Vida - Existem várias informações soltas que não se comunicam.
- [x] É preciso posicionar as atividades dentro do ciclo de vida que será utilizado pelo projeto.
- [x] As atividades de ANÁLISE DE REQUISITOS, DOCUMENTAÇÃO DE REQUISITOS, VERIFICAÇÃO E VALIDAÇÃO DE REQUISITOS, GERENCIAMENTO DE REQUISITOS não possuem nenhuma descrição.
- [x] não há nenhuma descrição das atividades de implementação: codificação - modelagem de banco de dados, programação back-end e front-end, criação de cenários de testes
- [x] Backlog do Produto e da Sprint, são artefatos. Em que atividades serão feitos?
- [x] Review, qual o resultado dessa atividade?
- [x] Retrospectiva, qual o resultado dessa atividade?
- [x] Entrega, em qual atividade será feita?
|
process
|
atividades e ciclo de vida existem várias informações soltas que não se comunicam é preciso posicionar as atividades dentro do ciclo de vida que será utilizado pelo projeto as atividades de análise de requisitos documentação de requisitos verificação e validação de requisitos gerenciamento de requisitos não possuem nenhuma descrição não há nenhuma descrição das atividades de implementação codificação modelagem de banco de dados programação back end e front end criação de cenários de testes backlog do produto e da sprint são artefatos em que atividades serão feitos review qual o resultado dessa atividade retrospectiva qual o resultado dessa atividade entrega em qual atividade será feita
| 1
|
2,820
| 5,767,278,572
|
IssuesEvent
|
2017-04-27 09:34:38
|
nikitavoloboev/knowledge-map
|
https://api.github.com/repos/nikitavoloboev/knowledge-map
|
opened
|
best path for learning natural language processing
|
main study plan natural language processing
|
Take a look [here](https://my.mindnode.com/nFFywmhppMRxw1Z6n7QNxikisQo9q9egH5jL8PfD#171.2,5.5,2).
If you think there is a better way one can learn natural language processing or you think the way the nodes are structured is wrong, please say it here.
Also if you think there are some really amazing resources on natural language processing that are missing, you can also add them here.
|
1.0
|
best path for learning natural language processing - Take a look [here](https://my.mindnode.com/nFFywmhppMRxw1Z6n7QNxikisQo9q9egH5jL8PfD#171.2,5.5,2).
If you think there is a better way one can learn natural language processing or you think the way the nodes are structured is wrong, please say it here.
Also if you think there are some really amazing resources on natural language processing that are missing, you can also add them here.
|
process
|
best path for learning natural language processing take a look if you think there is a better way one can learn natural language processing or you think the way the nodes are structured is wrong please say it here also if you think there are some really amazing resources on natural language processing that are missing you can also add them here
| 1
|
10,601
| 13,428,562,038
|
IssuesEvent
|
2020-09-06 22:20:23
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Classifier fails to distinguish between Gitlab.API logs and Gitlab.Production logs
|
bug p1 team:data processing
|
### Describe the bug
Depending on the state of the classifier queue Gitlab.API logs can end up into the Gitlab.Production table
### Steps to reproduce
Not easy to reproduce with certainty because of the randomness in the classifier queue
### Expected behavior
There should be a distinction between the two
### How to fix
Based on samples we just need to add `validate:"required"` to both `controller` and `action` fields in the `Production` schema struct
|
1.0
|
Classifier fails to distinguish between Gitlab.API logs and Gitlab.Production logs - ### Describe the bug
Depending on the state of the classifier queue Gitlab.API logs can end up into the Gitlab.Production table
### Steps to reproduce
Not easy to reproduce with certainty because of the randomness in the classifier queue
### Expected behavior
There should be a distinction between the two
### How to fix
Based on samples we just need to add `validate:"required"` to both `controller` and `action` fields in the `Production` schema struct
|
process
|
classifier fails to distinguish between gitlab api logs and gitlab production logs describe the bug depending on the state of the classifier queue gitlab api logs can end up into the gitlab production table steps to reproduce not easy to reproduce with certainty because of the randomness in the classifier queue expected behavior there should be a distinction between the two how to fix based on samples we just need to add validate required to both controller and action fields in the production schema struct
| 1
|
276,580
| 8,600,853,828
|
IssuesEvent
|
2018-11-16 09:07:29
|
highcharts/highcharts
|
https://api.github.com/repos/highcharts/highcharts
|
closed
|
Implement styled mode as an option.
|
Enhancement In progress Priority:High
|
With this enhancement, we serve only one built file, and styled mode will be toggled with a simple option, `chart.styledMode`.
Implementation is done in the [hc7/styled-mode](https://github.com/highcharts/highcharts/tree/hc7/styled-mode) branch.
#### Implementation plan
- [x] Remove supercode comments and replace with inline check for `Chart.styledMode` or `Renderer.styledMode`.
- [x] Identify and implement a structure for dual-mode option defaults, namely `tooltip.pointFormat` and `tooltip.headerFormat`.
- [x] Set up unit tests for various series types. The tests should check that presentational attributes (`fill`, `stroke`, `style` etc) are not set in the generated SVG. All series types should be tested, and new series types and features should be caught.
- [x] Update all demos to use the root level `highcharts.js` file, and set the `chart.styledMode` option.
- [x] Decide how to serve the legacy styled-mode file structure (`code.highcharts.com/js/...`). Decided to not serve new files.
- [ ] Turn off building of styled mode files in assembler/tools.
|
1.0
|
Implement styled mode as an option. - With this enhancement, we serve only one built file, and styled mode will be toggled with a simple option, `chart.styledMode`.
Implementation is done in the [hc7/styled-mode](https://github.com/highcharts/highcharts/tree/hc7/styled-mode) branch.
#### Implementation plan
- [x] Remove supercode comments and replace with inline check for `Chart.styledMode` or `Renderer.styledMode`.
- [x] Identify and implement a structure for dual-mode option defaults, namely `tooltip.pointFormat` and `tooltip.headerFormat`.
- [x] Set up unit tests for various series types. The tests should check that presentational attributes (`fill`, `stroke`, `style` etc) are not set in the generated SVG. All series types should be tested, and new series types and features should be caught.
- [x] Update all demos to use the root level `highcharts.js` file, and set the `chart.styledMode` option.
- [x] Decide how to serve the legacy styled-mode file structure (`code.highcharts.com/js/...`). Decided to not serve new files.
- [ ] Turn off building of styled mode files in assembler/tools.
|
non_process
|
implement styled mode as an option with this enhancement we serve only one built file and styled mode will be toggled with a simple option chart styledmode implementation is done in the branch implementation plan remove supercode comments and replace with inline check for chart styledmode or renderer styledmode identify and implement a structure for dual mode option defaults namely tooltip pointformat and tooltip headerformat set up unit tests for various series types the tests should check that presentational attributes fill stroke style etc are not set in the generated svg all series types should be tested and new series types and features should be caught update all demos to use the root level highcharts js file and set the chart styledmode option decide how to serve the legacy styled mode file structure code highcharts com js decided to not serve new files turn off building of styled mode files in assembler tools
| 0
|
178,153
| 29,507,328,365
|
IssuesEvent
|
2023-06-03 13:24:46
|
angular/angular
|
https://api.github.com/repos/angular/angular
|
reopened
|
Type 'SafeHtml' is not assignable to type 'string'
|
type: bug/fix freq2: medium area: compiler state: confirmed cross-cutting: types design complexity: major P3 compiler: template type-checking
|
<!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅
Oh hi there! 😄
To expedite issue processing please search open and closed issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅-->
# 🐞 bug report
### Affected Package
The issue is caused by package @angular/platform-browser
### Is this a regression?
This is kind of regression - please see description bellow.
### Description
I am getting **Typescript error** after switching to *Ivy compiler*:
```bash
[Step 4/5] src/app/app.component.html(1,26): Type 'SafeHtml' is not assignable to type 'string'.
```
In Angular class there is a member property declared as `SafeHtml`:
```typescript
@Component({
selector: 'app',
template: `<div [innerHTML]="description"></div>`
})
export class AppComponent {
description: SafeHtml;
constructor(private sanitizer: DomSanitizer) {}
ngOnInit(): void {
this.description = this.sanitizer.sanitize(SecurityContext.HTML, '<strong>whatever comes from server</strong>');
}
}
```
My issue is how to convert `SafeHtml` and `SafeUrl` to string, so that strict template check does not emits an error - I would like to avoid calling `.toString()` on the `description` - it should work out of the box - at least it is my expectation.
Angular `SafeHtml` is declared as:
```typescript
/**
* Marker interface for a value that's safe to use as HTML.
*
* @publicApi
*/
export declare interface SafeHtml extends SafeValue {
}
```
And `innerHtml` defined in _lib.dom.d.ts_:
```typescript
interface InnerHTML {
innerHTML: string;
}
```
The `innerHtml` is type of `string`, whereas I have `SafeHtml`.
## 🔬 Minimal Reproduction
How to reproduce:
```bash
git clone https://github.com/felikf/angular-repro-safe-html.git
npm i
npm run build
```
## 🔥 Exception or Error
<pre><code>
Type 'SafeHtml' is not assignable to type 'string'</code></pre>
## 🌍 Your Environment
**Angular Version:**
<pre><code>
Angular CLI: 8.3.4
Node: 12.11.1
OS: win32 x64
Angular: 8.2.6
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Package Version
------------------------------------------------------------
@angular-devkit/architect 0.803.4
@angular-devkit/build-angular 0.803.6
@angular-devkit/build-optimizer 0.803.6
@angular-devkit/build-webpack 0.803.6
@angular-devkit/core 8.3.4
@angular-devkit/schematics 8.3.4
@angular/cdk 8.2.0
@angular/cli 8.3.4
@angular/flex-layout 8.0.0-beta.27
@angular/material 8.2.0
@angular/material-moment-adapter 8.2.0
@ngtools/webpack 8.3.6
@schematics/angular 8.3.4
@schematics/update 0.803.4
rxjs 6.5.3
typescript 3.5.3
webpack 4.39.2
</code></pre>
**Anything else relevant?**
This happens during `npm run build`.
Also asked here: https://stackoverflow.com/questions/58265539/angular-ivy-type-check-type-safehtml-is-not-assignable-to-type-string
|
1.0
|
Type 'SafeHtml' is not assignable to type 'string' - <!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅
Oh hi there! 😄
To expedite issue processing please search open and closed issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅-->
# 🐞 bug report
### Affected Package
The issue is caused by package @angular/platform-browser
### Is this a regression?
This is kind of regression - please see description bellow.
### Description
I am getting **Typescript error** after switching to *Ivy compiler*:
```bash
[Step 4/5] src/app/app.component.html(1,26): Type 'SafeHtml' is not assignable to type 'string'.
```
In Angular class there is a member property declared as `SafeHtml`:
```typescript
@Component({
selector: 'app',
template: `<div [innerHTML]="description"></div>`
})
export class AppComponent {
description: SafeHtml;
constructor(private sanitizer: DomSanitizer) {}
ngOnInit(): void {
this.description = this.sanitizer.sanitize(SecurityContext.HTML, '<strong>whatever comes from server</strong>');
}
}
```
My issue is how to convert `SafeHtml` and `SafeUrl` to string, so that strict template check does not emits an error - I would like to avoid calling `.toString()` on the `description` - it should work out of the box - at least it is my expectation.
Angular `SafeHtml` is declared as:
```typescript
/**
* Marker interface for a value that's safe to use as HTML.
*
* @publicApi
*/
export declare interface SafeHtml extends SafeValue {
}
```
And `innerHtml` defined in _lib.dom.d.ts_:
```typescript
interface InnerHTML {
innerHTML: string;
}
```
The `innerHtml` is type of `string`, whereas I have `SafeHtml`.
## 🔬 Minimal Reproduction
How to reproduce:
```bash
git clone https://github.com/felikf/angular-repro-safe-html.git
npm i
npm run build
```
## 🔥 Exception or Error
<pre><code>
Type 'SafeHtml' is not assignable to type 'string'</code></pre>
## 🌍 Your Environment
**Angular Version:**
<pre><code>
Angular CLI: 8.3.4
Node: 12.11.1
OS: win32 x64
Angular: 8.2.6
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Package Version
------------------------------------------------------------
@angular-devkit/architect 0.803.4
@angular-devkit/build-angular 0.803.6
@angular-devkit/build-optimizer 0.803.6
@angular-devkit/build-webpack 0.803.6
@angular-devkit/core 8.3.4
@angular-devkit/schematics 8.3.4
@angular/cdk 8.2.0
@angular/cli 8.3.4
@angular/flex-layout 8.0.0-beta.27
@angular/material 8.2.0
@angular/material-moment-adapter 8.2.0
@ngtools/webpack 8.3.6
@schematics/angular 8.3.4
@schematics/update 0.803.4
rxjs 6.5.3
typescript 3.5.3
webpack 4.39.2
</code></pre>
**Anything else relevant?**
This happens during `npm run build`.
Also asked here: https://stackoverflow.com/questions/58265539/angular-ivy-type-check-type-safehtml-is-not-assignable-to-type-string
|
non_process
|
type safehtml is not assignable to type string 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 oh hi there 😄 to expedite issue processing please search open and closed issues before submitting a new one existing issues often contain information about workarounds resolution or progress updates 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 🐞 bug report affected package the issue is caused by package angular platform browser is this a regression this is kind of regression please see description bellow description i am getting typescript error after switching to ivy compiler bash src app app component html type safehtml is not assignable to type string in angular class there is a member property declared as safehtml typescript component selector app template export class appcomponent description safehtml constructor private sanitizer domsanitizer ngoninit void this description this sanitizer sanitize securitycontext html whatever comes from server my issue is how to convert safehtml and safeurl to string so that strict template check does not emits an error i would like to avoid calling tostring on the description it should work out of the box at least it is my expectation angular safehtml is declared as typescript marker interface for a value that s safe to use as html publicapi export declare interface safehtml extends safevalue and innerhtml defined in lib dom d ts typescript interface innerhtml innerhtml string the innerhtml is type of string whereas i have safehtml 🔬 minimal reproduction how to reproduce bash git clone npm i npm run build 🔥 exception or error type safehtml is not assignable to type string 🌍 your environment angular version angular cli node os angular animations common compiler compiler cli core forms language service platform browser platform browser dynamic router package version angular devkit architect angular devkit build angular angular devkit build optimizer angular devkit build webpack angular devkit core angular devkit schematics angular cdk angular cli angular flex layout beta angular material angular material moment adapter ngtools webpack schematics angular schematics update rxjs typescript webpack anything else relevant this happens during npm run build also asked here
| 0
|
6,771
| 9,908,203,029
|
IssuesEvent
|
2019-06-27 17:41:03
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
clang-tidy should enforce correct naming style for constants
|
type: process
|
Per the GSG [1], constants should be named like` kSomeConstantThing`, not `SOME_CONSTANT_THING`. clang-tidy should enforce this so that we don't end up with a mixture of the two styles.
|
1.0
|
clang-tidy should enforce correct naming style for constants - Per the GSG [1], constants should be named like` kSomeConstantThing`, not `SOME_CONSTANT_THING`. clang-tidy should enforce this so that we don't end up with a mixture of the two styles.
|
process
|
clang tidy should enforce correct naming style for constants per the gsg constants should be named like ksomeconstantthing not some constant thing clang tidy should enforce this so that we don t end up with a mixture of the two styles
| 1
|
8,369
| 11,519,773,922
|
IssuesEvent
|
2020-02-14 13:32:30
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
possible obsoletion GO:0052047 symbiotic process mediated by secreted substance
|
multi-species process obsoletion
|
I found this term today
GO:0052047 symbiotic process mediated by secreted substance
An interaction with a second organism mediated by a substance secreted by the first organism, where the two organisms are in a symbiotic interaction.
I thought I had seen the entire branch by now, but I still find new things.
It is possible that this term was created for secreted pathogen effectors? It's difficult to know. I would suggest obsoletion on the grounds that "first" and "second" organism are not specified. I'll check the existing annotations first to see If I can figure out what process it is intended for....
|
1.0
|
possible obsoletion GO:0052047 symbiotic process mediated by secreted substance - I found this term today
GO:0052047 symbiotic process mediated by secreted substance
An interaction with a second organism mediated by a substance secreted by the first organism, where the two organisms are in a symbiotic interaction.
I thought I had seen the entire branch by now, but I still find new things.
It is possible that this term was created for secreted pathogen effectors? It's difficult to know. I would suggest obsoletion on the grounds that "first" and "second" organism are not specified. I'll check the existing annotations first to see If I can figure out what process it is intended for....
|
process
|
possible obsoletion go symbiotic process mediated by secreted substance i found this term today go symbiotic process mediated by secreted substance an interaction with a second organism mediated by a substance secreted by the first organism where the two organisms are in a symbiotic interaction i thought i had seen the entire branch by now but i still find new things it is possible that this term was created for secreted pathogen effectors it s difficult to know i would suggest obsoletion on the grounds that first and second organism are not specified i ll check the existing annotations first to see if i can figure out what process it is intended for
| 1
|
269,586
| 20,387,769,702
|
IssuesEvent
|
2022-02-22 08:57:35
|
ita-social-projects/dokazovi-requirements
|
https://api.github.com/repos/ita-social-projects/dokazovi-requirements
|
opened
|
[Test for Story #265] Verify that prioritizations of cards are modified in the Carousel after changing number of cards
|
documentation test case
|
**Story link**
[#265 Story](https://github.com/ita-social-projects/dokazovi-be/issues/265)
### Status:
Not executed
### Title:
Verify that prioritizations of cards are modified in the Carousel after changing number of cards
### Description:
This test case verifies that admin can change number of cards in the Carousel and modifies prioritizations of cards
### Pre-conditions:
Admin is logged in
Admin is on the Existing Materials Section
Admin can number sequence of cards in the section
Each card in the Existing Materials Section has a number according to a place of this card in a row of the Carousel (Numerical field in edit mode, not more than 2 symbols)
Step № | Test Steps | Test data | Expected result | Status (Pass/Fail/Not executed) | Notes
------------ | ------------ | ------------ | ------------ | ------------ | ------------
1 | Type new number in the field of card and press the [Enter] button on the keyboard | | Card in the section is placed according to numbering | Not executed|
2 | Type new number in the field of card and click on a free space | |Card in the section is placed according to numbering | Not executed|
3 | Delete number in the field | | Card is placed at the end in continuing order of numbering | Not executed|
4 | Type the same numbers in different fields | | Cards numbering continues and changes for all cards from the last number entered | Not executed|
### Dependencies:
link #265
### [Gantt Chart](https://docs.google.com/spreadsheets/d/1bgaEJDOf3OhfNRfP-WWPKmmZFW5C3blOUxamE3wSCbM/edit#gid=775577959)
|
1.0
|
[Test for Story #265] Verify that prioritizations of cards are modified in the Carousel after changing number of cards - **Story link**
[#265 Story](https://github.com/ita-social-projects/dokazovi-be/issues/265)
### Status:
Not executed
### Title:
Verify that prioritizations of cards are modified in the Carousel after changing number of cards
### Description:
This test case verifies that admin can change number of cards in the Carousel and modifies prioritizations of cards
### Pre-conditions:
Admin is logged in
Admin is on the Existing Materials Section
Admin can number sequence of cards in the section
Each card in the Existing Materials Section has a number according to a place of this card in a row of the Carousel (Numerical field in edit mode, not more than 2 symbols)
Step № | Test Steps | Test data | Expected result | Status (Pass/Fail/Not executed) | Notes
------------ | ------------ | ------------ | ------------ | ------------ | ------------
1 | Type new number in the field of card and press the [Enter] button on the keyboard | | Card in the section is placed according to numbering | Not executed|
2 | Type new number in the field of card and click on a free space | |Card in the section is placed according to numbering | Not executed|
3 | Delete number in the field | | Card is placed at the end in continuing order of numbering | Not executed|
4 | Type the same numbers in different fields | | Cards numbering continues and changes for all cards from the last number entered | Not executed|
### Dependencies:
link #265
### [Gantt Chart](https://docs.google.com/spreadsheets/d/1bgaEJDOf3OhfNRfP-WWPKmmZFW5C3blOUxamE3wSCbM/edit#gid=775577959)
|
non_process
|
verify that prioritizations of cards are modified in the carousel after changing number of cards story link status not executed title verify that prioritizations of cards are modified in the carousel after changing number of cards description this test case verifies that admin can change number of cards in the carousel and modifies prioritizations of cards pre conditions admin is logged in admin is on the existing materials section admin can number sequence of cards in the section each card in the existing materials section has a number according to a place of this card in a row of the carousel numerical field in edit mode not more than symbols step № test steps test data expected result status pass fail not executed notes type new number in the field of card and press the button on the keyboard card in the section is placed according to numbering not executed type new number in the field of card and click on a free space card in the section is placed according to numbering not executed delete number in the field card is placed at the end in continuing order of numbering not executed type the same numbers in different fields cards numbering continues and changes for all cards from the last number entered not executed dependencies link
| 0
|
17,365
| 23,189,589,913
|
IssuesEvent
|
2022-08-01 11:26:58
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Transformation crashes if map element doesn't have class attribute
|
bug priority/medium preprocess
|
With a topicref like this:
<topicref href="topic.dita" rev="1.0"/>
publishing to Oxygen WebHelp responsive using force.unique throws a NPE:
/.../plugins/org.dita.base/build_preprocess.xml:250: java.lang.NullPointerException
at java.base/java.util.ArrayDeque.addFirst(ArrayDeque.java:286)
at org.dita.dost.writer.ForceUniqueFilter.startElement(ForceUniqueFilter.java:60)
This appears to be a side effect from fixing this issue where the Stack was replaced with ArrayDeque which does not allow null values: [3750](https://github.com/dita-ot/dita-ot/issues/3750)
|
1.0
|
Transformation crashes if map element doesn't have class attribute - With a topicref like this:
<topicref href="topic.dita" rev="1.0"/>
publishing to Oxygen WebHelp responsive using force.unique throws a NPE:
/.../plugins/org.dita.base/build_preprocess.xml:250: java.lang.NullPointerException
at java.base/java.util.ArrayDeque.addFirst(ArrayDeque.java:286)
at org.dita.dost.writer.ForceUniqueFilter.startElement(ForceUniqueFilter.java:60)
This appears to be a side effect from fixing this issue where the Stack was replaced with ArrayDeque which does not allow null values: [3750](https://github.com/dita-ot/dita-ot/issues/3750)
|
process
|
transformation crashes if map element doesn t have class attribute with a topicref like this publishing to oxygen webhelp responsive using force unique throws a npe plugins org dita base build preprocess xml java lang nullpointerexception at java base java util arraydeque addfirst arraydeque java at org dita dost writer forceuniquefilter startelement forceuniquefilter java this appears to be a side effect from fixing this issue where the stack was replaced with arraydeque which does not allow null values
| 1
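The DITA-OT record above traces the NPE to `ArrayDeque.addFirst` rejecting `null` where the old `Stack` accepted it. As a point of contrast only (an analogy, not the Java fix), Python's `collections.deque` imposes no such null restriction:

```python
from collections import deque

# Python's deque is happy to store None at either end; Java's ArrayDeque
# would throw a NullPointerException for the equivalent addFirst(null).
d = deque()
d.appendleft(None)
```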
|
243,994
| 7,869,107,018
|
IssuesEvent
|
2018-06-24 09:35:23
|
fog/fog-google
|
https://api.github.com/repos/fog/fog-google
|
closed
|
Bootstrap method should look gcloud ssh keys
|
enhancement help wanted priority/low ready
|
Currently, the live bootstrap test assumes the user has `~/.ssh/id_rsa[.pub]` for ssh keys. Google users will typically have `~/.ssh/google_compute_engine[.pub]`. In that case, ssh will "Just Work"(tm), so the suggestion is for the live bootstrap test to first try the google key, then fall back to id_rsa.
wdyt @ihmccreery?
|
1.0
|
Bootstrap method should look gcloud ssh keys - Currently, the live bootstrap test assumes the user has `~/.ssh/id_rsa[.pub]` for ssh keys. Google users will typically have `~/.ssh/google_compute_engine[.pub]`. In that case, ssh will "Just Work"(tm), so the suggestion is for the live bootstrap test to first try the google key, then fall back to id_rsa.
wdyt @ihmccreery?
|
non_process
|
bootstrap method should look gcloud ssh keys currently the live bootstrap test assumes the user has ssh id rsa for ssh keys google users will typically have ssh google compute engine in that case ssh will just work tm so the suggestion is for the live bootstrap test to first try the google key then fall back to id rsa wdyt ihmccreery
| 0
|
11,089
| 13,930,672,853
|
IssuesEvent
|
2020-10-22 03:05:55
|
kubeflow/kubeflow
|
https://api.github.com/repos/kubeflow/kubeflow
|
closed
|
Release KF 1.1
|
area/engprod kind/feature kind/process priority/p0
|
/kind process
We need to identify who will be driving the 1.1 release. These folks should then
* Identify timelines for the release
* e.g. cutoff dates for branch cuts
* Target dates for RCs
This would probably be a good opportunity to update some of the processes and policies around releases.
https://github.com/kubeflow/kubeflow/blob/master/docs_dev/releasing.md
Area | release czar | Tracking Issue
--- | --- | ---
aws | @Jeffwan | ~~#5057~~ |
centraldashboard | | ~~#5068~~
docs | | kubeflow/website#1984
fairing | @jinchihe | ~~kubeflow/fairing#503~~ |
feast | @woop | |
gcp | @jlewi | kubeflow/gcp-blueprints#46 |
katib | @andreyvelich | kubeflow/katib#1211 |
kfctl | @krishnadurai, @crobby | kubeflow/kfctl#352 |
kfserving | @yuzisun @animeshsingh | kubeflow/kfserving#648 |
manifests | @krishnadurai | kubeflow/manifests#1252
metadata | |
minikf | @vkoukis | |
multiuser | @yanniszark @bmorphism | #5067, #5068
notebooks | @kimwnasptd , @jtfogarty | #5060, #5068 |
pipelines | @Bobgy | kubeflow/pipelines#3961 |
training | @johnugeorge @andreyvelich @Jeffwan | kubeflow/common#97
Target Dates
* Branch cut June 19
|
1.0
|
Release KF 1.1 - /kind process
We need to identify who will be driving the 1.1 release. These folks should then
* Identify timelines for the release
* e.g. cutoff dates for branch cuts
* Target dates for RCs
This would probably be a good opportunity to update some of the processes and policies around releases.
https://github.com/kubeflow/kubeflow/blob/master/docs_dev/releasing.md
Area | release czar | Tracking Issue
--- | --- | ---
aws | @Jeffwan | ~~#5057~~ |
centraldashboard | | ~~#5068~~
docs | | kubeflow/website#1984
fairing | @jinchihe | ~~kubeflow/fairing#503~~ |
feast | @woop | |
gcp | @jlewi | kubeflow/gcp-blueprints#46 |
katib | @andreyvelich | kubeflow/katib#1211 |
kfctl | @krishnadurai, @crobby | kubeflow/kfctl#352 |
kfserving | @yuzisun @animeshsingh | kubeflow/kfserving#648 |
manifests | @krishnadurai | kubeflow/manifests#1252
metadata | |
minikf | @vkoukis | |
multiuser | @yanniszark @bmorphism | #5067, #5068
notebooks | @kimwnasptd , @jtfogarty | #5060, #5068 |
pipelines | @Bobgy | kubeflow/pipelines#3961 |
training | @johnugeorge @andreyvelich @Jeffwan | kubeflow/common#97
Target Dates
* Branch cut June 19
|
process
|
release kf kind process we need to identify who will be driving the release these folks should then identify timelines for the release e g cutoff dates for branch cuts target dates for rcs this would probably be a good opportunity to update some of the processes and policies around releases area release czar tracking issue aws jeffwan centraldashboard docs kubeflow website fairing jinchihe kubeflow fairing feast woop gcp jlewi kubeflow gcp blueprints katib andreyvelich kubeflow katib kfctl krishnadurai crobby kubeflow kfctl kfserving yuzisun animeshsingh kubeflow kfserving manifests krishnadurai kubeflow manifests metadata minikf vkoukis multiuser yanniszark bmorphism notebooks kimwnasptd jtfogarty pipelines bobgy kubeflow pipelines training johnugeorge andreyvelich jeffwan kubeflow common target dates branch cut june
| 1
|
12,886
| 15,279,961,829
|
IssuesEvent
|
2021-02-23 05:18:10
|
Today-I-Learn/backend-study
|
https://api.github.com/repos/Today-I-Learn/backend-study
|
opened
|
스레드와 프로세스의 차이는 무엇인가요?
|
OS process thread
|
### 스레드와 프로세스의 차이는 무엇인가요?
- [ ] 스레드와 프로세스의 정의
- [ ] 스레드와 프로세스의 차이점
### 멀티스레드와 멀티프로세스 사용시 각각의 장단점이 무엇인가요?
- [ ] 멀티스레드와 멀티프로세스의 장단점
- [ ] 어떤 경우가 각각의 방법에 대해서 효율적인지
|
1.0
|
스레드와 프로세스의 차이는 무엇인가요? - ### 스레드와 프로세스의 차이는 무엇인가요?
- [ ] 스레드와 프로세스의 정의
- [ ] 스레드와 프로세스의 차이점
### 멀티스레드와 멀티프로세스 사용시 각각의 장단점이 무엇인가요?
- [ ] 멀티스레드와 멀티프로세스의 장단점
- [ ] 어떤 경우가 각각의 방법에 대해서 효율적인지
|
process
|
스레드와 프로세스의 차이는 무엇인가요 스레드와 프로세스의 차이는 무엇인가요 스레드와 프로세스의 정의 스레드와 프로세스의 차이점 멀티스레드와 멀티프로세스 사용시 각각의 장단점이 무엇인가요 멀티스레드와 멀티프로세스의 장단점 어떤 경우가 각각의 방법에 대해서 효율적인지
| 1
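The study-note record above asks how threads differ from processes. A minimal Python sketch (illustrative only, not from the record) shows the thread half of the answer: threads share the parent's address space, so mutations from every thread land in the same variable, whereas a separate process would receive its own copy of memory and its increments would be invisible to the parent.

```python
import threading

counter = 0

def bump():
    # Each thread mutates the same global; no copy is made because
    # threads share the process's memory. A child *process* running
    # this function would only change its own copy of counter.
    global counter
    counter += 1

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now 4: all four increments happened in one address space
```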
|
18,464
| 24,549,730,758
|
IssuesEvent
|
2022-10-12 11:39:05
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] [Angular Upgrade] All the tabs should be aligned properly in the main screen of the participant manager
|
Bug P1 Participant manager Process: Fixed Process: Tested dev
|
Login to PM and Verify,
1. Alignment issue
**AR:** All the tabs such as 'Dashboard' , ' Locations' , 'Admins' , 'My Account' and also 'Search by site or study ID or name 'is not aligned properly in the main screen of the participant manager as per the design document
**ER:** All the tabs should be aligned properly in the main screen of the participant manager as per the design document
2. A line is getting displayed in all the tabs below , So it should not get displayed. It should be fixed throughout the application
**1.**

**2.**

|
2.0
|
[PM] [Angular Upgrade] All the tabs should be aligned properly in the main screen of the participant manager - Login to PM and Verify,
1. Alignment issue
**AR:** All the tabs such as 'Dashboard' , ' Locations' , 'Admins' , 'My Account' and also 'Search by site or study ID or name 'is not aligned properly in the main screen of the participant manager as per the design document
**ER:** All the tabs should be aligned properly in the main screen of the participant manager as per the design document
2. A line is getting displayed in all the tabs below , So it should not get displayed. It should be fixed throughout the application
**1.**

**2.**

|
process
|
all the tabs should be aligned properly in the main screen of the participant manager login to pm and verify alignment issue ar all the tabs such as dashboard locations admins my account and also search by site or study id or name is not aligned properly in the main screen of the participant manager as per the design document er all the tabs should be aligned properly in the main screen of the participant manager as per the design document a line is getting displayed in all the tabs below so it should not get displayed it should be fixed throughout the application
| 1
|
32,506
| 26,746,701,460
|
IssuesEvent
|
2023-01-30 16:27:21
|
opendatahub-io/odh-dashboard
|
https://api.github.com/repos/opendatahub-io/odh-dashboard
|
closed
|
Explore allowing an administrator to enable/disable ODH components from UI
|
kind/enhancement infrastructure priority/normal
|
With access to the `kfdef` file, we could allow authorized users to update the installed components of Open Data Hub. This would require authentication through "Login with OpenShift" and verify they have access. You could allow users to customize kfdef fields with a forms wizard.
Requirements:
- Allow users to Login with OpenShift
- Allow components to be correctly:
- installed
- updated
- uninstalled



|
1.0
|
Explore allowing an administrator to enable/disable ODH components from UI - With access to the `kfdef` file, we could allow authorized users to update the installed components of Open Data Hub. This would require authentication through "Login with OpenShift" and verify they have access. You could allow users to customize kfdef fields with a forms wizard.
Requirements:
- Allow users to Login with OpenShift
- Allow components to be correctly:
- installed
- updated
- uninstalled



|
non_process
|
explore allowing an administrator to enable disable odh components from ui with access to the kfdef file we could allow authorized users to update the installed components of open data hub this would require authentication through login with openshift and verify they have access you could allow users to customize kfdef fields with a forms wizard requirements allow users to login with openshift allow components to be correctly installed updated uninstalled
| 0
|
17,767
| 23,698,579,505
|
IssuesEvent
|
2022-08-29 16:45:10
|
cloudfoundry/korifi
|
https://api.github.com/repos/cloudfoundry/korifi
|
opened
|
[Feature]: Developer can push apps using the top-level `instances` field in the manifest
|
Top-level process config
|
### Blockers/Dependencies
_No response_
### Background
**As a** developer
**I want** top-level process configuration in manifests to be supported
**So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc.
### Acceptance Criteria
**GIVEN** I have the sources for an application (e.g. `tests/smoke/assets/test-node-app`)
**WHEN I** push it with the following command:
```sh
cf push test -i 3
```
**THEN I** see the push succeeds with an output similar to this:
```
name: test
requested state: started
routes: test.vcap.me
last uploaded: Mon 29 Aug 16:28:36 UTC 2022
stack: cflinuxfs3
buildpacks:
name version detect output buildpack name
nodejs_buildpack 1.7.61 nodejs nodejs
type: web
sidecars:
instances: 3/3
memory usage: 256M
start command: npm start
state since cpu memory disk details
#0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G
#1 running 2022-08-29T16:28:54Z 1.6% 40.5M of 256M 115.7M of 1G
#2 running 2022-08-29T16:28:54Z 1.5% 40.6M of 256M 115.7M of 1G
```
### Dev Notes
_No response_
|
1.0
|
[Feature]: Developer can push apps using the top-level `instances` field in the manifest - ### Blockers/Dependencies
_No response_
### Background
**As a** developer
**I want** top-level process configuration in manifests to be supported
**So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc.
### Acceptance Criteria
**GIVEN** I have the sources for an application (e.g. `tests/smoke/assets/test-node-app`)
**WHEN I** push it with the following command:
```sh
cf push test -i 3
```
**THEN I** see the push succeeds with an output similar to this:
```
name: test
requested state: started
routes: test.vcap.me
last uploaded: Mon 29 Aug 16:28:36 UTC 2022
stack: cflinuxfs3
buildpacks:
name version detect output buildpack name
nodejs_buildpack 1.7.61 nodejs nodejs
type: web
sidecars:
instances: 3/3
memory usage: 256M
start command: npm start
state since cpu memory disk details
#0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G
#1 running 2022-08-29T16:28:54Z 1.6% 40.5M of 256M 115.7M of 1G
#2 running 2022-08-29T16:28:54Z 1.5% 40.6M of 256M 115.7M of 1G
```
### Dev Notes
_No response_
|
process
|
developer can push apps using the top level instances field in the manifest blockers dependencies no response background as a developer i want top level process configuration in manifests to be supported so that i can use shortcut cf push flags like c i m etc acceptance criteria given i have the sources for an application e g tests smoke assets test node app when i push it with the following command sh cf push test i then i see the push succeeds with an output similar to this name test requested state started routes test vcap me last uploaded mon aug utc stack buildpacks name version detect output buildpack name nodejs buildpack nodejs nodejs type web sidecars instances memory usage start command npm start state since cpu memory disk details running of of running of of running of of dev notes no response
| 1
|
9,487
| 12,480,217,768
|
IssuesEvent
|
2020-05-29 19:50:36
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
Add kicbase to dockerhub for accessibility from China
|
kind/process priority/backlog
|
<!--- Please include the "minikube start" command you used in your reproduction steps --->
**Steps to reproduce the issue:**
1.
2.
3.
<!--- TIP: Add the "--alsologtostderr" flag to the command-line for more logs --->
**Full output of failed command:**
**Full output of `minikube start` command used, if not already included:**
**Optional: Full output of `minikube logs` command:**
<details>
</details>
|
1.0
|
Add kicbase to dockerhub for accessibility from China - <!--- Please include the "minikube start" command you used in your reproduction steps --->
**Steps to reproduce the issue:**
1.
2.
3.
<!--- TIP: Add the "--alsologtostderr" flag to the command-line for more logs --->
**Full output of failed command:**
**Full output of `minikube start` command used, if not already included:**
**Optional: Full output of `minikube logs` command:**
<details>
</details>
|
process
|
add kicbase to dockerhub for accessibility from china steps to reproduce the issue full output of failed command full output of minikube start command used if not already included optional full output of minikube logs command
| 1
|
466,374
| 13,400,979,673
|
IssuesEvent
|
2020-09-03 16:35:29
|
elixir-cloud-aai/drs-filer
|
https://api.github.com/repos/elixir-cloud-aai/drs-filer
|
closed
|
Replace code that has been moved to archetype
|
flag: good 1st issue priority: medium type flag: meta workload: weeks
|
A lot of the code shared between this service and related services [proTES](https://github.com/elixir-cloud-aai/proTES), [proWES](https://github.com/elixir-cloud-aai/proWES) and to some extent [TEStribute](https://github.com/elixir-cloud-aai/TEStribute), [mock-DRS](https://github.com/elixir-cloud-aai/mock-DRS) and [mock-TES](https://github.com/elixir-cloud-aai/mock-TES), is successively being moved to the archetype library [FOCA](https://github.com/elixir-cloud-aai/foca).
Duplicate code should now successively be removed from this service. This meta issue should document the progress of this process and remain open. Please open a separate issue for the replacement of every individual module (or other unit of code, as appropriate) and link to this issue.
|
1.0
|
Replace code that has been moved to archetype - A lot of the code shared between this service and related services [proTES](https://github.com/elixir-cloud-aai/proTES), [proWES](https://github.com/elixir-cloud-aai/proWES) and to some extent [TEStribute](https://github.com/elixir-cloud-aai/TEStribute), [mock-DRS](https://github.com/elixir-cloud-aai/mock-DRS) and [mock-TES](https://github.com/elixir-cloud-aai/mock-TES), is successively being moved to the archetype library [FOCA](https://github.com/elixir-cloud-aai/foca).
Duplicate code should now successively be removed from this service. This meta issue should document the progress of this process and remain open. Please open a separate issue for the replacement of every individual module (or other unit of code, as appropriate) and link to this issue.
|
non_process
|
replace code that has been moved to archetype a lot of the code shared between this service and related services and to some extent and is successively being moved to the archetype library duplicate code should now successively be removed from this service this meta issue should document the progress of this process and remain open please open a separate issue for the replacement of every individual module or other unit of code as appropriate and link to this issue
| 0
|
1,125
| 3,603,694,480
|
IssuesEvent
|
2016-02-03 20:00:45
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
reopened
|
Quotes are not handled correctly when child_process.spawn() parses args
|
child_process windows
|
```js
require('child_process')
.spawn('cmd', ['/c', 'node "my test/__args_out.js" b c d e f'])
.stderr.addListener("data", data => {console.log(data.toString());});
```
fails because quotes don't seem to be recognized and parsed by `child_process.spawn()` as delimiters for parameters with spaces:
```
Error: Cannot find module 'd:\Documents\VS Code\"my test\__args_out.js"'
at Function.Module._resolveFilename (module.js:326:15)
at Function.Module._load (module.js:277:25)
at Function.Module.runMain (module.js:430:10)
at startup (node.js:141:18)
at node.js:980:3
```
This happens on Windows platforms. I can't tell whether the above error occurs on other platforms, too.
<br/>
The above code is taken from [*jake* source](https://github.com/jakejs/jake/blob/master/lib/utils/index.js#l179-l184).
|
1.0
|
Quotes are not handled correctly when child_process.spawn() parses args - ```js
require('child_process')
.spawn('cmd', ['/c', 'node "my test/__args_out.js" b c d e f'])
.stderr.addListener("data", data => {console.log(data.toString());});
```
fails because quotes don't seem to be recognized and parsed by `child_process.spawn()` as delimiters for parameters with spaces:
```
Error: Cannot find module 'd:\Documents\VS Code\"my test\__args_out.js"'
at Function.Module._resolveFilename (module.js:326:15)
at Function.Module._load (module.js:277:25)
at Function.Module.runMain (module.js:430:10)
at startup (node.js:141:18)
at node.js:980:3
```
This happens on Windows platforms. I can't tell whether the above error occurs on other platforms, too.
<br/>
The above code is taken from [*jake* source](https://github.com/jakejs/jake/blob/master/lib/utils/index.js#l179-l184).
|
process
|
quotes are not handled correctly when child process spawn parses args js require child process spawn cmd stderr addlistener data data console log data tostring fails because quotes don t seem to be recognized and parsed by child process spawn as delimiters for parameters with spaces error cannot find module d documents vs code my test args out js at function module resolvefilename module js at function module load module js at function module runmain module js at startup node js at node js this happens on windows platforms i can t tell whether the above error occurs on other platforms too the above code is taken from
| 1
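The Node.js record above describes quotes not being honored when an argument string is split into parameters. As a language-neutral illustration of the expected behaviour (a Python analogy, not Node's actual Windows parsing), `shlex.split` performs the quote-aware tokenization the reporter wanted: the double quotes keep the spaced path together as a single argument.

```python
import shlex

# Quote-aware splitting: "my test/__args_out.js" stays one token, which is
# the behaviour the issue expected from child_process.spawn() on Windows.
args = shlex.split('node "my test/__args_out.js" b c d e f')
```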
|
360,421
| 25,288,859,780
|
IssuesEvent
|
2022-11-16 21:51:46
|
wixtoolset/issues
|
https://api.github.com/repos/wixtoolset/issues
|
closed
|
Wix documentation links broken for version 3, multiple pages not found:
|
documentation website
|
### WiX Toolset Page URL
https://wixtoolset.org/docs/v3/xsd/wix/wix/msipackage/
### Page URL that linked to WiX Toolset
https://wixtoolset.org/docs/v3/xsd/wix/chain/
### Additional information (optional)
No link on this page works correctly and many other pages have this problem (Checked some random elements pages and they had broken links)
See more info here: https://github.com/wixtoolset/web/pull/112
|
1.0
|
Wix documentation links broken for version 3, multiple pages not found: - ### WiX Toolset Page URL
https://wixtoolset.org/docs/v3/xsd/wix/wix/msipackage/
### Page URL that linked to WiX Toolset
https://wixtoolset.org/docs/v3/xsd/wix/chain/
### Additional information (optional)
No link on this page works correctly and many other pages have this problem (Checked some random elements pages and they had broken links)
See more info here: https://github.com/wixtoolset/web/pull/112
|
non_process
|
wix documentation links broken for version multiple pages not found wix toolset page url page url that linked to wix toolset additional information optional no link on this page works correctly and many other pages have this problem checked some random elements pages and they had broken links see more info here
| 0
|
5,245
| 8,039,253,116
|
IssuesEvent
|
2018-07-30 17:47:56
|
GoogleCloudPlatform/google-cloud-python
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
|
closed
|
Spanner system test timeout
|
api: spanner flaky testing type: process
|
See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7224
```python
___________________ TestDatabaseAPI.test_update_database_ddl ___________________
self = <tests.system.test_system.TestDatabaseAPI testMethod=test_update_database_ddl>
def test_update_database_ddl(self):
pool = BurstyPool()
temp_db_id = 'temp_db' + unique_resource_id('_')
temp_db = Config.INSTANCE.database(temp_db_id, pool=pool)
create_op = temp_db.create()
self.to_delete.append(temp_db)
# We want to make sure the operation completes.
create_op.result(120) # raises on failure / timeout.
operation = temp_db.update_ddl(DDL_STATEMENTS)
# We want to make sure the operation completes.
> operation.result(120) # raises on failure / timeout.
tests/system/test_system.py:320:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/future/polling.py:115: in result
self._blocking_poll(timeout=timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.api_core.operation.Operation object at 0x7f4604191d10>
timeout = 120
def _blocking_poll(self, timeout=None):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
retry_(self._done_or_raise)()
except exceptions.RetryError:
raise concurrent.futures.TimeoutError(
> 'Operation did not complete within the designated '
'timeout.')
E TimeoutError: Operation did not complete within the designated timeout.
```
|
1.0
|
Spanner system test timeout - See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7224
```python
___________________ TestDatabaseAPI.test_update_database_ddl ___________________
self = <tests.system.test_system.TestDatabaseAPI testMethod=test_update_database_ddl>
def test_update_database_ddl(self):
pool = BurstyPool()
temp_db_id = 'temp_db' + unique_resource_id('_')
temp_db = Config.INSTANCE.database(temp_db_id, pool=pool)
create_op = temp_db.create()
self.to_delete.append(temp_db)
# We want to make sure the operation completes.
create_op.result(120) # raises on failure / timeout.
operation = temp_db.update_ddl(DDL_STATEMENTS)
# We want to make sure the operation completes.
> operation.result(120) # raises on failure / timeout.
tests/system/test_system.py:320:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/future/polling.py:115: in result
self._blocking_poll(timeout=timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.api_core.operation.Operation object at 0x7f4604191d10>
timeout = 120
def _blocking_poll(self, timeout=None):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
retry_(self._done_or_raise)()
except exceptions.RetryError:
raise concurrent.futures.TimeoutError(
> 'Operation did not complete within the designated '
'timeout.')
E TimeoutError: Operation did not complete within the designated timeout.
```
|
process
|
spanner system test timeout see python testdatabaseapi test update database ddl self def test update database ddl self pool burstypool temp db id temp db unique resource id temp db config instance database temp db id pool pool create op temp db create self to delete append temp db we want to make sure the operation completes create op result raises on failure timeout operation temp db update ddl ddl statements we want to make sure the operation completes operation result raises on failure timeout tests system test system py nox sys lib site packages google api core future polling py in result self blocking poll timeout timeout self timeout def blocking poll self timeout none poll and wait for the future to be resolved args timeout int how long in seconds to wait for the operation to complete if none wait indefinitely if self result set return retry self retry with deadline timeout try retry self done or raise except exceptions retryerror raise concurrent futures timeouterror operation did not complete within the designated timeout e timeouterror operation did not complete within the designated timeout
| 1
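The Spanner record's traceback shows `_blocking_poll` raising a timeout after retrying `_done_or_raise` up to a deadline. The logic can be sketched as a small standalone function (names and the polling interval are illustrative; this is not the `google.api_core` implementation):

```python
import time

def blocking_poll(is_done, timeout, interval=0.01):
    """Poll is_done() until it returns True or the deadline passes.

    Mirrors the retry-with-deadline shape in the traceback above:
    success returns normally, exceeding the deadline raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_done():
            return True
        time.sleep(interval)
    raise TimeoutError(
        "Operation did not complete within the designated timeout.")
```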
|
359,650
| 25,249,028,240
|
IssuesEvent
|
2022-11-15 13:23:11
|
joshuacc/ahkpm
|
https://api.github.com/repos/joshuacc/ahkpm
|
opened
|
Add package search to ahkpm.dev
|
documentation
|
Based on the query mentioned in #34 .
As a prerequisite, need to get a decent number of packages on GitHub to tag themselves appropriately.
|
1.0
|
Add package search to ahkpm.dev - Based on the query mentioned in #34 .
As a prerequisite, need to get a decent number of packages on GitHub to tag themselves appropriately.
|
non_process
|
add package search to ahkpm dev based on the query mentioned in as a prerequisite need to get a decent number of packages on github to tag themselves appropriately
| 0
|
10,890
| 4,107,270,357
|
IssuesEvent
|
2016-06-06 12:23:22
|
ProftaakS4/Case-Photo-Producer
|
https://api.github.com/repos/ProftaakS4/Case-Photo-Producer
|
opened
|
CodeReview: PhotoshopWebsite/Domain/Order.cs
|
Code Review
|
#**File: PhotoshopWebsite/Domain/Order.cs**
klassen: commentaar op klassen
constructor: commentaar bij constructor
methodes: commentaar bij methodes
String: kleine letter s gebruiken voor de klasse string
|
1.0
|
CodeReview: PhotoshopWebsite/Domain/Order.cs - #**File: PhotoshopWebsite/Domain/Order.cs**
klassen: commentaar op klassen
constructor: commentaar bij constructor
methodes: commentaar bij methodes
String: kleine letter s gebruiken voor de klasse string
|
non_process
|
codereview photoshopwebsite domain order cs file photoshopwebsite domain order cs klassen commentaar op klassen constructor commentaar bij constructor methodes commentaar bij methodes string kleine letter s gebruiken voor de klasse string
| 0
|
259,975
| 19,651,249,116
|
IssuesEvent
|
2022-01-10 07:26:57
|
team-GIFT/gift
|
https://api.github.com/repos/team-GIFT/gift
|
closed
|
기획문서: 요구사항 정의서
|
documentation
|
- 요구사항 정의서
- 비즈니스 요구사항 : 서비스의 목적, 타겟 등
- 기술 요구사항 : 페이지별 혹은 컴퍼넌트별 상세 요구사항
- 공통 요구사항 : 이미지 최적화 목표 수치, 웹접근성 개선 목표치 (탭 이용, 보조기기 접근 가능 스피너 등)
|
1.0
|
기획문서: 요구사항 정의서 - - 요구사항 정의서
- 비즈니스 요구사항 : 서비스의 목적, 타겟 등
- 기술 요구사항 : 페이지별 혹은 컴퍼넌트별 상세 요구사항
- 공통 요구사항 : 이미지 최적화 목표 수치, 웹접근성 개선 목표치 (탭 이용, 보조기기 접근 가능 스피너 등)
|
non_process
|
기획문서 요구사항 정의서 요구사항 정의서 비즈니스 요구사항 서비스의 목적 타겟 등 기술 요구사항 페이지별 혹은 컴퍼넌트별 상세 요구사항 공통 요구사항 이미지 최적화 목표 수치 웹접근성 개선 목표치 탭 이용 보조기기 접근 가능 스피너 등
| 0
|
302,496
| 9,260,762,636
|
IssuesEvent
|
2019-03-18 07:06:43
|
OpenSourceEconomics/soepy
|
https://api.github.com/repos/OpenSourceEconomics/soepy
|
opened
|
proper conda environment
|
enhancement pb package priority medium size small
|
We need a proper conda environment file added to the repository as well.
|
1.0
|
proper conda environment - We need a proper conda environment file added to the repository as well.
|
non_process
|
proper conda environment we need a proper conda environment file added to the repository as well
| 0
|
20,781
| 3,634,365,776
|
IssuesEvent
|
2016-02-11 17:43:16
|
coder-molok/foowd_alpha2
|
https://api.github.com/repos/coder-molok/foowd_alpha2
|
closed
|
Registrazione - immagine obbligatoria produttore
|
design enhancement
|
Togliere campo obbligtorio "carica immagine" (non essendoci più il carousel nn serve)
|
1.0
|
Registrazione - immagine obbligatoria produttore - Togliere campo obbligtorio "carica immagine" (non essendoci più il carousel nn serve)
|
non_process
|
registrazione immagine obbligatoria produttore togliere campo obbligtorio carica immagine non essendoci più il carousel nn serve
| 0
|
9,566
| 12,519,511,382
|
IssuesEvent
|
2020-06-03 14:32:49
|
code4romania/expert-consultation-api
|
https://api.github.com/repos/code4romania/expert-consultation-api
|
closed
|
Unable to get document by id returned from /documents
|
bug document processing documents java spring
|
/api/documents returns `metadataId` instead of `id` which causes the endpoint /documents/{id} to throw "Resource not found"
|
1.0
|
Unable to get document by id returned from /documents - /api/documents returns `metadataId` instead of `id` which causes the endpoint /documents/{id} to throw "Resource not found"
|
process
|
unable to get document by id returned from documents api documents returns metadataid instead of id which causes the endpoint documents id to throw resource not found
| 1
|
22,002
| 14,960,306,903
|
IssuesEvent
|
2021-01-27 05:28:06
|
OpenHistoricalMap/issues
|
https://api.github.com/repos/OpenHistoricalMap/issues
|
opened
|
Notifications when Github Actions succeed or fail
|
infrastructure
|
**What's your idea for a cool feature that would help you use OHM better.**
The Github Actions build takes about 20 minutes. It's easy to get distracted and forget to look at the results. It would be better if the Actions automagically told us when they are done.
there's some docs on sending Slack notifications:
https://www.ravsam.in/blog/send-slack-notification-when-github-actions-fails/
If we prefer email, there's also this:
https://medium.com/ravsam-web-solutions/send-an-email-notification-when-github-actions-fails-ea83cbeabbe0
I would say we send the alerts either to the channel shared by GreenInfo and DevSeed in their own Slacks, or maybe even better to the openhistoricalmap-tech channel in OSM slack.
**Current workarounds**
Remember to look at the Actions panel in the `ohm-deploy` repo.
|
1.0
|
Notifications when Github Actions succeed or fail - **What's your idea for a cool feature that would help you use OHM better.**
The Github Actions build takes about 20 minutes. It's easy to get distracted and forget to look at the results. It would be better if the Actions automagically told us when they are done.
there's some docs on sending Slack notifications:
https://www.ravsam.in/blog/send-slack-notification-when-github-actions-fails/
If we prefer email, there's also this:
https://medium.com/ravsam-web-solutions/send-an-email-notification-when-github-actions-fails-ea83cbeabbe0
I would say we send the alerts either to the channel shared by GreenInfo and DevSeed in their own Slacks, or maybe even better to the openhistoricalmap-tech channel in OSM slack.
**Current workarounds**
Remember to look at the Actions panel in the `ohm-deploy` repo.
|
non_process
|
notifications when github actions succeed or fail what s your idea for a cool feature that would help you use ohm better the github actions build takes about minutes it s easy to get distracted and forget to look at the results it would be better if the actions automagically told us when they are done there s some docs on sending slack notifications if we prefer email there s also this i would say we send the alerts either to the channel shared by greeninfo and devseed in their own slacks or maybe even better to the openhistoricalmap tech channel in osm slack current workarounds remember to look at the actions panel in the ohm deploy repo
| 0
|
447,261
| 12,887,212,997
|
IssuesEvent
|
2020-07-13 10:49:06
|
minio/minio
|
https://api.github.com/repos/minio/minio
|
closed
|
Why Standalone Multi-Tenant Deployment is faster than Standalone normal?(write or put)
|
community priority: medium stale triage
|
<!--- Provide a general summary of the issue in the Title above -->
I want a fast object storage tool and did a performance test for Minio. I found that if I deploy Minio in Standalone mode, Multi-Tenant (4) is fatster than norma node,I want to know why.
I start Minio like this:
* Normal: minio server --address :9000 ./DATA1
* Multi-Tenant: minio server --address :9000 ./DATA{1...4}
The directory DATA1~DATA4 is just a directory, I didn't mount it to any disk devices.
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
As far as I know, if we split one file to many small files and did something like Erasure Code, and then rewrite it to such small files, the write rate can't be faster than I write it to only one file.
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
But, the truth is that Multi-Tenant mode (4 nodes) is fatster than Normal mode. My result is:
* Normal mode: maybe 50times/s, that is, I can write a file which size is 10KB to Minio 50 times persecond, so the rate is 500KB/s;
* Multi-Tenant mode: 180times/s, the rate is 1800KB/s.
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. start two Minio instances which is in normal mode and Multi-Tenant mode ;
2. PUT a complete file that is 10KB size into Minio to test the upload performance in both mode;
3. compare the performance or write rate between this two mode.
4.
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Regression
<!-- Is this issue a regression? (Yes / No) -->
<!-- If Yes, optionally please include minio version or commit id or PR# that caused this regression, if you have these details. -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`): 2018-11-06
* Environment name and version (e.g. nginx 1.9.1): Minio standalone mode in normal and Multi-Tenant run directoly in CentOS 7.4.
* Server type and version: Virtual machine run CentOS 7.4 in VMWARE
* Operating System and version (`uname -a`): Linux zcw61 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
* Link to your project:
|
1.0
|
Why Standalone Multi-Tenant Deployment is faster than Standalone normal?(write or put) - <!--- Provide a general summary of the issue in the Title above -->
I want a fast object storage tool and did a performance test for Minio. I found that if I deploy Minio in Standalone mode, Multi-Tenant (4) is fatster than norma node,I want to know why.
I start Minio like this:
* Normal: minio server --address :9000 ./DATA1
* Multi-Tenant: minio server --address :9000 ./DATA{1...4}
The directory DATA1~DATA4 is just a directory, I didn't mount it to any disk devices.
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
As far as I know, if we split one file to many small files and did something like Erasure Code, and then rewrite it to such small files, the write rate can't be faster than I write it to only one file.
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
But, the truth is that Multi-Tenant mode (4 nodes) is fatster than Normal mode. My result is:
* Normal mode: maybe 50times/s, that is, I can write a file which size is 10KB to Minio 50 times persecond, so the rate is 500KB/s;
* Multi-Tenant mode: 180times/s, the rate is 1800KB/s.
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. start two Minio instances which is in normal mode and Multi-Tenant mode ;
2. PUT a complete file that is 10KB size into Minio to test the upload performance in both mode;
3. compare the performance or write rate between this two mode.
4.
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Regression
<!-- Is this issue a regression? (Yes / No) -->
<!-- If Yes, optionally please include minio version or commit id or PR# that caused this regression, if you have these details. -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used (`minio version`): 2018-11-06
* Environment name and version (e.g. nginx 1.9.1): Minio standalone mode in normal and Multi-Tenant run directoly in CentOS 7.4.
* Server type and version: Virtual machine run CentOS 7.4 in VMWARE
* Operating System and version (`uname -a`): Linux zcw61 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
* Link to your project:
|
non_process
|
why standalone multi tenant deployment is faster than standalone normal? write or put i want a fast object storage tool and did a performance test for minio i found that if i deploy minio in standalone mode multi tenant is fatster than norma node i want to know why i start minio like this normal minio server address multi tenant minio server address data the directory is just a directory i didn t mount it to any disk devices expected behavior as far as i know if we split one file to many small files and did something like erasure code and then rewrite it to such small files the write rate can t be faster than i write it to only one file current behavior but the truth is that multi tenant mode nodes is fatster than normal mode my result is normal mode maybe s that is i can write a file which size is to minio times persecond so the rate is s multi tenant mode s the rate is s possible solution steps to reproduce for bugs start two minio instances which is in normal mode and multi tenant mode put a complete file that is size into minio to test the upload performance in both mode compare the performance or write rate between this two mode context regression your environment version used minio version environment name and version e g nginx minio standalone mode in normal and multi tenant run directoly in centos server type and version virtual machine run centos in vmware operating system and version uname a linux smp thu jul edt gnu linux link to your project
| 0
|
24,738
| 3,907,786,338
|
IssuesEvent
|
2016-04-19 14:00:40
|
geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
|
https://api.github.com/repos/geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
|
closed
|
Z7kXqlD8DigBu/4YGgfYtikP6c+uqdFLJ1EBnGYVVGfFuRHd0gvtS9kGYB03xWWGZYLraWo3uCDS0tQ7k6ZGqMF07p3hSJ6mep/3mJJ4MranvtE6HZJkQVlHOh6pgPZ7OJ42ixKFOnZm0UuDn2WNYi8Zh/hpjrRdngbGDCcCVGM=
|
design
|
ijqniqRtkMBLKzHdbIzg9qqbgv4+fSGZeaJl/JZ6ma/XxyrJc5Iz1LXbht7XpAEDsSuJbawurudEVREgaFFhLbAGNERoIc4JbujA9uV0nsQ1NrRgfVFXCVsqbWeVTbKICXeGZIVxQZ2hgFDV/EcfQDH+v84HRszO1CxcRNPbgqHujbCWY0OfQ/pxw4Zx5fT6CSDxqDzsDuBpBWD34wPLApN8Bwb9ADh2ch6SMKG4sVdmJTeN4DWZJv5U414EB4tFqpuC/j59IZl5omX8lnqZr9fHKslzkjPUtduG3tekAQPPMzdYuiQa1NsddsNX2iLgWvCYcRZ3j528+rsrp7tV2EjwTLoWNtGUaHST/x95iI5MIXavIJ8lyO/JBcUi64SIMgbD5Zu++F0zDapTu5SxEHnRu0HzgeThi/6pUAiKDQdWvksQ+3URu0r91gG6Be088qIhKsPKajV5dMGwof7Ft0H36nOzVBniRjhCp9Ptv2IB9t/JBNnn3755F528Al9l7EHpJECJKJUZxXQIvRJLaAWd+98nAGwjqvvJkfFIAerhdgdGXvsfj65BBNg9jWel93+CSku5/ClggZLmx7XiVNWaTW3+F7MA5GfdaDcyCmtzJxcbOaPkMSiNNh4vxspFzPG8EJOOGhFbGZdfzCsaHuh6PV9O5REWa8WtOyhJZXsztDgLi5AHhqypG23F2LrnqmnpFmEbIjidHGEd+52ftCaRJ+LsM680ESdRVdDv+bB5PT84IyvFmb2o0Y7qV+fLAfbfyQTZ59++eRedvAJfZSne0g5lG/NOnWJSP4/cp30qi3whoYSVD2MfhDi1SFpdM+oXpez6i4P6Fwm2NxnaKvWM3380MR/5Zca9/XVQZcVUIPWuOpXPMFDx+iwJP6mxOVJX4WOxhxQhDmQYW5edGkQUWpLcJy0mmF4TkU8mR3h1Io0HpwICdLKxdC+AZPolMHwJqUrpS8mx1/+5WNQ6coW7hejjtkj7TihOkYaztTc=
|
1.0
|
Z7kXqlD8DigBu/4YGgfYtikP6c+uqdFLJ1EBnGYVVGfFuRHd0gvtS9kGYB03xWWGZYLraWo3uCDS0tQ7k6ZGqMF07p3hSJ6mep/3mJJ4MranvtE6HZJkQVlHOh6pgPZ7OJ42ixKFOnZm0UuDn2WNYi8Zh/hpjrRdngbGDCcCVGM= - ijqniqRtkMBLKzHdbIzg9qqbgv4+fSGZeaJl/JZ6ma/XxyrJc5Iz1LXbht7XpAEDsSuJbawurudEVREgaFFhLbAGNERoIc4JbujA9uV0nsQ1NrRgfVFXCVsqbWeVTbKICXeGZIVxQZ2hgFDV/EcfQDH+v84HRszO1CxcRNPbgqHujbCWY0OfQ/pxw4Zx5fT6CSDxqDzsDuBpBWD34wPLApN8Bwb9ADh2ch6SMKG4sVdmJTeN4DWZJv5U414EB4tFqpuC/j59IZl5omX8lnqZr9fHKslzkjPUtduG3tekAQPPMzdYuiQa1NsddsNX2iLgWvCYcRZ3j528+rsrp7tV2EjwTLoWNtGUaHST/x95iI5MIXavIJ8lyO/JBcUi64SIMgbD5Zu++F0zDapTu5SxEHnRu0HzgeThi/6pUAiKDQdWvksQ+3URu0r91gG6Be088qIhKsPKajV5dMGwof7Ft0H36nOzVBniRjhCp9Ptv2IB9t/JBNnn3755F528Al9l7EHpJECJKJUZxXQIvRJLaAWd+98nAGwjqvvJkfFIAerhdgdGXvsfj65BBNg9jWel93+CSku5/ClggZLmx7XiVNWaTW3+F7MA5GfdaDcyCmtzJxcbOaPkMSiNNh4vxspFzPG8EJOOGhFbGZdfzCsaHuh6PV9O5REWa8WtOyhJZXsztDgLi5AHhqypG23F2LrnqmnpFmEbIjidHGEd+52ftCaRJ+LsM680ESdRVdDv+bB5PT84IyvFmb2o0Y7qV+fLAfbfyQTZ59++eRedvAJfZSne0g5lG/NOnWJSP4/cp30qi3whoYSVD2MfhDi1SFpdM+oXpez6i4P6Fwm2NxnaKvWM3380MR/5Zca9/XVQZcVUIPWuOpXPMFDx+iwJP6mxOVJX4WOxhxQhDmQYW5edGkQUWpLcJy0mmF4TkU8mR3h1Io0HpwICdLKxdC+AZPolMHwJqUrpS8mx1/+5WNQ6coW7hejjtkj7TihOkYaztTc=
|
non_process
|
hpjrrdngbgdcccvgm fsgzeajl ecfqdh xvqzcvuipwuopxpmfdx
| 0
|
22,720
| 32,040,399,771
|
IssuesEvent
|
2023-09-22 18:48:10
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
s3fs 2023.9.2 has 1 GuardDog issues
|
guarddog silent-process-execution
| ERROR: type should be string, got "https://pypi.org/project/s3fs\nhttps://inspector.pypi.io/project/s3fs\n```{\n \"dependency\": \"s3fs\",\n \"version\": \"2023.9.2\",\n \"result\": {\n \"issues\": 1,\n \"errors\": {},\n \"results\": {\n \"silent-process-execution\": [\n {\n \"location\": \"s3fs-2023.9.2/s3fs/tests/derived/s3fs_fixtures.py:94\",\n \"code\": \" proc = subprocess.Popen(\\n shlex.split(\\\"moto_server s3 -p %s\\\" % port),\\n stderr=subprocess.DEVNULL,\\n stdout=subprocess.DEVNULL,\\n stdin=subprocess.DEVNULL,\\n )\",\n \"message\": \"This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null\"\n }\n ]\n },\n \"path\": \"/tmp/tmpqpe0p7fy/s3fs\"\n }\n}```"
|
1.0
|
s3fs 2023.9.2 has 1 GuardDog issues - https://pypi.org/project/s3fs
https://inspector.pypi.io/project/s3fs
```{
"dependency": "s3fs",
"version": "2023.9.2",
"result": {
"issues": 1,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "s3fs-2023.9.2/s3fs/tests/derived/s3fs_fixtures.py:94",
"code": " proc = subprocess.Popen(\n shlex.split(\"moto_server s3 -p %s\" % port),\n stderr=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stdin=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpqpe0p7fy/s3fs"
}
}```
|
process
|
has guarddog issues dependency version result issues errors results silent process execution location tests derived fixtures py code proc subprocess popen n shlex split moto server p s port n stderr subprocess devnull n stdout subprocess devnull n stdin subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp
| 1
|
16,301
| 9,358,894,616
|
IssuesEvent
|
2019-04-02 04:32:46
|
postmanlabs/postman-app-support
|
https://api.github.com/repos/postmanlabs/postman-app-support
|
closed
|
Postman very slow to return from endpoint calls and global variable updates
|
bug performance product/desktop-app
|
## App Details:
Postman for Windows
Version 5.0.0
win32 6.1.7601 / x64
## Issue Report:
1. Did you encounter this recently, or has this bug always been there: Postman has been very slow to respond to calls to endpoint apis, used to be very responsive
2. Expected behaviour: calling an endpoint api, or updating a global parameter should not hang for a while.
3. Console logs (http://blog.getpostman.com/2014/01/27/enabling-chrome-developer-tools-inside-postman/ for the Chrome App, View->Toggle Dev Tools for the Mac app):
4. Screenshots (if applicable)
|
True
|
Postman very slow to return from endpoint calls and global variable updates - ## App Details:
Postman for Windows
Version 5.0.0
win32 6.1.7601 / x64
## Issue Report:
1. Did you encounter this recently, or has this bug always been there: Postman has been very slow to respond to calls to endpoint apis, used to be very responsive
2. Expected behaviour: calling an endpoint api, or updating a global parameter should not hang for a while.
3. Console logs (http://blog.getpostman.com/2014/01/27/enabling-chrome-developer-tools-inside-postman/ for the Chrome App, View->Toggle Dev Tools for the Mac app):
4. Screenshots (if applicable)
|
non_process
|
postman very slow to return from endpoint calls and global variable updates app details postman for windows version issue report did you encounter this recently or has this bug always been there postman has been very slow to respond to calls to endpoint apis used to be very responsive expected behaviour calling an endpoint api or updating a global parameter should not hang for a while console logs for the chrome app view toggle dev tools for the mac app screenshots if applicable
| 0
|
10,137
| 13,044,162,434
|
IssuesEvent
|
2020-07-29 03:47:32
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `JsonArrayInsertSig` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `JsonArrayInsertSig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `JsonArrayInsertSig` from TiDB -
## Description
Port the scalar function `JsonArrayInsertSig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function jsonarrayinsertsig from tidb description port the scalar function jsonarrayinsertsig from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
32,352
| 13,795,073,582
|
IssuesEvent
|
2020-10-09 17:22:15
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Receiving error while running the ContosoAdsCloudService project
|
Pri2 assigned-to-author cloud-services/svc doc-bug triaged
|
Received the following error while trying to run the service project:
The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters. ContosoAdsCloudService C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\Microsoft\VisualStudio\v15.0\Windows Azure Tools\2.9\Microsoft.WindowsAzure.targets at line 1061
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8cc8f1b2-fa7a-6772-e534-9f91b37106e7
* Version Independent ID: e9309449-36ab-f570-8d04-121b1158a507
* Content: [Get started with Azure Cloud Services and ASP.NET](https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-dotnet-get-started)
* Content Source: [articles/cloud-services/cloud-services-dotnet-get-started.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cloud-services/cloud-services-dotnet-get-started.md)
* Service: **cloud-services**
* GitHub Login: @jpconnock
* Microsoft Alias: **jeconnoc**
|
1.0
|
Receiving error while running the ContosoAdsCloudService project - Received the following error while trying to run the service project:
The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters. ContosoAdsCloudService C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\Microsoft\VisualStudio\v15.0\Windows Azure Tools\2.9\Microsoft.WindowsAzure.targets at line 1061
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8cc8f1b2-fa7a-6772-e534-9f91b37106e7
* Version Independent ID: e9309449-36ab-f570-8d04-121b1158a507
* Content: [Get started with Azure Cloud Services and ASP.NET](https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-dotnet-get-started)
* Content Source: [articles/cloud-services/cloud-services-dotnet-get-started.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cloud-services/cloud-services-dotnet-get-started.md)
* Service: **cloud-services**
* GitHub Login: @jpconnock
* Microsoft Alias: **jeconnoc**
|
non_process
|
receiving error while running the contosoadscloudservice project received the following error while trying to run the service project the specified path file name or both are too long the fully qualified file name must be less than characters and the directory name must be less than characters contosoadscloudservice c program files microsoft visual studio community msbuild microsoft visualstudio windows azure tools microsoft windowsazure targets at line document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service cloud services github login jpconnock microsoft alias jeconnoc
| 0
|
19,410
| 5,870,945,799
|
IssuesEvent
|
2017-05-15 07:10:33
|
glua/gm_voxelate
|
https://api.github.com/repos/glua/gm_voxelate
|
opened
|
fix naming conventions
|
Discussion Shitcode Removal
|
The exposed C++<->Lua api is fairly consistent, but the pure Lua API is messy as fuck.
Here's how I'm doing it:
- local vars are simpleCaseCamelCased
- global vars are UpperCaseCamelCased, with the exception of `gm_voxelate`
- class methods are UpperCaseCamelCased
- avoid using snake_case because it looks weird af tbh
Thoughts @birdbrainswagtrain ?
|
1.0
|
fix naming conventions - The exposed C++<->Lua api is fairly consistent, but the pure Lua API is messy as fuck.
Here's how I'm doing it:
- local vars are simpleCaseCamelCased
- global vars are UpperCaseCamelCased, with the exception of `gm_voxelate`
- class methods are UpperCaseCamelCased
- avoid using snake_case because it looks weird af tbh
Thoughts @birdbrainswagtrain ?
|
non_process
|
fix naming conventions the exposed c lua api is fairly consistent but the pure lua api is messy as fuck here s how i m doing it local vars are simplecasecamelcased global vars are uppercasecamelcased with the exception of gm voxelate class methods are uppercasecamelcased avoid using snake case because it looks weird af tbh thoughts birdbrainswagtrain
| 0
|
34,562
| 14,442,171,680
|
IssuesEvent
|
2020-12-07 17:46:25
|
hashicorp/terraform-provider-aws
|
https://api.github.com/repos/hashicorp/terraform-provider-aws
|
closed
|
data.aws_iam_policy_document proposes unnecessary changes under 0.14
|
service/iam service/s3 thinking upstream-terraform
|
<!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
```
$ terraform -v
Terraform v0.14.0
+ provider registry.terraform.io/hashicorp/aws v3.20.0
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* data.aws_iam_policy_document
* aws_iam_policy
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```
resource "aws_s3_bucket" "b" {
bucket_prefix = "terraform-0-14-test-"
acl = "private"
# server_side_encryption_configuration {
# rule {
# apply_server_side_encryption_by_default {
# sse_algorithm = "aws:kms"
# }
# }
# }
}
data "aws_iam_policy_document" "b" {
statement {
effect = "Deny"
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.b.arn}/*"]
}
}
resource "aws_iam_policy" "policy" {
name = "terraform_014_test"
description = "My test policy"
policy = data.aws_iam_policy_document.b.json
}
```
### Expected Behavior
`terraform plan` should show diffs in this IAM policy only when the ARN of this bucket changes, and in `0.13.5` it works as expected.
### Actual Behavior
But in `0.14.0` plan diffs show that the IAM policy will be changed even when the ARN of the bucket does not change.
This will be easiest to explain in the `Steps to Reproduce` section.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. Use `0.13.5` and apply the above infrastructure, with S3 encryption commented out. It will apply as expected.
2. Uncomment the S3 encryption and run a `plan` using `0.13.5`, it will show that only one resource (the bucket) is going to change. Which is the expected behavior.
```
# aws_s3_bucket.b will be updated in-place
~ resource "aws_s3_bucket" "b" {
acl = "private"
arn = "arn:aws:s3:::terraform-0-14-test-20201204174955070900000001"
bucket = "terraform-0-14-test-20201204174955070900000001"
bucket_domain_name = "terraform-0-14-test-20201204174955070900000001.s3.amazonaws.com"
bucket_prefix = "terraform-0-14-test-"
bucket_regional_domain_name = "terraform-0-14-test-20201204174955070900000001.s3.amazonaws.com"
force_destroy = false
hosted_zone_id = "Z3AQBSTGFYJSTF"
id = "terraform-0-14-test-20201204174955070900000001"
region = "us-east-1"
request_payer = "BucketOwner"
tags = {}
+ server_side_encryption_configuration {
+ rule {
+ apply_server_side_encryption_by_default {
+ sse_algorithm = "aws:kms"
}
}
}
versioning {
enabled = false
mfa_delete = false
}
}
Plan: 0 to add, 1 to change, 0 to destroy.
```
3. Comment out the S3 encryption again and `terraform destroy`.
4. Install `0.14.0` and `init` to get set up.
5. Run an `apply` with the S3 encryption commented out, using `0.14.0`. It will apply as expected.
6. Uncomment the S3 encryption and run a `plan` using `0.14.0`. It will report that the IAM policy is going to change, even though the bucket is being updated in-place and the bucket ARN isn't going to change.
```
Terraform will perform the following actions:
# data.aws_iam_policy_document.b will be read during apply
# (config refers to values not yet known)
<= data "aws_iam_policy_document" "b" {
~ id = "900537039" -> (known after apply)
~ json = jsonencode(
{
- Statement = [
- {
- Action = "s3:GetObject"
- Effect = "Deny"
- Resource = "arn:aws:s3:::terraform-0-14-test-20201204175506660300000001/*"
- Sid = ""
},
]
- Version = "2012-10-17"
}
) -> (known after apply)
- version = "2012-10-17" -> null
~ statement {
- not_actions = [] -> null
- not_resources = [] -> null
# (3 unchanged attributes hidden)
}
}
# aws_iam_policy.policy will be updated in-place
~ resource "aws_iam_policy" "policy" {
id = "arn:aws:iam::1234567890:policy/terraform_014_test"
name = "terraform_014_test"
~ policy = jsonencode(
{
- Statement = [
- {
- Action = "s3:GetObject"
- Effect = "Deny"
- Resource = "arn:aws:s3:::terraform-0-14-test-20201204175506660300000001/*"
- Sid = ""
},
]
- Version = "2012-10-17"
}
) -> (known after apply)
# (3 unchanged attributes hidden)
}
# aws_s3_bucket.b will be updated in-place
~ resource "aws_s3_bucket" "b" {
id = "terraform-0-14-test-20201204175506660300000001"
tags = {}
# (10 unchanged attributes hidden)
+ server_side_encryption_configuration {
+ rule {
+ apply_server_side_encryption_by_default {
+ sse_algorithm = "aws:kms"
}
}
}
# (1 unchanged block hidden)
}
Plan: 0 to add, 2 to change, 0 to destroy.
```
The bucket will be updated in-place and it's ARN isn't going to change. So `0.14.0` should report no change to the IAM policy, the same way that `0.13.5` reports no change to the IAM policy.
### Other factoids
This behavior isn't specific to S3 either. I was able to reproduce the same thing with an in-place change to a `aws_codepipeline` resource, for example, and `data.aws_iam_policy_document` behaved similarly to the above when it referenced the ARN of that pipeline. The S3 case is just the simplest way to reproduce this with the least amount of moving parts.
Since the behavior that I'm seeing differs between 0.13 and 0.14, perhaps this is a problem with Terraform core instead of with the AWS provider. I'm happy to move this issue over there if that's more appropriate. But since Terraform core has no direct concept of an ARN or if/when an ARN should/shouldn't change, I figured I would start here...
|
2.0
|
data.aws_iam_policy_document proposes unnecessary changes under 0.14 - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
```
$ terraform -v
Terraform v0.14.0
+ provider registry.terraform.io/hashicorp/aws v3.20.0
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* data.aws_iam_policy_document
* aws_iam_policy
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```
resource "aws_s3_bucket" "b" {
bucket_prefix = "terraform-0-14-test-"
acl = "private"
# server_side_encryption_configuration {
# rule {
# apply_server_side_encryption_by_default {
# sse_algorithm = "aws:kms"
# }
# }
# }
}
data "aws_iam_policy_document" "b" {
statement {
effect = "Deny"
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.b.arn}/*"]
}
}
resource "aws_iam_policy" "policy" {
name = "terraform_014_test"
description = "My test policy"
policy = data.aws_iam_policy_document.b.json
}
```
### Expected Behavior
`terraform plan` should show diffs in this IAM policy only when the ARN of this bucket changes, and in `0.13.5` it works as expected.
### Actual Behavior
But in `0.14.0` plan diffs show that the IAM policy will be changed even when the ARN of the bucket does not change.
This will be easiest to explain in the `Steps to Reproduce` section.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. Use `0.13.5` and apply the above infrastructure, with S3 encryption commented out. It will apply as expected.
2. Uncomment the S3 encryption and run a `plan` using `0.13.5`, it will show that only one resource (the bucket) is going to change. Which is the expected behavior.
```
# aws_s3_bucket.b will be updated in-place
~ resource "aws_s3_bucket" "b" {
acl = "private"
arn = "arn:aws:s3:::terraform-0-14-test-20201204174955070900000001"
bucket = "terraform-0-14-test-20201204174955070900000001"
bucket_domain_name = "terraform-0-14-test-20201204174955070900000001.s3.amazonaws.com"
bucket_prefix = "terraform-0-14-test-"
bucket_regional_domain_name = "terraform-0-14-test-20201204174955070900000001.s3.amazonaws.com"
force_destroy = false
hosted_zone_id = "Z3AQBSTGFYJSTF"
id = "terraform-0-14-test-20201204174955070900000001"
region = "us-east-1"
request_payer = "BucketOwner"
tags = {}
+ server_side_encryption_configuration {
+ rule {
+ apply_server_side_encryption_by_default {
+ sse_algorithm = "aws:kms"
}
}
}
versioning {
enabled = false
mfa_delete = false
}
}
Plan: 0 to add, 1 to change, 0 to destroy.
```
3. Comment out the S3 encryption again and `terraform destroy`.
4. Install `0.14.0` and `init` to get set up.
5. Run an `apply` with the S3 encryption commented out, using `0.14.0`. It will apply as expected.
6. Uncomment the S3 encryption and run a `plan` using `0.14.0`. It will report that the IAM policy is going to change, even though the bucket is being updated in-place and the bucket ARN isn't going to change.
```
Terraform will perform the following actions:
# data.aws_iam_policy_document.b will be read during apply
# (config refers to values not yet known)
<= data "aws_iam_policy_document" "b" {
~ id = "900537039" -> (known after apply)
~ json = jsonencode(
{
- Statement = [
- {
- Action = "s3:GetObject"
- Effect = "Deny"
- Resource = "arn:aws:s3:::terraform-0-14-test-20201204175506660300000001/*"
- Sid = ""
},
]
- Version = "2012-10-17"
}
) -> (known after apply)
- version = "2012-10-17" -> null
~ statement {
- not_actions = [] -> null
- not_resources = [] -> null
# (3 unchanged attributes hidden)
}
}
# aws_iam_policy.policy will be updated in-place
~ resource "aws_iam_policy" "policy" {
id = "arn:aws:iam::1234567890:policy/terraform_014_test"
name = "terraform_014_test"
~ policy = jsonencode(
{
- Statement = [
- {
- Action = "s3:GetObject"
- Effect = "Deny"
- Resource = "arn:aws:s3:::terraform-0-14-test-20201204175506660300000001/*"
- Sid = ""
},
]
- Version = "2012-10-17"
}
) -> (known after apply)
# (3 unchanged attributes hidden)
}
# aws_s3_bucket.b will be updated in-place
~ resource "aws_s3_bucket" "b" {
id = "terraform-0-14-test-20201204175506660300000001"
tags = {}
# (10 unchanged attributes hidden)
+ server_side_encryption_configuration {
+ rule {
+ apply_server_side_encryption_by_default {
+ sse_algorithm = "aws:kms"
}
}
}
# (1 unchanged block hidden)
}
Plan: 0 to add, 2 to change, 0 to destroy.
```
The bucket will be updated in-place and its ARN isn't going to change. So `0.14.0` should report no change to the IAM policy, the same way that `0.13.5` reports no change to the IAM policy.
### Other factoids
This behavior isn't specific to S3 either. I was able to reproduce the same thing with an in-place change to a `aws_codepipeline` resource, for example, and `data.aws_iam_policy_document` behaved similarly to the above when it referenced the ARN of that pipeline. The S3 case is just the simplest way to reproduce this with the least amount of moving parts.
Since the behavior that I'm seeing differs between 0.13 and 0.14, perhaps this is a problem with Terraform core instead of with the AWS provider. I'm happy to move this issue over there if that's more appropriate. But since Terraform core has no direct concept of an ARN or if/when an ARN should/shouldn't change, I figured I would start here...
|
non_process
|
data aws iam policy document proposes unnecessary changes under please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform cli and terraform aws provider version terraform v terraform provider registry terraform io hashicorp aws affected resource s data aws iam policy document aws iam policy terraform configuration files resource aws bucket b bucket prefix terraform test acl private server side encryption configuration rule apply server side encryption by default sse algorithm aws kms data aws iam policy document b statement effect deny actions resources resource aws iam policy policy name terraform test description my test policy policy data aws iam policy document b json expected behavior terraform plan should show diffs in this iam policy only when the arn of this bucket changes and in it works as expected actual behavior but in plan diffs show that the iam policy will be changed even when the arn of the bucket does not change this will be easiest to explain in the steps to reproduce section steps to reproduce use and apply the above infrastructure with encryption commented out it will apply as expected uncomment the encryption and run a plan using it will show that only one resource the bucket is going to change which is the expected behavior aws bucket b will be updated in place resource aws bucket b acl private arn arn aws 
terraform test bucket terraform test bucket domain name terraform test amazonaws com bucket prefix terraform test bucket regional domain name terraform test amazonaws com force destroy false hosted zone id id terraform test region us east request payer bucketowner tags server side encryption configuration rule apply server side encryption by default sse algorithm aws kms versioning enabled false mfa delete false plan to add to change to destroy comment out the encryption again and terraform destroy install and init to get set up run an apply with the encryption commented out using it will apply as expected uncomment the encryption and run a plan using it will report that the iam policy is going to change even though the bucket is being updated in place and the bucket arn isn t going to change terraform will perform the following actions data aws iam policy document b will be read during apply config refers to values not yet known data aws iam policy document b id known after apply json jsonencode statement action getobject effect deny resource arn aws terraform test sid version known after apply version null statement not actions null not resources null unchanged attributes hidden aws iam policy policy will be updated in place resource aws iam policy policy id arn aws iam policy terraform test name terraform test policy jsonencode statement action getobject effect deny resource arn aws terraform test sid version known after apply unchanged attributes hidden aws bucket b will be updated in place resource aws bucket b id terraform test tags unchanged attributes hidden server side encryption configuration rule apply server side encryption by default sse algorithm aws kms unchanged block hidden plan to add to change to destroy the bucket will be updated in place and it s arn isn t going to change so should report no change to the iam policy the same way that reports no change to the iam policy other factoids this behavior isn t specific to either i was able to 
reproduce the same thing with an in place change to a aws codepipeline resource for example and data aws iam policy document behaved similarly to the above when it referenced the arn of that pipeline the case is just the simplest way to reproduce this with the least amount of moving parts since the behavior that i m seeing differs between and perhaps this is a problem with terraform core instead of with the aws provider i m happy to move this issue over there if that s more appropriate but since terraform core has no direct concept of an arn or if when an arn should shouldn t change i figured i would start here
| 0
|
321,730
| 27,550,237,292
|
IssuesEvent
|
2023-03-07 14:30:29
|
hazelcast/hazelcast-cpp-client
|
https://api.github.com/repos/hazelcast/hazelcast-cpp-client
|
opened
|
BasicStoreTest/NearCacheRecordStoreTest.destroyStore Seg Fault
|
Type: Test-Failure to-jira
|
C++ compiler version: gcc 7.5
Hazelcast Cpp client version: 5.1.0
Hazelcast server version: 5.2.0
Number of the clients: 1
Cluster size, i.e. the number of Hazelcast cluster members: 1
OS version (Windows/Linux/OSX): Ubuntu i386
Please attach relevant logs and files for client and server side.
#### Expected behaviour
Test pass successfully
#### Actual behaviour
Test failure
#### Steps to reproduce the behaviour.
The error occurs randomly at Github actions. The stack trace is attached in log file.
Here is the link. [https://github.com/hazelcast/hazelcast-cpp-client/actions/runs/4342850867/jobs/7584172778](url)
[error_logs.txt](https://github.com/hazelcast/hazelcast-cpp-client/files/10910386/error_logs.txt)
|
1.0
|
BasicStoreTest/NearCacheRecordStoreTest.destroyStore Seg Fault - C++ compiler version: gcc 7.5
Hazelcast Cpp client version: 5.1.0
Hazelcast server version: 5.2.0
Number of the clients: 1
Cluster size, i.e. the number of Hazelcast cluster members: 1
OS version (Windows/Linux/OSX): Ubuntu i386
Please attach relevant logs and files for client and server side.
#### Expected behaviour
Test pass successfully
#### Actual behaviour
Test failure
#### Steps to reproduce the behaviour.
The error occurs randomly at Github actions. The stack trace is attached in log file.
Here is the link. [https://github.com/hazelcast/hazelcast-cpp-client/actions/runs/4342850867/jobs/7584172778](url)
[error_logs.txt](https://github.com/hazelcast/hazelcast-cpp-client/files/10910386/error_logs.txt)
|
non_process
|
basicstoretest nearcacherecordstoretest destroystore seg fault c compiler version gcc hazelcast cpp client version hazelcast server version number of the clients cluster size i e the number of hazelcast cluster members os version windows linux osx ubuntu please attach relevant logs and files for client and server side expected behaviour test pass successfully actual behaviour test failure steps to reproduce the behaviour the error occurs randomly at github actions the stack trace is attached in log file here is the link url
| 0
|
21,375
| 2,639,706,265
|
IssuesEvent
|
2015-03-11 05:12:42
|
cs2103jan2015-w15-2j/main
|
https://api.github.com/repos/cs2103jan2015-w15-2j/main
|
closed
|
Logic should be able to determine validity of command functions with parameters passed to it
|
priority.high type.task
|
Check validity using the list of supported operation cases discussed before.
|
1.0
|
Logic should be able to determine validity of command functions with parameters passed to it - Check validity using the list of supported operation cases discussed before.
|
non_process
|
logic should be able to determine validity of command functions with parameters passed to it check validity using the list of supported operation cases discussed before
| 0
|
71,222
| 30,828,866,285
|
IssuesEvent
|
2023-08-01 22:51:45
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Update DTS Portal with HR New Hires - August 2023
|
Workgroup: HR Type: IT Support Service: Apps Product: Data & Technology Services Portal
|
Check HR new hires for previous month:
https://atd.knack.com/hr#manage-accounts/
Add all **regular** employees to DTS Portal as Viewer
- Go to DTS Portal Builder side > Accounts
- Check to make sure the user isn't already in the system
- Add new Account record, making sure the email is correct
- Add password (this won't be used since the system uses SSO but Knack requires a password nonetheless for account creation)
- Enter division information, checking from Teams or Delve if needed
- Set user role as Viewer
- Save/create account
- List employee first name and initial in your comment to this issue.
|
2.0
|
Update DTS Portal with HR New Hires - August 2023 - Check HR new hires for previous month:
https://atd.knack.com/hr#manage-accounts/
Add all **regular** employees to DTS Portal as Viewer
- Go to DTS Portal Builder side > Accounts
- Check to make sure the user isn't already in the system
- Add new Account record, making sure the email is correct
- Add password (this won't be used since the system uses SSO but Knack requires a password nonetheless for account creation)
- Enter division information, checking from Teams or Delve if needed
- Set user role as Viewer
- Save/create account
- List employee first name and initial in your comment to this issue.
|
non_process
|
update dts portal with hr new hires august check hr new hires for previous month add all regular employees to dts portal as viewer go to dts portal builder side accounts check to make sure the user isn t already in the system add new account record making sure the email is correct add password this won t be used since the system uses sso but knack requires a password nonetheless for account creation enter division information checking from teams or delve if needed set user role as viewer save create account list employee first name and initial in your comment to this issue
| 0
|
8,244
| 11,420,645,692
|
IssuesEvent
|
2020-02-03 10:30:55
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
NTR: effector mediated suppression of plant PTI signalling
|
New term request high priority multi-species process quick fix
|
NTR: effector mediated suppression of plant PTI signalling
broad synonym effector-mediated suppression of host pattern recognition receptor signalling
effector-mediated suppression of host PRR signalling
broad effector-mediated suppression of host PRR signaling
parent
GO:0140403 effector-mediated suppression of host innate immune response by symbiont
the PTI synonym will need to move down
That's the last one I think...
|
1.0
|
NTR: effector mediated suppression of plant PTI signalling - NTR: effector mediated suppression of plant PTI signalling
broad synonym effector-mediated suppression of host pattern recognition receptor signalling
effector-mediated suppression of host PRR signalling
broad effector-mediated suppression of host PRR signaling
parent
GO:0140403 effector-mediated suppression of host innate immune response by symbiont
the PTI synonym will need to move down
That's the last one I think...
|
process
|
ntr effector mediated suppression of plant pti signalling ntr effector mediated suppression of plant pti signalling broad synonym effector mediated suppression of host pattern recognition receptor signalling effector mediated suppression of host prr signalling
broad effector mediated suppression of host prr signaling parent go effector mediated suppression of host innate immune response by symbiont the pti synonym will need to move down that s the last one i think
| 1
|
61,063
| 14,601,513,137
|
IssuesEvent
|
2020-12-21 08:48:35
|
M157q/m157q.github.io
|
https://api.github.com/repos/M157q/m157q.github.io
|
opened
|
Hackers turn a computer's memory into temporary Wi-Fi, successfully exfiltrating data from an air-gapped environment | TechNews 科技新報
|
security
|
<https://technews.tw/2020/12/17/hackers-turn-computer-memory-into-temporary-wi-fi/?utm_source=fb_tn&utm_medium=facebook&fbclid=IwAR3vv6FnPGOyfp5G4OVVEGvVPDfq-zkL0XwB3S-EwdBlwxJvPoEuLjFgRmI>
> "(AIR-FI) requires only an ordinary user; anyone who can reach the machine can launch it. The attack works on any operating system, and can even be carried out from inside a virtual machine (VM)."
Amazing.
|
True
|
Hackers turn a computer's memory into temporary Wi-Fi, successfully exfiltrating data from an air-gapped environment | TechNews 科技新報 - <https://technews.tw/2020/12/17/hackers-turn-computer-memory-into-temporary-wi-fi/?utm_source=fb_tn&utm_medium=facebook&fbclid=IwAR3vv6FnPGOyfp5G4OVVEGvVPDfq-zkL0XwB3S-EwdBlwxJvPoEuLjFgRmI>
> "(AIR-FI) requires only an ordinary user; anyone who can reach the machine can launch it. The attack works on any operating system, and can even be carried out from inside a virtual machine (VM)."
Amazing.
|
non_process
|
hackers turn a computer s memory into temporary wi fi successfully exfiltrating data from an air gapped environment technews 科技新報 air fi requires only an ordinary user anyone who can reach the machine can launch it the attack works on any operating system and can even be carried out from inside a virtual machine vm amazing
| 0
|
13,912
| 8,405,333,265
|
IssuesEvent
|
2018-10-11 15:01:01
|
Alexander-Miller/treemacs
|
https://api.github.com/repos/Alexander-Miller/treemacs
|
reopened
|
Treemacs hungs on large projects
|
Bug Feature:Git Integration Performance
|
Each time I try to open a project that uses golang and vendoring for dependencies (which means really large projects) then treemacs hungs.
An example project is https://github.com/fabric8io/kubernetes-model.
I tried creating a predicate for ignoring the vendor folder (if there is a glide.yml file in the same dir), but it doesn't seem to work.
(defun golang-vendor-p (name path)
(let ((vendor (and (equal "vendor" name) (file-exists-p (format "%s/glide.yml" (file-name-directory path))))))
(message (format "%s - %s" path vendor))
vendor)
)
(setq treemacs-ignored-file-predicates '(golang-vendor-p))
I see in the Messages buffer that the predicate is called for for some of the files but then hugs (without displaying anything the vendor folder).
|
True
|
Treemacs hungs on large projects - Each time I try to open a project that uses golang and vendoring for dependencies (which means really large projects) then treemacs hungs.
An example project is https://github.com/fabric8io/kubernetes-model.
I tried creating a predicate for ignoring the vendor folder (if there is a glide.yml file in the same dir), but it doesn't seem to work.
(defun golang-vendor-p (name path)
(let ((vendor (and (equal "vendor" name) (file-exists-p (format "%s/glide.yml" (file-name-directory path))))))
(message (format "%s - %s" path vendor))
vendor)
)
(setq treemacs-ignored-file-predicates '(golang-vendor-p))
I see in the Messages buffer that the predicate is called for for some of the files but then hugs (without displaying anything the vendor folder).
|
non_process
|
treemacs hungs on large projects each time i try to open a project that uses golang and vendoring for dependencies which means really large projects then treemacs hungs an example project is i tried creating a predicate for ignoring the vendor folder if there is a glide yml file in the same dir but it doesn t seem to work defun golang vendor p name path let vendor and equal vendor name file exists p format s glide yml file name directory path message format s s path vendor vendor setq treemacs ignored file predicates golang vendor p i see in the messages buffer that the predicate is called for for some of the files but then hugs without displaying anything the vendor folder
| 0
|
251,422
| 27,162,461,857
|
IssuesEvent
|
2023-02-17 13:00:47
|
Azure/AKS
|
https://api.github.com/repos/Azure/AKS
|
closed
|
CVE-2022-3162: Unauthorized read of Custom Resources
|
security announcement stale
|
https://github.com/kubernetes/kubernetes/issues/113756
A security issue was discovered in Kubernetes where users authorized to list or watch one type of namespaced custom resource cluster-wide can read custom resources of a different type in the same API group without authorization.
This issue has been rated Medium ([CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.first.org%2Fcvss%2Fcalculator%2F3.0%23CVSS%3A3.0%2FAV%3AN%2FAC%3AL%2FPR%3AL%2FUI%3AN%2FS%3AU%2FC%3AH%2FI%3AN%2FA%3AN&data=05%7C01%7CMichael.Withrow%40microsoft.com%7Cdf82e23a479b43e7378a08dac33ef9ca%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638036972527321648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=M2zKwp1vYArF1gHMplKXu%2B5jQixqRryKKIdxEfz6V24%3D&reserved=0)), and assigned CVE-2022-3162
### Am I vulnerable?
Clusters are impacted by this vulnerability if all of the following are true:
1. There are 2+ CustomResourceDefinitions sharing the same API group
2. Users have cluster-wide list or watch authorization on one of those custom resources.
3. The same users are not authorized to read another custom resource in the same API group.
### Affected Versions
• Kubernetes kube-apiserver <= v1.25.3
• Kubernetes kube-apiserver <= v1.24.7
• Kubernetes kube-apiserver <= v1.23.13
• Kubernetes kube-apiserver <= v1.22.15
### AKS Information:
AKS has hotfixed the below AKS versions:
AKS 1.22.11
AKS 1.22.15
AKS 1.23.8
AKS 1.23.12
AKS 1.24.3
AKS 1.24.6
AKS 1.25.2
If you want to be mitigated from this CVE it is recommended that you move to one of the supported AKS versions above.
|
True
|
CVE-2022-3162: Unauthorized read of Custom Resources - https://github.com/kubernetes/kubernetes/issues/113756
A security issue was discovered in Kubernetes where users authorized to list or watch one type of namespaced custom resource cluster-wide can read custom resources of a different type in the same API group without authorization.
This issue has been rated Medium ([CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.first.org%2Fcvss%2Fcalculator%2F3.0%23CVSS%3A3.0%2FAV%3AN%2FAC%3AL%2FPR%3AL%2FUI%3AN%2FS%3AU%2FC%3AH%2FI%3AN%2FA%3AN&data=05%7C01%7CMichael.Withrow%40microsoft.com%7Cdf82e23a479b43e7378a08dac33ef9ca%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638036972527321648%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=M2zKwp1vYArF1gHMplKXu%2B5jQixqRryKKIdxEfz6V24%3D&reserved=0)), and assigned CVE-2022-3162
### Am I vulnerable?
Clusters are impacted by this vulnerability if all of the following are true:
1. There are 2+ CustomResourceDefinitions sharing the same API group
2. Users have cluster-wide list or watch authorization on one of those custom resources.
3. The same users are not authorized to read another custom resource in the same API group.
### Affected Versions
• Kubernetes kube-apiserver <= v1.25.3
• Kubernetes kube-apiserver <= v1.24.7
• Kubernetes kube-apiserver <= v1.23.13
• Kubernetes kube-apiserver <= v1.22.15
### AKS Information:
AKS has hotfixed the below AKS versions:
AKS 1.22.11
AKS 1.22.15
AKS 1.23.8
AKS 1.23.12
AKS 1.24.3
AKS 1.24.6
AKS 1.25.2
If you want to be mitigated from this CVE it is recommended that you move to one of the supported AKS versions above.
|
non_process
|
cve unauthorized read of custom resources a security issue was discovered in kubernetes where users authorized to list or watch one type of namespaced custom resource cluster wide can read custom resources of a different type in the same api group without authorization this issue has been rated medium and assigned cve am i vulnerable clusters are impacted by this vulnerability if all of the following are true there are customresourcedefinitions sharing the same api group users have cluster wide list or watch authorization on one of those custom resources the same users are not authorized to read another custom resource in the same api group affected versions • kubernetes kube apiserver • kubernetes kube apiserver • kubernetes kube apiserver • kubernetes kube apiserver aks information aks has hotfixed the below aks versions aks aks aks aks aks aks aks if you want to be mitigated from this cve it is recommended that you move to one of the supported aks versions above
| 0
|
114,804
| 11,856,913,778
|
IssuesEvent
|
2020-03-25 08:34:24
|
fatin99/alpha
|
https://api.github.com/repos/fatin99/alpha
|
opened
|
Test
|
severity.High type.DocumentationBug
|
Test
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.

|
1.0
|
Test - Test
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.

|
non_process
|
test test lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod tempor incididunt ut labore et dolore magna aliqua
| 0
|
22,505
| 31,558,911,323
|
IssuesEvent
|
2023-09-03 02:00:08
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Fri, 1 Sep 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Two-Stage Violence Detection Using ViTPose and Classification Models at Smart Airports
- **Authors:** İrem Üstek, Jay Desai, Iván López Torrecillas, Sofiane Abadou, Jinjie Wang, Quentin Fever, Sandhya Rani Kasthuri, Yang Xing, Weisi Guo, Antonios Tsourdos
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.16325
- **Pdf link:** https://arxiv.org/pdf/2308.16325
- **Abstract**
This study introduces an innovative violence detection framework tailored to the unique requirements of smart airports, where prompt responses to violent situations are crucial. The proposed framework harnesses the power of ViTPose for human pose estimation. It employs a CNN - BiLSTM network to analyse spatial and temporal information within keypoints sequences, enabling the accurate classification of violent behaviour in real time. Seamlessly integrated within the SAFE (Situational Awareness for Enhanced Security framework of SAAB, the solution underwent integrated testing to ensure robust performance in real world scenarios. The AIRTLab dataset, characterized by its high video quality and relevance to surveillance scenarios, is utilized in this study to enhance the model's accuracy and mitigate false positives. As airports face increased foot traffic in the post pandemic era, implementing AI driven violence detection systems, such as the one proposed, is paramount for improving security, expediting response times, and promoting data informed decision making. The implementation of this framework not only diminishes the probability of violent events but also assists surveillance teams in effectively addressing potential threats, ultimately fostering a more secure and protected aviation sector. Codes are available at: https://github.com/Asami-1/GDP.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Prompt-enhanced Hierarchical Transformer Elevating Cardiopulmonary Resuscitation Instruction via Temporal Action Segmentation
- **Authors:** Yang Liu, Xiaoyun Zhong, Shiyao Zhai, Zhicheng Du, Zhenyuan Gao, Qiming Huang, Canyang Zhang, Bin Jiang, Vijay Kumar Pandey, Sanyang Han, Runming Wang, Yuxing Han, Peiwu Qin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.16552
- **Pdf link:** https://arxiv.org/pdf/2308.16552
- **Abstract**
The vast majority of people who suffer unexpected cardiac arrest are performed cardiopulmonary resuscitation (CPR) by passersby in a desperate attempt to restore life, but endeavors turn out to be fruitless on account of disqualification. Fortunately, many pieces of research manifest that disciplined training will help to elevate the success rate of resuscitation, which constantly desires a seamless combination of novel techniques to yield further advancement. To this end, we collect a custom CPR video dataset in which trainees make efforts to behave resuscitation on mannequins independently in adherence to approved guidelines, thereby devising an auxiliary toolbox to assist supervision and rectification of intermediate potential issues via modern deep learning methodologies. Our research empirically views this problem as a temporal action segmentation (TAS) task in computer vision, which aims to segment an untrimmed video at a frame-wise level. Here, we propose a Prompt-enhanced hierarchical Transformer (PhiTrans) that integrates three indispensable modules, including a textual prompt-based Video Features Extractor (VFE), a transformer-based Action Segmentation Executor (ASE), and a regression-based Prediction Refinement Calibrator (PRC). The backbone of the model preferentially derives from applications in three approved public datasets (GTEA, 50Salads, and Breakfast) collected for TAS tasks, which accounts for the excavation of the segmentation pipeline on the CPR dataset. In general, we unprecedentedly probe into a feasible pipeline that genuinely elevates the CPR instruction qualification via action segmentation in conjunction with cutting-edge deep learning techniques. Associated experiments advocate our implementation with multiple metrics surpassing 91.0%.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Holistic Processing of Colour Images Using Novel Quaternion-Valued Wavelets on the Plane
- **Authors:** Neil D. Dizon, Jeffrey A. Hogan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Numerical Analysis (math.NA)
- **Arxiv link:** https://arxiv.org/abs/2308.16875
- **Pdf link:** https://arxiv.org/pdf/2308.16875
- **Abstract**
We investigate the applicability of quaternion-valued wavelets on the plane to holistic colour image processing. We present a methodology for decomposing and reconstructing colour images using quaternionic wavelet filters associated to recently developed quaternion-valued wavelets on the plane. We consider compression, enhancement, segmentation, and denoising techniques to demonstrate quaternion-valued wavelets as a promising tool for holistic colour image processing.
## Keyword: RAW
### Unsupervised Recognition of Unknown Objects for Open-World Object Detection
- **Authors:** Ruohuan Fang, Guansong Pang, Lei Zhou, Xiao Bai, Jin Zheng
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.16527
- **Pdf link:** https://arxiv.org/pdf/2308.16527
- **Abstract**
Open-World Object Detection (OWOD) extends object detection problem to a realistic and dynamic scenario, where a detection model is required to be capable of detecting both known and unknown objects and incrementally learning newly introduced knowledge. Current OWOD models, such as ORE and OW-DETR, focus on pseudo-labeling regions with high objectness scores as unknowns, whose performance relies heavily on the supervision of known objects. While they can detect the unknowns that exhibit similar features to the known objects, they suffer from a severe label bias problem that they tend to detect all regions (including unknown object regions) that are dissimilar to the known objects as part of the background. To eliminate the label bias, this paper proposes a novel approach that learns an unsupervised discriminative model to recognize true unknown objects from raw pseudo labels generated by unsupervised region proposal methods. The resulting model can be further refined by a classification-free self-training method which iteratively extends pseudo unknown objects to the unlabeled regions. Experimental results show that our method 1) significantly outperforms the prior SOTA in detecting unknown objects while maintaining competitive performance of detecting known object classes on the MS COCO dataset, and 2) achieves better generalization ability on the LVIS and Objects365 datasets.
### SportsSloMo: A New Benchmark and Baselines for Human-centric Video Frame Interpolation
- **Authors:** Jiaben Chen, Huaizu Jiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.16876
- **Pdf link:** https://arxiv.org/pdf/2308.16876
- **Abstract**
Human-centric video frame interpolation has great potential for improving people's entertainment experiences and finding commercial applications in the sports analysis industry, e.g., synthesizing slow-motion videos. Although there are multiple benchmark datasets available in the community, none of them is dedicated for human-centric scenarios. To bridge this gap, we introduce SportsSloMo, a benchmark consisting of more than 130K video clips and 1M video frames of high-resolution ($\geq$720p) slow-motion sports videos crawled from YouTube. We re-train several state-of-the-art methods on our benchmark, and the results show a decrease in their accuracy compared to other datasets. It highlights the difficulty of our benchmark and suggests that it poses significant challenges even for the best-performing methods, as human bodies are highly deformable and occlusions are frequent in sports videos. To improve the accuracy, we introduce two loss terms considering the human-aware priors, where we add auxiliary supervision to panoptic segmentation and human keypoints detection, respectively. The loss terms are model agnostic and can be easily plugged into any video frame interpolation approaches. Experimental results validate the effectiveness of our proposed loss terms, leading to consistent performance improvement over 5 existing models, which establish strong baseline models on our benchmark. The dataset and code can be found at: https://neu-vi.github.io/SportsSlomo/.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Fri, 1 Sep 23 - ## Keyword: events
### Two-Stage Violence Detection Using ViTPose and Classification Models at Smart Airports
- **Authors:** İrem Üstek, Jay Desai, Iván López Torrecillas, Sofiane Abadou, Jinjie Wang, Quentin Fever, Sandhya Rani Kasthuri, Yang Xing, Weisi Guo, Antonios Tsourdos
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.16325
- **Pdf link:** https://arxiv.org/pdf/2308.16325
- **Abstract**
This study introduces an innovative violence detection framework tailored to the unique requirements of smart airports, where prompt responses to violent situations are crucial. The proposed framework harnesses the power of ViTPose for human pose estimation. It employs a CNN - BiLSTM network to analyse spatial and temporal information within keypoints sequences, enabling the accurate classification of violent behaviour in real time. Seamlessly integrated within the SAFE (Situational Awareness for Enhanced Security) framework of SAAB, the solution underwent integrated testing to ensure robust performance in real world scenarios. The AIRTLab dataset, characterized by its high video quality and relevance to surveillance scenarios, is utilized in this study to enhance the model's accuracy and mitigate false positives. As airports face increased foot traffic in the post pandemic era, implementing AI driven violence detection systems, such as the one proposed, is paramount for improving security, expediting response times, and promoting data informed decision making. The implementation of this framework not only diminishes the probability of violent events but also assists surveillance teams in effectively addressing potential threats, ultimately fostering a more secure and protected aviation sector. Codes are available at: https://github.com/Asami-1/GDP.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Prompt-enhanced Hierarchical Transformer Elevating Cardiopulmonary Resuscitation Instruction via Temporal Action Segmentation
- **Authors:** Yang Liu, Xiaoyun Zhong, Shiyao Zhai, Zhicheng Du, Zhenyuan Gao, Qiming Huang, Canyang Zhang, Bin Jiang, Vijay Kumar Pandey, Sanyang Han, Runming Wang, Yuxing Han, Peiwu Qin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.16552
- **Pdf link:** https://arxiv.org/pdf/2308.16552
- **Abstract**
The vast majority of people who suffer unexpected cardiac arrest are performed cardiopulmonary resuscitation (CPR) by passersby in a desperate attempt to restore life, but endeavors turn out to be fruitless on account of disqualification. Fortunately, many pieces of research manifest that disciplined training will help to elevate the success rate of resuscitation, which constantly desires a seamless combination of novel techniques to yield further advancement. To this end, we collect a custom CPR video dataset in which trainees make efforts to behave resuscitation on mannequins independently in adherence to approved guidelines, thereby devising an auxiliary toolbox to assist supervision and rectification of intermediate potential issues via modern deep learning methodologies. Our research empirically views this problem as a temporal action segmentation (TAS) task in computer vision, which aims to segment an untrimmed video at a frame-wise level. Here, we propose a Prompt-enhanced hierarchical Transformer (PhiTrans) that integrates three indispensable modules, including a textual prompt-based Video Features Extractor (VFE), a transformer-based Action Segmentation Executor (ASE), and a regression-based Prediction Refinement Calibrator (PRC). The backbone of the model preferentially derives from applications in three approved public datasets (GTEA, 50Salads, and Breakfast) collected for TAS tasks, which accounts for the excavation of the segmentation pipeline on the CPR dataset. In general, we unprecedentedly probe into a feasible pipeline that genuinely elevates the CPR instruction qualification via action segmentation in conjunction with cutting-edge deep learning techniques. Associated experiments advocate our implementation with multiple metrics surpassing 91.0%.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Holistic Processing of Colour Images Using Novel Quaternion-Valued Wavelets on the Plane
- **Authors:** Neil D. Dizon, Jeffrey A. Hogan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Numerical Analysis (math.NA)
- **Arxiv link:** https://arxiv.org/abs/2308.16875
- **Pdf link:** https://arxiv.org/pdf/2308.16875
- **Abstract**
We investigate the applicability of quaternion-valued wavelets on the plane to holistic colour image processing. We present a methodology for decomposing and reconstructing colour images using quaternionic wavelet filters associated to recently developed quaternion-valued wavelets on the plane. We consider compression, enhancement, segmentation, and denoising techniques to demonstrate quaternion-valued wavelets as a promising tool for holistic colour image processing.
## Keyword: RAW
### Unsupervised Recognition of Unknown Objects for Open-World Object Detection
- **Authors:** Ruohuan Fang, Guansong Pang, Lei Zhou, Xiao Bai, Jin Zheng
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.16527
- **Pdf link:** https://arxiv.org/pdf/2308.16527
- **Abstract**
Open-World Object Detection (OWOD) extends object detection problem to a realistic and dynamic scenario, where a detection model is required to be capable of detecting both known and unknown objects and incrementally learning newly introduced knowledge. Current OWOD models, such as ORE and OW-DETR, focus on pseudo-labeling regions with high objectness scores as unknowns, whose performance relies heavily on the supervision of known objects. While they can detect the unknowns that exhibit similar features to the known objects, they suffer from a severe label bias problem that they tend to detect all regions (including unknown object regions) that are dissimilar to the known objects as part of the background. To eliminate the label bias, this paper proposes a novel approach that learns an unsupervised discriminative model to recognize true unknown objects from raw pseudo labels generated by unsupervised region proposal methods. The resulting model can be further refined by a classification-free self-training method which iteratively extends pseudo unknown objects to the unlabeled regions. Experimental results show that our method 1) significantly outperforms the prior SOTA in detecting unknown objects while maintaining competitive performance of detecting known object classes on the MS COCO dataset, and 2) achieves better generalization ability on the LVIS and Objects365 datasets.
### SportsSloMo: A New Benchmark and Baselines for Human-centric Video Frame Interpolation
- **Authors:** Jiaben Chen, Huaizu Jiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.16876
- **Pdf link:** https://arxiv.org/pdf/2308.16876
- **Abstract**
Human-centric video frame interpolation has great potential for improving people's entertainment experiences and finding commercial applications in the sports analysis industry, e.g., synthesizing slow-motion videos. Although there are multiple benchmark datasets available in the community, none of them is dedicated for human-centric scenarios. To bridge this gap, we introduce SportsSloMo, a benchmark consisting of more than 130K video clips and 1M video frames of high-resolution ($\geq$720p) slow-motion sports videos crawled from YouTube. We re-train several state-of-the-art methods on our benchmark, and the results show a decrease in their accuracy compared to other datasets. It highlights the difficulty of our benchmark and suggests that it poses significant challenges even for the best-performing methods, as human bodies are highly deformable and occlusions are frequent in sports videos. To improve the accuracy, we introduce two loss terms considering the human-aware priors, where we add auxiliary supervision to panoptic segmentation and human keypoints detection, respectively. The loss terms are model agnostic and can be easily plugged into any video frame interpolation approaches. Experimental results validate the effectiveness of our proposed loss terms, leading to consistent performance improvement over 5 existing models, which establish strong baseline models on our benchmark. The dataset and code can be found at: https://neu-vi.github.io/SportsSlomo/.
## Keyword: raw image
There is no result
|
process
|
new submissions for fri sep keyword events two stage violence detection using vitpose and classification models at smart airports authors i̇rem üstek jay desai iván lópez torrecillas sofiane abadou jinjie wang quentin fever sandhya rani kasthuri yang xing weisi guo antonios tsourdos subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this study introduces an innovative violence detection framework tailored to the unique requirements of smart airports where prompt responses to violent situations are crucial the proposed framework harnesses the power of vitpose for human pose estimation it employs a cnn bilstm network to analyse spatial and temporal information within keypoints sequences enabling the accurate classification of violent behaviour in real time seamlessly integrated within the safe situational awareness for enhanced security framework of saab the solution underwent integrated testing to ensure robust performance in real world scenarios the airtlab dataset characterized by its high video quality and relevance to surveillance scenarios is utilized in this study to enhance the model s accuracy and mitigate false positives as airports face increased foot traffic in the post pandemic era implementing ai driven violence detection systems such as the one proposed is paramount for improving security expediting response times and promoting data informed decision making the implementation of this framework not only diminishes the probability of violent events but also assists surveillance teams in effectively addressing potential threats ultimately fostering a more secure and protected aviation sector codes are available at keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp prompt enhanced hierarchical transformer elevating cardiopulmonary resuscitation instruction via temporal 
action segmentation authors yang liu xiaoyun zhong shiyao zhai zhicheng du zhenyuan gao qiming huang canyang zhang bin jiang vijay kumar pandey sanyang han runming wang yuxing han peiwu qin subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the vast majority of people who suffer unexpected cardiac arrest are performed cardiopulmonary resuscitation cpr by passersby in a desperate attempt to restore life but endeavors turn out to be fruitless on account of disqualification fortunately many pieces of research manifest that disciplined training will help to elevate the success rate of resuscitation which constantly desires a seamless combination of novel techniques to yield further advancement to this end we collect a custom cpr video dataset in which trainees make efforts to behave resuscitation on mannequins independently in adherence to approved guidelines thereby devising an auxiliary toolbox to assist supervision and rectification of intermediate potential issues via modern deep learning methodologies our research empirically views this problem as a temporal action segmentation tas task in computer vision which aims to segment an untrimmed video at a frame wise level here we propose a prompt enhanced hierarchical transformer phitrans that integrates three indispensable modules including a textual prompt based video features extractor vfe a transformer based action segmentation executor ase and a regression based prediction refinement calibrator prc the backbone of the model preferentially derives from applications in three approved public datasets gtea and breakfast collected for tas tasks which accounts for the excavation of the segmentation pipeline on the cpr dataset in general we unprecedentedly probe into a feasible pipeline that genuinely elevates the cpr instruction qualification via action segmentation in conjunction with cutting edge deep learning techniques associated experiments advocate our implementation with multiple 
metrics surpassing keyword image signal processing there is no result keyword image signal process there is no result keyword compression holistic processing of colour images using novel quaternion valued wavelets on the plane authors neil d dizon jeffrey a hogan subjects computer vision and pattern recognition cs cv numerical analysis math na arxiv link pdf link abstract we investigate the applicability of quaternion valued wavelets on the plane to holistic colour image processing we present a methodology for decomposing and reconstructing colour images using quaternionic wavelet filters associated to recently developed quaternion valued wavelets on the plane we consider compression enhancement segmentation and denoising techniques to demonstrate quaternion valued wavelets as a promising tool for holistic colour image processing keyword raw unsupervised recognition of unknown objects for open world object detection authors ruohuan fang guansong pang lei zhou xiao bai jin zheng subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract open world object detection owod extends object detection problem to a realistic and dynamic scenario where a detection model is required to be capable of detecting both known and unknown objects and incrementally learning newly introduced knowledge current owod models such as ore and ow detr focus on pseudo labeling regions with high objectness scores as unknowns whose performance relies heavily on the supervision of known objects while they can detect the unknowns that exhibit similar features to the known objects they suffer from a severe label bias problem that they tend to detect all regions including unknown object regions that are dissimilar to the known objects as part of the background to eliminate the label bias this paper proposes a novel approach that learns an unsupervised discriminative model to recognize true unknown objects from raw pseudo labels generated by unsupervised region proposal 
methods the resulting model can be further refined by a classification free self training method which iteratively extends pseudo unknown objects to the unlabeled regions experimental results show that our method significantly outperforms the prior sota in detecting unknown objects while maintaining competitive performance of detecting known object classes on the ms coco dataset and achieves better generalization ability on the lvis and datasets sportsslomo a new benchmark and baselines for human centric video frame interpolation authors jiaben chen huaizu jiang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract human centric video frame interpolation has great potential for improving people s entertainment experiences and finding commercial applications in the sports analysis industry e g synthesizing slow motion videos although there are multiple benchmark datasets available in the community none of them is dedicated for human centric scenarios to bridge this gap we introduce sportsslomo a benchmark consisting of more than video clips and video frames of high resolution geq slow motion sports videos crawled from youtube we re train several state of the art methods on our benchmark and the results show a decrease in their accuracy compared to other datasets it highlights the difficulty of our benchmark and suggests that it poses significant challenges even for the best performing methods as human bodies are highly deformable and occlusions are frequent in sports videos to improve the accuracy we introduce two loss terms considering the human aware priors where we add auxiliary supervision to panoptic segmentation and human keypoints detection respectively the loss terms are model agnostic and can be easily plugged into any video frame interpolation approaches experimental results validate the effectiveness of our proposed loss terms leading to consistent performance improvement over existing models which establish strong baseline 
models on our benchmark the dataset and code can be found at keyword raw image there is no result
| 1
|
15,011
| 18,723,483,948
|
IssuesEvent
|
2021-11-03 14:12:33
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Index name validations
|
bug/2-confirmed process/candidate topic: schema validation engines/migration engine team/migrations
|
Currently we validate the `name` parameter uniqueness in indices per database settings. This variable is not part of the database, but the client, and should always be validated to be unique per model.
The issue should be to take this validation out from the scoped database constraint validations, and apply it to all connectors.
|
1.0
|
Index name validations - Currently we validate the `name` parameter uniqueness in indices per database settings. This variable is not part of the database, but the client, and should always be validated to be unique per model.
The issue should be to take this validation out from the scoped database constraint validations, and apply it to all connectors.
|
process
|
index name validations currently we validate the name parameter uniqueness in indices per database settings this variable is not part of the database but the client and should always be validated to be unique per model the issue should be to take this validation out from the scoped database constraint validations and apply it to all connectors
| 1
|
14,121
| 17,016,833,342
|
IssuesEvent
|
2021-07-02 13:14:03
|
CIAT-DAPA/subsets_genebank_accessions
|
https://api.github.com/repos/CIAT-DAPA/subsets_genebank_accessions
|
closed
|
Generic indicators process
|
processing
|
|Indicator|Europe |Oceania|Asia|North America|South America|Africa|
|---|---|---|---|---|---|---|
|ndws|Ready|Ready|Ready|Ready|Ready|Ready|
|ndwl|Ready|Ready|Ready|Ready|Ready|Ready|
|p95|Ready|Ready|Ready|Ready|Ready|Ready|
|t_rain|Ready|Ready|Ready|Ready|Ready|Ready|
|tn|Ready|Ready|Ready|Ready|Ready|Ready|
|tx|Ready|Ready|Ready|Ready|Ready|Ready|
|sr|Ready|Ready|Ready|Ready|Ready|Ready|
|nvpd 4|Ready|Ready|Ready|Ready|Ready|Ready|
|vpd|Ready|Ready|Ready|Ready|Ready|Ready|
|cdd|Ready|Ready|Ready|Ready|Ready|Ready|
|dl|Ready|Ready|Ready|Ready|Ready|Ready|
|
1.0
|
Generic indicators process - |Indicator|Europe |Oceania|Asia|North America|South America|Africa|
|---|---|---|---|---|---|---|
|ndws|Ready|Ready|Ready|Ready|Ready|Ready|
|ndwl|Ready|Ready|Ready|Ready|Ready|Ready|
|p95|Ready|Ready|Ready|Ready|Ready|Ready|
|t_rain|Ready|Ready|Ready|Ready|Ready|Ready|
|tn|Ready|Ready|Ready|Ready|Ready|Ready|
|tx|Ready|Ready|Ready|Ready|Ready|Ready|
|sr|Ready|Ready|Ready|Ready|Ready|Ready|
|nvpd 4|Ready|Ready|Ready|Ready|Ready|Ready|
|vpd|Ready|Ready|Ready|Ready|Ready|Ready|
|cdd|Ready|Ready|Ready|Ready|Ready|Ready|
|dl|Ready|Ready|Ready|Ready|Ready|Ready|
|
process
|
generic indicators process indicator europe oceania asia north america south america africa ndws ready ready ready ready ready ready ndwl ready ready ready ready ready ready ready ready ready ready ready ready t rain ready ready ready ready ready ready tn ready ready ready ready ready ready tx ready ready ready ready ready ready sr ready ready ready ready ready ready nvpd ready ready ready ready ready ready vpd ready ready ready ready ready ready cdd ready ready ready ready ready ready dl ready ready ready ready ready ready
| 1
|
19,568
| 25,887,827,790
|
IssuesEvent
|
2022-12-14 15:43:31
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
DISABLED test_terminate_signal (__main__.ForkTest)
|
module: multiprocessing triaged module: flaky-tests skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_terminate_signal%2C%20ForkTest) and the most recent
[workflow logs](https://github.com/pytorch/pytorch/actions/runs/1853887168).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with
1 red and 3 green.
cc @ezyang @gchanan @zou3519 @VitalyFedyunin
|
1.0
|
DISABLED test_terminate_signal (__main__.ForkTest) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_terminate_signal%2C%20ForkTest) and the most recent
[workflow logs](https://github.com/pytorch/pytorch/actions/runs/1853887168).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with
1 red and 3 green.
cc @ezyang @gchanan @zou3519 @VitalyFedyunin
|
process
|
disabled test terminate signal main forktest platforms linux this test was disabled because it is failing in ci see and the most recent over the past hours it has been determined flaky in workflow s with red and green cc ezyang gchanan vitalyfedyunin
| 1
|
196,442
| 15,594,640,913
|
IssuesEvent
|
2021-03-18 14:06:58
|
spring-projects/spring-data-keyvalue
|
https://api.github.com/repos/spring-projects/spring-data-keyvalue
|
closed
|
Update reference documentation with examples that match `@EnableMapRepositories`
|
type: documentation
|
https://docs.spring.io/spring-data/keyvalue/docs/current/reference/html/#key-value.template-configuration
Add/modify https://github.com/spring-projects/spring-data-examples/blob/master/map/src/test/java/example/springdata/map/PersonRepositoryIntegrationTest.java :
```
@RunWith(SpringRunner.class)
@SpringBootTest
public class PersonRepositoryIntegrationTest {
static final class MapKeyValueAdapterExt extends MapKeyValueAdapter {
MapKeyValueAdapterExt(Class<? extends Map> mapType) {
super(mapType);
}
@Override
public Object get(Object id, String keyspace) {
System.out.println(keyspace + ":" + id);
return super.get(id, keyspace);
}
}
@SpringBootApplication
@EnableMapRepositories
static class Config {
@Bean
public KeyValueOperations keyValueTemplate() {
return new KeyValueTemplate(keyValueAdapter());
}
@Bean
public KeyValueAdapter keyValueAdapter() {
return new MapKeyValueAdapterExt(ConcurrentHashMap.class);
}
}
// ...
}
```
`MapKeyValueAdapterExt#get(Object, String)` will not be called.
|
1.0
|
Update reference documentation with examples that match `@EnableMapRepositories` - https://docs.spring.io/spring-data/keyvalue/docs/current/reference/html/#key-value.template-configuration
Add/modify https://github.com/spring-projects/spring-data-examples/blob/master/map/src/test/java/example/springdata/map/PersonRepositoryIntegrationTest.java :
```
@RunWith(SpringRunner.class)
@SpringBootTest
public class PersonRepositoryIntegrationTest {
static final class MapKeyValueAdapterExt extends MapKeyValueAdapter {
MapKeyValueAdapterExt(Class<? extends Map> mapType) {
super(mapType);
}
@Override
public Object get(Object id, String keyspace) {
System.out.println(keyspace + ":" + id);
return super.get(id, keyspace);
}
}
@SpringBootApplication
@EnableMapRepositories
static class Config {
@Bean
public KeyValueOperations keyValueTemplate() {
return new KeyValueTemplate(keyValueAdapter());
}
@Bean
public KeyValueAdapter keyValueAdapter() {
return new MapKeyValueAdapterExt(ConcurrentHashMap.class);
}
}
// ...
}
```
`MapKeyValueAdapterExt#get(Object, String)` will not be called.
|
non_process
|
update reference documentation with examples that match enablemaprepositories add modify runwith springrunner class springboottest public class personrepositoryintegrationtest static final class mapkeyvalueadapterext extends mapkeyvalueadapter mapkeyvalueadapterext class maptype super maptype override public object get object id string keyspace system out println keyspace id return super get id keyspace springbootapplication enablemaprepositories static class config bean public keyvalueoperations keyvaluetemplate return new keyvaluetemplate keyvalueadapter bean public keyvalueadapter keyvalueadapter return new mapkeyvalueadapterext concurrenthashmap class mapkeyvalueadapterext get object string will not be called
| 0
|
4,441
| 7,313,091,508
|
IssuesEvent
|
2018-02-28 23:22:37
|
GoogleCloudPlatform/google-cloud-python
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
|
closed
|
Please release new pip version of google-cloud-monitoring
|
api: monitoring type: process
|
Please release a new version of google-cloud-monitoring via pip. The most recent version is 0.28.0 from Oct 31, 2017. I'd like to pick up recent bug fixes (e.g. #4910).
https://pypi.org/project/google-cloud-monitoring/#history
|
1.0
|
Please release new pip version of google-cloud-monitoring - Please release a new version of google-cloud-monitoring via pip. The most recent version is 0.28.0 from Oct 31, 2017. I'd like to pick up recent bug fixes (e.g. #4910).
https://pypi.org/project/google-cloud-monitoring/#history
|
process
|
please release new pip version of google cloud monitoring please release a new version of google cloud monitoring via pip the most recent version is from oct i d like to pick up recent bug fixes e g
| 1
|
18,509
| 24,551,531,313
|
IssuesEvent
|
2022-10-12 12:58:45
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Activities will be in Resume status in the following scenario
|
Bug Blocker P0 iOS Process: Fixed Process: Tested dev
|
Steps:
1. Sign in to SB
2. Create a study
3. Click on edit study and go to questionnaires/active tasks
4. Edit the activities and publish the updates
5. Sign up or sign in to the mobile app and enroll to the particular study
6. Try to provide the responses for the activities which are edited and published in step 4
7. Observe
AR: Activities are moving to Resume state
ER: Activities should move to a completed state
|
2.0
|
[iOS] Activities will be in Resume status in the following scenario - Steps:
1. Sign in to SB
2. Create a study
3. Click on edit study and go to questionnaires/active tasks
4. Edit the activities and publish the updates
5. Sign up or sign in to the mobile app and enroll to the particular study
6. Try to provide the responses for the activities which are edited and published in step 4
7. Observe
AR: Activities are moving to Resume state
ER: Activities should move to a completed state
|
process
|
activities will be in resume status in the following scenario steps sign in to sb create a study click on edit study and go to questionnaires active tasks edit the activities and publish the updates sign up or sign in to the mobile app and enroll to the particular study try to provide the responses for the activities which are edited and published in step observe ar activities are moving to resume state er activities should move to a completed state
| 1
|
14,300
| 17,288,899,041
|
IssuesEvent
|
2021-07-24 09:43:17
|
apache/shardingsphere
|
https://api.github.com/repos/apache/shardingsphere
|
closed
|
Add unit test for ProcessRegistrySubscriber
|
feature:show-process in: test project: OSD2021
|
Hi community,
This issue is for #10887.
### Aim
Add unit test for `ProcessRegistrySubscriber` to test its public functions.
### Basic Qualifications
* Java
* Maven
* Junit.Test
### Example FYI
* `RegistryCenterTest`
|
1.0
|
Add unit test for ProcessRegistrySubscriber - Hi community,
This issue is for #10887.
### Aim
Add unit test for `ProcessRegistrySubscriber` to test its public functions.
### Basic Qualifications
* Java
* Maven
* Junit.Test
### Example FYI
* `RegistryCenterTest`
|
process
|
add unit test for processregistrysubscriber hi community this issue is for aim add unit test for processregistrysubscriber to test its public functions basic qualifications java maven junit test example fyi registrycentertest
| 1
|
1,850
| 4,647,941,333
|
IssuesEvent
|
2016-10-01 19:45:52
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
opened
|
Fix ISRCTN identifiers we got from EUCTR
|
data cleaning Processors
|
The identifiers should be in the form `ISRCTN09988575`, but they're quite messy. Check the `isrctn_international_standard_randomised_controlled_trial_numbe` column in `euctr` table in the warehouse.
|
1.0
|
Fix ISRCTN identifiers we got from EUCTR - The identifiers should be in the form `ISRCTN09988575`, but they're quite messy. Check the `isrctn_international_standard_randomised_controlled_trial_numbe` column in `euctr` table in the warehouse.
|
process
|
fix isrctn identifiers we got from euctr the identifiers should be in the form but they re quite messy check the isrctn international standard randomised controlled trial numbe column in euctr table in the warehouse
| 1
|