Dataset schema (column, dtype, and reported value range or distinct classes):

| column | dtype | stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 or 1 |
---
Row 176,138 (id 6,556,836,588)
type: IssuesEvent
created_at: 2017-09-06 15:20:36
repo: strongloop/loopback-component-oauth2
repo_url: https://api.github.com/repos/strongloop/loopback-component-oauth2
action: closed
title: Unable to identify when login is failed
labels: feature needs-priority stale
body:
Hi,
When the component must ensure that the user is logged, depending on the grant method, we generally reach the following line of code inside `oauth2-loopback.js` :
```
login.ensureLoggedIn({ redirectTo: options.loginPage || '/login' }),
```
If we send back false as user, the component redirect the user to exactly the same _loginPage_ and there's no way to indicate to the user that he does not give the right login/password.
May you make this configurable so we can set a GET param to indicate that the login did not go well ?
Cheers.
Max
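What this issue asks for amounts to appending a failure indicator to the login redirect; a minimal sketch of that idea, where the `login_failed` parameter name is a hypothetical choice, not the component's actual API:

```python
from urllib.parse import urlencode

def login_redirect(login_page="/login", failed=False):
    """Build the redirect target, optionally flagging a failed attempt."""
    if not failed:
        return login_page
    # Append with '&' if the page already carries a query string.
    sep = "&" if "?" in login_page else "?"
    return login_page + sep + urlencode({"login_failed": "1"})

print(login_redirect())                        # /login
print(login_redirect(failed=True))             # /login?login_failed=1
print(login_redirect("/login?lang=fr", True))  # /login?lang=fr&login_failed=1
```

The login page could then branch on the presence of the flag to show an error message.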
index: 1.0
text_combine:
Unable to identify when login is failed - Hi,
When the component must ensure that the user is logged, depending on the grant method, we generally reach the following line of code inside `oauth2-loopback.js` :
```
login.ensureLoggedIn({ redirectTo: options.loginPage || '/login' }),
```
If we send back false as user, the component redirect the user to exactly the same _loginPage_ and there's no way to indicate to the user that he does not give the right login/password.
May you make this configurable so we can set a GET param to indicate that the login did not go well ?
Cheers.
Max
label: non_process
text:
unable to identify when login is failed hi when the component must ensure that the user is logged depending on the grant method we generally reach the following line of code inside loopback js login ensureloggedin redirectto options loginpage login if we send back false as user the component redirect the user to exactly the same loginpage and there s no way to indicate to the user that he does not give the right login password may you make this configurable so we can set a get param to indicate that the login did not go well cheers max
binary_label: 0

---
Row 14,760 (id 18,041,411,187)
type: IssuesEvent
created_at: 2021-09-18 05:12:31
repo: ooi-data/CE04OSPD-DP01B-01-CTDPFL105-recovered_inst-dpc_ctd_instrument_recovered
repo_url: https://api.github.com/repos/ooi-data/CE04OSPD-DP01B-01-CTDPFL105-recovered_inst-dpc_ctd_instrument_recovered
action: opened
title: 🛑 Processing failed: TypeError
labels: process
body:
## Overview
`TypeError` found in `processing_task` task during run ended on 2021-09-18T05:12:30.549031.
## Details
Flow name: `CE04OSPD-DP01B-01-CTDPFL105-recovered_inst-dpc_ctd_instrument_recovered`
Task name: `processing_task`
Error type: `TypeError`
Error message: 'NoneType' object is not subscriptable
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 101, in processing
final_path = finalize_zarr(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 359, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1187, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 88, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 69, in sync
raise result[0]
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 25, in _runner
result[0] = await coro
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1677, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1657, in _bulk_delete
await self._call_s3("delete_objects", kwargs, Bucket=bucket, Delete=delete_keys)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 267, in _call_s3
err = translate_boto_error(err)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/errors.py", line 142, in translate_boto_error
code = error.response["Error"].get("Code")
TypeError: 'NoneType' object is not subscriptable
```
</details>
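The traceback above fails because the caught error's `response` attribute is `None` at the moment it is subscripted. A small reproduction with a defensive variant, using a hypothetical stand-in class rather than the actual s3fs/botocore types:

```python
class FakeBotoError(Exception):
    """Stand-in for an error object whose `response` payload may be missing."""
    def __init__(self, response=None):
        super().__init__()
        self.response = response

def get_error_code_unsafe(error):
    # Mirrors the failing line: assumes error.response is always a dict.
    return error.response["Error"].get("Code")

def get_error_code_safe(error):
    # Tolerates a missing or None response instead of raising TypeError.
    response = getattr(error, "response", None) or {}
    return response.get("Error", {}).get("Code")

err = FakeBotoError()  # no response payload attached
try:
    get_error_code_unsafe(err)
except TypeError as exc:
    print(exc)  # 'NoneType' object is not subscriptable
print(get_error_code_safe(err))  # None
```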
index: 1.0
text_combine:
🛑 Processing failed: TypeError - ## Overview
`TypeError` found in `processing_task` task during run ended on 2021-09-18T05:12:30.549031.
## Details
Flow name: `CE04OSPD-DP01B-01-CTDPFL105-recovered_inst-dpc_ctd_instrument_recovered`
Task name: `processing_task`
Error type: `TypeError`
Error message: 'NoneType' object is not subscriptable
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 101, in processing
final_path = finalize_zarr(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 359, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1187, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 88, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 69, in sync
raise result[0]
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 25, in _runner
result[0] = await coro
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1677, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1657, in _bulk_delete
await self._call_s3("delete_objects", kwargs, Bucket=bucket, Delete=delete_keys)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 267, in _call_s3
err = translate_boto_error(err)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/errors.py", line 142, in translate_boto_error
code = error.response["Error"].get("Code")
TypeError: 'NoneType' object is not subscriptable
```
</details>
label: process
text:
🛑 processing failed typeerror overview typeerror found in processing task task during run ended on details flow name recovered inst dpc ctd instrument recovered task name processing task error type typeerror error message nonetype object is not subscriptable traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize zarr file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize zarr source store fs delete source store root recursive true file srv conda envs notebook lib site packages fsspec spec py line in delete return self rm path recursive recursive maxdepth maxdepth file srv conda envs notebook lib site packages fsspec asyn py line in wrapper return sync self loop func args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in sync raise result file srv conda envs notebook lib site packages fsspec asyn py line in runner result await coro file srv conda envs notebook lib site packages core py line in rm await asyncio gather file srv conda envs notebook lib site packages core py line in bulk delete await self call delete objects kwargs bucket bucket delete delete keys file srv conda envs notebook lib site packages core py line in call err translate boto error err file srv conda envs notebook lib site packages errors py line in translate boto error code error response get code typeerror nonetype object is not subscriptable
binary_label: 1

---
Row 49,975 (id 7,549,130,628)
type: IssuesEvent
created_at: 2018-04-18 13:27:41
repo: pburtchaell/redux-promise-middleware
repo_url: https://api.github.com/repos/pburtchaell/redux-promise-middleware
action: opened
title: Fix 404 Error in Complex Example
labels: documentation good-first-issue
body:
The complex example is throwing a 404 error. It's probably best to redevelop this example to use some fun public API—like the [Cat API](http://thecatapi.com/).
index: 1.0
text_combine:
Fix 404 Error in Complex Example - The complex example is throwing a 404 error. It's probably best to redevelop this example to use some fun public API—like the [Cat API](http://thecatapi.com/).
label: non_process
text:
fix error in complex example the complex example is throwing a error it s probably best to redevelop this example to use some fun public api—like the
binary_label: 0

---
Row 1,685 (id 4,328,565,518)
type: IssuesEvent
created_at: 2016-07-26 14:25:43
repo: CGAL/cgal
repo_url: https://api.github.com/repos/CGAL/cgal
action: closed
title: Add light display on point sets with normals in Polyhedron demo
labels: CGAL 3D demo feature request Pkg::Point_set_processing
body:
When point sets have normals, a better and clearer display can be done using light (similarly to a polyhedron).
(@maxGimeno If you add this to your todo-list, this is [the commit](https://github.com/CGAL/cgal-dev/commit/455843e1bf01faf87eda7994b4c715d7ea173b6d) where we worked on it together last time, although you can't use it directly because it modifies other unrelated files – sorry about that, I should have separated it.)
index: 1.0
text_combine:
Add light display on point sets with normals in Polyhedron demo - When point sets have normals, a better and clearer display can be done using light (similarly to a polyhedron).
(@maxGimeno If you add this to your todo-list, this is [the commit](https://github.com/CGAL/cgal-dev/commit/455843e1bf01faf87eda7994b4c715d7ea173b6d) where we worked on it together last time, although you can't use it directly because it modifies other unrelated files – sorry about that, I should have separated it.)
label: process
text:
add light display on point sets with normals in polyhedron demo when point sets have normals a better and clearer display can be done using light similarly to a polyhedron maxgimeno if you add this to your todo list this is where we worked on it together last time although you can t use it directly because it modifies other unrelated files – sorry about that i should have separated it
binary_label: 1

---
Row 15,064 (id 18,764,637,536)
type: IssuesEvent
created_at: 2021-11-05 21:18:14
repo: ORNL-AMO/AMO-Tools-Desktop
repo_url: https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
action: closed
title: PHAST flue gas property suite error
labels: bug Process Heating
body:
Reproduce
- Need to remove save() method if condition around line 200 flue-gas-losses-form-mass for bug to appear. This was a temporary fix
- Go to PHAST --> flue gas --> switching from gas to solid/liquid

See last comments of #435 for problems related to this
index: 1.0
text_combine:
PHAST flue gas property suite error - Reproduce
- Need to remove save() method if condition around line 200 flue-gas-losses-form-mass for bug to appear. This was a temporary fix
- Go to PHAST --> flue gas --> switching from gas to solid/liquid

See last comments of #435 for problems related to this
label: process
text:
phast flue gas property suite error reproduce need to remove save method if condition around line flue gas losses form mass for bug to appear this was a temprorary fix go to phast flue gas switching from gas to solid liquid see last comments of for problems related to this
binary_label: 1

---
Row 1,529 (id 4,118,762,887)
type: IssuesEvent
created_at: 2016-06-08 12:48:45
repo: World4Fly/Interface-for-Arduino
repo_url: https://api.github.com/repos/World4Fly/Interface-for-Arduino
action: closed
title: Control Tetris from application
labels: enhancement process
body:
- [ ] Create mind concept (interaction between tabs and accessing Arduino information)
index: 1.0
text_combine:
Control Tetris from application - - [ ] Create mind concept (interaction between tabs and accessing Arduino information)
label: process
text:
control tetris from application create mind concept interaction between tabs and accessing arduino information
binary_label: 1

---
Row 367,876 (id 10,862,443,149)
type: IssuesEvent
created_at: 2019-11-14 13:19:36
repo: mozilla/voice-web
repo_url: https://api.github.com/repos/mozilla/voice-web
action: opened
title: "Team Progress" and "Top Contributors" dashboards are empty, even though people signed up and contributed to the challenge
labels: Priority: P0 Type: Bug voice-challenge
body:
I used several test accounts to sign up for the challenge in the SAP team
I also contributed with some of those accounts.
Though, the "SAP Team Progress" and "Overall Challenge Top Contributors" dashboards are still empty. Users who accepted the SAP team invite to the challenge should be displayed here.

index: 1.0
text_combine:
"Team Progress" and "Top Contributors" dashboards are empty, even though people signed up and contributed to the challenge - I used several test accounts to sign up for the challenge in the SAP team
I also contributed with some of those accounts.
Though, the "SAP Team Progress" and "Overall Challenge Top Contributors" dashboards are still empty. Users who accepted the SAP team invite to the challenge should be displayed here.

label: non_process
text:
team progress and top contributors dashboards are empty even though people signed up and contributed to the challenge i used several test accounts to sign up for the challenge in the sap team i also contributed with some of those accounts though the sap team progress and overall challenge top contributors dashboards are still empty users who accepted the sap team invite to the challenge should be displayed here
binary_label: 0

---
Row 13,027 (id 15,380,470,353)
type: IssuesEvent
created_at: 2021-03-02 21:09:05
repo: Ultimate-Hosts-Blacklist/whitelist
repo_url: https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
action: closed
title: [FALSE-POSITIVE?] fossbytes.com
labels: waiting for Mitch whitelisting process
body:
I feel `fossbytes.com` shouldn't be blocked. It's just a technology website / blog.
index: 1.0
text_combine:
[FALSE-POSITIVE?] fossbytes.com - I feel `fossbytes.com` shouldn't be blocked. It's just a technology website / blog.
label: process
text:
fossbytes com i feel fossbytes com shouldn t be blocked it s just a technology website blog
binary_label: 1

---
Row 1,407 (id 3,971,515,885)
type: IssuesEvent
created_at: 2016-05-04 12:16:43
repo: opentrials/opentrials
repo_url: https://api.github.com/repos/opentrials/opentrials
action: closed
title: Processors: improvements
labels: enhancement Processors
body:
### Base
- [x] remove `OPENTRIALS_` prefix from env vars?
- [x] check `facts` search uses index
- [x] add time checks for `mapper` stack - map only updated/created items (https://github.com/opentrials/opentrials/issues/105)
- [x] improve papertrail logging formatter (https://github.com/opentrials/opentrials/issues/86)
- [x] remove `trials_trialrecords` table because it's not a `m2m` relationship?
- [x] use some kind of `slug` to improve `Finder` system (now it uses `person.name` for example)
- [x] refactor `Finder`
- [x] move column creation to `api` migration
- [x] adjust extractor (registers) priorities
- [x] merge `links` and `facts`?
- [x] use `facts` analogue for things like `person.name` (and:`primary_facts` + or:`secondary_facts`)?
- [x] don't md5 while slugifying scientific_titles?
- [x] ~~add periodic logger.info like scrapy does: `Translated 50 trials (at 50 trials/min)` (speed is missing)~~
- [x] merge records on entity entries? (https://github.com/opentrials/opentrials/issues/103)
### Concrete
- [x] all - fix `extractors` todos with additional `scraper` work (https://github.com/opentrials/opentrials/issues/104)
index: 1.0
text_combine:
Processors: improvements - ### Base
- [x] remove `OPENTRIALS_` prefix from env vars?
- [x] check `facts` search uses index
- [x] add time checks for `mapper` stack - map only updated/created items (https://github.com/opentrials/opentrials/issues/105)
- [x] improve papertrail logging formatter (https://github.com/opentrials/opentrials/issues/86)
- [x] remove `trials_trialrecords` table because it's not a `m2m` relationship?
- [x] use some kind of `slug` to improve `Finder` system (now it uses `person.name` for example)
- [x] refactor `Finder`
- [x] move column creation to `api` migration
- [x] adjust extractor (registers) priorities
- [x] merge `links` and `facts`?
- [x] use `facts` analogue for things like `person.name` (and:`primary_facts` + or:`secondary_facts`)?
- [x] don't md5 while slugifying scientific_titles?
- [x] ~~add periodic logger.info like scrapy does: `Translated 50 trials (at 50 trials/min)` (speed is missing)~~
- [x] merge records on entity entries? (https://github.com/opentrials/opentrials/issues/103)
### Concrete
- [x] all - fix `extractors` todos with additional `scraper` work (https://github.com/opentrials/opentrials/issues/104)
label: process
text:
processors improvements base remove opentrials prefix from env vars check facts search uses index add time checks for mapper stack map only updated created items improve papertrail logging formatter remove trials trialrecords table because it s not a relationship use some kind of slug to improve finder system now it uses person name for example refactor finder move column creation to api migration adjust extractor registers priorities merge links and facts use facts analogue for things like person name and primary facts or secondary facts don t while slugifying scientific titles add periodic logger info like scrapy does translated trials at trials min speed is missing merge records on entity entries concrete all fix extractors todos with additional scraper work
binary_label: 1

---
Row 165,273 (id 12,835,820,835)
type: IssuesEvent
created_at: 2020-07-07 13:28:55
repo: softmatterlab/Braph-2.0-Matlab
repo_url: https://api.github.com/repos/softmatterlab/Braph-2.0-Matlab
action: closed
title: ComparisonDTI
labels: analysis test
body:
**Branch from and merge to gv-analysis-comparison**
- [ ] ComparisonDTI
- [ ] test_ComparisonDTI
Use as reference ComparisonMRI
Double-check with @egolol, see issue #525
index: 1.0
text_combine:
ComparisonDTI - **Branch from and merge to gv-analysis-comparison**
- [ ] ComparisonDTI
- [ ] test_ComparisonDTI
Use as reference ComparisonMRI
Double-check with @egolol, see issue #525
label: non_process
text:
comparisondti branch from and merge to gv analysis comparison comparisondti test comparisondti use as reference comparisonmri double check with egolol see issue
binary_label: 0

---
Row 48,708 (id 6,102,612,459)
type: IssuesEvent
created_at: 2017-06-20 16:52:27
repo: mozilla/network-pulse
repo_url: https://api.github.com/repos/mozilla/network-pulse
action: closed
title: Form: identify if submitter is creator
labels: design
body:
Let's add a field to the form for submitters to identify whether they own the thing or not.
index: 1.0
text_combine:
Form: identify if submitter is creator - Let's add a field to the form for submitters to identify whether they own the thing or not.
label: non_process
text:
form identify if submitter is creator let s add a field to the form for submitters to identify whether they own the thing or not
binary_label: 0

---
Row 2,123 (id 4,963,797,915)
type: IssuesEvent
created_at: 2016-12-03 12:39:01
repo: jlm2017/jlm-video-subtitles
repo_url: https://api.github.com/repos/jlm2017/jlm-video-subtitles
action: closed
title: [subtitles] [FR] Revue de la semaine #9
labels: Language: French Process: [6] Approved
body:
Video title
RDLS #9 - YOUTUBE, PROGRAMME, FILLON, ANIMAUX POLLINISATEURS EN DANGER
URL
https://www.youtube.com/watch?v=ekBMfrb14xA
Youtube subtitles language
Francais
Duration
23:20
Subtitles URL
https://www.youtube.com/timedtext_editor?lang=fr&tab=captions&v=ekBMfrb14xA&ui=hd&action_mde_edit_form=1&ref=player&bl=vmp
index: 1.0
text_combine:
[subtitles] [FR] Revue de la semaine #9 - Video title
RDLS #9 - YOUTUBE, PROGRAMME, FILLON, ANIMAUX POLLINISATEURS EN DANGER
URL
https://www.youtube.com/watch?v=ekBMfrb14xA
Youtube subtitles language
Francais
Duration
23:20
Subtitles URL
https://www.youtube.com/timedtext_editor?lang=fr&tab=captions&v=ekBMfrb14xA&ui=hd&action_mde_edit_form=1&ref=player&bl=vmp
label: process
text:
revue de la semaine video title rdls youtube programme fillon animaux pollinisateurs en danger url youtube subtitles language francais duration subtitles url
binary_label: 1

---
Row 3,886 (id 6,818,128,042)
type: IssuesEvent
created_at: 2017-11-07 03:18:55
repo: Great-Hill-Corporation/quickBlocks
repo_url: https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
action: closed
title: Improvement at getTokenBal when token or address does not exist
labels: status-inprocess tools-getTokenBal type-enhancement
body:
It will be a nice improvement if we can detect when the user made a mistake and either the token or the account does not exist.
When I run test cases using fake values with valid format like the following, at the first example the token is just an arbitrary value, on the second both are made up values:
./getTokenBal 0xd26114cd6EE289AccF82350c8d84870000000000 0x5e44c3e467a49c9ca0296a9f130fc43304000000
./getTokenBal 0xd26114cd6EE289AccF82350c8d84870000000000 0x5e44c3e467a49c9ca0296a9f130fc43304000000
In these scenarios we report that it is a 0 balance when maybe we can warn the user that they do not exist. I do not know if this is feasible, but would help to detect when you made at typo entering an invalid digit at your address or token.
index: 1.0
text_combine:
Improvement at getTokenBal when token or address does not exist - It will be a nice improvement if we can detect when the user made a mistake and either the token or the account does not exist.
When I run test cases using fake values with valid format like the following, at the first example the token is just an arbitrary value, on the second both are made up values:
./getTokenBal 0xd26114cd6EE289AccF82350c8d84870000000000 0x5e44c3e467a49c9ca0296a9f130fc43304000000
./getTokenBal 0xd26114cd6EE289AccF82350c8d84870000000000 0x5e44c3e467a49c9ca0296a9f130fc43304000000
In these scenarios we report that it is a 0 balance when maybe we can warn the user that they do not exist. I do not know if this is feasible, but would help to detect when you made at typo entering an invalid digit at your address or token.
label: process
text:
improvement at gettokenbal when token or address does not exist it will be a nice improvement if we can detect when the user made a mistake and either the token or the account does not exist when i run test cases using fake values with valid format like the following at the first example the token is just an arbitrary value on the second both are made up values gettokenbal gettokenbal in these scenarios we report that it is a balance when maybe we can warn the user that they do not exist i do not know if this is feasible but would help to detect when you made at typo entering an invalid digit at your address or token
binary_label: 1

---
Row 10,180 (id 13,044,162,852)
type: IssuesEvent
created_at: 2020-07-29 03:47:37
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: UCP: Migrate scalar function `LeastReal` from TiDB
labels: challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
body:
## Description
Port the scalar function `LeastReal` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
index: 2.0
text_combine:
UCP: Migrate scalar function `LeastReal` from TiDB -
## Description
Port the scalar function `LeastReal` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
label: process
text:
ucp migrate scalar function leastreal from tidb description port the scalar function leastreal from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
binary_label: 1

---
Row 51,145 (id 13,190,289,230)
type: IssuesEvent
created_at: 2020-08-13 09:55:47
repo: ESA-VirES/WebClient-Framework
repo_url: https://api.github.com/repos/ESA-VirES/WebClient-Framework
action: opened
title: Broken server-side interpolation of the EEF data.
labels: defect
body:
When selecting MAG and EEF data the server responds with following error:
```
Error: Problem retrieving data: 'Interp1D' object has no attribute 'indices_nearest'
```

This is a regression introduced in v3.3.0.
Observed on the production instance.
Already fixed on staging.
FAO @lmar76
index: 1.0
text_combine:
Broken server-side interpolation of the EEF data. - When selecting MAG and EEF data the server responds with following error:
```
Error: Problem retrieving data: 'Interp1D' object has no attribute 'indices_nearest'
```

This is a regression introduced in v3.3.0.
Observed on the production instance.
Already fixed on staging.
FAO @lmar76
label: non_process
text:
broken server side interpolation of the eef data when selecting mag and eef data the server responds with following error error problem retrieving data object has no attribute indices nearest this is a regression introduces in observed on the production instance already fixed on staging fao
binary_label: 0

---
Row 14,494 (id 17,604,292,545)
type: IssuesEvent
created_at: 2021-08-17 15:13:32
repo: qgis/QGIS-Documentation
repo_url: https://api.github.com/repos/qgis/QGIS-Documentation
action: closed
title: port more Processing algorithms to C++ (Request in QGIS)
labels: Processing Alg 3.14
body:
### Request for documentation
From pull request QGIS/qgis#36372
Author: @alexbruy
QGIS version: 3.14
**port more Processing algorithms to C++**
### PR Description:
## Description
Port some Processing algorithms to C++:
- Split Vector Layer
- PostGIS Execute SQL
- SpatiaLite Execute SQL
- Polygonize
- Snap Geometries
### Commits tagged with [need-docs] or [FEATURE]
* [needs-docs] Add optional parameter for output file type to the vector split algorithm" (#5508)
* "[processing][feature] Add algorithm for executing SQL queries against registered SpatiaLite databases" (#5509)
index: 1.0
text_combine:
port more Processing algorithms to C++ (Request in QGIS) - ### Request for documentation
From pull request QGIS/qgis#36372
Author: @alexbruy
QGIS version: 3.14
**port more Processing algorithms to C++**
### PR Description:
## Description
Port some Processing algorithms to C++:
- Split Vector Layer
- PostGIS Execute SQL
- SpatiaLite Execute SQL
- Polygonize
- Snap Geometries
### Commits tagged with [need-docs] or [FEATURE]
* [needs-docs] Add optional parameter for output file type to the vector split algorithm" (#5508)
* "[processing][feature] Add algorithm for executing SQL queries against registered SpatiaLite databases" (#5509)
label: process
text:
port more processing algorithms to c request in qgis request for documentation from pull request qgis qgis author alexbruy qgis version port more processing algorithms to c pr description description port some processing algorithms to c split vector layer postgis execute sql spatialite execute sql polygonize snap geometries commits tagged with or add optional parameter for output file type to the vector split algorithm add algorithm for executing sql queries against registered spatialite databases
binary_label: 1

---
Row 5,916 (id 8,736,243,043)
type: IssuesEvent
created_at: 2018-12-11 18:59:33
repo: ipfs/go-ipfs
repo_url: https://api.github.com/repos/ipfs/go-ipfs
action: opened
title: Improving work tracking and prioritization
labels: process
body:
## Goal
Based on voting in #5819. We want to track ongoing and upcoming work in a way that makes progress and priorities clear to everyone - team members and the broader community. And it should make managing priorities easy for technical leads. In this issue we're going to discuss (and decide on) something to try first.
## Summary
*eingenito* - I wish we had a place where we could easily track progress on our highest priority initiatives. [mentions 'meta' issues]
*eingenito* - I wish I understood which issues are the ones that should be worked on.
*hannahhoward* - I wish we marked issues as "good first time issue" for first time contributors [we do have difficulty:easy which might be the same thing]
*DonaldTsang* - [mentions Kanban or similar for tracking work]
*eingenito* - I wish I knew what to do with all the old issues in go-ipfs. [they slow down waffle boards]
index: 1.0
text_combine:
Improving work tracking and prioritization - ## Goal
Based on voting in #5819. We want to track ongoing and upcoming work in a way that makes progress and priorities clear to everyone - team members and the broader community. And it should make managing priorities easy for technical leads. In this issue we're going to discuss (and decide on) something to try first.
## Summary
*eingenito* - I wish we had a place where we could easily track progress on our highest priority initiatives. [mentions 'meta' issues]
*eingenito* - I wish I understood which issues are the ones that should be worked on.
*hannahhoward* - I wish we marked issues as "good first time issue" for first time contributors [we do have difficulty:easy which might be the same thing]
*DonaldTsang* - [mentions Kanban or similar for tracking work]
*eingenito* - I wish I knew what to do with all the old issues in go-ipfs. [they slow down waffle boards]
label: process
text:
improving work tracking and prioritization goal based on voting in we want to track ongoing and upcoming work in a way that makes progress and priorities clear to everyone team members and the broader community and it should make managing priorities easy for technical leads in this issue we re going to discuss and decide on something to try first summary eingenito i wish we had a place where we could easily track progress on our highest priority initiatives eingenito i wish i understood which issues are the ones that should be worked on hannahhoward i wish we marked issues as good first time issue for first time contributors donaldtsang eingenito i wish i knew what to do with all the old issues in go ipfs
binary_label: 1

---
Row 15,292 (id 19,296,163,005)
type: IssuesEvent
created_at: 2021-12-12 16:19:32
repo: varabyte/kobweb
repo_url: https://api.github.com/repos/varabyte/kobweb
action: closed
title: Audit how API routes work when users don't explicitly do anything
labels: process
body:
```
@Api
fun someRoute(ctx) {
// Do nothing
}
```
right now, this sends an empty "200" response, the idea being that the user defined the endpoint, so that's enough. But I'm not sure if that's intuitive, or if instead we should require the user do _something_:
```
@Api
fun someRoute(ctx) {
ctx.res.status = 200 // body automatically set to "" if not already set
}
```
or
```
@Api
fun someRoute(ctx) {
ctx.res.body = "..." // status automatically set to 200 if not already set
}
```
This way, we can write code like:
```
@Api
fun someRoute(ctx) {
if (ctx.req.method == GET) {
ctx.res.body = ...
}
}
```
and other methods will return a 400
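The defaulting rule being weighed above (setting either status or body implies the other) can be sketched generically; this is a hypothetical illustration, not Kobweb's actual resolution logic:

```python
def finalize_response(status=None, body=None):
    """Apply the proposed defaulting between status and body."""
    if status is None and body is None:
        return None          # handler set nothing: treat as unhandled
    if status is None:
        status = 200         # body was set: assume success
    if body is None:
        body = ""            # status was set: empty payload
    return status, body

print(finalize_response())            # None (falls through to an error code)
print(finalize_response(body="ok"))   # (200, 'ok')
print(finalize_response(status=204))  # (204, '')
```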
index: 1.0
text_combine:
Audit how API routes work when users don't explicitly do anything - ```
@Api
fun someRoute(ctx) {
// Do nothing
}
```
right now, this sends an empty "200" response, the idea being that the user defined the endpoint, so that's enough. But I'm not sure if that's intuitive, or if instead we should require the user do _something_:
```
@Api
fun someRoute(ctx) {
ctx.res.status = 200 // body automatically set to "" if not already set
}
```
or
```
@Api
fun someRoute(ctx) {
ctx.res.body = "..." // status automatically set to 200 if not already set
}
```
This way, we can write code like:
```
@Api
fun someRoute(ctx) {
if (ctx.req.method == GET) {
ctx.res.body = ...
}
}
```
and other methods will return a 400
label: process
text:
audit how api routes work when users don t explicitly do anything api fun someroute ctx do nothing right now this sends an empty response the idea being that the user defined the endpoint so that s enough but i m not sure if that s intuitive or if instead we should require the user do something api fun someroute ctx ctx res status body automatically set to if not already set or api fun someroute ctx ctx res body status automatically set to if not already set this way we can write code like api fun someroute ctx if ctx req method get ctx res body and other methods will return a
binary_label: 1

---
Row 10,044 (id 13,044,161,638)
type: IssuesEvent
created_at: 2020-07-29 03:47:24
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: UCP: Migrate scalar function `TimeFormat` from TiDB
labels: challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
body:
## Description
Port the scalar function `TimeFormat` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `TimeFormat` from TiDB -
## Description
Port the scalar function `TimeFormat` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function timeformat from tidb description port the scalar function timeformat from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
792,988
| 27,979,318,684
|
IssuesEvent
|
2023-03-26 00:30:32
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
Annoying Slirp Message console output from qemu_x86 board target
|
bug priority: low area: QEMU Stale
|
**Describe the bug**
The easiest way to use Networking with QEMU is in user mode with slirp. Here QEMU provides the (tcp|udp)/ip stack. Zephyr works just like a charm; however, the qemu version the zephyr sdk ships litters the console output with `qemu-system-i386: Slirp: Failed to send packet, ret: -1` messages. As far as I could trace this output back, it does not originate from Zephyr RTOS but from QEMU itself and is a false alarm. QEMU was able to send the packet. I can see the response from the server.
**To Reproduce**
Steps to reproduce the behavior:
1. cd samples/net/dhcpv4_client/
2. mkdir boards
3. save file qemu_x86.conf here (see below)
4. mkdir build; cd build
5. cmake -GNinja -DBOARD=qemu_x86
6. ninja run
7. See console output
**Expected behavior**
QEMU shall remain silent! there is no error. the ip packet gets transmitted by the qemu user mode just fine.
**Impact**
each ip packet results in one `qemu-system-i386: Slirp: Failed to send packet, ret: -1` line on the console output. debugging output of udp applications is annoying, while debugging ip packet heavy tcp traffic from multiple connections is tedious.
**Logs and console output**
```
$ ninja run
[1/183] Preparing syscall dependency handling
[2/183] Generating include/generated/version.h
-- Zephyr version: 3.2.99 (/home/mark/zephyrproject/zephyr), build: zephyr-v3.2.0-1869-g9ebb9abab7b0
[168/183] Linking C executable zephyr/zephyr_pre0.elf
[172/183] Linking C executable zephyr/zephyr_pre1.elf
[182/183] Linking C executable zephyr/zephyr.elf
Memory region Used Size Region Size %age Used
RAM: 184352 B 3 MB 5.86%
IDT_LIST: 0 GB 2 KB 0.00%
[182/183] To exit from QEMU enter: 'CTRL+a, x'[QEMU] CPU: qemu32,+nx,+pae
SeaBIOS (version zephyr-v1.0.0-0-g31d4e0e-dirty-20200714_234759-fv-az50-zephyr)
iPXE (http://ipxe.org) 00:02.0 CA00 PCI2.10 PnP PMM+00392120+002F2120 CA00
Booting from ROM..
*** Booting Zephyr OS build zephyr-v3.2.0-1869-g9ebb9abab7b0 ***
[00:00:00.000,000] <inf> net_dhcpv4_client_sample: Run dhcpv4 client
uart:~$ qemu-system-i386: Slirp: Failed to send packet, ret: -1
qemu-system-i386: Slirp: Failed to send packet, ret: -1
[00:00:07.010,000] <inf> net_dhcpv4: Received: 10.0.2.15
[00:00:07.010,000] <inf> net_dhcpv4_client_sample: Your address: 10.0.2.15
[00:00:07.010,000] <inf> net_dhcpv4_client_sample: Lease time: 86400 seconds
[00:00:07.010,000] <inf> net_dhcpv4_client_sample: Subnet: 255.255.255.0
[00:00:07.010,000] <inf> net_dhcpv4_client_sample: Router: 10.0.2.2
uart:~$ qemu-system-i386: Slirp: Failed to send packet, ret: -1
```
**Environment (please complete the following information):**
- OS: Ubuntu 22.04
- Toolchain Zephyr SDK,
- Version: 9ebb9abab7b0d662e0d5379367fb7f052959c545
**Additional context**
qemu_x86.conf:
```
CONFIG_FPU=y
CONFIG_PCIE=y
CONFIG_NET_L2_ETHERNET=y
CONFIG_NET_QEMU_ETHERNET=y
CONFIG_NET_QEMU_USER=y
CONFIG_NET_CONFIG_PEER_IPV4_ADDR="10.0.2.2"
```
This config advises `ninja run` to use the QEMU-internal virtual network instead of the more complicated tap0 network (with ifconfig, dnsmasq and/or dhcpd) which has to be set up manually and is more error prone than the virtual QEMU-internal network.
|
1.0
|
Annoying Slirp Message console output from qemu_x86 board target - **Describe the bug**
The easiest way to use Networking with QEMU is in user mode with slirp. here QEMU provides the (tcp|udp)/ip stack. Zephyr works just like a charme, however, the qemu version zephyr sdk ships litters the console output with `qemu-system-i386: Slirp: Failed to send packet, ret: -1` messages. As far as I could trace this output back, it does not originate from Zephyr RTOS but from QEMU itself and is false alarm. QEMU was able to send the packet. I can see the response from the server.
**To Reproduce**
Steps to reproduce the behavior:
1. cd samples/net/dhcpv4_client/
2. mkdir boards
3. save file qemu_x86.conf here (see below)
4. mkdir build; cd build
5. cmake -GNinja -DBOARD=qemu_x86
6. ninja run
7. See console output
**Expected behavior**
QEMU shall remain silent! there is no error. the ip packet gets transmitted by the qemu user mode just fine.
**Impact**
each ip packet results in one `qemu-system-i386: Slirp: Failed to send packet, ret: -1` line on the console output. debugging output of udp applications is annoying, while debugging ip packet heavy tcp traffic from multiple connections is tedious.
**Logs and console output**
```
$ ninja run
[1/183] Preparing syscall dependency handling
[2/183] Generating include/generated/version.h
-- Zephyr version: 3.2.99 (/home/mark/zephyrproject/zephyr), build: zephyr-v3.2.0-1869-g9ebb9abab7b0
[168/183] Linking C executable zephyr/zephyr_pre0.elf
[172/183] Linking C executable zephyr/zephyr_pre1.elf
[182/183] Linking C executable zephyr/zephyr.elf
Memory region Used Size Region Size %age Used
RAM: 184352 B 3 MB 5.86%
IDT_LIST: 0 GB 2 KB 0.00%
[182/183] To exit from QEMU enter: 'CTRL+a, x'[QEMU] CPU: qemu32,+nx,+pae
SeaBIOS (version zephyr-v1.0.0-0-g31d4e0e-dirty-20200714_234759-fv-az50-zephyr)
iPXE (http://ipxe.org) 00:02.0 CA00 PCI2.10 PnP PMM+00392120+002F2120 CA00
Booting from ROM..
*** Booting Zephyr OS build zephyr-v3.2.0-1869-g9ebb9abab7b0 ***
[00:00:00.000,000] <inf> net_dhcpv4_client_sample: Run dhcpv4 client
uart:~$ qemu-system-i386: Slirp: Failed to send packet, ret: -1
qemu-system-i386: Slirp: Failed to send packet, ret: -1
[00:00:07.010,000] <inf> net_dhcpv4: Received: 10.0.2.15
[00:00:07.010,000] <inf> net_dhcpv4_client_sample: Your address: 10.0.2.15
[00:00:07.010,000] <inf> net_dhcpv4_client_sample: Lease time: 86400 seconds
[00:00:07.010,000] <inf> net_dhcpv4_client_sample: Subnet: 255.255.255.0
[00:00:07.010,000] <inf> net_dhcpv4_client_sample: Router: 10.0.2.2
uart:~$ qemu-system-i386: Slirp: Failed to send packet, ret: -1
```
**Environment (please complete the following information):**
- OS: Ubuntu 22.04
- Toolchain Zephyr SDK,
- Version: 9ebb9abab7b0d662e0d5379367fb7f052959c545
**Additional context**
qemu_x86.conf:
```
CONFIG_FPU=y
CONFIG_PCIE=y
CONFIG_NET_L2_ETHERNET=y
CONFIG_NET_QEMU_ETHERNET=y
CONFIG_NET_QEMU_USER=y
CONFIG_NET_CONFIG_PEER_IPV4_ADDR="10.0.2.2"
```
This config advises `ninja run` to use the QEMU-internal virtual network instead of the more complicated tap0 network (with ifconfig, dnsmasq and/or dhcpd) which has to be set up manually and is more error prone than the virtual QEMU-internal network.
|
non_process
|
annoying slirp message console output from qemu board target describe the bug the easiest way to use networking with qemu is in user mode with slirp here qemu provides the tcp udp ip stack zephyr works just like a charme however the qemu version zephyr sdk ships litters the console output with qemu system slirp failed to send packet ret messages as far as i could trace this output back it does not originate from zephyr rtos but from qemu itself and is false alarm qemu was able to send the packet i can see the response from the server to reproduce steps to reproduce the behavior cd samples net client mkdir boards save file qemu conf here see below mkdir build cd build cmake gninja dboard qemu ninja run see console output expected behavior qemu shall remain silent there is no error the ip packet gets transmitted by the qemu user mode just fine impact each ip packet results in one qemu system slirp failed to send packet ret line on the console output debugging output of udp applications is annoying while debugging ip packet heavy tcp traffic from multiple connections is tedious logs and console output ninja run preparing syscall dependency handling generating include generated version h zephyr version home mark zephyrproject zephyr build zephyr linking c executable zephyr zephyr elf linking c executable zephyr zephyr elf linking c executable zephyr zephyr elf memory region used size region size age used ram b mb idt list gb kb to exit from qemu enter ctrl a x cpu nx pae seabios version zephyr dirty fv zephyr ipxe pnp pmm booting from rom booting zephyr os build zephyr net client sample run client uart qemu system slirp failed to send packet ret qemu system slirp failed to send packet ret net received net client sample your address net client sample lease time seconds net client sample subnet net client sample router uart qemu system slirp failed to send packet ret environment please complete the following information os ubuntu toolchain zephyr sdk version additional 
context qemu conf config fpu y config pcie y config net ethernet y config net qemu ethernet y config net qemu user y config net config peer addr this config advises ninja run to use the qemu internal virtual network instead of the more complicated network with ifconfig dnsmasq and or dhcpd which has to be set up manually and is more error prone than the virtual qemu internal network
| 0
|
10,550
| 13,338,592,475
|
IssuesEvent
|
2020-08-28 11:20:12
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Move full `blog-env-postgresql` to e2e tests
|
kind/improvement process/candidate size/XS team/typescript
|
In our integration test suite, we have a test called `blog-env-postgresql`.
That test goes over the timeouts we have built-in and therefore takes a long time to succeed (about 30s).
We should move this full test instead into the e2e tests, which run async from the normal publish process, so we can iterate faster.
It's still valuable to have a minimal postgres test in the integration tests, so this test can be shortened.
|
1.0
|
Move full `blog-env-postgresql` to e2e tests - In our integration test suite, we have a test called `blog-env-postgresql`.
That test goes over the timeouts we have built-in and therefore takes a long time to succeed (about 30s).
We should move this full test instead into the e2e tests, which run async from the normal publish process, so we can iterate faster.
It's still valuable to have a minimal postgres test in the integration tests, so this test can be shortened.
|
process
|
move full blog env postgresql to tests in our integration test suite we have a test called blog env postgresql that test goes over the timeouts we have built in and therefore takes a long time to succeed about we should move this full test instead into the tests which run async from the normal publish process so we can iterate faster it s still valuable to have a minimal postgres test in the integration tests so this test can be shortened
| 1
|
15,441
| 19,656,243,773
|
IssuesEvent
|
2022-01-10 12:50:06
|
asam-ev/asam-project-guide
|
https://api.github.com/repos/asam-ev/asam-project-guide
|
opened
|
Add template for rules for Editorial Guide
|
processes
|
- [ ] Add template for rules to https://github.com/asam-ev/asam-project-guide/tree/main/doc/modules/compendium/examples.
- [ ] Add process to use template to https://github.com/asam-ev/asam-project-guide/blob/main/doc/modules/compendium/pages/writing_guidelines/documentation_processes.adoc.
|
1.0
|
Add template for rules for Editorial Guide - - [ ] Add template for rules to https://github.com/asam-ev/asam-project-guide/tree/main/doc/modules/compendium/examples.
- [ ] Add process to use template to https://github.com/asam-ev/asam-project-guide/blob/main/doc/modules/compendium/pages/writing_guidelines/documentation_processes.adoc.
|
process
|
add template for rules for editorial guide add template for rules to add process to use template to
| 1
|
8
| 2,496,223,929
|
IssuesEvent
|
2015-01-06 17:56:26
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
opened
|
[Proposal] OrderMap for Gremlin3
|
enhancement process
|
In Gremlin2 we have `OrderMapStep` which orders the maps flowing through the step. However, what about if you had lists flowing? `OrderListStep`, `OrderStringStep`, `OrderArrayStep`, etc.
Here is a generalization:
```groovy
g.V().out().groupCount().by('age').cap().orderObject().by(incr)
```
What sucks is that we have `order()` for stream ordering and `orderObject()` for per object ordering. This is very similar to the problem with `local()`. Sometimes you want the process local to the object. As such, do we simply promote this:
```groovy
g.V().out().groupCount().by('age').cap().map{it.get().sort()}
```
But that looks lame...and uses Groovy `sort()` where we want something general to Java and Groovy.
```groovy
g.V().out().groupCount().by('age').cap().local(Order.incr)
```
Where `LocalStep` takes not just a `Traversal` but also a `Function<Traverser,Traverser>` where `Order.incr` is a Function.................
Grasping for straws here....ideas?
@mbroecheler @dkuppitz @spmallette @bryncooke
|
1.0
|
[Proposal] OrderMap for Gremlin3 - In Gremlin2 we have `OrderMapStep` which orders the maps flowing through the step. However, what about if you had lists flowing? `OrderListStep`, `OrderStringStep`, `OrderArrayStep`, etc.
Here is a generalization:
```groovy
g.V().out().groupCount().by('age').cap().orderObject().by(incr)
```
What sucks is that we have `order()` for stream ordering and `orderObject()` for per object ordering. This is very similar to the problem with `local()`. Sometimes you want the process local to the object. As such, do we simply promote this:
```groovy
g.V().out().groupCount().by('age').cap().map{it.get().sort()}
```
But that looks lame...and uses Groovy `sort()` where we want something general to Java and Groovy.
```groovy
g.V().out().groupCount().by('age').cap().local(Order.incr)
```
Where `LocalStep` takes not just a `Traversal` but also a `Function<Traverser,Traverser>` where `Order.incr` is a Function.................
Grasping for straws here....ideas?
@mbroecheler @dkuppitz @spmallette @bryncooke
|
process
|
ordermap for in we have ordermapstep which orders the maps flowing through the step however what about if you had lists flowing orderliststep orderstringstep orderarraystep etc here is a generalization groovy g v out groupcount by age cap orderobject by incr what sucks is that we have order for stream ordering and orderobject for per object ordering this is very similar to the problem with local sometimes you want the process local to the object as such do we simply promote this groovy g v out groupcount by age cap map it get sort but that looks lame and uses groovy sort where we want something general to java and groovy groovy g v out groupcount by age cap local order incr where localstep takes not just a traversal but also a function where order incr is a function grasping for straws here ideas mbroecheler dkuppitz spmallette bryncooke
| 1
|
70,393
| 15,085,562,400
|
IssuesEvent
|
2021-02-05 18:52:32
|
mthbernardes/shaggy-rogers
|
https://api.github.com/repos/mthbernardes/shaggy-rogers
|
reopened
|
CVE-2019-16942 (High) detected in jackson-databind-2.9.6.jar
|
security vulnerability
|
## CVE-2019-16942 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: shaggy-rogers/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- pantomime-2.11.0.jar (Root Library)
- tika-parsers-1.19.1.jar
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mthbernardes/shaggy-rogers/commit/f72a5cb259e01c0ac208ba3a95eee5232c30fe6c">f72a5cb259e01c0ac208ba3a95eee5232c30fe6c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942>CVE-2019-16942</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.novemberain:pantomime:2.11.0;org.apache.tika:tika-parsers:1.19.1;com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-16942","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-16942 (High) detected in jackson-databind-2.9.6.jar - ## CVE-2019-16942 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: shaggy-rogers/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- pantomime-2.11.0.jar (Root Library)
- tika-parsers-1.19.1.jar
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mthbernardes/shaggy-rogers/commit/f72a5cb259e01c0ac208ba3a95eee5232c30fe6c">f72a5cb259e01c0ac208ba3a95eee5232c30fe6c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942>CVE-2019-16942</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.novemberain:pantomime:2.11.0;org.apache.tika:tika-parsers:1.19.1;com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-16942","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file shaggy rogers pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy pantomime jar root library tika parsers jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the commons dbcp jar in the classpath and an attacker can find an rmi service endpoint to access it is possible to make the service execute a malicious payload this issue exists because of org apache commons dbcp datasources sharedpooldatasource and org apache commons dbcp datasources peruserpooldatasource mishandling publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com novemberain pantomime org apache tika tika parsers com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind 
through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the commons dbcp jar in the classpath and an attacker can find an rmi service endpoint to access it is possible to make the service execute a malicious payload this issue exists because of org apache commons dbcp datasources sharedpooldatasource and org apache commons dbcp datasources peruserpooldatasource mishandling vulnerabilityurl
| 0
|
4,322
| 7,227,831,261
|
IssuesEvent
|
2018-02-11 01:21:54
|
technovus-sfu/rembot
|
https://api.github.com/repos/technovus-sfu/rembot
|
closed
|
Generate rcode array from image array
|
processing
|
### One proposed method
- All pixels to be drawn are white(255) in the code
- When there is a change in values (0 -> 255 i.e. black to white)
- read the first white pixel then until the next change (255 -> 0, i.e white to black )
- read the last white pixel
- The code generated could then be `R01 x5 X100 y0 Y0`
```python
# Command list to draw two lines across image of width 100,
# one from 5 to the end the next from 100 to the middle of 5.
# Assuming with margins the offset is 20x20
x_offset = 20
y_offset = 20
cmd = [ "R01 x(5+x_offset) X(100+x_offset) y(0+y_offset) Y(0+y_offset)",
"R01 x(100+x_offset) X(5+x_offset) y(1+y_offset) Y(1+y_offset)"]
```
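The scan described above can be sketched end to end. This is a minimal sketch under the stated assumptions (white = 255 pixels are drawn, run-length scanning per row, the `R01 ...` command format and 20x20 offsets from the example); the function name is illustrative:

```python
def rows_to_commands(image, x_offset=20, y_offset=20, white=255):
    """Scan each row for runs of white pixels and emit one R01 command per run."""
    cmds = []
    for y, row in enumerate(image):
        x = 0
        while x < len(row):
            if row[x] == white:                 # change 0 -> 255: first white pixel
                start = x
                while x + 1 < len(row) and row[x + 1] == white:
                    x += 1                      # extend until the 255 -> 0 change
                # last white pixel of the run is at x
                cmds.append(f"R01 x{start + x_offset} X{x + x_offset} "
                            f"y{y + y_offset} Y{y + y_offset}")
            x += 1
    return cmds
```

For a single row `[0, 255, 255, 0]` with the default offsets this yields one command covering columns 1..2, shifted by the margin.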
|
1.0
|
Generate rcode array from image array - ### One proposed method
- All pixels to be drawn are white(255) in the code
- When there is a change in values (0 -> 255 i.e. black to white)
- read the first white pixel then until the next change (255 -> 0, i.e white to black )
- read the last white pixel
- The code generated could then be `R01 x5 X100 y0 Y0`
```python
# Command list to draw two lines across image of width 100,
# one from 5 to the end the next from 100 to the middle of 5.
# Assuming with margins the offset is 20x20
x_offset = 20
y_offset = 20
cmd = [ "R01 x(5+x_offset) X(100+x_offset) y(0+y_offset) Y(0+y_offset)",
"R01 x(100+x_offset) X(5+x_offset) y(1+y_offset) Y(1+y_offset)"]
```
|
process
|
generate rcode array from image array one proposed method all pixels to be drawn are white in the code when there is a change in values i e black to white read the first white pixel then until the next change i e white to black read the last white pixel the code generated could then be python command list to draw two lines across image of width one from to the end the next from to the middle of assuming with margins the offset is x offset y offset cmd x x offset x x offset y y offset y y offset x x offset x x offset y y offset y y offset
| 1
|
26
| 2,497,007,735
|
IssuesEvent
|
2015-01-07 00:11:40
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
closed
|
New thoughts on looping constructs in Gremlin3.
|
enhancement process
|
Now that we have "mastered" internal traversals and we know how to compile them into a linear form for OLAP execution, we can re-think the looping construct if we want.
Below are two examples demonstrating a proposal for "while/do" and "do/while", respectively. Moreover, for a more "graph feel", we can call `until` `loop`.
```java
// in Gremlin-Java
g.V().until(t -> t.get().value('name').equals('marko'), g.of().out())
g.V().until(g.of().out(), t -> t.get().value('name').equals('marko'))
```
```groovy
// in Gremlin-Groovy with Sugar
g.V.until({it.name == 'marko'}, g.of().out)
g.V.until(g.of().out){it.name == 'marko'}
// with the emit predicate
g.V.until({it.name == 'marko'}, g.of().out){true}
g.V.until(g.of().out){it.name == 'marko'}{true}
```
/////////////
A few extra thoughts:
* Rules of sideEffect scoping. If internal traversals are always "linearized" for OLAP, then we may want to make the sideEffect scope be the global scope. That is, no nested sideEffect structures.
* What I like about this model is that there is no need for `as()` and the section of what is being looped is being made apparent by `(`. This is what I like about `local()` as well. `( )` is the delimiter of the construct.
* We still need to keep `jump()` as the `goto` construct necessary for arbitrary jumping, but it would be used primarily by the strategies (not by users). In fact, we could probably remove `GraphTraversal.jump()` and there would simply be a `JumpStep` available for usage by strategies. In fact, we could rename `JumpStep` to `GoToStep`. Though, to keep with the graph theme, "jump" may be more appropriate -- analogous to "loop" vs. "until".
@dkuppitz @mbroecheler @BrynCooke @joshsh
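The while/do versus do/while distinction proposed above can be sketched outside Gremlin as plain functions (illustrative only, not the TinkerPop API): the first tests the predicate before each application of the step traversal, the second applies the step at least once before testing.

```python
def until_do(pred, step, start):
    # while/do: test before applying -> the step may run zero times
    x = start
    while not pred(x):
        x = step(x)
    return x

def do_until(step, pred, start):
    # do/while: apply first, then test -> the step runs at least once
    x = step(start)
    while not pred(x):
        x = step(x)
    return x
```

With a start value that already satisfies the predicate, `until_do` returns it unchanged while `do_until` still applies the step once, which is exactly the difference between the two argument orders of `until(...)` above.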
|
1.0
|
New thoughts on looping constructs in Gremlin3. - Now that we have "mastered" internal traversals and we know how to compile them into a linear form for OLAP execution, we can re-think the looping construct if we want.
Below are two examples demonstrating a proposal for "while/do" and "do/while", respectively. Moreover, for a more "graph feel", we can call `until` `loop`.
```java
// in Gremlin-Java
g.V().until(t -> t.get().value('name').equals('marko'), g.of().out())
g.V().until(g.of().out(), t -> t.get().value('name').equals('marko'))
```
```groovy
// in Gremlin-Groovy with Sugar
g.V.until({it.name == 'marko'}, g.of().out)
g.V.until(g.of().out){it.name == 'marko'}
// with the emit predicate
g.V.until({it.name == 'marko'}, g.of().out){true}
g.V.until(g.of().out){it.name == 'marko'}{true}
```
/////////////
A few extra thoughts:
* Rules of sideEffect scoping. If internal traversals are always "linearized" for OLAP, then we may want to make the sideEffect scope be the global scope. That is, no nested sideEffect structures.
* What I like about this model is that there is no need for `as()` and the section of what is being looped is being made apparent by `(`. This is what I like about `local()` as well. `( )` is the delimiter of the construct.
* We we still need to keep `jump()` as the `goto` construct necessary for arbitrary jumping, but it would be used primarily by the strategies (not by users). In fact, we could probably remove `GraphTraversal.jump()` and there would simply be a `JumpStep` available for usage by strategies. In fact, we could renamed `JumpStep` to `GoToStep`. Though, to keep with the graph-theme, "jump" may be more appropriate -- analogous to "loop" vs. "until".
@dkuppitz @mbroecheler @BrynCooke @joshsh
|
process
|
new thoughts on looping constructs in now that we have mastered internal traversals and we know how to compile them into a linear form for olap execution we can re think the looping construct if we want below are two examples demonstrating a proposal for while do and do while respectively moreover for a more graph feel we can call until loop java in gremlin java g v until t t get value name equals marko g of out g v until g of out t t get value name equals marko groovy in gremlin groovy with sugar g v until it name marko g of out g v until g of out it name marko with the emit predicate g v until it name marko g of out true g v until g of out it name marko true a few extra thoughts rules of sideeffect scoping if internal traversals are always linearized for olap then we may want to make the sideeffect scope be the global scope that is no nested sideeffect structures what i like about this model is that there is no need for as and the section of what is being looped is being made apparent by this is what i like about local as well is the delimiter of the construct we we still need to keep jump as the goto construct necessary for arbitrary jumping but it would be used primarily by the strategies not by users in fact we could probably remove graphtraversal jump and there would simply be a jumpstep available for usage by strategies in fact we could renamed jumpstep to gotostep though to keep with the graph theme jump may be more appropriate analogous to loop vs until dkuppitz mbroecheler bryncooke joshsh
| 1
|
36,885
| 2,813,344,479
|
IssuesEvent
|
2015-05-18 14:21:14
|
georgjaehnig/serchilo-drupal
|
https://api.github.com/repos/georgjaehnig/serchilo-drupal
|
closed
|
Shortcut edit, fieldgroups: Change jQuery styles to Bootstrap styles
|
Priority: could Social: help wanted Type: enhancement
|
In the Shortcut edit form, the CSS of the fieldgroups should look like Bootstrap (and not like jQuery).
|
1.0
|
Shortcut edit, fieldgroups: Change jQuery styles to Bootstrap styles - In the Shortcut edit form, the CSS of the fieldgroups should look like Bootstrap (and not like jQuery).
|
non_process
|
shortcut edit fieldgroups change jquery styles to bootstrap styles in the shortcut edit form the css of the fieldgroups should look like bootstrap and not like jquery
| 0
|
8,291
| 11,457,195,270
|
IssuesEvent
|
2020-02-06 23:02:07
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Oriented Minimum Bounding Box strange behavior
|
Bug Processing
|
<!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ x ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ x ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ x ] Create a light and self-contained sample dataset and project file which demonstrates the issue
If the issue concerns a **third party plugin**, then it **cannot** be fixed by the QGIS team. Please raise your issue in the dedicated bug tracker for that specific plugin (as listed in the plugin's description). -->
**Describe the bug**
By running the "Oriented Minimum Bounding Box" algorithm from the Processing Toolbox, on a polygons vector layer, in some (seemingly random) cases, the result is not as expected (it is visibly appreciated that an oriented bounding box of smaller area could be generated).
By creating a very small decimal radius buffer to the geometries of the problematic features, running the Oriented Minimum Bounding Box algorithm over the resulting geometry, gives the expected result.
**How to Reproduce**
1. Create a feature with the following geometry:
`Polygon ((264 -525, 248 -521, 244 -519, 233 -508, 231 -504, 210 -445, 196 -396, 180 -332, 178 -322, 176 -310, 174 -296, 174 -261, 176 -257, 178 -255, 183 -251, 193 -245, 197 -243, 413 -176, 439 -168, 447 -166, 465 -164, 548 -164, 552 -166, 561 -175, 567 -187, 602 -304, 618 -379, 618 -400, 616 -406, 612 -414, 606 -420, 587 -430, 575 -436, 547 -446, 451 -474, 437 -478, 321 -511, 283 -521, 275 -523, 266 -525, 264 -525))`
(Tested in a layer defined in CRS = EPSG:3857, with no projection applied to the project canvas. Although I think it doesn't matter.)
2. Run the Oriented Minimum Bounding Box algorithm to that feature.
The output box attributes are: _width = 361.00_, _height = 444.00_, _angle = 90.00_.
3. Create a Buffer of _Radius = 0.001_ from the original feature.
4. Run the Oriented Minimum Bounding Box algorithm to the buffered geometry feature.
The output box attributes are: _width = 417.63_, _height = 298.02_, _angle = 162.68_.
The area of the box returned by the buffered geometry feature is smaller than that returned by the original one.
**QGIS and OS versions**
QGIS version 3.10.1-A Coruña
Windows 10, 64bit (OSGeoo4W installation).
|
1.0
|
Oriented Minimum Bounding Box strange behavior - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ x ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ x ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ x ] Create a light and self-contained sample dataset and project file which demonstrates the issue
If the issue concerns a **third party plugin**, then it **cannot** be fixed by the QGIS team. Please raise your issue in the dedicated bug tracker for that specific plugin (as listed in the plugin's description). -->
**Describe the bug**
By running the "Oriented Minimum Bounding Box" algorithm from the Processing Toolbox, on a polygons vector layer, in some (seemingly random) cases, the result is not as expected (it is visibly appreciated that an oriented bounding box of smaller area could be generated).
By creating a very small decimal radius buffer to the geometries of the problematic features, running the Oriented Minimum Bounding Box algorithm over the resulting geometry, gives the expected result.
**How to Reproduce**
1. Create a feature with the following geometry:
`Polygon ((264 -525, 248 -521, 244 -519, 233 -508, 231 -504, 210 -445, 196 -396, 180 -332, 178 -322, 176 -310, 174 -296, 174 -261, 176 -257, 178 -255, 183 -251, 193 -245, 197 -243, 413 -176, 439 -168, 447 -166, 465 -164, 548 -164, 552 -166, 561 -175, 567 -187, 602 -304, 618 -379, 618 -400, 616 -406, 612 -414, 606 -420, 587 -430, 575 -436, 547 -446, 451 -474, 437 -478, 321 -511, 283 -521, 275 -523, 266 -525, 264 -525))`
(Tested in a layer defined in CRS = EPSG:3857, with no projection applied to the project canvas. Although I think it doesn't matter.)
2. Run the Oriented Minimum Bounding Box algorithm to that feature.
The output box attributes are: _width = 361.00_, _height = 444.00_, _angle = 90.00_.
3. Create a Buffer of _Radius = 0.001_ from the original feature.
4. Run the Oriented Minimum Bounding Box algorithm to the buffered geometry feature.
The output box attributes are: _width = 417.63_, _height = 298.02_, _angle = 162.68_.
The area of the box returned by the buffered geometry feature is smaller than that returned by the original one.
**QGIS and OS versions**
QGIS version 3.10.1-A Coruña
Windows 10, 64bit (OSGeoo4W installation).
|
process
|
oriented minimum bounding box strange behavior bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue if the issue concerns a third party plugin then it cannot be fixed by the qgis team please raise your issue in the dedicated bug tracker for that specific plugin as listed in the plugin s description describe the bug by running the oriented minimum bounding box algorithm from the processing toolbox on a polygons vector layer in some seemingly random cases the result is not as expected it is visibly appreciated that an oriented bounding box of smaller area could be generated by creating a very small decimal radius buffer to the geometries of the problematic features running the oriented minimum bounding box algorithm over the resulting geometry gives the expected result how to reproduce create a feature with the following geometry polygon tested in a layer defined in crs epsg with no projection applied to the project canvas although i think it doesn t matter run the oriented minimum bounding box algorithm to that feature the output box attributes are width height angle create a buffer of radius from the original feature run the oriented minimum bounding box algorithm to the buffered geometry feature the output box attributes are width height angle the area of the box returned by the buffered geometry feature is smaller than that returned by the original one qgis and os versions qgis version a coruña windows installation
| 1
|
561,278
| 16,614,728,725
|
IssuesEvent
|
2021-06-02 15:21:31
|
wp-media/wp-rocket
|
https://api.github.com/repos/wp-media/wp-rocket
|
closed
|
Delay JS - Add Smush LazyLoad script to default exclusion list
|
3rd party compatibility module: delay JS priority: high type: enhancement
|
**Is your feature request related to a problem? Please describe.**
Smush has an option to Lazyload images. When our Delay JS is on, images aren't loaded until user interaction.
**Describe the solution you'd like**
Automatically add Smush LazyLoad JS file to our default exclusion list.
Pattern to add: `/assets/js/smush-lazy-load.min.js`
This pattern works for free and paid versions.
|
1.0
|
Delay JS - Add Smush LazyLoad script to default exclusion list - **Is your feature request related to a problem? Please describe.**
Smush has an option to Lazyload images. When our Delay JS is on, images aren't loaded until user interaction.
**Describe the solution you'd like**
Automatically add Smush LazyLoad JS file to our default exclusion list.
Pattern to add: `/assets/js/smush-lazy-load.min.js`
This pattern works for free and paid versions.
|
non_process
|
delay js add smush lazyload script to default exclusion list is your feature request related to a problem please describe smush has an option to lazyload images when our delay js is on images aren t loaded until user interaction describe the solution you d like automatically add smush lazyload js file to our default exclusion list pattern to add assets js smush lazy load min js this pattern works for free and paid versions
| 0
|
7,205
| 10,342,009,060
|
IssuesEvent
|
2019-09-04 04:45:22
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
Whitelisting a set known PC vendors sites?
|
question whitelisting process
|
I was wondering, can we define a specific list of vendors sites which can be whitelisted(like ubuntu)?
```
ALL .debian.org
ALL .redhat.com
ALL .freebsd.org
ALL .openbsd.org
ALL .kernel.org
ALL .fortinet.com
ALL .juniper.net
ALL .checkpoint.com
ALL .paloaltonetworks.com
ALL .sophos.com
ALL .sonicwall.com
ALL .vyos.io
ALL .vyos.net
ALL .pfsense.org
ALL .m0n0.ch
ALL .netbsd.org
ALL .bsdrp.net
ALL .opnsense.org
ALL .ipfire.org
ALL .zeroshell.org
ALL .clearos.com
```
And there are others, but I think it's a good thing to whitelist **if** these are known to not be hostile.
|
1.0
|
Whitelisting a set known PC vendors sites? - I was wondering, can we define a specific list of vendors sites which can be whitelisted(like ubuntu)?
```
ALL .debian.org
ALL .redhat.com
ALL .freebsd.org
ALL .openbsd.org
ALL .kernel.org
ALL .fortinet.com
ALL .juniper.net
ALL .checkpoint.com
ALL .paloaltonetworks.com
ALL .sophos.com
ALL .sonicwall.com
ALL .vyos.io
ALL .vyos.net
ALL .pfsense.org
ALL .m0n0.ch
ALL .netbsd.org
ALL .bsdrp.net
ALL .opnsense.org
ALL .ipfire.org
ALL .zeroshell.org
ALL .clearos.com
```
And there are others, but I think it's a good thing to whitelist **if** these are known to not be hostile.
|
process
|
whitelisting a set known pc vendors sites i was wondering can we define a specific list of vendors sites which can be whitelisted like ubuntu all debian org all redhat com all freebsd org all openbsd org all kernel org all fortinet com all juniper net all checkpoint com all paloaltonetworks com all sophos com all sonicwall com all vyos io all vyos net all pfsense org all ch all netbsd org all bsdrp net all opnsense org all ipfire org all zeroshell org all clearos com and there are others but i think it s a good thing to whitelist if these are known to not be hostile
| 1
|
222,314
| 17,406,948,090
|
IssuesEvent
|
2021-08-03 07:25:57
|
theislab/scvelo
|
https://api.github.com/repos/theislab/scvelo
|
closed
|
Unit test `merge`
|
enhancement testing
|
<!-- What kind of feature would you like to request? -->
## Description
`scvelo/core/_anndata.py::merge` needs to be unit tested.
|
1.0
|
Unit test `merge` - <!-- What kind of feature would you like to request? -->
## Description
`scvelo/core/_anndata.py::merge` needs to be unit tested.
|
non_process
|
unit test merge description scvelo core anndata py merge needs to be unit tested
| 0
|
9,677
| 12,679,746,825
|
IssuesEvent
|
2020-06-19 12:25:45
|
GetTerminus/terminus-oss
|
https://api.github.com/repos/GetTerminus/terminus-oss
|
opened
|
Style: Convert component styles to use custom CSS properties (variables)
|
Goal: Process Improvement
|
We want to empower feature teams to work directly with product to experiment with style changes without requiring help from the library team.
#### Primary items to convert
- Animation
- Color
- Handle themes by changing variable values rather than generating full sets of classes for each theme.
- Spacing (margin/padding/width/height/etc)
- Typography
- Z-index
|
1.0
|
Style: Convert component styles to use custom CSS properties (variables) - We want to empower feature teams to work directly with product to experiment with style changes without requiring help from the library team.
#### Primary items to convert
- Animation
- Color
- Handle themes by changing variable values rather than generating full sets of classes for each theme.
- Spacing (margin/padding/width/height/etc)
- Typography
- Z-index
|
process
|
style convert component styles to use custom css properties variables we want to empower feature teams to work directly with product to experiment with style changes without requiring help from the library team primary items to convert animation color handle themes by changing variable values rather than generating full sets of classes for each theme spacing margin padding width height etc typography z index
| 1
|
2,001
| 4,819,176,973
|
IssuesEvent
|
2016-11-04 18:27:56
|
material-motion/material-motion-family-direct-manipulation-swift
|
https://api.github.com/repos/material-motion/material-motion-family-direct-manipulation-swift
|
closed
|
Cut the v1.0.0 release
|
Process
|
This must be run by a @material-motion/core-team member.
`mdm release cut`
|
1.0
|
Cut the v1.0.0 release - This must be run by a @material-motion/core-team member.
`mdm release cut`
|
process
|
cut the release this must be run by a material motion core team member mdm release cut
| 1
|
16,013
| 20,188,225,176
|
IssuesEvent
|
2022-02-11 01:19:33
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Integrate network logs into a Security Information and Event Management (SIEM)
|
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Security & Compliance Network Security
|
<a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-security-operations#leverage-native-detections-and-controls">Integrate network logs into a Security Information and Event Management (SIEM)</a>
<p><b>Why Consider This?</b></p>
Integrating logs from the network devices, and even raw network traffic itself, will provide greater visibility into potential security threats flowing over the wire.
<p><b>Context</b></p>
<p><span>The modern machine learning based analytics platforms support ingestion of extremely large amounts of information and can analyze large datasets very quickly. In addition, these solutions can be tuned to significantly reduce false positive alerts.</span></p><p><span>Examples of network logs that provide visibility include:</span></p><ul style="list-style-type:disc"><li value="1" style="text-indent: 0px;"><span>Security group logs - flow logs and diagnostic logs</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span>Web application firewall logs</span></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><span>Virtual network taps and their equivalents</span></li><li value="4" style="margin-right: 0px;text-indent: 0px;"><span>Azure Network Watcher</span></li></ul>
<p><b>Suggested Actions</b></p>
<p><span>Integrate network device log information in advanced SIEM solutions or other analytics platforms.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/network-security-containment#enable-enhanced-network-visibility" target="_blank"><span>Enable enhanced network visibility</span></a><span /></p>
|
1.0
|
Integrate network logs into a Security Information and Event Management (SIEM) - <a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-security-operations#leverage-native-detections-and-controls">Integrate network logs into a Security Information and Event Management (SIEM)</a>
<p><b>Why Consider This?</b></p>
Integrating logs from the network devices, and even raw network traffic itself, will provide greater visibility into potential security threats flowing over the wire.
<p><b>Context</b></p>
<p><span>The modern machine learning based analytics platforms support ingestion of extremely large amounts of information and can analyze large datasets very quickly. In addition, these solutions can be tuned to significantly reduce false positive alerts.</span></p><p><span>Examples of network logs that provide visibility include:</span></p><ul style="list-style-type:disc"><li value="1" style="text-indent: 0px;"><span>Security group logs - flow logs and diagnostic logs</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span>Web application firewall logs</span></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><span>Virtual network taps and their equivalents</span></li><li value="4" style="margin-right: 0px;text-indent: 0px;"><span>Azure Network Watcher</span></li></ul>
<p><b>Suggested Actions</b></p>
<p><span>Integrate network device log information in advanced SIEM solutions or other analytics platforms.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/network-security-containment#enable-enhanced-network-visibility" target="_blank"><span>Enable enhanced network visibility</span></a><span /></p>
|
process
|
integrate network logs into a security information and event management siem why consider this integrating logs from the network devices and even raw network traffic itself will provide greater visibility into potential security threats flowing over the wire context the modern machine learning based analytics platforms support ingestion of extremely large amounts of information and can analyze large datasets very quickly in addition these solutions can be tuned to significantly reduce false positive alerts examples of network logs that provide visibility include security group logs flow logs and diagnostic logs web application firewall logs virtual network taps and their equivalents azure network watcher suggested actions integrate network device log information in advanced siem solutions or other analytics platforms learn more enable enhanced network visibility
| 1
|
394,534
| 11,644,889,514
|
IssuesEvent
|
2020-02-29 21:20:25
|
momentum-mod/game
|
https://api.github.com/repos/momentum-mod/game
|
closed
|
Remove mumble integration
|
Priority: Low Size: Small Type: Development / Internal
|
Found in `client/mumble.cpp/h`, this class is in charge of providing support for positional audio via Mumble. Mumble has a special place in my heart, but this code not so much. It can be removed.
|
1.0
|
Remove mumble integration - Found in `client/mumble.cpp/h`, this class is in charge of providing support for positional audio via Mumble. Mumble has a special place in my heart, but this code not so much. It can be removed.
|
non_process
|
remove mumble integration found in client mumble cpp h this class is in charge of providing support for positional audio via mumble mumble has a special place in my heart but this code not so much it can be removed
| 0
|
13,212
| 15,683,421,153
|
IssuesEvent
|
2021-03-25 08:45:33
|
ropensci/software-review-meta
|
https://api.github.com/repos/ropensci/software-review-meta
|
closed
|
Build a new template for submission
|
automation process
|
This will work for:
- A web form
- Submission via YML
- Submission via R package (which will also parse YML if present).
Edit template here till we are happy with a version to work with:
https://docs.google.com/document/d/1ej99Ku1qzQrH49sRM1t5wqeqa3cSENV9strEkVG-VQM/edit?usp=sharing
|
1.0
|
Build a new template for submission - This will work for:
- A web form
- Submission via YML
- Submission via R package (which will also parse YML if present).
Edit template here till we are happy with a version to work with:
https://docs.google.com/document/d/1ej99Ku1qzQrH49sRM1t5wqeqa3cSENV9strEkVG-VQM/edit?usp=sharing
|
process
|
build a new template for submission this will work for a web form submission via yml submission via r package which will also parse yml if present edit template here till we are happy with a version to work with
| 1
|
7,084
| 10,232,033,525
|
IssuesEvent
|
2019-08-18 14:28:55
|
threefoldtech/jumpscaleX
|
https://api.github.com/repos/threefoldtech/jumpscaleX
|
closed
|
zerostor client [development_zstor]
|
process_duplicate type_feature
|
Implement client for zerostor that:
- add/get files
- start/install the client for zerostor
|
1.0
|
zerostor client [development_zstor] - Implement client for zerostor that:
- add/get files
- start/install the client for zerostor
|
process
|
zerostor client implement client for zerostor that add get files start install the client for zerostor
| 1
|
331,433
| 10,073,350,249
|
IssuesEvent
|
2019-07-24 09:25:29
|
GovReady/govready-q
|
https://api.github.com/repos/GovReady/govready-q
|
opened
|
Looping through `output_documents` of invalid project `module-id` causes infinite loop
|
bug priority
|
Application appears to go into a run away, infinite loop and consumes 100% of available CPU cycles when attempting to iterate through the output documents of a non-existent project `module-id`. The python process must be terminated or continues to run.
Normally, the templates display a `invalid reference` error when encountering an non-existent reference (or typo of a reference name). But if the invalid `module-id` is used in the `for` portion of the jinja loop, a runaway process occurs.
This is a problem because a person could easily mis-type a `module-id` in creating a loop, or remove a reference to a previously existing `module-id` in loop.
The following fails and causes the infinite loop:
```
{% for od in project.invalid_module_id.output_documents %}
<div style="margin: 12px 0 0 0;">
{{project.invalid_module_id.output_documents[od]}}
</div>
<small style="color: #aaa;">{{od}}</small>
{% endfor %}
```
This also fails and causes the infinite loop:
```
{% for od in project.invalid_module_id.output_documents %}
<div style="margin: 12px 0 0 0;">
{{project.proper_module_id.output_documents[od]}}
</div>
<small style="color: #aaa;">{{od}}</small>
{% endfor %}
```
If a non-existent or otherwise invalid reference is used in side a loop of valid `module-id`, application works fine and outputs multiple `invalid reference`, as in the below:
```
{% for od in project.proper_module_id.output_documents %}
<div style="margin: 12px 0 0 0;">
{{project.invalid_module_id.output_documents[od]}}
</div>
<small style="color: #aaa;">{{od}}</small>
{% endfor %}
```
|
1.0
|
Looping through `output_documents` of invalid project `module-id` causes infinite loop - Application appears to go into a run away, infinite loop and consumes 100% of available CPU cycles when attempting to iterate through the output documents of a non-existent project `module-id`. The python process must be terminated or continues to run.
Normally, the templates display a `invalid reference` error when encountering an non-existent reference (or typo of a reference name). But if the invalid `module-id` is used in the `for` portion of the jinja loop, a runaway process occurs.
This is a problem because a person could easily mis-type a `module-id` in creating a loop, or remove a reference to a previously existing `module-id` in loop.
The following fails and causes the infinite loop:
```
{% for od in project.invalid_module_id.output_documents %}
<div style="margin: 12px 0 0 0;">
{{project.invalid_module_id.output_documents[od]}}
</div>
<small style="color: #aaa;">{{od}}</small>
{% endfor %}
```
This also fails and causes the infinite loop:
```
{% for od in project.invalid_module_id.output_documents %}
<div style="margin: 12px 0 0 0;">
{{project.proper_module_id.output_documents[od]}}
</div>
<small style="color: #aaa;">{{od}}</small>
{% endfor %}
```
If a non-existent or otherwise invalid reference is used in side a loop of valid `module-id`, application works fine and outputs multiple `invalid reference`, as in the below:
```
{% for od in project.proper_module_id.output_documents %}
<div style="margin: 12px 0 0 0;">
{{project.invalid_module_id.output_documents[od]}}
</div>
<small style="color: #aaa;">{{od}}</small>
{% endfor %}
```
|
non_process
|
looping through output documents of invalid project module id causes infinite loop application appears to go into a run away infinite loop and consumes of available cpu cycles when attempting to iterate through the output documents of a non existent project module id the python process must be terminated or continues to run normally the templates display a invalid reference error when encountering an non existent reference or typo of a reference name but if the invalid module id is used in the for portion of the jinja loop a runaway process occurs this is a problem because a person could easily mis type a module id in creating a loop or remove a reference to a previously existing module id in loop the following fails and causes the infinite loop for od in project invalid module id output documents project invalid module id output documents od endfor this also fails and causes the infinite loop for od in project invalid module id output documents project proper module id output documents od endfor if a non existent or otherwise invalid reference is used in side a loop of valid module id application works fine and outputs multiple invalid reference as in the below for od in project proper module id output documents project invalid module id output documents od endfor
| 0
|
16,893
| 22,195,351,868
|
IssuesEvent
|
2022-06-07 06:15:23
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
[Mirror] rules_go 0.33.0
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
https://github.com/bazelbuild/rules_go/releases/download/v0.33.0/rules_go-v0.33.0.zip
https://github.com/golang/sys/archive/bc2c85ada10aa9b6aa9607e9ac9ad0761b95cf1d.zip
https://github.com/golang/xerrors/archive/f3a8303e98df87cf4205e70f82c1c3c19f345f91.zip
https://github.com/protocolbuffers/protobuf-go/archive/refs/tags/v1.28.0.zip
https://github.com/golang/protobuf/archive/refs/tags/v1.5.2.zip
https://github.com/mwitkow/go-proto-validators/archive/refs/tags/v0.3.2.zip
https://github.com/gogo/protobuf/archive/refs/tags/v1.3.2.zip
https://github.com/googleapis/go-genproto/archive/e326c6e8e9c8d23afed6c564e1c6c7e7693d58d0.zip
https://github.com/googleapis/googleapis/archive/530ca55953b470ab3b37dc9de37fcfa59410b741.zip
https://github.com/golang/mock/archive/refs/tags/v1.6.0.zip
|
1.0
|
[Mirror] rules_go 0.33.0 - ### Please list the URLs of the archives you'd like to mirror:
https://github.com/bazelbuild/rules_go/releases/download/v0.33.0/rules_go-v0.33.0.zip
https://github.com/golang/sys/archive/bc2c85ada10aa9b6aa9607e9ac9ad0761b95cf1d.zip
https://github.com/golang/xerrors/archive/f3a8303e98df87cf4205e70f82c1c3c19f345f91.zip
https://github.com/protocolbuffers/protobuf-go/archive/refs/tags/v1.28.0.zip
https://github.com/golang/protobuf/archive/refs/tags/v1.5.2.zip
https://github.com/mwitkow/go-proto-validators/archive/refs/tags/v0.3.2.zip
https://github.com/gogo/protobuf/archive/refs/tags/v1.3.2.zip
https://github.com/googleapis/go-genproto/archive/e326c6e8e9c8d23afed6c564e1c6c7e7693d58d0.zip
https://github.com/googleapis/googleapis/archive/530ca55953b470ab3b37dc9de37fcfa59410b741.zip
https://github.com/golang/mock/archive/refs/tags/v1.6.0.zip
|
process
|
rules go please list the urls of the archives you d like to mirror
| 1
|
114,767
| 4,644,069,477
|
IssuesEvent
|
2016-09-30 15:18:14
|
TheScienceMuseum/collectionsonline
|
https://api.github.com/repos/TheScienceMuseum/collectionsonline
|
opened
|
Comma being included in materials href
|
bug priority-2
|
Related to https://github.com/TheScienceMuseum/collectionsonline/issues/401 and https://github.com/TheScienceMuseum/collectionsonline/issues/362
<img width="374" alt="screen shot 2016-09-30 at 16 15 32" src="https://cloud.githubusercontent.com/assets/91365/18996689/3320120c-8729-11e6-85b8-39c29af3a93b.png">
|
1.0
|
Comma being included in materials href - Related to https://github.com/TheScienceMuseum/collectionsonline/issues/401 and https://github.com/TheScienceMuseum/collectionsonline/issues/362
<img width="374" alt="screen shot 2016-09-30 at 16 15 32" src="https://cloud.githubusercontent.com/assets/91365/18996689/3320120c-8729-11e6-85b8-39c29af3a93b.png">
|
non_process
|
comma being included in materials href related to and img width alt screen shot at src
| 0
|
58,818
| 11,905,340,966
|
IssuesEvent
|
2020-03-30 18:23:48
|
home-assistant/brands
|
https://api.github.com/repos/home-assistant/brands
|
opened
|
Tahoma is missing brand images
|
domain-missing has-codeowner
|
## The problem
The Tahoma integration does not have brand images in
this repository.
We recently started this Brands repository, to create a centralized storage of all brand-related images. These images are used on our website and the Home Assistant frontend.
The following images are missing and would ideally be added:
- `src/tahoma/icon.png`
- `src/tahoma/logo.png`
- `src/tahoma/icon@2x.png`
- `src/tahoma/logo@2x.png`
For image specifications and requirements, please see [README.md](https://github.com/home-assistant/brands/blob/master/README.md).
## Updating the documentation repository
Our documentation repository already has a logo for this integration, however, it does not meet the image requirements of this new Brands repository.
If adding images to this repository, please open up a PR to the documentation repository as well, removing the `logo: tahoma.png` line from this file:
<https://github.com/home-assistant/home-assistant.io/blob/current/source/_integrations/tahoma.markdown>
**Note**: The documentation PR needs to be opened against the `current` branch.
**Note2**: Please leave the actual logo file in the documentation repository. It will be cleaned up differently.
## Additional information
For more information about this repository, read the [README.md](https://github.com/home-assistant/brands/blob/master/README.md) file of this repository. It contains information on how this repository works, and image specification and requirements.
## Codeowner mention
Hi there, @philklei! Mind taking a look at this issue as it is with an integration (tahoma) you are listed as a [codeowner](https://github.com/home-assistant/core/blob/dev/homeassistant/components/tahoma/manifest.json) for? Thanks!
Resolving this issue is not limited to codeowners! If you want to help us out, feel free to resolve this issue! Thanks already!
|
1.0
|
Tahoma is missing brand images -
## The problem
The Tahoma integration does not have brand images in
this repository.
We recently started this Brands repository, to create a centralized storage of all brand-related images. These images are used on our website and the Home Assistant frontend.
The following images are missing and would ideally be added:
- `src/tahoma/icon.png`
- `src/tahoma/logo.png`
- `src/tahoma/icon@2x.png`
- `src/tahoma/logo@2x.png`
For image specifications and requirements, please see [README.md](https://github.com/home-assistant/brands/blob/master/README.md).
## Updating the documentation repository
Our documentation repository already has a logo for this integration, however, it does not meet the image requirements of this new Brands repository.
If adding images to this repository, please open up a PR to the documentation repository as well, removing the `logo: tahoma.png` line from this file:
<https://github.com/home-assistant/home-assistant.io/blob/current/source/_integrations/tahoma.markdown>
**Note**: The documentation PR needs to be opened against the `current` branch.
**Note2**: Please leave the actual logo file in the documentation repository. It will be cleaned up differently.
## Additional information
For more information about this repository, read the [README.md](https://github.com/home-assistant/brands/blob/master/README.md) file of this repository. It contains information on how this repository works, and image specification and requirements.
## Codeowner mention
Hi there, @philklei! Mind taking a look at this issue as it is with an integration (tahoma) you are listed as a [codeowner](https://github.com/home-assistant/core/blob/dev/homeassistant/components/tahoma/manifest.json) for? Thanks!
Resolving this issue is not limited to codeowners! If you want to help us out, feel free to resolve this issue! Thanks already!
|
non_process
|
tahoma is missing brand images the problem the tahoma integration does not have brand images in this repository we recently started this brands repository to create a centralized storage of all brand related images these images are used on our website and the home assistant frontend the following images are missing and would ideally be added src tahoma icon png src tahoma logo png src tahoma icon png src tahoma logo png for image specifications and requirements please see updating the documentation repository our documentation repository already has a logo for this integration however it does not meet the image requirements of this new brands repository if adding images to this repository please open up a pr to the documentation repository as well removing the logo tahoma png line from this file note the documentation pr needs to be opened against the current branch please leave the actual logo file in the documentation repository it will be cleaned up differently additional information for more information about this repository read the file of this repository it contains information on how this repository works and image specification and requirements codeowner mention hi there philklei mind taking a look at this issue as it is with an integration tahoma you are listed as a for thanks resolving this issue is not limited to codeowners if you want to help us out feel free to resolve this issue thanks already
| 0
|
2,235
| 5,088,455,252
|
IssuesEvent
|
2016-12-31 20:34:44
|
EBrown8534/StackExchangeStatisticsExplorer
|
https://api.github.com/repos/EBrown8534/StackExchangeStatisticsExplorer
|
closed
|
Add chart timeframe selection on detail
|
enhancement in process
|
Add a selection for timeframes for the chart on the detail page.
|
1.0
|
Add chart timeframe selection on detail - Add a selection for timeframes for the chart on the detail page.
|
process
|
add chart timeframe selection on detail add a selection for timeframes for the chart on the detail page
| 1
|
58,019
| 7,112,777,166
|
IssuesEvent
|
2018-01-17 18:08:35
|
c2corg/v6_ui
|
https://api.github.com/repos/c2corg/v6_ui
|
closed
|
Snow conditions layout on mobile has been lost with new layout
|
Css Ergo / Design ready for testing
|
See PR #1936 for restoring the visual to what was done before.
|
1.0
|
Snow conditions layout on mobile has been lost with new layout - See PR #1936 for restoring the visual to what was done before.
|
non_process
|
snow conditions layout on mobile has been lost with new layout see pr for restoring the visual to what was done before
| 0
|
11,689
| 14,542,950,451
|
IssuesEvent
|
2020-12-15 16:18:09
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Agent.ComputerName or Agent.MachineName?
|
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
|
[Enter feedback here]
Is the variable (on the _capabilities_ screenshot) Agent.ComputerName? Or should it be Agent.MachineName as documented here: https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables
If you're going to change variable names or provide aliases, please mention this and document which is new, which is old, which is preferred.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e7541ee6-d2bb-84c0-fead-1aa8ee7d2372
* Version Independent ID: 5cf7c51e-37e1-6c67-e6c6-80262c4eb662
* Content: [Demands - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/demands.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/demands.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie**
|
1.0
|
Agent.ComputerName or Agent.MachineName? - [Enter feedback here]
Is the variable (on the _capabilities_ screenshot) Agent.ComputerName? Or should it be Agent.MachineName as documented here: https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables
If you're going to change variable names or provide aliases, please mention this and document which is new, which is old, which is preferred.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e7541ee6-d2bb-84c0-fead-1aa8ee7d2372
* Version Independent ID: 5cf7c51e-37e1-6c67-e6c6-80262c4eb662
* Content: [Demands - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/demands.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/demands.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie**
|
process
|
agent computername or agent machinename is the variable on the capabilities screenshot agent computername or should it be agent machinename as documented here if you re going to change variable names or provide aliases please mention this and document which is new which is old which is preferred document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id fead version independent id content content source product devops technology devops cicd process github login microsoft alias sdanie
| 1
|
58,759
| 3,091,114,982
|
IssuesEvent
|
2015-08-26 11:10:47
|
pombase/canto
|
https://api.github.com/repos/pombase/canto
|
closed
|
existing annotations are showing as new
|
low priority sourceforge
|
In this session
curs/4794f2ac68f1b5ba/feature/gene/view/1
all of the existing annotations show as new?
Original comment by: ValWood
|
1.0
|
existing annotations are showing as new -
In this session
curs/4794f2ac68f1b5ba/feature/gene/view/1
all of the existing annotations show as new?
Original comment by: ValWood
|
non_process
|
existing annotations are showing as new in this session curs feature gene view all of the existing annotations show as new original comment by valwood
| 0
|
41,604
| 5,377,157,594
|
IssuesEvent
|
2017-02-23 11:13:00
|
DanySpin97/PhpBotFramework
|
https://api.github.com/repos/DanySpin97/PhpBotFramework
|
closed
|
Add localization module test
|
Accepted Database Localization Testing
|
Set a language for a user, get it, than load localization files and get a string for the selected language using getStr() method.
|
1.0
|
Add localization module test - Set a language for a user, get it, than load localization files and get a string for the selected language using getStr() method.
|
non_process
|
add localization module test set a language for a user get it than load localization files and get a string for the selected language using getstr method
| 0
|
15,250
| 19,189,402,545
|
IssuesEvent
|
2021-12-05 18:55:14
|
MasterPlayer/adxl345-sv
|
https://api.github.com/repos/MasterPlayer/adxl345-sv
|
closed
|
Software API can perform calibration process
|
question software process
|
Calibration includes next steps :
1. Read current data N times
2. Calculate average for this times
3. Round them
4. Update OFSX, OFSY, OFSZ registers in device
Calibration completed
or not?
|
1.0
|
Software API can perform calibration process - Calibration includes next steps :
1. Read current data N times
2. Calculate average for this times
3. Round them
4. Update OFSX, OFSY, OFSZ registers in device
Calibration completed
or not?
|
process
|
software api can perform calibration process calibration includes next steps read current data n times calculate average for this times round them update ofsx ofsy ofsz registers in device calibration completed or not
| 1
|
285,640
| 21,527,561,594
|
IssuesEvent
|
2022-04-28 20:06:41
|
twosixlabs/armory
|
https://api.github.com/repos/twosixlabs/armory
|
closed
|
Add docs (and/or jupyter notebook?) for how to run Armory scenarios step-by-step
|
documentation
|
The release of Armory 0.14.0 refactored scenarios such a way that they can be run more interactively/step-by-step using the `next()` and `evaluate_current()` methods. We currently lack documentation on how to do so
|
1.0
|
Add docs (and/or jupyter notebook?) for how to run Armory scenarios step-by-step - The release of Armory 0.14.0 refactored scenarios such a way that they can be run more interactively/step-by-step using the `next()` and `evaluate_current()` methods. We currently lack documentation on how to do so
|
non_process
|
add docs and or jupyter notebook for how to run armory scenarios step by step the release of armory refactored scenarios such a way that they can be run more interactively step by step using the next and evaluate current methods we currently lack documentation on how to do so
| 0
|
298,831
| 22,575,031,662
|
IssuesEvent
|
2022-06-28 06:25:44
|
clusternet/website
|
https://api.github.com/repos/clusternet/website
|
closed
|
Chinese version of Clusternet introduction (introduction 文档中文翻译)
|
documentation help wanted
|
将 [Clusternet introduction](https://github.com/clusternet/website/blob/main/content/en/docs/introduction.md) 翻译为对应的中文文档
翻译后的目标文档路径为 `content/zh-cn/docs/introduction.md`
|
1.0
|
Chinese version of Clusternet introduction (introduction 文档中文翻译) - 将 [Clusternet introduction](https://github.com/clusternet/website/blob/main/content/en/docs/introduction.md) 翻译为对应的中文文档
翻译后的目标文档路径为 `content/zh-cn/docs/introduction.md`
|
non_process
|
chinese version of clusternet introduction introduction 文档中文翻译 将 翻译为对应的中文文档 翻译后的目标文档路径为 content zh cn docs introduction md
| 0
|
7,666
| 10,756,953,696
|
IssuesEvent
|
2019-10-31 12:20:30
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
ARM decode failure: spurious shift from ARM to THUMB
|
Feature: Processor/ARM Type: Bug
|
**Describe the bug**
the original identification was that "BNE decodes as LSL in ARM mode"... but upon closer inspection, it appears the decoder is changing into THUMB mode without apparent cause:
LAB_009bb914 XREF[1]: 009bb908(j)
009bb914 00 00 54 e3 cmp r4,#0x0
009bb918 00 00 53 13 cmpne param_4,#0x0
009bb91c 01 00 lsl param_2,param_1,#0x0
009bb91e 00 1a sub param_1,param_1,param_1
when it should decode to the following:
(from Vivisect)
.text:0x027bb914 000054e3 cmp r4,#0x00
.text:0x027bb918 00005313 cmpne r3,#0x00
.text:0x027bb91c 0100001a bne loc_027bb928
clearly the first two instructions are being decoded correctly, but then Ghidra is decoding the next instruction as THUMB LSL (btw, if imm5==0, it's supposed to be a MOV instruction).
**To Reproduce**
i believe you simply need to decode these bytes in this order, but since i'm not sure what's causing it, I'm not sure.
if it makes any difference, this is taken from an ELF library compiled for Android.
**Expected behavior**
i expect the decoder to track the ARM/THUMB state correctly throughout analysis
**Environment (please complete the following information):**
- OS: Linux (Kubuntu) 18.04.2 64-bit
- Java Version: 11.0.2
- Ghidra Version: 9.0 - public - released 2019-02-28
**Additional context**
i'm not sure what else to say. it isn't the target of a branch that I can tell (unless the sudden mode change somehow doesn't show any XREFs to that address), so I'm not sure what would make Ghidra spontaneously shift gears like this. this is 8-bytes after a branch target, so there's potential for a complicated bug involving the fact that ARM mode execution pipeline has "effective PC" 8 bytes beyond real PC in the Operand decoding context (ie. PC-relative addressing is 8-bytes greater than is intuitively pleasing.
|
1.0
|
ARM decode failure: spurious shift from ARM to THUMB - **Describe the bug**
the original identification was that "BNE decodes as LSL in ARM mode"... but upon closer inspection, it appears the decoder is changing into THUMB mode without apparent cause:
LAB_009bb914 XREF[1]: 009bb908(j)
009bb914 00 00 54 e3 cmp r4,#0x0
009bb918 00 00 53 13 cmpne param_4,#0x0
009bb91c 01 00 lsl param_2,param_1,#0x0
009bb91e 00 1a sub param_1,param_1,param_1
when it should decode to the following:
(from Vivisect)
.text:0x027bb914 000054e3 cmp r4,#0x00
.text:0x027bb918 00005313 cmpne r3,#0x00
.text:0x027bb91c 0100001a bne loc_027bb928
clearly the first two instructions are being decoded correctly, but then Ghidra is decoding the next instruction as THUMB LSL (btw, if imm5==0, it's supposed to be a MOV instruction).
**To Reproduce**
i believe you simply need to decode these bytes in this order, but since i'm not sure what's causing it, I'm not sure.
if it makes any difference, this is taken from an ELF library compiled for Android.
**Expected behavior**
i expect the decoder to track the ARM/THUMB state correctly throughout analysis
**Environment (please complete the following information):**
- OS: Linux (Kubuntu) 18.04.2 64-bit
- Java Version: 11.0.2
- Ghidra Version: 9.0 - public - released 2019-02-28
**Additional context**
i'm not sure what else to say. it isn't the target of a branch that I can tell (unless the sudden mode change somehow doesn't show any XREFs to that address), so I'm not sure what would make Ghidra spontaneously shift gears like this. this is 8-bytes after a branch target, so there's potential for a complicated bug involving the fact that ARM mode execution pipeline has "effective PC" 8 bytes beyond real PC in the Operand decoding context (ie. PC-relative addressing is 8-bytes greater than is intuitively pleasing.
|
process
|
arm decode failure spurious shift from arm to thumb describe the bug the original identification was that bne decodes as lsl in arm mode but upon closer inspection it appears the decoder is changing into thumb mode without apparent cause lab xref j cmp cmpne param lsl param param sub param param param when it should decode to the following from vivisect text cmp text cmpne text bne loc clearly the first two instructions are being decoded correctly but then ghidra is decoding the next instruction as thumb lsl btw if it s supposed to be a mov instruction to reproduce i believe you simply need to decode these bytes in this order but since i m not sure what s causing it i m not sure if it makes any difference this is taken from an elf library compiled for android expected behavior i expect the decoder to track the arm thumb state correctly throughout analysis environment please complete the following information os linux kubuntu bit java version ghidra version public released additional context i m not sure what else to say it isn t the target of a branch that i can tell unless the sudden mode change somehow doesn t show any xrefs to that address so i m not sure what would make ghidra spontaneously shift gears like this this is bytes after a branch target so there s potential for a complicated bug involving the fact that arm mode execution pipeline has effective pc bytes beyond real pc in the operand decoding context ie pc relative addressing is bytes greater than is intuitively pleasing
| 1
|
116,122
| 4,697,180,068
|
IssuesEvent
|
2016-10-12 08:27:33
|
CS2103AUG2016-W15-C4/main
|
https://api.github.com/repos/CS2103AUG2016-W15-C4/main
|
opened
|
As a new user I want to see user instructions
|
priority.high type.story
|
... so that I can learn how to use the application
|
1.0
|
As a new user I want to see user instructions - ... so that I can learn how to use the application
|
non_process
|
as a new user i want to see user instructions so that i can learn how to use the application
| 0
|
749,878
| 26,182,092,322
|
IssuesEvent
|
2023-01-02 17:02:04
|
frequenz-floss/frequenz-sdk-python
|
https://api.github.com/repos/frequenz-floss/frequenz-sdk-python
|
closed
|
Add BatteryPool implementation
|
priority:high type:enhancement part:data-pipeline
|
### What's needed?
BatteryPool implementation will have to support computing formulas for:
* SoC
* SoP
* Total capacity
* Total active power ( from inverter )
* Total upper and lower bound (from battery and adjacent inverter)
Their default formulas will be inferred from the component graph, and the formulas can be overridden from the UI.
It should connect to the ResamplingActor to get resampled data for individual components.
It should use FormuaEngine to compute formulas
To consider:
Should BatteryPool has method `set_power` to charge/discharge batteries?
### Proposed solution
_No response_
### Use cases
The purpose of this feature is to:
* pre-defined some well known formulas, so user doesn't need care about them in code.
* automatically check what components are working and get data only from working batteries.
### Alternatives and workarounds
_No response_
### Additional context
_No response_
|
1.0
|
Add BatteryPool implementation - ### What's needed?
BatteryPool implementation will have to support computing formulas for:
* SoC
* SoP
* Total capacity
* Total active power ( from inverter )
* Total upper and lower bound (from battery and adjacent inverter)
Their default formulas will be inferred from the component graph, and the formulas can be overridden from the UI.
It should connect to the ResamplingActor to get resampled data for individual components.
It should use FormuaEngine to compute formulas
To consider:
Should BatteryPool has method `set_power` to charge/discharge batteries?
### Proposed solution
_No response_
### Use cases
The purpose of this feature is to:
* pre-defined some well known formulas, so user doesn't need care about them in code.
* automatically check what components are working and get data only from working batteries.
### Alternatives and workarounds
_No response_
### Additional context
_No response_
|
non_process
|
add batterypool implementation what s needed batterypool implementation will have to support computing formulas for soc sop total capacity total active power from inverter total upper and lower bound from battery and adjacent inverter their default formulas will be inferred from the component graph and the formulas can be overridden from the ui it should connect to the resamplingactor to get resampled data for individual components it should use formuaengine to compute formulas to consider should batterypool has method set power to charge discharge batteries proposed solution no response use cases the purpose of this feature is to pre defined some well known formulas so user doesn t need care about them in code automatically check what components are working and get data only from working batteries alternatives and workarounds no response additional context no response
| 0
|
21,450
| 29,488,474,664
|
IssuesEvent
|
2023-06-02 11:41:01
|
zammad/zammad
|
https://api.github.com/repos/zammad/zammad
|
closed
|
reply all missing when Zammad email-in is in cc
|
bug enhancement verified prioritised by payment mail processing ticket: actions
|
<!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 2.7
* Installation method (source, package, ..): package
* Operating system: CentOS
* Database + version: postgresql 9.2.23
* Elasticsearch version: 5.6.8-1
* Browser + version: safari 12.0.2
* Ticket-ID: #1047717, #10102970
### Expected behavior:
* reply all should be visible if you have more than one uniq email address in email header and include every email address (including Zammad input adresses)
### Actual behavior:
* reply all is missing if you have another input email address (email-in channel) in cc and only 1 additional client/agent email in cc
### Steps to reproduce the behavior:
* send from agents mail account an email to Zammad email-in 1 and cc Zammad email-in 2 + additional agent - no reply all possible (when adding another second agent mail address to cc field then he reply all is visible but will not include the other Zammad email input address)
Yes I'm sure this is a bug and no feature request or a general question.
|
1.0
|
reply all missing when Zammad email-in is in cc - <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 2.7
* Installation method (source, package, ..): package
* Operating system: CentOS
* Database + version: postgresql 9.2.23
* Elasticsearch version: 5.6.8-1
* Browser + version: safari 12.0.2
* Ticket-ID: #1047717, #10102970
### Expected behavior:
* reply all should be visible if you have more than one uniq email address in email header and include every email address (including Zammad input adresses)
### Actual behavior:
* reply all is missing if you have another input email address (email-in channel) in cc and only 1 additional client/agent email in cc
### Steps to reproduce the behavior:
* send from agents mail account an email to Zammad email-in 1 and cc Zammad email-in 2 + additional agent - no reply all possible (when adding another second agent mail address to cc field then he reply all is visible but will not include the other Zammad email input address)
Yes I'm sure this is a bug and no feature request or a general question.
|
process
|
reply all missing when zammad email in is in cc hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version installation method source package package operating system centos database version postgresql elasticsearch version browser version safari ticket id expected behavior reply all should be visible if you have more than one uniq email address in email header and include every email address including zammad input adresses actual behavior reply all is missing if you have another input email address email in channel in cc and only additional client agent email in cc steps to reproduce the behavior send from agents mail account an email to zammad email in and cc zammad email in additional agent no reply all possible when adding another second agent mail address to cc field then he reply all is visible but will not include the other zammad email input address yes i m sure this is a bug and no feature request or a general question
| 1
|
74,369
| 20,148,402,383
|
IssuesEvent
|
2022-02-09 09:57:24
|
tutao/tutanota
|
https://api.github.com/repos/tutao/tutanota
|
opened
|
Upload web client releases to archive repository
|
build
|
In case we need to come back and re-test certain migrations it is extremely useful to have a way to download arbitrary release versions.
|
1.0
|
Upload web client releases to archive repository - In case we need to come back and re-test certain migrations it is extremely useful to have a way to download arbitrary release versions.
|
non_process
|
upload web client releases to archive repository in case we need to come back and re test certain migrations it is extremely useful to have a way to download arbitrary release versions
| 0
|
137,869
| 20,252,401,269
|
IssuesEvent
|
2022-02-14 19:15:49
|
popcorndao/workspace
|
https://api.github.com/repos/popcorndao/workspace
|
opened
|
Look into all sites we had design, identify, and make spacing and type classes consistent.
|
enhancement frontend needs design Components
|
## Objective
In order to bridge the understanding between designers and engineers, we need to create a system for h1 title, h2 title...spacings, color, and so on so that engineers can create class names in code ( a way to use classnames to style a design). Currently, we had some design exceptions and new spacing across different site designs. Now we need to align them, reduce differences, and boost our productivity/efficiency.
## Our current design system on Figma
https://www.figma.com/file/AEfyqvQLZ5XxlWfi1mehzT/%5BPopcorn%5D-Website?node-id=2975%3A36261
## To-do designers (It is a work in progress and not completed but you can have a look)
1. We need to take key type pages from each site we had designed under one page in Figma, name it "Design Studies", and look into all the spacing we had and font sizes.
2. Identify the below:
(Desktop based on 1440px design laptop design)
**- H1 title through H6 if applicable**
**- Subtitles**
**- Input Field Header**
**- Modal Tab Header**
**- Card Header**
**- Body Copies**
**- Paddings:**
padding between H1 and subtitle
padding for subtitle and body copy
padding between H1 and image
padding (top, bottom, left, right) for modals/cards,
padding between input fields
padding between input field and checkbox/associated items
**- First class Button**
**- Secondary Class Button**
**- Tertiary Button**
**- Text Link**
**- Filter Tag 1**
**- Filter Tag 2**
## To-do Engineers
1. Please confirm the above 1.0 list of style items is what we can formulate and will be useful for frontend engineering to boost production efficiency.
2. Please add and comment in this ticket if there are additional items you think is helpful
3. Lastly, provide all naming conventions that will be used in the frontend so within Figma we can use the same naming conventions for categorizations and incorporate them in our file naming system which is important
|
1.0
|
Look into all sites we had design, identify, and make spacing and type classes consistent. - ## Objective
In order to bridge the understanding between designers and engineers, we need to create a system for h1 title, h2 title...spacings, color, and so on so that engineers can create class names in code ( a way to use classnames to style a design). Currently, we had some design exceptions and new spacing across different site designs. Now we need to align them, reduce differences, and boost our productivity/efficiency.
## Our current design system on Figma
https://www.figma.com/file/AEfyqvQLZ5XxlWfi1mehzT/%5BPopcorn%5D-Website?node-id=2975%3A36261
## To-do designers (It is a work in progress and not completed but you can have a look)
1. We need to take key type pages from each site we had designed under one page in Figma, name it "Design Studies", and look into all the spacing we had and font sizes.
2. Identify the below:
(Desktop based on 1440px design laptop design)
**- H1 title through H6 if applicable**
**- Subtitles**
**- Input Field Header**
**- Modal Tab Header**
**- Card Header**
**- Body Copies**
**- Paddings:**
padding between H1 and subtitle
padding for subtitle and body copy
padding between H1 and image
padding (top, bottom, left, right) for modals/cards,
padding between input fields
padding between input field and checkbox/associated items
**- First class Button**
**- Secondary Class Button**
**- Tertiary Button**
**- Text Link**
**- Filter Tag 1**
**- Filter Tag 2**
## To-do Engineers
1. Please confirm the above 1.0 list of style items is what we can formulate and will be useful for frontend engineering to boost production efficiency.
2. Please add and comment in this ticket if there are additional items you think is helpful
3. Lastly, provide all naming conventions that will be used in the frontend so within Figma we can use the same naming conventions for categorizations and incorporate them in our file naming system which is important
|
non_process
|
look into all sites we had design identify and make spacing and type classes consistent objective in order to bridge the understanding between designers and engineers we need to create a system for title title spacings color and so on so that engineers can create class names in code a way to use classnames to style a design currently we had some design exceptions and new spacing across different site designs now we need to align them reduce differences and boost our productivity efficiency our current design system on figma to do designers it is a work in progress and not completed but you can have a look we need to take key type pages from each site we had designed under one page in figma name it design studies and look into all the spacing we had and font sizes identify the below desktop based on design laptop design title through if applicable subtitles input field header modal tab header card header body copies paddings padding between and subtitle padding for subtitle and body copy padding between and image padding top bottom left right for modals cards padding between input fields padding between input field and checkbox associated items first class button secondary class button tertiary button text link filter tag filter tag to do engineers please confirm the above list of style items is what we can formulate and will be useful for frontend engineering to boost production efficiency please add and comment in this ticket if there are additional items you think is helpful lastly provide all naming conventions that will be used in the frontend so within figma we can use the same naming conventions for categorizations and incorporate them in our file naming system which is important
| 0
|
28,663
| 13,775,911,972
|
IssuesEvent
|
2020-10-08 08:43:01
|
zeek/zeek
|
https://api.github.com/repos/zeek/zeek
|
opened
|
Script optimization through compilation
|
Area: Performance Area: Scripting Complexity: Substantial Type: Project
|
Creating a ticket to track discussion of [Vern's branch](https://github.com/zeek/zeek/tree/topic/vern/script-opt) adding a script compiler to speed up execution. The goal is to merge this in as an experimental features, ideally with 4.0 if timing works out.
Some initial thoughts & discussion on how to approach the merge (with a focus on the integration into the current code base) are in this [Google doc](https://docs.google.com/document/d/1EhgR80BkWgIeHpgxWiXegQfcOT6zfQagpvc8LDjFigU).
|
True
|
Script optimization through compilation - Creating a ticket to track discussion of [Vern's branch](https://github.com/zeek/zeek/tree/topic/vern/script-opt) adding a script compiler to speed up execution. The goal is to merge this in as an experimental features, ideally with 4.0 if timing works out.
Some initial thoughts & discussion on how to approach the merge (with a focus on the integration into the current code base) are in this [Google doc](https://docs.google.com/document/d/1EhgR80BkWgIeHpgxWiXegQfcOT6zfQagpvc8LDjFigU).
|
non_process
|
script optimization through compilation creating a ticket to track discussion of adding a script compiler to speed up execution the goal is to merge this in as an experimental features ideally with if timing works out some initial thoughts discussion on how to approach the merge with a focus on the integration into the current code base are in this
| 0
|
17,262
| 2,994,142,325
|
IssuesEvent
|
2015-07-22 09:48:20
|
colour-science/colour
|
https://api.github.com/repos/colour-science/colour
|
closed
|
Fix inconsistencies in support for fractional steps size in "colour.SpectralPowerDistribution" class.
|
API Defect Major
|
Because of imprecision in floating point representation, fractional steps size could lead to some wavelengths not being represented exactly as the same although they should have been in the perfect world.
Here is a failing test for example, while interpolating the two following spectral data to 0.1 steps size, one would expect for the spd wavelengths to be all present in the CMFS wavelengths which have a larger shape, but it is not the case:
```
import numpy as np
import colour
spd = colour.SMITS_1999_SPDS['White'].clone().interpolate(colour.SpectralShape(steps=0.1))
cmfs = colour.CMFS['CIE 1931 2 Degree Standard Observer'].clone().interpolate(colour.SpectralShape(steps=0.1))
np.all(np.in1d(spd.wavelengths, cmfs.wavelengths))
```
I found two solutions:
- One implemented solution is to use a rounded version of the wavelengths.
- One untested solution is to use the `Decimal` module to represent wavelengths (which may leads to a lot of various issues and incompatibilities with the current API)
|
1.0
|
Fix inconsistencies in support for fractional steps size in "colour.SpectralPowerDistribution" class. - Because of imprecision in floating point representation, fractional steps size could lead to some wavelengths not being represented exactly as the same although they should have been in the perfect world.
Here is a failing test for example, while interpolating the two following spectral data to 0.1 steps size, one would expect for the spd wavelengths to be all present in the CMFS wavelengths which have a larger shape, but it is not the case:
```
import numpy as np
import colour
spd = colour.SMITS_1999_SPDS['White'].clone().interpolate(colour.SpectralShape(steps=0.1))
cmfs = colour.CMFS['CIE 1931 2 Degree Standard Observer'].clone().interpolate(colour.SpectralShape(steps=0.1))
np.all(np.in1d(spd.wavelengths, cmfs.wavelengths))
```
I found two solutions:
- One implemented solution is to use a rounded version of the wavelengths.
- One untested solution is to use the `Decimal` module to represent wavelengths (which may leads to a lot of various issues and incompatibilities with the current API)
|
non_process
|
fix inconsistencies in support for fractional steps size in colour spectralpowerdistribution class because of imprecision in floating point representation fractional steps size could lead to some wavelengths not being represented exactly as the same although they should have been in the perfect world here is a failing test for example while interpolating the two following spectral data to steps size one would expect for the spd wavelengths to be all present in the cmfs wavelengths which have a larger shape but it is not the case import numpy as np import colour spd colour smits spds clone interpolate colour spectralshape steps cmfs colour cmfs clone interpolate colour spectralshape steps np all np spd wavelengths cmfs wavelengths i found two solutions one implemented solution is to use a rounded version of the wavelengths one untested solution is to use the decimal module to represent wavelengths which may leads to a lot of various issues and incompatibilities with the current api
| 0
|
111,162
| 9,515,838,408
|
IssuesEvent
|
2019-04-26 07:09:48
|
Microsoft/AzureStorageExplorer
|
https://api.github.com/repos/Microsoft/AzureStorageExplorer
|
opened
|
Update the tooltip 'Execute Query(F5)' to 'Execute query(F5)'
|
:gear: tables 🧪 testing
|
**Storage Explorer Version:** 1.8.0_20190425.8
**Platform/OS:** Linux Ubuntu/macOS High Sierra/Windows 10
**Architecture**: ia32/x64
**Commit:** f43a9ee4
**Regression From:** Not a regression
**Steps to reproduce:**
1. Expand a storage account -> Tables -> Create a new table.
2. Click 'Query' on the toolbar -> Hover the mouse on the Execute query button.
3. Check the tooltip.
**Expect Experience:**
Show 'Execute query(F5)'.
**Actual Experience:**
Show 'Execute Query(F5)'.

|
1.0
|
Update the tooltip 'Execute Query(F5)' to 'Execute query(F5)' - **Storage Explorer Version:** 1.8.0_20190425.8
**Platform/OS:** Linux Ubuntu/macOS High Sierra/Windows 10
**Architecture**: ia32/x64
**Commit:** f43a9ee4
**Regression From:** Not a regression
**Steps to reproduce:**
1. Expand a storage account -> Tables -> Create a new table.
2. Click 'Query' on the toolbar -> Hover the mouse on the Execute query button.
3. Check the tooltip.
**Expect Experience:**
Show 'Execute query(F5)'.
**Actual Experience:**
Show 'Execute Query(F5)'.

|
non_process
|
update the tooltip execute query to execute query storage explorer version platform os linux ubuntu macos high sierra windows architecture commit regression from not a regression steps to reproduce expand a storage account tables create a new table click query on the toolbar hover the mouse on the execute query button check the tooltip expect experience show execute query actual experience show execute query
| 0
|
57,900
| 14,239,938,658
|
IssuesEvent
|
2020-11-18 20:56:25
|
DadSchoorse/vkBasalt
|
https://api.github.com/repos/DadSchoorse/vkBasalt
|
closed
|
Cannot build 32b on Ubuntu 18.04
|
build
|
Before installing 64b I input:
`sudo apt install build-essential gcc-multilib libx11-dev libx11-dev:i386 glslang-tools
snap install spirv-tools`
I have edited the 32b install command according to the instructions to:
`sudo ASFLAGS=--32 CFLAGS=-m32 CXXFLAGS=-m32 PKG_CONFIG_PATH=/usr/lib/i386-linux-gnu/pkgconfig meson --prefix=/usr --buildtype=release --libdir=lib/i386-linux-gnu -Dwith_json=false builddir.32`
but this outputs for me:
```
Build started at 2020-07-09T15:34:04.352186
Main binary: /usr/bin/python3
Python system: Linux
The Meson build system
Version: 0.51.2
Source dir: /mnt/freedom/Downloads/vkBasalt
Build dir: /mnt/freedom/Downloads/vkBasalt/builddir.32
Build type: native build
Project name: vkBasalt
Project version: undefined
Appending CFLAGS from environment: '-m32'
No LDFLAGS in the environment, not changing global flags.
No CPPFLAGS in the environment, not changing global flags.
Sanity testing C compiler: cc
Is cross compiler: False.
Sanity check compiler command line: cc -m32 -pipe -D_FILE_OFFSET_BITS=64 /mnt/freedom/Downloads/vkBasalt/builddir.32/meson-private/sanitycheckc.c -o /mnt/freedom/Downloads/vkBasalt/builddir.32/meson-private/sanitycheckc.exe
Sanity check compile stdout:
-----
Sanity check compile stderr:
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/9/libgcc.a when searching for -lgcc
/usr/bin/ld: cannot find -lgcc
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/9/libgcc.a when searching for -lgcc
/usr/bin/ld: cannot find -lgcc
collect2: error: ld returned 1 exit status
-----
meson.build:1:0: ERROR: Compiler cc can not compile programs.
```
I can see that I have gcc-multilib and build-essential already installed.
Is there something else I can try please?
|
1.0
|
Cannot build 32b on Ubuntu 18.04 - Before installing 64b I input:
`sudo apt install build-essential gcc-multilib libx11-dev libx11-dev:i386 glslang-tools
snap install spirv-tools`
I have edited the 32b install command according to the instructions to:
`sudo ASFLAGS=--32 CFLAGS=-m32 CXXFLAGS=-m32 PKG_CONFIG_PATH=/usr/lib/i386-linux-gnu/pkgconfig meson --prefix=/usr --buildtype=release --libdir=lib/i386-linux-gnu -Dwith_json=false builddir.32`
but this outputs for me:
```
Build started at 2020-07-09T15:34:04.352186
Main binary: /usr/bin/python3
Python system: Linux
The Meson build system
Version: 0.51.2
Source dir: /mnt/freedom/Downloads/vkBasalt
Build dir: /mnt/freedom/Downloads/vkBasalt/builddir.32
Build type: native build
Project name: vkBasalt
Project version: undefined
Appending CFLAGS from environment: '-m32'
No LDFLAGS in the environment, not changing global flags.
No CPPFLAGS in the environment, not changing global flags.
Sanity testing C compiler: cc
Is cross compiler: False.
Sanity check compiler command line: cc -m32 -pipe -D_FILE_OFFSET_BITS=64 /mnt/freedom/Downloads/vkBasalt/builddir.32/meson-private/sanitycheckc.c -o /mnt/freedom/Downloads/vkBasalt/builddir.32/meson-private/sanitycheckc.exe
Sanity check compile stdout:
-----
Sanity check compile stderr:
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/9/libgcc.a when searching for -lgcc
/usr/bin/ld: cannot find -lgcc
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/9/libgcc.a when searching for -lgcc
/usr/bin/ld: cannot find -lgcc
collect2: error: ld returned 1 exit status
-----
meson.build:1:0: ERROR: Compiler cc can not compile programs.
```
I can see that I have gcc-multilib and build-essential already installed.
Is there something else I can try please?
|
non_process
|
cannot build on ubuntu before installing i input sudo apt install build essential gcc multilib dev dev glslang tools snap install spirv tools i have edited the install command according to the instructions to sudo asflags cflags cxxflags pkg config path usr lib linux gnu pkgconfig meson prefix usr buildtype release libdir lib linux gnu dwith json false builddir but this outputs for me build started at main binary usr bin python system linux the meson build system version source dir mnt freedom downloads vkbasalt build dir mnt freedom downloads vkbasalt builddir build type native build project name vkbasalt project version undefined appending cflags from environment no ldflags in the environment not changing global flags no cppflags in the environment not changing global flags sanity testing c compiler cc is cross compiler false sanity check compiler command line cc pipe d file offset bits mnt freedom downloads vkbasalt builddir meson private sanitycheckc c o mnt freedom downloads vkbasalt builddir meson private sanitycheckc exe sanity check compile stdout sanity check compile stderr usr bin ld skipping incompatible usr lib gcc linux gnu libgcc a when searching for lgcc usr bin ld cannot find lgcc usr bin ld skipping incompatible usr lib gcc linux gnu libgcc a when searching for lgcc usr bin ld cannot find lgcc error ld returned exit status meson build error compiler cc can not compile programs i can see that i have gcc multilib and build essential already installed is there something else i can try please
| 0
|
21,560
| 29,893,207,979
|
IssuesEvent
|
2023-06-21 00:56:31
|
bitfocus/companion-module-requests
|
https://api.github.com/repos/bitfocus/companion-module-requests
|
opened
|
NTP Dot Protocol
|
NOT YET PROCESSED
|
- [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
[NTP Audio Routers.](https://www.ntp.dk/)
What you would like to be able to make it do from Companion:
Check, set and clear cross points.
Check & report alarms.
Direct links or attachments to the ethernet control protocol or API:
[NTP Dot Alarm Messages.pdf](https://github.com/bitfocus/companion-module-requests/files/11810855/NTP.Dot.Alarm.Messages.pdf)
[NTP Dot Protocol 3.05.pdf](https://github.com/bitfocus/companion-module-requests/files/11810857/NTP.Dot.Protocol.3.05.pdf)
|
1.0
|
NTP Dot Protocol - - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
[NTP Audio Routers.](https://www.ntp.dk/)
What you would like to be able to make it do from Companion:
Check, set and clear cross points.
Check & report alarms.
Direct links or attachments to the ethernet control protocol or API:
[NTP Dot Alarm Messages.pdf](https://github.com/bitfocus/companion-module-requests/files/11810855/NTP.Dot.Alarm.Messages.pdf)
[NTP Dot Protocol 3.05.pdf](https://github.com/bitfocus/companion-module-requests/files/11810857/NTP.Dot.Protocol.3.05.pdf)
|
process
|
ntp dot protocol i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control what you would like to be able to make it do from companion check set and clear cross points check report alarms direct links or attachments to the ethernet control protocol or api
| 1
|
8,162
| 11,385,220,477
|
IssuesEvent
|
2020-01-29 10:38:44
|
Open-EO/openeo-processes
|
https://api.github.com/repos/Open-EO/openeo-processes
|
opened
|
array_count?
|
new process
|
While working on #137, I found that it could be useful for several use cases to have an operation that returns the number of elements in a list, similarly to count.
Some use cases:
- Compute the number of elements in a time series when no data is ignored
- Compute mean, sd, variance etc. Every formula that computes something like `1/n * func(...)` or so.
Side note: Having that said, we could "save" (i.e. remove) quite a lot of processes if we change arrays to be 1D datacubes.
|
1.0
|
array_count? - While working on #137, I found that it could be useful for several use cases to have an operation that returns the number of elements in a list, similarly to count.
Some use cases:
- Compute the number of elements in a time series when no data is ignored
- Compute mean, sd, variance etc. Every formula that computes something like `1/n * func(...)` or so.
Side note: Having that said, we could "save" (i.e. remove) quite a lot of processes if we change arrays to be 1D datacubes.
|
process
|
array count while working on i found that it could be useful for several use cases to have an operation that returns the number of elements in a list similarly to count some use cases compute the number of elements in a time series when no data is ignored compute mean sd variance etc every formula that computes something like n func or so side note having that said we could save i e remove quite a lot of processes if we change arrays to be datacubes
| 1
|
6,630
| 9,739,117,892
|
IssuesEvent
|
2019-06-01 08:18:00
|
haskell/haskell-ide-engine
|
https://api.github.com/repos/haskell/haskell-ide-engine
|
opened
|
Milestone usage
|
meta: organisation - processes type: discussion - decision needed
|
At the moment, milestones are used mainly to record what went in to a given monthly release.
And the incomplete issues are assigned en masse to the next milestone when a release is made.
I suspect this monthly issue move a) spams a lot of people b) creates clutter in the issue's history and c) gives a false sense that some meaningful action has been taken.
I think it might be better to not allocate issues to milestones, unless they are actually being worked on, or planned to be worked on. And then the monthly release just tags them with the milestone they end up in.
Opinions?
|
1.0
|
Milestone usage - At the moment, milestones are used mainly to record what went in to a given monthly release.
And the incomplete issues are assigned en masse to the next milestone when a release is made.
I suspect this monthly issue move a) spams a lot of people b) creates clutter in the issue's history and c) gives a false sense that some meaningful action has been taken.
I think it might be better to not allocate issues to milestones, unless they are actually being worked on, or planned to be worked on. And then the monthly release just tags them with the milestone they end up in.
Opinions?
|
process
|
milestone usage at the moment milestones are used mainly to record what went in to a given monthly release and the incomplete issues are assigned en masse to the next milestone when a release is made i suspect this monthly issue move a spams a lot of people b creates clutter in the issue s history and c gives a false sense that some meaningful action has been taken i think it might be better to not allocate issues to milestones unless they are actually being worked on or planned to be worked on and then the monthly release just tags them with the milestone they end up in opinions
| 1
|
3,858
| 6,808,626,763
|
IssuesEvent
|
2017-11-04 05:48:24
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
reopened
|
Clean up conversions
|
libs-utillib status-inprocess type-enhancement
|
There are many, many conversions from and to strings, ints, bools, etc. This needs to be cleaned up and made consistent. Here's a current list of all 'to' conversions. The trouble here is it is not clear what it's to or from. All conversions should have the form: str2uint32 or str2bool or bool2uint32, etc.
And clearly distinguish between 32 bit items and 64 bit items using uint32_t and uint64_t so when we write to hard drive we don't have so many problems. A similar list for 'from' can be created. Plus there are other non-to and non-from conversions such as dateFromTimeStamp.
|
1.0
|
Clean up conversions - There are many, many conversions from and to strings, ints, bools, etc. This needs to be cleaned up and made consistent. Here's a current list of all 'to' conversions. The trouble here is it is not clear what it's to or from. All conversions should have the form: str2uint32 or str2bool or bool2uint32, etc.
And clearly distinguish between 32 bit items and 64 bit items using uint32_t and uint64_t so when we write to hard drive we don't have so many problems. A similar list for 'from' can be created. Plus there are other non-to and non-from conversions such as dateFromTimeStamp.
|
process
|
clean up conversions there are many many conversions from and to strings ints bools etc this needs to be cleaned up and made consistent here s a current list of all to conversions the trouble here is it is not clear what it s to or from all conversions should have the form or or etc and clearly distinguish between bit items and bit items using t and t so when we write to hard drive we don t have so many problems a similar list for from can be created plus there are other non to and non from conversions such as datefromtimestamp
| 1
|
321,601
| 23,863,016,009
|
IssuesEvent
|
2022-09-07 08:42:42
|
vercel/next.js
|
https://api.github.com/repos/vercel/next.js
|
opened
|
Docs: `output: "standalone"` conflicts with discouraged usage of Custom Servers
|
template: documentation
|
### What is the improvement or update you wish to see?
The [Custom Server documentation](https://nextjs.org/docs/advanced-features/custom-server) highlights
> A custom server will remove important performance optimizations, like serverless functions and [Automatic Static Optimization](https://nextjs.org/docs/advanced-features/automatic-static-optimization).
However, `[output:"standalone"](https://nextjs.org/docs/advanced-features/output-file-tracing#automatically-copying-traced-files)` builds a custom server inside `.next/standalone/server.js` which is used in the [official Docker examples](https://github.com/vercel/next.js/blob/canary/examples/with-docker/next.config.js#L3).
I'm not sure if this is intended, a bug, or requires additional documentation, but this seems very confusing.
Why would a Next.js feature yield something that is a Next.js "anti-pattern"?
### Is there any context that might help us understand?
N/A
### Does the docs page already exist? Please link to it.
N/A
|
1.0
|
Docs: `output: "standalone"` conflicts with discouraged usage of Custom Servers - ### What is the improvement or update you wish to see?
The [Custom Server documentation](https://nextjs.org/docs/advanced-features/custom-server) highlights
> A custom server will remove important performance optimizations, like serverless functions and [Automatic Static Optimization](https://nextjs.org/docs/advanced-features/automatic-static-optimization).
However, `[output:"standalone"](https://nextjs.org/docs/advanced-features/output-file-tracing#automatically-copying-traced-files)` builds a custom server inside `.next/standalone/server.js` which is used in the [official Docker examples](https://github.com/vercel/next.js/blob/canary/examples/with-docker/next.config.js#L3).
I'm not sure if this is intended, a bug, or requires additional documentation, but this seems very confusing.
Why would a Next.js feature yield something that is a Next.js "anti-pattern"?
### Is there any context that might help us understand?
N/A
### Does the docs page already exist? Please link to it.
N/A
|
non_process
|
docs output standalone conflicts with discouraged usage of custom servers what is the improvement or update you wish to see the highlights a custom server will remove important performance optimizations like serverless functions and however builds a custom server inside next standalone server js which is used in the i m not sure if this is intended a bug or requires additional documentation but this seems very confusing why would a next js feature yield something that is a next js anti pattern is there any context that might help us understand n a does the docs page already exist please link to it n a
| 0
|
323,681
| 27,746,127,296
|
IssuesEvent
|
2023-03-15 17:09:29
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[DocDB] Data race in accessing queue_state_.current_term leading to failures in tsan runs
|
kind/bug kind/failing-test area/docdb priority/high 2.14 Backport Required
|
Jira Link: [DB-3349](https://yugabyte.atlassian.net/browse/DB-3349)
### Description
https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&build_type=all&class=RemoteBootstrapITest&fail_tag=all&name=TestLongRemoteBootstrapsAcrossServers&platform=linux
[DB-3349]: https://yugabyte.atlassian.net/browse/DB-3349?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
|
1.0
|
[DocDB] Data race in accessing queue_state_.current_term leading to failures in tsan runs - Jira Link: [DB-3349](https://yugabyte.atlassian.net/browse/DB-3349)
### Description
https://detective-gcp.dev.yugabyte.com/stability/test?branch=master&build_type=all&class=RemoteBootstrapITest&fail_tag=all&name=TestLongRemoteBootstrapsAcrossServers&platform=linux
[DB-3349]: https://yugabyte.atlassian.net/browse/DB-3349?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
|
non_process
|
data race in accessing queue state current term leading to failures in tsan runs jira link description
| 0
|
7,585
| 10,696,547,349
|
IssuesEvent
|
2019-10-23 14:55:15
|
hasadna/monorepo
|
https://api.github.com/repos/hasadna/monorepo
|
closed
|
Create a basic pipeline for OpenTrain CSVs
|
OpenTrain Data Processing
|
We have 4 CSVs:
http://otrain.org/files/dumps-csv/
This task is to create a basic pipeline that downloads the files and for now, just prints the number of lines in each file (we will later replace this with something more interesting).
You can see an example pipeline here:
https://github.com/hasadna/hasadna/tree/master/projects/data_analysis
To run the pipeline, follow the instructions in that link.
So let's break this up a little bit:
1. Run the example pipeline using `bazel run`
2. Explore the example pipeline, go over the files and try to understand who's calling who.
3. Copy it into `projects/opentrain/pipeline`, make it work there as is.
4. Rename to `opentrain_pipeline`, make sure it still works :)
5. Replace the data it uses with the CSVs above. The example pipeline's data is defined here:
https://github.com/hasadna/hasadna/blob/master/WORKSPACE#L145-L167
If you encounter any issue, please ask!
|
1.0
|
Create a basic pipeline for OpenTrain CSVs - We have 4 CSVs:
http://otrain.org/files/dumps-csv/
This task is to create a basic pipeline that downloads the files and for now, just prints the number of lines in each file (we will later replace this with something more interesting).
You can see an example pipeline here:
https://github.com/hasadna/hasadna/tree/master/projects/data_analysis
To run the pipeline, follow the instructions in that link.
So let's break this up a little bit:
1. Run the example pipeline using `bazel run`
2. Explore the example pipeline, go over the files and try to understand who's calling who.
3. Copy it into `projects/opentrain/pipeline`, make it work there as is.
4. Rename to `opentrain_pipeline`, make sure it still works :)
5. Replace the data it uses with the CSVs above. The example pipeline's data is defined here:
https://github.com/hasadna/hasadna/blob/master/WORKSPACE#L145-L167
If you encounter any issue, please ask!
|
process
|
create a basic pipeline for opentrain csvs we have csvs this task is to create a basic pipeline that downloads the files and for now just prints the number of lines in each file we will later replace this with something more interesting you can see an example pipeline here to run the pipeline follow the instructions in that link so let s break this up a little bit run the example pipeline using bazel run explore the example pipeline go over the files and try to understand who s calling who copy it into projects opentrain pipeline make it work there as is rename to opentrain pipeline make sure it still works replace the data it uses with the csvs above the example pipeline s data is defined here if you encounter any issue please ask
| 1
|
10,486
| 8,582,876,837
|
IssuesEvent
|
2018-11-13 18:08:42
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
opened
|
Extra runtime direcotry
|
area-Infrastructure
|
@ViktorHofer noticed we're producing 2 runtime directories.
Eg:
```
artifacts\bin\runtime\uap
artifacts\bin\runtime\uap-Windows_NT-Debug-x64
```
This is most likely an issue with runtime.depproj
|
1.0
|
Extra runtime direcotry - @ViktorHofer noticed we're producing 2 runtime directories.
Eg:
```
artifacts\bin\runtime\uap
artifacts\bin\runtime\uap-Windows_NT-Debug-x64
```
This is most likely an issue with runtime.depproj
|
non_process
|
extra runtime direcotry viktorhofer noticed we re producing runtime directories eg artifacts bin runtime uap artifacts bin runtime uap windows nt debug this is most likely an issue with runtime depproj
| 0
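Each record above carries both a raw `text_combine` field and a lowercased, de-punctuated `text` field. The preprocessing script itself is not part of this dump, so the following is only a hedged reconstruction: the function name `normalize` and the exact regex rules are assumptions inferred from the rows, and it does not reproduce every record (some rows in this dump retain emoji and CJK characters that this sketch would strip).

```python
import re

def normalize(text: str) -> str:
    """Approximate the dataset's "text" column from "text_combine".

    Assumed rules (inferred from the rows, not from any documented
    spec): lowercase, drop URLs, replace every remaining non-letter
    character with a space, collapse runs of whitespace.
    """
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # URLs disappear in the cleaned rows
    text = re.sub(r"[^a-z]+", " ", text)       # digits and punctuation become spaces
    return re.sub(r"\s+", " ", text).strip()
```

On the record above this reproduces the cleaned prefix: `normalize("Extra runtime direcotry - @ViktorHofer noticed we're producing 2 runtime directories.")` gives `extra runtime direcotry viktorhofer noticed we re producing runtime directories`, matching the stored `text` value.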
|
40,742
| 8,837,289,934
|
IssuesEvent
|
2019-01-05 02:48:48
|
phan/phan
|
https://api.github.com/repos/phan/phan
|
closed
|
PhanUnusedVariable should suggest parameters/variables occurring elsewhere in the function scope
|
dead code detection enhancement
|
Currently, there are no suggestions.
Walk the AST and generate suggestions from variable /parameter/use statements of closures within the function/method
- This information may already be in the VariableGraph, so that may be unnecessary
|
1.0
|
PhanUnusedVariable should suggest parameters/variables occurring elsewhere in the function scope - Currently, there are no suggestions.
Walk the AST and generate suggestions from variable /parameter/use statements of closures within the function/method
- This information may already be in the VariableGraph, so that may be unnecessary
|
non_process
|
phanunusedvariable should suggest parameters variables occurring elsewhere in the function scope currently there are no suggestions walk the ast and generate suggestions from variable parameter use statements of closures within the function method this information may already be in the variablegraph so that may be unnecessary
| 0
|
128,837
| 27,337,764,391
|
IssuesEvent
|
2023-02-26 12:33:00
|
creativecommons/cc-resource-archive
|
https://api.github.com/repos/creativecommons/cc-resource-archive
|
opened
|
CCS Style sheet needs to incorporate css variables
|
🟩 priority: low 🚦 status: awaiting triage ✨ goal: improvement 💻 aspect: code
|
## Problem
The CSS Style sheet lacks css global variables.
## Description
The CSS style sheet does not contain any global variables (which would reduce the work load if any changes are to be made in the future).
This would also be crucial if a dark mode is added to the website, as changing/toggling the stylings would be fairly simple .
This will also help in improving the responsiveness of the website as all necessary attributes can be stored using these variables and can be changed or added any point of time.
## Implementation
- [x] I would be interested in implementing this feature.
|
1.0
|
CCS Style sheet needs to incorporate css variables - ## Problem
The CSS Style sheet lacks css global variables.
## Description
The CSS style sheet does not contain any global variables (which would reduce the work load if any changes are to be made in the future).
This would also be crucial if a dark mode is added to the website, as changing/toggling the stylings would be fairly simple .
This will also help in improving the responsiveness of the website as all necessary attributes can be stored using these variables and can be changed or added any point of time.
## Implementation
- [x] I would be interested in implementing this feature.
|
non_process
|
ccs style sheet needs to incorporate css variables problem the css style sheet lacks css global variables description the css style sheet does not contain any global variables which would reduce the work load if any changes are to be made in the future this would also be crucial if a dark mode is added to the website as changing toggling the stylings would be fairly simple this will also help in improving the responsiveness of the website as all necessary attributes can be stored using these variables and can be changed or added any point of time implementation i would be interested in implementing this feature
| 0
|
3,223
| 5,638,648,903
|
IssuesEvent
|
2017-04-06 12:33:39
|
zhangn1985/ykdl
|
https://api.github.com/repos/zhangn1985/ykdl
|
closed
|
不能 http://v.ifeng.com/gongkaike/sjdjiangtang/201203/b2a540b4-32b6-41d1-b
|
requirement
|
```
C:\Users\i>python3 C:\Users\i\AppData\Local\Programs\Python\Python35-32\Scripts\
ykdl.py -i http://v.ifeng.com/gongkaike/sjdjiangtang/201203/b2a540b4-32b6-41d1-b
a5f-68552cb05833.shtml
Traceback (most recent call last):
File "C:\Users\i\AppData\Local\Programs\Python\Python35-32\Scripts\ykdl.py", l
ine 158, in <module>
main()
File "C:\Users\i\AppData\Local\Programs\Python\Python35-32\Scripts\ykdl.py", l
ine 136, in main
info = parser(u)
File "C:\Users\i\AppData\Local\Programs\Python\Python35-32\lib\site-packages\y
kdl\extractor.py", line 21, in parser
info = self.prepare()
File "C:\Users\i\AppData\Local\Programs\Python\Python35-32\lib\site-packages\y
kdl\extractors\ifeng.py", line 26, in prepare
for v in videos[0].getElementsByTagName('video'):
IndexError: list index out of range
C:\Users\i>
```
|
1.0
|
不能 http://v.ifeng.com/gongkaike/sjdjiangtang/201203/b2a540b4-32b6-41d1-b - ```
C:\Users\i>python3 C:\Users\i\AppData\Local\Programs\Python\Python35-32\Scripts\
ykdl.py -i http://v.ifeng.com/gongkaike/sjdjiangtang/201203/b2a540b4-32b6-41d1-b
a5f-68552cb05833.shtml
Traceback (most recent call last):
File "C:\Users\i\AppData\Local\Programs\Python\Python35-32\Scripts\ykdl.py", l
ine 158, in <module>
main()
File "C:\Users\i\AppData\Local\Programs\Python\Python35-32\Scripts\ykdl.py", l
ine 136, in main
info = parser(u)
File "C:\Users\i\AppData\Local\Programs\Python\Python35-32\lib\site-packages\y
kdl\extractor.py", line 21, in parser
info = self.prepare()
File "C:\Users\i\AppData\Local\Programs\Python\Python35-32\lib\site-packages\y
kdl\extractors\ifeng.py", line 26, in prepare
for v in videos[0].getElementsByTagName('video'):
IndexError: list index out of range
C:\Users\i>
```
|
non_process
|
不能 c users i c users i appdata local programs python scripts ykdl py i shtml traceback most recent call last file c users i appdata local programs python scripts ykdl py l ine in main file c users i appdata local programs python scripts ykdl py l ine in main info parser u file c users i appdata local programs python lib site packages y kdl extractor py line in parser info self prepare file c users i appdata local programs python lib site packages y kdl extractors ifeng py line in prepare for v in videos getelementsbytagname video indexerror list index out of range c users i
| 0
|
176,370
| 13,638,430,506
|
IssuesEvent
|
2020-09-25 09:22:08
|
SenseNet/sn-client
|
https://api.github.com/repos/SenseNet/sn-client
|
opened
|
🧪 [E2E test] Task
|
hacktoberfest test
|
# 🧪E2E test cases
The scope of these tests is to ensure that task creation, modification, and delete works as it is intended.
## 😎 Role
All test should run as admin.
# Test case 1
## 🧫 Purpose of the test
To ensure that task creation work properly.
## 🐾 Steps
1. Login with admin role
2. Click on Content menuitem
3. Click on IT Workspace in the tree
4. Click on Tasks (under IT Workspace) in the tree
5. Click on 'Add new' button
6. Select Task from the dropdown list
7. Fill the form with the following data:
- Name: Test Task
8. **Step:** Click on submit button
**Expected result:** Test Task should be in the list

# Test case 2
## 🧫 Purpose of the test
To ensure that modifying a task content works properly
## 🐾 Steps
1. Right click on Test Task
2. Select Edit from the dropdown list
3. Change the Name from Test Task to Changed Test Task
4. **Step:** Click on submit button
**Expected result:** Changed Test Task should be in the list, Test Task should not be

# Test case 3
## 🧫 Purpose of the test
To ensure that delete a task content works properly
## 🐾 Steps
1. Right click on Changed Test Task
2. Select Delete from the dropdown list
3. Tick permanently
4. **Step:** Click on Delete button
**Expected result:** Changed TestTask should not be in the list
|
1.0
|
🧪 [E2E test] Task - # 🧪E2E test cases
The scope of these tests is to ensure that task creation, modification, and delete works as it is intended.
## 😎 Role
All test should run as admin.
# Test case 1
## 🧫 Purpose of the test
To ensure that task creation work properly.
## 🐾 Steps
1. Login with admin role
2. Click on Content menuitem
3. Click on IT Workspace in the tree
4. Click on Tasks (under IT Workspace) in the tree
5. Click on 'Add new' button
6. Select Task from the dropdown list
7. Fill the form with the following data:
- Name: Test Task
8. **Step:** Click on submit button
**Expected result:** Test Task should be in the list

# Test case 2
## 🧫 Purpose of the test
To ensure that modifying a task content works properly
## 🐾 Steps
1. Right click on Test Task
2. Select Edit from the dropdown list
3. Change the Name from Test Task to Changed Test Task
4. **Step:** Click on submit button
**Expected result:** Changed Test Task should be in the list, Test Task should not be

# Test case 3
## 🧫 Purpose of the test
To ensure that delete a task content works properly
## 🐾 Steps
1. Right click on Changed Test Task
2. Select Delete from the dropdown list
3. Tick permanently
4. **Step:** Click on Delete button
**Expected result:** Changed TestTask should not be in the list
|
non_process
|
🧪 task 🧪 test cases the scope of these tests is to ensure that task creation modification and delete works as it is intended 😎 role all test should run as admin test case 🧫 purpose of the test to ensure that task creation work properly 🐾 steps login with admin role click on content menuitem click on it workspace in the tree click on tasks under it workspace in the tree click on add new button select task from the dropdown list fill the form with the following data name test task step click on submit button expected result test task should be in the list test case 🧫 purpose of the test to ensure that modifying a task content works properly 🐾 steps right click on test task select edit from the dropdown list change the name from test task to changed test task step click on submit button expected result changed test task should be in the list test task should not be test case 🧫 purpose of the test to ensure that delete a task content works properly 🐾 steps right click on changed test task select delete from the dropdown list tick permanently step click on delete button expected result changed testtask should not be in the list
| 0
|
2,897
| 5,886,999,367
|
IssuesEvent
|
2017-05-17 05:43:20
|
Jumpscale/jumpscale_core8
|
https://api.github.com/repos/Jumpscale/jumpscale_core8
|
closed
|
AYS: Services dependences are not resolved
|
AtYourService process_wontfix type_feature
|
Services dependences between bps are not resolved, we have two bps in the same repo:
bp_related_1.yaml
```
datacenter__ovh_germany2:
location: 'germany'
description: 'ovh_germany2'
cockpit__cockpitv1:
description: 'cockpit v1'
datacenter: 'ovh_germany1'
actions:
- action: 'install'
```
bp_related_2.yaml
```
datacenter__ovh_germany1:
location: 'germany'
description: 'ovh_germany1'
actions:
- action: 'install'
```
if bp_related_1.yaml is executed first then there will be an error saying
```
2017-02-24 15:12:34,082 - root - INFO - * Test case : test_directory_structure.yaml
2017-02-24 15:12:34,083 - root - INFO - * creating new repository .....
2017-02-24 15:12:34,195 - root - INFO - * CREATED : b94987498f repo
2017-02-24 15:12:34,195 - root - INFO - * sending blueprint .....
2017-02-24 15:12:34,216 - root - INFO - CREATED : b4b033c872 blueprint in b94987498f repo
2017-02-24 15:12:34,526 - root - INFO - EXECUTED : b4b033c872 blueprint in b94987498f repo
2017-02-24 15:12:34,578 - root - INFO - RAN : b94987498f repo
2017-02-24 15:12:34,578 - root - INFO - key : f0863aff5dd93079da4c36b56d5b81c8
2017-02-24 15:12:34,588 - root - INFO - f0863aff5dd93079da4c36b56d5b81c8 : The Running state is new
2017-02-24 15:12:44,664 - root - INFO - f0863aff5dd93079da4c36b56d5b81c8 : The Running state is ok
2017-02-24 15:12:44,744 - root - INFO - RESULT: ERROR : instance (<class 'JSExceptions.Input'>, ERROR: could not find parent:ovh_germany1 for service:cockpit!cockpitv1, found 0 ((type:input.error))
```
this causes two tests cases
- test_directory_structure.yaml
- test_validate_run_steps.yaml
|
1.0
|
AYS: Services dependences are not resolved - Services dependences between bps are not resolved, we have two bps in the same repo:
bp_related_1.yaml
```
datacenter__ovh_germany2:
location: 'germany'
description: 'ovh_germany2'
cockpit__cockpitv1:
description: 'cockpit v1'
datacenter: 'ovh_germany1'
actions:
- action: 'install'
```
bp_related_2.yaml
```
datacenter__ovh_germany1:
location: 'germany'
description: 'ovh_germany1'
actions:
- action: 'install'
```
if bp_related_1.yaml is executed first then there will be an error saying
```
2017-02-24 15:12:34,082 - root - INFO - * Test case : test_directory_structure.yaml
2017-02-24 15:12:34,083 - root - INFO - * creating new repository .....
2017-02-24 15:12:34,195 - root - INFO - * CREATED : b94987498f repo
2017-02-24 15:12:34,195 - root - INFO - * sending blueprint .....
2017-02-24 15:12:34,216 - root - INFO - CREATED : b4b033c872 blueprint in b94987498f repo
2017-02-24 15:12:34,526 - root - INFO - EXECUTED : b4b033c872 blueprint in b94987498f repo
2017-02-24 15:12:34,578 - root - INFO - RAN : b94987498f repo
2017-02-24 15:12:34,578 - root - INFO - key : f0863aff5dd93079da4c36b56d5b81c8
2017-02-24 15:12:34,588 - root - INFO - f0863aff5dd93079da4c36b56d5b81c8 : The Running state is new
2017-02-24 15:12:44,664 - root - INFO - f0863aff5dd93079da4c36b56d5b81c8 : The Running state is ok
2017-02-24 15:12:44,744 - root - INFO - RESULT: ERROR : instance (<class 'JSExceptions.Input'>, ERROR: could not find parent:ovh_germany1 for service:cockpit!cockpitv1, found 0 ((type:input.error))
```
this causes two tests cases
- test_directory_structure.yaml
- test_validate_run_steps.yaml
|
process
|
ays services dependences are not resolved services dependences between bps are not resolved we have two bps in the same repo bp related yaml datacenter ovh location germany description ovh cockpit description cockpit datacenter ovh actions action install bp related yaml datacenter ovh location germany description ovh actions action install if bp related yaml is executed first then there will be an error saying root info test case test directory structure yaml root info creating new repository root info created repo root info sending blueprint root info created blueprint in repo root info executed blueprint in repo root info ran repo root info key root info the running state is new root info the running state is ok root info result error instance error could not find parent ovh for service cockpit found type input error this causes two tests cases test directory structure yaml test validate run steps yaml
| 1
|
19,473
| 25,783,903,924
|
IssuesEvent
|
2022-12-09 18:25:38
|
w3c/webauthn
|
https://api.github.com/repos/w3c/webauthn
|
closed
|
WebAuthn Level 2 specification page error: ERR_TOO_MANY_REDIRECTS
|
type:process
|
## Description
Hello,
There is an ERR_TOO_MANY_REDIRECTS error that occurs when attempting to access the WebAuthn Level 2 specifications. I tried accessing that page using several different browsers, and they all generated the same "too many redirects" error.
(Chrome, Edge, FireFox, & Safari)
https://www.w3.org/TR/webauthn-2/
www.w3.org redirected you too many times.
ERR_TOO_MANY_REDIRECTS
I hope this is the correct place to report this issue. If not, let me know where I can report this.
Thank you,
Mirko J. Ploch
SurePassID Corporation
|
1.0
|
WebAuthn Level 2 specification page error: ERR_TOO_MANY_REDIRECTS - ## Description
Hello,
There is an ERR_TOO_MANY_REDIRECTS error that occurs when attempting to access the WebAuthn Level 2 specifications. I tried accessing that page using several different browsers, and they all generated the same "too many redirects" error.
(Chrome, Edge, FireFox, & Safari)
https://www.w3.org/TR/webauthn-2/
www.w3.org redirected you too many times.
ERR_TOO_MANY_REDIRECTS
I hope this is the correct place to report this issue. If not, let me know where I can report this.
Thank you,
Mirko J. Ploch
SurePassID Corporation
|
process
|
webauthn level specification page error err too many redirects description hello there is an err too many redirects error that occurs when attempting to access the webauthn level specifications i tried accessing that page using several different browsers and they all generated the same too many redirects error chrome edge firefox safari redirected you too many times err too many redirects i hope this is the correct place to report this issue if not let me know where i can report this thank you mirko j ploch surepassid corporation
| 1
|
1,648
| 4,273,269,620
|
IssuesEvent
|
2016-07-13 16:47:27
|
gcdr/book-project
|
https://api.github.com/repos/gcdr/book-project
|
opened
|
Be Apply Target
|
Processing
|
### Background
* Alias? none.
* I have applied at them, many times. **Perhaps every year,* except 2015.
### Hypothesis
1. _What is Walmart really saying, about me?_ ie. Are they saying, **they fired me** even though, I quit? (Stupid Voc-Rehab, they couldn't even just call past employers, and ask "why fired?" (*) Lazy bums!)
1. **Stating have a Bachelor degree?** KMart, I fibbed just having BSC Associate. This year was the first time, had an interview. (Maybe before 1995?, might of had an interview? Though, I don't remember it, and nothing ever happened, of course.)
1. I have to find a way, **to random poll employers** and call numbers, for references, or what happens. **When someone drops so and such a name?**
1. **Try getting involved with Private Investigators!** Maybe someone online, would give you tips. How to perform reference calls?
|
1.0
|
Be Apply Target - ### Background
* Alias? none.
* I have applied at them, many times. **Perhaps every year,* except 2015.
### Hypothesis
1. _What is Walmart really saying, about me?_ ie. Are they saying, **they fired me** even though, I quit? (Stupid Voc-Rehab, they couldn't even just call past employers, and ask "why fired?" (*) Lazy bums!)
1. **Stating have a Bachelor degree?** KMart, I fibbed just having BSC Associate. This year was the first time, had an interview. (Maybe before 1995?, might of had an interview? Though, I don't remember it, and nothing ever happened, of course.)
1. I have to find a way, **to random poll employers** and call numbers, for references, or what happens. **When someone drops so and such a name?**
1. **Try getting involved with Private Investigators!** Maybe someone online, would give you tips. How to perform reference calls?
|
process
|
be apply target background alias none i have applied at them many times perhaps every year except hypothesis what is walmart really saying about me ie are they saying they fired me even though i quit stupid voc rehab they couldn t even just call past employers and ask why fired lazy bums stating have a bachelor degree kmart i fibbed just having bsc associate this year was the first time had an interview maybe before might of had an interview though i don t remember it and nothing ever happened of course i have to find a way to random poll employers and call numbers for references or what happens when someone drops so and such a name try getting involved with private investigators maybe someone online would give you tips how to perform reference calls
| 1
|
470
| 2,906,118,151
|
IssuesEvent
|
2015-06-19 07:46:40
|
open-app/holodex
|
https://api.github.com/repos/open-app/holodex
|
closed
|
[Process improvement] If a task is blocked, signal this by annotating the issue
|
blocked process
|
emotional reasons are valid blocks
|
1.0
|
[Process improvement] If a task is blocked, signal this by annotating the issue - emotional reasons are valid blocks
|
process
|
if a task is blocked signal this by annotating the issue emotional reasons are valid blocks
| 1
|
21,033
| 27,973,565,314
|
IssuesEvent
|
2023-03-25 09:56:59
|
vnphanquang/svelte-put
|
https://api.github.com/repos/vnphanquang/svelte-put
|
closed
|
[preprocess-inline-svg] Unable to pass "data-inline-src" to a child component
|
op:question scope:preprocess-inline-svg
|
So I'm trying to implement the @svelte-put/preprocess-inline-svg library for static SVG usage in my SvelteKit app.
I would like to use it as a component that has some pre-defined Tailwind style classes applied to it.
```
<script lang="ts">
import { twMerge } from "tailwind-merge";
export let icon = "default";
</script>
<svg class={twMerge(`fill-red-400`, $$props.class)} data-inline-src={icon} />
```
This gives the following warning: "@svelte-put/preprocess-inline-svg: cannot find svg source for {icon} at /src/lib/components/atoms/Icon.svelte"
When I change the code to the following it works perfectly fine:
`<svg... data-inline-src="default" />`
It seems like the "data-inline-src" attribute treats everything that comes after the "=" as a string. Is there any way to use a variable value for this?
|
1.0
|
[preprocess-inline-svg] Unable to pass "data-inline-src" to a child component - So I'm trying to implement the @svelte-put/preprocess-inline-svg library for static SVG usage in my SvelteKit app.
I would like to use it as a component that has some pre-defined Tailwind style classes applied to it.
```
<script lang="ts">
import { twMerge } from "tailwind-merge";
export let icon = "default";
</script>
<svg class={twMerge(`fill-red-400`, $$props.class)} data-inline-src={icon} />
```
This gives the following warning: "@svelte-put/preprocess-inline-svg: cannot find svg source for {icon} at /src/lib/components/atoms/Icon.svelte"
When I change the code to the following it works perfectly fine:
`<svg... data-inline-src="default" />`
It seems like the "data-inline-src" attribute treats everything that comes after the "=" as a string. Is there any way to use a variable value for this?
|
process
|
unable to pass data inline src to a child component so i m trying to implement the svelte put preprocess inline svg library for static svg usage in my sveltekit app i would like to use it as a component that has some pre defined tailwind style classes applied to it import twmerge from tailwind merge export let icon default this gives the following warning svelte put preprocess inline svg cannot find svg source for icon at src lib components atoms icon svelte when i change the code to the following it works perfectly fine it seems like the data inline src attribute treats everything that comes after the as a string is there any way to use a variable value for this
| 1
|
16,346
| 21,003,541,461
|
IssuesEvent
|
2022-03-29 19:56:31
|
acdh-oeaw/abcd-db
|
https://api.github.com/repos/acdh-oeaw/abcd-db
|
closed
|
parse and save text_Stichworte as SkosConcept
|
Data Processing
|
link Event with [vocabs.SkosConcept](https://github.com/acdh-oeaw/abcd-db/blob/24d404dce278ca6ab2dbe0ae06d66d64dd231c99/vocabs/models.py#L66) via [key_word](https://github.com/acdh-oeaw/abcd-db/blob/24d404dce278ca6ab2dbe0ae06d66d64dd231c99/archiv/models.py#L255) property on ingest;
|
1.0
|
parse and save text_Stichworte as SkosConcept - link Event with [vocabs.SkosConcept](https://github.com/acdh-oeaw/abcd-db/blob/24d404dce278ca6ab2dbe0ae06d66d64dd231c99/vocabs/models.py#L66) via [key_word](https://github.com/acdh-oeaw/abcd-db/blob/24d404dce278ca6ab2dbe0ae06d66d64dd231c99/archiv/models.py#L255) property on ingest;
|
process
|
parse and save text stichworte as skosconcept link event with via property on ingest
| 1
|
1,460
| 4,039,476,704
|
IssuesEvent
|
2016-05-20 05:14:05
|
World4Fly/Interface-for-Arduino
|
https://api.github.com/repos/World4Fly/Interface-for-Arduino
|
closed
|
Design the Graphical User Interface
|
process
|
Create a basic layout of the GUI
- [ ] Layout
- [ ] Functionality
- [ ] Style
|
1.0
|
Design the Graphical User Interface - Create a basic layout of the GUI
- [ ] Layout
- [ ] Functionality
- [ ] Style
|
process
|
design the graphical user interface create a basic layout of the gui layout functionality style
| 1
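Every record in this dump is flattened the same way: fifteen fields in a fixed order, separated by lines that begin with `|` (a separator line such as `| 8,582,876,837` or the closing `| 1` carries the next field inline after the bar). As a hedged sketch — the column names below are taken from the schema header, while the splitting heuristic is an assumption about this particular dump and will mis-split if an issue body itself contains lines starting with `|` — the records can be regrouped into dicts like this:

```python
# Columns in the order they appear in each flattened record.
COLUMNS = [
    "row", "id", "type", "created_at", "repo", "repo_url", "action",
    "title", "labels", "body", "label", "text_combine", "class",
    "text", "binary_label",
]

def split_fields(lines):
    """Yield one string per field; multi-line bodies are joined back up."""
    buf = []
    for line in lines:
        s = line.strip()
        if s.startswith("|"):
            yield "\n".join(buf).strip()
            rest = s[1:].strip()
            buf = [rest] if rest else []   # "| 1" starts the next field inline
        else:
            buf.append(s)
    if buf:
        yield "\n".join(buf).strip()

def parse_records(lines):
    """Group non-empty fields into dicts of len(COLUMNS) fields each."""
    fields = [f for f in split_fields(lines) if f]
    n = len(COLUMNS)
    return [dict(zip(COLUMNS, fields[i:i + n]))
            for i in range(0, len(fields) - n + 1, n)]
```

A record's multi-line `body` survives intact because only separator lines end a field; note that `line.strip()` discards indentation inside bodies, which this sketch tolerates.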
|
74,202
| 9,763,976,300
|
IssuesEvent
|
2019-06-05 14:52:13
|
paritytech/substrate
|
https://api.github.com/repos/paritytech/substrate
|
closed
|
Documentation overview
|
F5-documentation 📄
|
This issue contains, in no particular order, a list of entities that need to be documented either inline in the code, or in the wiki.
Many of the terms below are well known in the blockchain world. Still, it may be useful to put an emphasis on our interpretation, i.e. what is different and what are the usage contracts. Of course, we may also link to a more general description somewhere on the net, like Wikipedia or Ethereum wiki.
For example, when describing blocks and transactions we may say, that usually, blocks in classic chains contain transactions that transfer funds from one account to another. But in our case, it's not the only option. Since block internals are fully specified by the chain and not by the Substrate itself, we call them _extrinsics_.
I believe this list may then act as an index to the whole documentation. The thing is that just by reading through this list one should be able to get a relatively good understanding of general concepts and top-level structure of the code base.
**Note:** If you took an item to work on, please edit the entry for others to see. Upon completion, please put a link to the wiki page or GitHub PR where it was done.
## Explain most commonly used terms in detail
- [ ] Authority
- [ ] Backend
- [ ] Block, authoring, syncing
- [ ] [Byzantine Fault Tolerance](https://wiki.parity.io/Byzantine-Fault-Tolerance) (BFT)
- [x] [Consensus](http://wiki.parity.io/Consensus)
- [ ] ed25519
- [ ] Executor
- [x] [Extrinsic](http://wiki.parity.io/Extrinsic) (Transaction)
- [ ] [Full client](http://wiki.parity.io/Substrate-Full-Client)
- [ ] [Genesis](http://wiki.parity.io/Genesis)
- [ ] Gossip
- [ ] Hash
- [ ] [Light client](http://wiki.parity.io/Substrate-Light-Client)
- [ ] [Nominator](https://forum.parity.io/t/five-commonly-used-terms-for-substrate-issue-578/145)
- [ ] [Proof of work](https://forum.parity.io/t/five-commonly-used-terms-for-substrate-issue-578/145)
- [ ] [Proof of stake](https://forum.parity.io/t/five-commonly-used-terms-for-substrate-issue-578/145)
- [ ] Protocol (specialization)
- [x] [P2P](http://wiki.parity.io/P2P)
- [ ] [Runtime](http://wiki.parity.io/Runtime)
- [ ] Session (PR https://github.com/paritytech/wiki/pull/271 by @ltfschoen requires review)
- [ ] [Staking](https://forum.parity.io/t/five-commonly-used-terms-for-substrate-issue-578/145)
- [ ] State Database
- [ ] State Transition Function (STF)
- [ ] Substrate Runtime Module Library (SRML)
- [ ] [Parity Substrate](http://wiki.parity.io/Parity-Substrate) Home
- [ ] Swarm
- [ ] Transaction Pool
- [ ] Trie (Merkle Tree, Patricia Tree)
- [ ] [Validator](https://forum.parity.io/t/five-commonly-used-terms-for-substrate-issue-578/145)
- [ ] WebAssembly (WASM)
## Various entry points and concepts to be aware of
- [ ] [`impl_stubs!`](http://wiki.parity.io/impl_stubs)
- [x] `construct_runtime!` - @shawntabrizi - https://github.com/paritytech/substrate/issues/1288
- [ ] `decl_apis!`
- [ ] [`decl_module!`](https://github.com/paritytech/wiki/pull/272) (WIP by @0x7CFE) - @shawntabrizi - https://github.com/paritytech/substrate/issues/1288
- [ ] `decl_event!`
- [ ] [`decl_storage!`](http://wiki.parity.io/decl_storage) - @shawntabrizi - https://github.com/paritytech/substrate/issues/1288
- [ ] `impl_outer_origin!`
- [ ] `impl_outer_dispatch!`
- [ ] `impl_outer_event!`
- [ ] `impl_outer_inherent!`
- [ ] `impl_outer_config!`
- [ ] `impl_outer_log!`
- [ ] `Call` (module level)
- [ ] `PrivCall` (module level)
- [ ] `Call` (dispatch level)
- [ ] `PrivCall` (dispatch level)
- [ ] `aux`
## SRML modules
- [ ] `example` (see this first)
- [x] [`assets`](https://hackmd.io/nr6kPD2sR4urmljtvHs0CQ?view#Assets-Module) - @ltfschoen
- [x] `balances`
- [ ] `consensus`
- [ ] `contract`
- [x] [`council`](https://hackmd.io/nr6kPD2sR4urmljtvHs0CQ?view#Council-Module) - @ltfschoen
- [x] [`democracy`](https://hackmd.io/nr6kPD2sR4urmljtvHs0CQ?view#Democracy-Module) - @ltfschoen
- [ ] [`session`](https://hackmd.io/nr6kPD2sR4urmljtvHs0CQ?view#Session-Module) - @ltfschoen
- [ ] [`staking`](https://hackmd.io/nr6kPD2sR4urmljtvHs0CQ?view#Staking-Module) - @ltfschoen
- [ ] `timestamp`
- [ ] `treasury`
## Important types and traits
- [ ] `*::Trait`
- [ ] `*::Service`
- [ ] `HasPublicAux`
- [ ] `bft::Environment`
- [ ] `bft::Proposer`
- [ ] `BlockBuilder`
- [ ] `Specialization<B>`
- [ ] `Encode` / `Decode`
- [ ] `Executive`
## Typical module structure and module roles
- [ ] `api`
- [ ] `cli`
- [ ] `consensus`
- [ ] `executor`
- [ ] `network`
- [ ] `primitives`
- [ ] `runtime`
- [ ] `service`
## Various subjects that probably need to be explained
- [x] Substrate philosophy, general concepts, etc.
- [x] Overall architecture, higher level description, modules and their interaction
- [ ] Where are the entry points, how to start reading the source
- [ ] How runtime interacts with the outer code
- [ ] How to decide, what should be implemented in runtime and what not
- [ ] What parts are optional and what parts are expected to be implemented by the user
- [ ] Asynchronous model, futures, tasks, etc as it is used in the Substrate
- [ ] Life of a transaction from network to storage, how a block is imported
|
1.0
|
Documentation overview - This issue contains, in no particular order, a list of entities that need to be documented either inline in the code, or in the wiki.
Many of the terms below are well known in the blockchain world. Still, it may be useful to put an emphasis on our interpretation, i.e. what is different and what are the usage contracts. Of course, we may also link to a more general description somewhere on the net, like Wikipedia or Ethereum wiki.
For example, when describing blocks and transactions we may say, that usually, blocks in classic chains contain transactions that transfer funds from one account to another. But in our case, it's not the only option. Since block internals are fully specified by the chain and not by the Substrate itself, we call them _extrinsics_.
I believe this list may then act as an index to the whole documentation. The thing is that just by reading through this list one should be able to get a relatively good understanding of general concepts and top-level structure of the code base.
**Note:** If you took an item to work on, please edit the entry for others to see. Upon completion, please put a link to the wiki page or GitHub PR where it was done.
## Explain most commonly used terms in detail
- [ ] Authority
- [ ] Backend
- [ ] Block, authoring, syncing
- [ ] [Byzantine Fault Tolerance](https://wiki.parity.io/Byzantine-Fault-Tolerance) (BFT)
- [x] [Consensus](http://wiki.parity.io/Consensus)
- [ ] ed25519
- [ ] Executor
- [x] [Extrinsic](http://wiki.parity.io/Extrinsic) (Transaction)
- [ ] [Full client](http://wiki.parity.io/Substrate-Full-Client)
- [ ] [Genesis](http://wiki.parity.io/Genesis)
- [ ] Gossip
- [ ] Hash
- [ ] [Light client](http://wiki.parity.io/Substrate-Light-Client)
- [ ] [Nominator](https://forum.parity.io/t/five-commonly-used-terms-for-substrate-issue-578/145)
- [ ] [Proof of work](https://forum.parity.io/t/five-commonly-used-terms-for-substrate-issue-578/145)
- [ ] [Proof of stake](https://forum.parity.io/t/five-commonly-used-terms-for-substrate-issue-578/145)
- [ ] Protocol (specialization)
- [x] [P2P](http://wiki.parity.io/P2P)
- [ ] [Runtime](http://wiki.parity.io/Runtime)
- [ ] Session (PR https://github.com/paritytech/wiki/pull/271 by @ltfschoen requires review)
- [ ] [Staking](https://forum.parity.io/t/five-commonly-used-terms-for-substrate-issue-578/145)
- [ ] State Database
- [ ] State Transition Function (STF)
- [ ] Substrate Runtime Module Library (SRML)
- [ ] [Parity Substrate](http://wiki.parity.io/Parity-Substrate) Home
- [ ] Swarm
- [ ] Transaction Pool
- [ ] Trie (Merkle Tree, Patricia Tree)
- [ ] [Validator](https://forum.parity.io/t/five-commonly-used-terms-for-substrate-issue-578/145)
- [ ] WebAssembly (WASM)
## Various entry points and concepts to be aware of
- [ ] [`impl_stubs!`](http://wiki.parity.io/impl_stubs)
- [x] `construct_runtime!` - @shawntabrizi - https://github.com/paritytech/substrate/issues/1288
- [ ] `decl_apis!`
- [ ] [`decl_module!`](https://github.com/paritytech/wiki/pull/272) (WIP by @0x7CFE) - @shawntabrizi - https://github.com/paritytech/substrate/issues/1288
- [ ] `decl_event!`
- [ ] [`decl_storage!`](http://wiki.parity.io/decl_storage) - @shawntabrizi - https://github.com/paritytech/substrate/issues/1288
- [ ] `impl_outer_origin!`
- [ ] `impl_outer_dispatch!`
- [ ] `impl_outer_event!`
- [ ] `impl_outer_inherent!`
- [ ] `impl_outer_config!`
- [ ] `impl_outer_log!`
- [ ] `Call` (module level)
- [ ] `PrivCall` (module level)
- [ ] `Call` (dispatch level)
- [ ] `PrivCall` (dispatch level)
- [ ] `aux`
## SRML modules
- [ ] `example` (see this first)
- [x] [`assets`](https://hackmd.io/nr6kPD2sR4urmljtvHs0CQ?view#Assets-Module) - @ltfschoen
- [x] `balances`
- [ ] `consensus`
- [ ] `contract`
- [x] [`council`](https://hackmd.io/nr6kPD2sR4urmljtvHs0CQ?view#Council-Module) - @ltfschoen
- [x] [`democracy`](https://hackmd.io/nr6kPD2sR4urmljtvHs0CQ?view#Democracy-Module) - @ltfschoen
- [ ] [`session`](https://hackmd.io/nr6kPD2sR4urmljtvHs0CQ?view#Session-Module) - @ltfschoen
- [ ] [`staking`](https://hackmd.io/nr6kPD2sR4urmljtvHs0CQ?view#Staking-Module) - @ltfschoen
- [ ] `timestamp`
- [ ] `treasury`
## Important types and traits
- [ ] `*::Trait`
- [ ] `*::Service`
- [ ] `HasPublicAux`
- [ ] `bft::Environment`
- [ ] `bft::Proposer`
- [ ] `BlockBuilder`
- [ ] `Specialization<B>`
- [ ] `Encode` / `Decode`
- [ ] `Executive`
## Typical module structure and module roles
- [ ] `api`
- [ ] `cli`
- [ ] `consensus`
- [ ] `executor`
- [ ] `network`
- [ ] `primitives`
- [ ] `runtime`
- [ ] `service`
## Various subjects that probably need to be explained
- [x] Substrate philosophy, general concepts, etc.
- [x] Overall architecture, higher level description, modules and their interaction
- [ ] Where are the entry points, how to start reading the source
- [ ] How runtime interacts with the outer code
- [ ] How to decide what should be implemented in runtime and what not
- [ ] What parts are optional and what parts are expected to be implemented by the user
- [ ] Asynchronous model, futures, tasks, etc as it is used in the Substrate
- [ ] Life of a transaction from network to storage, how a block is imported
|
non_process
|
documentation overview this issue contains in no particular order a list of entities that need to be documented either inline in the code or in the wiki many of the terms below are well known in the blockchain world still it may be useful to put an emphasis on our interpretation i e what is different and what are the usage contracts of course we may also link to a more general description somewhere on the net like wikipedia or ethereum wiki for example when describing blocks and transactions we may say that usually blocks in classic chains contain transactions that transfer funds from one account to another but in our case it s not the only option since block internals are fully specified by the chain and not by the substrate itself we call them extrinsics i believe this list may then act as an index to the whole documentation the thing is that just by reading through this list one should be able to get a relatively good understanding of general concepts and top level structure of the code base note if you took an item to work on please edit the entry for others to see upon completion please put a link to the wiki page or github pr where it was done explain most commonly used terms in detail authority backend block authoring syncing bft executor transaction gossip hash protocol specialization session pr by ltfschoen requires review state database state transition function stf substrate runtime module library srml home swarm transaction pool trie merkle tree patricia tree webassembly wasm various entry points and concepts to be aware of construct runtime shawntabrizi decl apis wip by shawntabrizi decl event shawntabrizi impl outer origin impl outer dispatch impl outer event impl outer inherent impl outer config impl outer log call module level privcall module level call dispatch level privcall dispatch level aux srml modules example see this first ltfschoen balances consensus contract ltfschoen ltfschoen ltfschoen ltfschoen timestamp treasury important types and 
traits trait service haspublicaux bft environment bft proposer blockbuilder specialization encode decode executive typical module structure and module roles api cli consensus executor network primitives runtime service various subjects that probably need to be explained substrate philosophy general concepts etc overall architecture higher level description modules and their interaction where are the entry points how to start reading the source how runtime interacts with the outer code how to decide what should be implemented in runtime and what not what parts are optional and what parts are expected to be implemented by the user asynchronous model futures tasks etc as it is used in the substrate life of a transaction from network to storage how a block is imported
| 0
|
174,794
| 21,300,469,889
|
IssuesEvent
|
2022-04-15 01:56:54
|
jinuem/reactSamplePoc
|
https://api.github.com/repos/jinuem/reactSamplePoc
|
opened
|
CVE-2022-1243 (Medium) detected in urijs-1.19.1.tgz
|
security vulnerability
|
## CVE-2022-1243 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urijs-1.19.1.tgz</b></p></summary>
<p>URI.js is a Javascript library for working with URLs.</p>
<p>Library home page: <a href="https://registry.npmjs.org/urijs/-/urijs-1.19.1.tgz">https://registry.npmjs.org/urijs/-/urijs-1.19.1.tgz</a></p>
<p>Path to dependency file: /reactSamplePoc/package.json</p>
<p>Path to vulnerable library: /node_modules/urijs/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.10.tgz (Root Library)
- sw-precache-webpack-plugin-0.11.3.tgz
- sw-precache-5.2.1.tgz
- dom-urls-1.1.0.tgz
- :x: **urijs-1.19.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
CRHTLF can lead to invalid protocol extraction potentially leading to XSS in GitHub repository medialize/uri.js prior to 1.19.11.
<p>Publish Date: 2022-04-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1243>CVE-2022-1243</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/8c5afc47-1553-4eba-a98e-024e4cc3dfb7/">https://huntr.dev/bounties/8c5afc47-1553-4eba-a98e-024e4cc3dfb7/</a></p>
<p>Release Date: 2022-04-05</p>
<p>Fix Resolution (urijs): 1.19.11</p>
<p>Direct dependency fix Resolution (react-scripts): 1.0.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-1243 (Medium) detected in urijs-1.19.1.tgz - ## CVE-2022-1243 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urijs-1.19.1.tgz</b></p></summary>
<p>URI.js is a Javascript library for working with URLs.</p>
<p>Library home page: <a href="https://registry.npmjs.org/urijs/-/urijs-1.19.1.tgz">https://registry.npmjs.org/urijs/-/urijs-1.19.1.tgz</a></p>
<p>Path to dependency file: /reactSamplePoc/package.json</p>
<p>Path to vulnerable library: /node_modules/urijs/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.10.tgz (Root Library)
- sw-precache-webpack-plugin-0.11.3.tgz
- sw-precache-5.2.1.tgz
- dom-urls-1.1.0.tgz
- :x: **urijs-1.19.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
CRHTLF can lead to invalid protocol extraction potentially leading to XSS in GitHub repository medialize/uri.js prior to 1.19.11.
<p>Publish Date: 2022-04-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1243>CVE-2022-1243</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/8c5afc47-1553-4eba-a98e-024e4cc3dfb7/">https://huntr.dev/bounties/8c5afc47-1553-4eba-a98e-024e4cc3dfb7/</a></p>
<p>Release Date: 2022-04-05</p>
<p>Fix Resolution (urijs): 1.19.11</p>
<p>Direct dependency fix Resolution (react-scripts): 1.0.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in urijs tgz cve medium severity vulnerability vulnerable library urijs tgz uri js is a javascript library for working with urls library home page a href path to dependency file reactsamplepoc package json path to vulnerable library node modules urijs package json dependency hierarchy react scripts tgz root library sw precache webpack plugin tgz sw precache tgz dom urls tgz x urijs tgz vulnerable library vulnerability details crhtlf can lead to invalid protocol extraction potentially leading to xss in github repository medialize uri js prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution urijs direct dependency fix resolution react scripts step up your open source security game with whitesource
| 0
|
10,613
| 13,438,168,945
|
IssuesEvent
|
2020-09-07 17:23:58
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
opened
|
New `merge` remap function
|
domain: mapping domain: processing type: feature
|
The `merge` remap function would merge 2 objects together.
## Examples
For all examples we'll use this event:
```js
{
  // ...
  "parent1": {
    "key1": "val1",
    "child": {
      "grandchild1": "val1"
    }
  },
  "parent2": {
    "key2": "val2",
    "child": {
      "grandchild2": "val2"
    }
  }
  // ...
}
```
### Shallow merge
```
merge(.parent1, .parent2)
del(.parent2)
```
Results in:
```js
{
  // ...
  "parent1": {
    "key1": "val1",
    "key2": "val2",
    "child": {
      "grandchild2": "val2"
    }
  }
  // ...
}
```
### Deep merge
```
merge(.parent1, .parent2, true)
del(.parent2)
```
Results in:
```js
{
  // ...
  "parent1": {
    "key1": "val1",
    "key2": "val2",
    "child": {
      "grandchild1": "val1",
      "grandchild2": "val2"
    }
  }
  // ...
}
```
### Root
For clarity, users can merge with the root object with the `.` path:
```
merge(., .parent2)
del(.parent2)
```
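As a sanity check on the semantics above, here is a minimal Python model of the proposal (a hypothetical sketch — the function name and `deep` flag mirror this issue, not Vector's actual implementation):

```python
# Hypothetical model of the proposed `merge` remap semantics.
# Shallow merge replaces colliding keys wholesale; deep merge
# recurses into nested maps and combines them.
def merge(dst, src, deep=False):
    for key, val in src.items():
        if deep and isinstance(val, dict) and isinstance(dst.get(key), dict):
            merge(dst[key], val, deep=True)
        else:
            dst[key] = val
    return dst

parent2 = {"key2": "val2", "child": {"grandchild2": "val2"}}

shallow = merge({"key1": "val1", "child": {"grandchild1": "val1"}}, parent2)
deep = merge({"key1": "val1", "child": {"grandchild1": "val1"}}, parent2, deep=True)
```

`shallow["child"]` ends up as `{"grandchild2": "val2"}` (the sub-map is replaced wholesale), while `deep["child"]` keeps both grandchildren, matching the two result examples above.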
|
1.0
|
New `merge` remap function - The `merge` remap function would merge 2 objects together.
## Examples
For all examples we'll use this event:
```js
{
  // ...
  "parent1": {
    "key1": "val1",
    "child": {
      "grandchild1": "val1"
    }
  },
  "parent2": {
    "key2": "val2",
    "child": {
      "grandchild2": "val2"
    }
  }
  // ...
}
```
### Shallow merge
```
merge(.parent1, .parent2)
del(.parent2)
```
Results in:
```js
{
  // ...
  "parent1": {
    "key1": "val1",
    "key2": "val2",
    "child": {
      "grandchild2": "val2"
    }
  }
  // ...
}
```
### Deep merge
```
merge(.parent1, .parent2, true)
del(.parent2)
```
Results in:
```js
{
  // ...
  "parent1": {
    "key1": "val1",
    "key2": "val2",
    "child": {
      "grandchild1": "val1",
      "grandchild2": "val2"
    }
  }
  // ...
}
```
### Root
For clarity, users can merge with the root object with the `.` path:
```
merge(., .parent2)
del(.parent2)
```
|
process
|
new merge remap function the merge remap function would merge objects together examples for all examples we ll use this event js child child shallow merge merge del results in js child deep merge merge true del results in js child root for clarity users can merge with the root object with the path merge del
| 1
|
12,862
| 15,252,873,646
|
IssuesEvent
|
2021-02-20 05:02:29
|
gfx-rs/naga
|
https://api.github.com/repos/gfx-rs/naga
|
closed
|
SPIR-V produces Vec4 * Vec3 expressions
|
area: processing kind: bug
|
This is from shader 228 in Dota (see #409)
>Entry point main at Vertex is invalid:
Type resolution of [189] failed
Incompatible operands: Vector { size: Quad, kind: Float, width: 4 } x Vector { size: Tri, kind: Float, width: 4 }
|
1.0
|
SPIR-V produces Vec4 * Vec3 expressions - This is from shader 228 in Dota (see #409)
>Entry point main at Vertex is invalid:
Type resolution of [189] failed
Incompatible operands: Vector { size: Quad, kind: Float, width: 4 } x Vector { size: Tri, kind: Float, width: 4 }
|
process
|
spir v produces expressions this is from shader in dota see entry point main at vertex is invalid type resolution of failed incompatible operands vector size quad kind float width x vector size tri kind float width
| 1
|
20,141
| 26,688,503,898
|
IssuesEvent
|
2023-01-27 01:01:50
|
opensearch-project/data-prepper
|
https://api.github.com/repos/opensearch-project/data-prepper
|
closed
|
Allow configuring values in otel_trace_raw
|
enhancement plugin - processor
|
**Is your feature request related to a problem? Please describe.**
The `otel_trace_raw` processor has some hard-coded values. Make these configurable.
**Describe the solution you'd like**
Add two new configurations:
```
otel_trace_raw:
  trace_group_cache_ttl: 10s
  trace_group_cache_max_size: 1000000
```
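For illustration, the two settings bound the cache in different dimensions: `trace_group_cache_max_size` caps the entry count, `trace_group_cache_ttl` caps entry age. A hypothetical Python sketch of a cache with both bounds (illustrative only — class and method names are invented, not Data Prepper's internals):

```python
import time
from collections import OrderedDict

class TraceGroupCache:
    """Trace-group cache bounded by both entry count and entry age."""

    def __init__(self, ttl_seconds=10, max_size=1_000_000):
        self.ttl = ttl_seconds
        self.max_size = max_size
        self._entries = OrderedDict()  # trace_id -> (group, inserted_at)

    def put(self, trace_id, group, now=None):
        now = time.monotonic() if now is None else now
        self._entries[trace_id] = (group, now)
        self._entries.move_to_end(trace_id)      # treat as most recent
        while len(self._entries) > self.max_size:
            self._entries.popitem(last=False)    # evict the oldest entry

    def get(self, trace_id, now=None):
        now = time.monotonic() if now is None else now
        item = self._entries.get(trace_id)
        if item is None or now - item[1] > self.ttl:
            return None                          # missing or past its TTL
        return item[0]
```

A production cache would typically also evict expired entries in the background; this sketch only checks expiry on read.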
|
1.0
|
Allow configuring values in otel_trace_raw - **Is your feature request related to a problem? Please describe.**
The `otel_trace_raw` processor has some hard-coded values. Make these configurable.
**Describe the solution you'd like**
Add two new configurations:
```
otel_trace_raw:
  trace_group_cache_ttl: 10s
  trace_group_cache_max_size: 1000000
```
|
process
|
allow configuring values in otel trace raw is your feature request related to a problem please describe the otel trace raw processor has some hard coded values make these configurable describe the solution you d like add two new configurations otel trace raw trace group cache ttl trace group cache max size
| 1
|
131,931
| 18,442,475,565
|
IssuesEvent
|
2021-10-14 19:55:13
|
VSCodeVim/Vim
|
https://api.github.com/repos/VSCodeVim/Vim
|
closed
|
das/dis does removes more than a sentence
|
status/by-design
|
<!--
For questions, ask us on [Slack](https://vscodevim-slackin.azurewebsites.net/) 👫.
DONT CHANGE ANYTHING UNTIL THE -----. Thanks!
-->
* Click *thumbs-up* 👍 on this issue if you want it!
* Click *confused* 😕 on this issue if not having it makes VSCodeVim unusable.
The VSCodeVim team prioritizes issues based on reaction count.
--------
**BUG REPORT ** (choose one):
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can. If you leave out
information, we can't help you as well.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
**Environment**:
<!--
Please ensure you are on the latest VSCode + VSCodeVim
-->
- **VSCode Version**: 1.14.0
- **VsCodeVim Version**: 0.9.0
- **OS**: windows7
**What happened**:
<!--
das
-->
**What did you expect to happen**:
removing of the sentence
**How to reproduce it**:
```
if something:
raise Exception('something')
# add data to database
for _ind, _row in df_meta.iterrows():
yline = _row['YLine']
try:
# Cast to integers. If NaN is in the list, an exception will be
# thrown and caught below
```
Putting the cursor on `Cast` and pressing `das` (or `dis`) removes everything from the beginning until `If NaN`, that is, everything from the blank line until the end of the sentence.
|
1.0
|
das/dis does removes more than a sentence - <!--
For questions, ask us on [Slack](https://vscodevim-slackin.azurewebsites.net/) 👫.
DONT CHANGE ANYTHING UNTIL THE -----. Thanks!
-->
* Click *thumbs-up* 👍 on this issue if you want it!
* Click *confused* 😕 on this issue if not having it makes VSCodeVim unusable.
The VSCodeVim team prioritizes issues based on reaction count.
--------
**BUG REPORT ** (choose one):
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can. If you leave out
information, we can't help you as well.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
**Environment**:
<!--
Please ensure you are on the latest VSCode + VSCodeVim
-->
- **VSCode Version**: 1.14.0
- **VsCodeVim Version**: 0.9.0
- **OS**: windows7
**What happened**:
<!--
das
-->
**What did you expect to happen**:
removing of the sentence
**How to reproduce it**:
```
if something:
raise Exception('something')
# add data to database
for _ind, _row in df_meta.iterrows():
yline = _row['YLine']
try:
# Cast to integers. If NaN is in the list, an exception will be
# thrown and caught below
```
Putting the cursor on `Cast` and pressing `das` (or `dis`) removes everything from the beginning until `If NaN`, that is, everything from the blank line until the end of the sentence.
|
non_process
|
das dis does removes more than a sentence for questions ask us on 👫 dont change anything until the thanks click thumbs up 👍 on this issue if you want it click confused 😕 on this issue if not having it makes vscodevim unusable the vscodevim team prioritizes issues based on reaction count bug report choose one if this is a bug report please fill in as much of the template below as you can if you leave out information we can t help you as well if this is a feature request please describe in detail the feature behavior change you d like to see if we can t reproduce a bug or think a feature already exists we might close your issue if we re wrong please feel free to reopen it and explain why environment please ensure you are on the latest vscode vscodevim vscode version vscodevim version os what happened das what did you expect to happen removing of the sentence how to reproduce it if something raise exception something add data to database for ind row in df meta iterrows yline row try cast to integers if nan is in the list an exception will be thrown and caught below putting the cursor on cast and pressing das or dis removes everything from the beginning until if nan that is everything from the whiteline until the end of the sentence
| 0
|
739,323
| 25,591,596,810
|
IssuesEvent
|
2022-12-01 13:23:11
|
redhat-developer/service-binding-operator
|
https://api.github.com/repos/redhat-developer/service-binding-operator
|
closed
|
Wish to bind to 'sso' / 'keycloak'
|
kind/question priority/low
|
Do we have samples which show how to bind an application (preferably a Java application) to Keycloak?
|
1.0
|
Wish to bind to 'sso' / 'keycloak' - Do we have samples which show how to bind an application (preferably a Java application) to Keycloak?
|
non_process
|
wish to bind to sso keycloak do we have samples which show how to bind an application preferably a java application to keycloak
| 0
|
2,013
| 4,836,989,099
|
IssuesEvent
|
2016-11-08 21:14:52
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Process.GetCurrentProcess().MainModule.FileName returns incorrect path in linux
|
area-System.Diagnostics.Process os-linux up for grabs
|
### Steps to reproduce
1. Use dotnet executable from a path with space. E.g. `/tmp/dotnet 2/dotnet-cli`
2. Use following code:
```
using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(GetCurrentProcessFileName());
    }

    public static string GetCurrentProcessFileName()
    {
        return Process.GetCurrentProcess().MainModule.FileName;
    }
}
```
3. Compile and run the app `/tmp/dotnet\ 2/dotnet-cli/dotnet /tmp/app.dll`
### Expected
App should print `/tmp/dotnet 2/dotnet-cli/dotnet`
### Actual
App prints `/tmp/dotnet`
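A plausible cause (an assumption, not verified against the corefx source here) is that the Linux implementation recovers the path from a `/proc` entry and splits on whitespace. `/proc/[pid]/cmdline` is NUL-delimited rather than space-delimited, so naive space-splitting truncates any path containing a space:

```python
# Hypothetical illustration of the truncation. /proc/self/cmdline holds
# NUL-separated arguments; a space inside argv[0] is ordinary data.
raw = b"/tmp/dotnet 2/dotnet-cli/dotnet\x00/tmp/app.dll\x00"

naive = raw.decode().split(" ")[0]        # wrong: stops at the space
correct = raw.split(b"\x00")[0].decode()  # right: split on NUL bytes
```

`naive` comes out as `/tmp/dotnet`, matching the buggy output above, while `correct` recovers the full `/tmp/dotnet 2/dotnet-cli/dotnet`.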
|
1.0
|
Process.GetCurrentProcess().MainModule.FileName returns incorrect path in linux - ### Steps to reproduce
1. Use dotnet executable from a path with space. E.g. `/tmp/dotnet 2/dotnet-cli`
2. Use following code:
```
using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(GetCurrentProcessFileName());
    }

    public static string GetCurrentProcessFileName()
    {
        return Process.GetCurrentProcess().MainModule.FileName;
    }
}
```
3. Compile and run the app `/tmp/dotnet\ 2/dotnet-cli/dotnet /tmp/app.dll`
### Expected
App should print `/tmp/dotnet 2/dotnet-cli/dotnet`
### Actual
App prints `/tmp/dotnet`
|
process
|
process getcurrentprocess mainmodule filename returns incorrect path in linux steps to reproduce use dotnet executable from a path with space e g tmp dotnet dotnet cli use following code using system using system diagnostics class program static void main string args console writeline getcurrentprocessfilename public static string getcurrentprocessfilename return process getcurrentprocess mainmodule filename compile and run the app tmp dotnet dotnet cli dotnet tmp app dll expected app should print tmp dotnet dotnet cli dotnet actual app prints tmp dotnet
| 1
|
88,282
| 8,137,001,814
|
IssuesEvent
|
2018-08-20 10:13:29
|
scalda/Foolcraft_3
|
https://api.github.com/repos/scalda/Foolcraft_3
|
closed
|
Game crashes regularly with Ticking block entity error
|
Close in 7 days update to Beat 1.5 and test please
|
<!-- Thank you for filing a bug report. Please be make sure to fill out the required information specified in the template. -->
<!-- Do not delete the template, failure to fill in the template will result in the issue being marked "invalid" -->
<!-- Also be sure to include an appropriate title for your issue!
<!-->
<!-- MODPACK INFORMATION - Please check the fitting checkboxes.
<!-- To tick the checkboxes replace the "[ ]" with "[x]". -->
<!-- PLEASE FILL IN THE CURRENT PACK VERSION -->
## Modpack Information
<!-->
* Current Pack Version:
- [x] I am running FoolCraft via Curse/Twitch Launcher
- [x] I can reproduce this issue consistently
- [x] In single player
- [ ] In multiplayer
- [x] I have searched for this issue previously and it was either (1) not previously reported, or (2) previously fixed and I'm having the same problem.
- [x] I am crashing and can provide my crash report(s)
- [x] I have not altered the modpack [if you have, note the removed mods/added mods/changes in Additional Information]
<!-->
<!-- If your issue matches AT LEAST 4 of the criteria above or 1 of the below, continue. -->
<!-- ISSUE DESCRIPTION - Please describe the issue in detail. -->
## Issue Description
Trying to play the new 1.4.5 modpack, the game either freezes or crashes within about 10 minutes
<!-- REPRODUCE STEPS - Please describe how I can reproduce this issue below ## Reproduce Steps. -->
## Reproduce Steps
Log in to my single-player world. Do, anything for 2-15 minutes. Get a crash or freeze
<!-- ADDITIONAL INFORMATION - Please post any crash reports, screenshots, etc. here. (use Pastebin or Imgur accordingly) -->
<!-- Please put crash reports onto pastebin, -->
<!-- You can do so by going to Pastebin.com and copying and pasting the crashlog onto there and then clicking "Create New Paste" -->
<!-- And then copying the link it puts you on to the Additional Information section-->
<!-->
<!-- For screenshots please use Imgur, -->
<!-- You can do so by going to Imgur.com and dragging the images onto there. -->
<!-- When they're done uploading you can copy the link to the image / album to the Additional Information section-->
## Additional Information
https://pastebin.com/7xCYR66z
|
1.0
|
Game crashes regularly with Ticking block entity error - <!-- Thank you for filing a bug report. Please be make sure to fill out the required information specified in the template. -->
<!-- Do not delete the template, failure to fill in the template will result in the issue being marked "invalid" -->
<!-- Also be sure to include an appropriate title for your issue!
<!-->
<!-- MODPACK INFORMATION - Please check the fitting checkboxes.
<!-- To tick the checkboxes replace the "[ ]" with "[x]". -->
<!-- PLEASE FILL IN THE CURRENT PACK VERSION -->
## Modpack Information
<!-->
* Current Pack Version:
- [x] I am running FoolCraft via Curse/Twitch Launcher
- [x] I can reproduce this issue consistently
- [x] In single player
- [ ] In multiplayer
- [x] I have searched for this issue previously and it was either (1) not previously reported, or (2) previously fixed and I'm having the same problem.
- [x] I am crashing and can provide my crash report(s)
- [x] I have not altered the modpack [if you have, note the removed mods/added mods/changes in Additional Information]
<!-->
<!-- If your issue matches AT LEAST 4 of the criteria above or 1 of the below, continue. -->
<!-- ISSUE DESCRIPTION - Please describe the issue in detail. -->
## Issue Description
Trying to play the new 1.4.5 modpack, the game either freezes or crashes within about 10 minutes
<!-- REPRODUCE STEPS - Please describe how I can reproduce this issue below ## Reproduce Steps. -->
## Reproduce Steps
Log in to my single-player world. Do, anything for 2-15 minutes. Get a crash or freeze
<!-- ADDITIONAL INFORMATION - Please post any crash reports, screenshots, etc. here. (use Pastebin or Imgur accordingly) -->
<!-- Please put crash reports onto pastebin, -->
<!-- You can do so by going to Pastebin.com and copying and pasting the crashlog onto there and then clicking "Create New Paste" -->
<!-- And then copying the link it puts you on to the Additional Information section-->
<!-->
<!-- For screenshots please use Imgur, -->
<!-- You can do so by going to Imgur.com and dragging the images onto there. -->
<!-- When they're done uploading you can copy the link to the image / album to the Additional Information section-->
## Additional Information
https://pastebin.com/7xCYR66z
|
non_process
|
game crashes regularly with ticking block entity error also be sure to include a appropriate title for your issue modpack information please check the fitting checkboxes modpack information current pack version i am running foolcraft via curse twitch launcher i can reproduce this issue consistently in single player in multiplayer i have searched for this issue previously and it was either not previously reported or previously fixed and i m having the same problem i am crashing and can provide my crash report s i have not altered the modpack issue description trying to play the new modpack the game either freezes or crashes within about minutes reproduce steps log in to my single player world do anything for minutes get a crash or freeze additional information
| 0
|
22,551
| 31,759,130,569
|
IssuesEvent
|
2023-09-12 02:41:33
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
@requestly/requestly-core 1.0.4 has 19 guarddog issues
|
npm-install-script npm-silent-process-execution
|
```{"npm-install-script":[{"code":" \"prepare\": \"cd ..; npm run build:main\"","location":"package/common/rule-processor/node_modules/acorn/package.json:45","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/common/rule-processor/node_modules/cliui/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\"","location":"package/common/rule-processor/node_modules/colorette/package.json:33","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc\",","location":"package/common/rule-processor/node_modules/get-caller-file/package.json:15","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/common/rule-processor/node_modules/isbinaryfile/package.json:56","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node src/build.js \u0026\u0026 runmd --output README.md src/README_js.md\",","location":"package/common/rule-processor/node_modules/mime/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"postinstall\": \"lerna bootstrap\",","location":"package/common/rule-processor/node_modules/resolve/test/resolver/multirepo/package.json:8","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build \u0026\u0026 husky install\",","location":"package/common/rule-processor/node_modules/schema-utils/package.json:40","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run 
build\",","location":"package/common/rule-processor/node_modules/terser/package.json:72","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install \u0026\u0026 npm run build\",","location":"package/common/rule-processor/node_modules/terser-webpack-plugin/package.json:40","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"gulp build-eslint-rules\",","location":"package/common/rule-processor/node_modules/typescript/package.json:101","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/common/rule-processor/node_modules/webpack/node_modules/enhanced-resolve/package.json:62","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\",","location":"package/common/rule-processor/node_modules/webpack/package.json:160","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\"","location":"package/common/rule-processor/node_modules/webpack-cli/node_modules/webpack-merge/package.json:11","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/common/rule-processor/node_modules/y18n/package.json:41","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\",","location":"package/common/rule-processor/node_modules/yargs/package.json:86","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/common/rule-processor/node_modules/yargs-parser/package.json:30","message":"The package.json has a script 
automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/package.json:12","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" const child = spawn(process.argv[0], [path.resolve(__dirname, '../lib/detached.js'), tmpFile.name], {\n detached: true,\n stdio: 'ignore'\n })","location":"package/common/rule-processor/node_modules/karma/lib/server.js:423","message":"This package is silently executing another executable"}]}```
|
1.0
|
@requestly/requestly-core 1.0.4 has 19 guarddog issues - ```{"npm-install-script":[{"code":" \"prepare\": \"cd ..; npm run build:main\"","location":"package/common/rule-processor/node_modules/acorn/package.json:45","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/common/rule-processor/node_modules/cliui/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\"","location":"package/common/rule-processor/node_modules/colorette/package.json:33","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc\",","location":"package/common/rule-processor/node_modules/get-caller-file/package.json:15","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/common/rule-processor/node_modules/isbinaryfile/package.json:56","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node src/build.js \u0026\u0026 runmd --output README.md src/README_js.md\",","location":"package/common/rule-processor/node_modules/mime/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"postinstall\": \"lerna bootstrap\",","location":"package/common/rule-processor/node_modules/resolve/test/resolver/multirepo/package.json:8","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build \u0026\u0026 husky install\",","location":"package/common/rule-processor/node_modules/schema-utils/package.json:40","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run 
build\",","location":"package/common/rule-processor/node_modules/terser/package.json:72","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install \u0026\u0026 npm run build\",","location":"package/common/rule-processor/node_modules/terser-webpack-plugin/package.json:40","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"gulp build-eslint-rules\",","location":"package/common/rule-processor/node_modules/typescript/package.json:101","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/common/rule-processor/node_modules/webpack/node_modules/enhanced-resolve/package.json:62","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\",","location":"package/common/rule-processor/node_modules/webpack/package.json:160","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\"","location":"package/common/rule-processor/node_modules/webpack-cli/node_modules/webpack-merge/package.json:11","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/common/rule-processor/node_modules/y18n/package.json:41","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\",","location":"package/common/rule-processor/node_modules/yargs/package.json:86","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/common/rule-processor/node_modules/yargs-parser/package.json:30","message":"The package.json has a script 
automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/package.json:12","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" const child = spawn(process.argv[0], [path.resolve(__dirname, '../lib/detached.js'), tmpFile.name], {\n detached: true,\n stdio: 'ignore'\n })","location":"package/common/rule-processor/node_modules/karma/lib/server.js:423","message":"This package is silently executing another executable"}]}```
|
process
|
requestly requestly core has guarddog issues npm install script npm silent process execution n detached true n stdio ignore n location package common rule processor node modules karma lib server js message this package is silently executing another executable
| 1
|
49,148
| 10,324,095,943
|
IssuesEvent
|
2019-09-01 05:37:45
|
backdrop/backdrop-issues
|
https://api.github.com/repos/backdrop/backdrop-issues
|
closed
|
[UX] Modules page: Modernize the error icon for incompatible modules
|
milestone candidate - bug pr - needs code review pr - needs testing status - has pull request type - task
|
## Describe your issue or idea
When a module is incompatible and cannot be installed, we render a red x icon instead of a checkbox, but that icons seems outdated.
### Steps to reproduce (if reporting a bug)
1. Install a version of a module that is not compatible with Backdrop core (at the time of writing, XML Sitemap 1.0.2 includes the `xmlsitemap_i18n` submodule which is not compatible).
2. Visit the modules page.
### Actual behavior (if reporting a bug)
We are currently using https://github.com/backdrop/backdrop/blob/1.x/core/misc/watchdog-error.png:
<img width="1177" alt="screen shot 2018-10-20 at 1 54 57 pm" src="https://user-images.githubusercontent.com/2423362/47250620-337c2600-d470-11e8-9ada-5d57b738143a.png">
### Expected behavior (if reporting a bug)
We could use https://github.com/backdrop/backdrop/blob/1.x/core/misc/icon-error.png and have it be the same red as the rest of the UI (`#EE3D23`):
<img width="1187" alt="screen shot 2018-10-20 at 1 58 29 pm" src="https://user-images.githubusercontent.com/2423362/47250630-7a6a1b80-d470-11e8-871c-522b90de1874.png">
---
PR by @klonos: https://github.com/backdrop/backdrop/pull/2331
|
1.0
|
[UX] Modules page: Modernize the error icon for incompatible modules - ## Describe your issue or idea
When a module is incompatible and cannot be installed, we render a red x icon instead of a checkbox, but that icons seems outdated.
### Steps to reproduce (if reporting a bug)
1. Install a version of a module that is not compatible with Backdrop core (at the time of writing, XML Sitemap 1.0.2 includes the `xmlsitemap_i18n` submodule which is not compatible).
2. Visit the modules page.
### Actual behavior (if reporting a bug)
We are currently using https://github.com/backdrop/backdrop/blob/1.x/core/misc/watchdog-error.png:
<img width="1177" alt="screen shot 2018-10-20 at 1 54 57 pm" src="https://user-images.githubusercontent.com/2423362/47250620-337c2600-d470-11e8-9ada-5d57b738143a.png">
### Expected behavior (if reporting a bug)
We could use https://github.com/backdrop/backdrop/blob/1.x/core/misc/icon-error.png and have it be the same red as the rest of the UI (`#EE3D23`):
<img width="1187" alt="screen shot 2018-10-20 at 1 58 29 pm" src="https://user-images.githubusercontent.com/2423362/47250630-7a6a1b80-d470-11e8-871c-522b90de1874.png">
---
PR by @klonos: https://github.com/backdrop/backdrop/pull/2331
|
non_process
|
modules page modernize the error icon for incompatible modules describe your issue or idea when a module is incompatible and cannot be installed we render a red x icon instead of a checkbox but that icons seems outdated steps to reproduce if reporting a bug install a version of a module that is not compatible with backdrop core at the time of writing xml sitemap includes the xmlsitemap submodule which is not compatible visit the modules page actual behavior if reporting a bug we are currently using img width alt screen shot at pm src expected behavior if reporting a bug we could use and have it be the same red as the rest of the ui img width alt screen shot at pm src pr by klonos
| 0
|
13,047
| 10,091,270,572
|
IssuesEvent
|
2019-07-26 13:52:45
|
code4romania/monitorizare-vot-ong
|
https://api.github.com/repos/code4romania/monitorizare-vot-ong
|
opened
|
[Infrastrcuture] Dockerize solution
|
docker infrastructure
|
The NGO API solution should run inside a docker container, alongside the database SQL Server.
The NGO frontend solution should run inside a docker container.
|
1.0
|
[Infrastrcuture] Dockerize solution - The NGO API solution should run inside a docker container, alongside the database SQL Server.
The NGO frontend solution should run inside a docker container.
|
non_process
|
dockerize solution the ngo api solution should run inside a docker container alongside the database sql server the ngo frontend solution should run inside a docker container
| 0
|
9,225
| 12,258,413,256
|
IssuesEvent
|
2020-05-06 15:04:43
|
microsoft/Open-Maps
|
https://api.github.com/repos/microsoft/Open-Maps
|
closed
|
Missing One-Ways in Seattle
|
in process
|
The MS Open Maps team will work to add one ways and turn restrictions to improve the overall network in the Seattle area.
**Sources**:
- We are comparing the Seattle GIS Open data set against the current oneway=* attribution in OSM. The City of Seattle maintains a one-way dataset [here](https://data-seattlecitygis.opendata.arcgis.com/datasets/4d2400f72db04b55b8b8d5ddd9c2e343_1) which is Open and in a compatible license with OSM.
- The team will use both Mapillary & Bing Streetside imagery for validation.
**Scope of editing:**
The bulk of editing is adding one way attribution to the motorized road network. Additionally, the mappers will look at the attributes and classification of features they are editing.
Any others issues encountered will be resolved using the general and local guidelines outlined in the OSM Wiki, and USA guidelines.
**Validation:**
The team will using the built in validator in JOSM editor when uploading.
|
1.0
|
Missing One-Ways in Seattle - The MS Open Maps team will work to add one ways and turn restrictions to improve the overall network in the Seattle area.
**Sources**:
- We are comparing the Seattle GIS Open data set against the current oneway=* attribution in OSM. The City of Seattle maintains a one-way dataset [here](https://data-seattlecitygis.opendata.arcgis.com/datasets/4d2400f72db04b55b8b8d5ddd9c2e343_1) which is Open and in a compatible license with OSM.
- The team will use both Mapillary & Bing Streetside imagery for validation.
**Scope of editing:**
The bulk of editing is adding one way attribution to the motorized road network. Additionally, the mappers will look at the attributes and classification of features they are editing.
Any others issues encountered will be resolved using the general and local guidelines outlined in the OSM Wiki, and USA guidelines.
**Validation:**
The team will using the built in validator in JOSM editor when uploading.
|
process
|
missing one ways in seattle the ms open maps team will work to add one ways and turn restrictions to improve the overall network in the seattle area sources we are comparing the seattle gis open data set against the current oneway attribution in osm the city of seattle maintains a one way dataset which is open and in a compatible license with osm the team will use both mapillary bing streetside imagery for validation scope of editing the bulk of editing is adding one way attribution to the motorized road network additionally the mappers will look at the attributes and classification of features they are editing any others issues encountered will be resolved using the general and local guidelines outlined in the osm wiki and usa guidelines validation the team will using the built in validator in josm editor when uploading
| 1
|
10,265
| 8,869,511,980
|
IssuesEvent
|
2019-01-11 05:46:34
|
kyma-project/kyma
|
https://api.github.com/repos/kyma-project/kyma
|
closed
|
Bundles supports OSB API contract + test bundle implementation
|
area/service-catalog
|
**Description**
We need to apply all OSB API specific into bundle configuration.
**Reasons**
Enable all options from OSB API.
AC:
- input parameters schema for service bindings
- support all parameters from classes and plans, bindings
- add index-testing.yaml which contains first bundle called something like testing-0.1.0
- contains simple deployment that will be used in test for binding to.
- has two plans, one simple with just manadatory attributes and one expanded with all options set (also input parameters for SI and SB)
|
1.0
|
Bundles supports OSB API contract + test bundle implementation - **Description**
We need to apply all OSB API specific into bundle configuration.
**Reasons**
Enable all options from OSB API.
AC:
- input parameters schema for service bindings
- support all parameters from classes and plans, bindings
- add index-testing.yaml which contains first bundle called something like testing-0.1.0
- contains simple deployment that will be used in test for binding to.
- has two plans, one simple with just manadatory attributes and one expanded with all options set (also input parameters for SI and SB)
|
non_process
|
bundles supports osb api contract test bundle implementation description we need to apply all osb api specific into bundle configuration reasons enable all options from osb api ac input parameters schema for service bindings support all parameters from classes and plans bindings add index testing yaml which contains first bundle called something like testing contains simple deployment that will be used in test for binding to has two plans one simple with just manadatory attributes and one expanded with all options set also input parameters for si and sb
| 0
|
13,712
| 16,470,046,432
|
IssuesEvent
|
2021-05-23 08:29:24
|
pat-rogers/Ada-202x-WG9-Informal-Review
|
https://api.github.com/repos/pat-rogers/Ada-202x-WG9-Informal-Review
|
reopened
|
9.5 (28/5) "is nonblocking" vs "Nonblocking is True"
|
no action planned processed
|
This paragraph differs from its surroundings by using the wording "subprogram is nonblocking" instead of "the Nonblocking aspect of the subprogram is True". Is this intentional, or should it be changed for uniformity and precision? In principle, the property of "being nonblocking" is not the same as having Nonblocking = True.
|
1.0
|
9.5 (28/5) "is nonblocking" vs "Nonblocking is True" - This paragraph differs from its surroundings by using the wording "subprogram is nonblocking" instead of "the Nonblocking aspect of the subprogram is True". Is this intentional, or should it be changed for uniformity and precision? In principle, the property of "being nonblocking" is not the same as having Nonblocking = True.
|
process
|
is nonblocking vs nonblocking is true this paragraph differs from its surroundings by using the wording subprogram is nonblocking instead of the nonblocking aspect of the subprogram is true is this intentional or should it be changed for uniformity and precision in principle the property of being nonblocking is not the same as having nonblocking true
| 1
|
58,379
| 14,274,427,696
|
IssuesEvent
|
2020-11-22 03:53:29
|
Ghost-chu/QuickShop-Reremake
|
https://api.github.com/repos/Ghost-chu/QuickShop-Reremake
|
closed
|
CVE-2017-7525 (High) detected in jackson-databind-2.3.4.jar - autoclosed
|
Bug security vulnerability
|
## CVE-2017-7525 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.3.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: QuickShop-Reremake/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.3.4/jackson-databind-2.3.4.jar</p>
<p>
Dependency Hierarchy:
- jenkins-client-0.3.8.jar (Root Library)
- :x: **jackson-databind-2.3.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Ghost-chu/QuickShop-Reremake/commit/8ee7d2b71191adf05b366e0787aec78ffbdad102">8ee7d2b71191adf05b366e0787aec78ffbdad102</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A deserialization flaw was discovered in the jackson-databind, versions before 2.6.7.1, 2.7.9.1 and 2.8.9, which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readValue method of the ObjectMapper.
<p>Publish Date: 2018-02-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7525>CVE-2017-7525</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7525">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7525</a></p>
<p>Release Date: 2018-02-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.1,2.7.9.1,2.8.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2017-7525 (High) detected in jackson-databind-2.3.4.jar - autoclosed - ## CVE-2017-7525 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.3.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: QuickShop-Reremake/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.3.4/jackson-databind-2.3.4.jar</p>
<p>
Dependency Hierarchy:
- jenkins-client-0.3.8.jar (Root Library)
- :x: **jackson-databind-2.3.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Ghost-chu/QuickShop-Reremake/commit/8ee7d2b71191adf05b366e0787aec78ffbdad102">8ee7d2b71191adf05b366e0787aec78ffbdad102</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A deserialization flaw was discovered in the jackson-databind, versions before 2.6.7.1, 2.7.9.1 and 2.8.9, which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readValue method of the ObjectMapper.
<p>Publish Date: 2018-02-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-7525>CVE-2017-7525</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7525">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7525</a></p>
<p>Release Date: 2018-02-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.1,2.7.9.1,2.8.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api path to dependency file quickshop reremake pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy jenkins client jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a deserialization flaw was discovered in the jackson databind versions before and which could allow an unauthenticated user to perform code execution by sending the maliciously crafted input to the readvalue method of the objectmapper publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
16,178
| 20,624,330,801
|
IssuesEvent
|
2022-03-07 20:44:36
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Customer reports occasional errors in event log when stopping Windows service
|
area-System.ServiceProcess untriaged
|
### Description
Customer reports that when starting/stopping their .NET based Windows service many times, there occasionally will be an error in the event log that erroneously reports that the service stopped unexpectedly, even though the stop is actually expected and was requested via the SCM.
(I have not been able to repro locally yet.)
Reached out to understand whether the concern is simply that the error is wrong/confusing to anyone that looks into the event log, or a more significant functional bug.
### Reproduction Steps
1. Install the test service: sc.exe create TestService binpath= <path-to-extract-dir>\TestWinService.exe
2. Open the services manager, right click on TestService, go to properties, switch to the Log On tab, change the user to "Local Service" (NT AUTHORITY\LOCAL SERVICE). Delete the password fields and click OK.
3. From a Powershell window, run the restart_loop.ps1 script. After awhile (usually within an hour or so, often much less), the event log will have at least one error as listed in the description.
4. To clean the system, run: sc.exe delete TestService
```xml
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Extensions.Hosting" Version="5.0.0" />
<PackageReference Include="Microsoft.Extensions.Hosting.WindowsServices" Version="5.0.1" />
</ItemGroup>
</Project>
```
```c#
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
namespace TestWinService
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args)
{
var builder = Host.CreateDefaultBuilder(args)
.ConfigureLogging((hostContext, logger) =>
{
logger.SetMinimumLevel(LogLevel.Trace);
logger.AddConsole();
})
.ConfigureServices((hostContext, services) =>
{
services.AddSingleton<TestService>();
services.AddHostedService<TestService>(x => x.GetRequiredService<TestService>());
});
builder.UseWindowsService();
return builder;
}
}
class TestService : BackgroundService
{
private readonly ILogger<TestService> logger;
public TestService(ILogger<TestService> logger)
{
this.logger = logger;
}
protected async override Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
this.logger.LogError("Ping!");
await Task.Delay(2000);
}
}
}
}
```
```ps1
while($true)
{
sc.exe start TestService
echo "Service started"
Start-Sleep -m 5000
sc.exe stop TestService
echo "Service stopped"
Start-Sleep -m 5000
}
```
Note that the service when running as "Local Service" does not have permission to write to the event log. That's presumably not relevant: the messages here are coming from the SCM.
### Expected behavior
No erroneous SCM errors in the event log.
### Actual behavior
snippet from event log created by repro code above
```
Relevant log excerpt:
[0]0338.18A8::06/17/2021-17:40:40.952 [SCM]------ Starting services in group Null -----
[0]0338.18A8::06/17/2021-17:40:40.952 [SCM]BIG LOOP COUNT 1 - Service: TestService
[0]0338.18A8::06/17/2021-17:40:40.952 [SCM]TestService is marked START NOW - Loop: 1
[0]0338.18A8::06/17/2021-17:40:40.952 [SCM]ScStartMarkedServicesInServiceSet: Starting TestService, share proc ID 0
[0]0338.18A8::06/17/2021-17:40:40.952 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","OldState":1,"NewState":2,"DurationInMs":4000,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceStateChange","time":"2021-06-17T17:40:40.952","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
Unknown( 71): GUID=f563f025-6e86-3268-146c-3bf0aaf62bb4 (No Format Information found).
Unknown( 75): GUID=f563f025-6e86-3268-146c-3bf0aaf62bb4 (No Format Information found).
[0]0338.18A8::06/17/2021-17:40:40.956 [SCM]ScInitControlMessageContext: Channel 0x00000254A71DC130, event 0x0000000000000C20
[0]0338.0ED0::06/17/2021-17:40:41.124 [SCM]RI_ScOpenServiceChannelHandle: Caller PID 0x0000028c, context handle 0x00000254A71DC130, refs 0x2
[0]0338.18A8::06/17/2021-17:40:41.124 [SCM]ScWaitForFirstResponse: Received first response for TestService from pid 0X28C
[0]0338.18A8::06/17/2021-17:40:41.124 [Microsoft.Windows.ServiceControlManager]{"ServiceHostName":"C:\\publish\\TestWinService.exe","PID":652,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceHostStarted","time":"2021-06-17T17:40:41.124","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.18A8::06/17/2021-17:40:41.124 [SCM]ScSendControl: Sending control 0x00000051 to service/module TestService, caller PID 0x00001620, channel 0x00000254A71DC130, refs 0x3
[0]0338.0ED0::06/17/2021-17:40:41.124 [SCM]RI_ScOpenServiceStatusHandle: Service TestService, seq 0x629, status 0, PID 0x0000028c
[0]0338.18A8::06/17/2021-17:40:41.125 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","StartReason":1,"LoadOrderGroup":"None","SvchostGroup":"None","PID":652,"IsCritical":false,"IsUserService":false,"IsOwnProcess":true,"TriggerType":0,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceStarted","time":"2021-06-17T17:40:41.125","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.18A8::06/17/2021-17:40:41.125 [SCM]Database Lock is OFF (from API)
Unknown( 13): GUID=f563f025-6e86-3268-146c-3bf0aaf62bb4 (No Format Information found).
Unknown( 14): GUID=f563f025-6e86-3268-146c-3bf0aaf62bb4 (No Format Information found).
[0]0338.18A8::06/17/2021-17:40:41.127 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000002(SERVICE_START_PENDING), exit 0, sexit 0, controls 0x00000005
[0]0338.18A8::06/17/2021-17:40:41.127 [SCM]Service Record updated with new status for service TestService
[0]0338.18A8::06/17/2021-17:40:41.130 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000004(SERVICE_RUNNING), exit 0, sexit 0, controls 0x00000005
[0]0338.18A8::06/17/2021-17:40:41.130 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","OldState":2,"NewState":4,"DurationInMs":188,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceStateChange","time":"2021-06-17T17:40:41.130","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.18A8::06/17/2021-17:40:41.130 [SCM]Service Record updated with new status for service TestService
[0]0338.18A8::06/17/2021-17:40:41.130 [SCM]TestService START_PENDING -> RUNNING
[0]0338.18A8::06/17/2021-17:40:46.142 [SCM]ScControlService: Service TestService, control 1, reason 0x00000000, comment NULL
[0]0338.18A8::06/17/2021-17:40:46.142 [SCM]ScSendControl: Sending control 0x00000001 to service/module TestService, caller PID 0x00001634, channel 0x00000254A71DC130, refs 0x3
[0]0338.18A8::06/17/2021-17:40:46.142 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","ControlCode":1,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceSendControl","time":"2021-06-17T17:40:46.142","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.0ED0::06/17/2021-17:40:46.144 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000003(SERVICE_STOP_PENDING), exit 0, sexit 0, controls 0x00000005
[0]0338.0ED0::06/17/2021-17:40:46.144 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","OldState":4,"NewState":3,"DurationInMs":5000,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceStateChange","time":"2021-06-17T17:40:46.144","cpu":0,"pid":824,"tid":3792,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.0ED0::06/17/2021-17:40:46.144 [SCM]Service Record updated with new status for service TestService
[0]0338.18A8::06/17/2021-17:40:46.144 [SCM]ScSendControl: Service TestService, control 1, user S-1-5-21-2088405651-850969028-1873988176-1000, caller PID 0x00001634
[0]0338.18A8::06/17/2021-17:40:46.144 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000003(SERVICE_STOP_PENDING), exit 0, sexit 0, controls 0x00000005
[0]0338.18A8::06/17/2021-17:40:46.144 [SCM]Service Record updated with new status for service TestService
[0]0338.18A8::06/17/2021-17:40:47.150 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000003(SERVICE_STOP_PENDING), exit 0, sexit 0, controls 0x00000005
[0]0338.18A8::06/17/2021-17:40:47.150 [SCM]Service Record updated with new status for service TestService
[0]0338.22C4::06/17/2021-17:40:47.150 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000003(SERVICE_STOP_PENDING), exit 0, sexit 0, controls 0x00000005
[0]0338.22C4::06/17/2021-17:40:47.150 [SCM]Service Record updated with new status for service TestService
Unknown( 93): GUID=f563f025-6e86-3268-146c-3bf0aaf62bb4 (No Format Information found).
Unknown( 38): GUID=2690e78e-f395-3cf2-0a73-3f394cab6882 (No Format Information found).
[0]0338.18A8::06/17/2021-17:40:47.181 [Microsoft.Windows.ServiceControlManager]{"ServiceHostName":"C:\\publish\\TestWinService.exe","meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceHostFailed","time":"2021-06-17T17:40:47.181","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x400000000000"}}
Unknown( 39): GUID=2690e78e-f395-3cf2-0a73-3f394cab6882 (No Format Information found).
[0]0338.18A8::06/17/2021-17:40:47.181 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","ImageName":"C:\\publish\\TestWinService.exe","IsOwnProcess":true,"ShutdownInProgress":false,"Reason":1,"Status":1287,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceFailed","time":"2021-06-17T17:40:47.181","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x400000000000"}}
[0]0338.18A8::06/17/2021-17:40:47.181 [SCM]CWin32ServiceRecord::QueueRecoveryAction: Failure count for TestService incremented to 1.
[0]0338.18A8::06/17/2021-17:40:47.181 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","FailCount":1,"ActionDelay":0,"ActionType":0,"ResetPeriod":4294967295,"FailureReason":1,"IsOwnProcess":true,"Action":"None","meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceRecoveryAction","time":"2021-06-17T17:40:47.181","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x400000000000"}}
[0]0338.18A8::06/17/2021-17:40:47.181 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","OldState":3,"NewState":1,"DurationInMs":1046,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceStateChange","time":"2021-06-17T17:40:47.181","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.18A8::06/17/2021-17:40:47.181 [SCM]SC_SERVICE_CHANNEL_CONTEXT_HANDLE_rundown: Context handle 0x00000254A71DC130
[0]0338.18A8::06/17/2021-17:40:47.181 [SCM]RI_ScCloseServiceChannelHandle: Context handle 0x00000254A71DC130
```
### Regression?
Unknown.
### Known Workarounds
_No response_
### Configuration
.NET 5
### Other information
partner report, tracked internally by https://microsoft.visualstudio.com/OS/_workitems/edit/33989430?src=WorkItemMention&src-action=artifact_link
|
1.0
|
Customer reports occasional errors in event log when stopping Windows service - ### Description
Customer reports that when starting and stopping their .NET-based Windows service many times, an error will occasionally appear in the event log that erroneously reports the service stopped unexpectedly, even though the stop was expected and was requested via the SCM.
(I have not been able to repro locally yet.)
Reached out to understand whether the concern is simply that the error is wrong/confusing to anyone that looks into the event log, or a more significant functional bug.
### Reproduction Steps
1. Install the test service: sc.exe create TestService binpath= <path-to-extract-dir>\TestWinService.exe
2. Open the services manager, right click on TestService, go to properties, switch to the Log On tab, change the user to "Local Service" (NT AUTHORITY\LOCAL SERVICE). Delete the password fields and click OK.
3. From a PowerShell window, run the restart_loop.ps1 script. After a while (usually within an hour or so, often much less), the event log will have at least one error as listed in the description.
4. To clean the system, run: sc.exe delete TestService
```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Hosting" Version="5.0.0" />
    <PackageReference Include="Microsoft.Extensions.Hosting.WindowsServices" Version="5.0.1" />
  </ItemGroup>

</Project>
```
```c#
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

namespace TestWinService
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args)
        {
            var builder = Host.CreateDefaultBuilder(args)
                .ConfigureLogging((hostContext, logger) =>
                {
                    logger.SetMinimumLevel(LogLevel.Trace);
                    logger.AddConsole();
                })
                .ConfigureServices((hostContext, services) =>
                {
                    services.AddSingleton<TestService>();
                    services.AddHostedService<TestService>(x => x.GetRequiredService<TestService>());
                });
            builder.UseWindowsService();
            return builder;
        }
    }

    class TestService : BackgroundService
    {
        private readonly ILogger<TestService> logger;

        public TestService(ILogger<TestService> logger)
        {
            this.logger = logger;
        }

        protected async override Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                this.logger.LogError("Ping!");
                await Task.Delay(2000);
            }
        }
    }
}
```
```ps1
while ($true)
{
    sc.exe start TestService
    echo "Service started"
    Start-Sleep -m 5000
    sc.exe stop TestService
    echo "Service stopped"
    Start-Sleep -m 5000
}
```
Note that when running as "Local Service", the service does not have permission to write to the event log. That's presumably not relevant: the messages here come from the SCM.
### Expected behavior
No erroneous SCM errors in the event log.
### Actual behavior
snippet from event log created by repro code above
```
Relevant log excerpt:
[0]0338.18A8::06/17/2021-17:40:40.952 [SCM]------ Starting services in group Null -----
[0]0338.18A8::06/17/2021-17:40:40.952 [SCM]BIG LOOP COUNT 1 - Service: TestService
[0]0338.18A8::06/17/2021-17:40:40.952 [SCM]TestService is marked START NOW - Loop: 1
[0]0338.18A8::06/17/2021-17:40:40.952 [SCM]ScStartMarkedServicesInServiceSet: Starting TestService, share proc ID 0
[0]0338.18A8::06/17/2021-17:40:40.952 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","OldState":1,"NewState":2,"DurationInMs":4000,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceStateChange","time":"2021-06-17T17:40:40.952","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
Unknown( 71): GUID=f563f025-6e86-3268-146c-3bf0aaf62bb4 (No Format Information found).
Unknown( 75): GUID=f563f025-6e86-3268-146c-3bf0aaf62bb4 (No Format Information found).
[0]0338.18A8::06/17/2021-17:40:40.956 [SCM]ScInitControlMessageContext: Channel 0x00000254A71DC130, event 0x0000000000000C20
[0]0338.0ED0::06/17/2021-17:40:41.124 [SCM]RI_ScOpenServiceChannelHandle: Caller PID 0x0000028c, context handle 0x00000254A71DC130, refs 0x2
[0]0338.18A8::06/17/2021-17:40:41.124 [SCM]ScWaitForFirstResponse: Received first response for TestService from pid 0X28C
[0]0338.18A8::06/17/2021-17:40:41.124 [Microsoft.Windows.ServiceControlManager]{"ServiceHostName":"C:\\publish\\TestWinService.exe","PID":652,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceHostStarted","time":"2021-06-17T17:40:41.124","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.18A8::06/17/2021-17:40:41.124 [SCM]ScSendControl: Sending control 0x00000051 to service/module TestService, caller PID 0x00001620, channel 0x00000254A71DC130, refs 0x3
[0]0338.0ED0::06/17/2021-17:40:41.124 [SCM]RI_ScOpenServiceStatusHandle: Service TestService, seq 0x629, status 0, PID 0x0000028c
[0]0338.18A8::06/17/2021-17:40:41.125 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","StartReason":1,"LoadOrderGroup":"None","SvchostGroup":"None","PID":652,"IsCritical":false,"IsUserService":false,"IsOwnProcess":true,"TriggerType":0,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceStarted","time":"2021-06-17T17:40:41.125","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.18A8::06/17/2021-17:40:41.125 [SCM]Database Lock is OFF (from API)
Unknown( 13): GUID=f563f025-6e86-3268-146c-3bf0aaf62bb4 (No Format Information found).
Unknown( 14): GUID=f563f025-6e86-3268-146c-3bf0aaf62bb4 (No Format Information found).
[0]0338.18A8::06/17/2021-17:40:41.127 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000002(SERVICE_START_PENDING), exit 0, sexit 0, controls 0x00000005
[0]0338.18A8::06/17/2021-17:40:41.127 [SCM]Service Record updated with new status for service TestService
[0]0338.18A8::06/17/2021-17:40:41.130 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000004(SERVICE_RUNNING), exit 0, sexit 0, controls 0x00000005
[0]0338.18A8::06/17/2021-17:40:41.130 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","OldState":2,"NewState":4,"DurationInMs":188,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceStateChange","time":"2021-06-17T17:40:41.130","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.18A8::06/17/2021-17:40:41.130 [SCM]Service Record updated with new status for service TestService
[0]0338.18A8::06/17/2021-17:40:41.130 [SCM]TestService START_PENDING -> RUNNING
[0]0338.18A8::06/17/2021-17:40:46.142 [SCM]ScControlService: Service TestService, control 1, reason 0x00000000, comment NULL
[0]0338.18A8::06/17/2021-17:40:46.142 [SCM]ScSendControl: Sending control 0x00000001 to service/module TestService, caller PID 0x00001634, channel 0x00000254A71DC130, refs 0x3
[0]0338.18A8::06/17/2021-17:40:46.142 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","ControlCode":1,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceSendControl","time":"2021-06-17T17:40:46.142","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.0ED0::06/17/2021-17:40:46.144 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000003(SERVICE_STOP_PENDING), exit 0, sexit 0, controls 0x00000005
[0]0338.0ED0::06/17/2021-17:40:46.144 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","OldState":4,"NewState":3,"DurationInMs":5000,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceStateChange","time":"2021-06-17T17:40:46.144","cpu":0,"pid":824,"tid":3792,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.0ED0::06/17/2021-17:40:46.144 [SCM]Service Record updated with new status for service TestService
[0]0338.18A8::06/17/2021-17:40:46.144 [SCM]ScSendControl: Service TestService, control 1, user S-1-5-21-2088405651-850969028-1873988176-1000, caller PID 0x00001634
[0]0338.18A8::06/17/2021-17:40:46.144 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000003(SERVICE_STOP_PENDING), exit 0, sexit 0, controls 0x00000005
[0]0338.18A8::06/17/2021-17:40:46.144 [SCM]Service Record updated with new status for service TestService
[0]0338.18A8::06/17/2021-17:40:47.150 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000003(SERVICE_STOP_PENDING), exit 0, sexit 0, controls 0x00000005
[0]0338.18A8::06/17/2021-17:40:47.150 [SCM]Service Record updated with new status for service TestService
[0]0338.22C4::06/17/2021-17:40:47.150 [SCM]RSetServiceStatus: Status field accepted, service TestService, state 0x00000003(SERVICE_STOP_PENDING), exit 0, sexit 0, controls 0x00000005
[0]0338.22C4::06/17/2021-17:40:47.150 [SCM]Service Record updated with new status for service TestService
Unknown( 93): GUID=f563f025-6e86-3268-146c-3bf0aaf62bb4 (No Format Information found).
Unknown( 38): GUID=2690e78e-f395-3cf2-0a73-3f394cab6882 (No Format Information found).
[0]0338.18A8::06/17/2021-17:40:47.181 [Microsoft.Windows.ServiceControlManager]{"ServiceHostName":"C:\\publish\\TestWinService.exe","meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceHostFailed","time":"2021-06-17T17:40:47.181","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x400000000000"}}
Unknown( 39): GUID=2690e78e-f395-3cf2-0a73-3f394cab6882 (No Format Information found).
[0]0338.18A8::06/17/2021-17:40:47.181 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","ImageName":"C:\\publish\\TestWinService.exe","IsOwnProcess":true,"ShutdownInProgress":false,"Reason":1,"Status":1287,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceFailed","time":"2021-06-17T17:40:47.181","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x400000000000"}}
[0]0338.18A8::06/17/2021-17:40:47.181 [SCM]CWin32ServiceRecord::QueueRecoveryAction: Failure count for TestService incremented to 1.
[0]0338.18A8::06/17/2021-17:40:47.181 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","FailCount":1,"ActionDelay":0,"ActionType":0,"ResetPeriod":4294967295,"FailureReason":1,"IsOwnProcess":true,"Action":"None","meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceRecoveryAction","time":"2021-06-17T17:40:47.181","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x400000000000"}}
[0]0338.18A8::06/17/2021-17:40:47.181 [Microsoft.Windows.ServiceControlManager]{"ServiceName":"TestService","OldState":3,"NewState":1,"DurationInMs":1046,"meta":{"provider":"Microsoft.Windows.ServiceControlManager","event":"ServiceStateChange","time":"2021-06-17T17:40:47.181","cpu":0,"pid":824,"tid":6312,"channel":11,"level":5,"keywords":"0x200000000000"}}
[0]0338.18A8::06/17/2021-17:40:47.181 [SCM]SC_SERVICE_CHANNEL_CONTEXT_HANDLE_rundown: Context handle 0x00000254A71DC130
[0]0338.18A8::06/17/2021-17:40:47.181 [SCM]RI_ScCloseServiceChannelHandle: Context handle 0x00000254A71DC130
```
### Regression?
Unknown.
### Known Workarounds
_No response_
### Configuration
.NET 5
### Other information
partner report, tracked internally by https://microsoft.visualstudio.com/OS/_workitems/edit/33989430?src=WorkItemMention&src-action=artifact_link
|
process
|
customer reports occasional errors in event log when stopping windows service description customer reports that when starting stopping their net based windows service many times there occasionally will be an error in the event log that erroneously reports that the service stopped unexpectedly even though the stop is actually expected and was requested via the scm i have not been able to repro locally yet reached out to understand whether the concern is simply that the error is wrong confusing to anyone that looks into the event log or a more significant functional bug reproduction steps install the test service sc exe create testservice binpath testwinservice exe open the services manager right click on testservice go to properties switch to the log on tab change the user to local service nt authority local service delete the password fields and click ok from a powershell window run the restart loop script after awhile usually within an hour or so often much less the event log will have at least one error as listed in the description to clean the system run sc exe delete testservice xml exe c using system using system threading using system threading tasks using microsoft extensions dependencyinjection using microsoft extensions hosting using microsoft extensions logging namespace testwinservice class program static void main string args console writeline hello world createhostbuilder args build run public static ihostbuilder createhostbuilder string args var builder host createdefaultbuilder args configurelogging hostcontext logger logger setminimumlevel loglevel trace logger addconsole configureservices hostcontext services services addsingleton services addhostedservice x x getrequiredservice builder usewindowsservice return builder class testservice backgroundservice private readonly ilogger logger public testservice ilogger logger this logger logger protected async override task executeasync cancellationtoken stoppingtoken while stoppingtoken 
iscancellationrequested this logger logerror ping await task delay while true sc exe start testservice echo service started start sleep m sc exe stop testservice echo service stopped start sleep m note that the service when running as local service does not have permission to write to the event log that s presumably not relevant the messages here are coming from the scm expected behavior no erroneous scm errors in the event log actual behavior snippet from event log created by repro code above relevant log excerpt starting services in group null big loop count service testservice testservice is marked start now loop scstartmarkedservicesinserviceset starting testservice share proc id servicename testservice oldstate newstate durationinms meta provider microsoft windows servicecontrolmanager event servicestatechange time cpu pid tid channel level keywords unknown guid no format information found unknown guid no format information found scinitcontrolmessagecontext channel event ri scopenservicechannelhandle caller pid context handle refs scwaitforfirstresponse received first response for testservice from pid servicehostname c publish testwinservice exe pid meta provider microsoft windows servicecontrolmanager event servicehoststarted time cpu pid tid channel level keywords scsendcontrol sending control to service module testservice caller pid channel refs ri scopenservicestatushandle service testservice seq status pid servicename testservice startreason loadordergroup none svchostgroup none pid iscritical false isuserservice false isownprocess true triggertype meta provider microsoft windows servicecontrolmanager event servicestarted time cpu pid tid channel level keywords database lock is off from api unknown guid no format information found unknown guid no format information found rsetservicestatus status field accepted service testservice state service start pending exit sexit controls service record updated with new status for service testservice 
rsetservicestatus status field accepted service testservice state service running exit sexit controls servicename testservice oldstate newstate durationinms meta provider microsoft windows servicecontrolmanager event servicestatechange time cpu pid tid channel level keywords service record updated with new status for service testservice testservice start pending running sccontrolservice service testservice control reason comment null scsendcontrol sending control to service module testservice caller pid channel refs servicename testservice controlcode meta provider microsoft windows servicecontrolmanager event servicesendcontrol time cpu pid tid channel level keywords rsetservicestatus status field accepted service testservice state service stop pending exit sexit controls servicename testservice oldstate newstate durationinms meta provider microsoft windows servicecontrolmanager event servicestatechange time cpu pid tid channel level keywords service record updated with new status for service testservice scsendcontrol service testservice control user s caller pid rsetservicestatus status field accepted service testservice state service stop pending exit sexit controls service record updated with new status for service testservice rsetservicestatus status field accepted service testservice state service stop pending exit sexit controls service record updated with new status for service testservice rsetservicestatus status field accepted service testservice state service stop pending exit sexit controls service record updated with new status for service testservice unknown guid no format information found unknown guid no format information found servicehostname c publish testwinservice exe meta provider microsoft windows servicecontrolmanager event servicehostfailed time cpu pid tid channel level keywords unknown guid no format information found servicename testservice imagename c publish testwinservice exe isownprocess true shutdowninprogress false reason status 
meta provider microsoft windows servicecontrolmanager event servicefailed time cpu pid tid channel level keywords queuerecoveryaction failure count for testservice incremented to servicename testservice failcount actiondelay actiontype resetperiod failurereason isownprocess true action none meta provider microsoft windows servicecontrolmanager event servicerecoveryaction time cpu pid tid channel level keywords servicename testservice oldstate newstate durationinms meta provider microsoft windows servicecontrolmanager event servicestatechange time cpu pid tid channel level keywords sc service channel context handle rundown context handle ri sccloseservicechannelhandle context handle regression unknown known workarounds no response configuration net other information partner report tracked internally by
| 1
|
417,239
| 28,110,235,016
|
IssuesEvent
|
2023-03-31 06:28:30
|
Varstak/ped
|
https://api.github.com/repos/Varstak/ped
|
opened
|
Specifying case sensitivity in User Guide
|
severity.Low type.DocumentationBug
|
command: add n/NAME p/123 e/a@gmail.com a/block 135 s/Math sch/MoNDay st/START TIME et/END TIME
error: Schedule should only be: monday, tuesday, wednesday, thursday, friday, saturday, sunday,
For schedule and subjects, it would be good to specify whether the inputs are case-sensitive.
<!--session: 1680242445810-d0dbcc59-5989-4fde-b15e-e69e993cc520-->
<!--Version: Web v3.4.7-->
|
1.0
|
Specifying case sensitivity in User Guide - command: add n/NAME p/123 e/a@gmail.com a/block 135 s/Math sch/MoNDay st/START TIME et/END TIME
error: Schedule should only be: monday, tuesday, wednesday, thursday, friday, saturday, sunday,
For schedule and subjects, it would be good to specify whether the inputs are case-sensitive.
<!--session: 1680242445810-d0dbcc59-5989-4fde-b15e-e69e993cc520-->
<!--Version: Web v3.4.7-->
|
non_process
|
specifying case sensitivity in user guide command add n name p e a gmail com a block s math sch monday st start time et end time error schedule should only be monday tuesday wednesday thursday friday saturday sunday for schedule and subjects would be good to specify if the inputs are case sensitive or not
| 0
|
14,951
| 18,434,416,081
|
IssuesEvent
|
2021-10-14 11:24:46
|
opensafely-core/job-server
|
https://api.github.com/repos/opensafely-core/job-server
|
opened
|
Form fields should have character limits
|
application-process
|
So that we can guide users on how much we expect them to write.
## Acceptance criteria
- A character limit can be set on a field
- The limit can specify both a minimum and a maximum character count
- The limit should be shown to the user and updated on each character entered
- The limit should be validated on both the client and server-side
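The server-side half of the acceptance criteria above can be sketched in a few lines; the function name, messages, and default limits below are illustrative assumptions, not taken from the job-server codebase.

```python
# Hypothetical sketch of server-side min/max character validation.
# Names and default limits are assumptions for illustration only.
def validate_char_limit(text, min_chars=0, max_chars=1000):
    """Return an error message if text violates the limits, else None."""
    n = len(text)
    if n < min_chars:
        return f"Too short: {n} of {min_chars} characters minimum."
    if n > max_chars:
        return f"Too long: {n} of {max_chars} characters maximum."
    return None
```

The same limits would be mirrored client-side (e.g. a live character counter updated on each keystroke), with the server-side check remaining the authority.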
|
1.0
|
Form fields should have character limits - So that we can guide users on how much we expect them to write.
## Acceptance criteria
- A character limit can be set on a field
- The limit can contain a min and max character limit
- The limit should be shown to the user and updated on each character entered
- The limit should be validated on both the client and server-side
|
process
|
form fields should have character limits so that we can guide users on how much we expect them to write acceptance criteria a character limit can be set on a field the limit can contain a min and max character limit the limit should be shown to the user and updated on each character entered the limit should be validated on both the client and server side
| 1
|
21,543
| 29,864,984,120
|
IssuesEvent
|
2023-06-20 02:26:01
|
cncf/tag-security
|
https://api.github.com/repos/cncf/tag-security
|
closed
|
[Sec Assess WG] Benefits of a Security Assessment for Projects
|
good first issue assessment-process suggestion inactive
|
This issue was created from results of the Security Assessment Improvement Working Group (https://github.com/cncf/sig-security/issues/167#issuecomment-714514142).
# Benefits of a Security Assessment for Projects
## Premise
- It is not entirely clear that why a project should be incentivized to participate in an assessment
## Ideas
- Add "benefit for the project" to security assessment guide
- Independent evaluation provides a primer to the CNCFs security audit team
- It provides a SECURITY.md
- Self evaluation should be reflective of their software development practices
- Provide badging: As a badge, reference material for security aspect of the project
## Action Items
- [x] PR for adding benefit to assessment guide (@RonVider)
- [ ] Requirements for a SECURITY.md, what is required, and mapping from assessment documents (@itaysk)
## Logistics
- [x] Contributors (For multiple contributors, 1 lead to coordinate)
- @itaysk
- @RonVider
- [x] SIG-Representative @lumjjb
|
1.0
|
[Sec Assess WG] Benefits of a Security Assessment for Projects - This issue was created from results of the Security Assessment Improvement Working Group (https://github.com/cncf/sig-security/issues/167#issuecomment-714514142).
# Benefits of a Security Assessment for Projects
## Premise
- It is not entirely clear that why a project should be incentivized to participate in an assessment
## Ideas
- Add "benefit for the project" to security assessment guide
- Independent evaluation provides a primer to the CNCFs security audit team
- It provides a SECURITY.md
- Self evaluation should be reflective of their software development practices
- Provide badging: As a badge, reference material for security aspect of the project
## Action Items
- [x] PR for adding benefit to assessment guide (@RonVider)
- [ ] Requirements for a SECURITY.md, what is required, and mapping from assessment documents (@itaysk)
## Logistics
- [x] Contributors (For multiple contributors, 1 lead to coordinate)
- @itaysk
- @RonVider
- [x] SIG-Representative @lumjjb
|
process
|
benefits of a security assessment for projects this issue was created from results of the security assessment improvement working group benefits of a security assessment for projects premise it is not entirely clear that why a project should be incentivized to participate in an assessment ideas add benefit for the project to security assessment guide independent evaluation provides a primer to the cncfs security audit team it provides a security md self evaluation should be reflective of their software development practices provide badging as a badge reference material for security aspect of the project action items pr for adding benefit to assessment guide ronvider requirements for a security md what is required and mapping from assessment documents itaysk logistics contributors for multiple contributors lead to coordinate itaysk ronvider sig representative lumjjb
| 1
|
10,231
| 13,094,722,984
|
IssuesEvent
|
2020-08-03 12:56:17
|
zammad/zammad
|
https://api.github.com/repos/zammad/zammad
|
closed
|
IMAP channel cannot connect to ProtonMail bridge
|
bug mail processing verified
|
<!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 3.4
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
### Expected behavior:
* IMAP channel connects to ProtonMail bridge
### Actual behavior:
* IMAP channel cannot connect to ProtonMail bridge
### Steps to reproduce the behavior:
* try to setup IMAP channel connecting to ProtonMail bridge
Ticket# 1078050
Yes I'm sure this is a bug and no feature request or a general question.
|
1.0
|
IMAP channel cannot connect to ProtonMail bridge - <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 3.4
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
### Expected behavior:
* IMAP channel connects to ProtonMail bridge
### Actual behavior:
* IMAP channel cannot connect to ProtonMail bridge
### Steps to reproduce the behavior:
* try to setup IMAP channel connecting to ProtonMail bridge
Ticket# 1078050
Yes I'm sure this is a bug and no feature request or a general question.
|
process
|
imap channel cannot connect to protonmail bridge hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version installation method source package any operating system any database version any elasticsearch version any browser version any expected behavior imap channel connects to protonmail bridge actual behavior imap channel cannot connect to protonmail bridge steps to reproduce the behavior try to setup imap channel connecting to protonmail bridge ticket yes i m sure this is a bug and no feature request or a general question
| 1
|
375,911
| 26,181,788,259
|
IssuesEvent
|
2023-01-02 16:35:05
|
bounswe/bounswe2022group2
|
https://api.github.com/repos/bounswe/bounswe2022group2
|
closed
|
Final Milestone: Annotations & Standards
|
priority-high type-documentation milestone
|
### Issue Description
As part of our Final Milestone for Learnify application, we need to update the status of the annotations used in the application and the W3C standards that we follow for them.
Deliverables:
Updated documentation for the annotations and W3C standards used in Learnify
### Step Details
The task involves the following:
- [x] Review the current status of the annotations used in Learnify.
- [x] Update the documentation for Learnify to reflect the current status of the annotations. This includes removing any references to deprecated or obsolete annotations and adding documentation for any new annotations.
- [x] Review the W3C standards that Learnify currently follows, and update the documentation to reflect any changes or updates to these standards.
### Final Actions
The merge of the related pull request to deliverables will close the issue.
The issue is related to #935
### Deadline of the Issue
2.01.2023 - 17.00
### Reviewer
Altay Acar
### Deadline for the Review
2.01.2023 - 19:00
|
1.0
|
Final Milestone: Annotations & Standards - ### Issue Description
As part of our Final Milestone for Learnify application, we need to update the status of the annotations used in the application and the W3C standards that we follow for them.
Deliverables:
Updated documentation for the annotations and W3C standards used in Learnify
### Step Details
The task involves the following:
- [x] Review the current status of the annotations used in Learnify.
- [x] Update the documentation for Learnify to reflect the current status of the annotations. This includes removing any references to deprecated or obsolete annotations and adding documentation for any new annotations.
- [x] Review the W3C standards that Learnify currently follows, and update the documentation to reflect any changes or updates to these standards.
### Final Actions
The merge of the related pull request to deliverables will close the issue.
The issue is related to #935
### Deadline of the Issue
2.01.2023 - 17.00
### Reviewer
Altay Acar
### Deadline for the Review
2.01.2023 - 19:00
|
non_process
|
final milestone annotations standards issue description as part of our final milestone for learnify application we need to update the status of the annotations used in the application and the standards that we follow for them deliverables updated documentation for the annotations and standards used in learnify step details the task involves the following review the current status of the annotations used in learnify update the documentation for learnify to reflect the current status of the annotations this includes removing any references to deprecated or obsolete annotations and adding documentation for any new annotations review the standards that learnify currently follows and update the documentation to reflect any changes or updates to these standards final actions the merge of the related pull request to deliverables will close the issue the issue is related to deadline of the issue reviewer altay acar deadline for the review
| 0
|