| column | dtype | range / distinct values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 distinct value (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 distinct values |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 distinct values |
| text_combine | string | length 96 to 211k |
| label | string | 2 distinct values |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 to 1 |
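The columns summarized above can be inspected programmatically. Below is a minimal sketch, assuming the preview comes from a pandas-readable CSV; the file name is a placeholder, not a path given in this document.

```python
# Minimal sketch: load the issue-event table and inspect the columns summarized above.
# "issues_events.csv" is a hypothetical file name.
import pandas as pd

df = pd.read_csv("issues_events.csv")

print(df.dtypes)                                   # e.g. id -> float64, binary_label -> int64
print(df["label"].value_counts())                  # "process" vs "non_process"
print(df[["repo", "action", "title", "binary_label"]].head())
```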
16,568
| 21,578,783,621
|
IssuesEvent
|
2022-05-02 16:21:51
|
PennLINC/RBC
|
https://api.github.com/repos/PennLINC/RBC
|
closed
|
ECAS production: fmriprep
|
Processing Job
|
- [x] check remaining 3 subjects
- [ ] Run remaining 5 ECAS subjects as anat only
|
1.0
|
ECAS production: fmriprep - - [x] check remaining 3 subjects
- [ ] Run remaining 5 ECAS subjects as anat only
|
process
|
ecas production fmriprep check remaining subjects run remaining ecas subjects as anat only
| 1
|
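In the sample above, the `text` column looks like a cleaned copy of `text_combine`: lowercased, with punctuation, digits, markdown markers and single-character tokens removed. The dataset's actual preprocessing code is not shown in this document, so the sketch below is only a rough reconstruction of such a cleaning step.

```python
# Rough reconstruction of how `text` could be derived from `text_combine`.
# The real preprocessing pipeline may differ; this is illustrative only.
import re

def normalize(text_combine: str) -> str:
    text = text_combine.lower()
    text = re.sub(r"[^a-z\s]", " ", text)              # drop digits, punctuation, markdown
    tokens = [t for t in text.split() if len(t) > 1]   # drop single-character leftovers like "x"
    return " ".join(tokens)

print(normalize("ECAS production: fmriprep - - [x] check remaining 3 subjects"))
# -> "ecas production fmriprep check remaining subjects"
```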
16,692
| 21,791,963,265
|
IssuesEvent
|
2022-05-15 03:02:55
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Batch processing gui for "Export layers to DXF" is missing functionality for loading layers into separate rows in the table
|
Feedback stale Processing Bug
|
### What is the bug or the crash?
BACKGROUND COMMENTS
The "Export layers to DXF" algorithm is relatively unusual (but not unique) because it has a single "input layers" input where more than one input layer can be selected. People are probably relatively unlikely to need to "run as batch process".
ISSUE
If people do need to "run as batch process", for some reason this algorithm is missing this functionality that is provided with other algorithms:
```
Add Files by Pattern...
Select Files...
Add All Files from a Directory...
Select from Open Layers...
```
Maybe someone thought this functionality isn't necessary for this algorithm?

Compare with the vector translate algorithm:

And with r.patch, which also has a single "input layers" input where more than one input layer can be selected:

### Steps to reproduce the issue
- click to open "Export layers to DXF" from the processing toolbox
- click "run as batch process" in the bottom left
- click "Autofill" button in the first column
The options are missing...
### Versions
QGIS version: 3.22.3-Białowieża
QGIS code revision: 1628765ec7
Qt version: 5.15.2
Python version: 3.9.5
GDAL/OGR version: 3.4.1
PROJ version: 8.2.1
EPSG Registry database version: v10.041 (2021-12-03)
GEOS version: 3.10.0-CAPI-1.16.0
SQLite version: 3.35.2
PDAL version: 2.3.0
PostgreSQL client version: 13.0
SpatiaLite version: 5.0.1
QWT version: 6.1.3
QScintilla2 version: 2.11.5
OS version: Windows 10 Version 2009
Active Python plugins
AnotherDXF2Shape
1.2.3
AutoLayoutTool
1.1
autoSaver
2.6
batchvectorlayersaver
0.9
BulkVectorExport
1.1
CalculateGeometry
0.6.4
changeDataSource
3.1
DataPlotly
3.8.1
deactivate_active_labels
0.5
flowTrace
1.1.1
Generalizer3
1.0
GeoCoding
2.18
geo_sim_processing
1.2.0
getthemfiltered
0.1.3
gridSplitter
0.4.0
GroupStats
2.2.5
HideDocks
0.6.1
ImageServerConnector
2.1.1
ImportPhotos
3.0.3
joinmultiplelines
Version 0.4.1
karika
1.5
LayerBoard
1.0.1
linz-data-importer
2.2.3
loadthemall
3.3.0
MagicWand-master
1.3.1
mask
1.10.1
MemoryLayerSaver
4.0.4
mmqgis
2021.9.10
nominatim_locator_filter
0.2.4
numerator
0.2
numericalDigitize
0.4.6
pathfinder
version 0.4.1
plaingeometryeditor
3.0.0
plugin_reloader
0.9.1
powerpan
2.0
processing_saga
0.5.0
processing_taudem
3.0.0
processing_whitebox
0.14.0
profiletool
4.2.1
qchainage
3.0.1
QCopycanvas
0.5
qgis-plugin-findreplace-main
1
Qgis2threejs
2.6
QGIS3-getWKT
1.4
QuickMultiAttributeEdit3
version 3.0.3
QuickPrint
3.6.1
quicksaveqml
0.1.5
quick_map_services
0.19.27
rasmover-master
version 0.2
redLayer
2.2
selectThemes
3.0.1
simple_tools
0.4.1
splitmultipart
1.0.0
SplitPolygonShowingAreas
0.13
statist
3.2
themeselector
3.2.2
valuetool
3.0.15
ViewshedAnalysis
1.7
volume_calculation_tool
0.4
WaterNetAnalyzer-master
1.7
grassprovider
2.12.99
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
Batch processing gui for "Export layers to DXF" is missing functionality for loading layers into separate rows in the table - ### What is the bug or the crash?
BACKGROUND COMMENTS
The "Export layers to DXF" algorithm is relatively unusual (but not unique) because it has a single "input layers" input where more than one input layer can be selected. People are probably relatively unlikely to need to "run as batch process".
ISSUE
If people do need to "run as batch process", for some reason this algorithm is missing this functionality that is provided with other algorithms:
```
Add Files by Pattern...
Select Files...
Add All Files from a Directory...
Select from Open Layers...
```
Maybe someone thought this functionality isn't necessary for this algorithm?

Compare with the vector translate algorithm:

And with r.patch, which also has a single "input layers" input where more than one input layer can be selected:

### Steps to reproduce the issue
- click to open "Export layers to DXF" from the processing toolbox
- click "run as batch process" in the bottom left
- click "Autofill" button in the first column
The options are missing...
### Versions
QGIS version: 3.22.3-Białowieża
QGIS code revision: 1628765ec7
Qt version: 5.15.2
Python version: 3.9.5
GDAL/OGR version: 3.4.1
PROJ version: 8.2.1
EPSG Registry database version: v10.041 (2021-12-03)
GEOS version: 3.10.0-CAPI-1.16.0
SQLite version: 3.35.2
PDAL version: 2.3.0
PostgreSQL client version: 13.0
SpatiaLite version: 5.0.1
QWT version: 6.1.3
QScintilla2 version: 2.11.5
OS version: Windows 10 Version 2009
Active Python plugins
AnotherDXF2Shape
1.2.3
AutoLayoutTool
1.1
autoSaver
2.6
batchvectorlayersaver
0.9
BulkVectorExport
1.1
CalculateGeometry
0.6.4
changeDataSource
3.1
DataPlotly
3.8.1
deactivate_active_labels
0.5
flowTrace
1.1.1
Generalizer3
1.0
GeoCoding
2.18
geo_sim_processing
1.2.0
getthemfiltered
0.1.3
gridSplitter
0.4.0
GroupStats
2.2.5
HideDocks
0.6.1
ImageServerConnector
2.1.1
ImportPhotos
3.0.3
joinmultiplelines
Version 0.4.1
karika
1.5
LayerBoard
1.0.1
linz-data-importer
2.2.3
loadthemall
3.3.0
MagicWand-master
1.3.1
mask
1.10.1
MemoryLayerSaver
4.0.4
mmqgis
2021.9.10
nominatim_locator_filter
0.2.4
numerator
0.2
numericalDigitize
0.4.6
pathfinder
version 0.4.1
plaingeometryeditor
3.0.0
plugin_reloader
0.9.1
powerpan
2.0
processing_saga
0.5.0
processing_taudem
3.0.0
processing_whitebox
0.14.0
profiletool
4.2.1
qchainage
3.0.1
QCopycanvas
0.5
qgis-plugin-findreplace-main
1
Qgis2threejs
2.6
QGIS3-getWKT
1.4
QuickMultiAttributeEdit3
version 3.0.3
QuickPrint
3.6.1
quicksaveqml
0.1.5
quick_map_services
0.19.27
rasmover-master
version 0.2
redLayer
2.2
selectThemes
3.0.1
simple_tools
0.4.1
splitmultipart
1.0.0
SplitPolygonShowingAreas
0.13
statist
3.2
themeselector
3.2.2
valuetool
3.0.15
ViewshedAnalysis
1.7
volume_calculation_tool
0.4
WaterNetAnalyzer-master
1.7
grassprovider
2.12.99
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
batch processing gui for export layers to dxf is missing functionality for loading layers into separate rows in the table what is the bug or the crash background comments the export layers to dxf algorithm is relatively unusual but not unique because it has a single input layers input where more than one input layer can be selected people are probably relatively unlikely to need to run as batch process issue if people do need to run as batch process for some reason this algorithm is missing this functionality that is provided with other algorithms add files by pattern select files add all files from a directory select from open layers maybe someone thought this functionality isn t necessary for this algorithm compare with the vector translate algorithm and with r patch which also has a single input layers input where more than one input layer can be selected steps to reproduce the issue click to open export layers to dxf from the processing toolbox click run as batch process in the bottom left click autofill button in the first column the options are missing versions qgis version białowieża qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version spatialite version qwt version version os version windows version active python plugins autolayouttool autosaver batchvectorlayersaver bulkvectorexport calculategeometry changedatasource dataplotly deactivate active labels flowtrace geocoding geo sim processing getthemfiltered gridsplitter groupstats hidedocks imageserverconnector importphotos joinmultiplelines version karika layerboard linz data importer loadthemall magicwand master mask memorylayersaver mmqgis nominatim locator filter numerator numericaldigitize pathfinder version plaingeometryeditor plugin reloader powerpan processing saga processing taudem processing whitebox profiletool qchainage qcopycanvas qgis plugin findreplace main getwkt version quickprint quicksaveqml quick map services rasmover master version redlayer selectthemes simple tools splitmultipart splitpolygonshowingareas statist themeselector valuetool viewshedanalysis volume calculation tool waternetanalyzer master grassprovider processing sagaprovider supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
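For the QGIS sample above, the batch-GUI limitation it describes can also be worked around by scripting the algorithm from the QGIS Python console. The sketch below is illustrative only: the algorithm id `native:dxfexport` and the `LAYERS`/`OUTPUT` parameter keys are assumptions to be confirmed with `processing.algorithmHelp()`, and the layer names and output paths are hypothetical.

```python
# Illustrative sketch: run "Export layers to DXF" once per job from the QGIS Python
# console instead of the batch-process dialog. The algorithm id and parameter keys
# are assumptions; confirm them with algorithmHelp() before relying on this.
from qgis import processing
from qgis.core import QgsProject

processing.algorithmHelp("native:dxfexport")     # prints the real parameter names

jobs = [
    (["roads", "rivers"], "/tmp/job1.dxf"),      # hypothetical layer names / paths
    (["parcels", "buildings"], "/tmp/job2.dxf"),
]
for layer_names, out_path in jobs:
    layers = [QgsProject.instance().mapLayersByName(name)[0] for name in layer_names]
    processing.run("native:dxfexport", {"LAYERS": layers, "OUTPUT": out_path})
```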
16,494
| 21,468,810,048
|
IssuesEvent
|
2022-04-26 07:37:33
|
cyfronet-fid/recommender-system
|
https://api.github.com/repos/cyfronet-fid/recommender-system
|
closed
|
pymongo.errors.AutoReconnect: connection pool paused
|
recommender environment multiprocessing
|
When multiprocessing is enabled, this error sometimes occurs. Versions of packages may be the reason.
Traceback:
```
Traceback (most recent call last):
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/bin/flask", line 8, in <module>
sys.exit(main())
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/flask/cli.py", line 967, in main
cli.main(args=sys.argv[1:], prog_name="python -m flask" if as_module else None)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/flask/cli.py", line 586, in main
return super(FlaskGroup, self).main(*args, **kwargs)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/flask/cli.py", line 426, in decorator
return __ctx.invoke(f, *args, **kwargs)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/__init__.py", line 65, in tmp_train_command
train_command(task)
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/commands/train.py", line 53, in train_command
globals()[f"_{task}"]()
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/commands/train.py", line 36, in _rl_v1
RLPipeline(RL_PIPELINE_CONFIG_V1)()
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/engines/base/base_pipeline.py", line 71, in __call__
data, details = step(data)
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/engines/rl/training/data_extraction_step/data_extraction_step.py", line 47, in __call__
sarses = self._generate_sarses()
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/engines/rl/training/data_extraction_step/data_extraction_step.py", line 66, in _generate_sarses
real_sarses = regenerate_sarses(multi_processing=True, verbose=True)
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/engines/rl/ml_components/sarses_generator.py", line 341, in regenerate_sarses
verbose=verbose,
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/engines/rl/ml_components/sarses_generator.py", line 317, in generate_sarses
pool.map(executor, recommendations)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/billiard/pool.py", line 1416, in map
return self.map_async(func, iterable, chunksize).get()
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/billiard/pool.py", line 1791, in get
raise self._value.exception
pymongo.errors.AutoReconnect: 127.0.0.1:27022: connection pool paused
```
|
1.0
|
pymongo.errors.AutoReconnect: connection pool paused - When multiprocessing is enabled, this error sometimes occurs. Versions of packages may be the reason.
Traceback:
```
Traceback (most recent call last):
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/bin/flask", line 8, in <module>
sys.exit(main())
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/flask/cli.py", line 967, in main
cli.main(args=sys.argv[1:], prog_name="python -m flask" if as_module else None)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/flask/cli.py", line 586, in main
return super(FlaskGroup, self).main(*args, **kwargs)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/flask/cli.py", line 426, in decorator
return __ctx.invoke(f, *args, **kwargs)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/__init__.py", line 65, in tmp_train_command
train_command(task)
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/commands/train.py", line 53, in train_command
globals()[f"_{task}"]()
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/commands/train.py", line 36, in _rl_v1
RLPipeline(RL_PIPELINE_CONFIG_V1)()
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/engines/base/base_pipeline.py", line 71, in __call__
data, details = step(data)
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/engines/rl/training/data_extraction_step/data_extraction_step.py", line 47, in __call__
sarses = self._generate_sarses()
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/engines/rl/training/data_extraction_step/data_extraction_step.py", line 66, in _generate_sarses
real_sarses = regenerate_sarses(multi_processing=True, verbose=True)
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/engines/rl/ml_components/sarses_generator.py", line 341, in regenerate_sarses
verbose=verbose,
File "/home/michal/Desktop/Cyfronet/Recommender/recommender-system/recommender/engines/rl/ml_components/sarses_generator.py", line 317, in generate_sarses
pool.map(executor, recommendations)
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/billiard/pool.py", line 1416, in map
return self.map_async(func, iterable, chunksize).get()
File "/home/michal/.local/share/virtualenvs/recommender-system-dfVHVYpy/lib/python3.7/site-packages/billiard/pool.py", line 1791, in get
raise self._value.exception
pymongo.errors.AutoReconnect: 127.0.0.1:27022: connection pool paused
```
|
process
|
pymongo errors autoreconnect connection pool paused when multiprocessing is enabled this error sometimes occurs versions of packages may be the reason traceback traceback most recent call last file home michal local share virtualenvs recommender system dfvhvypy bin flask line in sys exit main file home michal local share virtualenvs recommender system dfvhvypy lib site packages flask cli py line in main cli main args sys argv prog name python m flask if as module else none file home michal local share virtualenvs recommender system dfvhvypy lib site packages flask cli py line in main return super flaskgroup self main args kwargs file home michal local share virtualenvs recommender system dfvhvypy lib site packages click core py line in main rv self invoke ctx file home michal local share virtualenvs recommender system dfvhvypy lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file home michal local share virtualenvs recommender system dfvhvypy lib site packages click core py line in invoke return ctx invoke self callback ctx params file home michal local share virtualenvs recommender system dfvhvypy lib site packages click core py line in invoke return callback args kwargs file home michal local share virtualenvs recommender system dfvhvypy lib site packages click decorators py line in new func return f get current context args kwargs file home michal local share virtualenvs recommender system dfvhvypy lib site packages flask cli py line in decorator return ctx invoke f args kwargs file home michal local share virtualenvs recommender system dfvhvypy lib site packages click core py line in invoke return callback args kwargs file home michal desktop cyfronet recommender recommender system recommender init py line in tmp train command train command task file home michal desktop cyfronet recommender recommender system recommender commands train py line in train command globals file home michal desktop cyfronet recommender recommender system recommender commands train py line in rl rlpipeline rl pipeline config file home michal desktop cyfronet recommender recommender system recommender engines base base pipeline py line in call data details step data file home michal desktop cyfronet recommender recommender system recommender engines rl training data extraction step data extraction step py line in call sarses self generate sarses file home michal desktop cyfronet recommender recommender system recommender engines rl training data extraction step data extraction step py line in generate sarses real sarses regenerate sarses multi processing true verbose true file home michal desktop cyfronet recommender recommender system recommender engines rl ml components sarses generator py line in regenerate sarses verbose verbose file home michal desktop cyfronet recommender recommender system recommender engines rl ml components sarses generator py line in generate sarses pool map executor recommendations file home michal local share virtualenvs recommender system dfvhvypy lib site packages billiard pool py line in map return self map async func iterable chunksize get file home michal local share virtualenvs recommender system dfvhvypy lib site packages billiard pool py line in get raise self value exception pymongo errors autoreconnect connection pool paused
| 1
|
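A common trigger for `pymongo.errors.AutoReconnect` in multiprocessing setups like the traceback above is sharing a `MongoClient` across a fork: PyMongo clients are not fork-safe. The sketch below shows the usual per-worker-client pattern as a general illustration, not the fix adopted in cyfronet-fid/recommender-system; the database and collection names are placeholders.

```python
# General PyMongo-with-multiprocessing pattern: create the client lazily inside each
# worker process instead of inheriting one across fork(). Illustrative only.
from multiprocessing import Pool
from pymongo import MongoClient

_client = None  # per-process client, created after the fork

def _get_client():
    global _client
    if _client is None:
        _client = MongoClient("mongodb://127.0.0.1:27022")  # address taken from the traceback
    return _client

def executor(recommendation_id):
    db = _get_client()["recommender"]            # placeholder database name
    return db.recommendations.find_one({"_id": recommendation_id})

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        pool.map(executor, range(10))
```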
12,693
| 15,074,642,053
|
IssuesEvent
|
2021-02-05 00:15:15
|
aws-cloudformation/cloudformation-cli
|
https://api.github.com/repos/aws-cloudformation/cloudformation-cli
|
closed
|
Schema with regex pattern validation is accepted but fails later with "Internal Failure"
|
schema processing
|
Opening based on https://github.com/aws-cloudformation/cloudformation-cli-python-plugin/issues/99, in which @OElesin found the following `pattern` caused this problem:
```json
{
"typeName": "CD4AutoML::Workflow::Deploy",
"properties": {
"NotificationEmail": {
"type": "string",
"pattern": "^[\\x20-\\x45]?[\\w-\\+]+(\\.[\\w]+)*@[\\w-]+(\\.[\\w]+)*(\\.[a-z]{2,})$"
},
"WorkflowName": {
"type": "string",
}
},
"additionalProperties": false,
"required": [
"NotificationEmail",
"WorkflowName"
],
"primaryIdentifier": [
"/properties/WorkflowName"
],
"handlers": {
"create": {
"permissions": [""]
},
"read": {
"permissions": [""]
},
"update": {
"permissions": [""]
},
"delete": {
"permissions": [""]
},
"list": {
"permissions": [""]
}
}
}
```
|
1.0
|
Schema with regex pattern validation is accepted but fails later with "Internal Failure" - Opening based on https://github.com/aws-cloudformation/cloudformation-cli-python-plugin/issues/99, in which @OElesin found the following `pattern` caused this problem:
```json
{
"typeName": "CD4AutoML::Workflow::Deploy",
"properties": {
"NotificationEmail": {
"type": "string",
"pattern": "^[\\x20-\\x45]?[\\w-\\+]+(\\.[\\w]+)*@[\\w-]+(\\.[\\w]+)*(\\.[a-z]{2,})$"
},
"WorkflowName": {
"type": "string",
}
},
"additionalProperties": false,
"required": [
"NotificationEmail",
"WorkflowName"
],
"primaryIdentifier": [
"/properties/WorkflowName"
],
"handlers": {
"create": {
"permissions": [""]
},
"read": {
"permissions": [""]
},
"update": {
"permissions": [""]
},
"delete": {
"permissions": [""]
},
"list": {
"permissions": [""]
}
}
}
```
|
process
|
schema with regex pattern validation is accepted but fails later with internal failure opening based on in which oelesin found the following pattern caused this problem json typename workflow deploy properties notificationemail type string pattern workflowname type string additionalproperties false required notificationemail workflowname primaryidentifier properties workflowname handlers create permissions read permissions update permissions delete permissions list permissions
| 1
|
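For the schema sample above, a quick local check of the `pattern` can catch some problems before the resource type is submitted. The sketch below only exercises Python's `re` dialect, which may differ from whatever regex engine produced the "Internal Failure", so it is a rough sanity check rather than a reproduction of the bug.

```python
# Rough sanity check of the schema's "pattern" using Python's regex engine.
# Passing (or failing) here says nothing definitive about the CloudFormation registry.
import re

pattern = r"^[\x20-\x45]?[\w-\+]+(\.[\w]+)*@[\w-]+(\.[\w]+)*(\.[a-z]{2,})$"

try:
    regex = re.compile(pattern)
except re.error as exc:
    print("rejected by Python's re module:", exc)
else:
    for sample in ["user@example.com", "not-an-email"]:
        print(sample, bool(regex.fullmatch(sample)))
```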
6,499
| 9,480,746,218
|
IssuesEvent
|
2019-04-20 20:42:35
|
SJSURobotics2019/missioncontrol2019
|
https://api.github.com/repos/SJSURobotics2019/missioncontrol2019
|
closed
|
Add Mimic To Arm Module
|
Control Requirement
|
From Allen's Mimic
- Wrist Delta (forward back wrist) :
- Wrist Roll:
- Elbow:
- Shoulder:
- Rotunda:
_Look out for buttons in the future_
|
1.0
|
Add Mimic To Arm Module - From Allen's Mimic
- Wrist Delta (forward back wrist) :
- Wrist Roll:
- Elbow:
- Shoulder:
- Rotunda:
_Look out for buttons in the future_
|
non_process
|
add mimic to arm module from allen s mimic wrist delta forward back wrist wrist roll elbow shoulder rotunda look out for buttons in the future
| 0
|
11,922
| 14,703,535,858
|
IssuesEvent
|
2021-01-04 15:11:04
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
stageDependencies syntax in 'Define Variables' incorrect
|
Pri1 devops-cicd-process/tech devops/prod doc-bug
|
The [Use outputs in a different stage](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#use-outputs-in-a-different-stage) section describes the use of `stageDependencies` syntax for accessing output variables from another stage.
This syntax doesn't seem to work at all, and instead there is a different way of accessing other stage outputs.
**Documented format** (always undefined)
```yaml
stageDependencies.StageOne.JobA.outputs['StepA.MyVar']
```
**What actually works**
```yaml
dependencies.StageOne.outputs['JobA.StepA.MyVar']
# or
stageDependencies.StageOne.outputs['JobA.StepA.MyVar']
```
Related to #8366
It would also be great if instead of calling stages, jobs and steps all "A", "B", "C" you used names like "StageA", "JobB" and "StepC" so that we don't need to constantly reference the rest of the file to be able to read it. ☺
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
stageDependencies syntax in 'Define Variables' incorrect - The [Use outputs in a different stage](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#use-outputs-in-a-different-stage) section describes the use of `stageDependencies` syntax for accessing output variables from another stage.
This syntax doesn't seem to work at all, and instead there is a different way of accessing other stage outputs.
**Documented format** (always undefined)
```yaml
stageDependencies.StageOne.JobA.outputs['StepA.MyVar']
```
**What actually works**
```yaml
dependencies.StageOne.outputs['JobA.StepA.MyVar']
# or
stageDependencies.StageOne.outputs['JobA.StepA.MyVar']
```
Related to #8366
It would also be great if instead of calling stages, jobs and steps all "A", "B", "C" you used names like "StageA", "JobB" and "StepC" so that we don't need to constantly reference the rest of the file to be able to read it. ☺
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
stagedependencies syntax in define variables incorrect the section describes the use of stagedependencies syntax for accessing output variables from another stage this syntax doesn t seem to work at all and instead there is a different way of accessing other stage outputs documented format always undefined yaml stagedependencies stageone joba outputs what actually works yaml dependencies stageone outputs or stagedependencies stageone outputs related to it would also be great if instead of calling stages jobs and steps all a b c you used names like stagea jobb and stepc so that we don t need to constantly reference the rest of the file to be able to read it ☺ document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
1,052
| 3,520,504,577
|
IssuesEvent
|
2016-01-12 21:09:05
|
kerubistan/kerub
|
https://api.github.com/repos/kerubistan/kerub
|
opened
|
use the Distribution class to detect hypervisors
|
component:data processing component:virtualization enhancement priority: normal
|
at the moment the hypervisor is temporarily hardcoded kvm, but the host capabilities detection should be able to find what hypervisors are installed
|
1.0
|
use the Distribution class to detect hypervisors - at the moment the hypervisor is temporarily hardcoded kvm, but the host capabilities detection should be able to find what hypervisors are installed
|
process
|
use the distribution class to detect hypervisors at the moment the hypervisor is temporarily hardcoded kvm but the host capabilities detection should be able to find what hypervisors are installed
| 1
|
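The kerub sample above asks for hypervisor detection as part of host-capability probing. kerub itself is a Kotlin/Java project, so the sketch below is only a language-agnostic illustration of the idea (probe for well-known hypervisor binaries on PATH); the binary names are examples, not taken from the issue.

```python
# Illustrative hypervisor detection by probing for well-known binaries on PATH.
# Binary names are examples; a real implementation would use the host's capability
# data as the issue suggests.
import shutil

HYPERVISOR_BINARIES = {
    "kvm": ["qemu-kvm", "qemu-system-x86_64"],
    "virtualbox": ["VBoxManage"],
    "xen": ["xl"],
}

def detect_hypervisors():
    return [name for name, binaries in HYPERVISOR_BINARIES.items()
            if any(shutil.which(binary) for binary in binaries)]

print(detect_hypervisors())
```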
18,095
| 24,121,951,136
|
IssuesEvent
|
2022-09-20 19:34:36
|
neuropsychology/NeuroKit
|
https://api.github.com/repos/neuropsychology/NeuroKit
|
closed
|
Standardize and tidy up finding the elbow/knee/plateau point
|
feature idea :fire: wontfix signal processing :chart_with_upwards_trend: inactive 👻
|
Finding the elbow of a curve is a very common problem and we use it in multiple instances. Currently we always have bespoke implementations but we could gather them into a unique `find_plateau/elbow()` function that would help in terms of re-usability, maintainability and testing.
|
1.0
|
Standardize and tidy up finding the elbow/knee/plateau point - Finding the elbow of a curve is a very common problem and we use it in multiple instances. Currently we always have bespoke implementations but we could gather them into a unique `find_plateau/elbow()` function that would help in terms of re-usability, maintainability and testing.
|
process
|
standardize and tidy up finding the elbow knee plateau point finding the elbow of a curve is a very common problem and we use it in multiple instances currently we always have bespoke implementations but we could gather them into a unique find plateau elbow function that would help in terms of re usability maintainability and testing
| 1
|
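One common elbow/knee heuristic that a unified `find_plateau()`/`find_elbow()` helper could wrap is the maximum-distance-to-chord rule: pick the point farthest from the straight line joining the curve's endpoints. The sketch below illustrates that heuristic only; it is not NeuroKit's implementation.

```python
# Maximum-distance-to-chord elbow heuristic (illustrative; not NeuroKit's code).
import numpy as np

def find_elbow(y):
    """Index of the point farthest from the straight line joining the endpoints."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float)
    p1 = np.array([x[0], y[0]])
    p2 = np.array([x[-1], y[-1]])
    direction = (p2 - p1) / np.linalg.norm(p2 - p1)
    vectors = np.column_stack([x, y]) - p1
    projections = np.outer(vectors @ direction, direction)
    distances = np.linalg.norm(vectors - projections, axis=1)
    return int(np.argmax(distances))

# Example: an exponential decay has a clear elbow early in the curve.
print(find_elbow(np.exp(-np.linspace(0, 5, 50))))
```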
17,820
| 23,744,746,953
|
IssuesEvent
|
2022-08-31 15:05:00
|
googleapis/java-compute
|
https://api.github.com/repos/googleapis/java-compute
|
closed
|
compute.v1.integration.ITPaginationTest: testPaginationNextToken failed
|
priority: p2 type: process api: compute flakybot: issue flakybot: flaky
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 56af77345508d42133ac1a47ebf19d2bfa51507b
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/98708ab0-57e2-4e18-9c01-a0ae82a17e78), [Sponge](http://sponge2/98708ab0-57e2-4e18-9c01-a0ae82a17e78)
status: failed
<details><summary>Test output</summary><br><pre>java.lang.AssertionError:
expected:<[id: 2261
kind: "compute#zone"
name: "asia-southeast1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-b"
, id: 2260
kind: "compute#zone"
name: "asia-southeast1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-a"
, id: 2262
kind: "compute#zone"
name: "asia-southeast1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-c"
, id: 2322
kind: "compute#zone"
name: "asia-south1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-c"
, id: 2320
kind: "compute#zone"
name: "asia-south1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-b"
, id: 2321
kind: "compute#zone"
name: "asia-south1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-a"
, id: 2282
kind: "compute#zone"
name: "australia-southeast1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-b"
, id: 2280
kind: "compute#zone"
name: "australia-southeast1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-c"
, id: 2281
kind: "compute#zone"
name: "australia-southeast1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-a"
, id: 2470
kind: "compute#zone"
name: "asia-south2-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-a"
, id: 2472
kind: "compute#zone"
name: "asia-south2-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-b"
, id: 2471
kind: "compute#zone"
name: "asia-south2-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-c"
, id: 2440
kind: "compute#zone"
name: "asia-southeast2-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-a"
, id: 2442
kind: "compute#zone"
name: "asia-southeast2-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-b"
, id: 2441
kind: "compute#zone"
name: "asia-southeast2-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-c"
]> but was:<[id: 2261
kind: "compute#zone"
name: "asia-southeast1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Ice Lake"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-b"
, id: 2260
kind: "compute#zone"
name: "asia-southeast1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Ice Lake"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-a"
, id: 2262
kind: "compute#zone"
name: "asia-southeast1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Ice Lake"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-c"
, id: 2322
kind: "compute#zone"
name: "asia-south1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-c"
, id: 2320
kind: "compute#zone"
name: "asia-south1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-b"
, id: 2321
kind: "compute#zone"
name: "asia-south1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-a"
, id: 2282
kind: "compute#zone"
name: "australia-southeast1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-b"
, id: 2280
kind: "compute#zone"
name: "australia-southeast1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-c"
, id: 2281
kind: "compute#zone"
name: "australia-southeast1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-a"
, id: 2470
kind: "compute#zone"
name: "asia-south2-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-a"
, id: 2472
kind: "compute#zone"
name: "asia-south2-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-b"
, id: 2471
kind: "compute#zone"
name: "asia-south2-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-c"
, id: 2440
kind: "compute#zone"
name: "asia-southeast2-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-a"
, id: 2442
kind: "compute#zone"
name: "asia-southeast2-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-b"
, id: 2441
kind: "compute#zone"
name: "asia-southeast2-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-c"
]>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:120)
at org.junit.Assert.assertEquals(Assert.java:146)
at com.google.cloud.compute.v1.integration.ITPaginationTest.testPaginationNextToken(ITPaginationTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
</pre></details>
|
1.0
|
compute.v1.integration.ITPaginationTest: testPaginationNextToken failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 56af77345508d42133ac1a47ebf19d2bfa51507b
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/98708ab0-57e2-4e18-9c01-a0ae82a17e78), [Sponge](http://sponge2/98708ab0-57e2-4e18-9c01-a0ae82a17e78)
status: failed
<details><summary>Test output</summary><br><pre>java.lang.AssertionError:
expected:<[id: 2261
kind: "compute#zone"
name: "asia-southeast1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-b"
, id: 2260
kind: "compute#zone"
name: "asia-southeast1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-a"
, id: 2262
kind: "compute#zone"
name: "asia-southeast1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-c"
, id: 2322
kind: "compute#zone"
name: "asia-south1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-c"
, id: 2320
kind: "compute#zone"
name: "asia-south1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-b"
, id: 2321
kind: "compute#zone"
name: "asia-south1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-a"
, id: 2282
kind: "compute#zone"
name: "australia-southeast1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-b"
, id: 2280
kind: "compute#zone"
name: "australia-southeast1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-c"
, id: 2281
kind: "compute#zone"
name: "australia-southeast1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-a"
, id: 2470
kind: "compute#zone"
name: "asia-south2-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-a"
, id: 2472
kind: "compute#zone"
name: "asia-south2-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-b"
, id: 2471
kind: "compute#zone"
name: "asia-south2-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-c"
, id: 2440
kind: "compute#zone"
name: "asia-southeast2-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-a"
, id: 2442
kind: "compute#zone"
name: "asia-southeast2-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-b"
, id: 2441
kind: "compute#zone"
name: "asia-southeast2-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-c"
]> but was:<[id: 2261
kind: "compute#zone"
name: "asia-southeast1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Ice Lake"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-b"
, id: 2260
kind: "compute#zone"
name: "asia-southeast1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Ice Lake"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-a"
, id: 2262
kind: "compute#zone"
name: "asia-southeast1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast1"
available_cpu_platforms: "Intel Ice Lake"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
available_cpu_platforms: "AMD Rome"
status: UP
description: "asia-southeast1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast1-c"
, id: 2322
kind: "compute#zone"
name: "asia-south1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-c"
, id: 2320
kind: "compute#zone"
name: "asia-south1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-b"
, id: 2321
kind: "compute#zone"
name: "asia-south1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "asia-south1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south1-a"
, id: 2282
kind: "compute#zone"
name: "australia-southeast1-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-b"
, id: 2280
kind: "compute#zone"
name: "australia-southeast1-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-c"
, id: 2281
kind: "compute#zone"
name: "australia-southeast1-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: true
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/australia-southeast1"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
available_cpu_platforms: "Intel Sandy Bridge"
status: UP
description: "australia-southeast1-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/australia-southeast1-a"
, id: 2470
kind: "compute#zone"
name: "asia-south2-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-a"
, id: 2472
kind: "compute#zone"
name: "asia-south2-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-b"
, id: 2471
kind: "compute#zone"
name: "asia-south2-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-south2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-south2-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-south2-c"
, id: 2440
kind: "compute#zone"
name: "asia-southeast2-a"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-a"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-a"
, id: 2442
kind: "compute#zone"
name: "asia-southeast2-b"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-b"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-b"
, id: 2441
kind: "compute#zone"
name: "asia-southeast2-c"
creation_timestamp: "1969-12-31T16:00:00.000-08:00"
supports_pzs: false
region: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/regions/asia-southeast2"
available_cpu_platforms: "Intel Cascade Lake"
available_cpu_platforms: "Intel Skylake"
available_cpu_platforms: "Intel Broadwell"
available_cpu_platforms: "Intel Haswell"
available_cpu_platforms: "Intel Ivy Bridge"
status: UP
description: "asia-southeast2-c"
self_link: "https://www.googleapis.com/compute/v1/projects/gcloud-devel/zones/asia-southeast2-c"
]>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:120)
at org.junit.Assert.assertEquals(Assert.java:146)
at com.google.cloud.compute.v1.integration.ITPaginationTest.testPaginationNextToken(ITPaginationTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
</pre></details>
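Looking at the diff above, the expected and actual zone lists differ only in volatile metadata (the actual payload lists an extra "Intel Ice Lake" platform for the asia-southeast1 zones). Below is a minimal, hypothetical sketch of comparing only stable identifiers instead of full payloads; it is written in Python for brevity, is not the project's actual Java test, and the helper names are invented for illustration.
```python
# Hypothetical sketch: compare only stable zone identifiers across pages so that
# volatile metadata (e.g. a newly listed CPU platform) does not fail the check.

def zone_names(zones):
    """Extract the stable 'name' field from each zone entry (dicts stand in for protos)."""
    return [zone["name"] for zone in zones]

def assert_same_zones(expected, actual):
    """Raise AssertionError when the two pages do not cover the same zones."""
    expected_names = zone_names(expected)
    actual_names = zone_names(actual)
    if expected_names != actual_names:
        raise AssertionError(f"zone names differ: {expected_names} vs {actual_names}")

# Example usage with two toy pages:
page_a = [{"name": "asia-southeast1-a"}, {"name": "asia-southeast1-b"}]
page_b = [{"name": "asia-southeast1-a"}, {"name": "asia-southeast1-b"}]
assert_same_zones(page_a, page_b)
```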
|
process
|
compute integration itpaginationtest testpaginationnexttoken failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output java lang assertionerror expected id kind compute zone name asia b creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge available cpu platforms amd rome status up description asia b self link id kind compute zone name asia a creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge available cpu platforms amd rome status up description asia a self link id kind compute zone name asia c creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge available cpu platforms amd rome status up description asia c self link id kind compute zone name asia c creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge status up description asia c self link id kind compute zone name asia b creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge status up description asia b self link id kind compute zone name asia a creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge status up description asia a self link id kind compute zone name australia b creation timestamp supports pzs true region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge status up description australia b self link id kind compute zone name australia c creation timestamp supports pzs true region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge status up description australia c self link id kind compute zone name australia a creation timestamp supports pzs true region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms 
intel ivy bridge available cpu platforms intel sandy bridge status up description australia a self link id kind compute zone name asia a creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia a self link id kind compute zone name asia b creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia b self link id kind compute zone name asia c creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia c self link id kind compute zone name asia a creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia a self link id kind compute zone name asia b creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia b self link id kind compute zone name asia c creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia c self link but was id kind compute zone name asia b creation timestamp supports pzs false region available cpu platforms intel ice lake available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge available cpu platforms amd rome status up description asia b self link id kind compute zone name asia a creation timestamp supports pzs false region available cpu platforms intel ice lake available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge available cpu platforms amd rome status up description asia a self link id kind compute zone name asia c creation timestamp supports pzs false region available cpu platforms intel ice lake available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge available cpu platforms amd rome status up description asia c self link id kind compute zone name asia c creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell 
available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge status up description asia c self link id kind compute zone name asia b creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge status up description asia b self link id kind compute zone name asia a creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge status up description asia a self link id kind compute zone name australia b creation timestamp supports pzs true region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge status up description australia b self link id kind compute zone name australia c creation timestamp supports pzs true region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge status up description australia c self link id kind compute zone name australia a creation timestamp supports pzs true region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge available cpu platforms intel sandy bridge status up description australia a self link id kind compute zone name asia a creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia a self link id kind compute zone name asia b creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia b self link id kind compute zone name asia c creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia c self link id kind compute zone name asia a creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia a self link id kind compute zone name asia b creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia b self link id kind compute 
zone name asia c creation timestamp supports pzs false region available cpu platforms intel cascade lake available cpu platforms intel skylake available cpu platforms intel broadwell available cpu platforms intel haswell available cpu platforms intel ivy bridge status up description asia c self link at org junit assert fail assert java at org junit assert failnotequals assert java at org junit assert assertequals assert java at org junit assert assertequals assert java at com google cloud compute integration itpaginationtest testpaginationnexttoken itpaginationtest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executeeager junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java
| 1
|
48,465
| 10,242,207,758
|
IssuesEvent
|
2019-08-20 03:52:43
|
BrightSpots/rcv
|
https://api.github.com/repos/BrightSpots/rcv
|
closed
|
Clean up VVSG 1.0 comments
|
certification-deliverable certification-deliverable-essential code cleanup
|
Since we're certifying to VVSG 1.1 now, no more need for extraneous comments describing what variables do and inputs / outputs of functions. Woohoo! Obviously leave comments in if they add value, but let's purge the cruft otherwise.
|
1.0
|
Clean up VVSG 1.0 comments - Since we're certifying to VVSG 1.1 now, no more need for extraneous comments describing what variables do and inputs / outputs of functions. Woohoo! Obviously leave comments in if they add value, but let's purge the cruft otherwise.
|
non_process
|
clean up vvsg comments since we re certifying to vvsg now no more need for extraneous comments describing what variables do and inputs outputs of functions woohoo obviously leave comments in if they add value but let s purge the cruft otherwise
| 0
|
214,098
| 16,558,662,579
|
IssuesEvent
|
2021-05-28 16:50:03
|
IntellectualSites/FastAsyncWorldEdit
|
https://api.github.com/repos/IntellectualSites/FastAsyncWorldEdit
|
closed
|
(repost since I updated my version) //set leaves random holes that are 1 block deep.
|
Requires Testing
|
### Server Implementation
Paper
### Server Version
1.16.5
### Describe the bug
Whenever I use the //set command, I eventually return, seeing that there are some holes which are 1 block deep. I also tried checking the console.
### To Reproduce
1. Use the //set command.
2. Go away from the location, and go back.
### Expected behaviour
Everything should be filled and there should be no holes.
### Screenshots / Videos


### Error log (if applicable)
https://paste.gg/p/anonymous/c3bd7247907a4ad6bbda0c14a61b54d9
### Fawe Debugpaste
https://athion.net/ISPaster/paste/view/d5e5d5cec5494749a6b7fd533d3db12d
### Fawe Version
FastAsyncWorldEdit version 1.16-695;0461082
### Checklist
- [X] I have included a Fawe debugpaste.
- [X] I am using the newest build from https://ci.athion.net/job/FastAsyncWorldEdit-1.16/ and the issue still persists.
### Anything else?
I was trying to use the //set command to fill a large area, but it leaves these weird holes. I tested with my friend; he also gets them.
|
1.0
|
(repost since I updated my version) //set leaves random holes that are 1 block deep. - ### Server Implementation
Paper
### Server Version
1.16.5
### Describe the bug
Whenever I use the //set command, I eventually return, seeing that there are some holes which are 1 block deep. I also tried checking the console.
### To Reproduce
1. Use the //set command.
2. Go away from the location, and go back.
### Expected behaviour
Everything should be filled and there should be no holes.
### Screenshots / Videos


### Error log (if applicable)
https://paste.gg/p/anonymous/c3bd7247907a4ad6bbda0c14a61b54d9
### Fawe Debugpaste
https://athion.net/ISPaster/paste/view/d5e5d5cec5494749a6b7fd533d3db12d
### Fawe Version
FastAsyncWorldEdit version 1.16-695;0461082
### Checklist
- [X] I have included a Fawe debugpaste.
- [X] I am using the newest build from https://ci.athion.net/job/FastAsyncWorldEdit-1.16/ and the issue still persists.
### Anything else?
I was trying to use the //set command to fill a large area, but it leaves these weird holes. I tested with my friend; he also gets them.
|
non_process
|
repost since i updated my version set does random hole that are block deep server implementation paper server version describe the bug whenever i use the set command i eventually return seeing that there are some holes which a block deep i also tried checking console to reproduce use the set command go away from the location and go back expected behaviour everything should be filled and there should be no holes screenshots videos error log if applicable fawe debugpaste fawe version fastasyncworldedit version checklist i have included a fawe debugpaste i am using the newest build from and the issue still persists anything else i was trying to use the set command to fill a large areas but it does these weird holes i tested with my friend he also has them
| 0
|
8,864
| 11,960,416,325
|
IssuesEvent
|
2020-04-05 03:01:35
|
collectivertl/risc8_tcoonan
|
https://api.github.com/repos/collectivertl/risc8_tcoonan
|
opened
|
Enable builds via container
|
process
|
This issue is to enable building the core independently of the host.
This lets any user do development without needing all of the tools installed locally, i.e. both Windows and Linux users can do development. In addition, this would be a preliminary step towards getting the builds done as Continuous Integration.
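One possible shape for this, sketched below under stated assumptions (the image name and build target are placeholders, not existing artifacts of this repository): a small Python wrapper that runs the build inside a Docker container so the host only needs Docker installed.
```python
# Hypothetical sketch of a containerized build entry point; the image name and
# build target are placeholders and would need to be defined for this repository.
import subprocess
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parent
IMAGE = "example/risc8-build-tools:latest"  # placeholder tool image

def containerized_build(target: str = "all") -> None:
    """Run the build inside a container, mounting the repository at /work."""
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{REPO_ROOT}:/work",
            "-w", "/work",
            IMAGE,
            "make", target,
        ],
        check=True,
    )

if __name__ == "__main__":
    containerized_build()
```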
|
1.0
|
Enable builds via container - This issue is to enable building the core independently of the host.
This lets any user do development without needing all of the tools installed locally, i.e. both Windows and Linux users can do development. In addition, this would be a preliminary step towards getting the builds done as Continuous Integration.
|
process
|
enable builds via container this issue is to enable the ability to do a build of the core independent of host this is to enable any user to be able to do development without the need for all of the tools to be installed locally i e windows and linux users can do development in addition this would be a preliminary step to getting the builds done as continuous integration
| 1
|
9,928
| 12,966,751,979
|
IssuesEvent
|
2020-07-21 01:25:09
|
googleapis/java-spanner
|
https://api.github.com/repos/googleapis/java-spanner
|
closed
|
Update Dependencies (Renovate Bot)
|
api: spanner type: process
|
This [master issue](https://renovatebot.com/blog/master-issue) contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.commons-commons-lang3-3.x -->[deps: update dependency org.apache.commons:commons-lang3 to v3.11](../pull/356)
---
<details><summary>Advanced</summary>
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
</details>
|
1.0
|
Update Dependencies (Renovate Bot) - This [master issue](https://renovatebot.com/blog/master-issue) contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.commons-commons-lang3-3.x -->[deps: update dependency org.apache.commons:commons-lang3 to v3.11](../pull/356)
---
<details><summary>Advanced</summary>
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
</details>
|
process
|
update dependencies renovate bot this contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any pull advanced check this box to trigger a request for renovate to run again on this repository
| 1
|
7,794
| 10,949,173,652
|
IssuesEvent
|
2019-11-26 10:19:32
|
toggl/mobileapp
|
https://api.github.com/repos/toggl/mobileapp
|
closed
|
Improve testing template re: onboarding
|
process
|
Currently, this is what the manual test template says about onboarding:
- [ ] Check that steps are being displayed correctly for first entry
- [ ] Check that steps to edit first entry are being displayed correctly
Maybe it would still be good to specify **exactly what is expected** and **after what steps.**
|
1.0
|
Improve testing template re: onboarding - Currently, this is what the manual test template says about onboarding:
- [ ] Check that steps are being displayed correctly for first entry
- [ ] Check that steps to edit first entry are being displayed correctly
Maybe it would still be good to specify **exactly what is expected** and **after what steps.**
|
process
|
improve testing template re onboarding currently this is what the manual test template says about onboarding check that steps are being displayed correctly for first entry check that steps to edit first entry are being displayed correctly maybe it would still be good to specify exactly what is expected and after what steps
| 1
|
19,953
| 26,429,637,751
|
IssuesEvent
|
2023-01-14 16:52:11
|
nextflow-io/nextflow
|
https://api.github.com/repos/nextflow-io/nextflow
|
closed
|
Error message of `file(..., checkIfExists)` only says `No such file` even though the function can also be applied to directories
|
lang/processes
|
## Bug report
### Expected behavior and actual behavior
`file(..., checkIfExists: true)` can be run on files or directories, but always prints `No such file: ...` when the path does not exist. This can be confusing for end users of workflows that require an input directory when they specify an invalid path to their input dir. Could the message be changed to `No such file or directory`?
### Steps to reproduce the problem
```
file("/my/nonexisting/path", checkIfExists: true)
```
### Program output
```
No such file: /my/nonexisting/path
```
### Environment
* Nextflow version: 22.10.4
* Java version: openjdk 17.0.3-internal 2022-04-19
* Operating system: Ubuntu
* GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
### Additional context
The same applies to `watchPath` and other code that throws a `java.nio.file.NoSuchFileException` (the exceptions are handled [here](https://github.com/nextflow-io/nextflow/blob/646776a85883e07212d4edaa9a96fdbefaa0f586/modules/nextflow/src/main/groovy/nextflow/processor/TaskProcessor.groovy#L1280) and [here](https://github.com/nextflow-io/nextflow/blob/646776a85883e07212d4edaa9a96fdbefaa0f586/modules/nextflow/src/main/groovy/nextflow/util/LoggerHelper.groovy#L466)).
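For illustration only (a hedged sketch, not Nextflow's Groovy implementation): the standard POSIX wording for a missing path already covers both cases, as the Python snippet below shows, so mirroring that wording in the `checkIfExists` error would address the report.
```python
# Hedged illustration, not Nextflow code: raise an error whose wording covers
# both files and directories, matching the standard ENOENT message.
import errno
import os
from pathlib import Path

def check_if_exists(path: str) -> Path:
    p = Path(path)
    if not p.exists():
        # os.strerror(errno.ENOENT) is "No such file or directory" on POSIX systems.
        raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), str(p))
    return p

# Example: check_if_exists("/my/nonexisting/path") raises
# FileNotFoundError: [Errno 2] No such file or directory: '/my/nonexisting/path'
```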
|
1.0
|
Error message of `file(..., checkIfExists)` only says `No such file` even though the function can also be applied to directories - ## Bug report
### Expected behavior and actual behavior
`file(..., checkIfExists: true)` can be run on files or directories, but always prints `No such file: ...` when the path does not exist. This can be confusing for end users of workflows that require an input directory when they specify an invalid path to their input dir. Could the message be changed to `No such file or directory`?
### Steps to reproduce the problem
```
file("/my/nonexisting/path", checkIfExists: true)
```
### Program output
```
No such file: /my/nonexisting/path
```
### Environment
* Nextflow version: 22.10.4
* Java version: openjdk 17.0.3-internal 2022-04-19
* Operating system: Ubuntu
* GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
### Additional context
The same applies to `watchPath` and other code that throws a `java.nio.file.NoSuchFileException` (the exceptions are handled [here](https://github.com/nextflow-io/nextflow/blob/646776a85883e07212d4edaa9a96fdbefaa0f586/modules/nextflow/src/main/groovy/nextflow/processor/TaskProcessor.groovy#L1280) and [here](https://github.com/nextflow-io/nextflow/blob/646776a85883e07212d4edaa9a96fdbefaa0f586/modules/nextflow/src/main/groovy/nextflow/util/LoggerHelper.groovy#L466)).
|
process
|
error message of file checkifexists only says no such file even though the function can also be applied to directories bug report expected behavior and actual behavior file checkifexists true can be run on files or directories but always prints no such file when the path does not exist this can be confusing for end users of workflows that require an input directory when they specify an invalid path to their input dir could the message be changed to no such file or directory steps to reproduce the problem file my nonexisting path checkifexists true program output no such file my nonexisting path environment nextflow version java version openjdk internal operating system ubuntu gnu bash version release pc linux gnu additional context the same applies to watchpath and other code that throws a java nio file nosuchfileexception the exceptions are handled and
| 1
|
460,444
| 13,209,684,893
|
IssuesEvent
|
2020-08-15 12:52:46
|
googleapis/elixir-google-api
|
https://api.github.com/repos/googleapis/elixir-google-api
|
opened
|
Synthesis failed for FirebaseML
|
autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate FirebaseML. :broken_heart:
Here's the output from running `synth.py`:
```
2020-08-15 05:52:43,782 autosynth [INFO] > logs will be written to: /tmpfs/src/logs/elixir-google-api
2020-08-15 05:52:44,665 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2020-08-15 05:52:44,668 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2020-08-15 05:52:44,671 autosynth [DEBUG] > Running: git config user.email yoshi-automation@google.com
2020-08-15 05:52:44,674 autosynth [DEBUG] > Running: git config push.default simple
2020-08-15 05:52:44,676 autosynth [DEBUG] > Running: git branch -f autosynth-firebaseml
2020-08-15 05:52:44,679 autosynth [DEBUG] > Running: git checkout autosynth-firebaseml
Switched to branch 'autosynth-firebaseml'
2020-08-15 05:52:44,899 autosynth [INFO] > Running synthtool
2020-08-15 05:52:44,900 autosynth [INFO] > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/firebase_ml/synth.metadata', 'synth.py', '--']
2020-08-15 05:52:44,900 autosynth [DEBUG] > log_file_path: /tmpfs/src/logs/elixir-google-api/FirebaseML/sponge_log.log
2020-08-15 05:52:44,902 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata clients/firebase_ml/synth.metadata synth.py -- FirebaseML
2020-08-15 05:52:45,108 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py.
On branch autosynth-firebaseml
nothing to commit, working tree clean
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 93, in main
with synthtool.metadata.MetadataTrackerAndWriter(metadata):
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 237, in __enter__
self.observer.start()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 260, in start
emitter.start()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 110, in start
self.on_thread_start()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify.py", line 121, in on_thread_start
self._inotify = InotifyBuffer(path, self.watch.is_recursive)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_buffer.py", line 35, in __init__
self._inotify = Inotify(path, recursive)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 203, in __init__
self._add_watch(path, event_mask)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 412, in _add_watch
Inotify._raise_error()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 432, in _raise_error
raise OSError(err, os.strerror(err))
FileNotFoundError: [Errno 2] No such file or directory
2020-08-15 05:52:45,190 autosynth [ERROR] > Synthesis failed
2020-08-15 05:52:45,190 autosynth [DEBUG] > Running: git clean -fdx
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 690, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 539, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 630, in _inner_main
).synthesize(synth_log_path / "sponge_log.log")
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/firebase_ml/synth.metadata', 'synth.py', '--', 'FirebaseML']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/8ae29869-354b-4046-bc0f-6c3bd871fde7/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
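The traceback shows synthtool's metadata tracker starting a watchdog observer on a path that does not exist (the log does not show which one), which is what raises the inotify `FileNotFoundError`. A hedged sketch of guarding such a watch is shown below; it uses the public `watchdog` API but is not synthtool's actual code, and the example path is only illustrative.
```python
# Hedged sketch, not synthtool's implementation: verify the watched directory
# exists before scheduling a watchdog observer, so inotify has a valid path.
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

def start_watch(directory: str) -> Observer:
    path = Path(directory)
    if not path.is_dir():
        # Fail with a clear message (or create the directory) instead of letting inotify raise.
        raise FileNotFoundError(f"cannot watch missing directory: {path}")
    observer = Observer()
    observer.schedule(FileSystemEventHandler(), str(path), recursive=True)
    observer.start()
    return observer

# Example usage:
# obs = start_watch("clients/firebase_ml")
# ... run generation ...
# obs.stop(); obs.join()
```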
|
1.0
|
Synthesis failed for FirebaseML - Hello! Autosynth couldn't regenerate FirebaseML. :broken_heart:
Here's the output from running `synth.py`:
```
2020-08-15 05:52:43,782 autosynth [INFO] > logs will be written to: /tmpfs/src/logs/elixir-google-api
2020-08-15 05:52:44,665 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2020-08-15 05:52:44,668 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2020-08-15 05:52:44,671 autosynth [DEBUG] > Running: git config user.email yoshi-automation@google.com
2020-08-15 05:52:44,674 autosynth [DEBUG] > Running: git config push.default simple
2020-08-15 05:52:44,676 autosynth [DEBUG] > Running: git branch -f autosynth-firebaseml
2020-08-15 05:52:44,679 autosynth [DEBUG] > Running: git checkout autosynth-firebaseml
Switched to branch 'autosynth-firebaseml'
2020-08-15 05:52:44,899 autosynth [INFO] > Running synthtool
2020-08-15 05:52:44,900 autosynth [INFO] > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/firebase_ml/synth.metadata', 'synth.py', '--']
2020-08-15 05:52:44,900 autosynth [DEBUG] > log_file_path: /tmpfs/src/logs/elixir-google-api/FirebaseML/sponge_log.log
2020-08-15 05:52:44,902 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata clients/firebase_ml/synth.metadata synth.py -- FirebaseML
2020-08-15 05:52:45,108 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py.
On branch autosynth-firebaseml
nothing to commit, working tree clean
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 93, in main
with synthtool.metadata.MetadataTrackerAndWriter(metadata):
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 237, in __enter__
self.observer.start()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 260, in start
emitter.start()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 110, in start
self.on_thread_start()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify.py", line 121, in on_thread_start
self._inotify = InotifyBuffer(path, self.watch.is_recursive)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_buffer.py", line 35, in __init__
self._inotify = Inotify(path, recursive)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 203, in __init__
self._add_watch(path, event_mask)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 412, in _add_watch
Inotify._raise_error()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 432, in _raise_error
raise OSError(err, os.strerror(err))
FileNotFoundError: [Errno 2] No such file or directory
2020-08-15 05:52:45,190 autosynth [ERROR] > Synthesis failed
2020-08-15 05:52:45,190 autosynth [DEBUG] > Running: git clean -fdx
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 690, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 539, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 630, in _inner_main
).synthesize(synth_log_path / "sponge_log.log")
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/firebase_ml/synth.metadata', 'synth.py', '--', 'FirebaseML']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/8ae29869-354b-4046-bc0f-6c3bd871fde7/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
|
non_process
|
synthesis failed for firebaseml hello autosynth couldn t regenerate firebaseml broken heart here s the output from running synth py autosynth logs will be written to tmpfs src logs elixir google api autosynth running git config global core excludesfile home kbuilder autosynth gitignore autosynth running git config user name yoshi automation autosynth running git config user email yoshi automation google com autosynth running git config push default simple autosynth running git branch f autosynth firebaseml autosynth running git checkout autosynth firebaseml switched to branch autosynth firebaseml autosynth running synthtool autosynth autosynth log file path tmpfs src logs elixir google api firebaseml sponge log log autosynth running tmpfs src github synthtool env bin m synthtool metadata clients firebase ml synth metadata synth py firebaseml synthtool executing home kbuilder cache synthtool elixir google api synth py on branch autosynth firebaseml nothing to commit working tree clean traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main with synthtool metadata metadatatrackerandwriter metadata file tmpfs src github synthtool synthtool metadata py line in enter self observer start file tmpfs src github synthtool env lib site packages watchdog observers api py line in start emitter start file tmpfs src github synthtool env lib site packages watchdog utils init py line in start self on thread start file tmpfs src github synthtool env lib site packages watchdog observers inotify py line in on thread start self inotify inotifybuffer path self watch is recursive file tmpfs src github synthtool env lib site packages watchdog observers inotify buffer py line in init self inotify inotify path recursive file tmpfs src github synthtool env lib site packages watchdog observers inotify c py line in init self add watch path event mask file tmpfs src github synthtool env lib site packages watchdog observers inotify c py line in add watch inotify raise error file tmpfs src github synthtool env lib site packages watchdog observers inotify c py line in raise error raise oserror err os strerror err filenotfounderror no such file or directory autosynth synthesis failed autosynth running git clean fdx traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize synth log path sponge log log file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home 
kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log
| 0
|
34,586
| 4,554,067,909
|
IssuesEvent
|
2016-09-13 08:07:36
|
geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
|
https://api.github.com/repos/geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
|
closed
|
APvJhCV6p12A0YcIL6RIApgKafdTMZbPfUPmNo2v5UKf34P45akV1oDi5mq9kAdZIhVJJvgbcFDHG7UkLHVRSBwOlDHLW1lz02spWfnNHiIT5am3hdqL5IJ24YZcAxoPpTvFsclBGv7D3YMS38JPtvquuPRfCuDeCauuJ4o4lPA=
|
design
|
czniGZiYDGqHTcTr7jqxE8fCIHzsGT1f1OeEA04tyJtSZMdCeIxDO6P3xDpKO5Iohdkm0r1qSYIaWprKKxq/4ph4gdkc82Wfb3JeiiVgDjoggozJnHZgY3/Kw36FYneJVwOmgHQtLUB7Q4pMYDFzmEqx0PIFkQgGvVZ2wwMZttv1ebkrsFY0eupNN2ZFH/Tq2rCTNzSUl/+J2dR+LrBcavmBFeYnkfQp8VTbqrZ7Iqty/RPRZYm9QdV5UNo2V+47SjWOy2t0i2nHz279eJ/30V/ODET3B6kuqkIuPVJSbTyzuMxMfoPoNjE5pIAV/wEvSRkr6W2KSa0slnJpVKilNJAghQWqAucJZly0dgxsOy1rOAjVC3Pb1IPG8C90DuVZnyVX0kD++O8JxudU1YHVoqUIqq0vQV3VcELIsg+stzzgQop98FrL4GFRdkf+z2L2zseGMxEceDlEJAPDnyR4cClYjWt3yxGiT88bNsaNSiWHdp/V4unzvTGAPE/BQyF2dUmvlyV0/mtI3KIutEJ5dLiNX7nuBIkzklphg5Dlb6TliGfjyY/ukILRvmQeSCU6404IsoeAMp6Z/M0J+0e8IOVgOFhk0+fio3HOqQYnqEVLHTOjrZ2NEYL8aaNOxcKJACCPE6Vt7D7WpDyI7fRitK06JQuJ7IPUl8MZwDJl1rw0YAab9eBFjpWTCwoLO4omzrcqpfBc0A+Q1o0XosdecJUpZgm2sFTKEpgKSB1zvY0HlxCF48ignL4Lsl8FzDEqOEUBleLE1x45YzloymfdY92DQmRPd4OR/ebkyh5mj9L/jS0gZRkfmiXy0Ptyy9OAUnoX/NtEsqjJwg9x9UOu0h5v/FcNuH23KkHj321zxrqL1ieKgF9sNL2DVKRRnwZ/tomZPRA7D7hEOMk3rJLtXUBA/lPZ3SiTLWpyZ/FscYAGvBOmnz/4aXL0OSQA93PQ4GYgqmwSavh4fnYHgSROk0DQu9J8gAjose7NvwbS+qWSDp1pjbsYyyca2Wq3hUXPDJDhL3nv55HY185onf06By4blGPDvbMH15KuPpBEiz2T1G4eITnXRukMpxvafw7qW6xn8Ee/kuf+IJuswtGuoLCrB61cyX9iPseHrlT7Ap+KfurCnMUP6mOJYcjvLIf2/iUz9DZxvVBZRJ4fjnGP9CnVOASoVRvBX7mGpQFZTFG86bfGdNrp19yOPZBVlAJyPwo+EYwhuV21wg+gmItKjg+1v3OOcq3DxtBXr5VhHrFdlGWYlAaddepNJ9W2E2ZSVi2MVi5wxQkVy7MqRcXAMVJhJM+5QmZtGj5ljA1uzvv2foUDNV05Q5XLpwiwpPrUiyX6LpcTnu+wXh3OLoUT7/RmjxfWcwjaSDVFNvBkeUywaaLtjrDWvi0zWLH1/s82d3+7Vk7qthkGV4HeMYmSB8RrMKUMiyReR2iZiu00M1qv63pfuTLtkPe6FzpHzHOdSkkLWbvmOdDjtVhz1vLJQp3S1Ww4t64prpeMkOeg1+6a1itHMHS0fAajXMVlwjLSiweOYyCbCUl5dSJkUDK3oz1/biEvc5LcfoXmkf9nvwjkCs2BwWrQ8yDNL5XGh259vDwvQX93rUGWZrn9+wBK76u7JmM33V2YbvQplRH6CgbIQoE7aVwEUT/belkslrqEIhPL6MVv4seNk/igCwTDt0cnzhgagsYvomPLDDOPO67WtyxA3fB4XwyCs/6hJl3NknbY7uZg5yYDRpna56c2ZilsWc0LZWK7DCVeoeS8/MV64GxwLjNJEwLugLApCnAXLIDMiRvnqOCByiTI5WJ+gF3BdvCPh6NEXWqhGlz4Tk2fvpH3z7ATir4TQXProsGjWxdxZpfmQjdEfbwyk9Axnf1qX4cN7q+EDZhbFojLJzx/snZ1gOwyF4nFcgWVKohDulHbB/brQe1acJDsKlQmAg==
|
1.0
|
APvJhCV6p12A0YcIL6RIApgKafdTMZbPfUPmNo2v5UKf34P45akV1oDi5mq9kAdZIhVJJvgbcFDHG7UkLHVRSBwOlDHLW1lz02spWfnNHiIT5am3hdqL5IJ24YZcAxoPpTvFsclBGv7D3YMS38JPtvquuPRfCuDeCauuJ4o4lPA= - czniGZiYDGqHTcTr7jqxE8fCIHzsGT1f1OeEA04tyJtSZMdCeIxDO6P3xDpKO5Iohdkm0r1qSYIaWprKKxq/4ph4gdkc82Wfb3JeiiVgDjoggozJnHZgY3/Kw36FYneJVwOmgHQtLUB7Q4pMYDFzmEqx0PIFkQgGvVZ2wwMZttv1ebkrsFY0eupNN2ZFH/Tq2rCTNzSUl/+J2dR+LrBcavmBFeYnkfQp8VTbqrZ7Iqty/RPRZYm9QdV5UNo2V+47SjWOy2t0i2nHz279eJ/30V/ODET3B6kuqkIuPVJSbTyzuMxMfoPoNjE5pIAV/wEvSRkr6W2KSa0slnJpVKilNJAghQWqAucJZly0dgxsOy1rOAjVC3Pb1IPG8C90DuVZnyVX0kD++O8JxudU1YHVoqUIqq0vQV3VcELIsg+stzzgQop98FrL4GFRdkf+z2L2zseGMxEceDlEJAPDnyR4cClYjWt3yxGiT88bNsaNSiWHdp/V4unzvTGAPE/BQyF2dUmvlyV0/mtI3KIutEJ5dLiNX7nuBIkzklphg5Dlb6TliGfjyY/ukILRvmQeSCU6404IsoeAMp6Z/M0J+0e8IOVgOFhk0+fio3HOqQYnqEVLHTOjrZ2NEYL8aaNOxcKJACCPE6Vt7D7WpDyI7fRitK06JQuJ7IPUl8MZwDJl1rw0YAab9eBFjpWTCwoLO4omzrcqpfBc0A+Q1o0XosdecJUpZgm2sFTKEpgKSB1zvY0HlxCF48ignL4Lsl8FzDEqOEUBleLE1x45YzloymfdY92DQmRPd4OR/ebkyh5mj9L/jS0gZRkfmiXy0Ptyy9OAUnoX/NtEsqjJwg9x9UOu0h5v/FcNuH23KkHj321zxrqL1ieKgF9sNL2DVKRRnwZ/tomZPRA7D7hEOMk3rJLtXUBA/lPZ3SiTLWpyZ/FscYAGvBOmnz/4aXL0OSQA93PQ4GYgqmwSavh4fnYHgSROk0DQu9J8gAjose7NvwbS+qWSDp1pjbsYyyca2Wq3hUXPDJDhL3nv55HY185onf06By4blGPDvbMH15KuPpBEiz2T1G4eITnXRukMpxvafw7qW6xn8Ee/kuf+IJuswtGuoLCrB61cyX9iPseHrlT7Ap+KfurCnMUP6mOJYcjvLIf2/iUz9DZxvVBZRJ4fjnGP9CnVOASoVRvBX7mGpQFZTFG86bfGdNrp19yOPZBVlAJyPwo+EYwhuV21wg+gmItKjg+1v3OOcq3DxtBXr5VhHrFdlGWYlAaddepNJ9W2E2ZSVi2MVi5wxQkVy7MqRcXAMVJhJM+5QmZtGj5ljA1uzvv2foUDNV05Q5XLpwiwpPrUiyX6LpcTnu+wXh3OLoUT7/RmjxfWcwjaSDVFNvBkeUywaaLtjrDWvi0zWLH1/s82d3+7Vk7qthkGV4HeMYmSB8RrMKUMiyReR2iZiu00M1qv63pfuTLtkPe6FzpHzHOdSkkLWbvmOdDjtVhz1vLJQp3S1Ww4t64prpeMkOeg1+6a1itHMHS0fAajXMVlwjLSiweOYyCbCUl5dSJkUDK3oz1/biEvc5LcfoXmkf9nvwjkCs2BwWrQ8yDNL5XGh259vDwvQX93rUGWZrn9+wBK76u7JmM33V2YbvQplRH6CgbIQoE7aVwEUT/belkslrqEIhPL6MVv4seNk/igCwTDt0cnzhgagsYvomPLDDOPO67WtyxA3fB4XwyCs/6hJl3NknbY7uZg5yYDRpna56c2ZilsWc0LZWK7DCVeoeS8/MV64GxwLjNJEwLugLApCnAXLIDMiRvnqOCByiTI5WJ+gF3BdvCPh6NEXWqhGlz4Tk2fvpH3z7ATir4TQXProsGjWxdxZpfmQjdEfbwyk9Axnf1qX4cN7q+EDZhbFojLJzx/snZ1gOwyF4nFcgWVKohDulHbB/brQe1acJDsKlQmAg==
|
non_process
|
fscyagvbomnz kuf gmitkjg edzhbfojljzx
| 0
|
238,041
| 7,769,536,400
|
IssuesEvent
|
2018-06-04 04:43:03
|
Jeremyyiu/BatteryManager
|
https://api.github.com/repos/Jeremyyiu/BatteryManager
|
opened
|
GPS move marker via touch feature
|
enhancement medium priority
|
On a held press at a certain location, without touching the marker, the marker should automatically move to that pressed location.
|
1.0
|
GPS move marker via touch feature - On a held press at a certain location, without touching the marker, the marker should automatically move to that pressed location.
|
non_process
|
gps move marker via touch feature on held press on a certain location not touching the marker the marker should automatically go to that pressed location
| 0
|
225,538
| 7,482,443,459
|
IssuesEvent
|
2018-04-05 01:25:31
|
Cyberjusticelab/JusticeAI
|
https://api.github.com/repos/Cyberjusticelab/JusticeAI
|
closed
|
Create a ReadTheDocs Page for Procezeus
|
Points (5) Priority (Medium) Risk (Low)
|
*Description*
As an open source developer, I would like the system to have clear documentation found in a ReadTheDocs web page.
*Scope of Work*
- [ ] Create a readthedocs page for the system
- [ ] Link READMES to the web page
*Demo requirement*
1. The user does x
2. The system does y
n. Rest of a use case of a flow chart
*Acceptance Criteria*
- The open source developers are able to see our readthedocs page
|
1.0
|
Create a ReadTheDocs Page for Procezeus - *Description*
As an open source developer, I would like the system to have clear documentation found in a ReadTheDocs web page.
*Scope of Work*
- [ ] Create a readthedocs page for the system
- [ ] Link READMES to the web page
*Demo requirement*
1. The user does x
2. The system does y
n. Rest of a use case of a flow chart
*Acceptance Criteria*
- The open source developers are able to see our readthedocs page
|
non_process
|
create a readthedocs page for procezeus description as an open source developer i would like the system to have clear documentation found in a readthedocs web page scope of work create a readthedocs page for the system link readmes to the web page demo requirement the user does x the system does y n rest of a use case of a flow chart acceptance criteria the open source developers are able to see our readthedocs page
| 0
|
65,618
| 3,236,656,816
|
IssuesEvent
|
2015-10-14 07:17:22
|
awesome-raccoons/gqt
|
https://api.github.com/repos/awesome-raccoons/gqt
|
closed
|
Y axis should point upward
|
bug high priority
|
The y axis currently points downwards, so higher values are further down on the screen. This is the opposite of most GIS software.
|
1.0
|
Y axis should point upward - The y axis currently points downwards, so higher values are further down on the screen. This is the opposite of most GIS software.
|
non_process
|
y axis should point upward the y axis currently points downwards so higher values are further down on the screen this is the opposite of most gis software
| 0
|
255,033
| 21,895,542,124
|
IssuesEvent
|
2022-05-20 08:15:50
|
vaadin/testbench
|
https://api.github.com/repos/vaadin/testbench
|
closed
|
Rename UI Unit methods
|
UITest
|
`search` should be `$` so it would match the naming used in elementQuery.
The current `$` should be named `wrap`
|
1.0
|
Rename UI Unit methods - `search` should be `$` so it would match the naming used in elementQuery.
The current `$` should be named `wrap`
|
non_process
|
rename ui unit methods search should be so it would match the naming used in elementquery the current should be named wrap
| 0
|
21,640
| 30,055,701,418
|
IssuesEvent
|
2023-06-28 06:35:16
|
sap-tutorials/sap-build-process-automation
|
https://api.github.com/repos/sap-tutorials/sap-build-process-automation
|
closed
|
Run the Sales Order Business Process
|
tutorial:spa-academy-run-salesorderprocess
|
Tutorials: https://developers.sap.com/tutorials/spa-academy-run-salesorderprocess.html
--------------------------
After step 4 there is no info on where to find the inbox
|
1.0
|
Run the Sales Order Business Process - Tutorials: https://developers.sap.com/tutorials/spa-academy-run-salesorderprocess.html
--------------------------
After step 4 there is no info on where to find the inbox
|
process
|
run the sales order business process tutorials after step there is no info on where to find inbox
| 1
|
15,522
| 19,703,268,709
|
IssuesEvent
|
2022-01-12 18:52:26
|
googleapis/java-dns
|
https://api.github.com/repos/googleapis/java-dns
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'dns' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'dns' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname dns invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
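The record above describes a metadata lint failure: two fields in `.repo-metadata.json` fall outside their allowed values. Purely as an illustration, a check of that kind could look like the Python sketch below; the field names come from the issue text, while the allowed-value sets and the `lint_repo_metadata` helper are made-up placeholders, not the real rules used by the go/github-automation tooling.
```python
import json

# Placeholder rule sets -- the real allowed values are not quoted in the issue.
ALLOWED_RELEASE_LEVELS = {"stable", "preview"}
KNOWN_API_SHORTNAMES = {"clouddns"}  # hypothetical registry; 'dns' would be rejected

def lint_repo_metadata(path=".repo-metadata.json"):
    """Return a list of problems found in the metadata file."""
    with open(path) as fh:
        metadata = json.load(fh)

    problems = []
    if metadata.get("release_level") not in ALLOWED_RELEASE_LEVELS:
        problems.append("release_level must be equal to one of the allowed values")
    if metadata.get("api_shortname") not in KNOWN_API_SHORTNAMES:
        problems.append(f"api_shortname {metadata.get('api_shortname')!r} invalid")
    return problems

if __name__ == "__main__":
    for problem in lint_repo_metadata():
        print("*", problem)
```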
|
349,197
| 24,937,521,671
|
IssuesEvent
|
2022-10-31 16:12:29
|
slim-codes/todo
|
https://api.github.com/repos/slim-codes/todo
|
closed
|
DOCS: Improve Contributing.md file
|
documentation good first issue hacktoberfest hacktoberfest-accepted up-for-grabs
|
### Description
Presently, the instructions for contributing to this project are vague and short.
- Improve the contributing.md file
### Screenshots
_No response_
|
1.0
|
DOCS: Improve Contributing.md file - ### Description
Presently, the instructions for contributing to this project are vague and short.
- Improve the contributing.md file
### Screenshots
_No response_
|
non_process
|
docs improve contributing md file description presently the instructions for contributing to this project are vague and short improve the contributing md file screenshots no response
| 0
|
22,684
| 31,987,810,739
|
IssuesEvent
|
2023-09-21 01:47:50
|
prusa3d/Prusa-Firmware
|
https://api.github.com/repos/prusa3d/Prusa-Firmware
|
closed
|
MK3 3.6.0 Print stall after purge line using MMU2
|
bug processing stale-issue
|
I am having a problem with the gcode after flashing. Now when my printer sees the second t-code it just stalls. The temps are all good and it will run all the way through the purge line but then it stalls at that spot. If I am using Pronterface or the Slic3r PE software to run my printer the terminal window shows "duplicit T-code ignored". If I open the code in an editor I can comment out the second T-code and the program will run correctly.
If I am running from the SD card I still get the stall. Heatbed and extruder will stay heated but it will not go past the purge line.
I have flashed and reflashed, downloaded and redownloaded, and tried going back to the old firmware but still have this issue.
I would like to be able to print without having to hand edit my gcode file every time. :)
firmware = 3.6.0-2069,
MMU2 FW = 1.0.5.297
Slic3r PE 1.41.3
This is the start of my code:
`
; generated by Slic3r Prusa Edition 1.41.3+win64 on 2019-03-18 at 19:11:37
;
; external perimeters extrusion width = 0.45mm
; perimeters extrusion width = 0.45mm
; infill extrusion width = 0.45mm
; solid infill extrusion width = 0.45mm
; top infill extrusion width = 0.40mm
; first layer extrusion width = 0.42mm
; external perimeters extrusion width = 0.45mm
; perimeters extrusion width = 0.45mm
; infill extrusion width = 0.45mm
; solid infill extrusion width = 0.45mm
; top infill extrusion width = 0.40mm
; first layer extrusion width = 0.42mm
; external perimeters extrusion width = 0.45mm
; perimeters extrusion width = 0.45mm
; infill extrusion width = 0.45mm
; solid infill extrusion width = 0.45mm
; top infill extrusion width = 0.40mm
; first layer extrusion width = 0.42mm
M73 P0 R497
M73 Q0 S501
M201 X1000 Y1000 Z1000 E8000 ; sets maximum accelerations, mm/sec^2
M203 X200 Y200 Z12 E120 ; sets maximum feedrates, mm/sec
M204 P1250 R1250 T1250 ; sets acceleration (P, T) and retract acceleration (R), mm/sec^2
M205 X8.00 Y8.00 Z0.40 E1.50 ; sets the jerk limits, mm/sec
M205 S0 T0 ; sets the minimum extruding and travel feed rate, mm/sec
M107
M107
M115 U3.5.1 ; tell printer latest fw version
M83 ; extruder relative mode
M104 S215 ; set extruder temp
M140 S60 ; set bed temp
M190 S60 ; wait for bed temp
M109 S215 ; wait for extruder temp
G28 W ; home all without mesh bed level
G80 ; mesh bed leveling
G21 ; set units to millimeters
; Send the filament type to the MMU2.0 unit.
; E stands for extruder number, F stands for filament type (0: default; 1:flex; 2: PVA)
M403 E0 F0
M403 E1 F0
M403 E2 F0
M403 E3 F0
M403 E4 F0
;go outside print area
G1 Y-3.0 F1000.0
G1 Z0.4 F1000.0
; select extruder
T0
; initial load
G1 X55.0 E32.0 F1073.0
M73 Q0 S500
M73 P0 R497
G1 X5.0 E32.0 F1800.0
G1 X55.0 E8.0 F2000.0
G1 Z0.3 F1000.0
G92 E0.0
G1 X240.0 E25.0 F2200.0
G1 Y-2.0 F1000.0
G1 X55.0 E25 F1400.0
G1 Z0.20 F1000.0
G1 X5.0 E4.0 F1000.0
G92 E0.0
M221 S95
G90 ; use absolute coordinates
M83 ; use relative distances for extrusion
G92 E0.0
G21 ; set units to millimeters
G90 ; use absolute coordinates
M83 ; use relative distances for extrusion
T0 ******this is where it stalls always, if I remove it it works fine*****************
M900 K30; Filament gcode
G1 Z0.200 F10800.000
;BEFORE_LAYER_CHANGE
G92 E0.0
;0.2
`
|
1.0
|
MK3 3.6.0 Print stall after purge line using MMU2 - I am having a problem with the gcode after flashing. Now when my printer sees the second t-code it just stalls. The temps are all good and it will run all the way through the purge line but then it stalls at that spot. If I am using Pronterface or the Slic3r PE software to run my printer the terminal window shows "duplicit T-code ignored". If I open the code in an editor I can comment out the second T-code and the program will run correctly.
If I am running from the SD card I still get the stall. Heatbed and extruder will stay heated but it will not go past the purge line.
I have flashed and reflashed, downloaded and redownloaded, and tried going back to the old firmware but still have this issue.
I would like to be able to print without having to hand edit my gcode file every time. :)
firmware = 3.6.0-2069,
MMU2 FW = 1.0.5.297
Slic3r PE 1.41.3
This is the start of my code:
`
; generated by Slic3r Prusa Edition 1.41.3+win64 on 2019-03-18 at 19:11:37
;
; external perimeters extrusion width = 0.45mm
; perimeters extrusion width = 0.45mm
; infill extrusion width = 0.45mm
; solid infill extrusion width = 0.45mm
; top infill extrusion width = 0.40mm
; first layer extrusion width = 0.42mm
; external perimeters extrusion width = 0.45mm
; perimeters extrusion width = 0.45mm
; infill extrusion width = 0.45mm
; solid infill extrusion width = 0.45mm
; top infill extrusion width = 0.40mm
; first layer extrusion width = 0.42mm
; external perimeters extrusion width = 0.45mm
; perimeters extrusion width = 0.45mm
; infill extrusion width = 0.45mm
; solid infill extrusion width = 0.45mm
; top infill extrusion width = 0.40mm
; first layer extrusion width = 0.42mm
M73 P0 R497
M73 Q0 S501
M201 X1000 Y1000 Z1000 E8000 ; sets maximum accelerations, mm/sec^2
M203 X200 Y200 Z12 E120 ; sets maximum feedrates, mm/sec
M204 P1250 R1250 T1250 ; sets acceleration (P, T) and retract acceleration (R), mm/sec^2
M205 X8.00 Y8.00 Z0.40 E1.50 ; sets the jerk limits, mm/sec
M205 S0 T0 ; sets the minimum extruding and travel feed rate, mm/sec
M107
M107
M115 U3.5.1 ; tell printer latest fw version
M83 ; extruder relative mode
M104 S215 ; set extruder temp
M140 S60 ; set bed temp
M190 S60 ; wait for bed temp
M109 S215 ; wait for extruder temp
G28 W ; home all without mesh bed level
G80 ; mesh bed leveling
G21 ; set units to millimeters
; Send the filament type to the MMU2.0 unit.
; E stands for extruder number, F stands for filament type (0: default; 1:flex; 2: PVA)
M403 E0 F0
M403 E1 F0
M403 E2 F0
M403 E3 F0
M403 E4 F0
;go outside print area
G1 Y-3.0 F1000.0
G1 Z0.4 F1000.0
; select extruder
T0
; initial load
G1 X55.0 E32.0 F1073.0
M73 Q0 S500
M73 P0 R497
G1 X5.0 E32.0 F1800.0
G1 X55.0 E8.0 F2000.0
G1 Z0.3 F1000.0
G92 E0.0
G1 X240.0 E25.0 F2200.0
G1 Y-2.0 F1000.0
G1 X55.0 E25 F1400.0
G1 Z0.20 F1000.0
G1 X5.0 E4.0 F1000.0
G92 E0.0
M221 S95
G90 ; use absolute coordinates
M83 ; use relative distances for extrusion
G92 E0.0
G21 ; set units to millimeters
G90 ; use absolute coordinates
M83 ; use relative distances for extrusion
T0 ******this is where it stalls always, if I remove it it works fine*****************
M900 K30; Filament gcode
G1 Z0.200 F10800.000
;BEFORE_LAYER_CHANGE
G92 E0.0
;0.2
`
|
process
|
print stall after purge line using i am having a problem with the gcode after flashing now when my printer sees the second t code it just stalls the temps are all good and it will run all the way through the purge line but then it stalls at that spot if i am using pronterface or the pe software to run my printer the terminal window shows duplicit t code ignored if i open the code in an editor i can comment out the second t code and the program will run correctly if i am running from the sd card i still get the stall heatbed and extruder will stay heated but it will not go past the purge line i have flashed and reflashed downloaded and redownloaded and tried going back to the old firmware but still have this issue i would like to be able to print without having to hand edit my gcode file everytime firmware fw pe this is the start of my code generated by prusa edition on at external perimeters extrusion width perimeters extrusion width infill extrusion width solid infill extrusion width top infill extrusion width first layer extrusion width external perimeters extrusion width perimeters extrusion width infill extrusion width solid infill extrusion width top infill extrusion width first layer extrusion width external perimeters extrusion width perimeters extrusion width infill extrusion width solid infill extrusion width top infill extrusion width first layer extrusion width sets maximum accelerations mm sec sets maximum feedrates mm sec sets acceleration p t and retract acceleration r mm sec sets the jerk limits mm sec sets the minimum extruding and travel feed rate mm sec tell printer latest fw version extruder relative mode set extruder temp set bed temp wait for bed temp wait for extruder temp w home all without mesh bed level mesh bed leveling set units to millimeters send the filament type to the unit e stands for extruder number f stands for filament type default flex pva go outside print area y select extruder initial load y use absolute coordinates use relative distances for extrusion set units to millimeters use absolute coordinates use relative distances for extrusion this is where it stalls always if i remove it it works fine filament gcode before layer change
| 1
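The reporter above works around the stall by hand-commenting the repeated `T0` line. A minimal Python sketch of that same workaround is shown below; it only automates the edit the reporter already does manually (the script name and usage line are illustrative), and it is not a fix for the firmware behaviour itself.
```python
import re
import sys

TOOL_CODE = re.compile(r"^\s*T\d+\b")  # matches tool-change lines such as "T0"

def comment_out_duplicate_tool_codes(lines):
    """Comment out any tool-change line that repeats the previously selected tool."""
    last_tool = None
    out = []
    for line in lines:
        match = TOOL_CODE.match(line)
        if match:
            tool = match.group(0).strip()
            if tool == last_tool:
                # Same tool selected again -> neutralise the line instead of hand editing.
                out.append("; removed duplicate tool change: " + line)
                continue
            last_tool = tool
        out.append(line)
    return out

if __name__ == "__main__":
    # Usage sketch: python dedupe_tcodes.py input.gcode > output.gcode
    with open(sys.argv[1]) as fh:
        sys.stdout.writelines(comment_out_duplicate_tool_codes(fh.readlines()))
```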
|
34,974
| 14,548,474,535
|
IssuesEvent
|
2020-12-16 01:21:30
|
MicrosoftDocs/dynamics-365-customer-engagement
|
https://api.github.com/repos/MicrosoftDocs/dynamics-365-customer-engagement
|
closed
|
Add employee/internal authentication for Portals
|
Pri1 assigned-to-author dynamics-365-customerservice/svc portals portals-auth
|
Would like to see the use of Azure AD for employee/internal authentication called out. There's a page for [Azure AD B2C](https://docs.microsoft.com/en-us/dynamics365/customer-engagement/portals/azure-ad-b2c) which mentions this foundational capability, but it's worth calling out here as well for internal users.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: da7d593d-5f2e-303e-1938-d5683c17ba98
* Version Independent ID: 12409262-15c8-dce1-9148-2431e4612235
* Content: [Configure portal authentication in Dynamics 365 for Customer Engagement](https://docs.microsoft.com/en-us/dynamics365/customer-engagement/portals/configure-portal-authentication)
* Content Source: [ce/portals/configure-portal-authentication.md](https://github.com/MicrosoftDocs/dynamics-365-customer-engagement/blob/master/ce/portals/configure-portal-authentication.md)
* Service: **dynamics-365-customerservice**
* GitHub Login: @sbmjais
* Microsoft Alias: **shjais**
|
1.0
|
Add employee/internal authentication for Portals -
Would like to see the use of Azure AD for employee/internal authentication called out. There's a page for [Azure AD B2C](https://docs.microsoft.com/en-us/dynamics365/customer-engagement/portals/azure-ad-b2c) which mentions this foundational capability, but it's worth calling out here as well for internal users.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: da7d593d-5f2e-303e-1938-d5683c17ba98
* Version Independent ID: 12409262-15c8-dce1-9148-2431e4612235
* Content: [Configure portal authentication in Dynamics 365 for Customer Engagement](https://docs.microsoft.com/en-us/dynamics365/customer-engagement/portals/configure-portal-authentication)
* Content Source: [ce/portals/configure-portal-authentication.md](https://github.com/MicrosoftDocs/dynamics-365-customer-engagement/blob/master/ce/portals/configure-portal-authentication.md)
* Service: **dynamics-365-customerservice**
* GitHub Login: @sbmjais
* Microsoft Alias: **shjais**
|
non_process
|
add employee internal authentication for portals would like to see the use of azure ad for employee internal authentication called out there s a page for which mentions this foundational capability but it s worth calling out here as well for internal users document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service dynamics customerservice github login sbmjais microsoft alias shjais
| 0
|
6,330
| 9,369,871,319
|
IssuesEvent
|
2019-04-03 12:14:01
|
brandon1roadgears/Interpreter-of-programming-language-of-Turing-Machine
|
https://api.github.com/repos/brandon1roadgears/Interpreter-of-programming-language-of-Turing-Machine
|
opened
|
Implement input of the source string.
|
C++ Work in process
|
* #### The part of the program that will let the user enter the source string needs to be written.
* #### This piece of code will be sufficient.
#include <iostream> // needed for std::cin
#include <string>   // needed for std::string
using namespace std;
int main()
{
    string stroka;  // the source string entered by the user
    cin >> stroka;  // read the source string from standard input
    return 0;
}
|
1.0
|
Implement input of the source string. - * #### The part of the program that will let the user enter the source string needs to be written.
* #### This piece of code will be sufficient.
#include <iostream> // needed for std::cin
#include <string>   // needed for std::string
using namespace std;
int main()
{
    string stroka;  // the source string entered by the user
    cin >> stroka;  // read the source string from standard input
    return 0;
}
|
process
|
implement input of the source string the part of the program that will let the user enter the source string needs to be written this piece of code will be sufficient int main string stroka cin stroka return
| 1
|
34,167
| 16,459,128,877
|
IssuesEvent
|
2021-05-21 16:15:10
|
luntergroup/octopus
|
https://api.github.com/repos/luntergroup/octopus
|
closed
|
DP very low compared to other callers
|
performance question
|
Hi, thanks for your software.
I have multiple samples and I did a population calling. Example of commands for 2 samples:
```
octopus --threads 1 -R ref.fasta -I sample1.sorted.bam -o sample1.bcf
octopus --threads 1 -R ref.fasta -I sample2.sorted.bam -o sample2.bcf
```
`octopus -R ref.fasta -I sample1.sorted.bam sample2.sorted.bam --disable-denovo-variant-discovery -c sample1.bcf sample2.bcf -o octopus.joint.bcf`
The issue is that the DP for each sample is very low (between 0 and 2), which is not the case with freebayes (don't pay attention to the genotype and number of samples -> this is not the same order and these are subsampled to illustrate)
**Example Otopus result:**
```
scf3 82264 . G A 1785.37 PASS AC=31;AN=158;DP=75;MQ=60;NS=74 GT:GQ:DP:MQ:PS:PQ:FT 1|0:15:1:60:82264:100:PASS 0
|0:15:1:60:82264:100:PASS 1|0:7:0:0:82264:100:AD,AFB,DP,ADP 0|0:15:1:60:82264:100:PASS 1|0:15:1:60:82264:100:PASS
```
**Example Freebayes result**
`scf3 82264 . G A 1492.88 . AB=0.486111;ABP=3.37221;AC=34;AF=0.22973;AN=148;AO=111;CIGAR=1X;DP=526;DPB=526;DPRA=1.07105;EPP=244.044;EPPR=904.171;GTI=1;LEN=1;MEANALT=1;MQM=60;MQMR=60;NS=74;NUMALT=1;ODDS=0.356749;PAIRED=0;PAIREDR=0;PAO=0;PQA=0;PQR=0;PRO=0;QA=3983;QR=15177;RO=415;RPL=0;RPP=244.044;RPPR=904.171;RPR=111;RUN=1;SAF=0;SAP=244.044;SAR=111;SRF=0;SRP=904.171;SRR=415;TYPE=snp GT:DP:AD:RO:QR:AO:QA:GL 0/0:17:17,0:17:601:0:0:0,-5.11751,-54.416 0/1:10:6,4:6:222:4:148:-10.6717,0,-17.3278 0/0:15:15,0:15:545:0:0:0,-4.51545,-49.3856
`
Am I doing something wrong ?
Thanks for your help.
|
True
|
DP very low compared to other callers - Hi, thanks for your software.
I have multiple samples and I did a population calling. Example of commands for 2 samples:
```
octopus --threads 1 -R ref.fasta -I sample1.sorted.bam -o sample1.bcf
octopus --threads 1 -R ref.fasta -I sample2.sorted.bam -o sample2.bcf
```
`octopus -R ref.fasta -I sample1.sorted.bam sample2.sorted.bam --disable-denovo-variant-discovery -c sample1.bcf sample2.bcf -o octopus.joint.bcf`
The issue is that the DP for each sample is very low (between 0 and 2), which is not the case with freebayes (don't pay attention to the genotype and number of samples -> this is not the same order and these are subsampled to illustrate)
**Example Otopus result:**
```
scf3 82264 . G A 1785.37 PASS AC=31;AN=158;DP=75;MQ=60;NS=74 GT:GQ:DP:MQ:PS:PQ:FT 1|0:15:1:60:82264:100:PASS 0
|0:15:1:60:82264:100:PASS 1|0:7:0:0:82264:100:AD,AFB,DP,ADP 0|0:15:1:60:82264:100:PASS 1|0:15:1:60:82264:100:PASS
```
**Example Freebayes result**
`scf3 82264 . G A 1492.88 . AB=0.486111;ABP=3.37221;AC=34;AF=0.22973;AN=148;AO=111;CIGAR=1X;DP=526;DPB=526;DPRA=1.07105;EPP=244.044;EPPR=904.171;GTI=1;LEN=1;MEANALT=1;MQM=60;MQMR=60;NS=74;NUMALT=1;ODDS=0.356749;PAIRED=0;PAIREDR=0;PAO=0;PQA=0;PQR=0;PRO=0;QA=3983;QR=15177;RO=415;RPL=0;RPP=244.044;RPPR=904.171;RPR=111;RUN=1;SAF=0;SAP=244.044;SAR=111;SRF=0;SRP=904.171;SRR=415;TYPE=snp GT:DP:AD:RO:QR:AO:QA:GL 0/0:17:17,0:17:601:0:0:0,-5.11751,-54.416 0/1:10:6,4:6:222:4:148:-10.6717,0,-17.3278 0/0:15:15,0:15:545:0:0:0,-4.51545,-49.3856
`
Am I doing something wrong ?
Thanks for your help.
|
non_process
|
dp very low compared to other callers hi thanks for your software i have multiples samples and i did a population calling example of commands for samples octopus threads r ref fasta i sorted bam o bcf octopus threads r ref fasta i sorted bam o bcf octopus r ref fasta i sorted bam sorted bam disable denovo variant discovery c bcf bcf o octopus joint bcf the issue is that all the dp for each samples are very low between to which is not the case with freebayes don t pay attention to the genotype and number of samples this is not the same order and these are subsample to illustrate example otopus result g a pass ac an dp mq ns gt gq dp mq ps pq ft pass pass ad afb dp adp pass pass example freebayes result g a ab abp ac af an ao cigar dp dpb dpra epp eppr gti len meanalt mqm mqmr ns numalt odds paired pairedr pao pqa pqr pro qa qr ro rpl rpp rppr rpr run saf sap sar srf srp srr type snp gt dp ad ro qr ao qa gl am i doing something wrong thanks fro your help
| 0
|
6,936
| 10,101,637,963
|
IssuesEvent
|
2019-07-29 09:13:04
|
CurtinFRC/ModularVisionTracking
|
https://api.github.com/repos/CurtinFRC/ModularVisionTracking
|
opened
|
Add a square tracking function
|
Processes Threading enhancement visionMap
|
Adds a square tracking function to the vision map, so you can easily select it if the game requires square object tracking
|
1.0
|
Add a square tracking function - Adds a square tracking function to the vision map, so you can easily select it if the game requires square object tracking
|
process
|
add a square tracking function adds a square tracking function to the vision map so you can easily select it if the game requires square object tracking
| 1
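The record above only states that a square tracking function should be selectable from the vision map; the project's actual API is not shown. Purely as an illustration of the underlying idea, a common contour-based way to detect square objects (assuming OpenCV 4.x and the `cv2` package, with made-up parameter values) looks roughly like this:
```python
import cv2

def find_squares(frame, min_area=500.0):
    """Return contours in `frame` that look like squares (4 corners, roughly equal sides)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    squares = []
    for contour in contours:
        perimeter = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)
        if len(approx) == 4 and cv2.contourArea(approx) > min_area and cv2.isContourConvex(approx):
            x, y, w, h = cv2.boundingRect(approx)
            if 0.8 <= w / float(h) <= 1.2:  # roughly square aspect ratio
                squares.append(approx)
    return squares
```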
|
27,544
| 12,627,636,770
|
IssuesEvent
|
2020-06-14 22:35:53
|
Ryujinx/Ryujinx-Games-List
|
https://api.github.com/repos/Ryujinx/Ryujinx-Games-List
|
opened
|
Dragon Quest XI S: Echoes of an Elusive Age - Definitive Edition
|
UE4 crash services status-nothing
|
## Dragon Quest XI S: Echoes of an Elusive Age - Definitive Edition
#### Current on `master` : 1.0.4697
Crashes on launch.
```
Emulation CurrentDomain_UnhandledException: Unhandled exception caught: Ryujinx.HLE.Exceptions.ServiceNotImplementedException: Ryujinx.HLE.HOS.Services.Nim.IShopServiceAccessServerInterface: 0
```
#### Outstanding Issues:
https://github.com/Ryujinx/Ryujinx/issues/1084
https://github.com/Ryujinx/Ryujinx/issues/1282
#### Log file :
[DQXISEoaEADE.zip](https://github.com/Ryujinx/Ryujinx-Games-List/files/4777323/DQXISEoaEADE.zip)
|
1.0
|
Dragon Quest XI S: Echoes of an Elusive Age - Definitive Edition - ## Dragon Quest XI S: Echoes of an Elusive Age - Definitive Edition
#### Current on `master` : 1.0.4697
Crashes on launch.
```
Emulation CurrentDomain_UnhandledException: Unhandled exception caught: Ryujinx.HLE.Exceptions.ServiceNotImplementedException: Ryujinx.HLE.HOS.Services.Nim.IShopServiceAccessServerInterface: 0
```
#### Outstanding Issues:
https://github.com/Ryujinx/Ryujinx/issues/1084
https://github.com/Ryujinx/Ryujinx/issues/1282
#### Log file :
[DQXISEoaEADE.zip](https://github.com/Ryujinx/Ryujinx-Games-List/files/4777323/DQXISEoaEADE.zip)
|
non_process
|
dragon quest xi s echoes of an elusive age definitive edition dragon quest xi s echoes of an elusive age definitive edition current on master crashes on launch emulation currentdomain unhandledexception unhandled exception caught ryujinx hle exceptions servicenotimplementedexception ryujinx hle hos services nim ishopserviceaccessserverinterface outstanding issues log file
| 0
|
6,283
| 9,260,599,533
|
IssuesEvent
|
2019-03-18 06:25:49
|
kiwicom/orbit-components
|
https://api.github.com/repos/kiwicom/orbit-components
|
closed
|
Create <DestinationCard /> component
|
Processing
|
## Description
- DestinationCard is a component which is used on the landing page and also on the search page.
## Visual style

Zeplin: https://zpl.io/2jPQo6W
### Interactions
- on `hover`, a dark overlay is shown with additional information about the flight
- animations are the same as on the current DestinationCard on the Kiwi.com landing page, but with the Orbit duration token.
### Additional information
- the background image always has to be stretched
## Functional specs
- `width` is `100%`
- `height` is set up randomly from the range of `min` and `max` height
**Default**
- for `city name` is used `title1` heading
- for `price` is used `title3` heading
- for `time of stay` is used `title4` heading
**Hover**
- for `departure city` is used `title3` heading
- for `arrival city` is used `title1` heading
- for `arrival country` is used `title4` heading
- for the bottom section is used `small` text with `bold` variant
*All the texts are in `inversed style`.*
## Storybook chapters
- TBD later
|
1.0
|
Create <DestinationCard /> component - ## Description
- DestinationCard is a component which is used on the landing page and also on the search page.
## Visual style

Zeplin: https://zpl.io/2jPQo6W
### Interactions
- on `hover`, a dark overlay is shown with additional information about the flight
- animations are the same as on the current DestinationCard on the Kiwi.com landing page, but with the Orbit duration token.
### Additional information
- the background image always has to be stretched
## Functional specs
- `width` is `100%`
- `height` is set up randomly from the range of `min` and `max` height
**Default**
- for `city name` is used `title1` heading
- for `price` is used `title3` heading
- for `time of stay` is used `title4` heading
**Hover**
- for `departure city` is used `title3` heading
- for `arrival city` is used `title1` heading
- for `arrival country` is used `title4` heading
- for the bottom section is used `small` text with `bold` variant
*All the texts are in `inversed style`.*
## Storybook chapters
- TBD later
|
process
|
create component description destinationcard is a component which is used on the landing page and also on the search page visual style zeplin interactions on hover is showed a dark overlay with additional information about the flight animations are the same as on current destinationcard on the kiwi com landing page but with orbit duration token additional information the background image have to be always stretched functional specs width is height is set up randomly from the range of min and max height default for city name is used heading for price is used heading for time of stay is used heading hover for departure city is used heading for arrival city is used heading for arrival country is used heading for the bottom section is used small text with bold variant all the texts are in inversed style storybook chapters tbd later
| 1
|
265,067
| 20,068,753,179
|
IssuesEvent
|
2022-02-04 02:02:28
|
byadav1/RentMe
|
https://api.github.com/repos/byadav1/RentMe
|
opened
|
Establish wiki page
|
documentation
|
compile the initial markups, product backlog, and entity-relationship diagram into a navigable wiki page (as a pdf)
|
1.0
|
Establish wiki page - compile the initial markups, product backlog, and entity-relationship diagram into a navigable wiki page (as a pdf)
|
non_process
|
establish wiki page compile the initial markups product backlog and entity relationship diagram into a navigable wiki page as a pdf
| 0
|
72,734
| 7,310,257,644
|
IssuesEvent
|
2018-02-28 14:31:23
|
ZerBea/hcxdumptool_bleeding_testing
|
https://api.github.com/repos/ZerBea/hcxdumptool_bleeding_testing
|
closed
|
No capture results on Gentoo/4.15.14 with ALFA AWUS036NH
|
testing
|
Moved from https://github.com/ZerBea/hcxdumptool_bleeding_testing/commit/3ab20eda8503c8e64d9a8ffa4839f311b1802aab
@ZerBea: You may wanna mark these with "Testing" labels since it's all about testing at this point.
|
1.0
|
No capture results on Gentoo/4.15.14 with ALFA AWUS036NH - Moved from https://github.com/ZerBea/hcxdumptool_bleeding_testing/commit/3ab20eda8503c8e64d9a8ffa4839f311b1802aab
@ZerBea: You may wanna mark these with "Testing" labels since it's all about testing at this point.
|
non_process
|
no capture results on gentoo with alfa moved from zerbea you may wanna mark these with testing labels since it s all about testing at this point
| 0
|
22,095
| 30,615,163,569
|
IssuesEvent
|
2023-07-24 02:00:07
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Mon, 24 Jul 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
There is no result
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Mon, 24 Jul 23 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
There is no result
## Keyword: raw image
There is no result
|
process
|
new submissions for mon jul keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp there is no result keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw there is no result keyword raw image there is no result
| 1
|
21,887
| 30,339,204,227
|
IssuesEvent
|
2023-07-11 11:38:31
|
SpikeInterface/spikeinterface
|
https://api.github.com/repos/SpikeInterface/spikeinterface
|
opened
|
Improve `UnsignedToSignedRecording` with view
|
preprocessing
|
On https://github.com/SpikeInterface/spikeinterface/pull/1707 @samuelgarcia [mentioned](https://github.com/SpikeInterface/spikeinterface/pull/1707#issuecomment-1609309591) that this implementation could be improved by using a view.
I am opening an issue here so we can keep our finger on this. @samuelgarcia, when you have time, can you outline how that would work?
|
1.0
|
Improve `UnsignedToSignedRecording` with view - On https://github.com/SpikeInterface/spikeinterface/pull/1707 @samuelgarcia [mentioned](https://github.com/SpikeInterface/spikeinterface/pull/1707#issuecomment-1609309591) that this implementation could be improved by using a view.
I am opening an issue here so we can keep our finger on this. @samuelgarcia, when you have time, can you outline how that would work?
|
process
|
improve unsignedtosignedrecording with view on samuelgarcia this implementation could be improved by using view i am opening an issue here so we can keep our finger on this samuelgarcia when you have time can you outline how that would work like
| 1
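For context on the record above: the core idea of an unsigned-to-signed conversion "with a view" can be shown with plain numpy, independent of SpikeInterface's recording classes. The sketch below is only that numpy idea (flipping the sign bit and reinterpreting the buffer), not the actual implementation discussed in the pull request; the function name and the 2**15 offset are assumptions for the common uint16 case.
```python
import numpy as np

def unsigned_to_signed_view(traces_uint16, offset=2 ** 15):
    """Reinterpret uint16 samples as int16 without an explicit astype() round trip.

    Flipping the most significant bit and viewing the buffer as int16 is
    equivalent to subtracting 2**15, the usual unsigned-to-signed offset.
    """
    assert traces_uint16.dtype == np.uint16 and offset == 2 ** 15
    return (traces_uint16 ^ np.uint16(offset)).view(np.int16)

# Quick self-check against the naive copy-based conversion.
rng = np.random.default_rng(0)
raw = rng.integers(0, 2 ** 16, size=1000, dtype=np.uint16)
naive = (raw.astype(np.int32) - 2 ** 15).astype(np.int16)
assert np.array_equal(unsigned_to_signed_view(raw), naive)
```
The appeal of the view form is that it avoids the intermediate int32 copy made by the naive cast.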
|
162,261
| 20,170,242,644
|
IssuesEvent
|
2022-02-10 09:46:02
|
tabacws-remediation-demos/JS-Demo-Rem
|
https://api.github.com/repos/tabacws-remediation-demos/JS-Demo-Rem
|
closed
|
selenium-webdriver-2.53.3.tgz: 1 vulnerabilities (highest severity is: 5.5) - autoclosed
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>selenium-webdriver-2.53.3.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/adm-zip/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/tabacws-remediation-demos/JS-Demo-Rem/commit/9812cf454807e6394a0d7f92fc28f8b6fa94dbc5">9812cf454807e6394a0d7f92fc28f8b6fa94dbc5</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2018-1002204](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | adm-zip-0.4.4.tgz | Transitive | 3.0.0 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-1002204</summary>
### Vulnerable Library - <b>adm-zip-0.4.4.tgz</b></p>
<p>A Javascript implementation of zip for nodejs. Allows user to create or extract zip files both in memory or to/from disk</p>
<p>Library home page: <a href="https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz">https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/adm-zip/package.json</p>
<p>
Dependency Hierarchy:
- selenium-webdriver-2.53.3.tgz (Root Library)
- :x: **adm-zip-0.4.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tabacws-remediation-demos/JS-Demo-Rem/commit/9812cf454807e6394a0d7f92fc28f8b6fa94dbc5">9812cf454807e6394a0d7f92fc28f8b6fa94dbc5</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
adm-zip npm library before 0.4.9 is vulnerable to directory traversal, allowing attackers to write to arbitrary files via a ../ (dot dot slash) in a Zip archive entry that is mishandled during extraction. This vulnerability is also known as 'Zip-Slip'.
<p>Publish Date: 2018-07-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204>CVE-2018-1002204</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1002204">https://nvd.nist.gov/vuln/detail/CVE-2018-1002204</a></p>
<p>Release Date: 2018-07-25</p>
<p>Fix Resolution (adm-zip): 0.4.9</p>
<p>Direct dependency fix Resolution (selenium-webdriver): 3.0.0</p>
</p>
<p></p>
<p>In order to enable automatic remediation, please create <a target="_blank" href="https://whitesource.atlassian.net/wiki/spaces/WD/pages/697696422/WhiteSource+for+GitHub.com#Remediate-Settings-(remediateSettings)">workflow rules</a></p>
</details>
***
<p>In order to enable automatic remediation for this issue, please create <a target="_blank" href="https://whitesource.atlassian.net/wiki/spaces/WD/pages/697696422/WhiteSource+for+GitHub.com#Remediate-Settings-(remediateSettings)">workflow rules</a></p>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"selenium-webdriver","packageVersion":"2.53.3","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"selenium-webdriver:2.53.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.0.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2018-1002204","vulnerabilityDetails":"adm-zip npm library before 0.4.9 is vulnerable to directory traversal, allowing attackers to write to arbitrary files via a ../ (dot dot slash) in a Zip archive entry that is mishandled during extraction. This vulnerability is also known as \u0027Zip-Slip\u0027.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"High"},"extraData":{}}]</REMEDIATE> -->
|
True
|
selenium-webdriver-2.53.3.tgz: 1 vulnerabilities (highest severity is: 5.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>selenium-webdriver-2.53.3.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/adm-zip/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/tabacws-remediation-demos/JS-Demo-Rem/commit/9812cf454807e6394a0d7f92fc28f8b6fa94dbc5">9812cf454807e6394a0d7f92fc28f8b6fa94dbc5</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2018-1002204](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | adm-zip-0.4.4.tgz | Transitive | 3.0.0 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-1002204</summary>
### Vulnerable Library - <b>adm-zip-0.4.4.tgz</b></p>
<p>A Javascript implementation of zip for nodejs. Allows user to create or extract zip files both in memory or to/from disk</p>
<p>Library home page: <a href="https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz">https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/adm-zip/package.json</p>
<p>
Dependency Hierarchy:
- selenium-webdriver-2.53.3.tgz (Root Library)
- :x: **adm-zip-0.4.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tabacws-remediation-demos/JS-Demo-Rem/commit/9812cf454807e6394a0d7f92fc28f8b6fa94dbc5">9812cf454807e6394a0d7f92fc28f8b6fa94dbc5</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
adm-zip npm library before 0.4.9 is vulnerable to directory traversal, allowing attackers to write to arbitrary files via a ../ (dot dot slash) in a Zip archive entry that is mishandled during extraction. This vulnerability is also known as 'Zip-Slip'.
<p>Publish Date: 2018-07-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204>CVE-2018-1002204</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1002204">https://nvd.nist.gov/vuln/detail/CVE-2018-1002204</a></p>
<p>Release Date: 2018-07-25</p>
<p>Fix Resolution (adm-zip): 0.4.9</p>
<p>Direct dependency fix Resolution (selenium-webdriver): 3.0.0</p>
</p>
<p></p>
<p>In order to enable automatic remediation, please create <a target="_blank" href="https://whitesource.atlassian.net/wiki/spaces/WD/pages/697696422/WhiteSource+for+GitHub.com#Remediate-Settings-(remediateSettings)">workflow rules</a></p>
</details>
***
<p>In order to enable automatic remediation for this issue, please create <a target="_blank" href="https://whitesource.atlassian.net/wiki/spaces/WD/pages/697696422/WhiteSource+for+GitHub.com#Remediate-Settings-(remediateSettings)">workflow rules</a></p>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"selenium-webdriver","packageVersion":"2.53.3","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"selenium-webdriver:2.53.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.0.0","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2018-1002204","vulnerabilityDetails":"adm-zip npm library before 0.4.9 is vulnerable to directory traversal, allowing attackers to write to arbitrary files via a ../ (dot dot slash) in a Zip archive entry that is mishandled during extraction. This vulnerability is also known as \u0027Zip-Slip\u0027.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1002204","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"High"},"extraData":{}}]</REMEDIATE> -->
|
non_process
|
selenium webdriver tgz vulnerabilities highest severity is autoclosed vulnerable library selenium webdriver tgz path to dependency file package json path to vulnerable library node modules adm zip package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium adm zip tgz transitive ✅ details cve vulnerable library adm zip tgz a javascript implementation of zip for nodejs allows user to create or extract zip files both in memory or to from disk library home page a href path to dependency file package json path to vulnerable library node modules adm zip package json dependency hierarchy selenium webdriver tgz root library x adm zip tgz vulnerable library found in head commit a href found in base branch main vulnerability details adm zip npm library before is vulnerable to directory traversal allowing attackers to write to arbitrary files via a dot dot slash in a zip archive entry that is mishandled during extraction this vulnerability is also known as zip slip publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution adm zip direct dependency fix resolution selenium webdriver in order to enable automatic remediation please create in order to enable automatic remediation for this issue please create istransitivedependency false dependencytree selenium webdriver isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails adm zip npm library before is vulnerable to directory traversal allowing attackers to write to arbitrary files via a dot dot slash in a zip archive entry that is mishandled during extraction this vulnerability is also known as slip vulnerabilityurl
| 0
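The vulnerability described in the record above (Zip-Slip) comes from archive entries containing `../` that escape the extraction directory. The affected library is the JavaScript adm-zip package; purely to illustrate the class of defence, here is a small Python sketch that rejects such entries before extracting (`safe_extract` and its arguments are made up for the example):
```python
import os
import zipfile

def safe_extract(archive_path, dest_dir):
    """Extract a zip archive, refusing entries whose paths escape dest_dir."""
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as archive:
        for entry in archive.namelist():
            target = os.path.realpath(os.path.join(dest_dir, entry))
            # A "../" entry (Zip-Slip) resolves outside dest_dir and is rejected.
            if not (target == dest_dir or target.startswith(dest_dir + os.sep)):
                raise ValueError(f"blocked path traversal attempt: {entry!r}")
        archive.extractall(dest_dir)
```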
|
440,010
| 12,692,051,062
|
IssuesEvent
|
2020-06-21 20:15:09
|
naphthasl/sakamoto
|
https://api.github.com/repos/naphthasl/sakamoto
|
closed
|
Find some way to handle extremely large document titles
|
bug enhancement medium priority
|
They currently result in undefined behaviour and are sometimes misaligned with the list element, along with plenty of other wacky issues.
|
1.0
|
Find some way to handle extremely large document titles - They currently result in undefined behaviour and are sometimes misaligned with the list element, along with plenty of other wacky issues.
|
non_process
|
find some way to handle extremely large document titles they currently result in undefined behaviour and are sometimes misaligned to the list element as well as plenty of other wacky issues
| 0
|
37,910
| 4,859,889,832
|
IssuesEvent
|
2016-11-13 21:34:12
|
CS3398-Sapphire-Sourcerer/Group-Project
|
https://api.github.com/repos/CS3398-Sapphire-Sourcerer/Group-Project
|
closed
|
Create socket communication functions
|
Design Infrastructure Research Server
|
Create event listener and emitters in api.py for socket functionality.
|
1.0
|
Create socket communication functions - Create event listener and emitters in api.py for socket functionality.
|
non_process
|
create socket communication functions create event listener and emitters in api py for socket functionality
| 0
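The record above asks for event listeners and emitters in api.py, but the project's actual socket stack is not shown. As a hedged sketch only, this is roughly what listener/emitter registration could look like with the python-socketio package; the package choice, the event names, and `broadcast_status` are all assumptions made for illustration.
```python
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)  # wrap in a WSGI app so any WSGI server can host it

@sio.event
def connect(sid, environ):
    # Listener: fired when a client connects.
    print("client connected:", sid)

@sio.on("message")
def handle_message(sid, data):
    # Listener: fired when a client emits a "message" event.
    print("received from", sid, ":", data)
    # Emitter: send an acknowledgement back to the same client.
    sio.emit("message_ack", {"ok": True}, room=sid)

# Emitter: broadcast to every connected client.
def broadcast_status(status):
    sio.emit("status", {"status": status})
```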
|
361,399
| 25,338,170,677
|
IssuesEvent
|
2022-11-18 18:46:33
|
pluralsight/tva
|
https://api.github.com/repos/pluralsight/tva
|
closed
|
[Docs?]: add new styles sub-path info
|
documentation
|
### Latest version
- [X] I have checked the latest version
### Summary 💡
HS now has a feature from #740 that adds a new subpath `/styles` which exports all component generated style objects via `<componentName>Styles` export. Need a page about extending styles now.
Example:
```javascript
import { buttonStyles } from '@pluralsight/headless-styles/styles'
```
### Motivation 🔦
_No response_
|
1.0
|
[Docs?]: add new styles sub-path info - ### Latest version
- [X] I have checked the latest version
### Summary 💡
HS now has a feature from #740 that adds a new subpath `/styles` which exports all component generated style objects via `<componentName>Styles` export. Need a page about extending styles now.
Example:
```javascript
import { buttonStyles } from '@pluralsight/headless-styles/styles'
```
### Motivation 🔦
_No response_
|
non_process
|
add new styles sub path info latest version i have checked the latest version summary 💡 hs now has a feature from that adds a new subpath styles which exports all component generated style objects via styles export need a page about extending styles now example javascript import buttonstyles from pluralsight headless styles styles motivation 🔦 no response
| 0
|
51,676
| 13,211,280,319
|
IssuesEvent
|
2020-08-15 22:01:03
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
[icetray] logging is broken on Ubuntu 14.04 (Trac #817)
|
Incomplete Migration Migrated from Trac combo core defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/817">https://code.icecube.wisc.edu/projects/icecube/ticket/817</a>, reported by olivasand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:13:59",
"_ts": "1458335639558230",
"description": "...and maybe other platforms, but this is the one I'm working on now.\n\nSetting the logging level using :\nicetray.set_log_level_for_unit(...) does nothing.\n\nicetray.set_log_level doesn't change the logging output either.\n\nFinally do we still have a mechanism to change the logging level from C++ (i.e. tests) or did we lose that with the last logging shakeup?\n",
"reporter": "olivas",
"cc": "",
"resolution": "worksforme",
"time": "2014-11-29T03:49:44",
"component": "combo core",
"summary": "[icetray] logging is broken on Ubuntu 14.04",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[icetray] logging is broken on Ubuntu 14.04 (Trac #817) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/817">https://code.icecube.wisc.edu/projects/icecube/ticket/817</a>, reported by olivasand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:13:59",
"_ts": "1458335639558230",
"description": "...and maybe other platforms, but this is the one I'm working on now.\n\nSetting the logging level using :\nicetray.set_log_level_for_unit(...) does nothing.\n\nicetray.set_log_level doesn't change the logging output either.\n\nFinally do we still have a mechanism to change the logging level from C++ (i.e. tests) or did we lose that with the last logging shakeup?\n",
"reporter": "olivas",
"cc": "",
"resolution": "worksforme",
"time": "2014-11-29T03:49:44",
"component": "combo core",
"summary": "[icetray] logging is broken on Ubuntu 14.04",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
logging is broken on ubuntu trac migrated from json status closed changetime ts description and maybe other platforms but this is the one i m working on now n nsetting the logging level using nicetray set log level for unit does nothing n nicetray set log level doesn t change the logging output either n nfinally do we still have a mechanism to change the logging level from c i e tests or did we lose that with the last logging shakeup n reporter olivas cc resolution worksforme time component combo core summary logging is broken on ubuntu priority major keywords milestone owner olivas type defect
| 0
|
157,382
| 19,957,169,566
|
IssuesEvent
|
2022-01-28 01:32:30
|
snykiotcubedev/shadowsocks-4.4.0.0
|
https://api.github.com/repos/snykiotcubedev/shadowsocks-4.4.0.0
|
opened
|
CVE-2021-22570 (Medium) detected in google.protobuf.3.14.0.nupkg
|
security vulnerability
|
## CVE-2021-22570 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>google.protobuf.3.14.0.nupkg</b></p></summary>
<p>C# runtime library for Protocol Buffers - Google's data interchange format.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/google.protobuf.3.14.0.nupkg">https://api.nuget.org/packages/google.protobuf.3.14.0.nupkg</a></p>
<p>Path to dependency file: /shadowsocks-csharp/shadowsocks-csharp.csproj</p>
<p>Path to vulnerable library: /es/google.protobuf/3.14.0/google.protobuf.3.14.0.nupkg</p>
<p>
Dependency Hierarchy:
- :x: **google.protobuf.3.14.0.nupkg** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Nullptr dereference when a null char is present in a proto symbol. The symbol is parsed incorrectly, leading to an unchecked call into the proto file's name during generation of the resulting error message. Since the symbol is incorrectly parsed, the file is nullptr. We recommend upgrading to version 3.15.0 or greater.
<p>Publish Date: 2022-01-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22570>CVE-2021-22570</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/protocolbuffers/protobuf/releases/tag/v3.15.0">https://github.com/protocolbuffers/protobuf/releases/tag/v3.15.0</a></p>
<p>Release Date: 2022-01-26</p>
<p>Fix Resolution: Google.Protobuf - 3.15.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-22570 (Medium) detected in google.protobuf.3.14.0.nupkg - ## CVE-2021-22570 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>google.protobuf.3.14.0.nupkg</b></p></summary>
<p>C# runtime library for Protocol Buffers - Google's data interchange format.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/google.protobuf.3.14.0.nupkg">https://api.nuget.org/packages/google.protobuf.3.14.0.nupkg</a></p>
<p>Path to dependency file: /shadowsocks-csharp/shadowsocks-csharp.csproj</p>
<p>Path to vulnerable library: /es/google.protobuf/3.14.0/google.protobuf.3.14.0.nupkg</p>
<p>
Dependency Hierarchy:
- :x: **google.protobuf.3.14.0.nupkg** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Nullptr dereference when a null char is present in a proto symbol. The symbol is parsed incorrectly, leading to an unchecked call into the proto file's name during generation of the resulting error message. Since the symbol is incorrectly parsed, the file is nullptr. We recommend upgrading to version 3.15.0 or greater.
<p>Publish Date: 2022-01-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22570>CVE-2021-22570</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/protocolbuffers/protobuf/releases/tag/v3.15.0">https://github.com/protocolbuffers/protobuf/releases/tag/v3.15.0</a></p>
<p>Release Date: 2022-01-26</p>
<p>Fix Resolution: Google.Protobuf - 3.15.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in google protobuf nupkg cve medium severity vulnerability vulnerable library google protobuf nupkg c runtime library for protocol buffers google s data interchange format library home page a href path to dependency file shadowsocks csharp shadowsocks csharp csproj path to vulnerable library es google protobuf google protobuf nupkg dependency hierarchy x google protobuf nupkg vulnerable library found in base branch main vulnerability details nullptr dereference when a null char is present in a proto symbol the symbol is parsed incorrectly leading to an unchecked call into the proto file s name during generation of the resulting error message since the symbol is incorrectly parsed the file is nullptr we recommend upgrading to version or greater publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution google protobuf step up your open source security game with whitesource
| 0
|
193,368
| 22,216,142,229
|
IssuesEvent
|
2022-06-08 02:00:15
|
AlexRogalskiy/weather-time
|
https://api.github.com/repos/AlexRogalskiy/weather-time
|
closed
|
CVE-2022-21680 (High) detected in marked-2.0.7.tgz - autoclosed
|
security vulnerability
|
## CVE-2022-21680 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-2.0.7.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-2.0.7.tgz">https://registry.npmjs.org/marked/-/marked-2.0.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/typedoc/node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- typedoc-0.20.37.tgz (Root Library)
- :x: **marked-2.0.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/weather-time/commit/0604d6bbe372de0167c60b03483b43dddf3cbfae">0604d6bbe372de0167c60b03483b43dddf3cbfae</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Marked is a markdown parser and compiler. Prior to version 4.0.10, the regular expression `block.def` may cause catastrophic backtracking against some strings and lead to a regular expression denial of service (ReDoS). Anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected. This issue is patched in version 4.0.10. As a workaround, avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources.
<p>Publish Date: 2022-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21680>CVE-2022-21680</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-rrrm-qjm4-v8hf">https://github.com/advisories/GHSA-rrrm-qjm4-v8hf</a></p>
<p>Release Date: 2022-01-14</p>
<p>Fix Resolution: marked - 4.0.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-21680 (High) detected in marked-2.0.7.tgz - autoclosed - ## CVE-2022-21680 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-2.0.7.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-2.0.7.tgz">https://registry.npmjs.org/marked/-/marked-2.0.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/typedoc/node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- typedoc-0.20.37.tgz (Root Library)
- :x: **marked-2.0.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/weather-time/commit/0604d6bbe372de0167c60b03483b43dddf3cbfae">0604d6bbe372de0167c60b03483b43dddf3cbfae</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Marked is a markdown parser and compiler. Prior to version 4.0.10, the regular expression `block.def` may cause catastrophic backtracking against some strings and lead to a regular expression denial of service (ReDoS). Anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected. This issue is patched in version 4.0.10. As a workaround, avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources.
<p>Publish Date: 2022-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21680>CVE-2022-21680</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-rrrm-qjm4-v8hf">https://github.com/advisories/GHSA-rrrm-qjm4-v8hf</a></p>
<p>Release Date: 2022-01-14</p>
<p>Fix Resolution: marked - 4.0.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in marked tgz autoclosed cve high severity vulnerability vulnerable library marked tgz a markdown parser built for speed library home page a href path to dependency file package json path to vulnerable library node modules typedoc node modules marked package json dependency hierarchy typedoc tgz root library x marked tgz vulnerable library found in head commit a href vulnerability details marked is a markdown parser and compiler prior to version the regular expression block def may cause catastrophic backtracking against some strings and lead to a regular expression denial of service redos anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected this issue is patched in version as a workaround avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution marked step up your open source security game with whitesource
| 0
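marked itself is a JavaScript library, so the advisory's workaround (run untrusted markdown through a worker with a time limit) cannot be demonstrated with the library directly here. The sketch below is a hedged, language-neutral illustration of that same pattern in Python; `parse_untrusted_markdown` is a hypothetical stand-in for whatever parser handles untrusted input.
```python
# Hedged sketch of the "worker with a time limit" mitigation pattern from the
# advisory. `parse_untrusted_markdown` is a hypothetical placeholder; marked
# itself is a JavaScript library and is not used here.
import multiprocessing


def parse_untrusted_markdown(text: str) -> None:
    # Placeholder for a potentially ReDoS-prone parse of untrusted input.
    _ = text.count("[")  # trivial work so the sketch runs


def parse_with_timeout(text: str, timeout_s: float = 2.0) -> bool:
    """Run the parse in a separate process and kill it if it exceeds timeout_s."""
    worker = multiprocessing.Process(target=parse_untrusted_markdown, args=(text,))
    worker.start()
    worker.join(timeout_s)
    if worker.is_alive():
        worker.terminate()  # abort a runaway (e.g. catastrophically backtracking) parse
        worker.join()
        return False  # timed out
    return True


if __name__ == "__main__":
    ok = parse_with_timeout("# untrusted input\n[link](x)")
    print("parsed within time limit" if ok else "parse aborted after timeout")
```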
|
16,220
| 20,748,617,848
|
IssuesEvent
|
2022-03-15 03:40:59
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
opened
|
DISABLED test_exception_all (__main__.SpawnTest)
|
module: multiprocessing module: flaky-tests skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_exception_all%2C%20SpawnTest) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/actions/runs/1984255288).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 3 green.
|
1.0
|
DISABLED test_exception_all (__main__.SpawnTest) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_exception_all%2C%20SpawnTest) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/actions/runs/1984255288).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 3 green.
|
process
|
disabled test exception all main spawntest platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with red and green
| 1
|
8,830
| 11,940,965,880
|
IssuesEvent
|
2020-04-02 17:36:21
|
MicrosoftDocs/vsts-docs
|
https://api.github.com/repos/MicrosoftDocs/vsts-docs
|
closed
|
Filtered arrays example is in JSON, not in YAML
|
Pri1 devops-cicd-process/tech devops/prod
|
Example in "Filtered arrays" section above is in JSON when this article is about YAML. Please put it in YAML. Thanks in advance.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Filtered arrays example is in JSON, not in YAML - The example in the "Filtered arrays" section above is in JSON, even though this article is about YAML. Please provide it in YAML. Thanks in advance.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
filtered arrays example is in json not in yaml example in filtered arrays section above is in json when this article is about yaml please put it in yaml thanks in advance document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
294,306
| 9,015,660,402
|
IssuesEvent
|
2019-02-06 04:17:40
|
Sub6Resources/flutter_html
|
https://api.github.com/repos/Sub6Resources/flutter_html
|
opened
|
Add support for all elements in the `RichText` parser
|
enhancement good first issue medium-priority
|
The following still need support in the `RichText` version of the parser:
- [ ] `address`
- [ ] `aside`
- [ ] `bdi`
- [ ] `bdo`
- [ ] `big`
- [ ] `cite`
- [ ] `data`
- [ ] `ins`
- [ ] `kbd`
- [ ] `mark`
- [ ] `nav`
- [ ] `noscript`
- [ ] `q`
- [ ] `rp`
- [ ] `rt`
- [ ] `ruby`
- [ ] `s`
- [ ] `samp`
- [ ] `span`
- [ ] `strike`
- [ ] `sub`
- [ ] `sup`
- [ ] `template`
- [ ] `time`
- [ ] `tt`
- [ ] `var`
|
1.0
|
Add support for all elements in the `RichText` parser - The following still need support in the `RichText` version of the parser:
- [ ] `address`
- [ ] `aside`
- [ ] `bdi`
- [ ] `bdo`
- [ ] `big`
- [ ] `cite`
- [ ] `data`
- [ ] `ins`
- [ ] `kbd`
- [ ] `mark`
- [ ] `nav`
- [ ] `noscript`
- [ ] `q`
- [ ] `rp`
- [ ] `rt`
- [ ] `ruby`
- [ ] `s`
- [ ] `samp`
- [ ] `span`
- [ ] `strike`
- [ ] `sub`
- [ ] `sup`
- [ ] `template`
- [ ] `time`
- [ ] `tt`
- [ ] `var`
|
non_process
|
add support for all elements in the richtext parser the following still need support in the richtext version of the parser address aside bdi bdo big cite data ins kbd mark nav noscript q rp rt ruby s samp span strike sub sup template time tt var
| 0
|
21,889
| 30,340,161,120
|
IssuesEvent
|
2023-07-11 12:15:40
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
cannot use routingprocessor.NewFactory() as exporter.Factory, does not implement exporter.Factory.CreateLogsExporter
|
bug processor/routing needs triage exporter/azuredataexplorer
|
### Component(s)
exporter/azuredataexplorer, processor/routing
### What happened?
Using the ocb builder, which uses the routing processor to push logs to either of two Azure ADX clusters depending on a service attribute.
ocb build fails.
------
```
dist:
name: otelcollector
version: "0.2.0"
otelcol_version: "0.80.0"
include_core: true
output_path: bin
debug_compilation: false
receivers:
- gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.80.0
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/receiver/hostmetricsreceiver v0.80.0 # beta
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/receiver/prometheusreceiver v0.80.0 # alpha
processors:
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourceprocessor
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/resourceprocessor v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/attributesprocessor v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/metricstransformprocessor v0.80.0 # alpha
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor#resource-detection-processor
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/resourcedetectionprocessor v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/cumulativetodeltaprocessor v0.80.0 # beta
# https://github.com/observatorium/observatorium-otelcol/tree/ab7ac790b11e0e21bd36f6afd69ac781366a794c/processors/authenticationprocessor
- gomod: go.opentelemetry.io/collector/processor/batchprocessor v0.80.0 # stable
exporters:
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/datadogexporter v0.80.0 # stable
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/azuredataexplorerexporter v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/routingprocessor v0.80.0 # beta
connectors:
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/connector/spanmetricsconnector v0.80.0 # alpha
extensions:
- gomod: go.opentelemetry.io/collector/extension/zpagesextension v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/extension/pprofextension v0.80.0 # beta
- gomod: go.opentelemetry.io/collector/extension/ballastextension v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/extension/healthcheckextension v0.80.0 # beta
replaces:
# Datadog Exporter causing the dependency issue - https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/23425#issuecomment-1595557595
- google.golang.org/genproto => google.golang.org/genproto v0.0.0-20230530153820-e85fd2cbaebc
```
### Collector version
0.80.0
### Environment information
## Environment
Ubuntu 22.04
go 1.20.5
### OpenTelemetry Collector configuration
```yaml
# container ports exposed:
#ports:
# - "1888:1888" # pprof extension
# - "13133:13133" # health_check extension
# - "4317:4317" # OTLP gRPC receiver
# - "4318:4318" # OTLP HTTP receiver
# - "55670:55679" # zpages extension
extensions:
# zPages extension enables in-process diagnostics
zpages:
endpoint: "localhost:55679"
# Health Check extension responds to health check requests
health_check:
endpoint: localhost:8081 # health check result is pushed to 8081 port and this is referred in livenessProbe of the collector app
memory_ballast:
size_mib: 512
# PProf extension allows fetching Collector's performance profile
#https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/pprofextension
pprof:
endpoint: "localhost:1777"
# block_profile_fraction: 3
# mutex_profile_fraction: 5
receivers:
hostmetrics:
collection_interval: 10s # The hostmetrics receiver is required to get correct infrastructure metrics in Datadog.
scrapers:
paging:
metrics:
system.paging.utilization:
enabled: true # This metric is required to get correct infrastructure metrics in Datadog.
cpu:
metrics:
system.cpu.utilization:
enabled: true # This metric is required to get correct infrastructure metrics in Datadog.
memory:
load:
cpu_average: true
network:
#<include|exclude>:
# interfaces: [ <interface name>, ... ]
# match_type: <strict|regexp>
process:
#<include|exclude>:
# names: [ <process name>, ... ]
# match_type: <strict|regexp>
# https://github.com/open-telemetry/opentelemetry-collector/issues/3004
mute_process_name_error: false
mute_process_exe_error: false
mute_process_io_error: false
#scrape_process_delay: <time>
hostmetrics/disk:
collection_interval: 3m
scrapers:
disk: # This metric is required to get correct infrastructure metrics in Datadog.
#<include|exclude>:
# devices: [ <device name>, ... ]
# match_type: <strict|regexp>
filesystem:
metrics:
system.filesystem.utilization:
enabled: true # This metric is required to get correct infrastructure metrics in Datadog.
#<include_devices|exclude_devices>:
# devices: [ <device name>, ... ]
# match_type: <strict|regexp>
#<include_fs_types|exclude_fs_types>:
# fs_types: [ <filesystem type>, ... ]
# match_type: <strict|regexp>
#<include_mount_points|exclude_mount_points>:
# mount_points: [ <mount point>, ... ]
# match_type: <strict|regexp>
otlp:
protocols:
grpc:
endpoint: "localhost:4317"
http:
endpoint: "localhost:4318"
# scrape collector's own metrics and push
prometheus/otelcol:
config:
scrape_configs:
- job_name: 'otelcol'
scrape_interval: 10s
static_configs:
- targets: ['localhost:8888']
processors:
# https://github.com/observatorium/observatorium-otelcol/tree/ab7ac790b11e0e21bd36f6afd69ac781366a794c/processors/authenticationprocessor
#plugin processor for token oauth ?
# detect docker env
# To detect resource information from the host, Otel Collector comes with resourcedetection processor.
# Resource Detection Processor automatically detects and labels metadata about the environment in which the data was generated.
# Such metadata, known as "resources", provides context to the telemetry data and can include information such as the host, service,
# container, and cloud provider.
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor
resourcedetection: # Queries the host machine to retrieve the following resource attributes: host.name, host.id, os.type,
detectors: [env, system, docker, azure]
timeout: 10s
override: false
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.80.0/processor/cumulativetodeltaprocessor
cumulativetodelta:
batch/metrics:
# Datadog APM Intake limit is 3.2MB. Let's make sure the batches do not go over that.
send_batch_max_size: 1000 # (default = 8192): Maximum batch size of spans to be sent to the backend. The default value is 8192 spans.
send_batch_size: 100 # (default = 512): Maximum number of spans to process in a batch. The default value is 512 spans.
timeout: 10s # (default = 5s): Maximum time to wait until the batch is sent. The default value is 5s.
batch/traces:
send_batch_max_size: 1000 # (default = 8192): Maximum batch size of spans to be sent to the backend. The default value is 8192 spans.
send_batch_size: 100 # (default = 512): Maximum number of spans to process in a batch. The default value is 512 spans.
timeout: 5s # (default = 5s): Maximum time to wait until the batched traces are sent. The default value is 5s.
batch/logs:
send_batch_max_size: 1000 # (default = 8192): Maximum batch size of spans to be sent to the backend. The default value is 8192 spans.
send_batch_size: 100 # (default = 512): Maximum number of spans to process in a batch. The default value is 512 spans.
timeout: 30s # (default = 5s): Maximum time to wait until the batched logs are sent. The default value is 5s.
attributes:
actions:
- key: tags
value: ["DD_ENV:${env:ENVIRONMENT}", "geo:${env:GEO}"]
action: upsert
resource:
attributes:
- key: DD_ENV
value: ${env:ENVIRONMENT}
action: upsert
- key: env
value: ${env:ENVIRONMENT}
action: upsert
- key: geo
value: ${env:GEO} # china, emea, amap
action: upsert
routing:
from_attribute: service
attribute_source: context
default_exporters:
- azuredataexplorer/unrouted
table:
- value: micotelcollector
exporters: [azuredataexplorer/emeadev]
- value: micotelgateway
exporters: [azuredataexplorer/emeadev]
connectors:
# get metrics about the collector traces by connecting trace exporter to metric receiver producing trace collector metrics
spanmetrics:
# histogram:
# explicit:
# buckets: [100us, 1ms, 2ms, 6ms, 10ms, 100ms, 250ms]
# dimensions:
# - name: http.method
# default: GET
# - name: http.status_code
# dimensions_cache_size: 1000
# aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
exporters:
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/datadogexporter
datadog:
api:
site: datadoghq.com
key: ${env:DATADOG_API_KEY}
#tls:
# insecure_skip_verify: true
metrics:
resource_attributes_as_tags: true
host_metadata:
enabled: true
tags: ["DD_ENV:${env:ENVIRONMENT}", "geo:${env:GEO}", "region:${env:REGION}"]
#tags: ["DD_SERVICE:${env:APP_NAME}", "DD_VERSION:${env:APP_VERSION}", "fqin:${env:APP_NAME}", "geo:${env:GEO}", "region:${env:REGION}", "release_name:${env:APP_NAME}", "service:${env:APP_NAME}", "key:${env:APP_ID}", "hostname:"]
# this exporter is used for logs in ownership of en2ned monitoring, like logs from collector and or gateway
azuredataexplorer/emeadev:
cluster_uri: "https://XXXXXXXXXXX.westeurope.kusto.windows.net"
application_id: "XXXXXXXXXXXXXXXXXXXXXXXX"
application_key: "${ADX_APPREG_SECRET}"
tenant_id: "${AZURE_TENANT_ID}"
db_name: "emeadev"
metrics_table_name: "metrics"
logs_table_name: "logs"
traces_table_name: "traces"
ingestion_type : "managed" # https://learn.microsoft.com/en-us/azure/data-explorer/open-telemetry-connector?tabs=command-line#set-up-streaming-ingestion
#otelmetrics_mapping : "<json metrics_table_name mapping>"
#otellogs_mapping : "<json logs_table_name mapping>"
#oteltraces_mapping : "<json traces_table_name mapping>"
azuredataexplorer/unrouted:
cluster_uri: "https://XXXXXXXX.westeurope.kusto.windows.net"
application_id: "XXXXXXXXXXXXXXXXXXXX"
application_key: "${ADX_APPREG_SECRET}"
tenant_id: "${AZURE_TENANT_ID}"
db_name: "unrouted"
metrics_table_name: "metrics"
logs_table_name: "logs"
traces_table_name: "traces"
ingestion_type : "managed" # https://learn.microsoft.com/en-us/azure/data-explorer/open-telemetry-connector?tabs=command-line#set-up-streaming-ingestion
#otelmetrics_mapping : "<json metrics_table_name mapping>"
#otellogs_mapping : "<json logs_table_name mapping>"
#oteltraces_mapping : "<json traces_table_name mapping>"
logging:
logLevel: info
service:
extensions: [pprof, zpages, health_check,memory_ballast]
telemetry:
# collect metrics about collector itself
metrics:
address: 'localhost:8888'
# debug if collector is not working
logs:
level: 'info'
pipelines:
#The ballast should be configured to be 1/3 to 1/2 of the memory allocated to the collector.
traces:
receivers: [otlp]
processors: [batch/traces,attributes, resource]
exporters: [datadog, spanmetrics]
metrics/hostmetrics:
receivers: [spanmetrics,otlp]
processors: [batch/metrics, attributes, resource]
exporters: [datadog]
metrics:
receivers: [otlp, spanmetrics]
processors: [batch/metrics,attributes, resource]
exporters: [datadog]
logs:
receivers: [otlp]
processors: [routing,batch/logs,attributes]
exporters: [azuredataexplorer/emeadev,azuredataexplorer/unrouted]
```
### Log output
```shell
> [build 11/11] RUN ocb --config /workspace/builder.yaml:
#18 0.405 2023-07-05T09:01:57.895Z INFO internal/command.go:115 OpenTelemetry Collector Builder {"version": "0.80.0", "date": "2023-06-20T19:29:20Z"}
#18 0.417 2023-07-05T09:01:57.904Z INFO internal/command.go:148 Using config file {"path": "/workspace/builder.yaml"}
#18 0.417 2023-07-05T09:01:57.904Z INFO builder/config.go:105 Using go {"go-executable": "/usr/local/go/bin/go"}
#18 0.417 2023-07-05T09:01:57.907Z INFO builder/main.go:65 Sources created {"path": "bin"}
#18 132.9 2023-07-05T09:04:10.425Z INFO builder/main.go:117 Getting go modules
#18 135.6 2023-07-05T09:04:13.116Z INFO builder/main.go:76 Compiling
#18 1406.1 Error: failed to compile the OpenTelemetry Collector distribution: exit status 1. Output:
#18 1406.1 # gitlab.com/otelcollector
#18 1406.1 ./components.go:57:3: cannot use routingprocessor.NewFactory() (value of type processor.Factory) as exporter.Factory value in argument to exporter.MakeFactoryMap: processor.Factory does not implement exporter.Factory (missing method CreateLogsExporter)
#18 1406.1
```
### Additional context
_No response_
|
1.0
|
cannot use routingprocessor.NewFactory() as exporter.Factory, does not implement exporter.Factory.CreateLogsExporter - ### Component(s)
exporter/azuredataexplorer, processor/routing
### What happened?
Using the ocb builder, which uses the routing processor to push logs to either of two Azure ADX clusters depending on a service attribute.
ocb build fails.
------
```
dist:
name: otelcollector
version: "0.2.0"
otelcol_version: "0.80.0"
include_core: true
output_path: bin
debug_compilation: false
receivers:
- gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.80.0
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/receiver/hostmetricsreceiver v0.80.0 # beta
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/receiver/prometheusreceiver v0.80.0 # alpha
processors:
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourceprocessor
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/resourceprocessor v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/attributesprocessor v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/metricstransformprocessor v0.80.0 # alpha
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor#resource-detection-processor
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/resourcedetectionprocessor v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/cumulativetodeltaprocessor v0.80.0 # beta
# https://github.com/observatorium/observatorium-otelcol/tree/ab7ac790b11e0e21bd36f6afd69ac781366a794c/processors/authenticationprocessor
- gomod: go.opentelemetry.io/collector/processor/batchprocessor v0.80.0 # stable
exporters:
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/datadogexporter v0.80.0 # stable
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/azuredataexplorerexporter v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/processor/routingprocessor v0.80.0 # beta
connectors:
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/connector/spanmetricsconnector v0.80.0 # alpha
extensions:
- gomod: go.opentelemetry.io/collector/extension/zpagesextension v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/extension/pprofextension v0.80.0 # beta
- gomod: go.opentelemetry.io/collector/extension/ballastextension v0.80.0 # beta
- gomod: github.com/open-telemetry/opentelemetry-collector-contrib/extension/healthcheckextension v0.80.0 # beta
replaces:
# Datadog Exporter causing the dependency issue - https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/23425#issuecomment-1595557595
- google.golang.org/genproto => google.golang.org/genproto v0.0.0-20230530153820-e85fd2cbaebc
```
### Collector version
0.80.0
### Environment information
## Environment
Ubuntu 22.04
go 1.20.5
### OpenTelemetry Collector configuration
```yaml
# container ports exposed:
#ports:
# - "1888:1888" # pprof extension
# - "13133:13133" # health_check extension
# - "4317:4317" # OTLP gRPC receiver
# - "4318:4318" # OTLP HTTP receiver
# - "55670:55679" # zpages extension
extensions:
# zPages extension enables in-process diagnostics
zpages:
endpoint: "localhost:55679"
# Health Check extension responds to health check requests
health_check:
endpoint: localhost:8081 # health check result is pushed to 8081 port and this is referred in livenessProbe of the collector app
memory_ballast:
size_mib: 512
# PProf extension allows fetching Collector's performance profile
#https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/pprofextension
pprof:
endpoint: "localhost:1777"
# block_profile_fraction: 3
# mutex_profile_fraction: 5
receivers:
hostmetrics:
collection_interval: 10s # The hostmetrics receiver is required to get correct infrastructure metrics in Datadog.
scrapers:
paging:
metrics:
system.paging.utilization:
enabled: true # This metric is required to get correct infrastructure metrics in Datadog.
cpu:
metrics:
system.cpu.utilization:
enabled: true # This metric is required to get correct infrastructure metrics in Datadog.
memory:
load:
cpu_average: true
network:
#<include|exclude>:
# interfaces: [ <interface name>, ... ]
# match_type: <strict|regexp>
process:
#<include|exclude>:
# names: [ <process name>, ... ]
# match_type: <strict|regexp>
# https://github.com/open-telemetry/opentelemetry-collector/issues/3004
mute_process_name_error: false
mute_process_exe_error: false
mute_process_io_error: false
#scrape_process_delay: <time>
hostmetrics/disk:
collection_interval: 3m
scrapers:
disk: # This metric is required to get correct infrastructure metrics in Datadog.
#<include|exclude>:
# devices: [ <device name>, ... ]
# match_type: <strict|regexp>
filesystem:
metrics:
system.filesystem.utilization:
enabled: true # This metric is required to get correct infrastructure metrics in Datadog.
#<include_devices|exclude_devices>:
# devices: [ <device name>, ... ]
# match_type: <strict|regexp>
#<include_fs_types|exclude_fs_types>:
# fs_types: [ <filesystem type>, ... ]
# match_type: <strict|regexp>
#<include_mount_points|exclude_mount_points>:
# mount_points: [ <mount point>, ... ]
# match_type: <strict|regexp>
otlp:
protocols:
grpc:
endpoint: "localhost:4317"
http:
endpoint: "localhost:4318"
# scrape collector's own metrics and push
prometheus/otelcol:
config:
scrape_configs:
- job_name: 'otelcol'
scrape_interval: 10s
static_configs:
- targets: ['localhost:8888']
processors:
# https://github.com/observatorium/observatorium-otelcol/tree/ab7ac790b11e0e21bd36f6afd69ac781366a794c/processors/authenticationprocessor
#plugin processor for token oauth ?
# detect docker env
# To detect resource information from the host, Otel Collector comes with resourcedetection processor.
# Resource Detection Processor automatically detects and labels metadata about the environment in which the data was generated.
# Such metadata, known as "resources", provides context to the telemetry data and can include information such as the host, service,
# container, and cloud provider.
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor
resourcedetection: # Queries the host machine to retrieve the following resource attributes: host.name, host.id, os.type,
detectors: [env, system, docker, azure]
timeout: 10s
override: false
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/v0.80.0/processor/cumulativetodeltaprocessor
cumulativetodelta:
batch/metrics:
# Datadog APM Intake limit is 3.2MB. Let's make sure the batches do not go over that.
send_batch_max_size: 1000 # (default = 8192): Maximum batch size of spans to be sent to the backend. The default value is 8192 spans.
send_batch_size: 100 # (default = 512): Maximum number of spans to process in a batch. The default value is 512 spans.
timeout: 10s # (default = 5s): Maximum time to wait until the batch is sent. The default value is 5s.
batch/traces:
send_batch_max_size: 1000 # (default = 8192): Maximum batch size of spans to be sent to the backend. The default value is 8192 spans.
send_batch_size: 100 # (default = 512): Maximum number of spans to process in a batch. The default value is 512 spans.
timeout: 5s # (default = 5s): Maximum time to wait until the batched traces are sent. The default value is 5s.
batch/logs:
send_batch_max_size: 1000 # (default = 8192): Maximum batch size of spans to be sent to the backend. The default value is 8192 spans.
send_batch_size: 100 # (default = 512): Maximum number of spans to process in a batch. The default value is 512 spans.
timeout: 30s # (default = 5s): Maximum time to wait until the batched logs are sent. The default value is 5s.
attributes:
actions:
- key: tags
value: ["DD_ENV:${env:ENVIRONMENT}", "geo:${env:GEO}"]
action: upsert
resource:
attributes:
- key: DD_ENV
value: ${env:ENVIRONMENT}
action: upsert
- key: env
value: ${env:ENVIRONMENT}
action: upsert
- key: geo
value: ${env:GEO} # china, emea, amap
action: upsert
routing:
from_attribute: service
attribute_source: context
default_exporters:
- azuredataexplorer/unrouted
table:
- value: micotelcollector
exporters: [azuredataexplorer/emeadev]
- value: micotelgateway
exporters: [azuredataexplorer/emeadev]
connectors:
# get metrics about the collector traces by connecting trace exporter to metric receiver producing trace collector metrics
spanmetrics:
# histogram:
# explicit:
# buckets: [100us, 1ms, 2ms, 6ms, 10ms, 100ms, 250ms]
# dimensions:
# - name: http.method
# default: GET
# - name: http.status_code
# dimensions_cache_size: 1000
# aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
exporters:
# https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/datadogexporter
datadog:
api:
site: datadoghq.com
key: ${env:DATADOG_API_KEY}
#tls:
# insecure_skip_verify: true
metrics:
resource_attributes_as_tags: true
host_metadata:
enabled: true
tags: ["DD_ENV:${env:ENVIRONMENT}", "geo:${env:GEO}", "region:${env:REGION}"]
#tags: ["DD_SERVICE:${env:APP_NAME}", "DD_VERSION:${env:APP_VERSION}", "fqin:${env:APP_NAME}", "geo:${env:GEO}", "region:${env:REGION}", "release_name:${env:APP_NAME}", "service:${env:APP_NAME}", "key:${env:APP_ID}", "hostname:"]
# this exporter is used for logs in ownership of en2ned monitoring, like logs from collector and or gateway
azuredataexplorer/emeadev:
cluster_uri: "https://XXXXXXXXXXX.westeurope.kusto.windows.net"
application_id: "XXXXXXXXXXXXXXXXXXXXXXXX"
application_key: "${ADX_APPREG_SECRET}"
tenant_id: "${AZURE_TENANT_ID}"
db_name: "emeadev"
metrics_table_name: "metrics"
logs_table_name: "logs"
traces_table_name: "traces"
ingestion_type : "managed" # https://learn.microsoft.com/en-us/azure/data-explorer/open-telemetry-connector?tabs=command-line#set-up-streaming-ingestion
#otelmetrics_mapping : "<json metrics_table_name mapping>"
#otellogs_mapping : "<json logs_table_name mapping>"
#oteltraces_mapping : "<json traces_table_name mapping>"
azuredataexplorer/unrouted:
cluster_uri: "https://XXXXXXXX.westeurope.kusto.windows.net"
application_id: "XXXXXXXXXXXXXXXXXXXX"
application_key: "${ADX_APPREG_SECRET}"
tenant_id: "${AZURE_TENANT_ID}"
db_name: "unrouted"
metrics_table_name: "metrics"
logs_table_name: "logs"
traces_table_name: "traces"
ingestion_type : "managed" # https://learn.microsoft.com/en-us/azure/data-explorer/open-telemetry-connector?tabs=command-line#set-up-streaming-ingestion
#otelmetrics_mapping : "<json metrics_table_name mapping>"
#otellogs_mapping : "<json logs_table_name mapping>"
#oteltraces_mapping : "<json traces_table_name mapping>"
logging:
logLevel: info
service:
extensions: [pprof, zpages, health_check,memory_ballast]
telemetry:
# collect metrics about collector itself
metrics:
address: 'localhost:8888'
# debug if collector is not working
logs:
level: 'info'
pipelines:
#The ballast should be configured to be 1/3 to 1/2 of the memory allocated to the collector.
traces:
receivers: [otlp]
processors: [batch/traces,attributes, resource]
exporters: [datadog, spanmetrics]
metrics/hostmetrics:
receivers: [spanmetrics,otlp]
processors: [batch/metrics, attributes, resource]
exporters: [datadog]
metrics:
receivers: [otlp, spanmetrics]
processors: [batch/metrics,attributes, resource]
exporters: [datadog]
logs:
receivers: [otlp]
processors: [routing,batch/logs,attributes]
exporters: [azuredataexplorer/emeadev,azuredataexplorer/unrouted]
```
### Log output
```shell
> [build 11/11] RUN ocb --config /workspace/builder.yaml:
#18 0.405 2023-07-05T09:01:57.895Z INFO internal/command.go:115 OpenTelemetry Collector Builder {"version": "0.80.0", "date": "2023-06-20T19:29:20Z"}
#18 0.417 2023-07-05T09:01:57.904Z INFO internal/command.go:148 Using config file {"path": "/workspace/builder.yaml"}
#18 0.417 2023-07-05T09:01:57.904Z INFO builder/config.go:105 Using go {"go-executable": "/usr/local/go/bin/go"}
#18 0.417 2023-07-05T09:01:57.907Z INFO builder/main.go:65 Sources created {"path": "bin"}
#18 132.9 2023-07-05T09:04:10.425Z INFO builder/main.go:117 Getting go modules
#18 135.6 2023-07-05T09:04:13.116Z INFO builder/main.go:76 Compiling
#18 1406.1 Error: failed to compile the OpenTelemetry Collector distribution: exit status 1. Output:
#18 1406.1 # gitlab.com/otelcollector
#18 1406.1 ./components.go:57:3: cannot use routingprocessor.NewFactory() (value of type processor.Factory) as exporter.Factory value in argument to exporter.MakeFactoryMap: processor.Factory does not implement exporter.Factory (missing method CreateLogsExporter)
#18 1406.1
```
### Additional context
_No response_
|
process
|
cannot use routingprocessor newfactory as exporter factory does not implement exporter factory createlogsexporter component s exporter azuredataexplorer processor routing what happened using ocb builder which uses a router processor to push logs towards either or azure adx cluster dependent on service attribute ocb build fails dist name otelcollector version otelcol version include core true output path bin debug compilation false receivers gomod go opentelemetry io collector receiver otlpreceiver gomod github com open telemetry opentelemetry collector contrib receiver hostmetricsreceiver beta gomod github com open telemetry opentelemetry collector contrib receiver prometheusreceiver alpha processors gomod github com open telemetry opentelemetry collector contrib processor resourceprocessor beta gomod github com open telemetry opentelemetry collector contrib processor attributesprocessor beta gomod github com open telemetry opentelemetry collector contrib processor metricstransformprocessor alpha contrib tree main processor resourcedetectionprocessor resource detection processor gomod github com open telemetry opentelemetry collector contrib processor resourcedetectionprocessor beta gomod github com open telemetry opentelemetry collector contrib processor cumulativetodeltaprocessor beta otelcol tree processors authenticationprocessor gomod go opentelemetry io collector processor batchprocessor stable exporters gomod github com open telemetry opentelemetry collector contrib exporter datadogexporter stable gomod github com open telemetry opentelemetry collector contrib exporter azuredataexplorerexporter beta gomod github com open telemetry opentelemetry collector contrib processor routingprocessor beta connectors gomod github com open telemetry opentelemetry collector contrib connector spanmetricsconnector alpha extensions gomod go opentelemetry io collector extension zpagesextension beta gomod github com open telemetry opentelemetry collector contrib extension pprofextension beta gomod go opentelemetry io collector extension ballastextension beta gomod github com open telemetry opentelemetry collector contrib extension healthcheckextension beta replaces datadog exporter causing the dependency issue google golang org genproto google golang org genproto collector version environment information environment ubuntu go opentelemetry collector configuration yaml container ports exposed ports pprof extension health check extension otlp grpc receiver otlp http receiver zpages extension extensions zpages extension enables in process diagnostics zpages endpoint localhost health check extension responds to health check requests health check endpoint localhost health check result is pushed to port and this is referred in livenessprobe of the collector app memory ballast size mib pprof extension allows fetching collector s performance profile pprof endpoint localhost block profile fraction mutex profile fraction receivers hostmetrics collection interval the hostmetrics receiver is required to get correct infrastructure metrics in datadog scrapers paging metrics system paging utilization enabled true this metric is required to get correct infrastructure metrics in datadog cpu metrics system cpu utilization enabled true this metric is required to get correct infrastructure metrics in datadog memory load cpu average true network interfaces match type process names match type mute process name error false mute process exe error false mute process io error false scrape process delay hostmetrics disk 
collection interval scrapers disk this metric is required to get correct infrastructure metrics in datadog devices match type filesystem metrics system filesystem utilization enabled true this metric is required to get correct infrastructure metrics in datadog devices match type fs types match type mount points match type otlp protocols grpc endpoint localhost http endpoint localhost scrape collector s own metrics and push prometheus otelcol config scrape configs job name otelcol scrape interval static configs targets processors plugin processor for token oauth detect docker env to detect resource information from the host otel collector comes with resourcedetection processor resource detection processor automatically detects and labels metadata about the environment in which the data was generated such metadata known as resources provides context to the telemetry data and can include information such as the host service container and cloud provider resourcedetection queries the host machine to retrieve the following resource attributes host name host id os type detectors timeout override false cumulativetodelta batch metrics datadog apm intake limit is let s make sure the batches do not go over that send batch max size default maximum batch size of spans to be sent to the backend the default value is spans send batch size default maximum number of spans to process in a batch the default value is spans timeout default maximum time to wait until the batch is sent the default value is batch traces send batch max size default maximum batch size of spans to be sent to the backend the default value is spans send batch size default maximum number of spans to process in a batch the default value is spans timeout default maximum time to wait until the batched traces are sent the default value is batch logs send batch max size default maximum batch size of spans to be sent to the backend the default value is spans send batch size default maximum number of spans to process in a batch the default value is spans timeout default maximum time to wait until the batched logs are sent the default value is attributes actions key tags value action upsert resource attributes key dd env value env environment action upsert key env value env environment action upsert key geo value env geo china emea amap action upsert routing from attribute service attribute source context default exporters azuredataexplorer unrouted table value micotelcollector exporters value micotelgateway exporters connectors get metrics about the collector traces by connecting trace exporter to metric receiver producing trace collector metrics spanmetrics histogram explicit buckets dimensions name http method default get name http status code dimensions cache size aggregation temporality aggregation temporality cumulative exporters datadog api site datadoghq com key env datadog api key tls insecure skip verify true metrics resource attributes as tags true host metadata enabled true tags tags this exporter is used for logs in ownership of monitoring like logs from collector and or gateway azuredataexplorer emeadev cluster uri application id xxxxxxxxxxxxxxxxxxxxxxxx application key adx appreg secret tenant id azure tenant id db name emeadev metrics table name metrics logs table name logs traces table name traces ingestion type managed otelmetrics mapping otellogs mapping oteltraces mapping azuredataexplorer unrouted cluster uri application id xxxxxxxxxxxxxxxxxxxx application key adx appreg secret tenant id azure tenant id db name unrouted 
metrics table name metrics logs table name logs traces table name traces ingestion type managed otelmetrics mapping otellogs mapping oteltraces mapping logging loglevel info service extensions telemetry collect metrics about collector itself metrics address localhost debug if collector is not working logs level info pipelines the ballast should be configured to be to of the memory allocated to the collector traces receivers processors exporters metrics hostmetrics receivers processors exporters metrics receivers processors exporters logs receivers processors exporters log output shell run ocb config workspace builder yaml info internal command go opentelemetry collector builder version date info internal command go using config file path workspace builder yaml info builder config go using go go executable usr local go bin go info builder main go sources created path bin info builder main go getting go modules info builder main go compiling error failed to compile the opentelemetry collector distribution exit status output gitlab com otelcollector components go cannot use routingprocessor newfactory value of type processor factory as exporter factory value in argument to exporter makefactorymap processor factory does not implement exporter factory missing method createlogsexporter additional context no response
| 1
|
137,644
| 20,188,187,429
|
IssuesEvent
|
2022-02-11 01:16:23
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Implement a landing zone concept with Azure Blueprints and Azure Policies
|
WARP-Import WAF FEB 2021 Security Application Design Performance and Scalability Capacity Management Processes Dependencies
|
<a href="https://docs.microsoft.com/azure/architecture/framework/Security/governance#increase-automation-with-azure-blueprints">Implement a landing zone concept with Azure Blueprints and Azure Policies</a>
<p><b>Why Consider This?</b></p>
The purpose of a "Landing Zone" is to ensure that when a workload lands on Azure, the required "plumbing" is already in place, providing greater agility and compliance with enterprise security and governance requirements. It is crucial that a Landing Zone is handed over to the workload owner with security guardrails deployed.
<p><b>Context</b></p>
<p><b>Suggested Actions</b></p>
<p><span>Implement a landing zone concept with Azure Blueprints and Azure Policies.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/overview" target="_blank"><span>https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/overview</span></a><span /></p><p><a href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/landing-zone/" target="_blank"><span>https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/landing-zone/</span></a><span /></p>
|
1.0
|
Implement a landing zone concept with Azure Blueprints and Azure Policies - <a href="https://docs.microsoft.com/azure/architecture/framework/Security/governance#increase-automation-with-azure-blueprints">Implement a landing zone concept with Azure Blueprints and Azure Policies</a>
<p><b>Why Consider This?</b></p>
The purpose of a "Landing Zone" is to ensure that when a workload lands on Azure, the required "plumbing" is already in place, providing greater agility and compliance with enterprise security and governance requirements. It is crucial that a Landing Zone is handed over to the workload owner with security guardrails deployed.
<p><b>Context</b></p>
<p><b>Suggested Actions</b></p>
<p><span>Implement a landing zone concept with Azure Blueprints and Azure Policies.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/overview" target="_blank"><span>https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/overview</span></a><span /></p><p><a href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/landing-zone/" target="_blank"><span>https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/landing-zone/</span></a><span /></p>
|
non_process
|
implement a landing zone concept with azure blueprints and azure policies why consider this the purpose of a landing zone is to ensure that when a workload lands on azure the required plumbing is already in place providing greater agility and compliance with enterprise security and governance requirements it is crucial that a landing zone is handed over to the workload owner with security guardrails deployed context suggested actions implement a landing zone concept with azure blueprints and azure policies learn more
| 0
|
8,588
| 11,757,956,189
|
IssuesEvent
|
2020-03-13 14:35:46
|
MHRA/products
|
https://api.github.com/repos/MHRA/products
|
opened
|
AUTO BATCH IMPORT - Basic Auth
|
EPIC - Auto Batch Process :oncoming_automobile:
|
## User want
As a user
I want to call `doc-index-updater`
So that it is more difficult to forge requests
## Acceptance Criteria
- [ ] Correct username & password should be defined in env
- [ ] Requests without correct username & password (HTTP Basic Auth) should be declined
### Customer acceptance criteria
- [ ] Requests without correct username & password (HTTP Basic Auth) should be declined
### Technical acceptance criteria
- [ ] Correct username & password should be defined in env
### Data acceptance criteria
N/A
### Testing acceptance criteria
See acceptance criteria.
## Data - Potential impact
N/A
### Exit Criteria met
- [x] Backlog
- [x] Discovery
- [x] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
|
1.0
|
AUTO BATCH IMPORT - Basic Auth - ## User want
As a user
I want to call `doc-index-updater`
So that it is more difficult to forge requests
## Acceptance Criteria
- [ ] Correct username & password should be defined in env
- [ ] Requests without correct username & password (HTTP Basic Auth) should be declined
### Customer acceptance criteria
- [ ] Requests without correct username & password (HTTP Basic Auth) should be declined
### Technical acceptance criteria
- [ ] Correct username & password should be defined in env
### Data acceptance criteria
N/A
### Testing acceptance criteria
See acceptance criteria.
## Data - Potential impact
N/A
### Exit Criteria met
- [x] Backlog
- [x] Discovery
- [x] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
|
process
|
auto batch import basic auth user want as a user i want to call doc index updater so that it is more difficult to forge requests acceptance criteria correct username password should be defined in env requests without correct username password http basic auth should be declined customer acceptance criteria requests without correct username password http basic auth should be declined technical acceptance criteria correct username password should be defined in env data acceptance criteria n a testing acceptance criteria see acceptance criteria data potential impact n a exit criteria met backlog discovery duxd development quality assurance release and validate
| 1
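
The acceptance criteria in the record above reduce to two checks: read an expected username/password pair from the environment, and decline any request whose HTTP Basic Auth header does not match it. Below is a minimal Python sketch of that check; the environment variable names (`DIU_USERNAME`, `DIU_PASSWORD`) and the standalone helper are assumptions for illustration only, not the doc-index-updater's actual implementation.
```python
import base64
import hmac
import os
from typing import Optional

# Hypothetical variable names; the real service defines its own.
EXPECTED_USER = os.environ.get("DIU_USERNAME", "")
EXPECTED_PASS = os.environ.get("DIU_PASSWORD", "")


def is_authorized(authorization_header: Optional[str]) -> bool:
    """Return True only when the header carries the expected Basic Auth pair."""
    if not authorization_header or not authorization_header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(authorization_header[len("Basic "):]).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return False
    user, _, password = decoded.partition(":")
    # Constant-time comparison avoids leaking which half was wrong.
    return hmac.compare_digest(user, EXPECTED_USER) and hmac.compare_digest(password, EXPECTED_PASS)


if __name__ == "__main__":
    header = "Basic " + base64.b64encode(b"alice:secret").decode("ascii")
    # A request without (or with the wrong) credentials would be declined with 401.
    print("authorized" if is_authorized(header) else "declined (401)")
```
In a web framework this check would sit in middleware, returning 401 whenever it fails.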
|
16,666
| 21,771,275,547
|
IssuesEvent
|
2022-05-13 09:18:54
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
opened
|
Investigate exporting command/rejection records to ES
|
scope/broker area/performance kind/research area/observability team/process-automation
|
**Description**
#8338 only concerns the exporting of event records, but the log streams also contain commands and command rejections. For troubleshooting purposes, it may be useful to persist these records as well in ES.
Likely, there are additional costs associated with exporting these records, both in terms of a performance penalty as well as higher infrastructure costs (e.g. storage space). We should experiment with exporting these records to discover the real costs associated, in order to determine whether this is worth it.
|
1.0
|
Investigate exporting command/rejection records to ES - **Description**
#8338 only concerns the exporting of event records, but the log streams also contain commands and command rejections. For troubleshooting purposes, it may be useful to persist these records as well in ES.
Likely, there are additional costs associated with exporting these records, both in terms of a performance penalty as well as higher infrastructure costs (e.g. storage space). We should experiment with exporting these records to discover the real costs associated, in order to determine whether this is worth it.
|
process
|
investigate exporting command rejection records to es description only concerns the exporting of event records but the log streams also contain commands and command rejections for troubleshooting purposes it may be useful to persist these records as well in es likely there are additional costs associated with exporting these records both in terms of a performance penalty as well as higher infrastructure costs e g storage space we should experiment with exporting these records to discover the real costs associated in order to determine whether this is worth it
| 1
|
21,523
| 29,805,713,736
|
IssuesEvent
|
2023-06-16 11:30:11
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
membrane bending vs membrane curvature
|
New term request PomBase ready cellular processes molecular_function
|
membrane bending
https://www.ebi.ac.uk/QuickGO/term/GO:0097753
A membrane organization process resulting in the bending of a membrane.
synonym membrane curvature | related
A number of us think that membrane bending should be a function term
@krchristie https://github.com/geneontology/go-ontology/issues/14562#issuecomment-345393895
@hattrill https://github.com/geneontology/go-ontology/issues/13024#issuecomment-316634820
It seems that different proteins in different modification states can cause different degrees of curvature.
Different types of assemblies (clathrin coats,eisosomes etc form different "tessellation patterns"
for example clathrin forms the tessellated polygonal scaffold.
So the assemblies of different proteins in different contexts can form different types of membrane protrusions and invaginations (curvatures).
It would seem better to call the processes membrane curvature, to group the different types of invagination.
"membrane bending" can then be the gene product specific activity for the proteins that can intrinsiclly bend the membrane?
|
1.0
|
membrane bending vs membrane curvature - membrane bending
https://www.ebi.ac.uk/QuickGO/term/GO:0097753
A membrane organization process resulting in the bending of a membrane.
synonym membrane curvature | related
A number of us think that membrane bending should be a function term
@krchristie https://github.com/geneontology/go-ontology/issues/14562#issuecomment-345393895
@hattrill https://github.com/geneontology/go-ontology/issues/13024#issuecomment-316634820
It seems that different proteins in different modification states can cause different degrees of curvature.
Different types of assemblies (clathrin coats,eisosomes etc form different "tessellation patterns"
for example clathrin forms the tessellated polygonal scaffold.
So the assemblies of different proteins in different contexts can form different types of membrane protrusions and invaginations (curvatures).
It would seem better to call the processes membrane curvature, to group the different types of invagination.
"membrane bending" can then be the gene product specific activity for the proteins that can intrinsiclly bend the membrane?
|
process
|
membrane bending vs membrane curvature membrane bending a membrane organization process resulting in the bending of a membrane synonym membrane curvature related a number of us think that membrane bending should be a function term krchristie hattrill it seems that different proteins in different modification states can cause different degrees of curvature different types of assemblies clathrin coats eisosomes etc form different tessellation patterns for example clathrin forms the tessellated polygonal scaffold so the assemblies of different proteins in different contexts can form different types of membrane protrusions and invaginations curvatures it would seem better to call the processes membrane curvature to group the different types of invagination membrane bending can then be the gene product specific activity for the proteins that can intrinsiclly bend the membrane
| 1
|
5,797
| 8,641,277,451
|
IssuesEvent
|
2018-11-24 16:03:28
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
Silent crash after adding specific reference to VBA Project
|
bug critical typeinfo-processing
|
I experience a "silent crash" when I add a VBA Project reference to a specific installed DLL and reparse the VBA project using RD 2.2.0.3086 (2.2.6672.28001).
The Reference is called "CCo" and is a COM DLL that can be used as an adapter to interact with SAP RFC libraries and allows COM-enabled application to interact with SAP's RFC libraries. It doesn't have any dependencies (for the purposes of this test anyway).
Steps to recreate:
1. Download and extract all files from: https://cco.stschnell.de/cco.zip.
2. Register the file `CCo.dll` using `regsvr32`, for example `regsvr32.exe CCo.dll`. This will add a new reference called "CCo" to the list of references you can select in VBA.
3. Create a new Excel file and VBA Project.
4. Add the reference "CCo" to your VBA Project.
5. Click the "Ready" button on the RD toolbar.
After performing step 5, RD starts parsing and when it moves on to "Resolving References", Excel immediately quits without any kind of error message (not even a stack trace). Below is all that I have from the trace log of RD itself:
```
2018-04-17 16:47:25.3982;DEBUG-2.2.6676.7522;Rubberduck.UI.Command.MenuItems.CommandBars.AppCommandBarBase;(1182010) Executing click handler for commandbar item 'Ready', hash code 9779294;
2018-04-17 16:47:25.3982;DEBUG-2.2.6676.7522;Rubberduck.Parsing.VBA.ParseCoordinator;Parsing run started. (thread 27).;
2018-04-17 16:47:25.3982;INFO-2.2.6676.7522;Rubberduck.Parsing.VBA.RubberduckParserState;RubberduckParserState (13) is invoking StateChanged (Started);
2018-04-17 16:47:25.6032;INFO-2.2.6676.7522;Rubberduck.Parsing.VBA.RubberduckParserState;RubberduckParserState (14) is invoking StateChanged (LoadingReference);
2018-04-17 16:47:25.8082;TRACE-2.2.6676.7522;Rubberduck.Parsing.VBA.COMReferenceSynchronizerBase;Loading referenced type 'CCo'.;
2018-04-17 16:47:25.8132;TRACE-2.2.6676.7522;Rubberduck.Parsing.VBA.COMReferenceSynchronizerBase;COM reflecting reference 'CCo'.;
```
I'm using 32-bit Excel 2013 (15.0.5015.1000), VBA Retail 7.1.1068, on Win 7 64-bit SP1 (I've also tried this on a Win10 1709 64-bit with Excel 2016 with the same results).
Sorry if the above is not clear, this is my first issue - please let me know if you need any further information.
|
1.0
|
Silent crash after adding specific reference to VBA Project - I experience a "silent crash" when I add a VBA Project reference to a specific installed DLL and reparse the VBA project using RD 2.2.0.3086 (2.2.6672.28001).
The Reference is called "CCo" and is a COM DLL that can be used as an adapter to interact with SAP RFC libraries and allows COM-enabled application to interact with SAP's RFC libraries. It doesn't have any dependencies (for the purposes of this test anyway).
Steps to recreate:
1. Download and extract all files from: https://cco.stschnell.de/cco.zip.
2. Register the file `CCo.dll` using `regsvr32`, for example `regsvr32.exe CCo.dll`. This will add a new reference called "CCo" to the list of references you can select in VBA.
3. Create a new Excel file and VBA Project.
4. Add the reference "CCo" to your VBA Project.
5. Click the "Ready" button on the RD toolbar.
After performing step 5, RD starts parsing and when it moves on to "Resolving References", Excel immediately quits without any kind of error message (not even a stack trace). Below is all that I have from the trace log of RD itself:
```
2018-04-17 16:47:25.3982;DEBUG-2.2.6676.7522;Rubberduck.UI.Command.MenuItems.CommandBars.AppCommandBarBase;(1182010) Executing click handler for commandbar item 'Ready', hash code 9779294;
2018-04-17 16:47:25.3982;DEBUG-2.2.6676.7522;Rubberduck.Parsing.VBA.ParseCoordinator;Parsing run started. (thread 27).;
2018-04-17 16:47:25.3982;INFO-2.2.6676.7522;Rubberduck.Parsing.VBA.RubberduckParserState;RubberduckParserState (13) is invoking StateChanged (Started);
2018-04-17 16:47:25.6032;INFO-2.2.6676.7522;Rubberduck.Parsing.VBA.RubberduckParserState;RubberduckParserState (14) is invoking StateChanged (LoadingReference);
2018-04-17 16:47:25.8082;TRACE-2.2.6676.7522;Rubberduck.Parsing.VBA.COMReferenceSynchronizerBase;Loading referenced type 'CCo'.;
2018-04-17 16:47:25.8132;TRACE-2.2.6676.7522;Rubberduck.Parsing.VBA.COMReferenceSynchronizerBase;COM reflecting reference 'CCo'.;
```
I'm using 32-bit Excel 2013 (15.0.5015.1000), VBA Retail 7.1.1068, on Win 7 64-bit SP1 (I've also tried this on a Win10 1709 64-bit with Excel 2016 with the same results).
Sorry if the above is not clear, this is my first issue - please let me know if you need any further information.
|
process
|
silent crash after adding specific reference to vba project i experience a silent crash when i add a vba project reference to a specific installed dll and reparse the vba project using rd the reference is called cco and is a com dll that can be used as an adapter to interact with sap rfc libraries and allows com enabled application to interact with sap s rfc libraries it doesn t have any dependencies for the purposes of this test anyway steps to recreate download and extract all files from register the file cco dll using for example exe cco dll this will add a new reference called cco to the list of references you can select in vba create a new excel file and vba project add the reference cco to your vba project click the ready button on the rd toolbar after performing step rd starts parsing and when it moves on to resolving references excel immediately quits without any kind of error message not even a stack trace below is all that i have from the trace log of rd itself debug rubberduck ui command menuitems commandbars appcommandbarbase executing click handler for commandbar item ready hash code debug rubberduck parsing vba parsecoordinator parsing run started thread info rubberduck parsing vba rubberduckparserstate rubberduckparserstate is invoking statechanged started info rubberduck parsing vba rubberduckparserstate rubberduckparserstate is invoking statechanged loadingreference trace rubberduck parsing vba comreferencesynchronizerbase loading referenced type cco trace rubberduck parsing vba comreferencesynchronizerbase com reflecting reference cco i m using bit excel vba retail on win bit i ve also tried this on a bit with excel with the same results sorry if the above is not clear this is my first issue please let me know if you need any further information
| 1
|
27,897
| 2,697,234,934
|
IssuesEvent
|
2015-04-02 18:41:59
|
cs2103jan2015-f09-1c/main
|
https://api.github.com/repos/cs2103jan2015-f09-1c/main
|
opened
|
INTERPRETER: Only do parsing
|
priority.high type.task
|
Edit interpreter won't do the actual editing. The actual editing and user feedback done by EditCmd
|
1.0
|
INTERPRETER: Only do parsing - Edit interpreter won't do the actual editing. The actual editing and user feedback done by EditCmd
|
non_process
|
interpreter only do parsing edit interpreter won t do the actual editing the actual editing and user feedback done by editcmd
| 0
|
5,008
| 7,840,806,714
|
IssuesEvent
|
2018-06-18 17:33:02
|
AmpersandTarski/Ampersand
|
https://api.github.com/repos/AmpersandTarski/Ampersand
|
closed
|
"no git info" in --version text
|
bug component:global software process
|
For some time now, I have found that the `--version` switch returns texts such as
~~~
Ampersand-v3.10.1 [no git info], build time: 18-Jun-18 15:14:25 W. Europe Daylight Time
~~~
which means that I cannot specify the Ampersand version in tickets with better info than that.
Can you replace the `no git info` text with the text of the git version again?
|
1.0
|
"no git info" in --version text - For some time now, I have found that the `--version` switch returns texts such as
~~~
Ampersand-v3.10.1 [no git info], build time: 18-Jun-18 15:14:25 W. Europe Daylight Time
~~~
which means that I cannot specify the Ampersand version in tickets with better info than that.
Can you replace the `no git info` text with the text of the git version again?
|
process
|
no git info in version text for some time now i have found that the version switch returns texts such as ampersand build time jun w europe daylight time which means that i cannot specify the ampersand version in tickets with better info than that can you replace the no git info text with the text of the git version again
| 1
|
9,941
| 12,975,022,877
|
IssuesEvent
|
2020-07-21 16:18:35
|
googleapis/python-storage
|
https://api.github.com/repos/googleapis/python-storage
|
closed
|
Unit tests fail without systest environment variables
|
api: storage testing type: process
|
Running in a "clean" environment:
```python
___________________ test_conformance_post_policy[test_data0] ___________________
test_data = {'description': 'POST Policy Simple', 'policyInput': {'bucket': 'rsaposttest-1579902670-h3q7wvodjor6bc7y', 'expiration...3/auto/storage/goog4_request', ...}, 'url': 'https://storage.googleapis.com/rsaposttest-1579902670-h3q7wvodjor6bc7y/'}}
@pytest.mark.parametrize("test_data", _POST_POLICY_TESTS)
def test_conformance_post_policy(test_data):
import datetime
from google.cloud.storage.client import Client
in_data = test_data["policyInput"]
timestamp = datetime.datetime.strptime(in_data["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
> client = Client(credentials=_DUMMY_CREDENTIALS)
tests/unit/test_client.py:1847:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/storage/client.py:111: in __init__
project=project, credentials=credentials, _http=_http
.nox/unit-3-6/lib/python3.6/site-packages/google/cloud/client.py:226: in __init__
_ClientProjectMixin.__init__(self, project=project)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.cloud.storage.client.Client object at 0x7fe605cc77f0>
project = None
def __init__(self, project=None):
project = self._determine_default(project)
if project is None:
raise EnvironmentError(
> "Project was not passed and could not be "
"determined from the environment."
)
E OSError: Project was not passed and could not be determined from the environment.
.nox/unit-3-6/lib/python3.6/site-packages/google/cloud/client.py:181: OSError
------------------------------ Captured log call -------------------------------
WARNING google.auth._default:_default.py:334 No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable
...
```
and likewise for the other parameterized `test_conformance_post_policy` tests.
|
1.0
|
Unit tests fail without systest environment variables - Running in a "clean" environment:
```python
___________________ test_conformance_post_policy[test_data0] ___________________
test_data = {'description': 'POST Policy Simple', 'policyInput': {'bucket': 'rsaposttest-1579902670-h3q7wvodjor6bc7y', 'expiration...3/auto/storage/goog4_request', ...}, 'url': 'https://storage.googleapis.com/rsaposttest-1579902670-h3q7wvodjor6bc7y/'}}
@pytest.mark.parametrize("test_data", _POST_POLICY_TESTS)
def test_conformance_post_policy(test_data):
import datetime
from google.cloud.storage.client import Client
in_data = test_data["policyInput"]
timestamp = datetime.datetime.strptime(in_data["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
> client = Client(credentials=_DUMMY_CREDENTIALS)
tests/unit/test_client.py:1847:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/storage/client.py:111: in __init__
project=project, credentials=credentials, _http=_http
.nox/unit-3-6/lib/python3.6/site-packages/google/cloud/client.py:226: in __init__
_ClientProjectMixin.__init__(self, project=project)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.cloud.storage.client.Client object at 0x7fe605cc77f0>
project = None
def __init__(self, project=None):
project = self._determine_default(project)
if project is None:
raise EnvironmentError(
> "Project was not passed and could not be "
"determined from the environment."
)
E OSError: Project was not passed and could not be determined from the environment.
.nox/unit-3-6/lib/python3.6/site-packages/google/cloud/client.py:181: OSError
------------------------------ Captured log call -------------------------------
WARNING google.auth._default:_default.py:334 No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable
...
```
and likewise for the other parameterized `test_conformance_post_policy` tests.
|
process
|
unit tests fail without systest environment variables running in a clean environment python test conformance post policy test data description post policy simple policyinput bucket rsaposttest expiration auto storage request url pytest mark parametrize test data post policy tests def test conformance post policy test data import datetime from google cloud storage client import client in data test data timestamp datetime datetime strptime in data y m dt h m sz client client credentials dummy credentials tests unit test client py google cloud storage client py in init project project credentials credentials http http nox unit lib site packages google cloud client py in init clientprojectmixin init self project project self project none def init self project none project self determine default project if project is none raise environmenterror project was not passed and could not be determined from the environment e oserror project was not passed and could not be determined from the environment nox unit lib site packages google cloud client py oserror captured log call warning google auth default default py no project id could be determined consider running gcloud config set project or setting the google cloud project environment variable and likewise for the other parameterized test conformance post policy tests
| 1
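
The traceback in the record above shows the conformance test constructing a client with dummy credentials and then failing because no project can be inferred from the environment. One common way to keep such unit tests independent of systest environment variables is to pass an explicit dummy project (and anonymous credentials) so nothing is looked up from ambient configuration. The sketch below illustrates that idea; the helper name and the `"PROJECT"` placeholder are assumptions, not the fix that was actually merged.
```python
from google.auth.credentials import AnonymousCredentials


def make_hermetic_client():
    """Build a storage client that never consults the ambient environment."""
    from google.cloud.storage.client import Client  # the package under test

    # An explicit (dummy) project plus anonymous credentials means the client
    # no longer asks google.auth to infer a project from GOOGLE_CLOUD_PROJECT
    # or the active gcloud configuration.
    return Client(project="PROJECT", credentials=AnonymousCredentials())
```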
|
49,982
| 6,048,710,595
|
IssuesEvent
|
2017-06-12 17:05:18
|
omeka/omeka-s
|
https://api.github.com/repos/omeka/omeka-s
|
closed
|
Quick Add: Clicking outside the checkbox should select the resource's checkbox when quick-add is active
|
bug testing/seeking feedback
|
Right now clicking outside the checkbox a) does nothing, or b) opens the options sidebar for that individual resource.
|
1.0
|
Quick Add: Clicking outside the checkbox should select the resource's checkbox when quick-add is active - Right now clicking outside the checkbox a) does nothing, or b) opens the options sidebar for that individual resource.
|
non_process
|
quick add clicking outside the checkbox should select the resource s checkbox when quick add is active right now clicking outside the checkbox a does nothing or b opens the options sidebar for that individual resource
| 0
|
35,126
| 2,789,806,779
|
IssuesEvent
|
2015-05-08 21:36:49
|
google/google-visualization-api-issues
|
https://api.github.com/repos/google/google-visualization-api-issues
|
opened
|
Manipulate Spreadsheet Cell Comments as part of DataTable object
|
Priority-Low Type-Enhancement
|
Original [issue 167](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=167) created by orwant on 2010-01-19T01:38:05.000Z:
<b>What would you like to see us add to this API?</b>
Please add the ability to retrieve and set a cell's comment in a
Spreadsheet. Comments appear to be directly related to cells, so it would
be great if more methods were to be added to the DataTable object, such as:
getComments(int row);
getComments(int column);
getComment(int row, int column);
setComment(int row, int column);
...
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
DataTable
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
|
1.0
|
Manipulate Spreadsheet Cell Comments as part of DataTable object - Original [issue 167](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=167) created by orwant on 2010-01-19T01:38:05.000Z:
<b>What would you like to see us add to this API?</b>
Please add the ability to retrieve and set a cell's comment in a
Spreadsheet. Comments appear to be directly related to cells, so it would
be great if more methods were to be added to the DataTable object, such as:
getComments(int row);
getComments(int column);
getComment(int row, int column);
setComment(int row, int column);
...
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
DataTable
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
|
non_process
|
manipulate spreadsheet cell comments as part of datatable object original created by orwant on what would you like to see us add to this api please add the ability to retrieve and set a cell s comment in a spreadsheet comments appear to be directly related to cells so it would be great if more methods were to be added to the datatable object such as getcomments int row getcomments int column getcomment int row int column setcomment int row int column what component is this issue related to piechart linechart datatable query etc datatable for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved
| 0
|
6,075
| 8,921,830,245
|
IssuesEvent
|
2019-01-21 11:10:20
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
opened
|
deleted discussions + projects can still be assigned to a task
|
2.0.6 Process bug
|
if you delete a project or a discussion that a task is related to, the task stays related to the deleted project or discussion even after deleting it
|
1.0
|
deleted discussions + projects can still be assigned to a task - if you delete a project or a discussion that a task is related to, the task stays related to the deleted project or discussion even after deleting it
|
process
|
deleted discussions projects can still be assigned to a task if you delete a project or a discussion that a task is related to the task stays related to the deleted project or discussion even after deleting it
| 1
|
11,955
| 14,725,635,483
|
IssuesEvent
|
2021-01-06 05:12:52
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Show Account number next to invoice number in “search” mode.
|
anc-ui anp-2 ant-feature grt-ui processes
|
In GitLab by @kdjstudios on Oct 21, 2016, 17:32
Keener: Many large hospitals will pay invoices for several accounts on 1 check. They then list the Invoice numbers of the invoices they are paying.
When we type that invoice number into SA Billing to post the payment, It gives us the PDF of the invoice which we then have to open to see the account number. Example:
|
1.0
|
Show Account number next to invoice number in “search” mode. - In GitLab by @kdjstudios on Oct 21, 2016, 17:32
Keener: Many large hospitals will pay invoices for several accounts on 1 check. They then list the Invoice numbers of the invoices they are paying.
When we type that invoice number into SA Billing to post the payment, It gives us the PDF of the invoice which we then have to open to see the account number. Example:
|
process
|
show account number next to invoice number in “search” mode in gitlab by kdjstudios on oct keener many large hospitals will pay invoices for several accounts on check they then list the invoice numbers of the invoices they are paying when we type that invoice number into sa billing to post the payment it gives us the pdf of the invoice which we then have to open to see the account number example
| 1
|
4,586
| 7,428,876,262
|
IssuesEvent
|
2018-03-24 07:33:06
|
kookmin-sw/2018-cap1-2
|
https://api.github.com/repos/kookmin-sw/2018-cap1-2
|
opened
|
Handling symbols with identical implicit meaning
|
ImageProcessing
|
When a -> symbol that has the same meaning as ~ comes in within a for statement,
<Currently>
- the -> symbol, excluded from the interpreter's targets, is returned as-is, causing a logical error
Consider replacing the -> symbol with the ~ symbol at the image-processing stage
to raise the chance of successful interpreting.
+ If merging each of the >=, <=, != symbols into a single symbol is notationally possible,
include those symbols as candidates as well.
|
1.0
|
Handling symbols with identical implicit meaning - When a -> symbol that has the same meaning as ~ comes in within a for statement,
<Currently>
- the -> symbol, excluded from the interpreter's targets, is returned as-is, causing a logical error
Consider replacing the -> symbol with the ~ symbol at the image-processing stage
to raise the chance of successful interpreting.
+ If merging each of the >=, <=, != symbols into a single symbol is notationally possible,
include those symbols as candidates as well.
|
process
|
handling symbols with identical implicit meaning when a symbol that has the same meaning as comes in within a for statement currently the symbol excluded from the interpreter s targets is returned as is causing a logical error consider replacing the symbol with the symbol at the image processing stage to raise the chance of successful interpreting if merging each of the symbols into a single symbol is notationally possible include those symbols as candidates as well
| 1
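
The record above (translated from Korean) proposes a small normalization pass over the recognized text before interpretation: symbols that carry the same implicit meaning are rewritten to the single form the interpreter accepts, for example -> to ~. A minimal Python sketch of such a substitution table follows; the table contents and function name are illustrative assumptions, not the project's actual image-processing code.
```python
import re

# Symbols with the same implicit meaning are rewritten to the one form the
# interpreter understands (the table contents are illustrative).
EQUIVALENT_SYMBOLS = {
    "->": "~",    # range arrow inside a for statement
    "≥": ">=",
    "≤": "<=",
    "≠": "!=",
}

_PATTERN = re.compile("|".join(re.escape(sym) for sym in EQUIVALENT_SYMBOLS))


def normalize_symbols(recognized_text: str) -> str:
    """Replace recognized symbols that are aliases of an interpreter-known symbol."""
    return _PATTERN.sub(lambda m: EQUIVALENT_SYMBOLS[m.group(0)], recognized_text)


if __name__ == "__main__":
    print(normalize_symbols("for i = 1 -> 10"))  # for i = 1 ~ 10
```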
|
17,558
| 23,371,891,072
|
IssuesEvent
|
2022-08-10 20:42:35
|
firebase/firebase-unity-sdk
|
https://api.github.com/repos/firebase/firebase-unity-sdk
|
reopened
|
Nightly Integration Testing Report
|
nightly-testing type: process
|
### ❌ Integration test FAILED
Requested by @firebase-workflow-trigger[bot] on commit 3c876e9faffce65051fb1b378e797cc9a71adfa1
Last updated: Wed Aug 10 06:45 PDT 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-unity-sdk/actions/runs/2831226390)**
| Failures | Configs |
|----------|---------|
| analytics | [TEST] [ERROR] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): simulator_target]<br/> |
| dynamic_links | [TEST] [FLAKINESS] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): ios_target]<br/> |
| firestore | [TEST] [FLAKINESS] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): ios_target]<br/> |
| functions | [TEST] [ERROR] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): simulator_target]<br/> |
| installations | [TEST] [ERROR] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): simulator_target]<br/> |
| messaging | [TEST] [ERROR] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): ios_target]<br/>[TEST] [FAILURE] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): simulator_target]<br/> |
| storage | [TEST] [FAILURE] [2019] [macos] [3/6 Platform(s): ubuntu windows iOS] [1/3 Test Device(s): simulator_target]<br/> |
|
1.0
|
Nightly Integration Testing Report - ### ❌ Integration test FAILED
Requested by @firebase-workflow-trigger[bot] on commit 3c876e9faffce65051fb1b378e797cc9a71adfa1
Last updated: Wed Aug 10 06:45 PDT 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-unity-sdk/actions/runs/2831226390)**
| Failures | Configs |
|----------|---------|
| analytics | [TEST] [ERROR] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): simulator_target]<br/> |
| dynamic_links | [TEST] [FLAKINESS] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): ios_target]<br/> |
| firestore | [TEST] [FLAKINESS] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): ios_target]<br/> |
| functions | [TEST] [ERROR] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): simulator_target]<br/> |
| installations | [TEST] [ERROR] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): simulator_target]<br/> |
| messaging | [TEST] [ERROR] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): ios_target]<br/>[TEST] [FAILURE] [2019] [macos] [1/6 Platform(s): iOS] [1/3 Test Device(s): simulator_target]<br/> |
| storage | [TEST] [FAILURE] [2019] [macos] [3/6 Platform(s): ubuntu windows iOS] [1/3 Test Device(s): simulator_target]<br/> |
|
process
|
nightly integration testing report ❌ nbsp integration test failed requested by firebase workflow trigger on commit last updated wed aug pdt failures configs analytics dynamic links firestore functions installations messaging storage
| 1
|
19,578
| 25,901,828,640
|
IssuesEvent
|
2022-12-15 06:41:27
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
DISABLED test_success_non_blocking (__main__.SpawnTest)
|
high priority triage review module: multiprocessing triaged module: flaky-tests skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_success_non_blocking%2C%20SpawnTest) and the most recent
[workflow logs](https://github.com/pytorch/pytorch/actions/runs/1854275274).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with
1 red and 3 green.
cc @ezyang @gchanan @zou3519 @VitalyFedyunin
|
1.0
|
DISABLED test_success_non_blocking (__main__.SpawnTest) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_success_non_blocking%2C%20SpawnTest) and the most recent
[workflow logs](https://github.com/pytorch/pytorch/actions/runs/1854275274).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with
1 red and 3 green.
cc @ezyang @gchanan @zou3519 @VitalyFedyunin
|
process
|
disabled test success non blocking main spawntest platforms linux this test was disabled because it is failing in ci see and the most recent over the past hours it has been determined flaky in workflow s with red and green cc ezyang gchanan vitalyfedyunin
| 1
|
102
| 2,539,214,495
|
IssuesEvent
|
2015-01-27 13:54:02
|
rhattersley/docbook2asciidoc
|
https://api.github.com/repos/rhattersley/docbook2asciidoc
|
closed
|
Remove leading space in <othercredit>
|
pre-process
|
```xml
<othercredit>
<contrib><para> Many others have contributed to the development of CF
through their participation in discussions about proposed changes.
</para></contrib>
</othercredit>
```
The leading space comes through into the markup and makes it render like a quotation (or similar).
(Alternatively, make d2a.xsl get rid of leading whitespace inside <para> elements.)
|
1.0
|
Remove leading space in <othercredit> - ```xml
<othercredit>
<contrib><para> Many others have contributed to the development of CF
through their participation in discussions about proposed changes.
</para></contrib>
</othercredit>
```
The leading space comes through into the markup and makes it render like a quotation (or similar).
(Alternatively, make d2a.xsl get rid of leading whitespace inside <para> elements.)
|
process
|
remove leading space in xml many others have contributed to the development of cf through their participation in discussions about proposed changes the leading space comes through into the markup and makes it render like a quotation or similar alternatively make xsl get rid of leading whitespace inside elements
| 1
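
Both options in the record above come down to trimming the leading whitespace of text inside <para> elements before conversion. The sketch below shows the pre-processing variant in Python with `lxml`; it is an illustration of the idea only, not part of the docbook2asciidoc toolchain, and a namespaced DocBook document would need the qualified element name instead of the bare `para` tag.
```python
from lxml import etree


def strip_leading_para_whitespace(xml_path: str, out_path: str) -> None:
    """Drop leading whitespace from the text of every <para> element."""
    tree = etree.parse(xml_path)
    for para in tree.iter("para"):
        if para.text:
            # Only the leading run of whitespace is removed; internal spacing
            # and child elements are left untouched.
            para.text = para.text.lstrip()
    tree.write(out_path, xml_declaration=True, encoding="utf-8")
```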
|
10,582
| 13,390,273,407
|
IssuesEvent
|
2020-09-02 20:16:50
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
add tests for {moveToStart} / {moveToEnd} special sequences
|
first-timers-only internal-priority process: tests stage: work in progress type: chore
|
add better tests for {moveToStart} / {moveToEnd} special char sequences
they were added in #4870 but not called out in changelog / documented / or tested, making it one of those "undocumented features")
* after this is fixed, we can open PR for adding to docs
|
1.0
|
add tests for {moveToStart} / {moveToEnd} special sequences - add better tests for {moveToStart} / {moveToEnd} special char sequences
they were added in #4870 but not called out in changelog / documented / or tested, making it one of those "undocumented features")
* after this is fixed, we can open PR for adding to docs
|
process
|
add tests for movetostart movetoend special sequences add better tests for movetostart movetoend special char sequences they were added in but not called out in changelog documented or tested making it one of those undocumented features after this is fixed we can open pr for adding to docs
| 1
|
984
| 3,439,112,607
|
IssuesEvent
|
2015-12-14 07:25:00
|
t3kt/vjzual2
|
https://api.github.com/repos/t3kt/vjzual2
|
opened
|
hsv-based displacement
|
enhancement video processing
|
rather than using two rgb channels for x and y displacement, use hue as angle and value/brightness as distance (or maybe saturation)
|
1.0
|
hsv-based displacement - rather than using two rgb channels for x and y displacement, use hue as angle and value/brightness as distance (or maybe saturation)
|
process
|
hsv based displacement rather than using two rgb channels for x and y displacement use hue as angle and value brightness as distance or maybe saturation
| 1
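
Concretely, the enhancement in the record above reads each pixel's hue as a displacement angle and its value (or saturation) as a displacement magnitude, instead of mapping two RGB channels directly to x/y offsets. A small NumPy sketch of that mapping follows; it only derives the per-pixel offsets and is not taken from the vjzual2 TouchDesigner components.
```python
import numpy as np


def hsv_to_displacement(hue, value, max_offset=10.0):
    """Map hue (0..1) to a direction and value/brightness (0..1) to a distance.

    Returns per-pixel (dx, dy) offsets with the same shape as the inputs.
    """
    angle = hue * 2.0 * np.pi        # hue as displacement angle
    distance = value * max_offset    # brightness as displacement length
    return np.cos(angle) * distance, np.sin(angle) * distance


if __name__ == "__main__":
    h = np.random.rand(4, 4)
    v = np.random.rand(4, 4)
    dx, dy = hsv_to_displacement(h, v)
    print(dx.shape, dy.shape)  # (4, 4) (4, 4)
```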
|
4,080
| 3,690,355,179
|
IssuesEvent
|
2016-02-25 19:41:54
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
opened
|
Debugging unstarted pods is hard
|
area/usability
|
Just bringing up an e2e cluster....
```console
% kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
[...]
kube-system fluentd-elasticsearch-e2e-test-mml-minion-siao 0/1 ImagePullBackOff 0 32m
% kubecetl describe pod fluentd-elasticsearch-e2e-test-mml-minion-siao --namespace=kube-system
Name: fluentd-elasticsearch-e2e-test-mml-minion-siao
Namespace: kube-system
Image(s): gcr.io/google_containers/fluentd-elasticsearch:1.14
Node: e2e-test-mml-minion-siao/10.240.0.5
Start Time: Thu, 25 Feb 2016 11:02:53 -0800
Labels: k8s-app=fluentd-logging
Status: Pending
Reason:
Message:
IP:
Controllers: <none>
Containers:
fluentd-elasticsearch:
Container ID:
Image: gcr.io/google_containers/fluentd-elasticsearch:1.14
Image ID:
QoS Tier:
cpu: Guaranteed
memory: BestEffort
Limits:
cpu: 100m
Requests:
cpu: 100m
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
varlog:
Type: HostPath (bare host directory volume)
Path: /var/log
varlibdockercontainers:
Type: HostPath (bare host directory volume)
Path: /var/lib/docker/containers
No events.
```
1. The values for "Status" don't match. ImagePullBackOff vs Pending
2. No events, no error messages, no indication of what the problem is, probably without looking at the kubelet logs. http://www.sadtrombone.com/
|
True
|
Debugging unstarted pods is hard - Just bringing up an e2e cluster....
```console
% kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
[...]
kube-system fluentd-elasticsearch-e2e-test-mml-minion-siao 0/1 ImagePullBackOff 0 32m
% kubecetl describe pod fluentd-elasticsearch-e2e-test-mml-minion-siao --namespace=kube-system
Name: fluentd-elasticsearch-e2e-test-mml-minion-siao
Namespace: kube-system
Image(s): gcr.io/google_containers/fluentd-elasticsearch:1.14
Node: e2e-test-mml-minion-siao/10.240.0.5
Start Time: Thu, 25 Feb 2016 11:02:53 -0800
Labels: k8s-app=fluentd-logging
Status: Pending
Reason:
Message:
IP:
Controllers: <none>
Containers:
fluentd-elasticsearch:
Container ID:
Image: gcr.io/google_containers/fluentd-elasticsearch:1.14
Image ID:
QoS Tier:
cpu: Guaranteed
memory: BestEffort
Limits:
cpu: 100m
Requests:
cpu: 100m
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
varlog:
Type: HostPath (bare host directory volume)
Path: /var/log
varlibdockercontainers:
Type: HostPath (bare host directory volume)
Path: /var/lib/docker/containers
No events.
```
1. The values for "Status" don't match. ImagePullBackOff vs Pending
2. No events, no error messages, no indication of what the problem is, probably without looking at the kubelet logs. http://www.sadtrombone.com/
|
non_process
|
debugging unstarted pods is hard just bringing up an cluster console kubectl get pods all namespaces namespace name ready status restarts age kube system fluentd elasticsearch test mml minion siao imagepullbackoff kubecetl describe pod fluentd elasticsearch test mml minion siao namespace kube system name fluentd elasticsearch test mml minion siao namespace kube system image s gcr io google containers fluentd elasticsearch node test mml minion siao start time thu feb labels app fluentd logging status pending reason message ip controllers containers fluentd elasticsearch container id image gcr io google containers fluentd elasticsearch image id qos tier cpu guaranteed memory besteffort limits cpu requests cpu state waiting reason imagepullbackoff ready false restart count environment variables conditions type status ready false volumes varlog type hostpath bare host directory volume path var log varlibdockercontainers type hostpath bare host directory volume path var lib docker containers no events the values for status don t match imagepullbackoff vs pending no events no error messages no indication of what the problem is probably without looking at the kubelet logs
| 0
|
83,693
| 10,426,646,057
|
IssuesEvent
|
2019-09-16 18:04:26
|
aspnet/AspNetCore
|
https://api.github.com/repos/aspnet/AspNetCore
|
closed
|
Blazor validation for custom class
|
area-blazor enhancement needs design
|
I'm testing out Blazor and I've run into a validation issue. When validating a simple class I can just use annotations. If I have my own custom class inside though validation doesn't run for everything inside my custom class. The issue seems to be specific to Blazor since I can use this validation in ASP. I've got all the details in this Stack Overflow question https://stackoverflow.com/questions/56447004/blazor-validation-for-custom-class
Is this a platform limitation or am I just doing something wrong?
|
1.0
|
Blazor validation for custom class - I'm testing out Blazor and I've run into a validation issue. When validating a simple class I can just use annotations. If I have my own custom class inside though validation doesn't run for everything inside my custom class. The issue seems to be specific to Blazor since I can use this validation in ASP. I've got all the details in this Stack Overflow question https://stackoverflow.com/questions/56447004/blazor-validation-for-custom-class
Is this a platform limitation or am I just doing something wrong?
|
non_process
|
blazor validation for custom class i m testing out blazor and i ve run into a validation issue when validating a simple class i can just use annotations if i have my own custom class inside though validation doesn t run for everything inside my custom class the issue seems to be specific to blazor since i can use this validation in asp i ve got all the details in this stack overflow question is this a platform limitation or am i just doing something wrong
| 0
|
2,650
| 2,700,049,148
|
IssuesEvent
|
2015-04-03 21:59:07
|
TechEmpower/FrameworkBenchmarks
|
https://api.github.com/repos/TechEmpower/FrameworkBenchmarks
|
closed
|
Clarify max-threads and the assumption of homogeneous testing machines
|
Enhance: Documentation Enhance: Toolset
|
`toolset/run-tests.py` has:
parser.add_argument('--max-threads', default=8, help='The max number of threads to run weight at, this should be set to the number of cores for your system.', type=int)
This is passed to `start()` in `setup.py` as `args.max_threads`. But some scripts like `wsgi/setup.py` instead use:
import multiprocessing
NCPU = multiprocessing.cpu_count()
I don't mind either style, but what do we want? Proposals:
1. Scripts are fine to call cpu_count(). Change run-tests.py to set the default max_threads to cpu_count().
2. Scripts are fine to call cpu_count(). Don't let --max-threads be set on the command line and always set args.max_threads to cpu_count().
3. Change scripts to use args.max_threads and change run-tests.py to set the default max_threads to cpu_count().
Thoughts?
|
1.0
|
Clarify max-threads and the assumption of homogeneous testing machines - `toolset/run-tests.py` has:
parser.add_argument('--max-threads', default=8, help='The max number of threads to run weight at, this should be set to the number of cores for your system.', type=int)
This is passed to `start()` in `setup.py` as `args.max_threads`. But some scripts like `wsgi/setup.py` instead use:
import multiprocessing
NCPU = multiprocessing.cpu_count()
I don't mind either style, but what do we want? Proposals:
1. Scripts are fine to call cpu_count(). Change run-tests.py to set the default max_threads to cpu_count().
2. Scripts are fine to call cpu_count(). Don't let --max-threads be set on the command line and always set args.max_threads to cpu_count().
3. Change scripts to use args.max_threads and change run-tests.py to set the default max_threads to cpu_count().
Thoughts?
|
non_process
|
clarify max threads and the assumption of homogeneous testing machines toolset run tests py has parser add argument max threads default help the max number of threads to run weight at this should be set to the number of cores for your system type int this is passed to start in setup py as args max threads but some scripts like wsgi setup py instead use import multiprocessing ncpu multiprocessing cpu count i don t mind either style but what do we want proposals scripts are fine to call cpu count change run tests py to set the default max threads to cpu count scripts are fine to call cpu count don t let max threads be set on the command line and always set args max threads to cpu count change scripts to use args max threads and change run tests py to set the default max threads to cpu count thoughts
| 0
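
Proposal 1 in the record above is the smallest change: keep `--max-threads` on the command line but default it to the machine's core count rather than a hard-coded 8. A hedged sketch of what that argparse setup could look like follows; it illustrates the proposal rather than quoting the benchmark suite's actual run-tests.py.
```python
import argparse
import multiprocessing

parser = argparse.ArgumentParser()
# The default now follows the machine instead of a hard-coded 8, while an
# explicit --max-threads on the command line still overrides it.
parser.add_argument(
    "--max-threads",
    default=multiprocessing.cpu_count(),
    type=int,
    help="The max number of threads to run weight at; defaults to the core count.",
)

if __name__ == "__main__":
    args = parser.parse_args([])  # empty list: use defaults for this demo
    print(args.max_threads)
```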
|
16,008
| 20,188,222,910
|
IssuesEvent
|
2022-02-11 01:19:22
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Establish a designated group responsible for central network management
|
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Security & Compliance Network Security
|
<a href="https://docs.microsoft.com/azure/architecture/framework/security/design-segmentation#functions-and-teams">Establish a designated group responsible for central network management</a>
<p><b>Why Consider This?</b></p>
Centralizing network management and security can reduce the potential for inconsistent strategies that create potential attacker exploitable security risks. Because all divisions of the IT and development organizations do not have the same level of network management and security knowledge and sophistication, organizations benefit from leveraging a centralized network team's expertise and tooling.
<p><b>Context</b></p>
<p><span>Network security has been the traditional linchpin of enterprise security efforts. However, cloud computing has increased the requirement for network perimeters to be more porous and many attackers have mastered the art of attacks on identity system elements (which nearly always bypass network controls). These factors have increased the need to focus primarily on identity-based access controls to protect resources rather than network-based access controls.</span></p><p><span>"nbsp;</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Centralize the organizational responsibility for management and security of core networking functions such as cross-premises links, virtual networking, subnetting, and IP address schemes as well as network security elements such as virtual network appliances, encryption of cloud virtual network activity and cross-premises traffic, network-based access controls, and other traditional network security components.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/security/network-security-containment#centralize-network-management-and-security" target="_blank"><span>Centralize network management and security</span></a><span /></p>
|
1.0
|
Establish a designated group responsible for central network management - <a href="https://docs.microsoft.com/azure/architecture/framework/security/design-segmentation#functions-and-teams">Establish a designated group responsible for central network management</a>
<p><b>Why Consider This?</b></p>
Centralizing network management and security can reduce the potential for inconsistent strategies that create potential attacker exploitable security risks. Because all divisions of the IT and development organizations do not have the same level of network management and security knowledge and sophistication, organizations benefit from leveraging a centralized network team's expertise and tooling.
<p><b>Context</b></p>
<p><span>Network security has been the traditional linchpin of enterprise security efforts. However, cloud computing has increased the requirement for network perimeters to be more porous and many attackers have mastered the art of attacks on identity system elements (which nearly always bypass network controls). These factors have increased the need to focus primarily on identity-based access controls to protect resources rather than network-based access controls.</span></p><p><span>"nbsp;</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Centralize the organizational responsibility for management and security of core networking functions such as cross-premises links, virtual networking, subnetting, and IP address schemes as well as network security elements such as virtual network appliances, encryption of cloud virtual network activity and cross-premises traffic, network-based access controls, and other traditional network security components.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/security/network-security-containment#centralize-network-management-and-security" target="_blank"><span>Centralize network management and security</span></a><span /></p>
|
process
|
establish a designated group responsible for central network management why consider this centralizing network management and security can reduce the potential for inconsistent strategies that create potential attacker exploitable security risks because all divisions of the it and development organizations do not have the same level of network management and security knowledge and sophistication organizations benefit from leveraging a centralized network team s expertise and tooling context network security has been the traditional linchpin of enterprise security efforts however cloud computing has increased the requirement for network perimeters to be more porous and many attackers have mastered the art of attacks on identity system elements which nearly always bypass network controls these factors have increased the need to focus primarily on identity based access controls to protect resources rather than network based access controls nbsp suggested actions centralize the organizational responsibility for management and security of core networking functions such as cross premises links virtual networking subnetting and ip address schemes as well as network security elements such as virtual network appliances encryption of cloud virtual network activity and cross premises traffic network based access controls and other traditional network security components learn more centralize network management and security
| 1
|
178,247
| 6,601,629,188
|
IssuesEvent
|
2017-09-18 02:37:58
|
RoboJackets/robocup-firmware
|
https://api.github.com/repos/RoboJackets/robocup-firmware
|
closed
|
Use serial silicon id chip for radio uid
|
area / mbed exp / beginner priority / high status / new type / enhancement
|
Right now the robot's uid is hard-coded to 2 in main.cpp.
Migrated from RoboJackets/robocup-software#684
|
1.0
|
Use serial silicon id chip for radio uid - Right now the robot's uid is hard-coded to 2 in main.cpp.
Migrated from RoboJackets/robocup-software#684
|
non_process
|
use serial silicon id chip for radio uid right now the robot s uid is hard coded to in main cpp migrated from robojackets robocup software
| 0
|
428,232
| 29,934,682,968
|
IssuesEvent
|
2023-06-22 11:58:56
|
opensafely-core/ehrql
|
https://api.github.com/repos/opensafely-core/ehrql
|
opened
|
Separate out inline Python examples from documentation
|
documentation
|
These were all added in one go, copy-pasted from the ehrQL wiki.
They should be separate files, and then included into the documentation.
This is needed to make testing easier: #1325.
|
1.0
|
Separate out inline Python examples from documentation - These were all added in one go, copy-pasted from the ehrQL wiki.
They should be separate files, and then included into the documentation.
This is needed to make testing easier: #1325.
|
non_process
|
separate out inline python examples from documentation these were all added in one go copy pasted from the ehrql wiki they should be separate files and then included into the documentation this is needed to make testing easier
| 0
|
144,506
| 11,622,006,103
|
IssuesEvent
|
2020-02-27 05:03:04
|
Automattic/wp-calypso
|
https://api.github.com/repos/Automattic/wp-calypso
|
closed
|
WooCommerce+Jetpack connect flow has differing landing pages at the end
|
Jetpack Signup Testing [Status] Stale [Type] Bug
|
<!-- Thanks for contributing to Calypso! Pick a clear title ("Editor: add spell check") and proceed. -->
#### Steps to reproduce
1. Have a site with WooCommerce but no Jetpack installed, e.g.: https://jurassic.ninja/create/?nojetpack&woocommerce
2. Start with the wizzard
<img width="300" alt="Screenshot 2019-05-23 at 12 27 28" src="https://user-images.githubusercontent.com/87168/58241704-2a30a000-7d56-11e9-8202-ec6672a56f06.png">
3. Fill in the infos/disable them in steps
4. In Jetpack connect step, press "Continue with Jetpack"
<img width="400" alt="Screenshot 2019-05-23 at 09 12 33" src="https://user-images.githubusercontent.com/87168/58244535-73372300-7d5b-11e9-820e-f2640795e851.png">
5. You'll land in WordPress.com: test as authenticated and un-authenticated users.
- When you are authenticated already, you'll eventually end up in wp-admin's Jetpack dashboard, skipping the "ready" woo step visible in previous screenshot's breadcrumps navi:
<img width="500" alt="Screenshot 2019-05-23 at 09 14 37" src="https://user-images.githubusercontent.com/87168/58244593-8ea22e00-7d5b-11e9-83d3-8b8253ad75b7.png">
- When you were not authenticated and login during the connection flow, you'll land in "ready" step in woo flow:
<img width="500" alt="Screenshot 2019-05-23 at 09 18 42" src="https://user-images.githubusercontent.com/87168/58244648-b42f3780-7d5b-11e9-9369-2818f5e360c6.png">
Feels like authenticated flow where you land in Jetpack dash is a bug.
I didn't test what happens if you step aside the from the flow and either create or recover an account.
#### What I expected
Continue with Woo onboarding during these steps.
#### What happened instead
Landed in Jetpack dash.
# E2E tests
There are existing E2E tests [for this flow](https://github.com/Automattic/wp-calypso/blob/0580a07a70dc2e8bcfd2e27cc65daadd94a2befc/test/e2e/specs-jetpack-calypso/wp-jetpack-connect-spec.js#L370) but [I had to disable](https://github.com/Automattic/wp-calypso/pull/33232) the Jetpack connect flow parts today because they were consistently failing/broken (the test was broken, not the flow).
Conversation p1558593446013600-slack-e2e-testing-discuss
Looks like the test for the final step (that this bug report is about) had been disabled already previously: https://github.com/Automattic/wp-calypso/blob/0580a07a70dc2e8bcfd2e27cc65daadd94a2befc/test/e2e/specs-jetpack-calypso/wp-jetpack-connect-spec.js#L470-L473
Conversation p1556635263170900-slack-proton
|
1.0
|
WooCommerce+Jetpack connect flow has differing landing pages at the end - <!-- Thanks for contributing to Calypso! Pick a clear title ("Editor: add spell check") and proceed. -->
#### Steps to reproduce
1. Have a site with WooCommerce but no Jetpack installed, e.g.: https://jurassic.ninja/create/?nojetpack&woocommerce
2. Start with the wizzard
<img width="300" alt="Screenshot 2019-05-23 at 12 27 28" src="https://user-images.githubusercontent.com/87168/58241704-2a30a000-7d56-11e9-8202-ec6672a56f06.png">
3. Fill in the infos/disable them in steps
4. In Jetpack connect step, press "Continue with Jetpack"
<img width="400" alt="Screenshot 2019-05-23 at 09 12 33" src="https://user-images.githubusercontent.com/87168/58244535-73372300-7d5b-11e9-820e-f2640795e851.png">
5. You'll land in WordPress.com: test as authenticated and un-authenticated users.
- When you are authenticated already, you'll eventually end up in wp-admin's Jetpack dashboard, skipping the "ready" woo step visible in previous screenshot's breadcrumps navi:
<img width="500" alt="Screenshot 2019-05-23 at 09 14 37" src="https://user-images.githubusercontent.com/87168/58244593-8ea22e00-7d5b-11e9-83d3-8b8253ad75b7.png">
- When you were not authenticated and login during the connection flow, you'll land in "ready" step in woo flow:
<img width="500" alt="Screenshot 2019-05-23 at 09 18 42" src="https://user-images.githubusercontent.com/87168/58244648-b42f3780-7d5b-11e9-9369-2818f5e360c6.png">
Feels like authenticated flow where you land in Jetpack dash is a bug.
I didn't test what happens if you step aside the from the flow and either create or recover an account.
#### What I expected
Continue with Woo onboarding during these steps.
#### What happened instead
Landed in Jetpack dash.
# E2E tests
There are existing E2E tests [for this flow](https://github.com/Automattic/wp-calypso/blob/0580a07a70dc2e8bcfd2e27cc65daadd94a2befc/test/e2e/specs-jetpack-calypso/wp-jetpack-connect-spec.js#L370) but [I had to disable](https://github.com/Automattic/wp-calypso/pull/33232) the Jetpack connect flow parts today because they were consistently failing/broken (the test was broken, not the flow).
Conversation p1558593446013600-slack-e2e-testing-discuss
Looks like the test for the final step (that this bug report is about) had been disabled already previously: https://github.com/Automattic/wp-calypso/blob/0580a07a70dc2e8bcfd2e27cc65daadd94a2befc/test/e2e/specs-jetpack-calypso/wp-jetpack-connect-spec.js#L470-L473
Conversation p1556635263170900-slack-proton
|
non_process
|
woocommerce jetpack connect flow has differing landing pages at the end steps to reproduce have a site with woocommerce but no jetpack installed e g start with the wizzard img width alt screenshot at src fill in the infos disable them in steps in jetpack connect step press continue with jetpack img width alt screenshot at src you ll land in wordpress com test as authenticated and un authenticated users when you are authenticated already you ll eventually end up in wp admin s jetpack dashboard skipping the ready woo step visible in previous screenshot s breadcrumps navi img width alt screenshot at src when you were not authenticated and login during the connection flow you ll land in ready step in woo flow img width alt screenshot at src feels like authenticated flow where you land in jetpack dash is a bug i didn t test what happens if you step aside the from the flow and either create or recover an account what i expected continue with woo onboarding during these steps what happened instead landed in jetpack dash tests there are existing tests but the jetpack connect flow parts today because they were consistently failing broken the test was broken not the flow conversation slack testing discuss looks like the test for the final step that this bug report is about had been disabled already previously conversation slack proton
| 0
|
807
| 3,285,422,387
|
IssuesEvent
|
2015-10-28 20:32:42
|
GsDevKit/zinc
|
https://api.github.com/repos/GsDevKit/zinc
|
reopened
|
Timeout passed to a ZnClient is ignored while making the connection via the underlying socket
|
inprocess
|
### Suppose the following situation:
- Create a ZnClient
- Pass in an explicit timeout (e.g., 3 seconds)
- Pass in a host/port combination that does not exist
### Expected behavior: an error should be thrown after 3 seconds.
What happens: the user has to wait for the (default) timeout of GsSocket; the timeout passed in to the client is ignored.
### Reason: the SocketStream does not use the timeout while connecting to the socket created via SocketStreamSocket.
In the code this is reflected in the following extension method
SocketStream>>openConnectionToHost: host port: portNumber timeout: timeout
| socket |
socket :=SocketStreamSocket newTCPSocket.
socket connectTo: host port: portNumber.
^(self on: socket)
timeout: timeout;
yourself
I compared this with the Pharo implementation, and here the timeout is passed.
|
1.0
|
Timeout passed to a ZnClient is ignored while making the connection via the underlying socket - ### Suppose the following situation:
- Create a ZnClient
- Pass in an explicit timeout (e.g., 3 seconds)
- Pass in a host/port combination that does not exist
### Expected behavior: an error should be thrown after 3 seconds.
What happens: the user has to wait for the (default) timeout of GsSocket; the timeout passed in to the client is ignored.
### Reason: the SocketStream does not use the timeout while connecting to the socket created via SocketStreamSocket.
In the code this is reflected in the following extension method
SocketStream>>openConnectionToHost: host port: portNumber timeout: timeout
| socket |
socket :=SocketStreamSocket newTCPSocket.
socket connectTo: host port: portNumber.
^(self on: socket)
timeout: timeout;
yourself
I compared this with the Pharo implementation, and here the timeout is passed.
|
process
|
timeout passed to a znclient is ignored while making the connection via the underlying socket suppose the following situation create a znclient pass in an explicit timeout e g seconds pass in a host port combination that does not exist expected behavior an error should be thrown after seconds what happens the user has to wait for the default timeout of gssocket the timeout passed in to the client is ignored reason the socketstream does not use the timeout while connecting to the socket created via socketstreamsocket in the code this is reflected in the following extension method socketstream openconnectiontohost host port portnumber timeout timeout socket socket socketstreamsocket newtcpsocket socket connectto host port portnumber self on socket timeout timeout yourself i compared this with the pharo implementation and here the timeout is passed
| 1
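
The Zinc record above describes a connect timeout that only takes effect after the socket is already connected. As a language-neutral illustration of the expected behaviour, here is a minimal Python sketch (hypothetical host and port, TEST-NET address) where the timeout also bounds the connect attempt itself:

```python
import socket

try:
    # create_connection applies the timeout to the connect attempt,
    # not only to later reads/writes on the established stream
    sock = socket.create_connection(("203.0.113.1", 9999), timeout=3)
except OSError as exc:  # includes socket.timeout
    print("connect failed within 3 seconds:", exc)
```

This mirrors what the report expects from ZnClient: the user-supplied timeout should cover the connection phase as well as the established stream.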
|
402,783
| 27,386,623,887
|
IssuesEvent
|
2023-02-28 13:45:35
|
bihealth/sodar-core
|
https://api.github.com/repos/bihealth/sodar-core
|
closed
|
Automated readthedocs builds not working
|
bug documentation environment
|
I'm guessing this is a similar issue to what I recently fixed in SODAR.
|
1.0
|
Automated readthedocs builds not working - I'm guessing this is a similar issue to what I recently fixed in SODAR.
|
non_process
|
automated readthedocs builds not working i m guessing this is a similar issue to what i recently fixed in sodar
| 0
|
4,076
| 7,017,167,202
|
IssuesEvent
|
2017-12-21 08:31:33
|
mrchypark/krlandprice
|
https://api.github.com/repos/mrchypark/krlandprice
|
opened
|
Unify data columns
|
data pre-processing
|
Unify the data columns for each year. As time goes by, improvements to the data may occur, but the columns are decided with compatibility in mind so they can be used together with data from previous years.
|
1.0
|
Unify data columns - Unify the data columns for each year. As time goes by, improvements to the data may occur, but the columns are decided with compatibility in mind so they can be used together with data from previous years.
|
process
|
unify data columns unify the data columns for each year as time goes by improvements to the data may occur but the columns are decided with compatibility in mind so they can be used together with data from previous years
| 1
|
11,791
| 14,618,796,962
|
IssuesEvent
|
2020-12-22 16:47:07
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Where can runtime expressions be used?
|
Pri1 devops-cicd-process/tech devops/prod doc-enhancement ready-to-doc
|
> Compile-time expressions can be used anywhere; runtime expressions are more limited.
Could you please clarify where runtime expressions can be used? From what I understand, there are `conditions` and `variables` at least.
According to [one of those samples](https://docs.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&tabs=yaml#use-a-variable-group), it seems that it can also be used in `script`.
Where else?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Where can runtime expressions be used? - > Compile-time expressions can be used anywhere; runtime expressions are more limited.
Could you please clarify where runtime expressions can be used? From what I understand, there are `conditions` and `variables` at least.
According to [one of those samples](https://docs.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&tabs=yaml#use-a-variable-group), it seems that it can also be used in `script`.
Where else?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
where can runtime expressions be used compile time expressions can be used anywhere runtime expressions are more limited could you please clarify where runtime expressions can be used from what i understand there are conditions and variables at least according to it seems that it can also be used in script where else document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
20,358
| 27,015,452,442
|
IssuesEvent
|
2023-02-10 18:56:00
|
darkside-princeton/sipm-analysis
|
https://api.github.com/repos/darkside-princeton/sipm-analysis
|
closed
|
Complete pulse-processing methods and scripts
|
pre-processing
|
Reproduce the tasks done with `script/root_calibration.py` using the new framework.
1. Have one script under `sipm/exe/` that handles pulse information without pulse shape analysis.
2. Include more information to analyze and save, including baseline, matched filter, amplitude, and charge integral.
3. Modify `calibration.ipynb` to work with h5 files instead of root files.
|
1.0
|
Complete pulse-processing methods and scripts - Reproduce the tasks done with `script/root_calibration.py` using the new framework.
1. Have one script under `sipm/exe/` that handles pulse information without pulse shape analysis.
2. Include more information to analyze and save, including baseline, matched filter, amplitude, and charge integral.
3. Modify `calibration.ipynb` to work with h5 files instead of root files.
|
process
|
complete pulse processing methods and scripts reproduce the tasks done with script root calibration py using the new framework have one script under sipm exe that handles pulse information without pulse shape analysis include more information to analyze and save including baseline matched filter amplitude and charge integral modify calibration ipynb to work with files instead of root files
| 1
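
The pulse-processing record above asks for baseline, matched filter, amplitude and charge integral to be computed and saved to h5 files. A minimal Python sketch of the baseline/amplitude/charge part is below; the file name, group layout and the 50-sample pre-pulse window are assumptions, not the repository's actual conventions:

```python
import h5py
import numpy as np

# hypothetical layout: one 2-D waveform dataset (n_events, n_samples) per channel
with h5py.File("run0.h5", "r") as f:
    pulses = f["channel0/waveforms"][:].astype(float)

baseline = pulses[:, :50].mean(axis=1, keepdims=True)  # assumed pre-pulse region
corrected = pulses - baseline
amplitude = corrected.max(axis=1)   # pulse amplitude per event
charge = corrected.sum(axis=1)      # charge integral (ADC counts x samples)
```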
|
375,537
| 26,167,447,949
|
IssuesEvent
|
2023-01-01 13:19:01
|
TSBots/TechSupportBot
|
https://api.github.com/repos/TSBots/TechSupportBot
|
opened
|
Fix the GPL3 license
|
documentation
|
The current license is a template and includes directions.
I don't think this makes the license invalid, but it could at least look better
|
1.0
|
Fix the GPL3 license - The current license is a template and includes directions.
I don't think this makes the license invalid, but it could at least look better
|
non_process
|
fix the license the current license is a template and includes directions i don t think this makes the license invalid but it could at least look better
| 0
|
712
| 3,203,628,506
|
IssuesEvent
|
2015-10-02 20:01:54
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
reopened
|
Process.GetProcessesByName significantly slower on Linux than Windows for nonexistent process
|
performance System.Diagnostics.Process X-Plat
|
The performance test being run:
```
[Benchmark]
[InlineData(1)]
[InlineData(2)]
[InlineData(3)]
public void GetProcessesByName(int innerIterations)
{
foreach (var iteration in Benchmark.Iterations)
using (iteration.StartMeasurement())
{
for (int i = 0; i < innerIterations; i++)
{
Process.GetProcessesByName("1"); Process.GetProcessesByName("1"); Process.GetProcessesByName("1");
Process.GetProcessesByName("1"); Process.GetProcessesByName("1"); Process.GetProcessesByName("1");
Process.GetProcessesByName("1"); Process.GetProcessesByName("1"); Process.GetProcessesByName("1");
}
}
}
```
Linux perf results (38.455 total seconds):
```
<assemblies>
<assembly name="System.Diagnostics.Process.Tests.dll" environment="64-bit .NET (unknown version) [collection-per-assembly, parallel (1 threads)]" test-framework="xUnit.net 2.1.0.3168" run-date="2015-09-29" run-time="03:16:25" total="3" passed="3" failed="0" skipped="0" time="38.455" errors="0">
<errors />
<collection total="3" passed="3" failed="0" skipped="0" name="Test collection for System.Diagnostics.Process.Tests.dll" time="35.243">
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 1)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="6.5254048" result="Pass" />
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 2)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="11.3182348" result="Pass" />
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 3)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="17.3998431" result="Pass" />
</collection>
</assembly>
</assemblies>
```
Windows perf results (3.355 total seconds):
```
<assemblies>
<assembly name="System.Diagnostics.Process.Tests.dll" environment="64-bit .NET (unknown version) [collection-per-assembly, parallel (8 threads)]" test-framework="xUnit.net 2.1.0.3168" run-date="2015-09-28" run-time="12:22:29" total="3" passed="3" failed="0" skipped="0" time="3.355" errors="0">
<errors />
<collection total="3" passed="3" failed="0" skipped="0" name="Test collection for System.Diagnostics.Process.Tests.dll" time="3.183">
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 1)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="1.0872349" result="Pass" />
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 2)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="1.040529" result="Pass" />
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 3)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="1.0556937" result="Pass" />
</collection>
</assembly>
</assemblies>
```
Further increasing the number of calls to GetProcessesByName makes comparatively little difference on Windows. For example, with an ```InnerIterations``` of 5000 (45000 total function calls), the elapsed time on Windows is only 106 seconds. In that same time Linux can only complete ~20 InnerIterations (180 total function calls). This suggests the Linux implementation takes roughly 250 times as long as the Windows implementation.
|
1.0
|
Process.GetProcessesByName significantly slower on Linux than Windows for nonexistent process -
The performance test being run:
```
[Benchmark]
[InlineData(1)]
[InlineData(2)]
[InlineData(3)]
public void GetProcessesByName(int innerIterations)
{
foreach (var iteration in Benchmark.Iterations)
using (iteration.StartMeasurement())
{
for (int i = 0; i < innerIterations; i++)
{
Process.GetProcessesByName("1"); Process.GetProcessesByName("1"); Process.GetProcessesByName("1");
Process.GetProcessesByName("1"); Process.GetProcessesByName("1"); Process.GetProcessesByName("1");
Process.GetProcessesByName("1"); Process.GetProcessesByName("1"); Process.GetProcessesByName("1");
}
}
}
```
Linux perf results (38.455 total seconds):
```
<assemblies>
<assembly name="System.Diagnostics.Process.Tests.dll" environment="64-bit .NET (unknown version) [collection-per-assembly, parallel (1 threads)]" test-framework="xUnit.net 2.1.0.3168" run-date="2015-09-29" run-time="03:16:25" total="3" passed="3" failed="0" skipped="0" time="38.455" errors="0">
<errors />
<collection total="3" passed="3" failed="0" skipped="0" name="Test collection for System.Diagnostics.Process.Tests.dll" time="35.243">
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 1)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="6.5254048" result="Pass" />
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 2)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="11.3182348" result="Pass" />
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 3)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="17.3998431" result="Pass" />
</collection>
</assembly>
</assemblies>
```
Windows perf results (3.355 total seconds):
```
<assemblies>
<assembly name="System.Diagnostics.Process.Tests.dll" environment="64-bit .NET (unknown version) [collection-per-assembly, parallel (8 threads)]" test-framework="xUnit.net 2.1.0.3168" run-date="2015-09-28" run-time="12:22:29" total="3" passed="3" failed="0" skipped="0" time="3.355" errors="0">
<errors />
<collection total="3" passed="3" failed="0" skipped="0" name="Test collection for System.Diagnostics.Process.Tests.dll" time="3.183">
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 1)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="1.0872349" result="Pass" />
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 2)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="1.040529" result="Pass" />
<test name="System.Diagnostics.Tests.Perf_Process.GetProcessesByName(innerIterations: 3)" type="System.Diagnostics.Tests.Perf_Process" method="GetProcessesByName" time="1.0556937" result="Pass" />
</collection>
</assembly>
</assemblies>
```
Further increasing the number of calls to GetProcessesByName makes comparatively little difference on Windows. For example, with an ```InnerIterations``` of 5000 (45000 total function calls), the elapsed time on Windows is only 106 seconds. In that same time Linux can only complete ~20 InnerIterations (180 total function calls). This suggests the Linux implementation takes roughly 250 times as long as the Windows implementation.
|
process
|
process getprocessesbyname significantly slower on linux than windows for nonexistent process the performance test being run public void getprocessesbyname int inneriterations foreach var iteration in benchmark iterations using iteration startmeasurement for int i i inneriterations i process getprocessesbyname process getprocessesbyname process getprocessesbyname process getprocessesbyname process getprocessesbyname process getprocessesbyname process getprocessesbyname process getprocessesbyname process getprocessesbyname linux perf results total seconds windows perf results total seconds further increasing the number of calls to getprocessesbyname makes comparatively little difference on windows for example with an inneriterations of total function calls the elapsed time on windows is only seconds in that same time linux can only complete inneriterations total function calls this suggests the linux implementation takes roughly times as long as the windows implementation
| 1
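
The per-call cost implied by the numbers in the corefx report above can be checked with a few lines of arithmetic (values taken directly from the report's closing paragraph):

```python
# Windows: 45000 calls in ~106 s; Linux: ~180 calls in the same time
windows_per_call = 106 / 45_000   # ~2.4 ms per GetProcessesByName call
linux_per_call = 106 / 180        # ~590 ms per call
print(round(linux_per_call / windows_per_call))  # ~250x slower, as stated
```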
|
20,936
| 27,787,570,003
|
IssuesEvent
|
2023-03-17 05:33:37
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
reopened
|
Re-evaluate gosec error about using "weak cryptographic primitive"
|
bug help wanted priority:p3 processor/attributes processor/resource
|
`crypto/sha1` is imported in processor/processorhelper/hasher.go and some tests, and we suppress the warning from Gosec about it being a weak cryptographic primitive. We should document why SHA1 is appropriate (e.g. it's part of an external specification), or switch to something else.
```
[/Users/lazy/github/opentelemetry-collector/processor/attributesprocessor/attribute_hasher.go:18] - G505 (CWE-327): Blacklisted import crypto/sha1: weak cryptographic primitive (Confidence: HIGH, Severity: MEDIUM)
> "crypto/sha1"
[/Users/lazy/github/opentelemetry-collector/processor/attributesprocessor/attribute_hasher.go:61] - G401 (CWE-326): Use of weak cryptographic primitive (Confidence: HIGH, Severity: MEDIUM)
> sha1.New()
```
|
2.0
|
Re-evaluate gosec error about using "weak cryptographic primitive" - `crypto/sha1` is imported in processor/processorhelper/hasher.go and some tests, and we suppress the warning from Gosec about it being a weak cryptographic primitive. We should document why SHA1 is appropriate (e.g. it's part of an external specification), or switch to something else.
```
[/Users/lazy/github/opentelemetry-collector/processor/attributesprocessor/attribute_hasher.go:18] - G505 (CWE-327): Blacklisted import crypto/sha1: weak cryptographic primitive (Confidence: HIGH, Severity: MEDIUM)
> "crypto/sha1"
[/Users/lazy/github/opentelemetry-collector/processor/attributesprocessor/attribute_hasher.go:61] - G401 (CWE-326): Use of weak cryptographic primitive (Confidence: HIGH, Severity: MEDIUM)
> sha1.New()
```
|
process
|
re evaluate gosec error about using weak cryptographic primitive crypto is imported in processor processorhelper hasher go and some tests and we suppress the warning from gosec about it being a weak cryptographic primitive we should document why is appropriate e g it s part of an external specification or switch to something else cwe blacklisted import crypto weak cryptographic primitive confidence high severity medium crypto cwe use of weak cryptographic primitive confidence high severity medium new
| 1
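
The gosec record above suggests either documenting why SHA-1 is acceptable or switching to another hash. As an illustration of the second option (the collector itself is written in Go; this is only a sketch of the idea in Python), a stronger digest can be swapped in without changing how the attribute value is handled:

```python
import hashlib

def hash_attribute(value: str) -> str:
    # SHA-256 gives a stable digest without triggering weak-crypto findings
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

print(hash_attribute("user@example.com"))  # hypothetical attribute value
```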
|
13,476
| 15,987,317,027
|
IssuesEvent
|
2021-04-19 00:06:51
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
Why pid is +1 on debian and not on alpine
|
Bug Process Status: Needs Review
|
```php
self::$process = new Process('php --server localhost:9999 ' . self::FILE);
exec('kill -9 ' . (self::$process->getPid() + 1));// For debian
exec('kill -9 ' . self::$process->getPid());// For alpine
```
@fabpot Do you know why it has this behavior
Using docker `php:7.2-fpm` vs `php:7.2-fpm-alpine`
Ref: https://stackoverflow.com/questions/41226894/symfonys-process-pid-increments-by-1-during-execution
|
1.0
|
Why pid is +1 on debian and not on alpine - ```php
self::$process = new Process('php --server localhost:9999 ' . self::FILE);
exec('kill -9 ' . (self::$process->getPid() + 1));// For debian
exec('kill -9 ' . self::$process->getPid());// For alpine
```
@fabpot Do you know why it has this behavior
Using docker `php:7.2-fpm` vs `php:7.2-fpm-alpine`
Ref: https://stackoverflow.com/questions/41226894/symfonys-process-pid-increments-by-1-during-execution
|
process
|
why pid is on debian and not on alpine php self process new process php server localhost self file exec kill self process getpid for debian exec kill self process getpid for alpine fabpot do you know why is does have this behavior using docker php fpm vs php fpm alpine ref
| 1
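
The report above does not say why the pid differs by one between distributions; one plausible but unconfirmed mechanism is an intermediate shell sitting between the tracked process and the final command. The hypothetical Python sketch below only illustrates that general effect and is not the Symfony Process implementation:

```python
import subprocess

# direct exec: the tracked pid IS the command's pid
direct = subprocess.Popen(["sleep", "5"])

# wrapped in a shell: the tracked pid is the shell; depending on whether the
# shell execs or forks the command, the command itself may end up with pid + 1
wrapped = subprocess.Popen(["sh", "-c", "sleep 5"])

print("direct:", direct.pid, "wrapped shell:", wrapped.pid)
```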
|
5,524
| 8,381,048,232
|
IssuesEvent
|
2018-10-07 20:47:28
|
MichiganDataScienceTeam/googleanalytics
|
https://api.github.com/repos/MichiganDataScienceTeam/googleanalytics
|
opened
|
Preprocess: u'trafficSource.adContent', u'trafficSource.adwordsClickInfo.adNetworkType', u'trafficSource.adwordsClickInfo.criteriaParameters', u'trafficSource.adwordsClickInfo.gclId',
|
easy preprocessing
|
Preprocess the following features:
u'trafficSource.adContent',
u'trafficSource.adwordsClickInfo.adNetworkType',
u'trafficSource.adwordsClickInfo.criteriaParameters',
u'trafficSource.adwordsClickInfo.gclId',
1. Standardization: [http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling](http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling)
2. Impute missing values: [http://scikit-learn.org/stable/modules/impute.html](http://scikit-learn.org/stable/modules/impute.html)
3. Normalization: [http://scikit-learn.org/stable/modules/preprocessing.html#normalization](http://scikit-learn.org/stable/modules/preprocessing.html#normalization)
4. Encode categorical features (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features](http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features)
5. Discretization (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#discretization](http://scikit-learn.org/stable/modules/preprocessing.html#discretization)
[http://scikit-learn.org/stable/modules/preprocessing.html](http://scikit-learn.org/stable/modules/preprocessing.html)
|
1.0
|
Preprocess: u'trafficSource.adContent', u'trafficSource.adwordsClickInfo.adNetworkType', u'trafficSource.adwordsClickInfo.criteriaParameters', u'trafficSource.adwordsClickInfo.gclId', - Preprocess the following features:
u'trafficSource.adContent',
u'trafficSource.adwordsClickInfo.adNetworkType',
u'trafficSource.adwordsClickInfo.criteriaParameters',
u'trafficSource.adwordsClickInfo.gclId',
1. Standardization: [http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling](http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling)
2. Impute missing values: [http://scikit-learn.org/stable/modules/impute.html](http://scikit-learn.org/stable/modules/impute.html)
3. Normalization: [http://scikit-learn.org/stable/modules/preprocessing.html#normalization](http://scikit-learn.org/stable/modules/preprocessing.html#normalization)
4. Encode categorical features (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features](http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features)
5. Discretization (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#discretization](http://scikit-learn.org/stable/modules/preprocessing.html#discretization)
[http://scikit-learn.org/stable/modules/preprocessing.html](http://scikit-learn.org/stable/modules/preprocessing.html)
|
process
|
preprocess u trafficsource adcontent u trafficsource adwordsclickinfo adnetworktype u trafficsource adwordsclickinfo criteriaparameters u trafficsource adwordsclickinfo gclid preprocess the following features u trafficsource adcontent u trafficsource adwordsclickinfo adnetworktype u trafficsource adwordsclickinfo criteriaparameters u trafficsource adwordsclickinfo gclid standardization impute missing values normalization encode categorical features optional discretization optional
| 1
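
Since the four trafficSource.* columns listed above are categorical strings, the relevant steps from the checklist are imputation and categorical encoding rather than scaling. A minimal scikit-learn sketch (the columns are assumed to live in a pandas DataFrame `X`) could look like this:

```python
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder

categorical_pipeline = Pipeline([
    # step 2 of the checklist: fill missing values with an explicit placeholder
    ("impute", SimpleImputer(strategy="constant", fill_value="missing")),
    # step 4 of the checklist: encode categorical features
    ("encode", OneHotEncoder(handle_unknown="ignore")),
])

# X is assumed to hold only the four trafficSource.* columns
# X_encoded = categorical_pipeline.fit_transform(X)
```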
|
21,170
| 28,141,106,987
|
IssuesEvent
|
2023-04-02 00:03:25
|
metallb/metallb
|
https://api.github.com/repos/metallb/metallb
|
closed
|
Review and improve contribution guidelines
|
process lifecycle-stale
|
Our code review process could likely be improved significantly if we had useful guidelines for the contribution process.
The following things come to mind:
- PRs should contain some minimal info in their description so that reviewers can review them easily. On the other hand, we shouldn't bother contributors with unnecessary bureaucracy. See https://github.com/metallb/metallb/issues/719.
- The "contributing" section on the [website](https://metallb.org/community/) needs to be updated.
- I think we could benefit from written code review guidelines. For example, if a maintainer asks for a change and the same pattern has multiple occurrences, it should be implied that all occurrences should be fixed.
|
1.0
|
Review and improve contribution guidelines - Our code review process could likely be improved significantly if we had useful guidelines for the contribution process.
The following things come to mind:
- PRs should contain some minimal info in their description so that reviewers can review them easily. On the other hand, we shouldn't bother contributors with unnecessary bureaucracy. See https://github.com/metallb/metallb/issues/719.
- The "contributing" section on the [website](https://metallb.org/community/) needs to be updated.
- I think we could benefit from written code review guidelines. For example, if a maintainer asks for a change and the same pattern has multiple occurrences, it should be implied that all occurrences should be fixed.
|
process
|
review and improve contribution guidelines our code review process could likely be improved significantly if we had useful guidelines for the contribution process the following things come to mind prs should contain some minimal info in their description so that reviewers can review them easily on the other hand we shouldn t bother contributors with unnecessary bureaucracy see the contributing section on the needs to be updated i think we could benefit from written code review guidelines for example if a maintainer asks for a change and the same pattern has multiple occurrences it should be implied that all occurrences should be fixed
| 1
|
540,418
| 15,811,560,622
|
IssuesEvent
|
2021-04-05 02:56:43
|
naoTimesdev/webpanel
|
https://api.github.com/repos/naoTimesdev/webpanel
|
closed
|
Change the Announcement channel for progress
|
difficulty: easy module: API module: backend module: page priority: medium type: enhancements
|
To-do:
- [ ] API Handler
- [x] Socket parity to check whether the channel exists or not
|
1.0
|
Change the Announcement channel for progress - To-do:
- [ ] API Handler
- [x] Socket parity to check whether the channel exists or not
|
non_process
|
change the announcement channel for progress to do api handler socket parity to check whether the channel exists or not
| 0
|
2,260
| 5,093,448,569
|
IssuesEvent
|
2017-01-03 06:07:30
|
CS3216-Bubble/bubble-frontend-deprecated
|
https://api.github.com/repos/CS3216-Bubble/bubble-frontend-deprecated
|
closed
|
Process flag request view
|
counsel-ui feature high-priority process-flag-view
|
A view to approve or reject a professional help flag request sent to report on a user. This view will display the message context and the user profile summary of the reporter and reportee. On approving, a SOS chat will be created to facilitate the conversation between the counsellor and the reported user.
|
1.0
|
Process flag request view - A view to approve or reject a professional help flag request sent to report on a user. This view will display the message context and the user profile summary of the reporter and reportee. On approving, a SOS chat will be created to facilitate the conversation between the counsellor and the reported user.
|
process
|
process flag request view a view to approve or reject a professional help flag request sent to report on a user this view will display the message context and the user profile summary of the reporter and reportee on approving a sos chat will be created to facilitate the conversation between the counsellor and the reported user
| 1
|
81,517
| 10,146,267,828
|
IssuesEvent
|
2019-08-05 07:41:26
|
MozillaReality/FirefoxReality
|
https://api.github.com/repos/MozillaReality/FirefoxReality
|
closed
|
make and place "too many windows" dialog
|
Final Design PM/UX review UX
|
If three windows are already opened, the user can still click the + button in the tray or the context menu button. In this case, a new window is not created, but instead a dialog appears with error text. (UIS-70 800)
needed by #1319
[Multi-window spec](
https://trello.com/c/78zHL5U4/343-uis-70-window-positioning-3-windows)
[Window positioning spec](
https://trello.com/c/McQtjuOB/344-uis-71-open-link-in-new-window)
|
1.0
|
make and place "too many windows" dialog - If three windows are already opened, the user can still click the + button in the tray or the context menu button. In this case, a new window is not created, but instead a dialog appears with error text. (UIS-70 800)
needed by #1319
[Multi-window spec](
https://trello.com/c/78zHL5U4/343-uis-70-window-positioning-3-windows)
[Window positioning spec](
https://trello.com/c/McQtjuOB/344-uis-71-open-link-in-new-window)
|
non_process
|
make and place too many windows dialog if three windows are already opened the user can still click the button in the tray or the context menu button in this case a new window is not created but instead a dialog appears with error text uis needed by
| 0
|
108,587
| 23,633,353,353
|
IssuesEvent
|
2022-08-25 11:15:54
|
odpi/egeria
|
https://api.github.com/repos/odpi/egeria
|
closed
|
Implement Sonar scan across all repos
|
enhancement code-quality triage
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Please describe the new behavior that that will improve Egeria
We currently run a sonar scan, hosted on sonar cloud, for the egeria repo, and a few others.
However this uses an old 'azure' pipeline and should be migrated to current environment.
Sonar-initiated scans work for some languages, but not java. An attempt to get this working for java was made in the hms connector repo - see https://github.com/odpi/egeria-connector-hivemetastore/issues/7 . However with the fork/PR model we use, sonar recommended approach does not work. Some workarounds are suggested including splitting the job in 2.
opening up this issue to track updating the scans across all our repositories
cc: @lpal
### Alternatives
n/a
### Any Further Information?
none
### Would you be prepared to be assigned this issue to work on?
- [X] I can work on this
|
1.0
|
Implement Sonar scan across all repos - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Please describe the new behavior that that will improve Egeria
We currently run a sonar scan, hosted on sonar cloud, for the egeria repo, and a few others.
However this uses an old 'azure' pipeline and should be migrated to current environment.
Sonar-initiated scans work for some languages, but not java. An attempt to get this working for java was made in the hms connector repo - see https://github.com/odpi/egeria-connector-hivemetastore/issues/7 . However with the fork/PR model we use, sonar recommended approach does not work. Some workarounds are suggested including splitting the job in 2.
opening up this issue to track updating the scans across all our repositories
cc: @lpal
### Alternatives
n/a
### Any Further Information?
none
### Would you be prepared to be assigned this issue to work on?
- [X] I can work on this
|
non_process
|
implement sonar scan across all repos is there an existing issue for this i have searched the existing issues please describe the new behavior that that will improve egeria we currently run a sonar scan hosted on sonar cloud for the egeria repo and a few others however this uses an old azure pipeline and should be migrated to current environment sonar initiated scans work for some languages but not java an attempt to get this working for java was made in the hms connector repo see however with the fork pr model we use sonar recommended approach does not work some workarounds are suggested including splitting the job in opening up this issue to track updating the scans across all our repositories cc lpal alternatives n a any further information none would you be prepared to be assigned this issue to work on i can work on this
| 0
|
164,781
| 26,023,858,944
|
IssuesEvent
|
2022-12-21 14:49:02
|
phetsims/quadrilateral
|
https://api.github.com/repos/phetsims/quadrilateral
|
opened
|
Possibly reword angle "slices" to "wedges"
|
design:description design:voicing
|
"pie slices" was the original idea, but we didn't want confusion between "pie" and mathematical "Pi", so we dropped "pie". "30 degree slices" in the hint is a little hard to find. And the numbered slices has been criticized by two people (low vision and blind populations).
@BLFiedler suggested "wedges".
- portions
- slivers
- chunks
Are other words that could be considered.
I will look at the help text and the object values to see if a rename to "wedges" will work.
|
2.0
|
Possibly reword angle "slices" to "wedges" - "pie slices" was the original idea, but we didn't want confusion between "pie" and mathematical "Pi", so we dropped "pie". "30 degree slices" in the hint is a little hard to find. And the numbered slices has been criticized by two people (low vision and blind populations).
@BLFiedler suggested "wedges".
- portions
- slivers
- chunks
Are other words that could be considered.
I will look at the help text and the object values to see if a rename to "wedges" will work.
|
non_process
|
possibly reword angle slices to wedges pie slices was the original idea but we didn t want confusion between pie and mathematical pi so we dropped pie degree slices in the hint is a little hard to find and the numbered slices has been criticized by two people low vision and blind populations blfiedler suggested wedges portions slivers chunks are other words that could be considered i will look at the help text and the object vales to see if a rename to wedges will work
| 0
|
444,352
| 31,034,072,788
|
IssuesEvent
|
2023-08-10 14:13:02
|
bpfd-dev/bpfd
|
https://api.github.com/repos/bpfd-dev/bpfd
|
opened
|
Introductory blog post for BPFD
|
documentation
|
Discussed on the community meeting, we want to put together an introductory level blog post for bpfd which covers:
* high level information about what bpfd is, and why it exists
* basic concepts and use cases
* links to the project, and community (discussions, slack, community sync) to lead interested people into the project
The intention is to post this on our website at https://bpfd.dev first, and then consider posting it on places like Medium, etc. for further reach.
|
1.0
|
Introductory blog post for BPFD - Discussed on the community meeting, we want to put together an introductory level blog post for bpfd which covers:
* high level information about what bpfd is, and why it exists
* basic concepts and use cases
* links to the project, and community (discussions, slack, community sync) to lead interested people into the project
The intention is to post this on our website at https://bpfd.dev first, and then consider posting it on places like Medium, etc. for further reach.
|
non_process
|
introductory blog post for bpfd discussed on the community meeting we want to put together an introductory level blog post for bpfd which covers high level information about what bpfd is and why it exists basic concepts and use cases links to the project and community discussions slack community sync to lead interested people into the project the intention is to post this on our website at first and then consider posting it on places like medium e t c for further reach
| 0
|
39,257
| 5,060,353,156
|
IssuesEvent
|
2016-12-22 11:35:50
|
OAButton/backend
|
https://api.github.com/repos/OAButton/backend
|
opened
|
Anonymity for users during requests
|
Blocked: Copy Blocked: Design Blocked: Development Blocked: Test enhancement JISC question / discussion
|
I've had several people suggest that they'd like to see us implement anonymity as part of the request system & on the site generally. This is an issue to think about the implications of that & how to make it work.
|
1.0
|
Anonymity for users during requests - I've had several people suggest that they'd like to see us implement anonymity as part of the request system & on the site generally. This is an issue to think about the implications of that & how to make it work.
|
non_process
|
anonymity for users during requests i ve had several people suggest that they d like to see us implement anonymity as part of the request system on the site generally this is an issue to think about the implications of that how to make it work
| 0
|
9,156
| 12,217,061,466
|
IssuesEvent
|
2020-05-01 16:24:29
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
opened
|
Remove `editor` role from the test service account
|
priority: p2 testing type: process
|
Currently the service account used for our Kokoro builds has `editor` role. While this is convenient, strictly saying we should stop using the deprecated role and instead add specific minimum roles to the service account.
|
1.0
|
Remove `editor` role from the test service account - Currently the service account used for our Kokoro builds has `editor` role. While this is convenient, strictly saying we should stop using the deprecated role and instead add specific minimum roles to the service account.
|
process
|
remove editor role from the test service account currently the service account used for our kokoro builds has editor role while this is convenient strictly saying we should stop using the deprecated role and instead add specific minimum roles to the service account
| 1
|
12,810
| 15,187,242,409
|
IssuesEvent
|
2021-02-15 13:32:07
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
incompatible_dont_collect_so_artifacts
|
incompatible-change migration-ready team-Rules-Java type: process
|
**Flag:** `--incompatible_dont_collect_so_artifacts`
**Available since:** 5.0.0
**Flipped in:** 5.0.0
Rules `java_library`, `java_binary` and `java_test` will stop collecting `.so` libraries directly, which may happen when they are provided using `filegroup` or `genrule` targets. `so` libraries were collected from `deps`, `runtime_deps`, `exports` and `data` attributes.
# Migration
Use `cc_binary` or `cc_import` rule to import the library or point directly to those targets from Java targets. When needed pass it using an `alias` instead of `filegroup`.
|
1.0
|
incompatible_dont_collect_so_artifacts - **Flag:** `--incompatible_dont_collect_so_artifacts`
**Available since:** 5.0.0
**Flipped in:** 5.0.0
Rules `java_library`, `java_binary` and `java_test` will stop collecting `.so` libraries directly, which may happen when they are provided using `filegroup` or `genrule` targets. `so` libraries were collected from `deps`, `runtime_deps`, `exports` and `data` attributes.
# Migration
Use `cc_binary` or `cc_import` rule to import the library or point directly to those targets from Java targets. When needed pass it using an `alias` instead of `filegroup`.
|
process
|
incompatible dont collect so artifacts flag incompatible dont collect so artifacts available since flipped in rules java library java binary and java test will stop collecting so libraries directly which may happen when they are provided using filegroup or genrule targets so libraries were collected from deps runtime deps exports and data attributes migration use cc binary or cc import rule to import the library or point directly to those targets from java targets when needed pass it using an alias instead of filegroup
| 1
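
Following the migration note above, a BUILD-file sketch in Starlark (Python syntax) of the suggested direction might look like the following; the target names are hypothetical and the exact attribute to use on the Java side (deps vs runtime_deps) may need adjusting for your rule set:

```starlark
# BUILD sketch: wrap the shared library in cc_import instead of a filegroup
cc_import(
    name = "libfoo",             # hypothetical target
    shared_library = "libfoo.so",
)

java_binary(
    name = "app",
    srcs = ["Main.java"],
    runtime_deps = [":libfoo"],  # point at the cc_import target directly
)
```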
|
11,457
| 14,280,025,811
|
IssuesEvent
|
2020-11-23 04:49:03
|
aodn/imos-toolbox
|
https://api.github.com/repos/aodn/imos-toolbox
|
opened
|
Standardisation of PP/QC history comments in the netcdf file
|
Type:enhancement Unit:Processing Unit:QC
|
It would be interesting to review/extend the standard commenting operation with regard to QC/PP tests over the `history` field in the NetCDF exported files. This would help, for example, with external reporting scripts (or any other external tool) that need to inspect what has been done within a particular file.
Ideally, the `history` would provide _**complete string representations**_ of what has been done, where applicable. Optionally, the `history` may even have **reversible/parsable string representations** of the operations performed.
**Complete** means that all arguments/options used are explicitly stated/referenced. This is mostly the case, but maybe missing some information that is contained in variable comments only.
**Reversible** means that the `history` comment can be read, processed, parsed and converted to be re-used to obtain the same results. The NetCDF history field is not ideal for that, but a lot of conventions used some kind of string introspection for conversions anyway (e.g. mapping transformations). This kind of functionality may help in reprocessing from the file itself (no DB, no previous session files).
A deeper change in formatting, like timestamps on each operation or toolbox transition information (loading -> PP -> Metadata edits -> QCs -> export) would be good candidates too. Maybe we need a more powerful and structured log?
|
1.0
|
Standardisation of PP/QC history comments in the netcdf file - It would be interesting to review/extend the standard commenting operation with regard to QC/PP tests over the `history` field in the NetCDF exported files. This would help, for example, with external reporting scripts (or any other external tool) that need to inspect what has been done within a particular file.
Ideally, the `history` would provide _**complete string representations**_ of what has been done, where applicable. Optionally, the `history` may even have **reversible/parsable string representations** of the operations performed.
**Complete** means that all arguments/options used are explicitly stated/referenced. This is mostly the case, but maybe missing some information that is contained in variable comments only.
**Reversible** means that the `history` comment can be read, processed, parsed and converted to be re-used to obtain the same results. The NetCDF history field is not ideal for that, but a lot of conventions used some kind of string introspection for conversions anyway (e.g. mapping transformations). This kind of functionality may help in reprocessing from the file itself (no DB, no previous session files).
A deeper change in formatting, like timestamps on each operation or toolbox transition information (loading -> PP -> Metadata edits -> QCs -> export) would be good candidates too. Maybe we need a more powerful and structured log?
|
process
|
standardisation of pp qc history comments in the netcdf file it would be interesting to review extend the standard commenting operation in regard of qc pp tests over the history field in the netcdf exported files this would help for example on external reporting scripts or any other external tool that need to inspect what has been done within a particular file ideally the history would provide complete string representations of what has been done where applicable optionally the history may even have reversible parsable string representations of the operations performed complete means that all arguments options used are explicitly stated referenced this is mostly the case but maybe missing some information that is contained in variable comments only reversible means that the history comment can be read processed parsed and converted to be re used to obtain the same results the netcdf history field is not ideal for that but a lot of conventions used some kind of string introspection for conversions anyway e g mapping transformations this kind of functionality may help in reprocessing from the file itself no db no previous session files a deeper change in formatting like timestamps on each operation or toolbox transition information loading pp metadata edits qcs export would be good candidates too maybe we need a more powerful and structured log
| 1
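
One possible shape for the "complete and parsable" history entries discussed above is a timestamped line of routine name plus key=value arguments. The following Python sketch (netCDF4, hypothetical file name and routine) shows appending such an entry to the global history attribute; it illustrates the convention only and is not the toolbox's existing MATLAB code:

```python
from datetime import datetime, timezone
from netCDF4 import Dataset

stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
# hypothetical PP routine and its arguments, written so they can be parsed back
entry = f"{stamp} depthPP(include=[TEMP, PSAL], same_family=false)"

with Dataset("IMOS_example.nc", "a") as ds:
    previous = getattr(ds, "history", "")
    ds.history = (previous + "\n" + entry) if previous else entry
```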
|
12,748
| 15,107,550,207
|
IssuesEvent
|
2021-02-08 15:33:04
|
alphagov/govuk-design-system
|
https://api.github.com/repos/alphagov/govuk-design-system
|
closed
|
Update the Design System to use GOV.UK Frontend v3.11.0
|
process 🕔 hours
|
## What
Once [GOV.UK Frontend 3.11.0 has been released](https://github.com/alphagov/govuk-frontend/issues/2115), update the Design System to use it.
## Why
So that the Design System reflects the latest release of GOV.UK Frontend.
## Who needs to know about this
Whoever is doing the release
## Done when
[add PR when ready]
|
1.0
|
Update the Design System to use GOV.UK Frontend v3.11.0 - ## What
Once [GOV.UK Frontend 3.11.0 has been released](https://github.com/alphagov/govuk-frontend/issues/2115), update the Design System to use it.
## Why
So that the Design System reflects the latest release of GOV.UK Frontend.
## Who needs to know about this
Whoever is doing the release
## Done when
[add PR when ready]
|
process
|
update the design system to use gov uk frontend what once update the design system to use it why so that the design system reflects the latest release of gov uk frontend who needs to know about this whoever is doing the release done when
| 1
|
502,667
| 14,564,330,869
|
IssuesEvent
|
2020-12-17 04:49:45
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
Port it to termux. Please
|
disposition/help wanted disposition/stale kind/enhancement lang/Python priority/P2
|
<!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
Error log
```
http://sprunge.us/PJOVfj
```
### Describe the solution you'd like
A clear and concise description of what you want to happen.
### Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
### Additional context
Add any other context about the feature request here.
|
1.0
|
Port it to termux. Please - <!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
Error log
```
http://sprunge.us/PJOVfj
```
### Describe the solution you'd like
A clear and concise description of what you want to happen.
### Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
### Additional context
Add any other context about the feature request here.
|
non_process
|
port it to termux please please do not post a question here this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers at stackoverflow with grpc tag for questions that specifically need to be answered by grpc team members please ask look for answers at grpc io mailing list issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when error log describe the solution you d like a clear and concise description of what you want to happen describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context about the feature request here
| 0
|
92,166
| 8,353,778,156
|
IssuesEvent
|
2018-10-02 11:13:33
|
ethereumjs/ethereumjs-vm
|
https://api.github.com/repos/ethereumjs/ethereumjs-vm
|
closed
|
Separate API Tests for the VM
|
PR state: WIP type: tests
|
**Introduction / Situation**
In the current [tests](https://github.com/ethereumjs/ethereumjs-vm/tree/master/tests) setup only the official Ethereum state and blockchain [tests](https://github.com/ethereum/tests) are run to test the VM functionality.
This gives a good impression of the current state of compatibility towards the latest HF changes but leaves the outer API of the library more-or-less untested, see the red files in one of the latest [coverage reports](https://coveralls.io/github/ethereumjs/ethereumjs-vm).
This is simply not sufficient, since this currently means that various possible invocations of the API (instantiation with or without ``blockchain``,...) as well as larger parts of the non-VM-executing code parts remain completely untested, see e.g. the poor coverage for the ``runBlock.js`` file (< 20%, *sigh*).
**Task Description**
This can be tackled by adding a second test suite and setup, resembling more traditional tests like e.g. in the [block](https://github.com/ethereumjs/ethereumjs-block/blob/master/tests/block.js) library (super-random example, don't go along too closely) and testing the different instantiation and execution paths for the VM.
Like always with tests even a very basic test setup with just a handful of tests would already make a huge difference and prevent issues like https://github.com/ethereumjs/ethereumjs-vm/issues/303 where functionality completely broke and no one noticed.
On a side note: Besides improving test coverage and library quality, this will also have the benefit of giving a concise and up-to-date overview of library usage and instantiation, since our [examples](https://github.com/ethereumjs/ethereumjs-vm/tree/master/examples) have a tendency to always be out-of-date.
**Goals**
A suite of initially maybe 15-20 additional tests - e.g. in ``tests/api/`` should be developed which tests:
- At least 2-3 additional instantiation paths of the ``VM`` in [index.js](https://coveralls.io/builds/17945610/source?filename=lib/index.js)
- Larger parts of [runBlockchain.js](https://coveralls.io/builds/17945610/source?filename=lib/runBlockchain.js)
- Larger parts of [runBlock.js](https://coveralls.io/builds/17945610/source?filename=lib/runBlock.js)
- 1-2 new instantiation paths in [runTx.js](https://coveralls.io/builds/17945610/source?filename=lib/runTx.js)
- Additional parts of [bloom.js](https://coveralls.io/builds/17945610/source?filename=lib/bloom.js), [fakeBlockChain.js](https://coveralls.io/builds/17945610/source?filename=lib/fakeBlockChain.js), [runJit.js](https://coveralls.io/builds/17945610/source?filename=lib/runJit.js) (scope to be determined)
- ``StateManager`` is under refactoring so not too much emphasis on this, maybe 1-2 tests to start would be nevertheless good
Generally the setup of a high-quality test structure with no code repetition, utility functions and eventually well-structured test data takes precedence over the amount and extent of tests or coverage.
**Skills**
For taking on this issue a solid understanding of the internal working of the Ethereum VM and some broader picture of the execution process within a blockchain environment is needed.
Generally this should be a really rewarding task since this will frankly lead to a deeper understanding of the various parts of the VM on tackling.
|
1.0
|
Separate API Tests for the VM - **Introduction / Situation**
In the current [tests](https://github.com/ethereumjs/ethereumjs-vm/tree/master/tests) setup only the official Ethereum state and blockchain [tests](https://github.com/ethereum/tests) are run to test the VM functionality.
This gives a good impression of the current state of compatibility towards the latest HF changes but leaves the outer API of the library more-or-less untested, see the red files in one of the latest [coverage reports](https://coveralls.io/github/ethereumjs/ethereumjs-vm).
This is simply not sufficient, since this currently means that various possible invocations of the API (instantiation with or without ``blockchain``,...) as well as larger parts of the non-VM-executing code parts remain completely untested, see e.g. the poor coverage for the ``runBlock.js`` file (< 20%, *sigh*).
**Task Description**
This can be tackled by adding a second test suite and setup, resembling more traditional tests like e.g. in the [block](https://github.com/ethereumjs/ethereumjs-block/blob/master/tests/block.js) library (super-random example, don't go along too closely) and testing the different instantiation and execution paths for the VM.
Like always with tests even a very basic test setup with just a handful of tests would already make a huge difference and prevent issues like https://github.com/ethereumjs/ethereumjs-vm/issues/303 where functionality completely broke and no one noticed.
On a side note: Besides improving test coverage and library quality, this will also have the benefit of giving a concise and up-to-date overview of library usage and instantiation, since our [examples](https://github.com/ethereumjs/ethereumjs-vm/tree/master/examples) have a tendency to always be out-of-date.
**Goals**
A suite of initially maybe 15-20 additional tests - e.g. in ``tests/api/`` should be developed which tests:
- At least 2-3 additional instantiation paths of the ``VM`` in [index.js](https://coveralls.io/builds/17945610/source?filename=lib/index.js)
- Larger parts of [runBlockchain.js](https://coveralls.io/builds/17945610/source?filename=lib/runBlockchain.js)
- Larger parts of [runBlock.js](https://coveralls.io/builds/17945610/source?filename=lib/runBlock.js)
- 1-2 new instantiation paths in [runTx.js](https://coveralls.io/builds/17945610/source?filename=lib/runTx.js)
- Additional parts of [bloom.js](https://coveralls.io/builds/17945610/source?filename=lib/bloom.js), [fakeBlockChain.js](https://coveralls.io/builds/17945610/source?filename=lib/fakeBlockChain.js), [runJit.js](https://coveralls.io/builds/17945610/source?filename=lib/runJit.js) (scope to be determined)
- ``StateManager`` is under refactoring, so not too much emphasis on this; maybe 1-2 tests to start would nevertheless be good
Generally, setting up a high-quality test structure with no code repetition, utility functions, and eventually well-structured test data takes precedence over the amount and extent of tests or coverage.
**Skills**
For taking on this issue, a solid understanding of the internal workings of the Ethereum VM and some broader picture of the execution process within a blockchain environment is needed.
Generally this should be a really rewarding task, since tackling it will lead to a deeper understanding of the various parts of the VM.
|
non_process
|
separate api tests for the vm introduction situation in the current setup only the official ethereum state and blockchain are run to test the vm functionality this gives a good impression of the current state of compatibility towards the latest hf changes but leaves the outer api of the library more or less untested see the red files in one of the latest this is simply not sufficient since this currently means that various possible invocations of the api instantiation with or without blockchain as well as larger parts of the non vm executing code parts remain completely untested see e g the poor coverage for the runblock js file sigh task description this can be tackled by adding a second test suite and setup resembling more traditional tests like e g in the library super random example don t go along too closely and testing the different instantiation and execution paths for the vm like always with tests even a very basic test setup with just a handful of tests would already make a huge difference and prevent issues like where functionality completely broke and no one noticed on a side note beside improve test coverage and library quality this will also have the benefit of giving a comprise and up to date overview on library usage and instantiation since our have a tendency to always be out of date goals a suite of initially maybe additional tests e g in tests api should be developed which tests at least additional instantiation paths of the vm in larger parts of larger parts of new instantiation paths in additional parts of scope to be determined statemanager is under refactoring so not too much emphasis on this maybe tests to start would be nevertheless good generally the setup of a high quality test structure with non code repetition utility functions and eventually good structured test data has precedence over the amount and extent of tests or coverage skills for taking on this issue a solid understanding of the internal working of the ethereum vm and some broader picture of the execution process within a blockchain environment is needed generally this should be a really rewarding task since this will frankly lead to a deeper understanding of the various parts of the vm on tackling
| 0
|
4,204
| 7,164,576,204
|
IssuesEvent
|
2018-01-29 11:43:48
|
Incubaid/crm
|
https://api.github.com/repos/Incubaid/crm
|
closed
|
Control Dashboard to watch fundraising progress
|
process_wontfix
|
Create a configurable dashboard.
Dashboard should show
- progress bar of amount raised vs target amount
- amount of unclosed leads
- counter of days remaining
1. Dashboard settings:
- Total amount needed
- End date for the counter
- Field from the deals table to count in the counter
|
1.0
|
Control Dashboard to watch fundraising progress - Create a configurable dashboard.
Dashboard should show
- progress bar of amount raised vs target amount
- amount of unclosed leads
- counter of days remaining
1. Dashboard settings:
- Total amount needed
- End date for the counter
- Field from the deals table to count in the counter
|
process
|
control dashboard to watch fundraising progress create a configurable dashboard dashboard should show progress bar of amount raised vs target amount amount of unclosed leads counter of days remaining dashboard settings total amount needed date end for the counter field from the deals table to count in the counter
| 1
|
12,848
| 15,228,956,921
|
IssuesEvent
|
2021-02-18 12:14:52
|
digitalmethodsinitiative/4cat
|
https://api.github.com/repos/digitalmethodsinitiative/4cat
|
closed
|
Integrate memespector
|
(mostly) back-end big enhancement processors
|
Or something [memespector](https://github.com/amintz/memespector-python)-like. The Google Vision API could be useful as a post-processor for the 'Top Images' analysis.
One issue is that this requires a Google API key, but we could make this a setting for the post-processor so the user supplies this by themselves. We could also allow users to store their API credentials in their user account. All of that is a bit of a security hole so we should definitely fix #16 first but we can be responsible with this and have users make an informed choice of whether they want to (temporarily) share their credentials or not.
|
1.0
|
Integrate memespector - Or something [memespector](https://github.com/amintz/memespector-python)-like. The Google Vision API could be useful as a post-processor for the 'Top Images' analysis.
One issue is that this requires a Google API key, but we could make this a setting for the post-processor so the user supplies this by themselves. We could also allow users to store their API credentials in their user account. All of that is a bit of a security hole so we should definitely fix #16 first but we can be responsible with this and have users make an informed choice of whether they want to (temporarily) share their credentials or not.
|
process
|
integrate memespector or something the google vision api could be useful as a post processor for the top images analysis one issue is that this requires a google api key but we could make this a setting for the post processor so the user supplies this by themselves we could also allow users to store their api credentials in their user account all of that is a bit of a security hole so we should definitely fix first but we can be responsible with this and have users make an informed choice of whether they want to temporarily share their credentials or not
| 1
|
21,511
| 29,799,308,807
|
IssuesEvent
|
2023-06-16 06:46:50
|
parca-dev/parca-agent
|
https://api.github.com/repos/parca-dev/parca-agent
|
closed
|
process maps: Check why agent fails to find referred mapped object
|
bug help wanted P0 area/process-mapping
|
```txt
level=warn ts=2022-08-04T12:14:20.873202543Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/26597/root/usr/lib/libexpat.so.1.8.8 err="failed to open elf: open /proc/26597/root/usr/lib/libexpat.so.1.8.8: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974094999Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libm.so.6 err="failed to open elf: open /proc/72167/root/usr/lib/libm.so.6: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974109479Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libexpat.so.1.8.8 err="failed to open elf: open /proc/72167/root/usr/lib/libexpat.so.1.8.8: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974124849Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libdbus-1.so.3.32.0 err="failed to open elf: open /proc/72167/root/usr/lib/libdbus-1.so.3.32.0: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974139679Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libdrm.so.2.4.0 err="failed to open elf: open /proc/72167/root/usr/lib/libdrm.so.2.4.0: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974155249Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libgio-2.0.so.0.7200.3 err="failed to open elf: open /proc/72167/root/usr/lib/libgio-2.0.so.0.7200.3: no such file or directory"
level=warn ts=2022-08-04T12:13:50.9741707Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libcups.so.2 err="failed to open elf: open /proc/72167/root/usr/lib/libcups.so.2: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97418925Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libatk-bridge-2.0.so.0.0.0 err="failed to open elf: open /proc/72167/root/usr/lib/libatk-bridge-2.0.so.0.0.0: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97420458Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libatk-1.0.so.0.23809.1 err="failed to open elf: open /proc/72167/root/usr/lib/libatk-1.0.so.0.23809.1: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97422049Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libnspr4.so err="failed to open elf: open /proc/72167/root/usr/lib/libnspr4.so: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97423528Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libsmime3.so err="failed to open elf: open /proc/72167/root/usr/lib/libsmime3.so: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97425043Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libnssutil3.so err="failed to open elf: open /proc/72167/root/usr/lib/libnssutil3.so: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97426537Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libnss3.so err="failed to open elf: open /proc/72167/root/usr/lib/libnss3.so: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974281051Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libglib-2.0.so.0.7200.3 err="failed to open elf: open /proc/72167/root/usr/lib/libglib-2.0.so.0.7200.3: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974295901Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libgobject-2.0.so.0.7200.3 err="failed to open elf: open /proc/72167/root/usr/lib/libgobject-2.0.so.0.7200.3: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974311351Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libpthread.so.0 err="failed to open elf: open /proc/72167/root/usr/lib/libpthread.so.0: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974326031Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libdl.so.2 err="failed to open elf: open /proc/72167/root/usr/lib/libdl.so.2: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974340531Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/ld-linux-x86-64.so.2 err="failed to open elf: open /proc/72167/root/usr/lib/ld-linux-x86-64.so.2: no such file or directory"
```
|
1.0
|
process maps: Check why agent fails to find referred mapped object - ```txt
level=warn ts=2022-08-04T12:14:20.873202543Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/26597/root/usr/lib/libexpat.so.1.8.8 err="failed to open elf: open /proc/26597/root/usr/lib/libexpat.so.1.8.8: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974094999Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libm.so.6 err="failed to open elf: open /proc/72167/root/usr/lib/libm.so.6: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974109479Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libexpat.so.1.8.8 err="failed to open elf: open /proc/72167/root/usr/lib/libexpat.so.1.8.8: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974124849Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libdbus-1.so.3.32.0 err="failed to open elf: open /proc/72167/root/usr/lib/libdbus-1.so.3.32.0: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974139679Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libdrm.so.2.4.0 err="failed to open elf: open /proc/72167/root/usr/lib/libdrm.so.2.4.0: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974155249Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libgio-2.0.so.0.7200.3 err="failed to open elf: open /proc/72167/root/usr/lib/libgio-2.0.so.0.7200.3: no such file or directory"
level=warn ts=2022-08-04T12:13:50.9741707Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libcups.so.2 err="failed to open elf: open /proc/72167/root/usr/lib/libcups.so.2: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97418925Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libatk-bridge-2.0.so.0.0.0 err="failed to open elf: open /proc/72167/root/usr/lib/libatk-bridge-2.0.so.0.0.0: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97420458Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libatk-1.0.so.0.23809.1 err="failed to open elf: open /proc/72167/root/usr/lib/libatk-1.0.so.0.23809.1: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97422049Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libnspr4.so err="failed to open elf: open /proc/72167/root/usr/lib/libnspr4.so: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97423528Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libsmime3.so err="failed to open elf: open /proc/72167/root/usr/lib/libsmime3.so: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97425043Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libnssutil3.so err="failed to open elf: open /proc/72167/root/usr/lib/libnssutil3.so: no such file or directory"
level=warn ts=2022-08-04T12:13:50.97426537Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libnss3.so err="failed to open elf: open /proc/72167/root/usr/lib/libnss3.so: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974281051Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libglib-2.0.so.0.7200.3 err="failed to open elf: open /proc/72167/root/usr/lib/libglib-2.0.so.0.7200.3: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974295901Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libgobject-2.0.so.0.7200.3 err="failed to open elf: open /proc/72167/root/usr/lib/libgobject-2.0.so.0.7200.3: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974311351Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libpthread.so.0 err="failed to open elf: open /proc/72167/root/usr/lib/libpthread.so.0: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974326031Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/libdl.so.2 err="failed to open elf: open /proc/72167/root/usr/lib/libdl.so.2: no such file or directory"
level=warn ts=2022-08-04T12:13:50.974340531Z caller=maps.go:106 msg="failed to read object build ID" object=/proc/72167/root/usr/lib/ld-linux-x86-64.so.2 err="failed to open elf: open /proc/72167/root/usr/lib/ld-linux-x86-64.so.2: no such file or directory"
```
|
process
|
process maps check why agent fails to find referred mapped object txt level warn ts caller maps go msg failed to read object build id object proc root usr lib libexpat so err failed to open elf open proc root usr lib libexpat so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libm so err failed to open elf open proc root usr lib libm so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libexpat so err failed to open elf open proc root usr lib libexpat so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libdbus so err failed to open elf open proc root usr lib libdbus so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libdrm so err failed to open elf open proc root usr lib libdrm so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libgio so err failed to open elf open proc root usr lib libgio so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libcups so err failed to open elf open proc root usr lib libcups so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libatk bridge so err failed to open elf open proc root usr lib libatk bridge so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libatk so err failed to open elf open proc root usr lib libatk so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib so err failed to open elf open proc root usr lib so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib so err failed to open elf open proc root usr lib so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib so err failed to open elf open proc root usr lib so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib so err failed to open elf open proc root usr lib so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libglib so err failed to open elf open proc root usr lib libglib so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libgobject so err failed to open elf open proc root usr lib libgobject so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libpthread so err failed to open elf open proc root usr lib libpthread so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib libdl so err failed to open elf open proc root usr lib libdl so no such file or directory level warn ts caller maps go msg failed to read object build id object proc root usr lib ld linux so err failed to open elf open proc root usr lib ld linux so no such file or directory
| 1
|
22,875
| 10,799,627,647
|
IssuesEvent
|
2019-11-06 12:37:09
|
MicrosoftDocs/CloudAppSecurityDocs
|
https://api.github.com/repos/MicrosoftDocs/CloudAppSecurityDocs
|
closed
|
Can iboss be added to the notes please
|
Pri2 assigned-to-author cloud-app-security/svc duplicate triaged
|
iboss also can automatically block unsanctioned apps using the MCAS API, this can be seen using the link below.
https://docs.microsoft.com/en-us/cloud-app-security/iboss-integration
Could the note please be updated to include the iboss integration as well as zscaler to give customers visibility of this feature?
Thank you.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b77ab1f7-e882-65ed-8763-2ca0e7f2d3a6
* Version Independent ID: decde69d-a97a-6eb5-c98a-a7f93fe80793
* Content: [Blocking discovered apps - Cloud App Security](https://docs.microsoft.com/en-us/cloud-app-security/governance-discovery#feedback)
* Content Source: [CloudAppSecurityDocs/governance-discovery.md](https://github.com/Microsoft/CloudAppSecurityDocs/blob/master/CloudAppSecurityDocs/governance-discovery.md)
* Service: **cloud-app-security**
* GitHub Login: @shsagir
* Microsoft Alias: **shsagir**
|
True
|
Can iboss be added to the notes please - iboss also can automatically block unsanctioned apps using the MCAS API, this can be seen using the link below.
https://docs.microsoft.com/en-us/cloud-app-security/iboss-integration
Could the note please be updated to include the iboss integration as well as zscaler to give customers visibility of this feature?
Thank you.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b77ab1f7-e882-65ed-8763-2ca0e7f2d3a6
* Version Independent ID: decde69d-a97a-6eb5-c98a-a7f93fe80793
* Content: [Blocking discovered apps - Cloud App Security](https://docs.microsoft.com/en-us/cloud-app-security/governance-discovery#feedback)
* Content Source: [CloudAppSecurityDocs/governance-discovery.md](https://github.com/Microsoft/CloudAppSecurityDocs/blob/master/CloudAppSecurityDocs/governance-discovery.md)
* Service: **cloud-app-security**
* GitHub Login: @shsagir
* Microsoft Alias: **shsagir**
|
non_process
|
can iboss be added to the notes please iboss also can automatically block unsanctioned apps using the mcas api this can be seen using the link below could the note please be updated to include the iboss integration as well as zscaler to give customers visibility of this feature thank you document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service cloud app security github login shsagir microsoft alias shsagir
| 0
|
8,840
| 11,947,819,722
|
IssuesEvent
|
2020-04-03 10:37:51
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
known problem with 2020/02/29?
|
log-processing
|
I have installed goaccess under MacOS for the first time and everything works fine. Unfortunately all reports end on 29.2.2020:
"Last Updated: 2020-03-26 12:38:52 +0100
Dashboard
OVERALL ANALYZED REQUESTS07/FEB/2019 — 29/FEB/2020"
|
1.0
|
known problem with 2020/02/29? - I have installed goaccess under MacOS for the first time and everything works fine. Unfortunately all reports end on 29.2.2020:
"Last Updated: 2020-03-26 12:38:52 +0100
Dashboard
OVERALL ANALYZED REQUESTS07/FEB/2019 — 29/FEB/2020"
|
process
|
known problem with i have installed goaccess under macos for the first time and everything works fine unfortunately all reports end on last updated dashboard overall analyzed feb — feb
| 1
|