| Column | Dtype | Values |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | lengths 19 to 19 |
| repo | stringlengths | lengths 7 to 112 |
| repo_url | stringlengths | lengths 36 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | lengths 1 to 744 |
| labels | stringlengths | lengths 4 to 574 |
| body | stringlengths | lengths 9 to 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | lengths 96 to 211k |
| label | stringclasses | 2 values |
| text | stringlengths | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |
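A minimal sketch of loading and inspecting a file with this schema, assuming a pandas-readable CSV export (the `Unnamed: 0` column is the usual artifact of writing a DataFrame index to CSV); the filename `issues.csv` is hypothetical:

```python
# Minimal inspection sketch; "issues.csv" is a hypothetical filename.
import pandas as pd

df = pd.read_csv("issues.csv")

print(df.dtypes)                    # column dtypes, matching the schema table above
print(df["label"].value_counts())   # the two classes: process / non_process
print(df["binary_label"].unique())  # 0 and 1
```

The sample rows below show one record per field, in schema order.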
- Unnamed: 0: 765,103
- id: 26,833,209,891
- type: IssuesEvent
- created_at: 2023-02-02 17:24:50
- repo: zephyrproject-rtos/zephyr
- repo_url: https://api.github.com/repos/zephyrproject-rtos/zephyr
- action: closed
- title: Problem seen with touch screen on RT1170 when running the LVGL sample
- labels: bug priority: low platform: NXP area: Kscan
- body:
**Describe the bug** Touch screen is not functional on RT1170 EVK when running the LVGL sample. - What target platform are you using? : RT1170 EVK - What have you tried to diagnose or workaround this issue? : run `samples/subsys/display/lvgl` - Is this a regression? No **To Reproduce** Steps to reproduce the behavior: 1. west build -p -b mimxrt1170_evk_cm7 samples/subsys/display/lvgl 2. west flash 3. Press the button on the panel, nothing happens **Expected behavior** The button on the panel should respond to touch presses. **Environment (please complete the following information):** - OS: Linux - Toolchain: Zephyr SDK - Version used: Zephyr 3.3 rc1
- index: 1.0
- text_combine:
Problem seen with touch screen on RT1170 when running the LVGL sample - **Describe the bug** Touch screen is not functional on RT1170 EVK when running the LVGL sample. - What target platform are you using? : RT1170 EVK - What have you tried to diagnose or workaround this issue? : run `samples/subsys/display/lvgl` - Is this a regression? No **To Reproduce** Steps to reproduce the behavior: 1. west build -p -b mimxrt1170_evk_cm7 samples/subsys/display/lvgl 2. west flash 3. Press the button on the panel, nothing happens **Expected behavior** The button on the panel should respond to touch presses. **Environment (please complete the following information):** - OS: Linux - Toolchain: Zephyr SDK - Version used: Zephyr 3.3 rc1
- label: non_process
- text:
problem seen with touch screen on when running the lvgl sample describe the bug touch screen is not functional on evk when running the lvgl sample what target platform are you using evk what have you tried to diagnose or workaround this issue run samples subsys display lvgl is this a regression no to reproduce steps to reproduce the behavior west build p b evk samples subsys display lvgl west flash press the button on the panel nothing happens expected behavior the button on the panel should respond to touch presses environment please complete the following information os linux toolchain zephyr sdk version used zephyr
- binary_label: 0
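Comparing fields within this row (and the rows below), `text_combine` is evidently `title` + " - " + `body`, and `text` looks like a lowercased copy with markdown links and URLs stripped, punctuation turned into spaces, and digit-bearing tokens dropped ("RT1170", "3.3", and "rc1" all vanish above). A rough sketch of that inferred cleaning; an approximation, not the dataset's actual pipeline (details such as checkbox markers are handled differently there):

```python
# Approximate reconstruction of the derived columns, inferred from the sample rows.
import re

def combine(title: str, body: str) -> str:
    # text_combine in the rows is the title and body joined with " - "
    return f"{title} - {body}"

def clean(combined: str) -> str:
    # Rough approximation of the text column: lowercase, strip markdown links
    # and bare URLs, turn punctuation into spaces, then drop every token that
    # still contains a digit.
    s = combined.lower()
    s = re.sub(r"\[[^\]]*\]\([^)]*\)", " ", s)  # markdown links, anchor text included
    s = re.sub(r"https?://\S+", " ", s)         # bare URLs
    s = re.sub(r"[^a-z0-9]+", " ", s)           # punctuation -> space
    return " ".join(w for w in s.split() if w.isalpha())

BINARY_LABEL = {"process": 1, "non_process": 0}  # label -> binary_label mapping in the rows
```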
- Unnamed: 0: 725
- id: 3,213,348,077
- type: IssuesEvent
- created_at: 2015-10-06 19:28:21
- repo: nationalparkservice/places-data
- repo_url: https://api.github.com/repos/nationalparkservice/places-data
- action: closed
- title: Add POI type: Food Box
- labels: pending-other-process schema
- body: Parks like SEKI have first-come, first-serve boxes that can be shared by multiple camping groups to keep food away from bears: http://www.nps.gov/seki/planyourvisit/bear_bc.htm http://www.nps.gov/seki/planyourvisit/upload/FoodStorageBoxes_PageSize_v2.pdf
- index: 1.0
- text_combine: Add POI type: Food Box - Parks like SEKI have first-come, first-serve boxes that can be shared by multiple camping groups to keep food away from bears: http://www.nps.gov/seki/planyourvisit/bear_bc.htm http://www.nps.gov/seki/planyourvisit/upload/FoodStorageBoxes_PageSize_v2.pdf
- label: process
- text: add poi type food box parks like seki have first come first serve boxes that can be shared by multiple camping groups to keep food away from bears
- binary_label: 1
- Unnamed: 0: 21,558
- id: 29,893,070,875
- type: IssuesEvent
- created_at: 2023-06-21 00:46:43
- repo: bitfocus/companion-module-requests
- repo_url: https://api.github.com/repos/bitfocus/companion-module-requests
- action: opened
- title: Dante Domain Manager API
- labels: NOT YET PROCESSED
- body:
- [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: [Audinate Dante Domain Manager](https://www.audinate.com/products/software/dante-domain-manager) What you would like to be able to make it do from Companion: Check domain & device status. Check, set & clear device subscriptions. Direct links or attachments to the ethernet control protocol or API: [AUD-MAN_Managed_API_v1.3.pdf](https://github.com/bitfocus/companion-module-requests/files/11810775/AUD-MAN_Managed_API_v1.3.pdf)
- index: 1.0
- text_combine:
Dante Domain Manager API - - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: [Audinate Dante Domain Manager](https://www.audinate.com/products/software/dante-domain-manager) What you would like to be able to make it do from Companion: Check domain & device status. Check, set & clear device subscriptions. Direct links or attachments to the ethernet control protocol or API: [AUD-MAN_Managed_API_v1.3.pdf](https://github.com/bitfocus/companion-module-requests/files/11810775/AUD-MAN_Managed_API_v1.3.pdf)
- label: process
- text:
dante domain manager api i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control what you would like to be able to make it do from companion check domain device status check set clear device subscriptions direct links or attachments to the ethernet control protocol or api
- binary_label: 1
- Unnamed: 0: 25,849
- id: 4,178,006,854
- type: IssuesEvent
- created_at: 2016-06-22 03:48:52
- repo: linz/QGIS-AIMS-Plugin
- repo_url: https://api.github.com/repos/linz/QGIS-AIMS-Plugin
- action: closed
- title: Roads - Chatham Islands
- labels: bug effort/small test
- body: When you navigate to the Chatham Islands the roads appear in a different location to the AIMS features (addresses).
- index: 1.0
- text_combine: Roads - Chatham Islands - When you navigate to the Chatham Islands the roads appear in a different location to the AIMS features (addresses).
- label: non_process
- text: roads chatham islands when you navigate to the chatham islands the roads appear in a different location to the aims features addresses
- binary_label: 0
- Unnamed: 0: 6,694
- id: 9,813,700,372
- type: IssuesEvent
- created_at: 2019-06-13 08:36:23
- repo: opengeospatial/CityGML-3.0CM
- repo_url: https://api.github.com/repos/opengeospatial/CityGML-3.0CM
- action: closed
- title: Does ReadMe Reflect Intentions of SWG?
- labels: SWG Process
- body: The initial readme file for this repo is a statement of purpose written by just of one of the chairs (Steve = 3DXScape). The readme should reflect consensus among the SWG participants. More people should express their opinions.
- index: 1.0
- text_combine: Does ReadMe Reflect Intentions of SWG? - The initial readme file for this repo is a statement of purpose written by just of one of the chairs (Steve = 3DXScape). The readme should reflect consensus among the SWG participants. More people should express their opinions.
- label: process
- text: does readme reflect intentions of swg the initial readme file for this repo is a statement of purpose written by just of one of the chairs steve the readme should reflect consensus among the swg participants more people should express their opinions
- binary_label: 1
- Unnamed: 0: 18,125
- id: 24,166,032,967
- type: IssuesEvent
- created_at: 2022-09-22 15:06:11
- repo: gradle/gradle
- repo_url: https://api.github.com/repos/gradle/gradle
- action: closed
- title: When I comment out the non-incremental processor and need to delete the build folder to make another incremental processor take effect again.
- labels: a:bug stale in:annotation-processing
- body:
My English is not good, so the expression is not very clear. https://github.com/fanmingyi/bugAptIncremental My project has an incremental processor `dagger` and a non-incremental processor `arouter`. When I only use dagger, the annotation processor increment is normal. Every time the relevant files are changed AnnotateStudy/app/build/generated/source/kapt will change accordingly, instead of deleting the entire folder. ```groovy implementation 'com.google.dagger:dagger:2.35.1' kapt 'com.google.dagger:dagger-compiler:2.35.1' implementation 'com.alibaba:arouter-api:1.5.1' //Use only the dagger //kapt 'com.alibaba:arouter-compiler:1.5.1' ``` When I use the two together, the annotation processor increment is invalid(All the files in the `app/build/generated/source/kapt` folder will be generated when the current relevant code changes). ```groovy implementation 'com.google.dagger:dagger:2.35.1' kapt 'com.google.dagger:dagger-compiler:2.35.1' implementation 'com.alibaba:arouter-api:1.5.1' kapt 'com.alibaba:arouter-compiler:1.5.1' ``` ```kotlin //arouter @Route(path = "/test/my") class MainActivity : AppCompatActivity() { } ``` ```java //dagger @Component interface MyCom { } //Comment out @Component below and recompile, and then check the changes in the app/build/generated/source/kapt folder @Component interface MyCom2 { } ``` When I Comment out the `kapt 'com.alibaba:arouter-compiler:1.5.1'`, the annotation processor increment still fails(Modifying the relevant code will regenerate all classes). ```groovy implementation 'com.google.dagger:dagger:2.35.1' kapt 'com.google.dagger:dagger-compiler:2.35.1' implementation 'com.alibaba:arouter-api:1.5.1' //Comment the code below, //kapt 'com.alibaba:arouter-compiler:1.5.1' ``` When I deleted the app/build folder, the increment of the dagger annotation processor was successful again. I think some cache of the build folder under gradle is causing this problem.
- index: 1.0
- text_combine:
When I comment out the non-incremental processor and need to delete the build folder to make another incremental processor take effect again. - My English is not good, so the expression is not very clear. https://github.com/fanmingyi/bugAptIncremental My project has an incremental processor `dagger` and a non-incremental processor `arouter`. When I only use dagger, the annotation processor increment is normal. Every time the relevant files are changed AnnotateStudy/app/build/generated/source/kapt will change accordingly, instead of deleting the entire folder. ```groovy implementation 'com.google.dagger:dagger:2.35.1' kapt 'com.google.dagger:dagger-compiler:2.35.1' implementation 'com.alibaba:arouter-api:1.5.1' //Use only the dagger //kapt 'com.alibaba:arouter-compiler:1.5.1' ``` When I use the two together, the annotation processor increment is invalid(All the files in the `app/build/generated/source/kapt` folder will be generated when the current relevant code changes). ```groovy implementation 'com.google.dagger:dagger:2.35.1' kapt 'com.google.dagger:dagger-compiler:2.35.1' implementation 'com.alibaba:arouter-api:1.5.1' kapt 'com.alibaba:arouter-compiler:1.5.1' ``` ```kotlin //arouter @Route(path = "/test/my") class MainActivity : AppCompatActivity() { } ``` ```java //dagger @Component interface MyCom { } //Comment out @Component below and recompile, and then check the changes in the app/build/generated/source/kapt folder @Component interface MyCom2 { } ``` When I Comment out the `kapt 'com.alibaba:arouter-compiler:1.5.1'`, the annotation processor increment still fails(Modifying the relevant code will regenerate all classes). ```groovy implementation 'com.google.dagger:dagger:2.35.1' kapt 'com.google.dagger:dagger-compiler:2.35.1' implementation 'com.alibaba:arouter-api:1.5.1' //Comment the code below, //kapt 'com.alibaba:arouter-compiler:1.5.1' ``` When I deleted the app/build folder, the increment of the dagger annotation processor was successful again. I think some cache of the build folder under gradle is causing this problem.
- label: process
- text:
when i comment out the non incremental processor and need to delete the build folder to make another incremental processor take effect again my english is not good so the expression is not very clear my project has an incremental processor dagger and a non incremental processor arouter when i only use dagger the annotation processor increment is normal every time the relevant files are changed annotatestudy app build generated source kapt will change accordingly instead of deleting the entire folder groovy implementation com google dagger dagger kapt com google dagger dagger compiler implementation com alibaba arouter api use only the dagger kapt com alibaba arouter compiler when i use the two together the annotation processor increment is invalid all the files in the app build generated source kapt folder will be generated when the current relevant code changes groovy implementation com google dagger dagger kapt com google dagger dagger compiler implementation com alibaba arouter api kapt com alibaba arouter compiler kotlin arouter route path test my class mainactivity appcompatactivity java dagger component interface mycom comment out component below and recompile and then check the changes in the app build generated source kapt folder component interface when i comment out the kapt com alibaba arouter compiler the annotation processor increment still fails modifying the relevant code will regenerate all classes groovy implementation com google dagger dagger kapt com google dagger dagger compiler implementation com alibaba arouter api comment the code below kapt com alibaba arouter compiler when i deleted the app build folder the increment of the dagger annotation processor was successful again i think some cache of the build folder under gradle is causing this problem
- binary_label: 1
- Unnamed: 0: 13,123
- id: 15,511,834,746
- type: IssuesEvent
- created_at: 2021-03-12 00:24:13
- repo: googleapis/python-monitoring
- repo_url: https://api.github.com/repos/googleapis/python-monitoring
- action: opened
- title: Resolve warnings in docgen
- labels: type: process
- body: We typically treat all docs warnings as errors. Figure out which warnings are in these docs and resolve them. https://github.com/googleapis/python-monitoring/blob/4cdb1ff439154409c94e347dd5f3b6e2bc40e998/synth.py#L105-L106 CC @parthea
- index: 1.0
- text_combine: Resolve warnings in docgen - We typically treat all docs warnings as errors. Figure out which warnings are in these docs and resolve them. https://github.com/googleapis/python-monitoring/blob/4cdb1ff439154409c94e347dd5f3b6e2bc40e998/synth.py#L105-L106 CC @parthea
- label: process
- text: resolve warnings in docgen we typically treat all docs warnings as errors figure out which warnings are in these docs and resolve them cc parthea
- binary_label: 1
- Unnamed: 0: 18,154
- id: 24,191,190,304
- type: IssuesEvent
- created_at: 2022-09-23 17:45:22
- repo: dtcenter/MET
- repo_url: https://api.github.com/repos/dtcenter/MET
- action: opened
- title: Add support for new point_weight_flag to the Point-Stat and Ensemble-Stat tools
- labels: type: new feature requestor: UK Met Office alert: NEED MORE DEFINITION alert: NEED ACCOUNT KEY alert: NEED PROJECT ASSIGNMENT MET: PreProcessing Tools (Point) priority: high
- body:
## Describe the New Feature ## The MET Grid-Stat tool supports a configuration option named [grid_weight_flag](https://met.readthedocs.io/en/develop/Users_Guide/config_options.html?highlight=grid_weight_flag#grid-weight-flag) to define weights to be applied to each grid point when aggregating statistics across multiple grid points. The grid weighting is based on the true area contained within each grid point, giving larger weight to grid boxes with larger areas. This task is to develop a method for weighting the aggregation of point observations. And the same basic motivation applies, wanting to avoid overemphasizing areas with dense observations, and underemphasizing areas with sparse observations. This request originally arose when aggregating SEEPS for individual stations into a spatial summary. The UK Met Office defines weights for that aggregation based on the spatial density of those stations. However, those weights are pre-defined and static since the stations they use are consistent run-to-run. Recommend that when implementing this in MET, the weights NOT be static. Instead, dynamically compute them for each verification task based on the current set of observations available. The number of and location of point observations can change dramatically run-to-run based on the masking region, variable, and data source. That being said, re-defining them in each run would likely be slower. Recommend that when specifying a `mask.sid` list of station id's we provide an option to provide a weight to be used for that station. The tasks for this issue include: - Collaborating with @RachelNorth and @mpm-meto to clearly define the algorithm for defining these weights. - Add `point_weight_flag` configuration option for each verification task in Point-Stat and Ensemble-Stat with a default value of `NONE`, meaning apply a weight of 1 to all points. - Support setting `point_weight_flag` equal to `DENSITY` to define the weights on the fly based on station location. - When `mask.sid` is set to a file and station names are read from that file, add an option for a raw weight to be specified for that station. When aggregating across multiple stations, note that the *true weight* should be computed as the *raw station weight* divided by the sum of the weights of all points. - Set the existing weights in the PairDataPoint and PairDataEnsemble classes and ensure that those weights are used correctly in the computation of statistics. - In particular, ensure that the these weights are used in the aggregation of SEEPS_MPR data to compute the aggregated SEEPS data. ### Acceptance Testing ### *List input data types and sources.* *Describe tests required for new functionality.* ### Time Estimate ### 1 week. ### Sub-Issues ### Consider breaking the new feature down into sub-issues. 
- [ ] *Add a checkbox for each sub-issue here.* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) - [ ] Need a METplus issue to handle configuring the new `point_weight_flag` option. - [ ] If changes to any output line types are made, would need a METdataio issue to handle those changes. ## New Feature Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
- index: 1.0
- text_combine:
Add support for new point_weight_flag to the Point-Stat and Ensemble-Stat tools - ## Describe the New Feature ## The MET Grid-Stat tool supports a configuration option named [grid_weight_flag](https://met.readthedocs.io/en/develop/Users_Guide/config_options.html?highlight=grid_weight_flag#grid-weight-flag) to define weights to be applied to each grid point when aggregating statistics across multiple grid points. The grid weighting is based on the true area contained within each grid point, giving larger weight to grid boxes with larger areas. This task is to develop a method for weighting the aggregation of point observations. And the same basic motivation applies, wanting to avoid overemphasizing areas with dense observations, and underemphasizing areas with sparse observations. This request originally arose when aggregating SEEPS for individual stations into a spatial summary. The UK Met Office defines weights for that aggregation based on the spatial density of those stations. However, those weights are pre-defined and static since the stations they use are consistent run-to-run. Recommend that when implementing this in MET, the weights NOT be static. Instead, dynamically compute them for each verification task based on the current set of observations available. The number of and location of point observations can change dramatically run-to-run based on the masking region, variable, and data source. That being said, re-defining them in each run would likely be slower. Recommend that when specifying a `mask.sid` list of station id's we provide an option to provide a weight to be used for that station. The tasks for this issue include: - Collaborating with @RachelNorth and @mpm-meto to clearly define the algorithm for defining these weights. - Add `point_weight_flag` configuration option for each verification task in Point-Stat and Ensemble-Stat with a default value of `NONE`, meaning apply a weight of 1 to all points. - Support setting `point_weight_flag` equal to `DENSITY` to define the weights on the fly based on station location. - When `mask.sid` is set to a file and station names are read from that file, add an option for a raw weight to be specified for that station. When aggregating across multiple stations, note that the *true weight* should be computed as the *raw station weight* divided by the sum of the weights of all points. - Set the existing weights in the PairDataPoint and PairDataEnsemble classes and ensure that those weights are used correctly in the computation of statistics. - In particular, ensure that the these weights are used in the aggregation of SEEPS_MPR data to compute the aggregated SEEPS data. ### Acceptance Testing ### *List input data types and sources.* *Describe tests required for new functionality.* ### Time Estimate ### 1 week. ### Sub-Issues ### Consider breaking the new feature down into sub-issues. 
- [ ] *Add a checkbox for each sub-issue here.* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) - [ ] Need a METplus issue to handle configuring the new `point_weight_flag` option. - [ ] If changes to any output line types are made, would need a METdataio issue to handle those changes. ## New Feature Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Linked issues** Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
- label: process
- text:
add support for new point weight flag to the point stat and ensemble stat tools describe the new feature the met grid stat tool supports a configuration option named to define weights to be applied to each grid point when aggregating statistics across multiple grid points the grid weighting is based on the true area contained within each grid point giving larger weight to grid boxes with larger areas this task is to develop a method for weighting the aggregation of point observations and the same basic motivation applies wanting to avoid overemphasizing areas with dense observations and underemphasizing areas with sparse observations this request originally arose when aggregating seeps for individual stations into a spatial summary the uk met office defines weights for that aggregation based on the spatial density of those stations however those weights are pre defined and static since the stations they use are consistent run to run recommend that when implementing this in met the weights not be static instead dynamically compute them for each verification task based on the current set of observations available the number of and location of point observations can change dramatically run to run based on the masking region variable and data source that being said re defining them in each run would likely be slower recommend that when specifying a mask sid list of station id s we provide an option to provide a weight to be used for that station the tasks for this issue include collaborating with rachelnorth and mpm meto to clearly define the algorithm for defining these weights add point weight flag configuration option for each verification task in point stat and ensemble stat with a default value of none meaning apply a weight of to all points support setting point weight flag equal to density to define the weights on the fly based on station location when mask sid is set to a file and station names are read from that file add an option for a raw weight to be specified for that station when aggregating across multiple stations note that the true weight should be computed as the raw station weight divided by the sum of the weights of all points set the existing weights in the pairdatapoint and pairdataensemble classes and ensure that those weights are used correctly in the computation of statistics in particular ensure that the these weights are used in the aggregation of seeps mpr data to compute the aggregated seeps data acceptance testing list input data types and sources describe tests required for new functionality time estimate week sub issues consider breaking the new feature down into sub issues add a checkbox for each sub issue here relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components need a metplus issue to handle configuring the new point weight flag option if changes to any output line types are made would need a metdataio issue to handle those changes new feature checklist see the for details complete the issue definition above including the time estimate 
and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
- binary_label: 1
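The body in the record above spells out one concrete rule: the true weight of a station is its raw weight divided by the sum of the weights of all points. A tiny illustration of that normalization and its use in a weighted mean; the helper names are hypothetical, this is not MET code:

```python
def normalize_weights(raw):
    # true weight = raw station weight / sum of the weights of all points,
    # as quoted in the issue body above
    total = sum(raw)
    return [w / total for w in raw]

def weighted_mean(values, raw_weights):
    # aggregate point statistics with the normalized station weights
    w = normalize_weights(raw_weights)
    return sum(wi * vi for wi, vi in zip(w, values))

# e.g. three stations with raw weights 2, 1, 1 contribute 50%, 25%, 25%
assert normalize_weights([2, 1, 1]) == [0.5, 0.25, 0.25]
```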
- Unnamed: 0: 1,778
- id: 4,500,099,934
- type: IssuesEvent
- created_at: 2016-09-01 02:26:46
- repo: metabase/metabase
- repo_url: https://api.github.com/repos/metabase/metabase
- action: closed
- title: Sort mostly empty columns last in QP results
- labels: No Longer Relevant Proposal Query Processor
- body:
Rather than by name as the fallback sort. It would be preferable to move them to the end. This would be easy enough to do with the new `core.logic` column ordering logic, so long as we knew which ones to move... We'd could iterate over the results to see which ones were "mostly empty", meaning our DB results couldn't be truly lazy sequences anymore. Probably still a net win in some cases because we'll spend less time serializing the results to JSON, but it would increase memory usage even for result sequences where there were no empty columns. Or we could count null values during sync and mark "mostly null" columns... we could keep laziness, yay :heart_eyes_cat: Or if we *did* do some counting maybe we could just drop the columns that are completely blank for a given result set.
- index: 1.0
- text_combine:
Sort mostly empty columns last in QP results - Rather than by name as the fallback sort. It would be preferable to move them to the end. This would be easy enough to do with the new `core.logic` column ordering logic, so long as we knew which ones to move... We'd could iterate over the results to see which ones were "mostly empty", meaning our DB results couldn't be truly lazy sequences anymore. Probably still a net win in some cases because we'll spend less time serializing the results to JSON, but it would increase memory usage even for result sequences where there were no empty columns. Or we could count null values during sync and mark "mostly null" columns... we could keep laziness, yay :heart_eyes_cat: Or if we *did* do some counting maybe we could just drop the columns that are completely blank for a given result set.
- label: process
- text:
sort mostly empty columns last in qp results rather than by name as the fallback sort it would be preferable to move them to the end this would be easy enough to do with the new core logic column ordering logic so long as we knew which ones to move we d could iterate over the results to see which ones were mostly empty meaning our db results couldn t be truly lazy sequences anymore probably still a net win in some cases because we ll spend less time serializing the results to json but it would increase memory usage even for result sequences where there were no empty columns or we could count null values during sync and mark mostly null columns we could keep laziness yay heart eyes cat or if we did do some counting maybe we could just drop the columns that are completely blank for a given result set
- binary_label: 1
- Unnamed: 0: 44,409
- id: 2,904,745,320
- type: IssuesEvent
- created_at: 2015-06-18 19:47:46
- repo: JMurk/Utility_Viewer_Issues
- repo_url: https://api.github.com/repos/JMurk/Utility_Viewer_Issues
- action: closed
- title: Utility Viewer - Identify Tool Enhancement
- labels: enhancement high priority
- body: Update the identify tool so that it automatically identifies all visible layers without having to select layers from the drop down.
- index: 1.0
- text_combine: Utility Viewer - Identify Tool Enhancement - Update the identify tool so that it automatically identifies all visible layers without having to select layers from the drop down.
- label: non_process
- text: utility viewer identify tool enhancement update the identify tool so that it automatically identifies all visible layers without having to select layers from the drop down
- binary_label: 0
- Unnamed: 0: 19,785
- id: 26,163,972,914
- type: IssuesEvent
- created_at: 2023-01-01 02:00:08
- repo: lizhihao6/get-daily-arxiv-noti
- repo_url: https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
- action: opened
- title: New submissions for Thu, 29 Dec 22
- labels: event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
- body:
## Keyword: events ### Position-Aware Contrastive Alignment for Referring Image Segmentation - **Authors:** Bo Chen, Zhiwei Hu, Zhilong Ji, Jinfeng Bai, Wangmeng Zuo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.13419 - **Pdf link:** https://arxiv.org/pdf/2212.13419 - **Abstract** Referring image segmentation aims to segment the target object described by a given natural language expression. Typically, referring expressions contain complex relationships between the target and its surrounding objects. The main challenge of this task is to understand the visual and linguistic content simultaneously and to find the referred object accurately among all instances in the image. Currently, the most effective way to solve the above problem is to obtain aligned multi-modal features by computing the correlation between visual and linguistic feature modalities under the supervision of the ground-truth mask. However, existing paradigms have difficulty in thoroughly understanding visual and linguistic content due to the inability to perceive information directly about surrounding objects that refer to the target. This prevents them from learning aligned multi-modal features, which leads to inaccurate segmentation. To address this issue, we present a position-aware contrastive alignment network (PCAN) to enhance the alignment of multi-modal features by guiding the interaction between vision and language through prior position information. Our PCAN consists of two modules: 1) Position Aware Module (PAM), which provides position information of all objects related to natural language descriptions, and 2) Contrastive Language Understanding Module (CLUM), which enhances multi-modal alignment by comparing the features of the referred object with those of related objects. Extensive experiments on three benchmarks demonstrate our PCAN performs favorably against the state-of-the-art methods. Our code will be made publicly available. ### Efficient Semantic Segmentation on Edge Devices - **Authors:** Farshad Safavi, Irfan Ali, Venkatesh Dasari, Guanqun Song, Ting Zhu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2212.13691 - **Pdf link:** https://arxiv.org/pdf/2212.13691 - **Abstract** Semantic segmentation works on the computer vision algorithm for assigning each pixel of an image into a class. The task of semantic segmentation should be performed with both accuracy and efficiency. Most of the existing deep FCNs yield to heavy computations and these networks are very power hungry, unsuitable for real-time applications on portable devices. This project analyzes current semantic segmentation models to explore the feasibility of applying these models for emergency response during catastrophic events. We compare the performance of real-time semantic segmentation models with non-real-time counterparts constrained by aerial images under oppositional settings. Furthermore, we train several models on the Flood-Net dataset, containing UAV images captured after Hurricane Harvey, and benchmark their execution on special classes such as flooded buildings vs. non-flooded buildings or flooded roads vs. non-flooded roads. In this project, we developed a real-time UNet based model and deployed that network on Jetson AGX Xavier module. 
## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### Scaling Painting Style Transfer - **Authors:** Bruno Galerne, Lara Raad, José Lezama, Jean-Michel Morel - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2212.13459 - **Pdf link:** https://arxiv.org/pdf/2212.13459 - **Abstract** Neural style transfer is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image and is particularly impressive when it comes to transferring style from a painting to an image. It was originally achieved by solving an optimization problem to match the global style statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach is that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate neural style transfer and increase its resolution, but they all compromise the quality of the produced images. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and texture of the canvas. This paper provides a solution to solve the original global optimization for ultra-high resolution images, enabling multiscale style transfer at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward passes through the VGG network. Extensive qualitative and quantitative comparisons show that our method produces a style transfer of unmatched quality for such high resolution painting styles. ## Keyword: ISP ### Deep Learning Models for River Classification at Sub-Meter Resolutions from Multispectral and Panchromatic Commercial Satellite Imagery - **Authors:** Joachim Moortgat, Ziwei Li, Michael Durand, Ian Howat, Bidhyananda Yadav, Chunli Dai - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV); Geophysics (physics.geo-ph) - **Arxiv link:** https://arxiv.org/abs/2212.13613 - **Pdf link:** https://arxiv.org/pdf/2212.13613 - **Abstract** Remote sensing of the Earth's surface water is critical in a wide range of environmental studies, from evaluating the societal impacts of seasonal droughts and floods to the large-scale implications of climate change. Consequently, a large literature exists on the classification of water from satellite imagery. Yet, previous methods have been limited by 1) the spatial resolution of public satellite imagery, 2) classification schemes that operate at the pixel level, and 3) the need for multiple spectral bands. We advance the state-of-the-art by 1) using commercial imagery with panchromatic and multispectral resolutions of 30 cm and 1.2 m, respectively, 2) developing multiple fully convolutional neural networks (FCN) that can learn the morphological features of water bodies in addition to their spectral properties, and 3) FCN that can classify water even from panchromatic imagery. This study focuses on rivers in the Arctic, using images from the Quickbird, WorldView, and GeoEye satellites.
Because no training data are available at such high resolutions, we construct those manually. First, we use the RGB, and NIR bands of the 8-band multispectral sensors. Those trained models all achieve excellent precision and recall over 90% on validation data, aided by on-the-fly preprocessing of the training data specific to satellite imagery. In a novel approach, we then use results from the multispectral model to generate training data for FCN that only require panchromatic imagery, of which considerably more is available. Despite the smaller feature space, these models still achieve a precision and recall of over 85%. We provide our open-source codes and trained model parameters to the remote sensing community, which paves the way to a wide range of environmental hydrology applications at vastly superior accuracies and 2 orders of magnitude higher spatial resolution than previously possible. ### Adversarial Virtual Exemplar Learning for Label-Frugal Satellite Image Change Detection - **Authors:** Hichem Sahbi, Sebastien Deschamps - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.13974 - **Pdf link:** https://arxiv.org/pdf/2212.13974 - **Abstract** Satellite image change detection aims at finding occurrences of targeted changes in a given scene taken at different instants. This task is highly challenging due to the acquisition conditions and also to the subjectivity of changes. In this paper, we investigate satellite image change detection using active learning. Our method is interactive and relies on a question and answer model which asks the oracle (user) questions about the most informative display (dubbed as virtual exemplars), and according to the user's responses, updates change detections. The main contribution of our method consists in a novel adversarial model that allows frugally probing the oracle with only the most representative, diverse and uncertain virtual exemplars. The latter are learned to challenge the most the trained change decision criteria which ultimately leads to a better re-estimate of these criteria in the following iterations of active learning. Conducted experiments show the out-performance of our proposed adversarial display model against other display strategies as well as the related work. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Multi-Realism Image Compression with a Conditional Generator - **Authors:** Eirikur Agustsson, David Minnen, George Toderici, Fabian Mentzer - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2212.13824 - **Pdf link:** https://arxiv.org/pdf/2212.13824 - **Abstract** By optimizing the rate-distortion-realism trade-off, generative compression approaches produce detailed, realistic images, even at low bit rates, instead of the blurry reconstructions produced by rate-distortion optimized models. However, previous methods do not explicitly control how much detail is synthesized, which results in a common criticism of these methods: users might be worried that a misleading reconstruction far from the input image is generated. In this work, we alleviate these concerns by training a decoder that can bridge the two regimes and navigate the distortion-realism trade-off. 
From a single compressed representation, the receiver can decide to either reconstruct a low mean squared error reconstruction that is close to the input, a realistic reconstruction with high perceptual quality, or anything in between. With our method, we set a new state-of-the-art in distortion-realism, pushing the frontier of achievable distortion-realism pairs, i.e., our method achieves better distortions at high realism and better realism at low distortion than ever before. ## Keyword: RAW ### Scaling Painting Style Transfer - **Authors:** Bruno Galerne, Lara Raad, José Lezama, Jean-Michel Morel - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2212.13459 - **Pdf link:** https://arxiv.org/pdf/2212.13459 - **Abstract** Neural style transfer is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image and is particularly impressive when it comes to transferring style from a painting to an image. It was originally achieved by solving an optimization problem to match the global style statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach is that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate neural style transfer and increase its resolution, but they all compromise the quality of the produced images. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and texture of the canvas. This paper provides a solution to solve the original global optimization for ultra-high resolution images, enabling multiscale style transfer at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward passes through the VGG network. Extensive qualitative and quantitative comparisons show that our method produces a style transfer of unmatched quality for such high resolution painting styles. ### Noise-aware Learning from Web-crawled Image-Text Data for Image Captioning - **Authors:** Wooyoung Kang, Jonghwan Mun, Sungjun Lee, Byungseok Roh - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2212.13563 - **Pdf link:** https://arxiv.org/pdf/2212.13563 - **Abstract** Image captioning is one of the straightforward tasks that can take advantage of large-scale web-crawled data which provides rich knowledge about the visual world for a captioning model. However, since web-crawled data contains image-text pairs that are aligned at different levels, the inherent noises (e.g., misaligned pairs) make it difficult to learn a precise captioning model. While the filtering strategy can effectively remove noisy data, however, it leads to a decrease in learnable knowledge and sometimes brings about a new problem of data deficiency. To take the best of both worlds, we propose a noise-aware learning framework, which learns rich knowledge from the whole web-crawled data while being less affected by the noises. This is achieved by the proposed quality controllable model, which is learned using alignment levels of the image-text pairs as an additional control signal during training.
The alignment-conditioned training allows the model to generate high-quality captions of well-aligned by simply setting the control signal to desired alignment level at inference time. Through in-depth analysis, we show that our controllable captioning model is effective in handling noise. In addition, with two tasks of zero-shot captioning and text-to-image retrieval using generated captions (i.e., self-retrieval), we also demonstrate our model can produce high-quality captions in terms of descriptiveness and distinctiveness. Code is available at \url{https://github.com/kakaobrain/noc}. ### Shape-Aware Fine-Grained Classification of Erythroid Cells - **Authors:** Ye Wang, Rui Ma, Xiaoqing Ma, Honghua Cui, Yubin Xiao, Xuan Wu, You Zhou - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.13695 - **Pdf link:** https://arxiv.org/pdf/2212.13695 - **Abstract** Fine-grained classification and counting of bone marrow erythroid cells are vital for evaluating the health status and formulating therapeutic schedules for leukemia or hematopathy. Due to the subtle visual differences between different types of erythroid cells, it is challenging to apply existing image-based deep learning models for fine-grained erythroid cell classification. Moreover, there is no large open-source datasets on erythroid cells to support the model training. In this paper, we introduce BMEC (Bone Morrow Erythroid Cells), the first large fine-grained image dataset of erythroid cells, to facilitate more deep learning research on erythroid cells. BMEC contains 5,666 images of individual erythroid cells, each of which is extracted from the bone marrow erythroid cell smears and professionally annotated to one of the four types of erythroid cells. To distinguish the erythroid cells, one key indicator is the cell shape which is closely related to the cell growth and maturation. Therefore, we design a novel shape-aware image classification network for fine-grained erythroid cell classification. The shape feature is extracted from the shape mask image and aggregated to the raw image feature with a shape attention module. With the shape-attended image feature, our network achieved superior classification performance (81.12\% top-1 accuracy) on the BMEC dataset comparing to the baseline methods. Ablation studies also demonstrate the effectiveness of incorporating the shape information for the fine-grained cell classification. To further verify the generalizability of our method, we tested our network on two additional public white blood cells (WBC) datasets and the results show our shape-aware method can generally outperform recent state-of-the-art works on classifying the WBC. The code and BMEC dataset can be found on https://github.com/wangye8899/BMEC. ### A Segmentation Method for fluorescence images without a machine learning approach - **Authors:** Giuseppe Giacopelli, Michele Migliore, Domenico Tegolo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2212.13945 - **Pdf link:** https://arxiv.org/pdf/2212.13945 - **Abstract** Background: Image analysis applications in digital pathology include various methods for segmenting regions of interest. Their identification is one of the most complex steps, and therefore of great interest for the study of robust methods that do not necessarily rely on a machine learning (ML) approach. 
Method: A fully automatic and optimized segmentation process for different datasets is a prerequisite for classifying and diagnosing Indirect ImmunoFluorescence (IIF) raw data. This study describes a deterministic computational neuroscience approach for identifying cells and nuclei. It is far from the conventional neural network approach, but it is equivalent to their quantitative and qualitative performance, and it is also solid to adversative noise. The method is robust, based on formally correct functions, and does not suffer from tuning on specific data sets. Results: This work demonstrates the robustness of the method against the variability of parameters, such as image size, mode, and signal-to-noise ratio. We validated the method on two datasets (Neuroblastoma and NucleusSegData) using images annotated by independent medical doctors. Conclusions: The definition of deterministic and formally correct methods, from a functional to a structural point of view, guarantees the achievement of optimized and functionally correct results. The excellent performance of our deterministic method (NeuronalAlg) to segment cells and nuclei from fluorescence images was measured with quantitative indicators and compared with those achieved by three published ML approaches. ## Keyword: raw image ### Shape-Aware Fine-Grained Classification of Erythroid Cells - **Authors:** Ye Wang, Rui Ma, Xiaoqing Ma, Honghua Cui, Yubin Xiao, Xuan Wu, You Zhou - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.13695 - **Pdf link:** https://arxiv.org/pdf/2212.13695 - **Abstract** Fine-grained classification and counting of bone marrow erythroid cells are vital for evaluating the health status and formulating therapeutic schedules for leukemia or hematopathy. Due to the subtle visual differences between different types of erythroid cells, it is challenging to apply existing image-based deep learning models for fine-grained erythroid cell classification. Moreover, there is no large open-source datasets on erythroid cells to support the model training. In this paper, we introduce BMEC (Bone Morrow Erythroid Cells), the first large fine-grained image dataset of erythroid cells, to facilitate more deep learning research on erythroid cells. BMEC contains 5,666 images of individual erythroid cells, each of which is extracted from the bone marrow erythroid cell smears and professionally annotated to one of the four types of erythroid cells. To distinguish the erythroid cells, one key indicator is the cell shape which is closely related to the cell growth and maturation. Therefore, we design a novel shape-aware image classification network for fine-grained erythroid cell classification. The shape feature is extracted from the shape mask image and aggregated to the raw image feature with a shape attention module. With the shape-attended image feature, our network achieved superior classification performance (81.12\% top-1 accuracy) on the BMEC dataset comparing to the baseline methods. Ablation studies also demonstrate the effectiveness of incorporating the shape information for the fine-grained cell classification. To further verify the generalizability of our method, we tested our network on two additional public white blood cells (WBC) datasets and the results show our shape-aware method can generally outperform recent state-of-the-art works on classifying the WBC. The code and BMEC dataset can be found on https://github.com/wangye8899/BMEC.
- index: 2.0
- text_combine:
New submissions for Thu, 29 Dec 22 - ## Keyword: events ### Position-Aware Contrastive Alignment for Referring Image Segmentation - **Authors:** Bo Chen, Zhiwei Hu, Zhilong Ji, Jinfeng Bai, Wangmeng Zuo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.13419 - **Pdf link:** https://arxiv.org/pdf/2212.13419 - **Abstract** Referring image segmentation aims to segment the target object described by a given natural language expression. Typically, referring expressions contain complex relationships between the target and its surrounding objects. The main challenge of this task is to understand the visual and linguistic content simultaneously and to find the referred object accurately among all instances in the image. Currently, the most effective way to solve the above problem is to obtain aligned multi-modal features by computing the correlation between visual and linguistic feature modalities under the supervision of the ground-truth mask. However, existing paradigms have difficulty in thoroughly understanding visual and linguistic content due to the inability to perceive information directly about surrounding objects that refer to the target. This prevents them from learning aligned multi-modal features, which leads to inaccurate segmentation. To address this issue, we present a position-aware contrastive alignment network (PCAN) to enhance the alignment of multi-modal features by guiding the interaction between vision and language through prior position information. Our PCAN consists of two modules: 1) Position Aware Module (PAM), which provides position information of all objects related to natural language descriptions, and 2) Contrastive Language Understanding Module (CLUM), which enhances multi-modal alignment by comparing the features of the referred object with those of related objects. Extensive experiments on three benchmarks demonstrate our PCAN performs favorably against the state-of-the-art methods. Our code will be made publicly available. ### Efficient Semantic Segmentation on Edge Devices - **Authors:** Farshad Safavi, Irfan Ali, Venkatesh Dasari, Guanqun Song, Ting Zhu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2212.13691 - **Pdf link:** https://arxiv.org/pdf/2212.13691 - **Abstract** Semantic segmentation works on the computer vision algorithm for assigning each pixel of an image into a class. The task of semantic segmentation should be performed with both accuracy and efficiency. Most of the existing deep FCNs yield to heavy computations and these networks are very power hungry, unsuitable for real-time applications on portable devices. This project analyzes current semantic segmentation models to explore the feasibility of applying these models for emergency response during catastrophic events. We compare the performance of real-time semantic segmentation models with non-real-time counterparts constrained by aerial images under oppositional settings. Furthermore, we train several models on the Flood-Net dataset, containing UAV images captured after Hurricane Harvey, and benchmark their execution on special classes such as flooded buildings vs. non-flooded buildings or flooded roads vs. non-flooded roads. In this project, we developed a real-time UNet based model and deployed that network on Jetson AGX Xavier module. 
## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### Scaling Painting Style Transfer - **Authors:** Bruno Galerne, Lara Raad, José Lezama, Jean-Michel Morel - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2212.13459 - **Pdf link:** https://arxiv.org/pdf/2212.13459 - **Abstract** Neural style transfer is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image and is particularly impressive when it comes to transferring style from a painting to an image. It was originally achieved by solving an optimization problem to match the global style statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach is that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate neural style transfer and increase its resolution, but they all compromise the quality of the produced images. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and texture of the canvas. This paper provides a solution to solve the original global optimization for ultra-high resolution images, enabling multiscale style transfer at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward passes through the VGG network. Extensive qualitative and quantitative comparisons show that our method produces a style transfer of unmatched quality for such high resolution painting styles. ## Keyword: ISP ### Deep Learning Models for River Classification at Sub-Meter Resolutions from Multispectral and Panchromatic Commercial Satellite Imagery - **Authors:** Joachim Moortgat, Ziwei Li, Michael Durand, Ian Howat, Bidhyananda Yadav, Chunli Dai - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV); Geophysics (physics.geo-ph) - **Arxiv link:** https://arxiv.org/abs/2212.13613 - **Pdf link:** https://arxiv.org/pdf/2212.13613 - **Abstract** Remote sensing of the Earth's surface water is critical in a wide range of environmental studies, from evaluating the societal impacts of seasonal droughts and floods to the large-scale implications of climate change. Consequently, a large literature exists on the classification of water from satellite imagery. Yet, previous methods have been limited by 1) the spatial resolution of public satellite imagery, 2) classification schemes that operate at the pixel level, and 3) the need for multiple spectral bands. We advance the state-of-the-art by 1) using commercial imagery with panchromatic and multispectral resolutions of 30 cm and 1.2 m, respectively, 2) developing multiple fully convolutional neural networks (FCN) that can learn the morphological features of water bodies in addition to their spectral properties, and 3) FCN that can classify water even from panchromatic imagery. This study focuses on rivers in the Arctic, using images from the Quickbird, WorldView, and GeoEye satellites. 
Because no training data are available at such high resolutions, we construct those manually. First, we use the RGB, and NIR bands of the 8-band multispectral sensors. Those trained models all achieve excellent precision and recall over 90% on validation data, aided by on-the-fly preprocessing of the training data specific to satellite imagery. In a novel approach, we then use results from the multispectral model to generate training data for FCN that only require panchromatic imagery, of which considerably more is available. Despite the smaller feature space, these models still achieve a precision and recall of over 85%. We provide our open-source codes and trained model parameters to the remote sensing community, which paves the way to a wide range of environmental hydrology applications at vastly superior accuracies and 2 orders of magnitude higher spatial resolution than previously possible. ### Adversarial Virtual Exemplar Learning for Label-Frugal Satellite Image Change Detection - **Authors:** Hichem Sahbi, Sebastien Deschamps - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.13974 - **Pdf link:** https://arxiv.org/pdf/2212.13974 - **Abstract** Satellite image change detection aims at finding occurrences of targeted changes in a given scene taken at different instants. This task is highly challenging due to the acquisition conditions and also to the subjectivity of changes. In this paper, we investigate satellite image change detection using active learning. Our method is interactive and relies on a question and answer model which asks the oracle (user) questions about the most informative display (dubbed as virtual exemplars), and according to the user's responses, updates change detections. The main contribution of our method consists in a novel adversarial model that allows frugally probing the oracle with only the most representative, diverse and uncertain virtual exemplars. The latter are learned to challenge the most the trained change decision criteria which ultimately leads to a better re-estimate of these criteria in the following iterations of active learning. Conducted experiments show the out-performance of our proposed adversarial display model against other display strategies as well as the related work. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Multi-Realism Image Compression with a Conditional Generator - **Authors:** Eirikur Agustsson, David Minnen, George Toderici, Fabian Mentzer - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2212.13824 - **Pdf link:** https://arxiv.org/pdf/2212.13824 - **Abstract** By optimizing the rate-distortion-realism trade-off, generative compression approaches produce detailed, realistic images, even at low bit rates, instead of the blurry reconstructions produced by rate-distortion optimized models. However, previous methods do not explicitly control how much detail is synthesized, which results in a common criticism of these methods: users might be worried that a misleading reconstruction far from the input image is generated. In this work, we alleviate these concerns by training a decoder that can bridge the two regimes and navigate the distortion-realism trade-off. 
From a single compressed representation, the receiver can decide to either reconstruct a low mean squared error reconstruction that is close to the input, a realistic reconstruction with high perceptual quality, or anything in between. With our method, we set a new state-of-the-art in distortion-realism, pushing the frontier of achievable distortion-realism pairs, i.e., our method achieves better distortions at high realism and better realism at low distortion than ever before. ## Keyword: RAW ### Scaling Painting Style Transfer - **Authors:** Bruno Galerne, Lara Raad, José Lezama, Jean-Michel Morel - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2212.13459 - **Pdf link:** https://arxiv.org/pdf/2212.13459 - **Abstract** Neural style transfer is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image and is particularly impressive when it comes to transferring style from a painting to an image. It was originally achieved by solving an optimization problem to match the global style statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach is that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate neural style transfer and increase its resolution, but they all compromise the quality of the produced images. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and texture of the canvas. This paper provides a solution to solve the original global optimization for ultra-high resolution images, enabling multiscale style transfer at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward passes through the VGG network. Extensive qualitative and quantitative comparisons show that our method produces a style transfer of unmatched quality for such high resolution painting styles. ### Noise-aware Learning from Web-crawled Image-Text Data for Image Captioning - **Authors:** Wooyoung Kang, Jonghwan Mun, Sungjun Lee, Byungseok Roh - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2212.13563 - **Pdf link:** https://arxiv.org/pdf/2212.13563 - **Abstract** Image captioning is one of the straightforward tasks that can take advantage of large-scale web-crawled data which provides rich knowledge about the visual world for a captioning model. However, since web-crawled data contains image-text pairs that are aligned at different levels, the inherent noises (e.g., misaligned pairs) make it difficult to learn a precise captioning model. While the filtering strategy can effectively remove noisy data, however, it leads to a decrease in learnable knowledge and sometimes brings about a new problem of data deficiency. To take the best of both worlds, we propose a noise-aware learning framework, which learns rich knowledge from the whole web-crawled data while being less affected by the noises. This is achieved by the proposed quality controllable model, which is learned using alignment levels of the image-text pairs as an additional control signal during training. 
The alignment-conditioned training allows the model to generate high-quality captions of well-aligned by simply setting the control signal to desired alignment level at inference time. Through in-depth analysis, we show that our controllable captioning model is effective in handling noise. In addition, with two tasks of zero-shot captioning and text-to-image retrieval using generated captions (i.e., self-retrieval), we also demonstrate our model can produce high-quality captions in terms of descriptiveness and distinctiveness. Code is available at \url{https://github.com/kakaobrain/noc}. ### Shape-Aware Fine-Grained Classification of Erythroid Cells - **Authors:** Ye Wang, Rui Ma, Xiaoqing Ma, Honghua Cui, Yubin Xiao, Xuan Wu, You Zhou - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.13695 - **Pdf link:** https://arxiv.org/pdf/2212.13695 - **Abstract** Fine-grained classification and counting of bone marrow erythroid cells are vital for evaluating the health status and formulating therapeutic schedules for leukemia or hematopathy. Due to the subtle visual differences between different types of erythroid cells, it is challenging to apply existing image-based deep learning models for fine-grained erythroid cell classification. Moreover, there is no large open-source datasets on erythroid cells to support the model training. In this paper, we introduce BMEC (Bone Morrow Erythroid Cells), the first large fine-grained image dataset of erythroid cells, to facilitate more deep learning research on erythroid cells. BMEC contains 5,666 images of individual erythroid cells, each of which is extracted from the bone marrow erythroid cell smears and professionally annotated to one of the four types of erythroid cells. To distinguish the erythroid cells, one key indicator is the cell shape which is closely related to the cell growth and maturation. Therefore, we design a novel shape-aware image classification network for fine-grained erythroid cell classification. The shape feature is extracted from the shape mask image and aggregated to the raw image feature with a shape attention module. With the shape-attended image feature, our network achieved superior classification performance (81.12\% top-1 accuracy) on the BMEC dataset comparing to the baseline methods. Ablation studies also demonstrate the effectiveness of incorporating the shape information for the fine-grained cell classification. To further verify the generalizability of our method, we tested our network on two additional public white blood cells (WBC) datasets and the results show our shape-aware method can generally outperform recent state-of-the-art works on classifying the WBC. The code and BMEC dataset can be found on https://github.com/wangye8899/BMEC. ### A Segmentation Method for fluorescence images without a machine learning approach - **Authors:** Giuseppe Giacopelli, Michele Migliore, Domenico Tegolo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2212.13945 - **Pdf link:** https://arxiv.org/pdf/2212.13945 - **Abstract** Background: Image analysis applications in digital pathology include various methods for segmenting regions of interest. Their identification is one of the most complex steps, and therefore of great interest for the study of robust methods that do not necessarily rely on a machine learning (ML) approach. 
Method: A fully automatic and optimized segmentation process for different datasets is a prerequisite for classifying and diagnosing Indirect ImmunoFluorescence (IIF) raw data. This study describes a deterministic computational neuroscience approach for identifying cells and nuclei. It is far from the conventional neural network approach, but it is equivalent to their quantitative and qualitative performance, and it is also solid to adversative noise. The method is robust, based on formally correct functions, and does not suffer from tuning on specific data sets. Results: This work demonstrates the robustness of the method against the variability of parameters, such as image size, mode, and signal-to-noise ratio. We validated the method on two datasets (Neuroblastoma and NucleusSegData) using images annotated by independent medical doctors. Conclusions: The definition of deterministic and formally correct methods, from a functional to a structural point of view, guarantees the achievement of optimized and functionally correct results. The excellent performance of our deterministic method (NeuronalAlg) to segment cells and nuclei from fluorescence images was measured with quantitative indicators and compared with those achieved by three published ML approaches. ## Keyword: raw image ### Shape-Aware Fine-Grained Classification of Erythroid Cells - **Authors:** Ye Wang, Rui Ma, Xiaoqing Ma, Honghua Cui, Yubin Xiao, Xuan Wu, You Zhou - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.13695 - **Pdf link:** https://arxiv.org/pdf/2212.13695 - **Abstract** Fine-grained classification and counting of bone marrow erythroid cells are vital for evaluating the health status and formulating therapeutic schedules for leukemia or hematopathy. Due to the subtle visual differences between different types of erythroid cells, it is challenging to apply existing image-based deep learning models for fine-grained erythroid cell classification. Moreover, there is no large open-source datasets on erythroid cells to support the model training. In this paper, we introduce BMEC (Bone Morrow Erythroid Cells), the first large fine-grained image dataset of erythroid cells, to facilitate more deep learning research on erythroid cells. BMEC contains 5,666 images of individual erythroid cells, each of which is extracted from the bone marrow erythroid cell smears and professionally annotated to one of the four types of erythroid cells. To distinguish the erythroid cells, one key indicator is the cell shape which is closely related to the cell growth and maturation. Therefore, we design a novel shape-aware image classification network for fine-grained erythroid cell classification. The shape feature is extracted from the shape mask image and aggregated to the raw image feature with a shape attention module. With the shape-attended image feature, our network achieved superior classification performance (81.12\% top-1 accuracy) on the BMEC dataset comparing to the baseline methods. Ablation studies also demonstrate the effectiveness of incorporating the shape information for the fine-grained cell classification. To further verify the generalizability of our method, we tested our network on two additional public white blood cells (WBC) datasets and the results show our shape-aware method can generally outperform recent state-of-the-art works on classifying the WBC. The code and BMEC dataset can be found on https://github.com/wangye8899/BMEC.
process
new submissions for thu dec keyword events position aware contrastive alignment for referring image segmentation authors bo chen zhiwei hu zhilong ji jinfeng bai wangmeng zuo subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract referring image segmentation aims to segment the target object described by a given natural language expression typically referring expressions contain complex relationships between the target and its surrounding objects the main challenge of this task is to understand the visual and linguistic content simultaneously and to find the referred object accurately among all instances in the image currently the most effective way to solve the above problem is to obtain aligned multi modal features by computing the correlation between visual and linguistic feature modalities under the supervision of the ground truth mask however existing paradigms have difficulty in thoroughly understanding visual and linguistic content due to the inability to perceive information directly about surrounding objects that refer to the target this prevents them from learning aligned multi modal features which leads to inaccurate segmentation to address this issue we present a position aware contrastive alignment network pcan to enhance the alignment of multi modal features by guiding the interaction between vision and language through prior position information our pcan consists of two modules position aware module pam which provides position information of all objects related to natural language descriptions and contrastive language understanding module clum which enhances multi modal alignment by comparing the features of the referred object with those of related objects extensive experiments on three benchmarks demonstrate our pcan performs favorably against the state of the art methods our code will be made publicly available efficient semantic segmentation on edge devices authors farshad safavi irfan ali venkatesh dasari guanqun song ting zhu subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract semantic segmentation works on the computer vision algorithm for assigning each pixel of an image into a class the task of semantic segmentation should be performed with both accuracy and efficiency most of the existing deep fcns yield to heavy computations and these networks are very power hungry unsuitable for real time applications on portable devices this project analyzes current semantic segmentation models to explore the feasibility of applying these models for emergency response during catastrophic events we compare the performance of real time semantic segmentation models with non real time counterparts constrained by aerial images under oppositional settings furthermore we train several models on the flood net dataset containing uav images captured after hurricane harvey and benchmark their execution on special classes such as flooded buildings vs non flooded buildings or flooded roads vs non flooded roads in this project we developed a real time unet based model and deployed that network on jetson agx xavier module keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb scaling painting style transfer authors bruno galerne lara raad josé lezama jean michel morel subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract neural 
style transfer is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image and is particularly impressive when it comes to transferring style from a painting to an image it was originally achieved by solving an optimization problem to match the global style statistics of the style image while preserving the local geometric features of the content image the two main drawbacks of this original approach is that it is computationally expensive and that the resolution of the output images is limited by high gpu memory requirements many solutions have been proposed to both accelerate neural style transfer and increase its resolution but they all compromise the quality of the produced images indeed transferring the style of a painting is a complex task involving features at different scales from the color palette and compositional style to the fine brushstrokes and texture of the canvas this paper provides a solution to solve the original global optimization for ultra high resolution images enabling multiscale style transfer at unprecedented image sizes this is achieved by spatially localizing the computation of each forward and backward passes through the vgg network extensive qualitative and quantitative comparisons show that our method produces a style transfer of unmatched quality for such high resolution painting styles keyword isp deep learning models for river classification at sub meter resolutions from multispectral and panchromatic commercial satellite imagery authors joachim moortgat ziwei li michael durand ian howat bidhyananda yadav chunli dai subjects computer vision and pattern recognition cs cv machine learning cs lg image and video processing eess iv geophysics physics geo ph arxiv link pdf link abstract remote sensing of the earth s surface water is critical in a wide range of environmental studies from evaluating the societal impacts of seasonal droughts and floods to the large scale implications of climate change consequently a large literature exists on the classification of water from satellite imagery yet previous methods have been limited by the spatial resolution of public satellite imagery classification schemes that operate at the pixel level and the need for multiple spectral bands we advance the state of the art by using commercial imagery with panchromatic and multispectral resolutions of cm and m respectively developing multiple fully convolutional neural networks fcn that can learn the morphological features of water bodies in addition to their spectral properties and fcn that can classify water even from panchromatic imagery this study focuses on rivers in the arctic using images from the quickbird worldview and geoeye satellites because no training data are available at such high resolutions we construct those manually first we use the rgb and nir bands of the band multispectral sensors those trained models all achieve excellent precision and recall over on validation data aided by on the fly preprocessing of the training data specific to satellite imagery in a novel approach we then use results from the multispectral model to generate training data for fcn that only require panchromatic imagery of which considerably more is available despite the smaller feature space these models still achieve a precision and recall of over we provide our open source codes and trained model parameters to the remote sensing community which paves the way to a wide range of environmental hydrology applications at vastly 
superior accuracies and orders of magnitude higher spatial resolution than previously possible adversarial virtual exemplar learning for label frugal satellite image change detection authors hichem sahbi sebastien deschamps subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract satellite image change detection aims at finding occurrences of targeted changes in a given scene taken at different instants this task is highly challenging due to the acquisition conditions and also to the subjectivity of changes in this paper we investigate satellite image change detection using active learning our method is interactive and relies on a question and answer model which asks the oracle user questions about the most informative display dubbed as virtual exemplars and according to the user s responses updates change detections the main contribution of our method consists in a novel adversarial model that allows frugally probing the oracle with only the most representative diverse and uncertain virtual exemplars the latter are learned to challenge the most the trained change decision criteria which ultimately leads to a better re estimate of these criteria in the following iterations of active learning conducted experiments show the out performance of our proposed adversarial display model against other display strategies as well as the related work keyword image signal processing there is no result keyword image signal process there is no result keyword compression multi realism image compression with a conditional generator authors eirikur agustsson david minnen george toderici fabian mentzer subjects computer vision and pattern recognition cs cv machine learning cs lg image and video processing eess iv arxiv link pdf link abstract by optimizing the rate distortion realism trade off generative compression approaches produce detailed realistic images even at low bit rates instead of the blurry reconstructions produced by rate distortion optimized models however previous methods do not explicitly control how much detail is synthesized which results in a common criticism of these methods users might be worried that a misleading reconstruction far from the input image is generated in this work we alleviate these concerns by training a decoder that can bridge the two regimes and navigate the distortion realism trade off from a single compressed representation the receiver can decide to either reconstruct a low mean squared error reconstruction that is close to the input a realistic reconstruction with high perceptual quality or anything in between with our method we set a new state of the art in distortion realism pushing the frontier of achievable distortion realism pairs i e our method achieves better distortions at high realism and better realism at low distortion than ever before keyword raw scaling painting style transfer authors bruno galerne lara raad josé lezama jean michel morel subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract neural style transfer is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image and is particularly impressive when it comes to transferring style from a painting to an image it was originally achieved by solving an optimization problem to match the global style statistics of the style image while preserving the local geometric features of the content image the two main drawbacks of this original approach is that 
it is computationally expensive and that the resolution of the output images is limited by high gpu memory requirements many solutions have been proposed to both accelerate neural style transfer and increase its resolution but they all compromise the quality of the produced images indeed transferring the style of a painting is a complex task involving features at different scales from the color palette and compositional style to the fine brushstrokes and texture of the canvas this paper provides a solution to solve the original global optimization for ultra high resolution images enabling multiscale style transfer at unprecedented image sizes this is achieved by spatially localizing the computation of each forward and backward passes through the vgg network extensive qualitative and quantitative comparisons show that our method produces a style transfer of unmatched quality for such high resolution painting styles noise aware learning from web crawled image text data for image captioning authors wooyoung kang jonghwan mun sungjun lee byungseok roh subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract image captioning is one of the straightforward tasks that can take advantage of large scale web crawled data which provides rich knowledge about the visual world for a captioning model however since web crawled data contains image text pairs that are aligned at different levels the inherent noises e g misaligned pairs make it difficult to learn a precise captioning model while the filtering strategy can effectively remove noisy data however it leads to a decrease in learnable knowledge and sometimes brings about a new problem of data deficiency to take the best of both worlds we propose a noise aware learning framework which learns rich knowledge from the whole web crawled data while being less affected by the noises this is achieved by the proposed quality controllable model which is learned using alignment levels of the image text pairs as an additional control signal during training the alignment conditioned training allows the model to generate high quality captions of well aligned by simply setting the control signal to desired alignment level at inference time through in depth analysis we show that our controllable captioning model is effective in handling noise in addition with two tasks of zero shot captioning and text to image retrieval using generated captions i e self retrieval we also demonstrate our model can produce high quality captions in terms of descriptiveness and distinctiveness code is available at url shape aware fine grained classification of erythroid cells authors ye wang rui ma xiaoqing ma honghua cui yubin xiao xuan wu you zhou subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract fine grained classification and counting of bone marrow erythroid cells are vital for evaluating the health status and formulating therapeutic schedules for leukemia or hematopathy due to the subtle visual differences between different types of erythroid cells it is challenging to apply existing image based deep learning models for fine grained erythroid cell classification moreover there is no large open source datasets on erythroid cells to support the model training in this paper we introduce bmec bone morrow erythroid cells the first large fine grained image dataset of erythroid cells to facilitate more deep learning research on erythroid cells bmec contains images of individual erythroid cells 
each of which is extracted from the bone marrow erythroid cell smears and professionally annotated to one of the four types of erythroid cells to distinguish the erythroid cells one key indicator is the cell shape which is closely related to the cell growth and maturation therefore we design a novel shape aware image classification network for fine grained erythroid cell classification the shape feature is extracted from the shape mask image and aggregated to the raw image feature with a shape attention module with the shape attended image feature our network achieved superior classification performance top accuracy on the bmec dataset comparing to the baseline methods ablation studies also demonstrate the effectiveness of incorporating the shape information for the fine grained cell classification to further verify the generalizability of our method we tested our network on two additional public white blood cells wbc datasets and the results show our shape aware method can generally outperform recent state of the art works on classifying the wbc the code and bmec dataset can be found on a segmentation method for fluorescence images without a machine learning approach authors giuseppe giacopelli michele migliore domenico tegolo subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract background image analysis applications in digital pathology include various methods for segmenting regions of interest their identification is one of the most complex steps and therefore of great interest for the study of robust methods that do not necessarily rely on a machine learning ml approach method a fully automatic and optimized segmentation process for different datasets is a prerequisite for classifying and diagnosing indirect immunofluorescence iif raw data this study describes a deterministic computational neuroscience approach for identifying cells and nuclei it is far from the conventional neural network approach but it is equivalent to their quantitative and qualitative performance and it is also solid to adversative noise the method is robust based on formally correct functions and does not suffer from tuning on specific data sets results this work demonstrates the robustness of the method against the variability of parameters such as image size mode and signal to noise ratio we validated the method on two datasets neuroblastoma and nucleussegdata using images annotated by independent medical doctors conclusions the definition of deterministic and formally correct methods from a functional to a structural point of view guarantees the achievement of optimized and functionally correct results the excellent performance of our deterministic method neuronalalg to segment cells and nuclei from fluorescence images was measured with quantitative indicators and compared with those achieved by three published ml approaches keyword raw image shape aware fine grained classification of erythroid cells authors ye wang rui ma xiaoqing ma honghua cui yubin xiao xuan wu you zhou subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract fine grained classification and counting of bone marrow erythroid cells are vital for evaluating the health status and formulating therapeutic schedules for leukemia or hematopathy due to the subtle visual differences between different types of erythroid cells it is challenging to apply existing image based deep learning models for fine grained erythroid cell classification moreover there is no 
large open source datasets on erythroid cells to support the model training in this paper we introduce bmec bone morrow erythroid cells the first large fine grained image dataset of erythroid cells to facilitate more deep learning research on erythroid cells bmec contains images of individual erythroid cells each of which is extracted from the bone marrow erythroid cell smears and professionally annotated to one of the four types of erythroid cells to distinguish the erythroid cells one key indicator is the cell shape which is closely related to the cell growth and maturation therefore we design a novel shape aware image classification network for fine grained erythroid cell classification the shape feature is extracted from the shape mask image and aggregated to the raw image feature with a shape attention module with the shape attended image feature our network achieved superior classification performance top accuracy on the bmec dataset comparing to the baseline methods ablation studies also demonstrate the effectiveness of incorporating the shape information for the fine grained cell classification to further verify the generalizability of our method we tested our network on two additional public white blood cells wbc datasets and the results show our shape aware method can generally outperform recent state of the art works on classifying the wbc the code and bmec dataset can be found on
1
13,271
15,732,820,211
IssuesEvent
2021-03-29 18:46:08
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Provide complete, correct instructions for deploying on Linux
Pri2 devops-cicd-process/tech devops/prod doc-bug
The instructions on this page were demonstrably not tested on anything but Windows VMs. "4. Choose Windows or Linux for the Operating System." "6. Run the copied script from an administrator PowerShell command prompt". Choosing the Linux operating system and running the script in /opt after running `sudo su` throws a "Must not run with sudo" error. Is there some *other* way of running commands as root on a stock Ubuntu Azure VM? If I run without `sudo su` in ./opt I get permission denied errors because obviously the default user doesn't have write perms in /opt. If I run the script in $HOME everything works but now the agent is installed in the default user's home directory and runs under that user context, which means I have to mess around with sudoers in order to send commands to the VM. There are clearly a lot of assumptions about how this script ought to be deployed to Linux VMs that aren't documented here, and they need to be. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 91d0d31f-81ee-c024-db7e-daddbf525f71 * Version Independent ID: 330f1649-386c-d0aa-5f96-b8343a1480d3 * Content: [Environment - Virtual machine resource - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-virtual-machines?view=azure-devops) * Content Source: [docs/pipelines/process/environments-virtual-machines.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments-virtual-machines.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Provide complete, correct instructions for deploying on Linux - The instructions on this page were demonstrably not tested on anything but Windows VMs. "4. Choose Windows or Linux for the Operating System." "6. Run the copied script from an administrator PowerShell command prompt". Choosing the Linux operating system and running the script in /opt after running `sudo su` throws a "Must not run with sudo" error. Is there some *other* way of running commands as root on a stock Ubuntu Azure VM? If I run without `sudo su` in ./opt I get permission denied errors because obviously the default user doesn't have write perms in /opt. If I run the script in $HOME everything works but now the agent is installed in the default user's home directory and runs under that user context, which means I have to mess around with sudoers in order to send commands to the VM. There are clearly a lot of assumptions about how this script ought to be deployed to Linux VMs that aren't documented here, and they need to be. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 91d0d31f-81ee-c024-db7e-daddbf525f71 * Version Independent ID: 330f1649-386c-d0aa-5f96-b8343a1480d3 * Content: [Environment - Virtual machine resource - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-virtual-machines?view=azure-devops) * Content Source: [docs/pipelines/process/environments-virtual-machines.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments-virtual-machines.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
provide complete correct instructions for deploying on linux the instructions on this page were demonstrably not tested on anything but windows vms choose windows or linux for the operating system run the copied script from an administrator powershell command prompt choosing the linux operating system and running the script in opt after running sudo su throws a must not run with sudo error is there some other way of running commands as root on a stock ubuntu azure vm if i run without sudo su in opt i get permission denied errors because obviously the default user doesn t have write perms in opt if i run the script in home everything works but now the agent is installed in the default user s home directory and runs under that user context which means i have to mess around with sudoers in order to send commands to the vm there are clearly a lot of assumptions about how this script ought to be deployed to linux vms that aren t documented here and they need to be document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
159,175
13,757,343,596
IssuesEvent
2020-10-06 21:28:21
AladW/aurutils
https://api.github.com/repos/AladW/aurutils
closed
aur-build: warn when a repose-made repo is used
documentation
`aurutils` does not currently support `repose`. But if the user has a `repose`-created database, `repo-add` calls will give cryptic error messages. Since `aurutils` supported `repose` in the past, it may be nice for us to detect such cases and give the user a more helpful error message.
1.0
aur-build: warn when a repose-made repo is used - `aurutils` does not currently support `repose`. But if the user has a `repose`-created database, `repo-add` calls will give cryptic error messages. Since `aurutils` supported `repose` in the past, it may be nice for us to detect such cases and give the user a more helpful error message.
non_process
aur build warn when a repose made repo is used aurutils does not currently support repose but if the user has a repose created database repo add calls will give cryptic error messages since aurutils supported repose in the past it may be nice for us to detect such cases and give the user a more helpful error message
0
357,201
25,176,342,598
IssuesEvent
2022-11-11 09:35:53
Darren12345677/pe
https://api.github.com/repos/Darren12345677/pe
opened
Sequence diagram for adding appointment seems to be incorrect.
severity.VeryLow type.DocumentationBug
![image.png](https://raw.githubusercontent.com/Darren12345677/pe/main/files/b49297a1-f340-4484-9d8e-1d80b56987ff.png) The activation bar for model is missing. <!--session: 1668153192805-cd74d675-1c46-40e6-94c3-3c719c346698--> <!--Version: Web v3.4.4-->
1.0
Sequence diagram for adding appointment seems to be incorrect. - ![image.png](https://raw.githubusercontent.com/Darren12345677/pe/main/files/b49297a1-f340-4484-9d8e-1d80b56987ff.png) The activation bar for model is missing. <!--session: 1668153192805-cd74d675-1c46-40e6-94c3-3c719c346698--> <!--Version: Web v3.4.4-->
non_process
sequence diagram for adding appointment seems to be incorrect the activation bar for model is missing
0
68,591
3,291,129,755
IssuesEvent
2015-10-30 06:29:12
otavanopisto/muikku
https://api.github.com/repos/otavanopisto/muikku
closed
Error- publishing the workspace
bug in progress priority
User describes: >A health education group course is indeed starting on Monday and everything else is ok, but when I try to press "publish" ("julkaise") on the course front page I just keep getting an error message. Does the course nevertheless show up normally for the students who have been added to it, i.e. can I simply ignore this matter...? >I have now gotten the problem solved to the extent that by switching from Firefox to Explorer I managed to press the publish button on the course front page. With Explorer, however, editing the instructions and the front page (or rather publishing the edits) does not seem to work at the moment at least, even though a couple of weeks ago it still worked ok at least with Firefox. But there is no acute urgency with this as such; the essential information on the front page and in the instructions is ok
1.0
Error- publishing the workspace - User describes: >A health education group course is indeed starting on Monday and everything else is ok, but when I try to press "publish" ("julkaise") on the course front page I just keep getting an error message. Does the course nevertheless show up normally for the students who have been added to it, i.e. can I simply ignore this matter...? >I have now gotten the problem solved to the extent that by switching from Firefox to Explorer I managed to press the publish button on the course front page. With Explorer, however, editing the instructions and the front page (or rather publishing the edits) does not seem to work at the moment at least, even though a couple of weeks ago it still worked ok at least with Firefox. But there is no acute urgency with this as such; the essential information on the front page and in the instructions is ok
non_process
error publishing the workspace user describes a health education group course is indeed starting on monday and everything else is ok but when i try to press publish julkaise on the course front page i just keep getting an error message does the course nevertheless show up normally for the students who have been added to it i e can i simply ignore this matter i have now gotten the problem solved to the extent that by switching from firefox to explorer i managed to press the publish button on the course front page with explorer however editing the instructions and the front page or rather publishing the edits does not seem to work at the moment at least even though a couple of weeks ago it still worked ok at least with firefox but there is no acute urgency with this as such the essential information on the front page and in the instructions is ok
0
19,118
25,170,172,461
IssuesEvent
2022-11-11 02:01:41
googleapis/nodejs-analytics-data
https://api.github.com/repos/googleapis/nodejs-analytics-data
closed
Your .repo-metadata.json file has a problem 🤒
type: process api: analyticsdata repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'analytics-data' invalid in .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'analytics-data' invalid in .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname analytics data invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
1
518
2,993,176,303
IssuesEvent
2015-07-22 00:43:55
broadinstitute/hellbender
https://api.github.com/repos/broadinstitute/hellbender
opened
Expand ReadsPreprocessingPipelineTestData with multi-contig tests
Dataflow DataflowPreprocessingPipeline Engine
All of our tests use contig "1". This is fragile; we should add tests that include two contigs. This is not currently possible with ArtificialReadUtils (issue #688).
1.0
Expand ReadsPreprocessingPipelineTestData with multi-contig tests - All of our tests use contig "1". This is fragile; we should add tests that include two contigs. This is not currently possible with ArtificialReadUtils (issue #688).
process
expand readspreprocessingpipelinetestdata with multi contig tests all of our tests use contig this is fragile we should add tests that include two contigs this is not currently possible with artificialreadutils issue
1
16,363
21,048,361,905
IssuesEvent
2022-03-31 18:16:14
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
[Question] Why does there need to be separate databases for --date-spec?
question log-processing
I assumed that I could use the same database when accessing --date-spec=hr when I had already created a database with --date-spec=date. I assumed that all the data for --date-spec=hr would be there in the database from --date-spec=date. However, when I try that it does not work and only shows the results from --date-spec=date instead of hourly. I am using JSON output. It does work if I create 2 separate databases. Is this by design? I have another feature request for --date-spec=min. I am also curious about that. I need to generate a report for all three (date, hr, min) and I am trying to find the most efficient way possible. Thanks
1.0
[Question] Why does there need to be separate databases for --date-spec? - I assumed that I could use the same database when accessing --date-spec=hr when I had already created a database with --date-spec=date. I assumed that all the data for --date-spec=hr would be there in the database from --date-spec=date. However, when I try that it does not work and only shows the results from --date-spec=date instead of hourly. I am using JSON output. It does work if I create 2 separate databases. Is this by design? I have another feature request for --date-spec=min. I am also curious about that. I need to generate a report for all three (date, hr, min) and I am trying to find the most efficient way possible. Thanks
process
why does there need to be separate databases for date spec i assumed that i could use the same database when accessing date spec hr when i had already created a database with date spec date i assumed that all the data for date spec hr would be there in the database from date spec date however when i try that it does not work and only shows the results from date spec date instead of hourly i am using json output it does work if i create separate databases is this by design i have another feature request for date spec min i am also curious about that i need to generate a report for all three date hr min and i am trying to find the most efficient way possible thanks
1
11,190
13,957,699,323
IssuesEvent
2020-10-24 08:12:40
alexanderkotsev/geoportal
https://api.github.com/repos/alexanderkotsev/geoportal
opened
PT: Harvesting
Geoportal Harvesting process tbc
Geoportal team, We kindly request that you start a harvest of the Portuguese catalogue. We have made some updates and we would like to see if there are some results of our work. Thank you! Best regards, Marta Medeiros
1.0
PT: Harvesting - Geoportal team, We kindly request that you start a harvest of the Portuguese catalogue. We have made some updates and we would like to see if there are some results of our work. Thank you! Best regards, Marta Medeiros
process
pt harvesting geoportal team we kindly request that you start a harvest of the portuguese catalogue we have made some updates and we would like to see if there are some results of our work thank you best regards marta medeiros
1
201,528
15,211,631,259
IssuesEvent
2021-02-17 09:20:51
ISISScientificComputing/autoreduce
https://api.github.com/repos/ISISScientificComputing/autoreduce
closed
Randomly failing queue client tests in pytest action
:key: Continuous Integration :key: Testing
Issue raised by: [developer/user/project requirement] ### What? QueueClient tests are failing seemingly randomly within the pytest action. This can be seen in more detail if you check the builds for PR #1044 ### Where? utils/clients/tests/test_queue_client.py ### How? How did the issue come about ### Reproducible? As this is seemingly random I cannot say what is the way to reproduce this ### How to test the issue is resolved A set of instructions / A script / Include test files if required
1.0
Randomly failing queue client tests in pytest action - Issue raised by: [developer/user/project requirement] ### What? QueueClient tests are failing seemingly randomly within the pytest action. This can be seen in more detail if you check the builds for PR #1044 ### Where? utils/clients/tests/test_queue_client.py ### How? How did the issue come about ### Reproducible? As this is seemingly random I cannot say what is the way to reproduce this ### How to test the issue is resolved A set of instructions / A script / Include test files if required
non_process
randomly failing queue client tests in pytest action issue raised by what queueclient tests are failing seemingly randomly within the pytest action this can be seen in more detail if you check the builds for pr where utils clients tests test queue client py how how did the issue come about reproducible as this is seemingly random i cannot say what is the way to reproduce this how to test the issue is resolved a set of instructions a script include test files if required
0
26,273
26,634,067,935
IssuesEvent
2023-01-24 20:17:04
hpc/charliecloud
https://api.github.com/repos/hpc/charliecloud
opened
Fail faster for registry push with non-constant password
bug medium image usability
PR #1472 changes the behavior of `ch-image push` so that it requests user authentication before gathering and preparing the container. However, the implemented solution will not work if the user has a non-constant form of authentication (such as MFA). In this case, if the user's "password" changes before an upload token is requested from the registry (which seems very likely), they will need to re-authenticate. The upload token is currently requested after the container is prepared for upload, so there's a lot of potential here for wasted time if something goes wrong with authentication. We should fix this. See also: #1426
True
Fail faster for registry push with non-constant password - PR #1472 changes the behavior of `ch-image push` so that it requests user authentication before gathering and preparing the container. However, the implemented solution will not work if the user has a non-constant form of authentication (such as MFA). In this case, if the user's "password" changes before an upload token is requested from the registry (which seems very likely), they will need to re-authenticate. The upload token is currently requested after the container is prepared for upload, so there's a lot of potential here for wasted time if something goes wrong with authentication. We should fix this. See also: #1426
non_process
fail faster for registry push with non constant password pr changes the behavior of ch image push so that it requests user authentication before gathering and preparing the container however the implemented solution will not work if the user has a non constant form of authentication such as mfa in this case if the user s password changes before an upload token is requested from the registry which seems very likely they will need to re authenticate the upload token is currently requested after the container is prepared for upload so there s a lot of potential here for wasted time if something goes wrong with authentication we should fix this see also
0
6,483
9,553,745,508
IssuesEvent
2019-05-02 20:06:02
bow-simulation/virtualbow
https://api.github.com/repos/bow-simulation/virtualbow
closed
Automated building and testing on all supported platforms
area: software process type: improvement
In GitLab by **spfeifer** on Apr 23, 2017, 04:09 **Linux** Could be done with GitLab CI and the shared runners on gitlab.com. See [4] for a good example. **Windows** Three options: * Use GitLab CI and register an own Windows runner * Install runner on local virtual machine (disadvantage: have to start up manually whenever tests should be run) * Use VirtualBox executor ([2]) (disadvantage: more stuff to set up, still only works as long as the host system is online. Overall probably the best solution for now.) * Pay for a host machine and use that (disadvantage: $$$) * Use GitLab CI and do cross-compilation (disadvantage: probably tricky) * Use AppVeyor [3]. It supports GitLab, but would be separate from GitLab CI. Might also run the Linux build there in that case [5]. **MacOS** Will be supported again after %"Version 0.6". Only free CI service that seems to support MacOS is travis-ci.com. Only works with GitHub though. A solution would be to mirror the GitLab repo to a GitHub one and register that with travis. **Links** [1] http://ghostlyrics.net/building-and-deploying-a-c-library-with-gitlab.html [2] https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/executors/virtualbox.md [3] https://github.com/appveyor/ci/issues/219 [4] https://gitlab.com/probono/QtQuickApp/blob/master/.gitlab-ci.yml [5] https://www.appveyor.com/docs/getting-started-with-appveyor-for-linux/
1.0
Automated building and testing on all supported platforms - In GitLab by **spfeifer** on Apr 23, 2017, 04:09 **Linux** Could be done with GitLab CI and the shared runners on gitlab.com. See [4] for a good example. **Windows** Three options: * Use GitLab CI and register an own Windows runner * Install runner on local virtual machine (disadvantage: have to start up manually whenever tests should be run) * Use VirtualBox executor ([2]) (disadvantage: more stuff to set up, still only works as long as the host system is online. Overall probably the best solution for now.) * Pay for a host machine and use that (disadvantage: $$$) * Use GitLab CI and do cross-compilation (disadvantage: probably tricky) * Use AppVeyor [3]. It supports GitLab, but would be separate from GitLab CI. Might also run the Linux build there in that case [5]. **MacOS** Will be supported again after %"Version 0.6". Only free CI service that seems to support MacOS is travis-ci.com. Only works with GitHub though. A solution would be to mirror the GitLab repo to a GitHub one and register that with travis. **Links** [1] http://ghostlyrics.net/building-and-deploying-a-c-library-with-gitlab.html [2] https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/executors/virtualbox.md [3] https://github.com/appveyor/ci/issues/219 [4] https://gitlab.com/probono/QtQuickApp/blob/master/.gitlab-ci.yml [5] https://www.appveyor.com/docs/getting-started-with-appveyor-for-linux/
process
automated building and testing on all supported platforms in gitlab by spfeifer on apr linux could be done with gitlab ci and the shared runners on gitlab com see for a good example windows three options use gitlab ci and register an own windows runner install runner on local virtual machine disadvantage have to start up manually whenever tests should be run use virtualbox executor disadvantage more stuff to set up still only works as long as the host system is online overall probably the best solution for now pay for a host machine and use that disadvantage use gitlab ci and do cross compilation disadvantage probably tricky use appveyor it supports gitlab but would be separate from gitlab ci might also run the linux build there in that case macos will be supported again after version only free ci service that seems to support macos is travis ci com only works with github though a solution would be to mirror the gitlab repo to a github one and register that with travis links
1
6,372
9,421,405,684
IssuesEvent
2019-04-11 06:41:52
plazi/arcadia-project
https://api.github.com/repos/plazi/arcadia-project
opened
selection of articles for BLR processing: IRMNG data
Article processing
Here is the list of articles from the list of genus names in taxonomy, provided by Tony Rees. Can you please create a ranking of the journals' contributions, e.g. how many times is a journal present in this list? This is important for deciding where the most new names are. [irmng-sources-extra-Jul2017.xlsx](https://github.com/plazi/arcadia-project/files/3067248/irmng-sources-extra-Jul2017.xlsx)
1.0
selection of articles for BLR processing: IRMNG data - Here is the list of articles from the list of genus names in taxonomy, provided by Tony Rees. Can you please create a ranking of the journals' contributions, e.g. how many times is a journal present in this list? This is important for deciding where the most new names are. [irmng-sources-extra-Jul2017.xlsx](https://github.com/plazi/arcadia-project/files/3067248/irmng-sources-extra-Jul2017.xlsx)
process
selection of articles for blr processing irmng data here is the list of articles from the list of genus names in taxonomy provided by tony rees can you please create a ranking of the journals contributions eg how many times is a journal present in this list this is important for deciding where the most new names are
1
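A quick sketch of the requested ranking with pandas, reading the spreadsheet attached above; the `journal` column name is a guess and would need to match the actual sheet:

```python
import pandas as pd  # reading .xlsx also needs the openpyxl package

df = pd.read_excel("irmng-sources-extra-Jul2017.xlsx")
# value_counts answers "how many times is a journal present in this list"
ranking = df["journal"].value_counts()  # hypothetical column name
print(ranking.head(20))
```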
1,958
4,775,644,341
IssuesEvent
2016-10-27 11:08:26
paulkornikov/Pragonas
https://api.github.com/repos/paulkornikov/Pragonas
closed
Add properties weblink - account number - frequency - extension to compte banque
a-new feature compte processus workload II
plus the loading and upload directory for integration on the client PC
1.0
Add properties weblink - account number - frequency - extension to compte banque - plus the loading and upload directory for integration on the client PC
process
add properties weblink account number frequency extension to compte banque plus the loading and upload directory for integration on the client pc
1
22,259
30,810,937,681
IssuesEvent
2023-08-01 10:19:55
B-Lang-org/bsc
https://api.github.com/repos/B-Lang-org/bsc
opened
SV preprocessor parentheses parsing
bug ICE sv-preprocessor
As mentioned in issue #469, there are warnings about non-exhaustive pattern matching in `SystemVerilogScanner.lhs`, in the function [`scanLinePosDirective`](https://github.com/B-Lang-org/bsc/blob/0e9a5b2624d91d514a2fb586b7e5d8c94050853b/src/comp/SystemVerilogScanner.lhs#L309-L322) which parses `` `line `` directives. The preprocessor accepts two kinds of `` `line `` directive: one according to the Verilog/SystemVerilog standards (without parentheses) and one with parentheses (which may be something that Bluespec invented, for preserving position information in the pre-processor output?). The function `scanLinePosDirective` is for parsing the directives with parentheses. A comment on the function says "One day the gods will smite me for this" -- I don't know if this is for inventing the new syntax or because the function doesn't handle any error conditions! I wrote a few examples of error situations: closing paren but not enough arguments; EOF before the closing paren; missing closing paren; closing paren (or arguments) on the following line; non-numeric values where numeric arguments expected; etc. Surprisingly, I found that a directive spread over multiple lines will still parse -- I don't know if that's expected. (This is not high priority, but I wanted to record the issue.) Macro definitions (the `` `define `` directive) can also take arguments in parentheses. This is handled in [`SystemVerilogPreprocess.lhs`](https://github.com/B-Lang-org/bsc/blob/0e9a5b2624d91d514a2fb586b7e5d8c94050853b/src/comp/SystemVerilogPreprocess.lhs#L219) (which also generates non-exhaustive pattern warnings). I wrote some examples and found that error situations are also not being handled there. (This is more serious, because it's a user-visible feature and an actual part of the SV spec.) One interesting quirk is that, if the close paren is missing, the directive will continue to be parsed on later lines until a close paren is found (and extra text is silently ignored). I will commit the examples into `testsuite/bsc.preprocessor/misc/`.
1.0
SV preprocessor parentheses parsing - As mentioned in issue #469, there are warnings about non-exhaustive pattern matching in `SystemVerilogScanner.lhs`, in the function [`scanLinePosDirective`](https://github.com/B-Lang-org/bsc/blob/0e9a5b2624d91d514a2fb586b7e5d8c94050853b/src/comp/SystemVerilogScanner.lhs#L309-L322) which parses `` `line `` directives. The preprocessor accepts two kinds of `` `line `` directive: one according to the Verilog/SystemVerilog standards (without parentheses) and one with parentheses (which may be something that Bluespec invented, for preserving position information in the pre-processor output?). The function `scanLinePosDirective` is for parsing the directives with parentheses. A comment on the function says "One day the gods will smite me for this" -- I don't know if this is for inventing the new syntax or because the function doesn't handle any error conditions! I wrote a few examples of error situations: closing paren but not enough arguments; EOF before the closing paren; missing closing paren; closing paren (or arguments) on the following line; non-numeric values where numeric arguments expected; etc. Surprisingly, I found that a directive spread over multiple lines will still parse -- I don't know if that's expected. (This is not high priority, but I wanted to record the issue.) Macro definitions (the `` `define `` directive) can also take arguments in parentheses. This is handled in [`SystemVerilogPreprocess.lhs`](https://github.com/B-Lang-org/bsc/blob/0e9a5b2624d91d514a2fb586b7e5d8c94050853b/src/comp/SystemVerilogPreprocess.lhs#L219) (which also generates non-exhaustive pattern warnings). I wrote some examples and found that error situations are also not being handled there. (This is more serious, because it's a user-visible feature and an actual part of the SV spec.) One interesting quirk is that, if the close paren is missing, the directive will continue to be parsed on later lines until a close paren is found (and extra text is silently ignored). I will commit the examples into `testsuite/bsc.preprocessor/misc/`.
process
sv preprocessor parentheses parsing as mentioned in issue there are warnings about non exhaustive pattern matching in systemverilogscanner lhs in the function which parses line directives the preprocessor accepts two kinds of line directive one according to the verilog systemverilog standards without parentheses and one with parentheses which may be something that bluespec invented for preserving position information in the pre processor output the function scanlineposdirective is for parsing the directives with parentheses a comment on the function says one day the gods will smite me for this i don t know if this is for inventing the new syntax or because the function doesn t handle any error conditions i wrote a few examples of error situations closing paren but not enough arguments eof before the closing paren missing closing paren closing paren or arguments on the following line non numeric values where numeric arguments expected etc surprisingly i found that a directive spread over multiple lines will still parse i don t know if that s expected this is not high priority but i wanted to record the issue macro definitions the define directive can also take arguments in parentheses this is handled in which also generates non exhaustive pattern warnings i wrote some examples and found that error situations are also not being handled there this is more serious because it s a user visible feature and an actual part of the sv spec one interesting quirk is that if the close paren is missing the directive will continue to be parsed on later lines until a close paren is found and extra text is silently ignored i will commit the examples into testsuite bsc preprocessor misc
1
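For illustration only, a Python sketch of the defensive parsing the report asks for; the parenthesized form's exact argument shape is not spelled out above, so the (filename, line, level) layout is an assumption rather than bsc's real grammar:

```python
import re

# Hypothetical shape: `line ("file.sv", <line>, <level>)
LINE_DIR = re.compile(r'`line\s*\(\s*"([^"]*)"\s*,\s*(\d+)\s*,\s*(\d+)\s*\)')

def parse_line_directive(text: str):
    m = LINE_DIR.match(text)
    if m is None:
        # Report a parse error instead of dying on a non-exhaustive match:
        # covers missing arguments, missing parens, and non-numeric values.
        raise SyntaxError(f"malformed `line directive: {text.strip()!r}")
    return m.group(1), int(m.group(2)), int(m.group(3))
```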
761,537
26,684,960,854
IssuesEvent
2023-01-26 21:03:07
dtcenter/MET
https://api.github.com/repos/dtcenter/MET
opened
Enhance TC-Stat to write the CTC/CTS output from the RIRW job to a real .stat output file
type: new feature requestor: NOAA/GSL alert: NEED ACCOUNT KEY alert: NEED PROJECT ASSIGNMENT MET: Tropical Cyclone Tools priority: high
## Describe the New Feature ## The RIRW job in TC-Stat applies a categorical verification approach to verify rapid intensification or weakening events identified in A-Decks and B-Decks. The job can write CTC/CTS/MPR data to the output. However, that output is only written to the terminal or redirected to a log file. And the CTC/CTS lines written do NOT contain the full 21 header columns common to all other .stat line types. This task is to add support for the `-out_stat` job command option to the TC-Stat tool. When supplied, reformat the custom header columns for the RIRW job type into the existing 21 .stat header columns. The challenge is finding an intuitive and logical place for each piece of metadata. This is requested by NOAA/GSL for use during the 2023 hurricane season. ### Acceptance Testing ### *List input data types and sources.* *Describe tests required for new functionality.* ### Time Estimate ### *Estimate the amount of work required here.* *Issues should represent approximately 1 to 3 days of work.* ### Sub-Issues ### Consider breaking the new feature down into sub-issues. No sub-issues required ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [x] Select **scientist(s)** or **no scientist** required ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) No impacts. ## New Feature Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Development** issues Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
1.0
Enhance TC-Stat to write the CTC/CTS output from the RIRW job to a real .stat output file - ## Describe the New Feature ## The RIRW job in TC-Stat applies a categorical verification approach to verify rapid intensification or weakening events identified in A-Decks and B-Decks. The job can write CTC/CTS/MPR data to the output. However, that output is only written to the terminal or redirected to a log file. And the CTC/CTS lines written do NOT contain the full 21 header columns common to all other .stat line types. This task is to add support for the `-out_stat` job command option to the TC-Stat tool. When supplied, reformat the custom header columns for the RIRW job type into the existing 21 .stat header columns. The challenge is finding an intuitive and logical place for each piece of metadata. This is requested by NOAA/GSL for use during the 2023 hurricane season. ### Acceptance Testing ### *List input data types and sources.* *Describe tests required for new functionality.* ### Time Estimate ### *Estimate the amount of work required here.* *Issues should represent approximately 1 to 3 days of work.* ### Sub-Issues ### Consider breaking the new feature down into sub-issues. No sub-issues required ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [x] Select **scientist(s)** or **no scientist** required ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next official version or **Future Versions** ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) No impacts. ## New Feature Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>_<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Development** issues Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
non_process
enhance tc stat to write the ctc cts output from the rirw job to a real stat output file describe the new feature the rirw job in tc stat applies a categorical verification approach to verify rapid intensification or weakening events identified in a decks and b decks the job can write ctc cts mpr data to the output however that output is only written to the terminal or redirected to a log file and the ctc cts lines written do not contain the full header columns common to all other stat line types this task is to add support for the out stat job command option to the tc stat tool when supplied reformat the custom header columns for the rirw job type into the existing stat header columns the challenge is finding an intuitive and logical place for each piece of metadata this is requested by noaa gsl for use during the hurricane season acceptance testing list input data types and sources describe tests required for new functionality time estimate estimate the amount of work required here issues should represent approximately to days of work sub issues consider breaking the new feature down into sub issues no sub issues required relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components no impacts new feature checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and development issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
0
300,540
25,975,337,377
IssuesEvent
2022-12-19 14:28:27
pulumi/pulumi
https://api.github.com/repos/pulumi/pulumi
opened
Flaky `TestResourceRefsGetResourceNode` test
area/testing kind/engineering
The test inconsistently passes and fails. Disabling for now. This issue tracks fixing the underlying issue and re-enabling the test.
1.0
Flaky `TestResourceRefsGetResourceNode` test - The test inconsistently passes and fails. Disabling for now. This issue tracks fixing the underlying issue and re-enabling the test.
non_process
flaky testresourcerefsgetresourcenode test the test inconsistently passes and fails disabling for now this issue tracks fixing the underlying issue and re enabling the test
0
146,268
13,175,411,157
IssuesEvent
2020-08-12 01:32:24
shakacode/react_on_rails
https://api.github.com/repos/shakacode/react_on_rails
closed
Update documentation to not suggest moving node_modules
documentation needed
https://github.com/shakacode/react_on_rails/blob/master/docs/basics/recommended-project-structure.md Instead, let's consider recommending leaving package.json and node_modules at the top-level... There are no big advantages of having this be inside of the /client directory and there are a few additional bits of unnecessary complexity. @ashgaliyev please add to this discussion.
1.0
Update documentation to not suggest moving node_modules - https://github.com/shakacode/react_on_rails/blob/master/docs/basics/recommended-project-structure.md Instead, let's consider recommending leaving package.json and node_modules at the top-level... There are no big advantages of having this be inside of the /client directory and there are a few additional bits of unnecessary complexity. @ashgaliyev please add to this discussion.
non_process
update documentation to not suggest moving node modules instead let s consider recommending leaving package json and node modules at the top level there are no big advantages of having this be inside of the client directory and there are a few additional bits of unnecessary complexity ashgaliyev please add to this discussion
0
427,550
12,396,643,127
IssuesEvent
2020-05-20 20:55:59
minio/mc
https://api.github.com/repos/minio/mc
closed
mc mirror --watch ignores files owned by other user
community priority: medium stale triage
## Expected behavior "mc mirror --watch" should sync all files, even owned by other users than the one which runs the mc command ## Actual behavior After starting "mc mirror --watch" all files in source directory are synced to target, no matter by which user the source files are owned. File permissions are 644 for all files. When creating a new file by a user which is not the user that runs the "mc mirror --watch", while the "mc mirror --watch" is running, the new file doesn't get synced ## Steps to reproduce the behavior 1. Source-directory owned by UserA:GroupA, Permissions 776 2. Create file "test1" in Source-directory by UserA, group GroupA, permissions 644 3. start "mc mirror --watch" with UserB, syncing Source-directory 4. -> "test1" is synced to target 5. Create file "test2" with user UserA, group GroupA, permissions 644, in Source-directory while "mc mirror --watch" is still running. 6. File "test2" is not synced 7. Create file "test3" with user UserB (running the mc), group GroupA, permissions 644, in Source-directory while "mc mirror --watch" is still running. 8. -> "test3" is synced to target ## mc version - (paste output of `mc version`) Version: 2018-11-06T01:12:20Z Release-tag: RELEASE.2018-11-06T01-12-20Z Commit-id: b827fac6356f80354a1cf61f43cf5e5b60bbb511 ## System information SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 4
1.0
mc mirror --watch ignores files owned by other user - ## Expected behavior "mc mirror --watch" should sync all files, even owned by other users than the one which runs the mc command ## Actual behavior After starting "mc mirror --watch" all files in source directory are synced to target, no matter by which user the source files are owned. File permissions are 644 for all files. When creating a new file by a user which is not the user that runs the "mc mirror --watch", while the "mc mirror --watch" is running, the new file doesn't get synced ## Steps to reproduce the behavior 1. Source-directory owned by UserA:GroupA, Permissions 776 2. Create file "test1" in Source-directory by UserA, group GroupA, permissions 644 3. start "mc mirror --watch" with UserB, syncing Source-directory 4. -> "test1" is synced to target 5. Create file "test2" with user UserA, group GroupA, permissions 644, in Source-directory while "mc mirror --watch" is still running. 6. File "test2" is not synced 7. Create file "test3" with user UserB (running the mc), group GroupA, permissions 644, in Source-directory while "mc mirror --watch" is still running. 8. -> "test3" is synced to target ## mc version - (paste output of `mc version`) Version: 2018-11-06T01:12:20Z Release-tag: RELEASE.2018-11-06T01-12-20Z Commit-id: b827fac6356f80354a1cf61f43cf5e5b60bbb511 ## System information SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 4
non_process
mc mirror watch ignores files owned by other user expected behavior mc mirror watch should sync all files even owned by other users than the one which runs the mc command actual behavior after starting mc mirror watch all files in source directory are synced to target no matter by which user the source files are owned file permissions are for all files when creating a new file by a user which is not the user that runs the mc mirror watch while the mc mirror watch is running the new file doesn t get synced steps to reproduce the behavior source directory owned by usera groupa permissions create file in source directory by usera group groupa permissions start mc mirror watch with userb syncing source directory is synced to target create file with user usera group groupa permissions in source directory while mc mirror watch is still running file is not synced create file with user userb running the mc group groupa permissions in source directory while mc mirror watch is still running is synced to target mc version paste output of mc version version release tag release commit id system information suse linux enterprise server version patchlevel
0
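One way to narrow this down is to check whether OS-level file events are delivered at all for files created by another user; a throwaway sketch with the third-party `watchdog` package (the watched path is a placeholder):

```python
# pip install watchdog
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class Printer(FileSystemEventHandler):
    def on_created(self, event):
        # If UserA's files show up here while running as UserB, the kernel
        # notification layer is fine and the filtering likely happens in mc.
        print("created:", event.src_path)

observer = Observer()
observer.schedule(Printer(), "/path/to/Source-directory", recursive=True)
observer.start()
try:
    time.sleep(120)  # create test files as different users meanwhile
finally:
    observer.stop()
    observer.join()
```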
17,374
23,198,583,822
IssuesEvent
2022-08-01 18:58:03
vectordotdev/vector
https://api.github.com/repos/vectordotdev/vector
closed
Implement end-to-end record acknowledgement
type: enhancement have: should domain: processing Epic
## Summary There does not appear to be any communication of acknowledgements from sinks to their source(s). This could cause dropped records if Vector is interrupted (killed) between receiving a record from a source and fully communicating it to the corresponding sink(s), which in turn voids the "at least once" delivery guarantee listed in most of the sinks. For sources that record state checkpoints, this may be avoided by communicating acknowledgements upstream, and only marking records as read after they have been acknowledged by the sinks. - docker source explicitly only looks at records starting at Vector start time, so no changes required - file source stores checkpoints for each opened file to avoid re-reading - journald source stores a checkpoint after each read batch - kafka source acknowledges each message as soon as it is read, which is committed every 5 seconds by default (if I'm reading it right) - The stdin, syslog, tcp, udp, and vector sources have no backing store, and no protocol support for acknowledgements, so cannot save any state. This is of course complicated by the fact that each source may have multiple sinks (and each sink may be receiving from multiple sources) and that the data may be passing through transforms that may drop records. ## Jobs to be done - When using Vector with otherwise high-reliability components, Vector should match those existing guarantees and not be a weak link in the overall system - When a customer asks about reliability or delivery guarantees, we need to provide a simple and satisfactory answer so that they can feel confident adopting Vector ## Requirements - [ ] Acknowledgments to sources like Kafka should be precise and only include events Vector has finished processing - [ ] Sources like the file source that manage checkpoints internally should only save those for which it does not need to reread on restart (similar to Kafka above) - [ ] For sources like HTTP, Vector should have the ability (but not the requirement) to defer responding 200 OK until it has finished processing the incoming events - [ ] Acknowledgements must be flexible and configurable, accounting for use cases with multiple sinks, disk buffers, etc without forcing users down a single path - [ ] Clear, understandable documentation around delivery guarantees in various scenarios ## Out of scope - This is an attempt to better match the semantics of some of the existing systems we integrate with, not to try to add the idea of acknowledgement to sources that do not natively support it (e.g. sockets, syslog, etc)
1.0
Implement end-to-end record acknowledgement - ## Summary There does not appear to be any communication of acknowledgements from sinks to their source(s). This could cause dropped records if Vector is interrupted (killed) between receiving a record from a source and fully communicating it to the corresponding sink(s), which in turn voids the "at least once" delivery guarantee listed in most of the sinks. For sources that record state checkpoints, this may be avoided by communicating acknowledgements upstream, and only marking records as read after they have been acknowledged by the sinks. - docker source explicitly only looks at records starting at Vector start time, so no changes required - file source stores checkpoints for each opened file to avoid re-reading - journald source stores a checkpoint after each read batch - kafka source acknowledges each message as soon as it is read, which is committed every 5 seconds by default (if I'm reading it right) - The stdin, syslog, tcp, udp, and vector sources have no backing store, and no protocol support for acknowledgements, so cannot save any state. This is of course complicated by the fact that each source may have multiple sinks (and each sink may be receiving from multiple sources) and that the data may be passing through transforms that may drop records. ## Jobs to be done - When using Vector with otherwise high-reliability components, Vector should match those existing guarantees and not be a weak link in the overall system - When a customer asks about reliability or delivery guarantees, we need to provide a simple and satisfactory answer so that they can feel confident adopting Vector ## Requirements - [ ] Acknowledgments to sources like Kafka should be precise and only include events Vector has finished processing - [ ] Sources like the file source that manage checkpoints internally should only save those for which it does not need to reread on restart (similar to Kafka above) - [ ] For sources like HTTP, Vector should have the ability (but not the requirement) to defer responding 200 OK until it has finished processing the incoming events - [ ] Acknowledgements must be flexible and configurable, accounting for use cases with multiple sinks, disk buffers, etc without forcing users down a single path - [ ] Clear, understandable documentation around delivery guarantees in various scenarios ## Out of scope - This is an attempt to better match the semantics of some of the existing systems we integrate with, not to try to add the idea of acknowledgement to sources that do not natively support it (e.g. sockets, syslog, etc)
process
implement end to end record acknowledgement summary there does not appear to be any communication of acknowledgements from sinks to their source s this could cause dropped records if vector is interrupted killed between receiving a record from a source and fully communicating it to the corresponding sink s which in turn voids the at least once delivery guarantee listed in most of the sinks for sources that record state checkpoints this may be avoided by communicating acknowledgements upstream and only marking records as read after they have been acknowledged by the sinks docker source explicitly only looks at records starting at vector start time so no changes required file source stores checkpoints for each opened file to avoid re reading journald source stores a checkpoint after each read batch kafka source acknowledges each message as soon as it is read which is committed every seconds by default if i m reading it right the stdin syslog tcp udp and vector sources have no backing store and no protocol support for acknowledgements so cannot save any state this is of course complicated by the fact that each source may have multiple sinks and each sink may be receiving from multiple sources and that the data may be passing through transforms that may drop records jobs to be done when using vector with otherwise high reliability components vector should match those existing guarantees and not be a weak link in the overall system when a customer asks about reliability or delivery guarantees we need to provide a simple and satisfactory answer so that they can feel confident adopting vector requirements acknowledgments to sources like kafka should be precise and only include events vector has finished processing sources like the file source that manage checkpoints internally should only save those for which it does not need to reread on restart similar to kafka above for sources like http vector should have the ability but not the requirement to defer responding ok until it has finished processing the incoming events acknowledgements must be flexible and configurable accounting for use cases with multiple sinks disk buffers etc without forcing users down a single path clear understandable documentation around delivery guarantees in various scenarios out of scope this is an attempt to better match the semantics of some of the existing systems we integrate with not to try to add the idea of acknowledgement to sources that do not natively support it e g sockets syslog etc
1
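As a toy illustration of the end-to-end idea (Vector itself is written in Rust; this is not its implementation), a batch is only checkpointed at the source once every sink that received it has acknowledged it:

```python
import threading

class BatchAck:
    """One pending acknowledgement per sink; commit the source when all arrive."""

    def __init__(self, batch_id, num_sinks, commit):
        self.batch_id = batch_id
        self.pending = num_sinks
        self.commit = commit  # e.g. advance a file checkpoint or Kafka offset
        self._lock = threading.Lock()

    def ack(self):
        with self._lock:
            self.pending -= 1
            if self.pending == 0:
                self.commit(self.batch_id)

# Usage: three sinks share one batch; the checkpoint fires on the last ack.
batch = BatchAck("batch-42", 3, commit=lambda b: print("checkpoint", b))
for _ in range(3):
    batch.ack()
```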
446,825
31,560,753,416
IssuesEvent
2023-09-03 08:13:41
Dayana-N/AutoMarket-PP4
https://api.github.com/repos/Dayana-N/AutoMarket-PP4
closed
DEV TASK: Document Testing Process MS8
documentation Dev Task
**Describe Task** Document the testing process of the application to include: - Responsiveness - Browser Compatibility - Lighthouse - Code Validation - User stories - Features - Automated testing (Optional)
1.0
DEV TASK: Document Testing Process MS8 - **Describe Task** Document the testing process of the application to include: - Responsiveness - Browser Compatibility - Lighthouse - Code Validation - User stories - Features - Automated testing (Optional)
non_process
dev task document testing process describe task document the testing process of the application to include responsiveness browser compatibility lighthouse code validation user stories features automated testing optional
0
10,285
13,133,408,102
IssuesEvent
2020-08-06 20:51:36
ORNL-AMO/AMO-Tools-Desktop
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
closed
O2 enrichment Baseline/Mod updates
Calculator Process Heating
From #3831: Change headers to "Baseline" and "Modification" Maybe reformat to be like other BL vs Mod calcs (almost any calc from the TH) {this might be better as a separate issue} Once the user hits "Plot" the BL is "locked" (unlocks when user hits "Reset data") then the user can change various fields in the mod and "plot" more lines
1.0
O2 enrichment Baseline/Mod updates - From #3831: Change headers to "Baseline" and "Modification" Maybe reformat to be like other BL vs Mod calcs (almost any calc from the TH) {this might be better as a separate issue} Once the user hits "Plot" the BL is "locked" (unlocks when user hits "Reset data") then the user can change various fields in the mod and "plot" more lines
process
enrichment baseline mod updates from change headers to baseline and modification maybe reformat to be like other bl vs mod calcs almost any calc from the th this might be better as a separate issue once the user hits plot the bl is locked unlocks when user hits reset data then the user can change various fields in the mod and plot more lines
1
752,061
26,271,821,730
IssuesEvent
2023-01-06 17:44:24
mypyc/mypyc
https://api.github.com/repos/mypyc/mypyc
opened
Unexpected runtime type error with inferred optional type
priority-0-high crash
This generates a `TypeError` at runtime, though the code is fine: ```py def f(b: bool) -> None: if b: y = 1 else: y = None f(False) ``` Traceback: ``` Traceback (most recent call last): File "<string>", line 1, in <module> File "t/a.py", line 7, in <module> f(False) File "t/a.py", line 5, in f y = None TypeError: int object expected; got None ``` The trigger seems to be that the type of `y` is inferred from two assignments, and the latter one assigns `None`. Somehow this results in mypyc thinking that the type of `y` is `int` instead of `int | None` in the assignment.
1.0
Unexpected runtime type error with inferred optional type - This generates a `TypeError` at runtime, though the code is fine: ```py def f(b: bool) -> None: if b: y = 1 else: y = None f(False) ``` Traceback: ``` Traceback (most recent call last): File "<string>", line 1, in <module> File "t/a.py", line 7, in <module> f(False) File "t/a.py", line 5, in f y = None TypeError: int object expected; got None ``` The trigger seems to be that the type of `y` is inferred from two assignments, and the latter one assigns `None`. Somehow this results in mypyc thinking that the type of `y` is `int` instead of `int | None` in the assignment.
non_process
unexpected runtime type error with inferred optional type this generates a typeerror at runtime though the code is fine py def f b bool none if b y else y none f false traceback traceback most recent call last file line in file t a py line in f false file t a py line in f y none typeerror int object expected got none the trigger seems to be that the type of y is inferred from two assignments and the latter one assigns none somehow this results in mypyc thinking that the type of y is int instead of int none in the assignment
0
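Until the compiler bug is fixed, a plausible (untested against the affected mypyc build) workaround is to make the optional type explicit so inference never narrows `y` to plain `int`:

```python
from typing import Optional

def f(b: bool) -> None:
    y: Optional[int]  # explicit annotation instead of relying on inference
    if b:
        y = 1
    else:
        y = None

f(False)
```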
1,447
4,020,060,312
IssuesEvent
2016-05-16 17:03:28
emergence-lab/emergence-lab
https://api.github.com/repos/emergence-lab/emergence-lab
closed
Create from template - D180
backend bug process
Issue where the create from template button on process detail page for a D180 growth redirects to the generic create process page rather than the create growth page.
1.0
Create from template - D180 - Issue where the create from template button on process detail page for a D180 growth redirects to the generic create process page rather than the create growth page.
process
create from template issue where the create from template button on process detail page for a growth redirects to the generic create process page rather than the create growth page
1
79,528
7,718,862,016
IssuesEvent
2018-05-23 17:33:00
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
workload: support distributed TPC-C load generation
A-testing C-enhancement
Throughout all previous TPC-C testing, we have used a single centralized load generator process. Even when running against a 30 node cluster with 10k warehouses, this has not been an issue - the load generator has always been able to produce the desired amount of load. However, now that we're beginning to look into running TPC-C on a distributed cluster, we're going to need to figure out how to distribute the load generator. This is because we're going to move from running all cockroach nodes in a single dc to running them across multiple. Naively, we would just continue to run a single load generator in one of the data centers and point it only at the dc's local nodes. This would work to some degree, but it would create a huge imbalance of load on that datacenter's nodes. We've already seen how important load distribution is to maximum TPC-C performance, so we'll need to do better. We could point this single load generator at all nodes in the cluster, but then we'd be introducing a huge amount of latency just between the load generator and the remote SQL gateways. Either way, a centralized load generator isn't going to work. Moving past this idea, the next logical idea is to run the same TPC-C load generator in each data center and point each generator at only its local gateways. We could then combine the tpmC results of all generators. Unfortunately, this would produce invalid results because we would not be correctly respecting TPC-C's enforced `10 workers/warehouse` limit. This would allow us to run too many transactions for a given number of warehouses. Telling each load generator about only a fraction of the warehouses would solve this specific problem, but it would result in an incorrect distribution of remote warehouse accesses so it's also not an option. Instead, we need to divide the workers up amongst the different load generators (remember: each worker is bound to a warehouse) while having each load generator remain aware of the total number of warehouses. The current idea is to add a `--warehouse-range` flag to the load generator. This would indicate that even though there may be `--warehouses` total warehouses in the cluster, the generator should only create individual workers for some subset of those. An important optimization to this is that the divide should directly correspond to the dataset partitioning. When running TPC-C in a distributed cluster, we are going to partition the dataset on warehouse boundaries. This is already provided by `workload's` `--partition` flag, and it works out pretty well in theory because only 10% of transactions touch remote warehouses. However, if the load generators were unaware of this then they might end up getting bound to remote warehouses. This would result in far more cross-dc transactions, which would destroy performance. To avoid this, we'll need to make sure there's an affinity between the warehouse partitioning and the use of the `--warehouse-range` flag. @benesch mentioned that the `roachmart` load generator already had to solve this kind of issue. We should explore that solution and see how much of it applies to this issue. @jordanlewis you're the expert here. Did I miss anything?
1.0
workload: support distributed TPC-C load generation - Throughout all previous TPC-C testing, we have used a single centralized load generator process. Even when running against a 30 node cluster with 10k warehouses, this has not been an issue - the load generator has always been able to produce the desired amount of load. However, now that we're beginning to look into running TPC-C on a distributed cluster, we're going to need to figure out how to distribute the load generator. This is because we're going to move from running all cockroach nodes in a single dc to running them across multiple. Naively, we would just continue to run a single load generator in one of the data centers and point it only at the dc's local nodes. This would work to some degree, but it would create a huge imbalance of load on that datacenter's nodes. We've already seen how important load distribution is to maximum TPC-C performance, so we'll need to do better. We could point this single load generator at all nodes in the cluster, but then we'd be introducing a huge amount of latency just between the load generator and the remote SQL gateways. Either way, a centralized load generator isn't going to work. Moving past this idea, the next logical idea is to run the same TPC-C load generator in each data center and point each generator at only its local gateways. We could then combine the tpmC results of all generators. Unfortunately, this would produce invalid results because we would not be correctly respecting TPC-C's enforced `10 workers/warehouse` limit. This would allow us to run too many transactions for a given number of warehouses. Telling each load generator about only a fraction of the warehouses would solve this specific problem, but it would result in an incorrect distribution of remote warehouse accesses so it's also not an option. Instead, we need to divide the workers up amongst the different load generators (remember: each worker is bound to a warehouse) while having each load generator remain aware of the total number of warehouses. The current idea is to add a `--warehouse-range` flag to the load generator. This would indicate that even though there may be `--warehouses` total warehouses in the cluster, the generator should only create individual workers for some subset of those. An important optimization to this is that the divide should directly correspond to the dataset partitioning. When running TPC-C in a distributed cluster, we are going to partition the dataset on warehouse boundaries. This is already provided by `workload's` `--partition` flag, and it works out pretty well in theory because only 10% of transactions touch remote warehouses. However, if the load generators were unaware of this then they might end up getting bound to remote warehouses. This would result in far more cross-dc transactions, which would destroy performance. To avoid this, we'll need to make sure there's an affinity between the warehouse partitioning and the use of the `--warehouse-range` flag. @benesch mentioned that the `roachmart` load generator already had to solve this kind of issue. We should explore that solution and see how much of it applies to this issue. @jordanlewis you're the expert here. Did I miss anything?
non_process
workload support distributed tpc c load generation throughout all previous tpc c testing we have used a single centralized load generator process even when running against a node cluster with warehouses this has not been an issue the load generator has always been able to produce the desired amount of load however now that we re beginning to look into running tpc c on a distributed cluster we re going to need to figure out how to distribute the load generator this is because we re going to move from running all cockroach nodes in a single dc to running them across multiple naively we would just continue to run a single load generator in one of the data centers and point it only at the dc s local nodes this would work to some degree but it would create a huge imbalance of load on that datacenter s nodes we ve already seen how important load distribution is to maximum tpc c performance so we ll need to do better we could point this single load generator at all nodes in the cluster but then we d be introducing a huge amount of latency just between the load generator and the remote sql gateways either way a centralized load generator isn t going to work moving past this idea the next logical idea is to run the same tpc c load generator in each data center and point each generator at only its local gateways we could then combine the tpmc results of all generators unfortunately this would produce invalid results because we would not be correctly respecting tpc c s enforced workers warehouse limit this would allow us to run too many transactions for a given number of warehouses telling each load generator about only a fraction of the warehouses would solve this specific problem but it would result in an incorrect distribution of remote warehouse accesses so it s also not an option instead we need to divide the workers up amongst the different load generators remember each worker is bound to a warehouse while having each load generator remain aware of the total number of warehouses the current idea is to add a warehouse range flag to the load generator this would indicate that even though there may be warehouses total warehouses in the cluster the generator should only create individual workers for some subset of those an important optimization to this is that the divide should directly correspond to the dataset partitioning when running tpc c in a distributed cluster we are going to partition the dataset on warehouse boundaries this is already provided by workload s partition flag and it works out pretty well in theory because only of transactions touch remote warehouses however if the load generators were unaware of this then they might end up getting bound to remote warehouses this would result in far more cross dc transactions which would destroy performance to avoid this we ll need to make sure there s an affinity between the warehouse partitioning and the use of the warehouse range flag benesch mentioned that the roachmart load generator already had to solve this kind of issue we should explore that solution and see how much of it applies to this issue jordanlewis you re the expert here did i miss anything
0
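A sketch of the proposed `--warehouse-range` semantics (illustrative only, not the workload tool's actual code, which is Go): split the warehouse IDs into contiguous chunks so each generator binds workers only to its data center's partition while still knowing the global warehouse count:

```python
def warehouse_range(total_warehouses: int, num_generators: int, index: int):
    """Return the half-open [start, end) of warehouses owned by one generator."""
    base, extra = divmod(total_warehouses, num_generators)
    start = index * base + min(index, extra)
    size = base + (1 if index < extra else 0)
    return start, start + size

# Usage: 10000 warehouses split across 3 data centers.
print([warehouse_range(10000, 3, i) for i in range(3)])
# -> [(0, 3334), (3334, 6667), (6667, 10000)]
```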
18,077
24,094,936,128
IssuesEvent
2022-09-19 17:51:45
Open-Data-Product-Initiative/open-data-product-spec
https://api.github.com/repos/Open-Data-Product-Initiative/open-data-product-spec
opened
Data Pipeline component and element renaming to DataOps
enhancement unprocessed
``` "dataOps": { "infrastructure": { "platform": "Azure", "storageTechnology": "Azure SQL", "storageType": "sql", "containerTool": "helm", "format": "yaml", "status": "development", "scriptURL": "http://192.168.10.1/test/rundatapipeline.yml", "deploymentDocumentationURL": "http://192.168.10.1/test/docs/datapipeline", "hashType": "SHA-2", "checksum": "7b7444ab8f5832e9ae8f54834782af995d0a83b4a1d77a75833eda7e19b4c921" } ``` DataOps is intended for workflow automation, so it is the most descriptive name for the component and element. In the example, the data product is in SQL format and is deployed by running a YAML script.
1.0
Data Pipeline component and element renaming to DataOps - ``` "dataOps": { "infrastructure": { "platform": "Azure", "storageTechnology": "Azure SQL", "storageType": "sql", "containerTool": "helm", "format": "yaml", "status": "development", "scriptURL": "http://192.168.10.1/test/rundatapipeline.yml", "deploymentDocumentationURL": "http://192.168.10.1/test/docs/datapipeline", "hashType": "SHA-2", "checksum": "7b7444ab8f5832e9ae8f54834782af995d0a83b4a1d77a75833eda7e19b4c921" } ``` DataOps is intended for workflow automation, so it is the most descriptive name for the component and element. In the example, the data product is in SQL format and is deployed by running a YAML script.
process
data pipeline component and element renaming to dataops dataops infrastructure platform azure storagetechnology azure sql storagetype sql containertool helm format yaml status development scripturl deploymentdocumentationurl hashtype sha checksum dataops is intended for workflow automation so it is the most descriptive name for the component and element in the example the data product is in sql format and is deployed by running a yaml script
1
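The `hashType` of `SHA-2` together with the 64-hex-digit `checksum` suggests SHA-256; a sketch of how a consumer of the spec might verify the deployment script before running it (the helper name and local file name are illustrative):

```python
import hashlib

def verify_checksum(path: str, expected: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected.lower()

# e.g. verify_checksum("rundatapipeline.yml", "7b7444ab...b4c921")
```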
17,100
22,620,032,162
IssuesEvent
2022-06-30 04:59:55
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Review + Create tab - list of resources created needs updating
automation/svc triaged cxp doc-enhancement process-automation/subsvc Pri2
In Review + Create tab: https://docs.microsoft.com/en-us/azure/automation/automation-create-standalone-account?tabs=azureportal#review--create-tab It says the following runbooks are automatically created upon completion: - AzureAutomationTutorialWithIdentity - AzureAutomationTutorialScript - AzureAutomationTutorialPython2Runbook I created three different automation accounts, and for each one, the following two runbooks are created: - AzureAutomationTutorialWithIdentity - AzureAutomationTutorialWithIdentityGraphical --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 9b4440e0-1ff5-0fd3-6983-d5f6ed86e818 * Version Independent ID: 8d6aecae-1a58-83aa-45f7-306fb6c92d38 * Content: [Create a standalone Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/automation-create-standalone-account?tabs=azureportal) * Content Source: [articles/automation/automation-create-standalone-account.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-create-standalone-account.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SGSneha * Microsoft Alias: **sudhirsneha**
1.0
Review + Create tab - list of resources created needs updating - In Review + Create tab: https://docs.microsoft.com/en-us/azure/automation/automation-create-standalone-account?tabs=azureportal#review--create-tab It says the following runbooks are automatically created upon completion: - AzureAutomationTutorialWithIdentity - AzureAutomationTutorialScript - AzureAutomationTutorialPython2Runbook I created three different automation accounts, and for each one, the following two runbooks are created: - AzureAutomationTutorialWithIdentity - AzureAutomationTutorialWithIdentityGraphical --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 9b4440e0-1ff5-0fd3-6983-d5f6ed86e818 * Version Independent ID: 8d6aecae-1a58-83aa-45f7-306fb6c92d38 * Content: [Create a standalone Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/automation-create-standalone-account?tabs=azureportal) * Content Source: [articles/automation/automation-create-standalone-account.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-create-standalone-account.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SGSneha * Microsoft Alias: **sudhirsneha**
process
review create tab list of resources created needs updating in review create tab it says the following runbooks are automatically created upon completion azureautomationtutorialwithidentity azureautomationtutorialscript i created three different automation accounts and for each one the following two runbooks are created azureautomationtutorialwithidentity azureautomationtutorialwithidentitygraphical document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login sgsneha microsoft alias sudhirsneha
1
15,909
20,113,514,745
IssuesEvent
2022-02-07 17:07:58
akrherz/iem
https://api.github.com/repos/akrherz/iem
closed
Create SWAT files for UMRB
enhancement Data Processing
Requested two datasets 1) A "realtime" dataset updated daily for the 2019-yesterday period that contains MRMS precip + IEMRE temps 2) One-time dump of 2001-2021 MRMS + IEMRE. For the 2001-2010 period, -15% bias correction applied based on some previous checks I made.
1.0
Create SWAT files for UMRB - Requested two datasets 1) A "realtime" dataset updated daily for the 2019-yesterday period that contains MRMS precip + IEMRE temps 2) One-time dump of 2001-2021 MRMS + IEMRE. For the 2001-2010 period, -15% bias correction applied based on some previous checks I made.
process
create swat files for umrb requested two datasets a realtime dataset updated daily for the yesterday period that contains mrms precip iemre temps one time dump of mrms iemre for the period bias correction applied based on some previous checks i made
1
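For the one-time dump, the -15% correction amounts to scaling 2001-2010 precipitation by 0.85; a pandas sketch in which the file and column names are hypothetical:

```python
import pandas as pd

df = pd.read_csv("umrb_mrms_iemre.csv", parse_dates=["date"])  # hypothetical
mask = (df["date"] >= "2001-01-01") & (df["date"] <= "2010-12-31")
df.loc[mask, "precip_mm"] *= 0.85  # -15% bias correction for 2001-2010
df.to_csv("umrb_mrms_iemre_corrected.csv", index=False)
```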
182
2,562,477,967
IssuesEvent
2015-02-06 02:05:52
phtimmins/cs373-collatz
https://api.github.com/repos/phtimmins/cs373-collatz
closed
Write more unit tests in TestCollatz.py
Project Requirement Testing
Write more unit tests in TestCollatz.py that test corner cases and failure cases until you have an average of 3 tests for each function, confirm the expected failures, and add, commit, and push to the private code repo.
1.0
Write more unit tests in TestCollatz.py - Write more unit tests in TestCollatz.py that test corner cases and failure cases until you have an average of 3 tests for each function, confirm the expected failures, and add, commit, and push to the private code repo.
non_process
write more unit tests in testcollatz py write more unit tests in testcollatz py that test corner cases and failure cases until you have an average of tests for each function confirm the expected failures and add commit and push to the private code repo
0
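A hedged example of the kind of corner-case test being asked for, assuming the course's usual `collatz_eval(i, j)` interface; the module and function names are guesses:

```python
import unittest
from Collatz import collatz_eval  # hypothetical module/function names

class TestCollatz(unittest.TestCase):
    def test_eval_single(self):
        # corner case: a one-element range has cycle length 1 at n = 1
        self.assertEqual(collatz_eval(1, 1), 1)

    def test_eval_reversed(self):
        # corner case: i > j, assumed to behave like the ordered range
        self.assertEqual(collatz_eval(10, 1), collatz_eval(1, 10))

if __name__ == "__main__":
    unittest.main()
```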
7,577
10,686,799,495
IssuesEvent
2019-10-22 15:00:56
openwdl/wdl
https://api.github.com/repos/openwdl/wdl
opened
Update RFC to Include Test cases
Discussion RFC Process
Before making a PR to the RFC process, I wanted to start a conversation here to see what you guys think. Whenever a new change is submitted, we currently have no way of testing whether the feature has been implemented or not, or whether the implemented feature meets the requirements of the WDL specification. This approach makes it a bit difficult for engine implementors to know whether or not they are adding the requested features as intended. It would be good for each PR to include a set of positive and negative test cases which could help guide engine implementors when developing new features. These test cases should be simple WDLs that are made to specifically test a single feature against the current release (or maybe development?) specification, assuming that the new feature has been added to the spec. The questions we should focus on are: 1. Is this necessary? 2. How will this benefit the engine implementors? 3. Where do we put test cases? 4. How many test cases are sufficient to accept a single PR? 5. How do we create test cases that will actually work when the development specification is rapidly changing?
1.0
Update RFC to Include Test cases - Before making a PR to the RFC process, I wanted to start a conversation here to see what you guys think. Whenever a new change is submitted, we currently have no way of testing whether the feature has been implemented or not, or whether the implemented feature meets the requirements of the WDL specification. This approach makes it a bit difficult for engine implementors to know whether or not they are adding the requested features as intended. It would be good for each PR to include a set of positive and negative test cases which could help guide engine implementors when developing new features. These test cases should be simple WDLs that are made to specifically test a single feature against the current release (or maybe development?) specification, assuming that the new feature has been added to the spec. The questions we should focus on are: 1. Is this necessary? 2. How will this benefit the engine implementors? 3. Where do we put test cases? 4. How many test cases are sufficient to accept a single PR? 5. How do we create test cases that will actually work when the development specification is rapidly changing?
process
update rfc to include test cases before making a pr to the rfc process i wanted to start a conversation here to see what you guys think whenever a new change is submitted we currently have no way of testing whether the feature has been implemented or not or whether the implemented feature meets the requirements of the wdl specification this approach makes it a bit difficult for engine implementors to know whether or not they are adding the requested features as intended it would be good for each pr to include a set of positive and negative test cases which could help guide engine implementors when developing new features these test cases should be simple wdls that are made to specifically test a single feature against the current release or maybe development specification assuming that the new feature has been added to the spec the questions we should focus on are is this necessary how will this benefit the engine implementors where do we put test cases how many test cases are sufficient to accept a single pr how do we create test cases that will actually work when the development specification is rapidly changing
1
30,988
5,892,424,859
IssuesEvent
2017-05-17 19:26:22
ESMCI/cime
https://api.github.com/repos/ESMCI/cime
closed
Create clear and complete porting documentation
documentation ready
There is a new section on CIME internals that has been pushed to gh-pages that will help the porting documentation. The porting documentation needs to be significantly enhanced with examples to enable users to easily determine what needs to be done to port to their platforms.
1.0
Create clear and complete porting documentation - There is a new section on CIME internals that has been pushed to gh-pages that will help the porting documentation. The porting documentation needs to be significantly enhanced with examples to enable users to easily determine what needs to be done to port to their platforms.
non_process
create clear and complete porting documentation there is a new section on cime internals that has been pushed to gh pages that will help the porting documentation the porting documentation needs to be significantly enhanced with examples to enable users to easily determine what needs to be done to port to their platforms
0
74,949
7,453,140,987
IssuesEvent
2018-03-29 10:48:56
italia/spid
https://api.github.com/repos/italia/spid
closed
Metadata check: Comune di Parabiago
metadata nuovo md test
Request submitted by the Comune di Parabiago. Metadata submission for the Comune di Parabiago.
1.0
Metadata check: Comune di Parabiago - Request submitted by the Comune di Parabiago. Metadata submission for the Comune di Parabiago.
non_process
metadata check comune di parabiago request submitted by the comune di parabiago metadata submission for the comune di parabiago
0
8,960
12,068,521,823
IssuesEvent
2020-04-16 14:52:53
ORNL-AMO/AMO-Tools-Desktop
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
closed
(PHAST) Show Mods in Report
Process Heating enhancement
Child issue of #3635 Selected 'savings opportunities'/ modifications as well as 'energy projects' should be reflected in report > result data
1.0
(PHAST) Show Mods in Report - Child issue of #3635 Selected 'savings opportunities'/ modifications as well as 'energy projects' should be reflected in report > result data
process
phast show mods in report child issue of selected savings opportunities modifications as well as energy projects should be reflected in report result data
1
642,759
20,912,643,024
IssuesEvent
2022-03-24 10:42:32
ASE-Projekte-WS-2021/ase-ws-21-unser-horsaal
https://api.github.com/repos/ASE-Projekte-WS-2021/ase-ws-21-unser-horsaal
opened
(ONBOARDING) Onboarding on first use of the app
Medium Priority
As a user, I want to be introduced to the app the first time I use it, so that I understand how to use the app and am motivated to use the app's features.
1.0
(ONBOARDING) Onboarding on first use of the app - As a user, I want to be introduced to the app the first time I use it, so that I understand how to use the app and am motivated to use the app's features.
non_process
onboarding onboarding on first use of the app as a user i want to be introduced to the app the first time i use it so that i understand how to use the app and am motivated to use the app s features
0
1,526
4,117,595,403
IssuesEvent
2016-06-08 08:10:49
Jumpscale/github_automation
https://api.github.com/repos/Jumpscale/github_automation
closed
move ays templates relevant to this automation to this repo
process_wontfix
The ays template dir will be the result; make sure that the installer of cockpit also uses this ays_templates dir.
1.0
move ays templates relevant to this automation to this repo - The ays template dir will be the result; make sure that the installer of cockpit also uses this ays_templates dir.
process
move ays templates relevant to this automation to this repo the ays template dir will be the result make sure that the installer of cockpit also uses this ays templates dir
1
19,393
25,536,909,421
IssuesEvent
2022-11-29 12:41:29
bisq-network/bisq
https://api.github.com/repos/bisq-network/bisq
closed
Mailbox messages lost when performing SPV resync.
a:bug in:trade-process
### Description When performing an SPV resync, any received mailbox messages will be tossed irretrievably into the void. #### Version All versions. ### Steps to reproduce - Open mediation on a trade. - Go to Settings -> Network -> and press the `SPV resync` button. - Close Bisq. - Mediator issues a payout proposal which will be sent to you as a mailbox message. - Start Bisq to perform the SPV resync. - The payout proposal addressed to you will be silently ignored and removed from the network. ### Expected behaviour Mailbox messages should not be removed from the network when an SPV resync is happening. ### Actual behaviour The mailbox message addressed to you will be silently ignored and removed from the network. ### Additional information This happened to a user in the scenario outlined above, there are likely other examples of mailbox messages tossed when SPV resyncing, for example - payment started / confirmed. - trader/mediator chat messages. - any message / ticket sent to the support agent when the support agent is SPV resyncing.
1.0
Mailbox messages lost when performing SPV resync. - ### Description When performing an SPV resync, any received mailbox messages will be tossed irretrievably into the void. #### Version All versions. ### Steps to reproduce - Open mediation on a trade. - Go to Settings -> Network -> and press the `SPV resync` button. - Close Bisq. - Mediator issues a payout proposal which will be sent to you as a mailbox message. - Start Bisq to perform the SPV resync. - The payout proposal addressed to you will be silently ignored and removed from the network. ### Expected behaviour Mailbox messages should not be removed from the network when an SPV resync is happening. ### Actual behaviour The mailbox message addressed to you will be silently ignored and removed from the network. ### Additional information This happened to a user in the scenario outlined above, there are likely other examples of mailbox messages tossed when SPV resyncing, for example - payment started / confirmed. - trader/mediator chat messages. - any message / ticket sent to the support agent when the support agent is SPV resyncing.
process
mailbox messages lost when performing spv resync description when performing an spv resync any received mailbox messages will be tossed irretrievably into the void version all versions steps to reproduce open mediation on a trade go to settings network and press the spv resync button close bisq mediator issues a payout proposal which will be sent to you as a mailbox message start bisq to perform the spv resync the payout proposal addressed to you will be silently ignored and removed from the network expected behaviour mailbox messages should not be removed from the network when an spv resync is happening actual behaviour the mailbox message addressed to you will be silently ignored and removed from the network additional information this happened to a user in the scenario outlined above there are likely other examples of mailbox messages tossed when spv resyncing for example payment started confirmed trader mediator chat messages any message ticket sent to the support agent when the support agent is spv resyncing
1
48,028
13,067,402,466
IssuesEvent
2020-07-31 00:20:26
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
Wavedeform needs new/better tests (Trac #1679)
Migrated from Trac combo reconstruction defect
The tests are sensitive to the random seed. Choose a different seed and they may crash and burn. Also, the existing tests don't really get to the point. We should test: 1. Are all the bins of the refolded waveforms within tolerance? 2. If we add one SPE pulse well above the noise, does the returned pulse have the correct time and charge? The tolerance should be set appropriately above the noise (or the noise set to zero) such that splitting is a non-issue 3. Test the average pulse splitting/combination under typical noise/tolerance. This is done reasonably well by the pulsesplit_enthusiasm test. Migrated from https://code.icecube.wisc.edu/ticket/1679 ```json { "status": "closed", "changetime": "2019-09-18T07:51:29", "description": "The tests are sensitive to the random seed. Choose a different seed and they may crash and burn.\n\nAlso, the existing tests don't really get to the point. We should test:\n\n1. Are all the bins of the refolded waveforms within tolerance?\n2. If we add one SPE pulse well above the noise, does the returned pulse have the correct time and charge? The tolerance should be set appropriately above the noise (or the noise set to zero) such that splitting is a non-issue\n3. Test the average pulse splitting/combination under typical noise/tolerance. This is done reasonably well by the pulsesplit_enthusiasm test.", "reporter": "jbraun", "cc": "", "resolution": "insufficient resources", "_ts": "1568793089537035", "component": "combo reconstruction", "summary": "Wavedeform needs new/better tests", "priority": "normal", "keywords": "", "time": "2016-04-29T19:46:30", "milestone": "Long-Term Future", "owner": "jbraun", "type": "defect" } ```
1.0
Wavedeform needs new/better tests (Trac #1679) - The tests are sensitive to the random seed. Choose a different seed and they may crash and burn. Also, the existing tests don't really get to the point. We should test: 1. Are all the bins of the refolded waveforms within tolerance? 2. If we add one SPE pulse well above the noise, does the returned pulse have the correct time and charge? The tolerance should be set appropriately above the noise (or the noise set to zero) such that splitting is a non-issue 3. Test the average pulse splitting/combination under typical noise/tolerance. This is done reasonably well by the pulsesplit_enthusiasm test. Migrated from https://code.icecube.wisc.edu/ticket/1679 ```json { "status": "closed", "changetime": "2019-09-18T07:51:29", "description": "The tests are sensitive to the random seed. Choose a different seed and they may crash and burn.\n\nAlso, the existing tests don't really get to the point. We should test:\n\n1. Are all the bins of the refolded waveforms within tolerance?\n2. If we add one SPE pulse well above the noise, does the returned pulse have the correct time and charge? The tolerance should be set appropriately above the noise (or the noise set to zero) such that splitting is a non-issue\n3. Test the average pulse splitting/combination under typical noise/tolerance. This is done reasonably well by the pulsesplit_enthusiasm test.", "reporter": "jbraun", "cc": "", "resolution": "insufficient resources", "_ts": "1568793089537035", "component": "combo reconstruction", "summary": "Wavedeform needs new/better tests", "priority": "normal", "keywords": "", "time": "2016-04-29T19:46:30", "milestone": "Long-Term Future", "owner": "jbraun", "type": "defect" } ```
non_process
wavedeform needs new better tests trac the tests are sensitive to the random seed choose a different seed and they may crash and burn also the existing tests don t really get to the point we should test are all the bins of the refolded waveforms within tolerance if we add one spe pulse well above the noise does the returned pulse have the correct time and charge the tolerance should be set appropriately above the noise or the noise set to zero such that splitting is a non issue test the average pulse splitting combination under typical noise tolerance this is done reasonably well by the pulsesplit enthusiasm test migrated from json status closed changetime description the tests are sensitive to the random seed choose a different seed and they may crash and burn n nalso the existing tests don t really get to the point we should test n are all the bins of the refolded waveforms within tolerance if we add one spe pulse well above the noise does the returned pulse have the correct time and charge the tolerance should be set appropriately above the noise or the noise set to zero such that splitting is a non issue test the average pulse splitting combination under typical noise tolerance this is done reasonably well by the pulsesplit enthusiasm test reporter jbraun cc resolution insufficient resources ts component combo reconstruction summary wavedeform needs new better tests priority normal keywords time milestone long term future owner jbraun type defect
0
18,936
24,895,537,409
IssuesEvent
2022-10-28 15:29:07
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
NTR cellular response to disruption of membrane integrity
New term request waiting for feedback cellular processes
more details to be provided by @sylvainpoux
1.0
NTR cellular response to disruption of membrane integrity - more details to be provided by @sylvainpoux
process
ntr cellular response to disruption of membrane integrity more details to be provided by sylvainpoux
1
238,987
19,802,936,702
IssuesEvent
2022-01-19 01:13:26
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
opened
Upgrade from Chromium 97.0.4692.71 to Chromium 97.0.4692.100(?).
QA/Yes release-notes/include QA/Test-Plan-Specified OS/Android Chromium/upgrade minor OS/Desktop
Minor Chromium bump. https://chromium.googlesource.com/chromium/src/+log/97.0.4692.71..97.0.4692.100?pretty=fuller&n=10000 QA tests: Check branding items Check for version bump Additional checks: No specific code changes in Brave (only line number changes in patches).
1.0
Upgrade from Chromium 97.0.4692.71 to Chromium 97.0.4692.100(?). - Minor Chromium bump. https://chromium.googlesource.com/chromium/src/+log/97.0.4692.71..97.0.4692.100?pretty=fuller&n=10000 QA tests: Check branding items Check for version bump Additional checks: No specific code changes in Brave (only line number changes in patches).
non_process
upgrade from chromium to chromium minor chromium bump qa tests check branding items check for version bump additional checks no specific code changes in brave only line number changes in patches
0
283,963
24,573,202,861
IssuesEvent
2022-10-13 10:13:34
claritychallenge/clarity
https://api.github.com/repos/claritychallenge/clarity
opened
Unit tests for dataset module
tests
Develop unit tests for the data module (see #80 for overview). Specific modules within `dataset` are... - [ ] `dataset/cec1_dataset.py`
1.0
Unit tests for dataset module - Develop unit tests for the data module (see #80 for overview). Specific modules within `dataset` are... - [ ] `dataset/cec1_dataset.py`
non_process
unit tests for dataset module develop unit tests for the data module see for overview specific modules within dataset are dataset dataset py
0
164,819
20,507,885,741
IssuesEvent
2022-03-01 01:07:08
Sh2dowFi3nd/Test_2
https://api.github.com/repos/Sh2dowFi3nd/Test_2
opened
CVE-2022-24615 (Medium) detected in zip4j-1.3.2.jar
security vulnerability
## CVE-2022-24615 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>zip4j-1.3.2.jar</b></p></summary> <p>An open source java library to handle zip files</p> <p>Library home page: <a href="http://www.lingala.net/zip4j/">http://www.lingala.net/zip4j/</a></p> <p>Path to dependency file: /Test_2/fs-agent-master/fs-agent-master/pom.xml</p> <p>Path to vulnerable library: /2/repository/net/lingala/zip4j/zip4j/1.3.2/zip4j-1.3.2.jar</p> <p> Dependency Hierarchy: - :x: **zip4j-1.3.2.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> zip4j up to 2.9.0 can throw various uncaught exceptions while parsing a specially crafted ZIP file, which could result in an application crash. This could be used to mount a denial of service attack against services that use zip4j library. <p>Publish Date: 2022-02-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24615>CVE-2022-24615</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24615">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24615</a></p> <p>Release Date: 2022-02-24</p> <p>Fix Resolution: net.lingala.zip4j:zip4j:2.9.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-24615 (Medium) detected in zip4j-1.3.2.jar - ## CVE-2022-24615 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>zip4j-1.3.2.jar</b></p></summary> <p>An open source java library to handle zip files</p> <p>Library home page: <a href="http://www.lingala.net/zip4j/">http://www.lingala.net/zip4j/</a></p> <p>Path to dependency file: /Test_2/fs-agent-master/fs-agent-master/pom.xml</p> <p>Path to vulnerable library: /2/repository/net/lingala/zip4j/zip4j/1.3.2/zip4j-1.3.2.jar</p> <p> Dependency Hierarchy: - :x: **zip4j-1.3.2.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> zip4j up to 2.9.0 can throw various uncaught exceptions while parsing a specially crafted ZIP file, which could result in an application crash. This could be used to mount a denial of service attack against services that use zip4j library. <p>Publish Date: 2022-02-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24615>CVE-2022-24615</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24615">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24615</a></p> <p>Release Date: 2022-02-24</p> <p>Fix Resolution: net.lingala.zip4j:zip4j:2.9.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jar cve medium severity vulnerability vulnerable library jar an open source java library to handle zip files library home page a href path to dependency file test fs agent master fs agent master pom xml path to vulnerable library repository net lingala jar dependency hierarchy x jar vulnerable library vulnerability details up to can throw various uncaught exceptions while parsing a specially crafted zip file which could result in an application crash this could be used to mount a denial of service attack against services that use library publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution net lingala step up your open source security game with whitesource
0
24,686
5,096,484,298
IssuesEvent
2017-01-03 18:21:04
zooniverse/Panoptes
https://api.github.com/repos/zooniverse/Panoptes
closed
Fix incomplete classification docs
documentation
http://docs.panoptes.apiary.io/#reference/classification/classification/list-all-classifications This is out of date, it should say that a user can retrieve only their finished classifications instead of this: > A User may only retrieve a single classification if it not completed in order to finish it. Otherwise they may retrieve a list of classifications from the Classification Collection resource.
1.0
Fix incomplete classification docs - http://docs.panoptes.apiary.io/#reference/classification/classification/list-all-classifications This is out of date, it should say that a user can retrieve only their finished classifications instead of this: > A User may only retrieve a single classification if it not completed in order to finish it. Otherwise they may retrieve a list of classifications from the Classification Collection resource.
non_process
fix incomplete classification docs this is out of date it should say that a user can retrieve only their finished classifications instead of this a user may only retrieve a single classification if it not completed in order to finish it otherwise they may retrieve a list of classifications from the classification collection resource
0
6,907
10,059,471,018
IssuesEvent
2019-07-22 16:30:53
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
`npm run link` fails on Windows because of EPERM
OS: windows process: contributing stage: needs review
<!-- Is this a question? Don't open an issue. Ask in our chat https://on.cypress.io/chat --> ### Current behavior: https://github.com/cypress-io/cypress/blame/develop/scripts/link-packages.js#L46 `fs.symlink` fails in `npm run link` on Windows with `EPERM`. You must run as administrator for `fs.symlink` to complete successfully: #### without admin ![image](https://user-images.githubusercontent.com/1151760/61639748-175d1d80-ac6a-11e9-96e6-e573b3193220.png) #### with admin ![image](https://user-images.githubusercontent.com/1151760/61639811-2e037480-ac6a-11e9-80a9-c43a8340df8f.png) Also, I have no clue why, but the symlinks from `fs.symlink` are totally unusable by Node (can't be required) and Windows (can't `cd` into them), but symlinks from `mklink` work fine... not sure why it works in Appveyor. ### Desired behavior: Use `mklink` instead of `fs.symlink` on Windows so that elevated permissions are not needed to develop with Cypress ### Steps to reproduce: (app code and test code) Try to `npm run link` in Cypress on a non-elevated Windows command prompt ### Versions Node 8+
1.0
`npm run link` fails on Windows because of EPERM - <!-- Is this a question? Don't open an issue. Ask in our chat https://on.cypress.io/chat --> ### Current behavior: https://github.com/cypress-io/cypress/blame/develop/scripts/link-packages.js#L46 `fs.symlink` fails in `npm run link` on Windows with `EPERM`. You must run as administrator for `fs.symlink` to complete successfully: #### without admin ![image](https://user-images.githubusercontent.com/1151760/61639748-175d1d80-ac6a-11e9-96e6-e573b3193220.png) #### with admin ![image](https://user-images.githubusercontent.com/1151760/61639811-2e037480-ac6a-11e9-80a9-c43a8340df8f.png) Also, I have no clue why, but the symlinks from `fs.symlink` are totally unusable by Node (can't be required) and Windows (can't `cd` into them), but symlinks from `mklink` work fine... not sure why it works in Appveyor. ### Desired behavior: Use `mklink` instead of `fs.symlink` on Windows so that elevated permissions are not needed to develop with Cypress ### Steps to reproduce: (app code and test code) Try to `npm run link` in Cypress on a non-elevated Windows command prompt ### Versions Node 8+
process
npm run link fails on windows because of eperm current behavior fs symlink fails in npm run link on windows with eperm you must run as administrator for fs symlink to complete successfully without admin with admin also i have no clue why but the symlinks from fs symlink are totally unusable by node can t be required and windows can t cd into them but symlinks from mklink work fine not sure why it works in appveyor desired behavior use mklink instead of fs symlink on windows so that elevated permissions are not needed to develop with cypress steps to reproduce app code and test code try to npm run link in cypress on a non elevated windows command prompt versions node
1
3,648
6,678,487,787
IssuesEvent
2017-10-05 14:24:08
wpninjas/ninja-forms-uploads
https://api.github.com/repos/wpninjas/ninja-forms-uploads
opened
Nonce error happens without Save Progress activated.
DIFFICULTY: Easy FRONT: Processing PRIORITY: High VALUE: Modern
Nonce error happens without Save Progress activated. This nonce error happens during the upload of the file and before the processing of the form. WP Version: 4.8.2 - Supported WP Multisite Enabled: No Web Server Info: Apache/2.4.10 (Debian) TLS Version: 1.2 PHP Version: 5.6.30-0+deb8u1 MySQL Version: 5.6.37 PHP Locale: negative_sign: WP Memory Limit: 40M WP Debug Mode: No WP Language: Default WP Max Upload Size: 100 MB PHP Post Max Size: 100M Max Input Nesting Level: 64 PHP Time Limit: 60 PHP Max Input Vars: 2500 Ninja Forms - File Uploads by The WP Ninjas version 3.0.15 Ninja Forms - Zoho CRM by Stuart Sequeira version 3.0.2 Ninja Forms by The WP Ninjas version 3.2.1
1.0
Nonce error happens without Save Progress activated. - Nonce error happens without Save Progress activated. This nonce error happens during the upload of the file and before the processing of the form. WP Version: 4.8.2 - Supported WP Multisite Enabled: No Web Server Info: Apache/2.4.10 (Debian) TLS Version: 1.2 PHP Version: 5.6.30-0+deb8u1 MySQL Version: 5.6.37 PHP Locale: negative_sign: WP Memory Limit: 40M WP Debug Mode: No WP Language: Default WP Max Upload Size: 100 MB PHP Post Max Size: 100M Max Input Nesting Level: 64 PHP Time Limit: 60 PHP Max Input Vars: 2500 Ninja Forms - File Uploads by The WP Ninjas version 3.0.15 Ninja Forms - Zoho CRM by Stuart Sequeira version 3.0.2 Ninja Forms by The WP Ninjas version 3.2.1
process
nonce error happens without save progress activated nonce error happens without save progress activated this nonce error happens during the upload of the file and before the processing of the form wp version supported wp multisite enabled no web server info apache debian tls version php version mysql version php locale negative sign wp memory limit wp debug mode no wp language default wp max upload size mb php post max size max input nesting level php time limit php max input vars ninja forms file uploads by the wp ninjas version ninja forms zoho crm by stuart sequeira version ninja forms by the wp ninjas version
1
53,751
23,052,026,634
IssuesEvent
2022-07-24 19:24:59
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Different spelling of externalTrafficPolicy value: local vs Local
container-service/svc triaged cxp doc-enhancement Pri1
Not sure if important, but there is a slight mismatch in spelling: Text uses `local` > you must set `service.spec.externalTrafficPolicy` to `local` in the service definition. Code block that follows uses `Local` > externalTrafficPolicy: Local --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 60a7a0a8-97e7-0fda-763c-1a9972f4e9bc * Version Independent ID: 82b46441-43fc-fe48-97e2-0f3fca3d6eab * Content: [Use a Public Load Balancer - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/load-balancer-standard#maintain-the-clients-ip-on-inbound-connections) * Content Source: [articles/aks/load-balancer-standard.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/load-balancer-standard.md) * Service: **container-service** * GitHub Login: @palma21 * Microsoft Alias: **jpalma**
1.0
Different spelling of externalTrafficPolicy value: local vs Local - Not sure if important, but there is a slight mismatch in spelling: Text uses `local` > you must set `service.spec.externalTrafficPolicy` to `local` in the service definition. Code block that follows uses `Local` > externalTrafficPolicy: Local --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 60a7a0a8-97e7-0fda-763c-1a9972f4e9bc * Version Independent ID: 82b46441-43fc-fe48-97e2-0f3fca3d6eab * Content: [Use a Public Load Balancer - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/load-balancer-standard#maintain-the-clients-ip-on-inbound-connections) * Content Source: [articles/aks/load-balancer-standard.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/load-balancer-standard.md) * Service: **container-service** * GitHub Login: @palma21 * Microsoft Alias: **jpalma**
non_process
different spelling of externaltrafficpolicy value local vs local not sure if important but there is a slight mismatch in spelling text uses local you must set service spec externaltrafficpolicy to local in the service definition code block that follows uses local externaltrafficpolicy local document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login microsoft alias jpalma
0
16,503
21,485,342,672
IssuesEvent
2022-04-26 22:27:39
googleapis/python-optimization
https://api.github.com/repos/googleapis/python-optimization
closed
Your .repo-metadata.json file has a problem 🤒
type: process api: cloudoptimization repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'optimization' invalid in .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'optimization' invalid in .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname optimization invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
1
2,089
4,927,338,407
IssuesEvent
2016-11-26 17:44:20
mitchellh/packer
https://api.github.com/repos/mitchellh/packer
closed
Docker push to AWS ECR
post-processor/docker
I am trying to push the docker container to AWS ECR, and it works as long as value for "login_password" is provided inline in packer JSON file, if value is set via variable then i get following error. Details below for error case. Error ``` ==> docker: Running post-processor: docker-push docker (docker-push): Logging in... docker (docker-push): Error: Cannot perform an interactive login from a non TTY device Build 'docker' errored: 1 error(s) occurred: ``` Packer JSON ``` json { "variables": { "version": "", **"aws_ecr_pwd": ""** }, "builders": [ { "type": "docker", "image": "centos", "commit": true }], "provisioners": [{ "type": "file", "source": "./file1", "destination": "/tmp/file1" }, { "type" : "shell", "inline" : [ "cp -v /tmp/file1 /etc/yum.repos.d/file1", "yum clean all", "yum -y install application" ] }], "post-processors": [ [ { "type":"docker-tag", "repository" : "acctdid.dkr.ecr.us-east-1.amazonaws.com/sbg_cloudinfra_sse_data_exchange", "tag" : "application-{{user `version`}}" }, { "type":"docker-push", "login" : true, "login_server": "https://acctid.dkr.ecr.us-east-1.amazonaws.com", "login_username" : "AWS", **"login_password":"{{user `aws_ecr_pwd`}}"** } ] ] } ``` Makefile to push variable to Packer ``` make RELEASE_BUILD := N VERSION := 1.0.1 BLDNUM := 2 AWS_ECR_PWD := <Actual AWS PWD) VARS := -var 'aws_erc_pwd=$(AWS_ECR_PWD)' application: packer build -var **'aws_erc_pwd=$(AWS_ECR_PWD)'** -var 'version=$(VERSION)-$(BLDNUM)' application-packer.json ```
1.0
Docker push to AWS ECR - I am trying to push the docker container to AWS ECR, and it works as long as value for "login_password" is provided inline in packer JSON file, if value is set via variable then i get following error. Details below for error case. Error ``` ==> docker: Running post-processor: docker-push docker (docker-push): Logging in... docker (docker-push): Error: Cannot perform an interactive login from a non TTY device Build 'docker' errored: 1 error(s) occurred: ``` Packer JSON ``` json { "variables": { "version": "", **"aws_ecr_pwd": ""** }, "builders": [ { "type": "docker", "image": "centos", "commit": true }], "provisioners": [{ "type": "file", "source": "./file1", "destination": "/tmp/file1" }, { "type" : "shell", "inline" : [ "cp -v /tmp/file1 /etc/yum.repos.d/file1", "yum clean all", "yum -y install application" ] }], "post-processors": [ [ { "type":"docker-tag", "repository" : "acctdid.dkr.ecr.us-east-1.amazonaws.com/sbg_cloudinfra_sse_data_exchange", "tag" : "application-{{user `version`}}" }, { "type":"docker-push", "login" : true, "login_server": "https://acctid.dkr.ecr.us-east-1.amazonaws.com", "login_username" : "AWS", **"login_password":"{{user `aws_ecr_pwd`}}"** } ] ] } ``` Makefile to push variable to Packer ``` make RELEASE_BUILD := N VERSION := 1.0.1 BLDNUM := 2 AWS_ECR_PWD := <Actual AWS PWD) VARS := -var 'aws_erc_pwd=$(AWS_ECR_PWD)' application: packer build -var **'aws_erc_pwd=$(AWS_ECR_PWD)'** -var 'version=$(VERSION)-$(BLDNUM)' application-packer.json ```
process
docker push to aws ecr i am trying to push the docker container to aws ecr and it works as long as value for login password is provided inline in packer json file if value is set via variable then i get following error details below for error case error docker running post processor docker push docker docker push logging in docker docker push error cannot perform an interactive login from a non tty device build docker errored error s occurred packer json json variables version aws ecr pwd builders type docker image centos commit true provisioners type file source destination tmp type shell inline cp v tmp etc yum repos d yum clean all yum y install application post processors type docker tag repository acctdid dkr ecr us east amazonaws com sbg cloudinfra sse data exchange tag application user version type docker push login true login server login username aws login password user aws ecr pwd makefile to push variable to packer make release build n version bldnum aws ecr pwd actual aws pwd vars var aws erc pwd aws ecr pwd application packer build var aws erc pwd aws ecr pwd var version version bldnum application packer json
1
18,254
24,335,626,345
IssuesEvent
2022-10-01 03:13:52
GoogleCloudPlatform/cloud-ops-sandbox
https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
opened
Cleanup repo from all artifacts of the microservice demo application
priority: p1 type: process
Remove all artifacts of the Hipster shop from the repo. *NOTE:* Completing this task will break the repo.
1.0
Cleanup repo from all artifacts of the microservice demo application - Remove all artifacts of the Hipster shop from the repo. *NOTE:* Completing this task will break the repo.
process
cleanup repo from all artifacts of the microservice demo application remove all artifacts of the hipster shop from the repo note completing this task will break the repo
1
72,747
8,774,525,621
IssuesEvent
2018-12-18 20:05:02
brave/brave-ios
https://api.github.com/repos/brave/brave-ios
opened
redesign the error page
needs design
### Description: The error page in V1.7 is not clear and needs design work <img width="437" alt="screen shot 2018-12-18 at 11 15 19 am" src="https://user-images.githubusercontent.com/7606853/50178478-78c6a380-02b9-11e9-9db2-360a875cf0e4.png">
1.0
redesign the error page - ### Description: The error page in V1.7 is not clear and needs design work <img width="437" alt="screen shot 2018-12-18 at 11 15 19 am" src="https://user-images.githubusercontent.com/7606853/50178478-78c6a380-02b9-11e9-9db2-360a875cf0e4.png">
non_process
redesign the error page description the error page in is not clear and needs design work img width alt screen shot at am src
0
5,197
7,974,026,180
IssuesEvent
2018-07-17 02:48:12
factor/factor
https://api.github.com/repos/factor/factor
opened
tools.which and process-launcher don't search standard-login-paths
paths process-launcher tools unix
The ``standard-login-paths`` word starts the default shell, echoes ``$PATH``, and returns. This is different from what the ``"PATH" os-env`` sees because of ``.bashrc`` etc. adding to the path. ```factor ! on a macbook standard-login-paths . /Users/erg/miniconda2/bin:/Users/erg/.nvm/versions/node/v9.2.0/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/texbin:/opt/X11/bin:/Applications/Wireshark.app/Contents/MacOS "PATH" os-env . "/usr/bin:/bin:/usr/sbin:/sbin" ``` This is a problem for finding utilities like the shell finds them. ```factor "docker" find-in-standard-login-path . "/usr/local/bin/docker" "docker" which . f ``` The ``tools.which`` utility should incorporate ``find-in-standard-login-path``. Also, ``"docker" try-output-process`` should probably use the same fix so it doesn't break.
1.0
tools.which and process-launcher don't search standard-login-paths - The ``standard-login-paths`` word starts the default shell, echoes ``$PATH``, and returns. This is different from what the ``"PATH" os-env`` sees because of ``.bashrc`` etc. adding to the path. ```factor ! on a macbook standard-login-paths . /Users/erg/miniconda2/bin:/Users/erg/.nvm/versions/node/v9.2.0/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/texbin:/opt/X11/bin:/Applications/Wireshark.app/Contents/MacOS "PATH" os-env . "/usr/bin:/bin:/usr/sbin:/sbin" ``` This is a problem for finding utilities like the shell finds them. ```factor "docker" find-in-standard-login-path . "/usr/local/bin/docker" "docker" which . f ``` The ``tools.which`` utility should incorporate ``find-in-standard-login-path``. Also, ``"docker" try-output-process`` should probably use the same fix so it doesn't break.
process
tools which and process launcher don t search standard login paths the standard login paths word starts the default shell echoes path and returns this is different from what the path os env sees because of bashrc etc adding to the path factor on a macbook standard login paths users erg bin users erg nvm versions node bin usr local sbin usr local bin usr bin bin usr sbin sbin usr texbin opt bin applications wireshark app contents macos path os env usr bin bin usr sbin sbin this is a problem for finding utilities like the shell finds them factor docker find in standard login path usr local bin docker docker which f the tools which utility should incorporate find in standard login path also docker try output process should probably use the same fix so it doesn t break
1
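The standard-login-paths idea in the record above—asking the login shell for its $PATH instead of trusting the inherited environment—can be sketched in a few lines. This Python version is only an illustration of the pattern, not the factor implementation:

```python
# Illustration: query the user's login shell for $PATH (so .bashrc / .profile
# additions are seen), then search that path instead of os.environ["PATH"].
import os
import shutil
import subprocess


def login_shell_path() -> str:
    shell = os.environ.get("SHELL", "/bin/sh")
    # -l starts a login shell; -c runs the echo after startup files load.
    out = subprocess.run(
        [shell, "-lc", "echo $PATH"], capture_output=True, text=True, check=True
    )
    return out.stdout.strip()


def find_in_login_path(name: str) -> str | None:
    # shutil.which accepts an explicit search path string.
    return shutil.which(name, path=login_shell_path())


if __name__ == "__main__":
    print(find_in_login_path("docker"))
```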
98,444
29,870,547,119
IssuesEvent
2023-06-20 08:12:16
GSS-Cogs/dd-cms
https://api.github.com/repos/GSS-Cogs/dd-cms
closed
Automate legend font change
chart builder high priority
At the moment the only way to update the legend font to the new font and size is to resave each chart.
1.0
Automate legend font change - At the moment the only way to update the legend font to the new font and size is to resave each chart.
non_process
automate legend font change at the moment the only way to update the legend font to the new font and size is to resave each chart
0
10,153
13,044,162,599
IssuesEvent
2020-07-29 03:47:34
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `Decode` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `Decode` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
2.0
UCP: Migrate scalar function `Decode` from TiDB - ## Description Port the scalar function `Decode` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
process
ucp migrate scalar function decode from tidb description port the scalar function decode from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
1
4,443
7,313,464,833
IssuesEvent
2018-03-01 01:16:21
P2Poker/RandomCat
https://api.github.com/repos/P2Poker/RandomCat
opened
As a developer, I need a system of build/test configurations for various parts of the desired software
c) dev origin d) release 0.1 e) dev tools f) priority 2 g) change request h) in process j) difficult workaround l) minor completion cost l) no risk l) no ux impact n) no impact n) no users affected o) as a dev p) triage completed
## Story **(REQUIRED)** As a developer, I need a system of build/test configurations for various parts of the desired software. ## Explanation **(REQUIRED)** Create a series of pre-determined build/test configurations, which can be expanded in the future as needed.
1.0
As a developer, I need a system of build/test configurations for various parts of the desired software - ## Story **(REQUIRED)** As a developer, I need a system of build/test configurations for various parts of the desired software. ## Explanation **(REQUIRED)** Create a series of pre-determined build/test configurations, which can be expanded in the future as needed.
process
as a developer i need a system of build test configurations for various parts of the desired software story required as a developer i need a system of build test configurations for various parts of the desired software explanation required create a series of pre determined build test configurations which can be expanded in the future as needed
1
204,382
7,087,353,515
IssuesEvent
2018-01-11 17:31:07
salesagility/SuiteCRM
https://api.github.com/repos/salesagility/SuiteCRM
closed
PHP Fatal error when I try to edit or view subpanels on the modulebuilder
Fix Proposed Medium Priority Resolved: Next Release bug
#### Issue I can not edit or view subpanels on the modulebuilder. Whenever I press the link called Default placed on Available Subpanels folder, nothing happens #### Expected Behavior Edit or view subpanels on the modulebuilder. #### Actual Behavior I can not edit or view subpanels on the modulebuilder. Whenever I press the link called Default placed on Available Subpanels folder, nothing happens. The CRM add a fatal error in the error_log. Thu Oct 19 17:58:24.551466 2017] [php7:error] [pid 12712] [client 10.100.32.46:64767] PHP Fatal error: Class UndeployedSubpanelImplementation contains 1 abstract method and must therefore be declared abstract or implement the remaining methods (AbstractMetaDataImplementation::getFileName) in /var/www/html/SuiteCRM/modules/ModuleBuilder/parsers/views/UndeployedSubpanelImplementation.php on line 52, referer: http://hostname/SuiteCRM/index.php?module=ModuleBuilder&action=index&type=mb [Thu Oct 19 17:58:34.181583 2017] [php7:error] [pid 13800] [client 10.100.32.46:64774] PHP Fatal error: Class UndeployedSubpanelImplementation contains 1 abstract method and must therefore be declared abstract or implement the remaining methods (AbstractMetaDataImplementation::getFileName) in /var/www/html/SuiteCRM/modules/ModuleBuilder/parsers/views/UndeployedSubpanelImplementation.php on line 52, referer: http://hostname/SuiteCRM/index.php?module=ModuleBuilder&action=index&type=mb #### Your Environment PHP 7.1.9 SuiteCRM 7.9.7 CentOS 7
1.0
PHP Fatal error when I try to edit or view subpanels on the modulebuilder - #### Issue I can not edit or view subpanels on the modulebuilder. Whenever I press the link called Default placed on Available Subpanels folder, nothing happens #### Expected Behavior Edit or view subpanels on the modulebuilder. #### Actual Behavior I can not edit or view subpanels on the modulebuilder. Whenever I press the link called Default placed on Available Subpanels folder, nothing happens. The CRM add a fatal error in the error_log. Thu Oct 19 17:58:24.551466 2017] [php7:error] [pid 12712] [client 10.100.32.46:64767] PHP Fatal error: Class UndeployedSubpanelImplementation contains 1 abstract method and must therefore be declared abstract or implement the remaining methods (AbstractMetaDataImplementation::getFileName) in /var/www/html/SuiteCRM/modules/ModuleBuilder/parsers/views/UndeployedSubpanelImplementation.php on line 52, referer: http://hostname/SuiteCRM/index.php?module=ModuleBuilder&action=index&type=mb [Thu Oct 19 17:58:34.181583 2017] [php7:error] [pid 13800] [client 10.100.32.46:64774] PHP Fatal error: Class UndeployedSubpanelImplementation contains 1 abstract method and must therefore be declared abstract or implement the remaining methods (AbstractMetaDataImplementation::getFileName) in /var/www/html/SuiteCRM/modules/ModuleBuilder/parsers/views/UndeployedSubpanelImplementation.php on line 52, referer: http://hostname/SuiteCRM/index.php?module=ModuleBuilder&action=index&type=mb #### Your Environment PHP 7.1.9 SuiteCRM 7.9.7 CentOS 7
non_process
php fatal error when i try to edit or view subpanels on the modulebuilder issue i can not edit or view subpanels on the modulebuilder whenever i press the link called default placed on available subpanels folder nothing happens expected behavior edit or view subpanels on the modulebuilder actual behavior i can not edit or view subpanels on the modulebuilder whenever i press the link called default placed on available subpanels folder nothing happens the crm add a fatal error in the error log thu oct php fatal error class undeployedsubpanelimplementation contains abstract method and must therefore be declared abstract or implement the remaining methods abstractmetadataimplementation getfilename in var www html suitecrm modules modulebuilder parsers views undeployedsubpanelimplementation php on line referer php fatal error class undeployedsubpanelimplementation contains abstract method and must therefore be declared abstract or implement the remaining methods abstractmetadataimplementation getfilename in var www html suitecrm modules modulebuilder parsers views undeployedsubpanelimplementation php on line referer your environment php suitecrm centos
0
125,228
16,748,406,727
IssuesEvent
2021-06-11 18:47:14
angular/angular
https://api.github.com/repos/angular/angular
closed
Fix tables in markdown to be HTML per Angular doc style
comp: docs docsarea: global effort1: hours freq2: medium subtype: docs-design subtype: docs-edit type: bug/fix
<!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Performance issue [ ] Feature request [X ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question [ ] Other... Please describe: </code></pre> ## Current behavior <!-- Describe how the issue manifests. --> The following files contain tables in plain markdown: https://angular.io/guide/i18n#setting-up-the-locale-of-your-app https://angular.io/guide/aot-compiler#folding There may also be others. ## Expected behavior <!-- Describe what the desired behavior would be. --> Per Angular docs style, tables should be marked up in HTML: https://angular.io/guide/docs-style-guide#tables Other correct examples: https://angular.io/guide/docs-style-guide#tables https://angular.io/guide/module-types https://angular.io/guide/ngmodule-api#ngmodule-metadata https://angular.io/guide/ajs-quick-reference https://angular.io/guide/router#router-events https://angular.io/guide/template-syntax#binding-targets ## Minimal reproduction of the problem with instructions <!-- For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://stackblitz.com or similar (you can use this template as a starting point: https://stackblitz.com/fork/angular-gitter). --> See links above ## What is the motivation / use case for changing the behavior? <!-- Describe the motivation or the concrete use case. --> Consistent style Ability to control style in same css Adhere to Angular Authors Guide ## Environment <pre><code> Angular version: X.Y.Z <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [ ] Chrome (desktop) version XX - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: XX <!-- run `node --version` --> - Platform: <!-- Mac, Linux, Windows --> Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
1.0
Fix tables in markdown to be HTML per Angular doc style - <!-- PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION. ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. --> ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [ ] Performance issue [ ] Feature request [X ] Documentation issue or request [ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question [ ] Other... Please describe: </code></pre> ## Current behavior <!-- Describe how the issue manifests. --> The following files contain tables in plain markdown: https://angular.io/guide/i18n#setting-up-the-locale-of-your-app https://angular.io/guide/aot-compiler#folding There may also be others. ## Expected behavior <!-- Describe what the desired behavior would be. --> Per Angular docs style, tables should be marked up in HTML: https://angular.io/guide/docs-style-guide#tables Other correct examples: https://angular.io/guide/docs-style-guide#tables https://angular.io/guide/module-types https://angular.io/guide/ngmodule-api#ngmodule-metadata https://angular.io/guide/ajs-quick-reference https://angular.io/guide/router#router-events https://angular.io/guide/template-syntax#binding-targets ## Minimal reproduction of the problem with instructions <!-- For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://stackblitz.com or similar (you can use this template as a starting point: https://stackblitz.com/fork/angular-gitter). --> See links above ## What is the motivation / use case for changing the behavior? <!-- Describe the motivation or the concrete use case. --> Consistent style Ability to control style in same css Adhere to Angular Authors Guide ## Environment <pre><code> Angular version: X.Y.Z <!-- Check whether this is still an issue in the most recent Angular version --> Browser: - [ ] Chrome (desktop) version XX - [ ] Chrome (Android) version XX - [ ] Chrome (iOS) version XX - [ ] Firefox version XX - [ ] Safari (desktop) version XX - [ ] Safari (iOS) version XX - [ ] IE version XX - [ ] Edge version XX For Tooling issues: - Node version: XX <!-- run `node --version` --> - Platform: <!-- Mac, Linux, Windows --> Others: <!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... --> </code></pre>
non_process
fix tables in markdown to be html per angular doc style please help us process github issues faster by providing the following information issues missing important information may be closed without investigation i m submitting a regression a behavior that used to work and stopped working in a new release bug report performance issue feature request documentation issue or request support request please do not submit support request here instead see other please describe current behavior the following files contain tables in plain markdown there may also be others expected behavior per angular docs style tables should be marked up in html other correct examples minimal reproduction of the problem with instructions for bug reports please provide the steps to reproduce and if possible a minimal demo of the problem via or similar you can use this template as a starting point see links above what is the motivation use case for changing the behavior consistent style ability to control style in same css adhere to angular authors guide environment angular version x y z browser chrome desktop version xx chrome android version xx chrome ios version xx firefox version xx safari desktop version xx safari ios version xx ie version xx edge version xx for tooling issues node version xx platform others
0
16,590
21,639,663,157
IssuesEvent
2022-05-05 17:24:36
scikit-learn/scikit-learn
https://api.github.com/repos/scikit-learn/scikit-learn
closed
Gaussian Process Regression: "normalize_y=True" screws up model fit
module:gaussian_process Needs Triage
### Describe the bug Hello everybody, First of all let me thank you for the great work you've done. That being said, I've run into the situation where using "normalize_y=True" in a Gaussian Process Regression fit yields wildly different results ### Steps/Code to Reproduce ```python # Imports import numpy as np import matplotlib.pyplot as plt from sklearn.gaussian_process.kernels import ConstantKernel as C,\ WhiteKernel as WK from sklearn.gaussian_process.kernels import Matern from sklearn.gaussian_process import GaussianProcessRegressor # Training data x_tr = np.array( [20, 20, 140, 140] ).reshape(-1, 1) y_tr = np.array( [740, 680, 1260, 1200 ] ).reshape(-1, 1) # Visualize training data fig, ax = plt.subplots() ax.scatter( x_tr, y_tr, label='Training data' ) # GPR model without normalization kernel_t = WK(0.1, (1e-3, 1e9)) + C(0.1, (1e-3, 1e9)) *\ Matern( 0.1, (1e-3, 1e9), 1.5 ) gp_t = GaussianProcessRegressor( kernel=kernel_t, n_restarts_optimizer=30 ) gp_t.fit( x_tr, y_tr ) print( gp_t.kernel_ ) # Visualize fit x_pred = (np.arange(0, 200)).reshape(-1, 1) y_pred = gp_t.predict(x_pred, return_std=False) ax.plot( x_pred, y_pred, label='GPR w/o normal.' ) # GPR model with normalization gp_t_n = GaussianProcessRegressor( kernel=kernel_t, n_restarts_optimizer=30, normalize_y=True ) gp_t_n.fit( x_tr, y_tr ) print( gp_t_n.kernel_ ) # Visualize fit y_pred = gp_t_n.predict(x_pred, return_std=False) ax.plot( x_pred, y_pred, label='GPR w/ normal.' ) ax.legend() ax.grid( True ) ``` ### Expected Results I would expect the same result or not a worse fit. In the documentation it says that normalization is recommended for cases where zero-mean, unit-variance priors are used and I am using a Matern kernel so it should apply. ### Actual Results ![Q_normalize](https://user-images.githubusercontent.com/87704701/132008511-b0acc6e5-01a7-4367-a4d4-dd465deee717.png) ### Versions System: python: 3.8.8 (default, Apr 13 2021, 15:08:03) [MSC v.1916 64 bit (AMD64)] executable: C:\Users\YayoGuridi\Anaconda3\python.exe machine: Windows-10-10.0.18369-SP0 Python dependencies: pip: 21.0.1 setuptools: 52.0.0.post20210125 sklearn: 0.24.1 numpy: 1.20.1 scipy: 1.6.2 Cython: 0.29.23 pandas: 1.2.4 matplotlib: 3.3.4 joblib: 1.0.1 threadpoolctl: 2.1.0 Built with OpenMP: True
1.0
Gaussian Process Regression: "normalize_y=True" screws up model fit - ### Describe the bug Hello everybody, First of all let me thank you for the great work you've done. That being said, I've run into the situation where using "normalize_y=True" in a Gaussian Process Regression fit yields wildly different results ### Steps/Code to Reproduce ```python # Imports import numpy as np import matplotlib.pyplot as plt from sklearn.gaussian_process.kernels import ConstantKernel as C,\ WhiteKernel as WK from sklearn.gaussian_process.kernels import Matern from sklearn.gaussian_process import GaussianProcessRegressor # Training data x_tr = np.array( [20, 20, 140, 140] ).reshape(-1, 1) y_tr = np.array( [740, 680, 1260, 1200 ] ).reshape(-1, 1) # Visualize training data fig, ax = plt.subplots() ax.scatter( x_tr, y_tr, label='Training data' ) # GPR model without normalization kernel_t = WK(0.1, (1e-3, 1e9)) + C(0.1, (1e-3, 1e9)) *\ Matern( 0.1, (1e-3, 1e9), 1.5 ) gp_t = GaussianProcessRegressor( kernel=kernel_t, n_restarts_optimizer=30 ) gp_t.fit( x_tr, y_tr ) print( gp_t.kernel_ ) # Visualize fit x_pred = (np.arange(0, 200)).reshape(-1, 1) y_pred = gp_t.predict(x_pred, return_std=False) ax.plot( x_pred, y_pred, label='GPR w/o normal.' ) # GPR model with normalization gp_t_n = GaussianProcessRegressor( kernel=kernel_t, n_restarts_optimizer=30, normalize_y=True ) gp_t_n.fit( x_tr, y_tr ) print( gp_t_n.kernel_ ) # Visualize fit y_pred = gp_t_n.predict(x_pred, return_std=False) ax.plot( x_pred, y_pred, label='GPR w/ normal.' ) ax.legend() ax.grid( True ) ``` ### Expected Results I would expect the same result or not a worse fit. In the documentation it says that normalization is recommended for cases where zero-mean, unit-variance priors are used and I am using a Matern kernel so it should apply. ### Actual Results ![Q_normalize](https://user-images.githubusercontent.com/87704701/132008511-b0acc6e5-01a7-4367-a4d4-dd465deee717.png) ### Versions System: python: 3.8.8 (default, Apr 13 2021, 15:08:03) [MSC v.1916 64 bit (AMD64)] executable: C:\Users\YayoGuridi\Anaconda3\python.exe machine: Windows-10-10.0.18369-SP0 Python dependencies: pip: 21.0.1 setuptools: 52.0.0.post20210125 sklearn: 0.24.1 numpy: 1.20.1 scipy: 1.6.2 Cython: 0.29.23 pandas: 1.2.4 matplotlib: 3.3.4 joblib: 1.0.1 threadpoolctl: 2.1.0 Built with OpenMP: True
process
gaussian process regression normalize y true screws up model fit describe the bug hello everybody first of all let me thank you for the great work you ve done that being said i ve run into the situation where using normalize y true in a gaussian process regression fit yields wildly different results steps code to reproduce python imports import numpy as np import matplotlib pyplot as plt from sklearn gaussian process kernels import constantkernel as c whitekernel as wk from sklearn gaussian process kernels import matern from sklearn gaussian process import gaussianprocessregressor training data x tr np array reshape y tr np array reshape visualize training data fig ax plt subplots ax scatter x tr y tr label training data gpr model without normalization kernel t wk c matern gp t gaussianprocessregressor kernel kernel t n restarts optimizer gp t fit x tr y tr print gp t kernel visualize fit x pred np arange reshape y pred gp t predict x pred return std false ax plot x pred y pred label gpr w o normal gpr model with normalization gp t n gaussianprocessregressor kernel kernel t n restarts optimizer normalize y true gp t n fit x tr y tr print gp t n kernel visualize fit y pred gp t n predict x pred return std false ax plot x pred y pred label gpr w normal ax legend ax grid true expected results i would expect the same result or not a worse fit in the documentation it says that normalization is recommended for cases where zero mean unit variance priors are used and i am using a matern kernel so it should apply actual results versions system python default apr executable c users yayoguridi python exe machine windows python dependencies pip setuptools sklearn numpy scipy cython pandas matplotlib joblib threadpoolctl built with openmp true
1
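A minimal sketch of one way to probe the report above, reusing its toy data: standardize the targets by hand instead of relying on `normalize_y=True`. This is a hypothetical workaround, not a confirmed scikit-learn fix — with `normalize_y`, the hyperparameter bounds from the snippet (1e-3 to 1e9) apply to zero-mean, unit-variance targets, so the optimizer may land in a different basin; if the manual version below reproduces the well-behaved fit, that interaction is the likely culprit.

```python
# Hypothetical workaround sketch: standardize y manually, fit on the scaled
# targets, then undo the scaling on the predictions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel as C, WhiteKernel as WK, Matern

x_tr = np.array([20, 20, 140, 140]).reshape(-1, 1)
y_tr = np.array([740, 680, 1260, 1200], dtype=float).reshape(-1, 1)

y_mean, y_std = y_tr.mean(), y_tr.std()
y_scaled = (y_tr - y_mean) / y_std  # zero-mean, unit-variance targets

kernel = WK(0.1, (1e-3, 1e9)) + C(0.1, (1e-3, 1e9)) * Matern(0.1, (1e-3, 1e9), 1.5)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=30)
gp.fit(x_tr, y_scaled)

x_pred = np.arange(0, 200).reshape(-1, 1)
y_pred = gp.predict(x_pred).reshape(-1, 1) * y_std + y_mean  # back to original units
```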
21,525
29,806,664,524
IssuesEvent
2023-06-16 12:13:31
firebase/firebase-cpp-sdk
https://api.github.com/repos/firebase/firebase-cpp-sdk
reopened
[C++] Nightly Integration Testing Report for Firestore
type: process nightly-testing
<hidden value="integration-test-status-comment"></hidden> ### ✅&nbsp; [build against repo] Integration test succeeded! Requested by @sunmou99 on commit c3afeae7800f06f786e8018add11be6fb3169715 Last updated: Thu Jun 15 04:48 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5277440361)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against SDK] Integration test succeeded! Requested by @firebase-workflow-trigger[bot] on commit c3afeae7800f06f786e8018add11be6fb3169715 Last updated: Thu Jun 15 08:03 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5279234059)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against tip] Integration test succeeded! Requested by @sunmou99 on commit 39c8aa61d3a3b2f453458dd90dbe577a1f9afcc5 Last updated: Fri Jun 16 04:48 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5289192200)**
1.0
[C++] Nightly Integration Testing Report for Firestore - <hidden value="integration-test-status-comment"></hidden> ### ✅&nbsp; [build against repo] Integration test succeeded! Requested by @sunmou99 on commit c3afeae7800f06f786e8018add11be6fb3169715 Last updated: Thu Jun 15 04:48 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5277440361)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against SDK] Integration test succeeded! Requested by @firebase-workflow-trigger[bot] on commit c3afeae7800f06f786e8018add11be6fb3169715 Last updated: Thu Jun 15 08:03 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5279234059)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against tip] Integration test succeeded! Requested by @sunmou99 on commit 39c8aa61d3a3b2f453458dd90dbe577a1f9afcc5 Last updated: Fri Jun 16 04:48 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5289192200)**
process
nightly integration testing report for firestore ✅ nbsp integration test succeeded requested by on commit last updated thu jun pdt ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated thu jun pdt ✅ nbsp integration test succeeded requested by on commit last updated fri jun pdt
1
69,415
14,988,512,931
IssuesEvent
2021-01-29 01:26:11
Omni3Tech/corda
https://api.github.com/repos/Omni3Tech/corda
opened
CVE-2020-10968 (High) detected in jackson-databind-2.9.7.jar, jackson-databind-2.8.4.jar
security vulnerability
## CVE-2020-10968 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.7.jar</b>, <b>jackson-databind-2.8.4.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.9.7.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: corda/tools/demobench/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.7/e6faad47abd3179666e89068485a1b88a195ceb7/jackson-databind-2.9.7.jar,canner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.7/e6faad47abd3179666e89068485a1b88a195ceb7/jackson-databind-2.9.7.jar</p> <p> Dependency Hierarchy: - jersey-media-json-jackson-2.25.jar (Root Library) - jackson-jaxrs-json-provider-2.8.4.jar - :x: **jackson-databind-2.9.7.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.8.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: corda/testing/testserver/build.gradle</p> <p>Path to vulnerable library: /tmp/ws-ua_20210129003010_HURYRX/downloadResource_DWADGL/20210129011449/jackson-databind-2.8.4.jar</p> <p> Dependency Hierarchy: - jersey-media-json-jackson-2.25.jar (Root Library) - jackson-jaxrs-json-provider-2.8.4.jar - :x: **jackson-databind-2.8.4.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/Omni3Tech/corda/commit/29c33d3b0ae2ca5fdb1be95ae420943d69013d34">29c33d3b0ae2ca5fdb1be95ae420943d69013d34</a></p> <p>Found in base branch: <b>release/os/4.8</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.aoju.bus.proxy.provider.remoting.RmiProvider (aka bus-proxy). <p>Publish Date: 2020-03-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10968>CVE-2020-10968</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-10968">https://nvd.nist.gov/vuln/detail/CVE-2020-10968</a></p> <p>Release Date: 2020-03-26</p> <p>Fix Resolution: jackson-databind-2.9.10.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-10968 (High) detected in jackson-databind-2.9.7.jar, jackson-databind-2.8.4.jar - ## CVE-2020-10968 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.7.jar</b>, <b>jackson-databind-2.8.4.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.9.7.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: corda/tools/demobench/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.7/e6faad47abd3179666e89068485a1b88a195ceb7/jackson-databind-2.9.7.jar,canner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.7/e6faad47abd3179666e89068485a1b88a195ceb7/jackson-databind-2.9.7.jar</p> <p> Dependency Hierarchy: - jersey-media-json-jackson-2.25.jar (Root Library) - jackson-jaxrs-json-provider-2.8.4.jar - :x: **jackson-databind-2.9.7.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.8.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: corda/testing/testserver/build.gradle</p> <p>Path to vulnerable library: /tmp/ws-ua_20210129003010_HURYRX/downloadResource_DWADGL/20210129011449/jackson-databind-2.8.4.jar</p> <p> Dependency Hierarchy: - jersey-media-json-jackson-2.25.jar (Root Library) - jackson-jaxrs-json-provider-2.8.4.jar - :x: **jackson-databind-2.8.4.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/Omni3Tech/corda/commit/29c33d3b0ae2ca5fdb1be95ae420943d69013d34">29c33d3b0ae2ca5fdb1be95ae420943d69013d34</a></p> <p>Found in base branch: <b>release/os/4.8</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.aoju.bus.proxy.provider.remoting.RmiProvider (aka bus-proxy). <p>Publish Date: 2020-03-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10968>CVE-2020-10968</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-10968">https://nvd.nist.gov/vuln/detail/CVE-2020-10968</a></p> <p>Release Date: 2020-03-26</p> <p>Fix Resolution: jackson-databind-2.9.10.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in jackson databind jar jackson databind jar cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file corda tools demobench build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar canner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy jersey media json jackson jar root library jackson jaxrs json provider jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file corda testing testserver build gradle path to vulnerable library tmp ws ua huryrx downloadresource dwadgl jackson databind jar dependency hierarchy jersey media json jackson jar root library jackson jaxrs json provider jar x jackson databind jar vulnerable library found in head commit a href found in base branch release os vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org aoju bus proxy provider remoting rmiprovider aka bus proxy publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jackson databind step up your open source security game with whitesource
0
633,366
20,253,059,140
IssuesEvent
2022-02-14 19:57:25
UnifespCodeLab/plasmedis-api
https://api.github.com/repos/UnifespCodeLab/plasmedis-api
closed
[Post Management] Remove Post
priority: high type: feature
# User Story - The user must be able to delete their own posts. - A moderator/administrator must be able to delete any post. # Summary > **DELETE** /postagens/<postagem_id> Create the route above to remove posts and their comments. This route must check whether the authenticated user is the post's creator **or** an administrator/moderator. # Acceptance Criteria - [ ] Create the route to remove posts. - [ ] Check whether the user is the creator **or** an administrator/moderator - [ ] Post comments must also be removed. # Prototype ---- # Additional Information A guide for checking the authenticated user can be found on Notion at [PlaSMeDIS > Usage Guides > [API] Authenticated User](https://www.notion.so/API-Usu-rio-Autenticado-193c8d8b0dd44c2c940616d4b15ab5a6)
1.0
[Post Management] Remove Post - # User Story - The user must be able to delete their own posts. - A moderator/administrator must be able to delete any post. # Summary > **DELETE** /postagens/<postagem_id> Create the route above to remove posts and their comments. This route must check whether the authenticated user is the post's creator **or** an administrator/moderator. # Acceptance Criteria - [ ] Create the route to remove posts. - [ ] Check whether the user is the creator **or** an administrator/moderator - [ ] Post comments must also be removed. # Prototype ---- # Additional Information A guide for checking the authenticated user can be found on Notion at [PlaSMeDIS > Usage Guides > [API] Authenticated User](https://www.notion.so/API-Usu-rio-Autenticado-193c8d8b0dd44c2c940616d4b15ab5a6)
non_process
remove post user story the user must be able to delete their own posts a moderator administrator must be able to delete any post summary delete postagens create the route above to remove posts and their comments this route must check whether the authenticated user is the post s creator or an administrator moderator acceptance criteria create the route to remove posts check whether the user is the creator or an administrator moderator post comments must also be removed prototype additional information a guide for checking the authenticated user can be found on notion at authenticated user
0
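The body above fully specifies the endpoint's contract, so a sketch is easy to write down. Below is a hypothetical Flask version with an in-memory store; the names (`get_authenticated_user`, the `POSTS`/`COMMENTS` dicts) are illustrative assumptions, not the plasmedis-api codebase, which resolves the user from a JWT as described in the Notion guide.

```python
# Hypothetical sketch only: helpers and stores are illustrative stand-ins.
from flask import Flask, jsonify

app = Flask(__name__)

POSTS = {1: {"creator_id": 42, "comments": [10, 11]}}  # toy in-memory store
COMMENTS = {10: {"post_id": 1}, 11: {"post_id": 1}}


def get_authenticated_user():
    # Placeholder: the real project resolves the user from the JWT.
    return {"id": 42, "role": "user"}


@app.route("/postagens/<int:postagem_id>", methods=["DELETE"])
def delete_post(postagem_id):
    post = POSTS.get(postagem_id)
    if post is None:
        return jsonify(error="post not found"), 404
    user = get_authenticated_user()
    is_owner = user["id"] == post["creator_id"]
    is_staff = user["role"] in ("admin", "moderator")
    if not (is_owner or is_staff):
        return jsonify(error="forbidden"), 403
    for comment_id in post["comments"]:  # comments go with the post
        COMMENTS.pop(comment_id, None)
    POSTS.pop(postagem_id)
    return "", 204
```

Checking ownership *or* role in one branch keeps the rule from the user story ("creator or moderator/administrator") auditable in a single place.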
618,511
19,472,536,814
IssuesEvent
2021-12-24 05:21:30
bryntum/support
https://api.github.com/repos/bryntum/support
closed
DayView DragZone doesn't handle interday events which belong in the DayView
bug resolved high-priority
Configure the Calendar with ``` modes : { week : { showAllDayHeader : false } } ``` Which means interDay events will show in the day columns. Then create a new day spanning event. Drag-moving and drag-resizing the event do not work - the DragZone assumes the event belongs in the AllDayZone.
1.0
DayView DragZone doesn't handle interday events which belong in the DayView - Configure the Calendar with ``` modes : { week : { showAllDayHeader : false } } ``` Which means interDay events will show in the day columns. Then create a new day spanning event. Drag-moving and drag-resizing the event do not work - the DragZone assumes the event belongs in the AllDayZone.
non_process
dayview dragzone doesn t handle interday events which belong in the dayview configure the calendar with modes week showalldayheader false which means interday events will show in the day columns then create a new day spanning event drag moving and drag resizing the event do not work the dragzone assumes the event belongs in the alldayzone
0
58,144
16,371,384,264
IssuesEvent
2021-05-15 07:22:52
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
[UX anti pattern] "Explore rooms" button inconsistent compared to search bar
T-Defect
### Description The "Filter all spaces" search bar filters *all* your spaces (big surprise) and rooms. Therefore I expected the "Explore rooms" button located **directly next to it** to be global too i.e. to open *the server's room list* instead of the Space home. I've stumbled across this quite often so I really think this should be redesigned in order to clarify the behavior. ### Version information develop.element.io (e716e484a68d-react-1b8402f39c40-js-91af9a411d3c) ![grafik](https://user-images.githubusercontent.com/67947338/118351413-383be480-b55c-11eb-8cc9-bccd604b9127.png)
1.0
[UX anti pattern] "Explore rooms" button inconsistent compared to search bar - ### Description The "Filter all spaces" search bar filters *all* your spaces (big surprise) and rooms. Therefore I expected the "Explore rooms" button located **directly next to it** to be global too i.e. to open *the server's room list* instead of the Space home. I've stumbled across this quite often so I really think this should be redesigned in order to clarify the behavior. ### Version information develop.element.io (e716e484a68d-react-1b8402f39c40-js-91af9a411d3c) ![grafik](https://user-images.githubusercontent.com/67947338/118351413-383be480-b55c-11eb-8cc9-bccd604b9127.png)
non_process
explore rooms button inconsistent compared to search bar description the filter all spaces search bar filters all your spaces big surprise and rooms therefore i expected the explore rooms button located directly next to it to be global too i e to open the server s room list instead of the space home i ve stumbled across this quite often so i really think this should be redesigned in order to clarify the behavior version information develop element io react js
0
314,013
26,970,463,536
IssuesEvent
2023-02-09 04:06:45
seleniumbase/SeleniumBase
https://api.github.com/repos/seleniumbase/SeleniumBase
closed
Examples that use Google might need to change
tests
### Examples that use Google might need to change **Automated tests started hitting this today:** <img width="550" alt="Screenshot 2023-02-06 at 10 12 05 PM" src="https://user-images.githubusercontent.com/6788579/217139401-a80660d6-2c11-46fe-a5db-0bc335dc1f64.png"> **Impacted examples:** * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/parameterized_test.py * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/test_pytest_parametrize.py * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/test_sb_fixture.py * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/boilerplates/samples/test_page_objects.py * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/boilerplates/samples/google_test.py * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/raw_test_scripts.py -------- These are examples I use to verify the framework before a new release. I'll adopt a wait-and-see approach for now. If this behavior continues, I may need to switch search engines for the tests. Possible options for a replacement, if needed: * [Bing](https://www.bing.com/) * [DuckDuckGo](https://duckduckgo.com/) * https://www.onesearch.com/ * https://www.startpage.com/
1.0
Examples that use Google might need to change - ### Examples that use Google might need to change **Automated tests started hitting this today:** <img width="550" alt="Screenshot 2023-02-06 at 10 12 05 PM" src="https://user-images.githubusercontent.com/6788579/217139401-a80660d6-2c11-46fe-a5db-0bc335dc1f64.png"> **Impacted examples:** * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/parameterized_test.py * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/test_pytest_parametrize.py * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/test_sb_fixture.py * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/boilerplates/samples/test_page_objects.py * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/boilerplates/samples/google_test.py * https://github.com/seleniumbase/SeleniumBase/blob/master/examples/raw_test_scripts.py -------- These are examples I use to verify the framework before a new release. I'll adopt a wait-and-see approach for now. If this behavior continues, I may need to switch search engines for the tests. Possible options for a replacement, if needed: * [Bing](https://www.bing.com/) * [DuckDuckGo](https://duckduckgo.com/) * https://www.onesearch.com/ * https://www.startpage.com/
non_process
examples that use google might need to change examples that use google might need to change automated tests started hitting this today img width alt screenshot at pm src impacted examples these are examples i use to verify the framework before a new release i ll adopt a wait and see approach for now if this behavior continues i may need to switch search engines for the tests possible options for a replacement if needed
0
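If the Google consent interstitial persists, swapping engines is mostly a matter of changing the URL and selectors in the affected examples. A hedged sketch of what that might look like with DuckDuckGo follows; the CSS selector and expected result text are assumptions about duckduckgo.com's markup, not verified against the live page.

```python
# Illustrative only: selector and result-page text are assumptions.
from seleniumbase import BaseCase


class DuckDuckGoSearchTest(BaseCase):
    def test_basic_search(self):
        self.open("https://duckduckgo.com/")
        self.type('input[name="q"]', "SeleniumBase GitHub\n")  # "\n" submits the form
        self.assert_text("SeleniumBase", "body")  # results should mention the project
```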
299,639
25,915,601,419
IssuesEvent
2022-12-15 17:06:05
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Test failure: System.Net.Sockets.Tests.DnsEndPointTest.Socket_ConnectAsyncDnsEndPoint_Success(type: Async)
area-System.Net.Sockets test-run-core
failed in job: [runtime-libraries outerloop 20200519.3 ](https://dev.azure.com/dnceng/public/_build/results?buildId=651228&view=ms.vss-test-web.build-test-results-tab&runId=20205164&resultId=103418&paneView=debug) Error message ~~~ Timed out while waiting for connection Expected: True Actual: False Stack trace at System.Net.Sockets.Tests.DnsEndPointTest.Socket_ConnectAsyncDnsEndPoint_Success(SocketImplementationType type) in /_/src/libraries/System.Net.Sockets/tests/FunctionalTests/DnsEndPointTest.cs:line 209 ~~~
1.0
Test failure: System.Net.Sockets.Tests.DnsEndPointTest.Socket_ConnectAsyncDnsEndPoint_Success(type: Async) - failed in job: [runtime-libraries outerloop 20200519.3 ](https://dev.azure.com/dnceng/public/_build/results?buildId=651228&view=ms.vss-test-web.build-test-results-tab&runId=20205164&resultId=103418&paneView=debug) Error message ~~~ Timed out while waiting for connection Expected: True Actual: False Stack trace at System.Net.Sockets.Tests.DnsEndPointTest.Socket_ConnectAsyncDnsEndPoint_Success(SocketImplementationType type) in /_/src/libraries/System.Net.Sockets/tests/FunctionalTests/DnsEndPointTest.cs:line 209 ~~~
non_process
test failure system net sockets tests dnsendpointtest socket connectasyncdnsendpoint success type async failed in job error message timed out while waiting for connection expected true actual false stack trace at system net sockets tests dnsendpointtest socket connectasyncdnsendpoint success socketimplementationtype type in src libraries system net sockets tests functionaltests dnsendpointtest cs line
0
43,910
17,767,884,341
IssuesEvent
2021-08-30 09:53:23
azure-deprecation/dashboard
https://api.github.com/repos/azure-deprecation/dashboard
opened
Azure Application Insights Release annotations using API keys are retiring on 31 August 2024
impact:migration-required verified area:feature cloud:public services:app-insights
Azure Application Insights Release annotations using API keys are retiring on 31 August 2024 **Deadline:** Aug 31, 2024 **Impacted Services:** - Azure Application Insights **More information:** - https://azure.microsoft.com/en-au/updates/transition-to-new-release-annotations-in-application-insights/ - https://docs.microsoft.com/en-gb/azure/azure-monitor/app/annotations#transition-to-the-new-release-annotation ### Notice Here's the official report from Microsoft: > As the [new release annotations](https://docs.microsoft.com/azure/azure-monitor/app/annotations) in Application Insights provide a simple way to re-use your existing deployment tasks or custom deployment scripts, we’re retiring release annotations using API Keys on 31 August 2024. > > With the new release annotations, you no longer need to maintain a dedicated task. Your pre-existing deployment task for App Services or Functions will be sufficient or you can add a custom PowerShell script for [other scenarios](https://docs.microsoft.com/azure/azure-monitor/app/annotations#release-annotations-with-azure-pipelines-build). ### Timeline | Phase | Date | Description | |:------|------|-------------| |Announcement|Aug 20, 2021|Deprecation was announced| |Deprecation|Aug 31, 2024|Annotation will stop working| ### Impact Annotation will stop working. ### Required Action Migration guide is available [here](https://docs.microsoft.com/en-gb/azure/azure-monitor/app/annotations#transition-to-the-new-release-annotation). Here's the official report from Microsoft: > [Follow the detailed steps to transition to new release annotations](https://docs.microsoft.com/azure/azure-monitor/app/annotations#transition-to-the-new-release-annotation) before 31 August 2024. After 31 August 2024, Release annotations using API Keys will not be supported. ### Contact You can get in touch through the following options: - Get answers from Microsoft Q&A ([link](mailto:https://aka.ms/qna-azure-monitor-ai-release-annotations)). - Contact Azure support ([link](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)).
1.0
Azure Application Insights Release annotations using API keys are retiring on 31 August 2024 - Azure Application Insights Release annotations using API keys are retiring on 31 August 2024 **Deadline:** Aug 31, 2024 **Impacted Services:** - Azure Application Insights **More information:** - https://azure.microsoft.com/en-au/updates/transition-to-new-release-annotations-in-application-insights/ - https://docs.microsoft.com/en-gb/azure/azure-monitor/app/annotations#transition-to-the-new-release-annotation ### Notice Here's the official report from Microsoft: > As the [new release annotations](https://docs.microsoft.com/azure/azure-monitor/app/annotations) in Application Insights provide a simple way to re-use your existing deployment tasks or custom deployment scripts, we’re retiring release annotations using API Keys on 31 August 2024. > > With the new release annotations, you no longer need to maintain a dedicated task. Your pre-existing deployment task for App Services or Functions will be sufficient or you can add a custom PowerShell script for [other scenarios](https://docs.microsoft.com/azure/azure-monitor/app/annotations#release-annotations-with-azure-pipelines-build). ### Timeline | Phase | Date | Description | |:------|------|-------------| |Announcement|Aug 20, 2021|Deprecation was announced| |Deprecation|Aug 31, 2024|Annotation will stop working| ### Impact Annotation will stop working. ### Required Action Migration guide is available [here](https://docs.microsoft.com/en-gb/azure/azure-monitor/app/annotations#transition-to-the-new-release-annotation). Here's the official report from Microsoft: > [Follow the detailed steps to transition to new release annotations](https://docs.microsoft.com/azure/azure-monitor/app/annotations#transition-to-the-new-release-annotation) before 31 August 2024. After 31 August 2024, Release annotations using API Keys will not be supported. ### Contact You can get in touch through the following options: - Get answers from Microsoft Q&A ([link](mailto:https://aka.ms/qna-azure-monitor-ai-release-annotations)). - Contact Azure support ([link](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)).
non_process
azure application insights release annotations using api keys are retiring on august azure application insights release annotations using api keys are retiring on august deadline aug impacted services azure application insights more information notice here s the official report from microsoft as the in application insights provide a simple way to re use your existing deployment tasks or custom deployment scripts we re retiring release annotations using api keys on august with the new release annotations you no longer need to maintain a dedicated task your pre existing deployment task for app services or functions will be sufficient or you can add a custom powershell script for timeline phase date description announcement aug deprecation was announced deprecation aug annotation will stop working impact annotation will stop working required action migration guide is available here s the official report from microsoft before august after august release annotations using api keys will not be supported contact you can get in touch through the following options get answers from microsoft q a mailto contact azure support
0
14,355
17,377,905,137
IssuesEvent
2021-07-31 04:14:52
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
System.Diagnostics.Tests.ProcessTests.TestToString_OnExitedProcess failing in the CI
area-System.Diagnostics.Process in pr untriaged
https://dev.azure.com/dnceng/public/_build/results?buildId=1268768&view=logs&j=1422aa79-6880-5d83-e13f-65c73e5c446b&t=f314303b-c6ab-51e8-2c96-4cbe16c1a0a2 https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-56504-merge-b7005c257cb84dc688/System.Diagnostics.Process.Tests/console.0ad0143c.log?sv=2019-07-07&se=2021-08-19T22%3A06%3A57Z&sr=c&sp=rl&sig=2CWyX9R8Vs5TyDW7fMIet3VfEWnMAX6eeyqKDZ6tsB4%3D ``` C:\h\w\A9F70974\w\A2CA08DC\e>"C:\h\w\A9F70974\p\dotnet.exe" exec --runtimeconfig System.Diagnostics.Process.Tests.runtimeconfig.json --depsfile System.Diagnostics.Process.Tests.deps.json xunit.console.dll System.Diagnostics.Process.Tests.dll -xml testResults.xml -nologo -nocolor -notrait category=IgnoreForCI -notrait category=OuterLoop -notrait category=failing Discovering: System.Diagnostics.Process.Tests (method display = ClassAndMethod, method display options = None) Discovered: System.Diagnostics.Process.Tests (found 254 of 278 test cases) Starting: System.Diagnostics.Process.Tests (parallel test collections = on, max threads = 2) System.Diagnostics.Tests.ProcessStartInfoTests.ShellExecute_Nano_Fails_Start [SKIP] Condition(s) not met: "IsWindowsNanoServer" System.Diagnostics.Tests.ProcessTests.TestToString_OnExitedProcess [FAIL] Assert.Equal() Failure ? (pos 26) Expected: ....Diagnostics.Process Actual: ....Diagnostics.Process (dotnet) ? (pos 26) Stack Trace: /_/src/libraries/System.Diagnostics.Process/tests/ProcessTests.cs(492,0): at System.Diagnostics.Tests.ProcessTests.TestToString_OnExitedProcess() Invalid number of parameters ```
1.0
System.Diagnostics.Tests.ProcessTests.TestToString_OnExitedProcess failing in the CI - https://dev.azure.com/dnceng/public/_build/results?buildId=1268768&view=logs&j=1422aa79-6880-5d83-e13f-65c73e5c446b&t=f314303b-c6ab-51e8-2c96-4cbe16c1a0a2 https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-56504-merge-b7005c257cb84dc688/System.Diagnostics.Process.Tests/console.0ad0143c.log?sv=2019-07-07&se=2021-08-19T22%3A06%3A57Z&sr=c&sp=rl&sig=2CWyX9R8Vs5TyDW7fMIet3VfEWnMAX6eeyqKDZ6tsB4%3D ``` C:\h\w\A9F70974\w\A2CA08DC\e>"C:\h\w\A9F70974\p\dotnet.exe" exec --runtimeconfig System.Diagnostics.Process.Tests.runtimeconfig.json --depsfile System.Diagnostics.Process.Tests.deps.json xunit.console.dll System.Diagnostics.Process.Tests.dll -xml testResults.xml -nologo -nocolor -notrait category=IgnoreForCI -notrait category=OuterLoop -notrait category=failing Discovering: System.Diagnostics.Process.Tests (method display = ClassAndMethod, method display options = None) Discovered: System.Diagnostics.Process.Tests (found 254 of 278 test cases) Starting: System.Diagnostics.Process.Tests (parallel test collections = on, max threads = 2) System.Diagnostics.Tests.ProcessStartInfoTests.ShellExecute_Nano_Fails_Start [SKIP] Condition(s) not met: "IsWindowsNanoServer" System.Diagnostics.Tests.ProcessTests.TestToString_OnExitedProcess [FAIL] Assert.Equal() Failure ? (pos 26) Expected: ....Diagnostics.Process Actual: ....Diagnostics.Process (dotnet) ? (pos 26) Stack Trace: /_/src/libraries/System.Diagnostics.Process/tests/ProcessTests.cs(492,0): at System.Diagnostics.Tests.ProcessTests.TestToString_OnExitedProcess() Invalid number of parameters ```
process
system diagnostics tests processtests testtostring onexitedprocess failing in the ci c h w w e c h w p dotnet exe exec runtimeconfig system diagnostics process tests runtimeconfig json depsfile system diagnostics process tests deps json xunit console dll system diagnostics process tests dll xml testresults xml nologo nocolor notrait category ignoreforci notrait category outerloop notrait category failing discovering system diagnostics process tests method display classandmethod method display options none discovered system diagnostics process tests found of test cases starting system diagnostics process tests parallel test collections on max threads system diagnostics tests processstartinfotests shellexecute nano fails start condition s not met iswindowsnanoserver system diagnostics tests processtests testtostring onexitedprocess assert equal failure pos expected diagnostics process actual diagnostics process dotnet pos stack trace src libraries system diagnostics process tests processtests cs at system diagnostics tests processtests testtostring onexitedprocess invalid number of parameters
1
329,371
24,216,332,576
IssuesEvent
2022-09-26 07:07:55
appsmithorg/appsmith
https://api.github.com/repos/appsmithorg/appsmith
closed
[Docs]: Table Widget v2.1: Checkbox Column Type Documentation
Documentation User Education Pod Ready for Doc Team
### Is there an existing issue for this? Documentation for https://github.com/appsmithorg/appsmith/issues/7338 ### Description We wish to support checkbox as a column type in table ## Link to documentation https://www.notion.so/appsmith/Table-Widget-Checkbox-column-type-5a8c09ffdc024d39bb8f511c07b403b5
1.0
[Docs]: Table Widget v2.1: Checkbox Column Type Documentation - ### Is there an existing issue for this? Documentation for https://github.com/appsmithorg/appsmith/issues/7338 ### Description We wish to support checkbox as a column type in table ## Link to documentation https://www.notion.so/appsmith/Table-Widget-Checkbox-column-type-5a8c09ffdc024d39bb8f511c07b403b5
non_process
table widget checkbox column type documentation is there an existing issue for this documentation for description we wish to support checkbox as a column type in table link to documentation
0
20,732
27,430,378,384
IssuesEvent
2023-03-02 00:32:12
okTurtles/group-income
https://api.github.com/repos/okTurtles/group-income
opened
Create staging server
Kind:Bug Kind:Enhancement App:Frontend App:Backend Kind:Test Priority:High Note:Tooling Kind:Process Kind:Core
### Problem Multiple times now we've had the issue of everything appearing to work during testing and Travis, etc., but once merged into production everything breaks. This is caused because of the way the contracts need to be upgraded with missing data sometimes (and failed to be upgraded). Or other reasons. ### Solution Create a staging server URL and update that before merging changes to production.
1.0
Create staging server - ### Problem Multiple times now we've had the issue of everything appearing to work during testing and Travis, etc., but once merged into production everything breaks. This is caused because of the way the contracts need to be upgraded with missing data sometimes (and failed to be upgraded). Or other reasons. ### Solution Create a staging server URL and update that before merging changes to production.
process
create staging server problem multiple times now we ve had the issue of everything appearing to work during testing and travis etc but once merged into production everything breaks this is caused because of the way the contracts need to be upgraded with missing data sometimes and failed to be upgraded or other reasons solution create a staging server url and update that before merging changes to production
1
10,067
13,044,161,811
IssuesEvent
2020-07-29 03:47:26
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `StrToDateDuration` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `StrToDateDuration` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @iosmanthus ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
2.0
UCP: Migrate scalar function `StrToDateDuration` from TiDB - ## Description Port the scalar function `StrToDateDuration` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @iosmanthus ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
process
ucp migrate scalar function strtodateduration from tidb description port the scalar function strtodateduration from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
1
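For contributors picking this up, the semantics being ported are MySQL's `STR_TO_DATE` when the format string contains only time parts, in which case the result is a Duration rather than a DateTime. Below is a rough Python reference model of that behavior — not the Rust implementation, and it uses Python's strptime directives such as `%H:%M:%S` in place of MySQL's `%H:%i:%s`.

```python
# Reference model only: the real port must handle MySQL format specifiers,
# NULL propagation, and TiDB's fractional-second rules.
from datetime import datetime, timedelta

def str_to_date_duration(s: str, fmt: str) -> timedelta:
    t = datetime.strptime(s, fmt)  # raises ValueError on malformed input
    return timedelta(hours=t.hour, minutes=t.minute,
                     seconds=t.second, microseconds=t.microsecond)

assert str_to_date_duration("10:11:12", "%H:%M:%S") == timedelta(hours=10, minutes=11, seconds=12)
```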
139,431
5,375,273,691
IssuesEvent
2017-02-23 03:53:42
chaorace/cqui
https://api.github.com/repos/chaorace/cqui
opened
Barbarian icons are *too* red
bug easy low priority
Jokes aside, 84b4f365f52d9af76745591a77321f1aeecc86e3 introduced an issue where barbarian icons were reddened, due to being hostile by default. This behavior doesn't do anything to help the player and makes the color too bright and a little painful to look at
1.0
Barbarian icons are *too* red - Jokes aside, 84b4f365f52d9af76745591a77321f1aeecc86e3 introduced an issue where barbarian icons were reddened, due to being hostile by default. This behavior doesn't do anything to help the player and makes the color too bright and a little painful to look at
non_process
barbarian icons are too red jokes aside introduced an issue where barbarian icons were reddened due to being hostile by default this behavior doesn t do anything to help the player and makes the color too bright and a little painful to look at
0
11,806
14,627,699,253
IssuesEvent
2020-12-23 12:48:12
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
opened
Links with the 'rel' attribute that equals to 'modulepreload' are overridden incorrectly
AREA: client AREA: server FREQUENCY: level 1 SYSTEM: resource processing SYSTEM: script processing TYPE: bug
Server example: ```js require('http') .createServer((req, res) => { if (req.url === '/') { res.writeHead(200, { 'content-type': 'text/html' }); res.end(` <meta charset="utf-8"> <link rel="modulepreload" href="foo.js"> <link rel="modulepreload" href="bar.js"> <script type="module"> import foo from './foo.js'; import bar from './bar.js'; document.body.appendChild(document.createTextNode(foo === bar)); </script> `); } else if (req.url === '/foo.js') { res.writeHead(200, { 'content-type': 'application/javascript' }); res.end(` import bar from './bar.js'; export default bar; `); } else if (req.url === '/bar.js') { res.writeHead(200, { 'content-type': 'application/javascript' }); res.end('export default {};'); } else res.destroy(); }) .listen(2020, () => console.log('http://localhost:2020')); ```
2.0
Links with the 'rel' attribute that equals to 'modulepreload' are overridden incorrectly - Server example: ```js require('http') .createServer((req, res) => { if (req.url === '/') { res.writeHead(200, { 'content-type': 'text/html' }); res.end(` <meta charset="utf-8"> <link rel="modulepreload" href="foo.js"> <link rel="modulepreload" href="bar.js"> <script type="module"> import foo from './foo.js'; import bar from './bar.js'; document.body.appendChild(document.createTextNode(foo === bar)); </script> `); } else if (req.url === '/foo.js') { res.writeHead(200, { 'content-type': 'application/javascript' }); res.end(` import bar from './bar.js'; export default bar; `); } else if (req.url === '/bar.js') { res.writeHead(200, { 'content-type': 'application/javascript' }); res.end('export default {};'); } else res.destroy(); }) .listen(2020, () => console.log('http://localhost:2020')); ```
process
links with the rel attribute that equals to modulepreload are overridden incorrectly server example js require http createserver req res if req url res writehead content type text html res end import foo from foo js import bar from bar js document body appendchild document createtextnode foo bar else if req url foo js res writehead content type application javascript res end import bar from bar js export default bar else if req url bar js res writehead content type application javascript res end export default else res destroy listen console log
1
620
3,086,889,736
IssuesEvent
2015-08-25 07:59:02
e-government-ua/i
https://api.github.com/repos/e-government-ua/i
closed
ะะฐ ะฑัะบะต (wf-base) ั€ะตะฐะปะธะทะพะฒะฐั‚ัŒ ัะตั€ะฒะธั ะฟะพะปัƒั‡ะตะฝะธั ะฐะบั‚ะธะฒะฝั‹ั… ั‚ะธะบะตั‚ะพะฒ
hi priority In process of testing test
- [x] 1) Create a getFlowSlotTickets service (in the same controller where the slot logic lives) - [x] 2) Accept the parameters sLogin - user login string bEmployeeUnassigned - boolean, true - tickets not assigned to a civil servant //optional, default false sDate - date string //optional - [x] 3) Return an array of the form (FlowSlotTicket entity): [ { "nID":1 ,"nID_FlowSlot":2 ,"nID_Subject":2 ,"nID_Task_Activiti":NULL ,"sDateStart":"2015-07-20T15:00:00" ,"sDateFinish":"2015-07-20T15:15:00" ,"sDateEdit":"2015-07-20T15:00:00" ,"sUserTaskName":"Application submitted" ,"sNameBP":"Registration of a used car at MREO" ,"sTaskDate":"2015-07-20T15:00:00" } ] - [x] 4) Perform the selection with the following logic: - [x] 4.1) Select all Activiti tasks associated with the login sLogin - [x] 4.2) If bEmployeeUnassigned=true, also honor the unassigned=true flag - [x] 4.3) Then keep only those whose IDs appear in the "nID_Task_Activiti" field of the "FlowSlotTicket" entity, and join these records. - [x] 4.4) Extend the joined entity with fields (from the task object) - "sUserTaskName" - the user task name //for example: "Application submitted" - "sNameBP" - the business process name //for example: "Registration of a used car at MREO" - "sTaskDate" - the application submission date //for example: "2015-07-20T15:00:00" - [x] 4.5) Sort the list in ascending order of "sDateStart" of the joined entity (i.e. including the "FlowSlotTicket" data) - [x] 4.6) If the sDate parameter is given, select records/tasks only within that date - [x] 5) Update the API docs on our Wiki
1.0
ะะฐ ะฑัะบะต (wf-base) ั€ะตะฐะปะธะทะพะฒะฐั‚ัŒ ัะตั€ะฒะธั ะฟะพะปัƒั‡ะตะฝะธั ะฐะบั‚ะธะฒะฝั‹ั… ั‚ะธะบะตั‚ะพะฒ - - [x] 1) ะกะพะทะดะฐั‚ัŒ ัะตั€ะฒะธั getFlowSlotTickets (ะฒ ั‚ะพะผ-ะถะต ะบะพะฝั‚ั€ะพะปะปะตั€ะต ะณะดะต ะธะดะตั‚ ั€ะฐะฑะพั‚ะฐ ัะพ ัะปะพั‚ะฐะผะธ) - [x] 2) ะŸั€ะธะฝะธะผะฐั‚ัŒ ะฟะฐั€ะฐะผะตั‚ั€ั‹ sLogin - ัั‚ั€ะพะบะฐ ะปะพะณะธะฝะฐ ะฟะพะปัŒะทะพะฒะฐั‚ะตะปั bEmployeeUnassigned - ะฑัƒะปะตะฒั‹ะน, true - ะฝะต ะฐััะธะณะฝัƒั‚ั‹ะต ะณะพััะปัƒะถะฐั‰ะธะผ //ะพะฟั†ะธะพะฝะฐะปัŒะฝั‹ะน, ัƒะผะพะปั‡ะฐั‚ะตะปัŒะฝะพ false sDate - ัั‚ั€ะพะบะฐ ั ะดะฐั‚ะพะน //ะพะฟั†ะธะพะฝะฐะปัŒะฝั‹ะน - [x] 3) ะ’ั‹ะดะฐะฒะฐั‚ัŒ ะผะฐััะธะฒ ะฒะธะดะฐ (ััƒั‰ะฝะพัั‚ัŒ FlowSlotTicket): [ { "nID":1 ,"nID_FlowSlot":2 ,"nID_Subject":2 ,"nID_Task_Activiti":NULL ,"sDateStart":"2015-07-20T15:00:00" ,"sDateFinish":"2015-07-20T15:15:00" ,"sDateEdit":"2015-07-20T15:00:00" ,"sUserTaskName":"ะ—ะฐัะฒะบะฐ ะฟะพะดะฐะฝะฐ" ,"sNameBP":"ะ ะตะณะธัั‚ั€ะฐั†ะธั ะฑ/ัƒ ะฐะฒั‚ะพ ะฒ ะœะ ะ•ะž" ,"sTaskDate":"2015-07-20T15:00:00" } ] - [x] 4) ะ’ั‹ะฑะพั€ะบัƒ ะฟั€ะพะธะทะฒะพะดะธั‚ัŒ ะฟะพ ัะปะตะดัƒัŽั‰ะตะน ะปะพะณะธะบะต: - [x] 4.1) ะ’ั‹ะฑะธั€ะฐั‚ัŒ ะฒัะต ั‚ะฐัะบะธ ะฐะบั‚ะธะฒะธั‚ะธ, ะบะพั‚ะพั€ั‹ะต ะพั‚ะพะถะดะตัั‚ะฒะปะตะฝั‹ ั ะปะพะณะธะฝะพะผ sLogin - [x] 4.2) ะ•ัะปะธ bEmployeeUnassigned=true, ั‚ะพ ั ัƒั‡ะตั‚ะพะผ ั„ะปะฐะณะฐ unassigned=true - [x] 4.3) ะ”ะฐะปะตะต ะฒั‹ะฑะธั€ะฐั‚ัŒ ั‚ะพะปัŒะบะพ ั‚ะต ะธะท ะฝะธั…, ั‡ัŒะธ ะ˜ะ” ัˆะฝะธะบะธ ะตัั‚ัŒ ะฒ ะฟะพะปะต "nID_Task_Activiti", ััƒั‰ะฝะพัั‚ะธ "FlowSlotTicket", ะธ ะดะถะพะธะฝะธั‚ัŒ ัั‚ะธ ะทะฐะฟะธัะธ. - [x] 4.4) ะ”ะพะฟะพะปะฝะธั‚ัŒ ะพะฑัŠะตะดะธะฝะตะฝะฝัƒัŽ ััƒั‰ะฝะพัั‚ัŒ ะฟะพะปัะผะธ (ะธะท ะพะฑัŠะตะบั‚ะฐ ั‚ะฐัะบะธ) - "sUserTaskName" - ะะฐะทะฒะฐะฝะธะต ัŽะทะตั€ั‚ะฐัะบะธ //ะฝะฐะฟั€ะธะผะตั€: "ะ—ะฐัะฒะบะฐ ะฟะพะดะฐะฝะฐ" - "sNameBP" - ะะฐะทะฒะฐะฝะธะต ะ‘ะŸ //ะฝะฐะฟั€ะธะผะตั€: "ะ ะตะณะธัั‚ั€ะฐั†ะธั ะฑ/ัƒ ะฐะฒั‚ะพ ะฒ ะœะ ะ•ะž" - "sTaskDate" - ะ”ะฐั‚ะฐ ะฟะพะดะฐั‡ะธ ะทะฐัะฒะบะธ //ะฝะฐะฟั€ะธะผะตั€: "2015-07-20T15:00:00" - [x] 4.5) ะกะพั€ั‚ะธั€ะพะฒะฐั‚ัŒ ัะฟะธัะพะบ ะฒ ะฟะพั€ัะดะบะต ะฒะพะทั€ะฐัั‚ะฐะฝะธั "sDateStart" ะพะฑัŠะตะดะตะฝะตะฝะฝะพะน ััƒั‰ะฝะพัั‚ะธ (ะฒ ั‚.ั‡. ั ะดะฐะฝะฝั‹ะผะธ "FlowSlotTicket") - [x] 4.6) ะ•ัะปะธ ะทะฐะดะฐะฝ ะฟะฐั€ะฐะผะตั‚ั€ sDate, ั‚ะพ ะฒั‹ะฑะธั€ะฐั‚ัŒ ะทะฐะฟะธัะธ/ั‚ะฐัะบะธ ั‚ะพะปัŒะบะพ ะฒ ั€ะฐะผะบะฐั… ัั‚ะพะน ะดะฐั‚ั‹ - [x] 5) ะะบั‚ัƒะฐะปะธะทะธั€ะพะฒะฐั‚ัŒ ะดะพะบัƒ ะฟะพ ะะŸะ˜ ะฝะฐ ะฝะฐัˆะตะน ะ’ะธะบะธ
process
on the backend wf base implement a service for retrieving active tickets create a getflowslottickets service in the same controller where the slot logic lives accept the parameters slogin user login string bemployeeunassigned boolean true tickets not assigned to a civil servant optional default false sdate date string optional return an array of the form flowslotticket entity nid nid flowslot nid subject nid task activiti null sdatestart sdatefinish sdateedit susertaskname application submitted snamebp registration of a used car at mreo staskdate perform the selection with the following logic select all activiti tasks associated with the login slogin if bemployeeunassigned true also honor the unassigned true flag then keep only those whose ids appear in the nid task activiti field of the flowslotticket entity and join these records extend the joined entity with fields from the task object susertaskname the user task name for example application submitted snamebp the business process name for example registration of a used car at mreo staskdate the application submission date for example sort the list in ascending order of sdatestart of the joined entity i e including the flowslotticket data if the sdate parameter is given select records tasks only within that date update the api docs on our wiki
1
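Given the contract spelled out above, a client call is straightforward. A hypothetical sketch with Python's `requests` follows; the base URL and service path are placeholders, since the record does not state the deployed route.

```python
import requests

BASE_URL = "http://localhost:8080/wf"  # placeholder; the real deployment URL differs

def get_flow_slot_tickets(s_login, b_employee_unassigned=False, s_date=None):
    # Mirrors the parameters described above: sLogin is required,
    # bEmployeeUnassigned defaults to false, sDate is optional.
    params = {"sLogin": s_login}
    if b_employee_unassigned:
        params["bEmployeeUnassigned"] = "true"
    if s_date is not None:
        params["sDate"] = s_date  # e.g. "2015-07-20"
    resp = requests.get(BASE_URL + "/service/getFlowSlotTickets", params=params)
    resp.raise_for_status()
    return resp.json()  # list of FlowSlotTicket dicts, sorted by sDateStart
```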
330,416
28,376,037,120
IssuesEvent
2023-04-12 20:56:58
CommunityToolkit/Labs-Windows
https://api.github.com/repos/CommunityToolkit/Labs-Windows
reopened
🧪 [Experiment] SettingsCard & SettingsExpander
experiment :test_tube:
# Approved from Toolkit - https://github.com/CommunityToolkit/Labs-Windows/discussions/129 ## Problem Statement: There are currently no controls in WinUI or the Toolkit that allow for creating consistent settings experiences that can be found across Windows 11. This experiment introduces a SettingsCard component that allows displaying simple cards. SettingsExpander will be a follow-up experiment to introduce collapsible cards. ## Overview This experiment adds the following components: - **SettingsCard** a simple card component that allows for displaying a setting. The `IsClickEnabled` property can be used to turn it into a Button-like control. - **SettingsExpander** a control that uses the same properties as `SettingsCard`, and `SettingsCard` can be used to set the SettingsExpander.Items. Binding is also supported. ![SettingsExperiment](https://user-images.githubusercontent.com/9866362/179772289-abc9bb1a-bb06-49df-8fe4-1628e4dd3852.gif) https://user-images.githubusercontent.com/9866362/223989590-611a26f6-74ce-4c6e-9bb6-055df743a5c5.mp4 ## Using You can try it out via the NuGet Packages here: - UWP: https://dev.azure.com/dotnet/CommunityToolkit/_artifacts/feed/CommunityToolkit-Labs/NuGet/CommunityToolkit.Labs.Uwp.SettingsControls - WinUI 3: https://dev.azure.com/dotnet/CommunityToolkit/_artifacts/feed/CommunityToolkit-Labs/NuGet/CommunityToolkit.Labs.WinUI.SettingsControls [SettingsCard (doc + samples)](https://github.com/CommunityToolkit/Labs-Windows/blob/main/components/SettingsControls/samples/SettingsCard.md) [SettingsExpander (doc + samples)](https://github.com/CommunityToolkit/Labs-Windows/blob/main/components/SettingsControls/samples/SettingsExpander.md) Read more about [Preview Packages here](https://aka.ms/wct/wiki/previewpackages). CommunityToolkit members can also try it out with Codespaces. ## TO DO - [ ] Tests ## Implementation Requirements Not all these items are required to submit a PR. This list is here to help track what is remaining to implement before a technical review and discussion of moving into the main repository can occur. - [x] Working Prototype - [x] Feature Complete - [x] Documentation - [x] Samples - [ ] Tests - [ ] Community Feedback / Usage Testimonies ## Tested Platforms - [x] UWP - [x] WinAppSDK / WinUI 3 - [x] Web Assembly (WASM) - [ ] Android - [ ] iOS - [ ] MacOS - [ ] Linux / GTK ## Technical Review These items can sometimes be done ahead of time, but are usually started and completed after all implementation details are finished. - [ ] Accessibility Audit - [ ] API/Naming Review - [ ] Code Quality/Style - [ ] Dependency Review - [ ] Design/Style Review - [ ] Final Approval
1.0
๐Ÿงช [Experiment] SettingsCard & SettingsExpander - # Approved from Toolkit - https://github.com/CommunityToolkit/Labs-Windows/discussions/129 ## Problem Statement: There are currently no controls in WinUI or the Toolkit that allow for creating consistent settings experiences that can be found across Windows 11. This experiment introduces a SettingsCard component that allows displaying simple cards. SettingsExpander will be a follow-up experiment to introduce collapsible cards. ## Overview This experiment adds the following components: - **SettingsCard** a simple card component that allows for displaying a setting. The `IsClickEnabled` property can be used to turn it into a Button-like control. - **SettingsExpander** a control that uses the same properties as `SettingsCard`, and `SettingsCard` can be used to set the SettingsExpander.Items. Binding is also supported. ![SettingsExperiment](https://user-images.githubusercontent.com/9866362/179772289-abc9bb1a-bb06-49df-8fe4-1628e4dd3852.gif) https://user-images.githubusercontent.com/9866362/223989590-611a26f6-74ce-4c6e-9bb6-055df743a5c5.mp4 ## Using You can try it out via the NuGet Packages here: - UWP: https://dev.azure.com/dotnet/CommunityToolkit/_artifacts/feed/CommunityToolkit-Labs/NuGet/CommunityToolkit.Labs.Uwp.SettingsControls - WinUI 3: https://dev.azure.com/dotnet/CommunityToolkit/_artifacts/feed/CommunityToolkit-Labs/NuGet/CommunityToolkit.Labs.WinUI.SettingsControls [SettingsCard (doc + samples)](https://github.com/CommunityToolkit/Labs-Windows/blob/main/components/SettingsControls/samples/SettingsCard.md) [SettingsExpander (doc + samples)](https://github.com/CommunityToolkit/Labs-Windows/blob/main/components/SettingsControls/samples/SettingsExpander.md) Read more about [Preview Packages here](https://aka.ms/wct/wiki/previewpackages). CommunityToolkit members can also try it out with Codespaces. ## TO DO - [ ] Tests ## Implementation Requirements Not all these items are required to submit a PR. This list is here to help track what is remaining to implement before a technical review and discussion of moving into the main repository can occur. - [x] Working Prototype - [x] Feature Complete - [x] Documentation - [x] Samples - [ ] Tests - [ ] Community Feedback / Usage Testimonies ## Tested Platforms - [x] UWP - [x] WinAppSDK / WinUI 3 - [x] Web Assembly (WASM) - [ ] Android - [ ] iOS - [ ] MacOS - [ ] Linux / GTK ## Technical Review These items can sometimes be done ahead of time, but are usually started and completed after all implementation details are finished. - [ ] Accessibility Audit - [ ] API/Naming Review - [ ] Code Quality/Style - [ ] Dependency Review - [ ] Design/Style Review - [ ] Final Approval
non_process
๐Ÿงช settingscard settingsexpander approved from toolkit problem statement there are currently no controls in winui or the toolkit that allow for creating consistent settings experiences that can be found across windows this experiment introduces a settingscard component that allows displaying simple cards settingsexpander will be a follow up experiment to introduce collapsible cards overview this experiment adds the following components settingscard a simple card component that allows for displaying a setting the isclickenabled property can be used to turn it into a button like control settingsexpander a control that uses the same properties as settingscard and settingscard can be used to set the settingsexpander items binding is also supported using you can try it out via the nuget packages here uwp winui read more about communitytoolkit members can also try it out with codespaces to do tests implementation requirements not all these items are required to submit a pr this list is here to help track what is remaining to implement before a technical review and discussion of moving into the main repository can occur working prototype feature complete documentation samples tests community feedback usage testimonies tested platforms uwp winappsdk winui web assembly wasm android ios macos linux gtk technical review these items can sometimes be done ahead of time but are usually started and completed after all implementation details are finished accessibility audit api naming review code quality style dependency review design style review final approval
0
185,469
15,023,224,916
IssuesEvent
2021-02-01 17:57:11
2i2c-org/pilot-hubs
https://api.github.com/repos/2i2c-org/pilot-hubs
opened
Document how to set up GitHub authentication by organization membership
documentation enhancement
In the Farallon Hub we have a setup where users can log in to the hub if they are a **public member** of a GitHub organization. We should document how this was set up, how to do the same for other hubs, and "gotchas" that are important to keep in mind such as "the person's membership must be public".
1.0
Document how to set up GitHub authentication by organization membership - In the Farallon Hub we have a setup where users can log in to the hub if they are a **public member** of a GitHub organization. We should document how this was set up, how to do the same for other hubs, and "gotchas" that are important to keep in mind such as "the person's membership must be public".
non_process
document how to set up github authentication by organization membership in the farallon hub we have a setup where users can log in to the hub if they are a public member of a github organization we should document how this was set up how to do the same for other hubs and gotchas that are important to keep in mind such as the person s membership must be public
0
10,731
8,147,794,672
IssuesEvent
2018-08-22 01:52:33
rchain/bounties
https://api.github.com/repos/rchain/bounties
closed
O> Attack Governance Model
Governance Operations Security Voting needs-SMART-objective
Phil's POV: "a critical part of the security of the whole platform. one has to avoid that the coop ends up being a SPOF due to political centralization & centralization of development decisions as well as centralization of staking coins. after all the coop has a jurisdiction and that might be an SPOF as well. I know it sounds weird at this point but maybe more restrictions of the coop's power might be better for the overall platform security. also it is much harder to restrict the power of the coop later on." 1. Define Governance Processes 2. Postulate attack vectors 3. Test through small-scale experiments within a Circle 4. Review the results
True
O> Attack Governance Model - Phil's POV: "a critical part of the security of the whole platform. one has to avoid that the coop ends up being a SPOF due to political centralization & centralization of development decisions as well as centralization of staking coins. after all the coop has a jurisdiction and that might be an SPOF as well. I know it sounds weird at this point but maybe more restrictions of the coop's power might be better for the overall platform security. also it is much harder to restrict the power of the coop later on." 1. Define Governance Processes 2. Postulate attack vectors 3. Test through small-scale experiments within a Circle 4. Review the results
non_process
o attack governance model phil s pov a critical part of the security of the whole platform one has to avoid that the coop ends up being a spof due to political centralization centralization of development decisions as well as centralization of staking coins after all the coop has a jurisdiction and that might be an spof as well i know it sounds weird at this point but maybe more restrictions of the coop s power might be better for the overall platform security also it is much harder to restrict the power of the coop later on define governance processes postulate attack vectors test through small scale experiments within a circle review the results
0
15,493
19,703,227,482
IssuesEvent
2022-01-12 18:49:41
googleapis/cloud-trace-nodejs
https://api.github.com/repos/googleapis/cloud-trace-nodejs
opened
Your .repo-metadata.json file has a problem ๐Ÿค’
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan ๐Ÿ“ˆ: * api_shortname 'trace' invalid in .repo-metadata.json โ˜๏ธ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem ๐Ÿค’ - You have a problem with your .repo-metadata.json file: Result of scan ๐Ÿ“ˆ: * api_shortname 'trace' invalid in .repo-metadata.json โ˜๏ธ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem ๐Ÿค’ you have a problem with your repo metadata json file result of scan ๐Ÿ“ˆ api shortname trace invalid in repo metadata json โ˜๏ธ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
14,861
18,266,462,934
IssuesEvent
2021-10-04 09:02:06
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
closed
Load and use fx variable even if not broadcastable onto model cube data
enhancement preprocessor
Thanks to @sloosvel we now have a nice integration of ancillary vars for fx vars, including CMOR checks and all that fancy jazz. There is, however, room for improvement, as I see it: there is a hard [cut-off](https://github.com/ESMValGroup/ESMValCore/blob/fc87d72c3fce12a2117b4b4e42a4857045c4d404/esmvalcore/preprocessor/_ancillary_vars.py#L34) that stops the process of using a certain fx var if its data array is not broadcastable onto the model data array. I think we can relax this a bit: - if the fx data is time-invariant, and the model data is not, we can still use the fx data by propagating the fx data to every time point of the model data - if the fx data is on a finer/coarser time axis than the model data, we can perform a time regridding and still use the fx data (and it'd even be broadcastable onto the model data this time around) These are statistical massages of the fx data that **may** not be sound, so please correct me here if you see any issues with these approaches :beer:
1.0
Load and use fx variable even if not broadcastable onto model cube data - Thanks to @sloosvel we now have a nice integration of ancillary vars for fx vars, including CMOR checks and all that fancy jazz. There is, however, room for improvement, as I see it: there is a hard [cut-off](https://github.com/ESMValGroup/ESMValCore/blob/fc87d72c3fce12a2117b4b4e42a4857045c4d404/esmvalcore/preprocessor/_ancillary_vars.py#L34) that stops the process of using a certain fx var if its data array is not broadcastable onto the model data array. I think we can relax this a bit: - if the fx data is time-invariant, and the model data is not, we can still use the fx data by propagating the fx data to every time point of the model data - if the fx data is on a finer/coarser time axis than the model data, we can perform a time regridding and still use the fx data (and it'd even be broadcastable onto the model data this time around) These are statistical massages of the fx data that **may** not be sound, so please correct me here if you see any issues with these approaches :beer:
process
load and use fx variable even if not broadcastable onto model cube data thanks to sloosvel we now have a nice integration of ancillary vars for fx vars including cmor checks and all that fancy jazz there is however room for improvement as i see it there is a hard that stops the process of using a certain fx var if its data array is not broadcastable onto the model data array i think we can relax this a bit if the fx data is time invariant and the model data is not we can still use the fx data by propagating the fx data to every time point of the model data if the fx data is on a finer coarser time axis than the model data we can perform a time regridding and still use the fx data and it d even be broadcastable onto the model data this time around these are statistical massages of the fx data that may not be sound so please correct me here if you see any issues with these approaches beer
1
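The two relaxations proposed in the record above boil down to shape logic. Here is a rough numpy illustration, assuming a time-invariant 2-D fx field and a 3-D (time, lat, lon) model array; the real preprocessor works on iris cubes, so this only demonstrates the broadcasting and a nearest-neighbour style of time regridding, not the actual ESMValCore code path.

```python
import numpy as np

model = np.random.rand(12, 4, 8)   # model data: (time, lat, lon)
fx = np.random.rand(4, 8)          # time-invariant fx data: (lat, lon)

# Relaxation 1: propagate the time-invariant fx field to every time point.
fx_expanded = np.broadcast_to(fx, model.shape)
assert fx_expanded.shape == model.shape

# Relaxation 2 (sketch): fx on a coarser time axis than the model, e.g.
# 4 seasonal slices regridded onto 12 monthly points by repetition.
fx_coarse = np.random.rand(4, 4, 8)
month_to_season = np.repeat(np.arange(4), 3)   # 12 months -> 4 seasons
fx_regridded = fx_coarse[month_to_season]
assert fx_regridded.shape == model.shape
```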
21,715
30,215,130,791
IssuesEvent
2023-07-05 15:06:05
Azure/azure-sdk-tools
https://api.github.com/repos/Azure/azure-sdk-tools
closed
Release Planner Arch Board Requirements
Epic Engagement Experience WS: Process Tools & Automation user study - stakeholder
The purpose of this Epic is the following: - Identify gaps and changes in the Arch Board activities and processes due to TypeSpec. - Determine any requirements for the release planner or scheduling tool. - [x] https://github.com/Azure/azure-sdk-tools/issues/4599 - [x] https://github.com/Azure/azure-sdk-tools/issues/4598 - [ ] https://github.com/Azure/azure-sdk-tools/issues/4601
1.0
Release Planner Arch Board Requirements - The purpose of this Epic is the following: - Identify gaps and changes in the Arch Board activities and processes due to TypeSpec. - Determine any requirements for the release planner or scheduling tool. - [x] https://github.com/Azure/azure-sdk-tools/issues/4599 - [x] https://github.com/Azure/azure-sdk-tools/issues/4598 - [ ] https://github.com/Azure/azure-sdk-tools/issues/4601
process
release planner arch board requirements the purpose of this epic is the following identify gaps and changes in the arch board activities and processes due to typespec determine any requirements for the release planner or scheduling tool
1
6,505
9,583,179,773
IssuesEvent
2019-05-08 04:09:57
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
0.32.5: java.lang.Exception: Invalid response from database driver. No :status provided.
Correctness Query Processor
Loading a dashboard with multiple questions, a couple of these show up in the log: ``` 04-20 11:57:58 DEBUG middleware.log :: POST /api/card/83/query 200 [ASYNC: completed] 737 ms (13 DB calls) Jetty threads: 8/50 (4 busy, 2 idle, 0 queued) (63 total active threads) 04-20 11:57:58 WARN middleware.async :: Unhandled exception, exepected `catch-exceptions` middleware to handle it. java.lang.Exception: Invalid response from database driver. No :status provided.true at metabase.query_processor.middleware.process_userland_query$format_userland_query_result.invokeStatic(process_userland_query.clj:97) at metabase.query_processor.middleware.process_userland_query$format_userland_query_result.invoke(process_userland_query.clj:87) at clojure.core$partial$fn__5828.invoke(core.clj:2638) at metabase.query_processor.middleware.cache$run_query_with_cache$respond__34325.invoke(cache.clj:110) at metabase.query_processor.middleware.async_wait$wait_for_permit$fn__33563$fn__33602$state_machine__8574__auto____33623$fn__33625.invoke(async_wait.clj:49) at metabase.query_processor.middleware.async_wait$wait_for_permit$fn__33563$fn__33602$state_machine__8574__auto____33623.invoke(async_wait.clj:49) at clojure.core.async.impl.ioc_macros$run_state_machine.invokeStatic(ioc_macros.clj:973) at clojure.core.async.impl.ioc_macros$run_state_machine.invoke(ioc_macros.clj:972) at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invokeStatic(ioc_macros.clj:977) at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invoke(ioc_macros.clj:975) at clojure.core.async.impl.ioc_macros$take_BANG_$fn__8592.invoke(ioc_macros.clj:986) ' at clojure.core.async.impl.channels.ManyToManyChannel$fn__3478$fn__3479.invoke(channels.clj:95) at clojure.lang.AFn.run(AFn.java:22) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 04-20 11:57:58 INFO middleware.cache :: Query took 678 ms to run; miminum for cache eligibility is 10000 ms 04-20 11:57:58 INFO middleware.cache :: Query took 683 ms to run; miminum for cache eligibility is 10000 ms 04-20 11:57:58 DEBUG async.api-response :: Async response finished, closing channels. 04-20 11:57:58 WARN middleware.async :: Unhandled exception, exepected `catch-exceptions` middleware to handle it. java.lang.Exception: Invalid response from database driver. 
No :status provided.true at metabase.query_processor.middleware.process_userland_query$format_userland_query_result.invokeStatic(process_userland_query.clj:97) at metabase.query_processor.middleware.process_userland_query$format_userland_query_result.invoke(process_userland_query.clj:87) at clojure.core$partial$fn__5828.invoke(core.clj:2638) at metabase.query_processor.middleware.cache$run_query_with_cache$respond__34325.invoke(cache.clj:110) at metabase.query_processor.middleware.async_wait$wait_for_permit$fn__33563$fn__33602$state_machine__8574__auto____33623$fn__33625.invoke(async_wait.clj:49) at metabase.query_processor.middleware.async_wait$wait_for_permit$fn__33563$fn__33602$state_machine__8574__auto____33623.invoke(async_wait.clj:49) at clojure.core.async.impl.ioc_macros$run_state_machine.invokeStatic(ioc_macros.clj:973) at clojure.core.async.impl.ioc_macros$run_state_machine.invoke(ioc_macros.clj:972) at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invokeStatic(ioc_macros.clj:977) at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invoke(ioc_macros.clj:975) at clojure.core.async.impl.ioc_macros$take_BANG_$fn__8592.invoke(ioc_macros.clj:986) at clojure.core.async.impl.channels.ManyToManyChannel$fn__3478$fn__3479.invoke(channels.clj:95) at clojure.lang.AFn.run(AFn.java:22) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ' at java.lang.Thread.run(Thread.java:748) ``` All questions contents seem to have loaded successfully this time, whereas it's been usual for some dashboard question card to display a database connection error, where the solution is to try reloading the dashboard until all questions display their results. Searching "provided.true" here had 0 results, so posting this.
1.0
0.32.5: java.lang.Exception: Invalid response from database driver. No :status provided. - Loading a dashboard with multiple questions, a couple of these show up in the log: ``` 04-20 11:57:58 DEBUG middleware.log :: POST /api/card/83/query 200 [ASYNC: completed] 737 ms (13 DB calls) Jetty threads: 8/50 (4 busy, 2 idle, 0 queued) (63 total active threads) 04-20 11:57:58 WARN middleware.async :: Unhandled exception, exepected `catch-exceptions` middleware to handle it. java.lang.Exception: Invalid response from database driver. No :status provided.true at metabase.query_processor.middleware.process_userland_query$format_userland_query_result.invokeStatic(process_userland_query.clj:97) at metabase.query_processor.middleware.process_userland_query$format_userland_query_result.invoke(process_userland_query.clj:87) at clojure.core$partial$fn__5828.invoke(core.clj:2638) at metabase.query_processor.middleware.cache$run_query_with_cache$respond__34325.invoke(cache.clj:110) at metabase.query_processor.middleware.async_wait$wait_for_permit$fn__33563$fn__33602$state_machine__8574__auto____33623$fn__33625.invoke(async_wait.clj:49) at metabase.query_processor.middleware.async_wait$wait_for_permit$fn__33563$fn__33602$state_machine__8574__auto____33623.invoke(async_wait.clj:49) at clojure.core.async.impl.ioc_macros$run_state_machine.invokeStatic(ioc_macros.clj:973) at clojure.core.async.impl.ioc_macros$run_state_machine.invoke(ioc_macros.clj:972) at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invokeStatic(ioc_macros.clj:977) at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invoke(ioc_macros.clj:975) at clojure.core.async.impl.ioc_macros$take_BANG_$fn__8592.invoke(ioc_macros.clj:986) ' at clojure.core.async.impl.channels.ManyToManyChannel$fn__3478$fn__3479.invoke(channels.clj:95) at clojure.lang.AFn.run(AFn.java:22) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 04-20 11:57:58 INFO middleware.cache :: Query took 678 ms to run; miminum for cache eligibility is 10000 ms 04-20 11:57:58 INFO middleware.cache :: Query took 683 ms to run; miminum for cache eligibility is 10000 ms 04-20 11:57:58 DEBUG async.api-response :: Async response finished, closing channels. 04-20 11:57:58 WARN middleware.async :: Unhandled exception, exepected `catch-exceptions` middleware to handle it. java.lang.Exception: Invalid response from database driver. 
No :status provided.true at metabase.query_processor.middleware.process_userland_query$format_userland_query_result.invokeStatic(process_userland_query.clj:97) at metabase.query_processor.middleware.process_userland_query$format_userland_query_result.invoke(process_userland_query.clj:87) at clojure.core$partial$fn__5828.invoke(core.clj:2638) at metabase.query_processor.middleware.cache$run_query_with_cache$respond__34325.invoke(cache.clj:110) at metabase.query_processor.middleware.async_wait$wait_for_permit$fn__33563$fn__33602$state_machine__8574__auto____33623$fn__33625.invoke(async_wait.clj:49) at metabase.query_processor.middleware.async_wait$wait_for_permit$fn__33563$fn__33602$state_machine__8574__auto____33623.invoke(async_wait.clj:49) at clojure.core.async.impl.ioc_macros$run_state_machine.invokeStatic(ioc_macros.clj:973) at clojure.core.async.impl.ioc_macros$run_state_machine.invoke(ioc_macros.clj:972) at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invokeStatic(ioc_macros.clj:977) at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invoke(ioc_macros.clj:975) at clojure.core.async.impl.ioc_macros$take_BANG_$fn__8592.invoke(ioc_macros.clj:986) at clojure.core.async.impl.channels.ManyToManyChannel$fn__3478$fn__3479.invoke(channels.clj:95) at clojure.lang.AFn.run(AFn.java:22) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ' at java.lang.Thread.run(Thread.java:748) ``` All questions contents seem to have loaded successfully this time, whereas it's been usual for some dashboard question card to display a database connection error, where the solution is to try reloading the dashboard until all questions display their results. Searching "provided.true" here had 0 results, so posting this.
process
java lang exception invalid response from database driver no status provided loading a dashboard with multiple questions a couple of these show up in the log debug middleware log post api card query ms db calls jetty threads busy idle queued total active threads warn middleware async unhandled exception exepected catch exceptions middleware to handle it java lang exception invalid response from database driver no status provided true at metabase query processor middleware process userland query format userland query result invokestatic process userland query clj at metabase query processor middleware process userland query format userland query result invoke process userland query clj at clojure core partial fn invoke core clj at metabase query processor middleware cache run query with cache respond invoke cache clj at metabase query processor middleware async wait wait for permit fn fn state machine auto fn invoke async wait clj at metabase query processor middleware async wait wait for permit fn fn state machine auto invoke async wait clj at clojure core async impl ioc macros run state machine invokestatic ioc macros clj at clojure core async impl ioc macros run state machine invoke ioc macros clj at clojure core async impl ioc macros run state machine wrapped invokestatic ioc macros clj at clojure core async impl ioc macros run state machine wrapped invoke ioc macros clj at clojure core async impl ioc macros take bang fn invoke ioc macros clj at clojure core async impl channels manytomanychannel fn fn invoke channels clj at clojure lang afn run afn java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java info middleware cache query took ms to run miminum for cache eligibility is ms info middleware cache query took ms to run miminum for cache eligibility is ms debug async api response async response finished closing channels warn middleware async unhandled exception exepected catch exceptions middleware to handle it java lang exception invalid response from database driver no status provided true at metabase query processor middleware process userland query format userland query result invokestatic process userland query clj at metabase query processor middleware process userland query format userland query result invoke process userland query clj at clojure core partial fn invoke core clj at metabase query processor middleware cache run query with cache respond invoke cache clj at metabase query processor middleware async wait wait for permit fn fn state machine auto fn invoke async wait clj at metabase query processor middleware async wait wait for permit fn fn state machine auto invoke async wait clj at clojure core async impl ioc macros run state machine invokestatic ioc macros clj at clojure core async impl ioc macros run state machine invoke ioc macros clj at clojure core async impl ioc macros run state machine wrapped invokestatic ioc macros clj at clojure core async impl ioc macros run state machine wrapped invoke ioc macros clj at clojure core async impl ioc macros take bang fn invoke ioc macros clj at clojure core async impl channels manytomanychannel fn fn invoke channels clj at clojure lang afn run afn java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java all questions contents seem to have loaded 
successfully this time whereas it s been usual for some dashboard question card to display a database connection error where the solution is to try reloading the dashboard until all questions display their results searching provided true here had results so posting this
1
12,102
14,740,301,482
IssuesEvent
2021-01-07 08:52:02
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Site 053 Late Charges
anc-process anp-important ant-bug
In GitLab by @kdjstudios on Oct 30, 2018, 15:07 **Submitted by:** "Melissa Miller " <melissa.miller@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/5970134 **Server:** Internal **Client/Site:** 053 **Account:** Multiple **Issue:** I am seeing a lot of my accounts charged with a late fee where they should not be charged. Examples of who should not have been charged: T3959 A3036 TAS6764 All happen to be auto pay and was charged on the 15th of the month. There are several other accounts as well. I have not finalized my billing as of yet. Please let me know how this can be corrected before my billing cycle is finalized.
1.0
Site 053 Late Charges - In GitLab by @kdjstudios on Oct 30, 2018, 15:07 **Submitted by:** "Melissa Miller " <melissa.miller@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/5970134 **Server:** Internal **Client/Site:** 053 **Account:** Multiple **Issue:** I am seeing a lot of my accounts charged with a late fee where they should not be charged. Examples of who should not have been charged: T3959 A3036 TAS6764 All happen to be auto pay and was charged on the 15th of the month. There are several other accounts as well. I have not finalized my billing as of yet. Please let me know how this can be corrected before my billing cycle is finalized.
process
site late charges in gitlab by kdjstudios on oct submitted by melissa miller helpdesk server internal client site account multiple issue i am seeing a lot of my accounts charged with a late fee where they should not be charged examples of who should not have been charged all happen to be auto pay and was charged on the of the month there are several other accounts as well i have not finalized my billing as of yet please let me know how this can be corrected before my billing cycle is finalized
1
6,606
4,384,592,617
IssuesEvent
2016-08-08 03:54:29
pitonneux/website
https://api.github.com/repos/pitonneux/website
closed
Show only featured and upcoming events for guests, all events for admin
usability
Right now the events index shows all events for everyone. Change this to show only featured and upcoming events for guests.
True
Show only featured and upcoming events for guests, all events for admin - Right now the events index shows all events for everyone. Change this to show only featured and upcoming events for guests.
non_process
show only featured and upcoming events for guests all events for admin right now the events index shows all events for everyone change this to show only featured and upcoming events for guests
0
109,643
23,803,487,603
IssuesEvent
2022-09-03 17:15:03
DS-13-Dev-Team/DS13
https://api.github.com/repos/DS-13-Dev-Team/DS13
closed
Suggestion: Add low wall construction
Type: Code Suggestion: Accepted Priority: Low Difficulty: Easy
<!-- For the title make sure to put the words, 'Suggestion:' or 'Bug:' before the actual title of your issue, it helps with sorting! If a specific field doesn't apply, remove it! Anything inside tags like these is a comment and will not be displayed in the final issue. Be careful not to write inside them! Joke or spammed issues can and will result in punishment. PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS (The lines with four #'s) Don't edit them or delete them it's part of the formatting --> <!-- The next three lines are for Bugs, delete these if you aren't making a bug report. --> <!-- The next two lines are for Suggestions, delete these if you aren't making a suggestion. --> #### Suggestion: You can't construct low walls, or at least I couldn't find a way to. Because of that you can't actually fix windows after a necromorph breaks the low walls under them (this happens often because a low wall has lower health than a reinforced window). #### What do you think it'd add: The ability to fix broken windows and another way to fortify some places.
1.0
Suggestion: Add low wall construction - <!-- For the title make sure to put the words, 'Suggestion:' or 'Bug:' before the actual title of your issue, it helps with sorting! If a specific field doesn't apply, remove it! Anything inside tags like these is a comment and will not be displayed in the final issue. Be careful not to write inside them! Joke or spammed issues can and will result in punishment. PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS (The lines with four #'s) Don't edit them or delete them it's part of the formatting --> <!-- The next three lines are for Bugs, delete these if you aren't making a bug report. --> <!-- The next two lines are for Suggestions, delete these if you aren't making a suggestion. --> #### Suggestion: You can't construct low walls, or at least I couldn't find a way to. Because of that you can't actually fix windows after a necromorph breaks the low walls under them (this happens often because a low wall has lower health than a reinforced window). #### What do you think it'd add: The ability to fix broken windows and another way to fortify some places.
non_process
suggestion add low wall construction for the title make sure to put the words suggestion or bug before the actual title of your issue it helps with sorting if a specific field doesn t apply remove it anything inside tags like these is a comment and will not be displayed in the final issue be careful not to write inside them joke or spammed issues can and will result in punishment put your answers on the blank lines below the headers the lines with four s don t edit them or delete them it s part of the formatting the next three lines are for bugs delete these if you aren t making a bug report the next two lines are for suggestions delete these if you aren t making a suggestion suggestion you can t construct low walls or at least i couldn t find a way to because of that you can t actually fix windows after a necromorph breaks the low walls under them this happens often because a low wall has lower health than a reinforced window what do you think it d add the ability to fix broken windows and another way to fortify some places
0
14,592
17,703,544,070
IssuesEvent
2021-08-25 03:14:53
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
New term - infragenericEpithet
Term - add Class - Taxon normative Process - complete
## New Term Submitter: Markus Dรถring Justification: A new term is needed to represent a parsed scientific name of an infrageneric rank, e.g a subgenus. The TDWG ontology defines the exact same concept: http://rs.tdwg.org/ontology/voc/TaxonName.rdf#infragenericEpithet Proponents: GBIF (already in use), Catalogue of Life (already in use), IPNI (already in use) Definition: The infrageneric part of a binomial name at ranks above species but below genus. Comment: The term infragenericEpithet should be used in conjunction with genericName, specificEpithet, infraspecificEpithet, taxonRank and scientificNameAuthorship to represent the individual elements of the complete scientificName. It can be used to indicate the subgenus placement of a species, which in zoology is often given in parentheses. Can also be used to share infrageneric names such as botanical sections (e.g., `Vicia sect. Cracca`) Examples: `Abacetillus` for scientificName `Abacetus (Abacetillus) ambiguus`, `Cracca` for scientificName `Vicia sect. Cracca` Refines: None Replaces: None ABCD 2.06: //DataSets/DataSet/Units/Unit/Identifications/Identification/Result/TaxonIdentified/ScientificName/NameAtomised/Bacterial/Subgenus (bacterial names), //DataSets/DataSet/Units/Unit/Identifications/Identification/Result/TaxonIdentified/ScientificName/NameAtomised/Zoological/Subgenus (zoological names), //DataSets/DataSet/Units/Unit/Identifications/Identification/Result/TaxonIdentified/ScientificName/NameAtomised/Botanical/FirstEpithet (botanical names) Mar 27, 2014 comment #1 chuck.miller@mobot.org Referring to my comment in 227. Doesn't it make more sense to have the subgenus name to be composed of a genericEpithet and an infragenericEpithet, rather than genericName and infragenericEpithet? Original comment: Was https://code.google.com/p/darwincore/issues/detail?id=228 ==New Term Recommendation== Submitter: Markus Dรถring Justification: A new term is needed to represent a parsed scientific name of an infrageneric rank, e.g a subgenus. The TDWG ontology defines the exact same concept: http://rs.tdwg.org/ontology/voc/TaxonName.rdf#infragenericEpithet Definition: The infrageneric part of a binomial name at ranks above species but below genus. Names at ranks between species and genus are composed of two words; the genus and this infrageneric epithet. This term should therefore usually be accompanied by the genericName term. Comment: Refines: Has Domain: Has Range: Replaces: ABCD 2.06: Mar 27, 2014 comment #1 chuck.miller@mobot.org Referring to my comment in 227. Doesn't it make more sense to have the subgenus name to be composed of a genericEpithet and an infragenericEpithet, rather than genericName and infragenericEpithet?
1.0
New term - infragenericEpithet - ## New Term Submitter: Markus Dรถring Justification: A new term is needed to represent a parsed scientific name of an infrageneric rank, e.g a subgenus. The TDWG ontology defines the exact same concept: http://rs.tdwg.org/ontology/voc/TaxonName.rdf#infragenericEpithet Proponents: GBIF (already in use), Catalogue of Life (already in use), IPNI (already in use) Definition: The infrageneric part of a binomial name at ranks above species but below genus. Comment: The term infragenericEpithet should be used in conjunction with genericName, specificEpithet, infraspecificEpithet, taxonRank and scientificNameAuthorship to represent the individual elements of the complete scientificName. It can be used to indicate the subgenus placement of a species, which in zoology is often given in parentheses. Can also be used to share infrageneric names such as botanical sections (e.g., `Vicia sect. Cracca`) Examples: `Abacetillus` for scientificName `Abacetus (Abacetillus) ambiguus`, `Cracca` for scientificName `Vicia sect. Cracca` Refines: None Replaces: None ABCD 2.06: //DataSets/DataSet/Units/Unit/Identifications/Identification/Result/TaxonIdentified/ScientificName/NameAtomised/Bacterial/Subgenus (bacterial names), //DataSets/DataSet/Units/Unit/Identifications/Identification/Result/TaxonIdentified/ScientificName/NameAtomised/Zoological/Subgenus (zoological names), //DataSets/DataSet/Units/Unit/Identifications/Identification/Result/TaxonIdentified/ScientificName/NameAtomised/Botanical/FirstEpithet (botanical names) Mar 27, 2014 comment #1 chuck.miller@mobot.org Referring to my comment in 227. Doesn't it make more sense to have the subgenus name to be composed of a genericEpithet and an infragenericEpithet, rather than genericName and infragenericEpithet? Original comment: Was https://code.google.com/p/darwincore/issues/detail?id=228 ==New Term Recommendation== Submitter: Markus Dรถring Justification: A new term is needed to represent a parsed scientific name of an infrageneric rank, e.g a subgenus. The TDWG ontology defines the exact same concept: http://rs.tdwg.org/ontology/voc/TaxonName.rdf#infragenericEpithet Definition: The infrageneric part of a binomial name at ranks above species but below genus. Names at ranks between species and genus are composed of two words; the genus and this infrageneric epithet. This term should therefore usually be accompanied by the genericName term. Comment: Refines: Has Domain: Has Range: Replaces: ABCD 2.06: Mar 27, 2014 comment #1 chuck.miller@mobot.org Referring to my comment in 227. Doesn't it make more sense to have the subgenus name to be composed of a genericEpithet and an infragenericEpithet, rather than genericName and infragenericEpithet?
process
new term infragenericepithet new term submitter markus dรถring justification a new term is needed to represent a parsed scientific name of an infrageneric rank e g a subgenus the tdwg ontology defines the exact same concept proponents gbif already in use catalogue of life already in use ipni already in use definition the infrageneric part of a binomial name at ranks above species but below genus comment the term infragenericepithet should be used in conjunction with genericname specificepithet infraspecificepithet taxonrank and scientificnameauthorship to represent the individual elements of the complete scientificname it can be used to indicate the subgenus placement of a species which in zoology is often given in parentheses can also be used to share infrageneric names such as botanical sections e g vicia sect cracca examples abacetillus for scientificname abacetus abacetillus ambiguus cracca for scientificname vicia sect cracca refines none replaces none abcd datasets dataset units unit identifications identification result taxonidentified scientificname nameatomised bacterial subgenus bacterial names datasets dataset units unit identifications identification result taxonidentified scientificname nameatomised zoological subgenus zoological names datasets dataset units unit identifications identification result taxonidentified scientificname nameatomised botanical firstepithet botanical names mar comment chuck miller mobot org referring to my comment in doesn t it make more sense to have the subgenus name to be composed of a genericepithet and an infragenericepithet rather than genericname and infragenericepithet original comment was new term recommendation submitter markus dรถring justification a new term is needed to represent a parsed scientific name of an infrageneric rank e g a subgenus the tdwg ontology defines the exact same concept definition the infrageneric part of a binomial name at ranks above species but below genus names at ranks between species and genus are composed of two words the genus and this infrageneric epithet this term should therefore usually be accompanied by the genericname term comment refines has domain has range replaces abcd mar comment chuck miller mobot org referring to my comment in doesn t it make more sense to have the subgenus name to be composed of a genericepithet and an infragenericepithet rather than genericname and infragenericepithet
1
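To make the term proposal above concrete, here is an illustrative parsed-name record in plain Python; the dict layout is hypothetical (Darwin Core prescribes terms, not this data structure), but it shows how `infragenericEpithet` sits alongside the existing name-part terms for the zoological example in the record.

```python
# Decomposition of the example name `Abacetus (Abacetillus) ambiguus`.
parsed_name = {
    "scientificName": "Abacetus (Abacetillus) ambiguus",
    "genericName": "Abacetus",
    "infragenericEpithet": "Abacetillus",  # the proposed new term
    "specificEpithet": "ambiguus",
    "taxonRank": "species",
}
print(parsed_name["infragenericEpithet"])
```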
301,780
22,774,357,573
IssuesEvent
2022-07-08 13:09:22
owncloud/docs-ocis
https://api.github.com/repos/owncloud/docs-ocis
opened
web service - ownCloud Web
documentation
The web service envs carry the configuration for ownCloud Web. ownCloud Web uses environment variables from the web service and/or config.json and themes.json, where the envs overwrite any *.json value set, if applicable. This needs to be documented in the web service documentation. **Referencing:** https://github.com/owncloud/docs-webui/issues/48 (Make "Configuring ownCloud Web" a partial) https://github.com/owncloud/docs-webui/issues/49 (Document theming for ownCloud Web) (more to come) @kulmann fyi
1.0
web service - ownCloud Web - The web service envs carry the configuration for ownCloud Web. ownCloud Web uses environment variables from the web service and/or config.json and themes.json, where the envs overwrite any *.json value set, if applicable. This needs to be documented in the web service documentation. **Referencing:** https://github.com/owncloud/docs-webui/issues/48 (Make "Configuring ownCloud Web" a partial) https://github.com/owncloud/docs-webui/issues/49 (Document theming for ownCloud Web) (more to come) @kulmann fyi
non_process
web service owncloud web the web service envs carry the configuration for owncloud web owncloud web uses environment variables from the web service and or config json and themes json where the envs overwrite any json value set if applicable this needs to be documented in the web service documentation referencing make configuring owncloud web a partial document theming for owncloud web more to come kulmann fyi
0
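The precedence rule in the record above (env values overwrite *.json values) can be sketched as follows; the mapping of env names to config keys is made up for illustration, and the real key names would have to come from the web service documentation itself.

```python
import json
import os

def effective_web_config(config_path: str, env_map: dict[str, str]) -> dict:
    """Merge a *.json config with environment overrides; the envs win."""
    with open(config_path) as fh:
        config = json.load(fh)
    for env_name, config_key in env_map.items():
        value = os.environ.get(env_name)
        if value is not None:
            config[config_key] = value  # env overwrites the *.json value
    return config

# Hypothetical usage with an invented env-to-key mapping:
# print(effective_web_config("config.json", {"WEB_UI_THEME": "theme"}))
```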
232,013
17,768,395,063
IssuesEvent
2021-08-30 10:30:46
niteshseram/Portfolio
https://api.github.com/repos/niteshseram/Portfolio
closed
Update README file
documentation
Update the default README file by writing about the project and technologies used
1.0
Update README file - Update the default README file by writing about the project and technologies used
non_process
update readme file update the default readme file by writing about the project and technologies used
0
20,917
27,755,126,240
IssuesEvent
2023-03-16 01:23:51
quark-engine/quark-engine
https://api.github.com/repos/quark-engine/quark-engine
closed
The latest document failed to build in readthedocs
issue-processing-state-06
The latest document failed to build in readthedocs. Refer to [here](https://readthedocs.org/projects/quark-engine/builds/19787528/).
1.0
The latest document failed to build in readthedocs - The latest document failed to build in readthedocs. Refer to [here](https://readthedocs.org/projects/quark-engine/builds/19787528/).
process
the latest document failed to build in readthedocs the latest document failed to build in readthedocs refer to
1
105,756
13,213,684,720
IssuesEvent
2020-08-16 14:04:23
woowa-techcamp-2020/bmart-10
https://api.github.com/repos/woowa-techcamp-2020/bmart-10
closed
ํ”„๋ก ํŠธ์—”๋“œ ๋กœ๊ทธ์ธ, ํšŒ์›๊ฐ€์ž… ํŽ˜์ด์ง€ ๋””์ž์ธ
design
> Frontend login and sign-up page design Deadline: 2020.08.15 Estimated work time: 2 hours Actual work time: 3 hours ### Work items - [x] Design the sign-up page in Figma - [x] Design the login page in Figma
1.0
ํ”„๋ก ํŠธ์—”๋“œ ๋กœ๊ทธ์ธ, ํšŒ์›๊ฐ€์ž… ํŽ˜์ด์ง€ ๋””์ž์ธ - > ํ”„๋ก ํŠธ์—”๋“œ ๋กœ๊ทธ์ธ, ํšŒ์›๊ฐ€์ž… ํŽ˜์ด์ง€ ๋””์ž์ธ ๊ธฐํ•œ : 2020.08.15 ์˜ˆ์ƒ ์ž‘์—… ์‹œ๊ฐ„ : 2์‹œ๊ฐ„ ์‹ค์ œ ์ž‘์—… ์‹œ๊ฐ„ : 3์‹œ๊ฐ„ ### ์ž‘์—… ๋‚ด์šฉ - [x] ํšŒ์›๊ฐ€์ž… ํŽ˜์ด์ง€ ํ”ผ๊ทธ๋งˆ์— ๋””์ž์ธํ•˜๊ธฐ - [x] ๋กœ๊ทธ์ธ ํŽ˜์ด์ง€ ํ”ผ๊ทธ๋งˆ์— ๋””์ž์ธํ•˜๊ธฐ
non_process
ํ”„๋ก ํŠธ์—”๋“œ ๋กœ๊ทธ์ธ ํšŒ์›๊ฐ€์ž… ํŽ˜์ด์ง€ ๋””์ž์ธ ํ”„๋ก ํŠธ์—”๋“œ ๋กœ๊ทธ์ธ ํšŒ์›๊ฐ€์ž… ํŽ˜์ด์ง€ ๋””์ž์ธ ๊ธฐํ•œ ์˜ˆ์ƒ ์ž‘์—… ์‹œ๊ฐ„ ์‹ค์ œ ์ž‘์—… ์‹œ๊ฐ„ ์ž‘์—… ๋‚ด์šฉ ํšŒ์›๊ฐ€์ž… ํŽ˜์ด์ง€ ํ”ผ๊ทธ๋งˆ์— ๋””์ž์ธํ•˜๊ธฐ ๋กœ๊ทธ์ธ ํŽ˜์ด์ง€ ํ”ผ๊ทธ๋งˆ์— ๋””์ž์ธํ•˜๊ธฐ
0
175,562
27,880,494,874
IssuesEvent
2023-03-21 19:00:14
JohnsL-U/Project-1
https://api.github.com/repos/JohnsL-U/Project-1
closed
Create Mock-Ups
documentation good first issue design usability
Create mockups of the user interface to help visualize the app's design and flow.
1.0
Create Mock-Ups - Create mockups of the user interface to help visualize the app's design and flow.
non_process
create mock ups create mockups of the user interface to help visualize the app s design and flow
0
104,003
13,019,421,609
IssuesEvent
2020-07-26 22:30:57
SLB-Pizza/radio-pizza
https://api.github.com/repos/SLB-Pizza/radio-pizza
closed
Make home events and news use SingleMixCard
design/layout
## Tasks - [ ] Make HomeEvents use SingleMixCard - [ ] Make HomeNews use SingleMixCard - [ ] Rename SingleMixCard to SingleItemCard
1.0
Make home events and news use SingleMixCard - ## Tasks - [ ] Make HomeEvents use SingleMixCard - [ ] Make HomeNews use SingleMixCard - [ ] Rename SingleMixCard to SingleItemCard
non_process
make home events and news use singlemixcard tasks make homeevents use singlemixcard make homenews use singlemixcard rename singlemixcard to singleitemcard
0
10,857
13,630,600,817
IssuesEvent
2020-09-24 16:41:10
googleapis/gapic-generator-typescript
https://api.github.com/repos/googleapis/gapic-generator-typescript
closed
Bundle renovate PRs for dev dependencies
type: process
This repository gets a significant number of pull requests from renovate for patch updates to dev dependencies. Due to the volume, I think we should bundle these into a weekly rollup.
1.0
Bundle renovate PRs for dev dependencies - This repository gets a significant number of pull requests from renovate for patch updates to dev dependencies. Due to the volume, I think we should bundle these into a weekly rollup.
process
bundle renovate prs for dev dependencies this repository gets a significant number of pull requests from renovate for patch updates to dev dependencies due to the volume i think we should bundle these into a weekly rollup
1
331,118
28,507,778,047
IssuesEvent
2023-04-18 23:39:50
InstituteforDiseaseModeling/PACE-HRH
https://api.github.com/repos/InstituteforDiseaseModeling/PACE-HRH
opened
Validate_cadre test passed message/plot
testing
Add a message or plot when the cadre sheet validation tests all pass. A potential plot could be, for each scenario, to plot the start year and end year on the x-axis and RoleID on the y-axis (geom_point).
1.0
Validate_cadre test passed message/plot - Add a message or plot when the cadre sheet validation tests all pass. A potential plot could be, for each scenario, to plot the start year and end year on the x-axis and RoleID on the y-axis (geom_point).
non_process
validate cadre test passed message plot add a message or plot when the cadre sheet validation tests all pass a potential plot could be for each scenario to plot the start year and end year on the x axis and roleid on the y axis geom point
0
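A minimal sketch of the plot suggested in the record above, written with plotnine (the Python port of ggplot2, matching the record's geom_point hint); the data frame and its column names are hypothetical stand-ins for the cadre sheet, not the PACE-HRH schema.

```python
import pandas as pd
from plotnine import aes, facet_wrap, geom_point, ggplot

# Hypothetical cadre data: one row per scenario/role with start/end years.
cadres = pd.DataFrame({
    "Scenario":  ["A", "A", "B"],
    "RoleID":    ["Nurse", "Midwife", "Nurse"],
    "StartYear": [2020, 2021, 2020],
    "EndYear":   [2035, 2035, 2030],
})

# Reshape so start and end years share a single x-axis column.
long = cadres.melt(id_vars=["Scenario", "RoleID"],
                   value_vars=["StartYear", "EndYear"],
                   var_name="Boundary", value_name="Year")

plot = (ggplot(long, aes(x="Year", y="RoleID", color="Boundary"))
        + geom_point()
        + facet_wrap("~Scenario"))
plot.save("cadre_validation.png")  # or plot.draw() in an interactive session
```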