Dataset schema (column, dtype, and value range or class count):

column         dtype           range / classes
-------------  --------------  ---------------
Unnamed: 0     int64           0 – 832k
id             float64         2.49B – 32.1B
type           stringclasses   1 value
created_at     stringlengths   19 – 19
repo           stringlengths   5 – 112
repo_url       stringlengths   34 – 141
action         stringclasses   3 values
title          stringlengths   1 – 855
labels         stringlengths   4 – 721
body           stringlengths   1 – 261k
index          stringclasses   13 values
text_combine   stringlengths   96 – 261k
label          stringclasses   2 values
text           stringlengths   96 – 240k
binary_label   int64           0 – 1
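For orientation, a minimal sketch of how rows matching this schema might be inspected with pandas. The two sample rows below are copied from the records in this dump; everything else (the in-memory construction rather than loading from the real file) is an assumption for illustration:

```python
import pandas as pd

# Two rows mirroring the schema above; real data would be loaded from
# the dataset itself (e.g. a CSV or a datasets.Dataset), not built inline.
rows = [
    {"id": 10_737_648_196, "type": "IssuesEvent",
     "created_at": "2019-10-29 13:29:56", "action": "closed",
     "repo": "AY1920S1-CS2113T-W17-4/main",
     "labels": "priority.High type.Epic type.Story",
     "label": "priority", "binary_label": 1},
    {"id": 19_142_811_069, "type": "IssuesEvent",
     "created_at": "2021-12-02 02:07:54", "action": "opened",
     "repo": "teamc0/heart-muscle-be",
     "labels": "priority:high status : In progress status : to do",
     "label": "priority", "binary_label": 1},
]
df = pd.DataFrame(rows)

# `type` is a single-class column; `binary_label` is the 0/1 target.
print(df["action"].value_counts().to_dict())
```

The schema's `stringclasses` entries (e.g. `action`, `label`) are natural candidates for categorical dtypes when the full file is loaded.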

row: 363,101
id: 10,737,648,196
type: IssuesEvent
created_at: 2019-10-29 13:29:56
repo: AY1920S1-CS2113T-W17-4/main
repo_url: https://api.github.com/repos/AY1920S1-CS2113T-W17-4/main
action: closed
title: As a Computing student, I can view my tasks for the week in calendar format
labels: priority.High type.Epic type.Story
body:
so that I can plan my time for the week.
index: 1.0
text_combine:
As a Computing student, I can view my tasks for the week in calendar format - so that I can plan my time for the week.
label: priority
text:
as a computing student i can view my tasks for the week in calendar format so that i can plan my time for the week
binary_label: 1
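The `text` field looks like a lowercased copy of `text_combine` with URLs, digits, and punctuation stripped. The exact cleaning pipeline is unknown; the regex sketch below is an assumption that happens to reproduce the record above:

```python
import re

def normalize(text: str) -> str:
    """Approximate the `text` column: lowercase, drop URLs, digits,
    and punctuation, then collapse runs of whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # drop URLs
    text = re.sub(r"[0-9_]+", " ", text)       # drop digits/underscores
    text = re.sub(r"[^\w\s]", " ", text)       # drop punctuation, keep Unicode letters
    return re.sub(r"\s+", " ", text).strip()

print(normalize(
    "As a Computing student, I can view my tasks for the week in "
    "calendar format - so that I can plan my time for the week."
))
```

Note this is only an approximation: some records in the dump keep emoji and drop whole digit-bearing tokens, so the real pipeline evidently differs in details.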

row: 614,129
id: 19,142,811,069
type: IssuesEvent
created_at: 2021-12-02 02:07:54
repo: teamc0/heart-muscle-be
repo_url: https://api.github.com/repos/teamc0/heart-muscle-be
action: opened
title: - [Style]: 프로젝트명 "heart-muscle" 해서 새로 올리기
labels: priority:high status : In progress status : to do
body:
### Issue Type 리스트 - [ ] style : 코드 형식 변경, 세미콜론 추가, 변수 명 통일화 etc.. (비지니스 로직에 변경 없음) ## 본문 내용 - [ ] 프로젝트 명 "heart-muscle"로 해서 올리기
index: 1.0
text_combine:
- [Style]: 프로젝트명 "heart-muscle" 해서 새로 올리기 - ### Issue Type 리스트 - [ ] style : 코드 형식 변경, 세미콜론 추가, 변수 명 통일화 etc.. (비지니스 로직에 변경 없음) ## 본문 내용 - [ ] 프로젝트 명 "heart-muscle"로 해서 올리기
label: priority
text:
프로젝트명 heart muscle 해서 새로 올리기 issue type 리스트 style 코드 형식 변경 세미콜론 추가 변수 명 통일화 etc 비지니스 로직에 변경 없음 본문 내용 프로젝트 명 heart muscle 로 해서 올리기
binary_label: 1

row: 187,087
id: 6,744,757,770
type: IssuesEvent
created_at: 2017-10-20 16:50:51
repo: canonical-websites/www.ubuntu.com
repo_url: https://api.github.com/repos/canonical-websites/www.ubuntu.com
action: closed
title: The spelling of GNOME is inconsistent
labels: Priority: High Type: Bug
body:
## Summary If someone navigates to https://www.ubuntu.com/desktop/1710 and reads the text, GNOME is spelled like "Gnome" but further down it is spelled like "GNOME." [As per the official GNOME website](https://www.gnome.org/), the spelling is "GNOME." Please correct this. (I don't feel the other headers are necessary because this is a pretty simple problem and using those would be redundant...)
index: 1.0
text_combine:
The spelling of GNOME is inconsistent - ## Summary If someone navigates to https://www.ubuntu.com/desktop/1710 and reads the text, GNOME is spelled like "Gnome" but further down it is spelled like "GNOME." [As per the official GNOME website](https://www.gnome.org/), the spelling is "GNOME." Please correct this. (I don't feel the other headers are necessary because this is a pretty simple problem and using those would be redundant...)
label: priority
text:
the spelling of gnome is inconsistent summary if someone navigates to and reads the text gnome is spelled like gnome but further down it is spelled like gnome the spelling is gnome please correct this i don t feel the other headers are necessary because this is a pretty simple problem and using those would be redundant
binary_label: 1
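Across the records shown here, `text_combine` appears to be simply the issue title and body joined with " - ". A one-line sketch of that derivation (inferred from the rows above, not documented anywhere in this dump):

```python
def combine(title: str, body: str) -> str:
    # text_combine appears to join title and body with " - ".
    return f"{title} - {body}"

print(combine(
    "As a Computing student, I can view my tasks for the week in calendar format",
    "so that I can plan my time for the week.",
))
```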

row: 388,805
id: 11,492,716,010
type: IssuesEvent
created_at: 2020-02-11 21:33:09
repo: DynamicProgrammingEECS441/PicassoXS
repo_url: https://api.github.com/repos/DynamicProgrammingEECS441/PicassoXS
action: opened
title: General Filter & Portrait Mode Filter - Preprocess data from client
labels: Back End Feature CORE Feature Type::Skeletal Product Sprint::High Priority
body:
## Info 1. get client request, downlaod image, laod image into tensor 2. may require using extra web server
index: 1.0
text_combine:
General Filter & Portrait Mode Filter - Preprocess data from client - ## Info 1. get client request, downlaod image, laod image into tensor 2. may require using extra web server
label: priority
text:
general filter portrait mode filter preprocess data from client info get client request downlaod image laod image into tensor may require using extra web server
binary_label: 1

row: 467,846
id: 13,456,659,448
type: IssuesEvent
created_at: 2020-09-09 08:09:54
repo: webcompat/web-bugs
repo_url: https://api.github.com/repos/webcompat/web-bugs
action: closed
title: www.ebay.com - see bug description
labels: browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical
body:
<!-- @browser: Firefox 81.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/57966 --> **URL**: https://www.ebay.com/ **Browser / Version**: Firefox 81.0 **Operating System**: Windows 7 **Tested Another Browser**: Yes Internet Explorer **Problem type**: Something else **Description**: no GUI just listing of information and selection of boxes. **Steps to Reproduce**: The website would open and work slowly. The website "Reverb.com" would not cooperate in explorer. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2020/9/138a01f7-4433-4718-98bf-1e34f0c6ac5b.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200906164749</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/9/38d01030-7167-4204-b8a5-1ea8a68bf42e) _From [webcompat.com](https://webcompat.com/) with ❤️_
index: 1.0
text_combine:
www.ebay.com - see bug description - <!-- @browser: Firefox 81.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/57966 --> **URL**: https://www.ebay.com/ **Browser / Version**: Firefox 81.0 **Operating System**: Windows 7 **Tested Another Browser**: Yes Internet Explorer **Problem type**: Something else **Description**: no GUI just listing of information and selection of boxes. **Steps to Reproduce**: The website would open and work slowly. The website "Reverb.com" would not cooperate in explorer. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2020/9/138a01f7-4433-4718-98bf-1e34f0c6ac5b.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200906164749</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/9/38d01030-7167-4204-b8a5-1ea8a68bf42e) _From [webcompat.com](https://webcompat.com/) with ❤️_
label: priority
text:
see bug description url browser version firefox operating system windows tested another browser yes internet explorer problem type something else description no gui just listing of information and selection of boxes steps to reproduce the website would open and work slowly the website reverb com would not cooperate in explorer view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
binary_label: 1

row: 718,178
id: 24,706,471,823
type: IssuesEvent
created_at: 2022-10-19 19:32:09
repo: opendatahub-io/odh-dashboard
repo_url: https://api.github.com/repos/opendatahub-io/odh-dashboard
action: closed
title: [DSG]: Hook up Prometheus
labels: kind/enhancement priority/high feature/dsg
body:
### Feature description Our current backend makes a call to prometheus using the NodeJS `https` library. We should try to make this call from the frontend if we can. This may be problematic with the needing of the user token... if we can get that we should be able to make the call as OpenShift Console does it. ### Describe alternatives you've considered We can fall back on a special backend route that is for Prometheus. It won't need to be secured as everything will be done as the user as if they were on OpenShift Console. ### Anything else? However we do this, we should make utilities and isolated code for only reaching out to Prometheus. Ideally whatever that code is is then wrapped with use-cases (likely on the frontend in either design). The coder who calls these methods shouldn't need to have to reconstruct query language for Prometheus in order to use the hook.
index: 1.0
text_combine:
[DSG]: Hook up Prometheus - ### Feature description Our current backend makes a call to prometheus using the NodeJS `https` library. We should try to make this call from the frontend if we can. This may be problematic with the needing of the user token... if we can get that we should be able to make the call as OpenShift Console does it. ### Describe alternatives you've considered We can fall back on a special backend route that is for Prometheus. It won't need to be secured as everything will be done as the user as if they were on OpenShift Console. ### Anything else? However we do this, we should make utilities and isolated code for only reaching out to Prometheus. Ideally whatever that code is is then wrapped with use-cases (likely on the frontend in either design). The coder who calls these methods shouldn't need to have to reconstruct query language for Prometheus in order to use the hook.
label: priority
text:
hook up prometheus feature description our current backend makes a call to prometheus using the nodejs https library we should try to make this call from the frontend if we can this may be problematic with the needing of the user token if we can get that we should be able to make the call as openshift console does it describe alternatives you ve considered we can fall back on a special backend route that is for prometheus it won t need to be secured as everything will be done as the user as if they were on openshift console anything else however we do this we should make utilities and isolated code for only reaching out to prometheus ideally whatever that code is is then wrapped with use cases likely on the frontend in either design the coder who calls these methods shouldn t need to have to reconstruct query language for prometheus in order to use the hook
binary_label: 1

row: 322,353
id: 9,816,768,347
type: IssuesEvent
created_at: 2019-06-13 15:17:36
repo: roboticslab-uc3m/vision
repo_url: https://api.github.com/repos/roboticslab-uc3m/vision
action: closed
title: Further improvements on yarp::dev::IRGBDSensor
labels: priority: high
body:
I left some ideas in https://github.com/roboticslab-uc3m/vision/pull/86 for further development and enhancement of the current depth-frame apps. Since that PR was aimed to solve a bug, I have split the underlying low-priority tasks into this issue: - ~~we no longer create the RGBD device locally, a client device is opened instead (current default: *RGBDSensorClient*) and connects to the corresponding network wrapper -> partially restore previous behavior and allow local devices, too (e.g. *depthCamera*)~~ - [x] fetch intrinsic/extrinsic camera parameters from device via `IRGBDSensor`'s getters (https://github.com/roboticslab-uc3m/vision/commit/cf963808bbbb2cfce2f4f866198bc0127eb23d15) - [x] generate convenient .ini files at `share/` for each camera+mode (old OpenNI2DeviceServer plugin had several preconfigured modes, [investigate](https://github.com/roboticslab-uc3m/teo-configuration-files/blob/b37b041ddcfddfe1347cfb4d5c22737d0aa069d2/share/teoBase/scripts/teoBase.xml#L36)) -> https://github.com/roboticslab-uc3m/teo-configuration-files/issues/16 - [x] update the [installation guides](https://github.com/roboticslab-uc3m/installation-guides/blob/4969a46fe95f32ddd9710296a720adbc67c1afcb/install-yarp.md), which currently cover the installation of old OpenNI2-based YARP plugins (https://github.com/roboticslab-uc3m/installation-guides/commit/2491869375820d9dfa1492305ade2e119135c2b8) Expanding on the intrinsic/extrinsic params stuff: https://github.com/roboticslab-uc3m/vision/blob/5c7709b9f03c38aa4fd2058451502c0de7c07194/programs/colorRegionDetection/main.cpp#L32-L39 Now, such parameters belong to (are required by) the *depthCamera* device and should be loaded via .ini file: http://www.yarp.it/classyarp_1_1dev_1_1depthCameraDriver.html#details. Note that YARP 3 introduces `RGBDSensorParamParser` (https://github.com/robotology/yarp/pull/1634). Config settings for old `OpenNI2DeviceServer` device: http://wiki.icub.org/wiki/OpenNI2.
index: 1.0
text_combine:
Further improvements on yarp::dev::IRGBDSensor - I left some ideas in https://github.com/roboticslab-uc3m/vision/pull/86 for further development and enhancement of the current depth-frame apps. Since that PR was aimed to solve a bug, I have split the underlying low-priority tasks into this issue: - ~~we no longer create the RGBD device locally, a client device is opened instead (current default: *RGBDSensorClient*) and connects to the corresponding network wrapper -> partially restore previous behavior and allow local devices, too (e.g. *depthCamera*)~~ - [x] fetch intrinsic/extrinsic camera parameters from device via `IRGBDSensor`'s getters (https://github.com/roboticslab-uc3m/vision/commit/cf963808bbbb2cfce2f4f866198bc0127eb23d15) - [x] generate convenient .ini files at `share/` for each camera+mode (old OpenNI2DeviceServer plugin had several preconfigured modes, [investigate](https://github.com/roboticslab-uc3m/teo-configuration-files/blob/b37b041ddcfddfe1347cfb4d5c22737d0aa069d2/share/teoBase/scripts/teoBase.xml#L36)) -> https://github.com/roboticslab-uc3m/teo-configuration-files/issues/16 - [x] update the [installation guides](https://github.com/roboticslab-uc3m/installation-guides/blob/4969a46fe95f32ddd9710296a720adbc67c1afcb/install-yarp.md), which currently cover the installation of old OpenNI2-based YARP plugins (https://github.com/roboticslab-uc3m/installation-guides/commit/2491869375820d9dfa1492305ade2e119135c2b8) Expanding on the intrinsic/extrinsic params stuff: https://github.com/roboticslab-uc3m/vision/blob/5c7709b9f03c38aa4fd2058451502c0de7c07194/programs/colorRegionDetection/main.cpp#L32-L39 Now, such parameters belong to (are required by) the *depthCamera* device and should be loaded via .ini file: http://www.yarp.it/classyarp_1_1dev_1_1depthCameraDriver.html#details. Note that YARP 3 introduces `RGBDSensorParamParser` (https://github.com/robotology/yarp/pull/1634). 
Config settings for old `OpenNI2DeviceServer` device: http://wiki.icub.org/wiki/OpenNI2.
label: priority
text:
further improvements on yarp dev irgbdsensor i left some ideas in for further development and enhancement of the current depth frame apps since that pr was aimed to solve a bug i have split the underlying low priority tasks into this issue we no longer create the rgbd device locally a client device is opened instead current default rgbdsensorclient and connects to the corresponding network wrapper partially restore previous behavior and allow local devices too e g depthcamera fetch intrinsic extrinsic camera parameters from device via irgbdsensor s getters generate convenient ini files at share for each camera mode old plugin had several preconfigured modes update the which currently cover the installation of old based yarp plugins expanding on the intrinsic extrinsic params stuff now such parameters belong to are required by the depthcamera device and should be loaded via ini file note that yarp introduces rgbdsensorparamparser config settings for old device
binary_label: 1

row: 239,770
id: 7,800,012,427
type: IssuesEvent
created_at: 2018-06-09 03:27:13
repo: tine20/Tine-2.0-Open-Source-Groupware-and-CRM
repo_url: https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM
action: closed
title: 0006680: folder permissions dialog is broken
labels: Bug Filemanager Mantis high priority
body:
**Reported by pschuele on 27 Jun 2012 10:00** **Version:** Milan (2012.03.5) folder permissions dialog is broken - it shows object [object] and no permissions can be set
index: 1.0
text_combine:
0006680: folder permissions dialog is broken - **Reported by pschuele on 27 Jun 2012 10:00** **Version:** Milan (2012.03.5) folder permissions dialog is broken - it shows object [object] and no permissions can be set
label: priority
text:
folder permissions dialog is broken reported by pschuele on jun version milan folder permissions dialog is broken it shows object and no permissions can be set
binary_label: 1

row: 397,308
id: 11,726,590,784
type: IssuesEvent
created_at: 2020-03-10 14:44:14
repo: AY1920S2-CS2103T-W16-2/main
repo_url: https://api.github.com/repos/AY1920S2-CS2103T-W16-2/main
action: opened
title: As a user I want to keep track of how many repetitions per exercise
labels: priority.High type.Epic
body:
... so that I know the details of each exercise.
index: 1.0
text_combine:
As a user I want to keep track of how many repetitions per exercise - ... so that I know the details of each exercise.
label: priority
text:
as a user i want to keep track of how many repetitions per exercise so that i know the details of each exercise
binary_label: 1

row: 617,144
id: 19,344,034,112
type: IssuesEvent
created_at: 2021-12-15 08:57:28
repo: ls1intum/Artemis
repo_url: https://api.github.com/repos/ls1intum/Artemis
action: closed
title: Nullpointer Exception while Trigger All
labels: bug component:Programming priority:high
body:
### Describe the bug We had to trigger all build plans, since we've updates some test cases. I've marked it as high, since we need this feature if we have to adapt some test cases due to student responses. What happened: * Not all Builds have been triggered * Got this exception in the BE ``` 2021-12-07 10:07:22.610 ERROR 12 --- [ artemis-task-1] .a.i.SimpleAsyncUncaughtExceptionHandler : Unexpected exception occurred invoking async method: public void de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.triggerInstructorBuildForExercise(java.lang.Long) throws de.tum.in.www1.artemis.web.rest.errors.EntityNotFoundException java.lang.NullPointerException: Cannot invoke "org.eclipse.jgit.lib.ObjectId.getName()" because the return value of "de.tum.in.www1.artemis.service.connectors.GitService.getLastCommitHash(de.tum.in.www1.artemis.domain.VcsRepositoryUrl)" is null at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.getLastCommitHashForParticipation(ProgrammingSubmissionService.java:368) at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.getOrCreateSubmissionWithLastCommitHashForParticipation(ProgrammingSubmissionService.java:359) at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.triggerBuildAndNotifyUser(ProgrammingSubmissionService.java:429) at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.triggerBuildForParticipations(ProgrammingSubmissionService.java:316) at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.triggerInstructorBuildForExercise(ProgrammingSubmissionService.java:291) at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService$$FastClassBySpringCGLIB$$fadb5611.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:783) at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753) at org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at tech.jhipster.async.ExceptionHandlingAsyncTaskExecutor.lambda$createWrappedRunnable$1(ExceptionHandlingAsyncTaskExecutor.java:78) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) at java.base/java.lang.Thread.run(Thread.java:831) ``` ### To Reproduce 1. Trigger All for a Programming Task (Unclear what the internal state of all participations must be) ### Expected behavior Trigger all should not fail. And should trigger all participations. ### Screenshots _No response_ ### What browsers are you seeing the problem on? Firefox ### Additional context _No response_ ### Relevant log output _No response_
index: 1.0
text_combine:
Nullpointer Exception while Trigger All - ### Describe the bug We had to trigger all build plans, since we've updates some test cases. I've marked it as high, since we need this feature if we have to adapt some test cases due to student responses. What happened: * Not all Builds have been triggered * Got this exception in the BE ``` 2021-12-07 10:07:22.610 ERROR 12 --- [ artemis-task-1] .a.i.SimpleAsyncUncaughtExceptionHandler : Unexpected exception occurred invoking async method: public void de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.triggerInstructorBuildForExercise(java.lang.Long) throws de.tum.in.www1.artemis.web.rest.errors.EntityNotFoundException java.lang.NullPointerException: Cannot invoke "org.eclipse.jgit.lib.ObjectId.getName()" because the return value of "de.tum.in.www1.artemis.service.connectors.GitService.getLastCommitHash(de.tum.in.www1.artemis.domain.VcsRepositoryUrl)" is null at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.getLastCommitHashForParticipation(ProgrammingSubmissionService.java:368) at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.getOrCreateSubmissionWithLastCommitHashForParticipation(ProgrammingSubmissionService.java:359) at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.triggerBuildAndNotifyUser(ProgrammingSubmissionService.java:429) at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.triggerBuildForParticipations(ProgrammingSubmissionService.java:316) at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService.triggerInstructorBuildForExercise(ProgrammingSubmissionService.java:291) at de.tum.in.www1.artemis.service.programming.ProgrammingSubmissionService$$FastClassBySpringCGLIB$$fadb5611.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at 
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:783) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753) at org.springframework.aop.interceptor.AsyncExecutionInterceptor.lambda$invoke$0(AsyncExecutionInterceptor.java:115) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at tech.jhipster.async.ExceptionHandlingAsyncTaskExecutor.lambda$createWrappedRunnable$1(ExceptionHandlingAsyncTaskExecutor.java:78) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) at java.base/java.lang.Thread.run(Thread.java:831) ``` ### To Reproduce 1. Trigger All for a Programming Task (Unclear what the internal state of all participations must be) ### Expected behavior Trigger all should not fail. And should trigger all participations. ### Screenshots _No response_ ### What browsers are you seeing the problem on? Firefox ### Additional context _No response_ ### Relevant log output _No response_
label: priority
text:
nullpointer exception while trigger all describe the bug we had to trigger all build plans since we ve updates some test cases i ve marked it as high since we need this feature if we have to adapt some test cases due to student responses what happened not all builds have been triggered got this exception in the be error a i simpleasyncuncaughtexceptionhandler unexpected exception occurred invoking async method public void de tum in artemis service programming programmingsubmissionservice triggerinstructorbuildforexercise java lang long throws de tum in artemis web rest errors entitynotfoundexception java lang nullpointerexception cannot invoke org eclipse jgit lib objectid getname because the return value of de tum in artemis service connectors gitservice getlastcommithash de tum in artemis domain vcsrepositoryurl is null at de tum in artemis service programming programmingsubmissionservice getlastcommithashforparticipation programmingsubmissionservice java at de tum in artemis service programming programmingsubmissionservice getorcreatesubmissionwithlastcommithashforparticipation programmingsubmissionservice java at de tum in artemis service programming programmingsubmissionservice triggerbuildandnotifyuser programmingsubmissionservice java at de tum in artemis service programming programmingsubmissionservice triggerbuildforparticipations programmingsubmissionservice java at de tum in artemis service programming programmingsubmissionservice triggerinstructorbuildforexercise programmingsubmissionservice java at de tum in artemis service programming programmingsubmissionservice fastclassbyspringcglib invoke at org springframework cglib proxy methodproxy invoke methodproxy java at org springframework aop framework cglibaopproxy cglibmethodinvocation invokejoinpoint cglibaopproxy java at org springframework aop framework reflectivemethodinvocation proceed reflectivemethodinvocation java at org springframework aop framework cglibaopproxy cglibmethodinvocation proceed 
cglibaopproxy java at org springframework aop interceptor asyncexecutioninterceptor lambda invoke asyncexecutioninterceptor java at java base java util concurrent futuretask run futuretask java at tech jhipster async exceptionhandlingasynctaskexecutor lambda createwrappedrunnable exceptionhandlingasynctaskexecutor java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java to reproduce trigger all for a programming task unclear what the internal state of all participations must be expected behavior trigger all should not fail and should trigger all participations screenshots no response what browsers are you seeing the problem on firefox additional context no response relevant log output no response
binary_label: 1

row: 291,944
id: 8,951,649,647
type: IssuesEvent
created_at: 2019-01-25 14:33:04
repo: OpenSRP/opensrp-client-reveal
repo_url: https://api.github.com/repos/OpenSRP/opensrp-client-reveal
action: closed
title: Implement Hamburger menu
labels: Android Client Priority: High
body:
The Reveal app has a hamburger menu that slides from the left side of the app. ![irs - menu tapped 3](https://user-images.githubusercontent.com/6067053/48386717-c68f3100-e6a7-11e8-8d1f-877dc77cfd57.png) Navigation: - [x] This menu displays when the user touches the hamburger menu on the map view. - [x] Users can close this menu by sliding it from right to left - [x] This view acts as an overlay that makes the underlying map navigation unresponsive. We don't want users accidentally moving the map around when they try to close this menu. Data: The data on this screen needs to act as a filter for the map: - [x] Campaign is a dropdown menu that filters the available tasks by campaign. - [x] Operational Area is a hierarchy menu that filters the operational areas that have been downloaded on the Android client for that user. This filters the task by taskGroup. ![irs - menu tapped oa dialog](https://user-images.githubusercontent.com/6067053/48386747-dc045b00-e6a7-11e8-8340-29f05ca60214.png) Actions: - [x] The user can touch the word "Sync" to close this menu and perform a sync operation - [x] The user can touch the word "Logout" to close this menu and logout of the app
index: 1.0
text_combine:
Implement Hamburger menu - The Reveal app has a hamburger menu that slides from the left side of the app. ![irs - menu tapped 3](https://user-images.githubusercontent.com/6067053/48386717-c68f3100-e6a7-11e8-8d1f-877dc77cfd57.png) Navigation: - [x] This menu displays when the user touches the hamburger menu on the map view. - [x] Users can close this menu by sliding it from right to left - [x] This view acts as an overlay that makes the underlying map navigation unresponsive. We don't want users accidentally moving the map around when they try to close this menu. Data: The data on this screen needs to act as a filter for the map: - [x] Campaign is a dropdown menu that filters the available tasks by campaign. - [x] Operational Area is a hierarchy menu that filters the operational areas that have been downloaded on the Android client for that user. This filters the task by taskGroup. ![irs - menu tapped oa dialog](https://user-images.githubusercontent.com/6067053/48386747-dc045b00-e6a7-11e8-8340-29f05ca60214.png) Actions: - [x] The user can touch the word "Sync" to close this menu and perform a sync operation - [x] The user can touch the word "Logout" to close this menu and logout of the app
label: priority
text:
implement hamburger menu the reveal app has a hamburger menu that slides from the left side of the app navigation this menu displays when the user touches the hamburger menu on the map view users can close this menu by sliding it from right to left this view acts as an overlay that makes the underlying map navigation unresponsive we don t want users accidentally moving the map around when they try to close this menu data the data on this screen needs to act as a filter for the map campaign is a dropdown menu that filters the available tasks by campaign operational area is a hierarchy menu that filters the operational areas that have been downloaded on the android client for that user this filters the task by taskgroup actions the user can touch the word sync to close this menu and perform a sync operation the user can touch the word logout to close this menu and logout of the app
binary_label: 1

row: 310,109
id: 9,485,868,048
type: IssuesEvent
created_at: 2019-04-22 12:01:31
repo: strapi/strapi
repo_url: https://api.github.com/repos/strapi/strapi
action: closed
title: Delete entry with an Media relation error Parameter "obj" to Document()
labels: priority: high status: confirmed type: bug 🐛
body:
**Informations** - **Node.js version**: 10.15.0 - **NPM version**: 6.4.1 - **Strapi version**: v3.0.0-alpha.18 - **Database**: MongoDB 3.6.9 - **Operating system**: Debian 9 **What is the current behavior?** Issuing delete requests causes 500 internal error. but the entry gets deleted anyway. in a non-custom install. only some content types added The delete request: /products/5c3a52af383fe663810f0abc ``` (node:613) DeprecationWarning: collection.findAndModify is deprecated. Use findOneAndUpdate, findOneAndReplace or findOneAndDelete instead. { ObjectParameterError: Parameter "obj" to Document() must be an object, got 5c3a995fc4859602655bfe33 at new ObjectParameterError ``` **Steps to reproduce the problem** setup insomina. create content type. issue delete request **What is the expected behavior?** No internal error **Suggested solutions** No idea atm.
index: 1.0
text_combine:
Delete entry with an Media relation error Parameter "obj" to Document() - **Informations** - **Node.js version**: 10.15.0 - **NPM version**: 6.4.1 - **Strapi version**: v3.0.0-alpha.18 - **Database**: MongoDB 3.6.9 - **Operating system**: Debian 9 **What is the current behavior?** Issuing delete requests causes 500 internal error. but the entry gets deleted anyway. in a non-custom install. only some content types added The delete request: /products/5c3a52af383fe663810f0abc ``` (node:613) DeprecationWarning: collection.findAndModify is deprecated. Use findOneAndUpdate, findOneAndReplace or findOneAndDelete instead. { ObjectParameterError: Parameter "obj" to Document() must be an object, got 5c3a995fc4859602655bfe33 at new ObjectParameterError ``` **Steps to reproduce the problem** setup insomina. create content type. issue delete request **What is the expected behavior?** No internal error **Suggested solutions** No idea atm.
label: priority
text:
delete entry with an media relation error parameter obj to document informations node js version npm version strapi version alpha database mongodb operating system debian what is the current behavior issuing delete requests causes internal error but the entry gets deleted anyway in a non custom install only some content types added the delete request products node deprecationwarning collection findandmodify is deprecated use findoneandupdate findoneandreplace or findoneanddelete instead objectparametererror parameter obj to document must be an object got at new objectparametererror steps to reproduce the problem setup insomina create content type issue delete request what is the expected behavior no internal error suggested solutions no idea atm
1
201,946
7,042,690,440
IssuesEvent
2017-12-30 16:46:41
tripl3dogdare/scjson
https://api.github.com/repos/tripl3dogdare/scjson
closed
Get ScJson on SBT package management systems
enhancement priority:high
Currently, ScJson can only be installed via unmanaged dependencies or direct source addition. This needs to be dealt with sooner than later.
1.0
Get ScJson on SBT package management systems - Currently, ScJson can only be installed via unmanaged dependencies or direct source addition. This needs to be dealt with sooner than later.
priority
get scjson on sbt package management systems currently scjson can only be installed via unmanaged dependencies or direct source addition this needs to be dealt with sooner than later
1
794,372
28,033,554,508
IssuesEvent
2023-03-28 13:49:43
scaleway/scaleway-cli
https://api.github.com/repos/scaleway/scaleway-cli
closed
shell: completion description is missing with positional arguments
bug shell priority:high
<!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ## Command attempted ![image](https://user-images.githubusercontent.com/12870834/190412816-be26ef5c-50e8-4fb8-8c58-120380c7ad92.png) ``` lb backend update 18b068f7-4ad8-4fb7-8101-f01e399cdb29 forward-port-algorith ``` ### Expected Behavior Field should be documented ### Actual Behavior Field description is "command not found" ## More info <!-- output of `scw version`, your OS version, steps to reproduce, etc. -->
1.0
shell: completion description is missing with positional arguments - <!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ## Command attempted ![image](https://user-images.githubusercontent.com/12870834/190412816-be26ef5c-50e8-4fb8-8c58-120380c7ad92.png) ``` lb backend update 18b068f7-4ad8-4fb7-8101-f01e399cdb29 forward-port-algorith ``` ### Expected Behavior Field should be documented ### Actual Behavior Field description is "command not found" ## More info <!-- output of `scw version`, your OS version, steps to reproduce, etc. -->
priority
shell completion description is missing with positional arguments community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment command attempted lb backend update forward port algorith expected behavior field should be documented actual behavior field description is command not found more info
1
787,013
27,701,797,677
IssuesEvent
2023-03-14 08:35:22
Signbank/Global-signbank
https://api.github.com/repos/Signbank/Global-signbank
reopened
Problem when playing an uploaded video
high priority
An error occurs in the drag and drop code, or the form that it uses, when opening a gloss detail page See: https://signbank.cls.ru.nl/dictionary/gloss/44008/ Might be a browser problem, or maybe it is saved somewhere different than in my local setup, or it is something else. Check why "id_videofile" is nowhere as 'id' but is only used for 'for'?
1.0
Problem when playing an uploaded video - An error occurs in the drag and drop code, or the form that it uses, when opening a gloss detail page See: https://signbank.cls.ru.nl/dictionary/gloss/44008/ Might be a browser problem, or maybe it is saved somewhere different than in my local setup, or it is something else. Check why "id_videofile" is nowhere as 'id' but is only used for 'for'?
priority
problem when playing an uploaded video an error occurs in the drag and drop code or the form that it uses when opening a gloss detail page see might be a browser problem or maybe it is saved somewhere different than in my local setup or it is something else check why id videofile is nowhere as id but is only used for for
1
678,469
23,198,665,018
IssuesEvent
2022-08-01 19:02:43
PenPow/Sentry
https://api.github.com/repos/PenPow/Sentry
closed
Interactions: Not Clearing Out Prior / Commands on Post
bug priority:high semver:major managers
# Overview Not clearing old slash commands when deployed also make it singular if only giving one command, it was annoying me
1.0
Interactions: Not Clearing Out Prior / Commands on Post - # Overview Not clearing old slash commands when deployed also make it singular if only giving one command, it was annoying me
priority
interactions not clearing out prior commands on post overview not clearing old slash commands when deployed also make it singular if only giving one command it was annoying me
1
196,558
6,935,118,206
IssuesEvent
2017-12-03 03:54:02
vmware/harbor
https://api.github.com/repos/vmware/harbor
closed
Harbor will be started multiple times in the bosh release/tile
area/bosh-release area/tile kind/bug priority/high target/pks-0.8
Here are logs: ------------------- [Fri Dec 1 10:48:10 UTC 2017] Starting Harbor 1.2.0 at https://testing.habor.vmware.com [Fri Dec 1 10:48:10 UTC 2017] Loading docker images ... Loaded image: vmware/mariadb-photon:10.2.8 Loaded image: vmware/harbor-ui:v1.2.0-219-gdb6def3 Loaded image: vmware/harbor-jobservice:v1.2.0-219-gdb6def3 Loaded image: vmware/nginx-photon:1.11.13 Loaded image: vmware/postgresql:9.6.5-photon Loaded image: vmware/harbor-db:v1.2.0-219-gdb6def3 Loaded image: vmware/photon:1.0 Loaded image: vmware/clair:v2.0.1-photon Loaded image: vmware/harbor-adminserver:v1.2.0-219-gdb6def3 Loaded image: vmware/registry:2.6.2-photon Loaded image: vmware/notary-photon:server-0.5.1 Loaded image: vmware/notary-photon:signer-0.5.1 Loaded image: vmware/harbor-log:v1.2.0-219-gdb6def3 [Fri Dec 1 10:50:59 UTC 2017] Launching docker-compose up ... [Fri Dec 1 10:50:59 UTC 2017] Waiting 120 seconds for Harbor Service to be ready ... [Fri Dec 1 10:51:11 UTC 2017] Starting Harbor 1.2.0 at https://testing.habor.vmware.com [Fri Dec 1 10:51:11 UTC 2017] Loading docker images ... Loaded image: vmware/mariadb-photon:10.2.8 Loaded image: vmware/harbor-ui:v1.2.0-219-gdb6def3 Loaded image: vmware/harbor-jobservice:v1.2.0-219-gdb6def3 Loaded image: vmware/nginx-photon:1.11.13 Loaded image: vmware/postgresql:9.6.5-photon Loaded image: vmware/harbor-db:v1.2.0-219-gdb6def3 Loaded image: vmware/photon:1.0 Loaded image: vmware/clair:v2.0.1-photon Loaded image: vmware/harbor-adminserver:v1.2.0-219-gdb6def3 Loaded image: vmware/registry:2.6.2-photon Loaded image: vmware/notary-photon:server-0.5.1 Loaded image: vmware/notary-photon:signer-0.5.1 Loaded image: vmware/harbor-log:v1.2.0-219-gdb6def3 [Fri Dec 1 10:52:45 UTC 2017] Launching docker-compose up ... [Fri Dec 1 10:52:45 UTC 2017] Waiting 120 seconds for Harbor Service to be ready ... [Fri Dec 1 10:52:56 UTC 2017] Starting Harbor 1.2.0 at https://testing.habor.vmware.com [Fri Dec 1 10:52:56 UTC 2017] Loading docker images ... Loaded image: vmware/mariadb-photon:10.2.8 Loaded image: vmware/harbor-ui:v1.2.0-219-gdb6def3 Loaded image: vmware/harbor-jobservice:v1.2.0-219-gdb6def3 Loaded image: vmware/nginx-photon:1.11.13 Loaded image: vmware/postgresql:9.6.5-photon Loaded image: vmware/harbor-db:v1.2.0-219-gdb6def3 Loaded image: vmware/photon:1.0 Loaded image: vmware/clair:v2.0.1-photon Loaded image: vmware/harbor-adminserver:v1.2.0-219-gdb6def3 Loaded image: vmware/registry:2.6.2-photon Loaded image: vmware/notary-photon:server-0.5.1 Loaded image: vmware/notary-photon:signer-0.5.1 Loaded image: vmware/harbor-log:v1.2.0-219-gdb6def3 [Fri Dec 1 10:54:35 UTC 2017] Launching docker-compose up ... [Fri Dec 1 10:54:35 UTC 2017] Waiting 120 seconds for Harbor Service to be ready ... [Fri Dec 1 10:55:16 UTC 2017] Error: Harbor Service failed to start in 120 seconds [Fri Dec 1 10:55:26 UTC 2017] Error: Harbor Service failed to start in 120 seconds [Fri Dec 1 10:56:32 UTC 2017] Error: Harbor Service failed to start in 120 seconds
1.0
Harbor will be started multiple times in the bosh release/tile - Here are logs: ------------------- [Fri Dec 1 10:48:10 UTC 2017] Starting Harbor 1.2.0 at https://testing.habor.vmware.com [Fri Dec 1 10:48:10 UTC 2017] Loading docker images ... Loaded image: vmware/mariadb-photon:10.2.8 Loaded image: vmware/harbor-ui:v1.2.0-219-gdb6def3 Loaded image: vmware/harbor-jobservice:v1.2.0-219-gdb6def3 Loaded image: vmware/nginx-photon:1.11.13 Loaded image: vmware/postgresql:9.6.5-photon Loaded image: vmware/harbor-db:v1.2.0-219-gdb6def3 Loaded image: vmware/photon:1.0 Loaded image: vmware/clair:v2.0.1-photon Loaded image: vmware/harbor-adminserver:v1.2.0-219-gdb6def3 Loaded image: vmware/registry:2.6.2-photon Loaded image: vmware/notary-photon:server-0.5.1 Loaded image: vmware/notary-photon:signer-0.5.1 Loaded image: vmware/harbor-log:v1.2.0-219-gdb6def3 [Fri Dec 1 10:50:59 UTC 2017] Launching docker-compose up ... [Fri Dec 1 10:50:59 UTC 2017] Waiting 120 seconds for Harbor Service to be ready ... [Fri Dec 1 10:51:11 UTC 2017] Starting Harbor 1.2.0 at https://testing.habor.vmware.com [Fri Dec 1 10:51:11 UTC 2017] Loading docker images ... Loaded image: vmware/mariadb-photon:10.2.8 Loaded image: vmware/harbor-ui:v1.2.0-219-gdb6def3 Loaded image: vmware/harbor-jobservice:v1.2.0-219-gdb6def3 Loaded image: vmware/nginx-photon:1.11.13 Loaded image: vmware/postgresql:9.6.5-photon Loaded image: vmware/harbor-db:v1.2.0-219-gdb6def3 Loaded image: vmware/photon:1.0 Loaded image: vmware/clair:v2.0.1-photon Loaded image: vmware/harbor-adminserver:v1.2.0-219-gdb6def3 Loaded image: vmware/registry:2.6.2-photon Loaded image: vmware/notary-photon:server-0.5.1 Loaded image: vmware/notary-photon:signer-0.5.1 Loaded image: vmware/harbor-log:v1.2.0-219-gdb6def3 [Fri Dec 1 10:52:45 UTC 2017] Launching docker-compose up ... [Fri Dec 1 10:52:45 UTC 2017] Waiting 120 seconds for Harbor Service to be ready ... [Fri Dec 1 10:52:56 UTC 2017] Starting Harbor 1.2.0 at https://testing.habor.vmware.com [Fri Dec 1 10:52:56 UTC 2017] Loading docker images ... Loaded image: vmware/mariadb-photon:10.2.8 Loaded image: vmware/harbor-ui:v1.2.0-219-gdb6def3 Loaded image: vmware/harbor-jobservice:v1.2.0-219-gdb6def3 Loaded image: vmware/nginx-photon:1.11.13 Loaded image: vmware/postgresql:9.6.5-photon Loaded image: vmware/harbor-db:v1.2.0-219-gdb6def3 Loaded image: vmware/photon:1.0 Loaded image: vmware/clair:v2.0.1-photon Loaded image: vmware/harbor-adminserver:v1.2.0-219-gdb6def3 Loaded image: vmware/registry:2.6.2-photon Loaded image: vmware/notary-photon:server-0.5.1 Loaded image: vmware/notary-photon:signer-0.5.1 Loaded image: vmware/harbor-log:v1.2.0-219-gdb6def3 [Fri Dec 1 10:54:35 UTC 2017] Launching docker-compose up ... [Fri Dec 1 10:54:35 UTC 2017] Waiting 120 seconds for Harbor Service to be ready ... [Fri Dec 1 10:55:16 UTC 2017] Error: Harbor Service failed to start in 120 seconds [Fri Dec 1 10:55:26 UTC 2017] Error: Harbor Service failed to start in 120 seconds [Fri Dec 1 10:56:32 UTC 2017] Error: Harbor Service failed to start in 120 seconds
priority
harbor will be started multiple times in the bosh release tile here are logs starting harbor at loading docker images loaded image vmware mariadb photon loaded image vmware harbor ui loaded image vmware harbor jobservice loaded image vmware nginx photon loaded image vmware postgresql photon loaded image vmware harbor db loaded image vmware photon loaded image vmware clair photon loaded image vmware harbor adminserver loaded image vmware registry photon loaded image vmware notary photon server loaded image vmware notary photon signer loaded image vmware harbor log launching docker compose up waiting seconds for harbor service to be ready starting harbor at loading docker images loaded image vmware mariadb photon loaded image vmware harbor ui loaded image vmware harbor jobservice loaded image vmware nginx photon loaded image vmware postgresql photon loaded image vmware harbor db loaded image vmware photon loaded image vmware clair photon loaded image vmware harbor adminserver loaded image vmware registry photon loaded image vmware notary photon server loaded image vmware notary photon signer loaded image vmware harbor log launching docker compose up waiting seconds for harbor service to be ready starting harbor at loading docker images loaded image vmware mariadb photon loaded image vmware harbor ui loaded image vmware harbor jobservice loaded image vmware nginx photon loaded image vmware postgresql photon loaded image vmware harbor db loaded image vmware photon loaded image vmware clair photon loaded image vmware harbor adminserver loaded image vmware registry photon loaded image vmware notary photon server loaded image vmware notary photon signer loaded image vmware harbor log launching docker compose up waiting seconds for harbor service to be ready error harbor service failed to start in seconds error harbor service failed to start in seconds error harbor service failed to start in seconds
1
829,675
31,886,415,501
IssuesEvent
2023-09-17 01:23:46
primaryodors/primarydock
https://api.github.com/repos/primaryodors/primarydock
opened
Increase memory robustness.
high priority
There is a segfault that's sometimes happening during predictions with no clear pattern for reproducibility. It is not happening on the dev machines where the code can be stepped through, and running the faulty dock under valgrind would probably take days. Let's see if it resolves on its own after applying some better practices to memory management. All functions that return any type of pointer to pointers must be changed to not do this. When such a function is called, it allocates a pointer array on the heap and sets the elements to point to persistent objects that must remain active throughout program execution. If the array is deallocated with delete[], it frees up memory that's still being used and leads to segfaults and corrupted data. If deallocated with delete, it causes a "mismatched free/delete" error. If not deallocated, it causes a memory leak. Where performance is not critical, it is helpful to use std::vector and std::shared_ptr. Where there is a known limit to the size of the returned array, a stack allocated array can be passed in as a pointer argument. Another option is to use arrays of Star objects.
1.0
Increase memory robustness. - There is a segfault that's sometimes happening during predictions with no clear pattern for reproducibility. It is not happening on the dev machines where the code can be stepped through, and running the faulty dock under valgrind would probably take days. Let's see if it resolves on its own after applying some better practices to memory management. All functions that return any type of pointer to pointers must be changed to not do this. When such a function is called, it allocates a pointer array on the heap and sets the elements to point to persistent objects that must remain active throughout program execution. If the array is deallocated with delete[], it frees up memory that's still being used and leads to segfaults and corrupted data. If deallocated with delete, it causes a "mismatched free/delete" error. If not deallocated, it causes a memory leak. Where performance is not critical, it is helpful to use std::vector and std::shared_ptr. Where there is a known limit to the size of the returned array, a stack allocated array can be passed in as a pointer argument. Another option is to use arrays of Star objects.
priority
increase memory robustness there is a segfault that s sometimes happening during predictions with no clear pattern for reproducibility it is not happening on the dev machines where the code can be stepped through and running the faulty dock under valgrind would probably take days let s see if it resolves on its own after applying some better practices to memory management all functions that return any type of pointer to pointers must be changed to not do this when such a function is called it allocates a pointer array on the heap and sets the elements to point to persistent objects that must remain active throughout program execution if the array is deallocated with delete it frees up memory that s still being used and leads to segfaults and corrupted data if deallocated with delete it causes a mismatched free delete error if not deallocated it causes a memory leak where performance is not critical it is helpful to use std vector and std shared ptr where there is a known limit to the size of the returned array a stack allocated array can be passed in as a pointer argument another option is to use arrays of star objects
1
109,502
4,388,624,023
IssuesEvent
2016-08-08 19:29:05
DistrictDataLabs/partisan-discourse
https://api.github.com/repos/DistrictDataLabs/partisan-discourse
opened
Model Management View: Listing all the models in the system and for your user
priority: high ready type: feature
To increase the value of `partisan-discourse` as a tool to demonstrate a model management system and as a system to learn more about ML, we want to incorporate a model display view in the UI that showcases to a user the currently stored models, particularly as a breakdown between the global models and a user's models. This issue is closed when a user can login, navigate to a model display page, and there see a listing of their user's models (and their respective scoring data) and the global (aka null user) models and their respective scoring data. The page design is pretty open ended but for whomever takes this issue we can brainstorm on it a bit together as needed.
1.0
Model Management View: Listing all the models in the system and for your user - To increase the value of `partisan-discourse` as a tool to demonstrate a model management system and as a system to learn more about ML, we want to incorporate a model display view in the UI that showcases to a user the currently stored models, particularly as a breakdown between the global models and a user's models. This issue is closed when a user can login, navigate to a model display page, and there see a listing of their user's models (and their respective scoring data) and the global (aka null user) models and their respective scoring data. The page design is pretty open ended but for whomever takes this issue we can brainstorm on it a bit together as needed.
priority
model management view listing all the models in the system and for your user to increase the value of partisan discourse as a tool to demonstrate a model management system and as a system to learn more about ml we want to incorporate a model display view in the ui that showcases to a user the currently stored models particularly as a breakdown between the global models and a user s models this issue is closed when a user can login navigate to a model display page and there see a listing of their user s models and their respective scoring data and the global aka null user models and their respective scoring data the page design is pretty open ended but for whomever takes this issue we can brainstorm on it a bit together as needed
1
522,037
15,147,445,165
IssuesEvent
2021-02-11 09:09:27
BirminghamConservatoire/IntegraLive
https://api.github.com/repos/BirminghamConservatoire/IntegraLive
closed
Integra Live won't launch on macOs Big Sur
Mac-only priority high
Pablo Furman at San José State University reported last week that IL won't launch on Macs running macOs Big Sur. A message comes up requesting an updated version. Is there an easy way to fix this? I'm still running on High Sierra and students here use mostly Catalina without problems, but I'll try to reproduce on an extra machine if I can get hold of one.
1.0
Integra Live won't launch on macOs Big Sur - Pablo Furman at San José State University reported last week that IL won't launch on Macs running macOs Big Sur. A message comes up requesting an updated version. Is there an easy way to fix this? I'm still running on High Sierra and students here use mostly Catalina without problems, but I'll try to reproduce on an extra machine if I can get hold of one.
priority
integra live won t launch on macos big sur pablo furman at san josé state university reported last week that il won t launch on macs running macos big sur a message comes up requesting an updated version is there an easy way to fix this i m still running on high sierra and students here use mostly catalina without problems but i ll try to reproduce on an extra machine if i can get hold of one
1
500,185
14,492,260,988
IssuesEvent
2020-12-11 06:36:30
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
m.youtube.com - see bug description
browser-focus-geckoview engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical
<!-- @browser: Firefox Mobile 83.0 --> <!-- @ua_header: Mozilla/5.0 (Android 6.0; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/63447 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://m.youtube.com/watch?v=EWjZOxs87yg **Browser / Version**: Firefox Mobile 83.0 **Operating System**: Android 6.0 **Tested Another Browser**: Yes Other **Problem type**: Something else **Description**: Sound quality **Steps to Reproduce**: The problem with YouTube audio quality <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
m.youtube.com - see bug description - <!-- @browser: Firefox Mobile 83.0 --> <!-- @ua_header: Mozilla/5.0 (Android 6.0; Mobile; rv:83.0) Gecko/83.0 Firefox/83.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/63447 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://m.youtube.com/watch?v=EWjZOxs87yg **Browser / Version**: Firefox Mobile 83.0 **Operating System**: Android 6.0 **Tested Another Browser**: Yes Other **Problem type**: Something else **Description**: Sound quality **Steps to Reproduce**: The problem with YouTube audio quality <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
priority
m youtube com see bug description url browser version firefox mobile operating system android tested another browser yes other problem type something else description sound quality steps to reproduce the problem with youtube audio quality browser configuration none from with ❤️
1
465,827
13,392,874,270
IssuesEvent
2020-09-03 02:36:20
WordPress/learn
https://api.github.com/repos/WordPress/learn
closed
Search results: Differentiate between CPTs
[Component] Learn Theme [Priority] High
The search function searches the lesson plans and the workshops, but they are displayed in a single list with no visual distinction - example: https://learn.wordpress.org/?s=block Could this be updated to show some sort of visual differentiator? Either a highlight of some kind or, better yet, display results in two columns/areas - one for each CPT.
1.0
Search results: Differentiate between CPTs - The search function searches the lesson plans and the workshops, but they are displayed in a single list with no visual distinction - example: https://learn.wordpress.org/?s=block Could this be updated to show some sort of visual differentiator? Either a highlight of some kind or, better yet, display results in two columns/areas - one for each CPT.
priority
search results differentiate between cpts the search function searches the lesson plans and the workshops but they are displayed in a single list with no visual distinction example could this be updated to show some sort of visual differentiator either a highlight of some kind or better yet display results in two columns areas one for each cpt
1
268,660
8,409,993,796
IssuesEvent
2018-10-12 09:11:55
hajkmap/Hajk
https://api.github.com/repos/hajkmap/Hajk
closed
Byt så att alla plugins som har en panel renderar i en drawer
High priority
Byt så att alla plugins som har en panel renderar i en drawer
1.0
Byt så att alla plugins som har en panel renderar i en drawer - Byt så att alla plugins som har en panel renderar i en drawer
priority
byt så att alla plugins som har en panel renderar i en drawer byt så att alla plugins som har en panel renderar i en drawer
1
735,839
25,443,345,671
IssuesEvent
2022-11-24 02:04:41
Automattic/abacus
https://api.github.com/repos/Automattic/abacus
closed
Add new analysis columns
[!priority] high [type] enhancement [section] experiment results [!team] explat [!milestone] current
After adding absolute impact to Abacus (https://github.com/Automattic/abacus/pull/772), I think we can change the way we show the results of the experiment: - [ ] Display baseline interval below metric name p10gg3-aFD-p2#comment-33110. - [ ] Rename "absolute change" to "estimated difference" - [x] Rename "relative change (lift)" to "estimated impact". Done in https://github.com/Automattic/abacus/pull/772 - [ ] Reduce the font size and font color to a grey `#828282` of absolute change and relative change - [x] Add absolute impact to "estimated impact" with medium weight. Done in https://github.com/Automattic/abacus/pull/772 - [ ] Adjust analysis text to be medium weight - [ ] Align absolute change and relative change to be on the same horizontal ruler - [ ] The gap between absolute impact and relative change should be vertically aligned with metric name and analysis text <img width="1194" alt="Screen Shot 2022-09-16 at 3 18 32 PM" src="https://user-images.githubusercontent.com/4505888/190815505-bf891f65-2fd8-4dfa-aba8-973976b90063.png">
1.0
Add new analysis columns - After adding absolute impact to Abacus (https://github.com/Automattic/abacus/pull/772), I think we can change the way we show the results of the experiment: - [ ] Display baseline interval below metric name p10gg3-aFD-p2#comment-33110. - [ ] Rename "absolute change" to "estimated difference" - [x] Rename "relative change (lift)" to "estimated impact". Done in https://github.com/Automattic/abacus/pull/772 - [ ] Reduce the font size and font color to a grey `#828282` of absolute change and relative change - [x] Add absolute impact to "estimated impact" with medium weight. Done in https://github.com/Automattic/abacus/pull/772 - [ ] Adjust analysis text to be medium weight - [ ] Align absolute change and relative change to be on the same horizontal ruler - [ ] The gap between absolute impact and relative change should be vertically aligned with metric name and analysis text <img width="1194" alt="Screen Shot 2022-09-16 at 3 18 32 PM" src="https://user-images.githubusercontent.com/4505888/190815505-bf891f65-2fd8-4dfa-aba8-973976b90063.png">
priority
add new analysis columns after adding absolute impact to abacus i think we can change the way we show the results of the experiment display baseline interval below metric name afd comment rename absolute change to estimated difference rename relative change lift to estimated impact done in reduce the font size and font color to a grey of absolute change and relative change add absolute impact to estimated impact with medium weight done in adjust analysis text to be medium weight align absolute change and relative change to be on the same horizontal ruler the gap between absolute impact and relative change should be vertically aligned with metric name and analysis text img width alt screen shot at pm src
1
468,631
13,487,133,479
IssuesEvent
2020-09-11 10:28:53
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.zoho.com - site is not usable
browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-important
<!-- @browser: Firefox 81.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/58073 --> **URL**: https://www.zoho.com/meeting/login.html **Browser / Version**: Firefox 81.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: The site has certificate issues <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2020/9/bd33879d-87a3-45f1-9214-ca1e23a1f5c2.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200910180444</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/9/0f2d2807-a9b8-4d15-bef9-cc9ad8b0b895) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.zoho.com - site is not usable - <!-- @browser: Firefox 81.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/58073 --> **URL**: https://www.zoho.com/meeting/login.html **Browser / Version**: Firefox 81.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: The site has certificate issues <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2020/9/bd33879d-87a3-45f1-9214-ca1e23a1f5c2.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200910180444</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/9/0f2d2807-a9b8-4d15-bef9-cc9ad8b0b895) _From [webcompat.com](https://webcompat.com/) with ❤️_
priority
site is not usable url browser version firefox operating system windows tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce the site has certificate issues view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
1
412,898
12,057,864,772
IssuesEvent
2020-04-15 16:30:11
hobbit-project/platform
https://api.github.com/repos/hobbit-project/platform
closed
Platform should make sure that container names are valid
component: controller priority: high type: bug
## Problem For some images, the generated container name is too long: ``` 2018-05-08 09:49:36,381 ERROR [org.hobbit.controller.docker.ContainerManagerImpl] - <Couldn't create Docker container. Returning null.> com.spotify.docker.client.exceptions.DockerRequestException: Request error: POST unix://localhost:80/services/create: 400, body: {"message":"rpc error: code = InvalidArgument desc = name must be 63 characters or fewer"} at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:2702) at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:2652) at com.spotify.docker.client.DefaultDockerClient.createService(DefaultDockerClient.java:1848) at org.hobbit.controller.docker.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:499) at org.hobbit.controller.docker.ContainerManagerImpl.startContainer(ContainerManagerImpl.java:559) at org.hobbit.controller.docker.ContainerManagerImpl.startContainer(ContainerManagerImpl.java:575) at org.hobbit.controller.ExperimentManager.createNextExperiment(ExperimentManager.java:229) at org.hobbit.controller.ExperimentManager$1.run(ExperimentManager.java:128) at java.util.TimerThread.mainLoop(Timer.java:555) at java.util.TimerThread.run(Timer.java:505) Caused by: javax.ws.rs.BadRequestException: HTTP 400 Bad Request at org.glassfish.jersey.client.JerseyInvocation.convertToException(JerseyInvocation.java:999) at org.glassfish.jersey.client.JerseyInvocation.translate(JerseyInvocation.java:816) at org.glassfish.jersey.client.JerseyInvocation.access$700(JerseyInvocation.java:92) at org.glassfish.jersey.client.JerseyInvocation$5.completed(JerseyInvocation.java:773) at org.glassfish.jersey.client.ClientRuntime.processResponse(ClientRuntime.java:198) at org.glassfish.jersey.client.ClientRuntime.access$300(ClientRuntime.java:79) at org.glassfish.jersey.client.ClientRuntime$2.run(ClientRuntime.java:180) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) at 
org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) at org.glassfish.jersey.internal.Errors.process(Errors.java:315) at org.glassfish.jersey.internal.Errors.process(Errors.java:297) at org.glassfish.jersey.internal.Errors.process(Errors.java:267) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:340) at org.glassfish.jersey.client.ClientRuntime$3.run(ClientRuntime.java:210) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2018-05-08 09:49:36,383 ERROR [org.hobbit.controller.ExperimentManager] - <Exception while trying to start a new benchmark. Removing it from the queue.> java.lang.Exception: Couldn't create benchmark controller http://w3id.org/gerbil/qa/hobbit/vocab#GerbilBenchmarkTask3Testing at org.hobbit.controller.ExperimentManager.createNextExperiment(ExperimentManager.java:239) at org.hobbit.controller.ExperimentManager$1.run(ExperimentManager.java:128) at java.util.TimerThread.mainLoop(Timer.java:555) at java.util.TimerThread.run(Timer.java:505) ```
1.0
Platform should make sure that container names are valid - ## Problem For some images, the generated container name is too long: ``` 2018-05-08 09:49:36,381 ERROR [org.hobbit.controller.docker.ContainerManagerImpl] - <Couldn't create Docker container. Returning null.> com.spotify.docker.client.exceptions.DockerRequestException: Request error: POST unix://localhost:80/services/create: 400, body: {"message":"rpc error: code = InvalidArgument desc = name must be 63 characters or fewer"} at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:2702) at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:2652) at com.spotify.docker.client.DefaultDockerClient.createService(DefaultDockerClient.java:1848) at org.hobbit.controller.docker.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:499) at org.hobbit.controller.docker.ContainerManagerImpl.startContainer(ContainerManagerImpl.java:559) at org.hobbit.controller.docker.ContainerManagerImpl.startContainer(ContainerManagerImpl.java:575) at org.hobbit.controller.ExperimentManager.createNextExperiment(ExperimentManager.java:229) at org.hobbit.controller.ExperimentManager$1.run(ExperimentManager.java:128) at java.util.TimerThread.mainLoop(Timer.java:555) at java.util.TimerThread.run(Timer.java:505) Caused by: javax.ws.rs.BadRequestException: HTTP 400 Bad Request at org.glassfish.jersey.client.JerseyInvocation.convertToException(JerseyInvocation.java:999) at org.glassfish.jersey.client.JerseyInvocation.translate(JerseyInvocation.java:816) at org.glassfish.jersey.client.JerseyInvocation.access$700(JerseyInvocation.java:92) at org.glassfish.jersey.client.JerseyInvocation$5.completed(JerseyInvocation.java:773) at org.glassfish.jersey.client.ClientRuntime.processResponse(ClientRuntime.java:198) at org.glassfish.jersey.client.ClientRuntime.access$300(ClientRuntime.java:79) at org.glassfish.jersey.client.ClientRuntime$2.run(ClientRuntime.java:180) at 
org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) at org.glassfish.jersey.internal.Errors.process(Errors.java:315) at org.glassfish.jersey.internal.Errors.process(Errors.java:297) at org.glassfish.jersey.internal.Errors.process(Errors.java:267) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:340) at org.glassfish.jersey.client.ClientRuntime$3.run(ClientRuntime.java:210) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2018-05-08 09:49:36,383 ERROR [org.hobbit.controller.ExperimentManager] - <Exception while trying to start a new benchmark. Removing it from the queue.> java.lang.Exception: Couldn't create benchmark controller http://w3id.org/gerbil/qa/hobbit/vocab#GerbilBenchmarkTask3Testing at org.hobbit.controller.ExperimentManager.createNextExperiment(ExperimentManager.java:239) at org.hobbit.controller.ExperimentManager$1.run(ExperimentManager.java:128) at java.util.TimerThread.mainLoop(Timer.java:555) at java.util.TimerThread.run(Timer.java:505) ```
priority
platform should make sure that container names are valid problem for some images the generated container name is too long error com spotify docker client exceptions dockerrequestexception request error post unix localhost services create body message rpc error code invalidargument desc name must be characters or fewer at com spotify docker client defaultdockerclient propagate defaultdockerclient java at com spotify docker client defaultdockerclient request defaultdockerclient java at com spotify docker client defaultdockerclient createservice defaultdockerclient java at org hobbit controller docker containermanagerimpl createcontainer containermanagerimpl java at org hobbit controller docker containermanagerimpl startcontainer containermanagerimpl java at org hobbit controller docker containermanagerimpl startcontainer containermanagerimpl java at org hobbit controller experimentmanager createnextexperiment experimentmanager java at org hobbit controller experimentmanager run experimentmanager java at java util timerthread mainloop timer java at java util timerthread run timer java caused by javax ws rs badrequestexception http bad request at org glassfish jersey client jerseyinvocation converttoexception jerseyinvocation java at org glassfish jersey client jerseyinvocation translate jerseyinvocation java at org glassfish jersey client jerseyinvocation access jerseyinvocation java at org glassfish jersey client jerseyinvocation completed jerseyinvocation java at org glassfish jersey client clientruntime processresponse clientruntime java at org glassfish jersey client clientruntime access clientruntime java at org glassfish jersey client clientruntime run clientruntime java at org glassfish jersey internal errors call errors java at org glassfish jersey internal errors call errors java at org glassfish jersey internal errors process errors java at org glassfish jersey internal errors process errors java at org glassfish jersey internal errors process errors java at 
org glassfish jersey process internal requestscope runinscope requestscope java at org glassfish jersey client clientruntime run clientruntime java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java error java lang exception couldn t create benchmark controller at org hobbit controller experimentmanager createnextexperiment experimentmanager java at org hobbit controller experimentmanager run experimentmanager java at java util timerthread mainloop timer java at java util timerthread run timer java
1
189,274
6,795,935,153
IssuesEvent
2017-11-01 17:18:17
drewrehfeld/opened
https://api.github.com/repos/drewrehfeld/opened
opened
style issue on comment thread
bug HIGHEST! P1 (highest priority)
Something weird has happened with the spacing on posts on the newsfeed current style: ![screen shot 2017-11-01 at 10 18 30 vm](https://user-images.githubusercontent.com/19864464/32287409-f620117e-beed-11e7-83d1-cf6b9e2e1b86.png) what it should look like: ![screen shot 2017-11-01 at 10 18 33 vm](https://user-images.githubusercontent.com/19864464/32287402-f0149d04-beed-11e7-9a6c-b6a0f078c342.png)
1.0
style issue on comment thread - Something weird has happened with the spacing on posts on the newsfeed current style: ![screen shot 2017-11-01 at 10 18 30 vm](https://user-images.githubusercontent.com/19864464/32287409-f620117e-beed-11e7-83d1-cf6b9e2e1b86.png) what it should look like: ![screen shot 2017-11-01 at 10 18 33 vm](https://user-images.githubusercontent.com/19864464/32287402-f0149d04-beed-11e7-9a6c-b6a0f078c342.png)
priority
style issue on comment thread something weird has happened with the spacing on posts on the newsfeed current style what it should look like
1
551,959
16,191,894,096
IssuesEvent
2021-05-04 09:39:17
openshift/odo
https://api.github.com/repos/openshift/odo
closed
use Devfiles for s2i components
kind/cleanup points/2 priority/High
Use devfile.yml for s2i components instead of LocalConfig.yaml # Motivation Odo has two types of components, Devfile and s2i. Currently, each component type has its own separate code path. This means that each command needs to understand how to do the action for Devfile and also for s2i. Technically it is possible to create a Devfile that will mimic what odo is doing with s2i. We already have this in `odo utils convert`. Odo should leverage that logic and start treating s2i components as regular Devfile. This will reduce the odo code base and will make code maintenance a lot simpler. ## `odo create --s2i` `odo create --s2i` command would no longer generate `LocalConfig` (`./odo/config.yml`). Instead, it should just generate `devfile.yml`, and optionally `./odo/env.yml`) depending on what will be required. ## all other commands The rest of the commands (like `odo storage`, `odo url`, etc..) should not need to know anything about s2i. For them, it will be just another Devfile component. ## Acceptance Criteria - [x] `odo create --s2i` should use the s2i to devfile tool to build the devfile on the fly and follow devfile flow in the future. - [x] if user has existing `LocalConfig` s2i component it still needs to work - [ ] We're going to need to make changes to odo create -h as well. - [ ] Finally, we're going to need to update docs that end on odo.dev site as well.
1.0
use Devfiles for s2i components - Use devfile.yml for s2i components instead of LocalConfig.yaml # Motivation Odo has two types of components, Devfile and s2i. Currently, each component type has its own separate code path. This means that each command needs to understand how to do the action for Devfile and also for s2i. Technically it is possible to create a Devfile that will mimic what odo is doing with s2i. We already have this in `odo utils convert`. Odo should leverage that logic and start treating s2i components as regular Devfile. This will reduce the odo code base and will make code maintenance a lot simpler. ## `odo create --s2i` `odo create --s2i` command would no longer generate `LocalConfig` (`./odo/config.yml`). Instead, it should just generate `devfile.yml`, and optionally `./odo/env.yml`) depending on what will be required. ## all other commands The rest of the commands (like `odo storage`, `odo url`, etc..) should not need to know anything about s2i. For them, it will be just another Devfile component. ## Acceptance Criteria - [x] `odo create --s2i` should use the s2i to devfile tool to build the devfile on the fly and follow devfile flow in the future. - [x] if user has existing `LocalConfig` s2i component it still needs to work - [ ] We're going to need to make changes to odo create -h as well. - [ ] Finally, we're going to need to update docs that end on odo.dev site as well.
priority
use devfiles for components use devfile yml for components instead of localconfig yaml motivation odo has two types of components devfile and currently each component type has its own separate code path this means that each command needs to understand how to do the action for devfile and also for technically it is possible to create a devfile that will mimic what odo is doing with we already have this in odo utils convert odo should leverage that logic and start treating components as regular devfile this will reduce the odo code base and will make code maintenance a lot simpler odo create odo create command would no longer generate localconfig odo config yml instead it should just generate devfile yml and optionally odo env yml depending on what will be required all other commands the rest of the commands like odo storage odo url etc should not need to know anything about for them it will be just another devfile component acceptance criteria odo create should use the to devfile tool to build the devfile on the fly and follow devfile flow in the future if user has existing localconfig component it still needs to work we re going to need to make changes to odo create h as well finally we re going to need to update docs that end on odo dev site as well
1
57,562
3,082,971,084
IssuesEvent
2015-08-24 04:22:03
aodn/aatams
https://api.github.com/repos/aodn/aatams
closed
Deployments with null initialisation date
bug high priority
Some deployments have a null initialisation date. Probably, they were entered before the system had the concept of initialisation date. This was discovered whilst following up on https://github.com/aodn/aatams/issues/223 We can probably fix the issue by just setting the initialisation date equal to the deployment date in these cases. **Steps to reproduce** ```sql select * from receiver_deployment where initialisationdatetime_timestamp is null ``` **What happens?** Some results are returned (9 at the time of writing). **What should happen?** There should be 0 results.
1.0
Deployments with null initialisation date - Some deployments have a null initialisation date. Probably, they were entered before the system had the concept of initialisation date. This was discovered whilst following up on https://github.com/aodn/aatams/issues/223 We can probably fix the issue by just setting the initialisation date equal to the deployment date in these cases. **Steps to reproduce** ```sql select * from receiver_deployment where initialisationdatetime_timestamp is null ``` **What happens?** Some results are returned (9 at the time of writing). **What should happen?** There should be 0 results.
priority
deployments with null initialisation date some deployments have a null initialisation date probably they were entered before the system had the concept of initialisation date this was discovered whilst following up on we can probably fix the issue by just setting the initialisation date equal to the deployment date in these cases steps to reproduce sql select from receiver deployment where initialisationdatetime timestamp is null what happens some results are returned at the time of writing what should happen there should be results
1
358,678
10,622,617,715
IssuesEvent
2019-10-14 08:01:22
storybookjs/storybook
https://api.github.com/repos/storybookjs/storybook
closed
@storybook/angular: Expected 'styles' to be an array of strings
app: angular bug high priority
**Describe the bug** Components with SCSS or LESS (probably others too) `styleUrls` don’t load in Storybook. The following error message is printed to the console: ``` zone.js:703 Unhandled Promise rejection: Expected 'styles' to be an array of strings. ; Zone: <root> ; Task: setTimeout ; Value: Error: Expected 'styles' to be an array of strings. at assertArrayOfStrings (compiler.js:5668) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver.getNonNormalizedDirectiveMetadata (compiler.js:21024) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver._getEntryComponentMetadata (compiler.js:21670) at compiler.js:21318 at Array.map (<anonymous>) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver.getNgModuleMetadata (compiler.js:21318) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler._loadModules (compiler.js:27376) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler._compileModuleAndComponents (compiler.js:27357) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler.compileModuleAsync (compiler.js:27317) at CompilerImpl.push../node_modules/@angular/platform-browser-dynamic/fesm5/platform-browser-dynamic.js.CompilerImpl.compileModuleAsync (platform-browser-dynamic.js:143) Error: Expected 'styles' to be an array of strings. 
at assertArrayOfStrings (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:12731:19) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver.getNonNormalizedDirectiveMetadata (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:28087:13) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver._getEntryComponentMetadata (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:28733:28) at http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:28381:53 at Array.map (<anonymous>) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver.getNgModuleMetadata (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:28381:18) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler._loadModules (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:34439:51) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler._compileModuleAndComponents (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:34420:36) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler.compileModuleAsync (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:34380:37) at CompilerImpl.push../node_modules/@angular/platform-browser-dynamic/fesm5/platform-browser-dynamic.js.CompilerImpl.compileModuleAsync (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:73681:31) ``` The referenced stylesheet seems to be loaded as a module, where it should actually be a string (see screenshot below). **To Reproduce** Steps to reproduce the behavior: 1. `ng new test-app` 2. `npx -p @storybook/cli@next sb init` 3. In `stories/Welcome.stories.ts`, use `AppComponent` instead of `Welcome` as the `component` 4. Launch Storybook. 
**Expected behavior** The components should successfully show. **Screenshots** ![image](https://user-images.githubusercontent.com/6698344/63751241-40974c00-c8af-11e9-94e6-7c49a38962f3.png) **Code snippets** N/A **System:** ``` System: OS: macOS 10.14.6 CPU: (12) x64 Intel(R) Core(TM) i9-8950HK CPU @ 2.90GHz Binaries: Node: 11.3.0 - ~/.nvm/versions/node/v11.3.0/bin/node Yarn: 1.12.3 - /usr/local/bin/yarn npm: 6.9.0 - ~/.nvm/versions/node/v11.3.0/bin/npm Browsers: Chrome: 76.0.3809.132 Safari: 12.1.2 npmPackages: @storybook/addon-actions: ^5.2.0-beta.40 => 5.2.0-beta.40 @storybook/addon-links: ^5.2.0-beta.40 => 5.2.0-beta.40 @storybook/addon-notes: ^5.2.0-beta.40 => 5.2.0-beta.40 @storybook/addons: ^5.2.0-beta.40 => 5.2.0-beta.40 @storybook/angular: ^5.2.0-beta.40 => 5.2.0-beta.40 npmGlobalPackages: @storybook/cli: 5.2.0-beta.19 ``` ``` Angular CLI: 8.3.0 Node: 11.3.0 OS: darwin x64 Angular: 8.2.3 ... animations, common, compiler, compiler-cli, core, forms ... language-service, platform-browser, platform-browser-dynamic ... router Package Version ----------------------------------------------------------- @angular-devkit/architect 0.803.0 @angular-devkit/build-angular 0.803.0 @angular-devkit/build-optimizer 0.803.0 @angular-devkit/build-webpack 0.803.0 @angular-devkit/core 8.3.0 @angular-devkit/schematics 8.3.0 @angular/cli 8.3.0 @ngtools/webpack 8.3.0 @schematics/angular 8.3.0 @schematics/update 0.803.0 rxjs 6.4.0 typescript 3.5.3 webpack 4.39.2 ``` **Additional context** This seems to be a reincarnation of https://github.com/storybookjs/storybook/issues/3593. The workarounds from this issue did not work for me.
1.0
@storybook/angular: Expected 'styles' to be an array of strings - **Describe the bug** Components with SCSS or LESS (probably others too) `styleUrls` don’t load in Storybook. The following error message is printed to the console: ``` zone.js:703 Unhandled Promise rejection: Expected 'styles' to be an array of strings. ; Zone: <root> ; Task: setTimeout ; Value: Error: Expected 'styles' to be an array of strings. at assertArrayOfStrings (compiler.js:5668) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver.getNonNormalizedDirectiveMetadata (compiler.js:21024) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver._getEntryComponentMetadata (compiler.js:21670) at compiler.js:21318 at Array.map (<anonymous>) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver.getNgModuleMetadata (compiler.js:21318) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler._loadModules (compiler.js:27376) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler._compileModuleAndComponents (compiler.js:27357) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler.compileModuleAsync (compiler.js:27317) at CompilerImpl.push../node_modules/@angular/platform-browser-dynamic/fesm5/platform-browser-dynamic.js.CompilerImpl.compileModuleAsync (platform-browser-dynamic.js:143) Error: Expected 'styles' to be an array of strings. 
at assertArrayOfStrings (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:12731:19) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver.getNonNormalizedDirectiveMetadata (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:28087:13) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver._getEntryComponentMetadata (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:28733:28) at http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:28381:53 at Array.map (<anonymous>) at CompileMetadataResolver.push../node_modules/@angular/compiler/fesm5/compiler.js.CompileMetadataResolver.getNgModuleMetadata (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:28381:18) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler._loadModules (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:34439:51) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler._compileModuleAndComponents (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:34420:36) at JitCompiler.push../node_modules/@angular/compiler/fesm5/compiler.js.JitCompiler.compileModuleAsync (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:34380:37) at CompilerImpl.push../node_modules/@angular/platform-browser-dynamic/fesm5/platform-browser-dynamic.js.CompilerImpl.compileModuleAsync (http://localhost:6006/vendors~main.34d29019bd40f8e6720a.bundle.js:73681:31) ``` The referenced stylesheet seems to be loaded as a module, where it should actually be a string (see screenshot below). **To Reproduce** Steps to reproduce the behavior: 1. `ng new test-app` 2. `npx -p @storybook/cli@next sb init` 3. In `stories/Welcome.stories.ts`, use `AppComponent` instead of `Welcome` as the `component` 4. Launch Storybook. 
**Expected behavior** The components should successfully show. **Screenshots** ![image](https://user-images.githubusercontent.com/6698344/63751241-40974c00-c8af-11e9-94e6-7c49a38962f3.png) **Code snippets** N/A **System:** ``` System: OS: macOS 10.14.6 CPU: (12) x64 Intel(R) Core(TM) i9-8950HK CPU @ 2.90GHz Binaries: Node: 11.3.0 - ~/.nvm/versions/node/v11.3.0/bin/node Yarn: 1.12.3 - /usr/local/bin/yarn npm: 6.9.0 - ~/.nvm/versions/node/v11.3.0/bin/npm Browsers: Chrome: 76.0.3809.132 Safari: 12.1.2 npmPackages: @storybook/addon-actions: ^5.2.0-beta.40 => 5.2.0-beta.40 @storybook/addon-links: ^5.2.0-beta.40 => 5.2.0-beta.40 @storybook/addon-notes: ^5.2.0-beta.40 => 5.2.0-beta.40 @storybook/addons: ^5.2.0-beta.40 => 5.2.0-beta.40 @storybook/angular: ^5.2.0-beta.40 => 5.2.0-beta.40 npmGlobalPackages: @storybook/cli: 5.2.0-beta.19 ``` ``` Angular CLI: 8.3.0 Node: 11.3.0 OS: darwin x64 Angular: 8.2.3 ... animations, common, compiler, compiler-cli, core, forms ... language-service, platform-browser, platform-browser-dynamic ... router Package Version ----------------------------------------------------------- @angular-devkit/architect 0.803.0 @angular-devkit/build-angular 0.803.0 @angular-devkit/build-optimizer 0.803.0 @angular-devkit/build-webpack 0.803.0 @angular-devkit/core 8.3.0 @angular-devkit/schematics 8.3.0 @angular/cli 8.3.0 @ngtools/webpack 8.3.0 @schematics/angular 8.3.0 @schematics/update 0.803.0 rxjs 6.4.0 typescript 3.5.3 webpack 4.39.2 ``` **Additional context** This seems to be a reincarnation of https://github.com/storybookjs/storybook/issues/3593. The workarounds from this issue did not work for me.
priority
storybook angular expected styles to be an array of strings describe the bug components with scss or less probably others too styleurls don’t load in storybook the following error message is printed to the console zone js unhandled promise rejection expected styles to be an array of strings zone task settimeout value error expected styles to be an array of strings at assertarrayofstrings compiler js at compilemetadataresolver push node modules angular compiler compiler js compilemetadataresolver getnonnormalizeddirectivemetadata compiler js at compilemetadataresolver push node modules angular compiler compiler js compilemetadataresolver getentrycomponentmetadata compiler js at compiler js at array map at compilemetadataresolver push node modules angular compiler compiler js compilemetadataresolver getngmodulemetadata compiler js at jitcompiler push node modules angular compiler compiler js jitcompiler loadmodules compiler js at jitcompiler push node modules angular compiler compiler js jitcompiler compilemoduleandcomponents compiler js at jitcompiler push node modules angular compiler compiler js jitcompiler compilemoduleasync compiler js at compilerimpl push node modules angular platform browser dynamic platform browser dynamic js compilerimpl compilemoduleasync platform browser dynamic js error expected styles to be an array of strings at assertarrayofstrings at compilemetadataresolver push node modules angular compiler compiler js compilemetadataresolver getnonnormalizeddirectivemetadata at compilemetadataresolver push node modules angular compiler compiler js compilemetadataresolver getentrycomponentmetadata at at array map at compilemetadataresolver push node modules angular compiler compiler js compilemetadataresolver getngmodulemetadata at jitcompiler push node modules angular compiler compiler js jitcompiler loadmodules at jitcompiler push node modules angular compiler compiler js jitcompiler compilemoduleandcomponents at jitcompiler push node modules 
angular compiler compiler js jitcompiler compilemoduleasync at compilerimpl push node modules angular platform browser dynamic platform browser dynamic js compilerimpl compilemoduleasync the referenced stylesheet seems to be loaded as a module where it should actually be a string see screenshot below to reproduce steps to reproduce the behavior ng new test app npx p storybook cli next sb init in stories welcome stories ts use appcomponent instead of welcome as the component launch storybook expected behavior the components should successfully show screenshots code snippets n a system system os macos cpu intel r core tm cpu binaries node nvm versions node bin node yarn usr local bin yarn npm nvm versions node bin npm browsers chrome safari npmpackages storybook addon actions beta beta storybook addon links beta beta storybook addon notes beta beta storybook addons beta beta storybook angular beta beta npmglobalpackages storybook cli beta angular cli node os darwin angular animations common compiler compiler cli core forms language service platform browser platform browser dynamic router package version angular devkit architect angular devkit build angular angular devkit build optimizer angular devkit build webpack angular devkit core angular devkit schematics angular cli ngtools webpack schematics angular schematics update rxjs typescript webpack additional context this seems to be a reincarnation of the workarounds from this issue did not work for me
1
125,921
4,969,708,068
IssuesEvent
2016-12-05 14:13:18
isawnyu/isaw.web
https://api.github.com/repos/isawnyu/isaw.web
closed
images in news view do not scale proportionally
bug deploy high priority style
Aspect ratio is ignored when scaling down images in the news tiled view. This is probably a CSS issue; these images should always retain their cropped aspect ratio, even when browser width forces a width scale-down.
1.0
images in news view do not scale proportionally - Aspect ratio is ignored when scaling down images in the news tiled view. This is probably a CSS issue; these images should always retain their cropped aspect ratio, even when browser width forces a width scale-down.
priority
images in news view do not scale proportionally aspect ratio is ignored when scaling down images in the news tiled view this is probably a css issue these images should always retain their cropped aspect ratio even when browser width forces a width scale down
1
88,556
3,779,072,517
IssuesEvent
2016-03-18 05:33:52
ClinGen/clincoded
https://api.github.com/repos/ClinGen/clincoded
closed
Navigation breadcrumb link generates "Cannot read property 'article' of null" error
bug priority: high R5 release ready
<img width="615" alt="screen shot 2016-02-22 at 1 37 31 pm" src="https://cloud.githubusercontent.com/assets/11320314/13233694/ba9f121e-d969-11e5-880b-8e405b8b8a01.png">
1.0
Navigation breadcrumb link generates "Cannot read property 'article' of null" error - <img width="615" alt="screen shot 2016-02-22 at 1 37 31 pm" src="https://cloud.githubusercontent.com/assets/11320314/13233694/ba9f121e-d969-11e5-880b-8e405b8b8a01.png">
priority
navigation breadcrumb link generates cannot read property article of null error img width alt screen shot at pm src
1
431,746
12,484,957,589
IssuesEvent
2020-05-30 17:15:29
Immersive-Geology-Team/Immersive-Geology
https://api.github.com/repos/Immersive-Geology-Team/Immersive-Geology
reopened
Item/Block registry rework
High Priority Important
Because of the complications with the new tags system, a need to use tile entities to store material data and general hassle with the items, the way we register things should be changed. TODO: - [x] Clean up the existing code - [ ] ~~Add an event for other mods to register materials~~ - [x] Change the items to register in the old way (using only ids, like immersivegeology:ingot_copper) - [x] Add a custom model loader for mass loading of the item models - [x] Add a custom blockstate loader for mass loading of the blockstates Optional: - [ ] Add an ability to specify a custom texture / model for the material
1.0
Item/Block registry rework - Because of the complications with the new tags system, a need to use tile entities to store material data and general hassle with the items, the way we register things should be changed. TODO: - [x] Clean up the existing code - [ ] ~~Add an event for other mods to register materials~~ - [x] Change the items to register in the old way (using only ids, like immersivegeology:ingot_copper) - [x] Add a custom model loader for mass loading of the item models - [x] Add a custom blockstate loader for mass loading of the blockstates Optional: - [ ] Add an ability to specify a custom texture / model for the material
priority
item block registry rework because of the complications with the new tags system a need to use tile entities to store material data and general hassle with the items the way we register things should be changed todo clean up the existing code add an event for other mods to register materials change the items to register in the old way using only ids like immersivegeology ingot copper add a custom model loader for mass loading of the item models add a custom blockstate loader for mass loading of the blockstates optional add an ability to specify a custom texture model for the material
1
334,758
10,144,751,331
IssuesEvent
2019-08-05 00:03:47
ncssar/sartopo_address
https://api.github.com/repos/ncssar/sartopo_address
closed
edit marker pulldown field is too narrow
Priority: high bug fire
pulldown field width (when expanded) should be as wide as widest choice; right now it stays at the fixed width that it appears with on the form when not expanded
1.0
edit marker pulldown field is too narrow - pulldown field width (when expanded) should be as wide as widest choice; right now it stays at the fixed width that it appears with on the form when not expanded
priority
edit marker pulldown field is too narrow pulldown field width when expanded should be as wide as widest choice right now it stays at the fixed width that it appears with on the form when not expanded
1
741,114
25,780,011,700
IssuesEvent
2022-12-09 15:09:09
stadt-bielefeld/auik
https://api.github.com/repos/stadt-bielefeld/auik
closed
To Do- Zusammenfassung Import
high priority
Die beiden Importvarianten "Sielhaut" und "Abwasser" sollen in einem Fenster auftauchen. Das Modul Sielhautimport soll zum Labor - Import hinzugefügt werden. Über eine Auswahl "Sielhaut" oder " Abwasser" soll in einem 1. Schritt zwischen den beiden Importvarianten differenziert werden Wie bei dem Sielhautimport hier iene Rückmeldung über importierte Datensätze [22-08-03 E178.csv](https://github.com/stadt-bielefeld/auik/files/9723492/22-08-03.E178.csv)
1.0
To Do- Zusammenfassung Import - Die beiden Importvarianten "Sielhaut" und "Abwasser" sollen in einem Fenster auftauchen. Das Modul Sielhautimport soll zum Labor - Import hinzugefügt werden. Über eine Auswahl "Sielhaut" oder " Abwasser" soll in einem 1. Schritt zwischen den beiden Importvarianten differenziert werden Wie bei dem Sielhautimport hier iene Rückmeldung über importierte Datensätze [22-08-03 E178.csv](https://github.com/stadt-bielefeld/auik/files/9723492/22-08-03.E178.csv)
priority
to do zusammenfassung import die beiden importvarianten sielhaut und abwasser sollen in einem fenster auftauchen das modul sielhautimport soll zum labor import hinzugefügt werden über eine auswahl sielhaut oder abwasser soll in einem schritt zwischen den beiden importvarianten differenziert werden wie bei dem sielhautimport hier iene rückmeldung über importierte datensätze
1
494,650
14,262,358,152
IssuesEvent
2020-11-20 12:51:00
YangCatalog/bottle-yang-extractor-validator
https://api.github.com/repos/YangCatalog/bottle-yang-extractor-validator
closed
Imports not resolved in a zipped yang module set
Priority: High bug
I uploaded a zipped set of yang files that depend on each other (import each other). However imports between the uploaded files were not resolved. I got a lot of error messages like: err : Importing "_3gpp-common-measurements" module into "_3gpp-common-managed-function" failed. Is it possible to solve this? How?
1.0
Imports not resolved in a zipped yang module set - I uploaded a zipped set of yang files that depend on each other (import each other). However imports between the uploaded files were not resolved. I got a lot of error messages like: err : Importing "_3gpp-common-measurements" module into "_3gpp-common-managed-function" failed. Is it possible to solve this? How?
priority
imports not resolved in a zipped yang module set i uploaded a zipped set of yang files that depend on each other import each other however imports between the uploaded files were not resolved i got a lot of error messages like err importing common measurements module into common managed function failed is it possible to solve this how
1
555,588
16,458,475,197
IssuesEvent
2021-05-21 15:29:31
fgpv-vpgf/contributed-plugins
https://api.github.com/repos/fgpv-vpgf/contributed-plugins
closed
Remove definition query when close
plugin-range-slider priority - high
When we close the range slider from the plugin menu, we should remove the definition query and hide the slider. When it comes back, re enable the definition query...
1.0
Remove definition query when close - When we close the range slider from the plugin menu, we should remove the definition query and hide the slider. When it comes back, re enable the definition query...
priority
remove definition query when close when we close the range slider from the plugin menu we should remove the definition query and hide the slider when it comes back re enable the definition query
1
261,344
8,229,966,787
IssuesEvent
2018-09-07 11:12:08
VirtoCommerce/vc-module-catalog
https://api.github.com/repos/VirtoCommerce/vc-module-catalog
closed
Allow to search by product type in indexed search
High priority bug client request
Currently only criteria for database search has ProductType property. Indexed search isn't support it
1.0
Allow to search by product type in indexed search - Currently only criteria for database search has ProductType property. Indexed search isn't support it
priority
allow to search by product type in indexed search currently only criteria for database search has producttype property indexed search isn t support it
1
597,159
18,156,497,052
IssuesEvent
2021-09-27 02:49:53
jakartaresearch/earth-vision
https://api.github.com/repos/jakartaresearch/earth-vision
closed
Cars Overhead With Context (COWC) 54GB
good first issue priority: high type: feature work: obvious
- [Cars Overhead With Context (COWC)](https://gdo152.llnl.gov/cowc/) (Lawrence Livermore National Laboratory, Sep 2016) 32k car bounding boxes, aerial imagery (0.15m res.), 6 cities, Paper: Mundhenk et al. 2016 Paper: [A Large Contextual Dataset for Classification, Detection, and Counting of Cars with Deep Learning](https://link.springer.com/chapter/10.1007/978-3-319-46487-9_48)
1.0
Cars Overhead With Context (COWC) 54GB - - [Cars Overhead With Context (COWC)](https://gdo152.llnl.gov/cowc/) (Lawrence Livermore National Laboratory, Sep 2016) 32k car bounding boxes, aerial imagery (0.15m res.), 6 cities, Paper: Mundhenk et al. 2016 Paper: [A Large Contextual Dataset for Classification, Detection, and Counting of Cars with Deep Learning](https://link.springer.com/chapter/10.1007/978-3-319-46487-9_48)
priority
cars overhead with context cowc lawrence livermore national laboratory sep car bounding boxes aerial imagery res cities paper mundhenk et al paper
1
295,971
9,102,958,861
IssuesEvent
2019-02-20 14:55:04
eaudeweb/ozone
https://api.github.com/repos/eaudeweb/ozone
closed
"Create submission" default values
Component: Backend Component: Vue Priority: High Status: In progress Type: Feedback
We think it would be nice for the “Create submission” area to have default values for “Obligation” and “Period”. These should be “Article 7” and “the immediate year preceding the current year” (2018 for this year). This way, the very initial view has some utility. The default values should be set from the Django Admin.
1.0
"Create submission" default values - We think it would be nice for the “Create submission” area to have default values for “Obligation” and “Period”. These should be “Article 7” and “the immediate year preceding the current year” (2018 for this year). This way, the very initial view has some utility. The default values should be set from the Django Admin.
priority
create submission default values we think it would be nice for the “create submission” area to have default values for “obligation” and “period” these should be “article ” and “the immediate year preceding the current year” for this year this way the very initial view has some utility the default values should be set from the django admin
1
412,118
12,035,524,827
IssuesEvent
2020-04-13 18:02:42
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
opened
[platform] expired transaction and conflict metrics shows incorrect values
area/platform kind/bug priority/high
Show correct values with every refresh for Expired Transactions and Conflicts in the Transactions metric shown in the DocDB section.
1.0
[platform] expired transaction and conflict metrics shows incorrect values - Show correct values with every refresh for Expired Transactions and Conflicts in the Transactions metric shown in the DocDB section.
priority
expired transaction and conflict metrics shows incorrect values show correct values with every refresh for expired transactions and conflicts in the transactions metric shown in the docdb section
1
549,722
16,098,912,384
IssuesEvent
2021-04-27 06:38:07
ever-co/ever-gauzy
https://api.github.com/repos/ever-co/ever-gauzy
closed
Bug: Save button not work in edit organizations in location
priority: highest type: bug :bug:
The "Save" button did not work on this page when I try to edit some data, please create GitHub Issue and assign to yourself to fix ![image (2)](https://user-images.githubusercontent.com/70377939/111271807-b90f5a80-8657-11eb-8def-7fcc5584ed05.png)
1.0
Bug: Save button not work in edit organizations in location - The "Save" button did not work on this page when I try to edit some data, please create GitHub Issue and assign to yourself to fix ![image (2)](https://user-images.githubusercontent.com/70377939/111271807-b90f5a80-8657-11eb-8def-7fcc5584ed05.png)
priority
bug save button not work in edit organizations in location the save button did not work on this page when i try to edit some data please create github issue and assign to yourself to fix
1
93,898
3,916,580,737
IssuesEvent
2016-04-21 02:42:24
phetsims/circuit-construction-kit-basics
https://api.github.com/repos/phetsims/circuit-construction-kit-basics
closed
Objects placed outside of the dev bounds can get lost when the window resized
priority:2-high
Objects placed outside of the dev bounds can get lost when the window resized @arouinfar and @ariel-phet how would you recommend to address this problem? ![devbounds](https://cloud.githubusercontent.com/assets/679486/14164553/c0351cfc-f6bd-11e5-8687-38015f1de4e0.gif)
1.0
Objects placed outside of the dev bounds can get lost when the window resized - Objects placed outside of the dev bounds can get lost when the window resized @arouinfar and @ariel-phet how would you recommend to address this problem? ![devbounds](https://cloud.githubusercontent.com/assets/679486/14164553/c0351cfc-f6bd-11e5-8687-38015f1de4e0.gif)
priority
objects placed outside of the dev bounds can get lost when the window resized objects placed outside of the dev bounds can get lost when the window resized arouinfar and ariel phet how would you recommend to address this problem
1
356,268
10,590,921,614
IssuesEvent
2019-10-09 09:45:25
larray-project/larray
https://api.github.com/repos/larray-project/larray
closed
rename LArray class to Array?
priority: high syntax work in progress
Pros: * We are in the larray module, so the L of LArray is kinda redundant * It would be one less character to type * The L of LArray is a bit weird for arrays without labels (arrays with only wildcard axes) * I like it :) Cons: * more deprecated warnings for existing users (though a single string replace could fix the problem) * LGroup will look slightly odd (but having Group and IGroup would be worse). On the other hand, if we converge both kinds of groups to Group (like I discussed in a few issues) this would no longer be a con but a pro.
1.0
rename LArray class to Array? - Pros: * We are in the larray module, so the L of LArray is kinda redundant * It would be one less character to type * The L of LArray is a bit weird for arrays without labels (arrays with only wildcard axes) * I like it :) Cons: * more deprecated warnings for existing users (though a single string replace could fix the problem) * LGroup will look slightly odd (but having Group and IGroup would be worse). On the other hand, if we converge both kinds of groups to Group (like I discussed in a few issues) this would no longer be a con but a pro.
priority
rename larray class to array pros we are in the larray module so the l of larray is kinda redundant it would be one less character to type the l of larray is a bit weird for arrays without labels arrays with only wildcard axes i like it cons more deprecated warnings for existing users though a single string replace could fix the problem lgroup will look slightly odd but having group and igroup would be worse on the other hand if we converge both kinds of groups to group like i discussed in a few issues this would no longer be a con but a pro
1
126,703
5,002,610,437
IssuesEvent
2016-12-11 13:55:29
gama-platform/gama
https://api.github.com/repos/gama-platform/gama
closed
Enhancement: allow additional or third-party plugins to contribute to the toolbar/action bars
> Enhancement Affects Usability Concerns Interface In Extensions Priority High
``` 1. In order to keep the UI as uncluttered as possible, a startup mechanism, implemented in ActionWiper.class, wipes all the ActionSets that are not defined in GAMA. Unfortunately, instead of relying on the list of ActionSets contributed by the plugins included in GAMA, it systematically removes all the ones that do not include "gama" in their id. 2. The result is that plugins that only install their contributions in the form of ActionSets (like the ones present here: https://code.google.com/p/sandipchitaleseclipseplugins/, notably the VERY USEFUL Color Sampler, Snapshot tool, Search Bar), although they are compatible with GAMA, are not available to the user once installed. They can be briefly seen at startup, and then disappear. This is frustrating and fairly stupid. To test them, it is possible to switch to the Resource Perspective, but it is not a viable solution on the long-term. 3. There are some possibilities to implement a better mechanism for the next release: - Add some preferences populated with the current list of "wiped" action sets, and allows the user to select/deselect them. - Do the contrary: maintain a list of the plugins allowed to contribute and asks the user, for each new installation, whether their contribution should be accepted. - Automatically allow new plugins to install their contributions (meaning that the removal of ActionSets needs to rely on some hard-coded information about the "pristine" state of GAMA when released — for instance this initial state could be saved when running GAMA for the first time). ``` Original issue reported on code.google.com by `gama.platform` on 2014-11-20 15:12:27 <!--- @huboard:{"order":146.99551391601562,"milestone_order":1134,"custom_state":""} -->
1.0
Enhancement: allow additional or third-party plugins to contribute to the toolbar/action bars - ``` 1. In order to keep the UI as uncluttered as possible, a startup mechanism, implemented in ActionWiper.class, wipes all the ActionSets that are not defined in GAMA. Unfortunately, instead of relying on the list of ActionSets contributed by the plugins included in GAMA, it systematically removes all the ones that do not include "gama" in their id. 2. The result is that plugins that only install their contributions in the form of ActionSets (like the ones present here: https://code.google.com/p/sandipchitaleseclipseplugins/, notably the VERY USEFUL Color Sampler, Snapshot tool, Search Bar), although they are compatible with GAMA, are not available to the user once installed. They can be briefly seen at startup, and then disappear. This is frustrating and fairly stupid. To test them, it is possible to switch to the Resource Perspective, but it is not a viable solution on the long-term. 3. There are some possibilities to implement a better mechanism for the next release: - Add some preferences populated with the current list of "wiped" action sets, and allows the user to select/deselect them. - Do the contrary: maintain a list of the plugins allowed to contribute and asks the user, for each new installation, whether their contribution should be accepted. - Automatically allow new plugins to install their contributions (meaning that the removal of ActionSets needs to rely on some hard-coded information about the "pristine" state of GAMA when released — for instance this initial state could be saved when running GAMA for the first time). ``` Original issue reported on code.google.com by `gama.platform` on 2014-11-20 15:12:27 <!--- @huboard:{"order":146.99551391601562,"milestone_order":1134,"custom_state":""} -->
priority
enhancement allow additional or third party plugins to contribute to the toolbar action bars in order to keep the ui as uncluttered as possible a startup mechanism implemented in actionwiper class wipes all the actionsets that are not defined in gama unfortunately instead of relying on the list of actionsets contributed by the plugins included in gama it systematically removes all the ones that do not include gama in their id the result is that plugins that only install their contributions in the form of actionsets like the ones present here notably the very useful color sampler snapshot tool search bar although they are compatible with gama are not available to the user once installed they can be briefly seen at startup and then disappear this is frustrating and fairly stupid to test them it is possible to switch to the resource perspective but it is not a viable solution on the long term there are some possibilities to implement a better mechanism for the next release add some preferences populated with the current list of wiped action sets and allows the user to select deselect them do the contrary maintain a list of the plugins allowed to contribute and asks the user for each new installation whether their contribution should be accepted automatically allow new plugins to install their contributions meaning that the removal of actionsets needs to rely on some hard coded information about the pristine state of gama when released — for instance this initial state could be saved when running gama for the first time original issue reported on code google com by gama platform on huboard order milestone order custom state
1
458,563
13,177,383,096
IssuesEvent
2020-08-12 07:15:39
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
[0.9.0.0 beta staging-stable-8] Animal Distribution
Category: Tech Priority: High Status: Fixed
**This might be caused by animals not being loaded in or by them all being in one place.** I recently opened up a new singleplayer world. I had not seen any animals while playing over an hour. So I wondered around eventually found some then I headed back to where I started and there were animals all over the place. I believe something similar happened on the play test server. (When it first started) People kept saying they were not seeing animals anywhere and at the same time me living in the mid of the desert was seeing nothing but animals. I saw jaguars, alligators, wolfs, etc. in large numbers. This problem could be caused by all the animals being generated in one place and dispersing overtime and with the case of my world getting scared away from where they were all at. Or this is caused by problems with loading in animals. Because it seemed strange that animals suddenly were everywhere after I had seen any in any direction for over an hour.
1.0
[0.9.0.0 beta staging-stable-8] Animal Distribution - **This might be caused by animals not being loaded in or by them all being in one place.** I recently opened up a new singleplayer world. I had not seen any animals while playing over an hour. So I wondered around eventually found some then I headed back to where I started and there were animals all over the place. I believe something similar happened on the play test server. (When it first started) People kept saying they were not seeing animals anywhere and at the same time me living in the mid of the desert was seeing nothing but animals. I saw jaguars, alligators, wolfs, etc. in large numbers. This problem could be caused by all the animals being generated in one place and dispersing overtime and with the case of my world getting scared away from where they were all at. Or this is caused by problems with loading in animals. Because it seemed strange that animals suddenly were everywhere after I had seen any in any direction for over an hour.
priority
animal distribution this might be caused by animals not being loaded in or by them all being in one place i recently opened up a new singleplayer world i had not seen any animals while playing over an hour so i wondered around eventually found some then i headed back to where i started and there were animals all over the place i believe something similar happened on the play test server when it first started people kept saying they were not seeing animals anywhere and at the same time me living in the mid of the desert was seeing nothing but animals i saw jaguars alligators wolfs etc in large numbers this problem could be caused by all the animals being generated in one place and dispersing overtime and with the case of my world getting scared away from where they were all at or this is caused by problems with loading in animals because it seemed strange that animals suddenly were everywhere after i had seen any in any direction for over an hour
1
307,488
9,417,840,991
IssuesEvent
2019-04-10 17:43:49
visit-dav/visit
https://api.github.com/repos/visit-dav/visit
opened
Robustify CMFE by tetrahedralizatiion
bug impact high impact medium priority wrong results
### Describe the bug When processing cells with faces of more than 3 points, those faces are not necessarily planar. However, we believe portions of the position-based CMFE code assume this. As a result, meshes that do overlap spatially can wind up getting non-overlap value due to numerical issues caused by non-planar faces. The solution is to decompose cells consisting of non-3-point faces into tets, on demand, as the computation proceeds (doing so for the whole input ahead of time could be ridicululous memory consumption) and to do so such that decomposition is consistent across a face encountered multiple times. This is important to an engineering effort/user.
1.0
Robustify CMFE by tetrahedralizatiion - ### Describe the bug When processing cells with faces of more than 3 points, those faces are not necessarily planar. However, we believe portions of the position-based CMFE code assume this. As a result, meshes that do overlap spatially can wind up getting non-overlap value due to numerical issues caused by non-planar faces. The solution is to decompose cells consisting of non-3-point faces into tets, on demand, as the computation proceeds (doing so for the whole input ahead of time could be ridicululous memory consumption) and to do so such that decomposition is consistent across a face encountered multiple times. This is important to an engineering effort/user.
priority
robustify cmfe by tetrahedralizatiion describe the bug when processing cells with faces of more than points those faces are not necessarily planar however we believe portions of the position based cmfe code assume this as a result meshes that do overlap spatially can wind up getting non overlap value due to numerical issues caused by non planar faces the solution is to decompose cells consisting of non point faces into tets on demand as the computation proceeds doing so for the whole input ahead of time could be ridicululous memory consumption and to do so such that decomposition is consistent across a face encountered multiple times this is important to an engineering effort user
1
157,643
6,010,199,867
IssuesEvent
2017-06-06 12:38:44
GeekyAnts/NativeBase
https://api.github.com/repos/GeekyAnts/NativeBase
closed
Windows compatibility - package.json compile script
0.25 high priority
Since 0.5.21, the compile script was added in package.json, which runs bash commands, breaking Windows compatibility. I am not able to perform "npm install" on NativeBase because of this. The error I am getting is the following: > npm ERR! code ELIFECYCLE > npm ERR! native-base@0.5.22 prepublish: `npm run compile` > npm ERR! Exit status 1 > npm ERR! > npm ERR! Failed at the native-base@0.5.22 prepublish script 'npm run compile'. Given this, what is the recommended approach for Windows users ?
1.0
Windows compatibility - package.json compile script - Since 0.5.21, the compile script was added in package.json, which runs bash commands, breaking Windows compatibility. I am not able to perform "npm install" on NativeBase because of this. The error I am getting is the following: > npm ERR! code ELIFECYCLE > npm ERR! native-base@0.5.22 prepublish: `npm run compile` > npm ERR! Exit status 1 > npm ERR! > npm ERR! Failed at the native-base@0.5.22 prepublish script 'npm run compile'. Given this, what is the recommended approach for Windows users ?
priority
windows compatibility package json compile script since the compile script was added in package json which runs bash commands breaking windows compatibility i am not able to perform npm install on nativebase because of this the error i am getting is the following npm err code elifecycle npm err native base prepublish npm run compile npm err exit status npm err npm err failed at the native base prepublish script npm run compile given this what is the recommended approach for windows users
1
491,758
14,170,629,993
IssuesEvent
2020-11-12 14:46:20
pytorch/vision
https://api.github.com/repos/pytorch/vision
closed
File "/usr/local/lib/python3.6/dist-packages/torchvision/ops/feature_pyramid_network.py", line 53, in __init__ raise ValueError("in_channels=0 is currently not supported") ValueError: in_channels=0 is currently not supported
enhancement high priority module: ops triage review
``` File "/usr/local/lib/python3.6/dist-packages/torchvision/ops/feature_pyramid_network.py", line 53, in __init__ raise ValueError("in_channels=0 is currently not supported") ValueError: in_channels=0 is currently not supported ``` How to fix this error in newest vision with older written style which I have a FPN start channel with 0. ``` in_channels_list = [ 0, in_channels_stage2 * 2, in_channels_stage2 * 4, in_channels_stage2 * 8, ] self.body = IntermediateLayerGetter(backbone, return_layers=return_layers) self.fpn = FeaturePyramidNetwork( in_channels_list=in_channels_list, out_channels=out_channels, extra_blocks=LastLevelP6P7(out_channels, out_channels), ) ```
1.0
File "/usr/local/lib/python3.6/dist-packages/torchvision/ops/feature_pyramid_network.py", line 53, in __init__ raise ValueError("in_channels=0 is currently not supported") ValueError: in_channels=0 is currently not supported - ``` File "/usr/local/lib/python3.6/dist-packages/torchvision/ops/feature_pyramid_network.py", line 53, in __init__ raise ValueError("in_channels=0 is currently not supported") ValueError: in_channels=0 is currently not supported ``` How to fix this error in newest vision with older written style which I have a FPN start channel with 0. ``` in_channels_list = [ 0, in_channels_stage2 * 2, in_channels_stage2 * 4, in_channels_stage2 * 8, ] self.body = IntermediateLayerGetter(backbone, return_layers=return_layers) self.fpn = FeaturePyramidNetwork( in_channels_list=in_channels_list, out_channels=out_channels, extra_blocks=LastLevelP6P7(out_channels, out_channels), ) ```
priority
file usr local lib dist packages torchvision ops feature pyramid network py line in init raise valueerror in channels is currently not supported valueerror in channels is currently not supported file usr local lib dist packages torchvision ops feature pyramid network py line in init raise valueerror in channels is currently not supported valueerror in channels is currently not supported how to fix this error in newest vision with older written style which i have a fpn start channel with in channels list in channels in channels in channels self body intermediatelayergetter backbone return layers return layers self fpn featurepyramidnetwork in channels list in channels list out channels out channels extra blocks out channels out channels
1
478,196
13,774,673,006
IssuesEvent
2020-10-08 06:40:50
aau-giraf/weekplanner
https://api.github.com/repos/aau-giraf/weekplanner
closed
When uploading a large image to a new weekplan, the image is not uploaded properly
group 6 point: 8 priority: high type: bug
**Describe the bug** When I upload an image taken with my phones camera, the image shows as a loading icon, instead of the actual image. When creating a new activity and searching for the image, the image still shows as a loading icon. **To Reproduce** Steps to reproduce the behavior: 1. Create a new activity 2. Choose an image from the phones internal storage 3. Save and see the error Try also creating a new activity again and searching for the image, to see the error again **Expected behavior** The image to show up as itself **Actual behavior** Image shows as a loading icon **Screenshots** **Environment (please complete the following information):** - Galaxy S9 running Android 10 **Additional context** Add any other context about the problem here.
1.0
When uploading a large image to a new weekplan, the image is not uploaded properly - **Describe the bug** When I upload an image taken with my phones camera, the image shows as a loading icon, instead of the actual image. When creating a new activity and searching for the image, the image still shows as a loading icon. **To Reproduce** Steps to reproduce the behavior: 1. Create a new activity 2. Choose an image from the phones internal storage 3. Save and see the error Try also creating a new activity again and searching for the image, to see the error again **Expected behavior** The image to show up as itself **Actual behavior** Image shows as a loading icon **Screenshots** **Environment (please complete the following information):** - Galaxy S9 running Android 10 **Additional context** Add any other context about the problem here.
priority
when uploading a large image to a new weekplan the image is not uploaded properly describe the bug when i upload an image taken with my phones camera the image shows as a loading icon instead of the actual image when creating a new activity and searching for the image the image still shows as a loading icon to reproduce steps to reproduce the behavior create a new activity choose an image from the phones internal storage save and see the error try also creating a new activity again and searching for the image to see the error again expected behavior the image to show up as itself actual behavior image shows as a loading icon screenshots environment please complete the following information galaxy running android additional context add any other context about the problem here
1
73,191
3,408,999,310
IssuesEvent
2015-12-04 13:49:13
johnlees/seer
https://api.github.com/repos/johnlees/seer
closed
Put sample names in kmds output
enhancement high priority interface
Have a column of sample names in structure matrix, and check these when merging at the start of seer. This will prevent errors if different .pheno files are used
1.0
Put sample names in kmds output - Have a column of sample names in structure matrix, and check these when merging at the start of seer. This will prevent errors if different .pheno files are used
priority
put sample names in kmds output have a column of sample names in structure matrix and check these when merging at the start of seer this will prevent errors if different pheno files are used
1
374,213
11,082,021,538
IssuesEvent
2019-12-13 11:05:06
laurapauly/Travellet
https://api.github.com/repos/laurapauly/Travellet
opened
As a user I want to be able to add a new trip
High Priority User-Story
[ ] travel destination [ ] date [ ] travelbuddy [ ] space for notes [ ] budget
1.0
As a user I want to be able to add a new trip - [ ] travel destination [ ] date [ ] travelbuddy [ ] space for notes [ ] budget
priority
as a user i want to be able to add a new trip travel destination date travelbuddy space for notes budget
1
725,893
24,979,833,234
IssuesEvent
2022-11-02 10:47:49
WordPress/Learn
https://api.github.com/repos/WordPress/Learn
closed
Classic Theme Template Parts - Tutorial
[Priority] High Theme [Experience Level] Intermediate [Audience] Developers [Content Type] Tutorial Review 1 complete Review 2 complete Ready to publish Review 3 complete 6.1 hacktoberfest
# Topic Description How to enable block templates and template parts in classic themes (non-FSE) # Related Resources Links to related content on Learn, HelpHub, DevHub, GitHub Gutenberg Issues, DevNotes, etc. Code example: [Very simple example theme](https://github.com/Mamaduka/block-fragments) with more details for implementation in [Testing and Feedback for using block based template parts in classic themes](https://make.wordpress.org/themes/2022/09/12/testing-and-feedback-for-using-block-based-template-parts-in-classic-themes/). For a more detailed guide, [review this post from the Gutenberg Times](https://gutenbergtimes.com/building-a-block-based-header-template-in-a-classic-theme/). Visual: [video of the experience of using template parts in classic themes](https://drive.google.com/file/d/1qy6jonIbX9rTQSiqEvyVOe4W9Qrrmy2L/view?usp=sharing). Key Make Posts/GitHub/Trac Issue(s): [Testing and Feedback for using block based template parts in classic themes](https://make.wordpress.org/themes/2022/09/12/testing-and-feedback-for-using-block-based-template-parts-in-classic-themes/). For a more detailed walkthrough, [review this post from the Gutenberg Times](https://gutenbergtimes.com/block-based-template-parts-for-classic-themes/). 
# Guidelines Review the [team guidelines] (https://make.wordpress.org/training/handbook/guidelines/) # Tutorial Development Checklist - [x] Vetted by instructional designers for content idea - [x] Provide feedback of the idea - [x] Gather links to Support and Developer Docs - [x] Consider any MarComms (marketing communications) resources and link to those - [x] Review any related material on Learn - [x] Define several SEO keywords to use in the article and where they should be prominently used - [x] Description and Objectives finalized - [x] Create an outline of the workshop - [x] Tutorial submitted to the team for Q/A review https://blog.wordpress.tv/submission-guidelines/ & https://make.wordpress.org/training/2021/08/17/proposal-brand-guidelines-for-learn-wordpress-content/ - [x] Tutorial submitted to WPTV https://wordpress.tv/submit-video/ - [x] Tutorial published on WPTV - [x] Tutorial is captioned https://make.wordpress.org/training/handbook/tutorials/tutorial-subtitles-and-transcripts/ - [x] Tutorial created on Learn.WordPress.org - [x] Tutorial post is reviewed for grammar, spelling, etc. - [x] Tutorial published on Learn.WordPress.org - [x] Tutorial announced to training team - [x] Tutorial announced to creator - [x] Tutorial announced to Marketing Team for promotion - [ ] Gather feedback from workshop viewers/participants
1.0
Classic Theme Template Parts - Tutorial - # Topic Description How to enable block templates and template parts in classic themes (non-FSE) # Related Resources Links to related content on Learn, HelpHub, DevHub, GitHub Gutenberg Issues, DevNotes, etc. Code example: [Very simple example theme](https://github.com/Mamaduka/block-fragments) with more details for implementation in [Testing and Feedback for using block based template parts in classic themes](https://make.wordpress.org/themes/2022/09/12/testing-and-feedback-for-using-block-based-template-parts-in-classic-themes/). For a more detailed guide, [review this post from the Gutenberg Times](https://gutenbergtimes.com/building-a-block-based-header-template-in-a-classic-theme/). Visual: [video of the experience of using template parts in classic themes](https://drive.google.com/file/d/1qy6jonIbX9rTQSiqEvyVOe4W9Qrrmy2L/view?usp=sharing). Key Make Posts/GitHub/Trac Issue(s): [Testing and Feedback for using block based template parts in classic themes](https://make.wordpress.org/themes/2022/09/12/testing-and-feedback-for-using-block-based-template-parts-in-classic-themes/). For a more detailed walkthrough, [review this post from the Gutenberg Times](https://gutenbergtimes.com/block-based-template-parts-for-classic-themes/). 
# Guidelines Review the [team guidelines] (https://make.wordpress.org/training/handbook/guidelines/) # Tutorial Development Checklist - [x] Vetted by instructional designers for content idea - [x] Provide feedback of the idea - [x] Gather links to Support and Developer Docs - [x] Consider any MarComms (marketing communications) resources and link to those - [x] Review any related material on Learn - [x] Define several SEO keywords to use in the article and where they should be prominently used - [x] Description and Objectives finalized - [x] Create an outline of the workshop - [x] Tutorial submitted to the team for Q/A review https://blog.wordpress.tv/submission-guidelines/ & https://make.wordpress.org/training/2021/08/17/proposal-brand-guidelines-for-learn-wordpress-content/ - [x] Tutorial submitted to WPTV https://wordpress.tv/submit-video/ - [x] Tutorial published on WPTV - [x] Tutorial is captioned https://make.wordpress.org/training/handbook/tutorials/tutorial-subtitles-and-transcripts/ - [x] Tutorial created on Learn.WordPress.org - [x] Tutorial post is reviewed for grammar, spelling, etc. - [x] Tutorial published on Learn.WordPress.org - [x] Tutorial announced to training team - [x] Tutorial announced to creator - [x] Tutorial announced to Marketing Team for promotion - [ ] Gather feedback from workshop viewers/participants
priority
classic theme template parts tutorial topic description how to enable block templates and template parts in classic themes non fse related resources links to related content on learn helphub devhub github gutenberg issues devnotes etc code example with more details for implementation in for a more detailed guide visual key make posts github trac issue s for a more detailed walkthrough guidelines review the tutorial development checklist vetted by instructional designers for content idea provide feedback of the idea gather links to support and developer docs consider any marcomms marketing communications resources and link to those review any related material on learn define several seo keywords to use in the article and where they should be prominently used description and objectives finalized create an outline of the workshop tutorial submitted to the team for q a review tutorial submitted to wptv tutorial published on wptv tutorial is captioned tutorial created on learn wordpress org tutorial post is reviewed for grammar spelling etc tutorial published on learn wordpress org tutorial announced to training team tutorial announced to creator tutorial announced to marketing team for promotion gather feedback from workshop viewers participants
1
508,543
14,702,352,865
IssuesEvent
2021-01-04 13:28:12
fecgov/fec-cms
https://api.github.com/repos/fecgov/fec-cms
closed
[Do on January 4] Update Commissioner pages to reflect new Chair and Vice Chair
High priority Work: Content
### Summary Every year, the leadership of the Commission changes. We will need to ensure that we update all references on the website accordingly once that change takes effect. 2021 Chair: Commissioner Broussard 2021 Vice Chair: Commissioner Dickerson Related ticket: #3137 (reorganizing grid for three person vacancy) ### Completion criteria - [x] Homepage, All Commissioners page, and Leadership and Structure page - On each page, ensure that the new Chair is listed as Chair, the new Vice Chair/Chair is listed as Vice Chair/Chair, and the remaining Commissioners are listed with title removed. (This happens automatically by editing "Commissioner title field on each Commissioner's page - see Wagtail changes below.) - [x] Press Commissioner page at https://www.fec.gov/press/resources-journalists/commissioners/ - the sentence about the Chair is updated. **Wagtail changes** Leadership & structure pages: https://www.fec.gov/about/leadership-and-structure/ This is updated automatically when the bio pages are updated. - [ ] Edit the bio pages for the affected Commissioners to adjust their titles. This is done by filling in or changing what's filled into the "Commissioner Title" field. ![image](https://user-images.githubusercontent.com/24437369/68951040-58ce8700-078b-11ea-98bd-efed5fb830e2.png)
1.0
[Do on January 4] Update Commissioner pages to reflect new Chair and Vice Chair - ### Summary Every year, the leadership of the Commission changes. We will need to ensure that we update all references on the website accordingly once that change takes effect. 2021 Chair: Commissioner Broussard 2021 Vice Chair: Commissioner Dickerson Related ticket: #3137 (reorganizing grid for three person vacancy) ### Completion criteria - [x] Homepage, All Commissioners page, and Leadership and Structure page - On each page, ensure that the new Chair is listed as Chair, the new Vice Chair/Chair is listed as Vice Chair/Chair, and the remaining Commissioners are listed with title removed. (This happens automatically by editing "Commissioner title field on each Commissioner's page - see Wagtail changes below.) - [x] Press Commissioner page at https://www.fec.gov/press/resources-journalists/commissioners/ - the sentence about the Chair is updated. **Wagtail changes** Leadership & structure pages: https://www.fec.gov/about/leadership-and-structure/ This is updated automatically when the bio pages are updated. - [ ] Edit the bio pages for the affected Commissioners to adjust their titles. This is done by filling in or changing what's filled into the "Commissioner Title" field. ![image](https://user-images.githubusercontent.com/24437369/68951040-58ce8700-078b-11ea-98bd-efed5fb830e2.png)
priority
update commissioner pages to reflect new chair and vice chair summary every year the leadership of the commission changes we will need to ensure that we update all references on the website accordingly once that change takes effect chair commissioner broussard vice chair commissioner dickerson related ticket reorganizing grid for three person vacancy completion criteria homepage all commissioners page and leadership and structure page on each page ensure that the new chair is listed as chair the new vice chair chair is listed as vice chair chair and the remaining commissioners are listed with title removed this happens automatically by editing commissioner title field on each commissioner s page see wagtail changes below press commissioner page at the sentence about the chair is updated wagtail changes leadership structure pages this is updated automatically when the bio pages are updated edit the bio pages for the affected commissioners to adjust their titles this is done by filling in or changing what s filled into the commissioner title field
1
75,220
3,460,352,386
IssuesEvent
2015-12-19 03:31:36
antialiasis/serebii-fanfic-awards
https://api.github.com/repos/antialiasis/serebii-fanfic-awards
closed
Restrict nomination drop-downs to items already nominated this year
enhancement nominations priority: high
To stop nomination drop-downs from getting huge and cluttered with ineligible items, restrict them to showing items nominated during the current awards cycle. Make sure this doesn't cause problems if a user nominates something not nominated yet this year but already in the database.
1.0
Restrict nomination drop-downs to items already nominated this year - To stop nomination drop-downs from getting huge and cluttered with ineligible items, restrict them to showing items nominated during the current awards cycle. Make sure this doesn't cause problems if a user nominates something not nominated yet this year but already in the database.
priority
restrict nomination drop downs to items already nominated this year to stop nomination drop downs from getting huge and cluttered with ineligible items restrict them to showing items nominated during the current awards cycle make sure this doesn t cause problems if a user nominates something not nominated yet this year but already in the database
1
435,168
12,532,198,685
IssuesEvent
2020-06-04 15:35:48
luna/enso
https://api.github.com/repos/luna/enso
opened
Changeset Algorithm Optimizations
Category: Backend Category: RTS Change: Non-Breaking Difficulty: Core Contributor Priority: High Type: Enhancement
### Summary <!-- - A summary of the task. --> Changeset algorithm computes the set of invalidated nodes after applying the text edit. It calculates the nodes directly affected by the edit and then uses the metadata from the Dataflow Analysis pass to compute _all_ invalidated nodes. Potential improvements could be: - Return the outermost fully invalidated node in the tree, instead of returning the innermost as it is currently implemented. This will reduce the calls to the Dataflow Analysis metadata and should improve the overall performance. - Merge overlapped edits. When two consecutive edits overlap, we can create a single edit by combining the two. This will reduce the number of applied edits. ### Value <!-- - This section should describe the value of this task. - This value can be for users, to the team, etc. --> Improve the speed of recompilation. ### Specification <!-- - Detailed requirements for the feature. - The performance requirements for the feature. --> - [ ] Implement the optimization described in the summary section. - [ ] Add a benchmark measuring the performance of Changeset algorithm. - [ ] Ensure that edits are applied correctly. `applyEdits` performs some validation and returns an optional value: ``` java Optional<Rope> editedSource = JavaEditorAdapter.applyEdits(module.getLiteralSource(), edits); editedSource.ifPresent(module::setLiteralSource); ``` Probably we need to check that edits are applied correctly in the language-server and only send edit the notification to the runtime in case of success. ### Acceptance Criteria & Test Cases <!-- - Any criteria that must be satisfied for the task to be accepted. - The test plan for the feature, related to the acceptance criteria. --> - There is a benchmark measuring the Changeset algorithm performance. - The optimizations described in the summary section are implemented.
1.0
Changeset Algorithm Optimizations - ### Summary <!-- - A summary of the task. --> Changeset algorithm computes the set of invalidated nodes after applying the text edit. It calculates the nodes directly affected by the edit and then uses the metadata from the Dataflow Analysis pass to compute _all_ invalidated nodes. Potential improvements could be: - Return the outermost fully invalidated node in the tree, instead of returning the innermost as it is currently implemented. This will reduce the calls to the Dataflow Analysis metadata and should improve the overall performance. - Merge overlapped edits. When two consecutive edits overlap, we can create a single edit by combining the two. This will reduce the number of applied edits. ### Value <!-- - This section should describe the value of this task. - This value can be for users, to the team, etc. --> Improve the speed of recompilation. ### Specification <!-- - Detailed requirements for the feature. - The performance requirements for the feature. --> - [ ] Implement the optimization described in the summary section. - [ ] Add a benchmark measuring the performance of Changeset algorithm. - [ ] Ensure that edits are applied correctly. `applyEdits` performs some validation and returns an optional value: ``` java Optional<Rope> editedSource = JavaEditorAdapter.applyEdits(module.getLiteralSource(), edits); editedSource.ifPresent(module::setLiteralSource); ``` Probably we need to check that edits are applied correctly in the language-server and only send edit the notification to the runtime in case of success. ### Acceptance Criteria & Test Cases <!-- - Any criteria that must be satisfied for the task to be accepted. - The test plan for the feature, related to the acceptance criteria. --> - There is a benchmark measuring the Changeset algorithm performance. - The optimizations described in the summary section are implemented.
priority
changeset algorithm optimizations summary a summary of the task changeset algorithm computes the set of invalidated nodes after applying the text edit it calculates the nodes directly affected by the edit and then uses the metadata from the dataflow analysis pass to compute all invalidated nodes potential improvements could be return the outermost fully invalidated node in the tree instead of returning the innermost as it is currently implemented this will reduce the calls to the dataflow analysis metadata and should improve the overall performance merge overlapped edits when two consecutive edits overlap we can create a single edit by combining the two this will reduce the number of applied edits value this section should describe the value of this task this value can be for users to the team etc improve the speed of recompilation specification detailed requirements for the feature the performance requirements for the feature implement the optimization described in the summary section add a benchmark measuring the performance of changeset algorithm ensure that edits are applied correctly applyedits performs some validation and returns an optional value java optional editedsource javaeditoradapter applyedits module getliteralsource edits editedsource ifpresent module setliteralsource probably we need to check that edits are applied correctly in the language server and only send edit the notification to the runtime in case of success acceptance criteria test cases any criteria that must be satisfied for the task to be accepted the test plan for the feature related to the acceptance criteria there is a benchmark measuring the changeset algorithm performance the optimizations described in the summary section are implemented
1
151,101
5,798,182,448
IssuesEvent
2017-05-03 00:30:31
fui/fui-kk
https://api.github.com/repos/fui/fui-kk
closed
Cannot create files with "*" in the name in Windows.
high priority windows
In download_reports.py, if the downloaded report has "*" in its title, download report will try to create a file with this name in Windows, which is not possible. I suggest changing this behaviour in some manner so that the file is either created with a different character which is supported, or skipped. Error message: > Fetching Evaluation INF**** - V2017 (id=77928) Traceback (most recent call last): File "scripts/download_reports.py", line 205, in <module> main() File "scripts/download_reports.py", line 199, in main download_files(driver, args) File "scripts/download_reports.py", line 177, in download_files write_to_file(tsv_path, name_underscored, 'tsv', response.text) File "scripts/download_reports.py", line 75, in write_to_file with open(filename, 'w', encoding="utf-8") as f: OSError: [Errno 22] Invalid argument: './downloads/tsv/Evaluation_INF****_-_V2017.tsv' make.exe": *** [download] Error 1
1.0
Cannot create files with "*" in the name in Windows. - In download_reports.py, if the downloaded report has "*" in its title, download report will try to create a file with this name in Windows, which is not possible. I suggest changing this behaviour in some manner so that the file is either created with a different character which is supported, or skipped. Error message: > Fetching Evaluation INF**** - V2017 (id=77928) Traceback (most recent call last): File "scripts/download_reports.py", line 205, in <module> main() File "scripts/download_reports.py", line 199, in main download_files(driver, args) File "scripts/download_reports.py", line 177, in download_files write_to_file(tsv_path, name_underscored, 'tsv', response.text) File "scripts/download_reports.py", line 75, in write_to_file with open(filename, 'w', encoding="utf-8") as f: OSError: [Errno 22] Invalid argument: './downloads/tsv/Evaluation_INF****_-_V2017.tsv' make.exe": *** [download] Error 1
priority
cannot create files with in the name in windows in download reports py if the downloaded report has in its title download report will try to create a file with this name in windows which is not possible i suggest changing this behaviour in some manner so that the file is either created with a different character which is supported or skipped error message fetching evaluation inf id traceback most recent call last file scripts download reports py line in main file scripts download reports py line in main download files driver args file scripts download reports py line in download files write to file tsv path name underscored tsv response text file scripts download reports py line in write to file with open filename w encoding utf as f oserror invalid argument downloads tsv evaluation inf tsv make exe error
1
106,419
4,272,102,602
IssuesEvent
2016-07-13 13:34:54
northern-bites/nbites
https://api.github.com/repos/northern-bites/nbites
closed
Respond to whistle
Behaviors Bug Fixing High Priority
@philipkoch has implemented whistle detection but we need to make sure we are responding to it correctly in behaviors.
1.0
Respond to whistle - @philipkoch has implemented whistle detection but we need to make sure we are responding to it correctly in behaviors.
priority
respond to whistle philipkoch has implemented whistle detection but we need to make sure we are responding to it correctly in behaviors
1
432,677
12,496,593,140
IssuesEvent
2020-06-01 15:05:27
CatalogueOfLife/data
https://api.github.com/repos/CatalogueOfLife/data
opened
Fix sectors for missing taxa
assembly sector high priority
Fix the sector registration for the missing taxa brought back in #111.
1.0
Fix sectors for missing taxa - Fix the sector registration for the missing taxa brought back in #111.
priority
fix sectors for missing taxa fix the sector registration for the missing taxa brought back in
1
217,917
7,328,991,275
IssuesEvent
2018-03-05 01:52:33
BuckleScript/bucklescript
https://api.github.com/repos/BuckleScript/bucklescript
closed
Defining or using a module named "Block" or "Curry" causes runtime errors when certain features are used
PRIORITY:HIGH bug
Possible solutions: - Mangle reserved module names - Emit compile time error when encountering reserved module names - Rename internal modules to something much less likely to conflict with user-defined module names Repros: https://reasonml.github.io/try/?ocaml=LYewJgrgNgpgBAISiAxgazgXjgZwC4BOEKecMAdmAFB4CeADvKdgJLmkgBmcAlu3AB84AZUJ8A5nC64x5cVVikAHljhtSAFgBMVIA ```ml module Block = struct end type t = Int of int | String of string let x = Int 42 ``` https://reasonml.github.io/try/?ocaml=LYewJgrgNgpgBAYQgJ2QTzgXjgZwC7IQDGecMAdmAFBWykCGADo1BgGZwAeWVccdcAPpY4AKRwA6KCADmACgBEOAJZh4MNmxgkFASjjLycDp15xaMUsOxMWGcVNlwATEA ```ml module Curry = struct end let apply f x = let _ = Js.log("side effect") in f x let _ = apply Js.log 2 ```
1.0
Defining or using a module named "Block" or "Curry" causes runtime errors when certain features are used - Possible solutions: - Mangle reserved module names - Emit compile time error when encountering reserved module names - Rename internal modules to something much less likely to conflict with user-defined module names Repros: https://reasonml.github.io/try/?ocaml=LYewJgrgNgpgBAISiAxgazgXjgZwC4BOEKecMAdmAFB4CeADvKdgJLmkgBmcAlu3AB84AZUJ8A5nC64x5cVVikAHljhtSAFgBMVIA ```ml module Block = struct end type t = Int of int | String of string let x = Int 42 ``` https://reasonml.github.io/try/?ocaml=LYewJgrgNgpgBAYQgJ2QTzgXjgZwC7IQDGecMAdmAFBWykCGADo1BgGZwAeWVccdcAPpY4AKRwA6KCADmACgBEOAJZh4MNmxgkFASjjLycDp15xaMUsOxMWGcVNlwATEA ```ml module Curry = struct end let apply f x = let _ = Js.log("side effect") in f x let _ = apply Js.log 2 ```
priority
defining or using a module named block or curry causes runtime errors when certain features are used possible solutions mangle reserved module names emit compile time error when encountering reserved module names rename internal modules to something much less likely to conflict with user defined module names repros ml module block struct end type t int of int string of string let x int ml module curry struct end let apply f x let js log side effect in f x let apply js log
1
186,791
6,742,705,367
IssuesEvent
2017-10-20 08:55:43
davidberard2/SOEN341GROUPC
https://api.github.com/repos/davidberard2/SOEN341GROUPC
closed
Icon image
app icon feature high priority sp 2
By creating an image for the app, it will allow people to recognize the app on Google Play Store. - [x] Create Logo - [x] Upload logo to GitHub - [x] Implement app icon image Link to relevant User Story
1.0
Icon image - By creating an image for the app, it will allow people to recognize the app on Google Play Store. - [x] Create Logo - [x] Upload logo to GitHub - [x] Implement app icon image Link to relevant User Story
priority
icon image by creating an image for the app it will allow people to recognize the app on google play store create logo upload logo to github implement app icon image link to relevant user story
1
490,210
14,116,730,443
IssuesEvent
2020-11-08 05:05:34
AY2021S1-CS2113T-W12-4/tp
https://api.github.com/repos/AY2021S1-CS2113T-W12-4/tp
closed
Handle StorageCorruptedException
priority.High type.Enhancement
Ask if user wants to reset, or carry out troubleshooting steps by themselves.
1.0
Handle StorageCorruptedException - Ask if user wants to reset, or carry out troubleshooting steps by themselves.
priority
handle storagecorruptedexception ask if user wants to reset or carry out troubleshooting steps by themselves
1
808,272
30,053,455,159
IssuesEvent
2023-06-28 03:54:05
juno-fx/report
https://api.github.com/repos/juno-fx/report
opened
Notes Visibility
enhancement high priority
In hubble, it is very hard to find any notes that have been issued to the user. I would like to setup a notes stream for the user to see what they need to address.
1.0
Notes Visibility - In hubble, it is very hard to find any notes that have been issued to the user. I would like to setup a notes stream for the user to see what they need to address.
priority
notes visibility in hubble it is very hard to find any notes that have been issued to the user i would like to setup a notes stream for the user to see what they need to address
1
754,851
26,406,145,468
IssuesEvent
2023-01-13 08:11:49
Northeastern-Electric-Racing/shepherd_bms
https://api.github.com/repos/Northeastern-Electric-Racing/shepherd_bms
closed
Implement a Cell Balancing Algorithm (Charging)
Feature Optimization High Priority
When doing this, it'll be important to weigh performance vs efficiency, so we need a way of easily configuring the parameters for cell balancing
1.0
Implement a Cell Balancing Algorithm (Charging) - When doing this, it'll be important to weigh performance vs efficiency, so we need a way of easily configuring the parameters for cell balancing
priority
implement a cell balancing algorithm charging when doing this it ll be important to weigh performance vs efficiency so we need a way of easily configuring the parameters for cell balancing
1
86,521
3,725,272,312
IssuesEvent
2016-03-05 00:07:37
RestComm/mediaserver
https://api.github.com/repos/RestComm/mediaserver
closed
Deny access to datagram channel from BouncyCastle DTLS
enhancement High-Priority WebRTC
Currently, the BouncyCastle DTLS classes have access to the DatagramChannel via [NioUdpTransport](https://github.com/RestComm/mediaserver/blob/master/io/rtp/src/main/java/org/mobicents/media/server/impl/srtp/NioUdpTransport.java#L56). This may lead to problems as both UDPManager and [DTLSReliableHandshake](https://github.com/bcgit/bc-java/blob/master/core/src/main/java/org/bouncycastle/crypto/tls/DTLSReliableHandshake.java) are reading/writing from the channel concurrently. I recommend shielding the DatagramChannel from the DTLS classes and let the UDPManager be the only class allowed to perform IO operations on it. To achieve this, a DTLS handler must be attached to the channel (joining the collection of handlers: RTP, RTCP, STUN) and able to [recognize incoming DTLS packets](https://tools.ietf.org/html/rfc5764#section-5.1.2). Whenever a DTLS packet comes in, the DtlsHandler places it in a rxQueue. This queue is access whenever the DTLS handshake algorithm attempts to [perform a read operation](https://github.com/RestComm/mediaserver/blob/master/io/rtp/src/main/java/org/mobicents/media/server/impl/srtp/NioUdpTransport.java#L90).
1.0
Deny access to datagram channel from BouncyCastle DTLS - Currently, the BouncyCastle DTLS classes have access to the DatagramChannel via [NioUdpTransport](https://github.com/RestComm/mediaserver/blob/master/io/rtp/src/main/java/org/mobicents/media/server/impl/srtp/NioUdpTransport.java#L56). This may lead to problems as both UDPManager and [DTLSReliableHandshake](https://github.com/bcgit/bc-java/blob/master/core/src/main/java/org/bouncycastle/crypto/tls/DTLSReliableHandshake.java) are reading/writing from the channel concurrently. I recommend shielding the DatagramChannel from the DTLS classes and let the UDPManager be the only class allowed to perform IO operations on it. To achieve this, a DTLS handler must be attached to the channel (joining the collection of handlers: RTP, RTCP, STUN) and able to [recognize incoming DTLS packets](https://tools.ietf.org/html/rfc5764#section-5.1.2). Whenever a DTLS packet comes in, the DtlsHandler places it in a rxQueue. This queue is access whenever the DTLS handshake algorithm attempts to [perform a read operation](https://github.com/RestComm/mediaserver/blob/master/io/rtp/src/main/java/org/mobicents/media/server/impl/srtp/NioUdpTransport.java#L90).
priority
deny access to datagram channel from bouncycastle dtls currently the bouncycastle dtls classes have access to the datagramchannel via this may lead to problems as both udpmanager and are reading writing from the channel concurrently i recommend shielding the datagramchannel from the dtls classes and let the udpmanager be the only class allowed to perform io operations on it to achieve this a dtls handler must be attached to the channel joining the collection of handlers rtp rtcp stun and able to whenever a dtls packet comes in the dtlshandler places it in a rxqueue this queue is access whenever the dtls handshake algorithm attempts to
1
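The packet-recognition step described in the DTLS record above follows the RFC 5764 §5.1.2 demultiplexing rule: the first byte of each datagram identifies STUN (0–3), DTLS (20–63), or RTP/RTCP (128–191). A minimal Python sketch of a handler that classifies incoming packets and parks DTLS ones in an rx queue — class and method names here are illustrative, not the media server's actual API:

```python
from collections import deque

def classify(packet: bytes) -> str:
    """Demultiplex per RFC 5764 section 5.1.2 using the first byte."""
    if not packet:
        return "empty"
    b = packet[0]
    if 0 <= b <= 3:
        return "stun"
    if 20 <= b <= 63:
        return "dtls"
    if 128 <= b <= 191:
        return "rtp"  # RTP or RTCP; telling them apart needs deeper inspection
    return "unknown"

class DtlsHandler:
    """Hypothetical handler: buffers DTLS packets for the handshake reader."""
    def __init__(self):
        self.rx_queue = deque()

    def offer(self, packet: bytes) -> bool:
        # Only DTLS packets are queued; other handlers claim the rest.
        if classify(packet) == "dtls":
            self.rx_queue.append(packet)
            return True
        return False

    def receive(self):
        """Called by the handshake's read operation instead of the channel."""
        return self.rx_queue.popleft() if self.rx_queue else None
```

With this shape, only the owner of the channel performs socket IO; the handshake code drains `rx_queue` instead.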
927
2,505,201,798
IssuesEvent
2015-01-11 06:09:34
chessmasterhong/WaterEmblem
https://api.github.com/repos/chessmasterhong/WaterEmblem
opened
Viewing the unit stats screen the second time during a non-main player unit's turn locks the game in screen.
bug high priority
During a non-main player unit's turn, if the player views the unit stats screen (hovering over any unit and pressing the `SHIFT` key) on the **second** time and tries to escape (using the `ESCAPE` key), the player is unable to leave the screen, leaving the game in an unplayable state. The player should be able to enter and leave the unit stats screen as many times as desired provided that the active unit is a player unit.
1.0
Viewing the unit stats screen the second time during a non-main player unit's turn locks the game in screen. - During a non-main player unit's turn, if the player views the unit stats screen (hovering over any unit and pressing the `SHIFT` key) on the **second** time and tries to escape (using the `ESCAPE` key), the player is unable to leave the screen, leaving the game in an unplayable state. The player should be able to enter and leave the unit stats screen as many times as desired provided that the active unit is a player unit.
priority
viewing the unit stats screen the second time during a non main player unit s turn locks the game in screen during a non main player unit s turn if the player views the unit stats screen hovering over any unit and pressing the shift key on the second time and tries to escape using the escape key the player is unable to leave the screen leaving the game in an unplayable state the player should be able to enter and leave the unit stats screen as many times as desired provided that the active unit is a player unit
1
302,104
9,255,203,967
IssuesEvent
2019-03-16 07:26:20
storybooks/storybook
https://api.github.com/repos/storybooks/storybook
closed
Static file path returned instead of file content on require('./file.svg')
babel / webpack high priority question / support
I'm building a component which loads SVG files. But when I use `icon = import()` or `icon = require()`, the value returned is a static-file-path instead of the content of the file. ``` this.prop.iconName = 'cart' (propType.oneOf) const icon = require(`../assets/svg/${iconName}.svg`) console.log(icon) ``` Yields ``` static/media/cart.67bd7202.svg ``` The issue https://github.com/storybooks/storybook/issues/1776 is about a similar issue, except that I'm using the full control mode, and I did reset the server. I tried no custom loaders, svg-url-loader, url-loader and file-loader. My current .storybook/webpack.config.js: ``` const path = require('path'); module.exports = (baseConfig, env, defaultConfig) => { // Extend defaultConfig as you need. // For example, add scss loader: defaultConfig.module.rules.push({ test: /\.scss$/, loaders: ['style-loader', 'css-loader', 'sass-loader'], include: path.resolve(__dirname,'../') }); defaultConfig.resolve.extensions.push('.scss'); // For example, add SVG loader: defaultConfig.module.rules.push({ test: /\.svg$/, loaders: ['svg-url-loader'], include: path.resolve(__dirname,'../') }); defaultConfig.resolve.extensions.push('.svg'); return defaultConfig; }; ``` Project webpack.config.js: ``` { test: /\.svg$/, use: [ { loader: 'svg-url-loader', options: {}, } ] } "webpack": "^4.27.1", "webpack-cli": "^3.1.2", "webpack-dev-server": "^3.1.14" ``` Command to run storybook locally: `"start-storybook -p 9001 -s .storybook/static -c .storybook"` Can anyone help me in the right direction?
1.0
Static file path returned instead of file content on require('./file.svg') - I'm building a component which loads SVG files. But when I use `icon = import()` or `icon = require()`, the value returned is a static-file-path instead of the content of the file. ``` this.prop.iconName = 'cart' (propType.oneOf) const icon = require(`../assets/svg/${iconName}.svg`) console.log(icon) ``` Yields ``` static/media/cart.67bd7202.svg ``` The issue https://github.com/storybooks/storybook/issues/1776 is about a similar issue, except that I'm using the full control mode, and I did reset the server. I tried no custom loaders, svg-url-loader, url-loader and file-loader. My current .storybook/webpack.config.js: ``` const path = require('path'); module.exports = (baseConfig, env, defaultConfig) => { // Extend defaultConfig as you need. // For example, add scss loader: defaultConfig.module.rules.push({ test: /\.scss$/, loaders: ['style-loader', 'css-loader', 'sass-loader'], include: path.resolve(__dirname,'../') }); defaultConfig.resolve.extensions.push('.scss'); // For example, add SVG loader: defaultConfig.module.rules.push({ test: /\.svg$/, loaders: ['svg-url-loader'], include: path.resolve(__dirname,'../') }); defaultConfig.resolve.extensions.push('.svg'); return defaultConfig; }; ``` Project webpack.config.js: ``` { test: /\.svg$/, use: [ { loader: 'svg-url-loader', options: {}, } ] } "webpack": "^4.27.1", "webpack-cli": "^3.1.2", "webpack-dev-server": "^3.1.14" ``` Command to run storybook locally: `"start-storybook -p 9001 -s .storybook/static -c .storybook"` Can anyone help me in the right direction?
priority
static file path returned instead of file content on require file svg i m building a component which loads svg files but when i use icon import or icon require the value returned is a static file path instead of the content of the file this prop iconname cart proptype oneof const icon require assets svg iconname svg console log icon yields static media cart svg the issue is about a similar issue except that i m using the full control mode and i did reset the server i tried no custom loaders svg url loader url loader and file loader my current storybook webpack config js const path require path module exports baseconfig env defaultconfig extend defaultconfig as you need for example add scss loader defaultconfig module rules push test scss loaders include path resolve dirname defaultconfig resolve extensions push scss for example add svg loader defaultconfig module rules push test svg loaders include path resolve dirname defaultconfig resolve extensions push svg return defaultconfig project webpack config js test svg use loader svg url loader options webpack webpack cli webpack dev server command to run storybook locally start storybook p s storybook static c storybook can anyone help me in the right direction
1
134,535
5,229,262,653
IssuesEvent
2017-01-29 01:02:37
abentele/Fraise
https://api.github.com/repos/abentele/Fraise
closed
Publish Fraise release
marketing priority:high
After renaming Fraise (#46), the software should be published on different sites, like Fraise 3.7.3: Examples: https://de.wikipedia.org/wiki/Liste_von_Texteditoren https://www.macupdate.com/app/mac/33751/fraise http://www.chip.de/downloads/Fraise-Smultron_39197408.html https://fraise.en.softonic.com/mac http://telecharger.tomsguide.fr/Fraise,0301-33070.html http://de.download.cnet.com/Fraise/3000-2079_4-51296.html http://lowendmac.com/misc/10mr/fraise-3.7.3-review.html http://mac.freedownload123.xyz/fraise-373-188d35cd.html http://en.freedownloadmanager.org/Mac-OS/Fraise-FREE.html http://mac.brothersoft.com/fraise.html … and some others Before, the automatic update feature should be implemented: #2
1.0
Publish Fraise release - After renaming Fraise (#46), the software should be published on different sites, like Fraise 3.7.3: Examples: https://de.wikipedia.org/wiki/Liste_von_Texteditoren https://www.macupdate.com/app/mac/33751/fraise http://www.chip.de/downloads/Fraise-Smultron_39197408.html https://fraise.en.softonic.com/mac http://telecharger.tomsguide.fr/Fraise,0301-33070.html http://de.download.cnet.com/Fraise/3000-2079_4-51296.html http://lowendmac.com/misc/10mr/fraise-3.7.3-review.html http://mac.freedownload123.xyz/fraise-373-188d35cd.html http://en.freedownloadmanager.org/Mac-OS/Fraise-FREE.html http://mac.brothersoft.com/fraise.html … and some others Before, the automatic update feature should be implemented: #2
priority
publish fraise release after renaming fraise the software should be published on different sites like fraise examples … and some others before the automatic update feature should be implemented
1
632,740
20,205,653,574
IssuesEvent
2022-02-11 20:01:34
SAP/xsk
https://api.github.com/repos/SAP/xsk
closed
[HDBDD] Duplicate-name error when a column shares its entity's name
parsers priority-high effort-medium customer
There is an error when a column name is the same as the entity name. This behaviour should be allowed. ``` namespace sap.db; @Schema: 'ADMIN' context Products { Entity Item { key Item : String(32); OrderId : String(500); }; }; ``` See com/sap/xsk/parser/hdbdd/symbols/entity/EntitySymbol.java:51
1.0
[HDBDD] Duplicate name error - There is an error when a column name is the same as the entity name. This behaviour should be allowed. ``` namespace sap.db; @Schema: 'ADMIN' context Products { Entity Item { key Item : String(32); OrderId : String(500); }; }; ``` See com/sap/xsk/parser/hdbdd/symbols/entity/EntitySymbol.java:51
priority
duplicate name error there is an error when a column name is the same as the entity name this behaviour should be allowed namespace sap db schema admin context products entity item key item string orderid string see com sap xsk parser hdbdd symbols entity entitysymbol java
1
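A minimal sketch of the scoping behaviour the HDBDD record above asks for: a field may legally share its enclosing entity's name, and only sibling fields clash. This is an illustrative Python model, not the actual `EntitySymbol` Java code:

```python
class EntitySymbol:
    """Illustrative scoped symbol table for one entity."""

    def __init__(self, name: str):
        self.name = name
        self.fields: dict[str, bool] = {}

    def add_field(self, field_name: str) -> None:
        # Reject duplicates only among sibling fields; a clash with the
        # enclosing entity's own name (entity Item with key Item) is allowed.
        if field_name in self.fields:
            raise ValueError(f"duplicate field: {field_name}")
        self.fields[field_name] = True
```

Under this model, `entity Item { key Item : String(32); OrderId : String(500); }` parses cleanly, while two `OrderId` columns would still be rejected.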
434,577
12,520,265,128
IssuesEvent
2020-06-03 15:36:34
wso2/devstudio-tooling-ei
https://api.github.com/repos/wso2/devstudio-tooling-ei
closed
[DSS] Empty <result> tag in query when no output mappings are given
Priority/High
**Description:** When the `Result (Output Mapping)` section in the Edit/Add query view is left empty, it still adds an empty `<result/>` tag to the query element. This will generate an error when deploying the DataService. ![image](https://user-images.githubusercontent.com/25482980/83615421-26a83980-a5a4-11ea-8176-9f4a8655db94.png) **Steps to reproduce:** 1. Create a dataservice. 2. Add a query but do not add anything to the `Result (Output Mapping)` section. 3. Save the query and switch to `source` view. ![image](https://user-images.githubusercontent.com/25482980/83615247-eb0d6f80-a5a3-11ea-837f-05e395e02adb.png)
1.0
[DSS] Empty <result> tag in query when no output mappings are given - **Description:** When the `Result (Output Mapping)` section in the Edit/Add query view is left empty, it still adds an empty `<result/>` tag to the query element. This will generate an error when deploying the DataService. ![image](https://user-images.githubusercontent.com/25482980/83615421-26a83980-a5a4-11ea-8176-9f4a8655db94.png) **Steps to reproduce:** 1. Create a dataservice. 2. Add a query but do not add anything to the `Result (Output Mapping)` section. 3. Save the query and switch to `source` view. ![image](https://user-images.githubusercontent.com/25482980/83615247-eb0d6f80-a5a3-11ea-837f-05e395e02adb.png)
priority
empty tag in query when no output mappings are given description when result output mapping section in the edit add query view is left empty it still adds a empty tag to the query element this will generate an error when deploying the dataservice steps to reproduce create a dataservice add a query but do not add anything to the result output mapping section save the query and switch to source view
1
528,937
15,377,509,163
IssuesEvent
2021-03-02 17:10:07
phetsims/molecule-polarity
https://api.github.com/repos/phetsims/molecule-polarity
opened
Add basic PhET-iO instrumentation
dev:phet-io priority:2-high
@kathy-phet asked me to add basic instrumentation to Molecule Polarity, with high priority. A client (researcher) has made this sim their top priority. Saving state is of primary importance.
1.0
Add basic PhET-iO instrumentation - @kathy-phet asked me to add basic instrumentation to Molecule Polarity, with high priority. A client (researcher) has made this sim their top priority. Saving state is of primary importance.
priority
add basic phet io instrumentation kathy phet asked me to add basic instrumentation to molecule polarity with high priority a client reasearcher has made this sim their top priority saving state is of primary importance
1
197,590
6,961,688,812
IssuesEvent
2017-12-08 10:27:50
xcat2/xcat-core
https://api.github.com/repos/xcat2/xcat-core
closed
makedhcp -d not working
component:coral component:dhcp priority:high sprint2 status:pending
I haven't tried this before today, so maybe I'm doing something wrong. Should this work? ``` [root@mgmt1 ~]# makedhcp -d a05n13 [root@mgmt1 ~]# echo $? 0 [root@mgmt1 ~]# grep a05n13 /var/lib/dhcpd/dhcpd.leases host a05n13 { supersede server.ddns-hostname = "a05n13"; supersede host-name = "a05n13"; supersede conf-file = "http://10.10.0.11/tftpboot/petitboot/a05n13"; ```
1.0
makedhcp -d not working - I haven't tried this before today, so maybe I'm doing something wrong. Should this work? ``` [root@mgmt1 ~]# makedhcp -d a05n13 [root@mgmt1 ~]# echo $? 0 [root@mgmt1 ~]# grep a05n13 /var/lib/dhcpd/dhcpd.leases host a05n13 { supersede server.ddns-hostname = "a05n13"; supersede host-name = "a05n13"; supersede conf-file = "http://10.10.0.11/tftpboot/petitboot/a05n13"; ```
priority
makedhcp d not working i haven t tried this before today so maybe i m doing something wrong should this work makedhcp d echo grep var lib dhcpd dhcpd leases host supersede server ddns hostname supersede host name supersede conf file
1
175,962
6,556,331,432
IssuesEvent
2017-09-06 13:49:24
ProjectSidewalk/SidewalkWebpage
https://api.github.com/repos/ProjectSidewalk/SidewalkWebpage
closed
Modifications needed to allow turkers to perform audits on the production website
CHI2018 Priority: Very High pull-request-submitted
The idea is to use the custom urls to automatically signup turkers with an account that will be accessible by them for a single session. These accounts will also be associated with a separate 'Turker' role rather than volunteer, administrator, or researcher. Within that session they can complete as many missions as they like. Completing the first mission will allow them to get a confirmation code to be submitted on MTurk for the base reward. They will be compensated for any other subsequent mission completions in the form of bonuses. - [x] Check if posting a link on mturk gives a query string with worker id, assignment id, hit id etc (It doesnt) - [x] Add a new Turker role in the role table. - [x] If referrer is mturk then change the role of the user to Turker. - [x] Check if this turker was present in the list of turkers who've completed the controlled experiments. - [x] Add a turkerId column to the amt_assignment table (modify evolutions file, modify amt_assignment_table model) - [x] Write code to display a confirmation code on completion of the first mission by a turker - [x] Write code to generate the confirmation code on the first mission completion. - [x] Write a table to store previously used confirmation code. (Time generated, time used, associated user). (We can just add a column to the amt_assignment table) - [x] Write code to approve assignments that have the correct confirmation code submitted. - [x] Write code to prevent reuse of confirmation codes (this is not required since the confirmation code will be linked with the assignmentId and hitId in addition to workerId during the verification process). - [x] Write code to automatically transfer a bonus reward to a turker on mission completion. (Add a column to the mission_user table to indicate if the user was monetarily compensated for completing the mission)
1.0
Modifications needed to allow turkers to perform audits on the production website - The idea is to use the custom urls to automatically signup turkers with an account that will be accessible by them for a single session. These accounts will also be associated with a separate 'Turker' role rather than volunteer, administrator, or researcher. Within that session they can complete as many missions as they like. Completing the first mission will allow them to get a confirmation code to be submitted on MTurk for the base reward. They will be compensated for any other subsequent mission completions in the form of bonuses. - [x] Check if posting a link on mturk gives a query string with worker id, assignment id, hit id etc (It doesnt) - [x] Add a new Turker role in the role table. - [x] If referrer is mturk then change the role of the user to Turker. - [x] Check if this turker was present in the list of turkers who've completed the controlled experiments. - [x] Add a turkerId column to the amt_assignment table (modify evolutions file, modify amt_assignment_table model) - [x] Write code to display a confirmation code on completion of the first mission by a turker - [x] Write code to generate the confirmation code on the first mission completion. - [x] Write a table to store previously used confirmation code. (Time generated, time used, associated user). (We can just add a column to the amt_assignment table) - [x] Write code to approve assignments that have the correct confirmation code submitted. - [x] Write code to prevent reuse of confirmation codes (this is not required since the confirmation code will be linked with the assignmentId and hitId in addition to workerId during the verification process). - [x] Write code to automatically transfer a bonus reward to a turker on mission completion. (Add a column to the mission_user table to indicate if the user was monetarily compensated for completing the mission)
priority
modifications needed to allow turkers to perform audits on the production website the idea is to use the custom urls to automatically signup turkers with an account that will be accessible by them for a single session these accounts will also be associated with a separate turker role rather than volunteer administrator or researcher within that session they can complete as many missions as they like completing the first mission will allow them to get a confirmation code to be submitted on mturk for the base reward they will be compensated for any other subsequent mission completions in the form of bonuses check if posting a link on mturk gives a query string with worker id assignment id hit id etc it doesnt add a new turker role in the role table if referrer is mturk then change the role of the user to turker check if this turker was present in the list of turkers who ve completed the controlled experiments add a turkerid column to the amt assignment table modify evolutions file modify amt assignment table model write code to display a confirmation code on completion of the first mission by a turker write code to generate the confirmation code on the first mission completion write a table to store previously used confirmation code time generated time used associated user we can just add a column to the amt assignment table write code to approve assignments that have the correct confirmation code submitted write code to prevent reuse of confirmation codes this is not required since the confirmation code will be linked with the assignmentid and hitid in addition to workerid during the verification process write code to automatically transfer a bonus reward to a turker on mission completion add a column to the mission user table to indicate if the user was monetarily compensated for completing the mission
1
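One way to make a confirmation code verifiable against the workerId/assignmentId/hitId triple, as the turker workflow in the record above requires, is an HMAC over the three identifiers. This is a hypothetical construction — the secret, truncation length, and field separator are all assumptions, not Project Sidewalk's actual scheme:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # assumption: held only by the server

def confirmation_code(worker_id: str, assignment_id: str, hit_id: str) -> str:
    """Derive a short code bound to the (worker, assignment, hit) triple."""
    msg = "|".join([worker_id, assignment_id, hit_id]).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:12]

def verify(code: str, worker_id: str, assignment_id: str, hit_id: str) -> bool:
    """Recompute the code server-side; constant-time compare avoids leaks."""
    expected = confirmation_code(worker_id, assignment_id, hit_id)
    return hmac.compare_digest(code, expected)
```

Because the code is recomputable from the stored assignment row, no extra "used codes" lookup is strictly needed for verification itself; reuse across assignments fails automatically since the HMAC input differs.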
443,294
12,791,924,932
IssuesEvent
2020-07-02 00:01:14
GeyserMC/Geyser
https://api.github.com/repos/GeyserMC/Geyser
closed
[1.16] After the player drowns, there is no death GUI, but free movement.
Confirmed Bug Priority: High
**Describe the bug** After the player drowns, there is no death GUI, but free movement. After exiting the server, the GUI keeps showing death and does not resurrect the player. **Screenshots / Videos** ![image](https://user-images.githubusercontent.com/16282726/86129093-c362e600-bb14-11ea-8dab-578ce57d38bb.png) **Server Version** 1.16.1 / Paper-27 **Geyser Version** 1.16 **Minecraft: Bedrock Edition Version** 1.16.1
1.0
[1.16] After the player drowns, there is no death GUI, but free movement. - **Describe the bug** After the player drowns, there is no death GUI, but free movement. After exiting the server, the GUI keeps showing death and does not resurrect the player. **Screenshots / Videos** ![image](https://user-images.githubusercontent.com/16282726/86129093-c362e600-bb14-11ea-8dab-578ce57d38bb.png) **Server Version** 1.16.1 / Paper-27 **Geyser Version** 1.16 **Minecraft: Bedrock Edition Version** 1.16.1
priority
after the player drowns there is no death gui but free movement describe the bug after the player drowns there is no death gui but free movement after exiting the server the gui keeps showing death and does not resurrect the player screenshots videos server version paper geyser version minecraft bedrock edition version
1
405,196
11,869,526,730
IssuesEvent
2020-03-26 11:06:56
AbsaOSS/enceladus
https://api.github.com/repos/AbsaOSS/enceladus
opened
Menas can return 201 with empty body
Menas Standardization bug priority: high
## Describe the bug This happens (presumably) when a run creation request is received during a period of very high load. This happens after 30 seconds. from the start of the query ## To Reproduce * Run 1000 Standardization jobs at once ## Expected behaviour * The timeout should be configurable * The backend should return a 5xx response. ## Screenshots ``` 20/03/24 06:10:40 ERROR Client: Application diagnostics message: User class threw exception: java.lang.NullPointerException @at com.shaded.fasterxml.jackson.core.JsonFactory.createParser(JsonFactory.java:871) @at com.shaded.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2726) @at za.co.absa.enceladus.dao.rest.JsonSerializer$.fromJson(JsonSerializer.scala:39) @at za.co.absa.enceladus.dao.rest.RestClient.send(RestClient.scala:82) @at za.co.absa.enceladus.dao.rest.RestClient.sendPost(RestClient.scala:53) @at za.co.absa.enceladus.dao.rest.MenasRestDAO$$anonfun$storeNewRunObject$1.apply(MenasRestDAO.scala:65) @at za.co.absa.enceladus.dao.rest.MenasRestDAO$$anonfun$storeNewRunObject$1.apply(MenasRestDAO.scala:63) @at za.co.absa.enceladus.dao.rest.CrossHostApiCaller$$anonfun$za$co$absa$enceladus$dao$rest$CrossHostApiCaller$$attempt$1$3.apply(CrossHostApiCaller.scala:44) @at scala.util.Try$.apply(Try.scala:192) @at za.co.absa.enceladus.dao.rest.CrossHostApiCaller.za$co$absa$enceladus$dao$rest$CrossHostApiCaller$$attempt$1(CrossHostApiCaller.scala:43) @at za.co.absa.enceladus.dao.rest.CrossHostApiCaller.call(CrossHostApiCaller.scala:57) @at za.co.absa.enceladus.dao.rest.MenasRestDAO.storeNewRunObject(MenasRestDAO.scala:63) @at za.co.absa.enceladus.dao.menasplugin.EventListenerMenas.onLoad(EventListenerMenas.scala:66) @at za.co.absa.atum.core.ControlFrameworkState$$anonfun$initializeProcessor$1.apply(ControlFrameworkState.scala:262) @at za.co.absa.atum.core.ControlFrameworkState$$anonfun$initializeProcessor$1.apply(ControlFrameworkState.scala:261) @at 
scala.collection.immutable.List.foreach(List.scala:392) @at za.co.absa.atum.core.ControlFrameworkState.initializeProcessor(ControlFrameworkState.scala:261) @at za.co.absa.atum.core.ControlFrameworkState.addEventListener(ControlFrameworkState.scala:234) @at za.co.absa.atum.core.Atum$.addEventListener(Atum.scala:223) @at za.co.absa.atum.plugins.PluginManager$.loadPlugin(PluginManager.scala:22) @at za.co.absa.enceladus.dao.menasplugin.MenasPlugin$.enableMenas(MenasPlugin.scala:47) @at za.co.absa.enceladus.standardization.StandardizationJob$.main(StandardizationJob.scala:93) @at za.co.absa.enceladus.standardization.StandardizationJob.main(StandardizationJob.scala) @at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) @at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) @at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) @at java.lang.reflect.Method.invoke(Method.java:498) @at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:684) ```
1.0
Menas can return 201 with empty body - ## Describe the bug This happens (presumably) when a run creation request is received during a period of very high load. This happens after 30 seconds. from the start of the query ## To Reproduce * Run 1000 Standardization jobs at once ## Expected behaviour * The timeout should be configurable * The backend should return a 5xx response. ## Screenshots ``` 20/03/24 06:10:40 ERROR Client: Application diagnostics message: User class threw exception: java.lang.NullPointerException @at com.shaded.fasterxml.jackson.core.JsonFactory.createParser(JsonFactory.java:871) @at com.shaded.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2726) @at za.co.absa.enceladus.dao.rest.JsonSerializer$.fromJson(JsonSerializer.scala:39) @at za.co.absa.enceladus.dao.rest.RestClient.send(RestClient.scala:82) @at za.co.absa.enceladus.dao.rest.RestClient.sendPost(RestClient.scala:53) @at za.co.absa.enceladus.dao.rest.MenasRestDAO$$anonfun$storeNewRunObject$1.apply(MenasRestDAO.scala:65) @at za.co.absa.enceladus.dao.rest.MenasRestDAO$$anonfun$storeNewRunObject$1.apply(MenasRestDAO.scala:63) @at za.co.absa.enceladus.dao.rest.CrossHostApiCaller$$anonfun$za$co$absa$enceladus$dao$rest$CrossHostApiCaller$$attempt$1$3.apply(CrossHostApiCaller.scala:44) @at scala.util.Try$.apply(Try.scala:192) @at za.co.absa.enceladus.dao.rest.CrossHostApiCaller.za$co$absa$enceladus$dao$rest$CrossHostApiCaller$$attempt$1(CrossHostApiCaller.scala:43) @at za.co.absa.enceladus.dao.rest.CrossHostApiCaller.call(CrossHostApiCaller.scala:57) @at za.co.absa.enceladus.dao.rest.MenasRestDAO.storeNewRunObject(MenasRestDAO.scala:63) @at za.co.absa.enceladus.dao.menasplugin.EventListenerMenas.onLoad(EventListenerMenas.scala:66) @at za.co.absa.atum.core.ControlFrameworkState$$anonfun$initializeProcessor$1.apply(ControlFrameworkState.scala:262) @at za.co.absa.atum.core.ControlFrameworkState$$anonfun$initializeProcessor$1.apply(ControlFrameworkState.scala:261) @at 
scala.collection.immutable.List.foreach(List.scala:392) @at za.co.absa.atum.core.ControlFrameworkState.initializeProcessor(ControlFrameworkState.scala:261) @at za.co.absa.atum.core.ControlFrameworkState.addEventListener(ControlFrameworkState.scala:234) @at za.co.absa.atum.core.Atum$.addEventListener(Atum.scala:223) @at za.co.absa.atum.plugins.PluginManager$.loadPlugin(PluginManager.scala:22) @at za.co.absa.enceladus.dao.menasplugin.MenasPlugin$.enableMenas(MenasPlugin.scala:47) @at za.co.absa.enceladus.standardization.StandardizationJob$.main(StandardizationJob.scala:93) @at za.co.absa.enceladus.standardization.StandardizationJob.main(StandardizationJob.scala) @at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) @at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) @at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) @at java.lang.reflect.Method.invoke(Method.java:498) @at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:684) ```
priority
menas can return with empty body describe the bug this happens presumably when a run creation request is received during a period of very high load this happens after seconds from the start of the query to reproduce run standardization jobs at once expected behaviour the timeout should be configurable the backend should return a response screenshots error client application diagnostics message user class threw exception java lang nullpointerexception at com shaded fasterxml jackson core jsonfactory createparser jsonfactory java at com shaded fasterxml jackson databind objectmapper readvalue objectmapper java at za co absa enceladus dao rest jsonserializer fromjson jsonserializer scala at za co absa enceladus dao rest restclient send restclient scala at za co absa enceladus dao rest restclient sendpost restclient scala at za co absa enceladus dao rest menasrestdao anonfun storenewrunobject apply menasrestdao scala at za co absa enceladus dao rest menasrestdao anonfun storenewrunobject apply menasrestdao scala at za co absa enceladus dao rest crosshostapicaller anonfun za co absa enceladus dao rest crosshostapicaller attempt apply crosshostapicaller scala at scala util try apply try scala at za co absa enceladus dao rest crosshostapicaller za co absa enceladus dao rest crosshostapicaller attempt crosshostapicaller scala at za co absa enceladus dao rest crosshostapicaller call crosshostapicaller scala at za co absa enceladus dao rest menasrestdao storenewrunobject menasrestdao scala at za co absa enceladus dao menasplugin eventlistenermenas onload eventlistenermenas scala at za co absa atum core controlframeworkstate anonfun initializeprocessor apply controlframeworkstate scala at za co absa atum core controlframeworkstate anonfun initializeprocessor apply controlframeworkstate scala at scala collection immutable list foreach list scala at za co absa atum core controlframeworkstate initializeprocessor controlframeworkstate scala at za co absa atum core 
controlframeworkstate addeventlistener controlframeworkstate scala at za co absa atum core atum addeventlistener atum scala at za co absa atum plugins pluginmanager loadplugin pluginmanager scala at za co absa enceladus dao menasplugin menasplugin enablemenas menasplugin scala at za co absa enceladus standardization standardizationjob main standardizationjob scala at za co absa enceladus standardization standardizationjob main standardizationjob scala at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache spark deploy yarn applicationmaster anon run applicationmaster scala
1
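The `CrossHostApiCaller` appearing in the stack trace above retries the same request against successive Menas hosts. The general failover pattern can be sketched as follows — illustrative Python, not the Scala implementation:

```python
def call_with_failover(hosts, send):
    """Try each host in turn; return the first success, else re-raise the last error."""
    if not hosts:
        raise ValueError("no hosts to try")
    last_err = None
    for host in hosts:
        try:
            return send(host)
        except Exception as err:  # sketch keeps the handling deliberately broad
            last_err = err
    raise last_err
```

Note that, as the bug report points out, a 201 with an empty body still "succeeds" at the transport level, so the `send` callable must also validate the response body before returning, otherwise failover never triggers.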
611,081
18,944,383,896
IssuesEvent
2021-11-18 08:34:11
betagouv/service-national-universel
https://api.github.com/repos/betagouv/service-national-universel
opened
feat: component address and etablissement in input
enhancement priority-HIGH
### Feature related to a problem? Addresses and établissement can currently be filled in free-form on a volunteer's edit page. Typos are far too likely! ### Feature Add the address v2 and établissement components to a young person's edit page ### Comments The address component should also be added everywhere else on the admin and app sides!
1.0
feat: component address and etablissement in input - ### Feature related to a problem? Addresses and établissement can currently be filled in free-form on a volunteer's edit page. Typos are far too likely! ### Feature Add the address v2 and établissement components to a young person's edit page ### Comments The address component should also be added everywhere else on the admin and app sides!
priority
feat component address and etablissement in input fonctionnalité liée à un problème on peut remplir les adresses et etablissement a la volée dans la fiche d edition d un volontaire les coquilles sont trop probables fonctionnalité ajouter les composants adresse et etablissment dans la fiche d edition d un jeune commentaires il faudrait ajouter le composant adresse aux autres endroits cote admin et app egalement
1
225,450
7,481,769,265
IssuesEvent
2018-04-04 21:50:00
slashroots/undp-ghg-v2
https://api.github.com/repos/slashroots/undp-ghg-v2
closed
Incorrect Error Displayed
bug high-priority
During the importing process of the CSV file. I noticed that an incorrect message is shown to the user. It should be "Activity Not Found" <img width="292" alt="screen shot 2018-02-07 at 6 37 18 pm" src="https://user-images.githubusercontent.com/1425164/35947505-288b4648-0c36-11e8-8890-9ac7ad1aaf57.png">
1.0
Incorrect Error Displayed - During the importing process of the CSV file. I noticed that an incorrect message is shown to the user. It should be "Activity Not Found" <img width="292" alt="screen shot 2018-02-07 at 6 37 18 pm" src="https://user-images.githubusercontent.com/1425164/35947505-288b4648-0c36-11e8-8890-9ac7ad1aaf57.png">
priority
incorrect error displayed during the importing process of the csv file i noticed that an incorrect message is shown to the user it should be activity not found img width alt screen shot at pm src
1
586,232
17,573,283,332
IssuesEvent
2021-08-15 05:29:41
woowa-techcamp-2021/store-6
https://api.github.com/repos/woowa-techcamp-2021/store-6
opened
[FE, BE] 자동 배포 프로세스 구현
setup high priority
## :hammer: 기능 설명 main 브랜치에 merge 되는 코드들을 자동 배포합니다. ## 📑 완료 조건 - [ ] Github Action workflow 파일에 자동 배포 로직을 작성합니다. - [ ] main 에 push 되었을 때 GIthub Actions를 수행하고 AWS Codedeploy를 통해 EC2에 배포되어져야 합니다. ## :thought_balloon: 관련 Backlog > [대분류] - [중분류] - [Backlog 이름] - [FE, BE] 기타 - 자동 배포 - 자동 배포 프로세스 구현
1.0
[FE, BE] 자동 배포 프로세스 구현 - ## :hammer: 기능 설명 main 브랜치에 merge 되는 코드들을 자동 배포합니다. ## 📑 완료 조건 - [ ] Github Action workflow 파일에 자동 배포 로직을 작성합니다. - [ ] main 에 push 되었을 때 GIthub Actions를 수행하고 AWS Codedeploy를 통해 EC2에 배포되어져야 합니다. ## :thought_balloon: 관련 Backlog > [대분류] - [중분류] - [Backlog 이름] - [FE, BE] 기타 - 자동 배포 - 자동 배포 프로세스 구현
priority
자동 배포 프로세스 구현 hammer 기능 설명 main 브랜치에 merge 되는 코드들을 자동 배포합니다 📑 완료 조건 github action workflow 파일에 자동 배포 로직을 작성합니다 main 에 push 되었을 때 github actions를 수행하고 aws codedeploy를 통해 배포되어져야 합니다 thought balloon 관련 backlog 기타 자동 배포 자동 배포 프로세스 구현
1
563,373
16,681,738,523
IssuesEvent
2021-06-08 01:14:50
GC-spigot/AdvancedEnchantments
https://api.github.com/repos/GC-spigot/AdvancedEnchantments
closed
Using `ADD_DURABILITY_CURRENT_ITEM` on armor can create duplicates of it.
Bug: Confirmed Priority: High Resolution: Accepted
<!-- FULLY FILL OUT THE TEMPLATE. YOUR ISSUE WILL BE IMMEDIATELY CLOSED IF YOU DON'T. Before reporting a bug, make sure you have the latest version of the plugin. Advanced Plugins: https://advancedplugins.net/item/1/ Spigot: https://www.spigotmc.org/resources/43058/ Songoda: https://songoda.com/marketplace/product/327/ Do not write inside the arrows or it will be hidden! 1. Check whether it has already been requested or added. You can search the issue tracker to see if what you want has already been requested and/or added to the plugin. 2. Only put ONE bug per issue. This helps us keep track of things. 3. Fully fill out the template. Everything other then screenshots/ videos is absolutely required. --> ## Details **Describe the bug** <!-- Replace this with a clear and concise description of what the bug is. --> `ADD_DURABILITY_CURRENT_ITEM` effects on armor can duplicate the armor that you are using. In case of duplicates, this is related to #1704 If you need anymore information feel free to tag me! **To Reproduce** <!-- Replace this with a way to reliability reproduce the bug. Without this, the issue will not get fixed. --> 1. Make an armor piece 2. Enchant it with effects `ADD_DURABILITY_CURRENT_ITEM` 3. Equip the armor 4. Let a mob damage you and quickly swapping your armor to inventory or with another armor. 5. Armor will be duplicated. Enchant on use: https://paste.md-5.net/umugiqofos.bash **Screenshots / Video** https://youtu.be/enN5LX45lbg <!-- If possible, add screenshots or videos to help explain/ show your problem. These are greatly appreciated. --> ## Server Information - "/ae plinfo" link: https://paste.md-5.net/kubatevuga<!-- REQUIRED! Replace this with the command output's https://paste.md-5.net/ link --> - Server log: https://paste.md-5.net/cepapuvapi.md<!-- REQUIRED! Upload `logs/latest.log` to https://mcpaste.io/ -->
1.0
Using `ADD_DURABILITY_CURRENT_ITEM` on armor can create duplicates of it. - <!-- FULLY FILL OUT THE TEMPLATE. YOUR ISSUE WILL BE IMMEDIATELY CLOSED IF YOU DON'T. Before reporting a bug, make sure you have the latest version of the plugin. Advanced Plugins: https://advancedplugins.net/item/1/ Spigot: https://www.spigotmc.org/resources/43058/ Songoda: https://songoda.com/marketplace/product/327/ Do not write inside the arrows or it will be hidden! 1. Check whether it has already been requested or added. You can search the issue tracker to see if what you want has already been requested and/or added to the plugin. 2. Only put ONE bug per issue. This helps us keep track of things. 3. Fully fill out the template. Everything other then screenshots/ videos is absolutely required. --> ## Details **Describe the bug** <!-- Replace this with a clear and concise description of what the bug is. --> `ADD_DURABILITY_CURRENT_ITEM` effects on armor can duplicate the armor that you are using. In case of duplicates, this is related to #1704 If you need anymore information feel free to tag me! **To Reproduce** <!-- Replace this with a way to reliability reproduce the bug. Without this, the issue will not get fixed. --> 1. Make an armor piece 2. Enchant it with effects `ADD_DURABILITY_CURRENT_ITEM` 3. Equip the armor 4. Let a mob damage you and quickly swapping your armor to inventory or with another armor. 5. Armor will be duplicated. Enchant on use: https://paste.md-5.net/umugiqofos.bash **Screenshots / Video** https://youtu.be/enN5LX45lbg <!-- If possible, add screenshots or videos to help explain/ show your problem. These are greatly appreciated. --> ## Server Information - "/ae plinfo" link: https://paste.md-5.net/kubatevuga<!-- REQUIRED! Replace this with the command output's https://paste.md-5.net/ link --> - Server log: https://paste.md-5.net/cepapuvapi.md<!-- REQUIRED! Upload `logs/latest.log` to https://mcpaste.io/ -->
priority
using add durability current item on armor can create duplicates of it fully fill out the template your issue will be immediately closed if you don t before reporting a bug make sure you have the latest version of the plugin advanced plugins spigot songoda do not write inside the arrows or it will be hidden check whether it has already been requested or added you can search the issue tracker to see if what you want has already been requested and or added to the plugin only put one bug per issue this helps us keep track of things fully fill out the template everything other then screenshots videos is absolutely required details describe the bug add durability current item effects on armor can duplicate the armor that you are using in case of duplicates this is related to if you need anymore information feel free to tag me to reproduce make an armor piece enchant it with effects add durability current item equip the armor let a mob damage you and quickly swapping your armor to inventory or with another armor armor will be duplicated enchant on use screenshots video server information ae plinfo link required replace this with the command output s link server log required upload logs latest log to
1
704,032
24,183,006,320
IssuesEvent
2022-09-23 10:41:01
justbudget/justbudget
https://api.github.com/repos/justbudget/justbudget
closed
Display Selected Totals
feature request p2 (high priority)
Use Case: I would like to select one or more transactions (likely via checkboxes) and have the UI show what the subtotal of just those transactions adds up to.
1.0
Display Selected Totals - Use Case: I would like to select one or more transactions (likely via checkboxes) and have the UI show what the subtotal of just those transactions adds up to.
priority
display selected totals use case i would like to select one or more transactions likely via checkboxes and have the ui show what the subtotal of just those transactions adds up to
1
462,539
13,248,884,175
IssuesEvent
2020-08-19 19:47:14
phetsims/axon
https://api.github.com/repos/phetsims/axon
closed
Can TinyEmitter.listeners become a Set?
dev:phet-io priority:2-high status:blocks-publication status:ready-for-review type:misc type:performance
Over in https://github.com/phetsims/natural-selection/issues/140#issuecomment-665348980, the phet-io team found that 1.2 seconds of a 1.3 second "start over" operation, clearing ~1000 bunnies from Natural Selection, was due to TinyEmitter.removeListener where the listener array is spliced. We see online that `Set` may be more optimized for this (as well as add/iterate) We will take it for a test drive here! Tagging @jonathanolson in case there are concerns with this investigation.
1.0
Can TinyEmitter.listeners become a Set? - Over in https://github.com/phetsims/natural-selection/issues/140#issuecomment-665348980, the phet-io team found that 1.2 seconds of a 1.3 second "start over" operation, clearing ~1000 bunnies from Natural Selection, was due to TinyEmitter.removeListener where the listener array is spliced. We see online that `Set` may be more optimized for this (as well as add/iterate) We will take it for a test drive here! Tagging @jonathanolson in case there are concerns with this investigation.
priority
can tinyemitter listeners become a set over in the phet io team found that seconds of a second start over operation clearing bunnies from natural selection was due to tinyemitter removelistener where the listener array is spliced we see online that set may be more optimized for this as well as add iterate we will take it for a test drive here tagging jonathanolson in case there are concerns with this investigation
1
693,193
23,766,235,821
IssuesEvent
2022-09-01 13:01:51
a2develop/bugTracker
https://api.github.com/repos/a2develop/bugTracker
closed
Нумерацию Налоговых накладных сделать не строгой, чтоб редактировать можно было.
priority:high
У некоторых нумерация НН начинается с каждого месяца сначала, а система не дает исправить.
1.0
Нумерацию Налоговых накладных сделать не строгой, чтоб редактировать можно было. - У некоторых нумерация НН начинается с каждого месяца сначала, а система не дает исправить.
priority
нумерацию налоговых накладных сделать не строгой чтоб редактировать можно было у некоторых нумерация нн начинается с каждого месяца сначала а система не дает исправить
1
693,327
23,772,415,922
IssuesEvent
2022-09-01 17:32:15
ApplETS/Notre-Dame
https://api.github.com/repos/ApplETS/Notre-Dame
closed
Fastlane Android release no longer working - Play Store missing information
bug CI priority: high
**Describe the bug** Releasing new versions of the app on the Play Store is no longer possible because of missing app information. **To Reproduce** 1. Run the fastlane release workflow 2. The following error occurs in the "Deploy to store" job : `[20:24:21]: Google Api Error: Invalid request - This app has no data safety declaration.` **Expected behavior** The workflow should succeed and the app should be released to the Google Play Store automatically. **Additional context** It seems that the app's Play Store page is missing data safety information, and this is preventing us from releasing any new version since July: > By July 20, 2022, all developers must declare how they collect and handle user data for the apps they publish on Google Play, and provide details about how they protect this data through security practices like encryption. ([Source](https://support.google.com/googleplay/android-developer/answer/10787469?hl=en)) This could be fixed by adding the missing information using the Play Console.
1.0
Fastlane Android release no longer working - Play Store missing information - **Describe the bug** Releasing new versions of the app on the Play Store is no longer possible because of missing app information. **To Reproduce** 1. Run the fastlane release workflow 2. The following error occurs in the "Deploy to store" job : `[20:24:21]: Google Api Error: Invalid request - This app has no data safety declaration.` **Expected behavior** The workflow should succeed and the app should be released to the Google Play Store automatically. **Additional context** It seems that the app's Play Store page is missing data safety information, and this is preventing us from releasing any new version since July: > By July 20, 2022, all developers must declare how they collect and handle user data for the apps they publish on Google Play, and provide details about how they protect this data through security practices like encryption. ([Source](https://support.google.com/googleplay/android-developer/answer/10787469?hl=en)) This could be fixed by adding the missing information using the Play Console.
priority
fastlane android release no longer working play store missing information describe the bug releasing new versions of the app on the play store is no longer possible because of missing app information to reproduce run the fastlane release workflow the following error occurs in the deploy to store job google api error invalid request this app has no data safety declaration expected behavior the workflow should succeed and the app should be released to the google play store automatically additional context it seems that the app s play store page is missing data safety information and this is preventing us from releasing any new version since july by july all developers must declare how they collect and handle user data for the apps they publish on google play and provide details about how they protect this data through security practices like encryption this could be fixed by adding the missing information using the play console
1
750,972
26,226,884,036
IssuesEvent
2023-01-04 19:34:57
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
closed
[docdb] master FATAL ../../src/yb/util/ref_cnt_buffer.cc:34] Check failed: data_ != nullptr
kind/bug area/docdb priority/high status/awaiting-triage
Jira Link: [DB-4740](https://yugabyte.atlassian.net/browse/DB-4740) 0514 12:19:10.884758 23163 ref_cnt_buffer.cc:34] Check failed: data_ != nullptr May 14 08:19:17 shayugabyte2 yb-master[3006]: Fatal failure details written to /usr/scratch/yugabyte/data0/yb-data/master/logs/yb-master.FATAL.details.2021-05-14T12_19_10.pid3006.txt May 14 08:19:17 shayugabyte2 yb-master[3006]: F20210514 12:19:10 ../../src/yb/util/ref_cnt_buffer.cc:34] Check failed: data_ != nullptr May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5491b2abcc yb::LogFatalHandlerSink::send() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5490d03cae google::LogMessage::SendToLog() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5490d00e3a google::LogMessage::Flush() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5490d04529 google::LogMessageFatal::~LogMessageFatal() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5491b9e85c yb::RefCntBuffer::RefCntBuffer() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f54942d8923 yb::rpc::YBInboundCall::AllocateSidecarBuffer() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f54942d9b9a yb::rpc::YBInboundCall::AddRpcSidecar() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f549bfb5e6f yb::tserver::TabletServiceImpl::DoReadImpl() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f549bfb701c yb::tserver::TabletServiceImpl::DoRead() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f549bfb7421 yb::tserver::TabletServiceImpl::CompleteRead() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f549bfb905a yb::tserver::TabletServiceImpl::Read() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f54967c2d55 yb::tserver::TabletServerServiceIf::Handle() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f54942c31c9 yb::rpc::ServicePoolImpl::Handle() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5494268084 yb::rpc::InboundCall::InboundCallTask::Run() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f54942d5098 yb::rpc::(anonymous namespace)::Worker::Execute() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5491bc255f yb::Thread::SuperviseThread() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f548d2de694 start_thread May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f548ca1b41d __clone
1.0
[docdb] master FATAL ../../src/yb/util/ref_cnt_buffer.cc:34] Check failed: data_ != nullptr - Jira Link: [DB-4740](https://yugabyte.atlassian.net/browse/DB-4740) 0514 12:19:10.884758 23163 ref_cnt_buffer.cc:34] Check failed: data_ != nullptr May 14 08:19:17 shayugabyte2 yb-master[3006]: Fatal failure details written to /usr/scratch/yugabyte/data0/yb-data/master/logs/yb-master.FATAL.details.2021-05-14T12_19_10.pid3006.txt May 14 08:19:17 shayugabyte2 yb-master[3006]: F20210514 12:19:10 ../../src/yb/util/ref_cnt_buffer.cc:34] Check failed: data_ != nullptr May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5491b2abcc yb::LogFatalHandlerSink::send() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5490d03cae google::LogMessage::SendToLog() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5490d00e3a google::LogMessage::Flush() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5490d04529 google::LogMessageFatal::~LogMessageFatal() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5491b9e85c yb::RefCntBuffer::RefCntBuffer() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f54942d8923 yb::rpc::YBInboundCall::AllocateSidecarBuffer() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f54942d9b9a yb::rpc::YBInboundCall::AddRpcSidecar() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f549bfb5e6f yb::tserver::TabletServiceImpl::DoReadImpl() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f549bfb701c yb::tserver::TabletServiceImpl::DoRead() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f549bfb7421 yb::tserver::TabletServiceImpl::CompleteRead() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f549bfb905a yb::tserver::TabletServiceImpl::Read() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f54967c2d55 yb::tserver::TabletServerServiceIf::Handle() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f54942c31c9 yb::rpc::ServicePoolImpl::Handle() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5494268084 yb::rpc::InboundCall::InboundCallTask::Run() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f54942d5098 yb::rpc::(anonymous namespace)::Worker::Execute() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f5491bc255f yb::Thread::SuperviseThread() May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f548d2de694 start_thread May 14 08:19:17 shayugabyte2 yb-master[3006]: @ 0x7f548ca1b41d __clone
priority
master fatal src yb util ref cnt buffer cc check failed data nullptr jira link ref cnt buffer cc check failed data nullptr may yb master fatal failure details written to usr scratch yugabyte yb data master logs yb master fatal details txt may yb master src yb util ref cnt buffer cc check failed data nullptr may yb master yb logfatalhandlersink send may yb master google logmessage sendtolog may yb master google logmessage flush may yb master google logmessagefatal logmessagefatal may yb master yb refcntbuffer refcntbuffer may yb master yb rpc ybinboundcall allocatesidecarbuffer may yb master yb rpc ybinboundcall addrpcsidecar may yb master yb tserver tabletserviceimpl doreadimpl may yb master yb tserver tabletserviceimpl doread may yb master yb tserver tabletserviceimpl completeread may yb master yb tserver tabletserviceimpl read may yb master yb tserver tabletserverserviceif handle may yb master yb rpc servicepoolimpl handle may yb master yb rpc inboundcall inboundcalltask run may yb master yb rpc anonymous namespace worker execute may yb master yb thread supervisethread may yb master start thread may yb master clone
1
391,045
11,567,907,777
IssuesEvent
2020-02-20 15:03:55
mantidproject/mantid
https://api.github.com/repos/mantidproject/mantid
closed
FrequencyDomainAnalysis and MuonAnalysis GUIs do not open
Added during Sprint High Priority ISIS Team: Spectroscopy
### Expected behavior The GUI's should open ### Actual behavior It crashes with the error: ``` Traceback (most recent call last): File "c:\users\fvv28776\mantid\qt\applications\workbench\workbench\app\mainwindow.py", line 422, in <lambda> action.triggered.connect(lambda checked_py, script=script: self.launch_custom_python_gui(script)) File "c:\users\fvv28776\mantid\qt\applications\workbench\workbench\app\mainwindow.py", line 381, in launch_custom_python_gui self.interface_executor.execute(open(filename).read(), filename) File "c:\users\fvv28776\mantid\qt\python\mantidqt\widgets\codeeditor\execution.py", line 155, in execute exec (code_obj, self.globals_ns, self.globals_ns) File "C:/Users/fvv28776/mantid/scripts/Frequency_Domain_Analysis.py", line 9, in <module> from Muon.GUI.FrequencyDomainAnalysis.frequency_domain_analysis_2 import FrequencyAnalysisGui File "C:\Users\fvv28776\mantid\scripts\Muon\GUI\FrequencyDomainAnalysis\frequency_domain_analysis_2.py", line 34, in <module> from Muon.GUI.Common.plotting_widget.plotting_widget import PlottingWidget File "C:\Users\fvv28776\mantid\scripts\Muon\GUI\Common\plotting_widget\plotting_widget.py", line 9, in <module> from Muon.GUI.Common.plotting_widget.plotting_widget_view import PlotWidgetView File "C:\Users\fvv28776\mantid\scripts\Muon\GUI\Common\plotting_widget\plotting_widget_view.py", line 12, in <module> from mantidqt.plotting.functions import get_plot_fig ImportError: cannot import name 'get_plot_fig' from 'mantidqt.plotting.functions' (c:\users\fvv28776\mantid\qt\python\mantidqt\plotting\functions.py) ``` ### Steps to reproduce the behavior Open one of the GUI's ### Platforms affected All
1.0
FrequencyDomainAnalysis and MuonAnalysis GUIs do not open - ### Expected behavior The GUI's should open ### Actual behavior It crashes with the error: ``` Traceback (most recent call last): File "c:\users\fvv28776\mantid\qt\applications\workbench\workbench\app\mainwindow.py", line 422, in <lambda> action.triggered.connect(lambda checked_py, script=script: self.launch_custom_python_gui(script)) File "c:\users\fvv28776\mantid\qt\applications\workbench\workbench\app\mainwindow.py", line 381, in launch_custom_python_gui self.interface_executor.execute(open(filename).read(), filename) File "c:\users\fvv28776\mantid\qt\python\mantidqt\widgets\codeeditor\execution.py", line 155, in execute exec (code_obj, self.globals_ns, self.globals_ns) File "C:/Users/fvv28776/mantid/scripts/Frequency_Domain_Analysis.py", line 9, in <module> from Muon.GUI.FrequencyDomainAnalysis.frequency_domain_analysis_2 import FrequencyAnalysisGui File "C:\Users\fvv28776\mantid\scripts\Muon\GUI\FrequencyDomainAnalysis\frequency_domain_analysis_2.py", line 34, in <module> from Muon.GUI.Common.plotting_widget.plotting_widget import PlottingWidget File "C:\Users\fvv28776\mantid\scripts\Muon\GUI\Common\plotting_widget\plotting_widget.py", line 9, in <module> from Muon.GUI.Common.plotting_widget.plotting_widget_view import PlotWidgetView File "C:\Users\fvv28776\mantid\scripts\Muon\GUI\Common\plotting_widget\plotting_widget_view.py", line 12, in <module> from mantidqt.plotting.functions import get_plot_fig ImportError: cannot import name 'get_plot_fig' from 'mantidqt.plotting.functions' (c:\users\fvv28776\mantid\qt\python\mantidqt\plotting\functions.py) ``` ### Steps to reproduce the behavior Open one of the GUI's ### Platforms affected All
priority
frequencydomainanalysis and muonanalysis guis do not open expected behavior the gui s should open actual behavior it crashes with the error traceback most recent call last file c users mantid qt applications workbench workbench app mainwindow py line in action triggered connect lambda checked py script script self launch custom python gui script file c users mantid qt applications workbench workbench app mainwindow py line in launch custom python gui self interface executor execute open filename read filename file c users mantid qt python mantidqt widgets codeeditor execution py line in execute exec code obj self globals ns self globals ns file c users mantid scripts frequency domain analysis py line in from muon gui frequencydomainanalysis frequency domain analysis import frequencyanalysisgui file c users mantid scripts muon gui frequencydomainanalysis frequency domain analysis py line in from muon gui common plotting widget plotting widget import plottingwidget file c users mantid scripts muon gui common plotting widget plotting widget py line in from muon gui common plotting widget plotting widget view import plotwidgetview file c users mantid scripts muon gui common plotting widget plotting widget view py line in from mantidqt plotting functions import get plot fig importerror cannot import name get plot fig from mantidqt plotting functions c users mantid qt python mantidqt plotting functions py steps to reproduce the behavior open one of the gui s platforms affected all
1
167,209
6,334,434,836
IssuesEvent
2017-07-26 16:39:13
department-of-veterans-affairs/caseflow
https://api.github.com/repos/department-of-veterans-affairs/caseflow
closed
Sidekiq | We need to loudly log/announce/report failures and successes.
blocked bug-high-priority caseflow-dispatch In Validation prod-alert tango
1. This ticket will focus on PrepareEstablishClaimTasksJob. 2. If "preparation" of the task fails, we will catch VBMS error and continue with the next task. 3. We will keep track of the tasks that failed and log them to the log file. 4. We will have Slack integration to log to the devops-alerts channel “PrepareEstablishClaimTasksJob successfully ran. 10 tasks prepared. 3 tasks failed” 5. The PrepareEstablishClaimTasksJob will run at 5pm 6. CreateEstablishClaimTasksJob will run at 4:30pm
1.0
Sidekiq | We need to loudly log/announce/report failures and successes. - 1. This ticket will focus on PrepareEstablishClaimTasksJob. 2. If "preparation" of the task fails, we will catch VBMS error and continue with the next task. 3. We will keep track of the tasks that failed and log them to the log file. 4. We will have Slack integration to log to the devops-alerts channel “PrepareEstablishClaimTasksJob successfully ran. 10 tasks prepared. 3 tasks failed” 5. The PrepareEstablishClaimTasksJob will run at 5pm 6. CreateEstablishClaimTasksJob will run at 4:30pm
priority
sidekiq we need to loudly log announce report failures and successes this ticket will focus on prepareestablishclaimtasksjob if preparation of the task fails we will catch vbms error and continue with the next task we will keep track of the tasks that failed and log them to the log file we will have slack integration to log to the devops alerts channel “prepareestablishclaimtasksjob successfully ran tasks prepared tasks failed” the prepareestablishclaimtasksjob will run at createestablishclaimtasksjob will run at
1
614,437
19,182,980,748
IssuesEvent
2021-12-04 18:27:41
bounswe/2021SpringGroup4
https://api.github.com/repos/bounswe/2021SpringGroup4
opened
Android: Backend Connection Fix
Priority: High Status: In Progress Android
Android-backend connection will be fixed for login, register and list event features as it is decided in #166.
1.0
Android: Backend Connection Fix - Android-backend connection will be fixed for login, register and list event features as it is decided in #166.
priority
android backend connection fix android backend connection will be fixed for login register and list event features as it is decided in
1
490,506
14,135,176,688
IssuesEvent
2020-11-10 01:03:22
PlaceOS/drivers
https://api.github.com/repos/PlaceOS/drivers
opened
Stienel Driver
driver high priority
**Driver Type** Device **Manufacturer** Stienel **Model/Service** Model or Service **Link to or Attach Device API or Protocol** If applicable, add screenshots to help explain your problem. **Describe any desired functionality** - Control all aspects of device **Additional context** Add any other context about the driver request here.
1.0
Stienel Driver - **Driver Type** Device **Manufacturer** Stienel **Model/Service** Model or Service **Link to or Attach Device API or Protocol** If applicable, add screenshots to help explain your problem. **Describe any desired functionality** - Control all aspects of device **Additional context** Add any other context about the driver request here.
priority
stienel driver driver type device manufacturer stienel model service model or service link to or attach device api or protocol if applicable add screenshots to help explain your problem describe any desired functionality control all aspects of device additional context add any other context about the driver request here
1
104,288
4,208,724,960
IssuesEvent
2016-06-29 00:27:49
PhonologicalCorpusTools/CorpusTools
https://api.github.com/repos/PhonologicalCorpusTools/CorpusTools
opened
add environment to FL results window
enhancement High priority
Now that users can specify particular environments in which to calculate FL, the chosen environment needs to be included in the results window for FL calculations.
1.0
add environment to FL results window - Now that users can specify particular environments in which to calculate FL, the chosen environment needs to be included in the results window for FL calculations.
priority
add environment to fl results window now that users can specify particular environments in which to calculate fl the chosen environment needs to be included in the results window for fl calculations
1
263,913
8,303,322,461
IssuesEvent
2018-09-21 17:09:52
cuappdev/podcast-ios
https://api.github.com/repos/cuappdev/podcast-ios
closed
Create deep links for podcasts for sharing
Priority: High Status: In Review Type: Feature
For MVP1, let's just reference the sharing to the podcast app and not a deeplink.. In the future, sharing an episode should reference that specific episode with a deeplink
1.0
Create deep links for podcasts for sharing - For MVP1, let's just reference the sharing to the podcast app and not a deeplink.. In the future, sharing an episode should reference that specific episode with a deeplink
priority
create deep links for podcasts for sharing for let s just reference the sharing to the podcast app and not a deeplink in the future sharing an episode should reference that specific episode with a deeplink
1
605,803
18,740,743,666
IssuesEvent
2021-11-04 13:20:51
robotframework/robotframework
https://api.github.com/repos/robotframework/robotframework
closed
New `RETURN` statement for returning from a user keyword
enhancement priority: high
We currently have two ways to return from a user keyword: 1. `[Return]` setting that defines what to return once the keyword has been executed. 2. `Return From Keyword` keyword and its variants `Return From Keyword If`, `Run Keyword And Return`, `Run Keyword And Return If`. This is problematic for various reasons: 1. `[Return]` is more widely used but it has pretty bad limitations. Most importantly, it cannot be used conditionally with `IF/ELSE` structures. 2. Using keywords solves the above problem but using keywords for something like this is awkward. 3. It's not good to have multiple ways to solve the same problem. My proposal is that we add separate `RETURN` statement that can be used to return from a user keyword. The statement itself should return unconditionally, but it would be usable in `IF/ELSE`. Example usages: ```robotframework *** Keywords *** Return at the end Some Keyword ${result} = Another Keyword RETURN ${result} Return conditionally IF ${condition} RETURN Something ELSE RETURN Something else END Early return IF ${not applicable} RETURN END Some Keyword Another Keyword ```
1.0
New `RETURN` statement for returning from a user keyword - We currently have two ways to return from a user keyword: 1. `[Return]` setting that defines what to return once the keyword has been executed. 2. `Return From Keyword` keyword and its variants `Return From Keyword If`, `Run Keyword And Return`, `Run Keyword And Return If`. This is problematic for various reasons: 1. `[Return]` is more widely used but it has pretty bad limitations. Most importantly, it cannot be used conditionally with `IF/ELSE` structures. 2. Using keywords solves the above problem but using keywords for something like this is awkward. 3. It's not good to have multiple ways to solve the same problem. My proposal is that we add separate `RETURN` statement that can be used to return from a user keyword. The statement itself should return unconditionally, but it would be usable in `IF/ELSE`. Example usages: ```robotframework *** Keywords *** Return at the end Some Keyword ${result} = Another Keyword RETURN ${result} Return conditionally IF ${condition} RETURN Something ELSE RETURN Something else END Early return IF ${not applicable} RETURN END Some Keyword Another Keyword ```
priority
new return statement for returning from a user keyword we currently have two ways to return from a user keyword setting that defines what to return once the keyword has been executed return from keyword keyword and its variants return from keyword if run keyword and return run keyword and return if this is problematic for various reasons is more widely used but it has pretty bad limitations most importantly it cannot be used conditionally with if else structures using keywords solves the above problem but using keywords for something like this is awkward it s not good to have multiple ways to solve the same problem my proposal is that we add a separate return statement that can be used to return from a user keyword the statement itself should return unconditionally but it would be usable in if else example usages robotframework keywords return at the end some keyword result another keyword return result return conditionally if condition return something else return something else end early return if not applicable return end some keyword another keyword
1
366,598
10,824,552,971
IssuesEvent
2019-11-09 10:07:10
AY1920S1-CS2113T-F09-3/main
https://api.github.com/repos/AY1920S1-CS2113T-F09-3/main
closed
As a lab tech, I want the system to warn me if I have entered a duplicated stock code
priority.High
so that I can ensure all stock codes in the system are unique. (to enable accurate searching of stock codes.)
1.0
As a lab tech, I want the system to warn me if I have entered a duplicated stock code - so that I can ensure all stock codes in the system are unique. (to enable accurate searching of stock codes.)
priority
as a lab tech i want the system to warn me if i have entered a duplicated stock code so that i can ensure all stock codes in the system are unique to enable accurate searching of stock codes
1
409,428
11,962,235,991
IssuesEvent
2020-04-05 11:36:47
rtcharity/eahub.org
https://api.github.com/repos/rtcharity/eahub.org
closed
Add optional "Other information" field to /group/*
Feature Request High Priority In Progress
Text, 5000 characters max, placed at the bottom, the field is invisible if value is null
1.0
Add optional "Other information" field to /group/* - Text, 5000 characters max, placed at the bottom, the field is invisible if value is null
priority
add optional other information field to group text characters max placed at the bottom the field is invisible if value is null
1
399,788
11,760,555,625
IssuesEvent
2020-03-13 19:46:46
robotframework/robotframework
https://api.github.com/repos/robotframework/robotframework
closed
Native `&{dict}` iteration with FOR loops
backwards incompatible deprecation enhancement priority: high rc 1
It is possible to extract the keys and values in a python for loop. This is not possible in Robot Framework (or we need to use a more complicated strategy to obtain the same result). In python we can do: ```python dict = { 'a':'one', 'b':'two'} for key, value in dict.items(): print(key, value) ``` It is requested to implement: ```robot &{dict}= Create Dictionary a=one b=two FOR ${key} ${value} IN &{dict} LOG ${key} ${value} END ```
1.0
Native `&{dict}` iteration with FOR loops - It is possible to extract the keys and values in a python for loop. This is not possible in Robot Framework (or we need to use a more complicated strategy to obtain the same result). In python we can do: ```python dict = { 'a':'one', 'b':'two'} for key, value in dict.items(): print(key, value) ``` It is requested to implement: ```robot &{dict}= Create Dictionary a=one b=two FOR ${key} ${value} IN &{dict} LOG ${key} ${value} END ```
priority
native dict iteration with for loops it is possible to extract the keys and values in a python for loop this is not possible in robot framework or we need to use a more complicated strategy to obtain the same result in python we can do python dict a one b two for key value in dict items print key value it is requested to implement robot dict create dictionary a one b two for key value in dict log key value end
1
399,707
11,759,453,658
IssuesEvent
2020-03-13 17:18:22
GrassrootsEconomics/CIC-Docs
https://api.github.com/repos/GrassrootsEconomics/CIC-Docs
opened
Migrate sempo aws instance to ge aws
Platform aws migration priority:high
Hosting of the sarafu platform will be served from GE AWS instance.
1.0
Migrate sempo aws instance to ge aws - Hosting of the sarafu platform will be served from GE AWS instance.
priority
migrate sempo aws instance to ge aws hosting of the sarafu platform will be served from ge aws instance
1
213,906
7,261,429,300
IssuesEvent
2018-02-18 20:40:48
s-p-a-r-k/Jacket-Tracker
https://api.github.com/repos/s-p-a-r-k/Jacket-Tracker
closed
Create quick management screen
Priority: High Status: In Progress Type: Feature
## Story/Task Details - [x] Update the mock UI for this page - [x] Add basic UI elements needed to display data - [x] Load required data into the UI - [x] There are two lists that show different fields for the uniform and student search ## Acceptance Scenarios - Given: A uniform lieutenant is logged in - When: The lieutenant accesses the quick management screen - Then: The quick management screen is displayed with all features available ## Done Done Criteria The quick management screen is displayed with real-time data and there are buttons/actions for each feature available to the uniform lieutenant
1.0
Create quick management screen - ## Story/Task Details - [x] Update the mock UI for this page - [x] Add basic UI elements needed to display data - [x] Load required data into the UI - [x] There are two lists that show different fields for the uniform and student search ## Acceptance Scenarios - Given: A uniform lieutenant is logged in - When: The lieutenant accesses the quick management screen - Then: The quick management screen is displayed with all features available ## Done Done Criteria The quick management screen is displayed with real-time data and there are buttons/actions for each feature available to the uniform lieutenant
priority
create quick management screen story task details update the mock ui for this page add basic ui elements needed to display data load required data into the ui there are two lists that show different fields for the uniform and student search acceptance scenarios given a uniform lieutenant is logged in when the lieutenant accesses the quick management screen then the quick management screen is displayed with all features available done done criteria the quick management screen is displayed with real time data and there are buttons actions for each feature available to the uniform lieutenant
1
378,512
11,203,393,968
IssuesEvent
2020-01-04 19:39:47
iNZightVIT/Lite
https://api.github.com/repos/iNZightVIT/Lite
closed
Check functions working
enhancement high priority question
[This file](https://github.com/iNZightVIT/Lite/blob/bac4d4c66dc6d56eb723041d635dfb28be0204e4/functions.R) contains some functions which were duplicated in `iNZightTools` - however, they are being deprecated [(fully deleted, actually)](iNZightVIT/iNZightTools#113) which will take effect in the next release of iNZightTools. So, could you check that the functions marked "to be deleted once iNZightTools is working" are using the versions specified in `functions.R`, and __not__ the functions in iNZightTools. Several of these functions have been rewritten using `tidyverse` (e.g., `combine.levels()` is now `collapseLevels()`), so at some stage, we will need to change these over (but there's no rush yet) if/when Lite starts storing code history (as these tidyverse versions include the code). One way to check things are working would be to---if you can run Lite locally---install the development version of iNZightTools: ```r devtools::install_github("iNZightVIT/iNZightTools@bugfix/deprecate-functions") ``` and then check that the following work: - combine factor levels - form class intervals - sample data - search name(?) - honestly I don't know what this is used for :confused:
1.0
Check functions working - [This file](https://github.com/iNZightVIT/Lite/blob/bac4d4c66dc6d56eb723041d635dfb28be0204e4/functions.R) contains some functions which were duplicated in `iNZightTools` - however, they are being deprecated [(fully deleted, actually)](iNZightVIT/iNZightTools#113) which will take effect in the next release of iNZightTools. So, could you check that the functions marked "to be deleted once iNZightTools is working" are using the versions specified in `functions.R`, and __not__ the functions in iNZightTools. Several of these functions have been rewritten using `tidyverse` (e.g., `combine.levels()` is now `collapseLevels()`), so at some stage, we will need to change these over (but there's no rush yet) if/when Lite starts storing code history (as these tidyverse versions include the code). One way to check things are working would be to---if you can run Lite locally---install the development version of iNZightTools: ```r devtools::install_github("iNZightVIT/iNZightTools@bugfix/deprecate-functions") ``` and then check that the following work: - combine factor levels - form class intervals - sample data - search name(?) - honestly I don't know what this is used for :confused:
priority
check functions working contains some functions which were duplicated in inzighttools however they are being deprecated inzightvit inzighttools which will take effect in the next release of inzighttools so could you check that the functions marked to be deleted once inzighttools is working are using the versions specified in functions r and not the functions in inzighttools several of these functions have been rewritten using tidyverse e g combine levels is now collapselevels so at some stage we will need to change these over but there s no rush yet if when lite starts storing code history as these tidyverse versions include the code one way to check things are working would be to if you can run lite locally install the development version of inzighttools r devtools install github inzightvit inzighttools bugfix deprecate functions and then check that the following work combine factor levels form class intervals sample data search name honestly i don t know what this is used for confused
1
743,890
25,918,220,449
IssuesEvent
2022-12-15 19:18:07
NCAR/wrfcloud
https://api.github.com/repos/NCAR/wrfcloud
opened
Automate the installation of the system
priority: high type: new feature
## Describe the New Feature ## Create an automated process for new users to install the system on their own AWS account. Idea is to create a series of questions the new user can answer and then everything is setup behind the scenes. - [ ] Outline process - [ ] Document steps - [ ] Test steps Information needed from user: - web domain name - initial model config info e.g. config name, namelists, geo_em files. ### Acceptance Testing ### New user should be able to setup and install system. ### Time Estimate ### 5 days ### Sub-Issues ### Consider breaking the new feature down into sub-issues. - [ ] *Add a checkbox for each sub-issue here.* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [ ] Select **component(s)** - [ ] Select **priority** ### Projects and Milestone ### - [ ] Select **Project** - [ ] Select **Milestone** as the next official version or **Backlog of Development Ideas** ## New Feature Checklist ## - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>/<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)**, **Project**, and **Development** issue Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
1.0
Automate the installation of the system - ## Describe the New Feature ## Create an automated process for new users to install the system on their own AWS account. Idea is to create a series of questions the new user can answer and then everything is setup behind the scenes. - [ ] Outline process - [ ] Document steps - [ ] Test steps Information needed from user: - web domain name - initial model config info e.g. config name, namelists, geo_em files. ### Acceptance Testing ### New user should be able to setup and install system. ### Time Estimate ### 5 days ### Sub-Issues ### Consider breaking the new feature down into sub-issues. - [ ] *Add a checkbox for each sub-issue here.* ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ## Define the Metadata ## ### Assignee ### - [ ] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [ ] Select **component(s)** - [ ] Select **priority** ### Projects and Milestone ### - [ ] Select **Project** - [ ] Select **Milestone** as the next official version or **Backlog of Development Ideas** ## New Feature Checklist ## - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding source**. - [ ] Fork this repository or create a branch of **develop**. Branch name: `feature_<Issue Number>/<Description>` - [ ] Complete the development and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **develop**. Pull request: `feature <Issue Number> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)**, **Project**, and **Development** issue Select: **Milestone** as the next official version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Close this issue.
priority
automate the installation of the system describe the new feature create an automated process for new users to install the system on their own aws account idea is to create a series of questions the new user can answer and then everything is setup behind the scenes outline process document steps test steps information needed from user web domain name initial model config info e g config name namelists geo em files acceptance testing new user should be able to setup and install system time estimate days sub issues consider breaking the new feature down into sub issues add a checkbox for each sub issue here relevant deadlines list relevant project deadlines here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority projects and milestone select project select milestone as the next official version or backlog of development ideas new feature checklist complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s project and development issue select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
1
407,554
11,923,565,718
IssuesEvent
2020-04-01 08:04:14
balena-io/balena-supervisor
https://api.github.com/repos/balena-io/balena-supervisor
opened
The supervisor should add a random offset to its update interval
High priority Low-hanging fruit
This update should be stored, and applied to every update delay. The offset should be in the range: ``` [0, updateInterval] ``` This will define a point in time that all devices will update at: ``` n * UPDATE_INTERVAL + offset ``` This should be recalculated on interval change. This then makes the `updateInterval` a minimum value, and the maximum is `2*updateInterval`.
1.0
The supervisor should add a random offset to its update interval - This update should be stored, and applied to every update delay. The offset should be in the range: ``` [0, updateInterval] ``` This will define a point in time that all devices will update at: ``` n * UPDATE_INTERVAL + offset ``` This should be recalculated on interval change. This then makes the `updateInterval` a minimum value, and the maximum is `2*updateInterval`.
priority
the supervisor should add a random offset to its update interval this update should be stored and applied to every update delay the offset should be in the range this will define a point in time that all devices will update at n update interval offset this should be recalculated on interval change this then makes the updateinterval a minimum value and the maximum is updateinterval
1
566,267
16,817,085,027
IssuesEvent
2021-06-17 08:40:24
woocommerce/google-listings-and-ads
https://api.github.com/repos/woocommerce/google-listings-and-ads
reopened
Global Offers
priority: high type: enhancement type: epic
Currently GLA [syncs all products to all target countries](https://github.com/woocommerce/google-listings-and-ads/pull/319) This approach works but creates a lot of overhead/resource consumption such as API requests, product counts & a real risk we'll keep hitting our quotas quickly (a small number of merchants can use up the quota if they have a large number of products and select to target all countries). The alternative approach is to move to "Global Offers" - Global Offers makes it easy to list products in multiple countries without needing to upload the products to each country (this was beta when initially discussing the project and there was some disconnect/confusion about using this as the go-to/default approach when working on https://github.com/woocommerce/google-listings-and-ads/pull/399). The change we need to make is essentially * We no longer need to submit products for each country * We only need to submit the list of shipping countries (an enhancement to https://github.com/woocommerce/google-listings-and-ads/pull/399) * The product will be displayed in all of the Shipping countries regardless of the target country that it is submitted to At face value - it seems simple enough - but I know we are doing a lot of juggling under the hood. Some of the questions that came to mind when discussing internally for reference > Will the target country still be relevant after these changes? I mean what is the difference of a product submitted in all countries vs. a product that’s submitted in one country but SHIPS to all countries? Based on discussions this morning my understanding is we can actually think of the "target country" more as the country of sale with shipping to multiple countries. In the UI target country will be mapped to "Country of Sale" then in the Program and Status columns will see multiple rows for each shipping country.
<img width="1242" alt="Markup 2021-06-16 at 13 04 00" src="https://user-images.githubusercontent.com/355014/122154142-74d46600-cea3-11eb-98ab-b3846de0a5df.png"> > I assume that after this change we will only submit a product once for the shop’s current country (set in Woo settings) and then set the shipping based on the target country settings. This will mean that if their API doesn’t change, we will have only one ID and one synced product to deal with. Correct, the Google team confirmed it would make sense to use the store location for the "target country" product attribute then set shipping based on all the "target countries" the merchant selects during onboarding. > We could store the target countries as a separate meta and assume that they all have the same ID. > Or we can just store the same ID for each target country and continue using the same structure that we have now. > I would go with the first method to separate the concepts of target countries and shipping countries. At the moment we have `_wc_gla_google_ids` meta which is a serialized array of Ids e.g. for a single target country it looks like `a:1:{s:2:"AU";s:19:"online:en:AU:gla_85";}` Thinking outside the box - the current structure may still offer some benefits if we add multi-lingual / currency / country support to a single site - open to some brain cycles on this though - I just like to avoid backtracking too much if we avoid it. Otherwise an un-educated thought - we could probably move to something simple like `_wc_gla_google_id` (singular) and `online:en:AU:gla_85` - so yes same/single ID for the product being uploaded (we won't have multiple IDs to track anymore - at the moment). **Notes** * variations are still handled individually - that hasn't changed.
* I am assuming there are going to be flow-on changes for the product feed summary, issues, and table as a result of this - but this might help reduce some of the performance impacts cc @layoutd * impact on existing offers - we might need to look at running a migration job to clean up products that have been pushed up - but I'll wait to hear the team's thoughts. **Reference pull requests** * [Sync products for all target countries](https://github.com/woocommerce/google-listings-and-ads/pull/319) * [Set Product Shipping Information Based on Target Country](https://github.com/woocommerce/google-listings-and-ads/pull/399)
1.0
Global Offers - Currently GLA [syncs all products to all target countries](https://github.com/woocommerce/google-listings-and-ads/pull/319) This approach works but creates a lot of overhead/resource consumption such as API requests, product counts & a real risk we'll keep hitting our quotas quickly (a small number of merchants can use up the quota if they have a large number of products and select to target all countries). The alternative approach is to move to "Global Offers" - Global Offers makes it easy to list products in multiple countries without needing to upload the products to each country (this was beta when initially discussing the project and there was some disconnect/confusion about using this as the go-to/default approach when working on https://github.com/woocommerce/google-listings-and-ads/pull/399). The change we need to make is essentially * We no longer need to submit products for each country * We only need to submit the list of shipping countries (an enhancement to https://github.com/woocommerce/google-listings-and-ads/pull/399) * The product will be displayed in all of the Shipping countries regardless of the target country that it is submitted to At face value - it seems simple enough - but I know we are doing a lot of juggling under the hood. Some of the questions that came to mind when discussing internally for reference > Will the target country still be relevant after these changes? I mean what is the difference of a product submitted in all countries vs. a product that’s submitted in one country but SHIPS to all countries? Based on discussions this morning my understanding is we can actually think of the "target country" more as the country of sale with shipping to multiple countries. In the UI target country will be mapped to "Country of Sale" then in the Program and Status columns will see multiple rows for each shipping country.
<img width="1242" alt="Markup 2021-06-16 at 13 04 00" src="https://user-images.githubusercontent.com/355014/122154142-74d46600-cea3-11eb-98ab-b3846de0a5df.png"> > I assume that after this change we will only submit a product once for the shop’s current country (set in Woo settings) and then set the shipping based on the target country settings. This will mean that if their API doesn’t change, we will have only one ID and one synced product to deal with. Correct, the Google team confirmed it would make sense to use the store location for the "target country" product attribute then set shipping based on all the "target countries" the merchant selects during onboarding. > We could store the target countries as a separate meta and assume that they all have the same ID. > Or we can just store the same ID for each target country and continue using the same structure that we have now. > I would go with the first method to separate the concepts of target countries and shipping countries. At the moment we have `_wc_gla_google_ids` meta which is a serialized array of Ids e.g. for a single target country it looks like `a:1:{s:2:"AU";s:19:"online:en:AU:gla_85";}` Thinking outside the box - the current structure may still offer some benefits if we add multi-lingual / currency / country support to a single site - open to some brain cycles on this though - I just like to avoid backtracking too much if we avoid it. Otherwise an un-educated thought - we could probably move to something simple like `_wc_gla_google_id` (singular) and `online:en:AU:gla_85` - so yes same/single ID for the product being uploaded (we won't have multiple IDs to track anymore - at the moment). **Notes** * variations are still handled individually - that hasn't changed.
* I am assuming there are going to be flow-on changes for the product feed summary, issues, and table as a result of this - but this might help reduce some of the performance impacts cc @layoutd * impact on existing offers - we might need to look at running a migration job to clean up products that have been pushed up - but I'll wait to hear the team's thoughts. **Reference pull requests** * [Sync products for all target countries](https://github.com/woocommerce/google-listings-and-ads/pull/319) * [Set Product Shipping Information Based on Target Country](https://github.com/woocommerce/google-listings-and-ads/pull/399)
priority
global offers currently gla this approach works but creates a lot of overhead resource consumption such as api requests product counts a real risk we ll keep hitting our quotas quickly a small number of merchants can use up the quota if they have a large number of products and select to target all countries the alternative approach is to move to global offers global offers makes it easy to list products in multiple countries without needing to upload the products to each country this was beta when initially discussing the project and there was some disconnect confusion about using this as the go to default approach when working on the change we need to make is essentially we no longer need to submit products for each country we only need to submit the list of shipping countries an enhancement to the product will be displayed in all of the shipping countries regardless of the target country that it is submitted to at face value it seems simple enough but i know we are doing a lot of juggling under the hood some of the questions that came to mind when discussing internally for reference will the target country still be relevant after these changes i mean what is the difference of a product submitted in all countries vs a product that’s submitted in one country but ships to all countries based on discussions this morning my understanding is we can actually think of the target country more as the country of sale with shipping to multiple countries in the ui target country will be mapped to country of sale then in the program and status columns will see multiple rows for each shipping country img width alt markup at src i assume that after this change we will only submit a product once for the shop’s current country set in woo settings and then set the shipping based on the target country settings this will mean that if their api doesn’t change we will have only one id and one synced product to deal with correct the google team confirmed it would make sense to use
the store location for the target country product attribute then set shipping based on all the target countries the merchant selects during onboarding we could store the target countries as a separate meta and assume that they all have the same id or we can just store the same id for each target country and continue using the same structure that we have now i would go with the first method to separate the concepts of target countries and shipping countries at the moment we have wc gla google ids meta which is a serialized array of ids e g for a single target country it looks like a s au s online en au gla thinking outside the box the current structure may still offer some benefits if we add multi lingual currency country support to a single site open to some brain cycles on this though i just like to avoid backtracking too much if we avoid it otherwise an un educated thought we could probably move to something simple like wc gla google id singular and online en au gla so yes same single id for the product being uploaded we won t have multiple ids to track anymore at the moment notes variations are still handled individually that hasn t changed i am assuming there are going to be flow on changes for the product feed summary issues and table as a result of this but this might help reduce some of the performance impacts cc layoutd impact on existing offers we might need to look at running a migration job to clean up products that have been pushed up but i ll wait to hear the team s thoughts reference pull requests
1