| Column | Type | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 to 19 |
| repo | string | length 5 to 112 |
| repo_url | string | length 34 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 855 |
| labels | string | length 4 to 721 |
| body | string | length 1 to 261k |
| index | string | 13 classes |
| text_combine | string | length 96 to 261k |
| label | string | 2 classes |
| text | string | length 96 to 240k |
| binary_label | int64 | 0 to 1 |
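In the rows below, the `text_combine` column is the title and body joined with `" - "`, and the `text` column appears to be that string lowercased with URLs, digits, and punctuation stripped and whitespace collapsed. The dataset's actual cleaning code is not shown in this preview; the sketch below is an assumed reconstruction that reproduces the visible behaviour of the preview rows.

```python
import re

def clean(text: str) -> str:
    """Approximate the `text` column: drop URLs, keep only letters,
    lowercase, and collapse runs of whitespace. This is an assumption
    inferred from the preview rows, not the dataset's actual pipeline."""
    text = re.sub(r"https?://\S+", " ", text)  # strip URLs first
    text = re.sub(r"[^a-zA-Z]+", " ", text)    # keep letters only
    return " ".join(text.lower().split())

# Values taken from the lwjohnst86/prodigenr row below.
title = "Add drake build instead of using devtools load_all"
body = "And have the project setup not be a package."
print(clean(f"{title} - {body}"))
# → add drake build instead of using devtools load all and have the project setup not be a package
```

The output matches that row's `text` field exactly, including `load_all` splitting into `load all`, which is consistent with other rows where contractions like "don't" become "don t".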
---

**Row** 119,964 · **id** 4,778,277,125 · **type** IssuesEvent · **created_at** 2016-10-27 18:48:35
**repo** IQSS/dataverse · **repo_url** https://api.github.com/repos/IQSS/dataverse · **action** closed
**title** Search: search box returns 0 results when searching for existing/duplicate MD5
**labels** Component: Search/Browse Priority 3: Serious Priority: High Status: QA Type: Bug Type: Feature
**body** I was trying to locate a duplicate md5 in dataset with a large number of files but the search feature returns 0 results. MD5 i tested was tagged as a duplicate during file upload. I put the md5 into the search for the entire dataset, with 0 results.
**index** 2.0
**text_combine** Search: search box returns 0 results when searching for existing/duplicate MD5 - I was trying to locate a duplicate md5 in dataset with a large number of files but the search feature returns 0 results. MD5 i tested was tagged as a duplicate during file upload. I put the md5 into the search for the entire dataset, with 0 results.
**label** priority
**text** search search box returns results when searching for existing duplicate i was trying to locate a duplicate in dataset with a large number of files but the search feature returns results i tested was tagged as a duplicate during file upload i put the into the search for the entire dataset with results
**binary_label** 1
---

**Row** 310,334 · **id** 9,488,803,616 · **type** IssuesEvent · **created_at** 2019-04-22 20:34:56
**repo** AugurProject/augur · **repo_url** https://api.github.com/repos/AugurProject/augur · **action** closed
**title** Mobile trading pg positions fields are scrunched together
**labels** Priority: High
**body** Fields under the positions section of the trading pg on Mobil are all jumbled together....
see screenshot
https://cdn.discordapp.com/attachments/464067340015763466/568586637345423360/Screenshot_20190418-195855.jpg
**index** 1.0
**text_combine** Mobile trading pg positions fields are scrunched together - Fields under the positions section of the trading pg on Mobil are all jumbled together....
see screenshot
https://cdn.discordapp.com/attachments/464067340015763466/568586637345423360/Screenshot_20190418-195855.jpg
**label** priority
**text** mobile trading pg positions fields are scrunched together fields under the positions section of the trading pg on mobil are all jumbled together see screenshot
**binary_label** 1
---

**Row** 290,566 · **id** 8,896,698,760 · **type** IssuesEvent · **created_at** 2019-01-16 12:15:39
**repo** FundacionParaguaya/MentorApp · **repo_url** https://api.github.com/repos/FundacionParaguaya/MentorApp · **action** closed
**title** When logging out and logging back in survey images disappear
**labels** bug high priority
**body** **Describe the bug**
When I log in as a user, then I log out, then I login as a different user, create a draft and go to the questions screen, I no longer see images.
When I logout and then I login back in as the same user - I no longer see the images - The images ONLY appear on emptying the data and the cache
**Expected behavior**
I should still see images
**Smartphone (please complete the following information):**
- Device: Emulator
- OS: Android Oreo
**index** 1.0
**text_combine** When logging out and logging back in survey images disappear - **Describe the bug**
When I log in as a user, then I log out, then I login as a different user, create a draft and go to the questions screen, I no longer see images.
When I logout and then I login back in as the same user - I no longer see the images - The images ONLY appear on emptying the data and the cache
**Expected behavior**
I should still see images
**Smartphone (please complete the following information):**
- Device: Emulator
- OS: Android Oreo
**label** priority
**text** when logging out and logging back in survey images disappear describe the bug when i log in as a user then i log out then i login as a different user create a draft and go to the questions screen i no longer see images when i logout and then i login back in as the same user i no longer see the images the images only appear on emptying the data and the cache expected behavior i should still see images smartphone please complete the following information device emulator os android oreo
**binary_label** 1
---

**Row** 99,215 · **id** 4,049,397,627 · **type** IssuesEvent · **created_at** 2016-05-23 14:03:31
**repo** GPUOpen-Effects/TressFX · **repo_url** https://api.github.com/repos/GPUOpen-Effects/TressFX · **action** closed
**title** Weird glitchy render with the preview
**labels** priority: high type: bug
**body** TressFX 2.2 render : http://puu.sh/mKo82/21b2cdd9f6.jpg
TressFX 3.0 render : http://puu.sh/mKo9U/11de724e15.jpg
The hair on the 3.0 is glitchy, i had to zoom so it's easier to see.
Built at the same hour, on the same computer.
**index** 1.0
**text_combine** Weird glitchy render with the preview - TressFX 2.2 render : http://puu.sh/mKo82/21b2cdd9f6.jpg
TressFX 3.0 render : http://puu.sh/mKo9U/11de724e15.jpg
The hair on the 3.0 is glitchy, i had to zoom so it's easier to see.
Built at the same hour, on the same computer.
**label** priority
**text** weird glitchy render with the preview tressfx render tressfx render the hair on the is glitchy i had to zoom so it s easier to see built at the same hour on the same computer
**binary_label** 1
---

**Row** 498,923 · **id** 14,435,693,834 · **type** IssuesEvent · **created_at** 2020-12-07 09:06:31
**repo** Aktanusa/CookieMonster · **repo_url** https://api.github.com/repos/Aktanusa/CookieMonster · **action** closed
**title** Negative Bonus Income when golden cookie is on the screen
**labels** Bug HIGH priority
**body** 
My best guess is that Dragon's Fortune is not considered in the calculations
**index** 1.0
**text_combine** Negative Bonus Income when golden cookie is on the screen - 
My best guess is that Dragon's Fortune is not considered in the calculations
**label** priority
**text** negative bonus income when golden cookie is on the screen my best guess is that dragon s fortune is not considered in the calculations
**binary_label** 1
---

**Row** 186,242 · **id** 6,734,621,643 · **type** IssuesEvent · **created_at** 2017-10-18 18:41:58
**repo** geosolutions-it/eumetsat-EOWS · **repo_url** https://api.github.com/repos/geosolutions-it/eumetsat-EOWS · **action** opened
**title** Improve Resource Limits
**labels** geoserver Priority: High user story
**body** - [ ] #45 Ability to kill raster rendering/processing requests in JAI
- [ ] #46 Ability to kill long running DBMS queries
**index** 1.0
**text_combine** Improve Resource Limits - - [ ] #45 Ability to kill raster rendering/processing requests in JAI
- [ ] #46 Ability to kill long running DBMS queries
**label** priority
**text** improve resource limits ability to kill raster rendering processing requests in jai ability to kill long running dbms queries
**binary_label** 1
---

**Row** 86,335 · **id** 3,710,741,713 · **type** IssuesEvent · **created_at** 2016-03-02 06:40:59
**repo** damlaren/ogle · **repo_url** https://api.github.com/repos/damlaren/ogle · **action** closed
**title** rejigger Mesh class
**labels** priority:very high
**body** Existing class is only appropriate for rendering, closely tied to MeshRenderer, and makes assumptions about what hardware & graphics API is being used.
**index** 1.0
**text_combine** rejigger Mesh class - Existing class is only appropriate for rendering, closely tied to MeshRenderer, and makes assumptions about what hardware & graphics API is being used.
**label** priority
**text** rejigger mesh class existing class is only appropriate for rendering closely tied to meshrenderer and makes assumptions about what hardware graphics api is being used
**binary_label** 1
---

**Row** 418,840 · **id** 12,214,136,357 · **type** IssuesEvent · **created_at** 2020-05-01 09:05:38
**repo** JuezUN/INGInious · **repo_url** https://api.github.com/repos/JuezUN/INGInious · **action** opened
**title** Modify the way input files are downloaded
**labels** Change request Frontend High Priority Plugins Task
**body** Currently, input files in multilang are downloaded sending the text from container.
Though, this uses more memory in the container that may cause memory leaks and in case the input is long, the frontend will be heavier.
A better way set the files that can be downloaded in the public/ folder in the task file system.
**index** 1.0
**text_combine** Modify the way input files are downloaded - Currently, input files in multilang are downloaded sending the text from container.
Though, this uses more memory in the container that may cause memory leaks and in case the input is long, the frontend will be heavier.
A better way set the files that can be downloaded in the public/ folder in the task file system.
**label** priority
**text** modify the way input files are downloaded currently input files in multilang are downloaded sending the text from container though this uses more memory in the container that may cause memory leaks and in case the input is long the frontend will be heavier a better way set the files that can be downloaded in the public folder in the task file system
**binary_label** 1
---

**Row** 401,322 · **id** 11,788,697,465 · **type** IssuesEvent · **created_at** 2020-03-17 15:57:49
**repo** AY1920S2-CS2103T-W17-2/main · **repo_url** https://api.github.com/repos/AY1920S2-CS2103T-W17-2/main · **action** closed
**title** Create SuggestionModelImpl
**labels** priority.High status.Ongoing type.Enhancement
**body** To create `SuggestionModelImpl` class that implements the `SuggestionModel` interface.
`model --> suggestion --> SuggestionModelImpl`
**index** 1.0
**text_combine** Create SuggestionModelImpl - To create `SuggestionModelImpl` class that implements the `SuggestionModel` interface.
`model --> suggestion --> SuggestionModelImpl`
**label** priority
**text** create suggestionmodelimpl to create suggestionmodelimpl class that implements the suggestionmodel interface model suggestion suggestionmodelimpl
**binary_label** 1
---

**Row** 145,185 · **id** 5,560,059,710 · **type** IssuesEvent · **created_at** 2017-03-24 18:27:29
**repo** Esteemed-Innovation/Esteemed-Innovation · **repo_url** https://api.github.com/repos/Esteemed-Innovation/Esteemed-Innovation · **action** closed
**title** several blocks don't function correctly after having their chunks reloaded
**labels** Content: SteamNet and Technology Priority: High Status: Cannot Reproduce Type: Bug
**body** seems to be sporadic but the flash boiler will sometimes appear to be completely empty of steam and water and you will not be able to interact with it (feed fuel/water) until you break and replace a block to reinitialize it.
Vacuums will not suck up items until they are broken and replaced in a similar way to the above.
Valve pipes that /were/ closed, will now be open but still graphically show as closed. fixing these requires that you open and then close the valve again.
All three of these issues are related to chunk saving/loading that contain Flaxbeard's tile entities
makes using Flaxbeard's reliably extremely frustrating.
**index** 1.0
**text_combine** several blocks don't function correctly after having their chunks reloaded - seems to be sporadic but the flash boiler will sometimes appear to be completely empty of steam and water and you will not be able to interact with it (feed fuel/water) until you break and replace a block to reinitialize it.
Vacuums will not suck up items until they are broken and replaced in a similar way to the above.
Valve pipes that /were/ closed, will now be open but still graphically show as closed. fixing these requires that you open and then close the valve again.
All three of these issues are related to chunk saving/loading that contain Flaxbeard's tile entities
makes using Flaxbeard's reliably extremely frustrating.
**label** priority
**text** several blocks don t function correctly after having their chunks reloaded seems to be sporadic but the flash boiler will sometimes appear to be completely empty of steam and water and you will not be able to interact with it feed fuel water until you break and replace a block to reinitialize it vacuums will not suck up items until they are broken and replaced in a similar way to the above valve pipes that were closed will now be open but still graphically show as closed fixing these requires that you open and then close the valve again all three of these issues are related to chunk saving loading that contain flaxbeard s tile entities makes using flaxbeard s reliably extremely frustrating
**binary_label** 1
---

**Row** 535,824 · **id** 15,699,411,492 · **type** IssuesEvent · **created_at** 2021-03-26 08:26:57
**repo** wso2/product-apim · **repo_url** https://api.github.com/repos/wso2/product-apim · **action** closed
**title** API resources required to create API Product are not displayed
**labels** API-M 4.0.0 Feature/APIProducts Priority/Highest Resolution/Duplicate Type/Bug Type/React-UI Type/UX
**body** ### Description:
When attempting to create an API Product via the Publisher UI, when navigating to the API Selection window in order to select the resources that need to be added to the API Product, no resources are displayed when selecting a respective API as shown below

This issue wasnt noticed in 9.0.77.SNAPSHOT, but this is occurring in the 9.0.79.SNAPSHOT.
### Steps to reproduce:
1. Follow the documentation https://apim.docs.wso2.com/en/latest/learn/design-api/create-api-product/create-api-product/ and import APIs via Swagger URL by copying the link location of the respective Swagger files attached in the docs(https://apim.docs.wso2.com/en/3.2.0/assets/attachments/learn/customer-info-api.yaml and https://apim.docs.wso2.com/en/3.2.0/assets/attachments/learn/leasing-api.yaml)
2. Create and API Product and attempt to add resources from the above created APIs.
### Affected Product Version:
9.0.79.SNAPSHOT
**index** 1.0
**text_combine** API resources required to create API Product are not displayed - ### Description:
When attempting to create an API Product via the Publisher UI, when navigating to the API Selection window in order to select the resources that need to be added to the API Product, no resources are displayed when selecting a respective API as shown below

This issue wasnt noticed in 9.0.77.SNAPSHOT, but this is occurring in the 9.0.79.SNAPSHOT.
### Steps to reproduce:
1. Follow the documentation https://apim.docs.wso2.com/en/latest/learn/design-api/create-api-product/create-api-product/ and import APIs via Swagger URL by copying the link location of the respective Swagger files attached in the docs(https://apim.docs.wso2.com/en/3.2.0/assets/attachments/learn/customer-info-api.yaml and https://apim.docs.wso2.com/en/3.2.0/assets/attachments/learn/leasing-api.yaml)
2. Create and API Product and attempt to add resources from the above created APIs.
### Affected Product Version:
9.0.79.SNAPSHOT
**label** priority
**text** api resources required to create api product are not displayed description when attempting to create an api product via the publisher ui when navigating to the api selection window in order to select the resources that need to be added to the api product no resources are displayed when selecting a respective api as shown below this issue wasnt noticed in snapshot but this is occurring in the snapshot steps to reproduce follow the documentation and import apis via swagger url by copying the link location of the respective swagger files attached in the docs and create and api product and attempt to add resources from the above created apis affected product version snapshot
**binary_label** 1
---

**Row** 795,112 · **id** 28,061,916,354 · **type** IssuesEvent · **created_at** 2023-03-29 13:09:55
**repo** Riksrevisjonen/noAPI · **repo_url** https://api.github.com/repos/Riksrevisjonen/noAPI · **action** closed
**title** Prepare package for RRAN release
**labels** Priority: 2-High Type: 2-Enhancement
**body** - [x] Make sure there are no errors in R CMD check on local OS
- [x] Make sure R CMD check passes with R version 4.1 on Windows 10 and R version 4.2 on macOS
- [x] Assign a version number (update DESCRIPTION and NEWS.md)
- [x] Put the package on RRAN
- [x] Create a Github release for the package version installed on RRAN
**index** 1.0
**text_combine** Prepare package for RRAN release - - [x] Make sure there are no errors in R CMD check on local OS
- [x] Make sure R CMD check passes with R version 4.1 on Windows 10 and R version 4.2 on macOS
- [x] Assign a version number (update DESCRIPTION and NEWS.md)
- [x] Put the package on RRAN
- [x] Create a Github release for the package version installed on RRAN
**label** priority
**text** prepare package for rran release make sure there are no errors in r cmd check on local os make sure r cmd check passes with r version on windows and r version on macos assign a version number update description and news md put the package on rran create a github release for the package version installed on rran
**binary_label** 1
---

**Row** 178,527 · **id** 6,609,789,479 · **type** IssuesEvent · **created_at** 2017-09-19 15:33:45
**repo** dschoenbauer/orm · **repo_url** https://api.github.com/repos/dschoenbauer/orm · **action** opened
**title** Add event to allow for Update but not allow it to display
**labels** enhancement Priority: High v1.0
**body** #### Overview description
A field should be allowed to be updated but not allowed to be viewed. This is specific to a password field. It should never be displayed.
#### Steps to reproduce
1.Pull any field from the database
#### Actual Results
Field is returned
#### Expected Results
Fields identified as hidden shouldn't be returned
### Technical details
* PHP Version: 5.6
* Library Version: 1.0.1
* Database Type: MYSQL
* Browser/OS: Chrome / Windows
**index** 1.0
**text_combine** Add event to allow for Update but not allow it to display - #### Overview description
A field should be allowed to be updated but not allowed to be viewed. This is specific to a password field. It should never be displayed.
#### Steps to reproduce
1.Pull any field from the database
#### Actual Results
Field is returned
#### Expected Results
Fields identified as hidden shouldn't be returned
### Technical details
* PHP Version: 5.6
* Library Version: 1.0.1
* Database Type: MYSQL
* Browser/OS: Chrome / Windows
**label** priority
**text** add event to allow for update but not allow it to display overview description a field should be allowed to be updated but not allowed to be viewed this is specific to a password field it should never be displayed steps to reproduce pull any field from the database actual results field is returned expected results fields identified as hidden shouldn t be returned technical details php version library version database type mysql browser os chrome windows
**binary_label** 1
---

**Row** 281,341 · **id** 8,694,064,886 · **type** IssuesEvent · **created_at** 2018-12-04 11:31:08
**repo** NJACKWinterOfCode/Alphynite · **repo_url** https://api.github.com/repos/NJACKWinterOfCode/Alphynite · **action** closed
**title** Make project Live - Priority Extremely High
**labels** Beginner Priority:HIGH good first issue
**body** **I'm submitting a ...**
- [x] feature request
**Current behavior:**
<!-- How the bug manifests. -->
No Deployments.
**Expected behavior:**
<!-- Behavior would be without the bug. -->
Deploy the project to Heroku or any other free hosting provider.
Here is the resource for Heroku .
https://medium.com/@hellotunmbi/how-to-deploy-angular-application-to-heroku-1d56e09c5147
**Would you like to work on the issue?**
<!-- Please let us know if you can work on it or the issue should be assigned to someone else. -->
No.Open for others.
**index** 1.0
**text_combine** Make project Live - Priority Extremely High - **I'm submitting a ...**
- [x] feature request
**Current behavior:**
<!-- How the bug manifests. -->
No Deployments.
**Expected behavior:**
<!-- Behavior would be without the bug. -->
Deploy the project to Heroku or any other free hosting provider.
Here is the resource for Heroku .
https://medium.com/@hellotunmbi/how-to-deploy-angular-application-to-heroku-1d56e09c5147
**Would you like to work on the issue?**
<!-- Please let us know if you can work on it or the issue should be assigned to someone else. -->
No.Open for others.
**label** priority
**text** make project live priority extremely high i m submitting a feature request current behavior no deployments expected behavior deploy the project to heroku or any other free hosting provider here is the resource for heroku would you like to work on the issue no open for others
**binary_label** 1
---

**Row** 517,851 · **id** 15,020,452,925 · **type** IssuesEvent · **created_at** 2021-02-01 14:43:59
**repo** ansible/galaxy_ng · **repo_url** https://api.github.com/repos/ansible/galaxy_ng · **action** closed
**title** Remove instances where we still proxy api requests to pulp
**labels** area/api priority/high status/new type/enhancement
**body** There are still a couple of views that use the old pulp_ansible client to proxy requests to pulp_ansible. These should be replaced with subclassed viewsets sometime before GA.
**index** 1.0
**text_combine** Remove instances where we still proxy api requests to pulp - There are still a couple of views that use the old pulp_ansible client to proxy requests to pulp_ansible. These should be replaced with subclassed viewsets sometime before GA.
**label** priority
**text** remove instances where we still proxy api requests to pulp there are still a couple of views that use the old pulp ansible client to proxy requests to pulp ansible these should be replaced with subclassed viewsets sometime before ga
**binary_label** 1
---

**Row** 137,494 · **id** 5,310,366,097 · **type** IssuesEvent · **created_at** 2017-02-12 19:28:18
**repo** alonshmilo/MedicalData_jce · **repo_url** https://api.github.com/repos/alonshmilo/MedicalData_jce · **action** closed
**title** DeepMedic
**labels** 2 - Working <= 5 Points: 8 Priority: Very High
**body** # Issue: DeepMedic
### Explanation:
Efficient Multi-Scale 3D Convolutional Neural Network for Brain Lesion Segmentation. Understanding the network, making it work on our macs, and start to think about the rib cage bone orientation.
### Checklist:
- [x] Code
- [x] Reading
- [x] Documenting reading
- [x] Project book
- [x] Advisor consult
**index** 1.0
**text_combine** DeepMedic - # Issue: DeepMedic
### Explanation:
Efficient Multi-Scale 3D Convolutional Neural Network for Brain Lesion Segmentation. Understanding the network, making it work on our macs, and start to think about the rib cage bone orientation.
### Checklist:
- [x] Code
- [x] Reading
- [x] Documenting reading
- [x] Project book
- [x] Advisor consult
**label** priority
**text** deepmedic issue deepmedic explanation efficient multi scale convolutional neural network for brain lesion segmentation understanding the network making it work on our macs and start to think about the rib cage bone orientation checklist code reading documenting reading project book advisor consult
**binary_label** 1
---

**Row** 171,700 · **id** 6,493,500,940 · **type** IssuesEvent · **created_at** 2017-08-21 17:22:48
**repo** qlicker/qlicker · **repo_url** https://api.github.com/repos/qlicker/qlicker · **action** closed
**title** Prof session run panel should show the correct answer!
**labels** bug enhancement High priority
**body** The preview panel in the prof session run view should show which is the correct answer! The prof might not remember off the top of their head the correct answer!
**index** 1.0
**text_combine** Prof session run panel should show the correct answer! - The preview panel in the prof session run view should show which is the correct answer! The prof might not remember off the top of their head the correct answer!
**label** priority
**text** prof session run panel should show the correct answer the preview panel in the prof session run view should show which is the correct answer the prof might not remember off the top of their head the correct answer
**binary_label** 1
---

**Row** 123,786 · **id** 4,876,005,005 · **type** IssuesEvent · **created_at** 2016-11-16 11:22:21
**repo** BinPar/PPD · **repo_url** https://api.github.com/repos/BinPar/PPD · **action** opened
**title** INFORMACION PRECIOS PRODUCTOS FORMACION
**labels** Priority: High
**body** Se solicita modificar el cálculo de los precios cuando se trate de los siguientes productos:
EXPERTO
MASTER
PROGRAMA DE FORMACION CONTINUADA
PROGRAMA DE ACTUALIZACION
CURSO ONLINE
Los campos:
PRECIO LOCAL ESTIMADO: Se introduce el dato de forma manual
PRECIO ESTIMADO INTERCOMPAÑIA: SIN DATOS
PRECIO ESTIMADO EXPORTACION: SIN DATOS
PRECIO ESTIMADO INTERNACIONAL: SIN CALCULO. Se introduce el dato de forma manual
LO MISMO SUCEDE CON LOS PRECIOS DEFINITIVOS
@CristianBinpar
**index** 1.0
**text_combine** INFORMACION PRECIOS PRODUCTOS FORMACION - Se solicita modificar el cálculo de los precios cuando se trate de los siguientes productos:
EXPERTO
MASTER
PROGRAMA DE FORMACION CONTINUADA
PROGRAMA DE ACTUALIZACION
CURSO ONLINE
Los campos:
PRECIO LOCAL ESTIMADO: Se introduce el dato de forma manual
PRECIO ESTIMADO INTERCOMPAÑIA: SIN DATOS
PRECIO ESTIMADO EXPORTACION: SIN DATOS
PRECIO ESTIMADO INTERNACIONAL: SIN CALCULO. Se introduce el dato de forma manual
LO MISMO SUCEDE CON LOS PRECIOS DEFINITIVOS
@CristianBinpar
**label** priority
**text** informacion precios productos formacion se solicita modificar el cálculo de los precios cuando se trate de los siguientes productos experto master programa de formacion continuada programa de actualizacion curso online los campos precio local estimado se introduce el dato de forma manual precio estimado intercompañia sin datos precio estimado exportacion sin datos precio estimado internacional sin calculo se introduce el dato de forma manual lo mismo sucede con los precios definitivos cristianbinpar
**binary_label** 1
---

**Row** 407,646 · **id** 11,935,122,742 · **type** IssuesEvent · **created_at** 2020-04-02 08:00:17
**repo** wso2/docker-open-banking · **repo_url** https://api.github.com/repos/wso2/docker-open-banking · **action** closed
**title** Create Docker resources for WSO2 Open Banking version 1.5.0
**labels** Priority/High Type/Task
**body** **Description:**
Dockerfile for Open Banking API Manager(OBAM) should be created to build the OBAM based on adoptopenjdk:11.0.6_10-jdk-hotspot-bionic image and run the server
**Affected Product Version:**
OB 1.5.0
**OS, DB, other environment details and versions:**
ubuntu, alpine
**index** 1.0
**text_combine** Create Docker resources for WSO2 Open Banking version 1.5.0 - **Description:**
Dockerfile for Open Banking API Manager(OBAM) should be created to build the OBAM based on adoptopenjdk:11.0.6_10-jdk-hotspot-bionic image and run the server
**Affected Product Version:**
OB 1.5.0
**OS, DB, other environment details and versions:**
ubuntu, alpine
**label** priority
**text** create docker resources for open banking version description dockerfile for open banking api manager obam should be created to build the obam based on adoptopenjdk jdk hotspot bionic image and run the server affected product version ob os db other environment details and versions ubuntu alpine
**binary_label** 1
---

**Row** 324,305 · **id** 9,887,396,726 · **type** IssuesEvent · **created_at** 2019-06-25 09:07:00
**repo** lwjohnst86/prodigenr · **repo_url** https://api.github.com/repos/lwjohnst86/prodigenr · **action** closed
**title** Add drake build instead of using devtools load_all
**labels** enhancement hard high priority
**body** And have the project setup not be a package.
**index** 1.0
**text_combine** Add drake build instead of using devtools load_all - And have the project setup not be a package.
**label** priority
**text** add drake build instead of using devtools load all and have the project setup not be a package
**binary_label** 1
---

**Row** 826,387 · **id** 31,593,488,883 · **type** IssuesEvent · **created_at** 2023-09-05 02:11:18
**repo** lfortran/lfortran · **repo_url** https://api.github.com/repos/lfortran/lfortran · **action** closed
**title** Print ASR after every pass
**labels** high priority
**body** It must print it after each pass and the very first ASR, *before* ASR verify is called, since it can fail.
Both in LPython and LFortran. This will greatly simplify debugging. This is quite urgent now, since I now hit this almost every day and so is everyone else.
**index** 1.0
**text_combine** Print ASR after every pass - It must print it after each pass and the very first ASR, *before* ASR verify is called, since it can fail.
Both in LPython and LFortran. This will greatly simplify debugging. This is quite urgent now, since I now hit this almost every day and so is everyone else.
**label** priority
**text** print asr after every pass it must print it after each pass and the very first asr before asr verify is called since it can fail both in lpython and lfortran this will greatly simplify debugging this is quite urgent now since i now hit this almost every day and so is everyone else
**binary_label** 1
---

**Row** 54,890 · **id** 3,071,567,939 · **type** IssuesEvent · **created_at** 2015-08-19 12:56:17
**repo** INN/Largo · **repo_url** https://api.github.com/repos/INN/Largo · **action** closed
**title** when 1-col footer option is selected, footer 2 & 3 widget areas are still registered
**labels** priority: high type: bug
**body** Observed on MWEN. They should be deactivated to avoid confusion.
**index** 1.0
**text_combine** when 1-col footer option is selected, footer 2 & 3 widget areas are still registered - Observed on MWEN. They should be deactivated to avoid confusion.
**label** priority
**text** when col footer option is selected footer widget areas are still registered observed on mwen they should be deactivated to avoid confusion
**binary_label** 1
---

**Row** 686,063 · **id** 23,475,443,014 · **type** IssuesEvent · **created_at** 2022-08-17 05:16:23
**repo** wso2/product-is · **repo_url** https://api.github.com/repos/wso2/product-is · **action** closed
**title** Document issues in Configuring OIDC Federated IdP Initiated Logout
**labels** Priority/Highest docs Severity/Critical Affected-6.0.0 QA-Reported federated-oidc-back-channel-logout
**body** **Is your suggestion related to a missing or misleading document? Please describe.**
Document [1] required following updates,
[1]https://is.docs.wso2.com/en/5.12.0/learn/configuring-oidc-federated-idp-initiated-logout/
- [ ] Add following missing section on document, [Configure Pickup Dispatch application in the Primary IS](https://is.docs.wso2.com/en/5.12.0/learn/configuring-oidc-federated-idp-initiated-logout/#configure-pickup-dispatch-application-in-the-primary-is)
Enable OIDC back channel logout and add back channel logout url http://localhost:8080/pickup-dispatch/bclogout
<img width="424" alt="image" src="https://user-images.githubusercontent.com/39077751/160244629-fcf8fe5a-2f8e-41b6-b0b1-31d29068dc78.png">
- [ ] Update in [Try out](https://is.docs.wso2.com/en/5.12.0/learn/configuring-oidc-federated-idp-initiated-logout/#try-it-out) section
In point number 4, It shoudl be correct as the same browser with different tab since this is a SLO scenarios
<img width="718" alt="image" src="https://user-images.githubusercontent.com/39077751/160244713-cc415d63-b958-4511-b45f-1e9d42cc8db1.png">
**Describe the improvement**
<!-- A clear and concise description of what needs to be updated. -->
### Optional Fields
**Additional context**
<!-- Add any other context or screenshots about the document issue or suggestion. -->
**Related Issues:**
<!-- Any related issues from this/other repositories-->
**index** 1.0
**text_combine** Document issues in Configuring OIDC Federated IdP Initiated Logout - **Is your suggestion related to a missing or misleading document? Please describe.**
Document [1] required following updates,
[1]https://is.docs.wso2.com/en/5.12.0/learn/configuring-oidc-federated-idp-initiated-logout/
- [ ] Add following missing section on document, [Configure Pickup Dispatch application in the Primary IS](https://is.docs.wso2.com/en/5.12.0/learn/configuring-oidc-federated-idp-initiated-logout/#configure-pickup-dispatch-application-in-the-primary-is)
Enable OIDC back channel logout and add back channel logout url http://localhost:8080/pickup-dispatch/bclogout
<img width="424" alt="image" src="https://user-images.githubusercontent.com/39077751/160244629-fcf8fe5a-2f8e-41b6-b0b1-31d29068dc78.png">
- [ ] Update in [Try out](https://is.docs.wso2.com/en/5.12.0/learn/configuring-oidc-federated-idp-initiated-logout/#try-it-out) section
In point number 4, It shoudl be correct as the same browser with different tab since this is a SLO scenarios
<img width="718" alt="image" src="https://user-images.githubusercontent.com/39077751/160244713-cc415d63-b958-4511-b45f-1e9d42cc8db1.png">
**Describe the improvement**
<!-- A clear and concise description of what needs to be updated. -->
### Optional Fields
**Additional context**
<!-- Add any other context or screenshots about the document issue or suggestion. -->
**Related Issues:**
<!-- Any related issues from this/other repositories-->
**label** priority
**text** document issues in configuring oidc federated idp initiated logout is your suggestion related to a missing or misleading document please describe document required following updates add following missing section on document enable oidc back channel logout and add back channel logout url img width alt image src update in section in point number it shoudl be correct as the same browser with different tab since this is a slo scenarios img width alt image src describe the improvement optional fields additional context related issues
**binary_label** 1
---

**Row** 604,623 · **id** 18,715,696,892 · **type** IssuesEvent · **created_at** 2021-11-03 04:07:53
**repo** AyeCode/geodirectory · **repo_url** https://api.github.com/repos/AyeCode/geodirectory · **action** closed
**title** Add preview option to attachments icon
**labels** Priority: High Type: Enhancement
**body** Currently, there is no way to view an attachment such as a PDF if a user uploads it without previewing or publishing the listing.
We should add a font awesome eye icon so attachments can be previewed.
I think the best solution might be to view this in a modal iframe? If the browser supports viewing PDFs in the browser then it will show, or it will open in new window if not.
**index** 1.0
**text_combine** Add preview option to attachments icon - Currently, there is no way to view an attachment such as a PDF if a user uploads it without previewing or publishing the listing.
We should add a font awesome eye icon so attachments can be previewed.
I think the best solution might be to view this in a modal iframe? If the browser supports viewing PDFs in the browser then it will show, or it will open in new window if not.
**label** priority
**text** add preview option to attachments icon currently there is no way to view an attachment such as a pdf if a user uploads it without previewing or publishing the listing we should add a font awesome eye icon so attachments can be previewed i think the best solution might be to view this in a modal iframe if the browser supports viewing pdfs in the browser then it will show or it will open in new window if not
**binary_label** 1
563,084
| 16,675,804,462
|
IssuesEvent
|
2021-06-07 16:01:52
|
phetsims/tandem
|
https://api.github.com/repos/phetsims/tandem
|
closed
|
ReferenceIO instances should be phetioState: false
|
dev:phet-io priority:2-high status:blocks-publication status:ready-for-review
|
I found an unimportant bug where ReferenceIO instances are often stateful, in addition to being used for Data-Type serialization. Basically, there are many ReferenceIO instances that look like this in the state. For example in Natural Selection:
```
"naturalSelection.global.model.alleles.whiteFurAllele": {
"phetioID": "naturalSelection.global.model.alleles.whiteFurAllele"
},
"naturalSelection.global.model.alleles.brownFurAllele": {
"phetioID": "naturalSelection.global.model.alleles.brownFurAllele"
},
"naturalSelection.global.model.alleles.floppyEarsAllele": {
"phetioID": "naturalSelection.global.model.alleles.floppyEarsAllele"
},
"naturalSelection.global.model.alleles.straightEarsAllele": {
"phetioID": "naturalSelection.global.model.alleles.straightEarsAllele"
},
"naturalSelection.global.model.alleles.shortTeethAllele": {
"phetioID": "naturalSelection.global.model.alleles.shortTeethAllele"
},
"naturalSelection.global.model.alleles.longTeethAllele": {
"phetioID": "naturalSelection.global.model.alleles.longTeethAllele"
},
```
Basically this is just fluff, but the buggy part is that the state engine IS trying to apply state on it. This is also the case for Dialogs.
I propose:
adding an applyState function in ReferenceIO that asserts false. This helped me catch the Natural Selection problems.
Mark all the phetioState: false instances as necessary to get these sims running again.
Tagging https://github.com/phetsims/phet-io/issues/1774 as that issue is where I discovered this.
|
1.0
|
ReferenceIO instances should be phetioState: false - I found an unimportant bug where ReferenceIO instances are often stateful, in addition to being used for Data-Type serialization. Basically, there are many ReferenceIO instances that look like this in the state. For example in Natural Selection:
```
"naturalSelection.global.model.alleles.whiteFurAllele": {
"phetioID": "naturalSelection.global.model.alleles.whiteFurAllele"
},
"naturalSelection.global.model.alleles.brownFurAllele": {
"phetioID": "naturalSelection.global.model.alleles.brownFurAllele"
},
"naturalSelection.global.model.alleles.floppyEarsAllele": {
"phetioID": "naturalSelection.global.model.alleles.floppyEarsAllele"
},
"naturalSelection.global.model.alleles.straightEarsAllele": {
"phetioID": "naturalSelection.global.model.alleles.straightEarsAllele"
},
"naturalSelection.global.model.alleles.shortTeethAllele": {
"phetioID": "naturalSelection.global.model.alleles.shortTeethAllele"
},
"naturalSelection.global.model.alleles.longTeethAllele": {
"phetioID": "naturalSelection.global.model.alleles.longTeethAllele"
},
```
Basically this is just fluff, but the buggy part is that the state engine IS trying to apply state on it. This is also the case for Dialogs.
I propose:
adding an applyState function in ReferenceIO that asserts false. This helped me catch the Natural Selection problems.
Mark all the phetioState: false instances as necessary to get these sims running again.
Tagging https://github.com/phetsims/phet-io/issues/1774 as that issue is where I discovered this.
|
priority
|
referenceio instances should be phetiostate false i found an unimportant bug where referenceio instances are often stateful in addition to being used for data type serialization basically there are many referenceio instances that look like this in the state for example in natural selection naturalselection global model alleles whitefurallele phetioid naturalselection global model alleles whitefurallele naturalselection global model alleles brownfurallele phetioid naturalselection global model alleles brownfurallele naturalselection global model alleles floppyearsallele phetioid naturalselection global model alleles floppyearsallele naturalselection global model alleles straightearsallele phetioid naturalselection global model alleles straightearsallele naturalselection global model alleles shortteethallele phetioid naturalselection global model alleles shortteethallele naturalselection global model alleles longteethallele phetioid naturalselection global model alleles longteethallele basically this is just fluff but buggy part is that the state engine is trying to apply state on it this is also the case for dialogs i propose adding an applystate function in referenceio that asserts false this helped me catch the natural selection problems mark all the phetiostate false instances as necessary to get these sims running again tagging as that issue is where i discovered this
| 1
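The proposal in the record above can be sketched in Python (a toy analogue; `ReferenceIO`, `phetio_state`, and `apply_state` here are illustrative stand-ins, not PhET's actual TypeScript API): a reference-style serializer identifies an object by its ID only, is marked non-stateful, and raises if the state engine tries to apply state through it.

```python
class ReferenceIO:
    """Toy reference serializer: identifies an object by its phetioID only."""
    phetio_state = False  # proposal: reference instances carry no restorable state

    @staticmethod
    def to_state_object(obj):
        # Data-type serialization: the "state" is just the identifier.
        return {"phetioID": obj.phetio_id}

    @staticmethod
    def apply_state(obj, state):
        # Proposal: assert here so wrongly-stateful entries are caught early.
        raise AssertionError("ReferenceIO cannot apply state")


class Allele:
    def __init__(self, phetio_id):
        self.phetio_id = phetio_id


allele = Allele("naturalSelection.global.model.alleles.whiteFurAllele")
snapshot = ReferenceIO.to_state_object(allele)

# A state engine that honors the flag skips the entry instead of raising:
restored = [ReferenceIO.apply_state(allele, snapshot)] if ReferenceIO.phetio_state else []
```

With `phetio_state` False the engine never calls `apply_state`, which is exactly the behavior the issue asks for: the fluff entries stay out of restoration, and any code path that does try to restore them trips the assertion.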
|
103,510
| 4,174,410,264
|
IssuesEvent
|
2016-06-21 13:59:50
|
ngageoint/hootenanny
|
https://api.github.com/repos/ngageoint/hootenanny
|
closed
|
Right angle turn not conflating
|
Category: Algorithms Priority: High Type: Bug
|
Example case is in the associated branch. (`test-files/cases/unifying/highway-662/`)
Small changes to the geometries will make the lines conflate, but there is clearly a bug in there. Please investigate and fix.
|
1.0
|
Right angle turn not conflating - Example case is in the associated branch. (`test-files/cases/unifying/highway-662/`)
Small changes to the geometries will make the lines conflate, but there is clearly a bug in there. Please investigate and fix.
|
priority
|
right angle turn not conflating example case is in the associated branch test files cases unifying highway small changes to the geometries will make the lines conflate but there is clearly a bug in there please investigate and fix
| 1
|
136,295
| 5,279,548,929
|
IssuesEvent
|
2017-02-07 11:37:24
|
bbcarchdev/spindle
|
https://api.github.com/repos/bbcarchdev/spindle
|
reopened
|
Loops in triggers
|
bug high priority triaged Twine
|
When ingesting data using a resource as the named graph, a loop is created in the triggers.
Example:
the following file
[sample.txt](https://github.com/bbcarchdev/spindle/files/529654/sample.txt)
generates:
<img width="938" alt="screen shot 2016-10-14 at 10 07 51" src="https://cloud.githubusercontent.com/assets/196436/19389987/0bb433ba-921e-11e6-9b5b-624d48c85527.png">
|
1.0
|
Loops in triggers - When ingesting data using a resource as the named graph, a loop is created in the triggers.
Example:
the following file
[sample.txt](https://github.com/bbcarchdev/spindle/files/529654/sample.txt)
generates:
<img width="938" alt="screen shot 2016-10-14 at 10 07 51" src="https://cloud.githubusercontent.com/assets/196436/19389987/0bb433ba-921e-11e6-9b5b-624d48c85527.png">
|
priority
|
loops in triggers when ingesting a data using a resource as the named graph a loop is created in the triggers example the following file generates img width alt screen shot at src
| 1
|
712,948
| 24,512,089,484
|
IssuesEvent
|
2022-10-10 22:58:53
|
tbaranoski/Trading_Quant
|
https://api.github.com/repos/tbaranoski/Trading_Quant
|
closed
|
[Possible Bug]: get_bars pulling aftermarket and pre market data intraday
|
HIGH PRIORITY
|
We want to make sure we are only pulling market hour data and not pre/post market data for trend computations
|
1.0
|
[Possible Bug]: get_bars pulling aftermarket and pre market data intraday - We want to make sure we are only pulling market hour data and not pre/post market data for trend computations
|
priority
|
get bars pulling aftermarket and pre market data intraday we want to make sure we are only pulling market hour data and not pre post market data for trend computations
| 1
|
563,834
| 16,706,097,111
|
IssuesEvent
|
2021-06-09 10:07:48
|
yt-dlp/yt-dlp
|
https://api.github.com/repos/yt-dlp/yt-dlp
|
closed
|
[Broken] split-chapters is broken in 2021.06.08
|
bug high priority
|
Just updated yt-dlp to 2021.06.08 and found that split-chapters is broken. I confirm that it is working with 2021.06.01
With 2021.06.08 I get "ERROR: %d format: a number is required, not str"
Debug output:
$ yt-dlp -v --split-chapters RGOj5yH7evk
[debug] Command-line config: ['-v', '--split-chapters', 'RGOj5yH7evk']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] yt-dlp version 2021.06.08 (zip)
[debug] Python version 3.9.5 (CPython 64bit) - Linux-5.12.9-300.fc34.x86_64-x86_64-with-glibc2.33
[debug] exe versions: ffmpeg 4.4, ffprobe 4.4
[debug] Proxy map: {}
[debug] [youtube] Extracting URL: RGOj5yH7evk
[youtube] RGOj5yH7evk: Downloading webpage
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[info] RGOj5yH7evk: Downloading 1 format(s): 137+140
[youtube] RGOj5yH7evk: Downloading thumbnail ...
[youtube] RGOj5yH7evk: Writing thumbnail to: Git and GitHub for Beginners - Crash Course.webp
[download] Git and GitHub for Beginners - Crash Course.mp4 has already been downloaded
[ThumbnailsConvertor] Converting thumbnail "Git and GitHub for Beginners - Crash Course.webp" to png
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -f image2 -pattern_type none -i 'file:Git and GitHub for Beginners - Crash Course.webp' 'file:Git and GitHub for Beginners - Crash Course.png'
[debug] ffprobe command line: ffprobe -hide_banner -show_format -show_streams -print_format json 'file:Git and GitHub for Beginners - Crash Course.mp4'
[EmbedThumbnail] ffmpeg: Adding thumbnail to "Git and GitHub for Beginners - Crash Course.mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Git and GitHub for Beginners - Crash Course.mp4' -i 'file:Git and GitHub for Beginners - Crash Course.png' -c copy -map 0 -dn -map 1 -map -0:2 -disposition:2 attached_pic 'file:Git and GitHub for Beginners - Crash Course.temp.mp4'
Deleting original file Git and GitHub for Beginners - Crash Course.webp (pass -k to keep)
Deleting original file Git and GitHub for Beginners - Crash Course.png (pass -k to keep)
[SplitChapters] Splitting video by chapters; 23 chapters found
ERROR: %d format: a number is required, not str
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1128, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1160, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1197, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2166, in process_video_result
self.process_info(new_info)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2716, in process_info
info_dict = self.post_process(dl_filename, info_dict, files_to_move)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2856, in post_process
info = self.run_pp(pp, info)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2805, in run_pp
files_to_delete, infodict = pp.run(infodict)
File "/usr/local/bin/yt-dlp/yt_dlp/postprocessor/ffmpeg.py", line 816, in run
destination, opts = self._ffmpeg_args_for_chapter(idx + 1, chapter, info)
File "/usr/local/bin/yt-dlp/yt_dlp/postprocessor/ffmpeg.py", line 797, in _ffmpeg_args_for_chapter
destination = self._prepare_filename(number, chapter, info)
File "/usr/local/bin/yt-dlp/yt_dlp/postprocessor/ffmpeg.py", line 794, in _prepare_filename
return self._downloader.prepare_filename(info, 'chapter')
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 990, in prepare_filename
filename = self._prepare_filename(info_dict, dir_type or 'default')
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 965, in _prepare_filename
filename = expand_path(outtmpl).replace(sep, '') % template_dict
TypeError: %d format: a number is required, not str
|
1.0
|
[Broken] split-chapters is broken in 2021.06.08 - Just updated yt-dlp to 2021.06.08 and found that split-chapters is broken. I confirm that it is working with 2021.06.01
With 2021.06.08 I get "ERROR: %d format: a number is required, not str"
Debug output:
$ yt-dlp -v --split-chapters RGOj5yH7evk
[debug] Command-line config: ['-v', '--split-chapters', 'RGOj5yH7evk']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] yt-dlp version 2021.06.08 (zip)
[debug] Python version 3.9.5 (CPython 64bit) - Linux-5.12.9-300.fc34.x86_64-x86_64-with-glibc2.33
[debug] exe versions: ffmpeg 4.4, ffprobe 4.4
[debug] Proxy map: {}
[debug] [youtube] Extracting URL: RGOj5yH7evk
[youtube] RGOj5yH7evk: Downloading webpage
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[info] RGOj5yH7evk: Downloading 1 format(s): 137+140
[youtube] RGOj5yH7evk: Downloading thumbnail ...
[youtube] RGOj5yH7evk: Writing thumbnail to: Git and GitHub for Beginners - Crash Course.webp
[download] Git and GitHub for Beginners - Crash Course.mp4 has already been downloaded
[ThumbnailsConvertor] Converting thumbnail "Git and GitHub for Beginners - Crash Course.webp" to png
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -f image2 -pattern_type none -i 'file:Git and GitHub for Beginners - Crash Course.webp' 'file:Git and GitHub for Beginners - Crash Course.png'
[debug] ffprobe command line: ffprobe -hide_banner -show_format -show_streams -print_format json 'file:Git and GitHub for Beginners - Crash Course.mp4'
[EmbedThumbnail] ffmpeg: Adding thumbnail to "Git and GitHub for Beginners - Crash Course.mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Git and GitHub for Beginners - Crash Course.mp4' -i 'file:Git and GitHub for Beginners - Crash Course.png' -c copy -map 0 -dn -map 1 -map -0:2 -disposition:2 attached_pic 'file:Git and GitHub for Beginners - Crash Course.temp.mp4'
Deleting original file Git and GitHub for Beginners - Crash Course.webp (pass -k to keep)
Deleting original file Git and GitHub for Beginners - Crash Course.png (pass -k to keep)
[SplitChapters] Splitting video by chapters; 23 chapters found
ERROR: %d format: a number is required, not str
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1128, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1160, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1197, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2166, in process_video_result
self.process_info(new_info)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2716, in process_info
info_dict = self.post_process(dl_filename, info_dict, files_to_move)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2856, in post_process
info = self.run_pp(pp, info)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2805, in run_pp
files_to_delete, infodict = pp.run(infodict)
File "/usr/local/bin/yt-dlp/yt_dlp/postprocessor/ffmpeg.py", line 816, in run
destination, opts = self._ffmpeg_args_for_chapter(idx + 1, chapter, info)
File "/usr/local/bin/yt-dlp/yt_dlp/postprocessor/ffmpeg.py", line 797, in _ffmpeg_args_for_chapter
destination = self._prepare_filename(number, chapter, info)
File "/usr/local/bin/yt-dlp/yt_dlp/postprocessor/ffmpeg.py", line 794, in _prepare_filename
return self._downloader.prepare_filename(info, 'chapter')
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 990, in prepare_filename
filename = self._prepare_filename(info_dict, dir_type or 'default')
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 965, in _prepare_filename
filename = expand_path(outtmpl).replace(sep, '') % template_dict
TypeError: %d format: a number is required, not str
|
priority
|
split chapters is broken in just updated yt dlp to and found that split chapters is broken i confirm that it is working with with i get error d format a number is required not str debug output yt dlp v split chapters command line config encodings locale utf fs utf out utf pref utf yt dlp version zip python version cpython linux with exe versions ffmpeg ffprobe proxy map extracting url downloading webpage formats sorted by hasvid ie pref lang quality res fps vcodec acodec filesize fs approx tbr vbr abr asr proto vext aext hasaud source id downloading format s downloading thumbnail writing thumbnail to git and github for beginners crash course webp git and github for beginners crash course has already been downloaded converting thumbnail git and github for beginners crash course webp to png ffmpeg command line ffmpeg y loglevel repeat info f pattern type none i file git and github for beginners crash course webp file git and github for beginners crash course png ffprobe command line ffprobe hide banner show format show streams print format json file git and github for beginners crash course ffmpeg adding thumbnail to git and github for beginners crash course ffmpeg command line ffmpeg y loglevel repeat info i file git and github for beginners crash course i file git and github for beginners crash course png c copy map dn map map disposition attached pic file git and github for beginners crash course temp deleting original file git and github for beginners crash course webp pass k to keep deleting original file git and github for beginners crash course png pass k to keep splitting video by chapters chapters found error d format a number is required not str traceback most recent call last file usr local bin yt dlp yt dlp youtubedl py line in wrapper return func self args kwargs file usr local bin yt dlp yt dlp youtubedl py line in extract info return self process ie result ie result download extra info file usr local bin yt dlp yt dlp youtubedl py line in process ie 
result ie result self process video result ie result download download file usr local bin yt dlp yt dlp youtubedl py line in process video result self process info new info file usr local bin yt dlp yt dlp youtubedl py line in process info info dict self post process dl filename info dict files to move file usr local bin yt dlp yt dlp youtubedl py line in post process info self run pp pp info file usr local bin yt dlp yt dlp youtubedl py line in run pp files to delete infodict pp run infodict file usr local bin yt dlp yt dlp postprocessor ffmpeg py line in run destination opts self ffmpeg args for chapter idx chapter info file usr local bin yt dlp yt dlp postprocessor ffmpeg py line in ffmpeg args for chapter destination self prepare filename number chapter info file usr local bin yt dlp yt dlp postprocessor ffmpeg py line in prepare filename return self downloader prepare filename info chapter file usr local bin yt dlp yt dlp youtubedl py line in prepare filename filename self prepare filename info dict dir type or default file usr local bin yt dlp yt dlp youtubedl py line in prepare filename filename expand path outtmpl replace sep template dict typeerror d format a number is required not str
| 1
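The `TypeError` at the bottom of the traceback in the record above is Python's old-style `%`-formatting rejecting a `str` where `%d` expects a number. A minimal reproduction, plus the obvious coercion fix (the template and function name here are illustrative, not yt-dlp's real output-template code):

```python
def render_chapter_name(template: str, chapter_number) -> str:
    # Coerce to int before %-formatting so '%d' accepts the value even
    # when it arrives as a string (the failure mode in the report above).
    return template % {"chapter_number": int(chapter_number)}


# '%d' applied to a str reproduces the reported error class:
try:
    "%(chapter_number)03d" % {"chapter_number": "3"}
    failed = False
except TypeError:
    failed = True

fixed = render_chapter_name("chapter %(chapter_number)03d", "3")
```

The same class of bug appears whenever a field that the output template formats with `%d` is populated from string-valued metadata; coercing (or validating) the value before formatting sidesteps it.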
|
395,527
| 11,687,873,727
|
IssuesEvent
|
2020-03-05 13:37:14
|
opensourceai/vue-front-end
|
https://api.github.com/repos/opensourceai/vue-front-end
|
opened
|
Design the login and registration page UI
|
priority:high ui
|
Requirements:
1. Aim for a minimal design; `知乎` (Zhihu) can be used as a reference

2. Sign-in form
- Username
- Password
- Captcha
3. Registration form
- Username
- Email
- Password
- Captcha
4. Use your own judgment for the rest
|
1.0
|
Design the login and registration page UI - Requirements:
1. Aim for a minimal design; `知乎` (Zhihu) can be used as a reference

2. Sign-in form
- Username
- Password
- Captcha
3. Registration form
- Username
- Email
- Password
- Captcha
4. Use your own judgment for the rest
|
priority
|
design the login and registration page ui requirements aim for a minimal design zhihu can be used as a reference sign in form username password captcha registration form username email password captcha use your own judgment for the rest
| 1
|
134,818
| 5,237,774,158
|
IssuesEvent
|
2017-01-31 01:07:33
|
ankidroid/Anki-Android
|
https://api.github.com/repos/ankidroid/Anki-Android
|
closed
|
Blackberry devices are unable to sync
|
accepted Priority-High waitingforfeedback
|
AnkiWeb's SSL certificate was updated on the 7th, and it appears to have broken syncing on Blackberry devices. Users are able to access ankiweb.net in their device's browser, but connections within AnkiDroid time out. There don't appear to be any errors in the server logs, so I'm not sure whether any connections are being attempted. Might be worth turning off certificate validation on these devices to see if it helps?
Related reports:
https://github.com/ankidroid/Anki-Android/issues/4533
https://anki.tenderapp.com/discussions/ankiweb/1919-a-network-error-has-occurred
|
1.0
|
Blackberry devices are unable to sync - AnkiWeb's SSL certificate was updated on the 7th, and it appears to have broken syncing on Blackberry devices. Users are able to access ankiweb.net in their device's browser, but connections within AnkiDroid time out. There don't appear to be any errors in the server logs, so I'm not sure whether any connections are being attempted. Might be worth turning off certificate validation on these devices to see if it helps?
Related reports:
https://github.com/ankidroid/Anki-Android/issues/4533
https://anki.tenderapp.com/discussions/ankiweb/1919-a-network-error-has-occurred
|
priority
|
blackberry devices are unable to sync ankiweb s ssl certificate was updated on the and it appears to have broken syncing on blackberry devices users are able to access ankiweb net in their device s browser but connections within ankidroid time out there don t appear to be any errors in the server logs so i m not sure whether any connections are being attempted might be worth turning off certificate validation on these devices to see if it helps related reports
| 1
|
212,894
| 7,243,791,520
|
IssuesEvent
|
2018-02-14 13:05:39
|
CodeGra-de/CodeGra.de
|
https://api.github.com/repos/CodeGra-de/CodeGra.de
|
opened
|
LTI launch sometimes gets stuck in infinite loop
|
LTI bug frontend priority-0-high
|
It seems that this is caused by localforage not finding any storage adapter.
|
1.0
|
LTI launch sometimes gets stuck in infinite loop - It seems that this is caused by localforage not finding any storage adapter.
|
priority
|
lti launch sometimes gets stuck in infinite loop it seems that this is caused by localforage not finding any storage adapter
| 1
|
554,613
| 16,434,758,048
|
IssuesEvent
|
2021-05-20 07:59:47
|
southppp22/Flower-Shop-Seorimhwa
|
https://api.github.com/repos/southppp22/Flower-Shop-Seorimhwa
|
closed
|
[Client] Image Atom Component development
|
E:3.0 Priority: High Status: To Do Type: Feature/Function
|
### ISSUE
- Type: `feature`
- Detail: Image Atom Component development
-
### TODO
1. [ ] Apply style-component and storybook
2. [ ] Define the props interface
### Estimated time
### `3h`
|
1.0
|
[Client] Image Atom Component development - ### ISSUE
- Type: `feature`
- Detail: Image Atom Component development
-
### TODO
1. [ ] Apply style-component and storybook
2. [ ] Define the props interface
### Estimated time
### `3h`
|
priority
|
image atom component development issue type feature detail image atom component development todo apply style component and storybook define the props interface estimated time
| 1
|
219,801
| 7,346,038,469
|
IssuesEvent
|
2018-03-07 19:22:16
|
AyuntamientoMadrid/consul
|
https://api.github.com/repos/AyuntamientoMadrid/consul
|
closed
|
Proposal document URL returns 404
|
Bug High priority
|
[This proposal's documents](https://decide.madrid.es/proposals/19914-incineradora-de-valdemingomez-no) URLs return a 404 code. Looks like somehow the internal link is broken or missing, but the `Document` object is correctly stored in ddbb with the right ids and references.
```
ActionController::RoutingError: No route matches [GET] "/system/documents/attachments/000/000/060/original/4a814ef7aecebe8b5b09e4cc0074b6a14b57a442.pdf"
```
More information about this occurrence in Rollbar: https://rollbar.com/consul/Participacion/items/974/?item_page=18&#traceback
|
1.0
|
Proposal document URL returns 404 - [This proposal's documents](https://decide.madrid.es/proposals/19914-incineradora-de-valdemingomez-no) URLs return a 404 code. Looks like somehow the internal link is broken or missing, but the `Document` object is correctly stored in ddbb with the right ids and references.
```
ActionController::RoutingError: No route matches [GET] "/system/documents/attachments/000/000/060/original/4a814ef7aecebe8b5b09e4cc0074b6a14b57a442.pdf"
```
More information about this occurrence in Rollbar: https://rollbar.com/consul/Participacion/items/974/?item_page=18&#traceback
|
priority
|
proposal document url returns urls returns a code looks like somehow the internal link is broken or missing but the document object is correctly stored in ddbb with the right ids and references actioncontroller routingerror no route matches system documents attachments original pdf more information about this occurrence in rollbar
| 1
|
288,633
| 8,849,677,329
|
IssuesEvent
|
2019-01-08 10:55:34
|
DroidKaigi/conference-app-2019
|
https://api.github.com/repos/DroidKaigi/conference-app-2019
|
closed
|
Apply SearchView design
|
assigned help wanted high priority welcome contribute
|
## Overview (Required)
- Even if it is not as designed, I want to delete the image here.
design

current

|
1.0
|
Apply SearchView design - ## Overview (Required)
- Even if it is not as designed, I want to delete the image here.
design

current

|
priority
|
apply searchview design overview required even if it is not as designed i want to delete the image here design current
| 1
|
538,166
| 15,763,990,018
|
IssuesEvent
|
2021-03-31 12:48:21
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
[0.9.3 staging-1966] Can't link storages in store when title owns property.
|
Category: Gameplay Priority: High Regression Squad: Mountain Goat Type: Bug
|
Step to reproduce:
- place registrar and make title:

- claim property and transfer owner to this title:

- place chest, store and farmers table here:

- add acorn and 10 beet to chest:

- open farmers table and limk storage to it:

- you can craft beet seed.
- open store and link chest here:

- add beet and acorn to sale list:

It should not be sold out, because I should link storage as title owner, not like just a player.
the first fix of this issue #18530
|
1.0
|
[0.9.3 staging-1966] Can't link storages in store when title owns property. - Step to reproduce:
- place registrar and make title:

- claim property and transfer owner to this title:

- place chest, store and farmers table here:

- add acorn and 10 beet to chest:

- open farmers table and link storage to it:

- you can craft beet seed.
- open store and link chest here:

- add beet and acorn to sale list:

It should not be sold out, because I should link storage as title owner, not like just a player.
the first fix of this issue #18530
|
priority
|
can t link storages in store when title owns property step to reproduce place registrar and make title claim property and transfer owner to this title place chest store and farmers table here add acorn and beet to chest open farmers table and link storage to it you can craft beet seed open store and link chest here add beet and acorn to sale list it should not be sold out because i should link storage as title owner not like just a player the first fix of this issue
| 1
|
65,764
| 3,240,278,121
|
IssuesEvent
|
2015-10-15 02:12:58
|
cs2103aug2015-w09-3j/main
|
https://api.github.com/repos/cs2103aug2015-w09-3j/main
|
closed
|
A user can specify a specific folder on the data storage location
|
priority.high type.story
|
... so that the user can know where the file is saved
|
1.0
|
A user can specify a specific folder on the data storage location - ... so that the user can know where the file is saved
|
priority
|
a user can specify a specific folder on the data storage location so that the user can know where the file is saved
| 1
|
221,992
| 7,404,334,374
|
IssuesEvent
|
2018-03-20 04:02:59
|
Earthii/Simple-Camera-SOEN-390
|
https://api.github.com/repos/Earthii/Simple-Camera-SOEN-390
|
closed
|
As a user, I should be able to swipe to trigger video mode
|
high priority user story
|
As a user, if I swipe left/right, I should be able to go from picture mode to video mode, etc.
[SP - 8]
[Priority - High]
[Risk - Medium]
## Task
- [x] Create the On swipe listener class, listener to the preview, Acceptance Test, Mockup [Eric 5 SP]
- [x] Add the Features to the appropriate swipes [Steven 3 SP]
|
1.0
|
As a user, I should be able to swipe to trigger video mode - As a user, if I swipe left/right, I should be able to go from picture mode to video mode, etc.
[SP - 8]
[Priority - High]
[Risk - Medium]
## Task
- [x] Create the On swipe listener class, listener to the preview, Acceptance Test, Mockup [Eric 5 SP]
- [x] Add the Features to the appropriate swipes [Steven 3 SP]
|
priority
|
as a user i should be able to swipe to trigger video mode as a user if i swipe left right i should be able to go from picture mode to video mode etc task create the on swipe listener class listener to the preview acceptance test mockup add the features to the appropriate swipes
| 1
|
81,243
| 3,588,089,670
|
IssuesEvent
|
2016-01-30 19:45:54
|
oakesville/mythling
|
https://api.github.com/repos/oakesville/mythling
|
opened
|
Null mediaList Can Result in Crash on Resume
|
priority:high type:bug
|
A stack trace like below is occasionally reported. User messages indicate this crash happens on resume. Currently avoided by creating an empty mediaList, but this may be a worse experience than allowing the app to crash.
java.lang.RuntimeException: Unable to resume activity {com.oakesville.mythling/com.oakesville.mythling.MediaListActivity}: java.lang.NullPointerException: Attempt to invoke virtual method 'java.util.List com.oakesville.mythling.media.MediaList.getListables(java.lang.String)' on a null object reference
at android.app.ActivityThread.performResumeActivity(ActivityThread.java:2986)
at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:3017)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2392)
at android.app.ActivityThread.access$800(ActivityThread.java:151)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1303)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:135)
at android.app.ActivityThread.main(ActivityThread.java:5254)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698)
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'java.util.List com.oakesville.mythling.media.MediaList.getListables(java.lang.String)' on a null object reference
at com.oakesville.mythling.MediaActivity.getListables(MediaActivity.java:131)
at com.oakesville.mythling.ItemDetailFragment.onResume(ItemDetailFragment.java:124)
at android.app.Fragment.performResume(Fragment.java:2096)
at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:928)
at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:1067)
at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:1049)
at android.app.FragmentManagerImpl.dispatchResume(FragmentManager.java:1879)
at android.app.Activity.performResume(Activity.java:6086)
at android.app.ActivityThread.performResumeActivity(ActivityThread.java:2975)
... 11 more
|
1.0
|
Null mediaList Can Result in Crash on Resume - A stack trace like below is occasionally reported. User messages indicate this crash happens on resume. Currently avoided by creating an empty mediaList, but this may be a worse experience than allowing the app to crash.
java.lang.RuntimeException: Unable to resume activity {com.oakesville.mythling/com.oakesville.mythling.MediaListActivity}: java.lang.NullPointerException: Attempt to invoke virtual method 'java.util.List com.oakesville.mythling.media.MediaList.getListables(java.lang.String)' on a null object reference
at android.app.ActivityThread.performResumeActivity(ActivityThread.java:2986)
at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:3017)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2392)
at android.app.ActivityThread.access$800(ActivityThread.java:151)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1303)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:135)
at android.app.ActivityThread.main(ActivityThread.java:5254)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698)
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'java.util.List com.oakesville.mythling.media.MediaList.getListables(java.lang.String)' on a null object reference
at com.oakesville.mythling.MediaActivity.getListables(MediaActivity.java:131)
at com.oakesville.mythling.ItemDetailFragment.onResume(ItemDetailFragment.java:124)
at android.app.Fragment.performResume(Fragment.java:2096)
at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:928)
at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:1067)
at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:1049)
at android.app.FragmentManagerImpl.dispatchResume(FragmentManager.java:1879)
at android.app.Activity.performResume(Activity.java:6086)
at android.app.ActivityThread.performResumeActivity(ActivityThread.java:2975)
... 11 more
|
priority
|
null medialist can result in crash on resume a stack trace like below is occasionally reported user messages indicate this crash happens on resume currently avoided by creating an empty medialist but this may be a worse experience than allowing the app to crash java lang runtimeexception unable to resume activity com oakesville mythling com oakesville mythling medialistactivity java lang nullpointerexception attempt to invoke virtual method java util list com oakesville mythling media medialist getlistables java lang string on a null object reference at android app activitythread performresumeactivity activitythread java at android app activitythread handleresumeactivity activitythread java at android app activitythread handlelaunchactivity activitythread java at android app activitythread access activitythread java at android app activitythread h handlemessage activitythread java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at java lang reflect method invoke method java at com android internal os zygoteinit methodandargscaller run zygoteinit java at com android internal os zygoteinit main zygoteinit java caused by java lang nullpointerexception attempt to invoke virtual method java util list com oakesville mythling media medialist getlistables java lang string on a null object reference at com oakesville mythling mediaactivity getlistables mediaactivity java at com oakesville mythling itemdetailfragment onresume itemdetailfragment java at android app fragment performresume fragment java at android app fragmentmanagerimpl movetostate fragmentmanager java at android app fragmentmanagerimpl movetostate fragmentmanager java at android app fragmentmanagerimpl movetostate fragmentmanager java at android app fragmentmanagerimpl dispatchresume fragmentmanager java at android app activity performresume activity java at android app activitythread performresumeactivity activitythread java more
| 1
|
36,894
| 2,813,420,030
|
IssuesEvent
|
2015-05-18 14:42:13
|
TresysTechnology/clip
|
https://api.github.com/repos/TresysTechnology/clip
|
closed
|
TAR_DEPS in clip-selinux-policy/Makefile needs more granular file list
|
bug High Priority short-task
|
We should only detect changes in *.te, *.fc, *.if, policy.conf and other files that I am probably missing.
|
1.0
|
TAR_DEPS in clip-selinux-policy/Makefile needs more granular file list - We should only detect changes in *.te, *.fc, *.if, policy.conf and other files that I am probably missing.
|
priority
|
tar deps in clip selinux policy makefile needs more granular file list we should only detect changes in te fc if policy conf and other files that i am probably missing
| 1
|
345,370
| 10,361,551,937
|
IssuesEvent
|
2019-09-06 10:17:58
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
idp role mapping is not working
|
Affected/5.8.0 Priority/High Type/Bug WUM
|
When tried to add the IDP role mapping in wso2 identity server, it's not work as expected. As check on this further in the code base, following line has to added under jQuery('#advancedClaimMappingAddLink').click() in idp_mgt_edit.js file. But in the fix for 5.12.387, it has been mistakenly added under jQuery('#roleAddLink').click() function in the same file.
Therefore, it is not possible to add IdP role mappings now due to undefined variable selectedIDPClaimName under jQuery('#roleAddLink').click().
selectedIDPClaimName = htmlEncode(selectedIDPClaimName);
|
1.0
|
idp role mapping is not working - When tried to add the IDP role mapping in wso2 identity server, it's not work as expected. As check on this further in the code base, following line has to added under jQuery('#advancedClaimMappingAddLink').click() in idp_mgt_edit.js file. But in the fix for 5.12.387, it has been mistakenly added under jQuery('#roleAddLink').click() function in the same file.
Therefore, it is not possible to add IdP role mappings now due to undefined variable selectedIDPClaimName under jQuery('#roleAddLink').click().
selectedIDPClaimName = htmlEncode(selectedIDPClaimName);
|
priority
|
idp role mapping is not working when tried to add the idp role mapping in identity server it s not work as expected as check on this further in the code base following line has to added under jquery advancedclaimmappingaddlink click in idp mgt edit js file but in the fix for it has been mistakenly added under jquery roleaddlink click function in the same file therefore it is not possible to add idp role mappings now due to undefined variable selectedidpclaimname under jquery roleaddlink click selectedidpclaimname htmlencode selectedidpclaimname
| 1
|
488,863
| 14,087,437,214
|
IssuesEvent
|
2020-11-05 06:25:05
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
gmail.com - desktop site instead of mobile site
|
browser-chrome ml-needsdiagnosis-false ml-probability-high priority-normal
|
<!-- @browser: Chrome 80.0.3987 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/61157 -->
**URL**: https://gmail.com
**Browser / Version**: Chrome 80.0.3987
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
in valid username
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
gmail.com - desktop site instead of mobile site - <!-- @browser: Chrome 80.0.3987 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/61157 -->
**URL**: https://gmail.com
**Browser / Version**: Chrome 80.0.3987
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
in valid username
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
priority
|
gmail com desktop site instead of mobile site url browser version chrome operating system windows tested another browser yes chrome problem type desktop site instead of mobile site description desktop site instead of mobile site steps to reproduce in valid username browser configuration none from with ❤️
| 1
|
32,544
| 2,755,767,207
|
IssuesEvent
|
2015-04-26 22:50:42
|
paceuniversity/cs3892015team1
|
https://api.github.com/repos/paceuniversity/cs3892015team1
|
closed
|
User Story 5
|
Difficulty 7 Functional High Priority Product Backlog User Story
|
As a parent I want to edit flashcards in order to Change the word associated with a picture. (This is to better clarify User Story 4 and rid User Story 1 of amalgamation issues)
|
1.0
|
User Story 5 - As a parent I want to edit flashcards in order to Change the word associated with a picture. (This is to better clarify User Story 4 and rid User Story 1 of amalgamation issues)
|
priority
|
user story as a parent i want to edit flashcards in order to change the word associated with a picture this is to better clarify user story and rid user story of amalgamation issues
| 1
|
27,847
| 2,696,615,308
|
IssuesEvent
|
2015-04-02 15:06:06
|
IQSS/dataverse
|
https://api.github.com/repos/IQSS/dataverse
|
opened
|
Once you are in data "explore" no options to exit and get back into the dataset file page
|
Component: UX & Upgrade Priority: High Status: Dev Type: Feature
|
Once you hit "explore" you are taken to "Two ravens" but there are no options to get back to the study page, and two ravens doesn't open in a new window
|
1.0
|
Once you are in data "explore" no options to exit and get back into the dataset file page - Once you hit "explore" you are taken to "Two ravens" but there are no options to get back to the study page, and two ravens doesn't open in a new window
|
priority
|
once you are in data explore no options to exit and get back into the dataset file page once you hit explore you are taken to two ravens but there are no options to get back to the study page and two ravens doesn t open in a new window
| 1
|
754,534
| 26,392,491,483
|
IssuesEvent
|
2023-01-12 16:40:19
|
nexB/scancode.io
|
https://api.github.com/repos/nexB/scancode.io
|
closed
|
Improve the value of the "documentDescribes" field in a generated SPDX 2.3 SBOM
|
bug high priority reporting
|
At the recent SPDX Docfest, it was pointed out that the value we provide in the SPDX 2.3 SBOM for the "documentDescribes" field is a self-reference to the SPDX Document, rather than the subject of the SBOM: `SPDXRef-DOCUMENT` .
It would be better to provide a filename (possibly even a PURL when available) to identify (describe?) the overall subject of the SBOM. See attached.
<img width="500" alt="Screen Shot 2022-11-30 at 13 02 45" src="https://user-images.githubusercontent.com/4991620/204908054-d9b2bb4a-1250-40d3-92c2-15e281d0e5fc.png">
|
1.0
|
Improve the value of the "documentDescribes" field in a generated SPDX 2.3 SBOM - At the recent SPDX Docfest, it was pointed out that the value we provide in the SPDX 2.3 SBOM for the "documentDescribes" field is a self-reference to the SPDX Document, rather than the subject of the SBOM: `SPDXRef-DOCUMENT` .
It would be better to provide a filename (possibly even a PURL when available) to identify (describe?) the overall subject of the SBOM. See attached.
<img width="500" alt="Screen Shot 2022-11-30 at 13 02 45" src="https://user-images.githubusercontent.com/4991620/204908054-d9b2bb4a-1250-40d3-92c2-15e281d0e5fc.png">
|
priority
|
improve the value of the documentdescribes field in a generated spdx sbom at the recent spdx docfest it was pointed out that the value we provide in the spdx sbom for the documentdescribes field is a self reference to the spdx document rather than the subject of the sbom spdxref document it would be better to provide a filename possibly even a purl when available to identify describe the overall subject of the sbom see attached img width alt screen shot at src
| 1
|
158,901
| 6,036,902,616
|
IssuesEvent
|
2017-06-09 17:17:42
|
forcedotcom/scmt-server
|
https://api.github.com/repos/forcedotcom/scmt-server
|
opened
|
Implement variable exponential back off and jitter in the backoff
|
enhancement priority - high
|
(so if there's multiple migrations running and they all hiccup from the same perf issue they don't all slam us at the same time?)
|
1.0
|
Implement variable exponential back off and jitter in the backoff - (so if there's multiple migrations running and they all hiccup from the same perf issue they don't all slam us at the same time?)
|
priority
|
implement variable exponential back off and jitter in the backoff so if there s multiple migrations running and they all hiccup from the same perf issue they don t all slam us at the same time
| 1
|
292,427
| 8,957,799,320
|
IssuesEvent
|
2019-01-27 08:12:21
|
cstate/cstate
|
https://api.github.com/repos/cstate/cstate
|
opened
|
HEX codes starting with 00 are unusable
|
bug help wanted priority: high
|
Golang/Hugo interprets a HEX code starting with 0 as octal
[Reference of quirk](https://scripter.co/golang-quirk-number-strings-starting-with-0-are-octals/)
Contributors appreciated
|
1.0
|
HEX codes starting with 00 are unusable - Golang/Hugo interprets a HEX code starting with 0 as octal
[Reference of quirk](https://scripter.co/golang-quirk-number-strings-starting-with-0-are-octals/)
Contributors appreciated
|
priority
|
hex codes starting with are unusable golang hugo interprets a hex code starting with as octal contributors appreciated
| 1
|
307,636
| 9,419,642,480
|
IssuesEvent
|
2019-04-10 22:40:59
|
cb-geo/mpm
|
https://api.github.com/repos/cb-geo/mpm
|
closed
|
Create node / particle sets
|
Priority: High Status: Pending Type: Core feature
|
In the mesh class, we should be able to create a set of nodes and particles on which loading or restraints can be applied. A code snipped on how to create a node set:
```
//! Assign node sets
//! \brief Assign loads and restrain node sets
//! \return create_status Return success or failure of node set creation
template <unsigned Tdim>
bool mpm::Mesh<Tdim>::assign_node_sets() {
auto node_set = json_mesh_["node_sets"];
bool set_status = true;
if (node_set.size() == 0) set_status = false;
// Iterate through JSON node_sets object
for (auto it = node_set.begin(); it != node_set.end(); ++it) {
auto nodes =
this->read_node_id(this->json_mesh_["cwd"].template get<std::string>() +
it.value().template get<std::string>());
// Remove duplicates
std::sort(nodes.begin(), nodes.end());
nodes.erase(std::unique(nodes.begin(), nodes.end()), nodes.end());
if (!this->create_nodes_set(it.key(), nodes)) set_status = false;
}
return set_status;
}
```
to create a node_set:
```
//! Create node sets
//! \brief Create node sets to assign forces / restraings
//! \param[in] key Key to store vector of nodes in a map
//! \param[in] nodes List of node indices
//! \return create_status Return success or failure of node set creation
template <unsigned Tdim>
bool lem::Mesh<Tdim>::create_nodes_set(const std::string& key,
const std::vector<unsigned>& nodes) {
bool node_set_create = false;
std::vector<std::shared_ptr<lem::SolidNode<Tdim>>> set_solid_nodes;
try {
// Parse node index files
for (const auto& nodeid : nodes) {
// Search if node id is in vector of solid nodes
auto node_itr = solid_nodes_.at(nodeid);
// If a solid node pointer is found, push to a new set of nodes
set_solid_nodes.push_back(node_itr);
}
} catch (std::exception& except) {
console_->error("Create node set: {}", except.what());
}
if (set_solid_nodes.size() > 0) {
node_set_.emplace(key, set_solid_nodes);
node_set_create = true;
console_->info("Node set created with: {}", key);
} else
node_set_create = false;
return node_set_create;
}
```
In addition we need to create an iterate function to iterate over these node and particle sets:
```
//! Iterate over node set using a given key
//! \param[in] key Key of node_set
//! \tparam Toper operator on each node
template <unsigned Tdim>
template <typename Toper>
void mpm::Mesh<Tdim>::iterate_over_node_set(const std::string& key,
Toper oper) {
tbb::parallel_for_each(node_set_.at(key).begin(), node_set_.at(key).end(),
oper);
}
```
Node set is implemented as:
```
//! Map of nodes to string to create sets
std::map<std::string, std::vector<std::shared_ptr<lem::SolidNode<Tdim>>>>
node_set_;
```
|
1.0
|
Create node / particle sets - In the mesh class, we should be able to create a set of nodes and particles on which loading or restraints can be applied. A code snipped on how to create a node set:
```
//! Assign node sets
//! \brief Assign loads and restrain node sets
//! \return create_status Return success or failure of node set creation
template <unsigned Tdim>
bool mpm::Mesh<Tdim>::assign_node_sets() {
auto node_set = json_mesh_["node_sets"];
bool set_status = true;
if (node_set.size() == 0) set_status = false;
// Iterate through JSON node_sets object
for (auto it = node_set.begin(); it != node_set.end(); ++it) {
auto nodes =
this->read_node_id(this->json_mesh_["cwd"].template get<std::string>() +
it.value().template get<std::string>());
// Remove duplicates
std::sort(nodes.begin(), nodes.end());
nodes.erase(std::unique(nodes.begin(), nodes.end()), nodes.end());
if (!this->create_nodes_set(it.key(), nodes)) set_status = false;
}
return set_status;
}
```
to create a node_set:
```
//! Create node sets
//! \brief Create node sets to assign forces / restraings
//! \param[in] key Key to store vector of nodes in a map
//! \param[in] nodes List of node indices
//! \return create_status Return success or failure of node set creation
template <unsigned Tdim>
bool lem::Mesh<Tdim>::create_nodes_set(const std::string& key,
const std::vector<unsigned>& nodes) {
bool node_set_create = false;
std::vector<std::shared_ptr<lem::SolidNode<Tdim>>> set_solid_nodes;
try {
// Parse node index files
for (const auto& nodeid : nodes) {
// Search if node id is in vector of solid nodes
auto node_itr = solid_nodes_.at(nodeid);
// If a solid node pointer is found, push to a new set of nodes
set_solid_nodes.push_back(node_itr);
}
} catch (std::exception& except) {
console_->error("Create node set: {}", except.what());
}
if (set_solid_nodes.size() > 0) {
node_set_.emplace(key, set_solid_nodes);
node_set_create = true;
console_->info("Node set created with: {}", key);
} else
node_set_create = false;
return node_set_create;
}
```
In addition we need to create an iterate function to iterate over these node and particle sets:
```
//! Iterate over node set using a given key
//! \param[in] key Key of node_set
//! \tparam Toper operator on each node
template <unsigned Tdim>
template <typename Toper>
void mpm::Mesh<Tdim>::iterate_over_node_set(const std::string& key,
Toper oper) {
tbb::parallel_for_each(node_set_.at(key).begin(), node_set_.at(key).end(),
oper);
}
```
Node set is implemented as:
```
//! Map of nodes to string to create sets
std::map<std::string, std::vector<std::shared_ptr<lem::SolidNode<Tdim>>>>
node_set_;
```
|
priority
|
create node particle sets in the mesh class we should be able to create a set of nodes and particles on which loading or restraints can be applied a code snipped on how to create a node set assign node sets brief assign loads and restrain node sets return create status return success or failure of node set creation template bool mpm mesh assign node sets auto node set json mesh bool set status true if node set size set status false iterate through json node sets object for auto it node set begin it node set end it auto nodes this read node id this json mesh template get it value template get remove duplicates std sort nodes begin nodes end nodes erase std unique nodes begin nodes end nodes end if this create nodes set it key nodes set status false return set status to create a node set create node sets brief create node sets to assign forces restraings param key key to store vector of nodes in a map param nodes list of node indices return create status return success or failure of node set creation template bool lem mesh create nodes set const std string key const std vector nodes bool node set create false std vector set solid nodes try parse node index files for const auto nodeid nodes search if node id is in vector of solid nodes auto node itr solid nodes at nodeid if a solid node pointer is found push to a new set of nodes set solid nodes push back node itr catch std exception except console error create node set except what if set solid nodes size node set emplace key set solid nodes node set create true console info node set created with key else node set create false return node set create in addition we need to create an iterate function to iterate over these node and particle sets iterate over node set using a given key param key key of node set tparam toper operator on each node template template void mpm mesh iterate over node set const std string key toper oper tbb parallel for each node set at key begin node set at key end oper node set is implemented as map of nodes to string to create sets std map node set
| 1
|
273,313
| 8,529,111,694
|
IssuesEvent
|
2018-11-03 08:02:19
|
CS2103-AY1819S1-F10-3/main
|
https://api.github.com/repos/CS2103-AY1819S1-F10-3/main
|
closed
|
[P.E. Dry Run] services' cost does not accept 2 d.p. values
|
priority.High
|
client#1 addservice s/ring c/2.99
- Executing above command results in command being ignored with no error messages told to user.
- Should accept numbers with 2 d.p. since the User Guide says it accepts SGD, which is a money format that accepts 2 d.p. Can either change the User Guide to specify only integers, or change implementation of cost.
Apologies for posting directly on your issue tracker. I was having some trouble during the Dry Run yesterday and was not able to submit all my found issues to the Dry Run issue tracker. Prof has instructed me to post the issues I found directly on your issue tracker instead.
|
1.0
|
[P.E. Dry Run] services' cost does not accept 2 d.p. values - client#1 addservice s/ring c/2.99
- Executing above command results in command being ignored with no error messages told to user.
- Should accept numbers with 2 d.p. since the User Guide says it accepts SGD, which is a money format that accepts 2 d.p. Can either change the User Guide to specify only integers, or change implementation of cost.
Apologies for posting directly on your issue tracker. I was having some trouble during the Dry Run yesterday and was not able to submit all my found issues to the Dry Run issue tracker. Prof has instructed me to post the issues I found directly on your issue tracker instead.
|
priority
|
services cost does not accept d p values client addservice s ring c executing above command results in command being ignored with no error messages told to user should accept numbers with d p since the user guide says it accepts sgd which is a money format that accepts d p can either change the user guide to specify only integers or change implementation of cost apologies for posting directly on your issue tracker i was having some trouble during the dry run yesterday and was not able to submit all my found issues to the dry run issue tracker prof has instructed me to post the issues i found directly on your issue tracker instead
| 1
|
335,160
| 10,149,655,104
|
IssuesEvent
|
2019-08-05 15:40:58
|
HUPO-PSI/mzTab
|
https://api.github.com/repos/HUPO-PSI/mzTab
|
closed
|
CV Terms for mzTab-m 2.0.0
|
Priority-High enhancement metabolomics-part mzTab-M 2.0.0
|
SEP, MS
[sample_processing](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#625-sample_processing1-n)
MS
[instrument_name](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#626-instrument1-n-name)
MS
[instrument_source](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#627-instrument1-n-source)
MS
[instrument_analyzer](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#628-instrument1-n-analyzer1-n)
MS
[instrument_detector](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#629-instrument1-n-detector)
MS
[software](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6210-software1-n)
MS
[quantification_method](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6217-quantification_method)
Any CV
[assay-custom](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6219-assay1-n-custom1-n)
MS or other CV?
[study_variable_function](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6225-study_variable_function1-n)
MS
[ms_run-format](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6229-ms_run1-n-format)
MS
[ms_run-id_format](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6230-ms_run1-n-id_format)
MS or other?
[ms_run-fragmentation_method](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6231-ms_run1-n-fragmentation_method1-n)
MS
[ms_run-hash_method](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6233-ms_run1-n-hash_method)
Any CV
[custom](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6234-custom1-n) -> arbitrary, these should not be validated
NEWT
[sample-species](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6235-sample1-n-species1-n)
BTO
[sample-tissue](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6236-sample1-n-tissue1-n)
CL
[sample-cell_type](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6237-sample1-n-cell_type1-n)
DOID
[sample-disease](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6238-sample1-n-disease1-n)
Any CV
[sample-custom](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6240-sample1-n-custom1-n) => custom should not be validated
MIRIAM or other CV?
[database](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6245-database1-n)
MS or chem-mod CV
[derivatization_agent](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6249-derivatization_agent1-n)
MS
[small_molecule-quantification_unit](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6250-small_molecule-quantification_unit)
MS
[small_molecule_feature-quantification_unit](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6251-small_molecule_feature-quantification_unit)
PRIDE or other CV
[small_molecule-identification_reliability](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6252-small_molecule-identification_reliability)
MS
[id_confidence_measure](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6253-id_confidence_measure1-n)
opt_ columns will not be part of the validation.
MS
[best_id_confidence_measure](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6314-best_id_confidence_measure)
MS
[identification_method](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6515-identification_method)
MS
[ms_level](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6516-ms_level)
MS
[id_confidence_measure](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6517-id_confidence_measure1-n)
|
1.0
|
CV Terms for mzTab-m 2.0.0 - SEP, MS
[sample_processing](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#625-sample_processing1-n)
MS
[instrument_name](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#626-instrument1-n-name)
MS
[instrument_source](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#627-instrument1-n-source)
MS
[instrument_analyzer](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#628-instrument1-n-analyzer1-n)
MS
[instrument_detector](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#629-instrument1-n-detector)
MS
[software](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6210-software1-n)
MS
[quantification_method](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6217-quantification_method)
Any CV
[assay-custom](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6219-assay1-n-custom1-n)
MS or other CV?
[study_variable_function](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6225-study_variable_function1-n)
MS
[ms_run-format](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6229-ms_run1-n-format)
MS
[ms_run-id_format](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6230-ms_run1-n-id_format)
MS or other?
[ms_run-fragmentation_method](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6231-ms_run1-n-fragmentation_method1-n)
MS
[ms_run-hash_method](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6233-ms_run1-n-hash_method)
Any CV
[custom](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6234-custom1-n) -> arbitrary, these should not be validated
NEWT
[sample-species](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6235-sample1-n-species1-n)
BTO
[sample-tissue](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6236-sample1-n-tissue1-n)
CL
[sample-cell_type](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6237-sample1-n-cell_type1-n)
DOID
[sample-disease](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6238-sample1-n-disease1-n)
Any CV
[sample-custom](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6240-sample1-n-custom1-n) => custom should not be validated
MIRIAM or other CV?
[database](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6245-database1-n)
MS or chem-mod CV
[derivatization_agent](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6249-derivatization_agent1-n)
MS
[small_molecule-quantification_unit](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6250-small_molecule-quantification_unit)
MS
[small_molecule_feature-quantification_unit](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6251-small_molecule_feature-quantification_unit)
PRIDE or other CV
[small_molecule-identification_reliability](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6252-small_molecule-identification_reliability)
MS
[id_confidence_measure](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6253-id_confidence_measure1-n)
opt_ columns will not be part of the validation.
MS
[best_id_confidence_measure](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6314-best_id_confidence_measure)
MS
[identification_method](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6515-identification_method)
MS
[ms_level](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6516-ms_level)
MS
[id_confidence_measure](https://github.com/HUPO-PSI/mzTab/blob/master/specification_document-developments/1_1-Metabolomics-Draft/mzTab_format_specification_1_1-M_draft.adoc#6517-id_confidence_measure1-n)
|
priority
|
cv terms for mztab m sep ms ms ms ms ms ms ms any cv ms or other cv ms ms ms or other ms any cv arbitrary these should not be validated newt bto cl doid any cv custom should not be validated miriam or other cv ms or chem mod cv ms ms pride or other cv ms opt columns will not be part of the validation ms ms ms ms
| 1
|
603,913
| 18,673,942,197
|
IssuesEvent
|
2021-10-31 08:08:34
|
xournalpp/xournalpp
|
https://api.github.com/repos/xournalpp/xournalpp
|
closed
|
Objects disappear when deselected or while working
|
bug priority::high rendering
|
Hi!
I experienced this issue while using LaTeX; I was many images deep when suddenly one or two started disappearing.
I don't know what caused it, but I think it has something to do with the selection mode, because the first object disappeared when I switched to "Select Object" mode

|
1.0
|
Objects disappear when deselected or while working - Hi!
I experienced this issue while using LaTeX; I was many images deep when suddenly one or two started disappearing.
I don't know what caused it, but I think it has something to do with the selection mode, because the first object disappeared when I switched to "Select Object" mode

|
priority
|
objects disappear when deselected or while working hi i experienced this issue while using latex i was a lot of images deep when suddendly one or two started disappearing i don t know what caused it but i think it has something to do with the selection mode because the first object disappeared when i switched do select object mode
| 1
|
717,706
| 24,688,072,460
|
IssuesEvent
|
2022-10-19 06:11:13
|
Rehachoudhary0/hotel_testing
|
https://api.github.com/repos/Rehachoudhary0/hotel_testing
|
closed
|
🐛 Bug Report: Cancel booking not showing in backoffice.
|
bug back-end High priority
|
### 👟 Reproduction steps
Cancel booking not showing in backoffice.
### 👍 Expected behavior
Cancel booking should be in back office.
### 👎 Actual Behavior
.
### 🎲 App version
Different version (specify in environment)
### 💻 Operating system
Something else
### 👀 Have you spent some time to check if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Code of Conduct?
- [X] I have read the [Code of Conduct](https://github.com/Rehachoudhary0/hotel_testing/blob/HEAD/CODE_OF_CONDUCT.md)
|
1.0
|
🐛 Bug Report: Cancel booking not showing in backoffice. - ### 👟 Reproduction steps
Cancel booking not showing in backoffice.
### 👍 Expected behavior
Cancel booking should be in back office.
### 👎 Actual Behavior
.
### 🎲 App version
Different version (specify in environment)
### 💻 Operating system
Something else
### 👀 Have you spent some time to check if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Code of Conduct?
- [X] I have read the [Code of Conduct](https://github.com/Rehachoudhary0/hotel_testing/blob/HEAD/CODE_OF_CONDUCT.md)
|
priority
|
🐛 bug report cancel booking not showing in backoffice 👟 reproduction steps cancel booking not showing in backoffice 👍 expected behavior cancel booking should be in back office 👎 actual behavior 🎲 app version different version specify in environment 💻 operating system something else 👀 have you spent some time to check if this issue has been raised before i checked and didn t find similar issue 🏢 have you read the code of conduct i have read the
| 1
|
554,101
| 16,389,156,628
|
IssuesEvent
|
2021-05-17 14:10:39
|
micronaut-projects/micronaut-core
|
https://api.github.com/repos/micronaut-projects/micronaut-core
|
closed
|
Regression 2.4.x 2.5.x Unable to deploy to Elastic Beanstalk
|
priority: high type: bug
|
# Steps to reproduce
`mn create-app example.micronaut.micronautguide --inplace`
## Health Endpoint
Add `micronaut-management` dependency:
```groovy
dependencies {
...
..
.
implementation("io.micronaut:micronaut-management")
}
```
## Port 5000 EC2 Env
Create a file to run the application in port 5000 when deployed to [Elastic Beanstalk](https://aws.amazon.com/elasticbeanstalk/)
`src/main/resources/application-ec2.yml`
```yaml
micronaut:
server:
port: 5000
```
## Deployment
Generate a FAT JAR:
```
./gradlew build
```
## Elastic Beanstalk application
Create a Java 11 application in [Elastic Beanstalk](https://aws.amazon.com/elasticbeanstalk/).
<img width="689" alt="Screenshot 2021-05-15 at 07 47 50" src="https://user-images.githubusercontent.com/864788/118349659-33723300-b552-11eb-90f1-6433af9a6684.png">
As application code upload FAT JAR `build/libs/micronautguide-0.1-all.jar`.
# Current behaviour
GET request to the health endpoint:
```
curl http://micronautguide.us-east-1.elasticbeanstalk.com/health
{"message":"WebSocket Not Found","_links":{"self":{"href":"/health","templated":false}}}%
```
# Expected output
If I change the Micronaut version in `gradle.properties` to 2.4.4, generate and deploy the FAT JAR to [Elastic Beanstalk](https://aws.amazon.com/elasticbeanstalk/)
GET request to the health endpoint:
```
curl http://micronautguide.us-east-1.elasticbeanstalk.com/health
{"status":"UP"}
```
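A minimal sketch (not part of the issue or the Micronaut codebase) of how the regression can be detected programmatically: a predicate that accepts the healthy `/health` payload and rejects the broken "WebSocket Not Found" payload shown above. The function name is illustrative.

```python
# Sketch: distinguish the healthy /health payload from the broken
# "WebSocket Not Found" payload returned by the 2.5.x regression.
import json

def is_healthy(raw: str) -> bool:
    """Return True only when the endpoint reports {"status": "UP"}."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return payload.get("status") == "UP"

if __name__ == "__main__":
    ok = '{"status":"UP"}'
    broken = ('{"message":"WebSocket Not Found",'
              '"_links":{"self":{"href":"/health","templated":false}}}')
    print(is_healthy(ok), is_healthy(broken))
```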
|
1.0
|
Regression 2.4.x 2.5.x Unable to deploy to Elastic Beanstalk - # Steps to reproduce
`mn create-app example.micronaut.micronautguide --inplace`
## Health Endpoint
Add `micronaut-management` dependency:
```groovy
dependencies {
...
..
.
implementation("io.micronaut:micronaut-management")
}
```
## Port 5000 EC2 Env
Create a file to run the application in port 5000 when deployed to [Elastic Beanstalk](https://aws.amazon.com/elasticbeanstalk/)
`src/main/resources/application-ec2.yml`
```yaml
micronaut:
server:
port: 5000
```
## Deployment
Generate a FAT JAR:
```
./gradlew build
```
## Elastic Beanstalk application
Create a Java 11 application in [Elastic Beanstalk](https://aws.amazon.com/elasticbeanstalk/).
<img width="689" alt="Screenshot 2021-05-15 at 07 47 50" src="https://user-images.githubusercontent.com/864788/118349659-33723300-b552-11eb-90f1-6433af9a6684.png">
As application code upload FAT JAR `build/libs/micronautguide-0.1-all.jar`.
# Current behaviour
GET request to the health endpoint:
```
curl http://micronautguide.us-east-1.elasticbeanstalk.com/health
{"message":"WebSocket Not Found","_links":{"self":{"href":"/health","templated":false}}}%
```
# Expected output
If I change the Micronaut version in `gradle.properties` to 2.4.4, generate and deploy the FAT JAR to [Elastic Beanstalk](https://aws.amazon.com/elasticbeanstalk/)
GET request to the health endpoint:
```
curl http://micronautguide.us-east-1.elasticbeanstalk.com/health
{"status":"UP"}
```
|
priority
|
regression x x unable to deploy to elastic beanstalk steps to reproduce mn create app example micronaut micronautguide inplace health endpoint add micronaut management dependency groovy dependencies implementation io micronaut micronaut management port env create a file to run the application in port when deployed to src main resources application yml yaml micronaut server port deployment generate a fat jar gradlew build elastic beanstalk application create a java application in img width alt screenshot at src as application code upload fat jar build libs micronautguide all jar current behaviour get request to the health endpoint curl message websocket not found links self href health templated false expected output if i change the micronaut version in gradle properties to generate and deploy the fat jar to get request to the health endpoint curl status up
| 1
|
357,122
| 10,602,235,443
|
IssuesEvent
|
2019-10-10 13:52:51
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Upgrade React to 16.x and also update related libraries
|
Accepted Priority: High user story
|
### Description
Following up on the investigation done for #3528, we want to upgrade React to the latest 16.x version and the following related libraries:
- Babel (7.2.2)
- Webpack (4.29.3)
- react-redux (6.0.0)
We will also need to upgrade (to be compatible with React 16):
- qrcode.react
- react-color
- react-nouislider
- react-overlays
- connected-react-router (used instead of react-router-redux)
- react-selectize
- react-widgets
- reselect
- react-sortable-items (created 16.x version on our fork)
- react-pdf
- ag-grid -> ag-grid-community + ag-grid-react
- jsonix --> @boundlessgeo/jsonix
And also (to be compatible with webpack 4):
- babel-loader
- json-loader
- copy-webpack-plugin
- mini-css-extract-plugin (used instead of extract-text-webpack-plugin)
- postcss, postcss-loader, postcss-prefix-selector
- webpack-dev-server
- webpack-parallel-uglify-plugin
- webpack-cli (new)
Finally (for tests):
- karma (4.3.0)
- istanbul-instrumenter-loader
- mocha (6.2.0 customized to fix an issue with uncaught exceptions)
|
1.0
|
Upgrade React to 16.x and also update related libraries - ### Description
Following up on the investigation done for #3528, we want to upgrade React to the latest 16.x version and the following related libraries:
- Babel (7.2.2)
- Webpack (4.29.3)
- react-redux (6.0.0)
We will also need to upgrade (to be compatible with React 16):
- qrcode.react
- react-color
- react-nouislider
- react-overlays
- connected-react-router (used instead of react-router-redux)
- react-selectize
- react-widgets
- reselect
- react-sortable-items (created 16.x version on our fork)
- react-pdf
- ag-grid -> ag-grid-community + ag-grid-react
- jsonix --> @boundlessgeo/jsonix
And also (to be compatible with webpack 4):
- babel-loader
- json-loader
- copy-webpack-plugin
- mini-css-extract-plugin (used instead of extract-text-webpack-plugin)
- postcss, postcss-loader, postcss-prefix-selector
- webpack-dev-server
- webpack-parallel-uglify-plugin
- webpack-cli (new)
Finally (for tests):
- karma (4.3.0)
- istanbul-instrumenter-loader
- mocha (6.2.0 customized to fix an issue with uncaught exceptions)
|
priority
|
upgrade react to x and also update related libraries description following up investigation done for we want to upgrade react to the latest x version and the following related libraries babel webpack react redux we will also need to updgrade to be compatible with react qrcode react react color react nouislider react overlays connected react router used instead of react router redux react selectize react widgets reselect react sortable items created x version on our fork react pdf ag grid ag grid community ag grid react jsonix boundlessgeo jsonix and also to be compatible with webpack babel loader json loader copy webpack plugin mini css extract plugin used instead of extract text webpack plugin postcss postcss loader postcss prefix selector webpack dev server webpack parallel uglify plugin webpack cli new finally for tests karma istanbul instrumenter loader mocha customized to fix an issue with uncaught exceptions
| 1
|
537,943
| 15,757,794,395
|
IssuesEvent
|
2021-03-31 05:51:54
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Transaction:getInfo() does not return the past transaction information
|
Lang/Transaction Priority/High Team/CompilerFE Type/Bug
|
Consider the following code.
```ballerina
transaction {
readonly d = 123;
transactions:setData(d);
transactions:Info transInfo = transactions:info();
transactions:Info? newTransInfo = transactions:getInfo(transInfo.xid);
if(newTransInfo is transactions:Info) {
test:assertEquals(transInfo.xid, newTransInfo.xid);
} else {
panic error AssertionError(ASSERTION_ERROR_REASON, message = "unexpected output from getInfo");
}
transactions:onRollback(rollbackFunc);
str += "In Trx";
test:assertEquals(d === transactions:getData(), true);
check commit;
str += " commit";
}
```
The above code should have executed successfully, but instead it prints "unexpected output from getInfo".
The cause is that the outputs of info() and getInfo() are not the same.
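A hypothetical sketch (illustrative names, not the Ballerina API) of the invariant the failing test asserts: looking a transaction up by its xid should return the same info record that `info()` produced for that transaction.

```python
# Sketch: a registry keyed by xid; get_info must return the
# registered record, not a freshly built one.
_registry = {}

def begin(xid: str) -> dict:
    """Register and return the info record for a new transaction."""
    info = {"xid": xid, "data": None}
    _registry[xid] = info
    return info

def get_info(xid: str):
    # Returning anything other than the registered record breaks the
    # equality the issue's test expects.
    return _registry.get(xid)

trans_info = begin("xid-1")
assert get_info("xid-1") is trans_info
```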
|
1.0
|
Transaction:getInfo() does not return the past transaction information - Consider the following code.
```ballerina
transaction {
readonly d = 123;
transactions:setData(d);
transactions:Info transInfo = transactions:info();
transactions:Info? newTransInfo = transactions:getInfo(transInfo.xid);
if(newTransInfo is transactions:Info) {
test:assertEquals(transInfo.xid, newTransInfo.xid);
} else {
panic error AssertionError(ASSERTION_ERROR_REASON, message = "unexpected output from getInfo");
}
transactions:onRollback(rollbackFunc);
str += "In Trx";
test:assertEquals(d === transactions:getData(), true);
check commit;
str += " commit";
}
```
The above code should have executed successfully, but instead it prints "unexpected output from getInfo".
The cause is that the outputs of info() and getInfo() are not the same.
|
priority
|
transaction getinfo does not return the past transaction information consider the following code ballerina transaction readonly d transactions setdata d transactions info transinfo transactions info transactions info newtransinfo transactions getinfo transinfo xid if newtransinfo is transactions info test assertequals transinfo xid newtransinfo xid else panic error assertionerror assertion error reason message unexpected output from getinfo transactions onrollback rollbackfunc str in trx test assertequals d transactions getdata true check commit str commit above code should have executed successfully but instead it prints unexpected output from getinfo cause is the output of info andd getinfo are not the same
| 1
|
240,598
| 7,803,353,158
|
IssuesEvent
|
2018-06-10 22:49:05
|
leo-project/leofs
|
https://api.github.com/repos/leo-project/leofs
|
closed
|
[rack aware] doesn't work as expected
|
Bug Priority-HIGH _leo_manager _leo_redundant_manager _leo_storage survey v1.4
|
Now there are some cases that rack awareness doesn't work as expected.
For example, let's say that we try to create a cluster with
- 5 storage nodes (node[1-5])
- 2 physical racks (rack[1-2])
- 2 replicas (N=2) and each replica should belong to a different rack
then set the configurations as following
- replication.rack_awareness.rack_id on each leo_storage.conf
- storage_0@127.0.0.1: replication.rack_awareness.rack_id = rack1
- storage_1@127.0.0.1: replication.rack_awareness.rack_id = rack1
- storage_2@127.0.0.1: replication.rack_awareness.rack_id = rack1
- storage_3@127.0.0.1: replication.rack_awareness.rack_id = rack2
- storage_4@127.0.0.1: replication.rack_awareness.rack_id = rack2
- consistency.rack_aware_replicas = 2
however there are some objects that have replicas both belonging to a same rack like following.
```shell
leofs@cat2neat:leofs.1.3.5$ leofs-adm whereis "test/2"
-------+--------------------------+--------------------------------------+------------+--------------+----------------+----------------+----------------+----------------------------
del? | node | ring address | size | checksum | has children | total chunks | clock | when
-------+--------------------------+--------------------------------------+------------+--------------+----------------+----------------+----------------+----------------------------
| storage_4@127.0.0.1 | 738e5b62350a5baebb0a619d9e44a91c | 6K | 21f79de954 | false | 0 | 56ce8ee93a199 | 2018-05-24 09:42:54 +0900
| storage_3@127.0.0.1 | 738e5b62350a5baebb0a619d9e44a91c | 6K | 21f79de954 | false | 0 | 56ce8ee93a199 | 2018-05-24 09:42:54 +0900
```
storage_[3,4]@127.0.0.1 both belong to rack2.
* related link: leo-project/leofs/issues/1047
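A sketch (not LeoFS code) of the placement invariant the report describes: using the node-to-rack mapping from the `leo_storage.conf` snippets above, a correct rack-aware placement with N=2 must put the two replicas in different racks, which the `whereis` output violates.

```python
# Sketch: verify that a set of replica nodes spans distinct racks,
# using the node-to-rack assignments from the issue.
NODE_RACK = {
    "storage_0@127.0.0.1": "rack1",
    "storage_1@127.0.0.1": "rack1",
    "storage_2@127.0.0.1": "rack1",
    "storage_3@127.0.0.1": "rack2",
    "storage_4@127.0.0.1": "rack2",
}

def spans_racks(replica_nodes) -> bool:
    """True when every replica sits in a different rack."""
    racks = {NODE_RACK[n] for n in replica_nodes}
    return len(racks) == len(replica_nodes)

# The placement reported by `leofs-adm whereis "test/2"`: both rack2.
print(spans_racks(["storage_3@127.0.0.1", "storage_4@127.0.0.1"]))
```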
|
1.0
|
[rack aware] doesn't work as expected - Now there are some cases that rack awareness doesn't work as expected.
For example, let's say that we try to create a cluster with
- 5 storage nodes (node[1-5])
- 2 physical racks (rack[1-2])
- 2 replicas (N=2) and each replica should belong to a different rack
then set the configurations as following
- replication.rack_awareness.rack_id on each leo_storage.conf
- storage_0@127.0.0.1: replication.rack_awareness.rack_id = rack1
- storage_1@127.0.0.1: replication.rack_awareness.rack_id = rack1
- storage_2@127.0.0.1: replication.rack_awareness.rack_id = rack1
- storage_3@127.0.0.1: replication.rack_awareness.rack_id = rack2
- storage_4@127.0.0.1: replication.rack_awareness.rack_id = rack2
- consistency.rack_aware_replicas = 2
however there are some objects that have replicas both belonging to a same rack like following.
```shell
leofs@cat2neat:leofs.1.3.5$ leofs-adm whereis "test/2"
-------+--------------------------+--------------------------------------+------------+--------------+----------------+----------------+----------------+----------------------------
del? | node | ring address | size | checksum | has children | total chunks | clock | when
-------+--------------------------+--------------------------------------+------------+--------------+----------------+----------------+----------------+----------------------------
| storage_4@127.0.0.1 | 738e5b62350a5baebb0a619d9e44a91c | 6K | 21f79de954 | false | 0 | 56ce8ee93a199 | 2018-05-24 09:42:54 +0900
| storage_3@127.0.0.1 | 738e5b62350a5baebb0a619d9e44a91c | 6K | 21f79de954 | false | 0 | 56ce8ee93a199 | 2018-05-24 09:42:54 +0900
```
storage_[3,4]@127.0.0.1 both belong to rack2.
* related link: leo-project/leofs/issues/1047
|
priority
|
doesn t work as expected now there are some cases that rack awareness doesn t work as expected for example let s say that we try to create a cluster with storage nodes node physical racks rack replicas n and each replica should belong to a different rack then set the configurations as following replication rack awareness rack id on each leo storage conf storage replication rack awareness rack id storage replication rack awareness rack id storage replication rack awareness rack id storage replication rack awareness rack id storage replication rack awareness rack id consistency rack aware replicas however there are some objects that have replicas both belonging to a same rack like following shell leofs leofs leofs adm whereis test del node ring address size checksum has children total chunks clock when storage false storage false storage both belong to related link leo project leofs issues
| 1
|
689,390
| 23,618,636,949
|
IssuesEvent
|
2022-08-24 18:15:05
|
nexB/vulnerablecode
|
https://api.github.com/repos/nexB/vulnerablecode
|
closed
|
Odd behavior for certain vulnerability searches
|
bug Priority: high
|
I just noticed odd results from a vulnerability search that appear to be related to the number of `aliases` for that vulnerability.
For example, if I search for `cve`, my local DB returns 4,629 records. 1 record is `VULCOID-1`, and the vulnerability search results table shows 351 Affected packages and 2 Fixed packages. If I search just for `VULCOID-1`, the search results table shows that same data for that vulnerability.
Another record in the `cve` search results is `VULCOID-106`, which according to the table has 215 Affected packages and 10 Fixed packages. However, if I search just for `VULCOID-106`, the search results table shows 430 Affected packages and 20 Fixed packages -- 2x the amount shown in the `cve` search results table.
Digging around, I see that `VULCOID-1` has 1 related `alias` while `VULCOID-106` has 2 related `aliases`. Other samples produce similar results, suggesting the vulnerability search results code (the numbers in my new details page look OK) somehow multiplies the Affected package and Fixed package counts by the number of `aliases` when the search is for a `VULCOID`.
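A sketch of the suspected bug with toy data (not VulnerableCode's schema): counting related rows after joining a vulnerability to its aliases multiplies the count by the number of aliases, unless the related ids are de-duplicated before counting. This reproduces the 2x inflation seen for the two-alias `VULCOID-106` while single-alias records look correct.

```python
# Toy data: VULCOID-1 has 1 alias, VULCOID-106 has 2 aliases.
vulnerability_aliases = {"VULCOID-1": ["CVE-A"], "VULCOID-106": ["CVE-B", "CVE-C"]}
affected = {"VULCOID-1": ["pkg1"], "VULCOID-106": ["pkg1", "pkg2"]}

def naive_count(vuln_id: str) -> int:
    # One joined row per (alias, affected package) pair: inflated
    # by the number of aliases.
    return sum(len(affected[vuln_id]) for _ in vulnerability_aliases[vuln_id])

def correct_count(vuln_id: str) -> int:
    # De-duplicate related packages before counting.
    return len(set(affected[vuln_id]))

print(naive_count("VULCOID-106"), correct_count("VULCOID-106"))  # 4 2
```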
|
1.0
|
Odd behavior for certain vulnerability searches - I just noticed odd results from a vulnerability search that appear to be related to the number of `aliases` for that vulnerability.
For example, if I search for `cve`, my local DB returns 4,629 records. 1 record is `VULCOID-1`, and the vulnerability search results table shows 351 Affected packages and 2 Fixed packages. If I search just for `VULCOID-1`, the search results table shows that same data for that vulnerability.
Another record in the `cve` search results is `VULCOID-106`, which according to the table has 215 Affected packages and 10 Fixed packages. However, if I search just for `VULCOID-106`, the search results table shows 430 Affected packages and 20 Fixed packages -- 2x the amount shown in the `cve` search results table.
Digging around, I see that `VULCOID-1` has 1 related `alias` while `VULCOID-106` has 2 related `aliases`. Other samples produce similar results, suggesting the vulnerability search results code (the numbers in my new details page look OK) somehow multiplies the Affected package and Fixed package counts by the number of `aliases` when the search is for a `VULCOID`.
|
priority
|
odd behavior for certain vulnerability searches i just noticed odd results from a vulnerability search that appear to be related to the number of aliases for that vulnerability for example if i search for cve my local db returns records record is vulcoid and the vulnerability search results table shows affected packages and fixed packages if i search just for vulcoid the search results table shows that same data for that vulnerability another record in the cve search results is vulcoid which according to the table has affected packages and fixed packages however if i search just for vulcoid the search results table shows affected packages and fixed packages the amount shown in the cve search results table digging around i see that vulcoid has related alias while vulcoid has related aliases other samples produce similar results suggesting the vulnerability search results code the numbers in my new details page look ok somehow multiplies the affected package and fixed package counts by the number of aliases when the search is for a vulcoid
| 1
|
300,880
| 9,213,202,537
|
IssuesEvent
|
2019-03-10 09:45:30
|
leinardi/FloatingActionButtonSpeedDial
|
https://api.github.com/repos/leinardi/FloatingActionButtonSpeedDial
|
closed
|
Use unique view IDs
|
Priority: High Status: Review Needed Status: Stale Type: Enhancement
|
It would be a good idea to prefix view ids used in the library to make them unique.
I'm getting the following error:
```
Wrong state class, expecting View State but received class com.google.android.material.stateful.ExtendableSavedState instead. This usually happens when two views of different type have the same id in the same hierarchy. This view's id is id/fab.
```
It turns out this is because this library uses a `id/fab`, and I use `id/fab` for the SpeedDial view.
I imagine it's quite common for developers to use the `id/fab` id, so prefixing yours would help avoid this error.
|
1.0
|
Use unique view IDs - It would be a good idea to prefix view ids used in the library to make them unique.
I'm getting the following error:
```
Wrong state class, expecting View State but received class com.google.android.material.stateful.ExtendableSavedState instead. This usually happens when two views of different type have the same id in the same hierarchy. This view's id is id/fab.
```
It turns out this is because this library uses a `id/fab`, and I use `id/fab` for the SpeedDial view.
I imagine it's quite common for developers to use the `id/fab` id, so prefixing yours would help avoid this error.
|
priority
|
use unique view ids it would be a good idea to prefix view ids used in the library to make them unique i m getting the following error wrong state class expecting view state but received class com google android material stateful extendablesavedstate instead this usually happens when two views of different type have the same id in the same hierarchy this view s id is id fab it turns out this is because this library uses a id fab and i use id fab for the speeddial view i imagine its quite common for developers to use the id fab id so prefixing yours would help avoid this erorr
| 1
|
594,182
| 18,026,014,572
|
IssuesEvent
|
2021-09-17 04:43:02
|
input-output-hk/cardano-graphql
|
https://api.github.com/repos/input-output-hk/cardano-graphql
|
closed
|
Problem with reading rewards after upgrade to Alonzo
|
BUG PRIORITY:HIGH
|
After upgrading to the Alonzo update of the Cardano Node, we are not able to read our rewards.
We are sending the following query:
```
rewards(
where:{
stakePool:{
id:{_eq:"pool13xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}
}
}
)
{
address,
amount,
earnedIn
{
number
startedAt
lastBlockTime
}
stakePool { rewardAddress }
}
}
```
We can see the rewards on https://cardanoscan.io/. We did the upgrade between epoch 287 and 288.
Could there be some kind of bug related to this?
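A sketch (not the cardano-graphql client) that packages the rewards query from this report as a JSON payload ready to POST to a GraphQL endpoint; the pool id is a placeholder, and the field selection follows the query shown above.

```python
# Sketch: build the rewards-by-stake-pool GraphQL request body.
import json

def rewards_query(pool_id: str) -> str:
    """Return the JSON request body for the rewards query."""
    query = (
        '{ rewards(where:{stakePool:{id:{_eq:"%s"}}}) '
        '{ address amount '
        'earnedIn { number startedAt lastBlockTime } '
        'stakePool { rewardAddress } } }' % pool_id
    )
    return json.dumps({"query": query})

payload = rewards_query("pool13xxx")  # placeholder pool id
```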
|
1.0
|
Problem with reading rewards after upgrade to Alonzo - After upgrade the Alonzo update of the Cardano Node we are not able to read our rewards.
We are sending the following query:
```
rewards(
where:{
stakePool:{
id:{_eq:"pool13xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}
}
}
)
{
address,
amount,
earnedIn
{
number
startedAt
lastBlockTime
}
stakePool { rewardAddress }
}
}
```
We can see the rewards on https://cardanoscan.io/. We did the upgrade between epoch 287 and 288.
Could there be some kind of bug related to this?
|
priority
|
problem with reading rewards after upgrade to alonzo after upgrade the alonzo update of the cardano node we are not able to read our rewards we are sending the following query rewards where stakepool id eq address amount earnedin number startedat lastblocktime stakepool rewardaddress we can see the rewards on we did the upgrade between epoc and can there some kind of bugs related to this
| 1
|
467,407
| 13,447,631,935
|
IssuesEvent
|
2020-09-08 14:28:51
|
onaio/reveal-frontend
|
https://api.github.com/repos/onaio/reveal-frontend
|
opened
|
Dynamic FI Plans Missing Details in the FI Plans for Thailand
|
Priority: High Reveal-DSME
|
- [ ] The Dynamic FI plans on the Thailand preview do not show details that were there on the initial FI plans. These include:
|
1.0
|
Dynamic FI Plans Missing Details in the FI Plans for Thailand - - [ ] The Dynamic FI plans on the Thailand preview do not show details that were there on the initial FI plans. These include:
|
priority
|
dynamic fi plans missing details in the fi plans for thailand the dynamic fi plans on the thailand preview does not show details that were there on the initial fi plans this include
| 1
|
803,048
| 29,116,045,607
|
IssuesEvent
|
2023-05-17 01:07:34
|
encorelab/ck-board
|
https://api.github.com/repos/encorelab/ck-board
|
closed
|
Modify Todo List goal types
|
enhancement high priority
|
### Description
Add a new goal type
**Tasks**
- [ ] Add "ATL skills" as a new goal type (between "Classroom engagement" and "Assigned class work")
- [ ] For the info icon beside the goal type selector add "ATL skills (i.e., thinking skills, self-management skills, and research skills)" between the class engagement and assigned class work info.
|
1.0
|
Modify Todo List goal types - ### Description
Add a new goal type
**Tasks**
- [ ] Add "ATL skills" as a new goal type (between "Classroom engagement" and "Assigned class work")
- [ ] For the info icon beside the goal type selector add "ATL skills (i.e., thinking skills, self-management skills, and research skills)" between the class engagement and assigned class work info.
|
priority
|
modify todo list goal types description add a new goal type tasks add atl skills as a new goal type between classroom engagement and assigned class work for the info icon beside the goal type selector add atl skills i e thining skills self management skills and research skills between the class engagement and assigned class work info
| 1
|
634,506
| 20,363,690,967
|
IssuesEvent
|
2022-02-21 01:16:35
|
NCC-CNC/whattemplatemaker
|
https://api.github.com/repos/NCC-CNC/whattemplatemaker
|
closed
|
Action IDs must be unique - should we provide guidance?
|
bug high priority
|
e.g., if they have two separate types of invasive species management? I think people can get around this by manually entering invasive spp management for second instance. Should we make that clearer or is that something for the manual?
|
1.0
|
Action IDs must be unique - should we provide guidance? - e.g., if they have two separate types of invasive species management? I think people can get around this by manually entering invasive spp management for second instance. Should we make that clearer or is that something for the manual?
|
priority
|
action ids must be unique should we provide guidance e g if they have two separate types of invasive species management i think people can get around this by manually entering invasive spp management for second instance should we make that clearer or is that something for the manual
| 1
|
191,052
| 6,825,232,145
|
IssuesEvent
|
2017-11-08 09:47:38
|
atlarge-research/opendc-frontend
|
https://api.github.com/repos/atlarge-research/opendc-frontend
|
closed
|
Support editing the tile-topology of a room
|
in progress Priority: High
|
In room mode, a button labeled 'Edit room' should be added which allows the user to add and remove tiles to/of that room. Currently, the only way of doing this is deleting the room (and any racks that may belong to it) and re-creating it.
|
1.0
|
Support editing the tile-topology of a room - In room mode, a button labeled 'Edit room' should be added which allows the user to add and remove tiles to/of that room. Currently, the only way of doing this is deleting the room (and any racks that may belong to it) and re-creating it.
|
priority
|
support editing the tile topology of a room in room mode a button labeled edit room should be added which allows the user to add and remove tiles to of that room currently the only way of doing this is deleting the room and any racks that may belong to it and re creating it
| 1
|
25,303
| 2,679,046,614
|
IssuesEvent
|
2015-03-26 14:48:47
|
Connexions/webview
|
https://api.github.com/repos/Connexions/webview
|
closed
|
Editor - user needs to be able to edit type attribute
|
enhancement High Priority
|
The Biology OSC book uses the type attribute on Notes that are Art Connection features. We need to at least add editing of type to Notes. It should work in the same way that the class attribute editor works.
Tagging Legand for Biology: https://rice.box.com/s/2m8ysm0mg3a5bvzwu55djfqw0myxc4sf
Example of Note using type in Biology: http://legacy.cnx.org/content/m44418/latest/
Screenshot of CNXML with type.

|
1.0
|
Editor - user needs to be able to edit type attribute - The Biology OSC book uses the type attribute on Notes that are Art Connection features. We need to at least add editing of type to Notes. It should work in the same way that the class attribute editor works.
Tagging Legand for Biology: https://rice.box.com/s/2m8ysm0mg3a5bvzwu55djfqw0myxc4sf
Example of Note using type in Biology: http://legacy.cnx.org/content/m44418/latest/
Screenshot of CNXML with type.

|
priority
|
editor user needs to be able to edit type attribute the biology osc book uses the type attribute on notes that are art connection features we need to at least add editing of type to notes it should work in the same way that the class attribute editor works tagging legand for biology example of note using type in biology screenshot of cnxml with type
| 1
|
242,763
| 7,846,603,758
|
IssuesEvent
|
2018-06-19 15:54:58
|
department-of-veterans-affairs/caseflow
|
https://api.github.com/repos/department-of-veterans-affairs/caseflow
|
closed
|
RAMP election | Bug: we allow users to create EPs even if there are no issues to close
|
Triage bug-high-priority caseflow-intake sierra
|
There was a situation where an NOD was mistakenly dated one day later than the RAMP opt-in. The CA was able to create the EP, but no VACOLS appeal issues were closed. We should tell users when there are no valid issues on the "finish" step.
|
1.0
|
RAMP election | Bug: we allow users to create EPs even if there are no issues to close - There was a situation where an NOD was mistakenly dated one day later than the RAMP opt-in. The CA was able to create the EP, but no VACOLS appeal issues were closed. We should tell users when there are no valid issues on the "finish" step.
|
priority
|
ramp election bug we allow users to create eps even if there are no issues to close there was a situation where an nod was mistakenly dated one day later than the ramp opt in the ca was able to create the ep but no vacols appeal issues were closed we should tell users when there are no valid issues on the finish step
| 1
|
184,094
| 6,705,401,790
|
IssuesEvent
|
2017-10-12 00:01:17
|
Monitorr/Monitorr
|
https://api.github.com/repos/Monitorr/Monitorr
|
closed
|
Change config.php delivery
|
enhancement Epic Priority: HIGH
|
Need to change the way we supply the config file so that updates do not overwrite a user's config.
- [x] Change the config.php to config.php.sample
- [x] Update README to reflect the new config setup (copy config.in.sample to config.ini)
|
1.0
|
Change config.php delivery - Need to change the way we supply the config file so that updates do not overwrite a user's config.
- [x] Change the config.php to config.php.sample
- [x] Update README to reflect the new config setup (copy config.in.sample to config.ini)
|
priority
|
change config php delivery need to change the way we supply the config file so that updates do not overwrite a users config change the config php to config php sample update readme to reflect the new config setup copy config in sample to config ini
| 1
|
328,195
| 9,990,732,194
|
IssuesEvent
|
2019-07-11 09:27:54
|
nhn/tui.grid
|
https://api.github.com/repos/nhn/tui.grid
|
closed
|
jquery is required in onClick event code
|
3.x Bug Priority: High
|
Hello tui-grid team, thank you for providing a great library. While writing code with toast-ui.vue-grid I ran into what looks like an error, so I am reporting it here.
To explain the situation: in order to use the Tree Data format, I configured it as follows,
```js
//////////// template part /////////////
...
<Grid class="table"
ref="table"
theme="striped"
:options="gridOptions"
:columnData="columns"
:rowData="batchData"
/>
...
/////////////////////////////////////
///////////// options //////////////
gridOptions: {
bodyHeight: 'fitToParent',
rowHeight: 30,
virtualScrolling: true,
treeColumnOptions: {
name: 'app',
useCascadingCheckbox: false,
},
summary: {
position: 'bottom',
height: 40,
columnContent: {
......
}
},
}
```
and when clicking to expand a collapsed row, an error occurs in the _onClick handler.
## Version
tui-grid: v 3.8.0, @toast-ui/vue-grid: v 1.0.5
## Development Environment
@vue/cli: v 3.7.0, typescript: v 3.4.3
## Current Behavior
Clicking the collapse indicator (the black triangle) produces the following error:
```js
tui-grid.js?800b:17714 Uncaught ReferenceError: $ is not defined
at init._onClick (tui-grid.js?800b:17714)
at executeBound (underscore.js?3e81:762)
at HTMLButtonElement.eval (underscore.js?3e81:775)
at HTMLButtonElement.eval (underscore.js?3e81:122)
at HTMLDivElement.dispatch (jquery.js?a881:5226)
at HTMLDivElement.elemData.handle (jquery.js?a881:4878)
_onClick @ tui-grid.js?800b:17714
executeBound @ underscore.js?3e81:762
(anonymous) @ underscore.js?3e81:775
(anonymous) @ underscore.js?3e81:122
dispatch @ jquery.js?a881:5226
elemData.handle @ jquery.js?a881:4878
```
Checking the tui-grid.js file, I found the following:
```js
/***/ }),
/* 65 */
/***/ (function(module, exports, __webpack_require__) {
/**
* @fileoverview Tree cell painter
* @author NHN. FE Development Lab <dl_javascript@nhn.com>
*/
'use strict';
var _ = __webpack_require__(2);
var snippet = __webpack_require__(3);
var Painter = __webpack_require__(63);
var util = __webpack_require__(17);
var attrNameConst = __webpack_require__(10).attrName;
var dimensionConst = __webpack_require__(10).dimension;
var classNameConst = __webpack_require__(19);
/* lines 17623 ~ 17641 */
```
As shown above, I confirmed that $ is not being required in that section.
## Expected Behavior
```js
/***/ }),
/* 65 */
/***/ (function(module, exports, __webpack_require__) {
/**
* @fileoverview Tree cell painter
* @author NHN. FE Development Lab <dl_javascript@nhn.com>
*/
'use strict';
var $ = __webpack_require__(7);
var _ = __webpack_require__(2);
var snippet = __webpack_require__(3);
var Painter = __webpack_require__(63);
var util = __webpack_require__(17);
var attrNameConst = __webpack_require__(10).attrName;
var dimensionConst = __webpack_require__(10).dimension;
var classNameConst = __webpack_require__(19);
/* lines 17623 ~ 17642 */
```
As shown above, I am using this as a temporary workaround, but since this is a dist file a proper fix seems necessary, so I am reporting it as an issue.
Thank you, and have a good day.
|
1.0
|
jquery is required in onClick event code - Hello tui-grid team, thank you for providing a great library. While writing code with toast-ui.vue-grid I ran into what looks like an error, so I am reporting it here.
To explain the situation: in order to use the Tree Data format, I configured it as follows,
```js
//////////// template part /////////////
...
<Grid class="table"
ref="table"
theme="striped"
:options="gridOptions"
:columnData="columns"
:rowData="batchData"
/>
...
/////////////////////////////////////
///////////// options //////////////
gridOptions: {
bodyHeight: 'fitToParent',
rowHeight: 30,
virtualScrolling: true,
treeColumnOptions: {
name: 'app',
useCascadingCheckbox: false,
},
summary: {
position: 'bottom',
height: 40,
columnContent: {
......
}
},
}
```
and when clicking to expand a collapsed row, an error occurs in the _onClick handler.
## Version
tui-grid: v 3.8.0, @toast-ui/vue-grid: v 1.0.5
## Development Environment
@vue/cli: v 3.7.0, typescript: v 3.4.3
## Current Behavior
Clicking the collapse indicator (the black triangle) produces the following error:
```js
tui-grid.js?800b:17714 Uncaught ReferenceError: $ is not defined
at init._onClick (tui-grid.js?800b:17714)
at executeBound (underscore.js?3e81:762)
at HTMLButtonElement.eval (underscore.js?3e81:775)
at HTMLButtonElement.eval (underscore.js?3e81:122)
at HTMLDivElement.dispatch (jquery.js?a881:5226)
at HTMLDivElement.elemData.handle (jquery.js?a881:4878)
_onClick @ tui-grid.js?800b:17714
executeBound @ underscore.js?3e81:762
(anonymous) @ underscore.js?3e81:775
(anonymous) @ underscore.js?3e81:122
dispatch @ jquery.js?a881:5226
elemData.handle @ jquery.js?a881:4878
```
Checking the tui-grid.js file, I found the following:
```js
/***/ }),
/* 65 */
/***/ (function(module, exports, __webpack_require__) {
/**
* @fileoverview Tree cell painter
* @author NHN. FE Development Lab <dl_javascript@nhn.com>
*/
'use strict';
var _ = __webpack_require__(2);
var snippet = __webpack_require__(3);
var Painter = __webpack_require__(63);
var util = __webpack_require__(17);
var attrNameConst = __webpack_require__(10).attrName;
var dimensionConst = __webpack_require__(10).dimension;
var classNameConst = __webpack_require__(19);
/* lines 17623 ~ 17641 */
```
As shown above, I confirmed that $ is not being required in that section.
## Expected Behavior
```js
/***/ }),
/* 65 */
/***/ (function(module, exports, __webpack_require__) {
/**
* @fileoverview Tree cell painter
* @author NHN. FE Development Lab <dl_javascript@nhn.com>
*/
'use strict';
var $ = __webpack_require__(7);
var _ = __webpack_require__(2);
var snippet = __webpack_require__(3);
var Painter = __webpack_require__(63);
var util = __webpack_require__(17);
var attrNameConst = __webpack_require__(10).attrName;
var dimensionConst = __webpack_require__(10).dimension;
var classNameConst = __webpack_require__(19);
/* lines 17623 ~ 17642 */
```
As shown above, I am using this as a temporary workaround, but since this is a dist file a proper fix seems necessary, so I am reporting it as an issue.
Thank you, and have a good day.
|
priority
|
jquery is required in onclick event code hello tui grid team thank you for providing a great library while writing code with toast ui vue grid i ran into what looks like an error so i am reporting it here to explain the situation in order to use the tree data format i configured it as follows js template part grid class table ref table theme striped options gridoptions columndata columns rowdata batchdata options gridoptions bodyheight fittoparent rowheight virtualscrolling true treecolumnoptions name app usecascadingcheckbox false summary position bottom height columncontent when clicking to expand a collapsed row an error occurs in the onclick handler version tui grid v toast ui vue grid v development environment vue cli v typescript v current behavior clicking the collapse indicator the black triangle produces the following error js tui grid js uncaught referenceerror is not defined at init onclick tui grid js at executebound underscore js at htmlbuttonelement eval underscore js at htmlbuttonelement eval underscore js at htmldivelement dispatch jquery js at htmldivelement elemdata handle jquery js onclick tui grid js executebound underscore js anonymous underscore js anonymous underscore js dispatch jquery js elemdata handle jquery js checking the tui grid js file i found the following js function module exports webpack require fileoverview tree cell painter author nhn fe development lab use strict var webpack require var snippet webpack require var painter webpack require var util webpack require var attrnameconst webpack require attrname var dimensionconst webpack require dimension var classnameconst webpack require these lines as shown above i confirmed that is not being required in that section expected behavior js function module exports webpack require fileoverview tree cell painter author nhn fe development lab use strict var webpack require var webpack require var snippet webpack require var painter webpack require var util webpack require var attrnameconst webpack require attrname var dimensionconst webpack require dimension var classnameconst webpack require these lines as shown above i am using this as a temporary workaround but since this is a dist file a proper fix seems necessary so i am reporting it as an issue thank you and have a good day
| 1
|
339,575
| 10,256,289,029
|
IssuesEvent
|
2019-08-21 17:18:38
|
openstax/tutor
|
https://api.github.com/repos/openstax/tutor
|
closed
|
Don’t allow "Test Prep for AP® Courses" or "Science Practice Challenge Questions" to be selectable when creating reading/HW assignments
|
change priority1-high
|
### Description
Don’t allow "Test Prep for AP® Courses" or "Science Practice Challenge Questions" to be selectable when creating reading/HW assignments
### Acceptance Criteria
These end-of-chapter questions appear in reference view, but are not available to assign when creating a reading assignment or when selecting HW problems.
### Acceptance Tests
derive these from "Acceptance Criteria" during "Design" phase
**title**: template
**categories**: Interactive Component, Navigation, TOC, Math, Book Content, Other
> GIVEN something
> AND something else
> WHEN something
> AND something else
> THEN something
> AND something else
### Epic Doc
URL
### Checklist for Done
1A-DEFINE
- [x] Description is complete
- [x] Acceptance criteria are complete
1B-DECOMP
- [ ] The work is broken into reasonable and consistent work items
- [ ] Estimate is updated
- [ ] Release is selected
1C-DESIGN
- [ ] Distilled acceptance tests are added to the issue in "Given / When / Then" format
- [ ] Automated testing criteria are added to issue
- [ ] The development approach is reviewed/updated
2A-CODE
- [ ] A pull request is opened and linked to an issue. The automated pull request checks pass
- [ ] The change has been approved by other developers
- [ ] The pull request is merged into master, and the change branch is deleted.
- [ ] The acceptance tests are finalized
- [ ] Regression test categories are identified on the issue. Categories should be referenced if any changes might affect them, even if the intended functionality is unchanged.
4A-UX REVIEW
- [ ] PDM reviews and verifies
5A-FUNCT VER
- [ ] All the acceptance tests for this issue pass.
5C-REGRESSION
- [ ] The test plan in testrail passes.
|
1.0
|
Don’t allow "Test Prep for AP® Courses" or "Science Practice Challenge Questions" to be selectable when creating reading/HW assignments - ### Description
Don’t allow "Test Prep for AP® Courses" or "Science Practice Challenge Questions" to be selectable when creating reading/HW assignments
### Acceptance Criteria
These end-of-chapter questions appear in reference view, but are not available to assign when creating a reading assignment or when selecting HW problems.
### Acceptance Tests
derive these from "Acceptance Criteria" during "Design" phase
**title**: template
**categories**: Interactive Component, Navigation, TOC, Math, Book Content, Other
> GIVEN something
> AND something else
> WHEN something
> AND something else
> THEN something
> AND something else
### Epic Doc
URL
### Checklist for Done
1A-DEFINE
- [x] Description is complete
- [x] Acceptance criteria are complete
1B-DECOMP
- [ ] The work is broken into reasonable and consistent work items
- [ ] Estimate is updated
- [ ] Release is selected
1C-DESIGN
- [ ] Distilled acceptance tests are added to the issue in "Given / When / Then" format
- [ ] Automated testing criteria are added to issue
- [ ] The development approach is reviewed/updated
2A-CODE
- [ ] A pull request is opened and linked to an issue. The automated pull request checks pass
- [ ] The change has been approved by other developers
- [ ] The pull request is merged into master, and the change branch is deleted.
- [ ] The acceptance tests are finalized
- [ ] Regression test categories are identified on the issue. Categories should be referenced if any changes might affect them, even if the intended functionality is unchanged.
4A-UX REVIEW
- [ ] PDM reviews and verifies
5A-FUNCT VER
- [ ] All the acceptance tests for this issue pass.
5C-REGRESSION
- [ ] The test plan in testrail passes.
|
priority
|
don’t allow test prep for ap® courses or science practice challenge questions to be selectable when creating reading hw assignments description don’t allow test prep for ap® courses or science practice challenge questions to be selectable when creating reading hw assignments acceptance criteria these end of chapter questions appear in reference view but are not available to assign when creating a reading assignment or when selecting hw problems acceptance tests derive these from acceptance criteria during design phase title template categories interactive component navigation toc math book content other given something and something else when something and something else then something and something else epic doc url checklist for done define description is complete acceptance criteria are complete decomp the work is broken into reasonable and consistent work items estimate is updated release is selected design distilled acceptance tests are added to the issue in given when then format automated testing criteria are added to issue the development approach is reviewed updated code a pull request is opened and linked to an issue the automated pull request checks pass the change has been approved by other developers the pull request is merged into master and the change branch is deleted the acceptance tests are finalized regression test categories are identified on the issue categories should be referenced if any changes might affect them even if the intended functionality is unchanged ux review pdm reviews and verifies funct ver all the acceptance tests for this issue pass regression the test plan in testrail passes
| 1
|
178,487
| 6,609,307,265
|
IssuesEvent
|
2017-09-19 14:09:23
|
metasfresh/metasfresh-webui-api
|
https://api.github.com/repos/metasfresh/metasfresh-webui-api
|
closed
|
Unable to see list of document's attachments
|
branch:master branch:release priority:high type:bug
|
### Type of issue
Bug
### Steps to reproduce:
1. Open window/123/2156482. This document has many attachments uploaded
2. Click on dropdown menu button (in top-right corner of the page)
3. Then click on tab with paperclip icon
4. A loading indicator will appear; after some 20-30 seconds it throws the error
You'll get the following error:
Error: Error while loading model...
Server error
Error while loading model from ResultSet Name der DB-Tabelle: AD_AttachmentEntry Class: class org.compiere.model.X_AD_AttachmentEntry

|
1.0
|
Unable to see list of document's attachments - ### Type of issue
Bug
### Steps to reproduce:
1. Open window/123/2156482. This document has many attachments uploaded
2. Click on dropdown menu button (in top-right corner of the page)
3. Then click on tab with paperclip icon
4. A loading indicator will appear; after some 20-30 seconds it throws the error
You'll get the following error:
Error: Error while loading model...
Server error
Error while loading model from ResultSet Name der DB-Tabelle: AD_AttachmentEntry Class: class org.compiere.model.X_AD_AttachmentEntry

|
priority
|
unable to see list of document s attachments type of issue bug steps to reproduce open window this document has many attachments uploaded click on dropdown menu button in top right corner of the page then click on tab with paperclip icon loading indicator will appear need to wait some seconds and then it throws the error you ll get following error error error while loading model server error error while loading model from resultset name der db tabelle ad attachmententry class class org compiere model x ad attachmententry
| 1
|
553,126
| 16,357,488,553
|
IssuesEvent
|
2021-05-14 02:08:30
|
k8ssandra/k8ssandra
|
https://api.github.com/repos/k8ssandra/k8ssandra
|
closed
|
Token allocations are random when using 4.0 and lead to collisions
|
bug complexity: high component: cass-operator component: cassandra priority: p1 sprint: 5
|
## Bug Report
<!--
Thanks for filing an issue! Before hitting the button, please answer these questions.
Fill in as much of the template below as you can.
-->
**Describe the bug**
When collocating Cassandra pods on the same worker node using Cassandra 4.0, some nodes will fail to start due to tokens collisions:
```
INFO [main] 2021-05-06 09:09:44,192 Gossiper.java:2110 - No gossip backlog; proceeding
INFO [main] 2021-05-06 09:09:44,193 StorageService.java:1599 - JOINING: schema complete, ready to bootstrap
INFO [main] 2021-05-06 09:09:44,193 StorageService.java:1599 - JOINING: waiting for pending range calculation
INFO [main] 2021-05-06 09:09:44,193 StorageService.java:1599 - JOINING: calculation complete, ready to bootstrap
INFO [main] 2021-05-06 09:09:44,197 StorageService.java:1599 - JOINING: getting bootstrap token
INFO [main] 2021-05-06 09:09:44,198 BootStrapper.java:250 - Generated random tokens. tokens are [4916404158927833750, -1389173121663489945, 5083198339702642473, -4169687567737103308, -3820396588987121254, -930632370836328278, -7651905638036910085, -7132548803968952319, 6570178476170400202, 732244319226977375, -5562429418534086174, -6429970682779439091, -3132119538246048788, -8439012771966836414, -4842558979028939132, 6174033040364023532]
INFO [main] 2021-05-06 09:09:44,204 ColumnFamilyStore.java:870 - Enqueuing flush of local: 2.846KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:2] 2021-05-06 09:09:44,213 Memtable.java:452 - Writing Memtable-local@1003054531(0.627KiB serialized bytes, 2 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:2] 2021-05-06 09:09:44,214 Memtable.java:481 - Completed flushing /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/na-41-big-Data.db (0.388KiB) for commitlog position CommitLogPosition(segmentId=1620292171353, position=40787)
INFO [MemtableFlushWriter:2] 2021-05-06 09:09:44,232 LogTransaction.java:240 - Unfinished transaction log, deleting /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/na_txn_flush_cc396ae0-ae4a-11eb-9534-c5d576268691.log
INFO [main] 2021-05-06 09:09:44,236 StorageService.java:1599 - JOINING: sleeping 30000 ms for pending range setup
INFO [main] 2021-05-06 09:10:14,236 StorageService.java:1599 - JOINING: Starting to bootstrap...
ERROR [main] 2021-05-06 09:10:14,251 CassandraDaemon.java:822 - Exception encountered during startup
java.lang.IllegalStateException: Multiple strict sources found for Full(/172.31.11.214:7000,(9096486175408947141,-9149794428708036724]), sources: [Full(/172.31.41.163:7000,(9096486175408947141,-9149794428708036724]), Full(/172.31.25.109:7000,(9096486175408947141,-9149794428708036724])]
at org.apache.cassandra.dht.RangeStreamer.calculateRangesToFetchWithPreferredEndpoints(RangeStreamer.java:517)
at org.apache.cassandra.dht.RangeStreamer.calculateRangesToFetchWithPreferredEndpoints(RangeStreamer.java:383)
at org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:320)
at org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:81)
at org.apache.cassandra.service.StorageService.startBootstrap(StorageService.java:1765)
at org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1742)
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:1033)
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:994)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:793)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:723)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:400)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:676)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:800)
INFO [StorageServiceShutdownHook] 2021-05-06 09:10:14,259 HintsService.java:220 - Paused hints dispatch
WARN [StorageServiceShutdownHook] 2021-05-06 09:10:14,259 Gossiper.java:1912 - No local state, state is in silent shutdown, or node hasn't joined, not announcing shutdown
```
Looking at the `cassandra.yaml` file in the containers, I can see that the `allocate_tokens_for_replication_factor` is missing, which explains why `Generated random tokens` shows up in the logs.
While I don't get how random tokens could end up with collisions, generating random tokens with 16 vnodes per node will lead to imbalances that are not fit for production use.
**Expected behavior**
4.0 clusters should use the new token allocation algorithm by default and nodes should be able to bootstrap without token collisions.
**Environment (please complete the following information):**
* Helm charts version info
<!-- list installed charts and their versions from all namespaces -->
<!-- Replace the command with its output -->
1.1.0
|
1.0
|
Token allocations are random when using 4.0 and lead to collisions - ## Bug Report
<!--
Thanks for filing an issue! Before hitting the button, please answer these questions.
Fill in as much of the template below as you can.
-->
**Describe the bug**
When collocating Cassandra pods on the same worker node using Cassandra 4.0, some nodes will fail to start due to tokens collisions:
```
INFO [main] 2021-05-06 09:09:44,192 Gossiper.java:2110 - No gossip backlog; proceeding
INFO [main] 2021-05-06 09:09:44,193 StorageService.java:1599 - JOINING: schema complete, ready to bootstrap
INFO [main] 2021-05-06 09:09:44,193 StorageService.java:1599 - JOINING: waiting for pending range calculation
INFO [main] 2021-05-06 09:09:44,193 StorageService.java:1599 - JOINING: calculation complete, ready to bootstrap
INFO [main] 2021-05-06 09:09:44,197 StorageService.java:1599 - JOINING: getting bootstrap token
INFO [main] 2021-05-06 09:09:44,198 BootStrapper.java:250 - Generated random tokens. tokens are [4916404158927833750, -1389173121663489945, 5083198339702642473, -4169687567737103308, -3820396588987121254, -930632370836328278, -7651905638036910085, -7132548803968952319, 6570178476170400202, 732244319226977375, -5562429418534086174, -6429970682779439091, -3132119538246048788, -8439012771966836414, -4842558979028939132, 6174033040364023532]
INFO [main] 2021-05-06 09:09:44,204 ColumnFamilyStore.java:870 - Enqueuing flush of local: 2.846KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:2] 2021-05-06 09:09:44,213 Memtable.java:452 - Writing Memtable-local@1003054531(0.627KiB serialized bytes, 2 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:2] 2021-05-06 09:09:44,214 Memtable.java:481 - Completed flushing /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/na-41-big-Data.db (0.388KiB) for commitlog position CommitLogPosition(segmentId=1620292171353, position=40787)
INFO [MemtableFlushWriter:2] 2021-05-06 09:09:44,232 LogTransaction.java:240 - Unfinished transaction log, deleting /var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/na_txn_flush_cc396ae0-ae4a-11eb-9534-c5d576268691.log
INFO [main] 2021-05-06 09:09:44,236 StorageService.java:1599 - JOINING: sleeping 30000 ms for pending range setup
INFO [main] 2021-05-06 09:10:14,236 StorageService.java:1599 - JOINING: Starting to bootstrap...
ERROR [main] 2021-05-06 09:10:14,251 CassandraDaemon.java:822 - Exception encountered during startup
java.lang.IllegalStateException: Multiple strict sources found for Full(/172.31.11.214:7000,(9096486175408947141,-9149794428708036724]), sources: [Full(/172.31.41.163:7000,(9096486175408947141,-9149794428708036724]), Full(/172.31.25.109:7000,(9096486175408947141,-9149794428708036724])]
at org.apache.cassandra.dht.RangeStreamer.calculateRangesToFetchWithPreferredEndpoints(RangeStreamer.java:517)
at org.apache.cassandra.dht.RangeStreamer.calculateRangesToFetchWithPreferredEndpoints(RangeStreamer.java:383)
at org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:320)
at org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:81)
at org.apache.cassandra.service.StorageService.startBootstrap(StorageService.java:1765)
at org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1742)
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:1033)
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:994)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:793)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:723)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:400)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:676)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:800)
INFO [StorageServiceShutdownHook] 2021-05-06 09:10:14,259 HintsService.java:220 - Paused hints dispatch
WARN [StorageServiceShutdownHook] 2021-05-06 09:10:14,259 Gossiper.java:1912 - No local state, state is in silent shutdown, or node hasn't joined, not announcing shutdown
```
Looking at the `cassandra.yaml` file in the containers, I can see that the `allocate_tokens_for_replication_factor` is missing, which explains why `Generated random tokens` shows up in the logs.
While I don't get how random tokens could end up with collisions, generating random tokens with 16 vnodes per node will lead to imbalances that are not fit for production use.
**Expected behavior**
4.0 clusters should use the new token allocation algorithm by default and nodes should be able to bootstrap without token collisions.
**Environment (please complete the following information):**
* Helm charts version info
<!-- list installed charts and their versions from all namespaces -->
<!-- Replace the command with its output -->
1.1.0
|
priority
|
token allocations are random when using and lead to collisions bug report thanks for filing an issue before hitting the button please answer these questions fill in as much of the template below as you can describe the bug when collocating cassandra pods on the same worker node using cassandra some nodes will fail to start due to tokens collisions info gossiper java no gossip backlog proceeding info storageservice java joining schema complete ready to bootstrap info storageservice java joining waiting for pending range calculation info storageservice java joining calculation complete ready to bootstrap info storageservice java joining getting bootstrap token info bootstrapper java generated random tokens tokens are info columnfamilystore java enqueuing flush of local on heap off heap info memtable java writing memtable local serialized bytes ops of on off heap limit flushed range null null info memtable java completed flushing var lib cassandra data system local na big data db for commitlog position commitlogposition segmentid position info logtransaction java unfinished transaction log deleting var lib cassandra data system local na txn flush log info storageservice java joining sleeping ms for pending range setup info storageservice java joining starting to bootstrap error cassandradaemon java exception encountered during startup java lang illegalstateexception multiple strict sources found for full sources full at org apache cassandra dht rangestreamer calculaterangestofetchwithpreferredendpoints rangestreamer java at org apache cassandra dht rangestreamer calculaterangestofetchwithpreferredendpoints rangestreamer java at org apache cassandra dht rangestreamer addranges rangestreamer java at org apache cassandra dht bootstrapper bootstrap bootstrapper java at org apache cassandra service storageservice startbootstrap storageservice java at org apache cassandra service storageservice bootstrap storageservice java at org apache cassandra service storageservice 
jointokenring storageservice java at org apache cassandra service storageservice jointokenring storageservice java at org apache cassandra service storageservice initserver storageservice java at org apache cassandra service storageservice initserver storageservice java at org apache cassandra service cassandradaemon setup cassandradaemon java at org apache cassandra service cassandradaemon activate cassandradaemon java at org apache cassandra service cassandradaemon main cassandradaemon java info hintsservice java paused hints dispatch warn gossiper java no local state state is in silent shutdown or node hasn t joined not announcing shutdown looking at the cassandra yaml file in the containers i can see that the allocate tokens for replication factor is missing which explains why generated random tokens shows up in the logs while i don t get how random tokens could end up with collisions generating random tokens with vnodes per node will lead to imbalances that are no fit for production use expected behavior clusters should use the new token allocation algorithm by default and nodes should be able to bootstrap without token collisions environment please complete the following information helm charts version info
| 1
|
783,759
| 27,544,657,765
|
IssuesEvent
|
2023-03-07 10:54:02
|
frequenz-floss/frequenz-sdk-python
|
https://api.github.com/repos/frequenz-floss/frequenz-sdk-python
|
opened
|
Create a BackgroundService class
|
priority:high type:enhancement part:core
|
### What's needed?
Right now we have no common approach to managing classes/objects that spawn tasks. The most prominent example is actors, but we also have a lot of other classes that spawn tasks (like the resampler, moving window, etc.).
It should be possible to clearly control the lifespan of these objects, and be able to cleanly and clearly start and stop the tasks they spawn.
For now every class is doing its own thing: sometimes there is no way to stop things, sometimes there is but it is private, etc.
We should have a common way to do this.
### Proposed solution
Add a `BackgroundService` class to the SDK that provides a simple framework to handle this.
Here is a quick sketch of a possible implementation:
```py
from abc import ABC, abstractmethod
import asyncio


class BackgroundService(ABC):
    def __init__(self):
        self.tasks: set[asyncio.Task] = set()

    # async might not be really needed here, but allows sub-classes
    # to do async work as part of starting a task, which can be a
    # good thing since async initialization work can't be done in
    # __init__()
    @abstractmethod
    async def start(self) -> None:
        pass

    @property
    def is_stopped(self) -> bool:
        for t in self.tasks:
            if not t.done():
                return False
        return True

    def cancel(self) -> None:
        for t in self.tasks:
            t.cancel()

    async def join(self) -> None:
        # asyncio.Task has no join(); awaiting the tasks waits for them
        await asyncio.gather(*self.tasks)

    async def stop(self) -> None:
        for t in self.tasks:
            await cancel_and_await(t)  # existing SDK helper

    async def __aenter__(self) -> None:
        await self.start()

    async def __aexit__(self, exc_type, exc, tb) -> None:
        await self.stop()

    def __del__(self) -> None:
        self.cancel()
        # Maybe see if there is a loop running and try to run await self.stop() in the loop?
```
(`join()` and `stop()` should probably be implemented using `asyncio.wait()` in a more resilient way)
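As a rough illustration of that parenthetical, a more resilient `stop()` could be built on `asyncio.wait()`, which never raises and always returns `(done, pending)` sets. The helper name and the timeout handling below are assumptions for this sketch, not the SDK's actual API:

```python
import asyncio
from typing import Optional, Set


async def stop_tasks(tasks: Set[asyncio.Task], timeout: Optional[float] = None) -> None:
    """Cancel every task and wait for all of them, tolerating failures.

    Unlike gather() without return_exceptions, one task raising during
    cancellation cannot prevent the remaining tasks from being awaited.
    """
    if not tasks:
        return
    for task in tasks:
        task.cancel()
    done, _pending = await asyncio.wait(tasks, timeout=timeout)
    for task in done:
        # Retrieve exceptions so the loop doesn't warn about them later.
        if not task.cancelled() and task.exception() is not None:
            pass  # a real implementation would log or collect these
```

If `timeout` expires, `_pending` still holds the tasks that refused to finish, which gives the caller a hook for escalation (logging, force-killing, etc.).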
Then all other classes can inherit from `BackgroundService`:
```py
class MovingWindow(BackgroundService):
    def __init__(self, other_params: ..., ...):
        super().__init__()

    async def start(self) -> None:
        self.tasks.add(asyncio.create_task(...))


mw = MovingWindow(...)
await mw.start()
...
await mw.stop()
```
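To make the pattern concrete, here is a minimal runnable variant of the proposal. `Ticker` and all its names are invented for the demo and deliberately simpler than anything the SDK would ship:

```python
import asyncio
from abc import ABC, abstractmethod


class BackgroundService(ABC):
    """Minimal demo base class: owns the tasks it spawns."""

    def __init__(self) -> None:
        self.tasks: set = set()

    @abstractmethod
    async def start(self) -> None:
        """Spawn the background tasks."""

    async def stop(self) -> None:
        for task in self.tasks:
            task.cancel()
        # return_exceptions=True keeps one failure from hiding the rest.
        await asyncio.gather(*self.tasks, return_exceptions=True)


class Ticker(BackgroundService):
    """Toy service that counts ticks in the background."""

    def __init__(self, interval: float) -> None:
        super().__init__()
        self.interval = interval
        self.ticks = 0

    async def start(self) -> None:
        self.tasks.add(asyncio.create_task(self._run()))

    async def _run(self) -> None:
        while True:
            await asyncio.sleep(self.interval)
            self.ticks += 1


async def demo() -> int:
    ticker = Ticker(0.01)
    await ticker.start()  # returns immediately; work happens in background
    await asyncio.sleep(0.05)
    await ticker.stop()
    return ticker.ticks
```

Running `asyncio.run(demo())` yields a small positive tick count, and after `stop()` the task set contains only finished tasks.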
### Use cases
* `Actor`
* `MovingWindow`
* `Resampler`
* etc.
### Alternatives and workarounds
_No response_
### Additional context
# Advantages
* Having a common, well-known interface (consistency, ease of use, less cognitive burden for users to know what's going on)
* It allows full control over the lifespan of a worker
* It separates initialization from running the task, which could help avoid dependency problems if 2 *workers* need each other to run.
* It is explicit in terms of when something is running in the background or not.
# Disadvantages
* If people forget to call `start()` it might take a while to figure out what's going on.
*
# Class name
I first thought of `Worker`, but it's not a great name, as it is usually used for something that runs in a thread and can run a task, but I could not think of a better name.
I thought of `Task`, but it might be confusing given `asyncio.Task`. I thought of `BackgroundTask`, as it is something that is supposed to run in the background, as opposed to a general `Task` that can also be some calculation for which you want a result but want to run several calculations in parallel.
I thought of `Daemon` as the functionality is similar to a unix daemon. Or maybe `Service` in the same sense. To make it more explicit I ended up with `BackgroundService`, as it will only be used to sub-class it shouldn't be too annoying to have a long name.
# References
* Coming originally from this PR discussion: https://github.com/frequenz-floss/frequenz-sdk-python/pull/190#discussion_r1100080476
* Discussed in: https://github.com/frequenz-floss/frequenz-sdk-python/discussions/197
|
1.0
|
Create a BackgroundService class - ### What's needed?
Right now we have no common approach to managing classes/objects that spawn tasks. The most prominent example is actors, but we also have a lot of other classes that spawn tasks (like the resampler, moving window, etc.).
It should be possible to clearly control the lifespan of these objects, and be able to cleanly and clearly start and stop the tasks they spawn.
For now every class is doing its own thing: sometimes there is no way to stop things, sometimes there is but it is private, etc.
We should have a common way to do this.
### Proposed solution
Add a `BackgroundService` class to the SDK that provides a simple framework to handle this.
Here is a quick sketch of a possible implementation:
```py
from abc import ABC, abstractmethod
import asyncio


class BackgroundService(ABC):
    def __init__(self):
        self.tasks: set[asyncio.Task] = set()

    # async might not be really needed here, but allows sub-classes
    # to do async work as part of starting a task, which can be a
    # good thing since async initialization work can't be done in
    # __init__()
    @abstractmethod
    async def start(self) -> None:
        pass

    @property
    def is_stopped(self) -> bool:
        for t in self.tasks:
            if not t.done():
                return False
        return True

    def cancel(self) -> None:
        for t in self.tasks:
            t.cancel()

    async def join(self) -> None:
        # asyncio.Task has no join(); awaiting the tasks waits for them
        await asyncio.gather(*self.tasks)

    async def stop(self) -> None:
        for t in self.tasks:
            await cancel_and_await(t)  # existing SDK helper

    async def __aenter__(self) -> None:
        await self.start()

    async def __aexit__(self, exc_type, exc, tb) -> None:
        await self.stop()

    def __del__(self) -> None:
        self.cancel()
        # Maybe see if there is a loop running and try to run await self.stop() in the loop?
```
(`join()` and `stop()` should probably be implemented using `asyncio.wait()` in a more resilient way)
Then all other classes can inherit from `BackgroundService`:
```py
class MovingWindow(BackgroundService):
    def __init__(self, other_params: ..., ...):
        super().__init__()

    async def start(self) -> None:
        self.tasks.add(asyncio.create_task(...))


mw = MovingWindow(...)
await mw.start()
...
await mw.stop()
```
### Use cases
* `Actor`
* `MovingWindow`
* `Resampler`
* etc.
### Alternatives and workarounds
_No response_
### Additional context
# Advantages
* Having a common, well-known interface (consistency, ease of use, less cognitive burden for users to know what's going on)
* It allows full control over the lifespan of a worker
* It separates initialization from running the task, which could help avoid dependency problems if 2 *workers* need each other to run.
* It is explicit in terms of when something is running in the background or not.
# Disadvantages
* If people forget to call `start()` it might take a while to figure out what's going on.
*
# Class name
I first thought of `Worker`, but it's not a great name, as it is usually used for something that runs in a thread and can run a task, but I could not think of a better name.
I thought of `Task`, but it might be confusing given `asyncio.Task`. I thought of `BackgroundTask`, as it is something that is supposed to run in the background, as opposed to a general `Task` that can also be some calculation for which you want a result but want to run several calculations in parallel.
I thought of `Daemon` as the functionality is similar to a unix daemon. Or maybe `Service` in the same sense. To make it more explicit I ended up with `BackgroundService`, as it will only be used to sub-class it shouldn't be too annoying to have a long name.
# References
* Coming originally from this PR discussion: https://github.com/frequenz-floss/frequenz-sdk-python/pull/190#discussion_r1100080476
* Discussed in: https://github.com/frequenz-floss/frequenz-sdk-python/discussions/197
|
priority
|
create a backgroundservice class what s needed right now we have no common approach on how to manage classes objects that spawn tasks the most core example are actors but we also have a lot of other classes that spawn tasks like the resampler moving window etc it should be possible to clearly control the lifespan of these objects and be able to cleanly and clearly start and stop the tasks they spawn for now every class is doing their own thing sometimes there is no way to stop things sometimes there is but it is private etc we should have a common way to do this proposed solution add a backgroundservice class to the sdk that provides a simple framework to handle this here is a quick sketch of a possible implementation py class backgroundservice abc def init self self tasks set set async might not be really needed here but allows sub classes to do async work as part of starting a task which can be a good thing since async initializaiton work can t be done in init abstract async def start self none pass property def is stopped self bool for it in self tasks if t done return false return true def cancel self none for t in self tasks t cancel async def join self none await asyncio gather async def stop self none for t in self tasks await cancel and await t async aenter self none await self start async def aexit self exc type exc tb none await self stop def del self none self cancel maybe see if there is a loop running and try to run await self stop in the loop join and stop should probably be implemented using asyncio wait in a more resilient way then all other classes can inherit from backgroundservice py class movingwindow backgroundservice def init self other params super init async def start self none self tasks add asyncio create task mw movingwindow await mw start await mw stop use cases actor movingwindow resampler etc alternatives and workarounds no response additional context advantages having a common well known interface consistency ease of use less 
cognitive burden for users to know what s going on it allows full control over the lifespan of a worker it separates initialization from running the task which could help avoid dependency problems if workers need each other to run it is explicit in terms of when something is running in the background or not disadvantages if people forget to call start it might take a while to figure out what s going on class name i first thought of worker but it s not a great name as it is usually used for something that runs in a thread and can run a task but i could not think of a better name i thought of task but it might be confusing given asyncio task i thought of backgroundtask as it is something that it is supposed to run in the background opposed to a general task that can also be some calculation for which you want some result but want to run several calculations in parallel i thought of daemon as the functionality is similar to a unix daemon or maybe service in the same sense to make it more explicit i ended up with backgroundservice as it will only be used to sub class it shouldn t be too annoying to have a long name references coming originally from this pr discussion discussed in
| 1
|
254,379
| 8,073,429,849
|
IssuesEvent
|
2018-08-06 19:13:52
|
HealthCatalyst/healthcareai-r
|
https://api.github.com/repos/HealthCatalyst/healthcareai-r
|
closed
|
Limone integration
|
High Priority model interpretation new features
|
should be called after/separately from `predict` rather than being a switch to turn on during predict
|
1.0
|
Limone integration - should be called after/separately from `predict` rather than being a switch to turn on during predict
|
priority
|
limone integration should be called after separately from predict rather than being a switch to turn on during predict
| 1
|
258,646
| 8,178,606,926
|
IssuesEvent
|
2018-08-28 14:15:29
|
Theophilix/event-table-edit
|
https://api.github.com/repos/Theophilix/event-table-edit
|
closed
|
Frontend: Layout: Normal mode: If user clicks on date, only actual date is shown.
|
bug high priority
|
Click was on first date, 24.03.2016! Also, the filter is not working.

|
1.0
|
Frontend: Layout: Normal mode: If user clicks on date, only actual date is shown. - Click was on first date, 24.03.2016! Also, the filter is not working.

|
priority
|
frontend layout normal mode if user clicks on date only actual date is shown click was on first date also the filter is not working
| 1
|
580,428
| 17,243,820,820
|
IssuesEvent
|
2021-07-21 05:09:50
|
prysmaticlabs/prysm
|
https://api.github.com/repos/prysmaticlabs/prysm
|
opened
|
Nested Subcommands Are Broken on v1.4.1
|
Bug Priority: High
|
# 🐞 Bug Report
### Description
Nested Subcommands are Broken on `v1.4.1` . Any subcommand whose parent was a subcommand too
is affected by this error. #9129 introduced stricter validation of commands and arguments which did end up
breaking this command flow.
### Has this worked before in a previous version?
Yes this has been working previously till v1.3.11 .
## 🔬 Minimal Reproduction
```
./validator accounts list
```
## 🔥 Error
```
[2021-07-21 12:43:04] ERROR main: unrecognized argument: list
```
|
1.0
|
Nested Subcommands Are Broken on v1.4.1 - # 🐞 Bug Report
### Description
Nested Subcommands are Broken on `v1.4.1` . Any subcommand whose parent was a subcommand too
is affected by this error. #9129 introduced stricter validation of commands and arguments which did end up
breaking this command flow.
### Has this worked before in a previous version?
Yes this has been working previously till v1.3.11 .
## 🔬 Minimal Reproduction
```
./validator accounts list
```
## 🔥 Error
```
[2021-07-21 12:43:04] ERROR main: unrecognized argument: list
```
|
priority
|
nested subcommands are broken on 🐞 bug report description nested subcommands are broken on any subcommand whose parent was a subcommand too is affected by this error introduced stricter validation of commands and arguments which did end up breaking this command flow has this worked before in a previous version yes this has been working previously till 🔬 minimal reproduction validator accounts list 🔥 error error main unrecognized argument list
| 1
|
605,418
| 18,735,000,024
|
IssuesEvent
|
2021-11-04 05:44:21
|
AY2122S1-CS2103T-F11-2/tp
|
https://api.github.com/repos/AY2122S1-CS2103T-F11-2/tp
|
closed
|
[PE-D] delete 6 6 command has unexpected behaviour
|
bug priority.High mustfix
|
`delete 6 6` returns error "The person index provided is invalid", but the applicant at index 6 is actually deleted in the GUI list. In addition, upon immediately exiting the application and opening the application, applicant at index 6 remains and is not deleted.
Expected success message, and applicant at index 6 is actually deleted.

-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: wz27/ped#5
|
1.0
|
[PE-D] delete 6 6 command has unexpected behaviour - `delete 6 6` returns error "The person index provided is invalid", but the applicant at index 6 is actually deleted in the GUI list. In addition, upon immediately exiting the application and opening the application, applicant at index 6 remains and is not deleted.
Expected success message, and applicant at index 6 is actually deleted.

-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: wz27/ped#5
|
priority
|
delete command has unexpected behaviour delete returns error the person index provided is invalid but the applicant at index is actually deleted in the gui list in addition upon immediately exiting the application and opening the application applicant at index remains and is not deleted expected success message and applicant at index is actually deleted labels severity medium type functionalitybug original ped
| 1
|
424,035
| 12,305,215,039
|
IssuesEvent
|
2020-05-11 22:00:34
|
salimkanoun/Orthanc-Tools-JS
|
https://api.github.com/repos/salimkanoun/Orthanc-Tools-JS
|
closed
|
Closing the Overlays
|
Priority : High enhancement question
|
Would it be possible to close the overlays automatically when they no longer have focus, or something like that?
Or when the mouse has left the overlay?
The idea would be to not have to click the button again to close the overlay.
|
1.0
|
Closing the Overlays - Would it be possible to close the overlays automatically when they no longer have focus, or something like that?
Or when the mouse has left the overlay?
The idea would be to not have to click the button again to close the overlay.
|
priority
|
sortie des overlay serait il possible de fermer les overlay automatiquement quand ils ont plus le focus ou qcch comme ca ou si la souris est sortie de l overlay l idée serait de pas avoir à recliquer sur le boutton pour fermer l overlay
| 1
|
379,433
| 11,221,660,486
|
IssuesEvent
|
2020-01-07 18:21:40
|
ansible/awx
|
https://api.github.com/repos/ansible/awx
|
closed
|
Incorrect Delete Modal on InventoryGroupDetail
|
component:ui_next priority:high state:in_progress
|
##### ISSUE TYPE
- Bug Report
##### SUMMARY
The InventoryGroupDetails view does not use the correct delete modal.
##### ENVIRONMENT
* AWX version: 9.1
* AWX install method: docker for mac
* Ansible version: 2.10
* Operating System: Mojave
* Web Browser: Chrome
##### STEPS TO REPRODUCE
Go to Inventory --> Inventory Group and click on a group. Then click delete.
##### EXPECTED RESULTS

This modal should pop up allowing the user to decide to how to handle related groups/hosts
##### ACTUAL RESULTS

User can't decide how to handle related group/hosts
##### ADDITIONAL INFORMATION
|
1.0
|
Incorrect Delete Modal on InventoryGroupDetail - ##### ISSUE TYPE
- Bug Report
##### SUMMARY
The InventoryGroupDetails view does not use the correct delete modal.
##### ENVIRONMENT
* AWX version: 9.1
* AWX install method: docker for mac
* Ansible version: 2.10
* Operating System: Mojave
* Web Browser: Chrome
##### STEPS TO REPRODUCE
Go to Inventory --> Inventory Group and click on a group. Then click delete.
##### EXPECTED RESULTS

This modal should pop up allowing the user to decide to how to handle related groups/hosts
##### ACTUAL RESULTS

User can't decide how to handle related group/hosts
##### ADDITIONAL INFORMATION
|
priority
|
incorrect delete modal on inventorygroupdetail issue type bug report summary the inventorygroupdetails view does not use the correct delete modal environment awx version awx install method docker for mac ansible version operating system mojave web browser chrome steps to reproduce go to inventory inventory group and click on a group then click delete expected results this modal should pop up allowing the user to decide to how to handle related groups hosts actual results user can t decide how to handle related group hosts additional information
| 1
|
399,067
| 11,742,665,065
|
IssuesEvent
|
2020-03-12 01:38:16
|
thaliawww/concrexit
|
https://api.github.com/repos/thaliawww/concrexit
|
closed
|
CSV export does not contain payment values
|
bug easy and fun events priority: high
|
In GitLab by @JobDoesburg on May 7, 2019, 09:07
### One-sentence description
The .csv export of event registrations does not contain payment values
### Current behaviour / Reproducing the bug
Every row is exported as unpaid
### Expected behaviour
Contain the payment value visible in the admin
|
1.0
|
CSV export does not contain payment values - In GitLab by @JobDoesburg on May 7, 2019, 09:07
### One-sentence description
The .csv export of event registrations does not contain payment values
### Current behaviour / Reproducing the bug
Every row is exported as unpaid
### Expected behaviour
Contain the payment value visible in the admin
|
priority
|
csv export does not contain payment values in gitlab by jobdoesburg on may one sentence description the csv export of event registrations does not contain payment values current behaviour reproducing the bug every row is exported as unpaid expected behaviour contain the payment value visible in the admin
| 1
|
563,666
| 16,703,358,758
|
IssuesEvent
|
2021-06-09 07:00:42
|
knowease-inc/knowease-inc.github.io
|
https://api.github.com/repos/knowease-inc/knowease-inc.github.io
|
opened
|
Applying Google Analytics
|
Domain:Infra ETR:1W- Priority:High Task:Enhancement
|
## Goals this issue should achieve
> Briefly describe what this issue aims to accomplish and what state it should end up in.
* Google Analytics data collection enabled
## Current state
> Briefly describe the problem as of the time this issue was created, or the possibility of future problems.
* We cannot see any data about page visitors.
## Who could likely solve this issue
> Mention exactly **one** assignee with @.
* @T-Mook
## Detailed sub-problems that need to be solved
> List the sub-items for resolving this issue (closing conditions) as a checklist.
- [ ] Apply
- [ ] Test
## References that may help solve this issue
> Add as much as possible: related issue numbers, documents, Wiki pages, screenshots, personal opinions, etc.
> If this issue is related to other issues, **be sure to include their issue numbers**
- Related issue: #3
## Estimated time to resolve this issue
> Select only one estimated duration.
> (If it is not 1W+, change the label.)
- Estimated duration: **1W-**
## Related details
> Select exactly **one** reporter, and exactly one each of Domain, Priority, and Task.
> (If they are not UX, Medium, Enhancement, change the labels.)
- Reporter: @T-Mook
- Domain : **Infra**
- Priority: **High**
- Task : **Enhancement**
## Expected financial impact of resolving this issue
> If resolving this issue is expected to cause a company-wide meaningful change in revenue/costs, enter the figures.
- Expected revenue: 0 KRW/month
- Expected cost: 0 KRW/month
|
1.0
|
Applying Google Analytics - ## Goals this issue should achieve
> Briefly describe what this issue aims to accomplish and what state it should end up in.
* Google Analytics data collection enabled
## Current state
> Briefly describe the problem as of the time this issue was created, or the possibility of future problems.
* We cannot see any data about page visitors.
## Who could likely solve this issue
> Mention exactly **one** assignee with @.
* @T-Mook
## Detailed sub-problems that need to be solved
> List the sub-items for resolving this issue (closing conditions) as a checklist.
- [ ] Apply
- [ ] Test
## References that may help solve this issue
> Add as much as possible: related issue numbers, documents, Wiki pages, screenshots, personal opinions, etc.
> If this issue is related to other issues, **be sure to include their issue numbers**
- Related issue: #3
## Estimated time to resolve this issue
> Select only one estimated duration.
> (If it is not 1W+, change the label.)
- Estimated duration: **1W-**
## Related details
> Select exactly **one** reporter, and exactly one each of Domain, Priority, and Task.
> (If they are not UX, Medium, Enhancement, change the labels.)
- Reporter: @T-Mook
- Domain : **Infra**
- Priority: **High**
- Task : **Enhancement**
## Expected financial impact of resolving this issue
> If resolving this issue is expected to cause a company-wide meaningful change in revenue/costs, enter the figures.
- Expected revenue: 0 KRW/month
- Expected cost: 0 KRW/month
|
priority
|
google analytics 적용 이런 목표를 달성해야 합니다 이 이슈로 무슨 목표를 달성하고자 하며 어떤 상태가 되어야 하는지 간결히 적어주세요 google analytics 데이터 수집 가능상태 현재 이런 상태입니다 이 이슈를 생성한 현시점의 문제 혹은 향후 문제 발생 가능성에 대하여 간결히 적어주세요 페이지 방문자 관련 데이터를 알 수 없습니다 이 이슈는 이 분이 풀 수 있을 것 같습니다 담당할 assignee를 로 멘션해주세요 t mook 아래의 세부적인 문제를 풀어야 할 것 같습니다 이 이슈를 해결하기 위한 세부 항목 이슈 클로징 조건 을 체크리스트로 적어주세요 적용 테스트 이 이슈를 해결하기 위해 이런 내용을 참고할 수 있을 것 같습니다 문제 해결에 도움이 될 수 있을 것 같은 관련 이슈 번호 문서 wiki 스크린샷 개인적인 의견 등을 최대한 적어주세요 이 이슈가 다른 이슈와 관련되어 있는 경우는 반드시 이슈 번호를 적어주세요 관련이슈 이 이슈 해결을 위해 이정도 시간이 예상됩니다 예상소요시간을 한가지만 선택해주세요 가 아닌 경우 레이블을 변경해주세요 예상소요시간 관련된 세부 정보입니다 reporter는 domain priority task를 각각 한가지만 선택해주세요 ux medium enhancement 가 아닌 경우 레이블을 변경해주세요 reporter t mook domain infra priority high task enhancement 이 이슈를 해결함에 따라 이정도 재무적 영향이 예상됩니다 이 이슈를 해결함에 따라 전사적으로 유의미한 수익 비용 변동이 예상될 경우 해당 수치를 입력해주세요 예상수익 원 월 예상비용 원 월
| 1
|
527,061
| 15,307,955,372
|
IssuesEvent
|
2021-02-24 21:41:11
|
infiniteautomation/ma-core-public
|
https://api.github.com/repos/infiniteautomation/ma-core-public
|
closed
|
Audit Event Refactor
|
High Priority Item
|
Audit events are currently raised via a static method in the AbstractVoDao, they should leverage DaoEvents. The problem that brought this to my attention was that EventInstanceDao is attempting to raise audit events for EventInstanceVOs when they are inserted (which won't generally happen in Mango but does in the test framework as Mango raises EventInstance objects via a different Dao).
The solution is to:
1. Create a new type of Dao Event called AuditEventDao
2. Capture the raising user's information to be used in the audit event (if no user then fall back to System Superadmin from the DaoEvent's SecurityContext
3. Raise an AuditDaoEvent event if AbstractVoDao.typeName is not null
AuditDaoEvent:
```
{
    changeType: 1 (from AuditEventInstanceDao.CHANGE_TYPE_...),
    auditEventType: AuditEventType,
    permissionHolderName: See AuditEventType.raiseEvent(),
    message: translatableMessage,
    fromVO: vo,
    toVO: vo,
    timestamp: long,
}
```
- [x] Also add a way for audit events to be raised for System Settings
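Mango itself is Java, but the decision logic in steps 2 and 3 can be sketched language-neutrally; everything below (names, fields, strings) is hypothetical, not Mango's actual API:

```python
from dataclasses import dataclass
from typing import Any, Optional

SYSTEM_SUPERADMIN = "System Superadmin"  # fallback permission holder


@dataclass
class AuditDaoEvent:
    change_type: int
    type_name: str
    permission_holder_name: str
    from_vo: Any
    to_vo: Any


def make_audit_event(type_name: Optional[str], change_type: int,
                     user: Optional[str], from_vo: Any, to_vo: Any):
    """Build an audit event, or None when the DAO has no audit typeName.

    Mirrors the proposal: only DAOs with a non-null typeName raise audit
    events, and the raising user falls back to the system superadmin.
    """
    if type_name is None:
        return None
    holder = user if user is not None else SYSTEM_SUPERADMIN
    return AuditDaoEvent(change_type, type_name, holder, from_vo, to_vo)
```

This would let EventInstanceDao opt out of audit events simply by leaving its typeName null, which avoids the spurious events seen in the test framework.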
|
1.0
|
Audit Event Refactor - Audit events are currently raised via a static method in the AbstractVoDao, they should leverage DaoEvents. The problem that brought this to my attention was that EventInstanceDao is attempting to raise audit events for EventInstanceVOs when they are inserted (which won't generally happen in Mango but does in the test framework as Mango raises EventInstance objects via a different Dao).
The solution is to:
1. Create a new type of Dao Event called AuditEventDao
2. Capture the raising user's information to be used in the audit event (if no user then fall back to System Superadmin from the DaoEvent's SecurityContext
3. Raise an AuditDaoEvent event if AbstractVoDao.typeName is not null
AuditDaoEvent:
```
{
    changeType: 1 (from AuditEventInstanceDao.CHANGE_TYPE_...),
    auditEventType: AuditEventType,
    permissionHolderName: See AuditEventType.raiseEvent(),
    message: translatableMessage,
    fromVO: vo,
    toVO: vo,
    timestamp: long,
}
```
- [x] Also add a way for audit events to be raised for System Settings
|
priority
|
audit event refactor audit events are currently raised via a static method in the abstractvodao they should leverage daoevents the problem that brought this to my attention was that eventinstancedao is attempting to raise audit events for eventinstancevos when they are inserted which won t generally happen in mango but does in the test framework as mango raises eventinstance objects via a different dao the solution is to create a new type of dao event called auditeventdao capture the raising user s information to be used in the audit event if no user then fall back to system superadmin from the daoevent s securitycontext raise an auditdaoevent event if abstractvodao typename is not null auditdaoevent changetype from auditeventinstancedao change type auditeventtype auditeventtype permissionholdername see auditeventtype raiseevent message translatablemessage fromvo vo tovo vo timestamp long also add a way for audit events to be raised for system settings
| 1
|
688,594
| 23,589,154,368
|
IssuesEvent
|
2022-08-23 13:58:03
|
huridocs/uwazi
|
https://api.github.com/repos/huridocs/uwazi
|
closed
|
Multi date range and multi date ranges are not supported by CSV import
|
Sprint Priority: High Feature Backend 💾
|
The property type date range and multi date range are not supported through csv import. At present, this type of information has to be manually entered into the database as the import leaves out these property types when trying to do an import.
User Story:
As an admin user responsible for the migration of a large dataset of existing data into Uwazi that contains crucial date range information, I would like to be able to import this information through the csv file as well instead of having to manually enter this into each entity after the import is completed.
|
1.0
|
Multi date range and multi date ranges are not supported by CSV import - The property type date range and multi date range are not supported through csv import. At present, this type of information has to be manually entered into the database as the import leaves out these property types when trying to do an import.
User Story:
As an admin user responsible for the migration of a large dataset of existing data into Uwazi that contains crucial date range information, I would like to be able to import this information through the csv file as well instead of having to manually enter this into each entity after the import is completed.
|
priority
|
multi date range and multi date ranges are not supported by csv import the property type date range and multi date range are not supported through csv import at present this type of information has to be manually entered into the database as the import leaves out these property types when trying to do an import user story as an admin user responsible for the migration of a large dataset of existing data into uwazi that contains crucial date range information i would like to be able to import this information through the csv file as well instead of having to manually enter this into each entity after the import is completed
| 1
|
645,947
| 21,033,517,539
|
IssuesEvent
|
2022-03-31 04:47:24
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
Getting "Something went wrong" Error when trying to add a attribute for a registered OIDC application
|
ui Priority/Highest Severity/Blocker bug console UI Component/Application Management UI Component/Attribute Management Affected-5.12.0 QA-Reported
|
**How to reproduce:**
1. Set up postgres 11.5 as primary db
2. Access Console
3. Register a OIDC application
4. Navigate to Attributes tab of the registered app
5. Try to edit and add a new attribute
**Observation**
Getting a something went wrong page

https://user-images.githubusercontent.com/31848014/149087846-57ef0e51-012e-4235-9332-735c62240ce8.mp4
**Environment information (Please complete the following information; remove any unnecessary fields) **
5.12.0 alpha 9
postgres 11.5
chrome 65
jdk 1.8.0_291
Ubuntu 20.04.3 LTS
|
1.0
|
Getting "Something went wrong" Error when trying to add a attribute for a registered OIDC application - **How to reproduce:**
1. Set up postgres 11.5 as primary db
2. Access Console
3. Register a OIDC application
4. Navigate to Attributes tab of the registered app
5. Try to edit and add a new attribute
**Observation**
Getting a something went wrong page

https://user-images.githubusercontent.com/31848014/149087846-57ef0e51-012e-4235-9332-735c62240ce8.mp4
**Environment information (Please complete the following information; remove any unnecessary fields) **
5.12.0 alpha 9
postgres 11.5
chrome 65
jdk 1.8.0_291
Ubuntu 20.04.3 LTS
|
priority
|
getting something went wrong error when trying to add a attribute for a registered oidc application how to reproduce set up postgres as primary db access console register a oidc application navigate to attributes tab of the registered app try to edit and add a new attribute observation getting a something went wrong page environment information please complete the following information remove any unnecessary fields alpha postgres chrome jdk ubuntu lts
| 1
|
791,383
| 27,861,907,781
|
IssuesEvent
|
2023-03-21 07:15:55
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
[Bug]: Resolution engine resolves to higher versions of local repo dependencies
|
Type/Bug Priority/High Team/DevTools Area/ProjectAPI needTriage userCategory/Compilation
|
### Description
A dependency specified with the local repository is expected to resolve to the exact version but when there is a higher version of the dependency in BCentral, the resolution engine picks the higher version and looks up the local repository for this higher version.
The issue is that the locking mode is not set to `HARD` mode for local repo dependencies. Rather it is set with the default locking mode.
### Steps to Reproduce
1. Pack and publish two libraries within the compatibility range to Central (E.g foo 1.0.0, 1.1.0)
2. Publish `foo:1.0.0` to the local repository
3. Create a new package and use `foo:1.0.0` from the local repository
4. Build the package with `bal build --dump-graph`
**Output:**
The dependency graph will contain foo 1.1.0 followed by the `cannot resolve module myOrg/foo`
### Affected Version(s)
At least Swan Lake Update 4
### OS, DB, other environment details and versions
_No response_
### Related area
-> Compilation
### Related issue(s) (optional)
_No response_
### Suggested label(s) (optional)
_No response_
### Suggested assignee(s) (optional)
_No response_
|
1.0
|
[Bug]: Resolution engine resolves to higher versions of local repo dependencies - ### Description
A dependency specified with the local repository is expected to resolve to the exact version but when there is a higher version of the dependency in BCentral, the resolution engine picks the higher version and looks up the local repository for this higher version.
The issue is that the locking mode is not set to `HARD` mode for local repo dependencies. Rather it is set with the default locking mode.
### Steps to Reproduce
1. Pack and publish two libraries within the compatibility range to Central (E.g foo 1.0.0, 1.1.0)
2. Publish `foo:1.0.0` to the local repository
3. Create a new package and use `foo:1.0.0` from the local repository
4. Build the package with `bal build --dump-graph`
**Output:**
The dependency graph will contain foo 1.1.0 followed by the `cannot resolve module myOrg/foo`
### Affected Version(s)
At least Swan Lake Update 4
### OS, DB, other environment details and versions
_No response_
### Related area
-> Compilation
### Related issue(s) (optional)
_No response_
### Suggested label(s) (optional)
_No response_
### Suggested assignee(s) (optional)
_No response_
|
priority
|
resolution engine resolves to higher versions of local repo dependencies description a dependency specified with the local repository is expected to resolve to the exact version but when there is a higher version of the dependency in bcentral the resolution engine picks the higher version and looks up the local repository for this higher version the issue is that the locking mode is not set to hard mode for local repo dependencies rather it is set with the default locking mode steps to reproduce pack and publish two libraries within the compatibility range to central e g foo publish foo to the local repository create a new package and use foo from the local repository build the package with bal build dump graph output the dependency graph will contain foo followed by the cannot resolve module myorg foo affected version s at least swan lake update os db other environment details and versions no response related area compilation related issue s optional no response suggested label s optional no response suggested assignee s optional no response
| 1
|
491,938
| 14,174,192,501
|
IssuesEvent
|
2020-11-12 19:30:48
|
beacondig/site-beacon
|
https://api.github.com/repos/beacondig/site-beacon
|
opened
|
BDM | Page | Work Single | QA
|
Priority: High Status: Available
|
**NOTE:** Add this to the page epic
## Accelo Milestone:
## Accelo QA Task:
**Client Google Drive:**
**Figma Wireframes:**
Figma Design: https://www.figma.com/file/W8aDuuIxhO6ZUtzEot5rYU/BDM-%7C-Website-Redesign?node-id=983%3A0&viewport=-4247%2C-1853%2C0.25994157791137695
**Copy Deck:**
**Design Assets:**
Staging: https://www.beacondigitalmarketing.com/creative-design/appfolio-podcast-the-top-floor
## People Responsible
**Project Manager:**
**UX:**
**Design:**
**Developer:**
**QA:**
**Content:**
**Launch:**
#### DESKTOP
1. Windows 10.0
- [ ] Chrome (Most Recent)
- [ ] FireFox (Most Recent)
- [ ] Edge
- [ ] IE11 (ONLY CRITICAL ISSUES)
2. Windows 7.0
- [ ] Chrome (Most Recent)
- [ ] FireFox (Most Recent)
- [ ] Edge
- [ ] IE11 (ONLY CRITICAL ISSUES)
3. macOS 10.15: Catalina
- [ ] Chrome (Most Recent)
- [ ] FireFox (Most Recent)
- [ ] Safari
4. macOS 10.14: Mojave
- [ ] Chrome (Most Recent)
- [ ] FireFox (Most Recent)
- [ ] Safari
5. macOS 10.13: High Sierra
- [ ] Chrome (Most Recent)
- [ ] FireFox (Most Recent)
- [ ] Safari
#### MOBILE
6. iPhone iOS 13.6
- [ ] Safari
- [ ] Chrome
7. iPhone iOS 13.5
- [ ] Safari
- [ ] Chrome
8. iPhone iOS 12.4
- [ ] Safari
- [ ] Chrome
9. Samsung Galaxy
- [ ] Native Browser
- [ ] Chrome
10. Google Pixel
- [ ] Native Browser
- [ ] Chrome
|
1.0
|
BDM | Page | Work Single | QA - **NOTE:** Add this to the page epic
## Accelo Milestone:
## Accelo QA Task:
**Client Google Drive:**
**Figma Wireframes:**
Figma Design: https://www.figma.com/file/W8aDuuIxhO6ZUtzEot5rYU/BDM-%7C-Website-Redesign?node-id=983%3A0&viewport=-4247%2C-1853%2C0.25994157791137695
**Copy Deck:**
**Design Assets:**
Staging: https://www.beacondigitalmarketing.com/creative-design/appfolio-podcast-the-top-floor
## People Responsible
**Project Manager:**
**UX:**
**Design:**
**Developer:**
**QA:**
**Content:**
**Launch:**
#### DESKTOP
1. Windows 10.0
- [ ] Chrome (Most Recent)
- [ ] FireFox (Most Recent)
- [ ] Edge
- [ ] IE11 (ONLY CRITICAL ISSUES)
2. Windows 7.0
- [ ] Chrome (Most Recent)
- [ ] FireFox (Most Recent)
- [ ] Edge
- [ ] IE11 (ONLY CRITICAL ISSUES)
3. macOS 10.15: Catalina
- [ ] Chrome (Most Recent)
- [ ] FireFox (Most Recent)
- [ ] Safari
4. macOS 10.14: Mojave
- [ ] Chrome (Most Recent)
- [ ] FireFox (Most Recent)
- [ ] Safari
5. macOS 10.13: High Sierra
- [ ] Chrome (Most Recent)
- [ ] FireFox (Most Recent)
- [ ] Safari
#### MOBILE
6. iPhone iOS 13.6
- [ ] Safari
- [ ] Chrome
7. iPhone iOS 13.5
- [ ] Safari
- [ ] Chrome
8. iPhone iOS 12.4
- [ ] Safari
- [ ] Chrome
9. Samsung Galaxy
- [ ] Native Browser
- [ ] Chrome
10. Google Pixel
- [ ] Native Browser
- [ ] Chrome
|
priority
|
bdm page work single qa note add this to the page epic accelo milestone accelo qa task client google drive figma wireframes figma design copy deck design assets staging people responsible project manager ux design developer qa content launch desktop windows chrome most recent firefox most recent edge only critical issues windows chrome most recent firefox most recent edge only critical issues macos catalina chrome most recent firefox most recent safari macos mojave chrome most recent firefox most recent safari macos high sierra chrome most recent firefox most recent safari mobile iphone ios safari chrome iphone ios safari chrome iphone ios safari chrome samsung galaxy native browser chrome google pixel native browser chrome
| 1
|
265,273
| 8,351,831,965
|
IssuesEvent
|
2018-10-02 02:37:12
|
josephroqueca/bowling-companion
|
https://api.github.com/repos/josephroqueca/bowling-companion
|
opened
|
Add an option to combine series bowled on the same day
|
enhancement from user high priority
|
Only applies to the `Practice` league.
When the user opens the series list for the `Practice` league and 2 or more series have the same date and less than 20 games, the app should prompt the user to combine the series into one.
|
1.0
|
Add an option to combine series bowled on the same day - Only applies to the `Practice` league.
When the user opens the series list for the `Practice` league and 2 or more series have the same date and less than 20 games, the app should prompt the user to combine the series into one.
|
priority
|
add an option to combine series bowled on the same day only applies to the practice league when the user opens the series list for the practice league and or more series have the same date and less than games the app should prompt the user to combine the series into one
| 1
|
90,212
| 3,812,885,970
|
IssuesEvent
|
2016-03-27 22:29:35
|
blackwatchint/modpack
|
https://api.github.com/repos/blackwatchint/modpack
|
opened
|
Eastern Militia T72 causing server desync
|
High Priority Modpack
|
The Eastern Militia T72 appears to be causing server desync due to a locality issue with the flag attached to the rear of the tank. Each instance of the tank created results in a number of additional flags being spawned per-player, with a seemingly exponential increase.
The affected tank can be found under the Independent > Eastern Militia > Armor category and is from RHS. No other T72's are affected (they have no flags).
|
1.0
|
Eastern Militia T72 causing server desync - The Eastern Militia T72 appears to be causing server desync due to a locality issue with the flag attached to the rear of the tank. Each instance of the tank created results in a number of additional flags being spawned per-player, with a seemingly exponential increase.
The affected tank can be found under the Independent > Eastern Militia > Armor category and is from RHS. No other T72's are affected (they have no flags).
|
priority
|
eastern militia causing server desync the eastern militia appears to be causing server desync due to a locality issue with the flag attached to the rear of the tank each instance of the tank created results in a number of additional flags being spawned per player with a seemingly exponential increase the affected tank can be found under the independent eastern militia armor category and is from rhs no other s are affected they have no flags
| 1
|
43,160
| 2,883,727,209
|
IssuesEvent
|
2015-06-11 13:49:47
|
Ombridride/minetest-minetestforfun-server
|
https://api.github.com/repos/Ombridride/minetest-minetestforfun-server
|
closed
|
Too much monsters !
|
Modding Priority@High
|
There are too much monsters ! Even at day-time !
The spiders spawn too much : I many times had three spiders in 30 voxels of distance.
CGT ASKINGS :
- There are two much spawning monsters on the server !
- Swords don't have enough damage points ! It's boring to take 15 seconds to kill a monster !
|
1.0
|
Too much monsters ! - There are too much monsters ! Even at day-time !
The spiders spawn too much : I many times had three spiders in 30 voxels of distance.
CGT ASKINGS :
- There are two much spawning monsters on the server !
- Swords don't have enough damage points ! It's boring to take 15 seconds to kill a monster !
|
priority
|
too much monsters there are too much monsters even at day time the spiders spawn too much i many times had three spiders in voxels of distance cgt askings there are two much spawning monsters on the server swords don t have enough damage points it s boring to take seconds to kill a monster
| 1
|
365,265
| 10,780,200,158
|
IssuesEvent
|
2019-11-04 12:26:04
|
CESARBR/knot-setup-android
|
https://api.github.com/repos/CESARBR/knot-setup-android
|
opened
|
Add write mechanism to write to an Android device
|
priority: high
|
The KNoT SDK should, if permitted, allow the developer to write values to the KNoT Thing. This means that the Android SDK should be able to receive a message from a KNoT Cloud, parse the message and actuate on it.
|
1.0
|
Add write mechanism to write to an Android device - The KNoT SDK should, if permitted, allow the developer to write values to the KNoT Thing. This means that the Android SDK should be able to receive a message from a KNoT Cloud, parse the message and actuate on it.
|
priority
|
add write mechanism to write to an android device the knot sdk should if permitted allow the developer to write values to the knot thing this means that the android sdk should be able to receive a message from a knot cloud parse the message and actuate on it
| 1
|
589,205
| 17,692,322,743
|
IssuesEvent
|
2021-08-24 11:35:16
|
tooget/tooget.github.io
|
https://api.github.com/repos/tooget/tooget.github.io
|
opened
|
내 프로필 페이지 + staticman.net 깃헙 이슈 문의 컴포넌트 적용
|
Priority:Medium Task:Enhancement Priority:High Domain:UX
|
## 이런 목표를 달성해야 합니다
> 이 이슈로 무슨 목표를 달성하고자 하며 어떤 상태가 되어야 하는지 간결히 적어주세요.
NUXT 내 프로필 페이지 초기 구성 및 staticman.net 를 이용한 소통 채널을 구현합니다.
## 현재 이런 상태입니다
> 이 이슈를 생성한 현시점의 문제 혹은 향후 문제 발생 가능성에 대하여 간결히 적어주세요.
https://github.com/tooget/tooget.github.io/commit/bfc58f86c340d751ec93809ddaa1667f46e7b019 기준, create-nuxt-app 기본 페이지만 존재합니다.
## 이 이슈는 이 분이 풀 수 있을 것 같습니다
> 담당할 Assignee를 @로 **1명만** 멘션해주세요.
@tooget
## 아래의 세부적인 문제를 풀어야 할 것 같습니다
> 이 이슈를 해결하기 위한 세부 항목(이슈 클로징 조건)을 체크리스트로 적어주세요.
- [ ]
- [ ] https://linkedin.com/in/tooget 등 개인 프로필 컴포넌트 임베딩
- [ ]
## 이 이슈를 해결하기 위해 이런 내용을 참고할 수 있을 것 같습니다
> 문제 해결에 도움이 될 수 있을 것 같은 관련 이슈 번호, 문서, Wiki, 스크린샷, 개인적인 의견 등을 최대한 적어주세요.
> 이 이슈가 다른 이슈와 관련되어 있는 경우는 **반드시 이슈 번호를 적어주세요**
- 관련이슈: #1
- 참고사항: https://staticman.net
## 이 이슈 해결을 위해 이정도 시간이 예상됩니다
> 예상소요시간을 한가지만 선택해주세요.
> (1W+ 가 아닌 경우 레이블을 변경해주세요.)
- 예상소요시간: **1W-**
## 관련된 세부 정보입니다.
> Reporter는 **1명만**, Domain, Priority, Task를 **각각 한가지만** 선택해주세요.
> (UX, Medium, Enhancement 가 아닌 경우 레이블을 변경해주세요.)
- Reporter: @tooget
- Domain : **UX**
- Priority: **High**
- Task : **Enhancement**
## 이 이슈를 해결함에 따라 이정도 재무적 영향이 예상됩니다.
> 이 이슈를 해결함에 따라 전사적으로 유의미한 수익/비용 변동이 예상될 경우, 해당 수치를 입력해주세요.
- 예상수익: 0 원/월
- 예상비용: 0 원/월
|
2.0
|
내 프로필 페이지 + staticman.net 깃헙 이슈 문의 컴포넌트 적용 - ## 이런 목표를 달성해야 합니다
> 이 이슈로 무슨 목표를 달성하고자 하며 어떤 상태가 되어야 하는지 간결히 적어주세요.
NUXT 내 프로필 페이지 초기 구성 및 staticman.net 를 이용한 소통 채널을 구현합니다.
## 현재 이런 상태입니다
> 이 이슈를 생성한 현시점의 문제 혹은 향후 문제 발생 가능성에 대하여 간결히 적어주세요.
https://github.com/tooget/tooget.github.io/commit/bfc58f86c340d751ec93809ddaa1667f46e7b019 기준, create-nuxt-app 기본 페이지만 존재합니다.
## 이 이슈는 이 분이 풀 수 있을 것 같습니다
> 담당할 Assignee를 @로 **1명만** 멘션해주세요.
@tooget
## 아래의 세부적인 문제를 풀어야 할 것 같습니다
> 이 이슈를 해결하기 위한 세부 항목(이슈 클로징 조건)을 체크리스트로 적어주세요.
- [ ]
- [ ] https://linkedin.com/in/tooget 등 개인 프로필 컴포넌트 임베딩
- [ ]
## 이 이슈를 해결하기 위해 이런 내용을 참고할 수 있을 것 같습니다
> 문제 해결에 도움이 될 수 있을 것 같은 관련 이슈 번호, 문서, Wiki, 스크린샷, 개인적인 의견 등을 최대한 적어주세요.
> 이 이슈가 다른 이슈와 관련되어 있는 경우는 **반드시 이슈 번호를 적어주세요**
- 관련이슈: #1
- 참고사항: https://staticman.net
## 이 이슈 해결을 위해 이정도 시간이 예상됩니다
> 예상소요시간을 한가지만 선택해주세요.
> (1W+ 가 아닌 경우 레이블을 변경해주세요.)
- 예상소요시간: **1W-**
## 관련된 세부 정보입니다.
> Reporter는 **1명만**, Domain, Priority, Task를 **각각 한가지만** 선택해주세요.
> (UX, Medium, Enhancement 가 아닌 경우 레이블을 변경해주세요.)
- Reporter: @tooget
- Domain : **UX**
- Priority: **High**
- Task : **Enhancement**
## 이 이슈를 해결함에 따라 이정도 재무적 영향이 예상됩니다.
> 이 이슈를 해결함에 따라 전사적으로 유의미한 수익/비용 변동이 예상될 경우, 해당 수치를 입력해주세요.
- 예상수익: 0 원/월
- 예상비용: 0 원/월
|
priority
|
내 프로필 페이지 staticman net 깃헙 이슈 문의 컴포넌트 적용 이런 목표를 달성해야 합니다 이 이슈로 무슨 목표를 달성하고자 하며 어떤 상태가 되어야 하는지 간결히 적어주세요 nuxt 내 프로필 페이지 초기 구성 및 staticman net 를 이용한 소통 채널을 구현합니다 현재 이런 상태입니다 이 이슈를 생성한 현시점의 문제 혹은 향후 문제 발생 가능성에 대하여 간결히 적어주세요 기준 create nuxt app 기본 페이지만 존재합니다 이 이슈는 이 분이 풀 수 있을 것 같습니다 담당할 assignee를 로 멘션해주세요 tooget 아래의 세부적인 문제를 풀어야 할 것 같습니다 이 이슈를 해결하기 위한 세부 항목 이슈 클로징 조건 을 체크리스트로 적어주세요 등 개인 프로필 컴포넌트 임베딩 이 이슈를 해결하기 위해 이런 내용을 참고할 수 있을 것 같습니다 문제 해결에 도움이 될 수 있을 것 같은 관련 이슈 번호 문서 wiki 스크린샷 개인적인 의견 등을 최대한 적어주세요 이 이슈가 다른 이슈와 관련되어 있는 경우는 반드시 이슈 번호를 적어주세요 관련이슈 참고사항 이 이슈 해결을 위해 이정도 시간이 예상됩니다 예상소요시간을 한가지만 선택해주세요 가 아닌 경우 레이블을 변경해주세요 예상소요시간 관련된 세부 정보입니다 reporter는 domain priority task를 각각 한가지만 선택해주세요 ux medium enhancement 가 아닌 경우 레이블을 변경해주세요 reporter tooget domain ux priority high task enhancement 이 이슈를 해결함에 따라 이정도 재무적 영향이 예상됩니다 이 이슈를 해결함에 따라 전사적으로 유의미한 수익 비용 변동이 예상될 경우 해당 수치를 입력해주세요 예상수익 원 월 예상비용 원 월
| 1
|
609,342
| 18,870,387,347
|
IssuesEvent
|
2021-11-13 03:57:37
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
CI: test cases run in subprocesses cannot be disabled
|
high priority oncall: distributed
|
For more context please read https://github.com/pytorch/pytorch/issues/68173#issuecomment-967153656
This is bad because most distributed tests are now run in a subprocess, and they cannot be disabled by an issue currently.
The code for how the subprocess is spawned is here: https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/common_utils.py#L601
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
|
1.0
|
CI: test cases run in subprocesses cannot be disabled - For more context please read https://github.com/pytorch/pytorch/issues/68173#issuecomment-967153656
This is bad because most distributed tests are now run in a subprocess, and they cannot be disabled by an issue currently.
The code for how the subprocess is spawned is here: https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/common_utils.py#L601
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
|
priority
|
ci test cases run in subprocesses cannot be disabled for more context please read this is bad because most distributed tests are now run in a subprocess and they cannot be disabled by an issue currently the code for how the subprocess is spawned is here cc ezyang gchanan bdhirsh jbschlosser pietern mrshenli zhaojuanmao satgera rohan varma gqchen aazzolini osalpekar jiayisuse scipioneer h huang
| 1
|
244,304
| 7,873,292,266
|
IssuesEvent
|
2018-06-25 13:54:53
|
larray-project/larray
|
https://api.github.com/repos/larray-project/larray
|
closed
|
excel.Range.load and excel.Sheet.load do not have nb_axes argument
|
bug priority: high work in progress
|
They still use nb_index, which produces a warning.
|
1.0
|
excel.Range.load and excel.Sheet.load do not have nb_axes argument - They still use nb_index, which produces a warning.
|
priority
|
excel range load and excel sheet load do not have nb axes argument they still use nb index which produces a warning
| 1
|
418,327
| 12,196,366,449
|
IssuesEvent
|
2020-04-29 18:57:25
|
certbot/website
|
https://api.github.com/repos/certbot/website
|
closed
|
Ubuntu 20.04 nginx instructions
|
bug instruction generator priority: high
|
People are actively working on https://bugs.launchpad.net/ubuntu/+source/python-certbot-nginx/+bug/1875471/, but it'll be a few days, if not a couple weeks, before the problem is resolved.
I'd like to fix our instructions in the meantime. Unfortunately, I think we're in a bit of a tricky spot with certbot-auto not currently working on Ubuntu 20.04 and no PPA packages available for the system with no Debian/Ubuntu developers planning to help with this in the short term.
For now, I think we should:
1. Split Ubuntu 19.10+ to Ubuntu 19.10 and Ubuntu 20.04. (I think Ubuntu 19.10 can keep the "ubuntuother" id since the nginx plugin should work on every other Ubuntu version and when the problem is fixed, we can merge the two again and create a redirect from Ubuntu 20.04 to ubuntuother if we want.)
2. Our instructions for nginx should treat the nginx plugin as unpackaged and list our certonly instructions.
As an alternate to (2), we could make sure that the nginx plugin works as an installer and provide instructions to use webroot/standalone with the nginx installer, but I personally don't think this work is necessary for what should be a short term setup and it causes problems such as https://github.com/certbot/certbot/issues/5486.
@ohemorange, can I get a +1 on this plan? If you agree I'll write a PR for it (or you can if you're interested and I'll review it).
|
1.0
|
Ubuntu 20.04 nginx instructions - People are actively working on https://bugs.launchpad.net/ubuntu/+source/python-certbot-nginx/+bug/1875471/, but it'll be a few days, if not a couple weeks, before the problem is resolved.
I'd like to fix our instructions in the meantime. Unfortunately, I think we're in a bit of a tricky spot with certbot-auto not currently working on Ubuntu 20.04 and no PPA packages available for the system with no Debian/Ubuntu developers planning to help with this in the short term.
For now, I think we should:
1. Split Ubuntu 19.10+ to Ubuntu 19.10 and Ubuntu 20.04. (I think Ubuntu 19.10 can keep the "ubuntuother" id since the nginx plugin should work on every other Ubuntu version and when the problem is fixed, we can merge the two again and create a redirect from Ubuntu 20.04 to ubuntuother if we want.)
2. Our instructions for nginx should treat the nginx plugin as unpackaged and list our certonly instructions.
As an alternate to (2), we could make sure that the nginx plugin works as an installer and provide instructions to use webroot/standalone with the nginx installer, but I personally don't think this work is necessary for what should be a short term setup and it causes problems such as https://github.com/certbot/certbot/issues/5486.
@ohemorange, can I get a +1 on this plan? If you agree I'll write a PR for it (or you can if you're interested and I'll review it).
|
priority
|
ubuntu nginx instructions people are actively working on but it ll be a few days if not a couple weeks before the problem is resolved i d like to fix our instructions in the meantime unfortunately i think we re in a bit of a tricky spot with certbot auto not currently working on ubuntu and no ppa packages available for the system with no debian ubuntu developers planning to help with this in the short term for now i think we should split ubuntu to ubuntu and ubuntu i think ubuntu can keep the ubuntuother id since the nginx plugin should work on every other ubuntu version and when the problem is fixed we can merge the two again and create a redirect from ubuntu to ubuntuother if we want our instructions for nginx should treat the nginx plugin as unpackaged and list our certonly instructions as an alternate to we could make sure that the nginx plugin works as an installer and provide instructions to use webroot standalone with the nginx installer but i personally don t think this work is necessary for what should be a short term setup and it causes problems such as ohemorange can i get a on this plan if you agree i ll write a pr for it or you can if you re interested and i ll review it
| 1
|
285,345
| 8,757,589,868
|
IssuesEvent
|
2018-12-14 21:49:45
|
PyLadiesCZ/roboprojekt
|
https://api.github.com/repos/PyLadiesCZ/roboprojekt
|
closed
|
Robot si nepamatuje informace o sobě
|
high-priority
|
Robot si nepamatuje kolik má životů, vlajek, zranění.
Potřebuji to načítat do Interface.
|
1.0
|
Robot si nepamatuje informace o sobě - Robot si nepamatuje kolik má životů, vlajek, zranění.
Potřebuji to načítat do Interface.
|
priority
|
robot si nepamatuje informace o sobě robot si nepamatuje kolik má životů vlajek zranění potřebuji to načítat do interface
| 1
|
701,854
| 24,111,511,825
|
IssuesEvent
|
2022-09-20 11:45:39
|
adanvdo/YT-RED-UI
|
https://api.github.com/repos/adanvdo/YT-RED-UI
|
closed
|
Segment Downloads have become very slow
|
bug fixed High Priority
|
For some reason segment downloads have recently become very slow. Progress seems to sit at 0% and I cannot tell what the program is doing. I'm not sure if this is throttling or what. Needs to be investigated
|
1.0
|
Segment Downloads have become very slow - For some reason segment downloads have recently become very slow. Progress seems to sit at 0% and I cannot tell what the program is doing. I'm not sure if this is throttling or what. Needs to be investigated
|
priority
|
segment downloads have become very slow for some reason segment downloads have recently become very slow progress seems to sit at and i cannot tell what the program is doing i m not sure if this is throttling or what needs to be investigated
| 1
|
475,437
| 13,710,306,643
|
IssuesEvent
|
2020-10-02 00:39:17
|
joyent/kosh
|
https://api.github.com/repos/joyent/kosh
|
opened
|
Remove `rack_id` from layout requests
|
blocker high priority
|
The endpoints in https://github.com/joyent/conch/pull/1032 are all about to change to no longer require a `rack_id`. Need to confirm that we already support this (I believe we do).
|
1.0
|
Remove `rack_id` from layout requests - The endpoints in https://github.com/joyent/conch/pull/1032 are all about to change to no longer require a `rack_id`. Need to confirm that we already support this (I believe we do).
|
priority
|
remove rack id from layout requests the endpoints in are all about to change to no longer require a rack id need to confirm that we already support this i believe we do
| 1
|
342,019
| 10,310,988,572
|
IssuesEvent
|
2019-08-29 16:16:34
|
BlueCodeSystems/smartcerv
|
https://api.github.com/repos/BlueCodeSystems/smartcerv
|
closed
|
Visit data previously entered not syncing down after upgrade
|
High priority v1.0.0-alpha.5
|
Lower version

After upgrade

Lower version

After upgrade

|
1.0
|
Visit data previously entered not syncing down after upgrade - Lower version

After upgrade

Lower version

After upgrade

|
priority
|
visit data previously entered not syncing down after upgrade lower version after upgrade lower version after upgrade
| 1
|
678,425
| 23,197,036,495
|
IssuesEvent
|
2022-08-01 17:24:59
|
zesty-io/website
|
https://api.github.com/repos/zesty-io/website
|
closed
|
Add Additional CTA buttons to next.js page
|
High Priority
|
Please add our standard "Get started" and "request demo" buttons (which are in the process of being updated) to this page - https://www.zesty.io/integrations/nextjs-cms/
Please add below the black starter button. See this page for an example: https://strapi.io/
|
1.0
|
Add Additional CTA buttons to next.js page - Please add our standard "Get started" and "request demo" buttons (which are in the process of being updated) to this page - https://www.zesty.io/integrations/nextjs-cms/
Please add below the black starter button. See this page for an example: https://strapi.io/
|
priority
|
add additional cta buttons to next js page please add our standard get started and request demo buttons which are in the process of being updated to this page please add below the black starter button see this page for an example
| 1
|
365,021
| 10,774,821,100
|
IssuesEvent
|
2019-11-03 09:42:25
|
AY1920S1-CS2113T-F11-3/main
|
https://api.github.com/repos/AY1920S1-CS2113T-F11-3/main
|
closed
|
Invalid command in User Guide for Tagging Email
|
priority.High
|

Example input given in User Guide does not work for Tagging Email command.
Input is "update 1 -tag CS2113T update 2 -tag Tutorial -tag Spam" and returns ParseException.
<hr><sub>[original: leowyh/ped#8]<br/>
</sub>
|
1.0
|
Invalid command in User Guide for Tagging Email - 
Example input given in User Guide does not work for Tagging Email command.
Input is "update 1 -tag CS2113T update 2 -tag Tutorial -tag Spam" and returns ParseException.
<hr><sub>[original: leowyh/ped#8]<br/>
</sub>
|
priority
|
invalid command in user guide for tagging email example input given in user guide does not work for tagging email command input is update tag update tag tutorial tag spam and returns parseexception
| 1
|
200,363
| 7,006,492,758
|
IssuesEvent
|
2017-12-19 08:44:24
|
Freemius/wordpress-sdk
|
https://api.github.com/repos/Freemius/wordpress-sdk
|
closed
|
Empty "Payments" section of Account page
|
billing bug priority: high ui
|
**Actual Behavior**:
- `What is the issue? (*)` I generated a new license for an existing user (myself) via Dashboard. I entered the license and it worked and redirected me to the Account page. It had a lot of empty Payments.
- `What is the expected behavior?` There shouldn't be any Payments if I haven't made any.

**Versions**: (*)
- `Freemius SDK Version:` 1.2.2.9
- `WordPress Version:` 4.9
- `PHP Version:` 7.1
**Plugin / Theme**: (*)
- `Name:` TK Event Weather for The Events Calendar
- `Slug:` tk-event-weather-the-events-calendar
- `Freemius ID:` 241
|
1.0
|
Empty "Payments" section of Account page - **Actual Behavior**:
- `What is the issue? (*)` I generated a new license for an existing user (myself) via Dashboard. I entered the license and it worked and redirected me to the Account page. It had a lot of empty Payments.
- `What is the expected behavior?` There shouldn't be any Payments if I haven't made any.

**Versions**: (*)
- `Freemius SDK Version:` 1.2.2.9
- `WordPress Version:` 4.9
- `PHP Version:` 7.1
**Plugin / Theme**: (*)
- `Name:` TK Event Weather for The Events Calendar
- `Slug:` tk-event-weather-the-events-calendar
- `Freemius ID:` 241
|
priority
|
empty payments section of account page actual behavior what is the issue i generated a new license for an existing user myself via dashboard i entered the license and it worked and redirected me to the account page it had a lot of empty payments what is the expected behavior there shouldn t be any payments if i haven t made any versions freemius sdk version wordpress version php version plugin theme name tk event weather for the events calendar slug tk event weather the events calendar freemius id
| 1
|
48,215
| 2,994,720,839
|
IssuesEvent
|
2015-07-22 13:30:10
|
N4SJAMK/teamboard-client-react
|
https://api.github.com/repos/N4SJAMK/teamboard-client-react
|
closed
|
Background size change is not updated in other clients?
|
bug HIGH PRIORITY
|
Client version v1.2.86
API version v0.4.35
IMG version v0.1.10
I changed background size from 10X10 to 99 X 99 in Chrome + Ubuntu
No update in Windows Chrome & IE
|
1.0
|
Background size change is not updated in other clients? - Client version v1.2.86
API version v0.4.35
IMG version v0.1.10
I changed background size from 10X10 to 99 X 99 in Chrome + Ubuntu
No update in Windows Chrome & IE
|
priority
|
background size change is not updated in other clients client version api version img version i changed background size from to x in chrome ubuntu no update in windows chrome ie
| 1
|