Columns (dtype, value range, and distinct-class counts):

- Unnamed: 0: int64, 0 to 832k
- id: float64, 2.49B to 32.1B
- type: string, 1 class
- created_at: string, lengths 19 to 19
- repo: string, lengths 5 to 112
- repo_url: string, lengths 34 to 141
- action: string, 3 classes
- title: string, lengths 1 to 855
- labels: string, lengths 4 to 721
- body: string, lengths 1 to 261k
- index: string, 13 classes
- text_combine: string, lengths 96 to 261k
- label: string, 2 classes
- text: string, lengths 96 to 240k
- binary_label: int64, 0 to 1
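The derived columns can be read off the sample rows: `text_combine` appears to be `title` + " - " + `body`, and `text` looks like a lowercased copy with URLs, digits, markdown, and punctuation stripped. A rough sketch of that normalization, inferred from the sample rows rather than from the dataset's actual preprocessing code:

```python
import re

def normalize(s: str) -> str:
    # Approximation of the dataset's `text` column, inferred from the rows
    # below (not the real pipeline): lowercase, drop URLs, then drop every
    # non-letter character and collapse whitespace.
    s = s.lower()
    s = re.sub(r"https?://\S+", " ", s)   # strip bare URLs
    s = re.sub(r"[^a-z\s]", " ", s)       # strip digits, markdown, punctuation
    return " ".join(s.split())

print(normalize("Move backend endpoints? The `/api/` URL"))
# → move backend endpoints the api url
```

This reproduces the `text` field for simple rows, though the real pipeline evidently differs in details (e.g. tokens like "404s" vanish entirely in the dataset, while this sketch would leave a stray "s").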
**Row 830,876**
- id: 32,028,395,037
- type: IssuesEvent
- created_at: 2023-09-22 10:27:34
- repo: svthalia/Reaxit
- repo_url: https://api.github.com/repos/svthalia/Reaxit
- action: opened
- title: Some events don't show up in calendar
- labels: bug priority: high
- body:
  > ### Describe the bug
  > The GM on the 10th of October and the Master thesis market are on the calendar on the website, but don't show up on the app.
  > ### How to reproduce
  > Steps to reproduce the behavior:
  > 1. Go to the calendar.
  > 2. Scroll down to the 10th of October.
  > 4. See that the GM is not there.
- index: 1.0
- text_combine:
  > Some events don't show up in calendar - ### Describe the bug
  > The GM on the 10th of October and the Master thesis market are on the calendar on the website, but don't show up on the app.
  > ### How to reproduce
  > Steps to reproduce the behavior:
  > 1. Go to the calendar.
  > 2. Scroll down to the 10th of October.
  > 4. See that the GM is not there.
- label: priority
- text: some events don t show up in calendar describe the bug the gm on the of october and the master thesis market are on the calendar on the website but don t show up on the app how to reproduce steps to reproduce the behavior go to the calendar scroll down to the of october see that the gm is not there
- binary_label: 1
**Row 733,042**
- id: 25,285,571,015
- type: IssuesEvent
- created_at: 2022-11-16 19:00:06
- repo: inverse-inc/packetfence
- repo_url: https://api.github.com/repos/inverse-inc/packetfence
- action: opened
- title: No CSV Import Toggle w/ VoIP
- labels: Type: Bug Type: Feature / Enhancement Priority: High
- body:
  > **Describe the bug**
  > CSV Import does not support yes/no toggle with VoIP.
  > **Screenshots**
  > 
- index: 1.0
- text_combine:
  > No CSV Import Toggle w/ VoIP - **Describe the bug**
  > CSV Import does not support yes/no toggle with VoIP.
  > **Screenshots**
  > 
- label: priority
- text: no csv import toggle w voip describe the bug csv import does not support yes no toggle with voip screenshots
- binary_label: 1
**Row 214,685**
- id: 7,275,781,151
- type: IssuesEvent
- created_at: 2018-02-21 14:37:14
- repo: vanilla-framework/vanilla-framework
- repo_url: https://api.github.com/repos/vanilla-framework/vanilla-framework
- action: opened
- title: An empty code block only shows half
- labels: Priority: High Type: Bug
- body:
  > If you have an empty code block numbered. Only half of the component is visible. I would expect for it to render a empty first line and display all of it.
  > Example:
  > https://codepen.io/anthonydillon/pen/eVrrWd
- index: 1.0
- text_combine:
  > An empty code block only shows half - If you have an empty code block numbered. Only half of the component is visible. I would expect for it to render a empty first line and display all of it.
  > Example:
  > https://codepen.io/anthonydillon/pen/eVrrWd
- label: priority
- text: an empty code block only shows half if you have an empty code block numbered only half of the component is visible i would expect for it to render a empty first line and display all of it example
- binary_label: 1
**Row 638,584**
- id: 20,731,126,294
- type: IssuesEvent
- created_at: 2022-03-14 09:33:01
- repo: SE701-T1/frontend
- repo_url: https://api.github.com/repos/SE701-T1/frontend
- action: closed
- title: Scaffold Upload Page
- labels: Status: In Progress Type: Feature Priority: High
- body:
  > **Describe the task that needs to be done.**
  > Create basic page for uploading timetable. Does not need functionality or styling.
  > **Describe how a solution to your proposed task might look like (and any alternatives considered).**
  > Basic skeleton of an upload page similar to designs on figma
  > **Notes**
  > This issue was discussed during a team meeting at 11/03/22 and approved by the timetable team then
- index: 1.0
- text_combine:
  > Scaffold Upload Page - **Describe the task that needs to be done.**
  > Create basic page for uploading timetable. Does not need functionality or styling.
  > **Describe how a solution to your proposed task might look like (and any alternatives considered).**
  > Basic skeleton of an upload page similar to designs on figma
  > **Notes**
  > This issue was discussed during a team meeting at 11/03/22 and approved by the timetable team then
- label: priority
- text: scaffold upload page describe the task that needs to be done create basic page for uploading timetable does not need functionality or styling describe how a solution to your proposed task might look like and any alternatives considered basic skeleton of an upload page similar to designs on figma notes this issue was discussed during a team meeting at and approved by the timetable team then
- binary_label: 1
**Row 460,348**
- id: 13,208,550,267
- type: IssuesEvent
- created_at: 2020-08-15 05:33:14
- repo: monkey-team-3801/on-board
- repo_url: https://api.github.com/repos/monkey-team-3801/on-board
- action: opened
- title: Login + register routes
- labels: enhancement high priority
- body:
  > **Specifications**
  > Setup the relevant routes/endpoint for the login and creation of users.
  > Data will be sent as a POST request to the following routes:
  > register -> /user/register
  > login -> /user/login
  > Read in the data and perform the relevant checks before inserting the user to the database.
  > Return the relevant error codes should there be an error.
  > Work with @spaceytato, #24 to coordinate the frontend and backend. You might want to decide on the data structure/type being passed between the frontend and backend.
  > Don't worry about persisting sessions.
  > **Useful resources**
  > - MongoDB manual http://mongodb.github.io/node-mongodb-native/3.4/quick-start/quick-start/
  > - Express 5 API https://expressjs.com/en/5x/api.html
- index: 1.0
- text_combine:
  > Login + register routes - **Specifications**
  > Setup the relevant routes/endpoint for the login and creation of users.
  > Data will be sent as a POST request to the following routes:
  > register -> /user/register
  > login -> /user/login
  > Read in the data and perform the relevant checks before inserting the user to the database.
  > Return the relevant error codes should there be an error.
  > Work with @spaceytato, #24 to coordinate the frontend and backend. You might want to decide on the data structure/type being passed between the frontend and backend.
  > Don't worry about persisting sessions.
  > **Useful resources**
  > - MongoDB manual http://mongodb.github.io/node-mongodb-native/3.4/quick-start/quick-start/
  > - Express 5 API https://expressjs.com/en/5x/api.html
- label: priority
- text: login register routes specifications setup the relevant routes endpoint for the login and creation of users data will be sent as a post request to the following routes register user register login user login read in the data and perform the relevant checks before inserting the user to the database return the relevant error codes should there be an error work with spaceytato to coordinate the frontend and backend you might want to decide on the data structure type being passed between the frontend and backend don t worry about persisting sessions useful resources mongodb manual express api
- binary_label: 1
**Row 484,812**
- id: 13,957,718,999
- type: IssuesEvent
- created_at: 2020-10-24 08:18:51
- repo: jihoonsong/operating-system
- repo_url: https://api.github.com/repos/jihoonsong/operating-system
- action: closed
- title: [PROJECT 1] [BUGFIX] Print only ELF name in termination message
- labels: priority: high status: working type: bug
- body:
  > The argument of process_execute is in fact the name of ELF and its arguments.
  > Tokenize the argument into the ELF name and its argument, and then print only its name on termination message.
- index: 1.0
- text_combine:
  > [PROJECT 1] [BUGFIX] Print only ELF name in termination message - The argument of process_execute is in fact the name of ELF and its arguments.
  > Tokenize the argument into the ELF name and its argument, and then print only its name on termination message.
- label: priority
- text: print only elf name in termination message the argument of process execute is in fact the name of elf and its arguments tokenize the argument into the elf name and its argument and then print only its name on termination message
- binary_label: 1
**Row 191,204**
- id: 6,826,839,606
- type: IssuesEvent
- created_at: 2017-11-08 15:20:21
- repo: tsgrp/OpenAnnotate
- repo_url: https://api.github.com/repos/tsgrp/OpenAnnotate
- action: opened
- title: Highlight Annotations Cannot be "reopened" after using the new OK button
- labels: High Priority Issue
- body:
  > Steps to reproduce:
  > 1. Open a document with highlights in it.
  > 1. Click on the highlight to bring up the dialog
  > 1. Click on "Ok" button to dismiss the dialog
  > 1. Try to click on the highlight again (hint: you can't...)
  > 
  > Seems like if you just click off of the dialog rather than using the "OK" button that it works fine, so we are likely just not firing the right events when the "ok button is clicked for the "Highlight" annotation type.
  > Things to check while we are in there:
  > - [ ] All other annotation types
- index: 1.0
- text_combine:
  > Highlight Annotations Cannot be "reopened" after using the new OK button - Steps to reproduce:
  > 1. Open a document with highlights in it.
  > 1. Click on the highlight to bring up the dialog
  > 1. Click on "Ok" button to dismiss the dialog
  > 1. Try to click on the highlight again (hint: you can't...)
  > 
  > Seems like if you just click off of the dialog rather than using the "OK" button that it works fine, so we are likely just not firing the right events when the "ok button is clicked for the "Highlight" annotation type.
  > Things to check while we are in there:
  > - [ ] All other annotation types
- label: priority
- text: highlight annotations cannot be reopened after using the new ok button steps to reproduce open a document with highlights in it click on the highlight to bring up the dialog click on ok button to dismiss the dialog try to click on the highlight again hint you can t seems like if you just click off of the dialog rather than using the ok button that it works fine so we are likely just not firing the right events when the ok button is clicked for the highlight annotation type things to check while we are in there all other annotation types
- binary_label: 1
**Row 315,369**
- id: 9,612,578,445
- type: IssuesEvent
- created_at: 2019-05-13 09:14:48
- repo: Sp2000/colplus-frontend
- repo_url: https://api.github.com/repos/Sp2000/colplus-frontend
- action: closed
- title: taxon detail 404s
- labels: bug high priority
- body:
  > some accepted taxa linked from the name search yield a 404, e.g. genus `Melitta` here:
  > https://www.col.plus/dataset/1067/names?facet=rank&facet=issue&facet=status&limit=50&offset=0&rank=genus
  > https://www.col.plus/dataset/1067/taxon/xozkk
  > It really doesnt exist:
  > https://api.col.plus/dataset/1067/taxon/xozkk
  > It seems the UI uses the name.id for linking, not the taxon.id!
  > https://api.col.plus/dataset/1067/name/search?rank=genus&q=Melitta
- index: 1.0
- text_combine:
  > taxon detail 404s - some accepted taxa linked from the name search yield a 404, e.g. genus `Melitta` here:
  > https://www.col.plus/dataset/1067/names?facet=rank&facet=issue&facet=status&limit=50&offset=0&rank=genus
  > https://www.col.plus/dataset/1067/taxon/xozkk
  > It really doesnt exist:
  > https://api.col.plus/dataset/1067/taxon/xozkk
  > It seems the UI uses the name.id for linking, not the taxon.id!
  > https://api.col.plus/dataset/1067/name/search?rank=genus&q=Melitta
- label: priority
- text: taxon detail some accepted taxa linked from the name search yield a e g genus melitta here it really doesnt exist it seems the ui uses the name id for linking not the taxon id
- binary_label: 1
**Row 623,573**
- id: 19,672,575,613
- type: IssuesEvent
- created_at: 2022-01-11 09:02:49
- repo: status-im/status-desktop
- repo_url: https://api.github.com/repos/status-im/status-desktop
- action: closed
- title: Seed phrase is not fully visible when entering 24 words seed phrase
- labels: bug Wallet priority 1: high
- body:
  > ### Description
  > When entering 24 words seed, it is not possible to see all of them on the screen
  > <img width="1524" alt="Screenshot 2022-01-05 at 10 00 42" src="https://user-images.githubusercontent.com/82375995/148174310-301dc709-c432-4025-8e20-c187069ce450.png">
  > **Steps:**
  > 1. generate a 24 words seed https://iancoleman.io/bip39/
  > 2. open status - add existing account
  > 3. paste the seed u generated into the modal
  > <img width="1524" alt="Screenshot 2022-01-05 at 10 00 42" src="https://user-images.githubusercontent.com/82375995/148174383-605ce478-4519-4c6d-940f-41bbdadffa7a.png">
  > **Build info:** master commit 6d0d00a50a144929abb5fb15603038428b4ae2ad (from Jan 4, 2022)
- index: 1.0
- text_combine:
  > Seed phrase is not fully visible when entering 24 words seed phrase - ### Description
  > When entering 24 words seed, it is not possible to see all of them on the screen
  > <img width="1524" alt="Screenshot 2022-01-05 at 10 00 42" src="https://user-images.githubusercontent.com/82375995/148174310-301dc709-c432-4025-8e20-c187069ce450.png">
  > **Steps:**
  > 1. generate a 24 words seed https://iancoleman.io/bip39/
  > 2. open status - add existing account
  > 3. paste the seed u generated into the modal
  > <img width="1524" alt="Screenshot 2022-01-05 at 10 00 42" src="https://user-images.githubusercontent.com/82375995/148174383-605ce478-4519-4c6d-940f-41bbdadffa7a.png">
  > **Build info:** master commit 6d0d00a50a144929abb5fb15603038428b4ae2ad (from Jan 4, 2022)
- label: priority
- text: seed phrase is not fully visible when entering words seed phrase description when entering words seed it is not possible to see all of them on the screen img width alt screenshot at src steps generate a words seed open status add existing account paste the seed u generated into the modal img width alt screenshot at src build info master commit from jan
- binary_label: 1
**Row 282,020**
- id: 8,701,783,885
- type: IssuesEvent
- created_at: 2018-12-05 12:38:13
- repo: gitcoinco/web
- repo_url: https://api.github.com/repos/gitcoinco/web
- action: opened
- title: Confirm Wallet Address When User Applies for Work
- labels: Gitcoin Bounties Up Next high-priority
- body:
  > **The Issue:**
  > Brief Summary: When submitting a request to Start Work on a bounty, the Gitcoin system does not indicate where the funds will be sent if your work is approved. Furthermore during this process, if you have an existing ETH address on file, the system A) fails to inform you that you do, B) does not specify what this address is, and C) does not inform you that funds will be automatically deposited to that ETH address upon approval.
  > ### Steps to replicate:
  > On a Github issue w/ a Gitcoin bounty (like this one), click on the 'Gitcoin Issue Details page' link in the first bullet of the gitcoinbot comment.
  > The link brings you directly to the issue details page on Gitcoin. Notice that on this page, there is no indication of whether you have an ETH address associated with your account or what that ETH address is.
  > Click the Start Work button, fill out the form, and press submit. Once approved, notice there is no indication of how the funds will be sent to you upon acceptance of your work. Neither on the web app, nor via email.
  > Throughout the process of requesting to start work, starting work, submitting your work, and having your work submission approved, there is no information provided to you on how the system will pay out the award. The only way to know that you have an ETH address associated with Gitcoin is to proactively navigate to your profile page.
  > **Proposed Solution:**
  > Part 1
  > After one's 'Start Work' request is approved, the email notification they receive should have the current method of payment included in it.
  > If there is an ETH address associated with the account, include that ETH address in the email, and indicate that the funds will be automatically sent to that address upon work approval.
  > If there is no ETH address associated with the account, indicate that in the email, and explain that they will have the opportunity to claim the ETH to their desired ETH address upon work approval.
  > Part 2 (Up for debate)
  > Require each user to 'claim' their award after their bounty is accepted. Instead of automatically depositing the award into the ETH address associated with that account, let the user confirm that the ETH address on file is correct and that they want their bounty reward to be sent there.
  > Other Ideas:
  > Hi Skylar,
  > We're so sorry for the experience that you've had. Thanks for putting together the detailed explanation and proposed solution.
  > Your solution to part 1 is definitely something that we can and will do. In terms of part 2, we are legally unable to hold funds which is why it is immediately transferred from the bounty funder to bounty hunter.
  > Other things we might also be able to do are:
  > When a contributor submits work, we ask the user to validate the wallet address on record and update it if needed. We enforce them to check this option as verified so that they are forced to validate it.
  > We update the email stating that funds have been sent to `X` address (if we have the address on record ) and if we don't we tell them how they can claim it.
- index: 1.0
- text_combine:
  > Confirm Wallet Address When User Applies for Work - **The Issue:**
  > Brief Summary: When submitting a request to Start Work on a bounty, the Gitcoin system does not indicate where the funds will be sent if your work is approved. Furthermore during this process, if you have an existing ETH address on file, the system A) fails to inform you that you do, B) does not specify what this address is, and C) does not inform you that funds will be automatically deposited to that ETH address upon approval.
  > ### Steps to replicate:
  > On a Github issue w/ a Gitcoin bounty (like this one), click on the 'Gitcoin Issue Details page' link in the first bullet of the gitcoinbot comment.
  > The link brings you directly to the issue details page on Gitcoin. Notice that on this page, there is no indication of whether you have an ETH address associated with your account or what that ETH address is.
  > Click the Start Work button, fill out the form, and press submit. Once approved, notice there is no indication of how the funds will be sent to you upon acceptance of your work. Neither on the web app, nor via email.
  > Throughout the process of requesting to start work, starting work, submitting your work, and having your work submission approved, there is no information provided to you on how the system will pay out the award. The only way to know that you have an ETH address associated with Gitcoin is to proactively navigate to your profile page.
  > **Proposed Solution:**
  > Part 1
  > After one's 'Start Work' request is approved, the email notification they receive should have the current method of payment included in it.
  > If there is an ETH address associated with the account, include that ETH address in the email, and indicate that the funds will be automatically sent to that address upon work approval.
  > If there is no ETH address associated with the account, indicate that in the email, and explain that they will have the opportunity to claim the ETH to their desired ETH address upon work approval.
  > Part 2 (Up for debate)
  > Require each user to 'claim' their award after their bounty is accepted. Instead of automatically depositing the award into the ETH address associated with that account, let the user confirm that the ETH address on file is correct and that they want their bounty reward to be sent there.
  > Other Ideas:
  > Hi Skylar,
  > We're so sorry for the experience that you've had. Thanks for putting together the detailed explanation and proposed solution.
  > Your solution to part 1 is definitely something that we can and will do. In terms of part 2, we are legally unable to hold funds which is why it is immediately transferred from the bounty funder to bounty hunter.
  > Other things we might also be able to do are:
  > When a contributor submits work, we ask the user to validate the wallet address on record and update it if needed. We enforce them to check this option as verified so that they are forced to validate it.
  > We update the email stating that funds have been sent to `X` address (if we have the address on record ) and if we don't we tell them how they can claim it.
- label: priority
- text: confirm wallet address when user applies for work the issue brief summary when submitting a request to start work on a bounty the gitcoin system does not indicate where the funds will be sent if your work is approved furthermore during this process if you have an existing eth address on file the system a fails to inform you that you do b does not specify what this address is and c does not inform you that funds will be automatically deposited to that eth address upon approval steps to replicate on a github issue w a gitcoin bounty like this one click on the gitcoin issue details page link in the first bullet of the gitcoinbot comment the link brings you directly to the issue details page on gitcoin notice that on this page there is no indication of whether you have an eth address associated with your account or what that eth address is click the start work button fill out the form and press submit once approved notice there is no indication of how the funds will be sent to you upon acceptance of your work neither on the web app nor via email throughout the process of requesting to start work starting work submitting your work and having your work submission approved there is no information provided to you on how the system will pay out the award the only way to know that you have an eth address associated with gitcoin is to proactively navigate to your profile page proposed solution part after one s start work request is approved the email notification they receive should have the current method of payment included in it if there is an eth address associated with the account include that eth address in the email and indicate that the funds will be automatically sent to that address upon work approval if there is no eth address associated with the account indicate that in the email and explain that they will have the opportunity to claim the eth to their desired eth address upon work approval part up for debate require each user to claim their award after their bounty is accepted instead of automatically depositing the award into the eth address associated with that account let the user confirm that the eth address on file is correct and that they want their bounty reward to be sent there other ideas hi skylar we re so sorry for the experience that you ve had thanks for putting together the detailed explanation and proposed solution your solution to part is definitely something that we can and will do in terms of part we are legally unable to hold funds which is why it is immediately transferred from the bounty funder to bounty hunter other things we might also be able to do are when a contributor submits work we ask the user to validate the wallet address on record and update it if needed we enforce them to check this option as verified so that they are forced to validate it we update the email stating that funds have been sent to x address if we have the address on record and if we don t we tell them how they can claim it
- binary_label: 1
**Row 468,636**
- id: 13,487,190,795
- type: IssuesEvent
- created_at: 2020-09-11 10:34:17
- repo: foolip/mdn-bcd-collector
- repo_url: https://api.github.com/repos/foolip/mdn-bcd-collector
- action: closed
- title: Move backend endpoints?
- labels: Priority: High
- body:
  > The `/api/` URL contains two separate pieces of functionality in it -- the API tests, and the API endpoints. We should rename the API endpoint URLs to something else (for example, `/backend/` or just leave them as-is). Either that, or move the tests themselves to a new URL, which we could do considering #413.
- index: 1.0
- text_combine:
  > Move backend endpoints? - The `/api/` URL contains two separate pieces of functionality in it -- the API tests, and the API endpoints. We should rename the API endpoint URLs to something else (for example, `/backend/` or just leave them as-is). Either that, or move the tests themselves to a new URL, which we could do considering #413.
- label: priority
- text: move backend endpoints the api url contains two separate pieces of functionality in it the api tests and the api endpoints we should rename the api endpoint urls to something else for example backend or just leave them as is either that or move the tests themselves to a new url which we could do considering
- binary_label: 1
**Row 188,238**
- id: 6,774,289,165
- type: IssuesEvent
- created_at: 2017-10-27 09:47:49
- repo: SciTools/iris
- repo_url: https://api.github.com/repos/SciTools/iris
- action: closed
- title: Cube merge loads NetCDF deferred auxiliary coordinates.
- labels: enhancement high-priority performance
- body:
  > Iris implements deferred loading of auxiliary coordinates payload via the NetCDF loader via a `LazyArray`.
  > This is in response to large multi-dimensional auxiliary coordinates causing a large memory footprint. Loading many such cubes can easily exhaust the available RAM for a process.
  > At the moment cube merge has no concept of such deferred auxiliary coordinates. As such, cube merge loads the payload of the deferred auxiliary coordinates, thus negating all benefit of such deferred loading.
  > The only workaround at the moment is to use `iris.load_raw` as this circumvents the merge process. This is a valid step to take in the majority of times, as NetCDF sourced cubes tend to be complete multi-dimensional entities i.e. non-mergeable.
  > We require to extend cube merge to recognise such deferred auxiliary coordinates and not realise the payload.
- index: 1.0
- text_combine:
  > Cube merge loads NetCDF deferred auxiliary coordinates. - Iris implements deferred loading of auxiliary coordinates payload via the NetCDF loader via a `LazyArray`.
  > This is in response to large multi-dimensional auxiliary coordinates causing a large memory footprint. Loading many such cubes can easily exhaust the available RAM for a process.
  > At the moment cube merge has no concept of such deferred auxiliary coordinates. As such, cube merge loads the payload of the deferred auxiliary coordinates, thus negating all benefit of such deferred loading.
  > The only workaround at the moment is to use `iris.load_raw` as this circumvents the merge process. This is a valid step to take in the majority of times, as NetCDF sourced cubes tend to be complete multi-dimensional entities i.e. non-mergeable.
  > We require to extend cube merge to recognise such deferred auxiliary coordinates and not realise the payload.
- label: priority
- text: cube merge loads netcdf deferred auxiliary coordinates iris implements deferred loading of auxiliary coordinates payload via the netcdf loader via a lazyarray this is in response to large multi dimensional auxiliary coordinates causing a large memory footprint loading many such cubes can easily exhaust the available ram for a process at the moment cube merge has no concept of such deferred auxiliary coordinates as such cube merge loads the payload of the deferred auxiliary coordinates thus negating all benefit of such deferred loading the only workaround at the moment is to use iris load raw as this circumvents the merge process this is a valid step to take in the majority of times as netcdf sourced cubes tend to be complete multi dimensional entities i e non mergeable we require to extend cube merge to recognise such deferred auxiliary coordinates and not realise the payload
- binary_label: 1
**Row 300,292**
- id: 9,206,337,392
- type: IssuesEvent
- created_at: 2019-03-08 13:28:20
- repo: forpdi/forpdi
- repo_url: https://api.github.com/repos/forpdi/forpdi
- action: opened
- title: Edição de nome da subunidade não funciona
- labels: ForRisco bug highpriority
- body (Portuguese; roughly: "Editing a subunit's name does not work: after a subunit is created, its name is the only field that cannot be edited, see the figure below"):
  > Depois de criada uma ‘subunidade’ não é possível editar o nome desta subunidade, vide figura abaixo. Perceba que o nome da subunidade é o único que não consigo editar.
  > 
- index: 1.0
- text_combine:
  > Edição de nome da subunidade não funciona - Depois de criada uma ‘subunidade’ não é possível editar o nome desta subunidade, vide figura abaixo. Perceba que o nome da subunidade é o único que não consigo editar.
  > 
- label: priority
- text: edição de nome da subunidade não funciona depois de criada uma ‘subunidade’ não é possível editar o nome desta subunidade vide figura abaixo perceba que o nome da subunidade é o único que não consigo editar
- binary_label: 1
**Row 682,475**
- id: 23,346,006,054
- type: IssuesEvent
- created_at: 2022-08-09 18:02:22
- repo: ChainSafe/forest
- repo_url: https://api.github.com/repos/ChainSafe/forest
- action: closed
- title: Snapshot Export: Show a sensible error message if some messages cannot be exported.
- labels: Priority: 2 - High Enhancement Ready
- body:
  > **Issue summary**
  > <!-- A clear and concise description of what the task is. -->
  > Lightweight snapshots are common and they only include messages from the last 2000 epochs. This is fine since the messages are only used for debugging or building services like blockchain explorers. But if we import a lightweight snapshot and then try to export a full snapshot, forest will fail with an inscrutable error message. We should either have a better error message or automatically switch to a lightweight snapshot. We should also consider making lightweight snapshots the default option.
  > Replicate the error by importing a lightweight calibnet snapshot (get it from our Digital Ocean Space), and then run `forest chain export`.
  > **Other information and links**
  > <!-- Add any other context or screenshots about the issue here. -->
  > Blocked by: #1464
  > <!-- Thank you 🙏 -->
- index: 1.0
- text_combine:
  > Snapshot Export: Show a sensible error message if some messages cannot be exported. - **Issue summary**
  > <!-- A clear and concise description of what the task is. -->
  > Lightweight snapshots are common and they only include messages from the last 2000 epochs. This is fine since the messages are only used for debugging or building services like blockchain explorers. But if we import a lightweight snapshot and then try to export a full snapshot, forest will fail with an inscrutable error message. We should either have a better error message or automatically switch to a lightweight snapshot. We should also consider making lightweight snapshots the default option.
  > Replicate the error by importing a lightweight calibnet snapshot (get it from our Digital Ocean Space), and then run `forest chain export`.
  > **Other information and links**
  > <!-- Add any other context or screenshots about the issue here. -->
  > Blocked by: #1464
  > <!-- Thank you 🙏 -->
- label: priority
- text: snapshot export show a sensible error message if some messages cannot be exported issue summary lightweight snapshots are common and they only include messages from the last epochs this is fine since the messages are only used for debugging or building services like blockchain explorers but if we import a lightweight snapshot and then try to export a full snapshot forest will fail with an inscrutable error message we should either have a better error message or automatically switch to a lightweight snapshot we should also consider making lightweight snapshots the default option replicate the error by importing a lightweight calibnet snapshot get it from our digital ocean space and then run forest chain export other information and links blocked by
- binary_label: 1
**Row 198,656**
- id: 6,975,301,401
- type: IssuesEvent
- created_at: 2017-12-12 06:15:44
- repo: arquillian/smart-testing
- repo_url: https://api.github.com/repos/arquillian/smart-testing
- action: closed
- title: Be as less intrusive as possible with the dependencies
- labels: Component: Maven Priority: High train/ginger Type: Bug
- body:
  > ##### Issue Overview
  > Most of the strategy implementations, core and also surefire provider has several transitive dependencies. These transitive deps may cause conflicts, classloading issues or other problems such as failures when transitive dependencies are forbidden (using force plugin).
  > All dependencies that are added to the pom should be as less intrusive as possible.
  > Proposed solution:
  > * surefire provider & core should be handled by this issue: #277
  > * all strategies should be shaded as well
  > * when the dependency is added to the effective `pom.xml` file, an exclusion for all dependencies should be set
  > * jar should be minimized
  > ##### Steps To Reproduce
  > 1. clone https://github.com/wildfly/wildfly
  > 2. install ST
  > 3. run build with some strategy specified
- index: 1.0
- text_combine:
  > Be as less intrusive as possible with the dependencies - ##### Issue Overview
  > Most of the strategy implementations, core and also surefire provider has several transitive dependencies. These transitive deps may cause conflicts, classloading issues or other problems such as failures when transitive dependencies are forbidden (using force plugin).
  > All dependencies that are added to the pom should be as less intrusive as possible.
  > Proposed solution:
  > * surefire provider & core should be handled by this issue: #277
  > * all strategies should be shaded as well
  > * when the dependency is added to the effective `pom.xml` file, an exclusion for all dependencies should be set
  > * jar should be minimized
  > ##### Steps To Reproduce
  > 1. clone https://github.com/wildfly/wildfly
  > 2. install ST
  > 3. run build with some strategy specified
- label: priority
- text: be as less intrusive as possible with the dependencies issue overview most of the strategy implementations core and also surefire provider has several transitive dependencies these transitive deps may cause conflicts classloading issues or other problems such as failures when transitive dependencies are forbidden using force plugin all dependencies that are added to the pom should be as less intrusive as possible proposed solution surefire provider core should be handled by this issue all strategies should be shaded as well when the dependency is added to the effective pom xml file an exclusion for all dependencies should be set jar should be minimized steps to reproduce clone install st run build with some strategy specified
- binary_label: 1
397,246
| 11,725,683,880
|
IssuesEvent
|
2020-03-10 13:23:17
|
perfectsense/gyro-azure-provider
|
https://api.github.com/repos/perfectsense/gyro-azure-provider
|
closed
|
Azure: Gyro SSH/List doesn't show scaling set virtual machines.
|
enhancement priority:high
|
From @RichieHowell in GYRO-439:
When bringing up virtual machines through a scaling set in azure, I am unable to list the virtual machines in that scaling set through gyro list and gyro ssh does not give me those virtual machines as valid options.
```
gyro list prod/frontend.gyro
↓ Loading plugin: gyro:gyro-azure-provider:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-brightspot-plugin:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-ssh-plugin:0.99.1-SNAPSHOT
No instances found.
gyro ssh prod/frontend.gyro
↓ Loading plugin: gyro:gyro-azure-provider:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-brightspot-plugin:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-ssh-plugin:0.99.1-SNAPSHOT
No instances found.
```
Here is my frontend.gyro file with the scaling set. Let me know if you need anything else about the environment.
[I couldn't attach the file at this point because it is .gyro.]
Here is a gyro list of the backends to show it is not just gyro list in general not working for azure, it seems to be only when the virtual machines are brought up in a scaling set. This probably has to do with virtual machines brought up by scaling sets not being considered virtual machines by Azure even though they have all the characteristics of one.
```
gyro list prod/backend.gyro
↓ Loading plugin: gyro:gyro-azure-provider:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-brightspot-plugin:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-ssh-plugin:0.99.1-SNAPSHOT
+----------------------+--------------+----------------------+-------------------------------------------------------------------+
| Instance ID | State | Launch Date | Hostname |
+----------------------+--------------+----------------------+-------------------------------------------------------------------+
| a0068218-b63c-4edd-bece-007fee8551d8 | PowerState/running | Mon Feb 17 15:31:14 EST 2020 | 10.0.0.132 |
| bc3dc8a8-162a-4dc5-883f-76e9701a1ae9 | PowerState/running | Mon Feb 17 15:33:59 EST 2020 | 10.0.0.133 |
| 4b2d383b-5dce-4f1b-988c-21ff9cc1e816 | PowerState/running | Mon Feb 17 15:37:06 EST 2020 | 10.0.0.134 |
+----------------------+--------------+----------------------+-------------------------------------------------------------------+
```
|
1.0
|
Azure: Gyro SSH/List doesn't show scaling set virtual machines. - From @RichieHowell in GYRO-439:
When bringing up virtual machines through a scaling set in azure, I am unable to list the virtual machines in that scaling set through gyro list and gyro ssh does not give me those virtual machines as valid options.
```
gyro list prod/frontend.gyro
↓ Loading plugin: gyro:gyro-azure-provider:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-brightspot-plugin:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-ssh-plugin:0.99.1-SNAPSHOT
No instances found.
gyro ssh prod/frontend.gyro
↓ Loading plugin: gyro:gyro-azure-provider:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-brightspot-plugin:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-ssh-plugin:0.99.1-SNAPSHOT
No instances found.
```
Here is my frontend.gyro file with the scaling set. Let me know if you need anything else about the environment.
[I couldn't attach the file at this point because it is .gyro.]
Here is a gyro list of the backends to show it is not just gyro list in general not working for azure, it seems to be only when the virtual machines are brought up in a scaling set. This probably has to do with virtual machines brought up by scaling sets not being considered virtual machines by Azure even though they have all the characteristics of one.
```
gyro list prod/backend.gyro
↓ Loading plugin: gyro:gyro-azure-provider:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-brightspot-plugin:0.99.1-SNAPSHOT
↓ Loading plugin: gyro:gyro-ssh-plugin:0.99.1-SNAPSHOT
+----------------------+--------------+----------------------+-------------------------------------------------------------------+
| Instance ID | State | Launch Date | Hostname |
+----------------------+--------------+----------------------+-------------------------------------------------------------------+
| a0068218-b63c-4edd-bece-007fee8551d8 | PowerState/running | Mon Feb 17 15:31:14 EST 2020 | 10.0.0.132 |
| bc3dc8a8-162a-4dc5-883f-76e9701a1ae9 | PowerState/running | Mon Feb 17 15:33:59 EST 2020 | 10.0.0.133 |
| 4b2d383b-5dce-4f1b-988c-21ff9cc1e816 | PowerState/running | Mon Feb 17 15:37:06 EST 2020 | 10.0.0.134 |
+----------------------+--------------+----------------------+-------------------------------------------------------------------+
```
|
priority
|
azure gyro ssh list doesn t show scaling set virtual machines from richiehowell in gyro when bringing up virtual machines through a scaling set in azure i am unable to list the virtual machines in that scaling set through gyro list and gyro ssh does not give me those virtual machines as valid options gyro list prod frontend gyro ↓ loading plugin gyro gyro azure provider snapshot ↓ loading plugin gyro gyro brightspot plugin snapshot ↓ loading plugin gyro gyro ssh plugin snapshot no instances found gyro ssh prod frontend gyro ↓ loading plugin gyro gyro azure provider snapshot ↓ loading plugin gyro gyro brightspot plugin snapshot ↓ loading plugin gyro gyro ssh plugin snapshot no instances found here is my frontend gyro file with the scaling set let me know if you need anything else about the environment here is a gyro list of the backends to show it is not just gyro list in general not working for azure it seems to be only when the virtual machines are brought up in a scaling set this probably has to do with virtual machines brought up by scaling sets not being considered virtual machines by azure even though they have all the characteristics of one gyro list prod backend gyro ↓ loading plugin gyro gyro azure provider snapshot ↓ loading plugin gyro gyro brightspot plugin snapshot ↓ loading plugin gyro gyro ssh plugin snapshot instance id state launch date hostname bece powerstate running mon feb est powerstate running mon feb est powerstate running mon feb est
| 1
|
735,619
| 25,406,998,298
|
IssuesEvent
|
2022-11-22 15:58:28
|
alex4401/mediawiki-extensions-DataMaps
|
https://api.github.com/repos/alex4401/mediawiki-extensions-DataMaps
|
opened
|
Test MW 1.39 compatibility on v0.14.0 in-dev branch
|
priority: high QA
|
This MUST be done before v0.14.0 is released: most of the other changes can be postponed into v0.14.x and v0.15.0.
|
1.0
|
Test MW 1.39 compatibility on v0.14.0 in-dev branch - This MUST be done before v0.14.0 is released: most of the other changes can be postponed into v0.14.x and v0.15.0.
|
priority
|
test mw compatibility on in dev branch this must be done before is released most of the other changes can be postponed into x and
| 1
|
815,379
| 30,549,865,214
|
IssuesEvent
|
2023-07-20 07:46:39
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
[Bug]: Ballerina OpenAPI built-in plugin is not working
|
Type/Bug Priority/High Team/DevTools Area/ProjectAPI
|
### Description
When try to generate OpenAPI specification using OpenAPI built-in plugin the OpenAPI spec generation is not working.
### Steps to Reproduce
Build the following code and check whether source-modifier task for OpenAPI built-in plugin [1] executed.
```ballerina
import ballerina/http;
import ballerina/openapi;
# Represents location
public type Location record {|
# Name of the location
string name;
# Unique identification of the location
string id;
# Address of the location
string address;
|};
# Represents a collection of locations
public type Locations record {
# collection of locations
Location[] locations;
};
@openapi:ServiceInfo {
embed: true
}
service /snowpeak on new http:Listener(9090) {
# Snowpeak locations resource
#
# + return - `Locations` or `SnowpeakError` representation
resource function get locations() returns @http:Cache Locations {
return getLocations();
}
}
isolated function getLocations() returns Locations {
return {
locations: [
{
name: "Alps",
id: "l1000",
address: "NC 29384, some place, switzerland"
},
{
name: "Pilatus",
id: "l2000",
address: "NC 29444, some place, switzerland"
}
]
};
}
```
[1] - https://github.com/ballerina-platform/openapi-tools/tree/master/openapi-extension
### Affected Version(s)
Ballerina 2201.6.0
### OS, DB, other environment details and versions
_No response_
### Related area
-> Compilation
### Related issue(s) (optional)
_No response_
### Suggested label(s) (optional)
_No response_
### Suggested assignee(s) (optional)
_No response_
|
1.0
|
[Bug]: Ballerina OpenAPI built-in plugin is not working - ### Description
When try to generate OpenAPI specification using OpenAPI built-in plugin the OpenAPI spec generation is not working.
### Steps to Reproduce
Build the following code and check whether source-modifier task for OpenAPI built-in plugin [1] executed.
```ballerina
import ballerina/http;
import ballerina/openapi;
# Represents location
public type Location record {|
# Name of the location
string name;
# Unique identification of the location
string id;
# Address of the location
string address;
|};
# Represents a collection of locations
public type Locations record {
# collection of locations
Location[] locations;
};
@openapi:ServiceInfo {
embed: true
}
service /snowpeak on new http:Listener(9090) {
# Snowpeak locations resource
#
# + return - `Locations` or `SnowpeakError` representation
resource function get locations() returns @http:Cache Locations {
return getLocations();
}
}
isolated function getLocations() returns Locations {
return {
locations: [
{
name: "Alps",
id: "l1000",
address: "NC 29384, some place, switzerland"
},
{
name: "Pilatus",
id: "l2000",
address: "NC 29444, some place, switzerland"
}
]
};
}
```
[1] - https://github.com/ballerina-platform/openapi-tools/tree/master/openapi-extension
### Affected Version(s)
Ballerina 2201.6.0
### OS, DB, other environment details and versions
_No response_
### Related area
-> Compilation
### Related issue(s) (optional)
_No response_
### Suggested label(s) (optional)
_No response_
### Suggested assignee(s) (optional)
_No response_
|
priority
|
ballerina openapi built in plugin is not working description when try to generate openapi specification using openapi built in plugin the openapi spec generation is not working steps to reproduce build the following code and check whether source modifier task for openapi built in plugin executed ballerina import ballerina http import ballerina openapi represents location public type location record name of the location string name unique identification of the location string id address of the location string address represents a collection of locations public type locations record collection of locations location locations openapi serviceinfo embed true service snowpeak on new http listener snowpeak locations resource return locations or snowpeakerror representation resource function get locations returns http cache locations return getlocations isolated function getlocations returns locations return locations name alps id address nc some place switzerland name pilatus id address nc some place switzerland affected version s ballerina os db other environment details and versions no response related area compilation related issue s optional no response suggested label s optional no response suggested assignee s optional no response
| 1
|
565,787
| 16,769,884,742
|
IssuesEvent
|
2021-06-14 13:40:05
|
notawakestudio/NUSConnect
|
https://api.github.com/repos/notawakestudio/NUSConnect
|
opened
|
WK6 SPRINT
|
priority.High type.Task
|
Forum
- [ ] Mark post as wiki (YL)
- [ ] Post should be able to link to a quiz question (Create/View) (YL)
---
Quiz
- [ ] Create a post from a question (JX)
- [ ] Fix UI (Quiz Landing page, question taking) (JX)
---
Profile
- [ ] User backend (YL)
- [ ] Edit of user name (YL)
- [ ] Account creation (JX)
---
Documentation
- [ ] Update 1 (YL)
- [ ] Update 2 (JX)
|
1.0
|
WK6 SPRINT - Forum
- [ ] Mark post as wiki (YL)
- [ ] Post should be able to link to a quiz question (Create/View) (YL)
---
Quiz
- [ ] Create a post from a question (JX)
- [ ] Fix UI (Quiz Landing page, question taking) (JX)
---
Profile
- [ ] User backend (YL)
- [ ] Edit of user name (YL)
- [ ] Account creation (JX)
---
Documentation
- [ ] Update 1 (YL)
- [ ] Update 2 (JX)
|
priority
|
sprint forum mark post as wiki yl post should be able to link to a quiz question create view yl quiz create a post from a question jx fix ui quiz landing page question taking jx profile user backend yl edit of user name yl account creation jx documentation update yl update jx
| 1
|
516,796
| 14,987,999,216
|
IssuesEvent
|
2021-01-29 00:10:34
|
nlpsandbox/data-node
|
https://api.github.com/repos/nlpsandbox/data-node
|
closed
|
Improve Error message when posting empty Annotation object
|
Priority: High
|
Here is the current message when posting an empty object `{}`:
```
{
"detail": "'NoneType' object has no attribute 'to_dict'",
"status": 500,
"title": "Internal error"
}
```
|
1.0
|
Improve Error message when posting empty Annotation object - Here is the current message when posting an empty object `{}`:
```
{
"detail": "'NoneType' object has no attribute 'to_dict'",
"status": 500,
"title": "Internal error"
}
```
|
priority
|
improve error message when posting empty annotation object here is the current message when posting an empty object detail nonetype object has no attribute to dict status title internal error
| 1
|
282,869
| 8,711,117,155
|
IssuesEvent
|
2018-12-06 18:15:51
|
DemocraciaEnRed/leyesabiertas-web
|
https://api.github.com/repos/DemocraciaEnRed/leyesabiertas-web
|
closed
|
Cambiar texto de alert cuando se guardan cambios
|
priority: high
|
Dice "los cambios fueron guardados y publicados". Quitar "y publicados".
|
1.0
|
Cambiar texto de alert cuando se guardan cambios - Dice "los cambios fueron guardados y publicados". Quitar "y publicados".
|
priority
|
cambiar texto de alert cuando se guardan cambios dice los cambios fueron guardados y publicados quitar y publicados
| 1
|
376,605
| 11,149,032,088
|
IssuesEvent
|
2019-12-23 17:16:48
|
bounswe/bounswe2019group11
|
https://api.github.com/repos/bounswe/bounswe2019group11
|
closed
|
Implement search for mobile
|
Component: Mobile Estimation: M Priority: High Status: In Progress
|
Search functionality needs to be implemented as stated in 1.1.1.10. It is better to make it semantic search. Users, events and trading equipment would be able to be searched.
|
1.0
|
Implement search for mobile - Search functionality needs to be implemented as stated in 1.1.1.10. It is better to make it semantic search. Users, events and trading equipment would be able to be searched.
|
priority
|
implement search for mobile search functionality needs to be implemented as stated in it is better to make it semantic search users events and trading equipment would be able to be searched
| 1
|
184,481
| 6,713,483,015
|
IssuesEvent
|
2017-10-13 13:38:23
|
Automattic/Co-Authors-Plus
|
https://api.github.com/repos/Automattic/Co-Authors-Plus
|
closed
|
Remove user association from guest author when user is deleted / removed
|
confirmed-bug Priority::High
|
When a user is deleted or removed from a site, any guest author associations they have should be deleted.
|
1.0
|
Remove user association from guest author when user is deleted / removed - When a user is deleted or removed from a site, any guest author associations they have should be deleted.
|
priority
|
remove user association from guest author when user is deleted removed when a user is deleted or removed from a site any guest author associations they have should be deleted
| 1
|
362,430
| 10,727,472,270
|
IssuesEvent
|
2019-10-28 11:47:40
|
WoWManiaUK/Blackwing-Lair
|
https://api.github.com/repos/WoWManiaUK/Blackwing-Lair
|
closed
|
[Core] Repair/Withdrawl from guild
|
Confirmed Fixed in Dev Guild related Priority-High
|
**Links:**
cant find a link for guild repairs...
**What is happening:**
guild repair and guild bank withdraw is only allowing a set amount, even when the guild master is setting a value in the guild control options
see below, i have set the value to 200g for the officer rank, this value is a daily value they can withdraw or repair, not 200g for both, this is combined, so they can either repair for 100g and take 100g out or they can use all 200 on repairs etc
but no matter what i set the value as its showing for the officers in my guild 1 gold max a day, no matter what i set the value as

this second screenshot is from Dev server where me and kitz went there to test and on Dev it will only allow 2 silver per day, using same conrols as i am on live
https://cdn.discordapp.com/attachments/584187724685377556/584187730406277129/unknown.png
**What should happen:**
it should allow the selected rank to repair or withdraw or a combo of both the value of what is set in guild bank options on guild control tab
|
1.0
|
[Core] Repair/Withdrawl from guild - **Links:**
cant find a link for guild repairs...
**What is happening:**
guild repair and guild bank withdraw is only allowing a set amount, even when the guild master is setting a value in the guild control options
see below, i have set the value to 200g for the officer rank, this value is a daily value they can withdraw or repair, not 200g for both, this is combined, so they can either repair for 100g and take 100g out or they can use all 200 on repairs etc
but no matter what i set the value as its showing for the officers in my guild 1 gold max a day, no matter what i set the value as

this second screenshot is from Dev server where me and kitz went there to test and on Dev it will only allow 2 silver per day, using same conrols as i am on live
https://cdn.discordapp.com/attachments/584187724685377556/584187730406277129/unknown.png
**What should happen:**
it should allow the selected rank to repair or withdraw or a combo of both the value of what is set in guild bank options on guild control tab
|
priority
|
repair withdrawl from guild links cant find a link for guild repairs what is happening guild repair and guild bank withdraw is only allowing a set amount even when the guild master is setting a value in the guild control options see below i have set the value to for the officer rank this value is a daily value they can withdraw or repair not for both this is combined so they can either repair for and take out or they can use all on repairs etc but no matter what i set the value as its showing for the officers in my guild gold max a day no matter what i set the value as this second screenshot is from dev server where me and kitz went there to test and on dev it will only allow silver per day using same conrols as i am on live what should happen it should allow the selected rank to repair or withdraw or a combo of both the value of what is set in guild bank options on guild control tab
| 1
|
136,486
| 5,283,613,314
|
IssuesEvent
|
2017-02-07 21:50:42
|
SCIInstitute/SCIRun
|
https://api.github.com/repos/SCIInstitute/SCIRun
|
closed
|
Module IDs: refactor to use Boost UUID
|
Archive Bug Network Priority-High Refactoring Serialization
|
Serialization has never respected the "instance count" method. I just discovered this after loading a network. We need a more robust id generation function. Boost UUID looks pretty good for this.
Links:
- http://www.boost.org/doc/libs/1_42_0/libs/uuid/uuid.html
|
1.0
|
Module IDs: refactor to use Boost UUID - Serialization has never respected the "instance count" method. I just discovered this after loading a network. We need a more robust id generation function. Boost UUID looks pretty good for this.
Links:
- http://www.boost.org/doc/libs/1_42_0/libs/uuid/uuid.html
|
priority
|
module ids refactor to use boost uuid serialization has never respected the instance count method i just discovered this after loading a network we need a more robust id generation function boost uuid looks pretty good for this links
| 1
|
712,322
| 24,490,901,170
|
IssuesEvent
|
2022-10-10 01:37:58
|
AY2223S1-CS2103T-T17-2/tp
|
https://api.github.com/repos/AY2223S1-CS2103T-T17-2/tp
|
closed
|
As a Careless User I can delete a meal wrongly recorded
|
type.Story priority.High
|
... So that I can fix my food records easily
|
1.0
|
As a Careless User I can delete a meal wrongly recorded - ... So that I can fix my food records easily
|
priority
|
as a careless user i can delete a meal wrongly recorded so that i can fix my food records easily
| 1
|
506,397
| 14,664,240,336
|
IssuesEvent
|
2020-12-29 11:31:36
|
bounswe/bounswe2020group3
|
https://api.github.com/repos/bounswe/bounswe2020group3
|
opened
|
[Frontend] Project Edit Fails
|
Frontend Priority: High Type: Bug
|
* **Project: FRONTEND **
* **This is a: BUG REPORT **
* **Description of the issue**
We can not edit the project in frontend. There is a bug and "bad request" returned due to bug.
* **For bug reports: Explanation of how to reproduce the bug, and what was the expected behaviour.**
Go to one of yout projects and try to edit it.
* **Deadline for resolution:**
31.12.2020
|
1.0
|
[Frontend] Project Edit Fails - * **Project: FRONTEND **
* **This is a: BUG REPORT **
* **Description of the issue**
We can not edit the project in frontend. There is a bug and "bad request" returned due to bug.
* **For bug reports: Explanation of how to reproduce the bug, and what was the expected behaviour.**
Go to one of yout projects and try to edit it.
* **Deadline for resolution:**
31.12.2020
|
priority
|
project edit fails project frontend this is a bug report description of the issue we can not edit the project in frontend there is a bug and bad request returned due to bug for bug reports explanation of how to reproduce the bug and what was the expected behaviour go to one of yout projects and try to edit it deadline for resolution
| 1
|
575,933
| 17,066,630,396
|
IssuesEvent
|
2021-07-07 08:11:34
|
containerd/nerdctl
|
https://api.github.com/repos/containerd/nerdctl
|
closed
|
Support labels for networks and volumes
|
enhancement priority/high
|
### Network labels
Should be stored as `NerdctlLabels` field, which will appear next to `NerdctlID` field
https://github.com/containerd/nerdctl/blob/3b63cac3a713987790479206b7ebb1865caec50d/pkg/netutil/netutil_linux.go#L32
### Volume labels
Should be stored as a JSON file like `/var/lib/nerdctl/volumes/default/foo/volume.json`
|
1.0
|
Support labels for networks and volumes - ### Network labels
Should be stored as `NerdctlLabels` field, which will appear next to `NerdctlID` field
https://github.com/containerd/nerdctl/blob/3b63cac3a713987790479206b7ebb1865caec50d/pkg/netutil/netutil_linux.go#L32
### Volume labels
Should be stored as a JSON file like `/var/lib/nerdctl/volumes/default/foo/volume.json`
|
priority
|
support labels for networks and volumes network labels should be stored as nerdctllabels field which will appear next to nerdctlid field volume labels should be stored as a json file like var lib nerdctl volumes default foo volume json
| 1
|
223,546
| 7,458,282,326
|
IssuesEvent
|
2018-03-30 09:32:25
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
opened
|
Google background doesn't work in openlayers/widget map
|
Priority: High bug
|
### Description
Because of some issues with style and with timing, if a map has google as background, it will have some problems when used as a widget with openlayers.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
any
*Steps to reproduce*
- Open Dashboard
- Add a map widget
- Select a map with a google background from the list
*Expected Result*
- you can see the map
*Current Result*
- there are some errors and some graphical glitches both in widget and preview context
### Other useful information (optional):
|
1.0
|
Google background doesn't work in openlayers/widget map - ### Description
Because of some issues with style and with timing, if a map has google as background, it will have some problems when used as a widget with openlayers.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
any
*Steps to reproduce*
- Open Dashboard
- Add a map widget
- Select a map with a google background from the list
*Expected Result*
- you can see the map
*Current Result*
- there are some errors and some graphical glitches both in widget and preview context
### Other useful information (optional):
|
priority
|
google background doesn t work in openlayers widget map description because of some issues with style and with timing if a map has google as background it will have some problems when used as a widget with openlayers in case of bug otherwise remove this paragraph browser affected any steps to reproduce open dashboard add a map widget select a map with a google background from the list expected result you can see the map current result there are some errors and some graphical glitches both in widget and preview context other useful information optional
| 1
|
617,482
| 19,358,763,877
|
IssuesEvent
|
2021-12-16 00:55:44
|
UC-Davis-molecular-computing/scadnano
|
https://api.github.com/repos/UC-Davis-molecular-computing/scadnano
|
closed
|
make more useful error message when helices view order input is invalid
|
enhancement high priority closed in dev
|
Click on Group→adjust current group and edit its helices view order with an invalid list of integers.
Currently, it just shows all the helix indices and the ones that were typed:

It's not straightforward to pinpoint the problem from that. In this case, some helix indices were left out; it should say what they are. If there is a duplicate, say explicitly what it is. If a helix index is invalid, say which one.
|
1.0
|
make more useful error message when helices view order input is invalid - Click on Group→adjust current group and edit its helices view order with an invalid list of integers.
Currently, it just shows all the helix indices and the ones that were typed:

It's not straightforward to pinpoint the problem from that. In this case, some helix indices were left out; it should say what they are. If there is a duplicate, say explicitly what it is. If a helix index is invalid, say which one.
|
priority
|
make more useful error message when helices view order input is invalid click on group rarr adjust current group and edit its helices view order with an invalid list of integers currently it just shows all the helix indices and the ones that were typed it s not straightforward to pinpoint the problem from that in this case some helix indices were left out it should say what they are if there is a duplicate say explicitly what it is if a helix index is invalid say which one
| 1
|
590,870
| 17,789,870,584
|
IssuesEvent
|
2021-08-31 15:03:40
|
SeinopSys/Deviant-Notify
|
https://api.github.com/repos/SeinopSys/Deviant-Notify
|
opened
|
Link to miscellaneous watch category is incorrect
|
bug high priority
|
Unread watch notifications link to https://www.deviantart.com/notifications/watch/misc instead of https://www.deviantart.com/notifications/watch/miscellaneous
|
1.0
|
Link to miscellaneous watch category is incorrect - Unread watch notifications link to https://www.deviantart.com/notifications/watch/misc instead of https://www.deviantart.com/notifications/watch/miscellaneous
|
priority
|
link to miscellaneous watch category is incorrect unread watch notifications link to instead of
| 1
|
691,477
| 23,698,016,181
|
IssuesEvent
|
2022-08-29 16:15:00
|
hackforla/expunge-assist
|
https://api.github.com/repos/hackforla/expunge-assist
|
closed
|
Change copy on pop-up buttons
|
role: development priority: high size: 1pt feature: figma content writing
|
### Overview
Replace button copy for pop-ups before internal usability testing 2.
### Action Items
- [ ] Change copy from NEXT or whatever it says now on ALL pop-up screens to "LET'S CONTINUE"
- [ ] Button (see pictures for reference)
- [ ] Color: #9903FF (Electric Violet)
- [ ] 40 px height x 173 px width
- [ ] Top and bottom padding: 8
- [ ] Side padding: 16
- [ ] Padding between "CONTINUE" and arrow: 8
- [ ] Do not change the button copy on the Welcome page
### Top and Bottom padding

### Side Padding

### Padding between "CONTINUE" and arrow

### Resources/Instructions
See this [issue](https://github.com/hackforla/expunge-assist/issues/628)
|
1.0
|
Change copy on pop-up buttons - ### Overview
Replace button copy for pop-ups before internal usability testing 2.
### Action Items
- [ ] Change copy from NEXT or whatever it says now on ALL pop-up screens to "LET'S CONTINUE"
- [ ] Button (see pictures for reference)
- [ ] Color: #9903FF (Electric Violet)
- [ ] 40 px height x 173 px width
- [ ] Top and bottom padding: 8
- [ ] Side padding: 16
- [ ] Padding between "CONTINUE" and arrow: 8
- [ ] Do not change the button copy on the Welcome page
### Top and Bottom padding

### Side Padding

### Padding between "CONTINUE" and arrow

### Resources/Instructions
See this [issue](https://github.com/hackforla/expunge-assist/issues/628)
|
priority
|
change copy on pop up buttons overview replace button copy for pop ups before internal usability testing action items change copy from next or whatever it says now on all pop up screens to let s continue button see pictures for reference color electric violet px height x px width top and bottom padding side padding padding between continue and arrow do not change the button copy on the welcome page top and bottom padding side padding padding between continue and arrow resources instructions see this
| 1
|
728,408
| 25,077,540,924
|
IssuesEvent
|
2022-11-07 16:33:40
|
Mbed-TLS/mbedtls
|
https://api.github.com/repos/Mbed-TLS/mbedtls
|
closed
|
TLS 1.3 session resumption does not work
|
bug component-tls13 priority-high
|
### Summary
I am using latest version and session resumption is not working with TLS 1.3 (works with TLS 1.2).
I tested this on Linux by using the following command:
programs/ssl/ssl_client2 server_name=twitter.com server_port=443 reconnect=1
Resumption fails with any TLS 1.3 site, but it works with TLS 1.2 sites.
Also, the option auth_mode=optional does not work with TLS 1.3 connections, but it works with TLS 1.2 connections.
|
1.0
|
TLS 1.3 session resumption does not work - ### Summary
I am using latest version and session resumption is not working with TLS 1.3 (works with TLS 1.2).
I tested this on Linux by using the following command:
programs/ssl/ssl_client2 server_name=twitter.com server_port=443 reconnect=1
Resumption fails with any TLS 1.3 site, but it works with TLS 1.2 sites.
Also, the option auth_mode=optional does not work with TLS 1.3 connections, but it works with TLS 1.2 connections.
|
priority
|
tls session resumption does not work summary i am using latest version and session resumption is not working with tls works with tls i tested this on linux by using the following command programs ssl ssl server name twitter com server port reconnect resumption fails with any tls site but it works with tls sites also the option auth mode optional does not work with tls connections but it works with tls connections
| 1
|
424,696
| 12,322,192,275
|
IssuesEvent
|
2020-05-13 09:56:02
|
DIT112-V20/group-06
|
https://api.github.com/repos/DIT112-V20/group-06
|
closed
|
Connect app to Spotify
|
app high priority sprint5 user story
|
## User story
As an app user, I want the app to have a connection to Spotify so that the app can play music.
## Acceptance criteria
The app has established a connection to the Spotify API.
|
1.0
|
Connect app to Spotify - ## User story
As an app user, I want the app to have a connection to Spotify so that the app can play music.
## Acceptance criteria
The app has established a connection to the Spotify API.
|
priority
|
connect app to spotify user story as an app user i want the app to have a connection to spotify so that the app can play music acceptance criteria the app has established a connection to the spotify api
| 1
|
821,334
| 30,817,683,936
|
IssuesEvent
|
2023-08-01 14:24:46
|
tinkoff-ai/etna
|
https://api.github.com/repos/tinkoff-ai/etna
|
closed
|
Examine slow metrics computation
|
enhancement priority/high
|
### 🚀 Feature Request
Currently in some backtesting pipelines the bottleneck could be a metric computation. It isn't a good situation. We should discover why is it happening.
### Proposal
We should try to understand why metrics are computed very slow for a big number of segments.
The problem probably lies in per-segment metric computation. We could compute it more optimally, but our current `Metric` implementation isn't suited for that: it is very strict about how metric is computed using `metric_fn`. Probably, we could significantly improve even this kind of computation if we could iterate over segments more optimally.
Steps:
- Profile metric computation for different number of segments and features;
- Find out what place is bottleneck;
- Describe the problem in the comments of the issue.
### Test cases
_No response_
### Additional context
_No response_
|
1.0
|
Examine slow metrics computation - ### 🚀 Feature Request
Currently in some backtesting pipelines the bottleneck could be a metric computation. It isn't a good situation. We should discover why is it happening.
### Proposal
We should try to understand why metrics are computed very slow for a big number of segments.
The problem probably lies in per-segment metric computation. We could compute it more optimally, but our current `Metric` implementation isn't suited for that: it is very strict about how metric is computed using `metric_fn`. Probably, we could significantly improve even this kind of computation if we could iterate over segments more optimally.
Steps:
- Profile metric computation for different number of segments and features;
- Find out what place is bottleneck;
- Describe the problem in the comments of the issue.
### Test cases
_No response_
### Additional context
_No response_
|
priority
|
examine slow metrics computation 🚀 feature request currently in some backtesting pipelines the bottleneck could be a metric computation it isn t a good situation we should discover why is it happening proposal we should try to understand why metrics are computed very slow for a big number of segments the problem probably lies in per segment metric computation we could compute it more optimally but our current metric implementation isn t suited for that it is very strict about how metric is computed using metric fn probably we could significantly improve even this kind of computation if we could iterate over segments more optimally steps profile metric computation for different number of segments and features find out what place is bottleneck describe the problem in the comments of the issue test cases no response additional context no response
| 1
|
323,074
| 9,842,623,321
|
IssuesEvent
|
2019-06-18 09:40:55
|
luna/luna-studio
|
https://api.github.com/repos/luna/luna-studio
|
reopened
|
Visualization unknown placeholder error
|
Category: Visualisations Change: Non-Breaking Priority: High Type: Bug
|
Please ensure that you are running the latest version of Luna Studio before reporting the bug! It may have been fixed since.
### General Summary
<img width="249" alt="Zrzut ekranu 2019-04-8 o 16 09 48" src="https://user-images.githubusercontent.com/12892578/55730575-d0c81800-5a18-11e9-9aad-2bfb573f7f7f.png">
### Steps to Reproduce
Please list the reproduction steps for your bug. For example:
1. Create a new project.
2. add a node `JSONString "foo"`
3. add another node with method `asText` connected with the previous one
### Expected Result
- Node with text `"foo"` and visualization
### Actual Result
- `Unknown placeholder type: "foo"` - restarting interpreter is helping.
### Luna Version
- Please include the output of `luna-studio --version`.
|
1.0
|
Visualization unknown placeholder error - Please ensure that you are running the latest version of Luna Studio before reporting the bug! It may have been fixed since.
### General Summary
<img width="249" alt="Zrzut ekranu 2019-04-8 o 16 09 48" src="https://user-images.githubusercontent.com/12892578/55730575-d0c81800-5a18-11e9-9aad-2bfb573f7f7f.png">
### Steps to Reproduce
Please list the reproduction steps for your bug. For example:
1. Create a new project.
2. add a node `JSONString "foo"`
3. add another node with method `asText` connected with the previous one
### Expected Result
- Node with text `"foo"` and visualization
### Actual Result
- `Unknown placeholder type: "foo"` - restarting interpreter is helping.
### Luna Version
- Please include the output of `luna-studio --version`.
|
priority
|
visualization unknown placeholder error please ensure that you are running the latest version of luna studio before reporting the bug it may have been fixed since general summary img width alt zrzut ekranu o src steps to reproduce please list the reproduction steps for your bug for example create a new project add a node jsonstring foo add another node with method astext connected with the previous one expected result node with text foo and visualization actual result unknown placeholder type foo restarting interpreter is helping luna version please include the output of luna studio version
| 1
|
575,483
| 17,032,216,036
|
IssuesEvent
|
2021-07-04 20:04:13
|
venturemark/webclient
|
https://api.github.com/repos/venturemark/webclient
|
closed
|
Cannot edit "allow members to create timelines" after venture creation
|
priority/high
|

**Current behavior:**
The owner of the Venture is not able to access the toggle "allow members to create timelines" toggle, click the save button, and persist the change
**Expected behavior:**
The owner can see this view, can toggle button, change persists and affects all members.
|
1.0
|
Cannot edit "allow members to create timelines" after venture creation -

**Current behavior:**
The owner of the Venture is not able to access the toggle "allow members to create timelines" toggle, click the save button, and persist the change
**Expected behavior:**
The owner can see this view, can toggle button, change persists and affects all members.
|
priority
|
cannot edit allow members to create timelines after venture creation current behavior the owner of the venture is not able to access the toggle allow members to create timelines toggle click the save button and persist the change expected behavior the owner can see this view can toggle button change persists and affects all members
| 1
|
405,848
| 11,883,451,478
|
IssuesEvent
|
2020-03-27 15:58:37
|
cdfoundation/foundation
|
https://api.github.com/repos/cdfoundation/foundation
|
closed
|
Digicert validation is holding up Code Signing
|
developer outreach high priority legal operations
|
re: #10
Digicert is asking to validate the project series LLC https://www.digicert.com/ssl-certificate-purchase-validation.htm
Per @olblak
- tried to request one with my own credit card, and my request stayed pending for a little bit more than a week without further notification then I canceled when I discovered that I should use the CDF credit card but I can't use it either.
>> This is due to required org validation. nothing on your end. also, please use the corp card on file
Could we plan a zoom session to move this issue forward?
>> This is scheduled
|
1.0
|
Digicert validation is holding up Code Signing - re: #10
Digicert is asking to validate the project series LLC https://www.digicert.com/ssl-certificate-purchase-validation.htm
Per @olblak
- tried to request one with my own credit card, and my request stayed pending for a little bit more than a week without further notification then I canceled when I discovered that I should use the CDF credit card but I can't use it either.
>> This is due to required org validation. nothing on your end. also, please use the corp card on file
Could we plan a zoom session to move this issue forward?
>> This is scheduled
|
priority
|
digicert validation is holding up code signing re digicert is asking to validate the project series llc per olblak tried to request one with my own credit card and my request stayed pending for a little bit more than a week without further notification then i canceled when i discovered that i should use the cdf credit card but i can t use it either this is due to required org validation nothing on your end also please use the corp card on file could we plan a zoom session to move this issue forward this is scheduled
| 1
|
724,046
| 24,915,981,351
|
IssuesEvent
|
2022-10-30 12:04:16
|
bounswe/bounswe2022group9
|
https://api.github.com/repos/bounswe/bounswe2022group9
|
closed
|
[Backend] Implementing the login endpoint.
|
Priority: High In Progress Backend
|
Deadline: 29.10.2022 23.59
TODO:
- [x] The login endpoint will be implemented for the users who are already signed up.
- [x] Username and password will be accepted and an access-token will be returned for successful attempts. Otherwise status 401 will be returned.
- [x] Unit tests will be written to test the functionality.
- [x] A pull request will be opened to merge the endpoint to the master.
|
1.0
|
[Backend] Implementing the login endpoint. - Deadline: 29.10.2022 23.59
TODO:
- [x] The login endpoint will be implemented for the users who are already signed up.
- [x] Username and password will be accepted and an access-token will be returned for successful attempts. Otherwise status 401 will be returned.
- [x] Unit tests will be written to test the functionality.
- [x] A pull request will be opened to merge the endpoint to the master.
|
priority
|
implementing the login endpoint deadline todo the login endpoint will be implemented for the users who are already signed up username and password will be accepted and an access token will be returned for successful attempts otherwise status will be returned unit tests will be written to test the functionality a pull request will be opened to merge the endpoint to the master
| 1
|
684,185
| 23,410,279,240
|
IssuesEvent
|
2022-08-12 16:43:38
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
kernel: mem_protect: mimxrt11xx series build failure
|
bug priority: high platform: NXP
|
**Describe the bug**
build failure
Please also mention any information which could help others to understand
the problem you're facing:
```
see: /home/shared/disk/zephyr_project/zephyr_test/zephyr/twister-out/mimxrt1170_evk_cm7/tests/kernel/mem_protect/stack_random/kernel.memory_protection.stack_random/build.log
INFO - Total complete: 11/ 20 55% skipped: 10, failed: 0
ERROR - mimxrt1170_evk_cm7 tests/kernel/mem_protect/stackprot/kernel.memory_protection.stackprot FAILED: Build failure
ERROR - see: /home/shared/disk/zephyr_project/zephyr_test/zephyr/twister-out/mimxrt1170_evk_cm7/tests/kernel/mem_protect/stackprot/kernel.memory_protection.stackprot/build.log
```
**To Reproduce**
Steps to reproduce the behavior:
```
scripts/twister -p mimxrt1170_evk_cm7 --build-only -T tests/kernel/mem_protect/
```
**Expected behavior**
build pass
**Impact**
What impact does this issue have on your progress (e.g., annoyance, showstopper)
**Logs and console output**
```
FAILED: modules/hal_nxp/hal_nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/mcux-sdk/drivers/caam/fsl_caam.c.obj
ccache /home/ubuntu/zephyr-sdk/arm-zephyr-eabi/bin/arm-zephyr-eabi-gcc -DBOARD_FLASH_SIZE="CONFIG_FLASH_SIZE*1024" -DCPU_MIMXRT1176DVMAA_cm7 -DFSL_SDK_ENABLE_DRIVER_CACHE_CONTROL -DKERNEL -DTC_RUNID=b497ff30139cffe3653f0807cb7375cd -DXIP_BOOT_HEADER_DCD_ENABLE=1 -DXIP_BOOT_HEADER_ENABLE=1 -DXIP_EXTERNAL_FLASH -D_FORTIFY_SOURCE=2 -D__PROGRAM_START -D__ZEPHYR__=1 -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/common/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/devices/MIMXRT1176/drivers/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/devices/MIMXRT1176/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/CMSIS/Core/Include/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/caam/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/lpuart/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/gpt/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/igpio/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/cache/armv7-m7/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/boards/evkmimxrt1170/xip/. 
-I/home/shared/disk/zephyr_project/zephyr_test/zephyr/include -I/home/shared/disk/zephyr_project/zephyr_test/zephyr/twister-out/mimxrt1170_evk_cm7/tests/kernel/mem_protect/stack_random/kernel.memory_protection.stack_random/zephyr/include/generated -I/home/shared/disk/zephyr_project/zephyr_test/zephyr/soc/arm/nxp_imx/rt -I/home/shared/disk/zephyr_project/zephyr_test/zephyr/subsys/testsuite/include -I/home/shared/disk/zephyr_project/zephyr_test/zephyr/subsys/testsuite/ztest/include -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/cmsis/CMSIS/Core/Include -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/devices/MIMXRT1176 -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/devices/MIMXRT1176/drivers -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/devices/MIMXRT1176/xip -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/common -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/caam -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/lpuart -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/gpt -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/igpio -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/cache/armv7-m7 -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/boards/evkmimxrt1170 -isystem /home/shared/disk/zephyr_project/zephyr_test/zephyr/lib/libc/minimal/include -isystem /home/ubuntu/zephyr-sdk/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/10.3.0/include -isystem /home/ubuntu/zephyr-sdk/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/10.3.0/include-fixed -fno-strict-aliasing -Os -imacros 
/home/shared/disk/zephyr_project/zephyr_test/zephyr/twister-out/mimxrt1170_evk_cm7/tests/kernel/mem_protect/stack_random/kernel.memory_protection.stack_random/zephyr/include/generated/autoconf.h -ffreestanding -fno-common -g -gdwarf-4 -fdiagnostics-color=always -mcpu=cortex-m7 -mthumb -mabi=aapcs -mfp16-format=ieee --sysroot=/home/ubuntu/zephyr-sdk/arm-zephyr-eabi/arm-zephyr-eabi -imacros /home/shared/disk/zephyr_project/zephyr_test/zephyr/include/zephyr/toolchain/zephyr_stdint.h -Wall -Wformat -Wformat-security -Wno-format-zero-length -Wno-main -Wno-pointer-sign -Wpointer-arith -Wexpansion-to-defined -Wno-unused-but-set-variable -Werror=implicit-int -Werror -fno-asynchronous-unwind-tables -fno-pie -fno-pic -fno-reorder-functions -fno-defer-pop -fmacro-prefix-map=/home/shared/disk/zephyr_project/zephyr_test/zephyr/tests/kernel/mem_protect/stack_random=CMAKE_SOURCE_DIR -fmacro-prefix-map=/home/shared/disk/zephyr_project/zephyr_test/zephyr=ZEPHYR_BASE -fmacro-prefix-map=/home/shared/disk/zephyr_project/zephyr_test=WEST_TOPDIR -ffunction-sections -fdata-sections -std=c99 -nostdinc -MD -MT modules/hal_nxp/hal_nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/mcux-sdk/drivers/caam/fsl_caam.c.obj -MF modules/hal_nxp/hal_nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/mcux-sdk/drivers/caam/fsl_caam.c.obj.d -o modules/hal_nxp/hal_nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/mcux-sdk/drivers/caam/fsl_caam.c.obj -c /home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/caam/fsl_caam.c
/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/caam/fsl_caam.c:18:2: error: #warning "DCACHE must be set to write-trough mode to safely invalidate cache!!" [-Werror=cpp]
18 | #warning "DCACHE must be set to write-trough mode to safely invalidate cache!!"
| ^~~~~~~
cc1: all warnings being treated as errors
[220/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/ssl_msg.c.obj
[221/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/x509_crl.c.obj
[222/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/x509_csr.c.obj
[223/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/rsa.c.obj
[224/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/ssl_srv.c.obj
[225/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/x509.c.obj
[226/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/sha1.c.obj
[227/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/ssl_tls.c.obj
[228/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/x509_crt.c.obj
```
**Environment (please complete the following information):**
- OS: (e.g. Linux,)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used: zephyr-v3.1.0-3420-ga41f4f30b5
|
1.0
|
kernel: mem_protect: mimxrt11xx series build failure - **Describe the bug**
build failure
Please also mention any information which could help others to understand
the problem you're facing:
```
see: /home/shared/disk/zephyr_project/zephyr_test/zephyr/twister-out/mimxrt1170_evk_cm7/tests/kernel/mem_protect/stack_random/kernel.memory_protection.stack_random/build.log
INFO - Total complete: 11/ 20 55% skipped: 10, failed: 0
ERROR - mimxrt1170_evk_cm7 tests/kernel/mem_protect/stackprot/kernel.memory_protection.stackprot FAILED: Build failure
ERROR - see: /home/shared/disk/zephyr_project/zephyr_test/zephyr/twister-out/mimxrt1170_evk_cm7/tests/kernel/mem_protect/stackprot/kernel.memory_protection.stackprot/build.log
```
**To Reproduce**
Steps to reproduce the behavior:
```
scripts/twister -p mimxrt1170_evk_cm7 --build-only -T tests/kernel/mem_protect/
```
**Expected behavior**
build pass
**Impact**
What impact does this issue have on your progress (e.g., annoyance, showstopper)
**Logs and console output**
```
FAILED: modules/hal_nxp/hal_nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/mcux-sdk/drivers/caam/fsl_caam.c.obj
ccache /home/ubuntu/zephyr-sdk/arm-zephyr-eabi/bin/arm-zephyr-eabi-gcc -DBOARD_FLASH_SIZE="CONFIG_FLASH_SIZE*1024" -DCPU_MIMXRT1176DVMAA_cm7 -DFSL_SDK_ENABLE_DRIVER_CACHE_CONTROL -DKERNEL -DTC_RUNID=b497ff30139cffe3653f0807cb7375cd -DXIP_BOOT_HEADER_DCD_ENABLE=1 -DXIP_BOOT_HEADER_ENABLE=1 -DXIP_EXTERNAL_FLASH -D_FORTIFY_SOURCE=2 -D__PROGRAM_START -D__ZEPHYR__=1 -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/common/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/devices/MIMXRT1176/drivers/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/devices/MIMXRT1176/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/CMSIS/Core/Include/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/caam/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/lpuart/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/gpt/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/igpio/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/cache/armv7-m7/. -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/boards/evkmimxrt1170/xip/. 
-I/home/shared/disk/zephyr_project/zephyr_test/zephyr/include -I/home/shared/disk/zephyr_project/zephyr_test/zephyr/twister-out/mimxrt1170_evk_cm7/tests/kernel/mem_protect/stack_random/kernel.memory_protection.stack_random/zephyr/include/generated -I/home/shared/disk/zephyr_project/zephyr_test/zephyr/soc/arm/nxp_imx/rt -I/home/shared/disk/zephyr_project/zephyr_test/zephyr/subsys/testsuite/include -I/home/shared/disk/zephyr_project/zephyr_test/zephyr/subsys/testsuite/ztest/include -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/cmsis/CMSIS/Core/Include -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/devices/MIMXRT1176 -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/devices/MIMXRT1176/drivers -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/devices/MIMXRT1176/xip -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/common -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/caam -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/lpuart -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/gpt -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/igpio -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/cache/armv7-m7 -I/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/boards/evkmimxrt1170 -isystem /home/shared/disk/zephyr_project/zephyr_test/zephyr/lib/libc/minimal/include -isystem /home/ubuntu/zephyr-sdk/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/10.3.0/include -isystem /home/ubuntu/zephyr-sdk/arm-zephyr-eabi/bin/../lib/gcc/arm-zephyr-eabi/10.3.0/include-fixed -fno-strict-aliasing -Os -imacros 
/home/shared/disk/zephyr_project/zephyr_test/zephyr/twister-out/mimxrt1170_evk_cm7/tests/kernel/mem_protect/stack_random/kernel.memory_protection.stack_random/zephyr/include/generated/autoconf.h -ffreestanding -fno-common -g -gdwarf-4 -fdiagnostics-color=always -mcpu=cortex-m7 -mthumb -mabi=aapcs -mfp16-format=ieee --sysroot=/home/ubuntu/zephyr-sdk/arm-zephyr-eabi/arm-zephyr-eabi -imacros /home/shared/disk/zephyr_project/zephyr_test/zephyr/include/zephyr/toolchain/zephyr_stdint.h -Wall -Wformat -Wformat-security -Wno-format-zero-length -Wno-main -Wno-pointer-sign -Wpointer-arith -Wexpansion-to-defined -Wno-unused-but-set-variable -Werror=implicit-int -Werror -fno-asynchronous-unwind-tables -fno-pie -fno-pic -fno-reorder-functions -fno-defer-pop -fmacro-prefix-map=/home/shared/disk/zephyr_project/zephyr_test/zephyr/tests/kernel/mem_protect/stack_random=CMAKE_SOURCE_DIR -fmacro-prefix-map=/home/shared/disk/zephyr_project/zephyr_test/zephyr=ZEPHYR_BASE -fmacro-prefix-map=/home/shared/disk/zephyr_project/zephyr_test=WEST_TOPDIR -ffunction-sections -fdata-sections -std=c99 -nostdinc -MD -MT modules/hal_nxp/hal_nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/mcux-sdk/drivers/caam/fsl_caam.c.obj -MF modules/hal_nxp/hal_nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/mcux-sdk/drivers/caam/fsl_caam.c.obj.d -o modules/hal_nxp/hal_nxp/CMakeFiles/..__modules__hal__nxp.dir/mcux/mcux-sdk/drivers/caam/fsl_caam.c.obj -c /home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/caam/fsl_caam.c
/home/shared/disk/zephyr_project/zephyr_test/modules/hal/nxp/mcux/mcux-sdk/drivers/caam/fsl_caam.c:18:2: error: #warning "DCACHE must be set to write-trough mode to safely invalidate cache!!" [-Werror=cpp]
18 | #warning "DCACHE must be set to write-trough mode to safely invalidate cache!!"
| ^~~~~~~
cc1: all warnings being treated as errors
[220/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/ssl_msg.c.obj
[221/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/x509_crl.c.obj
[222/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/x509_csr.c.obj
[223/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/rsa.c.obj
[224/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/ssl_srv.c.obj
[225/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/x509.c.obj
[226/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/sha1.c.obj
[227/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/ssl_tls.c.obj
[228/270] Building C object modules/mbedtls/CMakeFiles/modules__mbedtls.dir/home/shared/disk/zephyr_project/zephyr_test/modules/crypto/mbedtls/library/x509_crt.c.obj
```
**Environment (please complete the following information):**
- OS: (e.g. Linux,)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used: zephyr-v3.1.0-3420-ga41f4f30b5
|
priority
|
kernel mem protect series build failure describe the bug build failure please also mention any information which could help others to understand the problem you re facing see home shared disk zephyr project zephyr test zephyr twister out evk tests kernel mem protect stack random kernel memory protection stack random build log info total complete skipped failed error evk tests kernel mem protect stackprot kernel memory protection stackprot failed build failure error see home shared disk zephyr project zephyr test zephyr twister out evk tests kernel mem protect stackprot kernel memory protection stackprot build log to reproduce steps to reproduce the behavior scripts twister p evk build only t tests kernel mem protect expected behavior build pass impact what impact does this issue have on your progress e g annoyance showstopper logs and console output failed modules hal nxp hal nxp cmakefiles modules hal nxp dir mcux mcux sdk drivers caam fsl caam c obj ccache home ubuntu zephyr sdk arm zephyr eabi bin arm zephyr eabi gcc dboard flash size config flash size dcpu dfsl sdk enable driver cache control dkernel dtc runid dxip boot header dcd enable dxip boot header enable dxip external flash d fortify source d program start d zephyr i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers common i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk devices drivers i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk devices i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk cmsis core include i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers caam i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers lpuart i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers gpt i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers igpio i home shared disk zephyr project 
zephyr test modules hal nxp mcux mcux sdk drivers cache i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk boards xip i home shared disk zephyr project zephyr test zephyr include i home shared disk zephyr project zephyr test zephyr twister out evk tests kernel mem protect stack random kernel memory protection stack random zephyr include generated i home shared disk zephyr project zephyr test zephyr soc arm nxp imx rt i home shared disk zephyr project zephyr test zephyr subsys testsuite include i home shared disk zephyr project zephyr test zephyr subsys testsuite ztest include i home shared disk zephyr project zephyr test modules hal cmsis cmsis core include i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk devices i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk devices drivers i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk devices xip i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers common i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers caam i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers lpuart i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers gpt i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers igpio i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers cache i home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk boards isystem home shared disk zephyr project zephyr test zephyr lib libc minimal include isystem home ubuntu zephyr sdk arm zephyr eabi bin lib gcc arm zephyr eabi include isystem home ubuntu zephyr sdk arm zephyr eabi bin lib gcc arm zephyr eabi include fixed fno strict aliasing os imacros home shared disk zephyr project zephyr test zephyr twister out evk tests kernel mem protect stack random kernel memory protection stack 
random zephyr include generated autoconf h ffreestanding fno common g gdwarf fdiagnostics color always mcpu cortex mthumb mabi aapcs format ieee sysroot home ubuntu zephyr sdk arm zephyr eabi arm zephyr eabi imacros home shared disk zephyr project zephyr test zephyr include zephyr toolchain zephyr stdint h wall wformat wformat security wno format zero length wno main wno pointer sign wpointer arith wexpansion to defined wno unused but set variable werror implicit int werror fno asynchronous unwind tables fno pie fno pic fno reorder functions fno defer pop fmacro prefix map home shared disk zephyr project zephyr test zephyr tests kernel mem protect stack random cmake source dir fmacro prefix map home shared disk zephyr project zephyr test zephyr zephyr base fmacro prefix map home shared disk zephyr project zephyr test west topdir ffunction sections fdata sections std nostdinc md mt modules hal nxp hal nxp cmakefiles modules hal nxp dir mcux mcux sdk drivers caam fsl caam c obj mf modules hal nxp hal nxp cmakefiles modules hal nxp dir mcux mcux sdk drivers caam fsl caam c obj d o modules hal nxp hal nxp cmakefiles modules hal nxp dir mcux mcux sdk drivers caam fsl caam c obj c home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers caam fsl caam c home shared disk zephyr project zephyr test modules hal nxp mcux mcux sdk drivers caam fsl caam c error warning dcache must be set to write trough mode to safely invalidate cache warning dcache must be set to write trough mode to safely invalidate cache all warnings being treated as errors building c object modules mbedtls cmakefiles modules mbedtls dir home shared disk zephyr project zephyr test modules crypto mbedtls library ssl msg c obj building c object modules mbedtls cmakefiles modules mbedtls dir home shared disk zephyr project zephyr test modules crypto mbedtls library crl c obj building c object modules mbedtls cmakefiles modules mbedtls dir home shared disk zephyr project zephyr test 
modules crypto mbedtls library csr c obj building c object modules mbedtls cmakefiles modules mbedtls dir home shared disk zephyr project zephyr test modules crypto mbedtls library rsa c obj building c object modules mbedtls cmakefiles modules mbedtls dir home shared disk zephyr project zephyr test modules crypto mbedtls library ssl srv c obj building c object modules mbedtls cmakefiles modules mbedtls dir home shared disk zephyr project zephyr test modules crypto mbedtls library c obj building c object modules mbedtls cmakefiles modules mbedtls dir home shared disk zephyr project zephyr test modules crypto mbedtls library c obj building c object modules mbedtls cmakefiles modules mbedtls dir home shared disk zephyr project zephyr test modules crypto mbedtls library ssl tls c obj building c object modules mbedtls cmakefiles modules mbedtls dir home shared disk zephyr project zephyr test modules crypto mbedtls library crt c obj environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha or version used zephyr
| 1
|
794,286
| 28,029,770,310
|
IssuesEvent
|
2023-03-28 11:36:20
|
AY2223S2-CS2113-W12-1/tp
|
https://api.github.com/repos/AY2223S2-CS2113-W12-1/tp
|
closed
|
Add LocalDate and LocalTime functionality
|
priority.High
|
Add LocalDate and LocalTime functionalities to appointment classes and commands
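The feature requested above (java.time LocalDate/LocalTime support on appointment classes) can be sketched with Python's analogous date/time types. The `Appointment` class and its field names are hypothetical illustrations, not the project's actual code.

```python
from datetime import date, time

class Appointment:
    """Hypothetical appointment holding a calendar date and a wall-clock time.

    Mirrors the requested java.time.LocalDate / java.time.LocalTime fields.
    """
    def __init__(self, on: date, at: time) -> None:
        self.on = on
        self.at = at

    @classmethod
    def parse(cls, date_str: str, time_str: str) -> "Appointment":
        # fromisoformat plays the role of LocalDate.parse / LocalTime.parse.
        return cls(date.fromisoformat(date_str), time.fromisoformat(time_str))

appt = Appointment.parse("2023-03-28", "14:30")
```

Using dedicated date/time types (rather than raw strings) lets commands compare and sort appointments reliably.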
|
1.0
|
Add LocalDate and LocalTime functionality - Add LocalDate and LocalTime functionalities to appointment classes and commands
|
priority
|
add localdate and localtime functionality add localdate and localtime functionalities to appointment classes and commands
| 1
|
465,783
| 13,392,164,362
|
IssuesEvent
|
2020-09-03 00:33:07
|
OpenTransitTools/trimet-mod-pelias
|
https://api.github.com/repos/OpenTransitTools/trimet-mod-pelias
|
opened
|
MLK alias can't find stops with "ML King" in their name
|
bug high priority
|
Basically, MLK synonyms and variations are spotty. Call center folks' see this as their biggest complaint with Pelias.
This is from a Feb 2020 Slack discussion:
fpurcell Feb 27th at 12:13 AM
a) killingsworth & mlk: https://ws-st.trimet.org/pelias/v1/autocomplete?text=mlk%20%26%20killingsworth
b) mlk & ainsworth: https://ws-st.trimet.org/pelias/v1/autocomplete?text=ainsworth%20%26%20mlk
Even limiting the search to just stops doesn't seem to work well:
https://ws-st.trimet.org/pelias/v1/autocomplete?text=ainsworth%20%26%20mlk&layers=stops
And search, although there are results, isn't much better than autocomplete:
https://ws-st.trimet.org/pelias/v1/search?text=ainsworth%20%26%20mlk&layers=stops (edited)
c) spell out ml king works better:
https://ws-st.trimet.org/pelias/v1/autocomplete?text=ne%20ml%20king%20%26%20killingsworth
https://ws-st.trimet.org/pelias/v1/autocomplete?text=ne%20ml%20king%20%26%20ainsworth
and spelling out 'martin luther king' also better:
https://ws-st.trimet.org/pelias/v1/autocomplete?text=ne%20martin%20luther%20king%20%26%20killingsworth
Julian Simioni 6 months ago
hmm. this worked previously, right? i recall there were changes in recent versions of elasticsearch that might have impacted this
fpurcell 6 months ago
I'm not sure it ever worked for those stops, Julian. MLK alias' seemingly work well for addresses and intersections. Maybe @myleen remembers whether this worked at one time for those stops.
myleen 6 months ago
I don't recall, sorry.
Julian Simioni 6 months ago
no problem. we'll take a look and see what the current situation is and what we can do. maybe there are some synonym/alias tweaks or maybe something else
myleen 6 months ago
And I would say MLK aliases sometimes work for intersections. See the example immediately above - why is mlk & killingsworth 5th in the list? (edited)
Julian Simioni 6 months ago
good question. frank which one of the pelias queries corresponds to that last screenshot?
fpurcell 6 months ago
https://ws-st.trimet.org/pelias/v1/search?text=MLK%20and%20Killingsworth
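The complaint above boils down to synonym expansion: "mlk", "ml king", and "martin luther king" should all reach the same stops. A minimal Python sketch of query-time alias expansion follows; the synonym table is hypothetical, and Pelias's real handling lives in its Elasticsearch analyzers, not application code like this.

```python
# Hypothetical alias table; in Pelias these mappings would be synonym
# filters configured on the Elasticsearch index.
SYNONYMS = {
    "mlk": ["mlk", "ml king", "martin luther king"],
}

def expand(query: str) -> list[str]:
    """Return every variant of the query with known aliases expanded."""
    variants = [""]
    for tok in query.lower().split():
        options = SYNONYMS.get(tok, [tok])
        variants = [f"{v} {o}".strip() for v in variants for o in options]
    return variants
```

Each variant can then be tried against the index, so "mlk & killingsworth" also matches stops named "ML King & Killingsworth".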
|
1.0
|
MLK alias can't find stops with "ML King" in their name - Basically, MLK synonyms and variations are spotty. Call center folks' see this as their biggest complaint with Pelias.
This is from a Feb 2020 Slack discussion:
fpurcell Feb 27th at 12:13 AM
a) killingsworth & mlk: https://ws-st.trimet.org/pelias/v1/autocomplete?text=mlk%20%26%20killingsworth
b) mlk & ainsworth: https://ws-st.trimet.org/pelias/v1/autocomplete?text=ainsworth%20%26%20mlk
Even limiting the search to just stops doesn't seem to work well:
https://ws-st.trimet.org/pelias/v1/autocomplete?text=ainsworth%20%26%20mlk&layers=stops
And search, although there are results, isn't much better than autocomplete:
https://ws-st.trimet.org/pelias/v1/search?text=ainsworth%20%26%20mlk&layers=stops (edited)
c) spell out ml king works better:
https://ws-st.trimet.org/pelias/v1/autocomplete?text=ne%20ml%20king%20%26%20killingsworth
https://ws-st.trimet.org/pelias/v1/autocomplete?text=ne%20ml%20king%20%26%20ainsworth
and spelling out 'martin luther king' also better:
https://ws-st.trimet.org/pelias/v1/autocomplete?text=ne%20martin%20luther%20king%20%26%20killingsworth
Julian Simioni 6 months ago
hmm. this worked previously, right? i recall there were changes in recent versions of elasticsearch that might have impacted this
fpurcell 6 months ago
I'm not sure it ever worked for those stops, Julian. MLK alias' seemingly work well for addresses and intersections. Maybe @myleen remembers whether this worked at one time for those stops.
myleen 6 months ago
I don't recall, sorry.
Julian Simioni 6 months ago
no problem. we'll take a look and see what the current situation is and what we can do. maybe there are some synonym/alias tweaks or maybe something else
myleen 6 months ago
And I would say MLK aliases sometimes work for intersections. See the example immediately above - why is mlk & killingsworth 5th in the list? (edited)
Julian Simioni 6 months ago
good question. frank which one of the pelias queries corresponds to that last screenshot?
fpurcell 6 months ago
https://ws-st.trimet.org/pelias/v1/search?text=MLK%20and%20Killingsworth
|
priority
|
mlk alias can t find stops with ml king in their name basically mlk synonyms and variations are spotty call center folks see this as their biggest complaint with pelias this is from a feb slack discussion fpurcell feb at am a killingsworth mlk b mlk ainsworth even limiting the search to just stops doesn t seem to work well and search although there are results isn t much better than autocomplete edited c spell out ml king works better and spelling out martin luther king also better julian simioni months ago hmm this worked previously right i recall there were changes in recent versions of elasticsearch that might have impacted this fpurcell months ago i m not sure it ever worked for those stops julian mlk alias seemingly work well for addresses and intersections maybe myleen remembers whether this worked at one time for those stops myleen months ago i don t recall sorry julian simioni months ago no problem we ll take a look and see what the current situation is and what we can do maybe there are some synonym alias tweaks or maybe something else myleen months ago and i would say mlk aliases sometimes work for intersections see the example immediately above why is mlk killingsworth in the list edited julian simioni months ago good question frank which one of the pelias queries corresponds to that last screenshot fpurcell months ago
| 1
|
207,334
| 7,127,353,279
|
IssuesEvent
|
2018-01-20 20:43:31
|
umple/umple
|
https://api.github.com/repos/umple/umple
|
closed
|
Make minimized js build automatically
|
Component-UmpleOnline Diffic-Easy Priority-VHigh
|
Umpleonline is loading a lot slower than it used to. It seems that joint.js, a particularly large file loaded in _load.js, may be the culprit. We need to experiment with loading this only 'on demand' when a request is first made for joint diagrams, and/or minimizing it.
|
1.0
|
Make minimized js build automatically - Umpleonline is loading a lot slower than it used to. It seems that joint.js, a particularly large file loaded in _load.js, may be the culprit. We need to experiment with loading this only 'on demand' when a request is first made for joint diagrams, and/or minimizing it.
|
priority
|
make minimized js build automatically umpleonline is loading a lot slower than it used to it seems that joint js is a particularly large file being loaded in load js and may be the culprit we need to experiment with loading this only on demand when a request is first made for joint diagrams and or minimizing it
| 1
|
143,527
| 5,518,267,206
|
IssuesEvent
|
2017-03-18 07:08:42
|
fossasia/open-event-orga-server
|
https://api.github.com/repos/fossasia/open-event-orga-server
|
closed
|
FOSSASIA Summit no longer on start page even though event is still running
|
bug Priority: High
|
Please ensure events that are still running are listed on start page until end of event, e.g. check https://eventyay.com
|
1.0
|
FOSSASIA Summit no longer on start page even though event is still running - Please ensure events that are still running are listed on start page until end of event, e.g. check https://eventyay.com
|
priority
|
fossasia summit no longer on start page event though event is still running please ensure events that are still running are listed on start page until end of event e g check
| 1
|
477,939
| 13,770,524,446
|
IssuesEvent
|
2020-10-07 20:22:41
|
qarmin/czkawka
|
https://api.github.com/repos/qarmin/czkawka
|
closed
|
Don't lock UI when searching
|
enhancement help wanted high priority
|
For now, pressing the Search button will freeze the entire GUI until searching ends, which is really bad.
Probably running the search in another thread will allow a smoother UI and also make it possible to pause and stop a search without needing to kill the app
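The fix described above — run the search on a worker thread with a stop flag so the GUI stays responsive — can be sketched as follows. Czkawka itself is Rust with a GTK UI; this is a language-neutral Python illustration with hypothetical names.

```python
import threading

class Searcher:
    """Cancellable background search sketch (stand-in for duplicate finding)."""
    def __init__(self) -> None:
        self._stop = threading.Event()
        self.results: list[int] = []

    def _run(self, items) -> None:
        for item in items:
            if self._stop.is_set():      # honour a Stop button press
                return
            self.results.append(item)    # stand-in for real scan work

    def start(self, items) -> threading.Thread:
        t = threading.Thread(target=self._run, args=(items,), daemon=True)
        t.start()                        # the UI thread returns immediately
        return t

    def stop(self) -> None:
        self._stop.set()

finished = Searcher()
finished.start(range(5)).join()

cancelled = Searcher()
cancelled.stop()                         # stop requested before any work
cancelled.start(range(5)).join()
```

The event flag gives the UI a way to stop (and, with a second flag, pause) a scan without killing the process.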
|
1.0
|
Don't lock UI when searching - For now, pressing the Search button will freeze the entire GUI until searching ends, which is really bad.
Probably running the search in another thread will allow a smoother UI and also make it possible to pause and stop a search without needing to kill the app
|
priority
|
don t lock ui when searching for now pressing search button will freeze entire gui until searching ends which is really bad probably running searching in another thread will allow to create smother ui and also to pause and stop search without needing to kill app
| 1
|
745,392
| 25,982,443,030
|
IssuesEvent
|
2022-12-19 20:09:55
|
encorelab/ck-board
|
https://api.github.com/repos/encorelab/ck-board
|
closed
|
Increase post character limit
|
enhancement high priority
|
The post character limit should be increased to at least 2000 characters.
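The requested change is a one-line limit bump plus validation against it; a minimal sketch (constant and function names are hypothetical — the real limit lives in ck-board's post model):

```python
POST_MAX_LENGTH = 2000   # raised to the requested minimum

def validate_post(text: str) -> bool:
    """Accept a post only when it fits within the character limit."""
    return len(text) <= POST_MAX_LENGTH
```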
|
1.0
|
Increase post character limit - The post character limit should be increased to at least 2000 characters.
|
priority
|
increase post character limit the post character limit should be increased to at least characters
| 1
|
700,617
| 24,067,099,197
|
IssuesEvent
|
2022-09-17 16:51:46
|
etro-js/etro
|
https://api.github.com/repos/etro-js/etro
|
closed
|
Visual fields missing from `Image` and `Video` types
|
help wanted priority:high type:typings
|
**Steps to reproduce:**
1. Clone repo
2. Run `npm install`
3. Run `npm test`
**Actual behavior:**
You should see a bunch of compilation errors like this one:
```
16 09 2022 15:39:44.366:ERROR [compiler.karma-typescript]: spec/integration/layer.spec.ts(229,40): error TS2339: Property 'cctx' does not exist on type 'Image'.
```
Although not included in the tests, `etro.layer.Video` is also missing these fields.
**Expected behavior:**
`etro.layer.Image` and `etro.layer.Video` should both inherit properties and methods from `etro.layer.Visual` (not just the ones from `etro.layer.Base`).
**Notes:**
I believe this is an issue with `VisualSourceMixin`. This mixin was created, along with `AudioSourceMixin`, to prevent duplicate code from visual sources (image and video html elements) and audio sources (audio and video html elements).
Interestingly, `Audio` and `Video` both correctly inherit fields from `AudioSourceMixin`.
I see a few options:
- Define the visual source fields in each visual source class and the audio source fields in each audio source class (duplicate fields). Logic can be extracted to util files.
- Fix the typings for the mixin.
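The mixin shape at issue — a `VisualSourceMixin` contributing shared fields to both `Image` and `Video` — can be illustrated in Python, where inheritance delivers the fields that etro's TypeScript typings currently drop. This is only an analogy with hypothetical members; the real fix is in the TypeScript mixin's type declarations.

```python
class Base:
    """Stand-in for etro.layer.Base."""
    def __init__(self) -> None:
        self.started = False

class VisualSourceMixin:
    """Fields every visual source (image/video) should expose."""
    def init_visual(self) -> None:
        self.cctx = "canvas-context"   # stand-in for the missing property

class Image(VisualSourceMixin, Base):
    def __init__(self) -> None:
        super().__init__()             # Base fields via the MRO
        self.init_visual()             # mixin-provided visual fields

img = Image()
```

At runtime the JavaScript mixin likely behaves just like this; the bug report is that the *declared* types for `Image` and `Video` omit the mixin's members.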
|
1.0
|
Visual fields missing from `Image` and `Video` types - **Steps to reproduce:**
1. Clone repo
2. Run `npm install`
3. Run `npm test`
**Actual behavior:**
You should see a bunch of compilation errors like this one:
```
16 09 2022 15:39:44.366:ERROR [compiler.karma-typescript]: spec/integration/layer.spec.ts(229,40): error TS2339: Property 'cctx' does not exist on type 'Image'.
```
Although not included in the tests, `etro.layer.Video` is also missing these fields.
**Expected behavior:**
`etro.layer.Image` and `etro.layer.Video` should both inherit properties and methods from `etro.layer.Visual` (not just the ones from `etro.layer.Base`).
**Notes:**
I believe this is an issue with `VisualSourceMixin`. This mixin was created, along with `AudioSourceMixin`, to prevent duplicate code from visual sources (image and video html elements) and audio sources (audio and video html elements).
Interestingly, `Audio` and `Video` both correctly inherit fields from `AudioSourceMixin`.
I see a few options:
- Define the visual source fields in each visual source class and the audio source fields in each audio source class (duplicate fields). Logic can be extracted to util files.
- Fix the typings for the mixin.
|
priority
|
visual fields missing from image and video types steps to reproduce clone repo run npm install run npm test actual behavior you should see a bunch of compilation errors like this one error spec integration layer spec ts error property cctx does not exist on type image although not included in the tests etro layer video is also missing these fields expected behavior etro layer image and etro layer video should both inherit properties and methods from etro layer visual not just the ones from etro layer base notes i believe this is an issue with visualsourcemixin this mixin was created along with audiosourcemixin to prevent duplicate code from visual sources image and video html elements and audio sources audio and video html elements interestingly audio and video both correctly inherit fields from audiosourcemixin i see a few options define the visual source fields in each visual source class and the audio source fields in each audio source class duplicate fields logic can be extracted to util files fix the typings for the mixin
| 1
|
22,566
| 2,649,502,935
|
IssuesEvent
|
2015-03-15 00:01:46
|
Araq/Nim
|
https://api.github.com/repos/Araq/Nim
|
closed
|
`items` for `Slice` makes `toSeq` fail
|
High Priority Showstopper
|
I have encountered some strange behaviour (in Nim 0.10.2) when trying to create a sequence from a slice of integers:
```nim
from sequtils import toSeq
# This works.
let s1 = toSeq(1..4)
# This fails.
let r: Slice[int] = 1..4
let s2 = toSeq(r)
```
This fails:
```
Error: type mismatch: got (Slice[int])
but expected one of:
system.items(a: cstring): iter[char]
system.items(a: seq[T]): iter[T]
system.items(a: string): iter[char]
system.items(E: typedesc[enum]): iter[typedesc[enum]]
system.items(a: array[IX, T]): iter[T]
system.items(a: set[T]): iter[T]
system.items(a: openarray[T]): iter[T]
```
I tried to overload `system.items` for `Slice` (until now, the `openarray`-accepting variant seems to have been called):
```nim
iterator items*[T](a: Slice[T]): T {.inline.} =
## iterates over each item of `a`.
var i = a.a
while i <= a.b:
yield i
inc(i)
```
(According to Araq, high-to-low slices are not supported. The docs state that `..` is just an alias for `countup`.)
With the new `items` overload, this error occurs:
```
Error: type mismatch: got (seq[Slice[int]], int)
but expected one of:
system.add(x: var string, y: cstring)
system.add(x: var string, y: char)
system.add(x: var seq[T], y: T)
system.add(x: var string, y: string)
system.add(x: var seq[T], y: openarray[T])
```
The implementation of the `toSeq` template (https://github.com/Araq/Nim/blob/devel/lib/pure/collections/sequtils.nim#L309) uses `seq[type(iter)]`. However, `type(1..4)` doesn't return `int` as expected, but raises an error (in interactive mode):
```
>>> type(1..4)
stdin(6, 8) Error: ')' expected
stdin(6, 6) Error: type mismatch: got (typedesc[int], int literal(4))
but expected one of:
system...(a: T, b: T): Slice[T]
system...(b: T): Slice[T]
stdin(6, 10) Error: invalid indentation
stdin(6, 10) Error: expression expected, but found ')'
```
As a result, the `for` loop in `toSeq` seems to work, but the following call to `add` fails as it is passed an unexpected (and unreasonable) type of sequence (`seq[Slice[int]]` instead of `seq[int]`; see my comment on `type(1..4)` above):
```
Error: type mismatch: got (seq[Slice[int]], int)
but expected one of:
system.add(x: var string, y: cstring)
system.add(x: var string, y: char)
system.add(x: var seq[T], y: T)
system.add(x: var string, y: string)
system.add(x: var seq[T], y: openarray[T])
```
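The failure above has a close Python analogy: materializing a custom range-like object requires an iteration protocol, just as `toSeq` needs a matching `items` iterator for `Slice[T]`. The sketch below is Python, not Nim, and only mirrors the shape of the fix.

```python
class Slice:
    """Python stand-in for Nim's Slice[T]: an inclusive low..high range."""
    def __init__(self, a: int, b: int) -> None:
        self.a, self.b = a, b

    def __iter__(self):
        # Analogue of `iterator items*[T](a: Slice[T])` in Nim: without it,
        # list(Slice(1, 4)) fails much as toSeq(r) did.
        i = self.a
        while i <= self.b:
            yield i
            i += 1

s = list(Slice(1, 4))
```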
|
1.0
|
`items` for `Slice` makes `toSeq` fail - I have encountered some strange behaviour (in Nim 0.10.2) when trying to create a sequence from a slice of integers:
```nim
from sequtils import toSeq
# This works.
let s1 = toSeq(1..4)
# This fails.
let r: Slice[int] = 1..4
let s2 = toSeq(r)
```
This fails:
```
Error: type mismatch: got (Slice[int])
but expected one of:
system.items(a: cstring): iter[char]
system.items(a: seq[T]): iter[T]
system.items(a: string): iter[char]
system.items(E: typedesc[enum]): iter[typedesc[enum]]
system.items(a: array[IX, T]): iter[T]
system.items(a: set[T]): iter[T]
system.items(a: openarray[T]): iter[T]
```
I tried to overload `system.items` for `Slice` (until now, the `openarray`-accepting variant seems to have been called):
```nim
iterator items*[T](a: Slice[T]): T {.inline.} =
## iterates over each item of `a`.
var i = a.a
while i <= a.b:
yield i
inc(i)
```
(According to Araq, high-to-low slices are not supported. The docs state that `..` is just an alias for `countup`.)
With the new `items` overload, this error occurs:
```
Error: type mismatch: got (seq[Slice[int]], int)
but expected one of:
system.add(x: var string, y: cstring)
system.add(x: var string, y: char)
system.add(x: var seq[T], y: T)
system.add(x: var string, y: string)
system.add(x: var seq[T], y: openarray[T])
```
The implementation of the `toSeq` template (https://github.com/Araq/Nim/blob/devel/lib/pure/collections/sequtils.nim#L309) uses `seq[type(iter)]`. However, `type(1..4)` doesn't return `int` as expected, but raises an error (in interactive mode):
```
>>> type(1..4)
stdin(6, 8) Error: ')' expected
stdin(6, 6) Error: type mismatch: got (typedesc[int], int literal(4))
but expected one of:
system...(a: T, b: T): Slice[T]
system...(b: T): Slice[T]
stdin(6, 10) Error: invalid indentation
stdin(6, 10) Error: expression expected, but found ')'
```
As a result, the `for` loop in `toSeq` seems to work, but the following call to `add` fails as it is passed an unexpected (and unreasonable) type of sequence (`seq[Slice[int]]` instead of `seq[int]`; see my comment on `type(1..4)` above):
```
Error: type mismatch: got (seq[Slice[int]], int)
but expected one of:
system.add(x: var string, y: cstring)
system.add(x: var string, y: char)
system.add(x: var seq[T], y: T)
system.add(x: var string, y: string)
system.add(x: var seq[T], y: openarray[T])
```
|
priority
|
items for slice makes toseq fail i have encountered some strange behaviour in nim when trying to create a sequence from a slice of integers nim from sequtils import toseq this works let toseq this fails let r slice let toseq r this fails error type mismatch got slice but expected one of system items a cstring iter system items a seq iter system items a string iter system items e typedesc iter system items a array iter system items a set iter system items a openarray iter i tried to overload system items for slice until now the openarray accepting variant seems to have been called nim iterator items a slice t inline iterates over each item of a var i a a while i a b yield i inc i according to araq high to low slices are not supported the docs state that is just an alias for countup with the new items overload this error occurs error type mismatch got seq int but expected one of system add x var string y cstring system add x var string y char system add x var seq y t system add x var string y string system add x var seq y openarray the implementation of the toseq template uses seq however type doesn t return int as expected but raises an error in interactive mode type stdin error expected stdin error type mismatch got typedesc int literal but expected one of system a t b t slice system b t slice stdin error invalid indentation stdin error expression expected but found as a result the for loop in toseq seems to work but the following call to add fails as it is passed an unexpected and unreasonable type of sequence seq instead of seq see my comment on type above error type mismatch got seq int but expected one of system add x var string y cstring system add x var string y char system add x var seq y t system add x var string y string system add x var seq y openarray
| 1
|
330,436
| 10,040,296,307
|
IssuesEvent
|
2019-07-18 19:35:28
|
bluecherrydvr/bluecherry-apps
|
https://api.github.com/repos/bluecherrydvr/bluecherry-apps
|
opened
|
TLS and SASL email authentication fails
|
Bug Priority High Server
|
When configured with TLS/SSL and port 587, emails will fail with "E: Failed to connect to ssl://DOMAIN:587 [SMTP: Failed to connect socket: fsockopen(): unable to connect to ssl://DOMAIN:587 (Unknown error) (code: -1, response: )]"
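The `fsockopen(): unable to connect to ssl://DOMAIN:587` error reads like an implicit-TLS socket being opened on port 587, which instead expects a plaintext connection upgraded via STARTTLS (implicit TLS belongs on port 465). A hedged Python sketch of the distinction — host and credentials are placeholders, and this is not Bluecherry's PHP code:

```python
import smtplib

def smtp_port(implicit_tls: bool) -> int:
    """465 is implicit TLS (SMTP_SSL); 587 is submission with STARTTLS."""
    return 465 if implicit_tls else 587

def send_via_starttls(host: str, user: str, password: str) -> None:
    """Connect in plaintext on 587, then upgrade — never ssl:// on 587."""
    with smtplib.SMTP(host, smtp_port(implicit_tls=False), timeout=10) as smtp:
        smtp.starttls()              # upgrade the plaintext connection
        smtp.login(user, password)
```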
|
1.0
|
TLS and SASL email authentication fails - When configured with TLS/SSL and port 587, emails will fail with "E: Failed to connect to ssl://DOMAIN:587 [SMTP: Failed to connect socket: fsockopen(): unable to connect to ssl://DOMAIN:587 (Unknown error) (code: -1, response: )]"
|
priority
|
tls and sasl email authentication fails when configured with tls ssl and port emails will fail with e failed to connect to ssl domain
| 1
|
526,465
| 15,293,685,816
|
IssuesEvent
|
2021-02-24 00:48:20
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
Image upload in columns uploads image twice
|
[Block] Image [Feature] Drag and Drop [Priority] High [Status] Duplicate [Type] Bug
|
## Description
Uploaded an image through the media dialog to fill an image block in a column, the image gets uploaded twice. It populates the selected image block but also creates a new image block and populates this one as well.
## Step-by-step reproduction instructions
1. Clean installation
2. Activate current Gutenberg plugin
3. Create a columns block
4. Insert an image block in one column
5. Open the media dialog
6. Upload an image
7. Select
8. See two images in the content
9. Open media dialog again
10. See image has been uploaded twice
## Expected behaviour
The image gets uploaded once and only the selected image block gets filled with an image
## Actual behaviour
When I upload the image, the image gets uploaded twice. Gutenberg populates the selected image block but also creates a new image block and populates this block as well.
Sometimes, but I am not quite sure how to reproduce that, when uploading, the media dialog goes white (only the close cross is left) and only the newly created block gets populated. I guess both are related.
## Screenshots or screen recording (optional)

## WordPress information
- WordPress version:5.6.1
- Gutenberg version: 9.9.1
- Are all plugins except Gutenberg deactivated? Yes
- Are you using a default theme (e.g. Twenty Twenty-One)? Yes
## Device information
- Device: Desktop
- Operating system: Ubuntu 18.04
- Browser: Firefox 84.0.2
|
1.0
|
Image upload in columns uploads image twice - ## Description
Uploaded an image through the media dialog to fill an image block in a column, the image gets uploaded twice. It populates the selected image block but also creates a new image block and populates this one as well.
## Step-by-step reproduction instructions
1. Clean installation
2. Activate current Gutenberg plugin
3. Create a columns block
4. Insert an image block in one column
5. Open the media dialog
6. Upload an image
7. Select
8. See two images in the content
9. Open media dialog again
10. See image has been uploaded twice
## Expected behaviour
The image gets uploaded once and only the selected image block gets filled with an image
## Actual behaviour
When I upload the image, the image gets uploaded twice. Gutenberg populates the selected image block but also creates a new image block and populates this block as well.
Sometimes, but I am not quite sure how to reproduce that, when uploading, the media dialog goes white (only the close cross is left) and only the newly created block gets populated. I guess both are related.
## Screenshots or screen recording (optional)

## WordPress information
- WordPress version:5.6.1
- Gutenberg version: 9.9.1
- Are all plugins except Gutenberg deactivated? Yes
- Are you using a default theme (e.g. Twenty Twenty-One)? Yes
## Device information
- Device: Desktop
- Operating system: Ubuntu 18.04
- Browser: Firefox 84.0.2
|
priority
|
image upload in columns uploads image twice description uploaded an image through the media dialog to fill an image block in a column the image gets uploaded twice it populates the selected image block but also creates a new image block and populates this one as well step by step reproduction instructions clean installation activate current gutenberg plugin create a columns block insert an image block in one column open the media dialog upload an image select see two images in the content open media dialog again see image has been uploaded twice expected behaviour the image gets uploaded once and only the selected image block gets filled with an image actual behaviour when i upload the image the image gets uploaded twice gutenberg populates the selected image block bot also creates a new image block and populates this block as well sometimes but i am not quite sure how to reproduce that when uploading the media dialog gets white only close cross is left and only the newly created block gets populated i guess both are related screenshots or screen recording optional wordpress information wordpress version gutenberg version are all plugins except gutenberg deactivated yes are you using a default theme e g twenty twenty one yes device information device desktop operating system ubuntu browser firefox
| 1
|
390,420
| 11,543,502,757
|
IssuesEvent
|
2020-02-18 09:43:30
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
[0.9.0 staging-1334] Contracts transport issues
|
Priority: High Status: Fixed
|
1. No items to move are listed when creating a transport contract
(To Move: position)
We had this list of items previously

2. Contractor won't see markers on the storages from a contract.

3. Contract can be finished without full transportation
This contract was for 20 Bricks and 20 Logs transportation.

Kirill has moved my 20 logs into his small stockpile and added 20 more logs there.

Then he moved these 40 logs to the destination stockpile.

Contract was finished without bricks

|
1.0
|
[0.9.0 staging-1334] Contracts transport issues - 1. No items to move are listed when creating a transport contract
(To Move: position)
We had this list of items previously

2. Contractor won't see markers on the storages from a contract.

3. Contract can be finished without full transportation
This contract was for 20 Bricks and 20 Logs transportation.

Kirill has moved my 20 logs into his small stockpile and added 20 more logs there.

Then he moved these 40 logs to the destination stockpile.

Contract was finished without bricks

|
priority
|
contracts transport issues don t have items to move when creating a transport contract to move position we had this list of items previously contractor won t see markers on the storages from a contract contract can be finished without full transportation this contract was for bricks and logs transportation kirill has moved my logs into his small stockpile and added more logs there then he moved these logs to the destination stockpile contract was finished without bricks
| 1
|
510,486
| 14,791,378,235
|
IssuesEvent
|
2021-01-12 13:25:28
|
BeamMW/beam
|
https://api.github.com/repos/BeamMW/beam
|
closed
|
IOS mobile wallet crash
|
bug crash high priority mobile
|
**Bug description**
The mobile wallet was connected to its own node (Google Cloud Platform) and crashed when sending a max privacy transaction.

**The base of the wallet and the logs file**
[ldb.zip](https://github.com/BeamMW/beam/files/5673130/ldb.zip)
password :1
**Platform and build:**
- IOS
- OS version: 14.2
- Build number: 5.2.7.
|
1.0
|
IOS mobile wallet crash - **Bug description**
The mobile wallet was connected to its own node (Google Cloud Platform) and crashed when sending a max privacy transaction.

**The base of the wallet and the logs file**
[ldb.zip](https://github.com/BeamMW/beam/files/5673130/ldb.zip)
password :1
**Platform and build:**
- IOS
- OS version: 14.2
- Build number: 5.2.7.
|
priority
|
ios mobile wallet crash bug description the mobile wallet was connected to its own node google cloud platform and crashed when sending max privacy transaction the base of the wallet and the logs file password platform and build ios os version build number
| 1
|
382,890
| 11,339,532,113
|
IssuesEvent
|
2020-01-23 02:23:24
|
woocommerce/woocommerce-gateway-paypal-express-checkout
|
https://api.github.com/repos/woocommerce/woocommerce-gateway-paypal-express-checkout
|
closed
|
"PayPal Express Checkout" option missing from checkout page when "PayPal Credit" is enabled
|
Priority: High [Type] Bug
|
On the checkout page, the "PayPal Express Checkout" payment option only appears if the "Enable PayPal Credit" setting is not checked.
_Edit:_ This only applies when Smart Payment Buttons are disabled.
Disabled | Enabled
-- | --
<img width="340" alt="screen shot 2018-05-16 at 11 01 36 am" src="https://user-images.githubusercontent.com/1867547/40131919-4e75789a-5909-11e8-8e07-3a66924bd6e1.png"> | <img width="342" alt="screen shot 2018-05-16 at 1 01 10 pm" src="https://user-images.githubusercontent.com/1867547/40131925-5133d1ee-5909-11e8-96df-94f6643457d7.png">
_(All available payment methods are shown.)_
This is because the "PayPal Credit" gateway [inherits its `id`](https://github.com/woocommerce/woocommerce-gateway-paypal-express-checkout/blob/master/includes/class-wc-gateway-ppec-with-paypal-credit.php#L11) from the "PayPal Express Checkout" gateway, at which point it is not unique. Note that at one point in its history, the "PayPal Credit" gateway a) [did have its own ID](https://github.com/woocommerce/woocommerce-gateway-paypal-express-checkout/blob/c614592983f65c18beafa36ddade36abf60a8704/includes/class-wc-gateway-ppec-with-paypal-credit.php#L10), and b) [inherited from an abstract class](https://github.com/woocommerce/woocommerce-gateway-paypal-express-checkout/blob/c614592983f65c18beafa36ddade36abf60a8704/includes/class-wc-gateway-ppec-with-paypal-credit.php#L7) that didn't have its own ID.
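Why a non-unique gateway `id` hides one option: WooCommerce keys its available gateways by `id`, so whichever gateway registers later overwrites the earlier one under the same key. A minimal sketch with a plain dict (hypothetical names and registration order, not WooCommerce's actual code):

```python
class Gateway:
    def __init__(self, gateway_id: str, title: str) -> None:
        self.id = gateway_id
        self.title = title

def available_gateways(gateways: list) -> dict:
    """Key gateways by id; a duplicate id silently overwrites."""
    return {g.id: g for g in gateways}

express = Gateway("ppec_paypal", "PayPal Express Checkout")
credit = Gateway("ppec_paypal", "PayPal Credit")     # inherited, non-unique id
visible = available_gateways([express, credit])
```

Only one entry survives, matching the screenshot where Express Checkout disappears once Credit is enabled; giving Credit its own id restores both.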
|
1.0
|
"PayPal Express Checkout" option missing from checkout page when "PayPal Credit" is enabled - On the checkout page, the "PayPal Express Checkout" payment option only appears if the "Enable PayPal Credit" setting is not checked.
_Edit:_ This only applies when Smart Payment Buttons are disabled.
Disabled | Enabled
-- | --
<img width="340" alt="screen shot 2018-05-16 at 11 01 36 am" src="https://user-images.githubusercontent.com/1867547/40131919-4e75789a-5909-11e8-8e07-3a66924bd6e1.png"> | <img width="342" alt="screen shot 2018-05-16 at 1 01 10 pm" src="https://user-images.githubusercontent.com/1867547/40131925-5133d1ee-5909-11e8-96df-94f6643457d7.png">
_(All available payment methods are shown.)_
This is because the "PayPal Credit" gateway [inherits its `id`](https://github.com/woocommerce/woocommerce-gateway-paypal-express-checkout/blob/master/includes/class-wc-gateway-ppec-with-paypal-credit.php#L11) from the "PayPal Express Checkout" gateway, at which point it is not unique. Note that at one point in its history, the "PayPal Credit" gateway a) [did have its own ID](https://github.com/woocommerce/woocommerce-gateway-paypal-express-checkout/blob/c614592983f65c18beafa36ddade36abf60a8704/includes/class-wc-gateway-ppec-with-paypal-credit.php#L10), and b) [inherited from an abstract class](https://github.com/woocommerce/woocommerce-gateway-paypal-express-checkout/blob/c614592983f65c18beafa36ddade36abf60a8704/includes/class-wc-gateway-ppec-with-paypal-credit.php#L7) that didn't have its own ID.
|
priority
|
paypal express checkout option missing from checkout page when paypal credit is enabled on the checkout page the paypal express checkout payment option only appears if the enable paypal credit setting is not checked edit this only applies when smart payment buttons are disabled disabled enabled img width alt screen shot at am src img width alt screen shot at pm src all available payment methods are shown this is because the paypal credit gateway from the paypal express checkout gateway at which point it is not unique note that at one point in its history the paypal credit gateway a and b that didn t have its own id
| 1
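The gateway-ID collision described in the record above can be illustrated generically. This is a minimal Python sketch (hypothetical class and registry names, not the plugin's actual PHP code) of why a subclass that inherits its parent's `id` silently disappears from any registry keyed by that id, and why giving it a unique id restores both entries.

```python
# Sketch of the inherited-id pitfall: a duplicate id overwrites the
# earlier gateway in an id-keyed registry.

class Gateway:
    id = "ppec_paypal"          # parent gateway id

class CreditGateway(Gateway):
    pass                        # inherits id -> no longer unique

class FixedCreditGateway(Gateway):
    id = "ppec_paypal_credit"   # restoring a unique id fixes the collision

def register(gateways):
    # Keyed by id: the second gateway with the same id replaces the first.
    return {g.id: g for g in gateways}

broken = register([Gateway(), CreditGateway()])   # only one entry survives
fixed = register([Gateway(), FixedCreditGateway()])  # both entries survive
```

This mirrors the fix hinted at in the issue: either the credit gateway declares its own `id`, or it inherits from an abstract base that has none.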
|
268,284
| 8,405,775,087
|
IssuesEvent
|
2018-10-11 16:03:33
|
joyent/conch
|
https://api.github.com/repos/joyent/conch
|
closed
|
Dedup device reports
|
high priority
|
When receiving a device report, if it exactly matches the most recently received report, do not store it or run validations.
Currently, the endpoint returns the results of executing a validation plan. The return payload from the endpoint, in the case of deduplication, will be the validation results of the previous run.
Note: this probably won't catch too many duplicates until #458 and joyent/conch-relay#152 are complete.
|
1.0
|
Dedup device reports - When receiving a device report, if it exactly matches the most recently received report, do not store it or run validations.
Currently, the endpoint returns the results of executing a validation plan. The return payload from the endpoint, in the case of deduplication, will be the validation results of the previous run.
Note: this probably won't catch too many duplicates until #458 and joyent/conch-relay#152 are complete.
|
priority
|
dedup device reports when receiving a device report if it exactly matches the most recently received report do not store it or run validations currently the endpoint returns the results of executing a validation plan the return payload from the endpoint in the case of deduplication will be the validation results of the previous run note this probably won t catch too many duplicates until and joyent conch relay are complete
| 1
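The deduplication behaviour described in the record above can be sketched as follows. This is a minimal Python illustration with hypothetical names (`receive_report`, `_last`), not Conch's actual implementation: hash a canonicalized form of the report, and if it matches the device's previous report, return the cached validation results instead of storing the report or re-running the plan.

```python
import hashlib
import json

_last = {}  # device_id -> (report_digest, validation_results)

def _digest(report):
    # Canonical JSON so key order does not affect the comparison.
    return hashlib.sha256(
        json.dumps(report, sort_keys=True).encode()
    ).hexdigest()

def receive_report(device_id, report, run_validations):
    digest = _digest(report)
    cached = _last.get(device_id)
    if cached and cached[0] == digest:
        return cached[1]            # exact duplicate: reuse previous results
    results = run_validations(report)
    _last[device_id] = (digest, results)
    return results
```

The endpoint contract is preserved: callers always get validation results back; for a duplicate they are simply the previous run's results.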
|
603,299
| 18,537,667,933
|
IssuesEvent
|
2021-10-21 13:13:42
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[studio] Adjust plugin wiring
|
enhancement priority: high
|
# Feature Request
#### Is your feature request related to a problem? Please describe.
Current wiring makes assumptions about the structure new elements need to be injected at.
#### Describe the solution you'd like
Please adjust to allow specifying exactly where to insert
#### Describe alternatives you've considered
{{A clear and concise description of any alternative solutions or features you've considered.}}
#### Additional context
{{Add any other context or screenshots about the feature request here.}}
|
1.0
|
[studio] Adjust plugin wiring - # Feature Request
#### Is your feature request related to a problem? Please describe.
Current wiring makes assumptions about the structure new elements need to be injected at.
#### Describe the solution you'd like
Please adjust to allow specifying exactly where to insert
#### Describe alternatives you've considered
{{A clear and concise description of any alternative solutions or features you've considered.}}
#### Additional context
{{Add any other context or screenshots about the feature request here.}}
|
priority
|
adjust plugin wiring feature request is your feature request related to a problem please describe current wiring makes assumptions about the structure new elements need to be injected at describe the solution you d like please adjust to allow specifying exactly where to insert describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here
| 1
|
362,028
| 10,722,146,616
|
IssuesEvent
|
2019-10-27 09:41:20
|
AY1920S1-CS2103T-F14-1/main
|
https://api.github.com/repos/AY1920S1-CS2103T-F14-1/main
|
closed
|
Add display for question statements
|
priority.High severity.High type.Story
|
- [x] Create javaFx control to display full question statement
- [x] Create controller class that will load and display information based on the Question object.
- [x] Integrate with existing UI
|
1.0
|
Add display for question statements - - [x] Create javaFx control to display full question statement
- [x] Create controller class that will load and display information based on the Question object.
- [x] Integrate with existing UI
|
priority
|
add display for question statements create javafx control to display full question statement create controller class that will load and display information based on the question object integrate with existing ui
| 1
|
386,041
| 11,430,786,393
|
IssuesEvent
|
2020-02-04 10:46:09
|
ahmedkaludi/accelerated-mobile-pages
|
https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages
|
closed
|
Getting fatal along with amp theme framework with the latest version 1.0.20
|
NEXT UPDATE Urgent [Priority: HIGH] bug
|
https://secure.helpscout.net/conversation/1070940476/108971?folderId=1060556
https://monosnap.com/file/8YBxtWEczVNo4WIhe1AbHPeavMlyt0
Warning: Use of undefined constant AMPFORWP_CUSTOM_THEME - assumed 'AMPFORWP_CUSTOM_THEME' (this will throw an Error in a future version of PHP) in C:\xampp\htdocs\lc\wp-content\plugins\accelerated-mobile-pages\components\theme-loader.php on line 18
Notice: Undefined variable: amp_main_dir in C:\xampp\htdocs\lc\wp-content\plugins\accelerated-mobile-pages\components\theme-loader.php on line 20
Warning: require_once(/functions.php): failed to open stream: No such file or directory in C:\xampp\htdocs\lc\wp-content\plugins\accelerated-mobile-pages\components\theme-loader.php on line 22
Fatal error: require_once(): Failed opening required '/functions.php' (include_path='C:\xampp\php\PEAR') in C:\xampp\htdocs\lc\wp-content\plugins\accelerated-mobile-pages\components\theme-loader.php on line 22
[redux_options_redux_builder_amp_backup_01-02-2020 (1).txt](https://github.com/ahmedkaludi/accelerated-mobile-pages/files/4142329/redux_options_redux_builder_amp_backup_01-02-2020.1.txt)
User is using First Pixel theme
|
1.0
|
Getting fatal along with amp theme framework with the latest version 1.0.20 -
https://secure.helpscout.net/conversation/1070940476/108971?folderId=1060556
https://monosnap.com/file/8YBxtWEczVNo4WIhe1AbHPeavMlyt0
Warning: Use of undefined constant AMPFORWP_CUSTOM_THEME - assumed 'AMPFORWP_CUSTOM_THEME' (this will throw an Error in a future version of PHP) in C:\xampp\htdocs\lc\wp-content\plugins\accelerated-mobile-pages\components\theme-loader.php on line 18
Notice: Undefined variable: amp_main_dir in C:\xampp\htdocs\lc\wp-content\plugins\accelerated-mobile-pages\components\theme-loader.php on line 20
Warning: require_once(/functions.php): failed to open stream: No such file or directory in C:\xampp\htdocs\lc\wp-content\plugins\accelerated-mobile-pages\components\theme-loader.php on line 22
Fatal error: require_once(): Failed opening required '/functions.php' (include_path='C:\xampp\php\PEAR') in C:\xampp\htdocs\lc\wp-content\plugins\accelerated-mobile-pages\components\theme-loader.php on line 22
[redux_options_redux_builder_amp_backup_01-02-2020 (1).txt](https://github.com/ahmedkaludi/accelerated-mobile-pages/files/4142329/redux_options_redux_builder_amp_backup_01-02-2020.1.txt)
User is using First Pixel theme
|
priority
|
getting fatal along with amp theme frame work with the latest version warning use of undefined constant ampforwp custom theme assumed ampforwp custom theme this will throw an error in a future version of php in c xampp htdocs lc wp content plugins accelerated mobile pages components theme loader php on line notice undefined variable amp main dir in c xampp htdocs lc wp content plugins accelerated mobile pages components theme loader php on line warning require once functions php failed to open stream no such file or directory in c xampp htdocs lc wp content plugins accelerated mobile pages components theme loader php on line fatal error require once failed opening required functions php include path c xampp php pear in c xampp htdocs lc wp content plugins accelerated mobile pages components theme loader php on line user is using first pixel theme
| 1
|
467,282
| 13,445,138,143
|
IssuesEvent
|
2020-09-08 10:54:23
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Review of the story settings UI
|
Accepted GeoStory Priority: High Project: C039 enhancement
|
## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
The UI of the [story settings](https://mapstore.readthedocs.io/en/latest/user-guide/story-setting/) needs to be reviewed and small additional features included to improve the customization of the story header. The required updates are reported in the mockup below.

**What kind of improvement you want to add?** (check one with "x", remove the others)
- [X] Minor changes to existing features
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] CI related changes
- [ ] Other... Please describe:
## Other useful information
|
1.0
|
Review of the story settings UI - ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
The UI of the [story settings](https://mapstore.readthedocs.io/en/latest/user-guide/story-setting/) needs to be reviewed and small additional features included to improve the customization of the story header. The required updates are reported in the mockup below.

**What kind of improvement you want to add?** (check one with "x", remove the others)
- [X] Minor changes to existing features
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] CI related changes
- [ ] Other... Please describe:
## Other useful information
|
priority
|
review of the story settings ui description the ui of the needs to be reviewed and small additional features included to improve the customization of the story header the required updates are reported in the mockup below what kind of improvement you want to add check one with x remove the others minor changes to existing features code style update formatting local variables refactoring no functional changes no api changes build related changes ci related changes other please describe other useful information
| 1
|
479,263
| 13,793,747,474
|
IssuesEvent
|
2020-10-09 15:22:37
|
CatalogueOfLife/portal-components
|
https://api.github.com/repos/CatalogueOfLife/portal-components
|
closed
|
Source dataset metadata loaded from wrong endpoint
|
high priority
|
The metadata for a project's source dataset is not /dataset/1234 but is scoped within the project/release:
Instead of:
https://api.catalogue.life/dataset/1146
use
https://api.catalogue.life/dataset/3LR/source/1146
This will differ over time as the release source metadata is immutable and archived, while the global dataset one is changing continuously. See https://github.com/CatalogueOfLife/backend/issues/689#issuecomment-617808658
Please check if the source dataset metadata retrieval needs to be changed elsewhere too!
|
1.0
|
Source dataset metadata loaded from wrong endpoint - The metadata for a project's source dataset is not /dataset/1234 but is scoped within the project/release:
Instead of:
https://api.catalogue.life/dataset/1146
use
https://api.catalogue.life/dataset/3LR/source/1146
This will differ over time as the release source metadata is immutable and archived, while the global dataset one is changing continuously. See https://github.com/CatalogueOfLife/backend/issues/689#issuecomment-617808658
Please check if the source dataset metadata retrieval needs to be changed elsewhere too!
|
priority
|
source dataset metadata loaded from wrong endpoint the metadata for a projects source dataset is not dataset but is scoped within the project release instead of use this will differ over time as the release source metadata is immutable and archived while the global dataset one is changing continuously see please check if the source dataset metadata retrieval needs to be changed elsewhere too
| 1
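The endpoint change described in the record above amounts to building a release-scoped URL instead of the mutable global one. A minimal sketch, assuming hypothetical helper names; the URLs match the examples given in the issue:

```python
API_BASE = "https://api.catalogue.life"

def global_dataset_url(dataset_key):
    # Global dataset metadata: changes continuously, so it is the wrong
    # source for an archived release.
    return f"{API_BASE}/dataset/{dataset_key}"

def source_dataset_url(release_key, dataset_key):
    # Release-scoped source metadata: immutable and archived per release.
    return f"{API_BASE}/dataset/{release_key}/source/{dataset_key}"
```

Any other place that fetches source-dataset metadata would need the same switch from the first form to the second.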
|
211,854
| 7,208,571,029
|
IssuesEvent
|
2018-02-07 03:53:50
|
DroidKaigi/conference-app-2018
|
https://api.github.com/repos/DroidKaigi/conference-app-2018
|
opened
|
Fatal Exception: java.lang.NoSuchMethodError android.content.res.Configuration.getLayoutDirection
|
high priority welcome contribute
|
## Overview (Required)
- I will add description
## Links
-
|
1.0
|
Fatal Exception: java.lang.NoSuchMethodError android.content.res.Configuration.getLayoutDirection - ## Overview (Required)
- I will add description
## Links
-
|
priority
|
fatal exception java lang nosuchmethoderror android content res configuration getlayoutdirection overview required i will add description links
| 1
|
225,901
| 7,496,092,541
|
IssuesEvent
|
2018-04-08 05:25:02
|
CS2103JAN2018-T15-B4/main
|
https://api.github.com/repos/CS2103JAN2018-T15-B4/main
|
closed
|
Invalid addevent Format Undetected
|
priority.high type.bug
|
addevent et/Movie Outing ed/Watching Black Panther el/Suntec City GV edt/22&04+2018=1630 was accepted even though datetime wasn't in the expected dd-mm-yyyy hhmm format.
|
1.0
|
Invalid addevent Format Undetected - addevent et/Movie Outing ed/Watching Black Panther el/Suntec City GV edt/22&04+2018=1630 was accepted even though datetime wasn't in the expected dd-mm-yyyy hhmm format.
|
priority
|
invalid addevent format undetected addevent et movie outing ed watching black panther el suntec city gv edt was accepted even though datetime wasn t in the expected dd mm yyyy hhmm format
| 1
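The missing check reported in the record above can be closed with a strict format validation. This is a minimal Python sketch (hypothetical helper name, not the app's actual Java validator) that accepts only the documented dd-mm-yyyy hhmm format and rejects malformed values such as the one that slipped through:

```python
from datetime import datetime

def is_valid_event_datetime(text):
    # strptime enforces both the separators and calendar validity
    # (e.g. 31-02 is rejected as an impossible date).
    try:
        datetime.strptime(text, "%d-%m-%Y %H%M")
        return True
    except ValueError:
        return False
```

With this in place, "22&04+2018=1630" would be rejected while "22-04-2018 1630" would be accepted.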
|
315,402
| 9,620,294,326
|
IssuesEvent
|
2019-05-14 08:00:39
|
goharbor/harbor
|
https://api.github.com/repos/goharbor/harbor
|
closed
|
Got 500 when signing an image with v1.8.0-rc1
|
priority/high target/1.8.0
|
```
latest: digest: sha256:8fb64fee8b9f05f92c1c9fca2f3715f5973ba0c538a05463976e48a2566233c1 size: 2626
Signing and pushing trust metadata
unable to reach trust server at this time: 500.
```
|
1.0
|
Got 500 when signing an image with v1.8.0-rc1 - ```
latest: digest: sha256:8fb64fee8b9f05f92c1c9fca2f3715f5973ba0c538a05463976e48a2566233c1 size: 2626
Signing and pushing trust metadata
unable to reach trust server at this time: 500.
```
|
priority
|
got when to sign an image with latest digest size signing and pushing trust metadata unable to reach trust server at this time
| 1
|
97,160
| 3,985,504,914
|
IssuesEvent
|
2016-05-07 22:53:21
|
Brickimedia/brickimedia
|
https://api.github.com/repos/Brickimedia/brickimedia
|
closed
|
Make Refreshed less brickimedia-specific
|
[browser] IE [improv] Enhancement [priority] Mid-high [skin] Refreshed [wiki] Global
|
The Refreshed skin should be less brickimedia specific so other wikis don't have to download unnecessary files they're not going to use (files should only be there if they're going to be used globally otherwise local files should be on our own local site), this means:
- [ ] The `/images` folder shouldn't include brickimedia wiki logos, they should just include the icons required for mobile.
- [x] Remove the `.png` wordmark fallbacks since we're officially not supporting IE 8 anymore. Also helps with performance since loading less files that aren't being used
- [ ] Requires some rewriting of Refreshed header component
- [ ] That means all these logos should turn into variables required to be entered into [brickimedia/LocalSettings](https://github.com/Brickimedia/LocalSettings), meaning this would require making changes across multiple repositories. (all of our local variable values would be from `image.brickimedia.org` and uploaded to Meta), and for other wikis they would require editing their own `LocalSettings.php` file
- [ ] Since this breaks directory paths, this should be counted as a major patch instead of a minor patch. So instead of 3.1.1, this should be included in 4.1.1 and should be reflected on the [Refreshed.php](https://github.com/Brickimedia/Refreshed/blob/master/Refreshed.php) file (Note: I found a good way of versioning that I like in of these comments at https://github.com/alrra/browser-logos/issues/76)
- [ ] Reflect these changes on mw docs https://www.mediawiki.org/wiki/Skin:Refreshed
|
1.0
|
Make Refreshed less brickimedia-specific - The Refreshed skin should be less brickimedia specific so other wikis don't have to download unnecessary files they're not going to use (files should only be there if they're going to be used globally otherwise local files should be on our own local site), this means:
- [ ] The `/images` folder shouldn't include brickimedia wiki logos, they should just include the icons required for mobile.
- [x] Remove the `.png` wordmark fallbacks since we're officially not supporting IE 8 anymore. Also helps with performance since loading less files that aren't being used
- [ ] Requires some rewriting of Refreshed header component
- [ ] That means all these logos should turn into variables required to be entered into [brickimedia/LocalSettings](https://github.com/Brickimedia/LocalSettings), meaning this would require making changes across multiple repositories. (all of our local variable values would be from `image.brickimedia.org` and uploaded to Meta), and for other wikis they would require editing their own `LocalSettings.php` file
- [ ] Since this breaks directory paths, this should be counted as a major patch instead of a minor patch. So instead of 3.1.1, this should be included in 4.1.1 and should be reflected on the [Refreshed.php](https://github.com/Brickimedia/Refreshed/blob/master/Refreshed.php) file (Note: I found a good way of versioning that I like in of these comments at https://github.com/alrra/browser-logos/issues/76)
- [ ] Reflect these changes on mw docs https://www.mediawiki.org/wiki/Skin:Refreshed
|
priority
|
make refreshed less brickimedia specific the refreshed skin should be less brickimedia specific so other wikis don t have to download unnecessary files they re not going to use files should only be there if they re going to be used globally otherwise local files should be on our own local site this means the images folder shouldn t include brickimedia wiki logos they should just include the icons required for mobile remove the png wordmark fallbacks since we re officially not supporting ie anymore also helps with performance since loading less files that aren t being used requires some rewriting of refreshed header component that means all these logos should turn into variables required to be entered into meaning this would require making changes across multiple repositories all of our local variable values would be from image brickimedia org and uploaded to meta and for other wikis they would require editing their own localsettings php file since this breaks directory paths this should be counted as a major patch instead of a minor patch so instead of this should be included in and should be reflected on the file note i found a good way of versioning that i like in of these comments at reflect these changes on mw docs
| 1
|
87,018
| 3,736,110,240
|
IssuesEvent
|
2016-03-08 14:54:35
|
todotoit/cryptoloji
|
https://api.github.com/repos/todotoit/cryptoloji
|
closed
|
"I have a crush on you" > "Have a nice day"
|
enhancement priority:high
|
As discussed on Basecamp, we're only waiting until tomorrow morning to avoid making corrections twice. From Friday morning the 4th, unless countermanded, this becomes a priority.
- [x] change the text from "I have a crush on you" to "Have a nice day"
- [x] change the number and arrangement of the emoji in the cipher message
- [x] change the reply emoji from heart-eyes to smiling face
|
1.0
|
"I have a crush on you" > "Have a nice day" - Come discusso su Basecamp, aspettiamo solo domani mattina per evitare di fare doppie correzioni. Venerdì 4 mattina, salvo contrordine, diventa prioritaria.
- [x] cambiare il testo da "I have a crush on you" a "Have a nice day"
- [x] cambiare numero e disposizione delle emoji nel messaggio in cipher
- [x] cambiare emoji di risposta da occhi-a-cuore a faccina sorridente
|
priority
|
i have a crush on you have a nice day as discussed on basecamp we re only waiting until tomorrow morning to avoid making corrections twice from friday morning the unless countermanded this becomes a priority change the text from i have a crush on you to have a nice day change the number and arrangement of the emoji in the cipher message change the reply emoji from heart eyes to smiling face
| 1
|
780,575
| 27,400,424,415
|
IssuesEvent
|
2023-02-28 23:56:14
|
chef/chef
|
https://api.github.com/repos/chef/chef
|
opened
|
GHE migration broke teams for external contributors
|
Priority: High
|
As of the GHE migration, external contributors are no longer in any teams - and @GeorgeWestwater no longer has the ability to add them.
As a workaround, myself and others have been added directly as contributors, but this breaks `CODEOWNERS` - now those with `Reviewer` or `Owner` power no longer have the ability to approve PRs in a way that GH understands. See https://github.com/chef/ohai/pull/1787 as an example.
Looking through GHE docs, it seems like outside collaborators SHOULD be able to add outside collaborators to teams - but there's likely some Enterprise-level setting missing.
|
1.0
|
GHE migration broke teams for external contributors - As of the GHE migration, external contributors are no longer in any teams - and @GeorgeWestwater no longer has the ability to add them.
As a workaround, myself and others have been added directly as contributors, but this breaks `CODEOWNERS` - now those with `Reviewer` or `Owner` power no longer have the ability to approve PRs in a way that GH understands. See https://github.com/chef/ohai/pull/1787 as an example.
Looking through GHE docs, it seems like outside collaborators SHOULD be able to add outside collaborators to teams - but there's likely some Enterprise-level setting missing.
|
priority
|
ghe migration broke teams for external contributors as of the ghe migration external contributors are no longer in any teams and georgewestwater no longer has the ability to add them as a workaround myself and others have been added directly as contributors but this breaks codeowners now those with reviewer or owner power no longer have the ability to approve prs in a way that gh understands see as an example looking through ghe docs it seems like outside collaborators should be able to add outside collaborators to teams but there s likely some enterprise level setting missing
| 1
|
560,974
| 16,607,877,426
|
IssuesEvent
|
2021-06-02 07:19:02
|
openedx/build-test-release-wg
|
https://api.github.com/repos/openedx/build-test-release-wg
|
closed
|
Support for the Account MFE
|
affects:lilac priority:high type:enhancement
|
Coordinate, and if necessary execute, the tasks necessary to get the [Account MFE](https://github.com/edx/frontend-app-account) up to speed for Lilac:
1. It passes a minimum standard of documentation: at the very least, the READMEs should have descriptive text that includes use cases, screenshots, and particularly, documentation on that MFE's environment variables.
2. Its basic feature-set (as documented above) works reliably with the rest of the Lilac codebase.
3. It's deployed by default using the Native Installation, and deployable as a plugin using Tutor.
4. It is internationalized (i18n), localizable (l10n), and passes minimum accessibility (a11y) standards.
5. It is reasonably themable.
|
1.0
|
Support for the Account MFE - Coordinate, and if necessary execute, the tasks necessary to get the [Account MFE](https://github.com/edx/frontend-app-account) up to speed for Lilac:
1. It passes a minimum standard of documentation: at the very least, the READMEs should have descriptive text that includes use cases, screenshots, and particularly, documentation on that MFE's environment variables.
2. Its basic feature-set (as documented above) works reliably with the rest of the Lilac codebase.
3. It's deployed by default using the Native Installation, and deployable as a plugin using Tutor.
4. It is internationalized (i18n), localizable (l10n), and passes minimum accessibility (a11y) standards.
5. It is reasonably themable.
|
priority
|
support for the account mfe coordinate and if necessary execute the tasks necessary to get the up to speed for lilac it passes a minimum standard of documentation at the very least the readmes should have descriptive text that includes use cases screenshots and particularly documentation on that mfe s environment variables its basic feature set as documented above works reliably with the rest of the lilac codebase it s deployed by default using the native installation and deployable as a plugin using tutor it is internationalized localizable and passes minimum accessibility standards it is reasonably themable
| 1
|
951
| 2,505,930,045
|
IssuesEvent
|
2015-01-12 02:09:31
|
AtlasOfLivingAustralia/spatial-portal
|
https://api.github.com/repos/AtlasOfLivingAustralia/spatial-portal
|
closed
|
Imported shapefile produces -1 occurrence in Area report
|
priority-high
|
An imported shapefile that checks out topologically (see http://www.mapshaper.org/) produces an invalid area report with -1 species.
See https://www.dropbox.com/s/9pvvtsnwsee6o9i/Shapefile_Vanessa%20Westcott_MGA.zip?dl=0
|
1.0
|
Imported shapefile produces -1 occurrence in Area report - An imported shapefile that checks out topologically (see http://www.mapshaper.org/) produces an invalid area report with -1 species.
See https://www.dropbox.com/s/9pvvtsnwsee6o9i/Shapefile_Vanessa%20Westcott_MGA.zip?dl=0
|
priority
|
imported shapefile produces occurrence in area report an imported shapefile that checks out topologically see produces an invalid area report with species see
| 1
|
632,352
| 20,193,247,180
|
IssuesEvent
|
2022-02-11 08:14:17
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
opened
|
Error when adding a new VHost
|
Type/Bug Priority/High Feature/VHosts APIM - 4.1.0
|
### Description:

The following error can be seen in logs.
```
[2022-02-11 13:41:07,432] ERROR - ApiMgtDAO Failed to add VHost: c48cc8c7-25db-4b8a-98b4-5029017f4950
org.h2.jdbc.JdbcSQLIntegrityConstraintViolationException: NULL not allowed for column "PROVIDER"; SQL statement:
INSERT INTO AM_GATEWAY_ENVIRONMENT (UUID, NAME, TENANT_DOMAIN, DISPLAY_NAME, DESCRIPTION, PROVIDER, ORGANIZATION) VALUES (?,?,?,?,?,?,?) [23502-200]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:459) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:429) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.message.DbException.get(DbException.java:205) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.message.DbException.get(DbException.java:181) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.table.Column.validateConvertUpdateSequence(Column.java:374) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.table.Table.validateConvertUpdateSequence(Table.java:845) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.command.dml.Insert.insertRows(Insert.java:187) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.command.dml.Insert.update(Insert.java:151) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.command.CommandContainer.executeUpdateWithGeneratedKeys(CommandContainer.java:272) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.command.CommandContainer.update(CommandContainer.java:191) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.command.Command.executeUpdate(Command.java:251) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:191) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:152) ~[h2_1.4.200.wso2v1.jar:?]
at sun.reflect.GeneratedMethodAccessor147.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_251]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_251]
at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114) ~[jdbc-pool_9.0.35.wso2v1.jar:?]
at com.sun.proxy.$Proxy52.executeUpdate(Unknown Source) ~[?:?]
at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.addEnvironment_aroundBody636(ApiMgtDAO.java:13641) ~[org.wso2.carbon.apimgt.impl_9.16.4.jar:?]
at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.addEnvironment(ApiMgtDAO.java:13624) ~[org.wso2.carbon.apimgt.impl_9.16.4.jar:?]
at org.wso2.carbon.apimgt.impl.APIAdminImpl.addEnvironment_aroundBody4(APIAdminImpl.java:162) ~[org.wso2.carbon.apimgt.impl_9.16.4.jar:?]
at org.wso2.carbon.apimgt.impl.APIAdminImpl.addEnvironment(APIAdminImpl.java:152) ~[org.wso2.carbon.apimgt.impl_9.16.4.jar:?]
at org.wso2.carbon.apimgt.rest.api.admin.v1.impl.EnvironmentsApiServiceImpl.environmentsPost(EnvironmentsApiServiceImpl.java:102) ~[?:?]
at org.wso2.carbon.apimgt.rest.api.admin.v1.EnvironmentsApi.environmentsPost(EnvironmentsApi.java:105) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_251]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_251]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_251]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_251]
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) ~[?:?]
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) ~[?:?]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:201) ~[?:?]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:104) ~[?:?]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) ~[?:?]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) ~[?:?]
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307) ~[?:?]
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) ~[?:?]
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:265) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) ~[?:?]
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:225) ~[?:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:304) ~[?:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:217) ~[?:?]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:681) ~[tomcat-servlet-api_9.0.54.wso2v1.jar:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:279) ~[?:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:107) ~[org.wso2.carbon.identity.context.rewrite.valve_1.4.52.jar:?]
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110) ~[org.wso2.carbon.identity.authz.valve_1.4.52.jar:?]
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:102) ~[org.wso2.carbon.identity.auth.valve_1.4.52.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:101) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:146) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:58) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:895) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1722) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat_9.0.54.wso2v1.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_251]
[2022-02-11 13:41:07,433] ERROR - GlobalThrowableMapper Failed to add VHost: c48cc8c7-25db-4b8a-98b4-5029017f4950
```
|
1.0
|
Error when adding a new VHost - ### Description:

The following error can be seen in logs.
```
[2022-02-11 13:41:07,432] ERROR - ApiMgtDAO Failed to add VHost: c48cc8c7-25db-4b8a-98b4-5029017f4950
org.h2.jdbc.JdbcSQLIntegrityConstraintViolationException: NULL not allowed for column "PROVIDER"; SQL statement:
INSERT INTO AM_GATEWAY_ENVIRONMENT (UUID, NAME, TENANT_DOMAIN, DISPLAY_NAME, DESCRIPTION, PROVIDER, ORGANIZATION) VALUES (?,?,?,?,?,?,?) [23502-200]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:459) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:429) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.message.DbException.get(DbException.java:205) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.message.DbException.get(DbException.java:181) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.table.Column.validateConvertUpdateSequence(Column.java:374) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.table.Table.validateConvertUpdateSequence(Table.java:845) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.command.dml.Insert.insertRows(Insert.java:187) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.command.dml.Insert.update(Insert.java:151) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.command.CommandContainer.executeUpdateWithGeneratedKeys(CommandContainer.java:272) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.command.CommandContainer.update(CommandContainer.java:191) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.command.Command.executeUpdate(Command.java:251) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:191) ~[h2_1.4.200.wso2v1.jar:?]
at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:152) ~[h2_1.4.200.wso2v1.jar:?]
at sun.reflect.GeneratedMethodAccessor147.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_251]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_251]
at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114) ~[jdbc-pool_9.0.35.wso2v1.jar:?]
at com.sun.proxy.$Proxy52.executeUpdate(Unknown Source) ~[?:?]
at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.addEnvironment_aroundBody636(ApiMgtDAO.java:13641) ~[org.wso2.carbon.apimgt.impl_9.16.4.jar:?]
at org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO.addEnvironment(ApiMgtDAO.java:13624) ~[org.wso2.carbon.apimgt.impl_9.16.4.jar:?]
at org.wso2.carbon.apimgt.impl.APIAdminImpl.addEnvironment_aroundBody4(APIAdminImpl.java:162) ~[org.wso2.carbon.apimgt.impl_9.16.4.jar:?]
at org.wso2.carbon.apimgt.impl.APIAdminImpl.addEnvironment(APIAdminImpl.java:152) ~[org.wso2.carbon.apimgt.impl_9.16.4.jar:?]
at org.wso2.carbon.apimgt.rest.api.admin.v1.impl.EnvironmentsApiServiceImpl.environmentsPost(EnvironmentsApiServiceImpl.java:102) ~[?:?]
at org.wso2.carbon.apimgt.rest.api.admin.v1.EnvironmentsApi.environmentsPost(EnvironmentsApi.java:105) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_251]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_251]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_251]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_251]
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) ~[?:?]
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) ~[?:?]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:201) ~[?:?]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:104) ~[?:?]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) ~[?:?]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) ~[?:?]
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307) ~[?:?]
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) ~[?:?]
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:265) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) ~[?:?]
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:225) ~[?:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:304) ~[?:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:217) ~[?:?]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:681) ~[tomcat-servlet-api_9.0.54.wso2v1.jar:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:279) ~[?:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:107) ~[org.wso2.carbon.identity.context.rewrite.valve_1.4.52.jar:?]
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110) ~[org.wso2.carbon.identity.authz.valve_1.4.52.jar:?]
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:102) ~[org.wso2.carbon.identity.auth.valve_1.4.52.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:101) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:146) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:58) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126) ~[org.wso2.carbon.tomcat.ext_4.6.3.m7.jar:?]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:895) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1722) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat_9.0.54.wso2v1.jar:?]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat_9.0.54.wso2v1.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_251]
[2022-02-11 13:41:07,433] ERROR - GlobalThrowableMapper Failed to add VHost: c48cc8c7-25db-4b8a-98b4-5029017f4950
```
|
priority
|
error when adding a new vhost description the following error can be seen in logs error apimgtdao failed to add vhost org jdbc jdbcsqlintegrityconstraintviolationexception null not allowed for column provider sql statement insert into am gateway environment uuid name tenant domain display name description provider organization values at org message dbexception getjdbcsqlexception dbexception java at org message dbexception getjdbcsqlexception dbexception java at org message dbexception get dbexception java at org message dbexception get dbexception java at org table column validateconvertupdatesequence column java at org table table validateconvertupdatesequence table java at org command dml insert insertrows insert java at org command dml insert update insert java at org command commandcontainer executeupdatewithgeneratedkeys commandcontainer java at org command commandcontainer update commandcontainer java at org command command executeupdate command java at org jdbc jdbcpreparedstatement executeupdateinternal jdbcpreparedstatement java at org jdbc jdbcpreparedstatement executeupdate jdbcpreparedstatement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache tomcat jdbc pool statementfacade statementproxy invoke statementfacade java at com sun proxy executeupdate unknown source at org carbon apimgt impl dao apimgtdao addenvironment apimgtdao java at org carbon apimgt impl dao apimgtdao addenvironment apimgtdao java at org carbon apimgt impl apiadminimpl addenvironment apiadminimpl java at org carbon apimgt impl apiadminimpl addenvironment apiadminimpl java at org carbon apimgt rest api admin impl environmentsapiserviceimpl environmentspost environmentsapiserviceimpl java at org carbon apimgt rest api admin environmentsapi environmentspost environmentsapi java at sun reflect nativemethodaccessorimpl native method at sun reflect 
nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache cxf service invoker abstractinvoker performinvocation abstractinvoker java at org apache cxf service invoker abstractinvoker invoke abstractinvoker java at org apache cxf jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf interceptor serviceinvokerinterceptor run serviceinvokerinterceptor java at org apache cxf interceptor serviceinvokerinterceptor handlemessage serviceinvokerinterceptor java at org apache cxf phase phaseinterceptorchain dointercept phaseinterceptorchain java at org apache cxf transport chaininitiationobserver onmessage chaininitiationobserver java at org apache cxf transport http abstracthttpdestination invoke abstracthttpdestination java at org apache cxf transport servlet servletcontroller invokedestination servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet cxfnonspringservlet invoke cxfnonspringservlet java at org apache cxf transport servlet abstracthttpservlet handlerequest abstracthttpservlet java at org apache cxf transport servlet abstracthttpservlet dopost abstracthttpservlet java at javax servlet http httpservlet service httpservlet java at org apache cxf transport servlet abstracthttpservlet service abstracthttpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache 
catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org carbon identity context rewrite valve tenantcontextrewritevalve invoke tenantcontextrewritevalve java at org carbon identity authz valve authorizationvalve invoke authorizationvalve java at org carbon identity auth valve authenticationvalve invoke authenticationvalve java at org carbon tomcat ext valves compositevalve continueinvocation compositevalve java at org carbon tomcat ext valves tomcatvalvecontainer invokevalves tomcatvalvecontainer java at org carbon tomcat ext valves compositevalve invoke compositevalve java at org carbon tomcat ext valves carbonstuckthreaddetectionvalve invoke carbonstuckthreaddetectionvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org carbon tomcat ext valves carboncontextcreatorvalve invoke carboncontextcreatorvalve java at org carbon tomcat ext valves requestcorrelationidvalve invoke requestcorrelationidvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at org apache tomcat util threads threadpoolexecutor runworker threadpoolexecutor java at 
org apache tomcat util threads threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java error globalthrowablemapper failed to add vhost
| 1
|
673,615
| 23,022,828,009
|
IssuesEvent
|
2022-07-22 06:45:39
|
Elice-SW-2-Team14/Animal-Hospital
|
https://api.github.com/repos/Elice-SW-2-Team14/Animal-Hospital
|
closed
|
[FE] Detail page reservation registration API request
|
🔨 Feature ❗️high-priority 🖥 Frontend
|
## 🔨 Feature description
Detail page reservation registration API request
## 📑 Completion criteria
When completed without errors
## 💭 Related backlog
[[FE] Detail page]-[Main component]-[POST request with reserved date, time, and pet info]
## 💭 Estimated work time
3h
|
1.0
|
[FE] Detail page reservation registration API request - ## 🔨 Feature description
Detail page reservation registration API request
## 📑 Completion criteria
When completed without errors
## 💭 Related backlog
[[FE] Detail page]-[Main component]-[POST request with reserved date, time, and pet info]
## 💭 Estimated work time
3h
|
priority
|
detail page reservation registration api request 🔨 feature description detail page reservation registration api request 📑 completion criteria when completed without errors 💭 related backlog detail page 💭 estimated work time
| 1
|
766,837
| 26,901,288,337
|
IssuesEvent
|
2023-02-06 15:49:15
|
flowforge/flowforge
|
https://api.github.com/repos/flowforge/flowforge
|
closed
|
Leverage FlowForge credentials instead of http node auth
|
feature-request story 8 scope:collaboration priority:high area:api headline
|
**As a:** user
**I want to:** sign in to HTTP pages with my FlowForge credentials
**So that:** I don't have to share usernames and passwords
---
Node-RED has very basic support for [authentication of pages](https://nodered.org/docs/user-guide/runtime/securing-node-red#http-node-security).
We expose this in the Project Settings within FF.
However this is not always suitable for users:
1. It only supports a single, hardcoded username and password. This is not appropriate for multiple users, having to share passwords. This would fail to meet many organisation's security policies
2. It only supports Basic Auth - again, this would fall foul of some org's acceptable policy
This story will allow a user to select to secure their HTTP Endpoints with their FlowForge login.
When accessing a Node-RED hosted endpoint, if they are not currently logged in, they will be redirected to the FF login page and redirected back once signed in.
|
1.0
|
Leverage FlowForge credentials instead of http node auth - **As a:** user
**I want to:** sign in to HTTP pages with my FlowForge credentials
**So that:** I don't have to share usernames and passwords
---
Node-RED has very basic support for [authentication of pages](https://nodered.org/docs/user-guide/runtime/securing-node-red#http-node-security).
We expose this in the Project Settings within FF.
However this is not always suitable for users:
1. It only supports a single, hardcoded username and password. This is not appropriate for multiple users, having to share passwords. This would fail to meet many organisation's security policies
2. It only supports Basic Auth - again, this would fall foul of some org's acceptable policy
This story will allow a user to select to secure their HTTP Endpoints with their FlowForge login.
When accessing a Node-RED hosted endpoint, if they are not currently logged in, they will be redirected to the FF login page and redirected back once signed in.
|
priority
|
leverage flowforge credentials instead of http node auth as a user i want to sign in to http pages with my flowforge credentials so that i don t have to share usernames and passwords node red has very basic support for we expose this in the project settings within ff however this is not always suitable for users it only supports a single hardcoded username and password this is not appropriate for multiple users having to share passwords this would fail to meet many organisation s security policies it only supports basic auth again this would fall foul of some org s acceptable policy this story will allow a user to select to secure their http endpoints with their flowforge login when accessing a node red hosted endpoint if they are not currently logged in they will be redirected to the ff login page and redirected back once signed in
| 1
|
178,354
| 6,607,712,447
|
IssuesEvent
|
2017-09-19 08:15:34
|
dagcoin/dagcoin
|
https://api.github.com/repos/dagcoin/dagcoin
|
closed
|
Wallet information page looks weird
|
bug high priority
|
Buttons have different styles and are aligned differently
Strange date next to address id is displayed
Addresses are not fully visible
|
1.0
|
Wallet information page looks weird - Buttons have different styles and are aligned differently
Strange date next to address id is displayed
Addresses are not fully visible
|
priority
|
wallet information page looks weird buttons have different styles and are aligned differently strange date next to address id is displayed addresses are not fully visible
| 1
|
82,671
| 3,617,940,426
|
IssuesEvent
|
2016-02-08 08:56:44
|
knime-mpicbg/knime-scripting
|
https://api.github.com/repos/knime-mpicbg/knime-scripting
|
closed
|
row ids should be used as row names in R-data.frame
|
high priority R
|
Currently row IDs are lost when using R-scripting nodes. By simply using them as row.names of kIn they would be preserved and might be helpful when processing data with R.
|
1.0
|
row ids should be used as row names in R-data.frame - Currently row IDs are lost when using R-scripting nodes. By simply using them as row.names of kIn they would be preserved and might be helpful when processing data with R.
|
priority
|
row ids should be used as row names in r data frame currently row ids are lost when using r scripting nodes by simply using them as row names of kin they would be preserved and might be helpful when processing data with r
| 1
|
149,416
| 5,718,116,205
|
IssuesEvent
|
2017-04-19 18:46:54
|
ampproject/amphtml
|
https://api.github.com/repos/ampproject/amphtml
|
opened
|
amp-ima-video: Use native controls.
|
Category: Audio&Video P1: High Priority Type: Feature Request
|
By using native controls we will get UX consistency, built-in accessibility, Volume, Chromecast, AirPlay, Closed Caption, etc.
There are issues with using native controls in IMA that need addressing, however:
1- Initially, native controls need to be hidden and a custom "play" icon for click-to-play needs to be present to capture user intent and initialize IMA.
2- During Ad play, controls need to be hidden since there is no way to disable "seek" otherwise.
3- Native controls `Fullscreen` is a big issue, outside of iOS, IMA needs to have a different Ad container rather than the `<video>` itself to go to fullscreen so various Ad formats can be served. There are two approaches for this:
- In Chrome, we can hide the native fullscreen with `video::-webkit-media-controls-fullscreen-button { display: none }` and provide our own button.
- In FF/IE, since we are in an iframe, we can toggle iframe's `allowfullscreen` attribute which automatically removes the native fullscreen control from all `<video>`s inside the iframe.
|
1.0
|
amp-ima-video: Use native controls. - By using native controls we will get UX consistency, built-in accessibility, Volume, Chromecast, AirPlay, Closed Caption, etc.
There are issues with using native controls in IMA that need addressing, however:
1- Initially, native controls need to be hidden and a custom "play" icon for click-to-play needs to be present to capture user intent and initialize IMA.
2- During Ad play, controls need to be hidden since there is no way to disable "seek" otherwise.
3- Native controls `Fullscreen` is a big issue, outside of iOS, IMA needs to have a different Ad container rather than the `<video>` itself to go to fullscreen so various Ad formats can be served. There are two approaches for this:
- In Chrome, we can hide the native fullscreen with `video::-webkit-media-controls-fullscreen-button { display: none }` and provide our own button.
- In FF/IE, since we are in an iframe, we can toggle iframe's `allowfullscreen` attribute which automatically removes the native fullscreen control from all `<video>`s inside the iframe.
|
priority
|
amp ima video use native controls with using native controls we will get ux consistency built in accessibility volume chromecast airplay closed caption etc there are issues with using native controls in ima that need addressing however initially native controls need to be hidden and a custom play icon for click to play need to be present to capture user intent and initialize ima during ad play controls need to be hidden since there is no way to disable seek otherwise native controls fullscreen is a big issue outside of ios ima needs to have a different ad container rather than the itself to go to fullscreen so various ad formats can be served there are two approaches for this in chrome we can hide the native fullscreen with video webkit media controls fullscreen button display none and provide our own button in ff ie since we are in an iframe we can toggle iframe s allowfullscreen attribute which automatically removes the native fullscreen control from all s inside the iframe
| 1
|
185,978
| 6,732,432,425
|
IssuesEvent
|
2017-10-18 11:29:12
|
bleenco/abstruse
|
https://api.github.com/repos/bleenco/abstruse
|
closed
|
[feat]: anonymous user
|
Priority: High Status: Completed Type: Enhancement
|
- [ ] add property "demo" in a config, which defines which components anonymous user can see (Dashboard, ...)
|
1.0
|
[feat]: anonymous user - - [ ] add property "demo" in a config, which defines which components anonymous user can see (Dashboard, ...)
|
priority
|
anonymous user add property demo in a config which defines which components anonymous user can see dashboard
| 1
|
467,516
| 13,450,011,558
|
IssuesEvent
|
2020-09-08 17:49:28
|
InstituteforDiseaseModeling/covasim
|
https://api.github.com/repos/InstituteforDiseaseModeling/covasim
|
closed
|
[UI 2.0] Automatic data loading
|
CovasimUI approved highpriority ui_wishlist
|
The user should be able to select the region from a drop-down menu, and automatically load demographic and up-to-date epidemiological data for that region. The data scraping scripts have already been written by @willf , but some additional work remains:
- [ ] Update data format (e.g. `new_death` -> `new_deaths`) so loads automatically, and trim the data to start from the first diagnosis/death
- [ ] Check that scrapers still work (at least one seems to have stopped working), and figure out how to reconcile data from multiple scrapers (or just pick the most comprehensive one and go with that)
- [ ] Check ~20 locations, including ~5 US states, ~5 high-income countries, and ~10 low-income countries, and ensure that the data look reasonable
- [ ] Write method to load the data into Covasim, including population size
- [ ] In the UI, re-enable the drop-down menu for location selection (commented out in cova_app.py currently)
|
1.0
|
[UI 2.0] Automatic data loading - The user should be able to select the region from a drop-down menu, and automatically load demographic and up-to-date epidemiological data for that region. The data scraping scripts have already been written by @willf , but some additional work remains:
- [ ] Update data format (e.g. `new_death` -> `new_deaths`) so loads automatically, and trim the data to start from the first diagnosis/death
- [ ] Check that scrapers still work (at least one seems to have stopped working), and figure out how to reconcile data from multiple scrapers (or just pick the most comprehensive one and go with that)
- [ ] Check ~20 locations, including ~5 US states, ~5 high-income countries, and ~10 low-income countries, and ensure that the data look reasonable
- [ ] Write method to load the data into Covasim, including population size
- [ ] In the UI, re-enable the drop-down menu for location selection (commented out in cova_app.py currently)
|
priority
|
automatic data loading the user should be able to select the region from a drop down menu and automatically load demographic and up to date epidemiological data for that region the data scraping scripts have already been written by willf but some additional work remains update data format e g new death new deaths so loads automatically and trim the data to start from the first diagnosis death check that scrapers still work at least one seems to have stopped working and figure out how to reconcile data from multiple scrapers or just pick the most comprehensive one and go with that check locations including us states high income countries and low income countries and ensure that the data look reasonable write method to load the data into covasim including population size in the ui re enable the drop down menu for location selection commented out in cova app py currently
| 1
|
349,684
| 10,471,933,703
|
IssuesEvent
|
2019-09-23 09:03:21
|
francismaria/MaTheX2Java
|
https://api.github.com/repos/francismaria/MaTheX2Java
|
opened
|
Implement tablet responsiveness
|
frontend high priority
|
The styling of the web application is not yet prepared to be in "tablet" size. This must be solved.
|
1.0
|
Implement tablet responsiveness - The styling of the web application is not yet prepared to be in "tablet" size. This must be solved.
|
priority
|
implement tablet responsiveness the styling of the web application is not yet prepared to be in tablet size this must be solved
| 1
|
322,566
| 9,819,502,639
|
IssuesEvent
|
2019-06-13 22:14:43
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
closed
|
backtrace when attempting to synchronise thumbnailPhoto onto S3 backend
|
area: authentication area: uploads bug in progress priority: high
|
```
Traceback (most recent call last):
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "./zerver/views/auth.py", line 647, in login_page
extra_context=extra_context, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/contrib/auth/views.py", line 54, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/contrib/auth/views.py", line 150, in login
)(request)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 67, in _wrapper
return bound_func(*args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/views/decorators/debug.py", line 76, in sensitive_post_parameters_wrapper
return view(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 63, in bound_func
return func.__get__(self, type(self))(*args2, **kwargs2)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 67, in _wrapper
return bound_func(*args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 149, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 63, in bound_func
return func.__get__(self, type(self))(*args2, **kwargs2)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 67, in _wrapper
return bound_func(*args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/views/decorators/cache.py", line 57, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 63, in bound_func
return func.__get__(self, type(self))(*args2, **kwargs2)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/contrib/auth/views.py", line 90, in dispatch
return super(LoginView, self).dispatch(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/views/generic/base.py", line 88, in dispatch
return handler(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/views/generic/edit.py", line 182, in post
if form.is_valid():
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/forms/forms.py", line 183, in is_valid
return self.is_bound and not self.errors
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/forms/forms.py", line 175, in errors
self.full_clean()
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/forms/forms.py", line 385, in full_clean
self._clean_form()
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/forms/forms.py", line 412, in _clean_form
cleaned_data = self.clean()
File "./zerver/forms.py", line 283, in clean
realm=realm, return_data=return_data)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/contrib/auth/__init__.py", line 70, in authenticate
user = _authenticate_with_backend(backend, backend_path, request, credentials)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/contrib/auth/__init__.py", line 116, in _authenticate_with_backend
return backend.authenticate(*args, **credentials)
File "./zproject/backends.py", line 431, in authenticate
password=password)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django_auth_ldap/backend.py", line 150, in authenticate
user = self.authenticate_ldap_user(ldap_user, password)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django_auth_ldap/backend.py", line 210, in authenticate_ldap_user
return ldap_user.authenticate(password)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django_auth_ldap/backend.py", line 350, in authenticate
self._get_or_create_user()
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django_auth_ldap/backend.py", line 591, in _get_or_create_user
self._user, built = self.backend.get_or_build_user(username, self)
File "./zproject/backends.py", line 500, in get_or_build_user
self.sync_avatar_from_ldap(user_profile, ldap_user)
File "./zproject/backends.py", line 307, in sync_avatar_from_ldap
upload_avatar_image(BytesIO(ldap_user.attrs[avatar_attr_name][0]), user, user)
File "./zerver/lib/upload.py", line 769, in upload_avatar_image
upload_backend.upload_avatar_image(user_file, acting_user_profile, target_user_profile)
File "./zerver/lib/upload.py", line 402, in upload_avatar_image
content_type = guess_type(user_file.name)[0]
AttributeError: '_io.BytesIO' object has no attribute 'name'
```
|
1.0
|
backtrace when attempting to synchronise thumbnailPhoto onto S3 backend - ```
Traceback (most recent call last):
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "./zerver/views/auth.py", line 647, in login_page
extra_context=extra_context, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/contrib/auth/views.py", line 54, in inner
return func(*args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/contrib/auth/views.py", line 150, in login
)(request)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 67, in _wrapper
return bound_func(*args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/views/decorators/debug.py", line 76, in sensitive_post_parameters_wrapper
return view(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 63, in bound_func
return func.__get__(self, type(self))(*args2, **kwargs2)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 67, in _wrapper
return bound_func(*args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 149, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 63, in bound_func
return func.__get__(self, type(self))(*args2, **kwargs2)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 67, in _wrapper
return bound_func(*args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/views/decorators/cache.py", line 57, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/utils/decorators.py", line 63, in bound_func
return func.__get__(self, type(self))(*args2, **kwargs2)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/contrib/auth/views.py", line 90, in dispatch
return super(LoginView, self).dispatch(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/views/generic/base.py", line 88, in dispatch
return handler(request, *args, **kwargs)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/views/generic/edit.py", line 182, in post
if form.is_valid():
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/forms/forms.py", line 183, in is_valid
return self.is_bound and not self.errors
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/forms/forms.py", line 175, in errors
self.full_clean()
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/forms/forms.py", line 385, in full_clean
self._clean_form()
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/forms/forms.py", line 412, in _clean_form
cleaned_data = self.clean()
File "./zerver/forms.py", line 283, in clean
realm=realm, return_data=return_data)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/contrib/auth/__init__.py", line 70, in authenticate
user = _authenticate_with_backend(backend, backend_path, request, credentials)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django/contrib/auth/__init__.py", line 116, in _authenticate_with_backend
return backend.authenticate(*args, **credentials)
File "./zproject/backends.py", line 431, in authenticate
password=password)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django_auth_ldap/backend.py", line 150, in authenticate
user = self.authenticate_ldap_user(ldap_user, password)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django_auth_ldap/backend.py", line 210, in authenticate_ldap_user
return ldap_user.authenticate(password)
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django_auth_ldap/backend.py", line 350, in authenticate
self._get_or_create_user()
File "/home/zulip/deployments/2019-05-20-11-38-33/zulip-py3-venv/lib/python3.6/site-packages/django_auth_ldap/backend.py", line 591, in _get_or_create_user
self._user, built = self.backend.get_or_build_user(username, self)
File "./zproject/backends.py", line 500, in get_or_build_user
self.sync_avatar_from_ldap(user_profile, ldap_user)
File "./zproject/backends.py", line 307, in sync_avatar_from_ldap
upload_avatar_image(BytesIO(ldap_user.attrs[avatar_attr_name][0]), user, user)
File "./zerver/lib/upload.py", line 769, in upload_avatar_image
upload_backend.upload_avatar_image(user_file, acting_user_profile, target_user_profile)
File "./zerver/lib/upload.py", line 402, in upload_avatar_image
content_type = guess_type(user_file.name)[0]
AttributeError: '_io.BytesIO' object has no attribute 'name'
```
|
priority
|
backtrace when attempting to synchronise thumbnailphoto onto backend traceback most recent call last file home zulip deployments zulip venv lib site packages django core handlers exception py line in inner response get response request file home zulip deployments zulip venv lib site packages django core handlers base py line in get response response self process exception by middleware e request file home zulip deployments zulip venv lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file zerver views auth py line in login page extra context extra context kwargs file home zulip deployments zulip venv lib site packages django contrib auth views py line in inner return func args kwargs file home zulip deployments zulip venv lib site packages django contrib auth views py line in login request file home zulip deployments zulip venv lib site packages django views generic base py line in view return self dispatch request args kwargs file home zulip deployments zulip venv lib site packages django utils decorators py line in wrapper return bound func args kwargs file home zulip deployments zulip venv lib site packages django views decorators debug py line in sensitive post parameters wrapper return view request args kwargs file home zulip deployments zulip venv lib site packages django utils decorators py line in bound func return func get self type self file home zulip deployments zulip venv lib site packages django utils decorators py line in wrapper return bound func args kwargs file home zulip deployments zulip venv lib site packages django utils decorators py line in wrapped view response view func request args kwargs file home zulip deployments zulip venv lib site packages django utils decorators py line in bound func return func get self type self file home zulip deployments zulip venv lib site packages django utils decorators py line in wrapper return bound func args kwargs file home 
zulip deployments zulip venv lib site packages django views decorators cache py line in wrapped view func response view func request args kwargs file home zulip deployments zulip venv lib site packages django utils decorators py line in bound func return func get self type self file home zulip deployments zulip venv lib site packages django contrib auth views py line in dispatch return super loginview self dispatch request args kwargs file home zulip deployments zulip venv lib site packages django views generic base py line in dispatch return handler request args kwargs file home zulip deployments zulip venv lib site packages django views generic edit py line in post if form is valid file home zulip deployments zulip venv lib site packages django forms forms py line in is valid return self is bound and not self errors file home zulip deployments zulip venv lib site packages django forms forms py line in errors self full clean file home zulip deployments zulip venv lib site packages django forms forms py line in full clean self clean form file home zulip deployments zulip venv lib site packages django forms forms py line in clean form cleaned data self clean file zerver forms py line in clean realm realm return data return data file home zulip deployments zulip venv lib site packages django contrib auth init py line in authenticate user authenticate with backend backend backend path request credentials file home zulip deployments zulip venv lib site packages django contrib auth init py line in authenticate with backend return backend authenticate args credentials file zproject backends py line in authenticate password password file home zulip deployments zulip venv lib site packages django auth ldap backend py line in authenticate user self authenticate ldap user ldap user password file home zulip deployments zulip venv lib site packages django auth ldap backend py line in authenticate ldap user return ldap user authenticate password file home zulip deployments 
zulip venv lib site packages django auth ldap backend py line in authenticate self get or create user file home zulip deployments zulip venv lib site packages django auth ldap backend py line in get or create user self user built self backend get or build user username self file zproject backends py line in get or build user self sync avatar from ldap user profile ldap user file zproject backends py line in sync avatar from ldap upload avatar image bytesio ldap user attrs user user file zerver lib upload py line in upload avatar image upload backend upload avatar image user file acting user profile target user profile file zerver lib upload py line in upload avatar image content type guess type user file name attributeerror io bytesio object has no attribute name
| 1
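The `AttributeError: '_io.BytesIO' object has no attribute 'name'` in the record above comes from passing a bare `BytesIO` into code that calls `guess_type(user_file.name)`. A minimal sketch of the usual workaround, attaching a `name` attribute so `mimetypes` can infer a content type; the helper name and filename are illustrative, not taken from the Zulip source:

```python
from io import BytesIO
from mimetypes import guess_type

def with_name(data: bytes, name: str) -> BytesIO:
    # BytesIO objects have no .name attribute by default; file-like
    # consumers that call guess_type(f.name) need one attached.
    buf = BytesIO(data)
    buf.name = name
    return buf

avatar = with_name(b"\x89PNG...", "thumbnail.png")
print(guess_type(avatar.name)[0])  # → image/png
```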
|
590,854
| 17,789,502,012
|
IssuesEvent
|
2021-08-31 14:43:41
|
edwisely-ai/Relationship-Management
|
https://api.github.com/repos/edwisely-ai/Relationship-Management
|
closed
|
RMKEC Student Details incorrect - Swaroop Sir
|
Priority High
|
Respected sir
Request to add the name list of present second year ( yourself added present 3rd year name)
Dept: EIE (Autonomous)
Subject code & Name : 20EI303 Electrical Machines
Batch : 2020-2024 ( Present 2nd Year)
Please do the needful...
|
1.0
|
RMKEC Student Details incorrect - Swaroop Sir - Respected sir
Request to add the name list of present second year ( yourself added present 3rd year name)
Dept: EIE (Autonomous)
Subject code & Name : 20EI303 Electrical Machines
Batch : 2020-2024 ( Present 2nd Year)
Please do the needful...
|
priority
|
rmkec student details incorrect swaroop sir respected sir request to add the name list of present second year yourself added present year name dept eie autonomous subject code name electrical machines batch present year please do the needful
| 1
|
353,184
| 10,549,671,547
|
IssuesEvent
|
2019-10-03 09:15:57
|
RADAR-base/radar-upload-source-connector
|
https://api.github.com/repos/RADAR-base/radar-upload-source-connector
|
closed
|
Pagination of Participants and Records
|
high-priority upload-backend upload-frontend
|
To allow pagination, the records response should return a `lastId` and a `limit`
The front-end can use this information to issue request to query next page.
`GET /records?project-id=<projectname>&limit=<limit>&lastId=<lastIdreturnedfrompreviospage>`
|
1.0
|
Pagination of Participants and Records - To allow pagination, the records response should return a `lastId` and a `limit`
The front-end can use this information to issue request to query next page.
`GET /records?project-id=<projectname>&limit=<limit>&lastId=<lastIdreturnedfrompreviospage>`
|
priority
|
pagination of participants and records to allow pagination the records response should return a lastid and a limit the front end can use this information to issue request to query next page get records project id limit lastid
| 1
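The `lastId`/`limit` scheme described in the record above can be exercised client-side with a small paging loop. The `fetch_page` callable and its return shape here are assumptions for illustration, not part of the actual upload-source-connector API:

```python
def fetch_all(fetch_page, limit=50):
    # fetch_page(last_id, limit) -> (records, last_id), where last_id
    # is None once the final page has been returned.
    records, last_id = [], None
    while True:
        page, last_id = fetch_page(last_id, limit)
        records.extend(page)
        if last_id is None or len(page) < limit:
            break
    return records

# Fake backend over 120 records, to show the paging behaviour.
def fake_page(last_id, limit):
    start = 0 if last_id is None else last_id + 1
    page = list(range(start, min(start + limit, 120)))
    return page, (page[-1] if page and page[-1] < 119 else None)
```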
|
43,071
| 2,882,118,053
|
IssuesEvent
|
2015-06-11 01:26:30
|
Ecotrust/floodplain-restoration
|
https://api.github.com/repos/Ecotrust/floodplain-restoration
|
closed
|
Password Reset Content
|
High Priority
|
Current emails are getting sent to spam or 'updates' in gmail - this might get better if the email doesn't contain IP address-based urls, but may not. Still worth getting fixed when we fix #17
|
1.0
|
Password Reset Content - Current emails are getting sent to spam or 'updates' in gmail - this might get better if the email doesn't contain IP address-based urls, but may not. Still worth getting fixed when we fix #17
|
priority
|
password reset content current emails are getting sent to spam or updates in gmail this might get better if the email doesn t contain ip address based urls but may not still worth getting fixed when we fix
| 1
|
586,340
| 17,575,590,547
|
IssuesEvent
|
2021-08-15 14:47:00
|
umple/umple
|
https://api.github.com/repos/umple/umple
|
closed
|
GraphViz class with traits and methods active does not display the complete diagram in UmpleOnline
|
bug Component-UmpleOnline Priority-High Diffic-Med
|
## Summary
Within the UmpleOnline, the users can choose a diagram that he/she want to view in certain circumstances. In particular, scenario, when 'Methods' and 'Traits' and "GraphViz Class" are all selected, the system can not display the whole diagram.
## Steps to Reproduce
1. Click the "Tools" menu item, select the example as "Class Diagrams >> "Afghan Rain Design".

2. Now, click the "Options" menu item, select "Methods", "Traits" and "GraphViz Class" and wait for the diagram.

The system can not display the whole diagram and it only loads some partial parts.
## Expected Feature
The system should display the complete diagram.
|
1.0
|
GraphViz class with traits and methods active does not display the complete diagram in UmpleOnline - ## Summary
Within the UmpleOnline, the users can choose a diagram that he/she want to view in certain circumstances. In particular, scenario, when 'Methods' and 'Traits' and "GraphViz Class" are all selected, the system can not display the whole diagram.
## Steps to Reproduce
1. Click the "Tools" menu item, select the example as "Class Diagrams >> "Afghan Rain Design".

2. Now, click the "Options" menu item, select "Methods", "Traits" and "GraphViz Class" and wait for the diagram.

The system can not display the whole diagram and it only loads some partial parts.
## Expected Feature
The system should display the complete diagram.
|
priority
|
graphviz class with traits and methods active does not display the complete diagram in umpleonline summary within the umpleonline the users can choose a diagram that he she want to view in certain circumstances in particular scenario when methods and traits and graphviz class are all selected the system can not display the whole diagram steps to reproduce click the tools menu item select the example as class diagrams afghan rain design now click the options menu item select methods traits and graphviz class and wait for the diagram the system can not display the whole diagram and it only loads some partial parts expected feature the system should display the complete diagram
| 1
|
318,291
| 9,690,343,952
|
IssuesEvent
|
2019-05-24 08:24:22
|
teleporthq/teleport-code-generators
|
https://api.github.com/repos/teleporthq/teleport-code-generators
|
closed
|
Extend the UIDL validator with business rules
|
exploration good first issue high priority
|
UIDL validation is currently performed strictly on the JSON schema structure.
We could add a thin layer of validation for:
- prop that is being used without being defined in `propDefinitions`
- prop that is defined but not used in the content area
- state that is being used without being defined in `stateDefinitions`
- state that is defined but not used in the content area
- prop and state key are not the same? (this could work in React, but not in Vue)
- using local variables inside repeat:
* using index without declaring `useIndex` in meta
* using custom local variable name without specifying it in meta as `iteratorName`
Additionally for project generators:
- component is referenced but does not exist in the project
- component name is different than component key (in project UIDLs)
- external dependency version is consistent across the project
- `route` state key is defined inside `root`
- the first level children in the `root` component are conditionals based on `route`
|
1.0
|
Extend the UIDL validator with business rules - UIDL validation is currently performed strictly on the JSON schema structure.
We could add a thin layer of validation for:
- prop that is being used without being defined in `propDefinitions`
- prop that is defined but not used in the content area
- state that is being used without being defined in `stateDefinitions`
- state that is defined but not used in the content area
- prop and state key are not the same? (this could work in React, but not in Vue)
- using local variables inside repeat:
* using index without declaring `useIndex` in meta
* using custom local variable name without specifying it in meta as `iteratorName`
Additionally for project generators:
- component is referenced but does not exist in the project
- component name is different than component key (in project UIDLs)
- external dependency version is consistent across the project
- `route` state key is defined inside `root`
- the first level children in the `root` component are conditionals based on `route`
|
priority
|
extend the uidl validator with business rules uidl validation is currently performed strictly on the json schema structure we could add a thin layer of validation for prop that is being used without being defined in propdefinitions prop that is defined but not used in the content area state that is being used without being defined in statedefinitions state that is defined but not used in the content area prop and state key are not the same this could work in react but not in vue using local variables inside repeat using index without declaring useindex in meta using custom local variable name without specifying it in meta as iteratorname additionally for project generators component is referenced but does not exist in the project component name is different than component key in project uidls external dependency version is consistent across the project route state key is defined inside root the first level children in the root component are conditionals based on route
| 1
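The first two business rules in the record above (a prop used without being defined in `propDefinitions`, and a prop defined but never used) reduce to a set comparison. The flattened `usedProps` field below is a hypothetical pre-extracted list, not part of the real UIDL shape:

```python
def check_props(uidl):
    defined = set(uidl.get("propDefinitions", {}))
    used = set(uidl.get("usedProps", []))
    # Rule 1: prop referenced in the content area without a definition.
    # Rule 2: prop defined but never referenced.
    return {
        "undefined": sorted(used - defined),
        "unused": sorted(defined - used),
    }

report = check_props({
    "propDefinitions": {"title": {"type": "string"}},
    "usedProps": ["title", "subtitle"],
})
```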
|
1,912
| 2,521,533,714
|
IssuesEvent
|
2015-01-19 15:13:32
|
OCHA-DAP/hdx-ckan
|
https://api.github.com/repos/OCHA-DAP/hdx-ckan
|
closed
|
Homepage displays strangely on firefox
|
bug Priority-High
|
I tried hard refresh and private browsing. Same result.
Looks right when zoomed to 80%, but for any other zoom, I get various configurations of weirdness.

|
1.0
|
Homepage displays strangely on firefox - I tried hard refresh and private browsing. Same result.
Looks right when zoomed to 80%, but for any other zoom, I get various configurations of weirdness.

|
priority
|
homepage displays strangely on firefox i tried hard refresh and private browsing same result looks right when zoomed to but for any other zoom i get various configurations of weirdness
| 1
|
590,703
| 17,785,360,269
|
IssuesEvent
|
2021-08-31 10:21:30
|
IgniteUI/ignite-ui
|
https://api.github.com/repos/IgniteUI/ignite-ui
|
closed
|
igCombo - visibleItemsCount does not work properly if dataSource contains both half-width and full-width characters.
|
bug combo status: resolved priority: high
|
## Description
visibleItemsCount does not work properly if dataSource contains both half-width and full-width characters.
IE works as expected.
* ignite-ui version: 20.2.20.2.17
* browser: Chrome, Edge, FireFox
## Steps to reproduce
1. Run the attached sample in Chrome.
2. Open the drop down.
## Result
9 items are visible.
## Expected result
10 items are visible.
## Attachments
[sample.zip](https://github.com/IgniteUI/ignite-ui/files/6347482/sample.zip)
|
1.0
|
igCombo - visibleItemsCount does not work properly if dataSource contains both half-width and full-width characters. - ## Description
visibleItemsCount does not work properly if dataSource contains both half-width and full-width characters.
IE works as expected.
* ignite-ui version: 20.2.20.2.17
* browser: Chrome, Edge, FireFox
## Steps to reproduce
1. Run the attached sample in Chrome.
2. Open the drop down.
## Result
9 items are visible.
## Expected result
10 items are visible.
## Attachments
[sample.zip](https://github.com/IgniteUI/ignite-ui/files/6347482/sample.zip)
|
priority
|
igcombo visibleitemscount does not work properly if datasource contains both half width and full width characters description visibleitemscount does not work properly if datasource contains both half width and full width characters ie works as expected ignite ui version browser chrome edge firefox steps to reproduce run the attached sample in chrome open the drop down result items are visible expected result items are visible attachments
| 1
|
776,376
| 27,258,063,049
|
IssuesEvent
|
2023-02-22 13:02:17
|
eclipse/openvsx
|
https://api.github.com/repos/eclipse/openvsx
|
closed
|
Improvements in idService
|
bug priority:high
|
It's possible for upstream id service to fail fetching the public uuid of an extension, and then it will generate a random uuid, we should try to always have the correct uuid from upstream
|
1.0
|
Improvements in idService - It's possible for upstream id service to fail fetching the public uuid of an extension, and then it will generate a random uuid, we should try to always have the correct uuid from upstream
|
priority
|
improvements in idservice it s possible for upstream id service to fail fetching the public uuid of an extension and then it will generate a random uuid we should try to always have the correct uuid from upstream
| 1
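The fix direction described in the record above, preferring the upstream uuid and only falling back to a random one after retries, can be sketched as follows; `fetch_upstream` and the retry count are hypothetical, not the openvsx implementation:

```python
import uuid

def resolve_uuid(fetch_upstream, retries=3):
    # fetch_upstream() -> str uuid, or raises on failure (hypothetical).
    for _ in range(retries):
        try:
            return fetch_upstream()
        except Exception:
            continue
    # Last resort only: a random uuid that will not match upstream.
    return str(uuid.uuid4())
```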
|
483,650
| 13,927,820,985
|
IssuesEvent
|
2020-10-21 20:26:47
|
huridocs/uwazi
|
https://api.github.com/repos/huridocs/uwazi
|
closed
|
Rison decoding errors break the client app
|
Bug Priority: High Status: Sprint
|
Running searches containing certain characters break the app with a blank screen. Ie, searching for "title:contreras" tosses these errors:
```
rison parser error: missing ',' vendor.bundle.js:1:300385
rison parser error: missing ':' vendor.bundle.js:1:300385
rison parser error: missing ',' vendor.bundle.js:1:300385
rison parser error: missing ':'
```
|
1.0
|
Rison decoding errors break the client app - Running searches containing certain characters break the app with a blank screen. Ie, searching for "title:contreras" tosses these errors:
```
rison parser error: missing ',' vendor.bundle.js:1:300385
rison parser error: missing ':' vendor.bundle.js:1:300385
rison parser error: missing ',' vendor.bundle.js:1:300385
rison parser error: missing ':'
```
|
priority
|
rison decoding errors break the client app running searches containing certain characters break the app with a blank screen ie searching for title contreras tosses these errors rison parser error missing vendor bundle js rison parser error missing vendor bundle js rison parser error missing vendor bundle js rison parser error missing
| 1
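The "missing `,`" / "missing `:`" parser errors in the record above are what rison reports when a string such as `title:contreras` is embedded unquoted. A sketch of the quoting rule based on the public rison grammar; the exact bare-id character class below is an assumption:

```python
import re

# Characters rison allows in an unquoted id (assumed set); anything
# else forces single-quoting, with "'" and "!" escaped by a leading "!".
def rison_quote(s: str) -> str:
    if re.fullmatch(r"[A-Za-z0-9_./~-]+", s):
        return s
    return "'" + s.replace("!", "!!").replace("'", "!'") + "'"
```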
|
563,700
| 16,704,035,454
|
IssuesEvent
|
2021-06-09 07:50:43
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
[LS] Hover documentation for classes from stdlibs reveal private fields
|
Area/Hover Priority/High Team/LanguageServer Type/Bug
|
**Description:**
$subject
Consider
```
service / on new http:Listener(8080) {
resource function get getResource(http:Caller caller, http:Request req) {
}
}
```
Now, hover over `http:Request` (method param) and the fields will be displayed in the documentation popup. It contains private fields like `isDirty`, etc. Need to hide them
**Steps to reproduce:**
See description
**Affected Versions:**
Beta1 RC5
|
1.0
|
[LS] Hover documentation for classes from stdlibs reveal private fields - **Description:**
$subject
Consider
```
service / on new http:Listener(8080) {
resource function get getResource(http:Caller caller, http:Request req) {
}
}
```
Now, hover over `http:Request` (method param) and the fields will be displayed in the documentation popup. It contains private fields like `isDirty`, etc. Need to hide them
**Steps to reproduce:**
See description
**Affected Versions:**
Beta1 RC5
|
priority
|
hover documentation for classes from stdlibs reveal private fields description subject consider service on new http listener resource function get getresource http caller caller http request req now hover over http request method param and the fields will be displayed in the documentation popup it contains private fields like isdirty etc need to hide them steps to reproduce see description affected versions
| 1
|
200,225
| 7,001,599,226
|
IssuesEvent
|
2017-12-18 10:50:08
|
metasfresh/metasfresh-webui-api
|
https://api.github.com/repos/metasfresh/metasfresh-webui-api
|
opened
|
Picking Tray Clearing: process to take out an HU and add it to existing HU
|
priority:high type:enhancement
|
### Is this a bug or feature request?
part of https://github.com/metasfresh/metasfresh/issues/3190
### What is the current behavior?
#### Which are the steps to reproduce?
### What is the expected or desired behavior?
Have a process which allows user to transfer Qty from a picking tray HUs to an existing HU.
The process shall be available when:
* you select a top level HU on left side (picking slots clearing view)
* you select a top level HU on right side (HUs to pack view)
The process has one param, the QtyCU which is set, by default, to picking slot's HU available quantity.
If, after running the process, the picking slot's HU becomes empty, it shall be destroyed and it shall vanish from left side.
|
1.0
|
Picking Tray Clearing: process to take out an HU and add it to existing HU - ### Is this a bug or feature request?
part of https://github.com/metasfresh/metasfresh/issues/3190
### What is the current behavior?
#### Which are the steps to reproduce?
### What is the expected or desired behavior?
Have a process which allows user to transfer Qty from a picking tray HUs to an existing HU.
The process shall be available when:
* you select a top level HU on left side (picking slots clearing view)
* you select a top level HU on right side (HUs to pack view)
The process has one param, the QtyCU which is set, by default, to picking slot's HU available quantity.
If, after running the process, the picking slot's HU becomes empty, it shall be destroyed and it shall vanish from left side.
|
priority
|
picking tray clearing process to take out an hu and add it to existing hu is this a bug or feature request part of what is the current behavior which are the steps to reproduce what is the expected or desired behavior have a process which allows user to transfer qty from a picking tray hus to an existing hu the process shall be available when you select a top level hu on left side picking slots clearing view you select a top level hu on right side hus to pack view the process has one param the qtycu which is set by default to picking slot s hu available quantity if after running the process the picking slot s hu becomes empty it shall be destroyed and it shall vanish from left side
| 1
|
649,648
| 21,316,858,674
|
IssuesEvent
|
2022-04-16 12:36:35
|
lord-server/lord
|
https://api.github.com/repos/lord-server/lord
|
closed
|
Прозрачность жемчужных блоков
|
bug graphics high priority
|
Блоки жемчуга `lottores:pearl_block` в minetest 5.1 выглядит полупрозрачным, а в 5.4 - нет. По идее жемчуг непрозрачен, но часть построек использует его прозрачность
5.4

5.1

|
1.0
|
Прозрачность жемчужных блоков - Блоки жемчуга `lottores:pearl_block` в minetest 5.1 выглядит полупрозрачным, а в 5.4 - нет. По идее жемчуг непрозрачен, но часть построек использует его прозрачность
5.4

5.1

|
priority
|
прозрачность жемчужных блоков блоки жемчуга lottores pearl block в minetest выглядит полупрозрачным а в нет по идее жемчуг непрозрачен но часть построек использует его прозрачность
| 1
|
385,522
| 11,421,505,720
|
IssuesEvent
|
2020-02-03 12:20:32
|
luna/ide
|
https://api.github.com/repos/luna/ide
|
opened
|
Text Controller
|
Category: IDE Change: Non-Breaking Difficulty: Core Contributor Priority: Highest Type: Enhancement
|
### Summary
We need an implementation of Text Controller using File Manager to save and load source files, notify file changes and highlight Luna modules.
### Value
An usable Text Controller component following the specifications bellow for our IDE.
### Specification
- Text files saving and loading.
- Discerns between Luna module file and plain text file.
- In case of luna module idmap and metadata are hidden.
- It obtains the module controller and informs it about text changes using Text API.
- It uses highlighter to properly highlight code.
- In case of plain text file, it is “just” edited.
- Provides highlighting information
- For now only for Luna modules, possible other highlighters in future
### Acceptance Criteria & Test Cases
We should have a working example.
|
1.0
|
Text Controller - ### Summary
We need an implementation of Text Controller using File Manager to save and load source files, notify file changes and highlight Luna modules.
### Value
An usable Text Controller component following the specifications bellow for our IDE.
### Specification
- Text files saving and loading.
- Discerns between Luna module file and plain text file.
- In case of luna module idmap and metadata are hidden.
- It obtains the module controller and informs it about text changes using Text API.
- It uses highlighter to properly highlight code.
- In case of plain text file, it is “just” edited.
- Provides highlighting information
- For now only for Luna modules, possible other highlighters in future
### Acceptance Criteria & Test Cases
We should have a working example.
|
priority
|
text controller summary we need an implementation of text controller using file manager to save and load source files notify file changes and highlight luna modules value an usable text controller component following the specifications bellow for our ide specification text files saving and loading discerns between luna module file and plain text file in case of luna module idmap and metadata are hidden it obtains the module controller and informs it about text changes using text api it uses highlighter to properly highlight code in case of plain text file it is “just” edited provides highlighting information for now only for luna modules possible other highlighters in future acceptance criteria test cases we should have a working example
| 1
|
227,990
| 7,544,935,894
|
IssuesEvent
|
2018-04-17 20:00:48
|
Earthii/Simple-Camera-SOEN-390
|
https://api.github.com/repos/Earthii/Simple-Camera-SOEN-390
|
closed
|
As a user, I want to be able to scan a phone number and have it add into the user's contacts
|
TA-signoff [Number Scan] high priority high risk in progress user story
|
[SP - 8]
[Priority - High]
[Risk - medium]
## Task
- [x] Mockup, 1sp - Johnny
- [x] Acceptance test, 1sp - Youness
- [x] Recognize / Add number to contacts , 6sp - Youness, Johnny, Steven
- [x] #125 Unit tests - Ethan
- [x] #124 UI tests - Ethan
|
1.0
|
As a user, I want to be able to scan a phone number and have it add into the user's contacts - [SP - 8]
[Priority - High]
[Risk - medium]
## Task
- [x] Mockup, 1sp - Johnny
- [x] Acceptance test, 1sp - Youness
- [x] Recognize / Add number to contacts , 6sp - Youness, Johnny, Steven
- [x] #125 Unit tests - Ethan
- [x] #124 UI tests - Ethan
|
priority
|
as a user i want to be able to scan a phone number and have it add into the user s contacts task mockup johnny acceptance test youness recognize add number to contacts youness johnny steven unit tests ethan ui tests ethan
| 1
|
333,826
| 10,131,856,433
|
IssuesEvent
|
2019-08-01 20:39:25
|
trestletech/plumber
|
https://api.github.com/repos/trestletech/plumber
|
opened
|
`$run(swagger = fn)` does not work for RSConnect
|
difficulty: advanced effort: medium priority: high theme: swagger
|
RSC does not want to call `$run` with special parameters... such as a swagger function.
It's also weird to do this for `entrypoint.R` as we do not have a way to define swagger info in `plumber.R`.
-----------------------
I propose we have a new plumber tag to define a swagger function that should take in `pr` and `spec`. This follows the natural plumber.R flow.
Example:
```r
#* @get-swagger
function(pr, spec) {
spec$info$title <- Sys.time()
spec
}
```
Options for plumber tag...
* `#* @get-swagger`
* `#* @get-openapi`
* `#* @swagger`
* `#* @openapi`
* ...?
----------------------
This should also be paired with a programmatic handler function...
* `pr$handle_swagger(fn)`
* `pr$handle_openapi(fn)`
* ...?
This'll get weird having mounted routers with different customizations. I'm thinking the root router will be the only one to be called to avoid confusion / definition collisions.
The `handle_swagger` function will be called from the root router who has all of the endpoints recursively added from the mounted routers and itself.
------------------
cc @blairj09 @aronatkins
|
1.0
|
`$run(swagger = fn)` does not work for RSConnect - RSC does not want to call `$run` with special parameters... such as a swagger function.
It's also weird to do this for `entrypoint.R` as we do not have a way to define swagger info in `plumber.R`.
-----------------------
I propose we have a new plumber tag to define a swagger function that should take in `pr` and `spec`. This follows the natural plumber.R flow.
Example:
```r
#* @get-swagger
function(pr, spec) {
spec$info$title <- Sys.time()
spec
}
```
Options for plumber tag...
* `#* @get-swagger`
* `#* @get-openapi`
* `#* @swagger`
* `#* @openapi`
* ...?
----------------------
This should also be paired with a programmatic handler function...
* `pr$handle_swagger(fn)`
* `pr$handle_openapi(fn)`
* ...?
This'll get weird having mounted routers with different customizations. I'm thinking the root router will be the only one to be called to avoid confusion / definition collisions.
The `handle_swagger` function will be called from the root router who has all of the endpoints recursively added from the mounted routers and itself.
------------------
cc @blairj09 @aronatkins
|
priority
|
run swagger fn does not work for rsconnect rsc does not want to call run with special parameters such as a swagger function it s also weird to do this for entrypoint r as we do not have a way to define swagger info in plumber r i propose we have a new plumber tag to define a swagger function that should take in pr and spec this follows the natural plumber r flow example r get swagger function pr spec spec info title sys time spec options for plumber tag get swagger get openapi swagger openapi this should also be paired with a programmatic handler function pr handle swagger fn pr handle openapi fn this ll get weird having mounted routers with different customizations i m thinking the root router will be the only one to be called to avoid confusion definition collisions the handle swagger function will be called from the root router who has all of the endpoints recursively added from the mounted routers and itself cc aronatkins
| 1
|
401,763
| 11,797,448,141
|
IssuesEvent
|
2020-03-18 12:45:46
|
lineupjs/lineupjs
|
https://api.github.com/repos/lineupjs/lineupjs
|
closed
|
Sublabel is undefined when grouping or sorting a column
|
lineup: v4 priority: high type: bug
|
* Release number or git hash: 928d48119d632cae56be90a32297bd398d457b3c
* Web browser version and OS: Windows Chrome 80
### Steps to reproduce
1. Start local LineUp v4 demos
1. Open builder4.html (as an example)
1. Group or sort a column
### Observed behavior

```
Uncaught TypeError: Cannot set property 'innerHTML' of undefined
at updateHeader (header.ts:71)
at Hierarchy.ts:117
at Array.forEach (<anonymous>)
at Hierarchy.render (Hierarchy.ts:71)
at Hierarchy.renderGroups (Hierarchy.ts:142)
at Hierarchy.update (Hierarchy.ts:62)
at SidePanelRanking.updateHierarchy (SidePanelRanking.ts:89)
at Object.<anonymous> (SidePanelRanking.ts:52)
at Dispatch.apply (dispatch.js:61)
at fireImpl (AEventDispatcher.ts:121)
```
This bug was most likely introduced with PR #255.
* The icons in the toolbar are also not highlighted
### Expected behavior
* Grouping and sorting should work without errors
|
1.0
|
Sublabel is undefined when grouping or sorting a column - * Release number or git hash: 928d48119d632cae56be90a32297bd398d457b3c
* Web browser version and OS: Windows Chrome 80
### Steps to reproduce
1. Start local LineUp v4 demos
1. Open builder4.html (as an example)
1. Group or sort a column
### Observed behavior

```
Uncaught TypeError: Cannot set property 'innerHTML' of undefined
at updateHeader (header.ts:71)
at Hierarchy.ts:117
at Array.forEach (<anonymous>)
at Hierarchy.render (Hierarchy.ts:71)
at Hierarchy.renderGroups (Hierarchy.ts:142)
at Hierarchy.update (Hierarchy.ts:62)
at SidePanelRanking.updateHierarchy (SidePanelRanking.ts:89)
at Object.<anonymous> (SidePanelRanking.ts:52)
at Dispatch.apply (dispatch.js:61)
at fireImpl (AEventDispatcher.ts:121)
```
This bug was most likely introduced with PR #255.
* The icons in the toolbar are also not highlighted
### Expected behavior
* Grouping and sorting should work without errors
|
priority
|
sublabel is undefined when grouping or sorting a column release number or git hash web browser version and os windows chrome steps to reproduce start local lineup demos open html as an example group or sort a column observed behavior uncaught typeerror cannot set property innerhtml of undefined at updateheader header ts at hierarchy ts at array foreach at hierarchy render hierarchy ts at hierarchy rendergroups hierarchy ts at hierarchy update hierarchy ts at sidepanelranking updatehierarchy sidepanelranking ts at object sidepanelranking ts at dispatch apply dispatch js at fireimpl aeventdispatcher ts this bug was most likely introduced with pr the icons in the toolbar are also not highlighted expected behavior grouping and sorting should work without errors
| 1
|
554,618
| 16,434,825,926
|
IssuesEvent
|
2021-05-20 08:04:13
|
sopra-fs21-group-09/sopra-fs21-group-09-client
|
https://api.github.com/repos/sopra-fs21-group-09/sopra-fs21-group-09-client
|
closed
|
Create Join Module Page
|
Frontend high priority task
|
This is part of User story #9
link from Join a Module button
list all existing Modules (Get request?)
Estimate: 2 h
ScrollBar needs fixing and backend connection missing
|
1.0
|
Create Join Module Page - This is part of User story #9
link from Join a Module button
list all existing Modules (Get request?)
Estimate: 2 h
ScrollBar needs fixing and backend connection missing
|
priority
|
create join module page this is part of user story link from join a module button list all existing modules get request estimate h scrollbar needs fixing and backend connection missing
| 1
|
613,095
| 19,072,633,320
|
IssuesEvent
|
2021-11-27 06:46:31
|
Wrap-and-Go/Wrap-nd-Go
|
https://api.github.com/repos/Wrap-and-Go/Wrap-nd-Go
|
closed
|
Search bar UI Needed
|
good first issue help wanted UX/UI high priority
|
The basic design of the search bar is given in the link: https://whimsical.com/food-8PBqBtCftsetbN27UNpFJC
Just waiting 🤞 Contributor✨
|
1.0
|
Search bar UI Needed - The basic design of the search bar is given in the link: https://whimsical.com/food-8PBqBtCftsetbN27UNpFJC
Just waiting 🤞 Contributor✨
|
priority
|
search bar ui needed the basic design of the search bar is given in the link just waiting 🤞 contributor✨
| 1
|
809,061
| 30,123,016,681
|
IssuesEvent
|
2023-06-30 16:46:54
|
ORNL-AMO/VERIFI
|
https://api.github.com/repos/ORNL-AMO/VERIFI
|
closed
|
Table Data for water
|
bug High Priority
|
currently emissions are showing for water meters. Should double check which columns should be displayed
|
1.0
|
Table Data for water - currently emissions are showing for water meters. Should double check which columns should be displayed
|
priority
|
table data for water currently emissions are showing for water meters should double check which columns should be displayed
| 1
|
64,734
| 3,214,576,870
|
IssuesEvent
|
2015-10-07 03:20:44
|
cs2103aug2015-w11-3j/main
|
https://api.github.com/repos/cs2103aug2015-w11-3j/main
|
closed
|
Update the deadline for tasks
|
priority.high type.story
|
I dont need to delete and recreate a task if its deadline changes
|
1.0
|
Update the deadline for tasks - I dont need to delete and recreate a task if its deadline changes
|
priority
|
update the deadline for tasks i dont need to delete and recreate a task if its deadline changes
| 1
|
235,702
| 7,741,921,022
|
IssuesEvent
|
2018-05-29 07:53:45
|
MoOx/postcss-cssnext
|
https://api.github.com/repos/MoOx/postcss-cssnext
|
closed
|
Some features needs to be deactivated without using caniuse-db
|
level: high-priority type: enhancement
|
``` sh
$ mkdir cssnext-test
$ cd cssnext-test/
$ npm i cssnext
# snip
cssnext@1.7.1 node_modules/cssnext
# snip
$ echo 'a { filter: drop-shadow(0 0 5px black) }'>test.css
$ cssnext test.css
a { filter: url('data:image/svg+xml;charset=utf-8,<svg xmlns="http://www.w3.org/2000/svg"><filter id="filter"><feGaussianBlur in="SourceAlpha" stdDeviation="5" /><feOffset dx="1" dy="1" result="offsetblur" /><feFlood flood-color="rgba(0,0,0,1)" /><feComposite in2="offsetblur" operator="in" /><feMerge><feMergeNode /><feMergeNode in="SourceGraphic" /></feMerge></filter></svg>#filter'); -webkit-filter: drop-shadow(0 0 5px black); filter: drop-shadow(0 0 5px black) }
$ npm i autoprefixer
autoprefixer@5.2.0 node_modules/autoprefixer
├── postcss@4.1.13 (js-base64@2.1.8, es6-promise@2.3.0, source-map@0.4.2)
├── autoprefixer-core@5.2.1 (num2fraction@1.1.0, browserslist@0.4.0, caniuse-db@1.0.30000215)
└── fs-extra@0.18.4 (jsonfile@2.1.2, graceful-fs@3.0.8, rimraf@2.4.0)
$ autoprefixer <test.css
Autoprefixer CLI is deprecated. Use postcss-cli instead.
a { -webkit-filter: drop-shadow(0 0 5px black); filter: drop-shadow(0 0 5px black) }
```
Why are the `cssnext` and `autoprefixer` results different? I’d expect them to be equal.
|
1.0
|
Some features needs to be deactivated without using caniuse-db - ``` sh
$ mkdir cssnext-test
$ cd cssnext-test/
$ npm i cssnext
# snip
cssnext@1.7.1 node_modules/cssnext
# snip
$ echo 'a { filter: drop-shadow(0 0 5px black) }'>test.css
$ cssnext test.css
a { filter: url('data:image/svg+xml;charset=utf-8,<svg xmlns="http://www.w3.org/2000/svg"><filter id="filter"><feGaussianBlur in="SourceAlpha" stdDeviation="5" /><feOffset dx="1" dy="1" result="offsetblur" /><feFlood flood-color="rgba(0,0,0,1)" /><feComposite in2="offsetblur" operator="in" /><feMerge><feMergeNode /><feMergeNode in="SourceGraphic" /></feMerge></filter></svg>#filter'); -webkit-filter: drop-shadow(0 0 5px black); filter: drop-shadow(0 0 5px black) }
$ npm i autoprefixer
autoprefixer@5.2.0 node_modules/autoprefixer
├── postcss@4.1.13 (js-base64@2.1.8, es6-promise@2.3.0, source-map@0.4.2)
├── autoprefixer-core@5.2.1 (num2fraction@1.1.0, browserslist@0.4.0, caniuse-db@1.0.30000215)
└── fs-extra@0.18.4 (jsonfile@2.1.2, graceful-fs@3.0.8, rimraf@2.4.0)
$ autoprefixer <test.css
Autoprefixer CLI is deprecated. Use postcss-cli instead.
a { -webkit-filter: drop-shadow(0 0 5px black); filter: drop-shadow(0 0 5px black) }
```
Why are the `cssnext` and `autoprefixer` results different? I’d expect them to be equal.
|
priority
|
some features needs to be deactivated without using caniuse db sh mkdir cssnext test cd cssnext test npm i cssnext snip cssnext node modules cssnext snip echo a filter drop shadow black test css cssnext test css a filter url data image svg xml charset utf filter webkit filter drop shadow black filter drop shadow black npm i autoprefixer autoprefixer node modules autoprefixer ├── postcss js promise source map ├── autoprefixer core browserslist caniuse db └── fs extra jsonfile graceful fs rimraf autoprefixer test css autoprefixer cli is deprecated use postcss cli instead a webkit filter drop shadow black filter drop shadow black why are the cssnext and autoprefixer results different i’d expect them to be equal
| 1
|
112,245
| 4,513,808,476
|
IssuesEvent
|
2016-09-04 14:10:26
|
nextcloud/appstore
|
https://api.github.com/repos/nextcloud/appstore
|
closed
|
Rating in app detail view
|
enhancement help wanted high priority starter issue
|
Similar to #210 both ratings (overall and recent) should be shown in a visual way (no numbers). It should be consistent with the option chosen in #210
Below the releases there should be a non paginated list of comments (content needs to be fetched with JavaScript, if needed we will add automatic pagination later on) ordered by rated_at descending. The ratings should only be displayed if the comment is non empty and matches the current locale (see request.LANGUAGE_CODE). The comment should include both the users full name, the rated_at timestamp, the visual rating (thumbs or smilies?) and the comment rendered using markdown.
|
1.0
|
Rating in app detail view - Similar to #210 both ratings (overall and recent) should be shown in a visual way (no numbers). It should be consistent with the option chosen in #210
Below the releases there should be a non paginated list of comments (content needs to be fetched with JavaScript, if needed we will add automatic pagination later on) ordered by rated_at descending. The ratings should only be displayed if the comment is non empty and matches the current locale (see request.LANGUAGE_CODE). The comment should include both the users full name, the rated_at timestamp, the visual rating (thumbs or smilies?) and the comment rendered using markdown.
|
priority
|
rating in app detail view similar to both ratings overall and recent should be shown in a visual way no numbers it should be consistent with the option chosen in below the releases there should be a non paginated list of comments content needs to be fetched with javascript if needed we will add automatic pagination later on ordered by rated at descending the ratings should only be displayed if the comment is non empty and matches the current locale see request language code the comment should include both the users full name the rated at timestamp the visual rating thumbs or smilies and the comment rendered using markdown
| 1
|
402,119
| 11,802,353,038
|
IssuesEvent
|
2020-03-18 21:22:11
|
AtlasOfLivingAustralia/specieslist-webapp
|
https://api.github.com/repos/AtlasOfLivingAustralia/specieslist-webapp
|
opened
|
Add a Delete button for displayed list
|
bug priority-high
|
It is way beyond cryptic to be able to **delete** a list you own (or for an admin to delete a list). You have to click on the Name of the list, which takes you to the Data resource, then go to the bottom of the page and find delete. This may remove the resource but doesn't (at least immediately) seem to delete the list.
A DELETE button needs to be added somewhere on the list page. My suggestion would be to either put one in the List info window or adjacent to the download, view occurrence records, view in spatial portal.
I have been reviewing the (4550!) lists and a serious tidy-up is way overdue. This is step 1.
|
1.0
|
Add a Delete button for displayed list - It is way beyond cryptic to be able to **delete** a list you own (or for an admin to delete a list). You have to click on the Name of the list, which takes you to the Data resource, then go to the bottom of the page and find delete. This may remove the resource but doesn't (at least immediately) seem to delete the list.
A DELETE button needs to be added somewhere on the list page. My suggestion would be to either put one in the List info window or adjacent to the download, view occurrence records, view in spatial portal.
I have been reviewing the (4550!) lists and a serious tidy-up is way overdue. This is step 1.
|
priority
|
add a delete button for displayed list it is way beyond cryptic to be able to delete a list you own or for an admin to delete a list you have to click on the name of the list which takes you to the data resource then go to the bottom of the page and find delete this may remove the resource but doesn t at least immediately seem to delete the list a delete button needs to be added somewhere on the list page my suggestion would be to either put one in the list info window or adjacent to the download view occurrence records view in spatial portal i have been reviewing the lists and a serious tidy up is way overdue this is step
| 1
|