| Unnamed: 0 (int64) | id (float64) | type (string) | created_at (string) | repo (string) | repo_url (string) | action (string) | title (string) | labels (string) | body (string) | index (string) | text_combine (string) | label (string) | text (string) | binary_label (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
414,530 | 12,104,213,324 | IssuesEvent | 2020-04-20 19:48:09 | wri/gfw-mapbuilder | https://api.github.com/repos/wri/gfw-mapbuilder | opened | Languge Dropdown styling/language | 4.x Upgrade medium priority | - [ ] Lang dropdown needs to match styling of PROD
- [ ] Lang dropdown needs to be language aware, having appropriate translations

| 1.0 | Languge Dropdown styling/language - - [ ] Lang dropdown needs to match styling of PROD
- [ ] Lang dropdown needs to be language aware, having appropriate translations

| priority | languge dropdown styling language lang dropdown needs to match styling of prod lang dropdown needs to be language aware having appropriate translations | 1 |
258,865 | 8,180,357,410 | IssuesEvent | 2018-08-28 19:12:35 | cms-ttbarAC/Analysis | https://api.github.com/repos/cms-ttbarAC/Analysis | opened | Lepton-Jet Cleaning Producer | Priority: Medium Project: Genesis Status: Open Type: Enhancement | Make a new producer that reads leptons and AK4 jets, applies the lepton-jet cleaning, and saves a new collection of AK4 (and updated MET).
Then, the `updateJetCollection()` from the JEC tools can be used to apply the JECs rather than doing it by hand. | 1.0 | Lepton-Jet Cleaning Producer - Make a new producer that reads leptons and AK4 jets, applies the lepton-jet cleaning, and saves a new collection of AK4 (and updated MET).
Then, the `updateJetCollection()` from the JEC tools can be used to apply the JECs rather than doing it by hand. | priority | lepton jet cleaning producer make a new producer that reads leptons and jets applies the lepton jet cleaning and saves a new collection of and updated met then the updatejetcollection from the jec tools can be used to apply the jecs rather than doing it by hand | 1 |
403,558 | 11,842,985,303 | IssuesEvent | 2020-03-24 00:47:38 | harmony-one/harmony | https://api.github.com/repos/harmony-one/harmony | closed | Current wallet check balance is resource-heavy and slow | bug medium priority | Not sure what change caused this, but the [current wallet binary](https://github.com/harmony-one/harmony/releases/tag/pangaea-20190910.0)'s balance check cannot be run in parallel very well. The time it takes to run [check.sh](https://github.com/harmony-one/harmony-ops/blob/master/monitoring/pga-monitoring/check.sh) is now 25 minutes and 22 seconds long.
 In comparison, the [previous compiled binary](https://github.com/harmony-one/harmony/releases/tag/pangaea-20190903.0) runs the script at just about 1 minute.

This causes complete incapability to check balances with accuracy and overloads the system. Can be worked around by downgrading the binary and using custom wallet.ini.
| 1.0 | Current wallet check balance is resource-heavy and slow - Not sure what change caused this, but the [current wallet binary](https://github.com/harmony-one/harmony/releases/tag/pangaea-20190910.0)'s balance check cannot be run in parallel very well. The time it takes to run [check.sh](https://github.com/harmony-one/harmony-ops/blob/master/monitoring/pga-monitoring/check.sh) is now 25 minutes and 22 seconds long.
 In comparison, the [previous compiled binary](https://github.com/harmony-one/harmony/releases/tag/pangaea-20190903.0) runs the script at just about 1 minute.

This causes complete incapability to check balances with accuracy and overloads the system. Can be worked around by downgrading the binary and using custom wallet.ini.
| priority | current wallet check balance is resource heavy and slow not sure what change caused this but the balance check cannot be run in parallel very well the time it takes to run is now minutes and seconds long in comparison the runs the script at just about minute this causes complete incapability to check balances with accuracy and overloads the system can be worked around by downgrading the binary and using custom wallet ini | 1 |
175,050 | 6,546,333,228 | IssuesEvent | 2017-09-04 09:50:43 | robertgruening/Munins-Archiv | https://api.github.com/repos/robertgruening/Munins-Archiv | opened | Zwischenablage | priority: medium urgency: medium | Um unterschiedliche Arbeitsprozesse zu beschleunigen, soll es eine Zwischenablage in der Oberfläche geben, die auf allen Seiten verfügbar ist. Elemente, wie bspw. Ablagen oder Funde, können in die Zwischenablage mittel Maussteuerung (Anfassen und Ziehen) oder ggf. Kontextmenü gebracht werden. Die Elemente werden als Kachel dargestellt und können über Kontextmenü oder ein Löschen-X in der Kachel entfernt werden. Angewendet werden die Elemente per Maussteuerung wie folgt: Der Anwender zieht ein in der Zwischenablage ausgewähltes Element in ein entsprechendes HTML-Control, das daraufhin mit den Informationen des Elementes gefüllt wird.
Beispiel:
Der Anwender befindet sich im Formular für Funde und legt einen neuen Eintrag an. Aus der Zwischenablage wählt der Anwender eine Ablage, z. B. einen Karton, und zieht diesen mit der Maus in die Ablagetextbox im Fundformular. Daraufhin trägt das System den Datenpfad der gewählten Ablage in die Textbox ein.
Der Vorteil besteht darin, dass der Anwender beim Anlegen und Bearbeiten von Elementen sich wiederholende Informationen nicht von Hand erneut eingeben muss, sondern diese mit einer einfachen Geste hinzufügt oder überschreibt.
- [ ] Ablegen von Elementen in die Zwischenablage (Ablagen, Kontexte, Funde, Fundattribute und Orte)
- [ ] Entfernen von Elementen aus der Zwischenablage
- [ ] Öffnen des Elements in seinem Bearbeitungsformular über das Kontextmenü
- [ ] Einfügen der Elementinformation in dafür vorgesehenen HTML-Controls
- [ ] Maussteuerung
- [ ] Kentextmenüsteuerung
- [ ] ein Element kann nur ein Mal in der Zwischenablage sein
- [ ] Elemente in der Zwischenablage können vom Anwender beliebig angeordnet werden | 1.0 | Zwischenablage - Um unterschiedliche Arbeitsprozesse zu beschleunigen, soll es eine Zwischenablage in der Oberfläche geben, die auf allen Seiten verfügbar ist. Elemente, wie bspw. Ablagen oder Funde, können in die Zwischenablage mittel Maussteuerung (Anfassen und Ziehen) oder ggf. Kontextmenü gebracht werden. Die Elemente werden als Kachel dargestellt und können über Kontextmenü oder ein Löschen-X in der Kachel entfernt werden. Angewendet werden die Elemente per Maussteuerung wie folgt: Der Anwender zieht ein in der Zwischenablage ausgewähltes Element in ein entsprechendes HTML-Control, das daraufhin mit den Informationen des Elementes gefüllt wird.
Beispiel:
Der Anwender befindet sich im Formular für Funde und legt einen neuen Eintrag an. Aus der Zwischenablage wählt der Anwender eine Ablage, z. B. einen Karton, und zieht diesen mit der Maus in die Ablagetextbox im Fundformular. Daraufhin trägt das System den Datenpfad der gewählten Ablage in die Textbox ein.
Der Vorteil besteht darin, dass der Anwender beim Anlegen und Bearbeiten von Elementen sich wiederholende Informationen nicht von Hand erneut eingeben muss, sondern diese mit einer einfachen Geste hinzufügt oder überschreibt.
- [ ] Ablegen von Elementen in die Zwischenablage (Ablagen, Kontexte, Funde, Fundattribute und Orte)
- [ ] Entfernen von Elementen aus der Zwischenablage
- [ ] Öffnen des Elements in seinem Bearbeitungsformular über das Kontextmenü
- [ ] Einfügen der Elementinformation in dafür vorgesehenen HTML-Controls
- [ ] Maussteuerung
- [ ] Kentextmenüsteuerung
- [ ] ein Element kann nur ein Mal in der Zwischenablage sein
- [ ] Elemente in der Zwischenablage können vom Anwender beliebig angeordnet werden | priority | zwischenablage um unterschiedliche arbeitsprozesse zu beschleunigen soll es eine zwischenablage in der oberfläche geben die auf allen seiten verfügbar ist elemente wie bspw ablagen oder funde können in die zwischenablage mittel maussteuerung anfassen und ziehen oder ggf kontextmenü gebracht werden die elemente werden als kachel dargestellt und können über kontextmenü oder ein löschen x in der kachel entfernt werden angewendet werden die elemente per maussteuerung wie folgt der anwender zieht ein in der zwischenablage ausgewähltes element in ein entsprechendes html control das daraufhin mit den informationen des elementes gefüllt wird beispiel der anwender befindet sich im formular für funde und legt einen neuen eintrag an aus der zwischenablage wählt der anwender eine ablage z b einen karton und zieht diesen mit der maus in die ablagetextbox im fundformular daraufhin trägt das system den datenpfad der gewählten ablage in die textbox ein der vorteil besteht darin dass der anwender beim anlegen und bearbeiten von elementen sich wiederholende informationen nicht von hand erneut eingeben muss sondern diese mit einer einfachen geste hinzufügt oder überschreibt ablegen von elementen in die zwischenablage ablagen kontexte funde fundattribute und orte entfernen von elementen aus der zwischenablage öffnen des elements in seinem bearbeitungsformular über das kontextmenü einfügen der elementinformation in dafür vorgesehenen html controls maussteuerung kentextmenüsteuerung ein element kann nur ein mal in der zwischenablage sein elemente in der zwischenablage können vom anwender beliebig angeordnet werden | 1 |
708,703 | 24,350,723,382 | IssuesEvent | 2022-10-02 22:37:59 | thegrumpys/odop | https://api.github.com/repos/thegrumpys/odop | opened | Create release rollback procedure | Priority Medium | Create (and test in the staging system) a release rollback procedure.
In spite of diligent testing, there may come a time when there is a desire to (undo, retract, rollback) a recent release. Considering that there are various currently poorly understood consequences, for example impact to user designs saved by the code to be rolled back, the process needs to be tested and documented before using it on live data.
| 1.0 | Create release rollback procedure - Create (and test in the staging system) a release rollback procedure.
In spite of diligent testing, there may come a time when there is a desire to (undo, retract, rollback) a recent release. Considering that there are various currently poorly understood consequences, for example impact to user designs saved by the code to be rolled back, the process needs to be tested and documented before using it on live data.
| priority | create release rollback procedure create and test in the staging system a release rollback procedure in spite of diligent testing there may come a time when there is a desire to undo retract rollback a recent release considering that there are various currently poorly understood consequences for example impact to user designs saved by the code to be rolled back the process needs to be tested and documented before using it on live data | 1 |
89,692 | 3,798,606,883 | IssuesEvent | 2016-03-23 13:17:15 | WhitestormJS/whitestorm.js | https://api.github.com/repos/WhitestormJS/whitestorm.js | closed | [Improve] WHS.PointLight and WHS.SpotLight need to be updated. | E-easy enhancement Medium [priority] v0.1(Beta) | There are some properties like `exponent` and `decay` that need to be applied to these lights. | 1.0 | [Improve] WHS.PointLight and WHS.SpotLight need to be updated. - There are some properties like `exponent` and `decay` that need to be applied to these lights. | priority | whs pointlight and whs spotlight need to be updated there are some properties like exponent and decay that need to be applied to these lights | 1 |
482,156 | 13,901,774,546 | IssuesEvent | 2020-10-20 03:43:52 | momentum-mod/game | https://api.github.com/repos/momentum-mod/game | closed | Show Map Info crash if opened too quickly | Priority: Medium Size: Small Type: Bug | **Describe the bug**
If you open/close the Map Info dialog too quickly in the Map Selector, the game may crash.
**To Reproduce**
Steps to reproduce the behavior:
Open Map Selector
Right Click Map -> Show Map Info
Close Immediately, Repeat with any other map
Eventual Crash
If you have a video with the steps to recreate the bug, please post it here.
[Video](https://youtu.be/60Y9D1iNDXI)
**Expected behavior**
The game should not crash!
**Screenshots**
N/A
**Desktop/Branch (please complete the following information):**
- OS: Windows
- Branch: Latest develop branch
**Additional context**
N/A
| 1.0 | Show Map Info crash if opened too quickly - **Describe the bug**
If you open/close the Map Info dialog too quickly in the Map Selector, the game may crash.
**To Reproduce**
Steps to reproduce the behavior:
Open Map Selector
Right Click Map -> Show Map Info
Close Immediately, Repeat with any other map
Eventual Crash
If you have a video with the steps to recreate the bug, please post it here.
[Video](https://youtu.be/60Y9D1iNDXI)
**Expected behavior**
The game should not crash!
**Screenshots**
N/A
**Desktop/Branch (please complete the following information):**
- OS: Windows
- Branch: Latest develop branch
**Additional context**
N/A
| priority | show map info crash if opened too quickly describe the bug if you open close the map info dialog too quickly in the map selector the game may crash to reproduce steps to reproduce the behavior open map selector right click map show map info close immediately repeat with any other map eventual crash if you have a video with the steps to recreate the bug please post it here expected behavior the game should not crash screenshots n a desktop branch please complete the following information os windows branch latest develop branch additional context n a | 1 |
444,346 | 12,810,259,579 | IssuesEvent | 2020-07-03 18:02:05 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | Web admin: users device list only 25 devices | Priority: Medium Type: Bug | **Describe the bug**
Users device list shows only 25 devices.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'Users' > `Username` > `Devices`
**Screenshots**

**Expected behavior**
All user devices should be listed.
| 1.0 | Web admin: users device list only 25 devices - **Describe the bug**
Users device list shows only 25 devices.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'Users' > `Username` > `Devices`
**Screenshots**

**Expected behavior**
All user devices should be listed.
| priority | web admin users device list only devices describe the bug users device list shows only devices to reproduce steps to reproduce the behavior go to users username devices screenshots expected behavior all user devices should be listed | 1 |
220,723 | 7,370,236,027 | IssuesEvent | 2018-03-13 07:35:09 | teamforus/research-and-development | https://api.github.com/repos/teamforus/research-and-development | closed | POC: Pay transaction fees for other contract | fill-template priority-medium proposal | ## poc
### Background / Context
**Goal/user story:**
**More:**
### Hypothesis:
### Method
*documentation/code*
### Result
*present findings*
### Recommendation
*write recomendation*
| 1.0 | POC: Pay transaction fees for other contract - ## poc
### Background / Context
**Goal/user story:**
**More:**
### Hypothesis:
### Method
*documentation/code*
### Result
*present findings*
### Recommendation
*write recomendation*
| priority | poc pay transaction fees for other contract poc background context goal user story more hypothesis method documentation code result present findings recommendation write recomendation | 1 |
341,613 | 10,300,001,538 | IssuesEvent | 2019-08-28 13:56:04 | pupil-labs/pupil | https://api.github.com/repos/pupil-labs/pupil | closed | Player: Improve "Invalid Recording" error message | enhancement priority:medium | Currently, one gets the same error message for different cases:
- User drops a video file instead of the recording folder (wrong usage, message: "No valid dir supplied")
- User drops a folder that does not contain (actual invalid recording)
- an `info.csv` (message: "Could not read info.csv file: Not a valid Pupil recording.")
- any video files (message: "Could not generate world timestamps from eye timestamps. This is an invalid recording.")
- `info.csv` does not contain `Recording Name` (message: "Could not read info.csv file: Not a valid Pupil recording.")
- Error during parsing `info.csv` (message: "Could not read info.csv file: Not a valid Pupil recording.") | 1.0 | Player: Improve "Invalid Recording" error message - Currently, one gets the same error message for different cases:
- User drops a video file instead of the recording folder (wrong usage, message: "No valid dir supplied")
- User drops a folder that does not contain (actual invalid recording)
- an `info.csv` (message: "Could not read info.csv file: Not a valid Pupil recording.")
- any video files (message: "Could not generate world timestamps from eye timestamps. This is an invalid recording.")
- `info.csv` does not contain `Recording Name` (message: "Could not read info.csv file: Not a valid Pupil recording.")
- Error during parsing `info.csv` (message: "Could not read info.csv file: Not a valid Pupil recording.") | priority | player improve invalid recording error message currently one gets the same error message for different cases user drops a video file instead of the recording folder wrong usage message no valid dir supplied user drops a folder that does not contain actual invalid recording an info csv message could not read info csv file not a valid pupil recording any video files message could not generate world timestamps from eye timestamps this is an invalid recording info csv does not contain recording name message could not read info csv file not a valid pupil recording error during parsing info csv message could not read info csv file not a valid pupil recording | 1 |
165,219 | 6,265,605,811 | IssuesEvent | 2017-07-16 18:53:18 | CS2103JUN2017-T1/main | https://api.github.com/repos/CS2103JUN2017-T1/main | closed | Implement feature to filter events | priority.medium type.enhancement type.epic | Filter by event type.
Filter by tags.
Filter by due date.
Other filters. | 1.0 | Implement feature to filter events - Filter by event type.
Filter by tags.
Filter by due date.
Other filters. | priority | implement feature to filter events filter by event type filter by tags filter by due date other filters | 1 |
793,631 | 28,005,317,967 | IssuesEvent | 2023-03-27 14:55:58 | clt313/SuperballVR | https://api.github.com/repos/clt313/SuperballVR | opened | Fix loading screen MissingReferenceException | priority: medium bug | When loading a new scene, a MissingReferenceException pops up. We're trying to access an object that's already been deleted, so need to look into that to prevent the error. | 1.0 | Fix loading screen MissingReferenceException - When loading a new scene, a MissingReferenceException pops up. We're trying to access an object that's already been deleted, so need to look into that to prevent the error. | priority | fix loading screen missingreferenceexception when loading a new scene a missingreferenceexception pops up we re trying to access an object that s already been deleted so need to look into that to prevent the error | 1 |
28,186 | 2,700,368,083 | IssuesEvent | 2015-04-04 02:50:52 | cs2103jan2015-t15-4j/main | https://api.github.com/repos/cs2103jan2015-t15-4j/main | closed | A user can access help with one click | priority.medium type.story | ...so that I can explore more features of the software easily and manage my tasks better. | 1.0 | A user can access help with one click - ...so that I can explore more features of the software easily and manage my tasks better. | priority | a user can access help with one click so that i can explore more features of the software easily and manage my tasks better | 1 |
790,669 | 27,832,447,923 | IssuesEvent | 2023-03-20 06:40:02 | TimerTiTi/TiTi | https://api.github.com/repos/TimerTiTi/TiTi | opened | VC, VM, Manager 명칭 관련 리펙토링 작업 (3h) | refactor priority: medium | [제목] VC, VM, Manager 명칭 관련 리펙토링 작업 (3h)
[내용]
### 배경
Manager, Controller, ViewModel 명칭들을 일관되도록 작성하고, 불필요한 코드가 제거되어야 Android 개발자에게 관련 코드를 보여주기에 용이하고, 이해하시기 더욱 좋을 것으로 예상되어 진행합니다.
### 요구사항
- [ ] VC, VM 명칭 반영
- [ ] RecordController → Records 명칭 정정
- [ ] DailyViewModel → DailyManager 명칭 정정 및 리펙토링 | 1.0 | VC, VM, Manager 명칭 관련 리펙토링 작업 (3h) - [제목] VC, VM, Manager 명칭 관련 리펙토링 작업 (3h)
[내용]
### 배경
Manager, Controller, ViewModel 명칭들을 일관되도록 작성하고, 불필요한 코드가 제거되어야 Android 개발자에게 관련 코드를 보여주기에 용이하고, 이해하시기 더욱 좋을 것으로 예상되어 진행합니다.
### 요구사항
- [ ] VC, VM 명칭 반영
- [ ] RecordController → Records 명칭 정정
- [ ] DailyViewModel → DailyManager 명칭 정정 및 리펙토링 | priority | vc vm manager 명칭 관련 리펙토링 작업 vc vm manager 명칭 관련 리펙토링 작업 배경 manager controller viewmodel 명칭들을 일관되도록 작성하고 불필요한 코드가 제거되어야 android 개발자에게 관련 코드를 보여주기에 용이하고 이해하시기 더욱 좋을 것으로 예상되어 진행합니다 요구사항 vc vm 명칭 반영 recordcontroller → records 명칭 정정 dailyviewmodel → dailymanager 명칭 정정 및 리펙토링 | 1 |
277,563 | 8,629,660,598 | IssuesEvent | 2018-11-21 21:37:47 | Viq111/kvimd | https://api.github.com/repos/Viq111/kvimd | opened | [main] Add snapshotting | enhancement priority:medium | Snapshotting means:
- Lock and create a new HashDisk
- Lock and create a new ValuesDisk (in that order so we don't reference to the other ValuesDisk)
- Add a `EntriesSinceLastSnapshot() int` that counts how many entries there are since last snapshot | 1.0 | [main] Add snapshotting - Snapshotting means:
- Lock and create a new HashDisk
- Lock and create a new ValuesDisk (in that order so we don't reference to the other ValuesDisk)
- Add a `EntriesSinceLastSnapshot() int` that counts how many entries there are since last snapshot | priority | add snapshotting snapshotting means lock and create a new hashdisk lock and create a new valuesdisk in that order so we don t reference to the other valuesdisk add a entriessincelastsnapshot int that counts how many entries there are since last snapshot | 1 |
476,459 | 13,745,273,436 | IssuesEvent | 2020-10-06 02:23:00 | puddletag/puddletag | https://api.github.com/repos/puddletag/puddletag | closed | Swedish | Priority-Medium Type-Translation auto-migrated | ```
You should update your Translating page on sourceforge.net, instructions on how
to create my native ts file, don't work.
First issue is the step two, "cd puddletag-hg"... should be "cd
puddletag-hg/source".
At step three I'm lost. In terminal I get ImportError: No module named mutagen.
That's it, can't get any further.
Åke Engelbrektson
```
Original issue reported on code.google.com by `eso...@gmail.com` on 11 Jan 2015 at 6:39
| 1.0 | Swedish - ```
You should update your Translating page on sourceforge.net, instructions on how
to create my native ts file, don't work.
First issue is the step two, "cd puddletag-hg"... should be "cd
puddletag-hg/source".
At step three I'm lost. In terminal I get ImportError: No module named mutagen.
That's it, can't get any further.
Åke Engelbrektson
```
Original issue reported on code.google.com by `eso...@gmail.com` on 11 Jan 2015 at 6:39
| priority | swedish you should update your translating page on sourceforge net instructions on how to create my native ts file don t work first issue is the step two cd puddletag hg should be cd puddletag hg source at step three i m lost in terminal i get importerror no module named mutagen that s it can t get any further åke engelbrektson original issue reported on code google com by eso gmail com on jan at | 1 |
278,849 | 8,651,177,026 | IssuesEvent | 2018-11-27 01:51:48 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | "Report a bug" on "connect failed" error generate incorrect GH code. | Fixed Medium Priority outsource | **Version:** 0.7.8.0 beta
As i see, generated code gor GH incorrect, you can see result in many ussues.
e.g.
https://github.com/StrangeLoopGames/EcoIssues/issues/10010




| 1.0 | "Report a bug" on "connect failed" error generate incorrect GH code. - **Version:** 0.7.8.0 beta
As i see, generated code gor GH incorrect, you can see result in many ussues.
e.g.
https://github.com/StrangeLoopGames/EcoIssues/issues/10010




| priority | report a bug on connect failed error generate incorrect gh code version beta as i see generated code gor gh incorrect you can see result in many ussues e g | 1 |
589,634 | 17,754,969,454 | IssuesEvent | 2021-08-28 15:20:24 | dodona-edu/dodona | https://api.github.com/repos/dodona-edu/dodona | opened | Merging institutions sometimes fails | bug medium priority | Merging institution sometimes fails with this error:
```
An ActionView::Template::Error occurred while POST </nl/institutions/699/merge/?other_institution_id=30> was processed by institutions#do_merge
Exception
undefined method 'identifier_string' for nil:NilClass
Hostname
dodona
Backtrace
app/views/institutions/_merge_changes.html.erb:19
app/views/institutions/merge.html.erb:42
app/controllers/institutions_controller.rb:64:in `do_merge'
app/controllers/application_controller.rb:170:in `user_time_zone'
```
Some users might be in limbo due to this:
- users of institution 699 when merging into 30
- users of institution 350 when merging into 301 | 1.0 | Merging institutions sometimes fails - Merging institution sometimes fails with this error:
```
An ActionView::Template::Error occurred while POST </nl/institutions/699/merge/?other_institution_id=30> was processed by institutions#do_merge
Exception
undefined method 'identifier_string' for nil:NilClass
Hostname
dodona
Backtrace
app/views/institutions/_merge_changes.html.erb:19
app/views/institutions/merge.html.erb:42
app/controllers/institutions_controller.rb:64:in `do_merge'
app/controllers/application_controller.rb:170:in `user_time_zone'
```
Some users might be in limbo due to this:
- users of institution 699 when merging into 30
- users of institution 350 when merging into 301 | priority | merging institutions sometimes fails merging institution sometimes fails with this error an actionview template error occurred while post was processed by institutions do merge exception undefined method identifier string for nil nilclass hostname dodona backtrace app views institutions merge changes html erb app views institutions merge html erb app controllers institutions controller rb in do merge app controllers application controller rb in user time zone some users might be in limbo due to this users of institution when merging into users of institution when merging into | 1 |
122,303 | 4,833,557,145 | IssuesEvent | 2016-11-08 11:24:51 | arescentral/antares | https://api.github.com/repos/arescentral/antares | opened | Use system libraries when possible | Complexity:Medium OS:Linux Priority:Medium Type:Enhancement | Antares currently keeps vendored copies of most of its libraries in ``ext/``. If there are system-provided versions of those libraries, we should use those instead.
This is mainly of interest for Linux—historically, Antares does what it does because it was originally Mac-only, where the system doesn't provide the libraries we want and it's easiest if we build and link them statically ourselves. | 1.0 | Use system libraries when possible - Antares currently keeps vendored copies of most of its libraries in ``ext/``. If there are system-provided versions of those libraries, we should use those instead.
This is mainly of interest for Linux—historically, Antares does what it does because it was originally Mac-only, where the system doesn't provide the libraries we want and it's easiest if we build and link them statically ourselves. | priority | use system libraries when possible antares currently keeps vendored copies of most of its libraries in ext if there are system provided versions of those libraries we should use those instead this is mainly of interest for linux—historically antares does what it does because it was originally mac only where the system doesn t provide the libraries we want and it s easiest if we build and link them statically ourselves | 1 |
651,517 | 21,481,718,457 | IssuesEvent | 2022-04-26 18:25:32 | Samfundet/Samfundet | https://api.github.com/repos/Samfundet/Samfundet | closed | allow either lim_web or applicants to edit information | admission medium priority | As of now there is no way for us or applicants to edit applicant-information, this causes some issues every admission. We should add a way for us to easily edit this information, and look into the posibility of giving applicants the option further down the road. | 1.0 | allow either lim_web or applicants to edit information - As of now there is no way for us or applicants to edit applicant-information, this causes some issues every admission. We should add a way for us to easily edit this information, and look into the posibility of giving applicants the option further down the road. | priority | allow either lim web or applicants to edit information as of now there is no way for us or applicants to edit applicant information this causes some issues every admission we should add a way for us to easily edit this information and look into the posibility of giving applicants the option further down the road | 1 |
251,601 | 8,017,849,890 | IssuesEvent | 2018-07-25 17:11:43 | MARKETProtocol/dApp | https://api.github.com/repos/MARKETProtocol/dApp | closed | [Explorer] Implement Explorer UI/UX Mock Up | Bounty Attached Help Wanted Priority: Medium Status: Review Needed Type: Enhancement | ## Gitcoin Bounty Hunters
Please only consider picking up this issue if you feel confident that you are able to implement the designs with a high fidelity to the provided assets. Additionally, responsiveness of the implementation is very important and the design must translate well across, mobile, tablet, and large monitor resolutions.
## Before you `start work`
Please read our [contribution guidelines](https://docs.marketprotocol.io/#contributing) and if there is a bounty involved please also see [here](https://docs.marketprotocol.io/#gitcoin-and-bounties).
If you have ongoing work from other bounties with us where funding has not been released, please do not pick up a new issue. We would like to involve as many contributors as possible and parallelize the work flow as much as possible.
Please make sure to comment in the issue here immediately after starting work so we know your plans for implementation and a timeline.
Please also note that in order for work to be accepted, all code must be accompanied by test cases as well.
## Why Is this Needed?
*Summary:* Implements a key view for the MARKET dApp
## Description
Type: Feature
*Solution*
Summary: Using the .sketch file found [here](https://github.com/MARKETProtocol/assets/blob/master/MockUps/MARKET_Protocol-dApp.sketch) as a reference implement new design using react. The design can be found below.

## Definition of Done
- [ ] Using the provided assets, implement the new design for the MARKET dApp
- [ ] Update tests as needed
- [ ] Add new tests for all newly generated code
- [ ] For reference the existing implementation exists here http://dev.dapp.marketprotocol.io/contract/explorer
- [ ] Interact with @MARKETProtocol/core and expect a few revisions
Additional comments
• This is just the first of a few different pages for the dApp, so hunters who do a good job on this bounty will have more similar tasks coming their way | 1.0 | [Explorer] Implement Explorer UI/UX Mock Up - ## Gitcoin Bounty Hunters
Please only consider picking up this issue if you feel confident that you are able to implement the designs with a high fidelity to the provided assets. Additionally, responsiveness of the implementation is very important and the design must translate well across, mobile, tablet, and large monitor resolutions.
## Before you `start work`
Please read our [contribution guidelines](https://docs.marketprotocol.io/#contributing) and if there is a bounty involved please also see [here](https://docs.marketprotocol.io/#gitcoin-and-bounties).
If you have ongoing work from other bounties with us where funding has not been released, please do not pick up a new issue. We would like to involve as many contributors as possible and parallelize the work flow as much as possible.
Please make sure to comment in the issue here immediately after starting work so we know your plans for implementation and a timeline.
Please also note that in order for work to be accepted, all code must be accompanied by test cases as well.
## Why Is this Needed?
*Summary:* Implements a key view for the MARKET dApp
## Description
Type: Feature
*Solution*
Summary: Using the .sketch file found [here](https://github.com/MARKETProtocol/assets/blob/master/MockUps/MARKET_Protocol-dApp.sketch) as a reference, implement the new design using React. The design can be found below.

## Definition of Done
- [ ] Using the provided assets, implement the new design for the MARKET dApp
- [ ] Update tests as needed
- [ ] Add new tests for all newly generated code
- [ ] For reference the existing implementation exists here http://dev.dapp.marketprotocol.io/contract/explorer
- [ ] Interact with @MARKETProtocol/core and expect a few revisions
Additional comments
• This is just the first of a few different pages for the dApp, so hunters who do a good job on this bounty will have more similar tasks coming their way | priority | implement explorer ui ux mock up gitcoin bounty hunters please only consider picking up this issue if you feel confident that you are able to implement the designs with a high fidelity to the provided assets additionally responsiveness of the implementation is very important and the design must translate well across mobile tablet and large monitor resolutions before you start work please read our and if there is a bounty involved please also see if you have ongoing work from other bounties with us where funding has not been released please do not pick up a new issue we would like to involve as many contributors as possible and parallelize the work flow as much as possible please make sure to comment in the issue here immediately after starting work so we know your plans for implementation and a timeline please also note that in order for work to be accepted all code must be accompanied by test cases as well why is this needed summary implements a key view for the market dapp description type feature solution summary using the sketch file found as a reference implement new design using react the design can be found below definition of done using the provided assets implement the new design for the market dapp update tests as needed add new tests for all newly generated code for reference the existing implementation exists here interact with marketprotocol core and expect a few revisions additional comments • this is just the first of a few different pages for the dapp so hunters who do a good job on this bounty will have more similar tasks coming their way | 1 |
800,916 | 28,438,205,333 | IssuesEvent | 2023-04-15 15:22:34 | IAmTamal/Milan | https://api.github.com/repos/IAmTamal/Milan | opened | fix: add proper image size | ✨ goal: improvement 🟨 priority: medium 🛠 status : under development | ### What would you like to share?
- We need to add proper image size to improve SEO

images
- https://milaan.vercel.app/assets/MilanLanding1.2ee0ffc1.png
- https://milaan.vercel.app/assets/LandingMobile.e03cab94.png
### 🥦 Browser
Brave
### Checklist ✅
- [X] I checked and didn't find similar issue
- [X] I have read the [Contributing Guidelines](https://github.com/IAmTamal/Milan/blob/main/CONTRIBUTING.md)
- [X] I am willing to work on this issue (blank for no) | 1.0 | fix: add proper image size - ### What would you like to share?
- We need to add proper image size to improve SEO

images
- https://milaan.vercel.app/assets/MilanLanding1.2ee0ffc1.png
- https://milaan.vercel.app/assets/LandingMobile.e03cab94.png
### 🥦 Browser
Brave
### Checklist ✅
- [X] I checked and didn't find similar issue
- [X] I have read the [Contributing Guidelines](https://github.com/IAmTamal/Milan/blob/main/CONTRIBUTING.md)
- [X] I am willing to work on this issue (blank for no) | priority | fix add proper image size what would you like to share we need to add proper image size to improve seo images 🥦 browser brave checklist ✅ i checked and didn t find similar issue i have read the i am willing to work on this issue blank for no | 1 |
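The issue above asks for explicit image sizes (browsers use `width`/`height` attributes to reserve layout space, which helps both rendering and SEO/Core Web Vitals). The intrinsic size of a PNG can be read from its IHDR chunk without any imaging library; a minimal sketch, assuming a well-formed PNG:

```python
import struct

def png_size(data: bytes) -> tuple[int, int]:
    """Return (width, height) from a PNG byte stream.

    The 8-byte PNG signature is followed by the IHDR chunk:
    4-byte length, 4-byte type b"IHDR", then width and height
    as big-endian unsigned 32-bit integers (bytes 16..24).
    """
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG")
    if data[12:16] != b"IHDR":
        raise ValueError("missing IHDR chunk")
    width, height = struct.unpack(">II", data[16:24])
    return width, height

# Build a fake PNG prefix for demonstration: a 1024x512 image header.
header = (b"\x89PNG\r\n\x1a\n"
          + struct.pack(">I", 13) + b"IHDR"
          + struct.pack(">II", 1024, 512))
print(png_size(header))  # (1024, 512)
```

The resulting values could then be emitted as `width="1024" height="512"` attributes on the corresponding `<img>` tags.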
251,965 | 8,030,008,075 | IssuesEvent | 2018-07-27 18:02:07 | esteemapp/esteem-surfer | https://api.github.com/repos/esteemapp/esteem-surfer | closed | Older revisions of post | medium priority | Add the ability to see older revisions of the post with eSync.
https://github.com/google/diff-match-patch might be useful to show differences... | 1.0 | Older revisions of post - Add the ability to see older revisions of the post with eSync.
https://github.com/google/diff-match-patch might be useful to show differences... | priority | older revisions of post have possibility to see older revisions of the post with esync might be useful to show differences | 1 |
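The issue links Google's diff-match-patch; the same "show differences between revisions" idea can be sketched with Python's stdlib `difflib` (names and sample text here are illustrative, not eSteem's actual code):

```python
import difflib

def revision_diff(old: str, new: str) -> list[str]:
    """Unified diff between two revisions of a post body."""
    return list(difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile="rev1", tofile="rev2", lineterm=""))

old = "Hello Steem\nThis is my first post."
new = "Hello Steem\nThis is my first post, now edited."
for line in revision_diff(old, new):
    print(line)
```

Each `-`/`+` line marks a removed or added line between the two revisions, which is exactly the view an "older revisions" screen would render.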
620,312 | 19,558,818,079 | IssuesEvent | 2022-01-03 13:33:39 | bounswe/2021SpringGroup1 | https://api.github.com/repos/bounswe/2021SpringGroup1 | closed | Adding comment related API functionality | Priority: Medium Status: In Progress Platform: Backend | This is not very crucial for now, but it looks quite straightforward so I will try to address this requirement now. We discussed specifications as:
- Returned comment objects will contain `id:<comment_id>`, `poster_id:<user_id>` ,`body:<str>`, `replies: [<comments>]`, `created_date: <date>` fields.
- Comments will be rendered in a special page containing only the post itself.
- Comments will be generated by entrypoint taking fields `post_id:<post_id>`, `body:<str>`, `replied_to:<comment_id>` | 1.0 | Adding comment related API functionality - This is not very crucial for now, but it looks quite straightforward so I will try to address this requirement now. We discussed specifications as:
- Returned comment objects will contain `id:<comment_id>`, `poster_id:<user_id>` ,`body:<str>`, `replies: [<comments>]`, `created_date: <date>` fields.
- Comments will be rendered in a special page containing only the post itself.
- Comments will be generated by entrypoint taking fields `post_id:<post_id>`, `body:<str>`, `replied_to:<comment_id>` | priority | adding comment related api functionality this is not very crutial for now but it looks quite straightforward so i will try to address this requirement now we discussed specifications as returned comment objects will contain id poster id body replies created date fields comments will be rendered in a special page containing only the post itself comments will be generated by entrypoint taking fields post id body replied to | 1 |
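The agreed comment shape can be illustrated with a small builder (a sketch only — the field names follow the spec above, the function itself is hypothetical):

```python
from datetime import date

def make_comment(comment_id, poster_id, body, replies=None, created=None):
    """Assemble a comment object matching the agreed API shape:
    id, poster_id, body, replies (nested comments), created_date."""
    return {
        "id": comment_id,
        "poster_id": poster_id,
        "body": body,
        "replies": replies or [],  # nested comment objects
        "created_date": (created or date.today()).isoformat(),
    }

reply = make_comment(2, 7, "Agreed!", created=date(2021, 1, 2))
root = make_comment(1, 5, "Nice post", replies=[reply], created=date(2021, 1, 1))
print(root["replies"][0]["body"])  # Agreed!
```

Nesting replies inside their parent keeps the "special page containing only the post" render to a single recursive pass over one JSON tree.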
45,478 | 2,933,889,560 | IssuesEvent | 2015-06-30 03:12:25 | openpnp/openpnp | https://api.github.com/repos/openpnp/openpnp | closed | Program requires restart before recognizing new cameras. | bug Component-GUI imported Priority-Medium | _Original author: ja...@vonnieda.org (May 13, 2012 04:10:48)_
When a new camera is added through the GUI the camera panel and machine controls panel do not know about it until after a restart. This is because these panels only read the configuration when it's loaded. These classes will need to become PropertyChangeListeners for the machine.cameras list.
_Original issue: http://code.google.com/p/openpnp/issues/detail?id=10_ | 1.0 | Program requires restart before recognizing new cameras. - _Original author: ja...@vonnieda.org (May 13, 2012 04:10:48)_
When a new camera is added through the GUI the camera panel and machine controls panel do not know about it until after a restart. This is because these panels only read the configuration when it's loaded. These classes will need to become PropertyChangeListeners for the machine.cameras list.
_Original issue: http://code.google.com/p/openpnp/issues/detail?id=10_ | priority | program requires restart before recognizing new cameras original author ja vonnieda org may when a new camera is added through the gui the camera panel and machine controls panel do not know about it until after a restart this is because these panels only read the configuration when it s loaded these classes will need to become propertychangelisteners for the machine cameras list original issue | 1 |
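The fix described — making the panels `PropertyChangeListener`s on `machine.cameras` — is the classic observer pattern. OpenPnP itself is Java; this Python stand-in (all names hypothetical) only sketches the idea:

```python
class CameraList:
    """Holds cameras and notifies listeners whenever the list changes."""

    def __init__(self):
        self._cameras = []
        self._listeners = []

    def add_listener(self, fn):
        """fn(property_name, old_value, new_value) — mirrors PropertyChangeListener."""
        self._listeners.append(fn)

    def add_camera(self, camera):
        old = list(self._cameras)
        self._cameras.append(camera)
        for fn in self._listeners:  # panels re-read the camera list here
            fn("cameras", old, list(self._cameras))

seen = []
cameras = CameraList()
cameras.add_listener(lambda prop, old, new: seen.append(new))
cameras.add_camera("USB-Cam-1")
print(seen)  # [['USB-Cam-1']]
```

With this wiring, the camera panel and machine-controls panel would pick up new cameras immediately instead of only at configuration load, removing the need for a restart.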
821,302 | 30,816,096,325 | IssuesEvent | 2023-08-01 13:34:09 | Dessia-tech/plot_data | https://api.github.com/repos/Dessia-tech/plot_data | closed | bug: single point dataset makes a bug | type: bug priority: medium | * **I'm submitting a ...**
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
Having a single point dataset makes the plot crash:
Even if there is another dataset with more than 1 point.
```
import plot_data
from plot_data.colors import *
import numpy as np
elements1 = [{'time': 0, 'electric current': 1}]
dataset1 = plot_data.Dataset(elements=elements1, name='I1 = f(t)')
# The previous line instantiates a dataset with limited arguments but
# several customizations are available
point_style = plot_data.PointStyle(color_fill=RED, color_stroke=BLACK)
edge_style = plot_data.EdgeStyle(color_stroke=BLUE, dashline=[10, 5])
custom_dataset = plot_data.Dataset(elements=elements1, name='I = f(t)',
point_style=point_style,
edge_style=edge_style)
graph2d = plot_data.Graph2D(graphs=[dataset1],
x_variable='time', y_variable='electric current')
plot_data.plot_canvas(plot_data_object=graph2d, canvas_id='canvas',
debug_mode=True)
```

* **Please tell us about your environment:**
- python version: 0.10.8
| 1.0 | bug: single point dataset makes a bug - * **I'm submitting a ...**
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
Having a single point dataset makes the plot crash:
Even if there is another dataset with more than 1 point.
```
import plot_data
from plot_data.colors import *
import numpy as np
elements1 = [{'time': 0, 'electric current': 1}]
dataset1 = plot_data.Dataset(elements=elements1, name='I1 = f(t)')
# The previous line instantiates a dataset with limited arguments but
# several customizations are available
point_style = plot_data.PointStyle(color_fill=RED, color_stroke=BLACK)
edge_style = plot_data.EdgeStyle(color_stroke=BLUE, dashline=[10, 5])
custom_dataset = plot_data.Dataset(elements=elements1, name='I = f(t)',
point_style=point_style,
edge_style=edge_style)
graph2d = plot_data.Graph2D(graphs=[dataset1],
x_variable='time', y_variable='electric current')
plot_data.plot_canvas(plot_data_object=graph2d, canvas_id='canvas',
debug_mode=True)
```

* **Please tell us about your environment:**
- python version: 0.10.8
| priority | bug single point dataset makes a bug i m submitting a bug report feature request support request please do not submit support request here see note at the top of this template having a single point dataset makes the plot crash even if there is another dataset with more than point import plot data from plot data colors import import numpy as np plot data dataset elements name f t the previous line instantiates a dataset with limited arguments but several customizations are available point style plot data pointstyle color fill red color stroke black edge style plot data edgestyle color stroke blue dashline custom dataset plot data dataset elements name i f t point style point style edge style edge style plot data graphs x variable time y variable electric current plot data plot canvas plot data object canvas id canvas debug mode true if the current behavior is a bug please provide the steps to reproduce and if possible a minimal demo of the problem avoid reference to other packages please tell us about your environment python version | 1 |
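One common cause of crashes with a one-element dataset is degenerate axis bounds (min == max gives a zero-width scale). Purely as an illustration of the failure mode — not plot_data's actual internals — a guard could look like:

```python
def safe_bounds(values, pad_ratio=0.05):
    """Return (lo, hi) padded bounds; never zero-width, even for one point."""
    lo, hi = min(values), max(values)
    if lo == hi:  # single point or constant series
        pad = abs(lo) * pad_ratio or 1.0  # fall back to 1.0 when the value is 0
    else:
        pad = (hi - lo) * pad_ratio
    return lo - pad, hi + pad

print(safe_bounds([3.0]))    # ~(2.85, 3.15)
print(safe_bounds([0, 10]))  # (-0.5, 10.5)
```

Applying such a guard where the Graph2D scale is computed would let a single-point dataset render as a lone marker instead of crashing.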
780,341 | 27,390,664,357 | IssuesEvent | 2023-02-28 16:04:00 | CDCgov/prime-reportstream | https://api.github.com/repos/CDCgov/prime-reportstream | closed | Elitemedical - Lab Data from SimpleReport - IL state not receiving negative test results | onboarding-ops Medium Priority Illinois | Problem Statement:
EliteMed Laboratories is concerned that their negative lab data is not going to IL State. After reviewing the data, it looks like the test results are showing as: not detected. Please see below the SimpleReport CSV Uploads being sent to ReportStream Below:

Acceptance Criteria: Ensure that all relevant negative lab test results are going to IL.
To Do: Assign Engineer to Investigate | 1.0 | Elitemedical - Lab Data from SimpleReport - IL state not receiving negative test results - Problem Statement:
EliteMed Laboratories is concerned that their negative lab data is not going to IL State. After reviewing the data, it looks like the test results are showing as: not detected. Please see below the SimpleReport CSV Uploads being sent to ReportStream Below:

Acceptance Criteria: Ensure that all relevant negative lab test results are going to IL.
To Do: Assign Engineer to Investigate | priority | elitemedical lab data from simplereport il state not receiving negative test results problem statement elitemed laboratories is concerned that their negative lab data is not going to il state after reviewing the data it looks like the test results are showing as not detected please see below the simplereport csv uploads being sent to reportstream below acceptance criteria ensure that all relevant negative lab test results are going to il to do assign engineer to investigate | 1 |
757,878 | 26,533,199,310 | IssuesEvent | 2023-01-19 13:59:14 | netlify/next-runtime | https://api.github.com/repos/netlify/next-runtime | closed | Edge runtime routes throw 500 error on trailing slash mismatch | priority: medium | If trailing slash in disabled, requesting an edge runtime route with a trailing slash will throw a 500 error. This is because the matcher pattern strictly matches just the route without the slash, meaning the site tries to serve the route from the origin which will fail. The expected behaviour is to redirect to the canonical version.
The fix is the trailing slash fix as indicated in #1448, however this may need to happen before the edge router is implemented.
e2e test: `streaming-ssr` | 1.0 | Edge runtime routes throw 500 error on trailing slash mismatch - If trailing slash in disabled, requesting an edge runtime route with a trailing slash will throw a 500 error. This is because the matcher pattern strictly matches just the route without the slash, meaning the site tries to serve the route from the origin which will fail. The expected behaviour is to redirect to the canonical version.
The fix is the trailing slash fix as indicated in #1448, however this may need to happen before the edge router is implemented.
e2e test: `streaming-ssr` | priority | edge runtime routes throw error on trailing slash mismatch if trailing slash in disabled requesting an edge runtime route with a trailing slash will throw a error this is because the matcher pattern strictly matches just the route without the slash meaning the site tries to serve the route from the origin which will fail the expected behaviour is to redirect to the canonical version the fix is the trailing slash fix as indicated in however this may need to happen before the edge router is implemented test streaming ssr | 1 |
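The expected fix — redirect to the canonical form instead of failing at the origin — reduces to normalising the trailing slash before route matching. A hedged sketch (not Netlify's actual edge router code):

```python
def canonicalize(path: str, trailing_slash: bool = False):
    """Return (redirect_needed, canonical_path) for a request path."""
    if path == "/":
        return False, path  # the root is always canonical
    has_slash = path.endswith("/")
    if trailing_slash and not has_slash:
        return True, path + "/"
    if not trailing_slash and has_slash:
        return True, path.rstrip("/")
    return False, path

print(canonicalize("/streaming-ssr/"))  # (True, '/streaming-ssr')
print(canonicalize("/streaming-ssr"))   # (False, '/streaming-ssr')
```

When `redirect_needed` is true the router would emit a 301/308 to the canonical path; otherwise it proceeds to match the edge-runtime route as usual.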
551,643 | 16,177,755,409 | IssuesEvent | 2021-05-03 09:43:38 | sopra-fs21-group-4/server | https://api.github.com/repos/sopra-fs21-group-4/server | closed | Save/Update Lobby Settings for the Lobby Entity (esp. Time Limits) | medium priority removed task | - [x] Implement adaptSettings() in Game
- [x] Implement adaptSettings() in GameService
- [x] Implement adaptSettings() in GameController
- [ ] test
#37 Story 15 | 1.0 | Save/Update Lobby Settings for the Lobby Entity (esp. Time Limits) - - [x] Implement adaptSettings() in Game
- [x] Implement adaptSettings() in GameService
- [x] Implement adaptSettings() in GameController
- [ ] test
#37 Story 15 | priority | save update lobby settings for the lobby entity espc time limits implement adaptsettings in game implement adaptsettings in gameservice implement adaptsettings in gamecontroller test story | 1 |
596,274 | 18,101,467,788 | IssuesEvent | 2021-09-22 14:36:56 | gravityview/GravityView | https://api.github.com/repos/gravityview/GravityView | closed | Fix enqueued scripts and styles when embedded in ACF fields | Enhancement Compat: Plugin Difficulty: Medium Priority: Medium | > If the shortcode is in the standard content editor, the default css is included (table-view.css, gv-default.css). When we switch the page to use a page builder powered by ACF, the CSS is not included. We love the idea of the CSS only being included on pages where it's required but, if it's only checking "the_content()" this is going to be an issue. Is there any way around this other than manually enqueueing the files globally? | 1.0 | Fix enqueued scripts and styles when embedded in ACF fields - > If the shortcode is in the standard content editor, the default css is included (table-view.css, gv-default.css). When we switch the page to use a page builder powered by ACF, the CSS is not included. We love the idea of the CSS only being included on pages where it's required but, if it's only checking "the_content()" this is going to be an issue. Is there any way around this other than manually enqueueing the files globally? | priority | fix enqueued scripts and styles when embedded in acf fields if the shortcode is in the standard content editor the default css is included table view css gv default css when we switch the page to use a page builder powered by acf the css is not included we love the idea of the css only being included on pages where it s required but if it s only checking the content this is going to be an issue is there any way around this other than manually enqueueing the files globally | 1 |
779,817 | 27,367,344,948 | IssuesEvent | 2023-02-27 20:17:28 | DSpace/dspace-angular | https://api.github.com/repos/DSpace/dspace-angular | closed | In facets, vertically align checkbox with first word of option | bug help wanted usability component: Discovery good first issue medium priority Estimate TBD | **Describe the bug** DSpace 7.2?
As first reported in https://groups.google.com/g/dspace-community/c/GMO020isfZQ/m/1ISexkihAwAJ,
in facets, the check box for each option seems to be vertically centered on each option. This is fine for single lines but is confusing for multi-line options. For example, the ETD collection in the Iowa State University repository running DSpace 7.2, https://dr.lib.iastate.edu/collections/0830d32e-14e1-4a4f-bb8f-271a75ed35af?scope=0830d32e-14e1-4a4f-bb8f-271a75ed35af, offers a Department facet with the long option, Civil, Construction, and Environmental Engineering. I propose putting the check box inline with the first word of each option. I observe this in Firefox but not in Chrome or Safari. Attached are screenshots of each. [Note also that checkboxes in Safari overlap the text for some options.]
___________
Firefox
<img width="271" alt="Firefox_facet_checkbox" src="https://user-images.githubusercontent.com/13037168/176241621-f3fbe7ca-7aa1-45a8-8799-b823283cf379.png">
___________
Chrome
<img width="271" alt="Chrome_facet_checkbot" src="https://user-images.githubusercontent.com/13037168/176241670-c298117a-f9e6-4f01-8840-f8f7c0361688.png">
___________
Safari
<img width="271" alt="Safari_facet_checkbox" src="https://user-images.githubusercontent.com/13037168/176241700-d681b86d-bbd4-447f-9c14-94f4cf58948b.png">
___________
| 1.0 | In facets, vertically align checkbox with first word of option - **Describe the bug** DSpace 7.2?
As first reported in https://groups.google.com/g/dspace-community/c/GMO020isfZQ/m/1ISexkihAwAJ,
in facets, the check box for each option seems to be vertically centered on each option. This is fine for single lines but is confusing for multi-line options. For example, the ETD collection in the Iowa State University repository running DSpace 7.2, https://dr.lib.iastate.edu/collections/0830d32e-14e1-4a4f-bb8f-271a75ed35af?scope=0830d32e-14e1-4a4f-bb8f-271a75ed35af, offers a Department facet with the long option, Civil, Construction, and Environmental Engineering. I propose putting the check box inline with the first word of each option. I observe this in Firefox but not in Chrome or Safari. Attached are screenshots of each. [Note also that checkboxes in Safari overlap the text for some options.]
___________
Firefox
<img width="271" alt="Firefox_facet_checkbox" src="https://user-images.githubusercontent.com/13037168/176241621-f3fbe7ca-7aa1-45a8-8799-b823283cf379.png">
___________
Chrome
<img width="271" alt="Chrome_facet_checkbot" src="https://user-images.githubusercontent.com/13037168/176241670-c298117a-f9e6-4f01-8840-f8f7c0361688.png">
___________
Safari
<img width="271" alt="Safari_facet_checkbox" src="https://user-images.githubusercontent.com/13037168/176241700-d681b86d-bbd4-447f-9c14-94f4cf58948b.png">
___________
| priority | in facets vertically align checkbox with first word of option describe the bug dspace as first reported in in facets the check box for each option seems to be vertically centered on each option this is fine for single lines but is confusing for multi line options for example the etd collection in the iowa state university repository running dspace offers a department facet with the long option civil construction and environmental engineering i propose putting the check box inline with the first word of each option i observe this in firefox but not in chrome or safari attached are screenshots of each firefox img width alt firefox facet checkbox src chrome img width alt chrome facet checkbot src safari img width alt safari facet checkbox src a clear and concise description of what the bug is include the version s of dspace where you ve seen this problem what web browser you were using link to examples if they are public to reproduce steps to reproduce the behavior do this then this expected behavior a clear and concise description of what you expected to happen related work link to any related tickets or prs here | 1 |
761,004 | 26,663,184,445 | IssuesEvent | 2023-01-25 23:26:51 | NIAEFEUP/tts-be | https://api.github.com/repos/NIAEFEUP/tts-be | opened | Analysis Tool - Scheduled Caching | medium effort low priority | Currently, the caching of the statistics is made synchronously: when a user makes a GET request on the `/statistics/` endpoint, the current statistics are cached.
This is unreliable since it requires someone to make these requests for us to have a good chronological representation of the statistics over time.
An asynchronous implementation that caches the statistics 1x or 2x a day is needed. Suggestion:
- django-q
- cron
- celery
- (can't remember but I'm sure there are others :) ) | 1.0 | Analysis Tool - Scheduled Caching - Currently, the caching of the statistics is made synchronously: when a user makes a GET request on the `/statistics/` endpoint, the current statistics are cached.
This is unreliable since it requires someone to make these requests for us to have a good chronological representation of the statistics over time.
An asynchronous implementation that caches the statistics 1x or 2x a day is needed. Suggestion:
- django-q
- cron
- celery
- (can't remember but I'm sure there are others :) ) | priority | analysis tool scheduled caching currently the caching of the statistics is made synchronously when a user makes a get request on the statistics endpoint the current statistics are cached this is unreliable since it requires someone to make these requests for us to have a good chronological representation of the statistics over time an asynchronous implementation that caches the statistics or a day is needed suggestion django q cron celery can t remember but i m sure there are others | 1 |
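Whichever scheduler is picked (django-q, cron, celery beat), the core of running "1x or 2x a day" is computing the next run time. A dependency-free sketch (the run hours are illustrative):

```python
from datetime import datetime, timedelta, time

def next_run(now: datetime, at_hours=(3, 15)) -> datetime:
    """Next daily run at the given hours (e.g. 03:00 and 15:00)."""
    candidates = [datetime.combine(now.date(), time(h)) for h in sorted(at_hours)]
    candidates += [c + timedelta(days=1) for c in candidates]  # tomorrow's slots
    return min(c for c in candidates if c > now)

now = datetime(2023, 1, 25, 16, 30)
print(next_run(now))  # 2023-01-26 03:00:00
```

With celery beat or django-q the same schedule would be declared as configuration instead, and the worker function would write the statistics snapshot to the cache table.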
716,303 | 24,627,795,931 | IssuesEvent | 2022-10-16 18:42:02 | AY2223S1-CS2103T-W16-3/tp | https://api.github.com/repos/AY2223S1-CS2103T-W16-3/tp | closed | Get /wn command restrictions to user inputs | priority.Medium type.Enhancement | There should be restrictions implemented for ward number inputs for patients. This restriction is subjective, but for our case we can set it to accept only **ONE** alphabet followed by numerical values.
Example inputs:
- D312
- T46
- F17 | 1.0 | Get /wn command restrictions to user inputs - There should be restrictions implemented for ward number inputs for patients. This restriction is subjective, but for our case we can set it to accept only **ONE** alphabet followed by numerical values.
Example inputs:
- D312
- T46
- F17 | priority | get wn command restrictions to user inputs there should be restrictions implemented for ward number inputs for patients this restriction is subjective but for our case we can set it to accept only one alphabet followed by numerical values example inputs | 1 |
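The stated restriction — exactly one letter followed by digits — maps directly onto a regular expression. The project itself is Java, so this Python sketch is only illustrative:

```python
import re

# One alphabetic character followed by one or more digits, nothing else.
WARD_NUMBER = re.compile(r"^[A-Za-z]\d+$")

def is_valid_ward(ward: str) -> bool:
    """Accepts e.g. D312, T46, F17; rejects DD12, 312, D3-1."""
    return bool(WARD_NUMBER.fullmatch(ward))

for w in ["D312", "T46", "F17", "DD12", "312", "D3-1"]:
    print(w, is_valid_ward(w))
```

The equivalent Java pattern would be `Pattern.compile("^[A-Za-z]\\d+$")` in the `/wn` argument parser.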
307,171 | 9,414,248,825 | IssuesEvent | 2019-04-10 09:42:08 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | USER ISSUE: steam tractor - no ability to turn on/off for sower and harvester | Medium Priority | **Version:** 0.7.5.0 beta staging-bad4361f
Only the plough can be enabled/disabled by pressing 1.
Both other modules work every time. | 1.0 | USER ISSUE: steam tractor - no ability to turn on/off for sower and harvester - **Version:** 0.7.5.0 beta staging-bad4361f
Only the plough can be enabled/disabled by pressing 1.
Both other modules work every time. | priority | user issue steam tractor no ability to turn on off for sower and harvester version beta staging only plough can be enabled disabled by pressing both other modules work everytime | 1 |
720,863 | 24,809,250,318 | IssuesEvent | 2022-10-25 08:08:17 | AY2223S1-CS2103-F13-1/tp | https://api.github.com/repos/AY2223S1-CS2103-F13-1/tp | closed | As a developer, I can protect my data | type.Story priority.Medium | ...so that others do not get access to my project information | 1.0 | As a developer, I can protect my data - ...so that others do not get access to my project information | priority | as a developer i can protect my data so that others do not get access to my project information | 1 |
813,272 | 30,450,918,648 | IssuesEvent | 2023-07-16 09:31:13 | sef-global/scholarx-backend | https://api.github.com/repos/sef-global/scholarx-backend | opened | Implement update my profile endpoint | backend priority: medium | **Description:**
This issue involves implementing an API to update the profile details. The endpoint should allow clients to make a PUT request to `{{baseUrl}}/api/me`.
sample response body:
```
{
"created_at": "2023-06-29T09:15:00Z",
"updated_at": "2023-06-30T16:45:00Z",
"primary_email": "mentor@example.com",
"contact_email": "mentor_contact@example.com",
"first_name": "John",
"last_name": "Doe",
"image_url": "https://example.com/mentor_profile_image.jpg",
"linkedin_url": "https://www.linkedin.com/in/johndoe",
"type": "DEFAULT",
"uuid": "12345678-1234-5678-1234-567812345678"
}
```
**Tasks:**
1. Create a controller for the `/controllers/profile` endpoint in the backend (create the route profile if not created).
2. Parse and validate the request body to ensure it matches the expected format.
3. Implement appropriate error handling and response status codes for different scenarios (e.g., validation errors, database errors).
4. Write unit tests to validate the functionality and correctness of the endpoint.
API documentation: https://documenter.getpostman.com/view/27421496/2s93m1a4ac#8744a3ee-970f-489a-853d-8b23fdee8de3
ER diagram: https://drive.google.com/file/d/11KMgdNu2mSAm0Ner8UsSPQpZJS8QNqYc/view
**Acceptance Criteria:**
- The profile API endpoint `/controllers/profile` is implemented and accessible via a PUT request.
- The request body is properly parsed and validated for the required format.
- Appropriate error handling is implemented, providing meaningful error messages and correct response status codes.
- Unit tests are written to validate the functionality and correctness of the endpoint.
**Additional Information:**
No
**Related Dependencies or References:**
No
| 1.0 | Implement update my profile endpoint - **Description:**
This issue involves implementing an API to update the profile details. The endpoint should allow clients to make a PUT request to `{{baseUrl}}/api/me`.
sample response body:
```
{
"created_at": "2023-06-29T09:15:00Z",
"updated_at": "2023-06-30T16:45:00Z",
"primary_email": "mentor@example.com",
"contact_email": "mentor_contact@example.com",
"first_name": "John",
"last_name": "Doe",
"image_url": "https://example.com/mentor_profile_image.jpg",
"linkedin_url": "https://www.linkedin.com/in/johndoe",
"type": "DEFAULT",
"uuid": "12345678-1234-5678-1234-567812345678"
}
```
**Tasks:**
1. Create a controller for the `/controllers/profile` endpoint in the backend (create the route profile if not created).
2. Parse and validate the request body to ensure it matches the expected format.
3. Implement appropriate error handling and response status codes for different scenarios (e.g., validation errors, database errors).
4. Write unit tests to validate the functionality and correctness of the endpoint.
API documentation: https://documenter.getpostman.com/view/27421496/2s93m1a4ac#8744a3ee-970f-489a-853d-8b23fdee8de3
ER diagram: https://drive.google.com/file/d/11KMgdNu2mSAm0Ner8UsSPQpZJS8QNqYc/view
**Acceptance Criteria:**
- The profile API endpoint `/controllers/profile` is implemented and accessible via a PUT request.
- The request body is properly parsed and validated for the required format.
- Appropriate error handling is implemented, providing meaningful error messages and correct response status codes.
- Unit tests are written to validate the functionality and correctness of the endpoint.
**Additional Information:**
No
**Related Dependencies or References:**
No
| priority | implement update my profile endpoint description this issue involves implementing an api to update the profile details the endpoint should allow clients to make a put request to baseurl api me sample response body created at updated at primary email mentor example com contact email mentor contact example com first name john last name doe image url linkedin url type default uuid tasks create a controller for the controllers profile endpoint in the backend create the route profile if not created parse and validate the request body to ensure it matches the expected format implement appropriate error handling and response status codes for different scenarios e g validation errors database errors write unit tests to validate the functionality and correctness of the endpoint api documentation er diagram acceptance criteria the profile api endpoint controllers profile is implemented and accessible via a put request the request body is properly parsed and validated for the required format appropriate error handling is implemented providing meaningful error messages and correct response status codes unit tests are written to validate the functionality and correctness of the endpoint additional information no related dependencies or references no | 1 |
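The ScholarX backend is TypeScript/Express, but the request-body validation from task 2 can be sketched language-agnostically. Field names come from the sample body above; everything else (the function name, error strings) is hypothetical:

```python
# Fields a client may change via PUT /api/me vs. server-managed fields.
ALLOWED_FIELDS = {
    "primary_email", "contact_email", "first_name", "last_name",
    "image_url", "linkedin_url",
}
READ_ONLY = {"uuid", "type", "created_at", "updated_at"}

def validate_profile_update(body: dict) -> list[str]:
    """Return a list of validation errors for a profile-update payload."""
    errors = []
    for field in body:
        if field in READ_ONLY:
            errors.append(f"{field} is read-only")
        elif field not in ALLOWED_FIELDS:
            errors.append(f"unknown field: {field}")
    for field in ("primary_email", "contact_email"):
        value = body.get(field)
        if value is not None and "@" not in value:  # crude email check
            errors.append(f"{field} is not a valid email")
    return errors

print(validate_profile_update({"first_name": "John", "uuid": "123"}))
# ['uuid is read-only']
```

A non-empty error list would translate to a 400 response with the messages in the body; an empty list lets the handler proceed to update the profile row.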
504,353 | 14,617,081,791 | IssuesEvent | 2020-12-22 14:15:40 | ita-social-projects/horondi_client_fe | https://api.github.com/repos/ita-social-projects/horondi_client_fe | opened | ['Моделі' page] Font-size and font-family of headline and content of page do not match the expected values | UI bug priority: medium severity: trivial | **Environment:** Windows 10 Home, Google Chrome Version 87.0.4280.88.
**Reproducible:** always.
**Build found:** commit https://horondi-admin-staging.azurewebsites.net/models
**Preconditions**
Go to https://horondi-admin-staging.azurewebsites.net
Log into Administrator page as Administrator: 'User'="admin2@gmail.com", 'Password'="qwertY123"
**Steps to reproduce**
1. Click on item 'Моделі'
2. Click on button of light/dark theme
**Actual result**
Headline 'Інформація про моделі' - font-size 24px, font-family Roboto, Helvetica
Content - font-size 12px, font-family Roboto, Helvetica
**Expected result**
Headline 'Інформація про моделі' - font-size 24px, font-family Montserrat
Content - font-size 16px, font-family Montserrat.
**User story and test case links**
User story #LVHRB-19
[Test case](https://jira.softserve.academy/browse/LVHRB-41)
**Labels to be added**
"Bug", Priority ("medium"), Severity ("trivial"), Type ("UI").
| 1.0 |
| priority | font size and font family of headline and content of page don t met the appropriates environment windows home google chrome version reproducible always build found commit preconditions go to log into administrator page as administrator user gmail com password steps to reproduce click on item моделі click on button of light dark theme actual result headline інформація про моделі font size font family roboto helvetica content font size font family roboto helvetica expected result headline інформація про моделі font size font family montserrat content font size font family montserrat user story and test case links user story lvhrb labels to be added bug priority medium severity trivial type ui | 1 |
370,667 | 10,935,283,409 | IssuesEvent | 2019-11-24 17:16:44 | UW-Macrostrat/burwell-processing | https://api.github.com/repos/UW-Macrostrat/burwell-processing | closed | Reno Junction, WY | Medium priority | ## Info
**Name:**
Geologic Map Of The Reno Junction 30' X 60' Quadrangle, Campbell, And Weston Counties, Wyoming
**Source URL:**
GIS Data: http://www.wsgs.wyo.gov/gis-files/mapseries/100k/bedrock/gis-2003-ms-62.zip
PDF: http://www.wsgs.wyo.gov/products/wsgs-2004-ms-62.pdf
**Estimated scale:**
large
**Number of bedrock polygons:**
**Lithology field:**
name
**Time field:**
age,early_id,late_id
**Stratigraphy name field:**
strat_name,hierarchy
**Description field:**
description
**Comments field:**
comments
**Comments:**
coal mine polygon is not processed.
No lines or points shapefiles
## Todo:
- [x] Insert new source record
- [x] Add data to database
- [x] Process and add to homogenized tables
- [ ] Process and import lines
- [x] Match
- [x] Roll tiles
| 1.0 |
| priority | reno junction wy info name geologic map of the reno junction x quadrangle campbell and weston counties wyoming source url gis data pdf estimated scale large number of bedrock polygons lithology field name time field age early id late id stratigraphy name field strat name hierarchy description field description comments field comments comments coal mine polygon is not processed no lines or points shapefiles todo insert new source record add data to database process and add to homogenized tables process and import lines match roll tiles | 1 |
605,947 | 18,752,375,479 | IssuesEvent | 2021-11-05 05:04:10 | CMPUT301F21T20/HabitTracker | https://api.github.com/repos/CMPUT301F21T20/HabitTracker | closed | 1.5 Delete Habit | priority: medium Habits complexity: low | Story Point: As a doer, I want to delete a habit of mine.
Provide a method for doers to delete already created habits. Refer to UI mockups for design. Integration with firestore is required.
Halfway Checkpoint: Backend implementation complete
Points: 2
Priority: Medium
Complexity: Low | 1.0 |
| priority | delete habit story point as a doer i want to delete a habit of mine provide a method for doers to delete already created habits refer to ui mockups for design integration with firestore is required halfway checkpoint backend implemetation complete points priority medium complexity low | 1 |
16,468 | 2,615,116,738 | IssuesEvent | 2015-03-01 05:41:52 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | Youtube | auto-migrated Priority-Medium Type-Sample | ```
Code required for fetching from youtube, the following using
google-api-java-client
1) Most Rated Videos
2) Most Viewed Videos
3) Searching a video
Regards,
Anees
```
Original issue reported on code.google.com by `aneesaha...@gmail.com` on 27 Sep 2010 at 5:02
* Merged into: #16 | 1.0 |
| priority | youtube code required for fetching from youtube the following using google api java client most rated videos most viewed videos searching a video regards anees original issue reported on code google com by aneesaha gmail com on sep at merged into | 1 |
35,967 | 2,794,222,704 | IssuesEvent | 2015-05-11 15:34:33 | CUL-DigitalServices/grasshopper-ui | https://api.github.com/repos/CUL-DigitalServices/grasshopper-ui | opened | Editing Module Event time - (Day and time) dropdown options don't display the time on interface | Medium Priority | -Editing Module Event - (Day and time) drop down buttons covering the day and time view on interface, this happens in Firefox browser (25 v), OS Windows 7 Professional.
-In Chrome browser(42.0 v) time is not displayed in fields, and day field length should be increased so that when day is selected it should display full text of the day
-Attached Screenshot for reference


| 1.0 |
| priority | editting module event time day and time dropdown options doesnt dislay the time on interface editing module event day and time drop down buttons covering the day and time view on interface this happens in firefox browser v os windows professional in chrome browser v time is not displayed in fields and day field length should be increased so that when day is selected it should display full text of the day attached screenshot for reference | 1 |
317,887 | 9,670,421,703 | IssuesEvent | 2019-05-21 19:52:43 | x-klanas/Wrath | https://api.github.com/repos/x-klanas/Wrath | closed | Start testing button | 2 points medium priority user story | As a player I want to have a button in the garage, which when pressed turn on the engine on the vehicle.
- [x] Then button must be easily reachable by the player
- [x] When pressed, the testing timer starts and the engine turns on | 1.0 | Start testing button - As a player I want to have a button in the garage, which when pressed turn on the engine on the vehicle.
- [x] Then button must be easily reachable by the player
- [x] When pressed, the testing timer starts and the engine turns on | priority | start testing button as a player i want to have a button in the garage which when pressed turn on the engine on the vehicle then button must be easily reachable by the player when pressed the testing timer starts and the engine turns on | 1 |
399,709 | 11,759,481,933 | IssuesEvent | 2020-03-13 17:21:46 | eric-bixby/auto-sort-bookmarks-webext | https://api.github.com/repos/eric-bixby/auto-sort-bookmarks-webext | closed | Sort by Last visited missing in new version | Priority: Medium Status: Pending Type: Bug | Hi
Any chance of getting the option to sort by "Last Visited" back in a future version? It disappeared in 3.4.
| 1.0 |
| priority | sort by last visited missing in new version hi any chance in getting option to sort by last visited returned in future version disappeared in | 1 |
715,438 | 24,599,267,244 | IssuesEvent | 2022-10-14 11:01:04 | bounswe/bounswe2022group2 | https://api.github.com/repos/bounswe/bounswe2022group2 | closed | Research & Report for Backend Technologies (Backend Team) | priority-medium status-inprogress back-end | ### Issue Description
After our first meeting, the backend team for our project has been decided as @codingAku, @mbatuhancelik and @hasancan-code. Our first task as the backend team is to research and present our findings about various technologies for implementing the backend; first ideas include Node & Python FastAPI.
### Step Details
Steps that will be performed:
- Find pros and cons of various technologies and their performance/complexity
- Present your findings
- Decide on the most optimal solution
### Final Actions
After our research is complete and the technology we should use is decided, we will connect with our Mobile and Frontend team and let everyone in the group know about it.
### Responsible People
- [x] Mehmet Batuhan Çelik
- [ ] Hasan Can Erol
- [x] Ecenur Sezer
### Deadline of the Issue
10.10.2022 - 23:59
### Reviewer
_No response_
### Deadline for the Review
_No response_
### Final To-Dos
- [X] Every responsible shared an info about the sections/parts s/he will perform.
- [ ] Every responsible mentioned the sub-issue contains the details of his/her work, if a sub-issue is created.
- [ ] Every responsible mentioned this issue in the description of his/her sub-issue, if a sub-issue is created. | 1.0 |
| priority | research report for backend technologies backend team issue description after our first meeting backend team for our project has been decided as codingaku mbatuhancelik and hasancan code our first task as backend team is make a research and present our findings about various technologies for implementing backend first ideas include node python fastapi step details steps that will be performed find pros and cons of various technologies and their performance complexity present your findings decide on the most optimal solution final actions after our research is complete and the technology we should use is decided we will connect with our mobile and frontend team and let everyone in the group know about it responsible people mehmet batuhan çelik hasan can erol ecenur sezer deadline of the issue reviewer no response deadline for the review no response final to dos every responsible shared an info about the sections parts s he will perform every responsible mentioned the sub issue contains the details of his her work if a sub issue is created every responsible mentioned this issue in the description of his her sub issue if a sub issue is created | 1 |
989 | 2,506,524,029 | IssuesEvent | 2015-01-12 11:24:26 | HubTurbo/HubTurbo | https://api.github.com/repos/HubTurbo/HubTurbo | closed | HT (re)creates some labels automatically | feature-labels priority.medium type.bug | HT seems to recreate deleted labels. specifically, status.open and status.closed | 1.0 | | priority | ht re creates some labels automatically ht seems to recreate deleted labels specifically status open and status closed | 1 |
345,022 | 10,351,613,451 | IssuesEvent | 2019-09-05 07:21:26 | OpenSRP/opensrp-server-web | https://api.github.com/repos/OpenSRP/opensrp-server-web | closed | Get all children and grand children of a jurisdiction | Priority: Medium | Is it possible to add support to the `/rest/location/findByProperties` endpoint to return all the children and grandchildren - basically the entire family of a jurisdiction?
For example, I would love to be able to get all the jurisdictions whose root parent is `Namibia`. | 1.0 |
| priority | get all children and grand children of a jurisdiction is it possible to add support to the rest location findbyproperties endpoint to return all the children and grand children basically the entire family of a jurisdiction for example i would love to be able to get all the jurisdictions whose root parent is namibia | 1 |
48,767 | 2,999,805,441 | IssuesEvent | 2015-07-23 20:57:52 | zhengj2007/BFO-test | https://api.github.com/repos/zhengj2007/BFO-test | closed | immaterial parts and MaterialEntity | duplicate imported Priority-Medium Type-BFO2-Reference | _From [batchelorc@rsc.org](https://code.google.com/u/batchelorc@rsc.org/) on February 17, 2010 09:21:01_
Hello,
This is not a problem with BFO as such, but something to bear in mind in
the presentation.
Objects and FiatObjectParts at least may (and possibly must) have
immaterial parts such as cavities, hollows and tunnels. I'm not sure about
whether the space in between penguins in a huddle counts as an immaterial
part of that ObjectAggregate, but that ObjectAggregate definitely has
immaterial parts inside and on the surface of the penguins.
Calling the parent of these three MaterialEntity does make it sound as if
they don't include the immaterial parts.
Also, whereas process--result polysemy (examples: infection, distribution)
relates an occurrent and a continuant, figure--ground polysemy (examples:
door, gate, conduit, tunnel) relates the whole and an immaterial part in a
rather more complicated way. I'm happy with the inside of a fireplace or a
blood vessel being a Site, but it seems less obvious that a door (in the
sense of walking through a door with a pout as opposed to walking through a
door with a crash and splintering wood) is a Site.
Where do immaterial parts that aren't Sites go?
Best wishes,
Colin.
_Original issue: http://code.google.com/p/bfo/issues/detail?id=12_ | 1.0 |
| priority | immaterial parts and materialentity from on february hello this is not a problem with bfo as such but something to bear in mind in the presentation objects and fiatobjectparts at least may and possibly must have immaterial parts such as cavities hollows and tunnels i m not sure about whether the space in between penguins in a huddle counts as an immaterial part of that objectaggregate but that objectaggregate definitely has immaterial parts inside and on the surface of the penguins calling the parent of these three materialentity does make it sound as if they don t include the immaterial parts also whereas process result polysemy examples infection distribution relates an occurrent and a continuant figure ground polysemy examples door gate conduit tunnel relates the whole and an immaterial part in a rather more complicated way i m happy with the inside of a fireplace or a blood vessel being a site but it seems less obvious that a door in the sense of walking through a door with a pout as opposed to walking through a door with a crash and splintering wood is a site where do immaterial parts that aren t sites go best wishes colin original issue | 1 |
566,275 | 16,817,316,416 | IssuesEvent | 2021-06-17 08:55:35 | nerdguyahmad/randomstuff.py | https://api.github.com/repos/nerdguyahmad/randomstuff.py | closed | [TODO] Warnings not coloured properly | Priority: MEDIUM Type: Bug | The warnings that are supposed to be orange in terminal are only orange on Linux based systems. On windows, It simply prints the raw code.
## Possible Workaround
Use `colorama.init()` on windows. | 1.0 |
| priority | warnings not coloured properly the warnings that are supposed to be orange in terminal are only orange on linux based systems on windows it simply prints the raw code possible workaround use colorama init on windows | 1 |
115,006 | 4,650,687,165 | IssuesEvent | 2016-10-03 06:30:31 | VSCodeVim/Vim | https://api.github.com/repos/VSCodeVim/Vim | reopened | bug: press % for matching does not work or goes to wrong locations | difficulty-medium not hard just annoying priority-normal |
Please *thumbs-up* 👍 this issue if it personally affects you! You can do this by clicking on the emoji-face on the top right of this post. Issues with more thumbs-up will be prioritized.
-----
### What did you do?
Open this file https://github.com/telerik/kendo-ui-core/blob/master/src/kendo.color.js in visual studio code and switch to NORMAL MODE, go to last line which begins with "}, typeof define == 'function' && define.amd", move cursor to col 1, that should be under '}', now press %.
### What did you expect to happen?
The cursor should go to '{' in line 3.
### What happened instead?
Nothing happens.
Many times, pressing % will move the cursor to the wrong place.
### Technical details:
* VSCode Version: 1.3 /1.4
* VsCodeVim Version: *[please ensure you are on the latest]* 0.1.7
* OS: win7, 64bit
| 1.0 |
| priority | bug press for matching does not work or goes to wrong locations please thumbs up 👍 this issue if it personally affects you you can do this by clicking on the emoji face on the top right of this post issues with more thumbs up will be prioritized what did you do open this file in visual studio code and switch to normal mode go to last line which begins with typeof define function define amd move cursor to col that s should be under now press what did you expect to happen the cursor should go to in line what happened instead nothing happens many times press will cause the cursor go to wrong places technical details vscode version vscodevim version os | 1 |
142,134 | 5,459,722,744 | IssuesEvent | 2017-03-09 01:45:06 | NostraliaWoW/mangoszero | https://api.github.com/repos/NostraliaWoW/mangoszero | reopened | Garona: A Study on Stealth and Treachery | Awaiting Feedback Priority - Medium System | This book you can get at 54 but you can't start the quest as the quest is 57 in the database please change this.
http://db.vanillagaming.org/?search=Garona%3A+A+Study+on+Stealth+and+Treachery#items | 1.0 |
| priority | garona a study on stealth and treachery this book you can get at but you can t start the quest as the quest is in the database please change this | 1 |
348,212 | 10,440,198,422 | IssuesEvent | 2019-09-18 08:12:38 | openshift/odo | https://api.github.com/repos/openshift/odo | closed | Implement `odo services list -o json` / listServiceInstances | kind/discussion points/3 priority/Medium | [kind/Enhancement]
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## Which functionality do you think we should update/improve?
Within VSCode we currently use:
`oc get ServiceInstance -o jsonpath="{range .items[?(.metadata.labels.app == \\"${app}\\")]}{.metadata.labels.app\\.kubernetes\\.io/component-name}{\\"\\n\\"}{end}" --namespace ${project}`
We should implement something similar.
How should we implement this? `odo services list -o json`?
## Why is this needed?
Needed so that VSCode doesn't have to use the `oc` command.
Ping @kadel @girishramnani @dharmit @dgolovin
| 1.0 |
| priority | implement odo services list o json listserviceinstances welcome we kindly ask you to fill out the issue template below use the google group if you have a question rather than a bug or feature request the group is at thanks for understanding and for contributing to the project which functionality do you think we should update improve within vscode we currently use oc get serviceinstance o jsonpath range items metadata labels app kubernetes io component name n end namespace project we should implement something similar how should we iimplement this odo services list o json why is this needed needed so that vscode doesn t have to use the oc command ping kadel girishramnani dharmit dgolovin | 1 |
790,079 | 27,814,995,460 | IssuesEvent | 2023-03-18 15:26:05 | mertvn/EMQ | https://api.github.com/repos/mertvn/EMQ | opened | Add filters for the room listing | enhancement priority-medium | - [ ] IsPrivate
- [ ] QuizStatus
- [ ] Player count (Range slider)
- [ ] QuizSettings (NumSongs, SongSelectionKind, Lives, guess & result times) | 1.0 |
| priority | add filters for the room listing isprivate quizstatus player count range slider quizsettings numsongs songselectionkind lives guess result times | 1 |
779,489 | 27,354,643,273 | IssuesEvent | 2023-02-27 12:01:04 | AUBGTheHUB/spa-website-2022 | https://api.github.com/repos/AUBGTheHUB/spa-website-2022 | closed | HackAUBG Desktop Navbar spacing | frontend medium priority HackAUBG | On the HackAUBG desktop navbar at 900px-1000px, the anchors don't have enough spacing between them.
| 1.0 |
| priority | hackaubg desktop navbar spacing the desktop navbar of hackaubg on the anchors don t have enough spacing between them | 1 |
762,345 | 26,715,977,589 | IssuesEvent | 2023-01-28 14:14:39 | NIAEFEUP/tts-revamp-fe | https://api.github.com/repos/NIAEFEUP/tts-revamp-fe | closed | Show classes on hover in dropdown | low priority medium effort | On hover an option in the dropdown, show the classes in the schedule, but in a light tonality.

| 1.0 |
| priority | show classes on hover in dropdown on hover an option in the dropdown show the classes in the schedule but in a light tonality | 1 |
208,812 | 7,160,514,073 | IssuesEvent | 2018-01-28 01:35:01 | Lapsang-boys/Crimson-Chronicles | https://api.github.com/repos/Lapsang-boys/Crimson-Chronicles | closed | Script that generates mob for different difficulties | priority medium system | We need to write a script that generates 3 sets of mobs, one for each difficulty. The property changed will be maxhp.
Either we do it like this (roughly):
- Easy 60% of hp
- Normal 80%
- Hard 100%
Or
- Easy 80% of hp
- Normal 100%
- Hard 120%
Additionally we might want hardcore depending on how we want to go about balancing. For instance 10% more health than hard.
| 1.0 |
| priority | script that generates mob for different difficulties we need to write a script that generates sets of mobs one for each difficulty the property changed will be maxhp either we do it like this roughly easy of hp normal hard or easy of hp normal hard additionally we might want hardcore depending on how we want to go about balancing for instance more health than hard | 1 |
50,138 | 3,006,205,147 | IssuesEvent | 2015-07-27 08:52:17 | 52North/SOS | https://api.github.com/repos/52North/SOS | closed | Check for not null/empty coordinates in CoordinateTransformer.transformSweCoordinates() before joining | bug medium priority | Add check for not null/empty coordinates in the CoordinateTransformer.transformSweCoordinates() before the Joiner is executed to avoid NPE. | 1.0 | | priority | check for not null empty coordinates in coordinatetransformer transformswecoordinates before joining add check for not null empty coordinates in the coordinatetransformer transformswecoordinates before the joiner is executed to avoid npe | 1 |
821,587 | 30,827,675,321 | IssuesEvent | 2023-08-01 21:33:24 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | opened | Ruins/colonisation feature | new feature :star: suggestion :question: priority low :grey_exclamation: priority medium :grey_exclamation: | <!--
------------------------------------------------------------------------------------------------------------
-->
**Describe your suggestion in full detail below:**
Title
| 2.0 | Ruins/colonisation feature - <!--
------------------------------------------------------------------------------------------------------------
-->
**Describe your suggestion in full detail below:**
Title
| priority | ruins colonisation feature describe your suggestion in full detail below title | 1 |
379,527 | 11,222,868,844 | IssuesEvent | 2020-01-07 21:13:16 | JensenJ/BunkerSurvival | https://api.github.com/repos/JensenJ/BunkerSurvival | closed | [BUG] Client player name not updated on join game. | bug medium-priority network-issue | **Describe the bug**
When a client joins the game, all players that joined before them have the default name on the player connection object. This only happens on the client's game and not the host's.
**To Reproduce**
Steps to reproduce the behavior:
1. Get somebody to host server
2. Join the server
3. Look at host's name
4. See error
**Expected behavior**
Names to be synced across the network.
| 1.0 | [BUG] Client player name not updated on join game. - **Describe the bug**
When a client joins the game, all players that joined before them have the default name on the player connection object. This only happens on the client's game and not the host's.
**To Reproduce**
Steps to reproduce the behavior:
1. Get somebody to host server
2. Join the server
3. Look at host's name
4. See error
**Expected behavior**
Names to be synced across the network.
| priority | client player name not updated on join game describe the bug when a client joins the game all players that joined before them have the default name on the player connection object this only happens on the client s game and not the host s to reproduce steps to reproduce the behavior get somebody to host server join the server look at host s name see error expected behavior name s to be synced across network | 1 |
155,161 | 5,949,797,085 | IssuesEvent | 2017-05-26 15:07:54 | mkdo/kapow-theme | https://api.github.com/repos/mkdo/kapow-theme | closed | Remove widgets from 404 template | Priority: Medium Status: Completed Type: Maintenance | All of the widgets that are output below the search form need to be removed. They are never used.
What would be better is to register a 404 sidebar and refactor the page so that the heading appears above both the content and a new sidebar area. | 1.0 | Remove widgets from 404 template - All of the widgets that are output below the search form need to be removed. They are never used.
What would be better is to register a 404 sidebar and refactor the page so that the heading appears above both the content and a new sidebar area. | priority | remove widgets from template all of the widgets that are output below the search form need to be removed they are never used what would be better is to register a sidebar and refactor the page so that the heading appears above both the content and a new sidebar area | 1 |
672,118 | 22,791,440,178 | IssuesEvent | 2022-07-10 03:49:02 | Pycord-Development/pycord | https://api.github.com/repos/Pycord-Development/pycord | closed | `IPC` Module | feature request priority: medium | ### Summary
Adding an `ipc` feature
### What is the feature request for?
The core library
### The Problem
None really I just think this would be a cool feature to have especially for big bots
### The Ideal Solution
Maybe adding it as an ext? Like discord.ext.web or ipc
### The Current Solution
_No response_
### Additional Context
_No response_ | 1.0 | `IPC` Module - ### Summary
Adding an `ipc` feature
### What is the feature request for?
The core library
### The Problem
None really I just think this would be a cool feature to have especially for big bots
### The Ideal Solution
Maybe adding it as an ext? Like discord.ext.web or ipc
### The Current Solution
_No response_
### Additional Context
_No response_ | priority | ipc module summary adding a ipc feature what is the feature request for the core library the problem none really i just think this would be a cool feature to have especially for big bots the ideal solution maybe adding it as a ext like discord ext web or ipc the current solution no response additional context no response | 1 |
734,130 | 25,338,258,240 | IssuesEvent | 2022-11-18 18:51:28 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Streamline locking when creating a new colocated tablet | kind/bug area/docdb priority/medium | Jira Link: [DB-4092](https://yugabyte.atlassian.net/browse/DB-4092)
### Description
There are some issues with lock usage inside `CatalogManager::CreateTable`.
1. We hold the `mutex_` while writing to the sys catalog [here](https://github.com/yugabyte/yugabyte-db/blob/master/src/yb/master/catalog_manager.cc#L3968) which is frowned on.
2. We [commit](https://github.com/yugabyte/yugabyte-db/blob/master/src/yb/master/catalog_manager.cc#L3969) a cow mutation and then immediately [begin](https://github.com/yugabyte/yugabyte-db/blob/master/src/yb/master/catalog_manager.cc#L3975) one for the same object.
We should streamline this logic. There's a sys catalog [write](https://github.com/yugabyte/yugabyte-db/blob/master/src/yb/master/catalog_manager.cc#L4031) later in this codepath. We should coalesce these two sys catalog writes together. We should also acquire the object-specific locks once and hold them until the single sys catalog write is complete. | 1.0 | [DocDB] Streamline locking when creating a new colocated tablet - Jira Link: [DB-4092](https://yugabyte.atlassian.net/browse/DB-4092)
### Description
There are some issues with lock usage inside `CatalogManager::CreateTable`.
1. We hold the `mutex_` while writing to the sys catalog [here](https://github.com/yugabyte/yugabyte-db/blob/master/src/yb/master/catalog_manager.cc#L3968) which is frowned on.
2. We [commit](https://github.com/yugabyte/yugabyte-db/blob/master/src/yb/master/catalog_manager.cc#L3969) a cow mutation and then immediately [begin](https://github.com/yugabyte/yugabyte-db/blob/master/src/yb/master/catalog_manager.cc#L3975) one for the same object.
We should streamline this logic. There's a sys catalog [write](https://github.com/yugabyte/yugabyte-db/blob/master/src/yb/master/catalog_manager.cc#L4031) later in this codepath. We should coalesce these two sys catalog writes together. We should also acquire the object-specific locks once and hold them until the single sys catalog write is complete. | priority | streamline locking when creating a new colocated tablet jira link description there s some issues with lock usage inside catalogmanager createtable we hold the mutex while writing to the sys catalog which is frowned on we a cow mutation and then immediately one for the same object we should streamline this logic there s a sys catalog later in this codepath we should coalesce these two sys catalog writes together we should also acquire the object specific locks once and hold them until the single sys catalog write is complete | 1 |
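The pattern that issue asks for (acquire the object-specific lock once, buffer both mutations, and flush them in a single sys catalog write instead of a commit/begin/commit sequence) can be illustrated with a toy sketch. Everything here is invented for illustration and is not YugabyteDB's actual code:

```python
import threading

class TableMetadata:
    """Toy stand-in for a catalog object protected by its own lock."""
    def __init__(self):
        self.lock = threading.Lock()
        self.pending = []

class SysCatalog:
    """Toy stand-in for the persistent sys catalog."""
    def __init__(self):
        self.writes = []  # each entry is one batched write

    def write(self, mutations):
        self.writes.append(list(mutations))

def create_table(table: TableMetadata, catalog: SysCatalog):
    # Acquire the object-specific lock once and hold it until the
    # single coalesced sys catalog write completes.
    with table.lock:
        table.pending.append("create_table_entry")
        table.pending.append("create_tablet_entry")
        catalog.write(table.pending)  # one write, not two
        table.pending.clear()
```

Note the lock held here is the per-object lock, not a global mutex; the issue's first complaint (holding the catalog-wide `mutex_` across I/O) is about the latter.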
14,822 | 2,610,635,672 | IssuesEvent | 2015-02-26 21:33:18 | alistairreilly/open-ig | https://api.github.com/repos/alistairreilly/open-ig | closed | Create XSD's for the XML files | auto-migrated Component-Docs Milestone-0.95.200 Priority-Medium Type-Enhancement | ```
They should become relatively stable now.
```
Original issue reported on code.google.com by `akarn...@gmail.com` on 23 Aug 2011 at 9:27 | 1.0 | Create XSD's for the XML files - ```
They should become relatively stable now.
```
Original issue reported on code.google.com by `akarn...@gmail.com` on 23 Aug 2011 at 9:27 | priority | create xsd s for the xml files they should become relatively stable now original issue reported on code google com by akarn gmail com on aug at | 1 |
676,027 | 23,113,974,010 | IssuesEvent | 2022-07-27 15:07:31 | magma/magma | https://api.github.com/repos/magma/magma | closed | [AMF]: AGW is not responding to a PDU session request with a DNN which is not configured in NMS | type: bug wontfix component: agw priority: medium product: 5g sa | After getting a PDU session establishment request with DNN = "internet" while the same APN is not configured in NMS for the subscriber, AGW does not respond back with a reject.
[https://app.zenhub.com/files/170803235/c54ac09a-2ebc-4e11-9ede-0e7c4b62659a/download](https://app.zenhub.com/files/170803235/c54ac09a-2ebc-4e11-9ede-0e7c4b62659a/download) | 1.0 | [AMF]: AGW is not responding to a PDU session request with a DNN which is not configured in NMS - After getting a PDU session establishment request with DNN = "internet" while the same APN is not configured in NMS for the subscriber, AGW does not respond back with a reject.
[https://app.zenhub.com/files/170803235/c54ac09a-2ebc-4e11-9ede-0e7c4b62659a/download](https://app.zenhub.com/files/170803235/c54ac09a-2ebc-4e11-9ede-0e7c4b62659a/download) | priority | agw is not responding to a pdu session request with a dnn which is not configured in nms after getting pdu session establishment request with dnn internet and same apn is not configured in nms for subscriber agw does not respond back with a reject | 1 |
440,689 | 12,702,855,093 | IssuesEvent | 2020-06-22 21:00:41 | ShatteredSuite/ShatteredScrolls2 | https://api.github.com/repos/ShatteredSuite/ShatteredScrolls2 | closed | Default Scroll Types | priority-medium size-small | The plugin should come with a few default scroll types to show off the features better and provide easy conversion from old configs. | 1.0 | Default Scroll Types - The plugin should come with a few default scroll types to show off the features better and provide easy conversion from old configs. | priority | default scroll types the plugin should come with a few default scroll types to show off the features better and provide easy conversion from old configs | 1 |
197,313 | 6,954,238,653 | IssuesEvent | 2017-12-07 00:18:04 | minio/minio | https://api.github.com/repos/minio/minio | closed | docs: Deployment automation button for jelastic cloud | community priority: medium won't fix | Is there any interest to add a button that simplifies deployment and configuration of Minio XL in Jelastic cloud? There are many Jelastic cloud providers that offer free resources, so it should be useful for end users as they can try Minio and get an instant experience easily. There is an example of similar buttons (scroll down to Deployment section) https://github.com/PokemonGoMap/PokemonGo-Map. Please let me know if so then I will add to Readme.md.
| 1.0 | docs: Deployment automation button for jelastic cloud - Is there any interest to add a button that simplifies deployment and configuration of Minio XL in Jelastic cloud? There are many Jelastic cloud providers that offer free resources, so it should be useful for end users as they can try Minio and get an instant experience easily. There is an example of similar buttons (scroll down to Deployment section) https://github.com/PokemonGoMap/PokemonGo-Map. Please let me know if so then I will add to Readme.md.
| priority | docs deployment automation button for jelastic cloud is there any interest to add a button that simplifies deployment and configuration of minio xl in jelastic cloud there are many jelastic cloud providers that offer free resources so it should be useful for end users as they can try minio and get an instant experience easily there is an example of similar buttons scroll down to deployment section please let me know if so then i will add to readme md | 1 |
106,368 | 4,271,013,670 | IssuesEvent | 2016-07-13 09:30:17 | ubuntudesign/snapcraft.io | https://api.github.com/repos/ubuntudesign/snapcraft.io | closed | No mention that devmode snaps will not appear in snap find | in progress Priority: medium Status: More info needed Type: Enhancement | http://snapcraft.io/create does not mention that when you publish your snap in devmode, it will not appear in search results. The reader can find and install their snap by running `snap install hello --devmode` | 1.0 | No mention that devmode snaps will not appear in snap find - http://snapcraft.io/create does not mention that when you publish your snap in devmode, it will not appear in search results. The reader can find and install their snap by running `snap install hello --devmode` | priority | no mention that devmode snaps will not appear in snap find does not mention that when you publish your snap in devmode it will not appear in search results the reader can find and install their snap by running snap install hello devmode | 1 |
77,208 | 3,506,272,189 | IssuesEvent | 2016-01-08 05:10:58 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | Piercing Howl makes Evocation Cancel? (BB #271) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 26.08.2010 23:26:20 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/271
<hr>
Piercing Howl will make channeled spells like evocation cancel, and probably other spells will as well. | 1.0 | Piercing Howl makes Evocation Cancel? (BB #271) - This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 26.08.2010 23:26:20 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/271
<hr>
Piercing Howl will make channeled spells like evocation cancel, and probably other spells will as well. | priority | piercing howl makes evocation cancel bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state resolved direct link piercing howl will make channeled spells like evocation cancel and probably other spells will as well | 1 |
371,985 | 11,007,348,316 | IssuesEvent | 2019-12-04 08:16:57 | ooni/probe-engine | https://api.github.com/repos/ooni/probe-engine | closed | Use mobile user-agents as well | bug effort/S priority/medium | `net.userAgents` has nothing but ancient Firefox.
We should have greater diversity of the User-Agents as we've seen mobile-targeted injections.
**TBD**: find some source of up-to-date UA statistics. | 1.0 | Use mobile user-agents as well - `net.userAgents` has nothing but ancient Firefox.
We should have greater diversity of the User-Agents as we've seen mobile-targeted injections.
**TBD**: find some source of up-to-date UA statistics. | priority | use mobile user agents as well net useragents has nothing but ancient firefox we should have greater diversity of the user agents as we ve seen mobile targeted injections tbd find some source of up to date ua statistics | 1 |
750,369 | 26,199,101,892 | IssuesEvent | 2023-01-03 15:54:06 | netdata/netdata | https://api.github.com/repos/netdata/netdata | opened | [Feat]: Publish native packages for Amazon Linux 2. | area/packaging feature request priority/medium | ### Problem
While we publish RHEL compatible packages, these are not reliably compatible with Amazon Linux 2. This, combined with the fact that AL2 uses its own repository paths in most cases, means that we are not currently providing native packages for Amazon Linux 2.
### Description
We should provide native packages for Amazon Linux 2.
### Importance
nice to have
### Value proposition
Users who are using Amazon Linux 2 will be able to use native packages.
### Proposed implementation
_No response_ | 1.0 | [Feat]: Publish native packages for Amazon Linux 2. - ### Problem
While we publish RHEL compatible packages, these are not reliably compatible with Amazon Linux 2. This, combined with the fact that AL2 uses its own repository paths in most cases, means that we are not currently providing native packages for Amazon Linux 2.
### Description
We should provide native packages for Amazon Linux 2.
### Importance
nice to have
### Value proposition
Users who are using Amazon Linux 2 will be able to use native packages.
### Proposed implementation
_No response_ | priority | publish native packages for amazon linux problem while we publish rhel compatible packages these are not reliably compatible with amazon linux this combined with the fact that uses it’s own repository paths in most cases means that we are not currently providing native packages for amazon linux description we should provide native packages for amazon linux importance nice to have value proposition users who are using amazon linux will be able to use native packages proposed implementation no response | 1 |
171,177 | 6,480,587,516 | IssuesEvent | 2017-08-18 13:46:37 | Polpetta/SecurityAndRiskManagementNotes | https://api.github.com/repos/Polpetta/SecurityAndRiskManagementNotes | opened | Chapter 11 is part of 10.2.5 | bug priority:medium type:content | **Description**
Chapter 11 covers the Security Administrator and nothing else. It makes no sense for this to be its own chapter, because 10.2.5 discusses the `Posizioni della Sicurezza` (Security Positions), in which the `Security Administrator` is just a meagre bullet item. In my opinion the chapter should be merged into that part, and 11.1 should become a subsection of `Posizioni della Sicurezza`, so we eliminate a chapter that makes little sense.
What do you think @herrBez @mzanella ? | 1.0 | Chapter 11 is part of 10.2.5 - **Description**
Chapter 11 covers the Security Administrator and nothing else. It makes no sense for this to be its own chapter, because 10.2.5 discusses the `Posizioni della Sicurezza` (Security Positions), in which the `Security Administrator` is just a meagre bullet item. In my opinion the chapter should be merged into that part, and 11.1 should become a subsection of `Posizioni della Sicurezza`, so we eliminate a chapter that makes little sense.
What do you think @herrBez @mzanella ? | priority | chapter is part of chapter covers the security administrator and nothing else it makes no sense for this to be its own chapter because discusses the posizioni della sicurezza security positions in which the security administrator is just a meagre bullet item in my opinion the chapter should be merged into that part and should become a subsection of posizioni della sicurezza so we eliminate a chapter that makes little sense what do you think herrbez mzanella | 1 |
685,347 | 23,454,100,131 | IssuesEvent | 2022-08-16 07:16:13 | gladiaio/gladia | https://api.github.com/repos/gladiaio/gladia | closed | POST requests works with a bad formatted JSON | priority: medium status : needs additional information | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
POST instructions on the endpoint are unclear. (application/json is required, but sending a plain string as JSON works?)

**Steps to reproduce**
1. Check swagger "Try out" requests.

**What's the expected behavior**
POST valid JSON instead of `{ "string" }`
| 1.0 | POST requests works with a bad formatted JSON - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
POST instructions on the endpoint are unclear. (application/json is required, but sending a plain string as JSON works?)

**Steps to reproduce**
1. Check swagger "Try out" requests.

**What's the expected behavior**
POST valid JSON instead of `{ "string" }`
| priority | post requests works with a bad formatted json describe the bug post instructions on endpoint unclear application json required but sending plain string as a json works steps to reproduce check swagger try out requests what s the expected behavior post valid json instead of string | 1 |
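A minimal server-side guard for this would parse the body and reject anything that is not valid JSON, such as the swagger placeholder `{ "string" }`. This sketch is hypothetical and not Gladia's actual endpoint code:

```python
import json

def validate_json_body(raw: str):
    """Return (True, parsed) for a valid JSON body, or
    (False, error message) for malformed input that should get a 400."""
    try:
        return True, json.loads(raw)
    except json.JSONDecodeError as exc:
        return False, f"400 Bad Request: invalid JSON ({exc.msg})"
```

In a real framework this check usually lives in middleware or in the framework's own body parsing, so the handler never sees a malformed payload.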
170,683 | 6,469,178,989 | IssuesEvent | 2017-08-17 04:35:50 | pmem/issues | https://api.github.com/repos/pmem/issues | closed | test/rpmem_obc: rpmem_obc/TEST2 crashed (signal 6). | Exposure: Low OS: Linux Priority: 3 medium State: To be verified Type: Question | I have run make test and make sync-remote before running the test. And here is my testconfig.sh; I think there may be some problems. Firstly, I didn't define the NODE_ADDR.
What is the correct way to run the tests on remote nodes?
```
/nvml/src/test# cat testconfig.sh
NON_PMEM_FS_DIR=/tmp/tmp.nSMsKPILnS
PMEM_FS_DIR=/fs/pmem0
NODE[0]=127.0.0.1
NODE_WORKING_DIR[0]=/lkp/benchmarks/nvml/src/testremote
NODE_ADDR[0]=192.168.0.1
NODE[1]=127.0.0.1
NODE_WORKING_DIR[1]=/lkp/benchmarks/nvml/src/testremote
NODE[2]=127.0.0.1
NODE_WORKING_DIR[2]=/lkp/benchmarks/nvml/src/testremote
NODE[3]=127.0.0.1
NODE_WORKING_DIR[3]=/lkp/benchmarks/nvml/src/testremote
RPMEM_VALGRIND_ENABLED=y
PMEM_FS_DIR_FORCE_PMEM=1
```
- And the error log is shown below.
nvml/src/test# ./RUNTESTS rpmem_obc
rpmem_obc/TEST0: SETUP (check/none/debug)
rpmem_obc/TEST0: START: rpmem_obc
rpmem_obc/TEST0: PASS
rpmem_obc/TEST0: SETUP (check/none/nondebug)
rpmem_obc/TEST0: START: rpmem_obc
rpmem_obc/TEST0: PASS
rpmem_obc/TEST1: SETUP (check/none/debug)
rpmem_obc/TEST1: START: rpmem_obc
rpmem_obc/TEST1: PASS
rpmem_obc/TEST1: SETUP (check/none/nondebug)
rpmem_obc/TEST1: START: rpmem_obc
rpmem_obc/TEST1: PASS
rpmem_obc/TEST2: SETUP (check/none/debug)
rpmem_obc/TEST2 crashed (signal 6).
node_0_err2.log below.
rpmem_obc/TEST2 node_0_err2.log {rpmem_obc_test_misc.c:154 client_monitor} rpmem_obc/TEST2: Error: usage: client_monitor <addr>[:<port>]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:198 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_0_err2.log
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:199 ut_sighandler} rpmem_obc/TEST2: Signal 6, backtrace:
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 0: ./rpmem_obc() [0x42357c]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 1: ./rpmem_obc() [0x4236bb]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 2: /lib/x86_64-linux-gnu/libc.so.6(+0x33030) [0x7f3a0e927030]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 3: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf) [0x7f3a0e926fcf]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 4: /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x7f3a0e9283fa]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 5: ./rpmem_obc() [0x420555]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 6: ./rpmem_obc() [0x404c21]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 7: ./rpmem_obc() [0x403bce]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 8: ./rpmem_obc() [0x403a5e]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 9: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) [0x7f3a0e9142b1]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 10: ./rpmem_obc() [0x4038ea]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:201 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_0_err2.log
node_0_out2.log below.
rpmem_obc/TEST2 node_0_out2.log rpmem_obc/TEST2: START: rpmem_obc
rpmem_obc/TEST2 node_0_out2.log ./rpmem_obc client_monitor
node_0_trace2.log below.
rpmem_obc/TEST2 node_0_trace2.log {rpmem_obc_test.c:80 main} rpmem_obc/TEST2: START: rpmem_obc
rpmem_obc/TEST2 node_0_trace2.log ./rpmem_obc client_monitor
rpmem_obc/TEST2 node_0_trace2.log {rpmem_obc_test_misc.c:154 client_monitor} rpmem_obc/TEST2: Error: usage: client_monitor <addr>[:<port>]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:198 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_0_trace2.log
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:199 ut_sighandler} rpmem_obc/TEST2: Signal 6, backtrace:
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 0: ./rpmem_obc() [0x42357c]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 1: ./rpmem_obc() [0x4236bb]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 2: /lib/x86_64-linux-gnu/libc.so.6(+0x33030) [0x7f3a0e927030]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 3: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf) [0x7f3a0e926fcf]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 4: /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x7f3a0e9283fa]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 5: ./rpmem_obc() [0x420555]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 6: ./rpmem_obc() [0x404c21]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 7: ./rpmem_obc() [0x403bce]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 8: ./rpmem_obc() [0x403a5e]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 9: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) [0x7f3a0e9142b1]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 10: ./rpmem_obc() [0x4038ea]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:201 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_0_trace2.log
node_1_err2.log below.
rpmem_obc/TEST2 node_1_err2.log {rpmem_obc_test_misc.c:154 client_monitor} rpmem_obc/TEST2: Error: usage: client_monitor <addr>[:<port>]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:198 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_1_err2.log
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:199 ut_sighandler} rpmem_obc/TEST2: Signal 6, backtrace:
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 0: ./rpmem_obc() [0x42357c]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 1: ./rpmem_obc() [0x4236bb]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 2: /lib/x86_64-linux-gnu/libc.so.6(+0x33030) [0x7f3a0e927030]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 3: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf) [0x7f3a0e926fcf]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 4: /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x7f3a0e9283fa]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 5: ./rpmem_obc() [0x420555]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 6: ./rpmem_obc() [0x404c21]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 7: ./rpmem_obc() [0x403bce]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 8: ./rpmem_obc() [0x403a5e]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 9: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) [0x7f3a0e9142b1]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 10: ./rpmem_obc() [0x4038ea]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:201 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_1_err2.log
node_1_out2.log below.
rpmem_obc/TEST2 node_1_out2.log rpmem_obc/TEST2: START: rpmem_obc
rpmem_obc/TEST2 node_1_out2.log ./rpmem_obc client_monitor
node_1_rpmem2.log below.
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:265 out_init] pid 11119: program: /lkp/benchmarks/nvml/src/testremote/test_rpmem_obc/rpmem_obc
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:267 out_init] rpmem_obc version 0.0
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:268 out_init] src version SRCVERSION:
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:276 out_init] compiled with support for Valgrind pmemcheck
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:281 out_init] compiled with support for Valgrind helgrind
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:286 out_init] compiled with support for Valgrind memcheck
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:291 out_init] compiled with support for Valgrind drd
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <3> [mmap.c:91 util_mmap_init]
node_1_trace2.log below.
rpmem_obc/TEST2 node_1_trace2.log {rpmem_obc_test.c:80 main} rpmem_obc/TEST2: START: rpmem_obc
rpmem_obc/TEST2 node_1_trace2.log ./rpmem_obc client_monitor
rpmem_obc/TEST2 node_1_trace2.log {rpmem_obc_test_misc.c:154 client_monitor} rpmem_obc/TEST2: Error: usage: client_monitor <addr>[:<port>]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:198 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_1_trace2.log
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:199 ut_sighandler} rpmem_obc/TEST2: Signal 6, backtrace:
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 0: ./rpmem_obc() [0x42357c]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 1: ./rpmem_obc() [0x4236bb]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 2: /lib/x86_64-linux-gnu/libc.so.6(+0x33030) [0x7f3a0e927030]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 3: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf) [0x7f3a0e926fcf]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 4: /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x7f3a0e9283fa]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 5: ./rpmem_obc() [0x420555]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 6: ./rpmem_obc() [0x404c21]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 7: ./rpmem_obc() [0x403bce]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 8: ./rpmem_obc() [0x403a5e]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 9: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) [0x7f3a0e9142b1]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 10: ./rpmem_obc() [0x4038ea]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:201 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_1_trace2.log
rpmem_obc/TEST2: CLEAN (cleaning processes on remote nodes)
RUNTESTS: stopping: rpmem_obc/TEST2 failed, TEST=check FS=none BUILD=debug
| 1.0 | test/rpmem_obc: rpmem_obc/TEST2 crashed (signal 6). - I have run make test and make sync-remote before running the test. And here is my testconfig.sh; I think there may be some problems. Firstly, I didn't define the NODE_ADDR.
What is the correct way to run the tests on remote nodes?
```
/nvml/src/test# cat testconfig.sh
NON_PMEM_FS_DIR=/tmp/tmp.nSMsKPILnS
PMEM_FS_DIR=/fs/pmem0
NODE[0]=127.0.0.1
NODE_WORKING_DIR[0]=/lkp/benchmarks/nvml/src/testremote
NODE_ADDR[0]=192.168.0.1
NODE[1]=127.0.0.1
NODE_WORKING_DIR[1]=/lkp/benchmarks/nvml/src/testremote
NODE[2]=127.0.0.1
NODE_WORKING_DIR[2]=/lkp/benchmarks/nvml/src/testremote
NODE[3]=127.0.0.1
NODE_WORKING_DIR[3]=/lkp/benchmarks/nvml/src/testremote
RPMEM_VALGRIND_ENABLED=y
PMEM_FS_DIR_FORCE_PMEM=1
```
- And the error log is shown below.
nvml/src/test# ./RUNTESTS rpmem_obc
rpmem_obc/TEST0: SETUP (check/none/debug)
rpmem_obc/TEST0: START: rpmem_obc
rpmem_obc/TEST0: PASS
rpmem_obc/TEST0: SETUP (check/none/nondebug)
rpmem_obc/TEST0: START: rpmem_obc
rpmem_obc/TEST0: PASS
rpmem_obc/TEST1: SETUP (check/none/debug)
rpmem_obc/TEST1: START: rpmem_obc
rpmem_obc/TEST1: PASS
rpmem_obc/TEST1: SETUP (check/none/nondebug)
rpmem_obc/TEST1: START: rpmem_obc
rpmem_obc/TEST1: PASS
rpmem_obc/TEST2: SETUP (check/none/debug)
rpmem_obc/TEST2 crashed (signal 6).
node_0_err2.log below.
rpmem_obc/TEST2 node_0_err2.log {rpmem_obc_test_misc.c:154 client_monitor} rpmem_obc/TEST2: Error: usage: client_monitor <addr>[:<port>]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:198 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_0_err2.log
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:199 ut_sighandler} rpmem_obc/TEST2: Signal 6, backtrace:
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 0: ./rpmem_obc() [0x42357c]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 1: ./rpmem_obc() [0x4236bb]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 2: /lib/x86_64-linux-gnu/libc.so.6(+0x33030) [0x7f3a0e927030]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 3: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf) [0x7f3a0e926fcf]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 4: /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x7f3a0e9283fa]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 5: ./rpmem_obc() [0x420555]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 6: ./rpmem_obc() [0x404c21]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 7: ./rpmem_obc() [0x403bce]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 8: ./rpmem_obc() [0x403a5e]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 9: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) [0x7f3a0e9142b1]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 10: ./rpmem_obc() [0x4038ea]
rpmem_obc/TEST2 node_0_err2.log {ut_backtrace.c:201 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_0_err2.log
node_0_out2.log below.
rpmem_obc/TEST2 node_0_out2.log rpmem_obc/TEST2: START: rpmem_obc
rpmem_obc/TEST2 node_0_out2.log ./rpmem_obc client_monitor
node_0_trace2.log below.
rpmem_obc/TEST2 node_0_trace2.log {rpmem_obc_test.c:80 main} rpmem_obc/TEST2: START: rpmem_obc
rpmem_obc/TEST2 node_0_trace2.log ./rpmem_obc client_monitor
rpmem_obc/TEST2 node_0_trace2.log {rpmem_obc_test_misc.c:154 client_monitor} rpmem_obc/TEST2: Error: usage: client_monitor <addr>[:<port>]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:198 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_0_trace2.log
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:199 ut_sighandler} rpmem_obc/TEST2: Signal 6, backtrace:
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 0: ./rpmem_obc() [0x42357c]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 1: ./rpmem_obc() [0x4236bb]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 2: /lib/x86_64-linux-gnu/libc.so.6(+0x33030) [0x7f3a0e927030]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 3: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf) [0x7f3a0e926fcf]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 4: /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x7f3a0e9283fa]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 5: ./rpmem_obc() [0x420555]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 6: ./rpmem_obc() [0x404c21]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 7: ./rpmem_obc() [0x403bce]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 8: ./rpmem_obc() [0x403a5e]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 9: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) [0x7f3a0e9142b1]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 10: ./rpmem_obc() [0x4038ea]
rpmem_obc/TEST2 node_0_trace2.log {ut_backtrace.c:201 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_0_trace2.log
node_1_err2.log below.
rpmem_obc/TEST2 node_1_err2.log {rpmem_obc_test_misc.c:154 client_monitor} rpmem_obc/TEST2: Error: usage: client_monitor <addr>[:<port>]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:198 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_1_err2.log
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:199 ut_sighandler} rpmem_obc/TEST2: Signal 6, backtrace:
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 0: ./rpmem_obc() [0x42357c]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 1: ./rpmem_obc() [0x4236bb]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 2: /lib/x86_64-linux-gnu/libc.so.6(+0x33030) [0x7f3a0e927030]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 3: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf) [0x7f3a0e926fcf]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 4: /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x7f3a0e9283fa]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 5: ./rpmem_obc() [0x420555]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 6: ./rpmem_obc() [0x404c21]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 7: ./rpmem_obc() [0x403bce]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 8: ./rpmem_obc() [0x403a5e]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 9: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) [0x7f3a0e9142b1]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 10: ./rpmem_obc() [0x4038ea]
rpmem_obc/TEST2 node_1_err2.log {ut_backtrace.c:201 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_1_err2.log
node_1_out2.log below.
rpmem_obc/TEST2 node_1_out2.log rpmem_obc/TEST2: START: rpmem_obc
rpmem_obc/TEST2 node_1_out2.log ./rpmem_obc client_monitor
node_1_rpmem2.log below.
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:265 out_init] pid 11119: program: /lkp/benchmarks/nvml/src/testremote/test_rpmem_obc/rpmem_obc
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:267 out_init] rpmem_obc version 0.0
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:268 out_init] src version SRCVERSION:
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:276 out_init] compiled with support for Valgrind pmemcheck
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:281 out_init] compiled with support for Valgrind helgrind
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:286 out_init] compiled with support for Valgrind memcheck
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <1> [out.c:291 out_init] compiled with support for Valgrind drd
rpmem_obc/TEST2 node_1_rpmem2.log <rpmem_obc>: <3> [mmap.c:91 util_mmap_init]
node_1_trace2.log below.
rpmem_obc/TEST2 node_1_trace2.log {rpmem_obc_test.c:80 main} rpmem_obc/TEST2: START: rpmem_obc
rpmem_obc/TEST2 node_1_trace2.log ./rpmem_obc client_monitor
rpmem_obc/TEST2 node_1_trace2.log {rpmem_obc_test_misc.c:154 client_monitor} rpmem_obc/TEST2: Error: usage: client_monitor <addr>[:<port>]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:198 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_1_trace2.log
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:199 ut_sighandler} rpmem_obc/TEST2: Signal 6, backtrace:
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 0: ./rpmem_obc() [0x42357c]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 1: ./rpmem_obc() [0x4236bb]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 2: /lib/x86_64-linux-gnu/libc.so.6(+0x33030) [0x7f3a0e927030]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 3: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf) [0x7f3a0e926fcf]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 4: /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x7f3a0e9283fa]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 5: ./rpmem_obc() [0x420555]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 6: ./rpmem_obc() [0x404c21]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 7: ./rpmem_obc() [0x403bce]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 8: ./rpmem_obc() [0x403a5e]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 9: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1) [0x7f3a0e9142b1]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:149 ut_dump_backtrace} rpmem_obc/TEST2: 10: ./rpmem_obc() [0x4038ea]
rpmem_obc/TEST2 node_1_trace2.log {ut_backtrace.c:201 ut_sighandler} rpmem_obc/TEST2:
rpmem_obc/TEST2 node_1_trace2.log
rpmem_obc/TEST2: CLEAN (cleaning processes on remote nodes)
RUNTESTS: stopping: rpmem_obc/TEST2 failed, TEST=check FS=none BUILD=debug
| priority | test rpmem obc rpmem obc crashed signal i have make test and make sync remote before run the test and here is my testconfig sh i think there may be some problems firstly i didn t defined the node addr what is the correct way to run the tests about remote nvml src test cat testconfig sh non pmem fs dir tmp tmp nsmskpilns pmem fs dir fs node node working dir lkp benchmarks nvml src testremote node addr node node working dir lkp benchmarks nvml src testremote node node working dir lkp benchmarks nvml src testremote node node working dir lkp benchmarks nvml src testremote rpmem valgrind enabled y pmem fs dir force pmem and error log show below nvml src test runtests rpmem obc rpmem obc setup check none debug rpmem obc start rpmem obc rpmem obc pass rpmem obc setup check none nondebug rpmem obc start rpmem obc rpmem obc pass rpmem obc setup check none debug rpmem obc start rpmem obc rpmem obc pass rpmem obc setup check none nondebug rpmem obc start rpmem obc rpmem obc pass rpmem obc setup check none debug rpmem obc crashed signal node log below rpmem obc node log rpmem obc test misc c client monitor rpmem obc error usage client monitor rpmem obc node log ut backtrace c ut sighandler rpmem obc rpmem obc node log rpmem obc node log ut backtrace c ut sighandler rpmem obc signal backtrace rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so gsignal rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so abort rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem 
obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so libc start main rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut sighandler rpmem obc rpmem obc node log node log below rpmem obc node log rpmem obc start rpmem obc rpmem obc node log rpmem obc client monitor node log below rpmem obc node log rpmem obc test c main rpmem obc start rpmem obc rpmem obc node log rpmem obc client monitor rpmem obc node log rpmem obc test misc c client monitor rpmem obc error usage client monitor rpmem obc node log ut backtrace c ut sighandler rpmem obc rpmem obc node log rpmem obc node log ut backtrace c ut sighandler rpmem obc signal backtrace rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so gsignal rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so abort rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so libc start main rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut sighandler rpmem obc rpmem obc node log node log below rpmem obc node log rpmem obc test misc c client monitor rpmem obc error usage client monitor rpmem obc node log ut backtrace c ut sighandler rpmem obc rpmem obc node log rpmem obc node log ut backtrace c ut sighandler rpmem obc signal backtrace rpmem obc node log ut backtrace c ut dump backtrace 
rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so gsignal rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so abort rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so libc start main rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut sighandler rpmem obc rpmem obc node log node log below rpmem obc node log rpmem obc start rpmem obc rpmem obc node log rpmem obc client monitor node log below rpmem obc node log pid program lkp benchmarks nvml src testremote test rpmem obc rpmem obc rpmem obc node log rpmem obc version rpmem obc node log src version srcversion rpmem obc node log compiled with support for valgrind pmemcheck rpmem obc node log compiled with support for valgrind helgrind rpmem obc node log compiled with support for valgrind memcheck rpmem obc node log compiled with support for valgrind drd rpmem obc node log node log below rpmem obc node log rpmem obc test c main rpmem obc start rpmem obc rpmem obc node log rpmem obc client monitor rpmem obc node log rpmem obc test misc c client monitor rpmem obc error usage client monitor rpmem obc node log ut backtrace c ut sighandler rpmem obc rpmem obc node log rpmem obc node log ut backtrace c ut sighandler rpmem obc signal backtrace rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c 
ut dump backtrace rpmem obc lib linux gnu libc so rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so gsignal rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so abort rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut dump backtrace rpmem obc lib linux gnu libc so libc start main rpmem obc node log ut backtrace c ut dump backtrace rpmem obc rpmem obc rpmem obc node log ut backtrace c ut sighandler rpmem obc rpmem obc node log rpmem obc clean cleaning processes on remote nodes runtests stopping rpmem obc failed test check fs none build debug | 1 |
828,916 | 31,847,015,757 | IssuesEvent | 2023-09-14 20:47:57 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] Latency increases when yb_fetch_size_limit is increased from 1MB to 10MB | kind/enhancement area/ysql priority/medium | Jira Link: [DB-6939](https://yugabyte.atlassian.net/browse/DB-6939)
### Description
**ISSUE1**
To fetch 1M rows with a prefetch size of:
1. 1MB resulted in 9 RPC requests with a latency of 279.99ms
2. Whereas with a 10MB prefetch size, only 1 RPC request was required, with a latency of 444.99ms.
3. Even though the 10MB prefetch size reduced the RPC count from 9 (for 1MB) to 1, the Storage Read Execution Time for the single 10MB RPC is higher than for the 9 RPC requests of 1MB.
**ISSUE2**
A single RPC is required to fetch 1M rows with the prefetch size set to either 10MB or 1GB, but the storage read latency for the 10MB prefetch buffer is 444ms, whereas for 1GB it is 502ms.
<img width="1327" alt="image" src="https://github.com/yugabyte/yugabyte-db/assets/85676531/fd5d5188-eed3-4e83-966c-0d1cf8fe85d6">
Microbenchmark reports:
Scan Regular tables | http://perf.dev.yugabyte.com/report/view/W3sibmFtZSI6IllCUl9EZWZhdWx0IiwidGVzdF9pZCI6IjE4OTk4MDIiLCJpc0Jhc2VsaW5lIjp0cnVlfSx7Im5hbWUiOiJQcmVmZXRjaF8xTSIsInRlc3RfaWQiOiIxOTAxMDAyIiwiaXNCYXNlbGluZSI6ZmFsc2V9LHsibmFtZSI6IlByZWZldGNoXzEwTSIsInRlc3RfaWQiOiIxOTAxMjAyIiwiaXNCYXNlbGluZSI6ZmFsc2V9XQ==
-- | --
Scan Colocated tables | http://perf.dev.yugabyte.com/report/view/W3sibmFtZSI6IllCQ19ERUZBVUxUIiwidGVzdF9pZCI6IjE4OTkxMDIiLCJpc0Jhc2VsaW5lIjp0cnVlfSx7Im5hbWUiOiJQcmVmZXRjaF8xTSIsInRlc3RfaWQiOiIxODc2NDAyIiwiaXNCYXNlbGluZSI6ZmFsc2V9LHsibmFtZSI6IlByZWZldGNoXzEwTSIsInRlc3RfaWQiOiIxOTAxNDAyIiwiaXNCYXNlbGluZSI6ZmFsc2V9XQ==
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-6939]: https://yugabyte.atlassian.net/browse/DB-6939?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [YSQL] Latency increases when yb_fetch_size_limit is increased from 1MB to 10MB - Jira Link: [DB-6939](https://yugabyte.atlassian.net/browse/DB-6939)
### Description
**ISSUE1**
To fetch 1M rows with a prefetch size of:
1. 1MB resulted in 9 RPC requests with a latency of 279.99ms
2. Whereas with a 10MB prefetch size, only 1 RPC request was required, with a latency of 444.99ms.
3. Even though the 10MB prefetch size reduced the RPC count from 9 (for 1MB) to 1, the Storage Read Execution Time for the single 10MB RPC is higher than for the 9 RPC requests of 1MB.
**ISSUE2**
A single RPC is required to fetch 1M rows with the prefetch size set to either 10MB or 1GB, but the storage read latency for the 10MB prefetch buffer is 444ms, whereas for 1GB it is 502ms.
<img width="1327" alt="image" src="https://github.com/yugabyte/yugabyte-db/assets/85676531/fd5d5188-eed3-4e83-966c-0d1cf8fe85d6">
Microbenchmark reports:
Scan Regular tables | http://perf.dev.yugabyte.com/report/view/W3sibmFtZSI6IllCUl9EZWZhdWx0IiwidGVzdF9pZCI6IjE4OTk4MDIiLCJpc0Jhc2VsaW5lIjp0cnVlfSx7Im5hbWUiOiJQcmVmZXRjaF8xTSIsInRlc3RfaWQiOiIxOTAxMDAyIiwiaXNCYXNlbGluZSI6ZmFsc2V9LHsibmFtZSI6IlByZWZldGNoXzEwTSIsInRlc3RfaWQiOiIxOTAxMjAyIiwiaXNCYXNlbGluZSI6ZmFsc2V9XQ==
-- | --
Scan Colocated tables | http://perf.dev.yugabyte.com/report/view/W3sibmFtZSI6IllCQ19ERUZBVUxUIiwidGVzdF9pZCI6IjE4OTkxMDIiLCJpc0Jhc2VsaW5lIjp0cnVlfSx7Im5hbWUiOiJQcmVmZXRjaF8xTSIsInRlc3RfaWQiOiIxODc2NDAyIiwiaXNCYXNlbGluZSI6ZmFsc2V9LHsibmFtZSI6IlByZWZldGNoXzEwTSIsInRlc3RfaWQiOiIxOTAxNDAyIiwiaXNCYXNlbGluZSI6ZmFsc2V9XQ==
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-6939]: https://yugabyte.atlassian.net/browse/DB-6939?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | latency increases when yb fetch size limit is increased from to jira link description to fetch rows with with prefetch size of resulted in rpc requests with a latency of where as with prefetch size only rpc request was required with latency of even though with prefetch size resulted in reduced rpcs to from for prefetch size the storage read execution time for single rpc of is higher than rpc requests of single rpcs is required to fetch rows with prefetch size set to either or but the storage read latency for prefetch buffer is where as for is img width alt image src microbenchmark reports scan regular tables scan colocated tables warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
504,351 | 14,617,039,954 | IssuesEvent | 2020-12-22 14:11:52 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | RVL-1396 - Structures not appearing in Web UI | Priority: Medium | **Thailand Preview Environment**
1. Log in on Android as bvbd_mhealth/Test123
2. Select operational area BVBD training site 3
3. Select plan: A1 Thailand test site BVBD 3 2020-12-21 vr2
4. Add a new structure and carry out family registration
5. Save form and sync
6. Check Web UI that structure appears
Screenshots below from mobile app and Web UI monitoring tab
Web UI monitoring tab:

Mobile Device:

| 1.0 | RVL-1396 - Structures not appearing in Web UI - **Thailand Preview Environment**
1. Log in on Android as bvbd_mhealth/Test123
2. Select operational area BVBD training site 3
3. Select plan: A1 Thailand test site BVBD 3 2020-12-21 vr2
4. Add a new structure and carry out family registration
5. Save form and sync
6. Check Web UI that structure appears
Screenshots below from mobile app and Web UI monitoring tab
Web UI monitoring tab:

Mobile Device:

| priority | rvl structures not appearing in web ui thailand preview environment log in on android as bvbd mhealth select operational area bvbd training site select plan thailand test site bvbd add a new structure and carry out family registration save form and sync check web ui that structure appears screenshots below from mobile app and web ui monitoring tab web ui monitoring tab mobile device | 1 |
186,898 | 6,743,538,710 | IssuesEvent | 2017-10-20 12:33:00 | hassio-addons/addon-terminal | https://api.github.com/repos/hassio-addons/addon-terminal | closed | Upgrade 'hassio' CLI tool to the latest version | Accepted Enhancement Medium Priority RFC | ## Problem/Motivation
The version of the Hass.io CLI tools (`hassio`) is currently outdated.
This tool needs to be upgraded to the latest version. | 1.0 | Upgrade 'hassio' CLI tool to the latest version - ## Problem/Motivation
The version of the Hass.io CLI tools (`hassio`) is currently outdated.
This tool needs to be upgraded to the latest version. | priority | upgrade hassio cli tool to the latest version problem motivation the version of the hass io cli tools hassio is currently outdated this tool needs to be upgraded to the latest version | 1 |
509,568 | 14,739,797,256 | IssuesEvent | 2021-01-07 07:56:58 | MikeVedsted/JoinMe | https://api.github.com/repos/MikeVedsted/JoinMe | closed | [FEAT] Add Update operation for comments on back end | Priority: Medium :zap: Status: Done :heavy_check_mark: Type: Enhancement :rocket: | 💡 I would really like to solve or include
Clearly and concisely describe the problem you are trying to solve.
Currently, we do not have any update operations for comments on the backend.
👶 How would a user describe this?
Describe how users are affected by statements users might make or a user story.
I made a typo in my comment. Can I update that somewhere?
🏆 My dream solution would be
Describe the best possible scenario of this being implemented.
Route, service and handler created for comment/update
🚀 I'm ready for take off
Before submitting, please mark if you:
Checked that this feature doesn't already exists
Checked that a feature request doesn't already exists
Went through the user flow, and understand the impact | 1.0 | [FEAT] Add Update operation for comments on back end - 💡 I would really like to solve or include
Clearly and concisely describe the problem you are trying to solve.
Currently, we do not have any update operations for comments on the backend.
👶 How would a user describe this?
Describe how users are affected by statements users might make or a user story.
I made a typo in my comment. Can I update that somewhere?
🏆 My dream solution would be
Describe the best possible scenario of this being implemented.
Route, service and handler created for comment/update
🚀 I'm ready for take off
Before submitting, please mark if you:
Checked that this feature doesn't already exists
Checked that a feature request doesn't already exists
Went through the user flow, and understand the impact | priority | add update operation for comments on back end 💡 i would really like to solve or include clearly and concisely describe the problem you are trying to solve currently we do no have any update operations for comments on the backend 👶 how would a user describe this describe how users are affected by statements users might make or a user story i made a typo in my comment can i update that somewhere 🏆 my dream solution would be describe the best possible scenario of this being implemented route service and handler created for comment update 🚀 i m ready for take off before submitting please mark if you checked that this feature doesn t already exists checked that a feature request doesn t already exists went through the user flow and understand the impact | 1 |
708,144 | 24,331,769,187 | IssuesEvent | 2022-09-30 20:08:20 | Vatsim-Scandinavia/controlcenter | https://api.github.com/repos/Vatsim-Scandinavia/controlcenter | closed | TypeError at WarningMail | back-end bug medium priority | TypeError /app/Mail/WarningMail.php in App\Mail\WarningMail::__construct
App\Mail\WarningMail::__construct(): Argument #2 ($user) must be of type App\Models\User, App\Models\Handover given, called in /var/www/html/app/Notifications/InactivityNotification.php on line 55
https://sentry.io/organizations/vatsim-scandinavia/issues/3559123648/?project=6130472&query=is%3Aunresolved

| 1.0 | TypeError at WarningMail - TypeError /app/Mail/WarningMail.php in App\Mail\WarningMail::__construct
App\Mail\WarningMail::__construct(): Argument #2 ($user) must be of type App\Models\User, App\Models\Handover given, called in /var/www/html/app/Notifications/InactivityNotification.php on line 55
https://sentry.io/organizations/vatsim-scandinavia/issues/3559123648/?project=6130472&query=is%3Aunresolved

| priority | typeerror at warningmail typeerror app mail warningmail php in app mail warningmail construct app mail warningmail construct argument user must be of type app models user app models handover given called in var www html app notifications inactivitynotification php on line | 1 |
766,524 | 26,886,908,195 | IssuesEvent | 2023-02-06 04:36:40 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | opened | Something needs doing about ruin-cargo-salvage | Priority: 2-Before Release Issue: Needs Cleanup Difficulty: 2-Medium | This takes like 10 ticks to load and is going to lag the entire server and it's way too big. Long-term it should be killed unless we get async map loading (doubtful). | 1.0 | Something needs doing about ruin-cargo-salvage - This takes like 10 ticks to load and is going to lag the entire server and it's way too big. Long-term it should be killed unless we get async map loading (doubtful). | priority | something needs doing about ruin cargo salvage this takes like ticks to load and is going to lag the entire server and it s way too big long term it should be killed unless we get async map loading doubtful | 1 |
609,440 | 18,873,219,136 | IssuesEvent | 2021-11-13 15:11:33 | ApplETS/Notre-Dame | https://api.github.com/repos/ApplETS/Notre-Dame | closed | Unify the refresh behaviour | bug good first issue platform: ios platform: android feature: Schedule priority: medium ready to develop | **Describe the issue**
```When you scroll down to refresh the schedule, there is no "throbber" like the one in the profile.```
**Screenshot**

**Device Infos**
- **Version:** 4.2.3
- **Connectivity:** ConnectivityResult.wifi
- **Build number:** 1631981526
- **Platform operating system:** ios
- **Platform operating system version:** 14.8
| 1.0 | Unify the refresh behaviour - **Describe the issue**
```When you scroll down to refresh the schedule, there is no "throbber" like the one in the profile.```
**Screenshot**

**Device Infos**
- **Version:** 4.2.3
- **Connectivity:** ConnectivityResult.wifi
- **Build number:** 1631981526
- **Platform operating system:** ios
- **Platform operating system version:** 14.8
| priority | unify the refresh behaviour describe the issue quand ont scroll vers le bas pour actualiser l’horaire il n’y a pas de « throbber » tel que dans profil screenshot device infos version connectivity connectivityresult wifi build number platform operating system ios platform operating system version | 1 |
385,155 | 11,414,107,239 | IssuesEvent | 2020-02-02 00:05:28 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | reopened | website footer covers task three-dots menu (z-index problem?) | priority: medium section: Task Page status: issue: in progress type: medium level coding | As ieahleen reports in the Report a Bug guild:
"It is not possible to access the three-dots-menu of the tasks at the bottom of the screen, the menu appear under the footer:"

It might be because the z-index of the footer is too large.
You can replicate this with a task that has no note. (You don't see the bug with a task that has a note because the note will push the bottom of the task down far enough that the three-dots menu doesn't reach the site footer.) | 1.0 | website footer covers task three-dots menu (z-index problem?) - As ieahleen reports in the Report a Bug guild:
"It is not possible to access the three-dots-menu of the tasks at the bottom of the screen, the menu appear under the footer:"

It might be because the z-index of the footer is too large.
You can replicate this with a task that has no note. (You don't see the bug with a task that has a note because the note will push the bottom of the task down far enough that the three-dots menu doesn't reach the site footer.) | priority | website footer covers task three dots menu z index problem as ieahleen reports in the report a bug guild it is not possible to access the three dots menu of the tasks at the bottom of the screen the menu appear under the footer it might be because the z index of the footer is too large you can replicate this with a task that has no note you don t see the bug with a task that has a note because the note will push the bottom of the task down far enough that the three dots menu doesn t reach the site footer | 1 |
560,311 | 16,593,294,900 | IssuesEvent | 2021-06-01 10:21:01 | gnosis/ido-ux | https://api.github.com/repos/gnosis/ido-ux | closed | [auction details] no autorefresh page when auction ends and user stays on auction details page | QA passed bug medium priority | 1. open app
2. start an auction

3. place some orders
4. open auction details page and wait for the end
Result: 
when refreshed, this page looks like this: 
+ check Your orders section
| 1.0 | [auction details] no autorefresh page when auction ends and user stays on auction details page - 1. open app
2. start an auction

3. place some orders
4. open auction details page and wait for the end
Result: 
when refreshed, this page looks like this: 
+ check Your orders section
| priority | no autorefresh page when auction ends and user stays on auction details page open app start an auction place some orders open auction details page and wait for the end result when refresh this page looks check your orders section | 1 |
246,153 | 7,893,196,213 | IssuesEvent | 2018-06-28 17:14:41 | visit-dav/issues-test | https://api.github.com/repos/visit-dav/issues-test | closed | Add xml results to Pick | Expected Use: 3 - Occasional Feature Impact: 3 - Medium OS: All Priority: Normal Support Group: Any | People are always asking for an easier way to parse pick output.
Add xml results like those available for queries.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kathleen Biagas
Original creation: 09/09/2011 12:27 pm
Original update: 10/27/2011 08:58 pm
Ticket number: 838 | 1.0 | Add xml results to Pick - People are always asking for an easier way to parse pick output.
Add xml results like those available for queries.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kathleen Biagas
Original creation: 09/09/2011 12:27 pm
Original update: 10/27/2011 08:58 pm
Ticket number: 838 | priority | add xml results to pick people are always asking for an easier way to parse pick output add xml results like those available for queries redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author kathleen biagas original creation pm original update pm ticket number | 1 |
698,639 | 23,987,706,830 | IssuesEvent | 2022-09-13 20:43:26 | responsible-ai-collaborative/aiid | https://api.github.com/repos/responsible-ai-collaborative/aiid | closed | Improve OpenGraph meta tags | Priority:Medium | Open graph meta tags need an update/standardization. Sharing incidentdatabase.ai links could do better:
Facebook:
<img width="1038" alt="image" src="https://user-images.githubusercontent.com/1172479/172948609-6e59a99d-243d-416e-ac34-65bbe632512e.png">
LinkedIn:
<img width="570" alt="image" src="https://user-images.githubusercontent.com/1172479/172948232-098f1a50-0493-4332-9c3a-fc21dc135229.png">
Shares could show the incident title, a description, a picture, etc.
| 1.0 | Improve OpenGraph meta tags - Open graph meta tags need an update/standardization. Sharing incidentdatabase.ai links could do better:
Facebook:
<img width="1038" alt="image" src="https://user-images.githubusercontent.com/1172479/172948609-6e59a99d-243d-416e-ac34-65bbe632512e.png">
LinkedIn:
<img width="570" alt="image" src="https://user-images.githubusercontent.com/1172479/172948232-098f1a50-0493-4332-9c3a-fc21dc135229.png">
Shares could show the incident title, a description, a picture, etc.
| priority | improve opengraph meta tags open graph meta tags need an update standardization sharing incidentdatabase ai links could do better facebook img width alt image src linked in img width alt image src shares could show the incident title a description a picture etc | 1 |
129,278 | 5,093,448,296 | IssuesEvent | 2017-01-03 06:07:21 | CS3216-Bubble/bubble-frontend-deprecated | https://api.github.com/repos/CS3216-Bubble/bubble-frontend-deprecated | closed | Tutorial card scroll view | counsel-ui feature medium-priority onboarding-view | A scroll view showing introductory, tutorial and feature cards to onboard counsellors on the functionalities and usage of the application.
| 1.0 | Tutorial card scroll view - A scroll view showing introductory, tutorial and feature cards to onboard counsellors on the functionalities and usage of the application.
| priority | tutorial card scroll view a scroll view showing introductory tutorial and feature cards to onboard counsellors on the functionalities and usage of the application | 1 |
677,214 | 23,155,423,972 | IssuesEvent | 2022-07-29 12:34:15 | trustwallet/wallet-core | https://api.github.com/repos/trustwallet/wallet-core | opened | [Wasm] Dev console | enhancement priority:medium size:medium | The idea is to demonstrate the usage of Wasm and simplify adding mainnet tx to wallet core; the current C++ `walletconsole` is not very flexible to modify and extend with new features (networking, RPC, etc.)
TODO
- [ ] Wallet Core public API playground
- [ ] Networking layer (leverage node packages)
- [ ] Define blockchain RPC using Open API spec
- [ ] Fetch data from node and send tx | 1.0 | [Wasm] Dev console - The idea is to demonstrate the usage of Wasm and simplify adding mainnet tx to wallet core; the current C++ `walletconsole` is not very flexible to modify and extend with new features (networking, RPC, etc.)
TODO
- [ ] Wallet Core public API playground
- [ ] Networking layer (leverage node packages)
- [ ] Define blockchain RPC using Open API spec
- [ ] Fetch data from node and send tx | priority | dev console the idea is to demonstrate the usage of wasm simplify adding mainnet tx to wallet core current c walletconsole is not very flexible to modify and add new features networking rpc etc todo wallet core public api playground networking layer leverage node packages define blockchain rpc using open api spec fetch data from node and send tx | 1 |
540,094 | 15,800,649,686 | IssuesEvent | 2021-04-03 00:39:09 | musescore/MuseScore | https://api.github.com/repos/musescore/MuseScore | opened | [MU4 Issue] Re-ordering instruments affects score but not the parts | Medium Priority | **Describe the bug**
Re-ordering instruments affects score but not the parts
After reordering instruments in the Instruments panel, their order stays unchanged in the Parts dialog
**To Reproduce**
Steps to reproduce the behavior:
1. Create a score with at least 2 instruments
2. Change instruments order in Instruments panel
3. Open parts dialog
4. Compare instruments order in Instruments panel and Parts dialog
**Expected behavior**
Part ordering should change to match the instrument order
**Screenshots**

**Desktop (please complete the following information):**
Windows 10
Linux Ubuntu
**Additional context**
Add any other context about the problem here.
| 1.0 | [MU4 Issue] Re-ordering instruments affects score but not the parts - **Describe the bug**
Re-ordering instruments affects score but not the parts
After reordering instruments in the Instruments panel, their order stays unchanged in the Parts dialog
**To Reproduce**
Steps to reproduce the behavior:
1. Create a score with at least 2 instruments
2. Change instruments order in Instruments panel
3. Open parts dialog
4. Compare instruments order in Instruments panel and Parts dialog
**Expected behavior**
Part ordering should change to match the instrument order
**Screenshots**

**Desktop (please complete the following information):**
Windows 10
Linux Ubuntu
**Additional context**
Add any other context about the problem here.
| priority | re ordering instruments affects score but not the parts describe the bug re ordering instruments affects score but not the parts after reordering instruments in the instruments panel their order stays unchanged in the parts dialog to reproduce steps to reproduce the behavior create a score with at least instruments change instruments order in instruments panel open parts dialog compare instruments order in instruments panel and parts dialog expected behavior part ordering should change to match the instrument order screenshots desktop please complete the following information windows linux ubuntu additional context add any other context about the problem here | 1 |
611,607 | 18,959,496,699 | IssuesEvent | 2021-11-19 01:41:04 | airshipit/airshipctl | https://api.github.com/repos/airshipit/airshipctl | closed | Generic Container Timeout support | enhancement 1-Core priority/medium size l | As laid out in #533, timeouts are currently ignored by the executors. To begin providing this support, this issue asks the following:
1) Update the Airshipctl API interface to add a variable to pass a timeout value (likely seconds).
2) Update the Generic Container framework to accept the timeout variable & then inject it into the container to prompt the operation to finish gracefully. At this point in time it will not force kill the process. | 1.0 | Generic Container Timeout support - As laid out in #533, timeouts are currently ignored by the executors. To begin providing this support, this issue asks the following:
1) Update the Airshipctl API interface to add a variable to pass a timeout value (likely seconds).
2) Update the Generic Container framework to accept the timeout variable & then inject it into the container to prompt the operation to finish gracefully. At this point in time it will not force kill the process. | priority | generic container timeout support as laid out in timeouts are currently ignored by the executors to begin providing this support this issue asks the following update the airshipctl api interface to add a variable to pass a timeout value likely seconds update the generic container framework to accept the timeout variable then inject it into the container to prompt the operation to finish gracefully at this point in time it will not force kill the process | 1 |
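The graceful-timeout behaviour requested above (prompt the operation to finish, do not force-kill it) can be sketched in Python. The `run_with_timeout` helper and its SIGTERM-then-wait flow are illustrative assumptions, not airshipctl's actual implementation:

```python
import signal
import subprocess


def run_with_timeout(cmd, timeout_seconds):
    """Run a container-like command; on timeout, ask it to finish gracefully.

    Instead of force-killing (SIGKILL), send SIGTERM so the process can clean
    up, then wait for it to exit. Returns the process's return code.
    """
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=timeout_seconds)
    except subprocess.TimeoutExpired:
        proc.send_signal(signal.SIGTERM)  # prompt a graceful shutdown
        return proc.wait()                # wait for the graceful exit
```

A caller would pass the timeout as a plain number of seconds, matching the variable the issue asks to thread through the API.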
23,957 | 2,665,041,328 | IssuesEvent | 2015-03-20 17:58:13 | QuiteRSS/quiterss | https://api.github.com/repos/QuiteRSS/quiterss | closed | Remembering filter settings | 1 star bug imported Priority-Medium | _From [egor.shi...@gmail.com](https://code.google.com/u/102978245797933232604/) on December 28, 2011 09:20:14_
Remember the settings when the program is closed. Restore them when it is opened
_Original issue: http://code.google.com/p/quite-rss/issues/detail?id=1_ | 1.0 | Remembering filter settings - _From [egor.shi...@gmail.com](https://code.google.com/u/102978245797933232604/) on December 28, 2011 09:20:14_
Remember the settings when the program is closed. Restore them when it is opened
_Original issue: http://code.google.com/p/quite-rss/issues/detail?id=1_ | priority | remembering filter settings from on december remember the settings when the program is closed restore them when it is opened original issue | 1 |
306,495 | 9,395,618,317 | IssuesEvent | 2019-04-08 03:32:29 | OperationCode/resources_api | https://api.github.com/repos/OperationCode/resources_api | closed | Better error handling of out of bounds GET resources | Priority: Medium bug | As of the latest commit, if you try to GET a resource with an id that's higher than the highest in the database, you get an error that looks like this:
```json
{
"apiVersion": "1.0",
"errors": [
{
"code": "something-went-wrong"
}
],
"status": "ok"
}
```
I put this error in here as a "catch all" for errors that we aren't specifically catching. We need to specifically catch the out-of-bounds scenario I described.
Acceptance criteria:
When doing `GET /api/v1/resources/10000000`, a 404 error should be returned. | 1.0 | Better error handling of out of bounds GET resources - As of the latest commit, if you try to GET a resource with an id that's higher than the highest in the database, you get an error that looks like this:
```json
{
"apiVersion": "1.0",
"errors": [
{
"code": "something-went-wrong"
}
],
"status": "ok"
}
```
I put this error in here as a "catch all" for errors that we aren't specifically catching. We need to specifically catch the out-of-bounds scenario I described.
Acceptance criteria:
When doing `GET /api/v1/resources/10000000`, a 404 error should be returned. | priority | better error handling of out of bounds get resources as of the latest commit if you try to get a resource with an id that s higher than the highest in the database you get an error that looks like this json apiversion errors code something went wrong status ok i put this error in here as a catch all for errors that we aren t specifically catching we need to specifically catch the out of bounds scenario i described acceptance criteria when doing get api resources a error should be returned | 1 |
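The acceptance criterion above (a 404 for an id past the end of the table) can be sketched without the project's real Flask app. The in-memory store, the `not-found` error code, and the payload shape here are illustrative assumptions modeled on the response format shown in the issue:

```python
# Hypothetical in-memory resource store standing in for the database.
RESOURCES = {1: {"name": "Python docs"}, 2: {"name": "MDN"}}


def get_resource(resource_id):
    """Return (payload, http_status) for GET /api/v1/resources/<id>."""
    resource = RESOURCES.get(resource_id)
    if resource is None:
        # Specific out-of-bounds handling: a 404 with a clear error code,
        # rather than the catch-all "something-went-wrong" response.
        return {
            "apiVersion": "1.0",
            "errors": [{"code": "not-found",
                        "message": f"Resource {resource_id} does not exist"}],
            "status": "error",
        }, 404
    return {"apiVersion": "1.0", "data": resource, "status": "ok"}, 200
```

The design point is simply that the missing-id case is caught before the generic error handler ever sees it, so the client gets an actionable status code.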
751,773 | 26,257,559,089 | IssuesEvent | 2023-01-06 03:04:53 | Greenstand/treetracker-query-api | https://api.github.com/repos/Greenstand/treetracker-query-api | closed | Implement API: GET /gis/bounds | medium Express Node.js postgresql priority | To implement an endpoint:
https://github.com/Greenstand/treetracker-query-api/blob/1c7c606c3fd18554b3163fac29ad147ff3b43702/docs/api/spec/query-api.yaml#L578-L604
- Please write an e2e test to cover it.
- Please follow the guide in the 'readme' to follow our architecture on the server-side.
---
Some hints:
- Please read our readme for more information/guide/tutorial.
- Here is [an engineering book](https://greenstand.gitbook.io/engineering/) in Greenstand.
- To know more about our organization, visit our [website](https://greenstand.org).
- If you want to join the slack community (some resources need the community member's permission), please leave your email address.
| 1.0 | Implement API: GET /gis/bounds - To implement an endpoint:
https://github.com/Greenstand/treetracker-query-api/blob/1c7c606c3fd18554b3163fac29ad147ff3b43702/docs/api/spec/query-api.yaml#L578-L604
- Please write an e2e test to cover it.
- Please follow the guide in the 'readme' to follow our architecture on the server-side.
---
Some hints:
- Please read our readme for more information/guide/tutorial.
- Here is [an engineering book](https://greenstand.gitbook.io/engineering/) in Greenstand.
- To know more about our organization, visit our [website](https://greenstand.org).
- If you want to join the slack community (some resources need the community member's permission), please leave your email address.
| priority | implement api get gis bounds to implement an endpoint please write an test to cover it please follow the guide in the readme to follow our architecture on the server side some hints please read our readme for more information guide tutorial here is in greenstand to know more about our organization visit our if you want to join the slack community some resources need the community member s permission please leave your email address | 1 |
107,946 | 4,322,439,186 | IssuesEvent | 2016-07-25 14:06:31 | Metaswitch/clearwater-snmp-handlers | https://api.github.com/repos/Metaswitch/clearwater-snmp-handlers | closed | Over aggressive filtering of alarms in the Alarm Trap Sender | bug cat:diagnostics medium-priority | The alarm_filtered function defined within the Alarm Trap Sender will currently filter out an alarm if it has been raised with the same state within 5 seconds. This is to stop a flickering alarm causing a high volume of SNMP informs to be sent to an NMS.
This behaviour, though, is too aggressive and could cause the NMS to be out of sync: if an alarm is CLEARED, raised (in a non-CLEARED state) and then CLEARED again, the NMS would believe the alarm is raised, as the last CLEARED would have been filtered out. | 1.0 | Over aggressive filtering of alarms in the Alarm Trap Sender - The alarm_filtered function defined within the Alarm Trap Sender will currently filter out an alarm if it has been raised with the same state within 5 seconds. This is to stop a flickering alarm causing a high volume of SNMP informs to be sent to an NMS.
This behaviour, though, is too aggressive and could cause the NMS to be out of sync: if an alarm is CLEARED, raised (in a non-CLEARED state) and then CLEARED again, the NMS would believe the alarm is raised, as the last CLEARED would have been filtered out. | priority | over aggressive filtering of alarms in the alarm trap sender the alarm filtered function defined within the alarm trap sender will currently filter out an alarm if it has been raised with the same state within seconds this is to stop a flickering alarm causing a high volume of snmp informs to be sent to an nms this behaviour though is too aggressive and could cause the nms to be out of sync if an alarm is cleared raised in a non cleared state and then cleared again the nms would believe the alarm is raised as the last cleared would have been filtered out | 1 |
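A fix for the over-aggressive filtering described above would suppress only repeats of the *same* state inside the window, so a state change is never dropped and the NMS cannot desync. This `AlarmFilter` class is an illustrative sketch, not the clearwater-snmp-handlers code:

```python
import time


class AlarmFilter:
    """Debounce alarm traps without ever swallowing a state change.

    A trap is suppressed only when the alarm is re-sent in the same state it
    was last sent in, within the debounce window; a CLEARED following a raise
    (or vice versa) is always forwarded.
    """

    def __init__(self, window_seconds=5.0, clock=time.monotonic):
        self.window = window_seconds
        self.clock = clock                # injectable for testing
        self.last_sent = {}               # alarm_id -> (state, timestamp)

    def should_send(self, alarm_id, state):
        previous = self.last_sent.get(alarm_id)
        now = self.clock()
        if previous is not None:
            prev_state, prev_time = previous
            # Suppress only same-state duplicates inside the window.
            if prev_state == state and (now - prev_time) < self.window:
                return False
        self.last_sent[alarm_id] = (state, now)
        return True
```

Keying the filter on the last *sent* state, rather than on elapsed time alone, is what keeps the final CLEARED in the CLEARED/raised/CLEARED sequence from being dropped.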
166,770 | 6,311,246,460 | IssuesEvent | 2017-07-23 17:46:58 | OperationCode/operationcode_frontend | https://api.github.com/repos/OperationCode/operationcode_frontend | closed | Create /media page | in progress Needs: Copy/Content Priority: Medium Status: In Progress | # Feature
## Why is this feature being added?
As referenced in #216, this page will be available for media to see how we've been portrayed in articles, photos, and videos.
## What should your feature do?
- Implement a page called "Media" with the route being /press
- Link the page in a dropdown in the header.
- Page should display content described in linked wireframe, excepting the press releases section.
Wireframe: https://xd.adobe.com/view/c1e499b5-a59b-430a-8f61-25b9dbcef715/ | 1.0 | Create /media page - # Feature
## Why is this feature being added?
As referenced in #216, this page will be available for media to see how we've been portrayed in articles, photos, and videos.
## What should your feature do?
- Implement a page called "Media" with the route being /press
- Link the page in a dropdown in the header.
- Page should display content described in linked wireframe, excepting the press releases section.
Wireframe: https://xd.adobe.com/view/c1e499b5-a59b-430a-8f61-25b9dbcef715/ | priority | create media page feature why is this feature being added as referenced in this page will be available for media to see how we ve been portrayed in articles photos and videos what should your feature do implement a page called media with the route being press link the page in a dropdown in the header page should display content described in linked wireframe excepting the press releases section wireframe | 1 |
52,954 | 3,031,807,817 | IssuesEvent | 2015-08-05 02:23:59 | starteam/starcellbio_html | https://api.github.com/repos/starteam/starcellbio_html | closed | NSF Exercises 2 & 3 - Change label on small tabbed windows of the load gel page of the western blotting technique | Medium Priority | The small tabs on the develop page of the western blotting experimental technique should be labeled with 'anti' instead of 'a' and then be followed by the name of the primary antibody. For example 'anti-Protein Y' or 'anti-PGK1'. | 1.0 | NSF Exercises 2 & 3 - Change label on small tabbed windows of the load gel page of the western blotting technique - The small tabs on the develop page of the western blotting experimental technique should be labeled with 'anti' instead of 'a' and then be followed by the name of the primary antibody. For example 'anti-Protein Y' or 'anti-PGK1'. | priority | nsf exercises change label on small tabbed windows of the load gel page of the western blotting technique the small tabs on the develop page of the western blotting experimental technique should be labeled with anti instead of a and then be followed by the name of the primary antibody for example anti protein y or anti | 1 |
224,314 | 7,468,950,531 | IssuesEvent | 2018-04-02 20:50:59 | StephanoMehawej/Bot | https://api.github.com/repos/StephanoMehawej/Bot | closed | Add a confirmation input to UIConsole | Enhancement: Bot Priority: Medium Status: To Do | You can base yourself out of the TerminalConsole's `getConfirmation()` method that has been added in the `dev` branch. | 1.0 | Add a confirmation input to UIConsole - You can base yourself out of the TerminalConsole's `getConfirmation()` method that has been added in the `dev` branch. | priority | add a confirmation input to uiconsole you can base yourself out of the terminalconsole s getconfirmation method that has been added in the dev branch | 1 |
55,297 | 3,072,648,804 | IssuesEvent | 2015-08-19 17:58:42 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | Robotium consumes more time if we use conditional loops | bug imported invalid Priority-Medium | _From [alpatisu...@gmail.com](https://code.google.com/u/109862279532577856294/) on May 29, 2012 05:59:05_
What steps will reproduce the problem? 1.Robotium consumes more time if we use conditional loops
Eg#1:
test = solo.waitForText(text); //Because waitForText method loops even it returns true
while (test == true){
tFlag = true;
break;
} What is the expected output? What do you see instead? waitFortext should return true only when expected text found unless it should return false. otherwise it is keeps on looping even it returns at times. What version of the product are you using? On what operating system? Robotium 3.2.1 Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=271_ | 1.0 | Robotium consumes more time if we use conditional loops - _From [alpatisu...@gmail.com](https://code.google.com/u/109862279532577856294/) on May 29, 2012 05:59:05_
What steps will reproduce the problem? 1.Robotium consumes more time if we use conditional loops
Eg#1:
test = solo.waitForText(text); //Because waitForText method loops even it returns true
while (test == true){
tFlag = true;
break;
} What is the expected output? What do you see instead? waitFortext should return true only when expected text found unless it should return false. otherwise it is keeps on looping even it returns at times. What version of the product are you using? On what operating system? Robotium 3.2.1 Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=271_ | priority | robotium consumes more time if we use conditional loops from on may what steps will reproduce the problem robotium consumes more time if we use conditional loops eg test solo waitfortext text because waitfortext method loops even it returns true while test true tflag true break what is the expected output what do you see instead waitfortext should return true only when expected text found unless it should return false otherwise it is keeps on looping even it returns at times what version of the product are you using on what operating system robotium please provide any additional information below original issue | 1 |
702,900 | 24,140,441,090 | IssuesEvent | 2022-09-21 14:28:47 | blindnet-io/blindnet.dev | https://api.github.com/repos/blindnet-io/blindnet.dev | closed | new content structure | type: doc effort2: medium (days) type: request priority: 0 (critical) | > Request by @Vuk-BN for 2022-07-22: Deliver a thorough plan describing the new structure of the website while progressively moving toward the DevKit.
Define the new structure and create the associated sections. | 1.0 | new content structure - > Request by @Vuk-BN for 2022-07-22: Deliver a thorough plan describing the new structure of the website while progressively moving toward the DevKit.
Define the new structure and create the associated sections. | priority | new content structure request by vuk bn for deliver a thorough plan describing the new structure of the website while progressively moving toward the devkit define the new structure and create the associated sections | 1 |
661,435 | 22,054,518,161 | IssuesEvent | 2022-05-30 11:40:03 | sciencemesh/cs3api4lab | https://api.github.com/repos/sciencemesh/cs3api4lab | opened | Cache calls | type:enhancement priority:medium | If we cannot remove the need to stat the shares in the short term, we should at least cache these calls.
These, and maybe others... Please evaluate what calls are done frequently. | 1.0 | Cache calls - If we cannot remove the need to stat the shares in the short term, we should at least cache these calls.
These, and maybe others... Please evaluate what calls are done frequently. | priority | cache calls if we cannot remove the need to stat the shares in the short term we should at least cache these calls these and maybe others please evaluate what calls are done frequently | 1 |
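Caching the frequent stat-style calls mentioned above could look like this time-bounded memoizer. The `ttl_cache` decorator and its 30-second window are illustrative assumptions, not cs3api4lab's API:

```python
import time


def ttl_cache(ttl_seconds, clock=time.monotonic):
    """Cache a function's results for ttl_seconds (hypothetical helper).

    Repeated calls with the same positional arguments reuse the cached value
    until it expires, so an expensive stat call is made at most once per
    window instead of on every request.
    """
    def decorator(func):
        store = {}  # args -> (expires_at, value)

        def wrapper(*args):
            entry = store.get(args)
            now = clock()
            if entry is not None and entry[0] > now:
                return entry[1]           # fresh enough: serve from cache
            value = func(*args)           # stale or missing: hit the backend
            store[args] = (now + ttl_seconds, value)
            return value

        wrapper.cache_clear = store.clear  # manual invalidation hook
        return wrapper
    return decorator
```

A TTL (rather than `functools.lru_cache`, which never expires entries) fits here because share metadata can change underneath the cache.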
629,188 | 20,025,524,971 | IssuesEvent | 2022-02-01 20:51:28 | wp-media/wp-rocket | https://api.github.com/repos/wp-media/wp-rocket | closed | Disable OPCache purging on SiteGround | type: enhancement 3rd party compatibility module: cache priority: medium effort: [XS] | **Is your feature request related to a problem? Please describe.**
We have at least a couple of reports from SiteGround customers that they are getting the `OPcache purge failed.` message when they are trying to purge the OPCache.
Their support told one of [the customers](https://secure.helpscout.net/conversation/1729608368/314685/#thread-5074820863) (internal link) they are using file-based OPcache, and the folder isn't accessible to plugins.
**Describe the solution you'd like**
On SiteGround, do not display:
1. The "Clear OPCache" admin menu item.

2. The dashboard "Quick Actions".

**Additional context**
**Tickets:**
https://secure.helpscout.net/conversation/1752825270/318477/
https://secure.helpscout.net/conversation/1729608368/314685/
| 1.0 | Disable OPCache purging on SiteGround - **Is your feature request related to a problem? Please describe.**
We have at least a couple of reports from SiteGround customers that they are getting the `OPcache purge failed.` message when they are trying to purge the OPCache.
Their support told one of [the customers](https://secure.helpscout.net/conversation/1729608368/314685/#thread-5074820863) (internal link) they are using file-based OPcache, and the folder isn't accessible to plugins.
**Describe the solution you'd like**
On SiteGround, do not display:
1. The "Clear OPCache" admin menu item.

2. The dashboard "Quick Actions".

**Additional context**
**Tickets:**
https://secure.helpscout.net/conversation/1752825270/318477/
https://secure.helpscout.net/conversation/1729608368/314685/
| priority | disable opcache purging on siteground is your feature request related to a problem please describe we have at least a couple of reports from siteground customers that they are getting the opcache purge failed message when they are trying to purge the opcache their support told one of internal link they are using file based opcache and the folder isn t accessible to plugins describe the solution you d like on siteground do not display the clear opcache admin menu item the dashboard quick actions additional context tickets | 1 |
742,158 | 25,840,885,894 | IssuesEvent | 2022-12-13 00:12:45 | apache/airflow | https://api.github.com/repos/apache/airflow | closed | KubernetesExecutor: kubectl task gets stuck running if command takes more than 0.4 sec | kind:bug stale priority:medium pending-response area:kubernetes reported_version:2.1 | ### Apache Airflow version
2.1.3 (latest released)
### Operating System
20.04.1-Ubuntu (in k8s, via helm)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon | 2.1.0
apache-airflow-providers-celery | 2.0.0
apache-airflow-providers-cncf-kubernetes | 2.0.2
apache-airflow-providers-docker | 2.1.0
apache-airflow-providers-elasticsearch | 2.0.2
apache-airflow-providers-ftp | 2.0.0
apache-airflow-providers-google | 5.0.0
apache-airflow-providers-grpc | 2.0.0
apache-airflow-providers-hashicorp | 2.0.0
apache-airflow-providers-http | 2.0.0
apache-airflow-providers-imap | 2.0.0
apache-airflow-providers-microsoft-azure | 3.1.0
apache-airflow-providers-mysql | 2.1.0
apache-airflow-providers-postgres | 2.0.0
apache-airflow-providers-redis | 2.0.0
apache-airflow-providers-sendgrid | 2.0.0
apache-airflow-providers-sftp | 2.1.0
apache-airflow-providers-slack | 4.0.0
apache-airflow-providers-sqlite | 2.0.0
apache-airflow-providers-ssh | 2.1.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Dockerfile (build this first, then push to registry):
```
FROM apache/airflow:2.1.3
USER root
RUN apt update && apt install -y curl
# kubectl
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
RUN install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# kubeconfig
RUN mkdir -p ~/.kube
COPY ./config /home/airflow/.kube/config
ENV KUBECONFIG=/home/airflow/.kube/config
RUN chown airflow /home/airflow/.kube/config
COPY increment.py $AIRFLOW_HOME/dags/increment.py
USER airflow
RUN pip install sh
```
values.yaml (provide this to the helm chart while deploying, use your registry's address instead of mine):
```
executor: KubernetesExecutor
defaultAirflowRepository: 192.168.90.13:30500/airflow
defaultAirflowTag: freezebug
images:
airflow:
pullPolicy: Always
flower:
pullPolicy: Always
pod_template:
pullPolicy: Always
logs:
persistence:
enabled: true
size: 1Gi
```
increment.py (see [repo](https://github.com/MatrixManAtYrService/bug_airflowfreeze) for complete file)
```python3
def get_task(delay):
@task(task_id=str(delay))
def wait():
# sleep in a bash container
run(f"kubectl run do_thing --rm --restart=Never -i --image=bash -- sleep {delay}")
return wait()
@dag
def increment():
# sleep longer each time
prev_task = get_task(0)
for i in range(1, 20):
current_task = get_task(i / 10)
prev_task >> current_task
prev_task = current_task
```
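One way to keep a shelled-out `kubectl run` from wedging the task — sketched here as an assumption rather than a confirmed fix — is to detach stdin and bound the call with a timeout; the `-i` flag used above otherwise leaves kubectl waiting on an attached stdin:

```python
import subprocess


def run_cli(cmd, timeout_seconds=300):
    """Run a CLI command (e.g. kubectl) from a task without letting it hang.

    stdin is explicitly detached so an interactive flag such as kubectl's -i
    cannot block on an attached stream, and the whole call is bounded by a
    timeout so a wedged command fails the task instead of leaving it Running
    forever.
    """
    completed = subprocess.run(
        cmd,
        stdin=subprocess.DEVNULL,  # nothing to attach
        capture_output=True,
        text=True,
        timeout=timeout_seconds,   # raises subprocess.TimeoutExpired when exceeded
    )
    completed.check_returncode()   # non-zero exit becomes a task failure
    return completed.stdout
```

The `run_cli` name and the 300-second default are hypothetical; the point is only that an unbounded, stdin-attached subprocess is the kind of call that can stall indefinitely.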
### What happened
I had defined a DAG which uses `kubectl run` to run `bash -c 'sleep X'` for successively higher values of X:
- 0 sec
- 0.1 sec
- 0.2 sec
- ...
- 2 sec
At somewhere around 0.3 - 0.5 seconds of delay, the tasks stopped completing:
<img width="381" alt="stuck_eventually" src="https://user-images.githubusercontent.com/5834582/131287433-a8abaf1d-cfaa-4bcd-93ef-e754b20c41de.png">
### What you expected to happen
I expected the tasks to all complete successfully
### How to reproduce
I made [a repo](https://github.com/MatrixManAtYrService/bug_airflowfreeze) with the files that I used. See [commands.txt](https://github.com/MatrixManAtYrService/bug_airflowfreeze/blob/main/commands.txt) for the flow.
Generally:
- docker build an image with the problematic DAG, include kubectl and a kubeconfig file for a running cluster
- push the image to a repository (I use https://github.com/twuni/docker-registry.helm to deploy one locally)
- deploy Airflow in that image with the airflow helm chart
- run the "increment" DAG
- notice that it never completes
### Anything else
If you use the Sequential Executor, it completes just fine
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| 1.0 | KubernetesExecutor: kubectl task gets stuck running if command takes more than 0.4 sec - ### Apache Airflow version
2.1.3 (latest released)
### Operating System
20.04.1-Ubuntu (in k8s, via helm)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon | 2.1.0
apache-airflow-providers-celery | 2.0.0
apache-airflow-providers-cncf-kubernetes | 2.0.2
apache-airflow-providers-docker | 2.1.0
apache-airflow-providers-elasticsearch | 2.0.2
apache-airflow-providers-ftp | 2.0.0
apache-airflow-providers-google | 5.0.0
apache-airflow-providers-grpc | 2.0.0
apache-airflow-providers-hashicorp | 2.0.0
apache-airflow-providers-http | 2.0.0
apache-airflow-providers-imap | 2.0.0
apache-airflow-providers-microsoft-azure | 3.1.0
apache-airflow-providers-mysql | 2.1.0
apache-airflow-providers-postgres | 2.0.0
apache-airflow-providers-redis | 2.0.0
apache-airflow-providers-sendgrid | 2.0.0
apache-airflow-providers-sftp | 2.1.0
apache-airflow-providers-slack | 4.0.0
apache-airflow-providers-sqlite | 2.0.0
apache-airflow-providers-ssh | 2.1.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Dockerfile (build this first, then push to registry):
```
FROM apache/airflow:2.1.3
USER root
RUN apt update && apt install -y curl
# kubectl
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
RUN install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# kubeconfig
RUN mkdir -p ~/.kube
COPY ./config /home/airflow/.kube/config
ENV KUBECONFIG=/home/airflow/.kube/config
RUN chown airflow /home/airflow/.kube/config
COPY increment.py $AIRFLOW_HOME/dags/increment.py
USER airflow
RUN pip install sh
```
values.yaml (provide this to the helm chart while deploying, use your registry's address instead of mine):
```
executor: KubernetesExecutor
defaultAirflowRepository: 192.168.90.13:30500/airflow
defaultAirflowTag: freezebug
images:
  airflow:
    pullPolicy: Always
  flower:
    pullPolicy: Always
  pod_template:
    pullPolicy: Always
logs:
  persistence:
    enabled: true
    size: 1Gi
```
increment.py (see [repo](https://github.com/MatrixManAtYrService/bug_airflowfreeze) for complete file)
```python3
def get_task(delay):
    @task(task_id=str(delay))
    def wait():
        # sleep in a bash container
        run(f"kubectl run do_thing --rm --restart=Never -i --image=bash -- sleep {delay}")
    return wait()

@dag
def increment():
    # sleep longer each time
    prev_task = get_task(0)
    for i in range(1, 20):
        current_task = get_task(i / 10)
        prev_task >> current_task
        prev_task = current_task
```
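The chaining above builds a strictly linear DAG whose per-task sleep grows by 0.1 s each step. A standalone sketch of the same schedule (no Airflow needed; the `>>` chaining is simulated as an ordered list of edges, which is an illustration rather than part of the original repro) confirms the delays the DAG requests:

```python
# Reproduce the delay schedule that increment.py builds: one task per
# delay value, chained linearly so each task runs after the previous one.
delays = [0] + [i / 10 for i in range(1, 20)]

# Simulate the `prev_task >> current_task` chaining as ordered edges.
edges = list(zip(delays, delays[1:]))

print(len(delays))   # → 20
print(delays[-1])    # → 1.9
```

So the run sweeps 20 tasks from a 0-second sleep up to a 1.9-second sleep, which is why a failure threshold near 0.4 s shows up a few tasks into the chain.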
### What happened
I had defined a DAG which uses `kubectl run` to run `bash -c 'sleep X'` for successively higher values of X:
- 0 sec
- 0.1 sec
- 0.2 sec
- ...
- 2 sec
Somewhere around 0.3 to 0.5 seconds of delay, the tasks stopped completing:
<img width="381" alt="stuck_eventually" src="https://user-images.githubusercontent.com/5834582/131287433-a8abaf1d-cfaa-4bcd-93ef-e754b20c41de.png">
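Since the reported threshold is roughly 0.4 s of command runtime, a small timing helper can check how long the spawned command actually takes. This is a sketch, not part of the original repro: it shells out to `sleep` instead of `kubectl run` so it is self-contained, but the same helper could wrap the DAG's `kubectl` call.

```python
import subprocess
import time

def timed_run(cmd):
    """Run cmd and return (elapsed_seconds, return_code)."""
    start = time.monotonic()
    completed = subprocess.run(cmd)
    return time.monotonic() - start, completed.returncode

# `sleep` stands in for the `kubectl run ... sleep X` call the DAG uses.
elapsed, code = timed_run(["sleep", "0.5"])
print(elapsed >= 0.5, code)  # → True 0
```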
### What you expected to happen
I expected the tasks to all complete successfully
### How to reproduce
I made [a repo](https://github.com/MatrixManAtYrService/bug_airflowfreeze) with the files that I used. See [commands.txt](https://github.com/MatrixManAtYrService/bug_airflowfreeze/blob/main/commands.txt) for the flow.
Generally:
- docker build an image with the problematic DAG, include kubectl and a kubeconfig file for a running cluster
- push the image to a repository (I use https://github.com/twuni/docker-registry.helm to deploy one locally)
- deploy Airflow in that image with the airflow helm chart
- run the "increment" DAG
- notice that it never completes
### Anything else
If you use the Sequential Executor, it completes just fine
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| priority | kubernetesexecutor kubectl task gets stuck running if command takes more than sec apache airflow version latest released operating system ubuntu in via helm versions of apache airflow providers apache airflow providers amazon apache airflow providers celery apache airflow providers cncf kubernetes apache airflow providers docker apache airflow providers elasticsearch apache airflow providers ftp apache airflow providers google apache airflow providers grpc apache airflow providers hashicorp apache airflow providers http apache airflow providers imap apache airflow providers microsoft azure apache airflow providers mysql apache airflow providers postgres apache airflow providers redis apache airflow providers sendgrid apache airflow providers sftp apache airflow providers slack apache airflow providers sqlite apache airflow providers ssh deployment official apache airflow helm chart deployment details dockerfile build this first then push to registry from apache airflow user root run apt update apt install y curl kubectl run curl lo l s run install o root g root m kubectl usr local bin kubectl kubeconfig run mkdir p kube copy config home airflow kube config env kubeconfig home airflow kube config run chown airflow home airflow kube config copy increment py airflow home dags increment py user airflow run pip install sh values yaml provide this to the helm chart while deploying use your registry s address instead of mine executor kubernetesexecutor defaultairflowrepository airflow defaultairflowtag freezebug images airflow pullpolicy always flower pullpolicy always pod template pullpolicy always logs persistence enabled true size increment py see for complete file def get task delay task task id str delay def wait sleep in a bash container run f kubectl run do thing rm restart never i image bash sleep delay return wait dag def increment sleep longer each time prev task get task for i in range current task get task i prev task current task prev task 
current task what happened i had defined a dag which uses kubectl run to run bash c sleep x for successively higher values of x sec sec sec sec at somewhere around seconds of delay the tasks stopped completing img width alt stuck eventually src what you expected to happen i expected the tasks to all complete successfully how to reproduce i made with the files that i used see for the flow generally docker build an image with the problematic dag include kubectl and a kubeconfig file for a running cluster push the image to a repository i use to deploy one locally deploy airflow in that image with the airflow helm chart run the increment dag notice that it never completes anything else if you use the sequential executor it completes just fine are you willing to submit pr yes i am willing to submit a pr code of conduct i agree to follow this project s | 1 |
651,824 | 21,511,576,147 | IssuesEvent | 2022-04-28 05:28:25 | kubesphere/ks-devops | https://api.github.com/repos/kubesphere/ks-devops | closed | When importing the gitlab repository, you need to add the suffix ".git" to the repository address | kind/bug priority/medium | ### What is version of KubeSphere DevOps has the issue?
v3.3.0-alpha.2
### How did you install the Kubernetes? Or what is the Kubernetes distribution?
_No response_
### What happened?
When importing the GitLab repository, the URL is displayed as `https://gitlab.com/wxsunshine/open-podcasts`, but an error appears in the CD detail page.

When the URL is modified to `https://gitlab.com/wxsunshine/open-podcasts.git`, it succeeds.

/priority medium
/assign @kubesphere/sig-devops
### Relevant log output
_No response_
### Additional information
_No response_ | 1.0 | When importing the gitlab repository, you need to add the suffix ".git" to the repository address - ### What is version of KubeSphere DevOps has the issue?
v3.3.0-alpha.2
### How did you install the Kubernetes? Or what is the Kubernetes distribution?
_No response_
### What happened?
When importing the GitLab repository, the URL is displayed as `https://gitlab.com/wxsunshine/open-podcasts`, but an error appears in the CD detail page.

When the URL is modified to `https://gitlab.com/wxsunshine/open-podcasts.git`, it succeeds.

/priority medium
/assign @kubesphere/sig-devops
### Relevant log output
_No response_
### Additional information
_No response_ | priority | when importing the gitlab repository you need to add the suffix git to the repository address what is version of kubesphere devops has the issue alpha how did you install the kubernetes or what is the kubernetes distribution no response what happened when importing the gitlab repository the url is displayed as but get error in cd detail when modify url to get success priority medium assign kubesphere sig devops relevant log output no response additional information no response | 1 |