Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1
value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3
values | title stringlengths 1 957 | labels stringlengths 4 795 | body stringlengths 1 259k | index stringclasses 12
values | text_combine stringlengths 96 259k | label stringclasses 2
values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
691,815 | 23,711,820,585 | IssuesEvent | 2022-08-30 08:30:18 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Bluetooth: Scan responses with info about periodic adv. sometimes stops being reported | bug priority: medium area: Bluetooth area: Bluetooth Controller | **Describe the bug**
Sometimes after for example a sync termination then no new advertisments with periodic adv. are reported in the `bt_le_scan_cb recv` callback. Restarting scanning will result again with scan responses containing info about periodic adv.
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using? - Zephyr SDK and NRF52833
- Is this a regression? - Yes a regression. No bisect made, but it works with the Zephyr version used in nRF Connect SDK 2.0
**To Reproduce**
Build below application based on the `direction_finding_connectionless_rx` sample (the application does not enable CTE sampling by default but can still reproduce the issue).
[aoa_receiver_multiple.zip](https://github.com/zephyrproject-rtos/zephyr/files/9402234/aoa_receiver_multiple.zip)
Have 2 tags sending periodic advertisements (I used about 50ms interval and about 30 bytes per. adv. data payload, this is probably not needed and won't matter) for example use `direction_finding_connectionless_tx` sample, the `periodic_adv` should also work, but I have not tried.
Then just randomly reset the two tags until the log print `Got peridoic adv. scan rsp` disappears.
**Expected behavior**
Scan responses keep reporting scan responses with periodic advertisements.
**Impact**
Annoyance for application using periodic advertising sync as restart of scan is needed periodically.
**Logs and console output**
```
** Booting Zephyr OS build zephyr-v3.1.0-3844-g0e560bf8e91e ***
Starting Connectionless Locator Demo
Bluetooth initialization...success
Scan callbacks register...success.
Periodic Advertising callbacks register...success.
Start scanning...success
Waiting for periodic advertising...
Scan is running...
Got peridoic adv. scan rsp.
(306) Found device sending per adv: 21:D6:DD:B7:69:62 (random)
(306) Creating Periodic Advertising Sync...
success.
Sync index: 0
Got peridoic adv. scan rsp.
Got peridoic adv. scan rsp.
(967) PER_ADV_SYNC[0]: [DEVICE]: 21:D6:DD:B7:69:62 (random) synced, Interval 0x0028 (50 ms), PHY LE 1M
Got peridoic adv. scan rsp.
(1177) Found device sending per adv: 3C:97:D6:E5:A2:40 (random)
(1177) Creating Periodic Advertising Sync...
success.
Sync index: 1
TAGS:0,sync...,-,-,-,-,-,-,-,-,
Got peridoic adv. scan rsp.
(1808) PER_ADV_SYNC[1]: [DEVICE]: 3C:97:D6:E5:A2:40 (random) synced, Interval 0x0028 (50 ms), PHY LE 1M
Got peridoic adv. scan rsp.
TAGS:0,0,-,-,-,-,-,-,-,-,
Got peridoic adv. scan rsp.
Got peridoic adv. scan rsp.
TAGS:0,0,-,-,-,-,-,-,-,-,
Got peridoic adv. scan rsp.
(3343) Found device sending per adv: 0F:46:A7:CE:90:F7 (random)
(3343) Creating Periodic Advertising Sync...
success.
Sync index: 2
Got peridoic adv. scan rsp.
(3784) PER_ADV_SYNC[2]: [DEVICE]: 0F:46:A7:CE:90:F7 (random) synced, Interval 0x0028 (50 ms), PHY LE 1M
TAGS:0,0,0,-,-,-,-,-,-,-,
TAGS:0,0,0,-,-,-,-,-,-,-,
(5460) PER_ADV_SYNC[1]: [DEVICE]: 3C:97:D6:E5:A2:40 (random) sync terminated reason: 31
Device was synced: term_cb: clear device index: 1
(5869) PER_ADV_SYNC[0]: [DEVICE]: 21:D6:DD:B7:69:62 (random) sync terminated reason: 31
Device was synced: term_cb: clear device index: 0
TAGS:-,-,0,-,-,-,-,-,-,-,
Scan is running...
TAGS:-,-,0,-,-,-,-,-,-,-,
TAGS:-,-,0,-,-,-,-,-,-,-,
TAGS:-,-,0,-,-,-,-,-,-,-,
TAGS:-,-,0,-,-,-,-,-,-,-,
(10585) PER_ADV_SYNC[2]: [DEVICE]: 0F:46:A7:CE:90:F7 (random) sync terminated reason: 31
Device was synced: term_cb: clear device index: 2
TAGS:-,-,-,-,-,-,-,-,-,-,
Scan is running...
TAGS:-,-,-,-,-,-,-,-,-,-,
TAGS:-,-,-,-,-,-,-,-,-,-,
TAGS:-,-,-,-,-,-,-,-,-,-,
```
After this no new tags sending per. adv. can be found, note that the `Scan is running...` print is still printed.
The 0 in the `TAGS:0,0,-,-,-,-,-,-,-,-,` print indicate that the tag is synced (if CTE is sampled it shows num CTE per second).
**Environment (please complete the following information):**
- OS: Windows
- Toolchain: Zephyr SDK
- Commit SHA or Version used: 4f84bc8f40311fb7ccd6575790c97b4e7807cc87
| 1.0 | Bluetooth: Scan responses with info about periodic adv. sometimes stops being reported - **Describe the bug**
Sometimes after for example a sync termination then no new advertisments with periodic adv. are reported in the `bt_le_scan_cb recv` callback. Restarting scanning will result again with scan responses containing info about periodic adv.
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using? - Zephyr SDK and NRF52833
- Is this a regression? - Yes a regression. No bisect made, but it works with the Zephyr version used in nRF Connect SDK 2.0
**To Reproduce**
Build below application based on the `direction_finding_connectionless_rx` sample (the application does not enable CTE sampling by default but can still reproduce the issue).
[aoa_receiver_multiple.zip](https://github.com/zephyrproject-rtos/zephyr/files/9402234/aoa_receiver_multiple.zip)
Have 2 tags sending periodic advertisements (I used about 50ms interval and about 30 bytes per. adv. data payload, this is probably not needed and won't matter) for example use `direction_finding_connectionless_tx` sample, the `periodic_adv` should also work, but I have not tried.
Then just randomly reset the two tags until the log print `Got peridoic adv. scan rsp` disappears.
**Expected behavior**
Scan responses keep reporting scan responses with periodic advertisements.
**Impact**
Annoyance for application using periodic advertising sync as restart of scan is needed periodically.
**Logs and console output**
```
** Booting Zephyr OS build zephyr-v3.1.0-3844-g0e560bf8e91e ***
Starting Connectionless Locator Demo
Bluetooth initialization...success
Scan callbacks register...success.
Periodic Advertising callbacks register...success.
Start scanning...success
Waiting for periodic advertising...
Scan is running...
Got peridoic adv. scan rsp.
(306) Found device sending per adv: 21:D6:DD:B7:69:62 (random)
(306) Creating Periodic Advertising Sync...
success.
Sync index: 0
Got peridoic adv. scan rsp.
Got peridoic adv. scan rsp.
(967) PER_ADV_SYNC[0]: [DEVICE]: 21:D6:DD:B7:69:62 (random) synced, Interval 0x0028 (50 ms), PHY LE 1M
Got peridoic adv. scan rsp.
(1177) Found device sending per adv: 3C:97:D6:E5:A2:40 (random)
(1177) Creating Periodic Advertising Sync...
success.
Sync index: 1
TAGS:0,sync...,-,-,-,-,-,-,-,-,
Got peridoic adv. scan rsp.
(1808) PER_ADV_SYNC[1]: [DEVICE]: 3C:97:D6:E5:A2:40 (random) synced, Interval 0x0028 (50 ms), PHY LE 1M
Got peridoic adv. scan rsp.
TAGS:0,0,-,-,-,-,-,-,-,-,
Got peridoic adv. scan rsp.
Got peridoic adv. scan rsp.
TAGS:0,0,-,-,-,-,-,-,-,-,
Got peridoic adv. scan rsp.
(3343) Found device sending per adv: 0F:46:A7:CE:90:F7 (random)
(3343) Creating Periodic Advertising Sync...
success.
Sync index: 2
Got peridoic adv. scan rsp.
(3784) PER_ADV_SYNC[2]: [DEVICE]: 0F:46:A7:CE:90:F7 (random) synced, Interval 0x0028 (50 ms), PHY LE 1M
TAGS:0,0,0,-,-,-,-,-,-,-,
TAGS:0,0,0,-,-,-,-,-,-,-,
(5460) PER_ADV_SYNC[1]: [DEVICE]: 3C:97:D6:E5:A2:40 (random) sync terminated reason: 31
Device was synced: term_cb: clear device index: 1
(5869) PER_ADV_SYNC[0]: [DEVICE]: 21:D6:DD:B7:69:62 (random) sync terminated reason: 31
Device was synced: term_cb: clear device index: 0
TAGS:-,-,0,-,-,-,-,-,-,-,
Scan is running...
TAGS:-,-,0,-,-,-,-,-,-,-,
TAGS:-,-,0,-,-,-,-,-,-,-,
TAGS:-,-,0,-,-,-,-,-,-,-,
TAGS:-,-,0,-,-,-,-,-,-,-,
(10585) PER_ADV_SYNC[2]: [DEVICE]: 0F:46:A7:CE:90:F7 (random) sync terminated reason: 31
Device was synced: term_cb: clear device index: 2
TAGS:-,-,-,-,-,-,-,-,-,-,
Scan is running...
TAGS:-,-,-,-,-,-,-,-,-,-,
TAGS:-,-,-,-,-,-,-,-,-,-,
TAGS:-,-,-,-,-,-,-,-,-,-,
```
After this no new tags sending per. adv. can be found, note that the `Scan is running...` print is still printed.
The 0 in the `TAGS:0,0,-,-,-,-,-,-,-,-,` print indicate that the tag is synced (if CTE is sampled it shows num CTE per second).
**Environment (please complete the following information):**
- OS: Windows
- Toolchain: Zephyr SDK
- Commit SHA or Version used: 4f84bc8f40311fb7ccd6575790c97b4e7807cc87
| priority | bluetooth scan responses with info about periodic adv sometimes stops being reported describe the bug sometimes after for example a sync termination then no new advertisments with periodic adv are reported in the bt le scan cb recv callback restarting scanning will result again with scan responses containing info about periodic adv please also mention any information which could help others to understand the problem you re facing what target platform are you using zephyr sdk and is this a regression yes a regression no bisect made but it works with the zephyr version used in nrf connect sdk to reproduce build below application based on the direction finding connectionless rx sample the application does not enable cte sampling by default but can still reproduce the issue have tags sending periodic advertisements i used about interval and about bytes per adv data payload this is probably not needed and won t matter for example use direction finding connectionless tx sample the periodic adv should also work but i have not tried then just randomly reset the two tags until the log print got peridoic adv scan rsp disappears expected behavior scan responses keep reporting scan responses with periodic advertisements impact annoyance for application using periodic advertising sync as restart of scan is needed periodically logs and console output booting zephyr os build zephyr starting connectionless locator demo bluetooth initialization success scan callbacks register success periodic advertising callbacks register success start scanning success waiting for periodic advertising scan is running got peridoic adv scan rsp found device sending per adv dd random creating periodic advertising sync success sync index got peridoic adv scan rsp got peridoic adv scan rsp per adv sync dd random synced interval ms phy le got peridoic adv scan rsp found device sending per adv random creating periodic advertising sync success sync index tags sync got peridoic adv scan rsp per adv sync random synced interval ms phy le got peridoic adv scan rsp tags got peridoic adv scan rsp got peridoic adv scan rsp tags got peridoic adv scan rsp found device sending per adv ce random creating periodic advertising sync success sync index got peridoic adv scan rsp per adv sync ce random synced interval ms phy le tags tags per adv sync random sync terminated reason device was synced term cb clear device index per adv sync dd random sync terminated reason device was synced term cb clear device index tags scan is running tags tags tags tags per adv sync ce random sync terminated reason device was synced term cb clear device index tags scan is running tags tags tags after this no new tags sending per adv can be found note that the scan is running print is still printed the in the tags print indicate that the tag is synced if cte is sampled it shows num cte per second environment please complete the following information os windows toolchain zephyr sdk commit sha or version used | 1 |
53,757 | 3,047,320,801 | IssuesEvent | 2015-08-11 03:25:22 | piccolo2d/piccolo2d.java | https://api.github.com/repos/piccolo2d/piccolo2d.java | closed | PArea, a wrapper for java.awt.geom.Area to allow Constructive Area Geometry (CAG) operations | Component-Core Effort-Medium Milestone-2.0 OpSys-All Priority-Medium Status-Verified Toolkit-Piccolo2D.Java Type-Enhancement | Originally reported on Google Code with ID 153
```
It would be desirable to have a node wrapper for java.awt.geom.Area to
allow Constructive Area Geometry (CAG) operations, such as area addition,
subtraction, intersection, and exclusive or.
```
Reported by `heuermh` on 2009-12-15 20:36:09
| 1.0 | PArea, a wrapper for java.awt.geom.Area to allow Constructive Area Geometry (CAG) operations - Originally reported on Google Code with ID 153
```
It would be desirable to have a node wrapper for java.awt.geom.Area to
allow Constructive Area Geometry (CAG) operations, such as area addition,
subtraction, intersection, and exclusive or.
```
Reported by `heuermh` on 2009-12-15 20:36:09
| priority | parea a wrapper for java awt geom area to allow constructive area geometry cag operations originally reported on google code with id it would be desirable to have a node wrapper for java awt geom area to allow constructive area geometry cag operations such as area addition subtraction intersection and exclusive or reported by heuermh on | 1 |
188,202 | 6,773,966,455 | IssuesEvent | 2017-10-27 08:35:58 | status-im/status-react | https://api.github.com/repos/status-im/status-react | closed | "Insufficient funds" error is not visible if try to send ETH from Wallet via Enter address/Scan QR | android bug medium-priority wontfix | ### Description
[comment]: # (Feature or Bug? i.e Type: Bug)
*Type*: Bug
[comment]: # (Describe the feature you would like, or briefly summarise the bug and what you did, what you expected to happen, and what actually happens. Sections below)
*Summary*: If try to send more ETH than available to someone who is not in the Contacts by using Wallet -> Send -> Enter Address/Scan QR then Unsigned transactions screen is shown. When user confirms transaction then Wallet's Send funds screen is shown with no feedback. In fact, behind this screen there is error "Insufficient funds". It looks like error should be shown before Unsigned transactions screen.
#### Expected behavior
[comment]: # (Describe what you expected to happen.)
When user tries to send more ETH than available error message is shown to the user
#### Actual behavior
[comment]: # (Describe what actually happened.)
When user tries to send more ETH than available then error message is not visible to the user
### Reproduction
[comment]: # (Describe how we can replicate the bug step by step.)
Video: https://drive.google.com/open?id=0Bz3t9zSg1wb7ZGdjNHBfZjF1dHc
You need to have some ETH in the Wallet and someone's QR who is not in your contact list.
- Open Status
- Open Wallet
- For SEND type sum that is not available, e.g 1000
- Tap SEND
- Tap on Enter address or on Scan QR
- Scan QR code from someone who is not in your contacts list
- As a result, Send Funds screen is shown. Tap on Send button in the bottom
- On Unsigned Transactions tap on confirm transaction and provide password
- As a result, Send Funds screen is shown. Move this screen down there is error message behind it.
### Additional Information
[comment]: # (Please do your best to fill this out.)
* Status version: 0.9.6
* Operating System: Android only. In iOS error message is visible on Send funds screen and user can't continue
| 1.0 | "Insufficient funds" error is not visible if try to send ETH from Wallet via Enter address/Scan QR - ### Description
[comment]: # (Feature or Bug? i.e Type: Bug)
*Type*: Bug
[comment]: # (Describe the feature you would like, or briefly summarise the bug and what you did, what you expected to happen, and what actually happens. Sections below)
*Summary*: If try to send more ETH than available to someone who is not in the Contacts by using Wallet -> Send -> Enter Address/Scan QR then Unsigned transactions screen is shown. When user confirms transaction then Wallet's Send funds screen is shown with no feedback. In fact, behind this screen there is error "Insufficient funds". It looks like error should be shown before Unsigned transactions screen.
#### Expected behavior
[comment]: # (Describe what you expected to happen.)
When user tries to send more ETH than available error message is shown to the user
#### Actual behavior
[comment]: # (Describe what actually happened.)
When user tries to send more ETH than available then error message is not visible to the user
### Reproduction
[comment]: # (Describe how we can replicate the bug step by step.)
Video: https://drive.google.com/open?id=0Bz3t9zSg1wb7ZGdjNHBfZjF1dHc
You need to have some ETH in the Wallet and someone's QR who is not in your contact list.
- Open Status
- Open Wallet
- For SEND type sum that is not available, e.g 1000
- Tap SEND
- Tap on Enter address or on Scan QR
- Scan QR code from someone who is not in your contacts list
- As a result, Send Funds screen is shown. Tap on Send button in the bottom
- On Unsigned Transactions tap on confirm transaction and provide password
- As a result, Send Funds screen is shown. Move this screen down there is error message behind it.
### Additional Information
[comment]: # (Please do your best to fill this out.)
* Status version: 0.9.6
* Operating System: Android only. In iOS error message is visible on Send funds screen and user can't continue
| priority | insufficient funds error is not visible if try to send eth from wallet via enter address scan qr description feature or bug i e type bug type bug describe the feature you would like or briefly summarise the bug and what you did what you expected to happen and what actually happens sections below summary if try to send more eth than available to someone who is not in the contacts by using wallet send enter address scan qr then unsigned transactions screen is shown when user confirms transaction then wallet s send funds screen is shown with no feedback in fact behind this screen there is error insufficient funds it looks like error should be shown before unsigned transactions screen expected behavior describe what you expected to happen when user tries to send more eth than available error message is shown to the user actual behavior describe what actually happened when user tries to send more eth than available then error message is not visible to the user reproduction describe how we can replicate the bug step by step video you need to have some eth in the wallet and someone s qr who is not in your contact list open status open wallet for send type sum that is not available e g tap send tap on enter address or on scan qr scan qr code from someone who is not in your contacts list as a result send funds screen is shown tap on send button in the bottom on unsigned transactions tap on confirm transaction and provide password as a result send funds screen is shown move this screen down there is error message behind it additional information please do your best to fill this out status version operating system android only in ios error message is visible on send funds screen and user can t continue | 1 |
711,733 | 24,473,502,133 | IssuesEvent | 2022-10-07 23:38:16 | turbot/steampipe-plugin-oci | https://api.github.com/repos/turbot/steampipe-plugin-oci | closed | Deprecate columns ipv6_cidr_block and ipv6_public_cidr_block in oci_core_vcn table | enhancement priority:medium stale | **Describe the bug**
The VCN List/Get API does not return values for `ipv6_cidr_block` and `ipv6_public_cidr_block`, so it will always show as null.
**Steampipe version (`steampipe -v`)**
Example: v0.3.0
**Plugin version (`steampipe plugin list`)**
Example: v0.5.0
**To reproduce**
Steps to reproduce the behaviour (please include relevant code and/or commands).
**Expected behaviour**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| 1.0 | Deprecate columns ipv6_cidr_block and ipv6_public_cidr_block in oci_core_vcn table - **Describe the bug**
The VCN List/Get API does not return values for `ipv6_cidr_block` and `ipv6_public_cidr_block`, so it will always show as null.
**Steampipe version (`steampipe -v`)**
Example: v0.3.0
**Plugin version (`steampipe plugin list`)**
Example: v0.5.0
**To reproduce**
Steps to reproduce the behaviour (please include relevant code and/or commands).
**Expected behaviour**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| priority | deprecate columns cidr block and public cidr block in oci core vcn table describe the bug the vcn list get api does not return values for cidr block and public cidr block so it will always show as null steampipe version steampipe v example plugin version steampipe plugin list example to reproduce steps to reproduce the behaviour please include relevant code and or commands expected behaviour a clear and concise description of what you expected to happen additional context add any other context about the problem here | 1 |
54,682 | 3,070,961,113 | IssuesEvent | 2015-08-19 09:00:00 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | Отключить кеш windows для раздаваемых (да и наверно для всех) файлов | enhancement imported Performance Priority-Medium | _From [a.rain...@gmail.com](https://code.google.com/u/117892482479228821242/) on July 21, 2009 09:14:33_
Когда расшаришь популярные файлы большого объёма, и ни них налетит
толпа леммингов для скачки (большой аплоад), кеш виндовс забивается --
он уходит на обслуживание флайлинка -- дабы ему комфортно отдавались
ранее считанные данные.
Мне в принципе накакать, как быстро мой клиент отдаёт данные другим
пользователям -- я хочу, чтобы мне на мой машинке комфортно
работалось. Вот uTorrent умудряется раздавать гигантские объёмы
информации, и при этом комфортно работается -- потому что файлы для
выдачи он открывает с флагами Windows API, предотвращающими
использование кеша Windows при их чтении.
При этом кеш создаётся свой, внутренний, и используется он и только
он, ограниченного размера.
Есть идея использовать похожий механизм и для FlylinkDC++
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=26_ | 1.0 | Отключить кеш windows для раздаваемых (да и наверно для всех) файлов - _From [a.rain...@gmail.com](https://code.google.com/u/117892482479228821242/) on July 21, 2009 09:14:33_
Когда расшаришь популярные файлы большого объёма, и ни них налетит
толпа леммингов для скачки (большой аплоад), кеш виндовс забивается --
он уходит на обслуживание флайлинка -- дабы ему комфортно отдавались
ранее считанные данные.
Мне в принципе накакать, как быстро мой клиент отдаёт данные другим
пользователям -- я хочу, чтобы мне на мой машинке комфортно
работалось. Вот uTorrent умудряется раздавать гигантские объёмы
информации, и при этом комфортно работается -- потому что файлы для
выдачи он открывает с флагами Windows API, предотвращающими
использование кеша Windows при их чтении.
При этом кеш создаётся свой, внутренний, и используется он и только
он, ограниченного размера.
Есть идея использовать похожий механизм и для FlylinkDC++
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=26_ | priority | отключить кеш windows для раздаваемых да и наверно для всех файлов from on july когда расшаришь популярные файлы большого объёма и ни них налетит толпа леммингов для скачки большой аплоад кеш виндовс забивается он уходит на обслуживание флайлинка дабы ему комфортно отдавались ранее считанные данные мне в принципе накакать как быстро мой клиент отдаёт данные другим пользователям я хочу чтобы мне на мой машинке комфортно работалось вот utorrent умудряется раздавать гигантские объёмы информации и при этом комфортно работается потому что файлы для выдачи он открывает с флагами windows api предотвращающими использование кеша windows при их чтении при этом кеш создаётся свой внутренний и используется он и только он ограниченного размера есть идея использовать похожий механизм и для flylinkdc original issue | 1 |
576,422 | 17,086,781,144 | IssuesEvent | 2021-07-08 12:51:58 | netdata/netdata | https://api.github.com/repos/netdata/netdata | closed | Intel Performance Counters | area/collectors data-collection-mteam feature request new collector priority/medium | Found this: https://www.quora.com/How-can-I-monitor-PCI-Express-bus-usage-on-Linux
with a lead to this: https://software.intel.com/en-us/articles/intel-performance-counter-monitor
It would be nice if we could monitor such system internals...
| 1.0 | Intel Performance Counters - Found this: https://www.quora.com/How-can-I-monitor-PCI-Express-bus-usage-on-Linux
with a lead to this: https://software.intel.com/en-us/articles/intel-performance-counter-monitor
It would be nice if we could monitor such system internals...
| priority | intel performance counters found this with a lead to this it would be nice if we could monitor such system internals | 1 |
709,131 | 24,368,124,589 | IssuesEvent | 2022-10-03 16:47:23 | diba-io/bitmask-core | https://api.github.com/repos/diba-io/bitmask-core | opened | Wallet Contract Storage | priority-medium | This allows us to store contracts on the user's behalf. Earlier, assets would have to be tracked out of band.
GET and POST for both `/store_global` and `/store_wallet` on local `bitmaskd` and lambdas
File Schema:
- global
- rgbc
- rgb1... (rgbid asset genesis)
- wallet
- (pkh)
- assets (file)
- rgbid[] (list of rgbids)
- udas (file)
- rgbid[] (list of rgbids)
- rgbc1... (rgb contracts)
- consignment1... (rgb consignments)
An sk is generated from a specific wallet address at a specific, fixed, separate derivation path and wallet index.
Both endpoints are authenticated using a signature challenge; a hash of the data is signed by the sk, and the pk is provided and checked. Then a hash of the pk is made and the data is persisted under a directory with that hash. The file names under their directory should conform to the schema above, but won't be parsed or checked for consistency.
The methods for authenticating and retrieving the data can be tested locally in the local node, and in the future, a means of downloading and uploading your data can be built into bitmask-web. | 1.0 | Wallet Contract Storage - This allows us to store contracts on the user's behalf. Earlier, assets would have to be tracked out of band.
GET and POST for both `/store_global` and `/store_wallet` on local `bitmaskd` and lambdas
File Schema:
- global
- rgbc
- rgb1... (rgbid asset genesis)
- wallet
- (pkh)
- assets (file)
- rgbid[] (list of rgbids)
- udas (file)
- rgbid[] (list of rgbids)
- rgbc1... (rgb contracts)
- consignment1... (rgb consignments)
An sk is generated from a specific wallet address at a specific, fixed, separate derivation path and wallet index.
Both endpoints are authenticated using a signature challenge; a hash of the data is signed by the sk, and the pk is provided and checked. Then a hash of the pk is made and the data is persisted under a directory with that hash. The file names under their directory should conform to the schema above, but won't be parsed or checked for consistency.
The methods for authenticating and retrieving the data can be tested locally in the local node, and in the future, a means of downloading and uploading your data can be built into bitmask-web. | priority | wallet contract storage this allows us to store contracts on the user s behalf earlier assets would have to be tracked out of band get and post for both store global and store wallet on local bitmaskd and lambdas file schema global rgbc rgbid asset genesis wallet pkh assets file rgbid list of rgbids udas file rgbid list of rgbids rgb contracts rgb consignments an sk is generated from a specific wallet address at a specific fixed separate derivation path and wallet index both endpoints are authenticated using a signature challenge a hash of the data is signed by the sk and the pk is provided and checked then a hash of the pk is made and the data is persisted under a directory with that hash the file names under their directory should conform to the schema above but won t be parsed or checked for consistency the methods for authenticating and retrieving the data can be tested locally in the local node and in the future a means of downloading and uploading your data can be built into bitmask web | 1 |
372,070 | 11,008,735,167 | IssuesEvent | 2019-12-04 11:07:00 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Laws: Location conditions available for actions without location | Fixed Medium Priority Reopen | I.e. for PolluteAir action (which doesn't implement `IPositionGameAction` it still allows to add `Location` related conditions).

| 1.0 | Laws: Location conditions available for actions without location - I.e. for PolluteAir action (which doesn't implement `IPositionGameAction` it still allows to add `Location` related conditions).

| priority | laws location conditions available for actions without location i e for polluteair action which doesn t implement ipositiongameaction it still allows to add location related conditions | 1 |
677,899 | 23,179,434,674 | IssuesEvent | 2022-07-31 22:33:20 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Exhaustion Limit extra bits | Priority: Medium Type: Feature Category: Gameplay Squad: Redwood | - [ ] Make flags in Balance Manager that define what general activities are affected by exhaustion. Make them all on by default. Should be: farming, transport (vehicles), labor on tables, starting work order
- [ ] Make picking vegetables and starting work orders check exhaustion limit if flags set.
- [ ] Add a warning message that appears every couple minutes for the last 10 minutes | 1.0 | Exhaustion Limit extra bits - - [ ] Make flags in Balance Manager that define what general activities are affected by exhaustion. Make them all on by default. Should be: farming, transport (vehicles), labor on tables, starting work order
- [ ] Make picking vegetables and starting work orders check exhaustion limit if flags set.
- [ ] Add a warning message that appears every couple minutes for the last 10 minutes | priority | exhaustion limit extra bits make flags in balance manager that define what general activities are affected by exhaustion make them all on by default should be farming transport vehicles labor on tables starting work order make picking vegetables and starting work orders check exhaustion limit if flags set add a warning message that appears every couple minutes for the last minutes | 1 |
111,967 | 4,499,890,163 | IssuesEvent | 2016-09-01 01:09:40 | benvenutti/hasm | https://api.github.com/repos/benvenutti/hasm | closed | Load command accepts negative numbers | priority: medium status: completed type: bug | Loading signed integers with the load command is an invalid operation. The assembler should output an error message and halt the assembling process. | 1.0 | Load command accepts negative numbers - Loading signed integers with the load command is an invalid operation. The assembler should output an error message and halt the assembling process. | priority | load command accepts negative numbers loading signed integers with the load command is an invalid operation the assembler should output an error message and halt the assembling process | 1 |
791,233 | 27,856,657,648 | IssuesEvent | 2023-03-20 23:53:45 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | [workspaces] Consumer (conanfile.txt) as root of workspace | type: feature stage: queue priority: medium complex: medium | Current Workspace feature does not support conan "final consumer project" (configured by conanfile.txt) to be root of the workspace - it requires root to be full-featured package reference. As temp hack, we converted our conanfile.txt into conanfile.py with dummy names and versions, but we are not sure how safe such a solution is - do we have to keep those dummy names and version unique? Since those represent "final consuming projects", we do not have any reasonable names or versions for those - they are not supposed to go back into conan package.
If it is possible, please support conanfile.txt as root node of the workspace.
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
| 1.0 | [workspaces] Consumer (conanfile.txt) as root of workspace - Current Workspace feature does not support conan "final consumer project" (configured by conanfile.txt) to be root of the workspace - it requires root to be full-featured package reference. As temp hack, we converted our conanfile.txt into conanfile.py with dummy names and versions, but we are not sure how safe such a solution is - do we have to keep those dummy names and version unique? Since those represent "final consuming projects", we do not have any reasonable names or versions for those - they are not supposed to go back into conan package.
If it is possible, please support conanfile.txt as root node of the workspace.
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
| priority | consumer conanfile txt as root of workspace current workspace feature does not support conan final consumer project configured by conanfile txt to be root of the workspace it requires root to be full featured package reference as temp hack we converted our conanfile txt into conanfile py with dummy names and versions but we are not sure how safe such a solution is do we have to keep those dummy names and version unique since those represent final consuming projects we do not have any reasonable names or versions for those they are not supposed to go back into conan package if it is possible please support conanfile txt as root node of the workspace i ve read the i ve specified the conan version operating system version and any tool that can be relevant i ve explained the steps to reproduce the error or the motivation use case of the question suggestion | 1 |
638,837 | 20,739,897,336 | IssuesEvent | 2022-03-14 16:43:26 | bounswe/bounswe2022group5 | https://api.github.com/repos/bounswe/bounswe2022group5 | closed | Research about Telegram’s Chatbot API | Type: Research Medium Priority Status: In Progress | To Do:
- Research will be made about Telegram's Chatbot API to get the idea of how to use the bot in our Medical Experience Sharing Platform Project, and the advantages of the Chatbot API.
- Information about the research will be documented in our [Research Items for Project](https://github.com/bounswe/bounswe2022group5/wiki/Research-Items-for-Project) page.
Reviewers:
@Keremgunduz7 and @makifyilmaz
Task Deadline: 14.03.2022 - 13:00 GMT+3 | 1.0 | Research about Telegram’s Chatbot API - To Do:
- Research will be made about Telegram's Chatbot API to get the idea of how to use the bot in our Medical Experience Sharing Platform Project, and the advantages of the Chatbot API.
- Information about the research will be documented in our [Research Items for Project](https://github.com/bounswe/bounswe2022group5/wiki/Research-Items-for-Project) page.
Reviewers:
@Keremgunduz7 and @makifyilmaz
Task Deadline: 14.03.2022 - 13:00 GMT+3 | priority | research about telegram’s chatbot api to do research will be made about telegram s chatbot api to get the idea of how to use the bot in our medical experience sharing platform project and the advantages of the chatbot api information about the research will be documented in our page reviewers and makifyilmaz task deadline gmt | 1 |
303,337 | 9,305,666,228 | IssuesEvent | 2019-03-25 07:24:05 | TNG/ngqp | https://api.github.com/repos/TNG/ngqp | closed | Array values with multi: false need to be boxed | Comp: Core Priority: Medium Type: Bug | I think that right now, if `multi: false` is used with an array-types value, we end up spreading the value across multiple URL query parameters anway. We probably need to box the URL parameter for non-multi parameters to avoid this. | 1.0 | Array values with multi: false need to be boxed - I think that right now, if `multi: false` is used with an array-types value, we end up spreading the value across multiple URL query parameters anway. We probably need to box the URL parameter for non-multi parameters to avoid this. | priority | array values with multi false need to be boxed i think that right now if multi false is used with an array types value we end up spreading the value across multiple url query parameters anway we probably need to box the url parameter for non multi parameters to avoid this | 1 |
401,145 | 11,786,333,892 | IssuesEvent | 2020-03-17 12:04:17 | ooni/probe | https://api.github.com/repos/ooni/probe | closed | Fix OONI Explorer links linked in the mobile & desktop apps | effort/XS ooni/probe-desktop ooni/probe-mobile priority/medium | The OONI Probe mobile & desktop apps link to an old version of OONI Explorer (https://explorer.ooni.io/world/) which no longer works.
We need to replace these links with the new URL of the revamped OONI Explorer: https://explorer.ooni.org/ | 1.0 | Fix OONI Explorer links linked in the mobile & desktop apps - The OONI Probe mobile & desktop apps link to an old version of OONI Explorer (https://explorer.ooni.io/world/) which no longer works.
We need to replace these links with the new URL of the revamped OONI Explorer: https://explorer.ooni.org/ | priority | fix ooni explorer links linked in the mobile desktop apps the ooni probe mobile desktop apps link to an old version of ooni explorer which no longer works we need to replace these links with the new url of the revamped ooni explorer | 1 |
83,944 | 3,645,311,306 | IssuesEvent | 2016-02-15 14:07:57 | MBB-team/VBA-toolbox | https://api.github.com/repos/MBB-team/VBA-toolbox | closed | wiki is incomplete! | auto-migrated Priority-Medium Type-Enhancement | _From @GoogleCodeExporter on October 7, 2015 9:29_
```
What steps will reproduce the problem?
1. SVN checkout
2. list directories
3.
What is the expected output? What do you see instead?
Well, it should work.
What version of the product are you using? On what operating system?
I'm using version 0.0
Please provide any additional information below.
No.
```
Original issue reported on code.google.com by `jean.dau...@gmail.com` on 26 Mar 2012 at 3:49
_Copied from original issue: lionel-rigoux/mbb-vb-toolbox#1_ | 1.0 | wiki is incomplete! - _From @GoogleCodeExporter on October 7, 2015 9:29_
```
What steps will reproduce the problem?
1. SVN checkout
2. list directories
3.
What is the expected output? What do you see instead?
Well, it should work.
What version of the product are you using? On what operating system?
I'm using version 0.0
Please provide any additional information below.
No.
```
Original issue reported on code.google.com by `jean.dau...@gmail.com` on 26 Mar 2012 at 3:49
_Copied from original issue: lionel-rigoux/mbb-vb-toolbox#1_ | priority | wiki is incomplete from googlecodeexporter on october what steps will reproduce the problem svn checkout list directories what is the expected output what do you see instead well it should work what version of the product are you using on what operating system i m using version please provide any additional information below no original issue reported on code google com by jean dau gmail com on mar at copied from original issue lionel rigoux mbb vb toolbox | 1 |
461,529 | 13,231,787,226 | IssuesEvent | 2020-08-18 12:18:35 | input-output-hk/ouroboros-network | https://api.github.com/repos/input-output-hk/ouroboros-network | closed | Introduce header revalidation | consensus optimisation priority medium | As mentioned in https://github.com/input-output-hk/cardano-ledger-specs/pull/1785#issuecomment-675035681, the majority of block replay time is spent in C functions doing crypto checks, see the two top of the profiling report:
```
COST CENTRE SRC %time %alloc
verify src/Cardano/Crypto/VRF/Praos.hs:(388,1)-(397,33) 46.9 0.1
verify Crypto/PubKey/Ed25519.hs:(107,1)-(114,30) 6.3 0.0
```
Both are part of `validateHeader`, which accounts for 63.2% of the time and 11.6% of the allocations.
Note that this is when we're *reapplying blocks*, so we know these blocks *and thus their headers* are already valid. While we already distinguish between block body [application](https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/Ledger/Abstract.hs#L63) and [reapplication](https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/Ledger/Abstract.hs#L76), we don't do the same thing for block [header validation](https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/HeaderValidation.hs#L468).
Header validation consists of three parts:
https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/HeaderValidation.hs#L474-L492
1. `validateEnvelope`, which is cheap, but could be skipped during header revalidation. Maybe leave it in as a basic sanity check?
2. `updateChainDepState`, expensive, as Shelley's implementation uses the `Prtcl` rule, which calls the `Overlay` rule, which includes [`pbftVrfChecks`](https://github.com/input-output-hk/cardano-ledger-specs/blob/736c9295f5896de28fbcedc6a8fc4d509fb78d10/shelley/chain-and-ledger/executable-spec/src/Shelley/Spec/Ledger/STS/Overlay.hs#L262) (current hotspot) and [`praosVrfChecks`](https://github.com/input-output-hk/cardano-ledger-specs/blob/736c9295f5896de28fbcedc6a8fc4d509fb78d10/shelley/chain-and-ledger/executable-spec/src/Shelley/Spec/Ledger/STS/Overlay.hs#L253) (likely the future hotspot when more decentralised). The `Overlay` rule also calls the `Ocert` rule, which does a KES check with [`verifySignedKES`](https://github.com/input-output-hk/cardano-ledger-specs/blob/736c9295f5896de28fbcedc6a8fc4d509fb78d10/shelley/chain-and-ledger/executable-spec/src/Shelley/Spec/Ledger/STS/Ocert.hs#L101), which is the second `verify` from the profiling report.
Both of these checks should not be needed during *revalidation*. However, the `Prtcl` rule and its subrules do update some state, i.e., the `ChainDepState`, so we can't just *not* call these rules.
If the ledger provides a version of [`updateChainDepState`](https://github.com/input-output-hk/cardano-ledger-specs/blob/736c9295f5896de28fbcedc6a8fc4d509fb78d10/shelley/chain-and-ledger/executable-spec/src/Shelley/Spec/Ledger/API/Protocol.hs#L287) (note that this is a function in `cardano-ledger-specs`, provided exactly to be used for the Shelley implementation of consensus' `updateChainDepState` function) that skips the VRF and KES checks, we can use that in consensus to speed up header revalidation.
3. `headerStatePush`, cheapest of all three and required, as it builds up required state. Must be included in revalidation.
Consensus changes:
* Introduce the `reupdateChainDepState` method in the `ConsensusProtocol` class, similar to [`updateChainDepState`](https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/Protocol/Abstract.hs#L202-L208), but its implementation can assume `updateChainDepState` has been called before with the same header so it can skip the expensive crypto checks. I'm not sure yet whether it will still return an `Except` or just the updated `ChainDepState`.
* Introduce a function called `revalidateHeader` in `Ouroboros.Consensus.HeaderValidation` (to be used when reapplying blocks [here](https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/Ledger/Extended.hs#L189)), which calls `validateEnvelope`, `headerStatePush`, and `reupdateChainDepState`.
| 1.0 | Introduce header revalidation - As mentioned in https://github.com/input-output-hk/cardano-ledger-specs/pull/1785#issuecomment-675035681, the majority of block replay time is spent in C functions doing crypto checks, see the two top of the profiling report:
```
COST CENTRE SRC %time %alloc
verify src/Cardano/Crypto/VRF/Praos.hs:(388,1)-(397,33) 46.9 0.1
verify Crypto/PubKey/Ed25519.hs:(107,1)-(114,30) 6.3 0.0
```
Both are part of `validateHeader`, which accounts for 63.2% of the time and 11.6% of the allocations.
Note that this is when we're *reapplying blocks*, so we know these blocks *and thus their headers* are already valid. While we already distinguish between block body [application](https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/Ledger/Abstract.hs#L63) and [reapplication](https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/Ledger/Abstract.hs#L76), we don't do the same thing for block [header validation](https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/HeaderValidation.hs#L468).
Header validation consists of three parts:
https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/HeaderValidation.hs#L474-L492
1. `validateEnvelope`, which is cheap, but could be skipped during header revalidation. Maybe leave it in as a basic sanity check?
2. `updateChainDepState`, expensive, as Shelley's implementation uses the `Prtcl` rule, which calls the `Overlay` rule, which includes [`pbftVrfChecks`](https://github.com/input-output-hk/cardano-ledger-specs/blob/736c9295f5896de28fbcedc6a8fc4d509fb78d10/shelley/chain-and-ledger/executable-spec/src/Shelley/Spec/Ledger/STS/Overlay.hs#L262) (current hotspot) and [`praosVrfChecks`](https://github.com/input-output-hk/cardano-ledger-specs/blob/736c9295f5896de28fbcedc6a8fc4d509fb78d10/shelley/chain-and-ledger/executable-spec/src/Shelley/Spec/Ledger/STS/Overlay.hs#L253) (likely the future hotspot when more decentralised). The `Overlay` rule also calls the `Ocert` rule, which does a KES check with [`verifySignedKES`](https://github.com/input-output-hk/cardano-ledger-specs/blob/736c9295f5896de28fbcedc6a8fc4d509fb78d10/shelley/chain-and-ledger/executable-spec/src/Shelley/Spec/Ledger/STS/Ocert.hs#L101), which is the second `verify` from the profiling report.
Both of these checks should not be needed during *revalidation*. However, the `Prtcl` rule and its subrules do update some state, i.e., the `ChainDepState`, so we can't just *not* call these rules.
If the ledger provides a version of [`updateChainDepState`](https://github.com/input-output-hk/cardano-ledger-specs/blob/736c9295f5896de28fbcedc6a8fc4d509fb78d10/shelley/chain-and-ledger/executable-spec/src/Shelley/Spec/Ledger/API/Protocol.hs#L287) (note that this is a function in `cardano-ledger-specs`, provided exactly to be used for the Shelley implementation of consensus' `updateChainDepState` function) that skips the VRF and KES checks, we can use that in consensus to speed up header revalidation.
3. `headerStatePush`, cheapest of all three and required, as it builds up required state. Must be included in revalidation.
Consensus changes:
* Introduce the `reupdateChainDepState` method in the `ConsensusProtocol` class, similar to [`updateChainDepState`](https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/Protocol/Abstract.hs#L202-L208), but its implementation can assume `updateChainDepState` has been called before with the same header so it can skip the expensive crypto checks. I'm not sure yet whether it will still return an `Except` or just the updated `ChainDepState`.
* Introduce a function called `revalidateHeader` in `Ouroboros.Consensus.HeaderValidation` (to be used when reapplying blocks [here](https://github.com/input-output-hk/ouroboros-network/blob/cfb465726c9d023ad3ab2a3030d847df27787c3d/ouroboros-consensus/src/Ouroboros/Consensus/Ledger/Extended.hs#L189)), which calls `validateEnvelope`, `headerStatePush`, and `reupdateChainDepState`.
| priority | introduce header revalidation as mentioned in the majority of block replay time is spent in c functions doing crypto checks see the two top of the profiling report cost centre src time alloc verify src cardano crypto vrf praos hs verify crypto pubkey hs both are part of validateheader which accounts for of the time and of the allocations note that this is when we re reapplying blocks so we know these blocks and thus their headers are already valid while we already distinguish between block body and we don t do the same thing for block header validation consists of three parts validateenvelope which is cheap but could be skipped during header revalidation maybe leave it in as a basic sanity check updatechaindepstate expensive as shelley s implementation uses the prtcl rule which calls the overlay rule which includes current hotspot and likely the future hotspot when more decentralised the overlay rule also calls the ocert rule which does a kes check with which is the second verify from the profiling report both of these checks should not be needed during revalidation however the prtcl rule and its subrules do update some state i e the chaindepstate so we can t just not call these rules if the ledger provides a version of note that this is a function in cardano ledger specs provided exactly to be used for the shelley implementation of consensus updatechaindepstate function that skips the vrf and kes checks we can use that in consensus to speed up header revalidation headerstatepush cheapest of all three and required as it builds up required state must be included in revalidation consensus changes introduce the reupdatechaindepstate method in the consensusprotocol class similar to but its implementation can assume updatechaindepstate has been called before with the same header so it can skip the expensive crypto checks i m not sure yet whether it will still return an except or just the updated chaindepstate introduce a function called revalidateheader in ouroboros consensus headervalidation to be used when reapplying blocks which calls validateenvelope headerstatepush and reupdatechaindepstate | 1 |
479,705 | 13,804,751,355 | IssuesEvent | 2020-10-11 10:31:33 | amplication/amplication | https://api.github.com/repos/amplication/amplication | closed | Run E2E tests in CI | priority: medium | **Is your feature request related to a problem? Please describe.**
We currently have no indication in the CI E2E tests are passing
**Describe the solution you'd like**
Run the E2E tests in a dedicated GitHub action
**Describe alternatives you've considered**
RUn the E2E tests in Google Cloud Build
**Additional context**
Currently, only unit tests are running in our CI.
| 1.0 | Run E2E tests in CI - **Is your feature request related to a problem? Please describe.**
We currently have no indication in the CI E2E tests are passing
**Describe the solution you'd like**
Run the E2E tests in a dedicated GitHub action
**Describe alternatives you've considered**
RUn the E2E tests in Google Cloud Build
**Additional context**
Currently, only unit tests are running in our CI.
| priority | run tests in ci is your feature request related to a problem please describe we currently have no indication in the ci tests are passing describe the solution you d like run the tests in a dedicated github action describe alternatives you ve considered run the tests in google cloud build additional context currently only unit tests are running in our ci | 1 |
198,932 | 6,979,238,243 | IssuesEvent | 2017-12-12 20:17:51 | sagesharp/outreachy-django-wagtail | https://api.github.com/repos/sagesharp/outreachy-django-wagtail | opened | Set up test server | medium priority | Need to have a test server managed by dokku at test.outreachy.org. Currently we're just crossing our fingers and pushing to production after local testing. 😱
Ideally in the future we would set up a test server for each developer to push to. We need to have a way to copy the Django database, scrub any personal data (like email addresses). We could use Django fixtures to define the models we want to test. | 1.0 | Set up test server - Need to have a test server managed by dokku at test.outreachy.org. Currently we're just crossing our fingers and pushing to production after local testing. 😱
Ideally in the future we would set up a test server for each developer to push to. We need to have a way to copy the Django database, scrub any personal data (like email addresses). We could use Django fixtures to define the models we want to test. | priority | set up test server need to have a test server managed by dokku at test outreachy org currently we re just crossing our fingers and pushing to production after local testing 😱 ideally in the future we would set up a test server for each developer to push to we need to have a way to copy the django database scrub any personal data like email addresses we could use django fixtures to define the models we want to test | 1 |
523,756 | 15,189,184,663 | IssuesEvent | 2021-02-15 16:04:12 | staxrip/staxrip | https://api.github.com/repos/staxrip/staxrip | closed | StaxRip MP4Box Crash on AAC 7.1 MKV File Open | added/fixed/done bug priority medium tool issue | I get StaxRip crash on opening MKV files with AAC 7.1 audio. ALWAYS occurs on ALL MKV files with aac 7.1 audio. MKV files with aac 5.1 audio open without issue. If I leave video and audio untouched and convert the MKV to an MP4 container the file opens fine. The culprit appears to be MP4Box. It reads the 7.1 aac in MKV containers as 5 channels? Then the crash occurs. Please find attached the error log and jpg showing error.
Thanks for your help and dedication on this issue.
------------------------- System Environment -------------------------
StaxRip : 2.1.7.6
Windows : Windows 10 Pro 2004
Language : English (Australia)
CPU : Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz
GPU : Intel(R) HD Graphics 4600
Resolution : 2240 x 1260
DPI : 158
Code Page : 1252
----------------------- Media Info Source File -----------------------
C:\Encode\4K HEVC 7.1 AAC.mkv
General
Complete name : C:\Encode\4K HEVC 7.1 AAC.mkv
Format : Matroska
Format version : Version 4
File size : 5.50 GiB
Duration : 1 h 58 min
Overall bit rate : 6 628 kb/s
Encoded date : UTC 2021-02-04 09:33:02
Writing application : mkvmerge v52.0.0 ('Secret For The Mad') 64-bit
Writing library : libebml v1.4.1 + libmatroska v1.6.2
Cover : Yes
Attachments : cover.jpg
Video
ID : 1
Format : HEVC
Format/Info : High Efficiency Video Coding
Format profile : Main 10@L5@Main
Codec ID : V_MPEGH/ISO/HEVC
Duration : 1 h 58 min
Bit rate : 6 002 kb/s
Width : 3 840 pixels
Height : 1 616 pixels
Display aspect ratio : 2.40:1
Frame rate mode : Constant
Frame rate : 23.976 (24000/1001) FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 10 bits
Bits/(Pixel*Frame) : 0.040
Stream size : 4.98 GiB (91%)
Writing library : x265 3.2+9-971180b100f8:[Windows][GCC 9.2.0][64 bit] 10bit
Default : Yes
Forced : No
Audio
ID : 2
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Codec ID : A_AAC-2
Duration : 1 h 58 min
Bit rate : 609 kb/s
Channel(s) : 8 channels
Channel layout : C L R Ls Rs Lw Rw LFE
Sampling rate : 48.0 kHz
Frame rate : 46.875 FPS (1024 SPF)
Compression mode : Lossy
Delay relative to video : 9 ms
Stream size : 518 MiB (9%)
Title : Surround 7.1
Language : English
Default : No
Forced : No
Text #1
ID : 3
Format : PGS
Muxing mode : zlib
Codec ID : S_HDMV/PGS
Codec ID/Info : Picture based subtitle format used on BDs/HD-DVDs
Duration : 1 h 51 min
Bit rate : 40.6 kb/s
Count of elements : 2674
Stream size : 32.4 MiB (1%)
Language : English
Default : No
Forced : No
Text #2
ID : 4
Format : VobSub
Muxing mode : zlib
Codec ID : S_VOBSUB
Codec ID/Info : Picture based subtitle format used on DVDs
Duration : 1 h 54 min
Bit rate : 4 776 b/s
Count of elements : 917
Stream size : 3.92 MiB (0%)
Language : Arabic
Default : No
Forced : No
Menu
00 : 00:00.000 : en:Chapter 01
00 : 03:08.938 : en:Chapter 02
00 : 07:02.171 : en:Chapter 03
00 : 13:36.023 : en:Chapter 04
00 : 16:43.168 : en:Chapter 05
00 : 27:29.939 : en:Chapter 06
00 : 36:06.747 : en:Chapter 07
00 : 45:25.389 : en:Chapter 08
00 : 51:33.423 : en:Chapter 09
00 : 58:01.769 : en:Chapter 10
01 : 02:57.648 : en:Chapter 11
01 : 12:17.666 : en:Chapter 12
01 : 17:38.654 : en:Chapter 13
01 : 22:44.542 : en:Chapter 14
01 : 28:29.304 : en:Chapter 15
01 : 37:17.081 : en:Chapter 16
01 : 48:14.279 : en:Chapter 17
01 : 55:00.727 : en:Chapter 18
------------------------------ Demux MKV ------------------------------
mkvextract 52
"C:\Encode\StaxRip Beta\Apps\Support\MKVToolNix\mkvextract.exe" "C:\Encode\4K HEVC 7.1 AAC.mkv" tracks 2:"C:\Encode\4K HEVC 7.1 AAC_temp\ID1 English.sup" 3:"C:\Encode\4K HEVC 7.1 AAC_temp\ID2 Arabic.idx" 1:"C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac" --ui-language en
Extracting track 1 with the CodecID 'A_AAC' to the file 'C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac'. Container format: raw AAC file with ADTS headers
Extracting track 2 with the CodecID 'S_HDMV/PGS' to the file 'C:\Encode\4K HEVC 7.1 AAC_temp\ID1 English.sup'. Container format: SUP
Extracting track 3 with the CodecID 'S_VOBSUB' to the file 'C:\Encode\4K HEVC 7.1 AAC_temp\ID2 Arabic.sub'. Container format: VobSubs
Writing the VobSub index file 'C:\Encode\4K HEVC 7.1 AAC_temp\ID2 Arabic.idx'.
Start: 7:42:20 PM
End: 7:42:23 PM
Duration: 00:00:03
General
Complete name : C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac
Format : ADTS
Format/Info : Audio Data Transport Stream
File size : 520 MiB
Audio
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Format version : Version 4
Codec ID : 2
Sampling rate : 48.0 kHz
Frame rate : 46.875 FPS (1024 SPF)
Compression mode : Lossy
Stream size : 520 MiB (100%)
------------------------ Error Mux AAC to M4A ------------------------
Mux AAC to M4A returned error exit code: -1073741819 (0xC0000005)
--------------------------- Mux AAC to M4A ---------------------------
MP4Box 1.1.0-rev447-g8c190b551-gcc10.2.0 Patman
"C:\Encode\StaxRip Beta\Apps\Support\MP4Box\MP4Box.exe" -add "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac:name= " -new "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.m4a"
Track Importing AAC - SampleRate 7350 Num Channels 5
--------------------------- Mux AAC to M4A ---------------------------
MP4Box 1.1.0-rev447-g8c190b551-gcc10.2.0 Patman
"C:\Encode\StaxRip Beta\Apps\Support\MP4Box\MP4Box.exe" -add "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac:name= " -new "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.m4a"
Track Importing AAC - SampleRate 7350 Num Channels 5
Start: 7:42:23 PM
End: 7:42:27 PM
Duration: 00:00:04
------------------------------ Exception ------------------------------
StaxRip.ErrorAbortException: Mux AAC to M4A returned error exit code: -1073741819 (0xC0000005)
--------------------------- Mux AAC to M4A ---------------------------
MP4Box 1.1.0-rev447-g8c190b551-gcc10.2.0 Patman
"C:\Encode\StaxRip Beta\Apps\Support\MP4Box\MP4Box.exe" -add "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac:name= " -new "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.m4a"
Track Importing AAC - SampleRate 7350 Num Channels 5
at StaxRip.Proc.Start() in D:\Projekte\VB\staxrip\General\Proc.vb:line 374
at StaxRip.mkvDemuxer.Demux(String sourcefile, IEnumerable`1 audioStreams, IEnumerable`1 subtitles, AudioProfile ap, Project proj, Boolean onlyEnabled, Boolean videoDemuxing, Boolean overrideExisting, String title, Boolean useStreamName) in D:\Projekte\VB\staxrip\General\Demux.vb:line 935
at StaxRip.mkvDemuxer.Run(Project proj) in D:\Projekte\VB\staxrip\General\Demux.vb:line 721
at StaxRip.MainForm.Demux() in D:\Projekte\VB\staxrip\Forms\MainForm.vb:line 3232
at StaxRip.MainForm.OpenVideoSourceFiles(IEnumerable`1 files, Boolean isEncoding) in D:\Projekte\VB\staxrip\Forms\MainForm.vb:line 2142
| 1.0 | StaxRip MP4Box Crash on AAC 7.1 MKV File Open - I get StaxRip crash on opening MKV files with AAC 7.1 audio. ALWAYS occurs on ALL MKV files with aac 7.1 audio. MKV files with aac 5.1 audio open without issue. If I leave video and audio untouched and convert the MKV to an MP4 container the file opens fine. The culprit appears to be MP4Box. It reads the 7.1 aac in MKV containers as 5 channels? Then the crash occurs. Please find attached the error log and jpg showing error.
Thanks for your help and dedication on this issue.
------------------------- System Environment -------------------------
StaxRip : 2.1.7.6
Windows : Windows 10 Pro 2004
Language : English (Australia)
CPU : Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz
GPU : Intel(R) HD Graphics 4600
Resolution : 2240 x 1260
DPI : 158
Code Page : 1252
----------------------- Media Info Source File -----------------------
C:\Encode\4K HEVC 7.1 AAC.mkv
General
Complete name : C:\Encode\4K HEVC 7.1 AAC.mkv
Format : Matroska
Format version : Version 4
File size : 5.50 GiB
Duration : 1 h 58 min
Overall bit rate : 6 628 kb/s
Encoded date : UTC 2021-02-04 09:33:02
Writing application : mkvmerge v52.0.0 ('Secret For The Mad') 64-bit
Writing library : libebml v1.4.1 + libmatroska v1.6.2
Cover : Yes
Attachments : cover.jpg
Video
ID : 1
Format : HEVC
Format/Info : High Efficiency Video Coding
Format profile : Main 10@L5@Main
Codec ID : V_MPEGH/ISO/HEVC
Duration : 1 h 58 min
Bit rate : 6 002 kb/s
Width : 3 840 pixels
Height : 1 616 pixels
Display aspect ratio : 2.40:1
Frame rate mode : Constant
Frame rate : 23.976 (24000/1001) FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 10 bits
Bits/(Pixel*Frame) : 0.040
Stream size : 4.98 GiB (91%)
Writing library : x265 3.2+9-971180b100f8:[Windows][GCC 9.2.0][64 bit] 10bit
Default : Yes
Forced : No
Audio
ID : 2
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Codec ID : A_AAC-2
Duration : 1 h 58 min
Bit rate : 609 kb/s
Channel(s) : 8 channels
Channel layout : C L R Ls Rs Lw Rw LFE
Sampling rate : 48.0 kHz
Frame rate : 46.875 FPS (1024 SPF)
Compression mode : Lossy
Delay relative to video : 9 ms
Stream size : 518 MiB (9%)
Title : Surround 7.1
Language : English
Default : No
Forced : No
Text #1
ID : 3
Format : PGS
Muxing mode : zlib
Codec ID : S_HDMV/PGS
Codec ID/Info : Picture based subtitle format used on BDs/HD-DVDs
Duration : 1 h 51 min
Bit rate : 40.6 kb/s
Count of elements : 2674
Stream size : 32.4 MiB (1%)
Language : English
Default : No
Forced : No
Text #2
ID : 4
Format : VobSub
Muxing mode : zlib
Codec ID : S_VOBSUB
Codec ID/Info : Picture based subtitle format used on DVDs
Duration : 1 h 54 min
Bit rate : 4 776 b/s
Count of elements : 917
Stream size : 3.92 MiB (0%)
Language : Arabic
Default : No
Forced : No
Menu
00 : 00:00.000 : en:Chapter 01
00 : 03:08.938 : en:Chapter 02
00 : 07:02.171 : en:Chapter 03
00 : 13:36.023 : en:Chapter 04
00 : 16:43.168 : en:Chapter 05
00 : 27:29.939 : en:Chapter 06
00 : 36:06.747 : en:Chapter 07
00 : 45:25.389 : en:Chapter 08
00 : 51:33.423 : en:Chapter 09
00 : 58:01.769 : en:Chapter 10
01 : 02:57.648 : en:Chapter 11
01 : 12:17.666 : en:Chapter 12
01 : 17:38.654 : en:Chapter 13
01 : 22:44.542 : en:Chapter 14
01 : 28:29.304 : en:Chapter 15
01 : 37:17.081 : en:Chapter 16
01 : 48:14.279 : en:Chapter 17
01 : 55:00.727 : en:Chapter 18
------------------------------ Demux MKV ------------------------------
mkvextract 52
"C:\Encode\StaxRip Beta\Apps\Support\MKVToolNix\mkvextract.exe" "C:\Encode\4K HEVC 7.1 AAC.mkv" tracks 2:"C:\Encode\4K HEVC 7.1 AAC_temp\ID1 English.sup" 3:"C:\Encode\4K HEVC 7.1 AAC_temp\ID2 Arabic.idx" 1:"C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac" --ui-language en
Extracting track 1 with the CodecID 'A_AAC' to the file 'C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac'. Container format: raw AAC file with ADTS headers
Extracting track 2 with the CodecID 'S_HDMV/PGS' to the file 'C:\Encode\4K HEVC 7.1 AAC_temp\ID1 English.sup'. Container format: SUP
Extracting track 3 with the CodecID 'S_VOBSUB' to the file 'C:\Encode\4K HEVC 7.1 AAC_temp\ID2 Arabic.sub'. Container format: VobSubs
Writing the VobSub index file 'C:\Encode\4K HEVC 7.1 AAC_temp\ID2 Arabic.idx'.
Start: 7:42:20 PM
End: 7:42:23 PM
Duration: 00:00:03
General
Complete name : C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac
Format : ADTS
Format/Info : Audio Data Transport Stream
File size : 520 MiB
Audio
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Format version : Version 4
Codec ID : 2
Sampling rate : 48.0 kHz
Frame rate : 46.875 FPS (1024 SPF)
Compression mode : Lossy
Stream size : 520 MiB (100%)
------------------------ Error Mux AAC to M4A ------------------------
Mux AAC to M4A returned error exit code: -1073741819 (0xC0000005)
--------------------------- Mux AAC to M4A ---------------------------
MP4Box 1.1.0-rev447-g8c190b551-gcc10.2.0 Patman
"C:\Encode\StaxRip Beta\Apps\Support\MP4Box\MP4Box.exe" -add "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac:name= " -new "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.m4a"
Track Importing AAC - SampleRate 7350 Num Channels 5
--------------------------- Mux AAC to M4A ---------------------------
MP4Box 1.1.0-rev447-g8c190b551-gcc10.2.0 Patman
"C:\Encode\StaxRip Beta\Apps\Support\MP4Box\MP4Box.exe" -add "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac:name= " -new "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.m4a"
Track Importing AAC - SampleRate 7350 Num Channels 5
Start: 7:42:23 PM
End: 7:42:27 PM
Duration: 00:00:04
------------------------------ Exception ------------------------------
StaxRip.ErrorAbortException: Mux AAC to M4A returned error exit code: -1073741819 (0xC0000005)
--------------------------- Mux AAC to M4A ---------------------------
MP4Box 1.1.0-rev447-g8c190b551-gcc10.2.0 Patman
"C:\Encode\StaxRip Beta\Apps\Support\MP4Box\MP4Box.exe" -add "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.aac:name= " -new "C:\Encode\4K HEVC 7.1 AAC_temp\ID1 9ms English {Surround 7.1}.m4a"
Track Importing AAC - SampleRate 7350 Num Channels 5
at StaxRip.Proc.Start() in D:\Projekte\VB\staxrip\General\Proc.vb:line 374
at StaxRip.mkvDemuxer.Demux(String sourcefile, IEnumerable`1 audioStreams, IEnumerable`1 subtitles, AudioProfile ap, Project proj, Boolean onlyEnabled, Boolean videoDemuxing, Boolean overrideExisting, String title, Boolean useStreamName) in D:\Projekte\VB\staxrip\General\Demux.vb:line 935
at StaxRip.mkvDemuxer.Run(Project proj) in D:\Projekte\VB\staxrip\General\Demux.vb:line 721
at StaxRip.MainForm.Demux() in D:\Projekte\VB\staxrip\Forms\MainForm.vb:line 3232
at StaxRip.MainForm.OpenVideoSourceFiles(IEnumerable`1 files, Boolean isEncoding) in D:\Projekte\VB\staxrip\Forms\MainForm.vb:line 2142
| priority | staxrip crash on aac mkv file open i get staxrip crash on opening mkv files with aac audio always occurs on all mkv files with aac audio mkv files with aac audio open without issue if i leave video and audio untouched and convert the mkv to an container the file opens fine the culprit appears to be it reads the aac in mkv containers as channels then the crash occurs please find attached the error log and jpg showing error thanks for your help and dedication on this issue system environment staxrip windows windows pro language english australia cpu intel r core tm cpu gpu intel r hd graphics resolution x dpi code page media info source file c encode hevc aac mkv general complete name c encode hevc aac mkv format matroska format version version file size gib duration h min overall bit rate kb s encoded date utc writing application mkvmerge secret for the mad bit writing library libebml libmatroska cover yes attachments cover jpg video id format hevc format info high efficiency video coding format profile main main codec id v mpegh iso hevc duration h min bit rate kb s width pixels height pixels display aspect ratio frame rate mode constant frame rate fps color space yuv chroma subsampling bit depth bits bits pixel frame stream size gib writing library default yes forced no audio id format aac lc format info advanced audio codec low complexity codec id a aac duration h min bit rate kb s channel s channels channel layout c l r ls rs lw rw lfe sampling rate khz frame rate fps spf compression mode lossy delay relative to video ms stream size mib title surround language english default no forced no text id format pgs muxing mode zlib codec id s hdmv pgs codec id info picture based subtitle format used on bds hd dvds duration h min bit rate kb s count of elements stream size mib language english default no forced no text id format vobsub muxing mode zlib codec id s vobsub codec id info picture based subtitle format used on dvds duration h min bit rate b s count of elements stream size mib language arabic default no forced no menu en chapter en chapter en chapter en chapter en chapter en chapter en chapter en chapter en chapter en chapter en chapter en chapter en chapter en chapter en chapter en chapter en chapter en chapter demux mkv mkvextract c encode staxrip beta apps support mkvtoolnix mkvextract exe c encode hevc aac mkv tracks c encode hevc aac temp english sup c encode hevc aac temp arabic idx c encode hevc aac temp english surround aac ui language en extracting track with the codecid a aac to the file c encode hevc aac temp english surround aac container format raw aac file with adts headers extracting track with the codecid s hdmv pgs to the file c encode hevc aac temp english sup container format sup extracting track with the codecid s vobsub to the file c encode hevc aac temp arabic sub container format vobsubs writing the vobsub index file c encode hevc aac temp arabic idx start pm end pm duration general complete name c encode hevc aac temp english surround aac format adts format info audio data transport stream file size mib audio format aac lc format info advanced audio codec low complexity format version version codec id sampling rate khz frame rate fps spf compression mode lossy stream size mib error mux aac to mux aac to returned error exit code mux aac to patman c encode staxrip beta apps support exe add c encode hevc aac temp english surround aac name new c encode hevc aac temp english surround track importing aac samplerate num channels mux aac to patman c encode staxrip beta apps support exe add c encode hevc aac temp english surround aac name new c encode hevc aac temp english surround track importing aac samplerate num channels start pm end pm duration exception staxrip errorabortexception mux aac to returned error exit code mux aac to patman c encode staxrip beta apps support exe add c encode hevc aac temp english surround aac name new c encode hevc aac temp english surround track importing aac samplerate num channels at staxrip proc start in d projekte vb staxrip general proc vb line at staxrip mkvdemuxer demux string sourcefile ienumerable audiostreams ienumerable subtitles audioprofile ap project proj boolean onlyenabled boolean videodemuxing boolean overrideexisting string title boolean usestreamname in d projekte vb staxrip general demux vb line at staxrip mkvdemuxer run project proj in d projekte vb staxrip general demux vb line at staxrip mainform demux in d projekte vb staxrip forms mainform vb line at staxrip mainform openvideosourcefiles ienumerable files boolean isencoding in d projekte vb staxrip forms mainform vb line | 1 |
40,679 | 2,868,935,667 | IssuesEvent | 2015-06-05 22:03:34 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | oauth authorized redirect should use given server | bug Priority-Medium wontfix | <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#7035_
----
If you pub lish to a custom server, it goes through oath and then redirects you to pub.dartlang.org/authorized. It should redirect to <custom server>/authorized. | 1.0 | oauth authorized redirect should use given server - <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#7035_
----
If you pub lish to a custom server, it goes through oath and then redirects you to pub.dartlang.org/authorized. It should redirect to <custom server>/authorized. | priority | oauth authorized redirect should use given server issue by originally opened as dart lang sdk if you pub lish to a custom server it goes through oath and then redirects you to pub dartlang org authorized it should redirect to lt custom server gt authorized | 1 |
26,116 | 2,684,179,527 | IssuesEvent | 2015-03-28 18:42:42 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | powershell scripts will not execute under ConEmu, but execute ok with native PowerShell directly | 1 star bug imported Priority-Medium | _From [ois...@gmail.com](https://code.google.com/u/105345745417239788864/) on September 17, 2012 14:19:44_
Required information! OS version: Win8 x64 ConEmu version: started with 120909 (I think?) -> 120916 (current)
PowerShell scripts will not execute with execution policy = remotesigned when run under ConEmu . They execute fine with powershell.exe directly.
My $profile will not execute. All scripts behave like they are actually blocked or located on a network even if they are local. I suspect this is because the PSAuthorizationManager sees that ConEmu64.exe is signed with an untrusted self-signed certificate. I noticed this problem.
Before 120909 (I think), scripts had no problem. This might also be something new to powershell 3.0 on windows 8? I suspect it may have something to do with: http://msdn.microsoft.com/en-us/library/windows/desktop/ms722431(v=vs.85).aspx
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=710_ | 1.0 | powershell scripts will not execute under ConEmu, but execute ok with native PowerShell directly - _From [ois...@gmail.com](https://code.google.com/u/105345745417239788864/) on September 17, 2012 14:19:44_
Required information! OS version: Win8 x64 ConEmu version: started with 120909 (I think?) -> 120916 (current)
PowerShell scripts will not execute with execution policy = remotesigned when run under ConEmu . They execute fine with powershell.exe directly.
My $profile will not execute. All scripts behave like they are actually blocked or located on a network even if they are local. I suspect this is because the PSAuthorizationManager sees that ConEmu64.exe is signed with an untrusted self-signed certificate. I noticed this problem.
Before 120909 (I think), scripts had no problem. This might also be something new to powershell 3.0 on windows 8? I suspect it may have something to do with: http://msdn.microsoft.com/en-us/library/windows/desktop/ms722431(v=vs.85).aspx
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=710_ | priority | powershell scripts will not execute under conemu but execute ok with native powershell directly from on september required information os version conemu version started with i think current powershell scripts will not execute with execution policy remotesigned when run under conemu they execute fine with powershell exe directly my profile will not execute all scripts behave like they are actually blocked or located on a network even if they are local i suspect this is because the psauthorizationmanager sees that exe is signed with an untrusted self signed certificate i noticed this problem before i think scripts had no problem this might also be something new to powershell on windows i suspect it may have something to do with original issue | 1 |
447,278 | 12,887,471,406 | IssuesEvent | 2020-07-13 11:18:23 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Oil Refinery has significant issues | Category: Balance Priority: Medium | During the most recent playtest, we have tracked the major source of over-pollution to the Oil Refinery. There are a few reasons for this:
- Multiple refineries are running 24/7. With barrels being refunded 100%, we can setup a loop where the oil refineries run indefinitely as long as the labor as been added.
- It is world object with the highest pollution
- Late game we need a very large supply of Fiberglass & Epoxy for crafting circuits and other recipes, making a huge demand for products made from oil drilling
Edit: This issue is designed to address the oil refinery loop. We have a separate issue for pollution. | 1.0 | Oil Refinery has significant issues - During the most recent playtest, we have tracked the major source of over-pollution to the Oil Refinery. There are a few reasons for this:
- Multiple refineries are running 24/7. With barrels being refunded 100%, we can setup a loop where the oil refineries run indefinitely as long as the labor as been added.
- It is world object with the highest pollution
- Late game we need a very large supply of Fiberglass & Epoxy for crafting circuits and other recipes, making a huge demand for products made from oil drilling
Edit: This issue is designed to address the oil refinery loop. We have a separate issue for pollution. | priority | oil refinery has significant issues during the most recent playtest we have tracked the major source of over pollution to the oil refinery there are a few reasons for this multiple refineries are running with barrels being refunded we can setup a loop where the oil refineries run indefinitely as long as the labor as been added it is world object with the highest pollution late game we need a very large supply of fiberglass epoxy for crafting circuits and other recipes making a huge demand for products made from oil drilling edit this issue is designed to address the oil refinery loop we have a separate issue for pollution | 1 |
772,853 | 27,139,195,589 | IssuesEvent | 2023-02-16 15:14:32 | boostercloud/booster | https://api.github.com/repos/boostercloud/booster | closed | Add support for notification events | size: XL dev-experience difficulty: high priority: medium | ## Feature Request
## Description
Following up with the conversation in #894, we've found many scenarios whose implementation could be significantly simplified if we had support for notification events, that is, events that are not tied to any specific entity. The only thing these events can do is trigger an event handler that could potentially generate state change events to alter the state of some entities.
It could also make sense, as #894 suggests, to add support to reduce notification events directly from entities. Still, to ensure data consistency, it could be necessary to create separate state change events for each entity changed. However, this could be done implicitly by the framework to simplify the programming interface.
An especially interesting scenario that something like this could help to solve is broadcast events: An event that potentially alters the state of all entities of a specific type.
## Possible Solution
It'd be worth discussing options before committing to any particular solution. | 1.0 | Add support for notification events - ## Feature Request
## Description
Following up with the conversation in #894, we've found many scenarios whose implementation could be significantly simplified if we had support for notification events, that is, events that are not tied to any specific entity. The only thing these events can do is trigger an event handler that could potentially generate state change events to alter the state of some entities.
It could also make sense, as #894 suggests, to add support to reduce notification events directly from entities. Still, to ensure data consistency, it could be necessary to create separate state change events for each entity changed. However, this could be done implicitly by the framework to simplify the programming interface.
An especially interesting scenario that something like this could help to solve is broadcast events: An event that potentially alters the state of all entities of a specific type.
## Possible Solution
It'd be worth discussing options before committing to any particular solution. | priority | add support for notification events feature request description following up with the conversation in we ve found many scenarios whose implementation could be significantly simplified if we had support for notification events that is events that are not tied to any specific entity the only thing these events can do is trigger an event handler that could potentially generate state change events to alter the state of some entities it could also make sense as suggests to add support to reduce notification events directly from entities still to ensure data consistency it could be necessary to create separate state change events for each entity changed however this could be done implicitly by the framework to simplify the programming interface an especially interesting scenario that something like this could help to solve is broadcast events an event that potentially alters the state of all entities of a specific type possible solution it d be worth discussing options before committing to any particular solution | 1 |
536,269 | 15,707,018,403 | IssuesEvent | 2021-03-26 18:15:04 | sopra-fs21-group-03/Client | https://api.github.com/repos/sopra-fs21-group-03/Client | opened | Every user can see what the users before have done so far | medium priority task | Time estimate: 0.7h
"This task is part of user story #13" | 1.0 | Every user can see what the users before have done so far - Time estimate: 0.7h
"This task is part of user story #13" | priority | every user can see what the users before have done so far time estimate this task is part of user story | 1 |
655,695 | 21,705,322,563 | IssuesEvent | 2022-05-10 09:05:11 | epiphany-platform/epiphany | https://api.github.com/repos/epiphany-platform/epiphany | closed | [FEATURE REQUEST] Create more accurate exception output for dnf_repoquery.py command | area/development priority/medium | **Is your feature request related to a problem? Please describe.**
After changes in dnf_repoquery command we run list of all packages in one repoquery call. That means when some of paczage is not found we get excpetion with list of all packages and it difficult to find package is the reason of issue.
```code
def output_handler(output: str):
""" In addition to errors, handle missing packages """
if not output:
raise PackageNotfound(f'repoquery failed for packages `{packages}`, reason: some of package(s) not found')
elif 'error' in output:
raise CriticalError(f'repoquery failed for packages `{packages}`, reason: `{output}`')
```
**Describe the solution you'd like**
Try to modify code to get only packages in which problem occured
---
**DoD checklist**
- Changelog
- [ ] updated
- [x] not needed
- COMPONENTS.md
- [ ] updated
- [x] not needed
- Schema
- [ ] updated
- [x] not needed
- Backport tasks
- [ ] created
- [x] not needed
- Documentation
- [ ] added
- [x] updated
- [ ] not needed
- [ ] Feature has automated tests
- [x] Automated tests passed (QA pipelines)
- [x] apply
- [ ] upgrade
- [ ] backup/restore
- [ ] Idempotency tested
- [x] All conversations in PR resolved
- [ ] Solution meets requirements and is done according to design doc
- [x] Usage compliant with license
| 1.0 | [FEATURE REQUEST] Create more accurate exception output for dnf_repoquery.py command - **Is your feature request related to a problem? Please describe.**
After changes in dnf_repoquery command we run list of all packages in one repoquery call. That means when some of paczage is not found we get excpetion with list of all packages and it difficult to find package is the reason of issue.
```code
def output_handler(output: str):
""" In addition to errors, handle missing packages """
if not output:
raise PackageNotfound(f'repoquery failed for packages `{packages}`, reason: some of package(s) not found')
elif 'error' in output:
raise CriticalError(f'repoquery failed for packages `{packages}`, reason: `{output}`')
```
**Describe the solution you'd like**
Try to modify code to get only packages in which problem occured
---
**DoD checklist**
- Changelog
- [ ] updated
- [x] not needed
- COMPONENTS.md
- [ ] updated
- [x] not needed
- Schema
- [ ] updated
- [x] not needed
- Backport tasks
- [ ] created
- [x] not needed
- Documentation
- [ ] added
- [x] updated
- [ ] not needed
- [ ] Feature has automated tests
- [x] Automated tests passed (QA pipelines)
- [x] apply
- [ ] upgrade
- [ ] backup/restore
- [ ] Idempotency tested
- [x] All conversations in PR resolved
- [ ] Solution meets requirements and is done according to design doc
- [x] Usage compliant with license
| priority | create more accurate exception output for dnf repoquery py command is your feature request related to a problem please describe after changes in dnf repoquery command we run list of all packages in one repoquery call that means when some of paczage is not found we get excpetion with list of all packages and it difficult to find package is the reason of issue code def output handler output str in addition to errors handle missing packages if not output raise packagenotfound f repoquery failed for packages packages reason some of package s not found elif error in output raise criticalerror f repoquery failed for packages packages reason output describe the solution you d like try to modify code to get only packages in which problem occured dod checklist changelog updated not needed components md updated not needed schema updated not needed backport tasks created not needed documentation added updated not needed feature has automated tests automated tests passed qa pipelines apply upgrade backup restore idempotency tested all conversations in pr resolved solution meets requirements and is done according to design doc usage compliant with license | 1 |
763,505 | 26,760,696,902 | IssuesEvent | 2023-01-31 06:30:32 | JeYeongR/repo-setup-sample | https://api.github.com/repos/JeYeongR/repo-setup-sample | closed | Sample Backlog 1 | For: CI/CD Priority: Medium Type: Idea Status: Available | ## Description
프로젝트 작업 전 수행되어야 할 issue(backlog)를 template을 활용하여 만들었습니다.
## Tasks(Process)
- [ ] 저녁 메뉴 정하기
- [ ] 저녁 먹기
- [ ] 집에 가기
## References
- [google](https://www.google.com/)
| 1.0 | Sample Backlog 1 - ## Description
프로젝트 작업 전 수행되어야 할 issue(backlog)를 template을 활용하여 만들었습니다.
## Tasks(Process)
- [ ] 저녁 메뉴 정하기
- [ ] 저녁 먹기
- [ ] 집에 가기
## References
- [google](https://www.google.com/)
| priority | sample backlog description 프로젝트 작업 전 수행되어야 할 issue backlog 를 template을 활용하여 만들었습니다 tasks process 저녁 메뉴 정하기 저녁 먹기 집에 가기 references | 1 |
77,693 | 3,507,216,950 | IssuesEvent | 2016-01-08 11:58:05 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | Bugged Global Cooldown (BB #673) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** Alex_Step
**Original Date:** 27.08.2014 09:29:57 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/673
<hr>
If you are in combat change equip (weapon, shield, relic) - appears a global cooldown on all. This should not be. | 1.0 | Bugged Global Cooldown (BB #673) - This issue was migrated from bitbucket.
**Original Reporter:** Alex_Step
**Original Date:** 27.08.2014 09:29:57 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/673
<hr>
If you are in combat change equip (weapon, shield, relic) - appears a global cooldown on all. This should not be. | priority | bugged global cooldown bb this issue was migrated from bitbucket original reporter alex step original date gmt original priority major original type bug original state invalid direct link if you are in combat change equip weapon shield relic appears a global cooldown on all this should not be | 1 |
216,677 | 7,310,826,930 | IssuesEvent | 2018-02-28 16:01:53 | canonical-websites/maas.io | https://api.github.com/repos/canonical-websites/maas.io | closed | Need event on the contact us button on the contact us page | Priority: Medium Type: Enhancement | ## Summary
Ideally need an event when button is clicked.
## Process
<button type="submit" class="mktoButton p-button--positive" onclick="dataLayer.push({'event' : 'GAEvent', 'eventCategory' : 'Form', 'eventAction' : 'MaaS.io contact-us', 'eventLabel' : maas.io-cloud', 'eventValue' : undefined });">Submit</button>
## Current and expected result
[describe what happened and what you expected]
## Screenshot
[if relevant, include a screenshot]
| 1.0 | Need event on the contact us button on the contact us page - ## Summary
Ideally need an event when button is clicked.
## Process
<button type="submit" class="mktoButton p-button--positive" onclick="dataLayer.push({'event' : 'GAEvent', 'eventCategory' : 'Form', 'eventAction' : 'MaaS.io contact-us', 'eventLabel' : maas.io-cloud', 'eventValue' : undefined });">Submit</button>
## Current and expected result
[describe what happened and what you expected]
## Screenshot
[if relevant, include a screenshot]
| priority | need event on the contact us button on the contact us page summary ideally need an event when button is clicked process submit current and expected result screenshot | 1 |
289,165 | 8,855,483,643 | IssuesEvent | 2019-01-09 06:42:06 | visit-dav/issues-test | https://api.github.com/repos/visit-dav/issues-test | closed | Use Coord System Fix for MFEM Mesh Constructor | bug likelihood medium priority reviewed severity low | From Tzanio Kolev & Dan White We have a bug in the MFEM/VisIt integration that prevents us from visualizing high-order Nedelec elements on tet meshes. Cyrus: the problem is in line 462 of VisIts src/databases/MFEM/avtMFEMFileFormat.C. Can you switch mesh = new Mesh(imesh, 1, 1);to mesh = new Mesh(imesh, 1, 1, false); The fix_orientation argument is true by default in this constructor, because it makes sense when we use the mesh for discretization. For visualization, this is actually wrong in the high-order Nedelec case.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2578
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: Use Coord System Fix for MFEM Mesh Constructor
Assigned to: Cyrus Harrison
Category: -
Target version: 2.10.3
Author: Cyrus Harrison
Start: 04/01/2016
Due date:
% Done: 0%
Estimated time:
Created: 04/01/2016 01:19 pm
Updated: 06/03/2016 07:22 pm
Likelihood: 3 - Occasional
Severity: 2 - Minor Irritation
Found in version: 2.10.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
From Tzanio Kolev & Dan White We have a bug in the MFEM/VisIt integration that prevents us from visualizing high-order Nedelec elements on tet meshes. Cyrus: the problem is in line 462 of VisIts src/databases/MFEM/avtMFEMFileFormat.C. Can you switch mesh = new Mesh(imesh, 1, 1);to mesh = new Mesh(imesh, 1, 1, false); The fix_orientation argument is true by default in this constructor, because it makes sense when we use the mesh for discretization. For visualization, this is actually wrong in the high-order Nedelec case.
Comments:
resolved on 2.10RC and trunk
| 1.0 | Use Coord System Fix for MFEM Mesh Constructor - From Tzanio Kolev & Dan White We have a bug in the MFEM/VisIt integration that prevents us from visualizing high-order Nedelec elements on tet meshes. Cyrus: the problem is in line 462 of VisIts src/databases/MFEM/avtMFEMFileFormat.C. Can you switch mesh = new Mesh(imesh, 1, 1);to mesh = new Mesh(imesh, 1, 1, false); The fix_orientation argument is true by default in this constructor, because it makes sense when we use the mesh for discretization. For visualization, this is actually wrong in the high-order Nedelec case.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2578
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: Use Coord System Fix for MFEM Mesh Constructor
Assigned to: Cyrus Harrison
Category: -
Target version: 2.10.3
Author: Cyrus Harrison
Start: 04/01/2016
Due date:
% Done: 0%
Estimated time:
Created: 04/01/2016 01:19 pm
Updated: 06/03/2016 07:22 pm
Likelihood: 3 - Occasional
Severity: 2 - Minor Irritation
Found in version: 2.10.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
From Tzanio Kolev & Dan White We have a bug in the MFEM/VisIt integration that prevents us from visualizing high-order Nedelec elements on tet meshes. Cyrus: the problem is in line 462 of VisIts src/databases/MFEM/avtMFEMFileFormat.C. Can you switch mesh = new Mesh(imesh, 1, 1);to mesh = new Mesh(imesh, 1, 1, false); The fix_orientation argument is true by default in this constructor, because it makes sense when we use the mesh for discretization. For visualization, this is actually wrong in the high-order Nedelec case.
Comments:
resolved on 2.10RC and trunk
| priority | use coord system fix for mfem mesh constructor from tzanio kolev dan white we have a bug in the mfem visit integration that prevents us from visualizing high order nedelec elements on tet meshes cyrus the problem is in line of visits src databases mfem avtmfemfileformat c can you switch mesh new mesh imesh to mesh new mesh imesh false the fix orientation argument is true by default in this constructor because it makes sense when we use the mesh for discretization for visualization this is actually wrong in the high order nedelec case redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority high subject use coord system fix for mfem mesh constructor assigned to cyrus harrison category target version author cyrus harrison start due date done estimated time created pm updated pm likelihood occasional severity minor irritation found in version impact expected use os all support group any description from tzanio kolev dan white we have a bug in the mfem visit integration that prevents us from visualizing high order nedelec elements on tet meshes cyrus the problem is in line of visits src databases mfem avtmfemfileformat c can you switch mesh new mesh imesh to mesh new mesh imesh false the fix orientation argument is true by default in this constructor because it makes sense when we use the mesh for discretization for visualization this is actually wrong in the high order nedelec case comments resolved on and trunk | 1 |
220,486 | 7,360,330,094 | IssuesEvent | 2018-03-10 17:28:52 | bounswe/bounswe2018group5 | https://api.github.com/repos/bounswe/bounswe2018group5 | opened | Revise User Stories page | Effort: Medium Priority: High Status: In Progress Type: Wiki | Per Cihat's comment:
> Mockups
> * Do not forget to provide real data within the all of the pages :)
> * You can progress the scenario between each page with a couple of sentences.
> * That would be good to add 1-2 more pages to the 3rd scenario. Is this information (most bided project etc.) are available at the home page?
Precedent: Everybody should revise their mockup pages. | 1.0 | Revise User Stories page - Per Cihat's comment:
> Mockups
> * Do not forget to provide real data within the all of the pages :)
> * You can progress the scenario between each page with a couple of sentences.
> * That would be good to add 1-2 more pages to the 3rd scenario. Is this information (most bided project etc.) are available at the home page?
Precedent: Everybody should revise their mockup pages. | priority | revise user stories page per cihat s comment mockups do not forget to provide real data within the all of the pages you can progress the scenario between each page with a couple of sentences that would be good to add more pages to the scenario is this information most bided project etc are available at the home page precedent everybody should revise their mockup pages | 1 |
583,090 | 17,376,940,149 | IssuesEvent | 2021-07-30 23:46:14 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Trying to start the election for a constitution on tiger leads to darkened screen with no hint whats going on | Category: UI Priority: Medium Squad: Mountain Goat Type: Bug | 
| 1.0 | Trying to start the election for a constitution on tiger leads to darkened screen with no hint whats going on - 
| priority | trying to start the election for a constitution on tiger leads to darkened screen with no hint whats going on | 1 |
829,310 | 31,863,630,272 | IssuesEvent | 2023-09-15 12:45:06 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | opened | Grid's pager breaks if you use the setOptions method on the Pager. | Bug SEV: Medium C: Grid jQuery Priority 5 | ### Bug report
The Pager breaks if you use the setOptions method to alter its options.
**Regression introduced with 2023.2.829**
### Reproduction of the problem
1. Open the Pager Grid demo - https://demos.telerik.com/kendo-ui/grid/pager-functionality
2. Check either of the checkboxes to change the Pager options
### Current behavior
After checking a checkbox, only the arrows remain from the Pager.
### Expected/desired behavior
The Pager should remain whole when you alter its options.
### Environment
* **Kendo UI version:** 2023.2.829
* **Browser:** [all]
| 1.0 | Grid's pager breaks if you use the setOptions method on the Pager. - ### Bug report
The Pager breaks if you use the setOptions method to alter its options.
**Regression introduced with 2023.2.829**
### Reproduction of the problem
1. Open the Pager Grid demo - https://demos.telerik.com/kendo-ui/grid/pager-functionality
2. Check either of the checkboxes to change the Pager options
### Current behavior
After checking a checkbox, only the arrows remain from the Pager.
### Expected/desired behavior
The Pager should remain whole when you alter its options.
### Environment
* **Kendo UI version:** 2023.2.829
* **Browser:** [all]
| priority | grid s pager breaks if you use the setoptions method on the pager bug report the pager breaks if you use the setoptions method to alter its options regression introduced with reproduction of the problem open the pager grid demo check either of the checkboxes to change the pager options current behavior after checking a checkbox only the arrows remain from the pager expected desired behavior the pager should remain whole when you alter its options environment kendo ui version browser | 1 |
783,295 | 27,525,594,361 | IssuesEvent | 2023-03-06 17:49:03 | telabotanica/pollinisateurs | https://api.github.com/repos/telabotanica/pollinisateurs | closed | Profile formulaire style | priority::medium | en plus des corrections de fond qui sont en cours (ajout d'un champ et modif 1 champ & txts et skills) il faut modifier du style, un peu comme pour
#72
lien https://staging.nospollinisateurs.fr/user/profile/edit
font, hauteur de ligne, parcourir en rose clair sans survol

| 1.0 | Profile formulaire style - en plus des corrections de fond qui sont en cours (ajout d'un champ et modif 1 champ & txts et skills) il faut modifier du style, un peu comme pour
#72
lien https://staging.nospollinisateurs.fr/user/profile/edit
font, hauteur de ligne, parcourir en rose clair sans survol

| priority | profile formulaire style en plus des corrections de fond qui sont en cours ajout d un champ et modif champ txts et skills il faut modifier du style un peu comme pour lien font hauteur de ligne parcourir en rose clair sans survol | 1 |
243,911 | 7,868,244,427 | IssuesEvent | 2018-06-23 19:02:12 | ByteClubGames/YumiAndTheYokai | https://api.github.com/repos/ByteClubGames/YumiAndTheYokai | closed | Options Menu | In Progress MEDIUM PRIORITY Programming | Complete the options menu that shows up when you pause the game
-Scriptless Buttons
-Resume Functionality
-Quit Functionality | 1.0 | Options Menu - Complete the options menu that shows up when you pause the game
-Scriptless Buttons
-Resume Functionality
-Quit Functionality | priority | options menu complete the options menu that shows up when you pause the game scriptless buttons resume functionality quit functionality | 1 |
307,057 | 9,414,154,712 | IssuesEvent | 2019-04-10 09:29:34 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Skid Steer can destroy wood pulp, but gather nothing. | Medium Priority Usability | I think not bad if you can gather wood pulp by skid steer... | 1.0 | Skid Steer can destroy wood pulp, but gather nothing. - I think not bad if you can gather wood pulp by skid steer... | priority | skid steer can destroy wood pulp but gather nothing i think not bad if you can gather wood pulp by skid steer | 1 |
652,844 | 21,563,211,231 | IssuesEvent | 2022-05-01 13:28:59 | Polymer/tools | https://api.github.com/repos/Polymer/tools | closed | Warn when using an attribute that isn't a declared attribute (even if it's a property) | Package: linter Status: Available Priority: Medium Type: Enhancement wontfix | In Polymer 1, this works:
```html
<dom-module id="foo-bar">
<template>
{{baz}}
</template>
<script>
Polymer({is: 'foo-bar'});
</script>
</dom-module>
<foo-bar baz="blah"></foo-bar>
```
In Polymer 2 it does not. Attributes are only observed if they're in the properties block, otherwise they don't end up in `observedAttributes`.
So in Polymer 2 and Polymer 2 hybrid code we should warn about the above case. | 1.0 | Warn when using an attribute that isn't a declared attribute (even if it's a property) - In Polymer 1, this works:
```html
<dom-module id="foo-bar">
<template>
{{baz}}
</template>
<script>
Polymer({is: 'foo-bar'});
</script>
</dom-module>
<foo-bar baz="blah"></foo-bar>
```
In Polymer 2 it does not. Attributes are only observed if they're in the properties block, otherwise they don't end up in `observedAttributes`.
So in Polymer 2 and Polymer 2 hybrid code we should warn about the above case. | priority | warn when using an attribute that isn t a declared attribute even if it s a property in polymer this works html baz polymer is foo bar in polymer it does not attributes are only observed if they re in the properties block otherwise they don t end up in observedattributes so in polymer and polymer hybrid code we should warn about the above case | 1 |
145,362 | 5,566,912,721 | IssuesEvent | 2017-03-27 00:33:22 | ClaytonPassmore/ProjectOrange | https://api.github.com/repos/ClaytonPassmore/ProjectOrange | closed | Flag card endpoint returns 404 on deleted deck | Bug Priority: Medium Server | If a deck was deleted on a different device and you try to sync a card flag, the flag endpoint will return 404 error.
Expected behaviour: flag endpoint should return 409 status like edit endpoint


@ClaytonPassmore
@henryc132 | 1.0 | Flag card endpoint returns 404 on deleted deck - If a deck was deleted on a different device and you try to sync a card flag, the flag endpoint will return 404 error.
Expected behaviour: flag endpoint should return 409 status like edit endpoint


@ClaytonPassmore
@henryc132 | priority | flag card endpoint returns on deleted deck if a deck was deleted on a different device and you try to sync a card flag the flag endpoint will return error expected behaviour flag endpoint should return status like edit endpoint claytonpassmore | 1 |
125,196 | 4,953,839,800 | IssuesEvent | 2016-12-01 16:03:51 | SciSpike/yaktor-issues | https://api.github.com/repos/SciSpike/yaktor-issues | opened | Upgrade Yaktor dependencies | platform:nodejs priority:medium status:available team:core type:maintenance | There are deprecation warnings during `npm install` of a new Yaktor project. Excerpts follow:
```
$ curl https://init.yaktor.io | sh
...
npm install
npm WARN deprecated node-uuid@1.4.1: use uuid module instead
npm WARN deprecated standard-format@2.1.1: standard-format is deprecated in favor of a built-in autofixer in 'standard'. Usage: standard --fix
...
npm WARN deprecated tough-cookie@0.9.15: ReDoS vulnerability parsing Set-Cookie https://nodesecurity.io/advisories/130
npm WARN deprecated native-or-bluebird@1.1.2: 'native-or-bluebird' is deprecated. Please use 'any-promise' instead.
...
npm WARN deprecated jade@0.26.3: Jade has been renamed to pug, please install the latest version of pug instead of jade
...
npm WARN deprecated minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated minimatch@0.2.14: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated graceful-fs@2.0.3: graceful-fs v3.0.0 and before will fail on node releases >= v7.0. Please update to graceful-fs@^4.0.0 as soon as possible. Use 'npm ls graceful-fs' to find it in the tree.
...
npm WARN deprecated graceful-fs@1.1.14: graceful-fs v3.0.0 and before will fail on node releases >= v7.0. Please update to graceful-fs@^4.0.0 as soon as possible. Use 'npm ls graceful-fs' to find it in the tree.
npm WARN deprecated minimatch@1.0.0: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated minimatch@0.3.0: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
```
Some of the above warnings may be from transitive dependencies. Upgrading Yaktor's direct dependencies should eliminate the transitive deprecation warnings. | 1.0 | Upgrade Yaktor dependencies - There are deprecation warnings during `npm install` of a new Yaktor project. Excerpts follow:
```
$ curl https://init.yaktor.io | sh
...
npm install
npm WARN deprecated node-uuid@1.4.1: use uuid module instead
npm WARN deprecated standard-format@2.1.1: standard-format is deprecated in favor of a built-in autofixer in 'standard'. Usage: standard --fix
...
npm WARN deprecated tough-cookie@0.9.15: ReDoS vulnerability parsing Set-Cookie https://nodesecurity.io/advisories/130
npm WARN deprecated native-or-bluebird@1.1.2: 'native-or-bluebird' is deprecated. Please use 'any-promise' instead.
...
npm WARN deprecated jade@0.26.3: Jade has been renamed to pug, please install the latest version of pug instead of jade
...
npm WARN deprecated minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated minimatch@0.2.14: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated graceful-fs@2.0.3: graceful-fs v3.0.0 and before will fail on node releases >= v7.0. Please update to graceful-fs@^4.0.0 as soon as possible. Use 'npm ls graceful-fs' to find it in the tree.
...
npm WARN deprecated graceful-fs@1.1.14: graceful-fs v3.0.0 and before will fail on node releases >= v7.0. Please update to graceful-fs@^4.0.0 as soon as possible. Use 'npm ls graceful-fs' to find it in the tree.
npm WARN deprecated minimatch@1.0.0: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated minimatch@0.3.0: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
```
Some of the above warnings may be from transitive dependencies. Upgrading Yaktor's direct dependencies should eliminate the transitive deprecation warnings. | priority | upgrade yaktor dependencies there are deprecation warnings during npm install of a new yaktor project excerpts follow curl sh npm install npm warn deprecated node uuid use uuid module instead npm warn deprecated standard format standard format is deprecated in favor of a built in autofixer in standard usage standard fix npm warn deprecated tough cookie redos vulnerability parsing set cookie npm warn deprecated native or bluebird native or bluebird is deprecated please use any promise instead npm warn deprecated jade jade has been renamed to pug please install the latest version of pug instead of jade npm warn deprecated minimatch please update to minimatch or higher to avoid a regexp dos issue npm warn deprecated minimatch please update to minimatch or higher to avoid a regexp dos issue npm warn deprecated graceful fs graceful fs and before will fail on node releases please update to graceful fs as soon as possible use npm ls graceful fs to find it in the tree npm warn deprecated graceful fs graceful fs and before will fail on node releases please update to graceful fs as soon as possible use npm ls graceful fs to find it in the tree npm warn deprecated minimatch please update to minimatch or higher to avoid a regexp dos issue npm warn deprecated minimatch please update to minimatch or higher to avoid a regexp dos issue some of the above warnings may be from transitive dependencies upgrading yaktor s direct dependencies should eliminate the transitive deprecation warnings | 1 |
798,118 | 28,236,654,842 | IssuesEvent | 2023-04-06 01:32:57 | FTC7393/FtcRobotController | https://api.github.com/repos/FTC7393/FtcRobotController | reopened | Add option to score only on Medium Junction | enhancement Auto Medium priority | Find the new scoring position.
Trajectory from starting to Medium Scoring Position.
Trajectory from Medium Scoring Position to each parking position.
Options op to select it. | 1.0 | Add option to score only on Medium Junction - Find the new scoring position.
Trajectory from starting to Medium Scoring Position.
Trajectory from Medium Scoring Position to each parking position.
Options op to select it. | priority | add option to score only on medium junction find the new scoring position trajectory from starting to medium scoring position trajectory from medium scoring position to each parking position options op to select it | 1 |
69,910 | 3,316,303,392 | IssuesEvent | 2015-11-06 16:20:57 | TeselaGen/Peony-Issue-Tracking | https://api.github.com/repos/TeselaGen/Peony-Issue-Tracking | opened | Design Editor Export/Import does not pick up imported file name | Customer: DAS Phase I Milestone #4 - Oracle Rewrite Priority: Medium Status: Active Type: Bug | _From @mfero on October 18, 2015 15:47_
Design Editor: Export/Import > JSON File
Design Editor: Export/Import > XML File
I exported a design, then reimported. I can't tell if it worked because I tried to distinguish the newly imported file by changing it's name. When I re-import, the same name appears as the one I exported... perhaps because it is picking up the design name from meta-data inside the file and not the file name. I'm not sure what the proper behavior should be, but this will be confusing to users who might expect that the file name will be the design name.
_Copied from original issue: TeselaGen/ve#1447_ | 1.0 | Design Editor Export/Import does not pick up imported file name - _From @mfero on October 18, 2015 15:47_
Design Editor: Export/Import > JSON File
Design Editor: Export/Import > XML File
I exported a design, then reimported. I can't tell if it worked because I tried to distinguish the newly imported file by changing it's name. When I re-import, the same name appears as the one I exported... perhaps because it is picking up the design name from meta-data inside the file and not the file name. I'm not sure what the proper behavior should be, but this will be confusing to users who might expect that the file name will be the design name.
_Copied from original issue: TeselaGen/ve#1447_ | priority | design editor export import does not pick up imported file name from mfero on october design editor export import json file design editor export import xml file i exported a design then reimported i can t tell if it worked because i tried to distinguish the newly imported file by changing it s name when i re import the same name appears as the one i exported perhaps because it is picking up the design name from meta data inside the file and not the file name i m not sure what the proper behavior should be but this will be confusing to users who might expect that the file name will be the design name copied from original issue teselagen ve | 1 |
727,812 | 25,047,091,896 | IssuesEvent | 2022-11-05 11:55:57 | nupac/nupac | https://api.github.com/repos/nupac/nupac | closed | Make packages added to scope unique | bug good first issue refactor priority: Medium | ### Describe the issue
We currently blidnly append to the file. Maybe a simple `uniq` call on the file should do?
### Argumentation
Entries there should be unique.
### Screenshots and other helpful media
_No response_
### Additional context
_No response_ | 1.0 | Make packages added to scope unique - ### Describe the issue
We currently blidnly append to the file. Maybe a simple `uniq` call on the file should do?
### Argumentation
Entries there should be unique.
### Screenshots and other helpful media
_No response_
### Additional context
_No response_ | priority | make packages added to scope unique describe the issue we currently blidnly append to the file maybe a simple uniq call on the file should do argumentation entries there should be unique screenshots and other helpful media no response additional context no response | 1 |
172,784 | 6,516,190,273 | IssuesEvent | 2017-08-27 04:17:10 | MyMICDS/MyMICDS-v2-Angular | https://api.github.com/repos/MyMICDS/MyMICDS-v2-Angular | opened | add automatic deployment | effort: medium enhancement priority: medium | I have created a script to be used on travis to automatically deploy github repos. Although it probably needs further testing and configurations | 1.0 | add automatic deployment - I have created a script to be used on travis to automatically deploy github repos. Although it probably needs further testing and configurations | priority | add automatic deployment i have created a script to be used on travis to automatically deploy github repos although it probably needs further testing and configurations | 1 |
224,851 | 7,473,476,410 | IssuesEvent | 2018-04-03 15:28:11 | emory-libraries/ezpaarse-platforms | https://api.github.com/repos/emory-libraries/ezpaarse-platforms | opened | American Journal of Roentgenology | Medium Priority | ### Example:star::star: :
https://www.ajronline.org/
### Priority:
Medium
### Subscriber (Library):
Woodruff
| 1.0 | American Journal of Roentgenology - ### Example:star::star: :
https://www.ajronline.org/
### Priority:
Medium
### Subscriber (Library):
Woodruff
| priority | american journal of roentgenology example star star priority medium subscriber library woodruff | 1 |
16,707 | 2,615,122,167 | IssuesEvent | 2015-03-01 05:49:20 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | opened | Google Custom Search Api | auto-migrated Priority-Medium Type-Sample | ```
Which Google API and version (e.g. Google Calendar Data API version 2)?
Google Custom Search Api
What format (e.g. JSON, Atom)?
JSON
What Authentation (e.g. OAuth, OAuth 2, Android, ClientLogin)?
ClientLogin
Java environment (e.g. Java 6, Android 2.3, App Engine 1.4.2)?
Java 6
External references, such as API reference guide?
Please provide any additional information below.
I will use Google CSE for my thesis project and I could not find any examples
with it.
```
Original issue reported on code.google.com by `omererak...@gmail.com` on 24 Aug 2011 at 1:45 | 1.0 | Google Custom Search Api - ```
Which Google API and version (e.g. Google Calendar Data API version 2)?
Google Custom Search Api
What format (e.g. JSON, Atom)?
JSON
What Authentation (e.g. OAuth, OAuth 2, Android, ClientLogin)?
ClientLogin
Java environment (e.g. Java 6, Android 2.3, App Engine 1.4.2)?
Java 6
External references, such as API reference guide?
Please provide any additional information below.
I will use Google CSE for my thesis project and I could not find any examples
with it.
```
Original issue reported on code.google.com by `omererak...@gmail.com` on 24 Aug 2011 at 1:45 | priority | google custom search api which google api and version e g google calendar data api version google custom search api what format e g json atom json what authentation e g oauth oauth android clientlogin clientlogin java environment e g java android app engine java external references such as api reference guide please provide any additional information below i will use google cse for my thesis project and i could not find any examples with it original issue reported on code google com by omererak gmail com on aug at | 1 |
101,606 | 4,120,652,224 | IssuesEvent | 2016-06-08 18:36:18 | ocadotechnology/rapid-router | https://api.github.com/repos/ocadotechnology/rapid-router | closed | Night / underground mode | estimate: 8 priority: medium | A mode were you can't see the road and try to solve the route using a generic solution. | 1.0 | Night / underground mode - A mode were you can't see the road and try to solve the route using a generic solution. | priority | night underground mode a mode were you can t see the road and try to solve the route using a generic solution | 1 |
310,180 | 9,486,747,955 | IssuesEvent | 2019-04-22 14:54:06 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] Duplicate dialog is not clickable after publishing an item | bug priority: medium | ## Describe the bug
Duplicate dialog is not clickable after publish an item
## To Reproduce
Steps to reproduce the behavior:
1. Create a new site using a BP
2. Go to Preview
3. Go to sidebar, right click and select duplicate opt, see it is working
4. Go to sidebar, right click and select publish opt, and then submit.
5. Go to sidebar, right click and select duplicate opt, see that dialog is not clickable
## Expected behavior
Duplicate dialog should be always clickable
| 1.0 | [studio-ui] Duplicate dialog is not clickable after publishing an item - ## Describe the bug
Duplicate dialog is not clickable after publish an item
## To Reproduce
Steps to reproduce the behavior:
1. Create a new site using a BP
2. Go to Preview
3. Go to sidebar, right click and select duplicate opt, see it is working
4. Go to sidebar, right click and select publish opt, and then submit.
5. Go to sidebar, right click and select duplicate opt, see that dialog is not clickable
## Expected behavior
Duplicate dialog should be always clickable
| priority | duplicate dialog is not clickable after publishing an item describe the bug duplicate dialog is not clickable after publish an item to reproduce steps to reproduce the behavior create a new site using a bp go to preview go to sidebar right click and select duplicate opt see it is working go to sidebar right click and select publish opt and then submit go to sidebar right click and select duplicate opt see that dialog is not clickable expected behavior duplicate dialog should be always clickable | 1 |
559,710 | 16,574,249,357 | IssuesEvent | 2021-05-31 00:01:46 | factly/vidcheck | https://api.github.com/repos/factly/vidcheck | closed | Add Featured Image to Rating entity | priority:medium | Add `featured_image` as another field to Rating entity to keep it consistent with Dega. | 1.0 | Add Featured Image to Rating entity - Add `featured_image` as another field to Rating entity to keep it consistent with Dega. | priority | add featured image to rating entity add featured image as another field to rating entity to keep it consistent with dega | 1 |
418,719 | 12,202,703,184 | IssuesEvent | 2020-04-30 09:22:40 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1308] Default world generation | Priority: Medium Status: Fixed Type: Regression | We need to have it properly generated with builds and be actual.
It seems to be old and have some issues for now
- no trees with start from client
- wrong yield potential
the old one #11062 | 1.0 | [0.9.0 staging-1308] Default world generation - We need to have it properly generated with builds and be actual.
It seems to be old and have some issues for now
- no trees with start from client
- wrong yield potential
the old one #11062 | priority | default world generation we need to have it properly generated with builds and be actual it seems to be old and have some issues for now no trees with start from client wrong yield potential the old one | 1 |
237,721 | 7,763,559,240 | IssuesEvent | 2018-06-01 16:59:47 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | opened | Fix tests randomly failing on the build | Priority: Medium bug | ### Description
Some tests sometimes fail becouse of slowdown or some wrong race condition:
Mainly in 2 or 3 different failure sequence:
- [Sample 1](https://travis-ci.org/geosolutions-it/MapStore2/builds/380775634)
- [Sample 2](https://travis-ci.org/geosolutions-it/MapStore2/builds/380747290)
- [Sample 3](https://travis-ci.org/geosolutions-it/MapStore2/builds/380773425)
Sample 1 and 2 are similar, so they may have the same reason.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [x] Firefox
*Browser Version Affected*
- Latest (on travis)
*Steps to reproduce*
- Run test on travis, sometimes. Need to investigate the reasons.
*Expected Result*
- Build always has success
*Current Result*
- Random fails of the travis build
| 1.0 | Fix tests randomly failing on the build - ### Description
Some tests sometimes fail becouse of slowdown or some wrong race condition:
Mainly in 2 or 3 different failure sequence:
- [Sample 1](https://travis-ci.org/geosolutions-it/MapStore2/builds/380775634)
- [Sample 2](https://travis-ci.org/geosolutions-it/MapStore2/builds/380747290)
- [Sample 3](https://travis-ci.org/geosolutions-it/MapStore2/builds/380773425)
Sample 1 and 2 are similar, so they may have the same reason.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [x] Firefox
*Browser Version Affected*
- Latest (on travis)
*Steps to reproduce*
- Run test on travis, sometimes. Need to investigate the reasons.
*Expected Result*
- Build always has success
*Current Result*
- Random fails of the travis build
| priority | fix tests randomly failing on the build description some tests sometimes fail becouse of slowdown or some wrong race condition mainly in or different failure sequence sample and are similar so they may have the same reason in case of bug otherwise remove this paragraph browser affected use this site for non expert users firefox browser version affected latest on travis steps to reproduce run test on travis sometimes need to investigate the reasons expected result build always has success current result random fails of the travis build | 1 |
163,438 | 6,198,303,274 | IssuesEvent | 2017-07-05 18:50:57 | Polymer/polymer-bundler | https://api.github.com/repos/Polymer/polymer-bundler | closed | Shell's import gets included in all the bundles | Priority: Medium Type: Question | Suppose we have an application having **4 entrypoints** (named Entrypoint 1, 2, 3, 4).
There dependencies are as flow
### Dependency tree
Entrypoint | Dependencies
------------ | -------------
Entrypoint 1 | Dep1.html, common.html
Entrypoint 2 | Dep2.html, common.html
Entrypoint 3 | common.html
Entrypoint 4 | Dep1.html
Now, with **merge strategy set to 3** content of each file will be
### Output content
Entrypoint | element code | imports
------------ | ------------- | ---------------
Entrypoint 1 | entrypoint 1 | shell, shared_bundle_1
Entrypoint 2 | entrypoint 2, dep 2 | shell
Entrypoint 3 | entrypoint 3 | shell
Entrypoint 4 | entrypoint 4 | shared_bundle_1
shell | common | -
shared_bundle_1 | dep 1 | shell
Now, if user opens **Entrypoint 4** files from _shell_ will also be loaded even though they are not required by _Entrypoint 4_.
Ideally, none of the _shared_bundle_ should have import to shell as required _Entrypoint_ will always have it.
| 1.0 | Shell's import gets included in all the bundles - Suppose we have an application having **4 entrypoints** (named Entrypoint 1, 2, 3, 4).
There dependencies are as flow
### Dependency tree
Entrypoint | Dependencies
------------ | -------------
Entrypoint 1 | Dep1.html, common.html
Entrypoint 2 | Dep2.html, common.html
Entrypoint 3 | common.html
Entrypoint 4 | Dep1.html
Now, with **merge strategy set to 3** content of each file will be
### Output content
Entrypoint | element code | imports
------------ | ------------- | ---------------
Entrypoint 1 | entrypoint 1 | shell, shared_bundle_1
Entrypoint 2 | entrypoint 2, dep 2 | shell
Entrypoint 3 | entrypoint 3 | shell
Entrypoint 4 | entrypoint 4 | shared_bundle_1
shell | common | -
shared_bundle_1 | dep 1 | shell
Now, if user opens **Entrypoint 4** files from _shell_ will also be loaded even though they are not required by _Entrypoint 4_.
Ideally, none of the _shared_bundle_ should have import to shell as required _Entrypoint_ will always have it.
| priority | shell s import gets included in all the bundles suppose we have an application having entrypoints named entrypoint there dependencies are as flow dependency tree entrypoint dependencies entrypoint html common html entrypoint html common html entrypoint common html entrypoint html now with merge strategy set to content of each file will be output content entrypoint element code imports entrypoint entrypoint shell shared bundle entrypoint entrypoint dep shell entrypoint entrypoint shell entrypoint entrypoint shared bundle shell common shared bundle dep shell now if user opens entrypoint files from shell will also be loaded even though they are not required by entrypoint ideally none of the shared bundle should have import to shell as required entrypoint will always have it | 1 |
421,853 | 12,262,195,882 | IssuesEvent | 2020-05-06 21:34:11 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui][studio] Create site dialog updates | enhancement priority: medium | Studio UI:
Update the Create Site dialog, marketplace tab to:
- Invoke the updated API with include incompatible plugins
- Have a checkbox that's checked on by default and controls that parameter
- Remove the Crafter CMS version from the detailed plugin/blueprint view
- Remove the "Use" button from the listing if a plugin is not compatible (just leave the "More...")
- Remove the "Use" button from the plugin detail page
- Add a warning/notice that the plugin/bp is not compatible in the detailed view page clearly so the user knows why they can't use it
Studio:
- Update the backend MP proxy to handle the new API updates | 1.0 | [studio-ui][studio] Create site dialog updates - Studio UI:
Update the Create Site dialog, marketplace tab to:
- Invoke the updated API with include incompatible plugins
- Have a checkbox that's checked on by default and controls that parameter
- Remove the Crafter CMS version from the detailed plugin/blueprint view
- Remove the "Use" button from the listing if a plugin is not compatible (just leave the "More...")
- Remove the "Use" button from the plugin detail page
- Add a warning/notice that the plugin/bp is not compatible in the detailed view page clearly so the user knows why they can't use it
Studio:
- Update the backend MP proxy to handle the new API updates | priority | create site dialog updates studio ui update the create site dialog marketplace tab to invoke the updated api with include incompatible plugins have a checkbox that s checked on by default and controls that parameter remove the crafter cms version from the detailed plugin blueprint view remove the use button from the listing if a plugin is not compatible just leave the more remove the use button from the plugin detail page add a warning notice that the plugin bp is not compatible in the detailed view page clearly so the user knows why they can t use it studio update the backend mp proxy to handle the new api updates | 1 |
58,865 | 3,092,304,015 | IssuesEvent | 2015-08-26 17:08:00 | TheLens/demolitions | https://api.github.com/repos/TheLens/demolitions | closed | change "support this project" text to "about this project" | Medium priority | i think it's incongruous. we'll leave donation stuff at the bottom of the about page. | 1.0 | change "support this project" text to "about this project" - i think it's incongruous. we'll leave donation stuff at the bottom of the about page. | priority | change support this project text to about this project i think it s incongruous we ll leave donation stuff at the bottom of the about page | 1 |
97,052 | 3,984,636,467 | IssuesEvent | 2016-05-07 09:43:12 | BugBusterSWE/documentation | https://api.github.com/repos/BugBusterSWE/documentation | opened | Attività di aggiunta alle norme | activity Manager priority:medium | *Documento in cui si trova il problema*:
Norme di progetto
*Descrizione del problema*:
- [ ] Aggiungere processo di validazione
- [ ] Aggiungere decisioni riunione
Link task: [https://bugbusters.teamwork.com/tasks/6601072](https://bugbusters.teamwork.com/tasks/6601072) | 1.0 | Attività di aggiunta alle norme - *Documento in cui si trova il problema*:
Norme di progetto
*Descrizione del problema*:
- [ ] Aggiungere processo di validazione
- [ ] Aggiungere decisioni riunione
Link task: [https://bugbusters.teamwork.com/tasks/6601072](https://bugbusters.teamwork.com/tasks/6601072) | priority | attività di aggiunta alle norme documento in cui si trova il problema norme di progetto descrizione del problema aggiungere processo di validazione aggiungere decisioni riunione link task | 1 |
721,615 | 24,832,668,922 | IssuesEvent | 2022-10-26 05:54:24 | AY2223S1-CS2103T-T09-1/tp | https://api.github.com/repos/AY2223S1-CS2103T-T09-1/tp | closed | Change date format in transaction | priority.Medium type.Task | Currently shows `2000-11-11`, but it should be changed to `11/11/2000` | 1.0 | Change date format in transaction - Currently shows `2000-11-11`, but it should be changed to `11/11/2000` | priority | change date format in transaction currently shows but it should be changed to | 1 |
131,338 | 5,146,242,320 | IssuesEvent | 2017-01-13 00:19:34 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | Create a cmake only way of packaging Drake. | priority: medium team: kitware type: installation and distribution | Currently, this shell script is used:
https://github.com/RobotLocomotion/drake-admin/tree/master/packaging
Would be better if this was a cmake only script so it can be run cross platform. Maybe CPack, or other.
| 1.0 | Create a cmake only way of packaging Drake. - Currently, this shell script is used:
https://github.com/RobotLocomotion/drake-admin/tree/master/packaging
Would be better if this was a cmake only script so it can be run cross platform. Maybe CPack, or other.
| priority | create a cmake only way of packaging drake currently this shell script is used would be better if this was a cmake only script so it can be run cross platform maybe cpack or other | 1 |
22,701 | 2,649,880,980 | IssuesEvent | 2015-03-15 11:41:06 | prikhi/pencil | https://api.github.com/repos/prikhi/pencil | closed | Feature request: Object cloning | 2–5 stars duplicate enhancement imported Priority-Medium | _From [leschilo...@ok.by](https://code.google.com/u/100258333374114659683/) on December 27, 2013 07:08:54_
Hi,
Please add a feature "object cloning" (hotkey "ctrl+D" or "+" on numpad)
Regards,
Alex
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=611_ | 1.0 | Feature request: Object cloning - _From [leschilo...@ok.by](https://code.google.com/u/100258333374114659683/) on December 27, 2013 07:08:54_
Hi,
Please add a feature "object cloning" (hotkey "ctrl+D" or "+" on numpad)
Regards,
Alex
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=611_ | priority | feature request object cloning from on december hi please add a feature object cloning hotkey ctrl d or on numpad regards alex original issue | 1 |
19,576 | 2,622,154,031 | IssuesEvent | 2015-03-04 00:07:20 | byzhang/terrastore | https://api.github.com/repos/byzhang/terrastore | opened | Add streaming support to Java API | auto-migrated Priority-Medium Project-Terrastore-JavaClient Type-Feature | ```
It would be nice if the Java Client API supported streaming via the
getValue and putValue operations. This would be nice for the following
situations:
- storing large documents (ie. 100MB)
- can pass the stream directly through to the client connected to the
webapp, thus consuming no disk or memory on the web server
- If storing something such as XML, can parse the document being pulled out
of the bucket as a stream
```
Original issue reported on code.google.com by `ryan.sch...@gmail.com` on 24 Jan 2010 at 3:35 | 1.0 | Add streaming support to Java API - ```
It would be nice if the Java Client API supported streaming via the
getValue and putValue operations. This would be nice for the following
situations:
- storing large documents (ie. 100MB)
- can pass the stream directly through to the client connected to the
webapp, thus consuming no disk or memory on the web server
- If storing something such as XML, can parse the document being pulled out
of the bucket as a stream
```
Original issue reported on code.google.com by `ryan.sch...@gmail.com` on 24 Jan 2010 at 3:35 | priority | add streaming support to java api it would be nice if the java client api supported streaming via the getvalue and putvalue operations this would be nice for the following situations storing large documents ie can pass the stream directly through to the client connected to the webapp thus consuming no disk or memory on the web server if storing something such as xml can parse the document being pulled out of the bucket as a stream original issue reported on code google com by ryan sch gmail com on jan at | 1 |
153,192 | 5,886,993,855 | IssuesEvent | 2017-05-17 05:41:42 | ThoughtWorksInc/treadmill | https://api.github.com/repos/ThoughtWorksInc/treadmill | closed | Set up FreeIPA server for LDAP-based directory of host and user accounts | Feature-Security inDev Priority-Critical Role-Administrator Size-Medium (M) | So that :
I have a centralized way of managing host and user accounts across Treadmill
Tasks:
"Assumption:
Can initialize a Treadmill cell on the cloud. (i.e. Stories '44844e' and '0dae3b' have been played.)
FreeIPA could be a probable candidate for centrally managing host and user accounts."
Tasks:
Create ansible role for FreeIPA server setup.
| 1.0 | Set up FreeIPA server for LDAP-based directory of host and user accounts - So that :
I have a centralized way of managing host and user accounts across Treadmill
Tasks:
"Assumption:
Can initialize a Treadmill cell on the cloud. (i.e. Stories '44844e' and '0dae3b' have been played.)
FreeIPA could be a probable candidate for centrally managing host and user accounts."
Tasks:
Create ansible role for FreeIPA server setup.
| priority | set up freeipa server for ldap based directory of host and user accounts so that i have a centralized way of managing host and user accounts across treadmill tasks assumption can initialize a treadmill cell on the cloud i e stories and have been played freeipa could be a probable candidate for centrally managing host and user accounts tasks create ansible role for freeipa server setup | 1 |
4,743 | 2,563,500,067 | IssuesEvent | 2015-02-06 13:35:27 | olga-jane/prizm | https://api.github.com/repos/olga-jane/prizm | closed | Open only "Pipe" tab from menu "Settings" | bug bug - functional Coding MEDIUM priority Settings to_share_students | Scenario:
1. Open Prizm application
2. Go to "Settings" -> ANY menu item
Result:
Open only Pipe tab
Expected:
Open tab corresponding to menu item | 1.0 | Open only "Pipe" tab from menu "Settings" - Scenario:
1. Open Prizm application
2. Go to "Settings" -> ANY menu item
Result:
Open only Pipe tab
Expected:
Open tab corresponding to menu item | priority | open only pipe tab from menu settings scenario open prizm application go to settings any menu item result open only pipe tab expected open tab corresponding to menu item | 1 |
802,443 | 28,962,630,688 | IssuesEvent | 2023-05-10 04:50:59 | Park-Station/Parkstation | https://api.github.com/repos/Park-Station/Parkstation | closed | Shuttle changes | Priority: 3-Medium Type: Feature Request Status: Derelict | - [x] Longer shuttle transit
- [x] Auto call shuttle
- [x] Restart vote calls shuttle
- [x] Make evac a "lose condition" if the shift wasn't ended via Autocall
- [ ] Make the condition message appear at the bottom of the round end panel
- [x] Shuttle collision damage
---
- [x] Cargo should cost something to fly
- [x] Methods of pricing
- [x] None
- [x] Flat price (min)
- [x] Over time increase
- [x] Percentage of balance
- [x] Load size
- [x] Max cost
- [ ] Debt
---
- [ ] Guidebooks
- [ ] Debt
- [ ] Cargo pricing methods
- [ ] Shuttle 💥 (explosion based on mass or something)
---
| 1.0 | Shuttle changes - - [x] Longer shuttle transit
- [x] Auto call shuttle
- [x] Restart vote calls shuttle
- [x] Make evac a "lose condition" if the shift wasn't ended via Autocall
- [ ] Make the condition message appear at the bottom of the round end panel
- [x] Shuttle collision damage
---
- [x] Cargo should cost something to fly
- [x] Methods of pricing
- [x] None
- [x] Flat price (min)
- [x] Over time increase
- [x] Percentage of balance
- [x] Load size
- [x] Max cost
- [ ] Debt
---
- [ ] Guidebooks
- [ ] Debt
- [ ] Cargo pricing methods
- [ ] Shuttle 💥 (explosion based on mass or something)
---
| priority | shuttle changes longer shuttle transit auto call shuttle restart vote calls shuttle make evac a lose condition if the shift wasn t ended via autocall make the condition message appear at the bottom of the round end panel shuttle collision damage cargo should cost something to fly methods of pricing none flat price min over time increase percentage of balance load size max cost debt guidebooks debt cargo pricing methods shuttle 💥 explosion based on mass or something | 1 |
265,085 | 8,337,063,174 | IssuesEvent | 2018-09-28 09:49:47 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | opened | The role MIS USER is not able to receive its scopes, PreProd, #J355 | kind/support priority/medium | Steps to reproduce:
GET https://api-preprod.ehealth-ukraine.org/admin/roles
Для ролі MIS USER у скоупах містяться скоупи: user:request_factor user:approve_factor
Із цими скоупами неможливо виконати процедуру логіну від користувача МІС.
Expected results:
Скоупи відповідають документації та досзволяють пройти Oauth Flow
Детальніше маємо наступне:
GET https://api-preprod.ehealth-ukraine.org/admin/roles
Response Body:
....
3: scope:"legal_entity:read legal_entity:write legal_entity:mis_verify role:read user:request_factor user:approve_factor event:read"
name:"MIS USER"
id:"56134459-f73e-45a5-af79-e16f3db7151f"
....
При спробі логіну із даними скоупами:
POST https://api-preprod.ehealth-ukraine.org/oauth/apps/authorize
Request Headers:
content-type:"application/json"
api-key:"d036273f779b1741dcf0"
authorization:"Bearer Z2VMSHcyK1ppakorUUxJdmRTL3dRQT09"
cache-control:"no-cache"
postman-token:"6212b19a-0e6f-45d8-8adb-609b86c4a649"
user-agent:"PostmanRuntime/7.3.0"
accept:"/"
host:"api-preprod.ehealth-ukraine.org"
cookie:"__cfduid=d793d0eccd15c42ea6746e77dc14d01711532015998"
accept-encoding:"gzip, deflate"
content-length:267
Request Body:
app:
client_id:"68325CEB-84FA-4217-A544-40FC38837911"
redirect_uri:"http://example.com/redirect"
scope:"legal_entity:read legal_entity:write legal_entity:mis_verify role:read user:request_factor user:approve_factor event:read"
Response Body:
meta:
url:"http://api-svc.mithril/oauth/apps/authorize"
type:"object"
request_id:"c0612da5-8be9-40d0-aac4-8960d42feb0c#112951"
code:422
error:
type:"request_malformed"
message:"Scope is not allowed by client type."
На даний момент виконали запит повторно:
Щойно перепровірив. GET https://api-preprod.ehealth-ukraine.org/admin/roles все ще повертає невірні скоупи.
Response Headers:
date:"Fri, 28 Sep 2018 09:12:35 GMT"
Response Body:
3:
scope:"legal_entity:read legal_entity:write legal_entity:mis_verify role:read user:request_factor user:approve_factor event:read"
name:"MIS USER"
id:"56134459-f73e-45a5-af79-e16f3db7151f"
| 1.0 | The role MIS USER is not able to receive its scopes, PreProd, #J355 - Steps to reproduce:
GET https://api-preprod.ehealth-ukraine.org/admin/roles
Для ролі MIS USER у скоупах містяться скоупи: user:request_factor user:approve_factor
Із цими скоупами неможливо виконати процедуру логіну від користувача МІС.
Expected results:
Скоупи відповідають документації та досзволяють пройти Oauth Flow
Детальніше маємо наступне:
GET https://api-preprod.ehealth-ukraine.org/admin/roles
Response Body:
....
3: scope:"legal_entity:read legal_entity:write legal_entity:mis_verify role:read user:request_factor user:approve_factor event:read"
name:"MIS USER"
id:"56134459-f73e-45a5-af79-e16f3db7151f"
....
При спробі логіну із даними скоупами:
POST https://api-preprod.ehealth-ukraine.org/oauth/apps/authorize
Request Headers:
content-type:"application/json"
api-key:"d036273f779b1741dcf0"
authorization:"Bearer Z2VMSHcyK1ppakorUUxJdmRTL3dRQT09"
cache-control:"no-cache"
postman-token:"6212b19a-0e6f-45d8-8adb-609b86c4a649"
user-agent:"PostmanRuntime/7.3.0"
accept:"/"
host:"api-preprod.ehealth-ukraine.org"
cookie:"__cfduid=d793d0eccd15c42ea6746e77dc14d01711532015998"
accept-encoding:"gzip, deflate"
content-length:267
Request Body:
app:
client_id:"68325CEB-84FA-4217-A544-40FC38837911"
redirect_uri:"http://example.com/redirect"
scope:"legal_entity:read legal_entity:write legal_entity:mis_verify role:read user:request_factor user:approve_factor event:read"
Response Body:
meta:
url:"http://api-svc.mithril/oauth/apps/authorize"
type:"object"
request_id:"c0612da5-8be9-40d0-aac4-8960d42feb0c#112951"
code:422
error:
type:"request_malformed"
message:"Scope is not allowed by client type."
На даний момент виконали запит повторно:
Щойно перепровірив. GET https://api-preprod.ehealth-ukraine.org/admin/roles все ще повертає невірні скоупи.
Response Headers:
date:"Fri, 28 Sep 2018 09:12:35 GMT"
Response Body:
3:
scope:"legal_entity:read legal_entity:write legal_entity:mis_verify role:read user:request_factor user:approve_factor event:read"
name:"MIS USER"
id:"56134459-f73e-45a5-af79-e16f3db7151f"
| priority | the role mis user is not able to receive its scopes preprod steps to reproduce get для ролі mis user у скоупах містяться скоупи user request factor user approve factor із цими скоупами неможливо виконати процедуру логіну від користувача міс expected results скоупи відповідають документації та досзволяють пройти oauth flow детальніше маємо наступне get response body scope legal entity read legal entity write legal entity mis verify role read user request factor user approve factor event read name mis user id при спробі логіну із даними скоупами post request headers content type application json api key authorization bearer cache control no cache postman token user agent postmanruntime accept host api preprod ehealth ukraine org cookie cfduid accept encoding gzip deflate content length request body app client id redirect uri scope legal entity read legal entity write legal entity mis verify role read user request factor user approve factor event read response body meta url type object request id code error type request malformed message scope is not allowed by client type на даний момент виконали запит повторно щойно перепровірив get все ще повертає невірні скоупи response headers date fri sep gmt response body scope legal entity read legal entity write legal entity mis verify role read user request factor user approve factor event read name mis user id | 1 |
512,483 | 14,897,845,748 | IssuesEvent | 2021-01-21 12:20:16 | igroglaz/srvmgr | https://api.github.com/repos/igroglaz/srvmgr | closed | unhardcode starting exp | Medium priority enhancement | allow to create character with 0 exp (atm character created with 20 main skill and 10 secondary.. we need to create characters with 0 skill) | 1.0 | unhardcode starting exp - allow to create character with 0 exp (atm character created with 20 main skill and 10 secondary.. we need to create characters with 0 skill) | priority | unhardcode starting exp allow to create character with exp atm character created with main skill and secondary we need to create characters with skill | 1 |
23,800 | 2,663,652,511 | IssuesEvent | 2015-03-20 08:23:25 | less/less.js | https://api.github.com/repos/less/less.js | closed | Add color scaling functions | Feature Request Medium Priority ReadyForImplementation | As evidenced by #282, many people feel the color functions should work relatively instead of absolutely. For example, calling ```darken(hsla(0,0,10%,1), 10%)``` should result in ```hsla(0,0,9%,1)``` instead of ```hsla(0,0,0,1)```.
Due to obvious backwards compatibility issues, we shouldn't change the behavior of the existing functions completely. However, it would be nice if we added relative variations or possibly overloads.
I propose the convention ```darkenrelative(@color, @percent)``` instead of ```darken(@color, @percent, true)``` because it's more obvious what you're doing. Personally, I'd rather see ```darkenRelative``` but it appears your standards are all lowercase (e.g. ```isnumber```, ```iscolor```, etc.).
The functions that should have relative counterparts added are:
lighten
darken
saturate
desaturate
fadein
fadeout
I also propose new color functions follow this convention where applicable -- ```somefunction``` and ```somefunctionrelative```. | 1.0 | Add color scaling functions - As evidenced by #282, many people feel the color functions should work relatively instead of absolutely. For example, calling ```darken(hsla(0,0,10%,1), 10%)``` should result in ```hsla(0,0,9%,1)``` instead of ```hsla(0,0,0,1)```.
Due to obvious backwards compatibility issues, we shouldn't change the behavior of the existing functions completely. However, it would be nice if we added relative variations or possibly overloads.
I propose the convention ```darkenrelative(@color, @percent)``` instead of ```darken(@color, @percent, true)``` because it's more obvious what you're doing. Personally, I'd rather see ```darkenRelative``` but it appears your standards are all lowercase (e.g. ```isnumber```, ```iscolor```, etc.).
The functions that should have relative counterparts added are:
lighten
darken
saturate
desaturate
fadein
fadeout
I also propose new color functions follow this convention where applicable -- ```somefunction``` and ```somefunctionrelative```. | priority | add color scaling functions as evidenced by many people feel the color functions should work relatively instead of absolutely for example calling darken hsla should result in hsla instead of hsla due to obvious backwards compatibility issues we shouldn t change the behavior of the existing functions completely however it would be nice if we added relative variations or possibly overloads i propose the convention darkenrelative color percent instead of darken color percent true because it s more obvious what you re doing personally i d rather see darkenrelative but it appears your standards are all lowercase e g isnumber iscolor etc the functions that should have relative counterparts added are lighten darken saturate desaturate fadein fadeout i also propose new color functions follow this convention where applicable somefunction and somefunctionrelative | 1 |
425,485 | 12,341,059,445 | IssuesEvent | 2020-05-14 21:07:32 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] Nested drop-zone created component but it's not binding it to the drop-zone | bug priority: medium | ## Describe the bug
I used the empty blueprint to test the nested dropzones. I added a freemarker block which accept a Component content type called `section` and when i drag `section` to that dropzone it works fine. Now that section component also accept a dropzone of any Component content type exists in the site. When i drag any component is gets created but never bound to that dropzone. If i would do the same steps with the manual way by clicking Edit and add component it works.
## To Reproduce
Steps to reproduce the behavior:
1. create a new site based on the **empty blueprint**.
2. go to **site config**
3. Open **A Page** content type
4. Add a node selector named **sections** with a data-source of embedded-content with a type of `/component/layoutsection`
5. edit the template to have this block of
```
<section class="section" <@studio.componentContainerAttr target="sections_o" component=contentModel/>>
<#if contentModel.sections_o?? && contentModel.sections_o.item??>
<#list contentModel.sections_o.item as section>
<@renderComponent component=section />
</#list>
</#if>
</section>
```
6. create a new Component Content Type called **Layout - Section** which will have a type name as `/component/layoutsection`
7. add a form section
8. add node-selector inside the form section and call it **Components** with a variable name `components_o`
9. assign a **shared-content** data-source to this node-selector named **Shared Components** with a path repository/browse path `/site/components/`
10. edit the template to be
```
<#import "/templates/system/common/cstudio-support.ftl" as studio />
<div <@studio.componentAttr path=contentModel.storeUrl ice=true /> >
<div style="border: 1px solid black" <@studio.componentContainerAttr target="components_o" component=contentModel/>>
<#if contentModel.components_o?? && contentModel.components_o.item??>
<#list contentModel.components_o.item as component>
<@renderComponent component=component />
</#list>
</#if>
</div>
</div>
```
11. edit the Preview Components Configurations
```
<config>
<category>
<label>Layouts</label>
<component>
<label>Section</label>
<type>/component/layoutsection</type>
</component>
</category>
<category>
<!-- any component of your choice to test -->
<label>Basic Components</label>
<component>
<label>Header</label>
<type>/component/componentheader</type>
<path>/site/components/headers</path>
</component>
</category>
<browse>
<label>Header</label>
<path>/site/components/headers</path>
</browse>
</config>
```
12. Open the home page and click preview tools to drag component.
13. drag a section component (it should work fine)
14. drag any component inside the section that we dragged (in my case a header component)
15. A header will be created but it won't be bound to that dropzone.
Note: my header component is just HTML it has nothing at all.
## Expected behavior
When I drag a component to the dropzone inside the section component it should be not only created it also should be bound.
## Screenshots
https://craftercms.slack.com/archives/C0ME7US74/p1587699413244900
## Logs
n/a
## Specs
### Version
Studio Version Number: 3.1.5-fb3509
Build Number: fb35092a40f7b3d5edf1f2ef61e437b61cc2e5e6
Build Date/Time: 02-22-2020 02:17:38 +0300
### OS
Debian Buster (10)
### Browser
Firefox, chromium
## Additional context
n/a
| 1.0 | [studio-ui] Nested drop-zone created component but it's not binding it to the drop-zone - ## Describe the bug
I used the empty blueprint to test the nested dropzones. I added a freemarker block which accept a Component content type called `section` and when i drag `section` to that dropzone it works fine. Now that section component also accept a dropzone of any Component content type exists in the site. When i drag any component is gets created but never bound to that dropzone. If i would do the same steps with the manual way by clicking Edit and add component it works.
## To Reproduce
Steps to reproduce the behavior:
1. create a new site based on the **empty blueprint**.
2. go to **site config**
3. Open **A Page** content type
4. Add a node selector named **sections** with a data-source of embedded-content with a type of `/component/layoutsection`
5. edit the template to have this block of
```
<section class="section" <@studio.componentContainerAttr target="sections_o" component=contentModel/>>
<#if contentModel.sections_o?? && contentModel.sections_o.item??>
<#list contentModel.sections_o.item as section>
<@renderComponent component=section />
</#list>
</#if>
</section>
```
6. create a new Component Content Type called **Layout - Section** which will have a type name as `/component/layoutsection`
7. add a form section
8. add node-selector inside the form section and call it **Components** with a variable name `components_o`
9. assign a **shared-content** data-source to this node-selector named **Shared Components** with a path repository/browse path `/site/components/`
10. edit the template to be
```
<#import "/templates/system/common/cstudio-support.ftl" as studio />
<div <@studio.componentAttr path=contentModel.storeUrl ice=true /> >
<div style="border: 1px solid black" <@studio.componentContainerAttr target="components_o" component=contentModel/>>
<#if contentModel.components_o?? && contentModel.components_o.item??>
<#list contentModel.components_o.item as component>
<@renderComponent component=component />
</#list>
</#if>
</div>
</div>
```
11. edit the Preview Components Configurations
```
<config>
<category>
<label>Layouts</label>
<component>
<label>Section</label>
<type>/component/layoutsection</type>
</component>
</category>
<category>
<!-- any component of your choice to test -->
<label>Basic Components</label>
<component>
<label>Header</label>
<type>/component/componentheader</type>
<path>/site/components/headers</path>
</component>
</category>
<browse>
<label>Header</label>
<path>/site/components/headers</path>
</browse>
</config>
```
12. Open the home page and click preview tools to drag component.
13. drag a section component (it should work fine)
14. drag any component inside the section that we dragged (in my case a header component)
15. A header will be created but it won't be bound to that dropzone.
Note: my header component is just HTML it has nothing at all.
## Expected behavior
When I drag a component to the dropzone inside the section component it should be not only created it also should be bound.
## Screenshots
https://craftercms.slack.com/archives/C0ME7US74/p1587699413244900
## Logs
n/a
## Specs
### Version
Studio Version Number: 3.1.5-fb3509
Build Number: fb35092a40f7b3d5edf1f2ef61e437b61cc2e5e6
Build Date/Time: 02-22-2020 02:17:38 +0300
### OS
Debian Buster (10)
### Browser
Firefox, chromium
## Additional context
n/a
| priority | nested drop zone created component but it s not binding it to the drop zone describe the bug i used the empty blueprint to test the nested dropzones i added a freemarker block which accept a component content type called section and when i drag section to that dropzone it works fine now that section component also accept a dropzone of any component content type exists in the site when i drag any component is gets created but never bound to that dropzone if i would do the same steps with the manual way by clicking edit and add component it works to reproduce steps to reproduce the behavior create a new site based on the empty blueprint go to site config open a page content type add a node selector named sections with a data source of embedded content with a type of component layoutsection edit the template to have this block of create a new component content type called layout section which will have a type name as component layoutsection add a form section add node selector inside the form section and call it components with a variable name components o assign a shared content data source to this node selector named shared components with a path repository browse path site components edit the template to be edit the preview components configurations layouts section component layoutsection basic components header component componentheader site components headers header site components headers open the home page and click preview tools to drag component drag a section component it should work fine drag any component inside the section that we dragged in my case a header component a header will be created but it won t be bound to that dropzone note my header component is just html it has nothing at all expected behavior when i drag a component to the dropzone inside the section component it should be not only created it also should be bound screenshots logs n a specs version studio version number build number build date time os debian buster browser firefox chromium additional context n a | 1 |
792,099 | 27,946,316,686 | IssuesEvent | 2023-03-24 03:37:58 | ncaq/google-search-title-qualified | https://api.github.com/repos/ncaq/google-search-title-qualified | closed | タイトル幅を広めるCSSを追加 | Type: Feature Priority: Medium | 現在私は自作の、
<https://userstyles.org/styles/95431/google-expand>
を使っているのですが、
タイトル文字列が多くなる場合ほとんどの環境で改行が発生しやすくなるので、
拡張機能にそういうCSSを同梱したほうが良いかもしれません。 | 1.0 | タイトル幅を広めるCSSを追加 - 現在私は自作の、
<https://userstyles.org/styles/95431/google-expand>
を使っているのですが、
タイトル文字列が多くなる場合ほとんどの環境で改行が発生しやすくなるので、
拡張機能にそういうCSSを同梱したほうが良いかもしれません。 | priority | タイトル幅を広めるcssを追加 現在私は自作の、 を使っているのですが、 タイトル文字列が多くなる場合ほとんどの環境で改行が発生しやすくなるので、 拡張機能にそういうcssを同梱したほうが良いかもしれません。 | 1 |
145,923 | 5,584,008,570 | IssuesEvent | 2017-03-29 02:55:16 | awslabs/s2n | https://api.github.com/repos/awslabs/s2n | closed | Add Support for ChaCha20-Poly1305 | priority/high size/medium status/work_in_progress type/enhancement type/new_crypto | Recent vulnerabilities in 3DES-CBC3[1] pushed us to remove it from our default preference list. That has left us with only suites that use AES-{CBC,GCM}. To diversify our offerings, we consider adding the stream cipher ChaCha20-Poly1305 . ChaCha20-Poly1305 was added in Openssl 1.1.0 [2]. Supporting 1.1.0's libcrypto is a prerequisite. ChaCha20 has been found to be more performant than AES-GCM on some mobile devices[3]. I'm going to chalk that up to lack of AES-NI support.
Relevant IANA info[4]:
```
0xCC,0xA8 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
0xCC,0xAA TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256
```
[1] https://sweet32.info/
[2] https://www.openssl.org/news/openssl-1.1.0-notes.html
[3] https://security.googleblog.com/2014/04/speeding-up-and-strengthening-https.html
[4] https://tools.ietf.org/html/rfc7539
| 1.0 | Add Support for ChaCha20-Poly1305 - Recent vulnerabilities in 3DES-CBC3[1] pushed us to remove it from our default preference list. That has left us with only suites that use AES-{CBC,GCM}. To diversify our offerings, we consider adding the stream cipher ChaCha20-Poly1305 . ChaCha20-Poly1305 was added in Openssl 1.1.0 [2]. Supporting 1.1.0's libcrypto is a prerequisite. ChaCha20 has been found to be more performant than AES-GCM on some mobile devices[3]. I'm going to chalk that up to lack of AES-NI support.
Relevant IANA info[4]:
```
0xCC,0xA8 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
0xCC,0xAA TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256
```
[1] https://sweet32.info/
[2] https://www.openssl.org/news/openssl-1.1.0-notes.html
[3] https://security.googleblog.com/2014/04/speeding-up-and-strengthening-https.html
[4] https://tools.ietf.org/html/rfc7539
| priority | add support for recent vulnerabilities in pushed us to remove it from our default preference list that has left us with only suites that use aes cbc gcm to diversify our offerings we consider adding the stream cipher was added in openssl supporting s libcrypto is a prerequisite has been found to be more performant than aes gcm on some mobile devices i m going to chalk that up to lack of aes ni support relevant iana info tls ecdhe rsa with tls dhe rsa with | 1 |
421,674 | 12,260,114,444 | IssuesEvent | 2020-05-06 17:44:19 | onicagroup/runway | https://api.github.com/repos/onicagroup/runway | opened | [REQUEST] escape runway.yml lookup syntax | feature priority:medium | In some cases, there is a need to pass a `parameters` or `options` in a format that looks like our lookup syntax (e.g. `${<something>}`). There needs to be a way to _escape_ the lookup, causing it to pass the literal value instead of resolving it.
### Possible Syntax
- `{{<something>}}` in the runway or CFNgin config would translate to `${<something>}` after lookups have been resolved
- `#{<something>}` in the runway or CFNgin config would translate to `${<something>}` after lookups have been resolved | 1.0 | [REQUEST] escape runway.yml lookup syntax - In some cases, there is a need to pass a `parameters` or `options` in a format that looks like our lookup syntax (e.g. `${<something>}`). There needs to be a way to _escape_ the lookup, causing it to pass the literal value instead of resolving it.
### Possible Syntax
- `{{<something>}}` in the runway or CFNgin config would translate to `${<something>}` after lookups have been resolved
- `#{<something>}` in the runway or CFNgin config would translate to `${<something>}` after lookups have been resolved | priority | escape runway yml lookup syntax in some cases there is a need to pass a parameters or options in a format that looks like our lookup syntax e g there needs to be a way to escape the lookup causing it to pass the literal value instead of resolving it possible syntax in the runway or cfngin config would translate to after lookups have been resolved in the runway or cfngin config would translate to after lookups have been resolved | 1 |
396,348 | 11,708,283,488 | IssuesEvent | 2020-03-08 12:23:23 | naipaka/PinMusubi-iOS | https://api.github.com/repos/naipaka/PinMusubi-iOS | opened | add favorite spot feature | Enhancement Medium Priority | # Overview
- Make enable to add only spot to favorite list
- add favorite button on spot cell on spot list view
# Purpose
- To make easier to reach spot information which user wants to look | 1.0 | add favorite spot feature - # Overview
- Make enable to add only spot to favorite list
- add favorite button on spot cell on spot list view
# Purpose
- To make easier to reach spot information which user wants to look | priority | add favorite spot feature overview make enable to add only spot to favorite list add favorite button on spot cell on spot list view purpose to make easier to reach spot information which user wants to look | 1 |
317,613 | 9,666,973,066 | IssuesEvent | 2019-05-21 12:12:32 | conan-io/conan | https://api.github.com/repos/conan-io/conan | opened | SCM auto mode improvments | complex: medium priority: high stage: queue type: feature | This is a new issue to manage together #5126 and #4732 (probably some other).
**Current pains**
- `scm_folder.txt` is tricky and ugly (internal concern mostly).
- Since the usage of the local sources should be avoided when running a `conan install` command (see next section), could be still a good mechanism to take benefit of the optimization without taking the sources from the local folder but not cloning the repo. (@niosHD at https://github.com/conan-io/conan/issues/4732)
- A recipe containing `scm` with a real revision exported is dangerous if the folder was not pristine because the recipe could be uploaded and the CI could run something different from the expected. (@Mark-Hatch-Bose at https://github.com/conan-io/conan/issues/5126)
**Already solved pains to be kept**
- The local sources shouldn't be used in a `conan install --build XX` because it could take sources too much changed or belonging even to a different version. It should use the sources only for `conan create`. Solved at #4218
**Proposition**
- The `scm` auto will copy the sources from the local directory to the `source` cache directory. It makes `scm_folder.txt` not needed (@niosHD)
- The `scm` auto won't replace the revision if the repo is not pristine. `auto` will be kept.
- The `conan upload` command won't allow uploading a recipe with `revision=auto`.
- The `conan install` will work out of the box, because if the `source` folder exists, it will just use it, without trying to use changes from the local directory.
- The `conan export` and `conan create` will accept a `--scm_dirty` parameter to force capturing the revision (commit) even if the repo is not pristine. Why? if a CI build clone the repository, it is common that some file is added or modified because of the pipeline, but indeed you want to force that commit to be considered valid, so you can write your CI script with the `--scm_dirty` because you know what are you doing.
Feedback, please! Let's target `1.17` for this. | 1.0 | SCM auto mode improvments - This is a new issue to manage together #5126 and #4732 (probably some other).
**Current pains**
- `scm_folder.txt` is tricky and ugly (internal concern mostly).
- Since the usage of the local sources should be avoided when running a `conan install` command (see next section), could be still a good mechanism to take benefit of the optimization without taking the sources from the local folder but not cloning the repo. (@niosHD at https://github.com/conan-io/conan/issues/4732)
- A recipe containing `scm` with a real revision exported is dangerous if the folder was not pristine because the recipe could be uploaded and the CI could run something different from the expected. (@Mark-Hatch-Bose at https://github.com/conan-io/conan/issues/5126)
**Already solved pains to be kept**
- The local sources shouldn't be used in a `conan install --build XX` because it could take sources too much changed or belonging even to a different version. It should use the sources only for `conan create`. Solved at #4218
**Proposition**
- The `scm` auto will copy the sources from the local directory to the `source` cache directory. It makes `scm_folder.txt` not needed (@niosHD)
- The `scm` auto won't replace the revision if the repo is not pristine. `auto` will be kept.
- The `conan upload` command won't allow uploading a recipe with `revision=auto`.
- The `conan install` will work out of the box, because if the `source` folder exists, it will just use it, without trying to use changes from the local directory.
- The `conan export` and `conan create` will accept a `--scm_dirty` parameter to force capturing the revision (commit) even if the repo is not pristine. Why? if a CI build clone the repository, it is common that some file is added or modified because of the pipeline, but indeed you want to force that commit to be considered valid, so you can write your CI script with the `--scm_dirty` because you know what are you doing.
Feedback, please! Let's target `1.17` for this. | priority | scm auto mode improvments this is a new issue to manage together and probably some other current pains scm folder txt is tricky and ugly internal concern mostly since the usage of the local sources should be avoided when running a conan install command see next section could be still a good mechanism to take benefit of the optimization without taking the sources from the local folder but not cloning the repo nioshd at a recipe containing scm with a real revision exported is dangerous if the folder was not pristine because the recipe could be uploaded and the ci could run something different from the expected mark hatch bose at already solved pains to be kept the local sources shouldn t be used in a conan install build xx because it could take sources too much changed or belonging even to a different version it should use the sources only for conan create solved at proposition the scm auto will copy the sources from the local directory to the source cache directory it makes scm folder txt not needed nioshd the scm auto won t replace the revision if the repo is not pristine auto will be kept the conan upload command won t allow uploading a recipe with revision auto the conan install will work out of the box because if the source folder exists it will just use it without trying to use changes from the local directory the conan export and conan create will accept a scm dirty parameter to force capturing the revision commit even if the repo is not pristine why if a ci build clone the repository it is common that some file is added or modified because of the pipeline but indeed you want to force that commit to be considered valid so you can write your ci script with the scm dirty because you know what are you doing feedback please let s target for this | 1 |
764,046 | 26,782,889,636 | IssuesEvent | 2023-01-31 22:59:47 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] Batch nested loops closing connection when querying PG tables | kind/bug area/ysql priority/medium status/awaiting-triage | Jira Link: [DB-5294](https://yugabyte.atlassian.net/browse/DB-5294)
### Description
When using batched nested loops, certain queries to PG tables cause the connection to close due to the following error seen on the postgres logs:
```
TRAP: FailedAssertion("!(bms_overlap(rinfo->clause_relids, batchedrelids))", File: "../../../../../../../src/postgres/src/backend/optimizer/path/indxpath.c", Line: 685)
```
This error occured for the latest debug build for 2.17.2.0, and was reproduced by @tanujnay112.
Reproduction:
```
set yb_bnl_batch_size = 2;
select
t.relname as table_name,
a.attname as column_name
from
pg_class t,
pg_index ix,
pg_attribute a
where
t.oid = ix.indrelid
and a.attrelid = t.oid
and a.attnum = ANY(ix.indkey) limit 10;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
```
[DB-5294]: https://yugabyte.atlassian.net/browse/DB-5294?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [YSQL] Batch nested loops closing connection when querying PG tables - Jira Link: [DB-5294](https://yugabyte.atlassian.net/browse/DB-5294)
### Description
When using batched nested loops, certain queries to PG tables cause the connection to close due to the following error seen on the postgres logs:
```
TRAP: FailedAssertion("!(bms_overlap(rinfo->clause_relids, batchedrelids))", File: "../../../../../../../src/postgres/src/backend/optimizer/path/indxpath.c", Line: 685)
```
This error occured for the latest debug build for 2.17.2.0, and was reproduced by @tanujnay112.
Reproduction:
```
set yb_bnl_batch_size = 2;
select
t.relname as table_name,
a.attname as column_name
from
pg_class t,
pg_index ix,
pg_attribute a
where
t.oid = ix.indrelid
and a.attrelid = t.oid
and a.attnum = ANY(ix.indkey) limit 10;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
```
[DB-5294]: https://yugabyte.atlassian.net/browse/DB-5294?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | batch nested loops closing connection when querying pg tables jira link description when using batched nested loops certain queries to pg tables cause the connection to close due to the following error seen on the postgres logs trap failedassertion bms overlap rinfo clause relids batchedrelids file src postgres src backend optimizer path indxpath c line this error occured for the latest debug build for and was reproduced by reproduction set yb bnl batch size select t relname as table name a attname as column name from pg class t pg index ix pg attribute a where t oid ix indrelid and a attrelid t oid and a attnum any ix indkey limit server closed the connection unexpectedly this probably means the server terminated abnormally before or while processing the request the connection to the server was lost attempting reset succeeded | 1 |
121,204 | 4,806,371,886 | IssuesEvent | 2016-11-02 18:22:06 | worona/worona-dashboard-layout | https://api.github.com/repos/worona/worona-dashboard-layout | closed | We've to decide the errors to show when check site fail | Priority: Medium | I'd say something like this...
**If "Site online and available" fails:**
> ### Oops!
>
> Looks like your site is not online or available. This may occur if:
> - You have entered an incorrect URL. Please make sure your site is correct: _http://www.site.com_. Want to fix it? Edit your URL [here](url).
> - Your WordPress website is down or it is too slow to respond.
> Please [contact us](url) or try again later.
**If "Worona WordPress Plugin" fails:**
> ### WordPress Plugin missing
>
> We can't find the Worona Plugin installed in your site. Our plugin lets you synchronize your WordPress site with our platform, please make sure it is installed and active.
>
> [Install and activate Worona plugin now](url) or check out our [help documentation](url).
| 1.0 | We've to decide the errors to show when check site fail - I'd say something like this...
**If "Site online and available" fails:**
> ### Oops!
>
> Looks like your site is not online or available. This may occur if:
> - You have entered an incorrect URL. Please make sure your site is correct: _http://www.site.com_. Want to fix it? Edit your URL [here](url).
> - Your WordPress website is down or it is too slow to respond.
> Please [contact us](url) or try again later.
**If "Worona WordPress Plugin" fails:**
> ### WordPress Plugin missing
>
> We can't find the Worona Plugin installed in your site. Our plugin lets you synchronize your WordPress site with our platform, please make sure it is installed and active.
>
> [Install and activate Worona plugin now](url) or check out our [help documentation](url).
| priority | we ve to decide the errors to show when check site fail i d say something like this if site online and available fails oops looks like your site is not online or available this may occur if you have entered an incorrect url please make sure your site is correct want to fix it edit your url url your wordpress website is down or it is too slow to respond please url or try again later if worona wordpress plugin fails wordpress plugin missing we can t find the worona plugin installed in your site our plugin lets you synchronize your wordpress site with our platform please make sure it is installed and active url or check out our url | 1 |
55,868 | 3,075,081,191 | IssuesEvent | 2015-08-20 11:31:34 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Очередь скачивания не очистилась после завершения загрузки | bug imported Priority-Medium | _From [franc...@mail.ru](https://code.google.com/u/108356571090263611971/) on February 08, 2011 23:07:34_
114 issue, как я понял, говорит о том, что в очереди скачивания отображаются не все файлы, у меня же получилась обратная ситуация (но тоже с Интернами, и это может быть на что-то наведёт):
Поставил на скачивание 3 файла, один из них удалил из очереди в процессе скачивания, он удалился, остальные 2 уведомили о том, что скачивание закончилось, переместились куда надо, но очередь скачивания не очистилась
Закрыл, открыл очередь - файлы не исчезли.
**Attachment:** [DownloadsQueue.png](http://code.google.com/p/flylinkdc/issues/detail?id=349)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=349_ | 1.0 | Очередь скачивания не очистилась после завершения загрузки - _From [franc...@mail.ru](https://code.google.com/u/108356571090263611971/) on February 08, 2011 23:07:34_
114 issue, как я понял, говорит о том, что в очереди скачивания отображаются не все файлы, у меня же получилась обратная ситуация (но тоже с Интернами, и это может быть на что-то наведёт):
Поставил на скачивание 3 файла, один из них удалил из очереди в процессе скачивания, он удалился, остальные 2 уведомили о том, что скачивание закончилось, переместились куда надо, но очередь скачивания не очистилась
Закрыл, открыл очередь - файлы не исчезли.
**Attachment:** [DownloadsQueue.png](http://code.google.com/p/flylinkdc/issues/detail?id=349)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=349_ | priority | очередь скачивания не очистилась после завершения загрузки from on february issue как я понял говорит о том что в очереди скачивания отображаются не все файлы у меня же получилась обратная ситуация но тоже с интернами и это может быть на что то наведёт поставил на скачивание файла один из них удалил из очереди в процессе скачивания он удалился остальные уведомили о том что скачивание закончилось переместились куда надо но очередь скачивания не очистилась закрыл открыл очередь файлы не исчезли attachment original issue | 1 |
105,278 | 4,233,497,925 | IssuesEvent | 2016-07-05 08:10:21 | BugBusterSWE/documentation | https://api.github.com/repos/BugBusterSWE/documentation | opened | Slide riguardanti il bilancio | priority:medium | *Documento in cui si trova il problema*:
Manuale Admin
Activity #580
*Descrizione del problema*:
Creare slide riguardanti il bilancio.
Link task: [https://bugbusters.teamwork.com/tasks/7482386](https://bugbusters.teamwork.com/tasks/7482386) | 1.0 | Slide riguardanti il bilancio - *Documento in cui si trova il problema*:
Manuale Admin
Activity #580
*Descrizione del problema*:
Creare slide riguardanti il bilancio.
Link task: [https://bugbusters.teamwork.com/tasks/7482386](https://bugbusters.teamwork.com/tasks/7482386) | priority | slide riguardanti il bilancio documento in cui si trova il problema manuale admin activity descrizione del problema creare slide riguardanti il bilancio link task | 1 |
749,031 | 26,148,233,298 | IssuesEvent | 2022-12-30 09:22:24 | NomicFoundation/hardhat | https://api.github.com/repos/NomicFoundation/hardhat | closed | Can not change gas price of local mainet fork | type:feature priority:medium | Here is my hardhat setting:
```js
module.exports = {
solidity: "0.7.3",
networks: {
hardhat: {
forking: {
url: "http://127.0.0.1:18545"
},
gasPrice: 150000000000
}
}
};
```
But the gas price is still 8000000000:
```js
const gasPrice = await web3.eth.getGasPrice();
console.log(`gasPrice is ${gasPrice}`); // 8000000000
```
So how to set gas price properly? | 1.0 | Can not change gas price of local mainet fork - Here is my hardhat setting:
```js
module.exports = {
solidity: "0.7.3",
networks: {
hardhat: {
forking: {
url: "http://127.0.0.1:18545"
},
gasPrice: 150000000000
}
}
};
```
But the gas price is still 8000000000:
```js
const gasPrice = await web3.eth.getGasPrice();
console.log(`gasPrice is ${gasPrice}`); // 8000000000
```
So how to set gas price properly? | priority | can not change gas price of local mainet fork here is my hardhat setting js module exports solidity networks hardhat forking url gasprice but the gas price is still js const gasprice await eth getgasprice console log gasprice is gasprice so how to set gas price properly | 1 |
581,270 | 17,290,107,525 | IssuesEvent | 2021-07-24 15:04:38 | leokraft/QuickKey | https://api.github.com/repos/leokraft/QuickKey | opened | Autogenerate version numbers. | Priority: Medium Type: Feature | This would be usefull to keep track of build number and update wix info. | 1.0 | Autogenerate version numbers. - This would be usefull to keep track of build number and update wix info. | priority | autogenerate version numbers this would be usefull to keep track of build number and update wix info | 1 |
393,486 | 11,616,560,629 | IssuesEvent | 2020-02-26 15:54:02 | arkhn/pyrog | https://api.github.com/repos/arkhn/pyrog | closed | Clicking issues | Client Enhancement Medium Priority | When we change resources, attributes are no longer clickable (page needs to be refreshed) | 1.0 | Clicking issues - When we change resources, attributes are no longer clickable (page needs to be refreshed) | priority | clicking issues when we change resources attributes are no longer clickable page needs to be refreshed | 1 |
801,062 | 28,452,506,225 | IssuesEvent | 2023-04-17 02:55:40 | masastack/MASA.Scheduler | https://api.github.com/repos/masastack/MASA.Scheduler | closed | The source drop-down box is missing a clear button. Click to clear the options in the drop-down box | type/bug status/resolved severity/medium site/staging priority/p3 | 来源下拉框缺少清除按钮,点击清除下拉框内已选项
 | 1.0 | The source drop-down box is missing a clear button. Click to clear the options in the drop-down box - 来源下拉框缺少清除按钮,点击清除下拉框内已选项
 | priority | the source drop down box is missing a clear button click to clear the options in the drop down box 来源下拉框缺少清除按钮,点击清除下拉框内已选项 | 1 |
173,768 | 6,530,669,791 | IssuesEvent | 2017-08-30 15:50:23 | assemblee-virtuelle/mmmfest | https://api.github.com/repos/assemblee-virtuelle/mmmfest | closed | Harmoniser les polices d'affichage sur la bannière | medium priority | Sur la bannière à minima pour Code social, Programme etc...
Utiliser la police des grands voisins ?
Enlever introduction, et peut être augmenter un poil l'espace voir la police pour Domaine de Millemont / 4 au 10 octobre... | 1.0 | Harmoniser les polices d'affichage sur la bannière - Sur la bannière à minima pour Code social, Programme etc...
Utiliser la police des grands voisins ?
Enlever introduction, et peut être augmenter un poil l'espace voir la police pour Domaine de Millemont / 4 au 10 octobre... | priority | harmoniser les polices d affichage sur la bannière sur la bannière à minima pour code social programme etc utiliser la police des grands voisins enlever introduction et peut être augmenter un poil l espace voir la police pour domaine de millemont au octobre | 1 |
603,294 | 18,537,066,747 | IssuesEvent | 2021-10-21 12:40:52 | HabitRPG/habitica-ios | https://api.github.com/repos/HabitRPG/habitica-ios | closed | New task drag and drop logic for when filters are applied | Type: Enhancement Priority: medium | this is to help fix issues with task shuffling their order when users try to reorganize them. Filtering to 'is due' is extremely common, and the filtered tasks you can't see behind the scenes can cause a very strange experience when dragging and dropping tasks around.
When a user tries to drag and drop a task when filters are applied, the moved task will sort of ‘attach’ itself to the task it was placed above. Any invisible filtered tasks will be sorted around it, never between.
If you move a task to the top of a filtered list, it won’t go above invisible tasks. It will just stay above the last visible task. Conversely, if you move a task to the bottom of a filtered list, it will also not go below invisible tasks. It will slot itself below the last visible task.
(We’re following how trello handles this, so if you’d like an example of how it should work in various circumstances that is a good place to experiment. we used their labeling system and filtered to different labels to recreate our scenarios)
including some visual examples of how we’d like this to work:
<img width="761" alt="Screen Shot 2021-05-26 at 11 11 21 AM" src="https://user-images.githubusercontent.com/12453119/119698641-6ef8e100-be1f-11eb-8c4a-30df59d1e614.png">
<img width="760" alt="Screen Shot 2021-05-26 at 11 11 49 AM" src="https://user-images.githubusercontent.com/12453119/119698645-70c2a480-be1f-11eb-8a7a-2b12a424322b.png"> | 1.0 | New task drag and drop logic for when filters are applied - this is to help fix issues with task shuffling their order when users try to reorganize them. Filtering to 'is due' is extremely common, and the filtered tasks you can't see behind the scenes can cause a very strange experience when dragging and dropping tasks around.
When a user tries to drag and drop a task when filters are applied, the moved task will sort of ‘attach’ itself to the task it was placed above. Any invisible filtered tasks will be sorted around it, never between.
If you move a task to the top of a filtered list, it won’t go above invisible tasks. It will just stay above the last visible task. Conversely, if you move a task to the bottom of a filtered list, it will also not go below invisible tasks. It will slot itself below the last visible task.
(We’re following how trello handles this, so if you’d like an example of how it should work in various circumstances that is a good place to experiment. we used their labeling system and filtered to different labels to recreate our scenarios)
including some visual examples of how we’d like this to work:
<img width="761" alt="Screen Shot 2021-05-26 at 11 11 21 AM" src="https://user-images.githubusercontent.com/12453119/119698641-6ef8e100-be1f-11eb-8c4a-30df59d1e614.png">
<img width="760" alt="Screen Shot 2021-05-26 at 11 11 49 AM" src="https://user-images.githubusercontent.com/12453119/119698645-70c2a480-be1f-11eb-8a7a-2b12a424322b.png"> | priority | new task drag and drop logic for when filters are applied this is to help fix issues with task shuffling their order when users try to reorganize them filtering to is due is extremely common and the filtered tasks you can t see behind the scenes can cause a very strange experience when dragging and dropping tasks around when a user tries to drag and drop a task when filters are applied the moved task will sort of ‘attach’ itself to the task it was placed above any invisible filtered tasks will be sorted around it never between if you move a task to the top of a filtered list it won’t go above invisible tasks it will just stay above the last visible task conversely if you move a task to the bottom of a filtered list it will also not go below invisible tasks it will slot itself below the last visible task we’re following how trello handles this so if you’d like an example of how it should work in various circumstances that is a good place to experiment we used their labeling system and filtered to different labels to recreate our scenarios including some visual examples of how we’d like this to work img width alt screen shot at am src img width alt screen shot at am src | 1 |
22,611 | 2,649,565,063 | IssuesEvent | 2015-03-15 01:43:03 | prikhi/pencil | https://api.github.com/repos/prikhi/pencil | closed | Pencil not compatible with Firefox 4.0 | 6–10 stars bug imported Priority-Medium | _From [seaplane...@gmail.com](https://code.google.com/u/109609509405567341253/) on March 22, 2011 17:13:36_
What steps will reproduce the problem? 1. Install Pencil
2. Install Firefox 4.0
3. Revert installation when prompted with add-on warning What is the expected output? What do you see instead? Hopefully an update to Pencil to make it compatible with Firefox 4.0 What version of the product are you using? On what operating system? 1.0 build 6 Please provide any additional information below. Windows 7
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=292_ | 1.0 | Pencil not compatible with Firefox 4.0 - _From [seaplane...@gmail.com](https://code.google.com/u/109609509405567341253/) on March 22, 2011 17:13:36_
What steps will reproduce the problem? 1. Install Pencil
2. Install Firefox 4.0
3. Revert installation when prompted with add-on warning What is the expected output? What do you see instead? Hopefully an update to Pencil to make it compatible with Firefox 4.0 What version of the product are you using? On what operating system? 1.0 build 6 Please provide any additional information below. Windows 7
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=292_ | priority | pencil not compatible with firefox from on march what steps will reproduce the problem install pencil install firefox revert installation when prompted with add on warning what is the expected output what do you see instead hopefully an update to pencil to make it compatible with firefox what version of the product are you using on what operating system build please provide any additional information below windows original issue | 1 |
770,912 | 27,060,750,450 | IssuesEvent | 2023-02-13 19:33:35 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] LIKE ALL is not supported | kind/enhancement area/ysql priority/medium pgcm | Jira Link: [DB-974](https://yugabyte.atlassian.net/browse/DB-974)
### Description
```
CREATE TABLE p1_c1 (LIKE p1 INCLUDING ALL, CHECK (id <= 100)) INHERITS(p1);
ERROR: LIKE ALL not supported yet
LINE 1: CREATE TABLE p1_c1 (LIKE p1 INCLUDING ALL, CHECK (id <= 100)...
^
``` | 1.0 | [YSQL] LIKE ALL is not supported - Jira Link: [DB-974](https://yugabyte.atlassian.net/browse/DB-974)
### Description
```
CREATE TABLE p1_c1 (LIKE p1 INCLUDING ALL, CHECK (id <= 100)) INHERITS(p1);
ERROR: LIKE ALL not supported yet
LINE 1: CREATE TABLE p1_c1 (LIKE p1 INCLUDING ALL, CHECK (id <= 100)...
^
``` | priority | like all is not supported jira link description create table like including all check id inherits error like all not supported yet line create table like including all check id | 1 |
658,574 | 21,897,163,114 | IssuesEvent | 2022-05-20 09:44:29 | bounswe/bounswe2022group2 | https://api.github.com/repos/bounswe/bounswe2022group2 | closed | Practice App: Frontend for Creating Event | priority-medium status-inprogress feature practice-app practice-app:front-end | ### Issue Description
Since the backend development of event creation and getting attended events is done and can be viewed in detail from the below issues, now I will be implementing the frontend (related web pages) of the named functionalities of our project:
* #193
* #195
* #211
### Step Details
Steps that will be performed:
- [x] Determining what will be shown in the page.
- [x] Adding the page to the router and the toolbar.
- [x] Implementing the page with mock data
- [x] Establishing communication between the page and the API.
### Final Actions
After above mentioned steps are performed, a pull request regarding to the frontend development of the event creation will be opened and after a review and updates if necessary will be merged to the master branch.
### Deadline of the Issue
19.05.2022 - Thursday - 20.00
### Reviewer
Mehmet Batuhan Çelik
### Deadline for the Review
19.05.2022 - Thursday - 23.59 | 1.0 | Practice App: Frontend for Creating Event - ### Issue Description
Since the backend development of event creation and getting attended events is done and can be viewed in detail from the below issues, now I will be implementing the frontend (related web pages) of the named functionalities of our project:
* #193
* #195
* #211
### Step Details
Steps that will be performed:
- [x] Determining what will be shown in the page.
- [x] Adding the page to the router and the toolbar.
- [x] Implementing the page with mock data
- [x] Establishing communication between the page and the API.
### Final Actions
After above mentioned steps are performed, a pull request regarding to the frontend development of the event creation will be opened and after a review and updates if necessary will be merged to the master branch.
### Deadline of the Issue
19.05.2022 - Thursday - 20.00
### Reviewer
Mehmet Batuhan Çelik
### Deadline for the Review
19.05.2022 - Thursday - 23.59 | priority | practice app frontend for creating event issue description since the backend development of event creation and getting attended events is done and can be viewed in detail from the below issues now i will be implementing the frontend related web pages of the named functionalities of our project step details steps that will be performed determining what will be shown in the page adding the page to the router and the toolbar implementing the page with mock data establishing communication between the page and the api final actions after above mentioned steps are performed a pull request regarding to the frontend development of the event creation will be opened and after a review and updates if necessary will be merged to the master branch deadline of the issue thursday reviewer mehmet batuhan çelik deadline for the review thursday | 1 |
248,848 | 7,936,865,096 | IssuesEvent | 2018-07-09 10:51:42 | nlbdev/pipeline | https://api.github.com/repos/nlbdev/pipeline | closed | Class to force volume break | CSS Priority:2 - Medium enhancement | As a convenience instead of using style attributes or attaching CSS stylesheets, we should have a class that can be manually inserted if there are some locations that we need to force a volume break.
Maybe `braille-volume-break-before`. | 1.0 | Class to force volume break - As a convenience instead of using style attributes or attaching CSS stylesheets, we should have a class that can be manually inserted if there are some locations that we need to force a volume break.
Maybe `braille-volume-break-before`. | priority | class to force volume break as a convenience instead of using style attributes or attaching css stylesheets we should have a class that can be manually inserted if there are some locations that we need to force a volume break maybe braille volume break before | 1 |
311,848 | 9,539,639,528 | IssuesEvent | 2019-04-30 17:28:14 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | opened | Do not treat "unidentified" request issues as non-comp BusinessLine | bug-medium-priority caseflow-intake sierra | See https://dsva.slack.com/archives/C2ZAMLK88/p1556644556118700
We should not be attempting to create BusinessLine for unidentified non-comp RequestIssues. That we get to that place in the code smells like a stricter requirement is necessary for tracking unidentified non-comp tasks. | 1.0 | Do not treat "unidentified" request issues as non-comp BusinessLine - See https://dsva.slack.com/archives/C2ZAMLK88/p1556644556118700
We should not be attempting to create BusinessLine for unidentified non-comp RequestIssues. That we get to that place in the code smells like a stricter requirement is necessary for tracking unidentified non-comp tasks. | priority | do not treat unidentified request issues as non comp businessline see we should not be attempting to create businessline for unidentified non comp requestissues that we get to that place in the code smells like a stricter requirement is necessary for tracking unidentified non comp tasks | 1 |
536,820 | 15,715,352,415 | IssuesEvent | 2021-03-28 00:47:17 | sopra-fs21-group-10/td-server | https://api.github.com/repos/sopra-fs21-group-10/td-server | opened | Answer get request from client | medium priority task | - If request to /games/{gameId} valid, return opponent's info according to REST specification, else return an error message
Estimated time: 3h
This task is part of user story #27 | 1.0 | Answer get request from client - - If request to /games/{gameId} valid, return opponent's info according to REST specification, else return an error message
Estimated time: 3h
This task is part of user story #27 | priority | answer get request from client if request to games gameid valid return opponent s info according to rest specification else return an error message estimated time this task is part of user story | 1 |
2,713 | 2,532,778,207 | IssuesEvent | 2015-01-23 18:25:34 | google/error-prone | https://api.github.com/repos/google/error-prone | closed | Error:java: An unhandled exception was thrown by the Error Prone static analysis plugin. | migrated Priority-Medium Type-NewCheck | _[Original issue](https://code.google.com/p/error-prone/issues/detail?id=262) created by **pedro.dusso@portotech.org** on 2014-09-11 at 02:24 PM_
---
IntelliJ IDEA 13.1.4
Build #IC-135.1230,0built on July 21, 2014
JRE 1.8.0_11-b12 amd64
JVM: Java HotSpot 64-bit Server VM by Oracle Corporation.
Error:java: An unhandled exception was thrown by the Error Prone static analysis plugin.
Please report this at https://code.google.com/p/error-prone/issues/entry and include the following:
error-prone version: 1.1.1
Stack Trace:
java.lang.NoSuchFieldError: endPositions
at com.google.errorprone.ErrorProneAnalyzer.createVisitorState(ErrorProneAnalyzer.java:130)
at com.google.errorprone.ErrorProneAnalyzer.reportReadyForAnalysis(ErrorProneAnalyzer.java:101)
at com.google.errorprone.ErrorReportingJavaCompiler.postFlow(ErrorReportingJavaCompiler.java:149)
at com.google.errorprone.ErrorReportingJavaCompiler.flow(ErrorReportingJavaCompiler.java:113)
at com.sun.tools.javac.main.JavaCompiler.flow(JavaCompiler.java:1299)
at com.sun.tools.javac.main.JavaCompiler.compile2(JavaCompiler.java:904)
at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:863)
at com.sun.tools.javac.main.Main.compile(Main.java:523)
at com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:129)
at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:138)
at org.jetbrains.jps.javac.JavacMain.compile(JavacMain.java:165)
at org.jetbrains.jps.incremental.java.JavaBuilder.compileJava(JavaBuilder.java:407)
at org.jetbrains.jps.incremental.java.JavaBuilder.compile(JavaBuilder.java:304)
at org.jetbrains.jps.incremental.java.JavaBuilder.doBuild(JavaBuilder.java:210)
at org.jetbrains.jps.incremental.java.JavaBuilder.build(JavaBuilder.java:182)
at org.jetbrains.jps.incremental.IncProjectBuilder.runModuleLevelBuilders(IncProjectBuilder.java:1106)
at org.jetbrains.jps.incremental.IncProjectBuilder.runBuildersForChunk(IncProjectBuilder.java:814)
at org.jetbrains.jps.incremental.IncProjectBuilder.buildTargetsChunk(IncProjectBuilder.java:862)
at org.jetbrains.jps.incremental.IncProjectBuilder.buildChunkIfAffected(IncProjectBuilder.java:777)
at org.jetbrains.jps.incremental.IncProjectBuilder.buildChunks(IncProjectBuilder.java:600)
at org.jetbrains.jps.incremental.IncProjectBuilder.runBuild(IncProjectBuilder.java:352)
at org.jetbrains.jps.incremental.IncProjectBuilder.build(IncProjectBuilder.java:184)
at org.jetbrains.jps.cmdline.BuildRunner.runBuild(BuildRunner.java:129)
at org.jetbrains.jps.cmdline.BuildSession.runBuild(BuildSession.java:224)
at org.jetbrains.jps.cmdline.BuildSession.run(BuildSession.java:113)
at org.jetbrains.jps.cmdline.BuildMain$MyMessageHandler$1.run(BuildMain.java:133)
at org.jetbrains.jps.service.impl.SharedThreadPoolImpl$1.run(SharedThreadPoolImpl.java:41)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745) | 1.0 | Error:java: An unhandled exception was thrown by the Error Prone static analysis plugin. - _[Original issue](https://code.google.com/p/error-prone/issues/detail?id=262) created by **pedro.dusso@portotech.org** on 2014-09-11 at 02:24 PM_
---
IntelliJ IDEA 13.1.4
Build #IC-135.1230,0built on July 21, 2014
JRE 1.8.0_11-b12 amd64
JVM: Java HotSpot 64-bit Server VM by Oracle Corporation.
Error:java: An unhandled exception was thrown by the Error Prone static analysis plugin.
Please report this at https://code.google.com/p/error-prone/issues/entry and include the following:
error-prone version: 1.1.1
Stack Trace:
java.lang.NoSuchFieldError: endPositions
at com.google.errorprone.ErrorProneAnalyzer.createVisitorState(ErrorProneAnalyzer.java:130)
at com.google.errorprone.ErrorProneAnalyzer.reportReadyForAnalysis(ErrorProneAnalyzer.java:101)
at com.google.errorprone.ErrorReportingJavaCompiler.postFlow(ErrorReportingJavaCompiler.java:149)
at com.google.errorprone.ErrorReportingJavaCompiler.flow(ErrorReportingJavaCompiler.java:113)
at com.sun.tools.javac.main.JavaCompiler.flow(JavaCompiler.java:1299)
at com.sun.tools.javac.main.JavaCompiler.compile2(JavaCompiler.java:904)
at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:863)
at com.sun.tools.javac.main.Main.compile(Main.java:523)
at com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:129)
at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:138)
at org.jetbrains.jps.javac.JavacMain.compile(JavacMain.java:165)
at org.jetbrains.jps.incremental.java.JavaBuilder.compileJava(JavaBuilder.java:407)
at org.jetbrains.jps.incremental.java.JavaBuilder.compile(JavaBuilder.java:304)
at org.jetbrains.jps.incremental.java.JavaBuilder.doBuild(JavaBuilder.java:210)
at org.jetbrains.jps.incremental.java.JavaBuilder.build(JavaBuilder.java:182)
at org.jetbrains.jps.incremental.IncProjectBuilder.runModuleLevelBuilders(IncProjectBuilder.java:1106)
at org.jetbrains.jps.incremental.IncProjectBuilder.runBuildersForChunk(IncProjectBuilder.java:814)
at org.jetbrains.jps.incremental.IncProjectBuilder.buildTargetsChunk(IncProjectBuilder.java:862)
at org.jetbrains.jps.incremental.IncProjectBuilder.buildChunkIfAffected(IncProjectBuilder.java:777)
at org.jetbrains.jps.incremental.IncProjectBuilder.buildChunks(IncProjectBuilder.java:600)
at org.jetbrains.jps.incremental.IncProjectBuilder.runBuild(IncProjectBuilder.java:352)
at org.jetbrains.jps.incremental.IncProjectBuilder.build(IncProjectBuilder.java:184)
at org.jetbrains.jps.cmdline.BuildRunner.runBuild(BuildRunner.java:129)
at org.jetbrains.jps.cmdline.BuildSession.runBuild(BuildSession.java:224)
at org.jetbrains.jps.cmdline.BuildSession.run(BuildSession.java:113)
at org.jetbrains.jps.cmdline.BuildMain$MyMessageHandler$1.run(BuildMain.java:133)
at org.jetbrains.jps.service.impl.SharedThreadPoolImpl$1.run(SharedThreadPoolImpl.java:41)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745) | priority | error java an unhandled exception was thrown by the error prone static analysis plugin created by pedro dusso portotech org on at pm intellij idea build ic on july jre jvm java hotspot bit server vm by oracle corporation error java an unhandled exception was thrown by the error prone static analysis plugin nbsp nbsp please report this at and include the following nbsp nbsp error prone version nbsp nbsp stack trace nbsp nbsp java lang nosuchfielderror endpositions nbsp nbsp nbsp nbsp nbsp nbsp at com google errorprone errorproneanalyzer createvisitorstate errorproneanalyzer java nbsp nbsp nbsp nbsp nbsp nbsp at com google errorprone errorproneanalyzer reportreadyforanalysis errorproneanalyzer java nbsp nbsp nbsp nbsp nbsp nbsp at com google errorprone errorreportingjavacompiler postflow errorreportingjavacompiler java nbsp nbsp nbsp nbsp nbsp nbsp at com google errorprone errorreportingjavacompiler flow errorreportingjavacompiler java nbsp nbsp nbsp nbsp nbsp nbsp at com sun tools javac main javacompiler flow javacompiler java nbsp nbsp nbsp nbsp nbsp nbsp at com sun tools javac main javacompiler javacompiler java nbsp nbsp nbsp nbsp nbsp nbsp at com sun tools javac main javacompiler compile javacompiler java nbsp nbsp nbsp nbsp nbsp nbsp at com sun tools javac main main compile main java nbsp nbsp nbsp nbsp nbsp nbsp at com sun tools javac api javactaskimpl docall javactaskimpl java nbsp nbsp nbsp nbsp nbsp nbsp at com sun tools javac api javactaskimpl call javactaskimpl java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps javac javacmain compile javacmain java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps incremental java javabuilder compilejava javabuilder java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps incremental java javabuilder compile javabuilder java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps incremental java javabuilder dobuild javabuilder java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps incremental java javabuilder build javabuilder java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps incremental incprojectbuilder runmodulelevelbuilders incprojectbuilder java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps incremental incprojectbuilder runbuildersforchunk incprojectbuilder java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps incremental incprojectbuilder buildtargetschunk incprojectbuilder java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps incremental incprojectbuilder buildchunkifaffected incprojectbuilder java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps incremental incprojectbuilder buildchunks incprojectbuilder java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps incremental incprojectbuilder runbuild incprojectbuilder java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps incremental incprojectbuilder build incprojectbuilder java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps cmdline buildrunner runbuild buildrunner java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps cmdline buildsession runbuild buildsession java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps cmdline buildsession run buildsession java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps cmdline buildmain mymessagehandler run buildmain java nbsp nbsp nbsp nbsp nbsp nbsp at org jetbrains jps service impl sharedthreadpoolimpl run sharedthreadpoolimpl java nbsp nbsp nbsp nbsp nbsp nbsp at java util concurrent executors runnableadapter call executors java nbsp nbsp nbsp nbsp nbsp nbsp at java util concurrent futuretask run futuretask java nbsp nbsp nbsp nbsp nbsp nbsp at java util concurrent threadpoolexecutor runworker threadpoolexecutor java nbsp nbsp nbsp nbsp nbsp nbsp at java util concurrent threadpoolexecutor worker run threadpoolexecutor java nbsp nbsp nbsp nbsp nbsp nbsp at java lang thread run thread java | 1 |
614,200 | 19,145,046,279 | IssuesEvent | 2021-12-02 06:25:00 | kubesphere/ks-devops | https://api.github.com/repos/kubesphere/ks-devops | closed | Last message is wrong in activity of pipeline list | kind/bug priority/medium | Versions Used
KubeSphere: `v3.2.0-alpha.0`

/kind bug
/cc @kubesphere/sig-devops
/priority Medium | 1.0 | Last message is wrong in activity of pipeline list - Versions Used
KubeSphere: `v3.2.0-alpha.0`

/kind bug
/cc @kubesphere/sig-devops
/priority Medium | priority | last message is wrong in activity of pipeline list versions used kubesphere alpha kind bug cc kubesphere sig devops priority medium | 1 |
338,196 | 10,225,638,347 | IssuesEvent | 2019-08-16 15:37:40 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | tests/drivers/can/api/peripheral.can fail on FRDM-K64F | area: CAN bug platform: NXP priority: medium | **Describe the bug**
Test timesout in `test_set_loopback`
This is all the output we get:
```
***** Booting Zephyr OS build v2.0.0-rc1-63-g25a9cde22349 *****
Running test suite can_driver
===================================================================
starting test - test_set_loopback
```
**To Reproduce**
`scripts/sanitycheck -p frdm_k64f --device-testing --device-serial /dev/ttyACM0 -s tests/drivers/can/api/peripheral.can` | 1.0 | tests/drivers/can/api/peripheral.can fail on FRDM-K64F - **Describe the bug**
Test timesout in `test_set_loopback`
This is all the output we get:
```
***** Booting Zephyr OS build v2.0.0-rc1-63-g25a9cde22349 *****
Running test suite can_driver
===================================================================
starting test - test_set_loopback
```
**To Reproduce**
`scripts/sanitycheck -p frdm_k64f --device-testing --device-serial /dev/ttyACM0 -s tests/drivers/can/api/peripheral.can` | priority | tests drivers can api peripheral can fail on frdm describe the bug test timesout in test set loopback this is all the output we get booting zephyr os build running test suite can driver starting test test set loopback to reproduce scripts sanitycheck p frdm device testing device serial dev s tests drivers can api peripheral can | 1 |
465,927 | 13,395,041,808 | IssuesEvent | 2020-09-03 07:48:58 | zowe/api-layer | https://api.github.com/repos/zowe/api-layer | closed | Document properly configuration of the API Mediation Layer | 20PI3 Priority: Medium docs enhancement in progress | The documentation of the possible configuration is either nonexistent or in the .yml files in the repository as such it makes it more difficult for Sysadmins to understand the possible configuration options for fine tuning the API ML and also it makes it more complex for us to find a right place to document new options.
| 1.0 | Document properly configuration of the API Mediation Layer - The documentation of the possible configuration is either nonexistent or in the .yml files in the repository as such it makes it more difficult for Sysadmins to understand the possible configuration options for fine tuning the API ML and also it makes it more complex for us to find a right place to document new options.
| priority | document properly configuration of the api mediation layer the documentation of the possible configuration is either nonexistent or in the yml files in the repository as such it makes it more difficult for sysadmins to understand the possible configuration options for fine tuning the api ml and also it makes it more complex for us to find a right place to document new options | 1 |
501,237 | 14,523,831,165 | IssuesEvent | 2020-12-14 10:37:15 | fasten-project/fasten | https://api.github.com/repos/fasten-project/fasten | closed | The response is returned within the array | Priority: Medium bug good first issue | The response format still (after #100) doesn't exactly match the standard specifications of [JSON API](https://jsonapi.org). There is a minor flow.
The response is returned within an array, rather than by the object itself.
For example, instead of:
```
[
{
"id": 16,
"package_name": "junit:junit",
"forge": "mvn",
"project_name": "JUnit",
"repository": "http://github.com/junit-team/junit/tree/master",
"created_at": null,
"version": "4.12"
}
]
```
Should be:
```
{
"id": 16,
"package_name": "junit:junit",
"forge": "mvn",
"project_name": "JUnit",
"repository": "http://github.com/junit-team/junit/tree/master",
"created_at": null,
"version": "4.12"
}
``` | 1.0 | The response is returned within the array - The response format still (after #100) doesn't exactly match the standard specifications of [JSON API](https://jsonapi.org). There is a minor flow.
The response is returned within an array, rather than by the object itself.
For example, instead of:
```
[
{
"id": 16,
"package_name": "junit:junit",
"forge": "mvn",
"project_name": "JUnit",
"repository": "http://github.com/junit-team/junit/tree/master",
"created_at": null,
"version": "4.12"
}
]
```
Should be:
```
{
"id": 16,
"package_name": "junit:junit",
"forge": "mvn",
"project_name": "JUnit",
"repository": "http://github.com/junit-team/junit/tree/master",
"created_at": null,
"version": "4.12"
}
``` | priority | the response is returned within the array the response format still after doesn t exactly match the standard specifications of there is a minor flow the response is returned within an array rather than by the object itself for example instead of id package name junit junit forge mvn project name junit repository created at null version should be id package name junit junit forge mvn project name junit repository created at null version | 1 |
649,815 | 21,326,900,429 | IssuesEvent | 2022-04-18 00:50:15 | gilhrpenner/COMP4350 | https://api.github.com/repos/gilhrpenner/COMP4350 | closed | Improve mobile experience | dev task frontend medium priority | ## Description
Some pages/components are not mobile friendly.
## Acceptance Criteria
- Make the pages/components mobile friendly
User story: N/A\
Pre-requisites: N/A
| 1.0 | Improve mobile experience - ## Description
Some pages/components are not mobile friendly.
## Acceptance Criteria
- Make the pages/components mobile friendly
User story: N/A\
Pre-requisites: N/A
| priority | improve mobile experience description some pages components are not mobile friendly acceptance criteria make the pages components mobile friendly user story n a pre requisites n a | 1 |
251,793 | 8,027,564,994 | IssuesEvent | 2018-07-27 09:31:53 | Codaone/DEXBot | https://api.github.com/repos/Codaone/DEXBot | closed | make use of pybitshares cycle though api nodes at disconnects | Priority: Medium Status: Requires Discussion Type: Enhancement | From Xeroc:
> Pro tipp: you can give pybitshares a list of endpoints and it will cycle thru them automatically on disconnects..
So could we make use of this? 50% of problems users experience are related to node issues, and this could help a lot (number not verified).
Having the list in the config file would allow users to modify the list; adding their own nodes and removing the rest, removing distant nodes, or adding new ones... | 1.0 | make use of pybitshares cycle though api nodes at disconnects - From Xeroc:
> Pro tipp: you can give pybitshares a list of endpoints and it will cycle thru them automatically on disconnects..
So could we make use of this? 50% of problems users experience are related to node issues, and this could help a lot (number not verified).
Having the list in the config file would allow users to modify the list; adding their own nodes and removing the rest, removing distant nodes, or adding new ones... | priority | make use of pybitshares cycle though api nodes at disconnects from xeroc pro tipp you can give pybitshares a list of endpoints and it will cycle thru them automatically on disconnects so could we make use of this of problems users experience are related to node issues and this could help a lot number not verified having the list in the config file would allow users to modify the list adding their own nodes and removing the rest removing distant nodes or adding new ones | 1 |
646,444 | 21,047,756,279 | IssuesEvent | 2022-03-31 17:40:17 | bounswe/bounswe2022group2 | https://api.github.com/repos/bounswe/bounswe2022group2 | closed | Reviewing Community Events Requirement | priority-medium status-needreview | ### Issue Description
I will be adding the discussed features about creating, canceling, joining, and displaying to the community events section of our wiki page.
### Step Details
_No response_
### Final Actions
_No response_
### Deadline of the Issue
28.03.2022 - Sunday 23.59
### Reviewer
Batuhan Çelik
### Deadline for the Review
30.03.2022 - Wednesday 23.59 | 1.0 | Reviewing Community Events Requirement - ### Issue Description
I will be adding the discussed features about creating, canceling, joining, and displaying to the community events section of our wiki page.
### Step Details
_No response_
### Final Actions
_No response_
### Deadline of the Issue
28.03.2022 - Sunday 23.59
### Reviewer
Batuhan Çelik
### Deadline for the Review
30.03.2022 - Wednesday 23.59 | priority | reviewing community events requirement issue description i will be adding the discussed features about creating canceling joining and displaying to the community events section of our wiki page step details no response final actions no response deadline of the issue sunday reviewer batuhan çelik deadline for the review wednesday | 1 |
165,739 | 6,284,829,622 | IssuesEvent | 2017-07-19 08:49:10 | angular/angular-cli | https://api.github.com/repos/angular/angular-cli | reopened | ng build accepts any argument list and silently ignores them | effort1: easy (hours) freq2: medium priority: 2 (required) severity1: confusing | ### Bug Report or Feature Request (mark with an `x`)
```
- [x] bug report -> please search issues before submitting
- [ ] feature request
```
### Versions.
```
_ _ ____ _ ___
/ \ _ __ __ _ _ _| | __ _ _ __ / ___| | |_ _|
/ △ \ | '_ \ / _` | | | | |/ _` | '__| | | | | | |
/ ___ \| | | | (_| | |_| | | (_| | | | |___| |___ | |
/_/ \_\_| |_|\__, |\__,_|_|\__,_|_| \____|_____|___|
|___/
@angular/cli: 1.0.0
node: 6.9.5
os: darwin x64
@angular/animations: 4.0.0
@angular/common: 4.0.0
@angular/compiler: 4.0.0
@angular/core: 4.0.0
@angular/forms: 4.0.0
@angular/http: 4.0.0
@angular/platform-browser: 4.0.0
@angular/platform-browser-dynamic: 4.0.0
@angular/router: 4.0.0
@angular/cli: 1.0.0
@angular/compiler-cli: 4.0.0
```
### Repro steps.
Just type the command below:
```
$ ng build accepts any argument list and silently ignores them
```
You then get the result:
```
$ ng build accepts any argument list and silently ignores them
Hash: 7293d841f09f4d666afe
Time: 32858ms
chunk {0} main.bundle.js, main.bundle.js.map (main) 2.05 MB {3} [initial] [rendered]
chunk {1} polyfills.bundle.js, polyfills.bundle.js.map (polyfills) 158 kB {4} [initial] [rendered]
chunk {2} styles.bundle.js, styles.bundle.js.map (styles) 132 kB {4} [initial] [rendered]
chunk {3} vendor.bundle.js, vendor.bundle.js.map (vendor) 4.06 MB [initial] [rendered]
chunk {4} inline.bundle.js, inline.bundle.js.map (inline) 0 bytes [entry] [rendered]
$
```
### The log given by the failure.
No log: the command line seems to succeed, letting you believe it did what you expect.
In the above case, it is obvious that the parameters are ignored.
Unfortunately you can end up with a weird result when you are unlucky enough to type (actually paste it from a web page...) commands like:
```
ng build ——prod ——aot true
```
⚡️ You are toasted. ⚡️
Indeed there is a mistake in this command line: the use of '—' instead of '-' but depending on the font of your terminal chances are you will not notice it.
And `ng build` gently finishes you up by silently misleading you. :smiling_imp:
### Desired functionality.
At least a warning, perhaps an error, telling that you have supplied parameters that are ignored.
In any case, it is not correct for a command to encounter superfluous parameters and not report them as ignored. | 1.0 | ng build accepts any argument list and silently ignores them - ### Bug Report or Feature Request (mark with an `x`)
```
- [x] bug report -> please search issues before submitting
- [ ] feature request
```
### Versions.
```
_ _ ____ _ ___
/ \ _ __ __ _ _ _| | __ _ _ __ / ___| | |_ _|
/ △ \ | '_ \ / _` | | | | |/ _` | '__| | | | | | |
/ ___ \| | | | (_| | |_| | | (_| | | | |___| |___ | |
/_/ \_\_| |_|\__, |\__,_|_|\__,_|_| \____|_____|___|
|___/
@angular/cli: 1.0.0
node: 6.9.5
os: darwin x64
@angular/animations: 4.0.0
@angular/common: 4.0.0
@angular/compiler: 4.0.0
@angular/core: 4.0.0
@angular/forms: 4.0.0
@angular/http: 4.0.0
@angular/platform-browser: 4.0.0
@angular/platform-browser-dynamic: 4.0.0
@angular/router: 4.0.0
@angular/cli: 1.0.0
@angular/compiler-cli: 4.0.0
```
### Repro steps.
Just type the command below:
```
$ ng build accepts any argument list and silently ignores them
```
You then get the result:
```
$ ng build accepts any argument list and silently ignores them
Hash: 7293d841f09f4d666afe
Time: 32858ms
chunk {0} main.bundle.js, main.bundle.js.map (main) 2.05 MB {3} [initial] [rendered]
chunk {1} polyfills.bundle.js, polyfills.bundle.js.map (polyfills) 158 kB {4} [initial] [rendered]
chunk {2} styles.bundle.js, styles.bundle.js.map (styles) 132 kB {4} [initial] [rendered]
chunk {3} vendor.bundle.js, vendor.bundle.js.map (vendor) 4.06 MB [initial] [rendered]
chunk {4} inline.bundle.js, inline.bundle.js.map (inline) 0 bytes [entry] [rendered]
$
```
### The log given by the failure.
No log: the command line seems to succeed, letting you believe it did what you expect.
In the above case, it is obvious that the parameters are ignored.
Unfortunately you can end up with a weird result when you are unlucky enough to type (actually paste it from a web page...) commands like:
```
ng build ——prod ——aot true
```
⚡️ You are toasted. ⚡️
Indeed there is a mistake in this command line: the use of '—' instead of '-' but depending on the font of your terminal chances are you will not notice it.
And `ng build` gently finishes you up by silently misleading you. :smiling_imp:
### Desired functionality.
At least a warning, perhaps an error, telling that you have supplied parameters that are ignored.
In any case, it is not correct for a command to encounter superfluous parameters and not report them as ignored. | priority | ng build accepts any argument list and silently ignores them bug report or feature request mark with an x bug report please search issues before submitting feature request versions △ angular cli node os darwin angular animations angular common angular compiler angular core angular forms angular http angular platform browser angular platform browser dynamic angular router angular cli angular compiler cli repro steps just type the command below ng build accepts any argument list and silently ignores them you then get the result ng build accepts any argument list and silently ignores them hash time chunk main bundle js main bundle js map main mb chunk polyfills bundle js polyfills bundle js map polyfills kb chunk styles bundle js styles bundle js map styles kb chunk vendor bundle js vendor bundle js map vendor mb chunk inline bundle js inline bundle js map inline bytes the log given by the failure no log the command line seems to succeed letting you believe it did what you expect in the above case it is obvious that the parameters are ignored unfortunately you can end up with a weird result when you are unlucky enough to type actually paste it from a web page commands like ng build ——prod ——aot true ⚡️ you are toasted ⚡️ indeed there is a mistake in this command line the use of — instead of but depending on the font of your terminal chances are you will not notice it and ng build gently finishes you up by silently misleading you smiling imp desired functionality at least a warning perhaps an error telling that you have supplied parameters that are ignored in any case it is not correct for a command to encounter superfluous parameters and not report them as ignored | 1 |
687,763 | 23,537,756,124 | IssuesEvent | 2022-08-20 00:08:51 | projectdiscovery/nuclei | https://api.github.com/repos/projectdiscovery/nuclei | closed | A request is made for each payload even if the variable is not used in that request | Priority: Medium Status: Completed Type: Bug | <!--
1. Please search to see if an issue already exists for the bug you encountered.
2. For support requests, FAQs or "How to" questions, please use the GitHub Discussions section instead - https://github.com/projectdiscovery/nuclei/discussions or
3. Join our discord server at https://discord.gg/projectdiscovery and post the question on the #nuclei channel.
-->
<!-- ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. -->
### Nuclei version:
<!-- You can find current version of nuclei with "nuclei -version" -->
<!-- We only accept issues that are reproducible on the latest version of nuclei. -->
<!-- You can find the latest version of project at https://github.com/projectdiscovery/nuclei/releases/ -->
2.6.5
### Current Behavior:
<!-- A concise description of what you're experiencing. -->
A request is made for each payload even if the variable is not used in that request
```yaml
id: BugMultipleRequest
info:
name: Bug Multiple Request
author: brenocss
severity: info
requests:
- raw:
- |
GET / HTTP/1.1
payloads:
teste:
- '1'
- '2'
- '3'
matchers:
- type: word
words:
- '1'
condition: or
```
```zsh
nuclei -t teste1.yaml -u 'https://google.com' -duc -debug-req
__ _
____ __ _______/ /__ (_)
/ __ \/ / / / ___/ / _ \/ /
/ / / / /_/ / /__/ / __/ /
/_/ /_/\__,_/\___/_/\___/_/ 2.6.5
projectdiscovery.io
[WRN] Use with caution. You are responsible for your actions.
[WRN] Developers assume no liability and are not responsible for any misuse or damage.
[INF] Using Nuclei Engine 2.6.5 (latest)
[INF] Using Nuclei Templates 8.9.1 (latest)
[INF] Templates added in last update: 45
[INF] Templates loaded for scan: 1
[INF] [BugMultipleRequest] Dumped HTTP request for https://google.com
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36
Connection: close
Accept-Encoding: gzip
[2022-03-27 18:39:00] [BugMultipleRequest] [http] [info] https://google.com/ [teste=1]
[INF] [BugMultipleRequest] Dumped HTTP request for https://google.com
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.1 Safari/537.36
Connection: close
Accept-Encoding: gzip
[2022-03-27 18:39:00] [BugMultipleRequest] [http] [info] https://google.com/ [teste=2]
[INF] [BugMultipleRequest] Dumped HTTP request for https://google.com
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0 (Windows NT 6.4; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36
Connection: close
Accept-Encoding: gzip
[2022-03-27 18:39:00] [BugMultipleRequest] [http] [info] https://google.com/ [teste=3]
```
### Expected Behavior:
```zsh
nuclei -t teste1.yaml -u 'https://google.com' -duc -debug-req
__ _
____ __ _______/ /__ (_)
/ __ \/ / / / ___/ / _ \/ /
/ / / / /_/ / /__/ / __/ /
/_/ /_/\__,_/\___/_/\___/_/ 2.6.5
projectdiscovery.io
[WRN] Use with caution. You are responsible for your actions.
[WRN] Developers assume no liability and are not responsible for any misuse or damage.
[INF] Using Nuclei Engine 2.6.5 (latest)
[INF] Using Nuclei Templates 8.9.1 (latest)
[INF] Templates added in last update: 45
[INF] Templates loaded for scan: 1
[INF] [BugMultipleRequest] Dumped HTTP request for https://google.com
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36
Connection: close
Accept-Encoding: gzip
```
| 1.0 | A request is made for each payload even if the variable is not used in that request - <!--
1. Please search to see if an issue already exists for the bug you encountered.
2. For support requests, FAQs or "How to" questions, please use the GitHub Discussions section instead - https://github.com/projectdiscovery/nuclei/discussions or
3. Join our discord server at https://discord.gg/projectdiscovery and post the question on the #nuclei channel.
-->
<!-- ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION. -->
### Nuclei version:
<!-- You can find current version of nuclei with "nuclei -version" -->
<!-- We only accept issues that are reproducible on the latest version of nuclei. -->
<!-- You can find the latest version of project at https://github.com/projectdiscovery/nuclei/releases/ -->
2.6.5
### Current Behavior:
<!-- A concise description of what you're experiencing. -->
A request is made for each payload even if the variable is not used in that request
```yaml
id: BugMultipleRequest
info:
name: Bug Multiple Request
author: brenocss
severity: info
requests:
- raw:
- |
GET / HTTP/1.1
payloads:
teste:
- '1'
- '2'
- '3'
matchers:
- type: word
words:
- '1'
condition: or
```
```zsh
nuclei -t teste1.yaml -u 'https://google.com' -duc -debug-req
__ _
____ __ _______/ /__ (_)
/ __ \/ / / / ___/ / _ \/ /
/ / / / /_/ / /__/ / __/ /
/_/ /_/\__,_/\___/_/\___/_/ 2.6.5
projectdiscovery.io
[WRN] Use with caution. You are responsible for your actions.
[WRN] Developers assume no liability and are not responsible for any misuse or damage.
[INF] Using Nuclei Engine 2.6.5 (latest)
[INF] Using Nuclei Templates 8.9.1 (latest)
[INF] Templates added in last update: 45
[INF] Templates loaded for scan: 1
[INF] [BugMultipleRequest] Dumped HTTP request for https://google.com
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36
Connection: close
Accept-Encoding: gzip
[2022-03-27 18:39:00] [BugMultipleRequest] [http] [info] https://google.com/ [teste=1]
[INF] [BugMultipleRequest] Dumped HTTP request for https://google.com
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.1 Safari/537.36
Connection: close
Accept-Encoding: gzip
[2022-03-27 18:39:00] [BugMultipleRequest] [http] [info] https://google.com/ [teste=2]
[INF] [BugMultipleRequest] Dumped HTTP request for https://google.com
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0 (Windows NT 6.4; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36
Connection: close
Accept-Encoding: gzip
[2022-03-27 18:39:00] [BugMultipleRequest] [http] [info] https://google.com/ [teste=3]
```
### Expected Behavior:
```zsh
nuclei -t teste1.yaml -u 'https://google.com' -duc -debug-req
__ _
____ __ _______/ /__ (_)
/ __ \/ / / / ___/ / _ \/ /
/ / / / /_/ / /__/ / __/ /
/_/ /_/\__,_/\___/_/\___/_/ 2.6.5
projectdiscovery.io
[WRN] Use with caution. You are responsible for your actions.
[WRN] Developers assume no liability and are not responsible for any misuse or damage.
[INF] Using Nuclei Engine 2.6.5 (latest)
[INF] Using Nuclei Templates 8.9.1 (latest)
[INF] Templates added in last update: 45
[INF] Templates loaded for scan: 1
[INF] [BugMultipleRequest] Dumped HTTP request for https://google.com
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36
Connection: close
Accept-Encoding: gzip
```
| priority | a request is made for each payload even if the variable is not used in that request please search to see if an issue already exists for the bug you encountered for support requests faqs or how to questions please use the github discussions section instead or join our discord server at and post the question on the nuclei channel nuclei version current behavior a request is made for each payload even if the variable is not used in that request yaml id bugmultiplerequest info name bug multiple request author brenocss severity info requests raw get http payloads teste matchers type word words condition or zsh nuclei t yaml u duc debug req projectdiscovery io use with caution you are responsible for your actions developers assume no liability and are not responsible for any misuse or damage using nuclei engine latest using nuclei templates latest templates added in last update templates loaded for scan dumped http request for get http host google com user agent mozilla windows nt applewebkit khtml like gecko chrome safari connection close accept encoding gzip dumped http request for get http host google com user agent mozilla macintosh intel mac os x applewebkit khtml like gecko chrome safari connection close accept encoding gzip dumped http request for get http host google com user agent mozilla windows nt applewebkit khtml like gecko chrome safari connection close accept encoding gzip expected behavior zsh nuclei t yaml u duc debug req projectdiscovery io use with caution you are responsible for your actions developers assume no liability and are not responsible for any misuse or damage using nuclei engine latest using nuclei templates latest templates added in last update templates loaded for scan dumped http request for get http host google com user agent mozilla windows nt applewebkit khtml like gecko chrome safari connection close accept encoding gzip | 1 |
280,522 | 8,683,387,766 | IssuesEvent | 2018-12-02 17:42:08 | certificate-helper/TLS-Inspector | https://api.github.com/repos/certificate-helper/TLS-Inspector | closed | Update OpenSSL to 1.1.1a | CertificateKit medium priority | Update OpenSSL from 1.1.1 to 1.1.1a (ugh I was really hoping they'd drop the suffix and stick to strict semantic versioning) | 1.0 | Update OpenSSL to 1.1.1a - Update OpenSSL from 1.1.1 to 1.1.1a (ugh I was really hoping they'd drop the suffix and stick to strict semantic versioning) | priority | update openssl to update openssl from to ugh i was really hoping they d drop the suffix and stick to strict semantic versioning | 1 |
209,579 | 7,177,448,986 | IssuesEvent | 2018-01-31 13:41:34 | hpi-swt2/sport-portal | https://api.github.com/repos/hpi-swt2/sport-portal | closed | faster OpenID signup | priority medium review team swteam user story | **As an** individual signing up with OpenID
**I** do not **want to** have to enter my Name during the registration
**In order to** have no unnecessary effort.
##Acceptance Criteria:
* if you sign up with OpenID you shouldn't have to enter additional data since the platform should be able to receive the necessary data
workflow:

| 1.0 | faster OpenID signup - **As an** individual signing up with OpenID
**I** do not **want to** have to enter my Name during the registration
**In order to** have no unnecessary effort.
##Acceptance Criteria:
* if you sign up with OpenID you shouldn't have to enter additional data since the platform should be able to receive the necessary data
workflow:

| priority | faster openid signup as an individual signing up with openid i do not want to have to enter my name during the registration in order to have no unnecessary effort acceptance criteria if you sign up with openid you shouldn t have to enter additional data since the platform should be able to receive the necessary data workflow | 1 |
758,911 | 26,573,661,953 | IssuesEvent | 2023-01-21 14:14:06 | jedmund/hensei-web | https://api.github.com/repos/jedmund/hensei-web | opened | Add reactions to grids with emoji (or Granblue stickers) | feature priority: medium | This way we can create social proof while sidestepping the moderation problem.
Granblue stickers is a better idea than emoji because people can spell words with emoji and that is bad. | 1.0 | Add reactions to grids with emoji (or Granblue stickers) - This way we can create social proof while sidestepping the moderation problem.
Granblue stickers is a better idea than emoji because people can spell words with emoji and that is bad. | priority | add reactions to grids with emoji or granblue stickers this way we can create social proof while sidestepping the moderation problem granblue stickers is a better idea than emoji because people can spell words with emoji and that is bad | 1 |
225,690 | 7,494,321,897 | IssuesEvent | 2018-04-07 08:08:45 | trimstray/sandmap | https://api.github.com/repos/trimstray/sandmap | opened | Module: smb-vuln | Priority: Medium Status: In Progress Type: Feature | Module name: **smb-vuln**
Category: **nse**
Status: **In Progress**
NSE scripts for the SMB vulnerabilities. | 1.0 | Module: smb-vuln - Module name: **smb-vuln**
Category: **nse**
Status: **In Progress**
NSE scripts for the SMB vulnerabilities. | priority | module smb vuln module name smb vuln category nse status in progress nse scripts for the smb vulnerabilities | 1 |
438,157 | 12,620,113,109 | IssuesEvent | 2020-06-13 04:40:11 | cloudfoundry-incubator/kubecf | https://api.github.com/repos/cloudfoundry-incubator/kubecf | closed | Store subcharts inside the kubecf repo | Priority: Medium Type: Maintenance | Right now we reference the bundled subcharts via `requirements.yaml`:
```yaml
dependencies:
- name: eirini
version: 1.0.2
repository: http://opensource.suse.com/eirini-release/
...
```
and we only fetch the actual charts via `helm dep up` when we build the kubecf helm chart.
I would like to propose that we run `helm dep up` every time we bump the dependency and check the downloaded subcharts into the kubecf repo. This has the following advantages (in decreasing order of importance to me):
* It becomes possible to quickly look up something in the subchart without having to fetch it manually, e.g. to see how something is configurable.
* When a chart is bumped, you can see the diff of what has changed, both on Github and locally with git. It can provide context to understand why some subchart settings are being changed in kubecf.
* We can now guarantee that the subchart doesn't change between rebuilds. `requirements.yaml` only uses a version number, not a shasum, so currently we rely on charts not being re-published to the external repo without version number updates.
* The build becomes slightly faster, although only trivially so.
Are there any reasons why we should **not** do this? | 1.0 | Store subcharts inside the kubecf repo - Right now we reference the bundled subcharts via `requirements.yaml`:
```yaml
dependencies:
- name: eirini
version: 1.0.2
repository: http://opensource.suse.com/eirini-release/
...
```
and we only fetch the actual charts via `helm dep up` when we build the kubecf helm chart.
I would like to propose that we run `helm dep up` every time we bump the dependency and check the downloaded subcharts into the kubecf repo. This has the following advantages (in decreasing order of importance to me):
* It becomes possible to quickly look up something in the subchart without having to fetch it manually, e.g. to see how something is configurable.
* When a chart is bumped, you can see the diff of what has changed, both on Github and locally with git. It can provide context to understand why some subchart settings are being changed in kubecf.
* We can now guarantee that the subchart doesn't change between rebuilds. `requirements.yaml` only uses a version number, not a shasum, so currently we rely on charts not being re-published to the external repo without version number updates.
* The build becomes slightly faster, although only trivially so.
Are there any reasons why we should **not** do this? | priority | store subcharts inside the kubecf repo right now we reference the bundled subcharts via requirements yaml yaml dependencies name eirini version repository and we only fetch the actual charts via helm dep up when we build the kubecf helm chart i would like to propose that we run helm dep up every time we bump the dependency and check the downloaded subcharts into the kubecf repo this has the following advantages in decreasing order of importance to me it becomes possible to quickly look up something in the subchart without having to fetch it manually e g to see how something is configurable when a chart is bumped you can see the diff of what has changed both on github and locally with git it can provide context to understand why some subchart settings are being changed in kubecf we can now guarantee that the subchart doesn t change between rebuilds requirements yaml only uses a version number not a shasum so currently we rely on charts not being re published to the external repo without version number updates the build becomes slightly faster although only trivially so are there any reasons why we should not do this | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.